Description
Consider the problem of jointly estimating the means of a large number of distributions in R^d using separate, independent data sets from each of them, a setting sometimes called the "multi-task averaging" problem.
We propose an estimator that improves on the naive empirical mean of each data set by exploiting possible similarities between the means, without any such information being known in advance. First, for each data set, similar or neighboring means are identified from the data by multiple testing. Then each naive estimator is shrunk towards the local average of its neighbors. We prove that this approach yields a reduction in mean squared error that can be significant when the (effective) dimensionality of the data is large and the unknown means exhibit structure, such as clustering or concentration on a low-dimensional set. This gain stems from the fact that, in high dimension, the separation distance needed for testing is smaller than the estimation error, and it generalizes the well-known James-Stein phenomenon. An application of this approach is the estimation of multiple kernel mean embeddings, which play an important role in many modern applications.
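As a minimal illustrative sketch (not the authors' exact procedure), the test-then-shrink idea could look as follows in Python. The function name `test_then_shrink`, the neighbor threshold `alpha * (var + var[b])`, and the shrinkage weight `gamma` are all assumptions made for illustration:

```python
import numpy as np

def test_then_shrink(samples, alpha=1.0, gamma=0.5):
    """Illustrative test-then-shrink estimator for B means.

    samples: list of B arrays, each of shape (n_b, d), one per task.
    alpha:   multiplier on the test threshold (assumed, for illustration).
    gamma:   shrinkage weight towards the neighborhood average (assumed).
    """
    B = len(samples)
    # Naive estimators: the empirical mean of each data set.
    naive = np.array([X.mean(axis=0) for X in samples])
    # Rough per-task estimation-error proxy: total variance / sample size.
    var = np.array([X.var(axis=0).sum() / X.shape[0] for X in samples])

    improved = np.empty_like(naive)
    for b in range(B):
        # Testing step: declare task c a neighbor of task b when the squared
        # distance between naive means is below a threshold of the order of
        # the combined estimation errors (hedged heuristic choice).
        d2 = ((naive - naive[b]) ** 2).sum(axis=1)
        neighbors = d2 <= alpha * (var + var[b])
        # Shrinkage step: pull the naive mean of task b towards the local
        # average of its neighbors (which always include b itself).
        local_avg = naive[neighbors].mean(axis=0)
        improved[b] = (1 - gamma) * naive[b] + gamma * local_avg
    return improved
```

The threshold choice reflects the phenomenon stated above: in high dimension, two means can be declared close at a separation much smaller than the error of estimating either mean alone, which is what makes averaging over the detected neighborhood profitable.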
(This is based on joint work with Hannah Marienwald and Jean-Baptiste Fermanian.)