cherapanamjeri22a@v178@PMLR


#1 Optimal Mean Estimation without a Variance

Authors: Yeshwanth Cherapanamjeri, Nilesh Tripuraneni, Peter Bartlett, Michael Jordan

We study the problem of heavy-tailed mean estimation in settings where the variance of the data-generating distribution does not exist. Concretely, given a sample $\bm{X} = \{X_i\}_{i=1}^n$ from a distribution $\mathcal{D}$ over $\mathbb{R}^d$ with mean $\mu$ which satisfies the following \emph{weak-moment} assumption for some $\alpha \in [0, 1]$:
$$\forall\, \|v\| = 1: \quad \mathbb{E}_{X \sim \mathcal{D}}\bigl[\,\lvert \langle X - \mu, v \rangle \rvert^{1+\alpha}\bigr] \leq 1,$$
and given a target failure probability, $\delta$, our goal is to design an estimator which attains the smallest possible confidence interval as a function of $n$, $d$, and $\delta$. For the specific case of $\alpha = 1$, foundational work of Lugosi and Mendelson exhibits an estimator achieving \emph{optimal} subgaussian confidence intervals, and subsequent work has led to computationally efficient versions of this estimator. Here, we study the case of general $\alpha$, and provide a precise characterization of the optimal achievable confidence interval by establishing the following information-theoretic lower bound:
$$\Omega\left(\sqrt{\frac{d}{n}} + \left(\frac{d}{n}\right)^{\frac{\alpha}{1+\alpha}} + \left(\frac{\log(1/\delta)}{n}\right)^{\frac{\alpha}{1+\alpha}}\right),$$
and devising an estimator matching the aforementioned lower bound up to constants. Moreover, our estimator is computationally efficient.
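
For intuition, one can specialize the exponent $\alpha/(1+\alpha)$ at the endpoints of the range (a routine calculation, not spelled out in the abstract). At $\alpha = 1$ the exponent is $1/2$ and the bound collapses to the classical subgaussian rate attained in the Lugosi–Mendelson setting,
$$\Omega\left(\sqrt{\frac{d}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right),$$
while at $\alpha = 0$ the two $\alpha$-dependent terms become constants, so the lower bound no longer shrinks with $n$ under only a first absolute moment condition.

The abstract does not describe the estimator itself. As a concrete baseline in the spirit of the median-of-means constructions underlying the Lugosi–Mendelson line of work, here is a minimal coordinate-wise median-of-means sketch; the function name `median_of_means`, the block count `k`, and the Pareto test data are illustrative choices, not the authors' procedure.

```python
import numpy as np

def median_of_means(x: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Coordinate-wise median-of-means estimate of the mean of x (shape (n, d)).

    The n samples are split into k disjoint blocks, each block is averaged,
    and the coordinate-wise median of the k block means is returned.
    """
    n = x.shape[0]
    idx = np.random.default_rng(seed).permutation(n)
    block_means = np.stack([x[block].mean(axis=0) for block in np.array_split(idx, k)])
    return np.median(block_means, axis=0)

# Usage: heavy-tailed positive data in d = 3 coordinates; a block count of
# roughly log(1/delta) targets failure probability delta.
rng = np.random.default_rng(1)
x = rng.pareto(a=1.5, size=(20_000, 3))  # finite mean but infinite variance
print("median of means:", median_of_means(x, k=10))
print("empirical mean :", x.mean(axis=0))
```

With infinite variance, the plain empirical mean can be pulled far off target by a single extreme sample, whereas taking a median over block means suppresses such outliers.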

Subject: COLT.2022 - Accept