
Probl. Peredachi Inf., 2016 Volume 52, Issue 4, Pages 31–48 (Mi ppi2220)

Methods of Signal Processing

On risk concentration for convex combinations of linear estimators

G. K. Golubev^{a,b}

a Kharkevich Institute for Information Transmission Problems, Russian Academy of Sciences, Moscow, Russia
b CNRS, Aix-Marseille Université, I2M, UMR, Marseille, France

Abstract: We consider the estimation problem for an unknown vector $\beta\in\mathbb R^p$ in a linear model $Y=X\beta+\sigma\xi$, where $\xi\in\mathbb R^n$ is a standard discrete white Gaussian noise and $X$ is a known $n\times p$ matrix with $n\ge p$. It is assumed that $p$ is large and $X$ is an ill-conditioned matrix. To estimate $\beta$ in this situation, we use a family of spectral regularizations of the maximum likelihood method $\widetilde\beta^\alpha(Y)= H^\alpha(X^\top X)\widehat\beta^\circ(Y)$, $\alpha\in\mathbb R^+$, where $\widehat\beta^\circ(Y)$ is the maximum likelihood estimate for $\beta$ and $\{H^\alpha(\cdot)\colon\mathbb R^+\to[0,1],\ \alpha\in\mathbb R^+\}$ is a given ordered family of functions indexed by a regularization parameter $\alpha$. The final estimate for $\beta$ is constructed as a convex combination (in $\alpha$) of the estimates $\widetilde\beta^\alpha(Y)$ with weights chosen based on the observations $Y$. We present inequalities for large deviations of the norm of the prediction error of this method.
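The construction described in the abstract can be illustrated with a short numerical sketch. The code below is an assumption-laden illustration, not the paper's exact procedure: it uses a ridge-type family $H^\alpha(\lambda)=\lambda/(\lambda+\alpha)$ as one admissible ordered family of spectral regularizers, and exponential weights driven by a Stein-type unbiased estimate of the prediction risk as one plausible data-driven choice of the convex weights. The design matrix, noise level, and grid of $\alpha$ values are all invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 100, 40, 1.0

# Ill-conditioned design (assumed for illustration): singular values of X
# decay exponentially, mimicking the hard-to-invert setting of the paper.
U, _ = np.linalg.qr(rng.standard_normal((n, p)))
V, _ = np.linalg.qr(rng.standard_normal((p, p)))
s = np.exp(-np.linspace(0.0, 6.0, p))
X = (U * s) @ V.T

beta = rng.standard_normal(p)
Y = X @ beta + sigma * rng.standard_normal(n)

# Maximum likelihood estimate: beta_hat^o = (X^T X)^{-1} X^T Y.
XtX = X.T @ X
beta_ml = np.linalg.solve(XtX, X.T @ Y)

# Spectral regularization family applied to the eigenvalues of X^T X.
# H^alpha maps R^+ into [0, 1] and is ordered in alpha, as required.
lam, Q = np.linalg.eigh(XtX)
alphas = np.logspace(-8, 0, 30)

def beta_alpha(a):
    """Regularized estimate H^alpha(X^T X) beta_hat^o (ridge-type filter)."""
    H = lam / (lam + a)
    return Q @ (H * (Q.T @ beta_ml))

# Convex combination over alpha with exponential weights based on a
# Stein-type unbiased risk estimate (an illustrative weighting scheme).
ests, risks = [], []
for a in alphas:
    b = beta_alpha(a)
    H = lam / (lam + a)
    resid = Y - X @ b
    # Unbiased estimate of the prediction risk ||X(b - beta)||^2:
    # ||Y - Xb||^2 + 2 sigma^2 tr(A) - n sigma^2, where tr(A) = sum(H).
    risks.append(resid @ resid + 2.0 * sigma**2 * H.sum() - n * sigma**2)
    ests.append(b)

risks = np.array(risks)
w = np.exp(-(risks - risks.min()) / (2.0 * sigma**2))
w /= w.sum()                     # convex weights: nonnegative, sum to 1
beta_final = sum(wi * bi for wi, bi in zip(w, ests))
```

The risk concentration results of the paper bound the large deviations of the prediction error $\|X(\widehat\beta-\beta)\|$ for aggregates of this kind; the sketch above only shows how such an aggregate is assembled.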

UDC: 621.391.1

Received: 25.11.2015
Revised: 05.04.2016


 English version:
Problems of Information Transmission, 2016, 52:4, 344–358
