Abstract:
We consider a recursive algorithm that constructs an aggregated estimator from a
finite number of base decision rules in the classification problem. The estimator approximately
minimizes a convex risk functional under an $\ell_1$-constraint. It is defined by a stochastic version
of the mirror descent algorithm, which performs gradient-type descent in the dual space
combined with an additional averaging step. The main result of the paper is an upper bound on the expected
accuracy of the proposed estimator. This bound is of the order $C\sqrt{(\log M)/t}$ with a small explicit
constant factor $C$, where $M$ is the dimension of the problem and $t$ is the sample size. A similar
bound is proved in a more general setting that covers, in particular, the regression model with
squared loss.
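To make the construction concrete, below is a minimal sketch of stochastic mirror descent with averaging over the simplex of aggregation weights. The entropic mirror map, the step sizes $\gamma_t = 1/\sqrt{t}$, the temperature parameter beta, and the helper loss_grad are illustrative assumptions for this sketch, not the exact scheme analyzed in the paper.

    import numpy as np

    def aggregate_by_mirror_descent(loss_grad, M, T, beta=1.0):
        # Sketch: stochastic mirror descent with averaging over the simplex.
        zeta = np.zeros(M)        # dual variable: accumulated scaled gradients
        theta_bar = np.zeros(M)   # running average of the primal iterates
        for t in range(1, T + 1):
            w = zeta / beta
            w -= w.max()                          # numerical stability
            theta = np.exp(w)
            theta /= theta.sum()                  # entropic mirror map -> simplex
            g = loss_grad(theta, t)               # stochastic subgradient of the risk
            zeta -= g / np.sqrt(t)                # gradient-type step in the dual space
            theta_bar += (theta - theta_bar) / t  # averaging of the iterates
        return theta_bar

    # Hypothetical usage: aggregate M base predictors under squared loss.
    rng = np.random.default_rng(0)
    M, T = 5, 2000
    H = rng.normal(size=(T, M))                  # base rules' predictions per observation
    y = H @ np.array([0.7, 0.3, 0.0, 0.0, 0.0])  # target: a convex combination

    def loss_grad(theta, t):
        h = H[t - 1]
        return 2.0 * (theta @ h - y[t - 1]) * h  # gradient of (theta . h - y)^2 in theta

    theta_hat = aggregate_by_mirror_descent(loss_grad, M, T)

With the entropic map the iterates stay in the probability simplex (and hence satisfy the $\ell_1$-constraint) throughout, and the returned average plays the role of the aggregated estimator's weight vector in this sketch.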