
Principal Seminar of the Department of Probability Theory, Moscow State University
November 22, 2017 16:45, Moscow, MSU, auditorium 12-24


Algorithmic statistics, an overview

A. Kh. Shen'

Directeur de Recherche (Research Director), CNRS, LIRMM, Montpellier, France

Abstract: Following Kolmogorov, one can describe the basic question of statistics as follows: for a given object (binary string) $x$ we look for “explanations”, or good statistical models. Such a model is a finite distribution on binary strings; for simplicity we may consider only uniform distributions on finite sets of strings, which does not change much. The “quality” of a finite set $A$ as an explanation for a binary string $x$ is measured by two parameters: its “complexity” (the model should be simple) and its “randomness deficiency” (the object $x$ should be a typical element of $A$, with no hidden regularities in $x$ that are rare in $A$). Both notions can be defined in terms of Kolmogorov complexity theory, and for a given $x$ we can study the trade-off between these two parameters (the notion of $\alpha$-$\beta$-stochasticity). A different approach is to consider “two-part descriptions” of a string $x$: first some finite set $A$ is given, and then the ordinal number of $x$ in $A$ is specified. This approach corresponds to the notion of the Kolmogorov structure function; related ideas have also been studied under the names of “logical depth” and “sophistication”. One can also study the resource-bounded version of Kolmogorov complexity. In recent decades it has become clear that these approaches are equivalent with logarithmic precision (Bennett, Vitanyi, Vereshchagin, Bauwens and others). Though this can hardly be considered practical (the time bounds involved are too large), it is still an interesting and important result of algorithmic information theory.
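
For concreteness, here is a sketch of the standard definitions behind these notions (the notation is a common convention assumed for this sketch, not necessarily that of the talk: $C(\cdot)$ denotes plain Kolmogorov complexity, and $A$ ranges over finite sets of binary strings containing $x$; all statements are meant up to the usual logarithmic precision):
\begin{align*}
  d(x \mid A)  &= \log_2 |A| - C(x \mid A),
    && \text{randomness deficiency of $x$ in $A$},\\
  \Lambda_A(x) &= C(A) + \log_2 |A|,
    && \text{two-part description length of $x$ via $A$},\\
  h_x(\alpha)  &= \min\{\log_2 |A| : x \in A,\ C(A) \le \alpha\},
    && \text{Kolmogorov structure function}.
\end{align*}
A string $x$ is then called $\alpha$-$\beta$-stochastic if there exists a finite set $A$ containing $x$ with $C(A) \le \alpha$ and $d(x \mid A) \le \beta$; the trade-off mentioned above asks which pairs $(\alpha,\beta)$, or equivalently which points of the structure function, are achievable for a given $x$.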

