Abstract:
We consider the problem of numerically recovering an unknown function $f$ from $m$ point samples of $f$, with the error measured in some Banach space norm $\| \cdot \|_X$. Bounds on the error of recovery can only be proved if there is additional information in the form that $f \in K$, where $K \subset X$ is compact. Two theories have emerged to define the optimal performance of such a numerical algorithm. Optimal recovery assumes the point samples are noiseless. Minimax estimation assumes the measurements are corrupted by additive i.i.d. Gaussian noise of mean zero and variance $\sigma^2$. One would expect the minimax bounds (claimed to be optimal) to converge to the optimal recovery bounds as $\sigma \to 0$. However, the existing minimax bounds in the literature do not provide such estimates.
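For concreteness, the two benchmarks can be formulated as follows; the notation is a standard one and is our addition, not necessarily that of the talk. Given sample points $x_1, \dots, x_m$ in the domain of $f$, the optimal recovery error of $K$ is
$$R_m(K)_X := \inf_{x_1, \dots, x_m} \, \inf_{A} \, \sup_{f \in K} \big\| f - A\big(f(x_1), \dots, f(x_m)\big) \big\|_X,$$
where the inner infimum runs over all recovery maps $A : \mathbb{R}^m \to X$. In the noisy setting one instead observes $y_i = f(x_i) + \eta_i$ with $\eta_i \sim \mathcal{N}(0, \sigma^2)$ i.i.d., and the minimax risk is
$$R_m(K, \sigma)_X := \inf_{x_1, \dots, x_m} \, \inf_{\hat f} \, \sup_{f \in K} \mathbb{E} \, \big\| f - \hat f(y_1, \dots, y_m) \big\|_X,$$
the infimum now running over all estimators $\hat f$ built from the noisy data. The expectation voiced above is that $R_m(K, \sigma)_X \to R_m(K)_X$ as $\sigma \to 0$.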
The goal of this talk is to understand what is going on. We restrict our attention to the case where $f$ is defined on a nice domain $\Omega \subset \mathbb{R}^d$, the model class $K$ is the unit ball of a Besov space $B^s_\tau(L_p(\Omega))$, and the error is measured in an $L_q(\Omega)$ norm. We show that the existing minimax rates in the literature are not clearly stated in terms of their dependence on $\sigma$. We go on to establish the true minimax rates as a function of $\sigma$ and show that these rates converge, as $\sigma$ tends to zero, to the optimal recovery rate (recalled below). Another important aspect of our analysis is that it does not rely on wavelet decompositions, which become somewhat opaque when the support of a wavelet intersects the boundary. This is joint work with Robert Nowak, Rahul Parhi, Guergana Petrova, and Jonathan Siegel.
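For reference, the noiseless benchmark in this setting is classical; we state it as commonly cited, with the caveat that the precise parameter ranges vary across the literature. Under the embedding condition $s/d > 1/p$, which in particular makes point evaluation of $f$ meaningful, the optimal recovery rate for the unit ball $U(B^s_\tau(L_p(\Omega)))$ in $L_q(\Omega)$ is
$$R_m\big(U(B^s_\tau(L_p(\Omega)))\big)_{L_q(\Omega)} \asymp m^{-s/d + (1/p - 1/q)_+},$$
and it is this rate that correctly stated minimax bounds should reproduce in the limit $\sigma \to 0$.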