Abstract:
This is joint work with Tino Ullrich (University of Bonn).
We consider approximations of multivariate functions using $m$ terms from their tensorized
Faber–Schauder expansion. The univariate Faber–Schauder system on $[0, 1]$ is given by
dyadic dilates and translates (in the wavelet sense) of the $L_1$-normalized simple hat
function with support in $[0, 1]$. This yields a hierarchical basis, which is tensorized
over all levels (hyperbolic) to obtain the dictionary $\mathcal F$.
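For illustration only (one plausible normalization of the hat function, with unit integral; not necessarily the one used in the paper), one may take
$$v(x) = \max\{0,\, 2 - |4x - 2|\}, \qquad v_{j,k}(x) = v(2^j x - k), \quad k = 0, \dots, 2^j - 1,$$
so that the dictionary consists of the tensor products $v_{\mathbf j, \mathbf k}(\mathbf x) = \prod_{i=1}^{d} v_{j_i, k_i}(x_i)$ taken over all combinations of levels $\mathbf j = (j_1, \dots, j_d)$.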
The worst-case error with respect to a class of functions $\mathbf{\mathrm{F}} \hookrightarrow X$ is quantified
by the usual best $m$-term widths $\sigma_m(\mathbf{\mathrm{F}}, \mathcal{F})_X$, where the error is measured in $X$. We
constructively prove the following
sharp asymptotic bound for the class of Besov spaces with small mixed smoothness (i.e.,
$1/p < r < \min\{1/\theta - 1, 2\}$) in $L_q$ ($p < q \le \infty$):
$$\sigma_m(S^r_{p,\theta}B, \mathcal{F})_q \asymp m^{-r}.$$
Note that this asymptotic rate of convergence does not depend on the dimension
$d$ (only the constants do). In addition, this result holds for $q = 1$, and to the best of our
knowledge it is the first sharp result involving $L_1$ as a target space. We emphasize two
more things. First, the selection procedure for the coefficients is a level-wise constructive
greedy strategy which touches only a prescribed finite number of coefficients. Second,
due to the use of the Faber–Schauder system, the coefficients are finite linear combinations
of discrete function values. Hence, this method can be regarded as a nonlinear adaptive
sampling algorithm leading to a purely polynomial rate of convergence for any $d$.
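To indicate the flavor of such a scheme (a minimal sketch under our own assumptions, in particular the stencil normalization and the hyperbolic truncation rule; not the precise procedure of the paper): univariate Faber–Schauder coefficients are second differences of point values, tensorized over the coordinates, and a greedy selection keeps the $m$ largest coefficients.
\begin{verbatim}
import numpy as np
from itertools import product

def fs_coeff(f, j, k):
    # Tensor Faber-Schauder coefficient of f: [0,1]^d -> R as a tensor
    # product of univariate second differences, i.e. a finite linear
    # combination of 3^d point samples (one plausible normalization;
    # boundary/low-level terms are omitted for brevity).
    d = len(j)
    total = 0.0
    for offs in product((-1, 0, 1), repeat=d):
        w, x = 1.0, []
        for ji, ki, o in zip(j, k, offs):
            h = 2.0 ** (-(ji + 1))              # half step at level ji
            x.append((2 * ki + 1) * h + o * h)  # midpoint shifted by o*h
            w *= 1.0 if o == 0 else -0.5        # stencil -1/2, 1, -1/2
        total += w * f(np.array(x))
    return total

def greedy_m_term(f, d, m, max_level):
    # Level-wise sketch: evaluate coefficients for all level combinations
    # with |j|_1 <= max_level (hyperbolic truncation) and keep the m
    # largest in absolute value; only a prescribed finite number of
    # point samples of f is touched.
    coeffs = []
    for j in product(range(max_level + 1), repeat=d):
        if sum(j) > max_level:
            continue
        for k in product(*(range(2 ** ji) for ji in j)):
            coeffs.append(((j, k), fs_coeff(f, j, k)))
    coeffs.sort(key=lambda item: abs(item[1]), reverse=True)
    return coeffs[:m]
\end{verbatim}
For example, in $d = 1$ with $j = 0$, $k = 0$ the sketch returns $f(1/2) - \tfrac12(f(0) + f(1))$, the familiar second difference at the midpoint.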