Abstract:
Assume that an unknown function of $t$ variables can be measured sequentially, with random errors, at arbitrarily chosen $t$-tuples of its arguments. Suppose that this function actually depends only on a subset $S$ of its variables. We propose a search algorithm for the subset $S$, $|S|=s$, consisting of a sequential choice of the variables' values as inputs, a stopping time, and a decision based on the outputs. Under the uniform a priori distribution we estimate the mean error probability and the mean duration of our strategy and study their asymptotic behavior as $t\to\infty$ with $s=\mathrm{const}$. The case of an unknown (but upper-bounded) number of significant variables is also treated.
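To make the measurement model concrete, the following minimal Python sketch simulates noisy queries of a function that depends only on a hidden subset $S$ and recovers $S$ with a naive repeated-query majority vote. The Boolean parity dependence, the noise level `eps`, and all function names are illustrative assumptions; this is not the algorithm proposed in the paper, which uses a sequential choice of inputs with a stopping rule.

```python
import random

def make_noisy_oracle(t, S, eps=0.1, seed=0):
    """Noisy oracle for a Boolean function of t binary variables that
    actually depends only on the coordinates in S (hypothetical model:
    parity of the significant coordinates, flipped with probability eps)."""
    rng = random.Random(seed)
    def query(x):
        y = sum(x[i] for i in S) % 2
        return y ^ (1 if rng.random() < eps else 0)
    return query

def find_significant(query, t, repeats=25, seed=1):
    """Naive strategy (for illustration only): flip each coordinate of a
    random base point and decide by majority vote over repeated noisy
    queries whether the output changes."""
    rng = random.Random(seed)
    base = [rng.randint(0, 1) for _ in range(t)]
    estimated = set()
    for i in range(t):
        flipped = base.copy()
        flipped[i] ^= 1
        votes = sum(query(base) != query(flipped) for _ in range(repeats))
        if votes > repeats // 2:
            estimated.add(i)
    return estimated

if __name__ == "__main__":
    t, S = 20, {3, 7, 15}
    query = make_noisy_oracle(t, S, eps=0.1)
    print(find_significant(query, t))  # recovers {3, 7, 15} with high probability
```

This brute-force test spends a fixed number of queries on every one of the $t$ variables; the point of a sequential strategy with a stopping time is to adapt the inputs and the sample size to the observations, which is what the asymptotic analysis as $t\to\infty$, $s=\mathrm{const}$ concerns.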