Abstract:
This paper is the second part of [M. V. Burnashev, Sh. Amari, and T. S. Han, Theory Probab. Appl., 45 (2000), pp. 558–568]. A parameter estimation problem is considered in which part of the data cannot be observed directly. A helper observes those data and can send us a limited amount of information about them. What kind of information allows us to achieve the minimal mean-square error of a parameter estimate? In particular, what is the minimal amount of information required to achieve the same mean-square error as when all the data are observed directly? Upper bounds for that minimal amount of information and some related results are obtained.