Abstract:
The paper constructs a linear analog of Fisher information theory in which the initial object is not a family of distributions on a space of observations, but rather a family of scalar products on a finite-dimensional (abstract) linear space. It is shown that all the basic properties of Fisher information (monotonicity, invariance under contraction onto a sufficient subspace, additivity under passage to the tensor product of spaces, and the Rao–Cramér inequality) carry over to the linear theory. A linear analog of the maximum-likelihood method is developed; a particular case of this analog is a well-posed version of the method of moments that, unlike the classical method of moments, makes it possible to utilize an arbitrarily large number of general and sample moments for parameter estimation.
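For orientation, one may recall the classical objects that the linear theory generalizes; the following is a minimal sketch in standard textbook notation (the symbols $p(x;\theta)$, $I(\theta)$, and $\hat{\theta}$ are conventional and are not taken from the paper's linear-space formulation):
\[
  I(\theta) \;=\; \mathbb{E}_\theta\!\left[
    \left(\frac{\partial}{\partial\theta}\,\log p(X;\theta)\right)^{\!2}
  \right],
  \qquad
  \operatorname{Var}_\theta\bigl(\hat{\theta}(X)\bigr) \;\ge\; \frac{1}{I(\theta)}
\]
for any unbiased estimator $\hat{\theta}$ of $\theta$; additivity refers to the standard identity $I_{X,Y}(\theta) = I_X(\theta) + I_Y(\theta)$ for independent observations $X$ and $Y$.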