Abstract:
A new method is introduced for constructing control variates to reduce the variance of additive functionals of Markov Chain Monte Carlo (MCMC) samplers. These control variates are obtained by minimizing the asymptotic variance associated with the Langevin diffusion over a family of functions. To motivate our approach, we show that the asymptotic variances of some well-known MCMC algorithms, including the Random Walk Metropolis and the (Metropolis) Unadjusted/Adjusted Langevin Algorithm, are well approximated by that of the Langevin diffusion. Finally, we theoretically justify the use of a class of linear control variates that we introduce. In particular, we show that the variance of the resulting estimators is smaller, for a given computational complexity, than that of the standard Monte Carlo estimator. Several examples of Bayesian inference problems support our findings, showing in some cases a very significant reduction of the variance.
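To illustrate the general idea of linear, Langevin-based control variates in a self-contained way, here is a minimal sketch. It assumes a standard Gaussian toy target and a small polynomial basis, builds zero-mean control variates by applying the generator of the Langevin diffusion to the basis functions, and fits the coefficients by ordinary least squares on the chain output; this is only an illustrative proxy for the variance-minimization criterion described above, not the paper's exact construction.

```python
import numpy as np

# Toy 1-D target: pi(x) ∝ exp(-x^2 / 2), so U(x) = x^2 / 2 and grad U(x) = x.
def grad_U(x):
    return x

# Images of the basis psi_1(x) = x, psi_2(x) = x^2 under the Langevin generator
# L g = -grad U * g' + g''; each L psi_i has zero mean under pi, so it can serve
# as a control variate.
def L_psi(x):
    return np.column_stack([-x, -2 * x**2 + 2])

rng = np.random.default_rng(0)

# Crude Random Walk Metropolis chain targeting the standard normal.
n, x = 50_000, 0.0
samples = np.empty(n)
for k in range(n):
    prop = x + rng.normal(scale=1.0)
    if np.log(rng.uniform()) < 0.5 * (x**2 - prop**2):
        x = prop
    samples[k] = x

f = samples**2                 # functional of interest, E_pi[f] = 1
Z = L_psi(samples)             # zero-mean control variates
# Least-squares coefficients: a simple empirical proxy for variance minimization.
theta, *_ = np.linalg.lstsq(Z - Z.mean(0), f - f.mean(), rcond=None)

plain = f.mean()
cv = (f - Z @ theta).mean()
print(f"plain MC estimate: {plain:.4f}, with control variates: {cv:.4f}")
print(f"empirical variance ratio: {np.var(f - Z @ theta) / np.var(f):.3f}")
```

In this toy setting the basis function psi_2(x) = x^2 nearly reproduces f up to an additive constant, so the fitted coefficients drive the empirical variance of the corrected functional close to zero, which is the kind of behaviour the abstract refers to.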