Seminar "Mathematical Foundations of Artificial Intelligence"
An introduction to Kolmogorov complexity with applications to reinforcement learning
Bruno Bauwens, HSE University, Moscow
Abstract: Ray Solomonoff considered a general version of the problem of sequence prediction: how should one predict the next bit of a sequence when given unlimited computational power? He proposed applying Bayesian reasoning to a prior distribution defined by the output of a randomized universal Turing machine. In this talk, a gentle introduction to Kolmogorov complexity is given. We also prove its incomputability and give elegant examples of Gödel and Rosser sentences. We then prove that Kolmogorov complexity is approximately equal to the negative logarithm of Solomonoff's prior distribution, which implies that Solomonoff induction is biased towards 'simple' explanations. Afterwards, Hutter's solution for reinforcement learning is discussed, which generalizes Solomonoff induction. He also provides a time-bounded version (which still does not seem practical) that relies on a proof system for the optimization of computational resources. Finally, I briefly make some philosophical comments on Vitányi's work on the information distance and on work that aims to understand the implicit Bayesian bias of stochastic gradient descent in neural networks.

This talk requires only basic skills in discrete mathematics. It is intended for people interested in computability theory or the foundations of machine learning.

Language of the talk: English
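As a small illustration of the ideas above (not part of the talk itself): since Kolmogorov complexity is incomputable, it can only be approximated from above, and the length of a compressed encoding of a string is one such computable upper bound, up to an additive constant. The sketch below, assuming only the Python standard library, uses zlib to show that a highly regular string admits a much shorter description than a pseudorandom one of the same length, mirroring the bias towards 'simple' explanations.

```python
import hashlib
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """Length of a zlib-compressed encoding of s: a computable upper
    bound (up to an additive constant) on the Kolmogorov complexity
    of s. K(s) itself is incomputable, so in practice one works with
    such approximations from above."""
    return len(zlib.compress(s, 9))

# A highly regular string: it has a short description ("repeat '01' 500 times").
simple = b"01" * 500

# A pseudorandom string of the same length, derived deterministically from
# iterated SHA-256 so the example is reproducible; zlib finds no short
# description for it.
chunks, state = [], b"seed"
while sum(len(c) for c in chunks) < 1000:
    state = hashlib.sha256(state).digest()
    chunks.append(state)
incompressible = b"".join(chunks)[:1000]

print(complexity_upper_bound(simple))          # much smaller than 1000
print(complexity_upper_bound(incompressible))  # close to 1000
```

The gap between the two compressed lengths is the practical shadow of the theorem relating Kolmogorov complexity to the negative logarithm of Solomonoff's prior: strings with short descriptions receive much higher prior weight.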