Abstract:
Machine learning (ML) in its current form implies that the answer to any problem can be well approximated by a function of a very peculiar form: a specially adjusted iterated composition of Heaviside theta-functions. It is natural to ask whether answers that we already know can be naturally represented in this form. We provide elementary yet non-evident examples showing that this is indeed possible, and suggest looking for a systematic reformulation of existing knowledge in an ML-consistent way. The success or failure of such attempts could shed light on a variety of problems, both scientific and epistemological.
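As an illustration of the claim above (our own sketch, not an example from the paper): already a single "layer" of Heaviside theta-functions, with suitably adjusted thresholds and jump weights, realizes a staircase approximation of a smooth target such as sin(x). The function name `theta_approx` and the choice of target are ours.

```python
import numpy as np

def theta_approx(x, thresholds, jumps, base):
    """Evaluate base + sum_i jumps[i] * theta(x - thresholds[i])."""
    x = np.asarray(x, dtype=float)
    # theta(0) is taken to be 1, so each step switches on at its threshold
    steps = np.heaviside(x[..., None] - thresholds, 1.0)  # shape (..., n)
    return base + steps @ jumps

# Approximate f(x) = sin(x) on [0, 2*pi] by n - 1 theta-function steps
f = np.sin
n = 200
ts = np.linspace(0.0, 2.0 * np.pi, n)
vals = f(ts)
base = vals[0]            # value on the leftmost interval
jumps = np.diff(vals)     # jump placed at each interior threshold
thresholds = ts[1:]

xs = np.linspace(0.0, 2.0 * np.pi, 1000)
err = np.max(np.abs(theta_approx(xs, thresholds, jumps, base) - f(xs)))
print(f"max staircase error with {n} thresholds: {err:.4f}")
```

Since sin is 1-Lipschitz, the error of this staircase is bounded by the threshold spacing, roughly 2*pi/n; refining the grid shrinks it accordingly, which is the one-dimensional kernel of the universal-approximation claim.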