Abstract:
We studied the theoretical foundations of artificial neural networks, focusing on the possibility of approximating functions of many variables by superpositions of functions of one variable. We considered the most important universal approximation theorems, including theorems that bound the number of neurons per layer (width constraints) or the number of layers in a neural network (depth constraints), as well as results whose authors prove lower bounds on both the number of layers and the number of neurons per layer.