Abstract:
Various neuron approximations can be used to reduce the computational complexity of neural networks. One such approximation, based on summation and maximum operations, is the bipolar morphological neuron. This paper presents an improved structure of the bipolar morphological neuron that enhances its computational efficiency, together with a new training approach based on continuous approximations of the maximum and on knowledge distillation. Experiments were conducted on the MNIST dataset with a LeNet-like network architecture and on the CIFAR10 dataset with a ResNet-22 architecture. The proposed training method achieves 99.45% classification accuracy with the LeNet-like model, matching the accuracy of the classical network, and 86.69% accuracy with the ResNet-22 model, compared to 86.43% for the classical model. The results show that the proposed method, combining a log-sum-exp (LSE) approximation of the maximum with layer-by-layer knowledge distillation, yields a simplified bipolar morphological network that is not inferior to classical networks.
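For reference, the standard log-sum-exp bound on the maximum underlies the LSE approximation mentioned above; the sketch below uses a smoothing parameter \( \mu \) (an assumed notation, as the abstract does not fix one), where larger \( \mu \) tightens the approximation:

\[
\max_{1 \le i \le n} x_i \;\le\; \frac{1}{\mu}\ln\sum_{i=1}^{n} e^{\mu x_i} \;\le\; \max_{1 \le i \le n} x_i + \frac{\ln n}{\mu}, \qquad \mu > 0.
\]

Because the LSE expression is smooth in the \( x_i \), it admits the well-defined gradients needed for training, whereas the exact maximum does not; the paper's exact parameterization and annealing schedule for \( \mu \) may differ from this generic form.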