
Proceedings of ISP RAS, 2022, Volume 34, Issue 4, Pages 135–152

Speech enhancement method based on modified encoder-decoder pyramid transformer

A. A. Lependin, R. S. Nasretdinov, I. D. Ilyashenko

Altai State University

Abstract: The development of new voice communication technologies has created a need for better speech enhancement methods. Modern users of information systems place high demands on both the intelligibility of a voice signal and its perceptual quality. In this work we propose a new approach to the problem of speech enhancement. To this end, a modified pyramidal transformer neural network with an encoder-decoder structure was developed. The encoder compresses the spectrum of the voice signal into a pyramidal series of internal embeddings. The decoder, composed of self-attention transformations, reconstructs the complex ratio mask between the clean and noisy signals from the embeddings computed by the encoder. Two possible loss functions were considered for training the proposed neural network model. It was shown that mixing a frequency encoding into the input data improves the performance of the proposed approach. The neural network was trained and tested on the DNS Challenge 2021 dataset and showed high performance compared with modern speech enhancement methods. A qualitative analysis of the training process showed that the network gradually moved from simple noise masking in the early training epochs to restoring missing formant components of the speaker's voice in later epochs, which led to high performance metrics and high subjective quality of the enhanced speech.
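
As a concrete illustration of the two mechanisms named in the abstract, the sketch below shows, in PyTorch, how a predicted complex ratio mask can be applied to a noisy STFT and how a sinusoidal frequency encoding can be mixed into the network input. This is a minimal sketch under stated assumptions: the function names (frequency_encoding, apply_crm), the encoding scheme, the tensor shapes, and the hypothetical model are illustrative and are not taken from the authors' implementation.

import torch

def frequency_encoding(num_freq_bins: int, channels: int) -> torch.Tensor:
    # Sinusoidal encoding over the frequency axis, shape (channels, num_freq_bins).
    # Assumes an even number of channels.
    pos = torch.arange(num_freq_bins, dtype=torch.float32).unsqueeze(0)    # (1, F)
    idx = torch.arange(0, channels, 2, dtype=torch.float32).unsqueeze(1)   # (C/2, 1)
    angles = pos / torch.pow(10000.0, idx / channels)                      # (C/2, F)
    enc = torch.zeros(channels, num_freq_bins)
    enc[0::2] = torch.sin(angles)   # even channels: sine
    enc[1::2] = torch.cos(angles)   # odd channels: cosine
    return enc

def apply_crm(noisy_stft: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # noisy_stft: complex tensor (batch, freq, time);
    # mask: real tensor (batch, 2, freq, time) holding the real and imaginary
    # parts of the predicted complex ratio mask.
    m = torch.complex(mask[:, 0], mask[:, 1])
    return m * noisy_stft   # complex multiplication adjusts magnitude and phase

# Usage sketch (model stands for a hypothetical encoder-decoder pyramid transformer):
# n_fft, hop = 512, 128
# window = torch.hann_window(n_fft)
# Y = torch.stft(wave, n_fft, hop_length=hop, window=window, return_complex=True)
# feat = torch.stack([Y.real, Y.imag], dim=1)                        # (B, 2, F, T)
# feat = feat + frequency_encoding(Y.shape[1], 2)[None, :, :, None]  # mix in encoding
# mask = model(feat)                                                 # (B, 2, F, T)
# enhanced = torch.istft(apply_crm(Y, mask), n_fft, hop_length=hop, window=window)

Because the mask carries both a real and an imaginary channel, applying it is a full complex multiplication rather than a magnitude-only gain, so a model of this kind can correct the phase of the noisy spectrum as well as its magnitude.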

Keywords: speech enhancement, noise reduction, noise masking, deep neural network, deep learning, encoder-decoder architecture, pyramid transformer, self-attention

DOI: 10.15514/ISPRAS-2022-34(4)-10


