
J. Sib. Fed. Univ. Math. Phys., 2015, Volume 8, Issue 2, Pages 208–216 (Mi jsfu423)

Automated recognition of paralinguistic signals in spoken dialogue systems: ways of improvement

Maxim Sidorov (a), Alexander Schmitt (a), Eugene S. Semenkin (b)

(a) Institute of Communications Engineering, Ulm University, Albert-Einstein-Allee 43, Ulm, 89081, Germany
(b) Institute of Computer Science and Telecommunications, Siberian State Aerospace University, Krasnoyarskiy Rabochiy, 31, Krasnoyarsk, 660014, Russia

Abstract: The ability of artificial systems to recognize paralinguistic signals, such as emotions, depression, or openness, is useful in various applications. However, the performance of such recognizers is not yet perfect. In this study we consider several directions which can significantly improve their performance. First, we propose building speaker- or gender-specific emotion models: an emotion recognition (ER) procedure is coupled with a gender or speaker identifier, and the resulting speaker- or gender-specific information is either appended to the feature vector directly or used to build a separate emotion model for each gender or speaker. Second, since feature selection is an important part of any classification problem, we propose feature selection techniques based on a genetic algorithm and on an information gain criterion; both yield higher performance than baselines without any feature selection. Finally, we suggest analysing not only audio signals but combined audio-visual cues. Early fusion (feature-level fusion) is used to combine the different modalities into a multimodal approach, and the results obtained show that the multimodal approach outperforms the single modalities on the considered corpora. The suggested methods have been evaluated on a number of emotional databases in three languages (English, German and Japanese), in both acted and non-acted settings. The results of the numerical experiments are reported in the study.
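The gender- or speaker-specific cascade described in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical: it assumes scikit-learn classifiers and random placeholder features, not the actual models, acoustic features, or corpora used in the paper.

```python
# Minimal sketch of gender-dependent emotion recognition (synthetic data;
# the paper's actual classifiers, features and corpora are not reproduced).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, d = 400, 20                        # utterances x acoustic features (placeholder)
X = rng.normal(size=(n, d))           # stand-in for e.g. spectral/prosodic statistics
gender = rng.integers(0, 2, size=n)   # 0 = female, 1 = male (training labels assumed known)
emotion = rng.integers(0, 4, size=n)  # placeholder emotion classes

# Stage 1: a gender identifier trained on the same acoustic features.
gender_clf = RandomForestClassifier(random_state=0).fit(X, gender)

# Stage 2: one emotion model per gender.
emotion_clfs = {
    g: RandomForestClassifier(random_state=0).fit(X[gender == g], emotion[gender == g])
    for g in (0, 1)
}

def predict_emotion(x):
    """Route the utterance to the gender-specific emotion model."""
    g = int(gender_clf.predict(x.reshape(1, -1))[0])
    return int(emotion_clfs[g].predict(x.reshape(1, -1))[0])

print(predict_emotion(X[0]))
```

The abstract's second variant, appending the predicted gender or speaker identity to the feature vector instead of switching models, amounts to replacing the routing step with a concatenation of the identifier's output onto X.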
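The genetic-algorithm feature selection can likewise be sketched as a search over binary feature masks. The population size, operators, fitness function, and toy data below are illustrative assumptions rather than the paper's configuration; an information-gain-style baseline could instead rank features with scikit-learn's mutual_info_classif.

```python
# Toy genetic-algorithm feature selection over binary masks (illustrative only).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 15))               # placeholder feature matrix
y = rng.integers(0, 3, size=200)             # placeholder emotion labels

def fitness(mask):
    """Cross-validated accuracy of a classifier on the selected feature subset."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=200)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
for _ in range(10):                          # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]  # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(0, 10, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05        # bit-flip mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```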
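Early (feature-level) fusion reduces to concatenating per-utterance audio and visual descriptors before training a single classifier, as in the sketch below; the feature dimensionalities and the classifier are assumptions made for illustration, not the paper's setup.

```python
# Early (feature-level) fusion sketch: audio and visual descriptors are
# concatenated into one vector before training a single classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 300
X_audio = rng.normal(size=(n, 30))    # e.g. prosodic/spectral statistics (placeholder)
X_video = rng.normal(size=(n, 12))    # e.g. facial descriptors (placeholder)
y = rng.integers(0, 4, size=n)        # placeholder emotion labels

X_fused = np.hstack([X_audio, X_video])   # early fusion = simple concatenation
print(cross_val_score(SVC(), X_fused, y, cv=3).mean())
```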

Keywords: recognition of paralinguistic signals, machine learning algorithms, speaker-adaptive emotion recognition, multimodal approach.

UDC: 519.87

Received: 19.01.2015
Received in revised form: 25.02.2015
Accepted: 20.03.2015

Language: English


