
Tr. SPIIRAN, 2017, Issue 52, Pages 32–50 (Mi trspy943)

Theoretical and Applied Mathematics

Identification features analysis in speech data using GMM-UBM speaker verification system

I. A. Rakhmanenko, R. V. Meshcheryakov

Tomsk State University of Control Systems and Radioelectronics (TUSUR)

Abstract: This paper is devoted to feature selection and evaluation in an automatic text-independent speaker verification task. To solve this problem, a speaker verification system based on a Gaussian mixture model and a universal background model (a GMM-UBM system) was used.
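
As an illustration of the GMM-UBM approach named above, the following Python sketch trains a universal background model, derives a speaker model by a simplified MAP adaptation of the UBM means, and scores a test utterance with the average log-likelihood ratio. This is a minimal sketch built on scikit-learn under stated assumptions, not the authors' implementation; the function names, relevance factor, and component count are illustrative.

```python
# Minimal GMM-UBM verification sketch (illustrative, not the authors' code).
# Feature matrices are assumed to have shape (n_frames, n_dims).
import numpy as np
from sklearn.mixture import GaussianMixture

def train_ubm(background_features, n_components=256):
    """Fit the universal background model on pooled background speech."""
    ubm = GaussianMixture(n_components=n_components,
                          covariance_type='diag', max_iter=200)
    ubm.fit(background_features)
    return ubm

def map_adapt_means(ubm, speaker_features, relevance=16.0):
    """Simplified MAP adaptation: shift only the UBM means toward the
    enrollment data (weights and covariances are kept from the UBM)."""
    resp = ubm.predict_proba(speaker_features)        # responsibilities
    n_k = resp.sum(axis=0)                            # soft frame counts
    f_k = resp.T @ speaker_features                   # first-order statistics
    alpha = (n_k / (n_k + relevance))[:, None]        # adaptation coefficients
    means = alpha * f_k / np.maximum(n_k, 1e-10)[:, None] + (1 - alpha) * ubm.means_
    spk = GaussianMixture(n_components=ubm.n_components, covariance_type='diag')
    spk.weights_ = ubm.weights_                       # reuse UBM parameters
    spk.means_ = means                                # except the adapted means
    spk.covariances_ = ubm.covariances_
    spk.precisions_cholesky_ = ubm.precisions_cholesky_
    return spk

def verification_score(spk, ubm, test_features):
    """Average per-frame log-likelihood ratio; higher favors 'same speaker'."""
    return spk.score(test_features) - ubm.score(test_features)
```
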
The application areas and challenges of modern automatic speaker identification systems are considered, and an overview of modern speaker recognition methods and of the main speech features used in speaker identification is provided. The feature extraction process used in this article is examined. The speech features reviewed and used for speaker verification include mel-frequency cepstral coefficients (MFCC), line spectral pairs (LSP), perceptual linear prediction cepstral coefficients (PLP), short-term energy, formant frequencies, fundamental frequency, voicing probability, zero-crossing rate (ZCR), jitter, and shimmer.
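
The exact extraction toolchain is not specified on this page, so the sketch below is only one way to compute a few of the listed features per frame (MFCC, zero-crossing rate, short-term energy, fundamental frequency) with librosa and stack them into one observation matrix; the file name, sample rate, and frame alignment are assumptions.

```python
# Per-frame extraction of some of the listed features with librosa
# (the file name, sample rate, and frame settings are assumptions).
import numpy as np
import librosa

y, sr = librosa.load('utterance.wav', sr=16000)            # hypothetical file

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=14)         # (14, n_frames)
zcr = librosa.feature.zero_crossing_rate(y)                # (1, n_frames)
energy = librosa.feature.rms(y=y)                          # short-term energy
f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)              # fundamental frequency

# Stack the per-frame features into one observation matrix for the GMM-UBM.
n = min(mfcc.shape[1], zcr.shape[1], energy.shape[1], len(f0))
features = np.vstack([mfcc[:, :n], zcr[:, :n], energy[:, :n], f0[None, :n]]).T
```
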
The experimental evaluation of the GMM-UBM system with different speech features was conducted on a set of 50 speakers, and the results are presented. Feature selection was performed using a genetic algorithm and a greedy adding-and-deleting algorithm.
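
A generic version of greedy adding-and-deleting selection could look as follows; `evaluate` is a hypothetical callback that trains and scores the verification system on a candidate feature subset and returns its EER. The paper's exact procedure, and its genetic-algorithm counterpart (which typically encodes the subset as a bit string evolved under the same EER fitness), may differ in detail.

```python
# Generic greedy adding-and-deleting selection. `evaluate(subset)` is a
# hypothetical callback that trains/scores the verifier on the candidate
# feature subset and returns its EER; lower is better.
def greedy_add_delete(all_features, evaluate):
    selected, best_eer = set(), float('inf')
    improved = True
    while improved:
        improved, best_move = False, None
        # Adding step: try every feature not yet selected.
        for f in all_features - selected:
            eer = evaluate(selected | {f})
            if eer < best_eer:
                best_eer, best_move, improved = eer, ('add', f), True
        # Deleting step: try dropping every selected feature.
        for f in set(selected):
            eer = evaluate(selected - {f})
            if eer < best_eer:
                best_eer, best_move, improved = eer, ('del', f), True
        if improved:                     # apply the single best move found
            op, f = best_move
            selected = selected | {f} if op == 'add' else selected - {f}
    return selected, best_eer
```
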
With a 256-component Gaussian mixture model and the obtained feature vector, the equal error rate (EER) is 0.579%. Compared to the standard 14-dimensional MFCC vector, this is a 42.1% improvement in EER.
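
For reference, EER is the operating point at which the false-acceptance and false-rejection rates coincide; a minimal way to estimate it from genuine and impostor score arrays is sketched below.

```python
# Estimating EER from genuine (same-speaker) and impostor score arrays:
# sweep thresholds and take the point where FAR and FRR are closest.
import numpy as np

def equal_error_rate(genuine, impostor):
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = float('inf'), None
    for t in thresholds:
        frr = np.mean(genuine < t)      # false rejection rate
        far = np.mean(impostor >= t)    # false acceptance rate
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```
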

Keywords: speaker recognition; speaker verification; Gaussian mixture model; GMM-UBM system; mel frequency cepstral coefficients; speech features; feature selection; speech processing; genetic algorithm; greedy algorithm.

UDC: 004.934.8'1

DOI: 10.15622/sp.52.2


