Speech Emotion Recognition Based on Voice Fundamental Frequency
Teodora Dimitrova-Grekow, Aneta Klis, Magdalena Igras-Cybulska
Abstract: The human voice is one of the basic means of communication and readily conveys the speaker's emotional state. This paper presents experiments on emotion recognition in human speech based on the fundamental frequency. The AGH Emotional Speech Corpus was used; this database consists of audio samples of seven emotions acted by 12 different speakers (6 female and 6 male). We explored phrases of all the emotions, both all together and in various combinations. The Fast Fourier Transform and magnitude spectrum analysis were applied to extract the fundamental tone from the speech audio samples. After extracting several statistical features of the fundamental frequency, we studied whether they carry information about the speaker's emotional state by applying different AI methods. The resulting data were analysed with the following classifiers from the WEKA data-mining algorithm collection: K-Nearest Neighbours with local induction, Random Forest, Bagging, JRip, and the Random Subspace Method. The results show that the fundamental frequency is a promising choice for further experiments.
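The pipeline described in the abstract (F0 extraction via the FFT magnitude spectrum, followed by statistical features) can be sketched as below. This is a minimal illustration, not the authors' implementation: the frame length, the 60-400 Hz search band, and the choice of statistics are assumptions for the example.

```python
import numpy as np

def estimate_f0(frame, sr, f_min=60.0, f_max=400.0):
    """Estimate the fundamental frequency of one voiced frame as the
    strongest peak of the FFT magnitude spectrum within [f_min, f_max] Hz.
    (Assumed search band; the paper does not specify one.)"""
    windowed = frame * np.hanning(len(frame))
    magnitude = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    band = (freqs >= f_min) & (freqs <= f_max)
    return freqs[band][np.argmax(magnitude[band])]

def f0_features(f0_track):
    """Simple statistics of an F0 contour, as candidate emotion features
    (an assumed feature set for illustration)."""
    f0 = np.asarray(f0_track, dtype=float)
    return {
        "mean": f0.mean(),
        "std": f0.std(),
        "min": f0.min(),
        "max": f0.max(),
        "range": f0.max() - f0.min(),
    }
```

Such per-utterance feature vectors could then be exported (e.g. to ARFF) and fed to the WEKA classifiers listed in the abstract.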
|Journal series||Archives of Acoustics, ISSN 0137-5075, e-ISSN 2300-262X, (N/A 70 pkt)|
|Publication size in sheets||0.5|
|Keywords in English||emotion recognition; speech signal analysis; voice analysis; fundamental frequency; speech corpora|
|License||Journal (articles only); published final; with publication|
|Score||= 70.0, 30-03-2020, ArticleFromJournal|
|Publication indicators||2018 = 0.755; 2018 = 0.899 (2); 2018 = 0.893 (5)|
* The presented citation count is obtained through Internet information analysis and is close to the number calculated by the Publish or Perish system.