New Parameters for Improving Emotion Recognition in Human Voice

Teodora Dimitrova-Grekow, Paweł Konopko

Abstract

Distinguishing emotions in human speech is a challenging task that has been approached in many ways, mostly based on frequency analysis. In this paper, we present a novel course of searching for parameters. The experiments described below are based on the AGH Emotional Speech Corpus, which consists of audio samples of seven emotions acted by 12 different speakers. We compared our results against the previously used database subset and analyzed several new extended data sets. The Fast Fourier Transform and magnitude spectrum analysis were applied twice to extract the fluctuations of the fundamental tone from the speech audio samples. We investigated whether these fluctuations improve the accuracy of emotion recognition with different artificial-intelligence methods. The most efficient analysis of the resulting data was achieved with the Random Forest classifier from the WEKA data-mining toolkit. The results show that fundamental-frequency fluctuations are a promising choice for further experiments.
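The abstract states that the FFT and magnitude-spectrum analysis were applied twice to expose fluctuations of the fundamental tone. The paper's own implementation is not reproduced here; the following is only a minimal NumPy sketch of that double-transform (cepstrum-style) idea, assuming a 16 kHz sampling rate, a 1024-sample frame, and a 60-400 Hz F0 search range — all illustrative choices, not values taken from the paper:

```python
import numpy as np

def estimate_f0_cepstrum(frame, sr, f0_min=60.0, f0_max=400.0):
    """Cepstrum-style F0 estimate: FFT -> log magnitude -> second transform.

    The F0 search range (60-400 Hz) is an assumption for this sketch,
    not a value from the paper.
    """
    windowed = frame * np.hanning(len(frame))
    magnitude = np.abs(np.fft.rfft(windowed))           # first pass: magnitude spectrum
    cepstrum = np.fft.irfft(np.log(magnitude + 1e-10))  # second pass over the log spectrum
    # Convert the F0 bounds to quefrency (lag) bounds and pick the peak.
    q_min, q_max = int(sr / f0_max), int(sr / f0_min)
    peak = q_min + np.argmax(cepstrum[q_min:q_max])
    return sr / peak

# Synthetic harmonic tone at 220 Hz, one 64 ms frame at 16 kHz.
sr = 16000
t = np.arange(1024) / sr
frame = sum(np.sin(2 * np.pi * 220.0 * h * t) for h in range(1, 9))
f0 = estimate_f0_cepstrum(frame, sr)   # close to 220 Hz for this synthetic tone
```

Tracking such an estimate frame by frame and summarizing how it varies over time yields the kind of fundamental-frequency fluctuation feature the abstract describes.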
Authors:
- Teodora Dimitrova-Grekow (Faculty of Computer Science, Department of Digital Media and Computer Graphics)
- Paweł Konopko (Faculty of Computer Science)
Pages: 4206-4211
Publication size in sheets: 0.5
Book: 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), 2019, Institute of Electrical and Electronics Engineers, ISBN 978-1-7281-4568-6, 6000 p.
Internal identifier: ROC 19-20
Language: English
Score (nominal): 70
Score source: conferenceList
Score: Ministerial score = 70.0, 05-03-2020, ChapterFromConference