Speech emotion recognition in emotional feedback for Human-Robot Interaction

Abstract

For robots to plan their actions autonomously and interact with people, recognizing human emotions is crucial. For most humans, nonverbal cues such as pitch, loudness, spectrum, and speech rate are efficient carriers of emotion. The acoustic properties of a spoken voice thus carry crucial information about the emotional state of the speaker, and within this framework a machine might use such properties of sound to recognize emotions. This work evaluated six different kinds of classifiers for predicting six basic universal emotions from non-verbal features of human speech. The classification techniques used information from six audio files extracted from the eNTERFACE05 audio-visual emotion database. The information gain of a decision tree was also used to select the most significant speech features from a set of acoustic features commonly extracted in emotion analysis. The classifiers were evaluated both with the proposed features and with the features selected by the decision tree. With this feature selection, each of the compared classifiers improved in global accuracy and recall. The best performance was obtained with Support Vector Machine and BayesNet.
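The pipeline described in the abstract (rank acoustic features by a decision tree's information gain, then train a classifier such as an SVM on the selected subset) can be sketched as follows. This is a minimal illustration using scikit-learn with synthetic stand-in data, not the paper's actual eNTERFACE05 features or code; the feature count and selection size are arbitrary placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for acoustic features (e.g. pitch, loudness, spectral stats).
n_samples, n_features = 300, 12
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 6, size=n_samples)   # six basic emotions as class labels
X[:, 0] += y                             # make a couple of features informative
X[:, 1] += 0.5 * y

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Rank features with an entropy-based decision tree; its impurity-based
# importances serve as a proxy for the information-gain ranking in the paper.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X_tr, y_tr)
ranking = np.argsort(tree.feature_importances_)[::-1]
top_k = ranking[:4]                      # keep the k most informative features

# Train an SVM on the selected feature subset only.
svm = SVC(kernel="rbf").fit(X_tr[:, top_k], y_tr)
acc = accuracy_score(y_te, svm.predict(X_te[:, top_k]))
print(f"selected features: {sorted(top_k.tolist())}, accuracy: {acc:.2f}")
```

In this sketch the two deliberately informative features should appear among the selected ones, and the SVM trained on the reduced subset should score well above the one-in-six chance level.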

Authors and Affiliations

Javier Rázuri, David Sundgren, Rahim Rahmani, Aron Larsson, Antonio Cardenas, Isis Bonet

Keywords

Related Articles

 Adaptive Neuro-Fuzzy Inference System for Dynamic Load Balancing in 3GPP LTE

ANFIS is applicable to modeling key parameters when investigating the performance and functionality of wireless networks. The need to save both capital and operational expenditure in the management of wireless n...

Wearable Computing System with Input-Output Devices Based on Eye-Based Human Computer Interaction Allowing Location Based Web Services

A wearable computing system with input-output devices based on Eye-Based Human Computer Interaction (EBHCI), which allows location-based web services including navigation and location/attitude/health-condition monitoring, is proposed. T...

An improvement direction for filter selection techniques using information theory measures and quadratic optimization

Filter selection techniques are known for their simplicity and efficiency. However, this kind of method does not take inter-feature redundancy into consideration. Consequently, the un-removed redundant features remain...

 Micro-Blog Emotion Classification Method Research Based on Cross-Media Features

Although the sentiment analysis of tweets has attracted more and more attention in recent years, most existing methods mainly analyze the text information. Because of the fuzziness of emotion expression, users are more...

 A Comparison between Regression, Artificial Neural Networks and Support Vector Machines for Predicting Stock Market Index

Obtaining accurate predictions of a stock index significantly helps decision makers take correct actions to develop a better economy. The inability to predict fluctuation of the stock market might cause serious pro...

Download PDF file
  • EP ID EP147949
  • DOI 10.14569/IJARAI.2015.040204

How To Cite

Javier Rázuri, David Sundgren, Rahim Rahmani, Aron Larsson, Antonio Cardenas, Isis Bonet (2015). Speech emotion recognition in emotional feedback for Human-Robot Interaction. International Journal of Advanced Research in Artificial Intelligence (IJARAI), 4(2), 20-27. https://europub.co.uk./articles/-A-147949