Speech emotion recognition in emotional feedback for Human-Robot Interaction

Abstract

For robots to plan their actions autonomously and interact with people, recognizing human emotions is crucial. For most humans, nonverbal cues such as pitch, loudness, spectrum, and speech rate are efficient carriers of emotion. The acoustic features of a spoken voice probably contain crucial information about the emotional state of the speaker; within this framework, a machine might use such properties of sound to recognize emotions. This work evaluated six different kinds of classifiers for predicting six basic universal emotions from non-verbal features of human speech. The classification techniques used information from six audio files extracted from the eNTERFACE'05 audio-visual emotion database. The information gain from a decision tree was also used to select the most significant speech features from a set of acoustic features commonly extracted in emotion analysis. The classifiers were evaluated both with the proposed features and with the features selected by the decision tree. With this feature selection, each of the compared classifiers increased in global accuracy and recall. The best performance was obtained with Support Vector Machine and BayesNet.
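The pipeline the abstract describes (rank acoustic features by a decision tree's information gain, keep the top-ranked ones, then train a classifier such as an SVM on the reduced set) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the synthetic data stands in for the eNTERFACE'05 acoustic features, and the feature count and SVM parameters are assumptions.

```python
# Sketch: information-gain feature selection via an entropy-based decision
# tree, followed by SVM classification on the selected features.
# Synthetic data stands in for pitch/loudness/spectral/speech-rate features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# 6 emotion classes, 20 candidate acoustic features (illustrative numbers).
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Rank features with an entropy criterion (i.e. information gain).
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X_tr, y_tr)
top = np.argsort(tree.feature_importances_)[::-1][:8]  # keep the 8 best

# Train the SVM on the reduced feature set and score it.
svm = SVC(kernel="rbf").fit(X_tr[:, top], y_tr)
acc = accuracy_score(y_te, svm.predict(X_te[:, top]))
print(f"accuracy on selected features: {acc:.2f}")
```

The same reduced feature set could then be fed to the other classifiers compared in the paper (e.g. BayesNet) to reproduce the kind of side-by-side evaluation the abstract reports.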

Authors and Affiliations

Javier Rázuri, David Sundgren, Rahim Rahmani, Aron Larsson, Antonio Cardenas, Isis Bonet

Keywords

Related Articles

 Blocking Black Area Method for Speech Segmentation

Speech segmentation is an important subproblem of automatic speech recognition. This research is concerned with the development of a continuous speech segmentation system for the Bangla language. This paper presents...

 A Minimal Spiking Neural Network to Rapidly Train and Classify Handwritten Digits in Binary and 10-Digit Tasks

 This paper reports the results of experiments to develop a minimal neural network for pattern classification. The network uses biologically plausible neural and learning mechanisms and is applied to a subset of the...

A Mechanism of Generating Joint Plans for Self-interested Agents, and by the Agents

Generating joint plans for multiple self-interested agents is one of the most challenging problems in AI, since complications arise when each agent brings into a multi-agent system its personal abilities and utilities. S...

 A Model for Facial Emotion Inference Based on Planar Dynamic Emotional Surfaces

Emotions have a direct influence on human life and are of great importance in relationships and in the way interactions between individuals develop. Because of this, they are also important for the development of...

Download PDF file
  • EP ID EP147949
  • DOI 10.14569/IJARAI.2015.040204

How To Cite

Javier Rázuri, David Sundgren, Rahim Rahmani, Aron Larsson, Antonio Cardenas, Isis Bonet (2015). Speech emotion recognition in emotional feedback for Human-Robot Interaction. International Journal of Advanced Research in Artificial Intelligence (IJARAI), 4(2), 20-27. https://europub.co.uk./articles/-A-147949