Impact of Features Extraction Technique on Emotion Recognition Using Deep Learning Model

Authors

  • Anwar Salah, Software Department, College of Information Technology, University of Babylon, Babylon, Iraq
  • Nashwan Hussein, Software Department, College of Information Technology, University of Babylon, Babylon, Iraq

Keywords:

Deep learning, Convolutional Neural Network (CNN), Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D), Histogram of Oriented Gradients (HOG).

Abstract

Recognizing emotions from facial expressions is far harder for a computer system than for a human. Identifying emotions from facial expressions, a sub-field of social signal processing, is applied in a variety of settings, especially human-computer interaction. Many studies have examined automatic emotion recognition, the majority of which make use of machine learning techniques. Recognizing basic emotions, including happiness, contempt, anger, fear, surprise, and sadness, remains a challenging problem in computer vision. Deep learning has recently received growing attention as a possible option for a number of real-world problems, including emotion recognition. In this paper, we propose the use of a one-dimensional Convolutional Neural Network (1D-CNN) to recognize some of the basic emotions and apply different preprocessing and feature extraction methods to show how these choices impact the performance of the proposed CNN model. Experiments on the Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D) revealed a high accuracy rate of 99.8%.
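For illustration, a minimal sketch of a 1D-CNN classifier operating on precomputed HOG feature vectors is given below. It is not the authors' exact architecture: the layer sizes, the assumed HOG descriptor length, the six emotion classes, and the use of Keras/TensorFlow are all assumptions made for this example.

```python
# Minimal sketch of a 1D-CNN over HOG feature vectors (illustrative only;
# layer sizes, input length, and class count are assumptions, not the paper's model).
import numpy as np
from tensorflow.keras import layers, models

NUM_FEATURES = 1764   # assumed length of a flattened HOG descriptor per sample
NUM_CLASSES = 6       # happiness, contempt, anger, fear, surprise, sadness

def build_1d_cnn(num_features: int = NUM_FEATURES, num_classes: int = NUM_CLASSES):
    model = models.Sequential([
        layers.Input(shape=(num_features, 1)),   # each HOG vector treated as a 1-D signal
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random placeholder data standing in for HOG features extracted from face frames.
    X = np.random.rand(16, NUM_FEATURES, 1).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=(16,))
    model = build_1d_cnn()
    model.fit(X, y, epochs=1, batch_size=8, verbose=0)
```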

Published

2023-06-15

How to Cite

Anwar Salah, & Nashwan Hussein. (2023). Impact of Features Extraction Technique on Emotion Recognition Using Deep Learning Model. Utilitas Mathematica, 120, 345–355. Retrieved from http://utilitasmathematica.com/index.php/Index/article/view/1661
