A COMPREHENSIVE STUDY OF CNN-BASED FACIAL EMOTION RECOGNITION IN IMAGES AND VIDEOS

Authors

  • Bhadra Sai Tarun Mediboina
  • Basant Sah

Keywords:

Emotional States, Seven Classes, Surprise, Convolutional Neural Networks (CNNs), Information Abstraction, SoftMax Layer, Real-Time Videos

Abstract

Facial emotion recognition categorizes human emotional states, with the goal of sorting each face image into one of seven facial-emotion classes. Convolutional Neural Networks (CNNs) are employed for the emotion classification. Input is drawn both from real-time video and from a range of grayscale images in the dataset. The CNN's sequence of convolution and pooling layers performs information abstraction, and a SoftMax layer is used for the final classification. Several methods are used to address the model's overfitting, including dropout, batch normalization, and L2 regularization. Experiments are carried out on the facial expression dataset fer2013, and our model predicts individual emotions more accurately than previous research. Furthermore, the model performs well in predicting the emotion of every face in a live video stream.
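The final classification step the abstract describes — a SoftMax layer mapping the network's outputs to probabilities over the seven emotion classes — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the label ordering follows the standard fer2013 convention, and the logit values are invented for demonstration.

```python
import numpy as np

# Standard fer2013 class ordering (assumed; the paper's ordering may differ).
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable SoftMax over the last axis."""
    z = logits - np.max(logits, axis=-1, keepdims=True)  # shift for stability
    exp = np.exp(z)
    return exp / np.sum(exp, axis=-1, keepdims=True)

# Illustrative raw scores from the network's final dense layer for one face.
logits = np.array([1.2, -0.8, 0.3, 3.1, 0.0, 0.5, 1.0])
probs = softmax(logits)                      # probabilities summing to 1
predicted = EMOTIONS[int(np.argmax(probs))]  # most likely emotion class
```

In a live-video setting, the same step runs once per detected face per frame, with the 48×48 grayscale crop fed through the convolution/pooling stack to produce the logits.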

Published

2025-08-30

How to Cite

Bhadra Sai Tarun Mediboina, & Basant Sah. (2025). A COMPREHENSIVE STUDY OF CNN-BASED FACIAL EMOTION RECOGNITION IN IMAGES AND VIDEOS. Utilitas Mathematica, 122(2), 528–533. Retrieved from https://utilitasmathematica.com/index.php/Index/article/view/2741
