ARTIFICIAL INTELLIGENCE SYSTEM FOR THE ACCURATE EVALUATION OF SUBJECTIVE EXAMINATION ANSWERS USING NLP

Authors

  • Sriharshini Bondugula
  • Dr. Nazimunisa

Keywords:

Subjective Answer Evaluation

Abstract

Manual marking of descriptive answers is time-consuming and prone to inconsistency.
This project employs machine learning (ML) and natural language processing (NLP) to
mark subjective answers automatically, improving accuracy, efficiency, and fairness.
The system follows a two-stage process: first, student answers are compared with model
answers using Word Mover's Distance (WMD) and cosine similarity to measure semantic
similarity; second, the resulting similarity scores are used to train an ML model that
rates answers independently of predefined solutions. Preprocessing techniques such as
tokenization, stemming, lemmatization, and stop-word filtering improve accuracy, while
Word2Vec, TF-IDF, and Bag of Words representations preserve semantics. Tests on varied
datasets indicate that Word2Vec outperforms the other models, with an accuracy of up
to 88%. Future enhancements include domain-specific models, deep learning (BERT, GPT),
and multilingual support. This system benefits electronic learning by reducing
teachers' workload, providing fair grading, and improving the evaluation of subjective
responses in teaching.
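The first stage described above — scoring a student answer against a model answer by semantic similarity — can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: it uses scikit-learn's TF-IDF vectorizer with built-in stop-word filtering and cosine similarity; the paper's WMD/Word2Vec variant would substitute trained word embeddings for the TF-IDF vectors.

```python
# Hypothetical sketch of the similarity-scoring stage: compare a student
# answer to a model answer using TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def similarity_score(model_answer: str, student_answer: str) -> float:
    """Return the cosine similarity (0..1) between two answers."""
    # Stop-word filtering happens inside the vectorizer, standing in for
    # the fuller preprocessing pipeline (tokenization, stemming, etc.).
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform([model_answer, student_answer])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])


model = "Photosynthesis converts light energy into chemical energy in plants."
good = "Plants use photosynthesis to turn light energy into chemical energy."
poor = "The French Revolution began in 1789."

# An on-topic answer scores higher than an off-topic one.
assert similarity_score(model, good) > similarity_score(model, poor)
```

In the full pipeline, scores like these would become training features for the second-stage ML model that grades answers without a fixed model answer.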

Published

2025-07-20

How to Cite

Sriharshini Bondugula, & Dr.Nazimunisa. (2025). ARTIFICIAL INTELLIGENCE SYSTEM FOR THE ACCURATE EVALUATION OF SUBJECTIVE EXAMINATION ANSWERS USING NLP. Utilitas Mathematica, 122(1), 2208–2220. Retrieved from https://utilitasmathematica.com/index.php/Index/article/view/2488
