ARTIFICIAL INTELLIGENCE SYSTEM FOR THE ACCURATE EVALUATION OF SUBJECTIVE EXAMINATION ANSWERS USING NLP
Keywords: Subjective Answer Evaluation

Abstract
Manual marking of descriptive answers is time-consuming and prone to inconsistency.
This project applies machine learning (ML) and natural language processing (NLP) to
grade subjective answers automatically, improving accuracy, efficiency, and fairness.
The system follows a two-stage process: in the first stage, student answers are compared
with model answers using Word Mover's Distance (WMD) and cosine similarity to measure
semantic similarity; in the second stage, the resulting similarity scores are used to train
an ML model that scores answers independently of predefined model answers.
Preprocessing techniques such as tokenization, stemming, lemmatization, and stop-word
filtering improve accuracy, while Word2Vec, TF-IDF, and Bag of Words representations
preserve semantics. Experiments on varied datasets show that Word2Vec outperforms the
other models, reaching an accuracy of up to 88%. Future enhancements include
domain-specific models, deep learning (BERT, GPT), and multilingual support. The
system can transform electronic learning by reducing teachers' workload, delivering fair
grading, and improving the evaluation of subjective responses in education.
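As a minimal sketch of the first-stage similarity check, the following Python snippet computes a bag-of-words cosine similarity between a model answer and a student answer after simple tokenization and stop-word filtering. The example sentences, the stop-word list, and the function names are illustrative assumptions, not the project's actual implementation (which also uses TF-IDF, Word2Vec embeddings, and WMD):

```python
import math
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "is", "in", "to", "of", "and", "by"}

def tokenize(text: str) -> list[str]:
    # Lowercase, split on non-letters, and drop stop words: a stand-in
    # for the paper's tokenization + stop-word filtering step.
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP_WORDS]

def cosine_similarity(answer_a: str, answer_b: str) -> float:
    # Build bag-of-words count vectors and compute their cosine similarity.
    va, vb = Counter(tokenize(answer_a)), Counter(tokenize(answer_b))
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical model answer and two student answers for illustration.
model = "Plants convert light energy into chemical energy during photosynthesis."
good = "During photosynthesis plants turn light energy into chemical energy."
poor = "Mitochondria release energy from glucose through respiration."

print(cosine_similarity(model, good))  # close paraphrase: high score
print(cosine_similarity(model, poor))  # off-topic answer: low score
```

A paraphrased answer scores much higher than an off-topic one, which is the signal the second-stage ML model is trained on; replacing the count vectors with TF-IDF weights or Word2Vec embeddings (as the paper does) captures semantics that plain word counts miss.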