A HYBRID TRANSFORMER MODEL FOR MULTILINGUAL FAKE NEWS DETECTION
Keywords:
Fake News Detection, Multilingual NLP, Transformer Models, mBERT, mT5, GPT, Text Classification, Misinformation Detection

Abstract
In the digital era, misinformation and fake news have become significant global concerns, especially on multilingual platforms where content is consumed in many languages. Traditional fake news detection systems are mostly designed for a single language, which limits their effectiveness in linguistically diverse environments. This project introduces a comprehensive approach to multilingual fake news detection using three transformer-based models: mBERT, mT5, and GPT. Each model is fine-tuned on a curated dataset of news articles in four languages. The system includes careful preprocessing steps, such as language detection, tokenization, and normalization, to handle language-specific characteristics. mBERT is employed to extract contextual embeddings, mT5 treats fake news detection as a text-to-text transformation task, and a GPT-based model tackles ambiguous cases through prompt-based reasoning. Performance is evaluated using accuracy, precision, recall, F1-score, and confusion-matrix analysis. A comparative analysis highlights the strengths and weaknesses of each model, showing that multilingual transformer architectures significantly improve fake news detection accuracy and reliability across languages. This work contributes to the development of robust, scalable, and trustworthy solutions for combating the spread of misinformation on a global scale.
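To illustrate the text-to-text formulation mentioned above, the sketch below converts a labelled article into the kind of (input, target) pair an mT5-style model is fine-tuned on. The prompt template, label words, and function name are illustrative assumptions, not the exact format used in this work; the language code is assumed to come from the preprocessing step's language detection.

```python
# Sketch: casting fake-news detection as a text-to-text task (mT5-style).
# The prompt template and label words are assumptions for illustration only.

def to_text2text(article: str, label: int, lang: str) -> tuple[str, str]:
    """Convert a labelled article into an (input, target) pair.

    label: 0 = real, 1 = fake; lang is an ISO 639-1 code produced by the
    language-detection preprocessing step.
    """
    source = f"classify fake news ({lang}): {article}"
    target = "fake" if label == 1 else "real"
    return source, target

# Example: an English article labelled as real.
pair = to_text2text("Scientists confirm water is wet.", 0, "en")
```

During fine-tuning, the model learns to generate the target word from the prompted source text, so the same seq2seq training loop works unchanged across all four languages.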
