A FRAMEWORK FOR AUTOMATED SOFTWARE TESTING USING MACHINE LEARNING AND ARTIFICIAL INTELLIGENCE

Authors

  • Yadab Sutradhar
  • Md Tanvir Rahman Khan
  • Md Rana Hossain
  • Md Masum

Keywords:

Traditional Testing Automation, AI/ML in Software Testing, Defect Prediction, Test Case Generation, Test Case Prioritization, Natural Language Processing, Machine Learning Models, Testing Frameworks, Test Automation Tools

Abstract

Traditional software testing automation relies on executing predefined test scripts with tools such as Selenium, JUnit, and TestNG. While effective for regression testing and for speeding up execution, these methods lack the flexibility, scalability, and adaptability needed for evolving applications. They also offer no insight into test case effectiveness and no support for defect prediction, so scripts must be maintained through continuous human intervention. In contrast, AI/ML-based approaches to software testing have demonstrated significant advances, including defect prediction, test case prioritization, and test case generation from natural language descriptions. This paper provides an overview of traditional testing automation, surveys the integration of AI/ML techniques into testing processes, and reviews existing tools and frameworks. It highlights the growing potential of intelligent automation to address the limitations of traditional methods, while also identifying gaps in the current literature, such as the absence of unified frameworks and limited scalability.
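
To make the contrast concrete, the sketch below illustrates one of the AI/ML techniques the abstract mentions, defect prediction feeding test prioritization, using scikit-learn. It is a minimal illustration, not the framework proposed in the paper: the features (lines of code, complexity, churn, past defects) and the synthetic data are assumptions chosen only to show the idea.

# Minimal defect-prediction sketch (illustrative; features and data are hypothetical,
# not the authors' framework).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-module features: lines of code, cyclomatic complexity,
# recent churn, and number of past defects.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
# Synthetic labels: modules with higher complexity and churn are more defect-prone.
y = (X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Predicted defect probabilities can then drive test case prioritization:
# tests covering the riskiest modules are scheduled first.
risk = model.predict_proba(X_test)[:, 1]
priority_order = np.argsort(-risk)  # highest-risk modules first

In a real pipeline, the synthetic matrix X would be replaced by metrics mined from the version control system and issue tracker, and the ranking would be mapped onto the regression test suite; the ordering step itself is what distinguishes this approach from fixed, script-only automation.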

Published

2025-05-05

How to Cite

Yadab Sutradhar, Md Tanvir Rahman Khan, Md Rana Hossain, & Md Masum. (2025). A FRAMEWORK FOR AUTOMATED SOFTWARE TESTING USING MACHINE LEARNING AND ARTIFICIAL INTELLIGENCE. Utilitas Mathematica, 122(1), 481–497. Retrieved from https://utilitasmathematica.com/index.php/Index/article/view/2150