Insights
Will AI mean the extinction of the tester?
Santiago Martínez, Head of QE&Testing, UST Spain & LATAM
AI is neither the Kryptonite that nullifies the powers of SuperQA, nor the meteorite that wipes out the Testersaurs in a mass extinction. On the contrary, it will be a great help in improving software quality levels and the quality of life of QAs. And this will happen in a very near future, one that is almost upon us.
We all know that AI is increasingly present in our daily lives, with practical applications in the automotive industry, household appliances, healthcare, data security, application development... and, of course, in software quality assurance.
AI in software testing
Some applications of AI tools in software engineering and quality include:
- Requirements engineering. Recording, evaluation, analysis and maintenance of system requirements using knowledge engineering, computer vision, pattern recognition, automatic generation of sequence diagrams, etc. Ex: Functionize, AppliTools, Tricentis.
- Generation of test cases from Epics, User Stories, Acceptance Criteria, functional documentation, etc. Test cases are created using test generation techniques based on genetic algorithms, neural networks or heuristic search. Ex: Tricentis, AppliTools, AccelQ.
- Automatic test data generation. We all know that in many cases, and especially for test automation, data is the Achilles heel. AI can help us generate synthetic data that matches the scenarios our tests need to cover. Ex: GenRocket, Mabl.
- Performance testing. AI can simulate user load on a system and evaluate its performance under stress conditions, high load, peaks, etc. Ex: Parasoft, Micro Focus.
- Early defect detection. Using Machine Learning, AI identifies patterns and anomalies in complex data sets, facilitating early defect detection during development and testing.
- Improved test accuracy and coverage. Using Machine Learning, AI optimizes resource allocation and prioritizes testing, ensuring higher software quality.
- Visual testing. Visual testing captures the visual part of a web page or an application's graphical interface and compares it with the expected results by design. Computer vision algorithms detect and report differences between the final rendering and the intended visualization, helping to focus more thorough testing on the elements that differ. Ex: Micro Focus.
- Usability testing. AI can analyze user interaction data and provide valuable information about the usability of an application.
- Source code validation based on style guides and documentation from architecture and development teams. Ex: Diffblue, Sealights, Codacy.
- Self-repairing automation code. With AI we can get our automated tests to self-repair in parallel with changes in the SUT (Software Under Test). This works in a limited way, at least as far as we have been able to verify in self-healing pilot tests at UST with Healenium. Ex: BrowserStack, DeepCode.
- Continuous Integration and Deployment with Generative AI and Machine Learning. Ex: Dynatrace, Seerene, Snyk.
- Collaborative workspaces in the software life cycle with AI support. Processing all the complex information of a work context to facilitate our tasks. Ex: Copilot.
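To make the synthetic test data idea above more concrete, here is a minimal sketch of what a data generator produces: records that are random but internally consistent and within valid business ranges. This is a toy illustration, not how GenRocket or any specific tool works; all names and fields are hypothetical.

```python
import random
import string

# Hypothetical value pools for the sketch; a real tool would draw these
# from data models, schemas or production-like profiles.
FIRST_NAMES = ["Ana", "Luis", "Marta", "Pablo"]
DOMAINS = ["example.com", "test.org"]

def synthetic_user(seed=None):
    """Generate one synthetic user record with internally consistent fields."""
    rng = random.Random(seed)  # seeding makes a test data set reproducible
    first = rng.choice(FIRST_NAMES)
    suffix = "".join(rng.choices(string.ascii_lowercase, k=4))
    return {
        "name": first,
        # Email derived from the name, so the record is self-consistent
        "email": f"{first.lower()}.{suffix}@{rng.choice(DOMAINS)}",
        "age": rng.randint(18, 90),  # stay inside a valid business range
    }

if __name__ == "__main__":
    for i in range(3):
        print(synthetic_user(seed=i))
```

Seeding the generator is the key design choice: a failing automated test can be rerun against exactly the same synthetic data set.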
Despite the rapid and steady progress being made with AI across all of these software quality engineering and testing needs, almost all of the technologies we have discussed are still in their infancy. As for whether AI will replace the QA process, we are still a long way from letting these tools work for us and being able to dispense with human support and intelligence. Moreover, manual testing is still essential: not even the emergence of test automation in its day allowed us to dispense with it 100%.
In this QA world, the adoption of AI is still limited in general, especially among the most experienced professionals, which also suggests a certain resistance to adopting this technology.
Despite the low initial adoption of AI, its use in the generation of Test Cases, Plans and data sets has steadily increased since 2022. AI adoption for Test Case and Plan generation grew from 35% to 37%, and for Test Data generation from 32% to 36%. This suggests a continued trend towards greater adoption of AI-driven testing in QA tasks.
Challenges with AI in software testing
The main obstacles pointed out by the experts, and which all QA professionals who are researching the subject are encountering, can be summarized as follows:
The lack of capable and reliable AI tools. It is true that large vendors already include some AI capabilities in their solutions, but in most of the demos and seminars we have attended, the functionalities shown are very limited and are slated for completion from 2025 onwards. It will all come.
At UST, for example, we are including automatic test and test-case generation capabilities in Cucumber using Copilot in our NoSkript automation framework, but it is still far from being a 100% functional tool.
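The essence of this kind of generation is mapping an acceptance criterion onto Gherkin's Given/When/Then structure. The sketch below does this with a plain template, just to show the target format; it is not UST's NoSkript/Copilot integration, and the function and field names are invented for illustration.

```python
def to_gherkin(feature, given, when, then):
    """Render one acceptance criterion as a Cucumber/Gherkin scenario skeleton.

    An AI-assisted tool would extract the given/when/then parts from a User
    Story; here the caller supplies them directly.
    """
    return "\n".join([
        f"Feature: {feature}",
        f"  Scenario: {when}",
        f"    Given {given}",
        f"    When {when}",
        f"    Then {then}",
    ])

if __name__ == "__main__":
    print(to_gherkin(
        "User login",
        "a registered user on the login page",
        "the user submits valid credentials",
        "the dashboard is displayed",
    ))
```

The hard part, of course, is the extraction step that a template cannot do: reading an ambiguous User Story and deciding what the Given/When/Then actually are. That is where the AI assistance (and, today, the human reviewer) comes in.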
Lack of well-trained, qualified people. There are still not enough QA professionals well trained in the use of these tools. We are finding that the learning curve to start taking advantage of tools like ChatGPT or Copilot for professional QA use takes time, but they are certainly very helpful for partial tasks.
Difficulty in calculating ROI. Currently it is difficult to evaluate the ROI of the use of AI in testing. As the tools, processes and projects that use them mature, this obstacle will be overcome.
Concerns about security and privacy. For now, customers are reluctant to share sensitive information with AI engines, even though vendors assure them it will be stored in isolated, dedicated environments. Trust will be gained as we move forward; something similar happened with the Cloud in its early days.
And finally, but perhaps one of the most important difficulties at present:
Scarce information to feed AI engines. It is essential to note that there are still many business and technology environments where human contextualization is absolutely necessary to ensure a quality product. All QAs know clients and projects where documentation is very scarce, inconsistent or outdated; the information needed for testing is scattered across the minds of the team, or in documents that are not even digitized. All this demands extra effort from testers' human intelligence: the ability to extract information from whatever sources exist and analyze it afterwards. In these situations it would be difficult for an AI system to replace us, though it can certainly help us a great deal if we are able to feed and train it properly.
Conclusion
As I said at the beginning, I think these technologies can be an important support for now, but not a replacement for "human" testing teams. They should serve to improve the quality of testing work and to increase its effectiveness, efficiency and degree of test coverage.
But "science is advancing like crazy", and what I am saying now may be refuted and superseded by an advanced AI system in a couple of years or less. So, as AI capabilities are enhanced, we need to understand the impact of AI on QAs and be prepared to get the most out of it to improve our hard work as QAs.
A wide spectrum of new challenges is opening up for the QAs of today and tomorrow. Knowledge and understanding of AI techniques have become indispensable for testing professionals, and to remain competitive we have to follow this unstoppable trend.
Training, experimenting with the tools and acquiring skills in areas such as data analysis, programming and understanding AI algorithms are essential to take full advantage of the opportunities offered by this technological revolution.
By the way, I didn't say "Testersaurs" in a derogatory tone, but with all my affection and admiration.
Greetings and long life to testing!