Let’s analyze a typical healthcare industry use case to understand why testing professionals need to upgrade their skills, and where that upgrade is needed. In this example, a leading software development organization has been contracted to build a software system that will integrate with multiple hospitals and pathology labs across the country to receive, in real time, blood-sample details for millions of patients with different combinations of diseases, taking different medications as part of their treatment. The system has to analyze these reports and identify the patients who are likely to develop diabetes, with an accuracy greater than 99%. Writing a rules-driven algorithm will not work because there are too many combinations of factors impacting diabetes, many of which are probably unknown. The only option is to implement machine learning (ML) and train the system to work through these combinations and report whether or not an individual will develop diabetes. This means no manual intervention once the system is in production.
Let’s assume the software development organization decides to use Microsoft Azure Machine Learning Studio as the development platform. Since waiting for field trials with real human subjects is not an option for establishing the software’s reliability, the big question is: how will the software testing team test and certify that the system performs its expected function? What tools, techniques, and subject domains should the testing team be experienced with in order to test and certify this system?
In addition to core testing skills, the following is an indicative set of skills that are needed.
- Fundamentals of machine learning – Understanding how supervised and unsupervised learning happens
- Common ML algorithms – Differences between these algorithms and when to use a specific one (for example: regression for predicting values, anomaly detection for finding unusual data points, clustering for detecting structures, two-class / multi-class classification for predicting categories)
- Statistical terms such as standard deviation, coefficient of determination (R-squared), relative absolute error, mean absolute error, sensitivity and specificity
- Understanding of mathematical frameworks such as the Markov decision process and its relation to reinforcement learning (RL) (optimization and probability are prerequisites for RL), and game theory
- Natural language processing (NLP), because systems should be able to read and communicate in English as spoken, written and interpreted by humans across the globe
- Economics, in order to appreciate rational decision making under constraints
- Philosophy, in order to understand the foundations of human learning and rationality, the mind, and the difference between the biological body and the sense of self (ego or “I”)
- Psychology, to know how we think and act, and how our cognitive perception works
- Domain knowledge – in the above example, knowing how to read blood reports, understanding the variables, and knowing which diseases and medicines can impact diabetes (of course, you could hire a doctor to be part of the testing team, but that skill must be available within the team)
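To make the statistical terms above concrete, the snippet below computes sensitivity, specificity, mean absolute error, relative absolute error and R-squared from scratch. The labels and predictions are made-up numbers, not real patient data; this is purely a sketch of the arithmetic a tester would apply to a model’s outputs.

```python
# Dependency-free sketch of the metrics a tester would compute when
# evaluating a hypothetical diabetes-prediction model. All data below
# is invented for illustration.

def sensitivity_specificity(actual, predicted):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP). 1 = positive."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def regression_metrics(actual, predicted):
    """Mean absolute error, relative absolute error, and R-squared."""
    n = len(actual)
    mean_a = sum(actual) / n
    abs_errors = [abs(a - p) for a, p in zip(actual, predicted)]
    mae = sum(abs_errors) / n
    rae = sum(abs_errors) / sum(abs(a - mean_a) for a in actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1 - ss_res / ss_tot
    return mae, rae, r2

# Illustrative check: 1 = "will develop diabetes", 0 = "will not".
labels      = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predictions = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(labels, predictions)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

A tester comparing these numbers against the contractual ">99% accuracy" target would, of course, run them over millions of held-out records rather than ten.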
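Similarly, the Markov decision process mentioned in the list can be sketched in a few lines. Below is a toy two-state MDP (the states, actions, transition probabilities and rewards are all invented for illustration) solved by value iteration, the optimization-and-probability machinery that underlies reinforcement learning.

```python
# Toy Markov decision process solved by value iteration.
# P[state][action] = list of (probability, next_state, reward) triples;
# all numbers are hypothetical, chosen only to illustrate the mechanics.
P = {
    "healthy": {
        "monitor": [(0.9, "healthy", 1.0), (0.1, "at_risk", 0.0)],
        "ignore":  [(0.6, "healthy", 1.0), (0.4, "at_risk", 0.0)],
    },
    "at_risk": {
        "monitor": [(0.5, "healthy", 0.5), (0.5, "at_risk", -1.0)],
        "ignore":  [(0.1, "healthy", 0.5), (0.9, "at_risk", -1.0)],
    },
}
GAMMA = 0.9  # discount factor for future rewards

def value_iteration(P, gamma, iters=200):
    """Repeatedly apply the Bellman optimality update, then read off
    the greedy policy with respect to the converged state values."""
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                    for outcomes in P[s].values())
             for s in P}
    policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                             for p, s2, r in P[s][a]))
              for s in P}
    return V, policy

V, policy = value_iteration(P, GAMMA)
print(V, policy)
```

A tester who understands this loop can reason about why an RL-trained system prefers one action over another, instead of treating the trained policy as an opaque black box.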
While the skill set looks daunting, it is not expected that a single individual will have all these skills. The recommendation is that testing such systems requires a team that collectively covers these skills. Pure manual testing skills, or programming skills alone, will not be sufficient.
The good news is that we still have two to three years before the big bang occurs to fine-tune our skills. In all probability, many of us studied these subjects as part of our academic education. Just revisit what we learned in our colleges and schools, and confidently embark on the journey of testing AI systems.
Here is a concluding prayer. May “Natural Stupidity” not hinder the testing tribe from succeeding in the world of “Artificial Intelligence”.
About The Author
Venkata Ramana Lanka (LRV) is an accomplished software test management professional with an extensive background in software testing, test automation framework design, building domain-specific testing solution accelerators and leading large software testing teams. LRV is a hands-on manager with a proven ability to direct and improve quality initiatives, reduce defects and improve overall efficiency and productivity. LRV is an avid reader and loves spending time on topics such as world philosophy, psychology, astronomy & space science and technology innovations. LRV has written multiple white papers and articles that have been published in international software testing magazines, and he continues to speak at software testing conferences on the latest trends in technology and their impact on software testing. LRV works as a Senior Director in the Independent Validation Services (IVS) unit of Virtusa. He is based out of Hyderabad, India and heads the IVS function for the BFS segment in Virtusa.