Agile Testing: Don’t Forget the Negative Test Scenarios

With software testing, and agile testing in particular, it is critical not to forget about the negative test scenarios.  Agile forces teams to move at breakneck speed, and often we are lucky to get the happy path scenarios tested, so you are probably thinking: how in the world can we also find time to cover the negative test scenarios?  Unfortunately, when there are production quality issues, the root cause of most of those defects is that the failing scenario was never tested.  Contrary to what many may think, test planning in agile is a critical piece of the overall quality process.

I have been running an agile team for over a year now, and we have done a great job of covering happy path scenarios, the items we identified as needing to be tested.  However, we have done a very poor job of identifying and documenting negative test scenarios.  Our product manages electronic file transfers between our customers and other internal systems.  In one scenario, two different files were received at the same time, and the second file ended up overwriting the first.  While we were able to reprocess the overwritten file, it was a painful lesson because we had not factored that negative scenario into our testing.
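
To make this concrete, here is a minimal, hypothetical sketch of how such a negative scenario could be automated.  The toy `land_file` handler and the pytest-style test below are illustrative only, not our production code; they simply show the "two files arrive with the same name" case being checked.

```python
# A minimal, self-contained sketch (not production code): a toy landing-zone
# handler plus a pytest-style negative test for the "two files arrive at once
# with the same name" scenario described above.
import os
import tempfile
import uuid


def land_file(landing_dir: str, file_name: str, payload: bytes) -> str:
    """Write an incoming file, adding a unique suffix if the name already exists."""
    target = os.path.join(landing_dir, file_name)
    if os.path.exists(target):
        # Collision: keep both files instead of silently overwriting the first.
        base, ext = os.path.splitext(file_name)
        target = os.path.join(landing_dir, f"{base}_{uuid.uuid4().hex}{ext}")
    with open(target, "wb") as handle:
        handle.write(payload)
    return target


def test_simultaneous_files_are_not_overwritten():
    landing_dir = tempfile.mkdtemp()
    first = land_file(landing_dir, "orders.csv", b"first batch")
    second = land_file(landing_dir, "orders.csv", b"second batch")
    # Negative scenario: both payloads must survive, nothing gets overwritten.
    assert first != second
    assert open(first, "rb").read() == b"first batch"
    assert open(second, "rb").read() == b"second batch"
```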

While it is impossible to account for every negative test scenario, it is possible to document the most probable ones and automate as many of them as you can, so that you can speed up the testing process.  When defects are discovered, as happened on my team, it provides a great learning opportunity and raises awareness that helps identify potential gaps.  I would venture to guess that most agile testing covers 90 to 95 percent of happy path scenarios.  If we spend more time and identify 5 to 10 percent more negative test scenarios, I believe our overall quality will increase.

Artificial Intelligence is Real – Avoid being the “Andrea Gail” of Perfect Storm

Artificial Intelligence, or AI, is for real and set to disrupt all walks of life. While the question of what happens at the point of singularity and beyond is a subject of curiosity, what is evident is that practitioners of software testing will need a fast and significant upgrade to their skills (at least the majority of practitioners will). Ignoring the urgency and not taking corrective action will result in the same fate suffered by the fishing boat "Andrea Gail" in the movie The Perfect Storm. No doubt the crew of that boat were great at their jobs and had been fishing for a long time. It would have been a happy ending IF they had corrected course while they still had time.

Let's analyze a typical healthcare industry use case to understand why testing professionals need to upgrade and where the upgrade is needed. In this example, a leading software development organization has been contracted to develop software that integrates with multiple hospitals and pathology labs across the country to receive, in real time, blood sample details for millions of patients with different combinations of diseases and different medications as part of their treatment. The system has to analyze these reports and identify the patients who are likely to develop diabetes, with an accuracy greater than 99%. Writing a rules-driven algorithm will not work because there are too many combinations of factors impacting diabetes, many of which are probably unknown. The only option is to implement machine learning (ML) and train the system to work through these combinations and report whether or not an individual will develop diabetes. That means no manual intervention once the system is in production.

Let's assume the software development organization decides to use Microsoft Azure Machine Learning Studio as the development platform. Since waiting for field trials with real human subjects is not an option for establishing the software's reliability, the big question is: how will the software testing team test and certify that the system performs its expected function?  What tools, techniques, and subject domains should the testing team be experienced with in order to test and certify this system?

In addition to core testing skills, listed below is an indicative set of skills that are needed.

  • Fundamentals of machine learning – Understanding how supervised and unsupervised learning happens
  • Common ML algorithms – Differences between these algorithms and when to use a specific algorithm (Example: Regression for predicting values, Anomaly detection for finding unusual data points, Clustering for detecting structures, Two / Multi class classification for predicting categories)
  • Statistical terms like standard deviation, coefficient of determination (R squared), relative absolute error, mean absolute error, and sensitivity and specificity (a short sketch of the last two follows this list)
  • Understanding of mathematical frameworks like Markov decision processes and their relation to reinforcement learning (RL) (optimization and probability are required for RL), and game theory
  • Natural language processing (NLP), because systems should be able to read and communicate in English as it is spoken, written, and interpreted by humans across the globe
  • Economics in order to appreciate rational decision making process under constraints
  • Philosophy in order to understand foundations of human learning and rationality, mind and the difference between biological body and the sense of self (ego or “I”)
  • Psychology to know how we think and act and our cognitive perception process
  • Domain knowledge; in the above example, knowing how to read blood reports, understanding the variables, and knowing which diseases and medicines can impact diabetes (of course, you could hire a doctor to be part of the testing team, but that skill within the team is necessary)
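
As a small illustration of the sensitivity and specificity terms in the list above, here is a minimal sketch using scikit-learn with made-up labels (my assumption for the tooling; it is not Azure Machine Learning Studio, and the data is not from real patients):

```python
# Sketch: computing sensitivity and specificity for a two-class "will develop
# diabetes" prediction, using scikit-learn and made-up labels (not real patient data).
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # 1 = developed diabetes, 0 = did not
y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]  # model output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # how many true diabetes cases were caught
specificity = tn / (tn + fp)  # how many healthy patients were correctly cleared
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} accuracy={accuracy:.2f}")
```

A tester who understands these numbers can argue about whether a headline "greater than 99% accuracy" claim actually meets the clinical need, since a model can have high accuracy while still missing many true cases.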

While the skill set looks daunting, no single individual is expected to have all of these skills. The point is that the testing team, taken as a whole, needs to cover them. Pure manual testing skills, or programming skills alone, will not be sufficient.

The good news is that we still have two to three years to fine-tune our skills before the big bang occurs. In all probability, many of us studied these subjects as part of our academic education. Just revisit what we learned in college and confidently embark on the journey of testing AI systems.

Here is a concluding prayer: may "Natural Stupidity" not hinder the testing tribe from succeeding in the world of "Artificial Intelligence".

About The Author

Venkata Ramana Lanka (LRV) is an accomplished software test management professional with an extensive background in software testing, test automation framework design, building domain-specific testing solution accelerators, and leading large software testing teams. LRV is a hands-on manager with a proven ability to direct and improve quality initiatives, reduce defects, and improve overall efficiency and productivity. LRV is an avid reader and loves spending time on topics such as world philosophy, psychology, astronomy and space science, and technology innovations. LRV has written multiple white papers and articles that have been published in international software testing magazines. He continues to speak at software testing conferences on the latest trends in technology and their impact on software testing. LRV works as a Senior Director in the Independent Validation Services (IVS) unit at Virtusa. He is based out of Hyderabad, India and heads the IVS function for the BFS segment at Virtusa.

Why is End to End Testing Critical in Agile?

I am a firm believer that end to end testing is critical in agile.  I have personally experienced countless instances where production issues could have been avoided if a few end to end tests had been executed.  Most of my professional career has been devoted to software testing and end to end best practices, so I have a deep understanding of the testing process.  Testing is a critical piece of the agile process, but it is often misunderstood.

Here are some items that will help you determine if end to end testing is needed:

Dependencies: For the code change being made, is it isolated to a specific product, or does the change impact another product or service?  If the change is isolated, no end to end testing is needed.  If it impacts another product or service, end to end testing is required.  This is the number one factor when determining whether end to end testing is required, and it is often the primary reason issues occur in production.  It is important that agile teams communicate both internally and externally.  Often an agile team will make a change believing there is no impact to another product or service, but an impact occurs because of misunderstandings or incorrect assumptions.

Critical Business Workflows:  If a code change is made within a critical business workflow, then it is always a good idea to perform end to end testing to ensure no adverse impacts occur.  Often running a few end to end tests will prevent issues from occurring in production.

Billing/Payments: It is critical to run end to end tests anytime there is a change to billing or payments.  Billing and payments affect every business, so it is worth spending the extra time on end to end tests.  I have personally seen many companies suffer problems related to billing and payments.

Customer Impacting:  If the change has the potential to impact the customer, then end to end testing should be completed.

This end to end testing framework is not exhaustive, but it should cover the critical areas where end to end testing is required.  If you think there is even a remote potential for impact, it is a good idea to run end to end tests.  Most companies have test automation that covers the critical business scenarios, so it is always a good idea to run those automated regression tests to make sure you are covered.  Many in agile believe that end to end tests are a waste of time and not needed, but I have experienced too many issues to agree.  I am also a firm believer that if the agile team that made the change has access to the upstream or downstream system, they should perform the testing themselves so they are not dependent on another team for the validation.  As long as you are running end to end tests that cover the code change, you don't need to run hundreds of tests to confirm that things are working.  Run enough end to end tests, but no more than is needed.
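
One way to make the four factors above actionable is a simple checklist.  The sketch below is hypothetical; the names `ChangeProfile` and `needs_end_to_end_testing` are mine, not part of any standard framework:

```python
# Hypothetical checklist helper based on the factors above: dependencies,
# critical business workflows, billing/payments, and customer impact.
from dataclasses import dataclass


@dataclass
class ChangeProfile:
    impacts_other_products: bool      # Dependencies
    touches_critical_workflow: bool   # Critical business workflows
    touches_billing_or_payments: bool # Billing/payments
    customer_impacting: bool          # Customer impacting


def needs_end_to_end_testing(change: ChangeProfile) -> bool:
    """End to end tests are required if any of the factors apply."""
    return any(
        [
            change.impacts_other_products,
            change.touches_critical_workflow,
            change.touches_billing_or_payments,
            change.customer_impacting,
        ]
    )


# Example: an isolated refactor with no external impact needs no end to end run,
# while a change that crosses product boundaries does.
print(needs_end_to_end_testing(ChangeProfile(False, False, False, False)))  # False
print(needs_end_to_end_testing(ChangeProfile(True, False, False, False)))   # True
```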

Performance Testing: Common Myths/Confusions

Performance testing, even in today's changing times, is still considered a niche skill. For this reason there are many myths around it that linger on and give rise to false notions, which ultimately affects the picture presented to the client. Performance testing is associated with statements such as "delivery-impacting testing", "redundant testing", and "not sure if we have budget for such things". Below are some of these myths and a possible resolution for each. Please do post a comment if you think otherwise or have something similar in mind.

#1-"I always present the average reading or response time to the client." Question: "Why so?" Answer: "Not sure. That's what my client wants."

In most cases, the answer I have received to this question is "I will decide as per my client's need."  I don't think this is quite right, and if you agree with it you must have a strong reason to support it. This poses two issues. Firstly, is the reasoning "I will decide as per my client's need" all right? And secondly, if we don't accept the first answer, how should we decide which reading to provide? Let us, for the sake of this discussion, concentrate on the second question.

Let's say we have 10 response time readings (in seconds) for a transaction:
3 18 2 4 20 7 3 12 12 10

Arranging them in ascending order:
2 3 3 4 7 10 12 12 18 20

The average of the above readings is 9.1 and the 80th percentile is 12.

Now this tells me:

  • 80 out of 100 times this transaction will complete in 12 seconds or faster. Of course, which percentile you report (75th, 80th, 90th) depends on the criticality of the functionality you are testing, but it gives you an idea of the range into which the majority of your transactions will fall. The same cannot be said of the average, because it carries no relative information about the individual readings. Suppose that, due to some circumstance (inherent system behaviour such as a SQL statement taking more time, extreme paging, etc.), the peak readings rise to 40 and 50 instead of 18 and 20. This change pushes the average from 9.1 up to 14.3, while the 80th percentile reading remains at 12 seconds. The question you might ask here is: did we just ignore major issues like the slow SQL or the extreme paging? The answer is no. We know there is an issue, but as I said before, the percentile reported can be changed according to criticality. If we had reported the 90th percentile here it would be 40 seconds, which would concern me and the client. But if we consider the 80th percentile, it says that 80 times out of 100 I won't face any issue. If the other 20 times are tolerable, you can sign off the readings; otherwise you look closer.
  • Also, when a second round of the same test is conducted and the 80th percentile reading for this transaction comes out at 15 seconds, you can be sure the transaction has degraded by 25% (from 12 to 15 seconds). This helps a lot if your system is undergoing tuning or performance improvements: knowing this tells you exactly how much worse (or better) the change has made your response times. Again, the average cannot give you this, because it carries no relative information about the readings that go into it. A quick sketch of this calculation follows below.
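
Here is a quick sketch that reproduces the arithmetic above. It uses the nearest-rank method for the percentile so the numbers match the example; other percentile definitions (for example NumPy's default interpolation) would give slightly different values:

```python
# Reproducing the example above: the average moves when the two peak readings
# change, while the nearest-rank 80th percentile stays at 12.
import math


def percentile_nearest_rank(values, pct):
    """Nearest-rank percentile: the reading below which pct% of values fall."""
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]


readings = [3, 18, 2, 4, 20, 7, 3, 12, 12, 10]
print(sum(readings) / len(readings))          # 9.1
print(percentile_nearest_rank(readings, 80))  # 12

# Replace the two peaks (18, 20) with 40 and 50: the average jumps to 14.3,
# but the 80th percentile is still 12 (and the 90th percentile becomes 40).
degraded = [3, 40, 2, 4, 50, 7, 3, 12, 12, 10]
print(sum(degraded) / len(degraded))          # 14.3
print(percentile_nearest_rank(degraded, 80))  # 12
print(percentile_nearest_rank(degraded, 90))  # 40
```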

#2-Once the bottleneck is found, the next step is not considered an integral part of performance testing

This mostly depends on the experience level of the person involved in the project, and that should be respected; however, pointing out the issue should only be the first step of performance testing. With experience, the people involved should develop the ability to understand the system and suggest possible workarounds for such scenarios. It might sound unpragmatic, but believe me, this is what the market demands and it should be answered. At the very least, we should be able to suggest what the potential issue could be. The example below gives a better insight into the approach. Of course, it is just one possibility out of the ocean of issues you might face.

Let's assume, for example, that there is an issue of high response time for one of the transactions you are dealing with. What will your first step be? Personally, I would like to check the database logs. Let's assume, at the risk of becoming too specific, that the database involved is Oracle. Ask for the AWR reports. Analysis of an AWR report requires an understanding of SQL, execution plans, parsing, and bind variables. You can either go for the ADDM report (an automated analysis of the AWR report) or do the analysis yourself. Getting back to the question at hand, if the issue is response time, we can safely look at the top SQL statements in the AWR report. Just looking at the top SQL will not solve your problem; you must consider the query and check that it corresponds to the transaction in question. What I mean is, if your UI transaction is doing a submit and the top SQL is a select query, it is not the right data to analyse.

Next, since response time is the question, look at the query that rises to the top of the pile by elapsed time. If the query ran only once or twice during the execution period and the elapsed time is still quite high, this could be your potential bottleneck. You can also consider the hard parse to soft parse ratio, which is normally high when there is a SQL statement with high elapsed time as discussed. If this is what you can conclude, then one suggestion you can make as a possible resolution is to look for hard-coded values in the SQL queries.
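
As an illustration only, here is a hypothetical sketch of that triage, assuming the "SQL ordered by Elapsed Time" section of the AWR report has already been exported into a list of rows. The field names are mine and do not come from any Oracle tooling:

```python
# Hypothetical sketch: triaging an exported "SQL ordered by Elapsed Time" section.
# The dictionaries below stand in for rows parsed out of an AWR report; the field
# names are illustrative, not an Oracle API.
awr_sql_rows = [
    {"sql_id": "a1b2c3", "sql_text": "INSERT INTO orders ...", "elapsed_s": 540.0, "executions": 2},
    {"sql_id": "d4e5f6", "sql_text": "SELECT * FROM audit_log ...", "elapsed_s": 120.0, "executions": 900},
]

# Rank by elapsed time, since the complaint is response time.
suspects = sorted(awr_sql_rows, key=lambda row: row["elapsed_s"], reverse=True)

for row in suspects:
    per_execution = row["elapsed_s"] / row["executions"]
    # A query that ran only once or twice yet dominates elapsed time is a
    # likely bottleneck for the transaction under test.
    if row["executions"] <= 2 and per_execution > 60:
        print(f"Potential bottleneck: {row['sql_id']} ({per_execution:.0f}s per execution)")
```

The real judgment, such as matching the query to the UI transaction and checking the parse ratios, still has to be done by a person who understands the system.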

#3-It is assumed that all kinds of performance testing (load, stress, endurance, assisting in DR, etc.) will be executed if performance testing is employed for a particular project.

This is something that always annoys me when I interview people. I get all kinds of answers: "I will do the load test first and if that is successful, I will consider doing the stress and endurance tests." That could be true, because if your load test is not successful you cannot go ahead with a stress or endurance test, but it is not the reason for including other tests in your plan. As far as I understand, the non-functional requirements should determine which tests are in scope. Consider, for example, something like this.

"NFR # – The system should cater for a 5% growth in user load year on year." This clearly translates, for you as the performance tester, into including a stress test in your plan. How you design that test is another topic.

"NFR # – The system should be able to sustain load for a defined period of application up time." This translates into including an endurance or long-haul test in your plan.

There are even more such myths associated with performance testing, as it is still considered a niche skill. But I strongly believe that experience and proactiveness are of the utmost importance in resolving these myths and confusions and in changing how we deliver to the client.

About the Author

Mital Majmundar is a Quality Assurance Engineer with a primary focus on Performance and Automation testing.   Please visit his LinkedIn Profile here.

Protect Your Customer Data Now!

There are countless stories of companies that have failed to secure customer data.  Here are a few:

  • Equifax: 143 million customers affected.  If you have had a credit check done with Equifax, chances are pretty high you have been affected.
  • Yahoo: 3 billion Yahoo accounts were affected, the largest security breach in history.
  • Home Depot: 56 million credit cards affected
  • Target: 40 million credit cards affected

If you are part of any organization, it is critical that you take customer data very seriously and do everything you can to protect it.  Your company's reputation and your personal reputation are at stake.  There are several things you can do, both internally and externally.  Shockingly, almost 70% of all data leaks are internal.

There are certain things that most IT organizations do to ensure that the network is secure such as:

  • Have a secure firewall
  • Make employees change passwords on a regular basis
  • Secure servers, laptops, and desktops with the appropriate security settings
  • Keep anti-virus software updated
  • Patch servers and computers
  • Annual security training

That is simply not enough.  The areas most frequently targeted are the network, data, and users.  Here are some practical steps your organization can take to secure customer data:

  • Secure File Transfer: Most organizations use FTP to transfer files.  You need to upgrade that to SFTP, FTPS, AS2 or another more secure protocol.
  • Monitoring:  There needs to be monitoring in place so you are aware of what is happening on your network.  If you know who is connecting and what they are doing, then you will be able to detect network breaches.
  • Data storage: Never store any data in the DMZ. All data should be stored elsewhere.
  • Data:  It is important to follow the appropriate corporate standards and delete secure customer data when it is no longer needed.
  • Scanning:  It is critical to scan any documents before they come into the network.  These incoming files can contain malware and viruses which can harm your entire network.
  • Encryption: You must encrypt your customer data.  Encryption adds another layer of security that protects customer data even if it falls into the wrong hands (a small sketch follows this list).
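
As one small illustration of encrypting customer data at rest, here is a minimal sketch using the Python cryptography package (my choice of library for the example, not a statement about any particular vendor's stack):

```python
# Minimal sketch: symmetric encryption of a customer record at rest using the
# Python "cryptography" package (Fernet). Key management is out of scope here;
# in practice the key would live in a secrets manager, never in source code.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # store securely, e.g. in a secrets manager
cipher = Fernet(key)

record = b"customer_id=1234; card_last4=4242"
token = cipher.encrypt(record)  # safe to persist; unreadable without the key

assert cipher.decrypt(token) == record
```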

In addition to those, it is important to address the following user security areas.

  • Uncontrolled access through IoT: Unsecured devices are connecting to your network all the time, including personal computers, mobile phones, tablets, smart watches, and other wearable devices.  It is important to restrict how much access these devices have and where they can connect.  For example, you don't want these devices reaching shared servers that hold secure customer information; only devices that genuinely need that information should be able to access it.
  • Provide proper tools:  IT professionals will always find ways to do things faster, which can include using open source and other less secure methods.  It is a lot cheaper to pay for the licensing of the software your teams need than to pay millions of dollars in fines and restitution.
  • Data monitoring:  It is important to know what your teams are doing on your corporate network: who is logging in, what information they are transferring internally and externally, and who they are sending it to.  Detailed logging is critical, and those logs need to be monitored to ensure security compliance.

I hope this information has been helpful.  I strongly encourage you to check your security posture to ensure your customer data is protected.