7 Great Reasons to Write Detailed Test Cases

Let’s face it, writing detailed test cases takes a lot of time and effort.  As a tester, I know this is very tedious work.  However, I know firsthand that the benefits far outweigh the time involved.  It certainly is not easy, but if planned out properly it can be done extremely efficiently.  You will probably get some pushback in certain areas and with certain methodologies, but in my opinion it is extremely important.  Agile, for example, does not favor detailed documentation.


Here are 7 Great Reasons to Write Detailed Test Cases

  1. Planning: It is important to write detailed test cases because it forces you to think through what needs to be tested.  Writing detailed test cases takes planning, and that planning will accelerate the testing timeline and help you identify more defects.  You need to organize your testing in the way that is most optimal.  Documenting all the different flows and combinations will help you identify potential areas that might otherwise be missed.
  2. Offshore: If you have an offshore team, you know how challenging that can be.  It is really important to write everything out in detail when you communicate with an offshore team so that everyone understands, and writing detailed test cases is no different.  Without those details, the offshore team will really struggle to understand what needs to be tested.  Getting clarification on a test case can take a few days of back and forth, which is extremely time consuming and frustrating.
  3. Automation: If you are considering automating test cases, it is really important to have all the details documented (a sketch of such a documented test case follows this list).  Automation engineers are highly technical, but they might not understand all the flows of the application, especially if they have not automated that application before.  Without the details, there is a high possibility that steps will get missed and the automation scripts will not be written properly.
  4. Performance: Performance engineers must also write performance test scripts.  They too are more technical in nature, but they often struggle to get the right amount of information.  Having documented test case steps helps performance engineers create their performance test scripts a lot faster.
  5. Audit: I have had experience testing applications in domains that require regulatory compliance, such as telecommunications and insurance.  These domains require internal and external audit teams to review all testing activities.  Detailed test cases give the auditors a solid understanding of what is tested and minimize the number of questions that eventually come back to the testing team.
  6. Development: I have found that detailed test cases help the development team, especially when there are defects, by providing additional guidance and direction.  This accelerates the fix time and thus the ability to retest and close those defects.
  7. Training: I have found that detailed test cases are extremely helpful for training new testing resources.  I typically have new employees start learning how things work by executing the functional test cases.  This helps them come up to speed a lot faster than they otherwise would.
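
To make this concrete, here is one hypothetical way to capture a detailed test case as structured data, so the same steps can serve manual testers, offshore teams, automation engineers, and auditors alike.  This is a minimal sketch in Python; the field names and the sample login flow are illustrative assumptions, not a prescribed template.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestStep:
        action: str            # exactly what the tester does
        expected_result: str   # exactly what should happen

    @dataclass
    class TestCase:
        case_id: str
        title: str
        preconditions: List[str]
        steps: List[TestStep]
        test_data: dict = field(default_factory=dict)

    # Illustrative example: a fully documented login test case
    login_case = TestCase(
        case_id="TC-001",
        title="Standard user can log in with valid credentials",
        preconditions=["User account exists and is active"],
        steps=[
            TestStep("Navigate to the login page", "Login form is displayed"),
            TestStep("Enter a valid username and password", "Credentials are accepted"),
            TestStep("Click the Login button", "User lands on the dashboard"),
        ],
        test_data={"username": "standard_user"},
    )

Even at this modest level of detail, there is far less ambiguity for an offshore or automation engineer to resolve through days of back and forth.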

As you can see, there is valid justification to write detailed test cases.  I am sure if I spend more time, I will be able to come up with another 7 great reasons.  I hope this information is helpful and will encourage you to write more detailed test cases in the future.

Learn How AI is Transforming Software Testing

There is no doubt that AI is transforming software testing.   Over the years, software testing has moved from manual testing into automated testing.   It has now reached another milestone and is transforming further with the advent of AI.  Many tools today have started incorporating AI in order to provide a higher level of quality.   As a software quality engineer, it is important to understand those changes and be able to evolve with the technology.   If you haven’t done that yet, don’t worry, since the technology is still in a fairly early state.

Here are several ways that AI is Transforming Software Testing

  • AI will transform manual testing.   Manual testing is very time consuming and expensive.   AI will help create tests that previously had to be written manually and accelerate the testing timeline by running those scripts automatically.
  • AI will enable testing teams to cover more scenarios and cases.   This will identify more defects due to the increased amount of coverage across the application.
  • AI will eliminate the need for assumptions.   Software testers make a lot of assumptions when they are building and executing test cases.
  • AI will help use predictive analytics to predict customer needs.   Identifying those needs will result in a much better customer experience, and customer satisfaction will greatly increase.
  • AI enables visual validation.   This validation will identify more defects than traditional software testing methods (a minimal sketch of the idea follows this list).
  • AI will help find software bugs much faster and find more of them.
  • There are several tools that incorporate AI/Machine Learning to speed up the development and maintenance of automated tests.   One of those companies is Testim.   Maintaining automated test cases can be very expensive and time consuming.   Reducing the amount of maintenance will allow test automation engineers to focus on new automated tests and that will add a higher degree of quality to your applications.
  • There are some AI tools that complement existing tools on the market today.  One of those tools is Test.ai.  Test.ai leverages a simple Cucumber-like syntax, so it greatly simplifies the development of automated scripts.
  • Some tools do all the testing for you.  I know that is hard to believe, and I admit I am also a bit skeptical.  ReTest helps eliminate the need for programming skills and leverages AI to fully test applications.
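
As a rough illustration of the visual validation idea mentioned in the list above, here is a minimal, non-AI sketch that compares a baseline screenshot against a screenshot from the latest run using Pillow.  The file names are assumptions, and the two images are assumed to be the same size; AI-based tools go well beyond a simple pixel diff, but the underlying comparison concept is similar.

    from PIL import Image, ImageChops

    # Assumed file names: a stored baseline and a screenshot captured by the latest test run
    baseline = Image.open("baseline_home_page.png").convert("RGB")
    current = Image.open("current_home_page.png").convert("RGB")

    # Pixel-level difference; getbbox() returns None when the two images are identical
    diff = ImageChops.difference(baseline, current)
    if diff.getbbox() is None:
        print("No visual differences detected")
    else:
        diff.save("home_page_diff.png")  # keep the difference image for review
        print("Visual difference found, see home_page_diff.png")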

AI will create opportunities for software testers to move into new roles.   Some of those roles will include:

AI QA Strategy:   This role will use knowledge of AI to understand how the technology can be applied to software testing.

AI Test Engineer:   This role will combine software testing expertise with experience in AI to develop and execute testing activities.

AI Test Data Engineer:   This role will combine software testing expertise with AI in order to understand data and leverage predictive analytics to verify information.

I strongly believe that software testing will continue to be a prominent role within IT organizations.   I also believe the role will continue to evolve.  This will require additional training on technologies such as AI in order to keep up with the technical evolution.  AI is still a new technology, so it will take time, and resources will need to be trained on how to use it effectively.


Creating Predictive Analytics for Quality Engineering


If you are in the IT profession, you know that metrics are extremely important in helping to make decisions.   This is especially true for quality engineering teams.   Ten to fifteen years ago, testing was primarily conducted by software quality analysts, and test cases were executed manually.   Most software testing teams were small, and they would run a limited number of test cases to ensure things worked.   Using this approach, it was relatively easy to know if the software was ready for production, and the QA manager could pull the team into a room and determine if the software was ready to be deployed.   Those times have drastically changed.

Here are a few reasons why software testing has evolved:

  • Quality is required.
  • Speed is required.
  • Resources and time are limited.
  • Decisions must be made.
  • Software must be deployed to production.

Based on this evolution, there is a need for software testing metrics in order to make better decisions.   This data needs to be consistently captured and analyzed.   It is important to create predictive analytics so that you can determine the current state of the quality engineering effort and accurately predict what will happen in production.

In order for all of these things to happen, data analytics must be performed.  A base set of data is needed.  Some of those data elements include:

  • Sprint velocity
  • Planned vs. executed test cases
  • Manual vs. automated tests
  • Defects
  • Root cause analysis
  • Defect leakage

Once this data has been identified, it needs to be captured and segregated.  When that information is gathered, you will start to see trends.  If you are testing a certain application, you will be able to predict how long testing will take, how many defects you are likely to identify, and how many defects will most likely make it to production.  Predictive analytics will evolve over a period of years, and many companies have started using AI/machine learning to help perform this analysis.
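
As a small, hedged illustration of the kind of trend analysis described above, the sketch below fits a simple linear model to invented historical sprint data and projects how many defects might leak to production in an upcoming sprint.  A real model would draw on far more of the data elements listed earlier, and many teams would use more sophisticated AI/machine learning techniques.

    from sklearn.linear_model import LinearRegression

    # Invented history: test cases executed per sprint vs. defects that leaked to production
    executed_tests = [[120], [150], [180], [210], [240]]   # scikit-learn expects 2D features
    leaked_defects = [9, 8, 6, 5, 4]

    model = LinearRegression()
    model.fit(executed_tests, leaked_defects)

    # Project leakage for a sprint where 300 test cases are planned
    predicted = model.predict([[300]])[0]
    print(f"Projected production defects: {predicted:.1f}")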

This is also a continuous process; it is not something that is done once and completed.  Additional metrics and more information will be needed.  Those metrics will have to be captured, and the predictive analytics models will need to be created or modified.

Digital transformation requires that quality engineering teams transform how testing is planned, executed, and measured.   The key to digital transformation is a focus on the customer.   This requires that quality engineering teams truly understand the business and, more importantly, can accurately predict customer behavior.   Issues such as usability, compatibility, performance, and security are extremely crucial.  Provided these areas are tested and the results are acceptable, the customer experience will be really positive.  For example, if a mobile application is slow, the customer is not going to have patience and will quit using it.

Predictive analytics can also be used for defects.  Here is some helpful information to capture that will improve quality:

  • Type of defect
  • In what phase was the defect identified?
  • What is the root cause of the issue?
  • What changes need to be made so that the defect will not make it into production?
  • Is the defect reproducible?

Once this is understood, changes can be made to prevent similar issues from occurring.  Using these predictive analytics, overall quality will greatly improve and speed to market will accelerate.  It is important to have the right amount of data so that predictive decisions can be made.
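
A lightweight way to start acting on the questions above is simply to tally defects by root cause and by the phase in which they were found, and watch how those tallies trend over time.  The records below are invented for illustration only.

    from collections import Counter

    # Invented defect records carrying the attributes listed above
    defects = [
        {"type": "functional", "phase": "system test", "root_cause": "missing requirement", "reproducible": True},
        {"type": "performance", "phase": "UAT", "root_cause": "code change", "reproducible": True},
        {"type": "functional", "phase": "production", "root_cause": "missing requirement", "reproducible": False},
        {"type": "functional", "phase": "system test", "root_cause": "code change", "reproducible": True},
    ]

    by_root_cause = Counter(d["root_cause"] for d in defects)
    by_phase = Counter(d["phase"] for d in defects)

    print("Defects by root cause:", dict(by_root_cause))
    print("Defects by phase found:", dict(by_phase))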


One Key Metric Should Drive Quality Engineering

If you are in an IT organization, you know how important quality engineering metrics are.   Gone are the days when you could talk to a few quality engineers and get their gut feel to determine whether a software application has a high degree of quality.   It requires a lot more effort, energy, and numbers to figure that out.   Quality engineering metrics are the heartbeat of any IT organization.   While you should have several, there is one that deserves the majority of your time and focus.   That quality engineering metric is: defects.   Defects tell so much of the story.  Once you are able to gather that metric and classify it, you can do some pretty amazing things.

I have held many quality engineering positions over the years, and understanding defects is the first area where I put my energy and research.   I start by asking a few questions:

  1. How does the organization feel about defects?  Are they seen as a positive tool or a negative one?   Do developers take defects personally, or do they encourage their quality engineering counterparts to log defects?   This is a really important piece of information because it helps me understand a lot about an organization and its appetite for change.
  2. Are all defects entered into a central tool?  This is necessary so that you will be able to capture all defects and not have to hunt through multiple applications to find them.
  3. How much technical debt does the organization have?   From what I have researched, most organizations carry a good bit of technical debt.   They are reluctant to spend time and energy resolving defects.  This creates a negative experience from a business perspective, and internal customers often have to work around issues to get their desired result.
  4. Is there a standard for defects?  Once defects are being captured, there are certain criteria that need to be gathered on each defect so that you can start to see trends and make decisions.  Some of those criteria include severity, business priority, root cause, project or sprint, environment, and application.  By gathering this information, you can start to classify defects based upon that criteria.
  5. Are defects being captured in production?  This is critical.  This metric will help you understand whether the applications are stable and whether defects are being caught prior to a production deployment.   Often, production defects are captured in a separate tool, which makes them very hard for the quality engineering organization to consolidate and access.   If they are being captured, what information is gathered?  Is it possible to tie a defect to a specific release or feature?
  6. Which teams are finding the majority of the defects?  Once I get my hands on this information, I find it extremely helpful.  At one of my previous companies, I did this analysis and found that most of the defects were being captured by UAT testers.   This led me to infer that they had the most subject matter expertise on the applications being tested.   I began to build a relationship with that team and did several things to help the UAT testers and gain additional knowledge from them.   The first thing was to review the test cases they had created.   While they were at a very high level, my QA team was able to gain some valuable information, and we incorporated it into our test cases.  Second, we mapped their test cases to our test cases.   My team had started automating test cases, so we let the UAT testers see the execution of those scripts, and they agreed to let us run the regression test cases for them.   This was a huge boost in productivity for them, and it really helped to solidify the relationship.

Using this framework, I did an analysis at a company where I previously worked and identified a defect leakage percentage of 38%.  This number was mind-blowing and really unacceptable.  I established a goal to reduce defect leakage into production and set the target at 8%.
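
For context, defect leakage is commonly calculated as the share of all defects found that escape into production; I am assuming that definition behind the 38% figure, and the numbers below are purely illustrative.

    def defect_leakage(production_defects: int, pre_production_defects: int) -> float:
        """Percentage of all defects found that escaped into production."""
        total = production_defects + pre_production_defects
        return 100.0 * production_defects / total

    # Illustrative numbers only: 190 defects leaked out of 500 found overall
    print(f"Defect leakage: {defect_leakage(190, 310):.0f}%")   # prints 38%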

The team spent the bulk of its energy on a few key focus areas.

After a year of hard work, the results were impressive.  We were able to get production defect leakage down to 7%.  This was a huge milestone, and everyone was thrilled.   The business was really happy with the improvements and became a fan of the quality engineering team.   While there are many quality engineering metrics that should be captured, defects are the first one you should start with.

Build Strong Edge Test Cases


If you are a software engineer, you know that software testing takes a lot of effort.  With management demanding more and more quality, there is a strong push to create efficient test cases that prevent defects from occurring in production.   In my 15+ years of software testing, I have found that most organizations do a very good job of covering the happy path scenarios.   I have found, however, that the creation of negative test cases and the development of edge test cases are very limited.   The main reason for this is that there is often very little time, but more importantly, a lack of creativity to build these scenarios.

Creating strong edge test cases requires a very creative mind.  Sure, you need to understand how the system works, but you also need to think outside the box and ask the hard questions in your test case workflows.  If you only go by a rigid set of requirements and never deviate from them, you aren’t going to find any edge scenarios.  Building strong edge test cases requires solid application knowledge.  For example, it is important to know what will happen when two users try to access the same record and update it at the same time.  Will the record get locked?  Which person’s update will win?  These are the types of questions that have to be answered.  Here are some additional potential edge scenarios:

  • Have a user log in, then disable that user’s account to see what happens
  • Have two users try to update the same record (a sketch of this scenario follows the list)
  • Disable the connection between the application and the database
  • Have the same user try to log in from two different computers
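
Here is a sketch of what the concurrent-update scenario from the list above might look like as an automated check.  To keep it self-contained, it uses a toy in-memory record store with optimistic locking; in a real test, the store, its update method, and the ConflictError exception would be replaced by whatever your application actually exposes.

    # Toy simulation of the "two users update the same record" edge case
    class ConflictError(Exception):
        pass

    class RecordStore:
        def __init__(self):
            self.records = {"REC-1": {"value": "original", "version": 1}}

        def read(self, record_id):
            return dict(self.records[record_id])

        def update(self, record_id, new_value, expected_version):
            record = self.records[record_id]
            if record["version"] != expected_version:
                raise ConflictError("record was changed by another user")
            record["value"] = new_value
            record["version"] += 1

    def test_two_users_updating_same_record():
        store = RecordStore()
        user_a = store.read("REC-1")    # both users open the record at version 1
        user_b = store.read("REC-1")

        store.update("REC-1", "user A's change", user_a["version"])   # first writer wins
        try:
            store.update("REC-1", "user B's change", user_b["version"])
            raise AssertionError("expected a conflict for the second writer")
        except ConflictError:
            pass   # the behavior we want the application to handle gracefully

        assert store.read("REC-1")["value"] == "user A's change"

    test_two_users_updating_same_record()
    print("Concurrent update edge case behaved as expected")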

There are many more possibilities when building edge scenarios.  Over time, you can begin to identify these edge test cases more easily, and you will begin to see the tendencies that cause these conditions.  Chances are pretty good that if the application allows you to do something, your business users are going to try it.  These edge scenarios are also the ones that the development team typically does not think about, so they often will not program for them, and handling them will require some real thinking on their part.  These edge test cases will often stir up controversy, because they are things that are not spelled out in the requirements.  Some of them will result in significant frustration from a business perspective, because they can cause a lot of uncertainty and could impact downstream processes that were not identified.

Edge test cases can also typically be negative scenarios.  They could be automated, but they may not be the best candidates because they are often complex in nature.  These are the types of things that require deep thinking and creativity.  The testers who are the most creative will always strive to get to the edge of coverage and push beyond it in order to prevent business users from finding defects.   The good thing about edge testing is that you can perform these types of tests in waterfall, agile, or other SDLC models.

5 Step Install Robot Framework Ride using PIP

If you would like to learn how to install Robot Framework RIDE using pip, I can provide a simple process to get it installed quickly.  You have probably already installed Python, and most people use pip to make the install super easy.  I have outlined the 5 steps below to install Robot Framework RIDE using pip.


RIDE is a lightweight and intuitive editor for Robot Framework test data.  You can learn more about RIDE from the Robot Framework project documentation.

5 Step Install Robot Framework Ride using PIP

Step 1: Find the install location

Go to the location where you have installed Python.

Step 2: Copy the path of the folder location.

Step 3: Type cmd to open the command line

Step 4: Type cd and paste the path of the directory.

Step 5: Type pip install robotframework-ride and press Enter


That is it.  The 5 Step Install Robot Framework Ride using PIP.
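
If you want to confirm the install programmatically (an optional extra, not one of the 5 steps), here is a small Python check that simply asks pip whether the robotframework-ride package is present.

    import subprocess
    import sys

    # Ask this Python interpreter's pip whether robotframework-ride is installed
    result = subprocess.run(
        [sys.executable, "-m", "pip", "show", "robotframework-ride"],
        capture_output=True,
        text=True,
    )
    print(result.stdout if result.returncode == 0 else "robotframework-ride is not installed")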