Let’s face it, writing detailed test cases takes a lot of time and effort. As a tester, I know this is tedious work. However, I know firsthand that the benefits far outweigh the time involved. It certainly is not easy, but if planned properly it can be done very efficiently. You will probably get some pushback in certain areas and with certain methodologies, but in my opinion it is extremely important. Agile, for example, does not favor detailed documentation.
Here are 7 Great Reasons to Write Detailed Test Cases
- Planning: It is important to write detailed test cases because it helps you think through what needs to be tested. Writing detailed test cases takes planning, and that planning accelerates the testing timeline and helps identify more defects. You need to be able to organize your testing in the most optimal way. Documenting all the different flows and combinations will help you identify potential areas that might otherwise be missed.
- Offshore: If you have an offshore team, you know how challenging that can be. It is really important to write everything out in detail when you communicate with an offshore team so that everyone understands. Writing detailed test cases is no different. Without those details, the offshore team will really struggle to understand what needs to be tested. Getting clarification on a test case can take a few days of back and forth, which is extremely time consuming and frustrating.
- Automation: If you are considering automating test cases, it is really important to have all the details documented. Automation engineers are highly technical, but they might not understand all the flows of the application, especially if they have not automated against that application before. Without the details, there is a high possibility that steps will be missed and the automation scripts will not be written properly.
- Performance: Performance engineers must also write performance test scripts. They too are more technical in nature, but they often struggle to get the right amount of information. Having documented test case steps helps performance engineers create their performance test scripts a lot faster.
- Audit: I have experience testing applications in domains that require regulatory compliance, such as telecommunications and insurance. These domains require internal and external audit teams to review all testing activities. It is important to have the teams write detailed test cases so that auditors have a solid understanding of what is tested, which minimizes the number of questions that eventually come back to the testing team.
- Development: I have found that having detailed test cases will help the development team, especially when there are defects, to provide additional guidance and direction. This helps to accelerate the fix time and thus the ability to retest and close those defects.
- Training: I have found that it is extremely helpful to have detailed test cases in order to train new testing resources. I typically will have the new employees start understanding how things work by executing the functional test cases. This will help them come up to speed a lot faster than they would be able to otherwise.
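To make "detailed" concrete, here is a minimal sketch of how a test case with explicit steps and expected results might be captured in a structured form. The field names (`case_id`, `preconditions`, and so on) are illustrative assumptions, not taken from any particular test management tool:

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str            # what the tester (or script) does
    expected_result: str   # what should happen if the software is correct

@dataclass
class TestCase:
    case_id: str
    title: str
    preconditions: list[str] = field(default_factory=list)
    steps: list[TestStep] = field(default_factory=list)

# A detailed login test case: every step spells out both the action
# and the expected outcome, so offshore, automation, and performance
# engineers can all work from the same document.
login_case = TestCase(
    case_id="TC-001",
    title="Valid user can log in",
    preconditions=["User account 'qa_user' exists and is active"],
    steps=[
        TestStep("Navigate to the login page", "Login form is displayed"),
        TestStep("Enter valid username and password", "Fields accept input"),
        TestStep("Click the Login button", "User lands on the dashboard"),
    ],
)
print(len(login_case.steps))  # → 3
```

The key point is that each step pairs an action with an expected result, which is exactly the level of detail that offshore teams, automation engineers, and auditors all benefit from.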
As you can see, there is valid justification for writing detailed test cases. I am sure that if I spent more time, I could come up with another 7 great reasons. I hope this information is helpful and will encourage you to write more detailed test cases in the future.
There is no doubt that AI is transforming software testing. Over the years, software testing has transformed from manual testing into automated testing. It has now reached another milestone and is transforming further with the advent of AI. Many tools today have started incorporating AI in order to provide a higher level of quality. As a software quality engineer, it is important to understand those changes and evolve with the technology. If you haven’t done that yet, don’t worry: the technology is still in its infancy.
Here are several ways that AI is Transforming Software Testing
- AI will transform manual testing. Manual testing is very time consuming and expensive. AI will enable the automatic creation of tests and accelerate the testing timeline by running those scripts automatically.
- AI will enable testing teams to cover more scenarios and cases. This will identify more defects due to the increased amount of coverage across the application.
- AI will reduce the need for assumptions. Software testers make a lot of assumptions when they are building and executing test cases; AI can replace many of those assumptions with data-driven decisions.
- AI will help in using predictive analytics to predict customer needs. By identifying those needs this will result in a much better customer experience and customer satisfaction will greatly increase.
- AI enables visual validation. This validation will identify more defects than traditional software testing methods.
- AI will help find software bugs much faster and find more of them.
- There are several tools that incorporate AI/Machine Learning to speed up the development and maintenance of automated tests. One of those companies is Testim. Maintaining automated test cases can be very expensive and time consuming. Reducing the amount of maintenance will allow test automation engineers to focus on new automated tests and that will add a higher degree of quality to your applications.
- There are some AI tools that complement existing tools on the market today. One of those tools is Test.ai. Test.ai leverages a simple Cucumber-like syntax, so it greatly simplifies the development of automated scripts.
- Some tools do all the testing for you. I know that is hard to believe, and I admit I am also a bit skeptical. ReTest eliminates the need for programming skills; it leverages AI to fully test applications.
AI will create opportunities for software testers to move into new roles. Some of those roles will include:
- AI QA Strategy: This role will use the knowledge gained within AI to understand how this technology can be applied to software testing.
- AI Test Engineer: This role will combine software testing expertise with experience in AI to develop and execute testing activities.
- AI Test Data Engineer: This role will combine software testing expertise with AI in order to understand data and leverage predictive analytics to verify information.
I strongly believe that software testing will continue to be a prominent role within IT organizations. I do believe it will continue to evolve. This will require additional training on technologies such as AI in order to keep up with technical evolution. AI is a relatively new technology, so it will take time, and resources will need to be trained on how to use the technology effectively.
Creating Predictive Analytics for Quality Engineering
If you are in the IT profession, you know that metrics are extremely important in helping to make decisions. This is especially true for Quality Engineering teams. 10-15 years ago, testing was primarily conducted by software quality analysts, and test cases were executed manually. Most software testing teams were small, and they would run a limited number of test cases to ensure things worked. Using this approach, it was relatively easy to know whether the software was ready for production, and the QA manager could pull the team into a room and determine if the software was ready to be deployed. Those times have drastically changed.
Here are a few reasons why software testing has evolved:
- There are many software testing tools that enable status reporting
- Automation and Performance testing tools are widely utilized
- Applications are more complex and tightly integrated using interfaces across multiple technologies
- There is tremendous pressure to deploy products quickly to market
- Testing applications earlier in the lifecycle (shift left)
- Distributed teams
This evolution creates a need for software testing metrics in order to make better decisions. This data needs to be consistently captured and analyzed. It is important to create predictive analytics so that you can determine the current state of the quality engineering effort and accurately predict what would happen in production.
Quality is required.
Speed is required.
Resources and time are limited.
Decisions must be made.
Software must be deployed to production.
For these things to happen, data analytics must be performed. A base set of data is needed. Some of those data elements include:
- Planned/Executed test cases
- Manual vs Automated tests
- Root Cause Analysis
Once this data has been identified, it needs to be captured and segregated. When that information is gathered, you will be able to start to see trends. If you are testing a certain application, you will be able to predict how long testing will take, how many defects you will identify, and how many defects are likely to make it to production. Predictive analytics will evolve over a period of years. Many companies have started using AI/Machine Learning to help perform this analysis.
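As a simple illustration of the kind of trend analysis described above, the sketch below fits a least-squares line to defect counts from past releases and projects the next one. The release counts are made-up numbers for demonstration, and real predictive models would use far more data and features:

```python
# Defects found in the last five releases (hypothetical data).
defects_per_release = [42, 38, 45, 51, 48]

n = len(defects_per_release)
xs = range(n)

# Ordinary least-squares fit computed by hand, so no third-party
# libraries are needed: slope = cov(x, y) / var(x).
mean_x = sum(xs) / n
mean_y = sum(defects_per_release) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(xs, defects_per_release)) / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Predicted defect count for the next (sixth) release.
predicted = slope * n + intercept
print(round(predicted, 1))  # → 52.3
```

Even a basic trend line like this gives a defensible starting number for planning; more mature teams replace it with machine-learning models trained on richer defect data.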
This is also a continuous process. It is something that is not done once and completed. Additional metrics and more information will be needed. Those metrics will have to be captured and predictive analytics models will need to be created or modified.
Digital transformation requires that quality engineering teams transform how testing is planned, executed, and measured. The key to digital transformation is a focus on the customer. This requires that the quality engineering teams truly understand the business and, more importantly, can accurately predict customer behavior. Issues such as usability, compatibility, performance, and security are crucial. Provided these areas are tested and the results are acceptable, the customer experience will be positive. For example, if a mobile application is slow, the customer is not going to have patience and will quit using it.
Predictive analytics can be used for defects. Here is some helpful information that will improve quality:
- Type of defect
- What phase was the defect identified
- What is the root cause of the issue
- What changes need to be made so that defect will not make it into production
- Is the defect reproducible?
Once this is understood, changes can be made to prevent similar issues from occurring. Using these predictive analytics, overall quality will greatly improve and speed to market will accelerate. It is important to have the right amount of data so that predictive decisions can be made.
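To show how the defect attributes listed above can feed this kind of analysis, here is a minimal sketch that tallies hypothetical defect records by root cause. The records and field names are invented for illustration:

```python
from collections import Counter

# Hypothetical defect records, each classified by the attributes
# discussed above: type, phase found, root cause, reproducibility.
defects = [
    {"type": "functional", "phase": "system test",
     "root_cause": "missing requirement", "reproducible": True},
    {"type": "performance", "phase": "production",
     "root_cause": "untested load profile", "reproducible": True},
    {"type": "functional", "phase": "UAT",
     "root_cause": "coding error", "reproducible": True},
    {"type": "functional", "phase": "production",
     "root_cause": "missing requirement", "reproducible": False},
]

# Count defects by root cause to see where prevention effort should go.
by_root_cause = Counter(d["root_cause"] for d in defects)
print(by_root_cause.most_common(1))  # → [('missing requirement', 2)]
```

Once every defect carries the same set of attributes, simple tallies like this reveal the dominant root causes, which is exactly where process changes will prevent the most future defects.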
If you are in an IT organization, you know how important Quality Engineering metrics are. Gone are the days when you could talk to a few quality engineers and get their gut feel on whether a software application has a high degree of quality. It requires a lot more effort, energy, and numbers to figure that out. Quality Engineering metrics are the heartbeat of any IT organization. While you should have several, there is one on which you should spend the majority of your time and effort. That Quality Engineering metric is: Defects. Defects tell so much of the story. Once you are able to gather that metric and classify it, you can do some pretty amazing things.
I have held many quality engineering positions over the years, and understanding defects is the first area where I put my energy and effort into research. I start by asking a few questions:
- How does the organization feel about defects? Is it seen as a positive tool or a negative one? Do developers take defects personally or do they encourage their quality engineering counterparts to create defects? This is a really important piece of information because it will help me to understand a lot about an organization and their appetite to influence change.
- Are all defects entered into a central tool? This is necessary so that you will be able to capture all defects and not have to hunt through multiple applications to find them.
- How much technical debt does an organization have? From what I have researched, most organizations carry a good bit of technical debt. They are reluctant to spend time and energy resolving defects. This creates a negative experience from a business perspective, and internal customers often have to work around issues to get their desired result.
- Is there a standard for defects? Once defects are being captured, there are certain criteria that need to be gathered on each defect so that you can start to see trends and make decisions. Some of those criteria include severity, business priority, root cause, project or sprint, environment, and application. By gathering this information, you can start to classify defects based upon those criteria.
- Are defects being captured in production? This is critical. This metric will help you understand if the applications are stable, and if defects are being captured prior to a production deployment. Often, production defects are captured in a separate tool, which makes it very hard for the quality engineering organization to consolidate and access them. If they are being captured, what information is gathered? Is it possible to tie it to a specific release or feature?
- Which teams are finding the majority of the defects? Once I can get my hands on this information, I find it extremely helpful. In one of my previous companies, I did analysis and found that most of the defects were being captured by UAT testers. This led me to infer that they had the most subject matter expertise on the applications being tested. I began to build a relationship with that team and did several things to help the UAT testers and gain additional knowledge from them. The first thing was to review the test cases they had created. While they were at a very high level, my QA team was able to gain some valuable information. We took that information and incorporated it into our test cases. Second, we mapped their test cases to our test cases. My team had started automating test cases, so we let the UAT testers see the execution of those scripts, and they agreed to let us run the regression test cases for them. This was a huge boost in productivity for them, and it really helped to solidify the relationship.
Using this framework, I did analysis on a company where I previously worked and identified a defect leakage percentage of 38%. This number was mind-blowing and really unacceptable. I established a goal to reduce defect leakage in production and set the target at 8%.
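Defect leakage is commonly computed as the share of all defects that escaped into production. The sketch below shows the calculation with hypothetical raw counts chosen to match the percentages in this example; only the ratio matters:

```python
def defect_leakage(production_defects: int, pre_production_defects: int) -> float:
    """Percentage of all known defects that escaped into production."""
    total = production_defects + pre_production_defects
    return 100.0 * production_defects / total

# Roughly reproducing the 38% starting point described above
# (the raw counts are hypothetical; only the ratio matters).
print(round(defect_leakage(38, 62)))  # → 38
# ...and the 8% target.
print(round(defect_leakage(8, 92)))   # → 8
```

Framing the metric this way makes the improvement goal concrete: either fewer defects reach production, or more are caught before deployment, and both moves drive the percentage down.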
Here are some key focus areas where the team spent the bulk of their energy:
- Reviewed all production defects and developed test cases for those. Once the test cases were built, we incorporated them into our regression suite.
- Leveraged the information gained from the UAT testers to build robust Quality Engineering test cases
- Continued to build more test automation scripts so we could spend more energy building and executing test cases
- Partnered with the development organization and ran our automated scripts sooner in the lifecycle so that we could find more defects upfront
After a year of hard work, the results were impressive. We were able to get the production defect leakage down to 7%. This was a huge milestone and everyone was thrilled. The business was really happy with the improvements and became a fan of the quality engineering team. While there are many quality engineering metrics that should be captured, defects is the first one that you should start with.
ISTQB Agile Tester Certification
If you are interested in obtaining more information about the ISTQB Agile Tester Certification, you have come to the right place. Most projects today have moved from Waterfall to Agile, so it is important that you have the right information to leverage best practices when testing on an Agile project. Once you understand Agile concepts and how testing should really be done, you can provide tremendous education to your peers and other Agile team members.
The ISTQB Agile Tester Certification is designed for professionals who are working within Agile. It is also for professionals who are planning to start implementing Agile methods in the near future, or who are working within companies that plan to do so. The certification provides an advantage for those who would like to know the required Agile activities, roles, methods, and methodologies specific to their role.
The ISTQB Agile Tester Certification qualification is aimed at four main groups of professionals:
1. Professionals who have achieved in-depth testing experience in traditional methods and would like to get an Agile Tester Certificate.
2. Junior professional testers who are just starting in the testing profession, have received the Foundation Level certificate, and would like to know more about the tester’s role in an Agile environment.
3. Professionals who are relatively new to testing and are required to implement test approaches, methods and techniques in their day to day job in Agile projects.
4. Professionals who are experienced in their role (including unit testing) and need more understanding and knowledge about how to perform and manage testing on all levels in Agile projects.
These professionals include people who are in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers, and software developers. The ISTQB Agile Tester Certification may also be appropriate for anyone who wants a deeper understanding of software testing in the Agile world, such as project managers, quality managers, software development managers, business analysts, IT directors, and management consultants.
Prerequisite: You must have the ISTQB CTFL Foundation Level certification
Exam: 1 hour with 40 multiple choice questions
Passing Score: 65% (26 of 40 questions)
Exam Registration: Click here to register for the ISTQB Agile Tester Certification exam.
Exam Cost: $199 USD
Recommended Book: Agile Testing Foundations: An ISTQB Foundation Level Agile Tester guide
Syllabus: In order to pass the exam, you must study the syllabus and understand the material. Click here to download the syllabus.
Sample Exam: It is always a great idea to review the sample exams so that you can get familiar with the types of questions that you will see on the test. The more questions you can review, the more confident and prepared you will be for your exam. Click here for sample questions and click here for sample answers.
Outline: Here is a basic outline of the material you must know in order to successfully pass the ISTQB Agile Tester Certification exam.
Chapter 1: Agile Software Development
The tester should remember the basic concept of Agile software development based on the Agile Manifesto.
The tester should understand the advantages of the whole-team approach and the benefits of early and frequent feedback.
The tester should recall Agile software development approaches.
The tester should be able to write testable user stories in collaboration with developers and business representatives.
The tester should understand how retrospectives can be used as a mechanism for process improvement in Agile projects.
The tester should understand the use and purpose of continuous integration.
The tester should know the differences between iteration and release planning, and how a tester adds value in each of these activities.
Chapter 2: Fundamental Agile Testing Principles, Practices, and Processes
The tester should be able to describe the differences between testing activities in Agile projects and non-Agile projects.
The tester should be able to describe how development and testing activities are integrated in Agile projects.
The tester should be able to describe the role of independent testing in Agile projects.
The tester should be able to describe the tools and techniques used to communicate the status of testing in an Agile project, including test progress and product quality.
The tester should be able to describe the process of evolving tests across multiple iterations and explain why test automation is important to manage regression risk in Agile projects.
The tester should understand the skills (people, domain, and testing) of a tester in an Agile team.
The tester should be able to understand the role of a tester within an Agile team.
Chapter 3: Agile Testing Methods, Techniques, and Tools
The tester should be able to recall the concepts of test-driven development, acceptance test-driven development, and behavior-driven development.
The tester should be able to recall the concepts of the test pyramid.
The tester should be able to summarize the testing quadrants and their relationships with testing levels and testing types.
For a given Agile project, the tester should be able to work as a tester in a Scrum team.
The tester should be able to assess quality risks within an Agile project.
The tester should be able to estimate testing effort based on iteration content and quality risks.
The tester should be able to interpret relevant information to support testing activities.
The tester should be able to explain to business stakeholders how to define testable acceptance criteria.
Given a user story, the tester should be able to write acceptance test-driven development test cases.
For both functional and non-functional behavior, the tester should be able to write test cases using black box test design techniques based on given user stories.
The tester should be able to perform exploratory testing to support the testing of an Agile project.
The tester should be able to recall different tools available to testers according to their purpose and to activities in Agile projects.
I hope this information has been helpful. I wish you the best of luck as you prepare and pass your ISTQB Agile Tester Certification!