Let’s face it: writing detailed test cases takes a lot of time and effort. As a tester, I know firsthand how tedious this work can be. However, I also know there are tremendous benefits that far outweigh the time involved. It certainly is not easy, but with proper planning it can be done extremely efficiently. You will probably get some pushback in certain areas and with certain methodologies, but in my opinion it is extremely important. Agile, for example, does not favor detailed documentation.
Here are 7 Great Reasons to Write Detailed Test Cases
Planning: Writing detailed test cases is important because it forces you to think through what needs to be tested. Writing detailed test cases takes planning, and that planning will accelerate the testing timeline and help you identify more defects. You need to organize your testing in the most optimal way, and documenting all the different flows and combinations will help you identify potential areas that might otherwise be missed.
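To illustrate why documenting combinations matters, here is a minimal sketch; the test dimensions and values for a login form are invented for illustration only, but they show how quickly the number of flows grows:

```python
from itertools import product

# Hypothetical test dimensions for a login form; the values are
# invented for illustration only.
browsers = ["Chrome", "Firefox", "Edge"]
account_states = ["active", "locked", "expired"]
password_inputs = ["valid", "invalid", "blank"]

# Every combination of the three dimensions is a candidate test case.
combinations = list(product(browsers, account_states, password_inputs))

# Three dimensions of three values each already yield 27 flows to consider.
print(len(combinations))
```

Writing the combinations out this way makes it much harder for a flow to be silently missed during planning.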
Offshore: If you have an offshore team, you know how challenging that can be. When you communicate with an offshore team, it is really important to write everything out in detail so that everyone understands, and test cases are no different. Without those details, the offshore team will really struggle to understand what needs to be tested. Getting clarification on a test case can often take a few days of back and forth, which is extremely time consuming and frustrating.
Automation: If you are considering automating test cases, it is really important to have all the details documented. Automation engineers are highly technical, but they might not understand all the flows of the application, especially if they have not automated that application before. Without the details, there is a high possibility that steps will be missed and the automation scripts will not be written properly.
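One reason documented steps translate well to automation is that a detailed test case is already structured data. A minimal sketch of that idea follows; the `apply_discount` function and the test-case fields are hypothetical stand-ins for a real application, which an automation engineer would instead drive through a tool such as Selenium:

```python
# A hypothetical function under test, standing in for the application.
def apply_discount(price, code):
    """Return the price after applying a discount code."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(price * (1 - rates.get(code, 0.0)), 2)

# Detailed test cases captured as data: inputs plus expected results.
test_cases = [
    {"name": "10% code", "price": 100.0, "code": "SAVE10", "expected": 90.0},
    {"name": "25% code", "price": 80.0, "code": "SAVE25", "expected": 60.0},
    {"name": "unknown code", "price": 50.0, "code": "BOGUS", "expected": 50.0},
]

def run_suite(cases):
    """Execute each documented case and record pass/fail."""
    results = {}
    for case in cases:
        actual = apply_discount(case["price"], case["code"])
        results[case["name"]] = (actual == case["expected"])
    return results

results = run_suite(test_cases)
print(results)
```

When the steps and expected results are already written down this precisely, the automation engineer's job becomes translation rather than guesswork.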
Performance: Performance engineers must also write performance test scripts. Like automation engineers, they are more technical in nature, but they often struggle to get the right amount of information. Documented test case steps really help performance engineers create their performance test scripts a lot faster.
Audit: I have tested applications in domains that require regulatory compliance, such as telecommunications and insurance. These domains require internal and external audit teams to review all testing activities. Detailed test cases give the audit teams a solid understanding of what is tested and minimize the number of questions that eventually come back to the testing team.
Development: I have found that detailed test cases help the development team, especially when there are defects, by providing additional guidance and direction. This accelerates fix time and thus the ability to retest and close those defects.
Training: I have found that detailed test cases are extremely helpful for training new testing resources. I typically have new employees start by executing the functional test cases to understand how things work. This helps them come up to speed much faster than they otherwise would.
As you can see, there is valid justification for writing detailed test cases. I am sure if I spent more time, I could come up with another 7 great reasons. I hope this information is helpful and encourages you to write more detailed test cases in the future.
There is no doubt that AI is transforming software testing. Over the years, software testing has evolved from manual testing into automated testing. It has now reached another milestone and is transforming further with the advent of AI. Many tools today have started incorporating AI in order to provide a higher level of quality. As a software quality engineer, it is important to understand those changes and evolve with the technology. If you haven’t done that yet, don’t worry; the technology is still in its infancy.
AI will transform manual testing. Manual testing is very time consuming and expensive. AI will enable tests to be generated and will accelerate the testing timeline by running those scripts automatically.
AI will enable testing teams to cover more scenarios and cases. This will identify more defects due to the increased coverage across the application.
AI will help apply predictive analytics to anticipate customer needs. Identifying those needs will result in a much better customer experience, and customer satisfaction will greatly increase.
AI enables visual validation. This validation will identify more defects than traditional software testing methods.
AI will help find software bugs much faster and find more of them.
There are several tools that incorporate AI/Machine Learning to speed up the development and maintenance of automated tests. One of those companies is Testim. Maintaining automated test cases can be very expensive and time consuming. Reducing the amount of maintenance will allow test automation engineers to focus on new automated tests and that will add a higher degree of quality to your applications.
There are some AI tools that will complement existing tools on the market today. One of those tools is Test.ai. Test.ai leverages a simple Cucumber-like syntax, so it greatly simplifies the development of automated scripts.
Some tools do all the testing for you. I know that is hard to believe, and I admit I am also a bit skeptical. ReTest helps eliminate the need for programming skills. It leverages AI to fully test applications.
AI will create opportunities for software testers to move into new roles. Some of those roles will include:
AI QA Strategy: This role will use the knowledge gained within AI to understand how this technology can be applied to software testing.
AI Test Engineer: This role will combine software testing expertise along with experience in AI to develop and execute testing activities.
AI Test Data Engineer: This role will combine software testing expertise with AI in order to understand data and leverage predictive analytics to verify information.
I strongly believe that software testing will continue to be a prominent role within IT organizations, though it will continue to evolve. This will require additional training on technologies such as AI in order to keep up with the technical evolution. AI is still a new technology, so it will take time, and resources will need to be trained on how to use the technology effectively.
Creating Predictive Analytics for Quality Engineering
If you are in the IT profession, you know that metrics are extremely important in helping to make decisions. This is especially true for Quality Engineering teams. 10-15 years ago, testing was primarily conducted by software quality analysts, and test cases were executed manually. Most software testing teams were small, and they would run a limited number of test cases to ensure things worked. Using this approach, it was relatively easy to know if the software was ready for production, and the QA manager could pull the team into a room and determine if the software was ready to be deployed. Those times have drastically changed.
Based upon this evolution, there is a need for software testing metrics in order to make better decisions. This data needs to be consistently captured and analyzed. It is important to create predictive analytics so that you can determine the current state of the quality engineering effort and accurately predict what will happen in production.
Once this data has been identified, it needs to be captured and segregated. When that information is gathered, you will start to see trends. If you are testing a certain application, you will be able to predict how long testing will take, how many defects you expect to identify, and how many defects will most likely make it into production. Predictive analytics will evolve over a period of years. Many companies have started using AI/Machine Learning to help perform this analysis.
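As a minimal sketch of the trend analysis described above, the snippet below fits an ordinary least-squares line to historical defect counts and extrapolates one release ahead. The defect numbers are invented for illustration; a real model would use your own captured metrics and likely a proper statistics library:

```python
# Invented historical defect counts for five past releases.
defects_per_release = [42, 38, 35, 31, 29]

n = len(defects_per_release)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(defects_per_release) / n

# Ordinary least-squares slope and intercept for the trend line.
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(xs, defects_per_release)) \
    / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Extrapolate the trend to the next (sixth) release.
predicted = slope * n + intercept
print(round(predicted, 1))
```

Even a simple model like this turns raw defect counts into a defensible forecast that a quality engineering team can refine release over release.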
This is also a continuous process. It is something that is not done once and completed. Additional metrics and more information will be needed. Those metrics will have to be captured and predictive analytics models will need to be created or modified.
Digital transformation requires that quality engineering teams transform how testing is planned, executed, and measured. The key to digital transformation is a focus on the customer. This requires that the quality engineering teams truly understand the business, and more importantly can accurately predict customer behavior. Issues such as usability, compatibility, performance, and security are extremely crucial. Provided these issues are tested, and the results are acceptable, this will create a really positive customer experience. For example, if a mobile application is slow, the customer is not going to have patience and will quit using it.
Predictive analytics can be used for defects. Here is some helpful information that will improve quality:
Type of defect
What phase was the defect identified
What is the root cause of the issue
What changes need to be made so that defect will not make it into production
Is the defect reproducible?
Once this is understood, changes can be made to prevent similar issues from occurring. Using these predictive analytics, overall quality will greatly improve and speed to market will accelerate. It is important to have the right amount of data so that predictive decisions can be made.
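The defect fields listed above can be captured as structured records and summarized. Here is a small sketch, with the defect data invented for illustration, that computes a phase breakdown and the production escape rate:

```python
from collections import Counter

# Invented defect records using the fields listed above.
defects = [
    {"type": "functional", "phase": "system test", "reproducible": True},
    {"type": "performance", "phase": "UAT", "reproducible": True},
    {"type": "functional", "phase": "production", "reproducible": False},
    {"type": "usability", "phase": "system test", "reproducible": True},
]

# Count defects by the phase in which they were identified.
by_phase = Counter(d["phase"] for d in defects)

# Escape rate: the share of defects that made it into production.
escape_rate = by_phase["production"] / len(defects)

print(dict(by_phase))
print(escape_rate)
```

Tracking even these two numbers over time shows whether process changes are actually keeping defects out of production.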
Test planning requires careful consideration, and it takes time to do it right in order to ensure proper test coverage. Test organizations that don’t perform proper test planning often rush to test execution with little to no regard for proper test coverage. This often has devastating results and significant long-term impacts. Let’s review some issues which will occur as a result of poor planning.
Insufficient Test Coverage: Rushing to test execution without proper test planning will result in insufficient test coverage
Large Regression Suites: Testers will move all of their test cases into a regression suite, which will require extra time to run test cases that have been duplicated
Redundant Automation Scripts: If the regression suites above are automated, it will result in a maintenance nightmare for the automation team which will require additional time and money to resolve.
Proper test coverage requires significant test planning time. The more time you spend in test planning, the more efficient and shorter your test execution will be. In general, if you spend 3 to 4 times more time in planning than in test execution, you will have a meaningful result during test execution. With the emphasis on making Agile and Waterfall methodologies more efficient, it is more important now than ever for testers to really think about how something should be tested and to have great test cases. I prefer my teams to have fewer, more efficient test cases rather than many test cases that result in duplicate validation. Testers often start out thinking that the more test cases they create and execute, the better they are compared to their peers. This is simply not true. Efficient test coverage helps everyone and reduces the amount of time spent testing. With teams becoming more product focused, it is important to test the product you are working on and have other teams validate their products.
I hope this information has been helpful in understanding how to have efficient test coverage.
The product model is rapidly gaining traction within the software development lifecycle. Companies continue to look at options to improve the software development delivery process. I am seeing more and more organizations move in this direction. The challenge involved is determining how software testing will play a role in this process.
The product model leverages the agile methodology to help develop a product or series of products. Typically the teams are 6-8 people; they develop solutions in short iterations (Sprints) and deploy small code changes into production on a frequent basis (every 2-4 weeks). The team will typically include business analysts, developers, and testers, with the numbers varying based upon the needs within the product team. I see the product model working well in small to mid-sized companies that have a relatively lean IT organization. In addition, I believe companies with only a few products will perform the best. If you have a lot of products and those products interface with each other, you will have significant challenges with integration and moving data from one system to another. This will require extreme collaboration to ensure interfaces and data are provided when they are needed; otherwise the release going into production could potentially be delayed.
The product model team is typically driven by a product manager or product owner. That individual is responsible for providing direction and helping to remove obstacles. It is important that the team works well together so they can increase their velocity in delivering software products. If one or two team members aren’t pulling their weight, it can affect the entire team. Either those team members improve or they will need to be replaced with people who have the proper skills. The team must also be self-sufficient and do everything possible not to depend on other teams either inside or outside the organization. Certain pieces, such as infrastructure setup, might sit outside the team, but constant collaboration is required to make things happen. If there are multiple products, those teams will usually roll up to a program manager.
In terms of software testing within the product model, it will involve functional, automation, and performance testing. Some software testing organizations differ in how test automation and performance activities are handled. I have set up a shared service where the functional testers reach out to automation and performance resources who build and execute automation and performance tests. I have also seen examples where the automation and performance activities are accomplished by the testers within the team. As long as those activities get done, either approach should work well.
Software testing has been rapidly growing over the past several years and here are some software testing trends to be aware of in 2017.
More Open Source Tools-As more pressure is placed on reducing expenses, more software testing teams will look at alternative options to reduce software testing licensing costs. Adoption of open source tools will continue as Agile and DevOps practices evolve. This software testing trend will continue for the next several years.
Test Automation Growth-Most software testing organizations already have some test automation practice in place, however, there will be a focused effort to increase the amount of automation that is in place. Companies will continue to focus test automation in the areas of smoke testing and regression testing. Where companies can, they will use automation tools such as Selenium and Appium.
Performance Engineering-There will be additional attention in the area of performance engineering due to additional data demands and production level failures related to system performance. Companies will begin to require performance testing to be completed prior to production implementation.
Digital Transformation-More attention will be placed on customer satisfaction, and this will require a digital transformation of the customer workflow. This will require extensive software testing to ensure that the customer has a positive experience. This transformation will span multiple systems and require security for customer data. In most cases there will be a mobile component to the digital transformation.
Big Data-The explosion of demand for data, driven by IoT devices and the need for better business decisions, will require new technology to support the additional capacity. Software testers will need to learn more about Big Data and verify that the data is correct before it is consumed. Platforms such as Hadoop will need to be learned. This will require a solid test data management practice.
Cloud-Cloud adoption will continue to grow as more traditional companies, such as insurance and finance, slowly move their business to the cloud. This transformation will continue to evolve over the next several years. Software testers will need to understand more about public, private, and hybrid cloud solutions. Companies will also look for software testing platforms to help them accelerate software testing.
Agile/DevOps-Most companies will have Agile as the default software development lifecycle. As Agile continues to take shape, companies will begin to focus on building DevOps practices. This will also include Continuous Development and Continuous Integration. The DevOps approach will help to build collaboration across teams and help increase the amount of automation used in developing and deploying software. In addition, most companies will start to integrate Security as a part of their DevOps practice.
I hope this list helps. It will be interesting to see where software testing heads as we look at 2017 and beyond.