The world of software quality has changed tremendously over the last five years. There are many reasons for this, and education should serve as the primary strategy for influencing change in an organization. Here are a few critical areas where a CIO can gain a better understanding of the challenges that impact software quality.
Largely because of Agile, the robust requirements that were a cornerstone of the waterfall methodology have been thrown out. While some organizations continue to document and follow best practices for gathering requirements, most now consider this outdated and unnecessary. The lack of proper documentation and requirements has a direct impact on software quality. Here are some specific reasons:
- Without proper documentation, a developer will code software based on their own understanding. This often results in buggy code that requires rework after release, which is very expensive to fix.
- Without proper documentation, a tester will write test cases based on their own understanding. These test cases often have to be rewritten, and the tester will miss defects that escape into production.
- Without proper documentation, a test automation engineer will build automated test cases that must be changed whenever the manual test cases change, and will miss defects that escape into production.
- Without proper documentation, a production support developer fixing a problem in production may break other code, because they never got an accurate picture from the developer who originally wrote it.
These are only a few examples, but they should make clear why requirements are critical.
Agile has changed how software is delivered into production. It has tremendous benefits, and done properly it can greatly increase an organization's productivity. It is quick, lean, and provides fast feedback from the business. CIOs love it because it provides rapid return on investment.
From a software quality perspective, however, there are challenges that must be addressed, and education needs to happen across all levels of the organization. Unless you are deeply entrenched on an agile team, you will probably make many incorrect assumptions. Within an agile team, everyone shares responsibility for software quality. Here are some areas that have a direct impact:
- Agile stories must be well written. It is not enough to throw out tasks without sufficient detail.
- Agile planning is critical. There is real misunderstanding about planning in agile. The more planning and organizing a team does, the better it will respond and the more work it can fit into a given sprint.
- Documentation is still needed. This is another often misunderstood area. Documentation allows the team to understand the details and to code and test the desired solution more effectively.
- Developers must still test. Just because the agile team has a tester doesn't mean a developer doesn't have to test.
- "All testing can be automated." Perhaps it could, but it might not make sense, especially if the code isn't stable and will keep changing across sprints. ROI still matters in agile, so just because you can automate a test doesn't mean you should. This is the item CIOs most often misunderstand and most need education on.
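To make the automation ROI point concrete, here is a minimal break-even sketch in Python. The function and its numbers are purely illustrative assumptions: automation pays off only after the manual hours saved per run have recouped the cost of building and maintaining the automated test.

```python
import math

def automation_breakeven_runs(build_hours, maintain_hours_per_run,
                              manual_hours_per_run):
    """Number of runs after which automating is cheaper than manual
    execution, or None if maintenance eats all the savings."""
    saved_per_run = manual_hours_per_run - maintain_hours_per_run
    if saved_per_run <= 0:
        return None  # the test changes too often; don't automate it
    return math.ceil(build_hours / saved_per_run)
```

For example, a test that takes 40 hours to automate, saves 2 manual hours per run, and costs 0.5 hours of script upkeep per run only breaks even after 27 runs. If unstable code invalidates the script before then, automating it was a net loss.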
CIOs often ask why so many defects are found in production. That is a fairly complicated question; answering it requires a full analysis of the defects. Many years ago, when I began a new job, the same question was asked. To find the answer, the CIO brought in an outside company to perform a software testing assessment. Although I was fairly new, this wasn't uncommon, and in fact I welcomed the opportunity, because I already had a hypothesis: the cause is typically little or very poor requirements. The assessment found that 38% of defects were making it into production. That is a very high percentage; most companies average around 5%. Over time we tackled the problem, and after a year of work we reduced production defect leakage to 5%. This was a tremendous accomplishment, and it took a team effort from project managers, business analysts, developers, and testers.
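The leakage figure in that story is a simple ratio: the share of all known defects that escaped into production. A minimal sketch of the calculation (the function name is my own):

```python
def defect_leakage_pct(production_defects, total_defects):
    """Percentage of all defects (pre-production plus production)
    that escaped into production."""
    if total_defects == 0:
        return 0.0
    return round(100.0 * production_defects / total_defects, 1)
```

In the assessment above, 38 of every 100 defects were found in production; a year of requirements and testing work brought the same ratio down to 5%.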
Software Quality Metrics
CIOs are very metrics driven. They use data every day to make better decisions. While there is usually some form of software quality metrics, for one reason or another it rarely makes it into the CIO's hands. I believe software quality metrics tell a story and provide great insight to those willing to look at and interpret the data. Several years ago, my team and I analyzed which data mattered and which metrics would help us make decisions. Once we agreed on those metrics, we gathered them release over release. We began to see trends that helped us test the problematic areas more thoroughly, which drove production defects down. We also built a web-based dashboard that allowed the CIO, and anyone else in the company, to see how testing was progressing. Using this dashboard, we could tell whether we would meet our testing timelines and see which outstanding defects were holding up production deployments. It was a true game changer for the organization.
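As a sketch of the kind of release-over-release analysis described above, the following Python snippet (with hypothetical area names and an arbitrary threshold) flags functional areas that account for an outsized share of defects, which are the candidates for more thorough testing:

```python
from collections import Counter

def hotspot_areas(defects_by_release, threshold=0.25):
    """Given one {area: defect_count} dict per release, return the
    areas that account for more than `threshold` of all defects."""
    totals = Counter()
    for release in defects_by_release:
        totals.update(release)
    grand_total = sum(totals.values())
    return sorted(a for a, n in totals.items()
                  if n / grand_total > threshold)
```

Fed with two releases of defect counts, an area like "billing" that keeps dominating the totals stands out immediately, exactly the trend a dashboard makes visible to the CIO.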
Educating CIOs on software quality will take time. CIOs want high-quality software; they often just don't understand how to get there. They don't want their business partners to suffer through software that doesn't work properly. As a quality champion, it is important that you spend time with your CIO providing software quality education so you can avoid significant issues in production. Software quality can be achieved effectively and efficiently within an organization.
I recently conducted an informal Testing Tools survey on LinkedIn and asked the following question:
If you are a software tester, I need your help. What software testing tools does your organization currently use?
I received a significant number of replies. I compiled 230 responses and identified the most popular testing tools.
Big Winner: Selenium
As you can see, Selenium received the largest number of votes by a wide margin. For most people in the software testing industry, this is not a huge surprise. With open source testing tools making significant strides in the last few years and companies going open source to cut expenses, it is expected that a tool such as Selenium would be widely used. The big takeaway for testers: learn Selenium. To use it effectively, you are going to need to learn Java programming.
Big Loser: HP ALM/UFT
If I had asked this question five years ago, HP QC and QTP would have been at the top of the list. I doubt ALM/UFT utilization will climb much higher, but it will be interesting to see whether Micro Focus, which recently purchased these tools from HP, can turn things around. The ALM/UFT/LoadRunner stack is well integrated, but that isn't enough anymore. In my opinion the leading factor is cost: testing organizations simply no longer have the budgets to pay large licensing fees. The second factor is that these tools failed to keep up with changes in technology, letting smaller, leaner organizations beat them at building better testing tools.
Web Service Testing Tools
It is interesting to see web service testing tools such as Postman and SoapUI slowly creeping toward the top. I believe more and more testing organizations are moving closer to unit testing and doing more web service testing, an area with tremendous opportunities to find defects earlier in the cycle.
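Tools like Postman and SoapUI ultimately assert on a response's status code and payload. As a rough Python illustration of that style of check (the function, field names, and messages are my own, not any tool's API):

```python
def check_service_response(status_code, body, required_fields):
    """Return a list of problems found in a web service response;
    an empty list means the check passed."""
    problems = []
    if status_code != 200:
        problems.append(f"unexpected status {status_code}")
    for field in required_fields:
        if field not in body:
            problems.append(f"missing field '{field}'")
    return problems
```

Because a service contract can be verified long before a UI exists, checks like this are how teams shift defect detection earlier in the cycle.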
Performance Testing Tools
I am not surprised by the relatively low usage of performance testing tools, but I would strongly recommend companies begin to take this seriously. When performance testing is not done adequately, it is only a matter of time before catastrophic issues occur. Have testers forgotten Healthcare.gov?
I asked a fairly broad question on LinkedIn, by design. I primarily wanted to understand what software testing tools were being used, but I also wanted a better idea of what tools testers use beyond testing tools. If you look at the chart above, JIRA and Jenkins were heavily used. Long gone are the days when testers only had to worry about learning testing tools.
I hope this information has been helpful. Compiling it gave me a better appreciation for the challenges software quality engineers face every day. I encourage you to take this information and learn a new software testing tool today!
If you are responsible for performance engineering, you know there is a true art to understanding how things work. Performance engineering requires an extensive understanding of the applications under test and of how those applications behave under load. Early in my career I thought performance was based only on pass/fail criteria, but over the years I have learned that is not always the case. Here are some performance engineering principles you must consider:
- Many components drive application performance, including (but not limited to) CPU, memory, cache, databases, servers, network traffic, and transactions
- Always run a baseline test so you have a point of reference for determining what is acceptable
- I always recommend getting at least two acceptable performance runs in order to eliminate anomalies
- The production environment will likely be the only environment sized adequately; many organizations have a dedicated performance testing environment, but you will need to extrapolate your results to determine what will be acceptable in production
- Focus on building performance scripts that cover the top 80-90% of the most used transactions, and base your performance tests on those only
- A tool such as AppDynamics is a big help in troubleshooting performance-related issues
- Run your performance test for at least 3 hours in order to identify performance bottlenecks
- Always involve the development and infrastructure teams in the performance analysis; they should be major participants in the review and sign-off of the performance results
- Performance engineers should understand how the applications work and be able to help identify performance issues
- View the performance test as a whole rather than fixating on each transaction; in general, a few slow transactions are fine as long as the overall test is satisfactory
- Build a large end-to-end suite so you can see how the application or applications behave together
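Several of the points above, baselining in particular, come down to comparing a run against a reference. Here is a minimal Python sketch of such a comparison; the transaction names, times, and tolerance are illustrative assumptions:

```python
def regressions_vs_baseline(baseline_ms, current_ms, tolerance=0.10):
    """Compare per-transaction response times (ms) against a baseline
    run; return {transaction: fractional slowdown} for transactions
    that degraded beyond the tolerance."""
    regressions = {}
    for txn, base in baseline_ms.items():
        cur = current_ms.get(txn)
        if cur is not None and cur > base * (1 + tolerance):
            regressions[txn] = round(cur / base - 1, 2)
    return regressions
```

An empty result means the run as a whole is within tolerance of the baseline, which matches the advice above to judge the test overall rather than transaction by transaction.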
This list should help you gain a better understanding of performance engineering, and it is important to keep deepening that understanding: as technology changes and evolves, it will be necessary to keep up with new trends. In addition, there are some great performance engineering tools that can complement the tools you already use.
There is growing attention these days on Quality Engineering KPIs. A key performance indicator is defined as "a measure of performance, commonly used to help an organization define and evaluate success, typically in terms of progress toward its long-term organizational goals."
Key performance indicators show companies whether they are making progress toward what the organization views as important. People often confuse key performance indicators with business metrics. A business metric is defined as "a quantifiable measure businesses use to track, monitor, and assess the success or failure of various business processes." Can KPIs and business metrics be measured the same way? Absolutely. The difference is that the organization may not identify a given business metric as a critical measure of its long-term goals. It is very important for organizations to have long-term goals, because goals help everyone work toward meeting them and allow the company to grow and expand. Without goals, employees will struggle to understand what is important to the company. Too often companies gather tons of information and metrics but waste valuable time because they cannot tie the metrics to long-term goals. Like the organization as a whole, departments such as quality engineering need KPIs that are tracked. Some Quality Engineering KPI examples include:
- % automated test cases
- Defect leakage %
- No critical or high defects
- 100% requirements coverage
- 100% test case execution
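Most of the KPIs above can be computed directly from raw test and defect records. A minimal Python sketch, where the record fields are illustrative assumptions rather than any standard schema:

```python
def quality_kpis(test_cases, defects):
    """Compute a few of the KPIs listed above from raw records."""
    total = len(test_cases)
    automated = sum(1 for t in test_cases if t["automated"])
    executed = sum(1 for t in test_cases if t["executed"])
    leaked = sum(1 for d in defects if d["found_in"] == "production")
    return {
        "automated_pct": round(100 * automated / total, 1) if total else 0.0,
        "execution_pct": round(100 * executed / total, 1) if total else 0.0,
        "defect_leakage_pct": round(100 * leaked / len(defects), 1) if defects else 0.0,
        "open_critical": sum(1 for d in defects
                             if d["severity"] in ("critical", "high") and d["open"]),
    }
```

A real-time dashboard is essentially this calculation refreshed continuously against the live test management and defect data.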
It is important for quality engineering to track both key performance indicators and business metrics. This helps managers and testers make better decisions for the organization. The KPIs and metrics should also be reviewed to ensure they remain relevant to long-term organizational goals, and the set needs to be monitored closely so that too many indicators and metrics are not being measured. Gathering these measures together can be made easier with a real-time dashboard. I hope this has provided a helpful high-level overview of Quality Engineering KPIs.
If you would like more information on Agile, DevOps or Software Testing, please visit my Software Testing Blog or my Software Testing YouTube Channel.
Quality assurance has always been an evolving discipline in software development. With emerging trends in the IT industry, the need to better understand, manage, and adapt QA activities is increasing. With the onset of agile and, lately, DevOps, the way organizations develop software has changed, and so have the ways QA is practiced. Software development cycles have become short and quick, and QA teams face new challenges as they work to keep pace. The rewards for overcoming those challenges include quality, optimization, process improvement, and higher productivity.
Understanding QA in DevOps Landscape
DevOps advocates principles and practices that improve communication and collaboration across organizational silos. This applies to QA organizations and their development counterparts as well. In a DevOps scenario, the walls are eliminated, which facilitates the sharing of knowledge, experience, and specialized skills needed to deliver quality systems. In the era of DevOps, QA teams will focus more on preventing defects than on finding them.
Challenges faced by QA teams
QA culture – In the context of DevOps, quality requires a change in how QA is conducted, which also implies a significant shift in organizational culture. It is important, and challenging, to find innovative techniques to test software quickly and efficiently. This enables the team to continuously ensure quality while also growing and evolving the QA services it provides.
Facilitation of quality – From a DevOps perspective, the QA team needs to understand the business behind the system being verified. To do that, the QA team should partner with business experts, including the product owner, to understand how the system under test must function to support the business. QA teams left out of those initial discussions are at a serious disadvantage. This involvement is what lets QA become the facilitator of quality.
Collaboration – QA is the binding entity between development and operations, so the QA team should be involved from the earliest stages of development. This enables everyone to collaborate so software is developed and supported more effectively. Quality should also be considered the responsibility of the entire project team rather than of a dedicated QA team alone.
Early testing – One of the main objectives of testing in DevOps is detecting defects early in the development cycle. For that to happen, testing must begin very early, and QA should start testing with whatever code is available, even if features are not complete. This requires a lot of maturity in writing self-sufficient user stories that do not depend on one another for testing.
Test coverage – In DevOps there is a rush to deliver software quickly using techniques like continuous integration and continuous deployment. With rapidly changing requirements, it is possible to miss testing critical functions. To overcome this, maintain thorough and detailed traceability from requirements to test cases.
Build verification – Because DevOps encourages frequent builds, there is a higher chance of new code breaking existing features. It is not practical to have testers verify every build manually, so it is recommended to rely on automated smoke tests for these checks.
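As a sketch of what an automated build-verification pass might look like, the helper below runs a set of named smoke checks and reports failures so a broken build can be rejected quickly. The check names are hypothetical; in practice a CI server such as Jenkins would trigger a suite like this on every build.

```python
def run_smoke_suite(checks):
    """Run each named smoke check (a zero-argument callable returning
    True on success); return (passed, failed) lists of check names."""
    passed, failed = [], []
    for name, check in checks.items():
        (passed if check() else failed).append(name)
    return passed, failed
```

Any name in the failed list fails the build, which is the "stop the line" behavior that keeps frequent builds from silently breaking existing features.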
If these challenges are addressed, QA in DevOps can play a critical role in accelerating development and release schedules. DevOps guiding principles such as test first, free communication, and seamless collaboration help resolve some of the QA challenges and enable the QA team to take its deliverables to the next level. In DevOps, testing is a continuous process that supports incorporating continuous feedback for better quality.