Behavior Driven Development has gained a lot of traction over the past several years with the adoption of Agile across most IT organizations. One of the most commonly used tools is Cucumber, which uses the Gherkin framework to automate tests. Gherkin is a business-readable, domain-specific language created especially for describing behavior. Its purpose is to promote behavior driven development across the entire team, including business analysts, developers, testers, and product owners. The Gherkin framework promotes firm, unambiguous requirements. While Gherkin is primarily used in English, it supports a total of 37 spoken languages. Gherkin serves two basic purposes: it provides project documentation and it helps in creating automated tests.
Gherkin Language Keywords
Here are the primary Gherkin keywords:
- Feature
- Rule
- Background
- Example or Scenario
- Given, When, Then, And, But
- Scenario Outline or Scenario Template
- Examples
Here are secondary Gherkin keywords:
- Doc Strings: """
- Data Tables: |
- Tags: @
- Comments: #
Gherkin Syntax Example
#This is an example of how you can use the Gherkin syntax
Feature: Login functionality of Amazon

  Scenario: Successful login
    Given I am an Amazon user
    When I enter "username" as the username
    And I enter "password" as the password
    Then I should be redirected to the Amazon home page
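Tools like Cucumber work by matching each Given/When/Then line against step definitions written in code. The sketch below is a minimal, hypothetical illustration in Python of that matching idea; it is not Cucumber's actual API, and the step texts and `ctx` dictionary are invented for the example:

```python
import re

# Registry mapping step patterns to their implementations
STEPS = []

def step(pattern):
    """Register a function as the implementation of a Gherkin step."""
    def decorator(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return decorator

@step(r'I am an Amazon user')
def given_user(ctx):
    ctx['logged_in'] = False

@step(r'I enter "(\w+)" as the (username|password)')
def enter_credential(ctx, value, field):
    ctx[field] = value

@step(r'I should be redirected to the Amazon home page')
def check_home(ctx):
    # A real step would drive a browser; here we just record the outcome
    ctx['page'] = 'home'

def run_scenario(lines, ctx):
    """Strip the Gherkin keyword, then dispatch each line to a step."""
    for line in lines:
        text = re.sub(r'^\s*(Given|When|Then|And|But)\s+', '', line)
        for pattern, func in STEPS:
            match = pattern.fullmatch(text)
            if match:
                func(ctx, *match.groups())
                break
        else:
            raise LookupError(f'No step definition for: {text}')

ctx = {}
run_scenario([
    'Given I am an Amazon user',
    'When I enter "alice" as the username',
    'And I enter "secret" as the password',
    'Then I should be redirected to the Amazon home page',
], ctx)
print(ctx['username'], ctx['page'])
```

A real Cucumber step definition would drive a browser or call the application under test; the point here is only that plain-language steps dispatch to ordinary functions.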
Gherkin Best Practices
- Each feature should be executable on its own
- Each scenario should be executable on its own
- Align your scenarios with your requirements
- Make your steps modular and simple to understand
- Combine common scenarios rather than duplicating them
Advantages of Gherkin
- Gherkin syntax is simple enough for everyone, including non-programmers, to understand
- Programmers can leverage Gherkin to write automated tests
- Requirements become firm and unambiguous
- Gherkin can be used to write user stories
- You don't need to be an expert to understand Gherkin
- Gherkin links acceptance tests directly to automated tests
- There is significant reuse of tests
Disadvantages of Gherkin
- It requires a lot of collaboration across the team and business partners
- There are some situations where it isn't an ideal solution
- Poorly written Gherkin will result in high test maintenance
There has been a lot of information published about Behavior Driven Development, or BDD. Because it runs counter to what most people are used to, it causes a lot of confusion. It is very popular, and many organizations have started to embrace the methodology, with great success for those who can consistently apply some basic principles. Behavior Driven Development allows companies to shift further left and identify issues much earlier in the process than the traditional waterfall methodology does.
What is BDD?
Behavior-Driven Development is an extension of Test-Driven Development, or TDD. It is an approach to building features based upon user stories. You will typically have a product owner who communicates the expectations for the software in the form of a user story, stated in terms of business objectives or goals. Both the developer and the tester use this information to develop the features and to test whether they meet the product owner's expectations. BDD is a process, and like any process it requires great communication among all members. What does the product owner want to see? How can that be translated into software features? Are those features really needed, or are there other features that might better meet the business needs? Behavior-Driven Development relies heavily on communication and collaboration, which are also key tenets of Agile.
The premise of Behavior-Driven Development is that the tests are written before the code is developed. In basic terms, it tells you how a piece of code needs to be tested: you want to test the behavior of a given feature. Doing this first is extremely important so that you have a high degree of test coverage. BDD requires the person creating the tests to think about the business scenario. As you build code, you build a very large repository of tests which can be executed over and over again using tools such as Jenkins. Now you might wonder: if the developers are writing all these tests, why are testers needed? Testers are needed more than ever. Developers are often focused on a small module or piece of code and don't have an overall understanding of how the system works as a whole. Testers typically have a broader understanding and are usually business subject matter experts. It is important for testers to understand the broader context of the what and why of the software and its intended business use.
Three Best Practices of BDD
- Discover: The first best practice is the most important one in my opinion. Create a shared understanding of the requirements across the business and the Agile team through collaboration. This is a critical step, and one that most Agile teams overlook in their rush to build tests and code. This collaboration needs to occur through structured conversations using specific rules and examples.
- Define: The second best practice is to use real-world business scenarios to document how the system should behave. This documentation reinforces best practices 1 and 3. The most commonly used framework for defining scenarios is Gherkin.
- Automate: The third best practice is to automate the documentation. This allows the documentation to grow and become dynamic. This process verifies that the system works as expected and reinforces best practices 1 and 2. Most teams today use Cucumber to automate BDD tests.
BDD Framework Process
Here is a sample flow of how things work within the BDD framework:
- Create a user story with high level functionality
- Hold a requirements session and further define functionality with business examples
- Define the business scenario using Gherkin
- Automate the scenario using Cucumber
- Write code so that the test scenario will pass
- Run additional tests including regression, performance, etc.
- Release code into production
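To make the "tests before code" ordering in the flow above concrete, here is a tiny hypothetical Python example; the free-shipping rule and function names are invented for illustration. The scenario's expectations exist as executable checks, and the code is written to make them pass:

```python
# Hypothetical feature: free shipping on orders of $50 or more.
# In the BDD flow, the scenario below was agreed on and automated first;
# the function was then written to make it pass.

def qualifies_for_free_shipping(order_total):
    """Production code written to satisfy the agreed scenario."""
    return order_total >= 50.00

# Scenario: Given an order total, When checkout runs,
#           Then shipping is free at $50 and above
def test_free_shipping_scenario():
    assert qualifies_for_free_shipping(50.00) is True
    assert qualifies_for_free_shipping(49.99) is False

test_free_shipping_scenario()
print("scenario passed")
```

The value is in the ordering: because the check predates the implementation, the implementation cannot quietly drift from the agreed business behavior.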
I hope this has been helpful and has given you a solid introduction to Behavior Driven Development.
The world of software quality has changed tremendously over the last 5 years. There are many reasons why this has happened, and it is critical that education serve as the primary strategy to influence change in an organization. Here are a few critical areas where the CIO can gain a better understanding of some of the challenges that impact software quality.
Primarily due to Agile, the robust requirements that used to be a cornerstone of the waterfall methodology have been thrown in the trash. While some organizations continue to document requirements and follow best practices for gathering them, most feel this is outdated and no longer necessary. The lack of proper documentation and requirements has a direct impact on software quality. Here are some specific reasons:
- Without proper documentation, a developer will code software based upon their own understanding. This often results in buggy code that requires rework after production, which is very expensive to fix.
- Without proper documentation, a tester will write test cases based upon their own understanding. This often results in test cases that have to be rewritten, and in the tester missing defects that make it into production.
- Without proper documentation, the test automation engineer will build automated test cases that have to be changed whenever the manual test case changes, and will miss defects that go into production.
- Without proper documentation, the production support developer will fix problems in production and break other production code, because they didn't get an accurate picture from the developer who originally built the code.
These are just a few examples, but they should make clear why requirements are critical.
Agile has changed the approach to how software is delivered into production. It has some tremendous benefits, and done properly, it can greatly increase productivity within an organization. It is quick, lean, and provides fantastic feedback from the business. CIOs love it because it provides rapid return on investment.
There are some challenges from a software quality perspective that need to be incorporated and education needs to happen across all levels of an organization. Unless you are deeply entrenched on an agile team, you will probably make a ton of assumptions that are incorrect. Within an agile team, everyone has a responsibility for software quality. Here are some areas that will have a direct impact on software quality:
- Agile Stories must be well written. It is not enough to throw out tasks without enough detail.
- Agile planning is critical. There is some real misunderstanding about agile as it relates to planning. The more planning and organizing that can be done, the better the team will respond and the more work it will be able to fit into a given sprint.
- Documentation is needed. This is another area which is often misunderstood. Providing documentation allows the team to understand details and more effectively code and test the desired solution.
- Developers must still test. This is important. Just because the agile team has a tester, doesn’t mean that a developer doesn’t have to test.
- All testing can be automated. Well, perhaps it could, but it might not make sense, especially if the code isn't stable and will change over sprints. ROI is still important within agile, so just because you can automate a test doesn't mean that you should. This is the area where CIOs most need software quality education.
CIOs often ask why there are so many defects found in production. That is a fairly complicated question, and answering it requires a full analysis of the defects to gain a better understanding. Many years ago, when I began a new job, the same question was asked. To find the answer, the CIO brought in an outside company to perform a software testing assessment. While I was fairly new, it wasn't uncommon for this to occur. In fact, I welcomed the opportunity, because I already had a hypothesis as to why this was happening: typically it is due to little or very poor requirements. The company did the assessment and found that 38% of defects were making it into production. That is a really high percentage; most companies average around 5%. Over a period of time we started to tackle the problem, and after a year of work we were able to reduce production defect leakage to 5%. This was a tremendous accomplishment and required a team effort from project managers, business analysts, developers, and testers.
Software Quality Metrics
CIOs are very metrics-driven. They use data every day to make better decisions. While there is usually some form of metrics around software quality, it usually does not make it into the CIO's hands for one reason or another. I believe software quality metrics tell a story and provide great insight to those who are willing to look at and interpret the data. Several years ago, my team and I started to analyze which data was important and which metrics would help us make decisions. Once we agreed on those metrics, we started to gather that information release over release. We began to see trends that helped us test the problematic areas more thoroughly, which brought production defects down. We also built a web-based dashboard that allowed the CIO and anyone else in the company to see how testing was progressing. Using this dashboard, we could determine whether we were going to meet our testing timelines and see which outstanding defects were holding up production deployments. This was a true game changer for the organization.
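As an illustration of the kind of metric that tells this story, production defect leakage can be tracked release over release. This is a simple sketch; the function name and the release figures are invented, with the 38% and 5% figures echoing the assessment numbers discussed above:

```python
def defect_leakage(production_defects, total_defects):
    """Percentage of all defects found that escaped into production."""
    if total_defects == 0:
        return 0.0
    return 100.0 * production_defects / total_defects

# Hypothetical releases showing the kind of downward trend described above
releases = [
    ("R1", 38, 100),   # 38% leakage, similar to the assessment finding
    ("R2", 20, 110),
    ("R3", 5, 100),    # near the ~5% figure most companies average
]
for name, prod, total in releases:
    print(f"{name}: {defect_leakage(prod, total):.1f}% leakage")
```

Plotted on a dashboard release over release, a metric like this makes the trend, and the payoff of the quality work behind it, visible to the CIO at a glance.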
Educating CIOs on software quality will take time. CIOs want high-quality software, but they often don't understand how to get there. They don't want their business partners to suffer through using software that doesn't work properly. As a quality champion, it is important that you spend time with your CIO and provide software quality education so that you can avoid significant issues in production. Software quality can be done effectively and efficiently within an organization.
I recently conducted an informal Testing Tools survey on LinkedIn and asked the following question:
If you are a software tester, I need your help. What software testing tools does your organization currently use?
I received a significant number of replies and based upon that information I have gathered some very interesting information. I compiled 230 responses and identified the most popular testing tools.
Big Winner: Selenium
As you can see, Selenium received the largest number of votes by a wide margin. For most people in the software testing industry, this is not a huge surprise. With open source testing tools making significant strides in the last few years and companies looking to save expenses by going open source, it is expected that a testing tool such as Selenium would be widely used. The big takeaway for testers: learn Selenium. To use Selenium effectively, you are also going to need to learn Java programming.
Big Loser: HP ALM/UFT
If I had asked this question 5 years ago, HP QC and QTP would have been at the top of the list. I doubt ALM/UFT will be able to climb much higher in terms of utilization, but it will be interesting to see if Micro Focus, which recently purchased these tools from HP, will be able to turn things around. The tool stack is well integrated between ALM/UFT/LoadRunner, but that isn't enough anymore. In my opinion the leading factor is cost: testing organizations simply don't have the budgets anymore to pay large sums of money for licensing. The second factor is that these tools failed to keep up with changes in technology and let smaller, leaner organizations beat them in developing better testing tools.
Web Service Testing Tools
It is interesting to see that web service testing tools such as Postman and SoapUI are slowly creeping up to the top. I believe more and more testing organizations are moving up closer to unit testing and beginning to hit more web service testing and this is an area where testing organizations have tremendous opportunities to find defects earlier in the cycle.
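As a rough sketch of what a service-level test checks, the example below uses only Python's standard library to stand up a stub JSON endpoint and assert on the status code and response body. The endpoint path and payload are invented; a real team would point Postman, SoapUI, or an HTTP client library at the actual service:

```python
# Minimal sketch of a web service test using only the standard library.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    """Stands in for the real service; returns a fixed JSON payload."""
    def do_GET(self):
        body = json.dumps({"status": "ok", "items": 3}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/inventory"  # hypothetical path
with urllib.request.urlopen(url) as resp:
    status = resp.status
    payload = json.loads(resp.read())

# The assertions a service-level test makes: status code plus the contract
assert status == 200
assert payload["status"] == "ok"
print("service test passed:", payload)

server.shutdown()
```

Because tests like this sit below the UI, they run fast and catch contract-level defects long before an end-to-end test would, which is exactly the shift-left opportunity described above.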
Performance Testing Tools
I am not surprised by the relatively low numbers of performance testing tools currently being used by testing organizations, but I would strongly recommend companies begin to take this seriously. When performance testing is not adequately done, it is only a matter of time before catastrophic issues occur. Have testers forgotten about Healthcare.gov?
I asked a fairly broad question on LinkedIn, and that was by design. I primarily wanted to understand what software testing tools were being used, but I also wanted to get a better idea of what tools testers use outside of testing tools. If you take a look at the chart above, JIRA and Jenkins were heavily used. Long gone are the days when testers only had to worry about learning testing tools.
I hope this information has been helpful. I have gained a better appreciation for the challenges software quality engineers face every day. I encourage you to take this information and learn a new software testing tool today!
If you are responsible for performance engineering, you know there is a true art to understanding how things work. Performance engineering requires an extensive understanding of the applications under test and of how those applications behave under load. Early in my career I thought performance was based upon pass/fail criteria only, but over the years I have learned that is not always the case. Here are some performance engineering principles you must consider:
- There are many components which drive application performance including (but not limited to) CPU, Memory, Cache, Databases, Servers, Network Traffic, and Transactions
- It is always important to run a baseline test so you have a point of reference in determining what is acceptable
- I always recommend getting at least 2 acceptable performance runs in order to eliminate any anomalies
- The production environment will likely be the only environment that is sized adequately. Many organizations have a dedicated performance testing environment, but you will need to extrapolate your results to determine what will be acceptable in production
- Focus on building performance scripts that cover the top 80-90% of the most used transactions, and base your performance tests on those only
- It helps to have a tool such as AppDynamics to assist in troubleshooting performance-related issues
- Run your performance test for at least 3 hours in order to identify any performance bottlenecks
- Always involve development and infrastructure teams in helping with the performance analysis, they should be major participants in the review and sign off of the performance results
- Performance Engineers should understand how the applications work and be able to help identify performance issues
- It is important to view the performance test as a whole rather than looking at each transaction in isolation. In general, a few slow transactions are fine as long as the overall test is satisfactory
- You need to build a large end to end suite so you can see how the application or applications behave together
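A couple of the principles above, baselining and judging the run as a whole, can be sketched with a simple percentile comparison. The response times, the 10% tolerance, and the function names below are invented for illustration; a real tool like LoadRunner or AppDynamics reports far richer data:

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times."""
    ordered = sorted(samples)
    index = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[index]

def compare_to_baseline(baseline_ms, run_ms, tolerance=0.10):
    """Pass if the run's 90th-percentile time is within tolerance of the
    baseline. A few slow outliers are fine; the run is judged as a whole."""
    base_p90 = percentile(baseline_ms, 90)
    run_p90 = percentile(run_ms, 90)
    return run_p90 <= base_p90 * (1 + tolerance)

# Hypothetical response times (ms) for the same transaction mix
baseline = [120, 130, 125, 140, 135, 150, 128, 132, 138, 145]
good_run = [122, 131, 127, 142, 150, 139, 129, 133, 136, 148]
bad_run  = [200, 210, 190, 260, 240, 230, 220, 215, 225, 250]

print("good run passes:", compare_to_baseline(baseline, good_run))
print("bad run passes:", compare_to_baseline(baseline, bad_run))
```

Comparing percentiles rather than averages is a deliberate choice: averages hide the slow tail that users actually feel, while a high percentile against a baseline captures "is this run acceptably close to what we know is acceptable."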
This list should help you gain a better understanding of performance engineering. It is important to keep working toward a deeper understanding of it. As technology changes and evolves, it will be necessary to keep up with new trends. In addition, there are some great performance engineering tools that can complement those you already use.