I would like to share my experience with agile performance testing with you. It is not about knowledge of a tool like LoadRunner itself; it is about the process of setting up a performance testing environment and embedding it in your agile process. I had a session about this topic during the Agile Testing Days 2010 in Berlin.
This blog is a summary of that presentation and the second in a series about integrating performance testing into your agile environment. In this blog I describe how to set up an agile performance testing environment.
With performance testing we would like to do two things:
- Teams should be able to deliver features that are done, including performance testing;
- Detect regression.
The goal of a sprint script is to test a feature. We have the general requirement that the client should always respond within four seconds. In some cases the product owner has extra performance requirements that need to be tested. During sprint planning, the whole team discusses whether a feature requires a performance test; a reason could be, for example, a change in the framework.
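The four-second budget can be expressed as a simple check in a sprint script. The sketch below is a minimal, hypothetical illustration in Python: the feature call is a stub, and the repetition count and helper names are my own assumptions; in our environment the actual measurement is done with a LoadRunner script against the real application.

```python
import time

# The general four-second response budget from the requirement above.
RESPONSE_BUDGET_SECONDS = 4.0

def measure_worst(action, repetitions=5):
    """Run the action several times and return the worst response time."""
    worst = 0.0
    for _ in range(repetitions):
        start = time.perf_counter()
        action()
        worst = max(worst, time.perf_counter() - start)
    return worst

def within_budget(action, budget=RESPONSE_BUDGET_SECONDS):
    """True when even the slowest observed call stays inside the budget."""
    return measure_worst(action) <= budget

# Stub standing in for the feature under test:
def fake_feature_call():
    time.sleep(0.01)

print(within_budget(fake_feature_call))  # → True for this fast stub
```

Checking the worst observation rather than the average is deliberate: a single slow response is what the user notices.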
If it is decided that a performance test is required, the testers build the scripts. If necessary they use integration tests to generate data, or they request a customer database. The testers verify the scripts on their local machines, and as soon as they are ready, the scripts are executed in the performance test environment. This environment is available to the teams during weekdays. The new scripts are added to the CVS repository.
The scripts have a lifespan of one sprint and are not maintained once the testing of the feature is done. If the new feature is important enough, which is determined by the team, the product owner and the lead architect, the scripts can be promoted to weekend scripts, or the existing weekend scripts can be expanded to cover the new feature.
The second goal of the performance test environment is to detect regression. It is possible that a team develops a new feature that impacts the global performance of the application: they did not think about writing a sprint script and did not expect any impact on performance.
For these cases we have developed weekend scripts. Weekend scripts detect regression, and trend data is generated. As a result, we are able to see whether the performance increases or decreases over a six-month release.
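The trend data makes regression detection mechanical: compare the latest weekend run against the baseline of the runs before it. The sketch below is an assumed, simplified version of that idea; the window size, the 20% tolerance, and the function name are illustrative choices, not our real thresholds.

```python
# Hypothetical regression check on weekend trend data.
# Each entry is the measured response time (in seconds) of one weekend run.

def detect_regression(trend, window=4, tolerance=1.20):
    """Flag the newest run when it exceeds the average of the previous
    `window` runs by more than `tolerance` (20% here)."""
    if len(trend) <= window:
        return False  # not enough history for a baseline yet
    baseline = sum(trend[-window - 1:-1]) / window
    return trend[-1] > baseline * tolerance

# Several months of weekend runs; the last one is suddenly much slower:
history = [2.1, 2.0, 2.2, 2.1, 2.0, 3.1]
print(detect_regression(history))  # → True: ~50% above the baseline
```

Plotting the same series per weekend gives the release-long trend view mentioned above.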
The weekend scripts cover the most important areas of our software. These areas are discussed with the architects, testers, product owner and lead architect. The goal of the weekend scripts is not to test everything; instead, we try to test the 20% of the code that generates 80% of the generic performance problems.
The technical test engineer (toolsmith) is responsible for writing and maintaining the weekend scripts. He is supported by the development teams.
The weekend scripts are executed automatically during the weekend. The reporting process is explained in the next section.
Weekend scripts have a life span of multiple releases and need to be maintained. When the software changes, the weekend scripts need to be changed as well (of course, only when the change impacts the scripts).
In our organisation we use a lean principle: when there is an error in the production line, first fix the error, then continue the line. The teams first fix the failing automated tests, then discuss the open problems found during development, and only then start working on new features. When an automated test fails or a problem is reported, it automatically enters this process.
When a weekend script fails, either because of an error or because the response time is too long, a call is created in our bug tracking system. The bug is assigned to the team responsible for the module. When the team comes in on Monday and discusses the open bugs, the bug reported by the performance test is automatically discussed and assigned to a team member. During the week the team fixes the bug, and the next weekend the test will either run green or fail again; in the latter case there are two bugs assigned to the team.
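The automatic reporting step can be sketched as follows. Everything here is hypothetical glue code: the module-to-team mapping, the in-memory "tracker" list, and the function names are my own stand-ins, and in our environment the last step would be an API call to the actual bug tracking system instead.

```python
# Hypothetical Monday bug flow: each failing weekend script creates one
# call, assigned to the team that owns the module.

MODULE_OWNERS = {"search": "team-a", "checkout": "team-b"}  # assumed mapping
RESPONSE_BUDGET_SECONDS = 4.0

def file_weekend_failures(results, tracker):
    """results maps script name -> (module, passed, worst response time)."""
    for script, (module, passed, seconds) in results.items():
        if passed and seconds <= RESPONSE_BUDGET_SECONDS:
            continue  # green run, nothing to report
        reason = "error" if not passed else f"response time {seconds}s > 4s"
        tracker.append({
            "title": f"Weekend script '{script}' failed: {reason}",
            "assigned_to": MODULE_OWNERS.get(module, "toolsmith"),
        })
    return tracker

bugs = file_weekend_failures(
    {"search_smoke": ("search", True, 2.0),      # green, skipped
     "checkout_flow": ("checkout", True, 6.5)},  # too slow, reported
    tracker=[])
print(len(bugs))                  # → 1
print(bugs[0]["assigned_to"])     # → team-b
```

Note that a script that fails two weekends in a row simply produces a second call, matching the "two bugs assigned to the team" situation described above.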
The problems found during the execution of sprint scripts are not reported automatically. These tests are executed by testers during the sprint, and any issues are handled during the test execution.
The advantages and disadvantages of this approach:
- We don’t need any extra testing when we deliver a new release. All the testing is already done during the sprint and the regression tests are executed every weekend;
- Testing has become more attractive for (some) testers. Our testers now also execute performance tests, which has expanded their work experience;
- Teams still need support from the technical test engineer or more experienced testers, especially when testers are doing performance tests for the first time: LoadRunner generates a lot of data, and it is not obvious where to look for possible problems;
- It is expensive to set up performance tests, mainly in time but also in license costs, depending on which tool you use. Be wise, and determine whether you are going to set up your own performance tests or use external capacity.