Ten Tips for Agile Testing

This is a copy of an article I wrote last year for Agile Development Magazine; it was published in the summer of 2007. Since then we have changed and improved, but I will discuss that later.


Two years ago I started as test manager on a J2EE project. The project team had switched from a waterfall approach to an agile approach with Scrum a few months earlier. The first question the project manager asked me was, “Can you write a test plan for our first release?”

I quickly produced a project test plan that called for a test phase of a few months and a separate test team. It came complete with a capacity calculation per week for the testers, an MS Project document, and a matrix with all the quality attributes and the effort we should spend to test every attribute. What a mistake!

Two years later, we’ve learned a great deal about agile testing. This article presents ten tips for agile testing based on our experience. However, don’t expect to find the perfect test approach for your company or software project in this article. That is still something you will have to find out yourself!

Integrate testers into the development teams
Teams are responsible for delivering software that meets expected requirements and quality. However, if we want teams to test the software, we must give them the knowledge to do it right. Testers have that knowledge. By integrating testers into the development teams, teams obtain the skills they need to test their software. When you try this, make sure you choose the right mix: one tester for every three programmers is a fair but minimal ratio.

Use risk-based testing
You can never test everything with the same (extensive) depth; even in a waterfall project you have to make choices. In an agile project all the activities are time boxed, so you have to make choices about how extensively you want to test each feature. We use a risk-based approach to determine which test activities we are going to carry out for a feature during the iteration. The risk level of every feature is determined by the customer and the teams. It is a very transparent process, so the customer knows exactly which test activities are executed for every feature.
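To make this concrete, here is a minimal Java sketch of such a mapping. The risk levels and the activities per level are illustrative examples only, not our actual matrix.

```java
import java.util.Arrays;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

// Illustrative risk levels; the real categories are agreed with the customer.
enum RiskLevel { LOW, MEDIUM, HIGH }

public class RiskBasedTestPlan {

    // Which test activities we execute for a feature of a given risk level.
    private static final Map<RiskLevel, List<String>> ACTIVITIES =
            new EnumMap<RiskLevel, List<String>>(RiskLevel.class);

    static {
        ACTIVITIES.put(RiskLevel.LOW, Arrays.asList(
                "unit tests", "exploratory testing"));
        ACTIVITIES.put(RiskLevel.MEDIUM, Arrays.asList(
                "unit tests", "exploratory testing",
                "automated regression tests"));
        ACTIVITIES.put(RiskLevel.HIGH, Arrays.asList(
                "unit tests", "exploratory testing",
                "automated regression tests",
                "formal techniques (e.g. boundary value analysis)"));
    }

    // Returns the test activities agreed for this risk level.
    public static List<String> activitiesFor(RiskLevel level) {
        return ACTIVITIES.get(level);
    }
}
```

Because the mapping is explicit, it is easy to show the customer exactly what a feature's risk level buys in terms of testing.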

Have testers review unit tests
In our organization the developers are responsible for creating and maintaining the unit tests. Unit tests are a very important part of our test harness. Developers, however, often think differently from testers; for example, they tend to test only the success scenario.

To create unit tests that are as good as possible, our testers review the unit tests for all our high-risk items. The review has two advantages. First, the unit tests are improved because testers and developers complement each other: the developer knows where the weak places in the source are, and the tester can use his knowledge of testing to give tips to improve the unit tests. Second, the testers know exactly which test cases are executed in the unit tests and can concentrate on executing other (e.g. higher-level) test cases.
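To illustrate the kind of gap such a review typically uncovers, here is a hypothetical JUnit 4 example; the DiscountCalculator class and its behaviour are invented for illustration. The developer wrote the success scenario; the boundary and failure cases were added after the tester's review.

```java
import org.junit.Test;
import static org.junit.Assert.*;

// Invented example class: 10% discount on orders of 100.00 or more.
class DiscountCalculator {
    double priceAfterDiscount(double amount) {
        if (amount < 0) {
            throw new IllegalArgumentException("amount must be >= 0");
        }
        return amount >= 100.0 ? amount * 0.9 : amount;
    }
}

public class DiscountCalculatorTest {

    private final DiscountCalculator calc = new DiscountCalculator();

    // What the developer wrote: the success scenario only.
    @Test
    public void appliesDiscountToLargeOrder() {
        assertEquals(90.0, calc.priceAfterDiscount(100.0), 0.001);
    }

    // Added after the tester's review: behaviour just below the threshold.
    @Test
    public void noDiscountJustBelowThreshold() {
        assertEquals(99.99, calc.priceAfterDiscount(99.99), 0.001);
    }

    // Added after the tester's review: invalid input should fail fast.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeAmount() {
        calc.priceAfterDiscount(-1.0);
    }
}
```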

Create a test automation framework and appoint a toolsmith
Automated testing is very important because new features and refactoring can introduce problems that can be difficult to find. By using an automated test framework, we can maintain quality levels during the iteration. Our testers are able to create new tests easily and quickly in the framework. We have a dedicated test engineer (we call him a toolsmith) who maintains and optimizes the test automation framework, reviews the new automated tests of the testers, and analyzes the daily test results. Because the toolsmith supports them, the testers in the teams can spend more time creating and extending automated tests.
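We cannot show our actual framework here, but the following minimal sketch (all names are invented) gives an idea of the division of labour: the toolsmith maintains the shared plumbing in a base class, and a tester only writes the scenario itself.

```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.*;

// Maintained by the toolsmith: common setup/teardown lives in one place.
abstract class AutomatedTestCase {

    protected FakeApplication app;

    @Before
    public void startApplication() {
        app = new FakeApplication(); // in reality: start/connect to the system under test
    }

    @After
    public void stopApplication() {
        app = null; // in reality: clean up the test environment
    }
}

// Stand-in for the real application driver.
class FakeApplication {
    private boolean loggedIn;

    void login(String user, String password) { loggedIn = true; }

    boolean isLoggedIn() { return loggedIn; }
}

// Written by a tester: just the scenario, nothing else.
public class LoginFlowTest extends AutomatedTestCase {

    @Test
    public void userCanLogIn() {
        app.login("tester", "secret");
        assertTrue(app.isLoggedIn());
    }
}
```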

Display quality metrics in a public location
Almost every software project has a problem registration system, automated test results, and in some cases nightly or continuous build results. But how often do team members look at the results or count the open problems? We installed a monitor in the coffee room that displays the current number of open problems, the percentage of successful unit tests, the percentage of successful nightly builds, and the current state of the continuous build. By displaying the metrics in public, the teams are confronted with the information. The information is no longer just a number in a system or a record in a metrics database.
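As a made-up illustration of the kind of figures on that monitor (the counts below are fabricated; in reality they would be pulled from the problem registration and build systems):

```java
// Fabricated numbers, for illustration only.
public class QualityDashboard {
    public static void main(String[] args) {
        int unitTestsRun = 4210, unitTestsPassed = 4187;
        int nightlyBuilds = 30, nightlyBuildsGreen = 27;
        int openProblems = 12;

        System.out.printf("Open problems: %d%n", openProblems);
        System.out.printf("Unit tests passing: %.1f%%%n",
                100.0 * unitTestsPassed / unitTestsRun);
        System.out.printf("Nightly builds green: %.1f%%%n",
                100.0 * nightlyBuildsGreen / nightlyBuilds);
    }
}
```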

Add a test scrum
One advantage of having a separate test team in one room is that the communication between the testers is good. When you have a project like ours where the testers are distributed across several teams, the communication becomes more difficult. To solve this problem, we use a test scrum to align the test activities. Our test scrum is held twice a week and every team is represented by one tester. The test scrum is a scrum like the daily team scrum but focused on test activities. The test manager is the scrummaster of the test scrum.

Implement test retrospectives
Every team in our project holds a retrospective meeting at the end of the iteration. In the retrospective, the team discusses the process: what went well and what went wrong. The testers in the team learn and discover new ways to improve their tests; it is good when they share this knowledge with testers from the other teams.

We have a test retrospective every iteration so the testers can exchange knowledge and experience and discuss problems they have. It is important that the retrospective is only related to test issues; you shouldn’t discuss team issues (they should be discussed in the team retrospective). As with the test scrum, the test manager is the scrummaster of the test retrospective.

Plan open problems
We try to fix all the problems that we find during the iteration in that same iteration, but sometimes we end the iteration with open problems. The best way to handle those problems is to add them to the sprint backlog for the next iteration. By explicitly planning those problems, the chance that they are "forgotten" and pile up is very small.

Remember: Testing is still testing
When you test in an agile software project you can still use the "traditional" test techniques. We use exploratory testing but also apply test techniques such as boundary value analysis, equivalence partitioning, cause/effect diagrams, and pair-wise testing. Which test technique we choose for a feature depends on its risk category. Exploratory testing is used in every category, but as the risk gets higher we also apply more formal test techniques. The challenge for the tester is to use a formal test technique without delivering extensive test documentation.
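One way to apply a formal technique without producing documents is to express it directly as an automated test. As a small made-up example of boundary value analysis (the AgeValidator class is invented for illustration): for a field that accepts values from 18 to 65, we test each boundary and its direct neighbours instead of arbitrary values in the middle.

```java
import org.junit.Test;
import static org.junit.Assert.*;

// Invented example class: an input field that accepts values 18..65.
class AgeValidator {
    private final int min, max;

    AgeValidator(int min, int max) {
        this.min = min;
        this.max = max;
    }

    boolean isValid(int age) {
        return age >= min && age <= max;
    }
}

public class AgeValidatorBoundaryTest {

    private final AgeValidator validator = new AgeValidator(18, 65);

    // Boundary value analysis: check each boundary and its neighbours.
    @Test
    public void valuesAroundTheBoundaries() {
        assertFalse(validator.isValid(17)); // just below the lower bound
        assertTrue(validator.isValid(18));  // lower bound
        assertTrue(validator.isValid(65));  // upper bound
        assertFalse(validator.isValid(66)); // just above the upper bound
    }
}
```

The test itself is the documentation of the technique: anyone reading it can see which boundaries were analysed.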

Start doing it!
The last but most important tip: start doing it! Don't talk too much about how you are going to introduce testers, or about which test approach is perfect for your organization. You are going to make mistakes or discover that the chosen approach doesn't fit your organization. That's a given. But the sooner you start, the sooner you can begin learning and improving your test approach. With the retrospective meetings, you have the opportunity to adapt your test approach every month.

Conclusion
Last month, we delivered the new release of our software. We started developing this release in January. During the last two weeks of May, we did some more exploratory testing, solved the remaining problems, and prepared the delivery. I wrote no extensive test plan, did no capacity calculation, and created no matrix with quality attributes. We just started testing and improved our testing every month.

Testing and Scrum

Here you can find a link to a presentation I gave during the Scrum Falls in London in November 2007, or you can just watch the presentation below.

Below is a short description of the presentation.

Scrum is a project management framework; it doesn't contain any development or test practices. In most companies Scrum is used in combination with XP: Scrum for project management, and XP practices to guide development.

If you're a traditional tester starting on a project that uses (or is about to use) Scrum, you will have a hard time finding out what you have to do. Scrum doesn't say anything about testing, and while XP does say something about testing, it is not a guidebook for a tester.

I want to share our test experiences with the audience; we have used Scrum for two years now, and I think we have a good test process that is interesting for everyone involved in building software.

In this presentation I will share our experience at Planon with testing and Scrum. I want to answer the following questions:

  • Do you need specialized testers in Scrum?
  • How did we introduce testers into the development teams?
  • What is the role of the tester in our team?
  • How do you recruit testers for a Scrum team?
  • How do you organize the testers across the teams?
  • Do you write a (master) test plan?
  • What about formal testing techniques in Scrum?
  • What kind of test automation harness do we use?
  • How do we report on testing?

The audience will get a good overview of the test activities we do at Planon, and will get tips on how to improve the testing process in their own projects.

Our First Why-Did-We-Miss-That Session

Today we had our first Why-Did-We-Miss-That session. I read about this kind of session on the blog of Elisabeth Hendrickson.

We already discussed last year that we should analyse our customer problems, but we didn't know how to do it in a structured way, and because of all the other important things we never found the time to sit down and analyse them.

So that's why I just planned the session. Now, of course, you are curious about our experience…

I planned the session from 13:30 till 15:30, two hours: one hour for the first part and the rest for the second part.

I prepared the meeting: I ordered index cards, reserved a room, created a filter in our bug registration system, and sent every tester the link to the blog post. I also explained the purpose of the meeting.

We started analyzing and discussing the problems. We analyzed approximately 40 of the 140 problems. We ran out of time, so we stopped and moved on to the next step. But first, a coffee and smoke break.

One person read the cards and the rest of the team created groups. For some cards it was very clear to which group they belonged, but we also had a lot of one-card groups. It didn't work to let everyone group cards at the same time; we lost the overview. During the grouping it also became clear that not everyone had written actionable cards, so some explanation was needed while grouping them.

After the grouping, we defined actions for every group: we turned each stack of cards upside down and wrote the actions on the cards. We came up with 15 actions of all kinds, for example: extend our default ET charter, do more installation testing, and educate the help desk about the differences between the old and the new product.

As a final test, we randomly selected 15 problems and checked whether they fitted into one of the defined groups. Eleven problems fitted into a group; only four would not have been discovered. A good score, in my opinion.

Of course we did a retro at the end of the meeting; the following improvements were mentioned:

  1. It is an intensive meeting, so plan it early in the day and not together with another meeting on the same day;
  2. All calls should be in the same language;
  3. We should make a pre-selection, but this carries the risk that you will not find all improvements;
  4. The next meeting will be a few months after a customer release (not an internal release), or when there are, for example, 50 new problems;
  5. We didn't find any surprises… I think this is a good thing and proves that our test retrospectives work well and that we already continuously try to improve ourselves.

So what is the conclusion for us? A great tool to periodically analyse our test process.

Still wondering if this is something for you? Just try it and improve it; don't be afraid of trying something new.

InfoQ: Does “Done” Mean “Shippable”?

A very interesting article about what is "done" and what is "shippable"; my opinion is in the comments on the article.