Today we had our first Why-Did-We-Miss-That session. I read about this kind of meeting on Elisabeth Hendrickson's blog.
We had already discussed last year that we should analyse our customer problems. We didn't know how to do it in a structured way, and with all the other important things going on we never found the time to sit down and analyse them.
So that's why I simply planned this session. Now you are, of course, curious about our experience…
I planned the session from 13:30 till 15:30, two hours: one hour for the first part and the rest for the second part.
I prepared the meeting: ordered index cards, reserved a room, created a filter in our bug registration system and sent every tester the link to the blog. I also explained the purpose of the meeting.
We started analyzing and discussing the problems. We got through approximately 40 of the 140 problems. We ran out of time, so we stopped and moved on to the next step. But first a coffee and smoke break.
One person read the cards aloud and the rest of the team grouped them. For some cards it was very clear to which group they belonged, but we also ended up with a lot of one-card groups. It didn't work to let everyone group cards at the same time; we lost the overview. During the grouping it also became clear that not everyone had written actionable cards, so some explanation was needed while grouping them.
After the grouping, we defined actions for every group: we turned each stack of cards upside down and wrote the actions on the back. We came up with 15 actions of all kinds, for example: extend our default ET Charter, do more installation testing, and educate the help desk about the differences between the old and new product.
As a final check we selected 15 random problems and analyzed whether they fitted in one of the defined groups. Eleven problems fitted in a group; only 4 would not have been caught. A good score in my opinion.
Of course we did a retro at the end of the meeting; the following improvements were mentioned:
- It is an intensive meeting, so plan it early in the day and not with another meeting on the same day;
- All calls should be in the same language;
- We should make a pre-selection, although this carries the risk that you will not find all improvements;
- The next meeting will be a few months after a customer release (not an internal release) or when there are, for example, 50 new problems;
- We didn't find any surprises… I think this is a good thing and shows that our Test Retrospectives work well and that we already continuously try to improve ourselves.
So what is the conclusion for us? A great tool to periodically analyse our test process.
Still wondering if this is something for you? Just try it and improve it; don't be afraid of trying something new.