After the product ships and the bug reports come trickling in, the question that always comes back to haunt the testers is “Why did this bug slip through?”
There are some obvious answers. “We didn’t have time to test this” is common, as is “We didn’t have the equipment”.
“We had no idea the product could even do that” is another, particularly for the more complex applications with tricky integrations and sneaky business logic.
The meticulous tester will then go on to create a specific regression test covering the case in the bug report. Too often, in my experience, we stop there.
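For concreteness, suppose the bug report said that exporting an empty report crashed the app. That single regression test might look like this minimal pytest sketch, where the product code, export_report, and its CSV format are all hypothetical stand-ins for whatever your own bug report describes:

```python
# Hypothetical stand-in for the product code under test: exporting an
# empty report used to crash instead of producing a header-only file.
def export_report(rows, fmt="csv"):
    if fmt != "csv":
        raise ValueError(f"unsupported format: {fmt}")
    header = "id,amount"
    body = "\n".join(f"{r['id']},{r['amount']}" for r in rows)
    return header + ("\n" + body if body else "")


def test_export_empty_report_does_not_crash():
    # The exact case from the bug report, and nothing more.
    assert export_report([]) == "id,amount"
```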
A standard quality improvement process to use in this case is the five whys¹. Why did we not know the product could do that? The documentation was missing. The requirements were missing. The testers didn’t understand the business requirements well enough to realize that this was something the customers would want to do. The feature had low discoverability. Continue following the whys all the way down until it’s no longer useful, then try to do something about what you find. Books have been written on this topic, so I won’t go into details.
What I want to bring up, though, is an important question to ask that is NOT a why, but a “which”:
Which other tests did we fail to do?
Just like bugs, blind spots in testing rarely appear alone. If a bug report triggers only a single new regression test, be wary. There’s almost certainly some other, related functionality you missed the first time around. The whys above can help you find the extent of the blind spot. Make sure it is covered for the next release!
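One cheap way to act on that “which” is to grow the single regression test into a small family of tests probing the neighbourhood of the bug. Continuing the hypothetical sketch above (still assuming pytest, and export_report as defined there), parametrization keeps this almost free:

```python
import pytest

# export_report is the hypothetical stand-in from the earlier sketch.


@pytest.mark.parametrize(
    "rows, expected",
    [
        ([], "id,amount"),                               # the reported bug itself
        ([{"id": 1, "amount": 0}], "id,amount\n1,0"),    # one ordinary row
        ([{"id": 1, "amount": -5}], "id,amount\n1,-5"),  # negative amount
    ],
)
def test_export_report_neighbourhood(rows, expected):
    assert export_report(rows, fmt="csv") == expected


def test_export_rejects_unknown_format():
    # Related behaviour right next to the bug: an unknown format should
    # fail loudly rather than silently fall back to CSV.
    with pytest.raises(ValueError):
        export_report([], fmt="pdf")
```

Which cases you add matters less than the habit: treat every new regression test as a prompt to ask what its siblings should be.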
¹ This process is also called root cause analysis, but I prefer the whys, because “root cause analysis” has, to me, the connotation of following a step-by-step formula to end up with a specific answer that logically follows from the premises, while “why” sparks creativity. Your connotation mileage may vary.