Test better: Do customer support

Earlier today, I had a conversation with one of my excellent new colleagues from a small company my employer recently acquired. New colleagues bring new perspectives. This particular company practices something that a lot of software companies would benefit from: a large portion of their developers and project folks started out in customer support.

Customer support is great. Everyone should do it on a regular basis: developers, certainly, and definitely testers. There is no better way to learn about all the weird ways customers configure and integrate the systems you are developing, the workflows they employ, and which things tend to go wrong out there in the real world. All of this is information that should inform the testers and the test process far more than it usually does.

Too many R&D folks like us hardly ever meet the customers. Perhaps during the planning phase, or on a guided tour of the customer’s facilities, observing users over their shoulders. Until you’ve been in there trying to solve an actual, complicated problem the customer is facing, you have seen nothing. Thus, I find myself on the phone with a customer overseas in the evening, remoting into their systems and running procmon to nail down a strange problem that only occurs on some users’ computers and never in our labs – and enjoying it.

If you are a manager of testers and developers and don’t want them to spend time doing work that the regular support folks do anyway, typically for less money, think again. No amount of training, conferences, or courses will teach them what they learn doing support for their own product. In the end, this is the best customer service you can provide – an organization that really understands what your customers need, top to bottom.

The five whys and the which

After the product ships and the bug reports come trickling in, the question that always comes back to haunt the testers is “Why did this bug slip through?”

There are some obvious answers. “We didn’t have time to test this” is common, as is “We didn’t have the equipment”.

“We had no idea the product could even do that” is another, particularly for the more complex applications with tricky integrations and sneaky business logic.

The meticulous tester will then go on to create a specific regression test covering the case in the bug report. Too often, in my experience, we stop there.
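To make that concrete, here is a minimal sketch of such a one-off regression test in Python with pytest. The bug number, the parse_order function, and the failing payload are all hypothetical stand-ins for whatever the bug report described:

```python
def parse_order(order):
    # Hypothetical stand-in for the real code under test:
    # zero-quantity lines are supposed to be dropped.
    return [line for line in order["lines"] if line["qty"] > 0]


def test_bug_4711_zero_quantity_line():
    # Reproduces exactly the case from the bug report, and nothing more.
    order = {"id": "A-1", "lines": [{"sku": "X", "qty": 0}]}
    assert parse_order(order) == []
```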

A standard quality improvement process to use in this case is the five whys. Why did we not know the product could do that? The documentation was missing. The requirements were missing. The testers didn’t understand the business requirements well enough to realize this was something the customers would want to do. The feature had low discoverability. Follow the whys all the way down until it’s no longer useful, then try to do something about what you find. Books have been written on this topic, so I won’t go into details.

What I wanted to bring up, though, is an important question that is NOT a why, but a “which”:

Which other tests did we fail to do?

Just like bugs, blind spots in testing rarely appear alone. If a bug report triggers only a single new regression test, be wary: there is almost certainly other, related functionality you missed the first time around. The whys above can help you find the extent of the blind spot. Make sure it is covered for the next release!
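Continuing the hypothetical example above, covering the blind spot often means parametrizing over the whole family of related inputs instead of just the one reported case. The specific quantities and expected behaviors here are assumptions for illustration:

```python
import pytest


def parse_order(order):
    # Same hypothetical code under test as in the earlier sketch.
    return [line for line in order["lines"] if line["qty"] > 0]


# The bug report mentioned qty == 0; the blind spot is probably
# "unusual quantities" in general, so test its neighbours as well.
@pytest.mark.parametrize("qty, expected_lines", [
    (0, 0),       # the reported case
    (-1, 0),      # negative quantity: assumed to be dropped too
    (1, 1),       # the ordinary case still works
    (10**9, 1),   # absurdly large quantity: assumed to be accepted
])
def test_quantity_edge_cases(qty, expected_lines):
    order = {"id": "A-1", "lines": [{"sku": "X", "qty": qty}]}
    assert len(parse_order(order)) == expected_lines
```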

This process is also called root cause analysis, but I prefer the whys: to me, “root cause analysis” connotes following a step-by-step formula to arrive at a single answer that logically follows from the premises, while “why” sparks creativity. Your connotation mileage may vary.