Risk-based testing 1: Why?

Testing, at its core, is the act of figuring out things you didn't know.

To test is to ask yourself a question, and then to perform actions designed to find the answer to that question.

The answers will tell you whether the product is ready for release – but only if you asked the right questions in the first place.

So how do you know whether you are asking the right questions?

Risk-based testing is a methodical way to find the relevant questions, to prioritize them, and to challenge your beliefs and intuitions by exposing them both to yourself and to others.

An experienced tester will always have an intuitive sense of the relevant risks and priorities while testing, even without a formal identification process. For a small project, that might be enough. However, even an experienced tester is prone to cognitive bias. Forcing yourself to think a topic through by writing an explicit list of risks is a good way to counter that bias. Inviting others to join the process is even better.

For a larger project, a formal risk analysis process is vital. On one hand, it is your tool for increasing the likelihood that you are asking the right questions. You know you won't have time to test everything, so you had better make sure you are spending your limited time on the most important things.

On the other hand, the documentation you get out of the risk analysis process is an important tool for learning from the project after it’s done. It will let you look back at a decision you made and discuss whether it was the right one – and if it turned out not to have been, then you’ll know why you made that decision at the time, and what you need to change in order to make a better decision next time.

 

A techie in heels is still a techie

In a discussion forum a while back, someone posted a link to an interesting article about the oil industry. I can't remember the exact context or the details of the article – something about the volatility of the market, I think. The article was relevant to the discussion at hand.

Then a guy in the discussion forum followed up with an annoyed post: sure, great article, but why was it illustrated with a pretty woman in a short skirt and high heels?

This person was genuinely looking at it from a feminist point of view, which I can appreciate. It is annoying when pretty women are mindlessly used as eye candy to draw attention to something completely unrelated.

However, the next poster quickly noted that the woman in the photo was Thina Saltvedt, one of Norway's most highly qualified oil analysts, and that she was the main source for the article. That puts the situation in a different light. When a woman doesn't dress in a masculine power suit, the automatic assumption is that she is there because she's pretty, not because she has relevant qualifications.

Why do I tell this story here? After all, it has nothing to do with testing, and very little to do with technology.

Well, it’s March 8. Among the links being shared on the web today, there was a story about how programmers who look stereotypically feminine are automatically assumed to be less competent than programmers who look less stereotypically feminine. Even if they are both women.

Even as more women are joining IT, and more people are starting to catch on to the idea that women can actually code, we’re still looking at a gap where women who look like women are not taken seriously. Wear jeans and a geeky t-shirt, and you might just be a real techie, even if you’re a woman. Wear a cute dress, however… you must be the designer, or the project manager, or at best a non-technical tester with no clue about anything beyond the buttons in the UI.

IT folks are pretty logical people. I think we can all see that when we make these assumptions – with a nod to the late Leonard Nimoy – we are being highly illogical.