Best practice: The resolve comment

One of the most valuable practices the QA team introduced in my current employer’s R&D department is the resolve comment. As is usual with anything new, some developers and development teams took to it instantly, while others needed a bit more time to see the light. These days, however, it is considered a standard part of the development workflow in this department.

The resolve comment is written when a developer resolves a ticket for a feature or bugfix. Sometimes it runs just short of a novel, though usually it’s only a few short paragraphs.

In the case of a feature, it explains how the feature was implemented, notes the implications it might have for other functionality, hints at what should be tested, and lists any potential risks the developer can think of.

In the case of a bugfix, it explains what was wrong and how it was fixed, and often also notes what measures should be taken to ensure this type of bug does not occur again.
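To make this concrete, here is an invented example of what a resolve comment for a bugfix might look like (the product, the ticket and every detail here are hypothetical):

    Resolved: Crash when exporting a report with an empty date range.

    What was wrong: The export routine assumed the date range had been
    validated upstream, so an empty range caused a division by zero when
    calculating the number of report pages.

    How it was fixed: Added validation in the export dialog and a guard in
    the page calculation. Exporting with an empty range now shows an error
    message instead of crashing.

    What to test: Exports with empty, single-day and reversed date ranges.
    The scheduled-reports module reuses the same page calculation, so a
    quick smoke test there is worthwhile.

    Prevention: The page calculation had no unit tests; I have filed a
    follow-up ticket to add them.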

These comments are valuable to several people:

  • It lets the developer who will do the peer review understand the issue much faster than they could from the code and a one-line commit message alone.
  • It likewise lets the testers understand the issue both faster and more thoroughly, which means better testing in less time.
  • It saves both developers and testers a lot of back and forth as the tester tries to gather enough information about the issue to test it properly.
  • It saves the developers a lot of time and frustration if problems with the feature or fix are discovered in the future and they need to go back and look at what was done and why it was done that way.
  • The process of writing down what was wrong serves to highlight areas of the code that would benefit from a general refactoring or a down payment on technical debt.
  • The tech writers love it, especially for features.
  • A more subtle advantage is that writing these messages is a recurring reminder that people in many roles are affected when the code changes. It is a low-key team-building activity!

If you are trying to improve quality, the first thing you need is to make sure that everyone involved has the information they need to do their part. This little rule is a good way to start.

Getting developers and testers to trust each other

“The developers are ignoring our input!”

“Tester? Eh, guess he wasn’t good enough to get a job as a developer.”

Ever heard that before?

One of the keys to good QA is to have a team with a high level of trust. Actually, that’s one of the keys to good anything, when a team is involved. But right now we’re talking about testing.

Trust, of course, has to be earned. And it has to go both ways. So what do the testers need to do to earn trust from the developers, and what do the developers have to do to earn the testers’ trust?

Testers

Do a good job

Know the product. Know the business. Know the code base. Know the history. Know the context. Understand how all of these fit together.

There is no excuse for a tester not to know just as much about the product as the developers do. Testing is often viewed as the “easier” job in software development, one that doesn’t require as much skill as coding. That is simply not true. As a tester, you need to be deeply technical, highly logical and strongly creative, even if you never write a line of code. Expect nothing less from your own work than you would expect from the best developer on the team.

Even if you get everything else in this blog post right, nobody will trust you if you are incompetent.

Communicate clearly

One of the most important tasks of a tester is to provide information about bugs we have found. There is almost nothing a developer appreciates more than a well-written bug report. On the other hand, there is almost nothing that irritates a developer more than a badly written bug report.

If you aim to build trust between yourself and your developer, which type of bug report should you choose to write? Exactly.

(How to write a good bug report is a blog topic of its own – I will return to it.)

Be reasonable and pragmatic

To us testers, bugs are important. Especially the ones we just found, or the ones we worked really hard to find. It can be annoying when our bug reports are wontfixed or postponed to the next release. It can be really annoying when the release goes out the door and you know about all the bugs that haven’t been fixed.

Deadlines are going to happen, though, and it won’t help you to be grumpy about them. Do what you can to inform the relevant stakeholders about the quality and the risk, and be reasonable about it. If you make a huge fuss about every minor issue, everyone will ignore you when you find the disaster that really does need to stop the release.

Developers

Do a good job

Don’t code in a way that causes stupid bugs. Regressions are stupid bugs caused by unmaintainable code with low modularity and no unit test coverage.

Testers don’t like stupid bugs. Stupid bugs force them to repeat the work they did last week, just because they can’t be reasonably confident that the bugfix you made in one part of the code didn’t break anything somewhere else.
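To sketch what that confidence rests on – the module, function and numbers below are made up for illustration – a unit test pins the expected behaviour down, so that a change elsewhere cannot break it silently:

    # pricing.py (hypothetical module)
    def apply_discount(price, percent):
        """Return the price with a percentage discount applied."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    # test_pricing.py
    import unittest
    from pricing import apply_discount

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 25), 150.0)

        def test_zero_discount_keeps_price(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()

If a later “fix” in a nearby code path changes this behaviour, the test fails immediately, and the tester does not have to repeat last week’s manual checks to find out.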

Even if you get everything else in this blog post right, nobody will trust you if you are incompetent.

Don’t assume it’s not a bug

The developers who annoy testers the most are those who always assume a reported problem is not a bug – and especially not a bug in their part of the code – until it has been proven to them beyond all doubt. The ones who instinctively close all bug reports from junior personnel without even reading the details, and will only grudgingly accept a report being reopened after the junior tester’s manager gets involved.

Sure, some bug reports will turn out not to be actual bugs. If you get a lot of these from the test team or from a specific tester, you should bring it up with them. In all likelihood, though, if there’s a bug report, there is an actual problem somewhere, even if you don’t see it at first glance. As a developer, you are responsible for the quality of the product, and you should be expected to make an effort to improve it. A dismissive attitude towards the testers is the surest way to get less of their help with that.

Communicate clearly

A tester almost always works with too little information. Tell them what you did, how you did it, what kind of side-effects it could have, what the risks are, how to configure it, and they will be forever grateful.

Your ten minutes of writing this up as you close your development ticket will save the tester hours, potentially days, of figuring it out on their own. The tech writers will be happy too.

Take an active part in quality improvement

Ask the testers how the product can be made more testable. Add test hooks where needed. Create test apps. Do some basic testing of your own product before sending it off to the testers. Refactor. Make down payments on technical debt. Make use of the testers’ expertise to make your own work better.
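What a test hook looks like varies a lot, but one simple and common kind is a seam that lets a test control something the code would otherwise take from its environment, such as the current time. A minimal sketch in Python, with hypothetical names:

    from datetime import datetime, timedelta

    class SessionManager:
        """Expires user sessions after a fixed timeout."""

        TIMEOUT_SECONDS = 1800

        def __init__(self, clock=datetime.now):
            # Test hook: production code uses the real clock by default,
            # but a test can inject a fake clock to simulate any moment.
            self._clock = clock

        def is_expired(self, last_activity):
            elapsed = (self._clock() - last_activity).total_seconds()
            return elapsed > self.TIMEOUT_SECONDS

    # In a test: verify expiry without waiting half an hour.
    start = datetime(2014, 1, 1, 12, 0, 0)
    manager = SessionManager(clock=lambda: start + timedelta(hours=1))
    assert manager.is_expired(start)

The same idea applies to network access, random numbers, feature flags – anything that makes behaviour hard to reproduce on demand.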

The tester’s job description

Most of the blog posts both here and in my other blog fall into the category “too obvious to even write about”. In fact, most of my blog posts end up not getting written at all, because I think what I have to say is so obvious nobody will ever bother reading it. It’s my personal form of writer’s block, one that I am aiming to overcome. Everything I think about is of course obvious to me once I’ve thought about it. That doesn’t mean it’s obvious to everyone else – particularly to those outside of or new to the field of testing.

So, then, the tester’s job description. What is it?

Doh, that’s obvious! It’s testing the product!

Of course it is. And I have worked with many people – both within and outside the test teams – who have argued that this is the only thing the tester should concern themselves with. Everything else is better left to others in the organization.

I’ve never been content with this view. As a tester, I have always seen it as my job to do what I can to make the product better, and to put in my effort where it can make the biggest difference. I believe it is not just my right, but my obligation to stick my nose in other people’s business, if I see room for improvement that affects the product quality.

For example, if you as a tester know (as all testers should know) that code reviews are significantly more effective at finding bugs than system tests are, are you going to encourage the development teams who don’t do systematic code reviews to start doing them, or will you shut up and just run your tests?

Or, if you see signs of an architectural choice that you know is bad practice and believe is likely to cause problems further down the road, even if it may be years in the future, do you file a bug report, or are you satisfied that the feature passed all your functional tests, and send the build on its merry way?

How well I have been received varies greatly. Some developers and managers are defensive. I was once told in very clear language that this was none of my business, after politely asking whether or not the project implemented unit tests. But most are welcoming and eager to take part in the joint tester/developer effort to make the product the best it can be. Sometimes it takes time to build up the trust to get there – but that is a topic for another blog post.

Who do you serve?

As a tester, who do you serve?

Do you serve your company owners, maximizing short and long term profitability?

Do you serve your manager, fulfilling the tasks she has given you to complete the project?

Do you serve your test team, pulling your weight to get the job done?

Do you serve the developers you work with, making their daily life easier with perfect bug reports and reducing the heat they get from support when bugs slip through?

Do you serve the folks over in the sales department, giving them products that your customers recommend to their friends?

Do you serve the first line support guys, making sure they don’t get flooded with unnecessary support cases and bug reports?

Do you serve the end users, polishing out those little annoyances so they can get on with their work without even noticing the tool they use to do it?

Do you serve the public, pointing out that CPU-hungry procedure that wastes fossil-fueled energy?

Ideally, you are serving all of them at the same time.

Inevitably, there will be conflicts.

To the sales guy, delivering on schedule to the paying customer may be more important than giving a completely annoyance-free application to the end users in the customer’s organization. To the developer, those branding issues the board cares about for building market recognition are just an annoyance to deal with. Digging down to find that CPU hog to save the environment might mean that your teammates have to work harder to get the other tests covered.

The balance is up to you. This is where your professional integrity comes in. Where do you put your efforts, when you have to choose? When are you satisfied with a job well done?

The test case controversy

Test cases are a controversial topic in the testing community – in large part thanks to testing guru James Bach, who has argued strongly against the value of test cases.

At bottom, my views are not that different from Mr. Bach’s, but I would still like to temper the anti-test-case view. I think test cases serve some important purposes.

First off, it is important to underline the difference between a test and a test case. One of Mr. Bach’s pet peeves is that running test cases is not testing; it is checking. I agree – to a point.

My main argument in favor of using test cases is that the human brain is notoriously adept at fooling itself. (Manual) test cases can be an effective method for reducing the effect of availability bias, making sure you cover areas you would otherwise overlook or dismiss as less important. Indeed, the very process of creating test cases can be a way to reduce cognitive bias, by forcing your brain to slow down. (On a related note, I strongly recommend that any tester – or anyone else whose profession requires critical thinking – read Daniel Kahneman’s excellent book on cognitive bias, Thinking, Fast and Slow.)

Test cases can also be a good tool for communication. A good test suite can be an excellent way for a fresh tester on the team to learn about the intricacies of an application or non-obvious business requirements of a feature.

Another point in favor of test cases is the sad fact that we live in an imperfect world. While I agree in principle that anyone who tests for a living should be awesome, and that a product should never ship without having been tested by these awesome individuals until they are satisfied, the truth is that not every tester is awesome (yet), and not every product release has an available herd of awesome testers with deep knowledge of both the business and the application, well-developed software testing skills, and enough time to apply those skills before the deadline.

There are applications and situations where shipping without thorough testing by awesome testers is unforgivable. If you are developing an internet banking service, a government self-service platform for tax returns, or the software that controls a surgical robot, there is no excuse for not using top-skilled test engineers. But to be fair, the vast majority of software does not fall into this category. Budgets are tight and deadlines are tighter. If you have a well-designed test suite created by (or at least reviewed by) good testers, you are better off having your first-line support staff run those tests, ideally with a skilled tester covering the most critical paths, than you would be if you had to ship at the same time with just the one tester. Even the most amazing testing skill in the world doesn’t always grant you the time to go both broad and deep before the deadline bell rings.

Writing test cases is always a tradeoff, balancing speed, accuracy and maintainability. A test case does not exist in a void – it must be designed with the target audience in mind, knowing how familiar the person running the test will be with the application and the particular feature. Sometimes it is efficient to give just the briefest cues about functions to trigger and things to remember to verify. Other times, the efficient path is to explain step by step how to build up the state that should be checked. The key point for the skilled test designer is to include just enough information for the test runner to know how to check those details that are more important than they may seem.
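As an invented illustration of the two extremes, here is the same check written first for an experienced tester and then for a newcomer to the application:

    For the experienced tester:
      Export a report with a reversed date range. Verify the error message,
      and that no partial file is left in the export folder.

    For the newcomer:
      1. Open the Reports view and select any report.
      2. Click Export, and set the start date to a date AFTER the end date.
      3. Click OK. Expected: an error message explaining that the date
         range is invalid.
      4. Open the export folder. Expected: no partial or empty file has
         been created.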

However, and this is the crucial point: having a test case in front of you is never an excuse to shut off your brain. Every professional tester should know – and every non-tester who is asked to help out with running test cases should have it explained to them, and be reminded frequently – that a test case is not a test. The test happens in your brain, as you look at what happens when you perform the steps in the test case and make a sound, human judgement of whether this is a good result.

Also, though I am an advocate of using test cases, for the reasons explained above – particularly the one about cognitive bias – it should be clear that I do not in any way recommend only using test cases. Like all other approaches to testing, a test suite is a method that should never be used alone – at least not if you want to claim that the product has been well tested.

 

Nostalgia mixed in with good riddance

My first real tech job was at Opera Software, where I worked from 2002 to 2008. Like every other Opera employee, I already fiercely loved the web when I started. During my six years there I also came to appreciate the intricacies of the web’s building blocks, and the intelligence, skill and sheer passion of those who contributed to creating it and making it better. I knew the CSS 2.1 spec like the back of my hand, and had lunch discussions over differing interpretations of the box model.

I had multiple reasons for leaving Opera when I did. I have never regretted it, and even less so after what has happened over the past year or two. But I have often looked back with some nostalgia, dreaming of a world where we actually had specs, where writing a test case was a matter of understanding the specification and converting it to elegance in green and red. I will admit to a moment of looking back with starry eyes when I was reminded of those days, as Opera’s entire W3C compliance test suite, to which I contributed a decent number of tests, was open-sourced recently.

Fortunately, I did not have to stay in Neverland for too long. A stumble across the W3C Memes tumblr site reminded me of the dark sides of the web – of why I gave up on following both the W3C and the WHATWG mailing lists some time after I left Opera. When I stopped following them, I lost touch with most of the community. But I also left behind a vast desert of politics, bike sheds and filibustering tactics that would make me scream in frustration. Sometimes I think it’s a miracle that the web has advanced as much as it has over the past ten years.

I still love the web, though. And, knowing what lies behind it, I deeply admire those smart and passionate folks that keep working to make it better, one furiously-debated-over paragraph at a time.

New blog

As is customary, every new blog must start with a blog post explaining its existence. This is it.

I already have a blog. It’s in Norwegian, and I’ve used it to write about a range of topics from cooking to politics, through philosophy, literature, and the occasional funny story. What I haven’t used it for much (there are notable exceptions!) is to write about what I do for a living and spend more than the allotted 8 hours a day thinking about.

I’ve been thinking for years that I should blog about these topics – tech, testing, test management – but they never really seemed to fit in my old blog. And so that blog has been all but abandoned for a long time. You know what they say: when you can’t keep up with one project, it’s always advisable to start a new one!

Thus, here it is. A blog about testing, about generic techy stuff, about test management, perhaps the occasional office story. We’ll see what it turns into over time.