Getting developers and testers to trust each other

“The developers are ignoring our input!”

“Tester? Eh, guess he wasn’t good enough to get a job as a developer.”

Ever heard that before?

One of the keys to good QA is to have a team with a high level of trust. Actually, that’s one of the keys to good anything, when a team is involved. But right now we’re talking about testing.

Trust, of course, has to be earned. And it has to go both ways. So what do the testers need to do to earn trust from the developers, and what do the developers have to do to earn the testers’ trust?


Do a good job

Know the product. Know the business. Know the code base. Know the history. Know the context. Understand how all of these fit together.

There is no excuse for a tester not to know just as much about the product as the developers do. Testing is often viewed as the “easier” job in software development, one that doesn’t require as much skill as coding. That is simply not true. As a tester, you need to be deeply technical, highly logical and strongly creative, even if you never write a line of code. Expect nothing less from your own work than you would expect from the best developer on the team.

Even if you get everything else in this blog post right, nobody will trust you if you are incompetent.

Communicate clearly

One of the most important tasks of a tester is to provide information about bugs we have found. There is almost nothing a developer appreciates more than a well-written bug report. On the other hand, there is almost nothing that irritates a developer more than a badly written bug report.

If you aim to build trust between yourself and your developer, which type of bug report should you choose to write? Exactly.

(How to write a good bug report is a blog topic of its own – I will return to it later.)

Be reasonable and pragmatic

To us testers, bugs are important – especially the ones we just found, or the ones we worked really hard to find. It can be annoying when our bug reports are wontfixed or postponed to the next release. It can be really annoying when the release goes out the door and we know about all the bugs that haven't been fixed.

Deadlines are going to happen, though, and it won't help you to be grumpy about them. Do what you can to inform the relevant stakeholders about the quality and the risk, and be reasonable about it. If you make a huge fuss about every minor issue, everyone will ignore you when you find the disaster that really needs to stop the release.


Do a good job

Don’t code in a way that causes stupid bugs. Regressions are stupid bugs caused by unmaintainable code with low modularity and no unit test coverage.

Testers don't like stupid bugs. They force testers to repeat the work they did last week, just because they can't be reasonably confident that the bugfix you made in one part of the code didn't break anything somewhere else.
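To make the point about unit test coverage concrete, here is a minimal sketch in Python (the `order_total` function and its discount rule are invented for illustration). A bugfix elsewhere in the pricing code that accidentally changed the discount threshold would be caught by these checks right away, instead of by a tester repeating last week's manual pass:

```python
# Hypothetical example: a tiny function with unit tests that guard
# against regressions. The discount rule is made up for illustration.

def order_total(price: float, quantity: int) -> float:
    """Return the order total, applying a 10% bulk discount for 10+ items."""
    total = price * quantity
    if quantity >= 10:
        total *= 0.9
    return total

def test_no_discount_below_threshold():
    # 9 items: no discount applies.
    assert order_total(5.0, 9) == 45.0

def test_bulk_discount_applies():
    # 10 items: 50.0 minus 10% is 45.0.
    assert order_total(5.0, 10) == 45.0

if __name__ == "__main__":
    test_no_discount_below_threshold()
    test_bulk_discount_applies()
    print("all tests passed")
```

If every module carried checks like these, a change to one part of the code would announce its unintended side-effects within seconds, in the developer's own build.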

Even if you get everything else in this blog post right, nobody will trust you if you are incompetent.

Don’t assume it’s not a bug

The developers who annoy testers the most are those who always assume a reported problem is not a bug – and especially not a bug in their part of the code – until it has been proved to them beyond all doubt. They instinctively close any bug report from junior personnel without even reading the details, and will only grudgingly accept it being reopened after the junior tester's manager gets involved.

Sure, some bug reports will turn out not to be actual bugs. If you get a lot of these from the test team or from a specific tester, bring it up with them. In all likelihood, though, if there's a bug report, there is an actual problem somewhere, even if you don't see it at first glance. As a developer, you are responsible for the quality of the product, and you should be expected to make an effort to improve it. A dismissive attitude towards the testers is the surest way to get less help with that.

Communicate clearly

A tester almost always works with too little information. Tell them what you did, how you did it, what kind of side-effects it could have, what the risks are, how to configure it, and they will be forever grateful.

The ten minutes you spend writing this up as you close your development ticket will save the tester hours, potentially days, of figuring it out on their own. The tech writers will be happy too.

Take an active part in quality improvement

Ask the testers how the product can be made more testable. Add test hooks where needed. Create test apps. Do some basic testing of your own work before sending it off to the testers. Refactor. Make down payments on technical debt. Make use of the testers' expertise to make your own work better.
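As one hypothetical illustration of a test hook (the `SessionManager` class and its timeout rule are made up for this sketch): letting callers inject a fake clock, instead of hard-coding the system time, means a tester can simulate a 45-minute pause in milliseconds rather than waiting for it:

```python
# Hypothetical example of a test hook: injecting a clock instead of
# reading the system time directly, so testers can simulate any moment.
import datetime

class SessionManager:
    def __init__(self, timeout_minutes: int = 30, clock=datetime.datetime.now):
        # `clock` is the test hook: production code uses the real time,
        # tests pass in a fake that returns whatever moment they need.
        self._clock = clock
        self._timeout = datetime.timedelta(minutes=timeout_minutes)
        self._last_activity = self._clock()

    def touch(self):
        # Record user activity at the current (possibly fake) time.
        self._last_activity = self._clock()

    def is_expired(self) -> bool:
        return self._clock() - self._last_activity > self._timeout

# A tester (or test app) can now fast-forward time without waiting:
fake_now = datetime.datetime(2024, 1, 1, 12, 0)
session = SessionManager(timeout_minutes=30, clock=lambda: fake_now)
fake_now = datetime.datetime(2024, 1, 1, 12, 45)  # 45 minutes "later"
print(session.is_expired())  # True
```

The same idea generalizes to any hard-to-control dependency – network, random numbers, hardware – and it is exactly the kind of seam a conversation with the testers will surface.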

The tester’s job description

Most of the blog posts both here and in my other blog fall into the category “too obvious to even write about”. In fact, most of my blog posts end up not getting written at all, because I think what I have to say is so obvious nobody will ever bother reading it. It’s my personal form of writer’s block, one that I am aiming to overcome. Everything I think about is of course obvious to me once I’ve thought about it. That doesn’t mean it’s obvious to everyone else – particularly to those outside of or new to the field of testing.

So, then, the tester’s job description. What is it?

Doh, that’s obvious! It’s testing the product!

Of course it is. And I have worked with many people – both within and outside the test teams – who have argued that this is the only thing the tester should concern themselves with. Everything else is better left to others in the organization.

I’ve never been content with this view. As a tester, I have always seen it as my job to do what I can to make the product better, and to put in my effort where it can make the biggest difference. I believe it is not just my right, but my obligation to stick my nose in other people’s business, if I see room for improvement that affects the product quality.

For example, if you as a tester know (as all testers should know) that code reviews are significantly more effective at finding bugs than system tests are, are you going to encourage the development teams who don’t do systematic code reviews to start doing them, or will you shut up and just run your tests?

Or, if you see signs of an architectural choice that you know is bad practice and believe is likely to cause problems further down the road, even if it may be years in the future, do you file a bug report, or are you satisfied that the feature passed all your functional tests, and send the build on its merry way?

How well I have been received varies greatly. Some developers and managers are defensive. I was once told in very clear language that this was none of my business, after politely asking whether or not the project implemented unit tests. But most are welcoming and eager to take part in the joint tester/developer effort to make the product the best it can be. Sometimes it takes time to build up the trust to get there – but that is a topic for another blog post.

Who do you serve?

As a tester, who do you serve?

Do you serve your company owners, maximizing short and long term profitability?

Do you serve your manager, fulfilling the tasks she has given you to complete the project?

Do you serve your test team, pulling your weight to get the job done?

Do you serve the developers you work with, making their daily life easier with perfect bug reports and reducing the heat they get from support when bugs slip through?

Do you serve the folks over in the sales department, giving them products that your customers recommend to their friends?

Do you serve the first line support guys, making sure they don’t get flooded with unnecessary support cases and bug reports?

Do you serve the end users, polishing out those little annoyances so they can get on with their work without even noticing the tool they use to do it?

Do you serve the public, pointing out that CPU-hungry procedure that wastes fossil-fueled energy?

Ideally, you are serving all of them at the same time.

Inevitably, there will be conflicts.

To the sales guy, maybe delivering on schedule to the paying customer is more important than giving a completely annoyance-free application to the end users in the customer’s organization. To the developer, those branding issues that the board cares about to build market recognition are just annoying to have to deal with. Digging down to find that CPU hog to save the environment might mean that your team mates have to work harder to get the other tests covered.

The balance is up to you. This is where your professional integrity comes in. Where do you put your efforts, when you have to choose? When are you satisfied with a job well done?

The test case controversy

Test cases are a controversial topic in the testing community – in large part thanks to testing guru James Bach, who has been arguing strongly against the value of test cases.

At the bottom of it, my views are not that different from Mr. Bach’s, but I would still like to moderate the anti-test-case view. I think test cases serve some important purposes.

First off, it is important to underline the difference between a test and a test case. One of Mr. Bach’s pet peeves is that running test cases is not testing, it is checking. I agree – to a point.

My main argument in favor of using test cases is that the human brain is notoriously adept at fooling itself. (Manual) test cases can be an effective method for reducing the effect of availability bias, making sure you cover areas you would otherwise overlook or dismiss as less important. Indeed, the very process of creating test cases can be a way to force yourself to reduce cognitive bias, by forcing your brain to slow down. (On a related note, I strongly recommend for any tester or other person whose profession requires critical thinking to read Daniel Kahneman’s excellent book on cognitive bias, Thinking, Fast and Slow.)

Test cases can also be a good tool for communication. A good test suite can be an excellent way for a fresh tester on the team to learn about the intricacies of an application or non-obvious business requirements of a feature.

Another point in favor of test cases is the sad fact that we live in an imperfect world. In principle I agree with the idea that anyone who does testing for a living should be awesome, and that a product should never ship without having been tested by these awesome individuals until they are satisfied. The truth, though, is that not every tester is awesome (yet), and not every product release has an available herd of awesome testers with deep knowledge of both the business and the application, well-developed software testing skills, and enough time to apply those skills before the deadline.

There are applications and situations where shipping without going through awesome testing by awesome testers is unforgivable. If you are developing an internet banking service, a government self-service platform for tax returns, or the software that controls a surgical robot, there is no excuse for not using highly skilled test engineers. But to be fair, the vast majority of software does not fall into this category. Budgets are tight and deadlines are tighter. If you have a well-designed test suite created by (or at least reviewed by) good testers, you are better off getting your first-line support staff to run those tests, hopefully with the aid of a skilled tester covering the most critical paths, than you would be shipping at the same time with just that one tester. Even the most amazing testing skill in the world doesn't always grant you the time to go both broad and deep before the deadline bell rings.

Writing test cases is always a tradeoff, balancing speed, accuracy and maintainability. A test case does not exist in a void – it must be designed with the target audience in mind, knowing how familiar the person running the test will be with the application and the particular feature. Sometimes it will be efficient to give just the briefest cues about which functions to trigger and what to remember to verify. Other times, the efficient path is to explain step by step how to build up the state that should be checked. The skilled test designer's key task is to include just enough information for the test runner to know how to check those details that are more important than they may seem.

However, and this is the crucial point: having a test case in front of you is never an excuse to shut off your brain. Every professional tester should know, and every non-tester who is asked to help out with running test cases should be told and frequently reminded, that a test case is not a test. The test happens in your brain, as you look at what happens when you perform the steps in the test case and make a sound, human judgement of whether this is a good result.

Also, though I am an advocate for using test cases, for the reasons explained above, and particularly the one about cognitive bias, it should be clear that I do not in any way recommend only using test cases. Like all other approaches to testing, a test suite is a method that should never be used alone – at least not if you want to claim that the product has been well tested.