Guest post: Thoughts on designing unit and system tests

This week in Tech and Test brings a guest post from my esteemed colleague and minion, excuse me, Minion, Tony Meijer, on the topic of automated testing:

Why do we write unit tests? A simple question, right? Think about it for a few minutes.

Most people I ask answer with ‘to avoid regressions’ or ‘to find bugs’. Let’s examine that. Unit tests are built to test one independent unit of code, while most regressions are due to subtle compatibility issues between many units of code, so that assumption seems incorrect. There is an exception: when we are refactoring, unit tests are actually a very good defense against regressions, since refactoring means restructuring code without changing its behavior.

So, how about bugs then? Again, a component may behave exactly as you expect it to and it will not matter. Most bugs, at least most severe ones, are in my experience the sum of many incremental quirks across a series of code units that together produce faulty behavior.

So, why do we write unit tests then?

I would like to say that we do it because, when done correctly, it creates higher quality code through cleaner interfaces. And higher quality code is a worthwhile cause indeed because it decreases the number of bugs.

That brings us to how we actually avoid regressions and bugs. Apart from continuous refactoring and continuous code reviews, I think there is one way to do it: system tests and integration tests. System tests, and their more avoided cousin integration tests, are automatic tests that exercise a group of code units and their combined behavior.

So, what constitutes a well-written unit, system and integration test then?

For unit tests, these practices usually lead to reasonable tests:

For each functionality in the code unit, test a basic value that should work (to see that it works) and a value that should not work (to see that it handles bad input correctly).
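A minimal sketch of that practice, using a hypothetical `parse_port()` helper as the unit under test (the function and test names are mine, not from a real codebase):

```python
def parse_port(value):
    """Parse a TCP port number from a string; this is the unit under test."""
    port = int(value)  # raises ValueError on non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError("port out of range: %d" % port)
    return port

def test_parse_port_accepts_valid_input():
    # A basic value that should work.
    assert parse_port("8080") == 8080

def test_parse_port_rejects_bad_input():
    # A value that should not work: the unit must reject it,
    # not silently return garbage.
    try:
        parse_port("not-a-port")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for bad input")
```

One test for the happy path, one for the failure path; each checks a single, specific behavior.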

Also, mock out everything not in the code unit. If you cannot do that, the code is most likely too tightly coupled to other pieces of code.
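As an illustration, here is a sketch using Python’s `unittest.mock` to replace a collaborator; the `ReportBuilder` class and its client are invented for the example:

```python
from unittest import mock

class ReportBuilder:
    """The unit under test; depends on a client we do not want to hit."""
    def __init__(self, client):
        self.client = client

    def build(self, user_id):
        record = self.client.fetch_user(user_id)
        return "%s: %d orders" % (record["name"], record["orders"])

def test_build_formats_report_without_touching_the_network():
    # The real client would talk to a database or service; mock it out.
    fake_client = mock.Mock()
    fake_client.fetch_user.return_value = {"name": "Ada", "orders": 3}

    report = ReportBuilder(fake_client).build(42)

    assert report == "Ada: 3 orders"
    fake_client.fetch_user.assert_called_once_with(42)
```

If you find yourself needing to mock five or six collaborators to test one unit, that is usually the design telling you the unit does too much.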

Avoid unnecessary asserts like the plague. I know what you are thinking (‘but it cannot hurt!’). In my opinion, a unit test is part of the design specification, created to test one very specific piece of functionality. If you push in a bunch of extra checks, that commonly means you do not know what you are testing.

For system and integration tests I recommend the following:

For each functionality in a system or integration test, test with good, reasonable data (to see that it works); with data that should not work (to see that it reacts to problems sensibly); and with as many boundary values as you can find (this is usually where bugs hide). If you are dealing with networks, also test with as weird a load as you can easily simulate. That does not mean only a high load: simply sending data in the wrong order, or unevenly, is easy to do and a tough enough test.

Toss in as many asserts as you can think of. These are system tests and should be considered a fishing expedition: see what you can find.

Test with different configurations, and change them on the fly.

Avoid mocks as far as you can at this stage.
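To make the system-test advice concrete, here is a sketch in the same spirit: a hypothetical `Reassembler` (invented for this example) that collects numbered chunks, tested with out-of-order delivery, boundary sizes, and asserts on everything in sight:

```python
class Reassembler:
    """Collects numbered chunks and yields the full message when complete."""
    def __init__(self, total):
        self.total = total
        self.chunks = {}

    def add(self, index, data):
        self.chunks[index] = data

    def complete(self):
        return len(self.chunks) == self.total

    def message(self):
        return "".join(self.chunks[i] for i in range(self.total))

def test_out_of_order_delivery():
    r = Reassembler(total=3)
    # Deliver chunks in the wrong order: cheap to simulate, tough to pass.
    for index, data in [(2, "c"), (0, "a"), (1, "b")]:
        assert not r.complete()  # fishing: must never report complete early
        r.add(index, data)
    assert r.complete()
    assert r.message() == "abc"

def test_boundaries():
    # Boundary: the empty message.
    empty = Reassembler(total=0)
    assert empty.complete()
    assert empty.message() == ""
    # Boundary: a single chunk.
    one = Reassembler(total=1)
    one.add(0, "x")
    assert one.complete()
    assert one.message() == "x"
```

Note how the asserts here are deliberately generous, the opposite of the unit-test advice above: at this level, any unexpected state is worth catching.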

Author: Tony Meijer

User Acceptance testing done wrong

If you’re a test manager for a software product that has business customers, you’ve gotten it: The request for a suitable set of your test cases that the customer can run on site to do their UAT, or User Acceptance Test. Usually, their managers demand that they do UAT before they accept that the contract has been fulfilled.

This way of doing UAT is wrong on so many levels. I have usually responded to such requests with an explanation of why this is a bad idea both for my own company and for the customer.

Here’s why it’s a bad idea:

User acceptance testing has two aspects, tied to two different understandings of the word “user”. The aspect typically intended in a UAT-prescribing software delivery contract understands the “user” as “the company buying the software”, and the testing is aimed at checking that functional and performance requirements are fulfilled.

The other aspect refers to the actual end user of the software, where the intention is to verify that the software does what the users need it to do, in a way that makes sense to them. It also has a psychological aspect: letting end users become familiar with the new software, and feel that they have a say in the process of building the product they will be made to use in the future.

So why is it so bad for the vendor to provide the test cases they have written anyway, to save the customer some time and hassle of creating their own?

It is bad for the customer, because the bugs that slip through internal QA are usually the ones that are triggered by customer workflows or data that the internal QA team had no idea would be used. By repeating the same set of tests in UAT, the bugs continue to go undiscovered.

It is also, mostly, a waste of time. While there is some merit to running test cases in a production or near-production environment instead of in a test lab, the actual gain from repeating the exact same tests is likely to be low. If you are going to spend time and resources doing testing, you want to maximise the value you get out of it.

It is also bad for the vendor. This may seem counterintuitive, at least to sales folks. After all, if you give the customer a set of tests that you know will pass, the contract is fulfilled and the vendor gets their money. Everyone’s happy, right?

Wrong. The sales guy is happy, because he gets his quota fulfilled. Everyone else is miserable. Those bugs I mentioned further up, the ones that go undiscovered in UAT, won’t stay undiscovered forever. Their discovery has just been delayed to where you really don’t want it: in production.

Fixing bugs in production is expensive. Such fixes are often urgent, which means that developers who were done with that project and have started on the next one have to be pulled out, causing delays in their current project. They require patching or updating the production environment, which may mean downtime for the customer, and usually requires meticulous planning and overtime for the support staff. And, of course, the users’ first impression of your buggy product will take a long time to mend.

The next time your friendly sales or project manager comes to you and asks for UAT test cases, politely explain to them, and ask them to explain to the customer, why that is not in either party’s best interest. Offer to supply all your documentation of the testing your team has done, while explaining why it is a really good idea for the customer to design their own tests for UAT.

If they still insist, company policy may require you to go along with their request. If that happens, however, I strongly suggest you take up that policy with your managers for future reference. Learn to use catchphrases like ROI when you discuss this; they really like that.