This week in Tech and Test brings a guest post from my esteemed colleague and minion, excuse me, Minion, Tony Meijer, on the topic of automated testing:
Why do we write unit tests? A simple question, right? Think about it for a few minutes.
Most people I ask answer with ‘to avoid regressions’ or ‘to find bugs’. Let’s examine that. Unit tests are built to test one independent unit of code, while most regressions are due to subtle compatibility issues between many units of code, so that seems to be an incorrect assumption. However, when we are refactoring, unit tests are actually a very good defense against bugs, since refactoring restructures code without changing its behavior.
So, how about bugs then? Again, a single component may behave exactly as you expect it to and it will not matter: in my experience, most bugs, at least most severe bugs, are the sum of many incremental quirks across a series of code units that together result in faulty behavior.
So, why do we write unit tests then?
I would say that we do it because, when done correctly, it creates higher-quality code through cleaner interfaces. And higher-quality code is a worthwhile cause indeed, because it decreases the number of bugs.
But that brings us to how we actually avoid regressions and bugs, and to what I think is the one way to do that (apart from continuous refactoring and continuous code reviews): system tests and integration tests. System tests, and their more often avoided cousins integration tests, are automated tests that exercise a group of code units and their combined behavior.
So, what constitutes a well-written unit, system, or integration test then?
For unit tests, these practices usually lead to reasonable tests:
For each piece of functionality in the code unit, test a basic value that should work (to see that it works) and a value that should not work (to see that it handles bad input correctly).
Also, mock out everything outside the code unit. If you cannot do that, then the code is most likely too tightly coupled to other pieces of code.
Avoid unnecessary asserts like the plague. I know what you are thinking (‘but it cannot hurt!’). In my opinion, a unit test is part of the design specification, created to test one very specific piece of functionality. If you pile in a bunch of extra checks, it usually means that you do not know what you are testing.
For system and integration tests I recommend the following:
For each piece of functionality in a system or integration test, test with good, reasonable data (to see that it works); with data that should not work (to see that it reacts to problems reasonably); with as many boundary values as you can find (this is usually where bugs are found); and, if you are dealing with networks, with as weird a load as you can easily simulate (this does not mean only a high load: simply sending data in the wrong order, or unevenly, is easy to do and a tough enough test).
Toss in as many asserts as you can think up; these are system tests and should be treated as a fishing expedition, so see what you can find.
Test with different configurations, and change them on the fly.
Avoid mocks as far as you can at this stage.
Author: Tony Meijer