New testers tend to be preoccupied with the motions of the test. They’ve studied methods for identifying boundaries, and know the importance of negative tests. If they have been diligent, they even know to test error handling and recovery. Still, the bright but inexperienced tester often stops a step short of actually knowing whether the test passed or failed.
Let’s look at an example: Testing that an element can be saved to a database. You prepare the element, save it, and the application displays a happy message saying that the element was saved. Done, test passed! Right?
The experienced tester, of course, would not think of stopping there. All you have tested so far is that you get feedback when saving to the database. And you haven’t even tested that it’s the correct feedback. If the save happened to fail behind the scenes, you’d actually have a much more serious issue – the dreaded silent fail. And, of course, you haven’t tested what you said you would test: That the element was saved to the database.
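To make that concrete, here is a minimal pytest-style sketch of the "stops a step short" version of the test. The names – `Element` and the `app_client` fixture – are hypothetical stand-ins for whatever your application provides, not a real API:

```python
from dataclasses import dataclass

# Stand-in for whatever kind of element your application saves.
@dataclass
class Element:
    name: str
    value: int

def test_save_element_shows_confirmation(app_client):
    """Naive version: only checks the feedback, not the actual save."""
    element = Element(name="widget", value=42)

    response = app_client.save(element)

    # This only proves that the application reported success. If the save
    # failed silently behind the scenes, this assertion would still pass.
    assert response.message == "Element saved"
```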
For every test you perform or design, whether manual or automated, the most important question you can ask yourself is: “If this test passes, does that really prove to me that the application did what I am trying to test?”
For the database example, there are multiple ways to complete the test. You can simply reopen the saved element, you can continue with a new operation that has to read the element in order to work, or you can inspect the element directly in the database. What you choose will depend on the application itself – for example, if it caches elements, reopening the element may not be proof enough. It’s up to you to know what is proof enough.
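Continuing the sketch above (same hypothetical `Element` and `app_client`), completing the test by reopening the element might look like this:

```python
def test_saved_element_can_be_reopened(app_client):
    """Completed version: saves the element, then reads it back."""
    element = Element(name="widget", value=42)

    response = app_client.save(element)
    assert response.message == "Element saved"

    # Complete the test: reopen the element through the application and
    # check that what comes back matches what was saved.
    reopened = app_client.open(element.name)
    assert reopened.value == element.value
```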
So, developers introducing a caching layer could invalidate your proof? Or adding asynchronous writes? Or swapping out the persistence layer completely? Perhaps proof *is* best done by exercising the application to see that the functional aspects are as expected, i.e. by reopening the element.
Not sure what’s best…
What’s best depends on the context. And the context may, of course, change over time, which is why tests need maintenance and testers need to stay on top of what’s going on under the hood. The important lesson here is that testers should know and understand quite a bit of the software’s architecture to be able to make sensible judgement calls. You can always design some test that is completely bulletproof, but that test is likely to be very inefficient, especially at the system or system integration test level. The more you know, the better you can balance confidence in the test against the time spent testing.
As an example – under some circumstances even reopening the element might not be sufficient proof, if caching is in play. Restart the server, the client, and the caching proxy between each test and you can be pretty confident, but you’ll also have spent way too much time that you probably should have spent running other tests instead. In that case you may be better off inspecting the element directly in the database. But if you know that no caching is done at any point, it’s more efficient to reload than to inspect the database tables.
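As a sketch of that last option – going straight to the persisted data so that no application-level cache can fool you – here is one way it could look, assuming a SQLite database with an `elements` table and a `db_path` fixture (all of which are assumptions for illustration only):

```python
import sqlite3

def test_saved_element_reached_the_database(app_client, db_path):
    """Verifies the save by querying the database directly, bypassing caches."""
    element = Element(name="widget", value=42)
    app_client.save(element)

    # Go straight to the persisted data instead of back through the application.
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT value FROM elements WHERE name = ?", (element.name,)
        ).fetchone()

    assert row is not None, "element never reached the database"
    assert row[0] == element.value
```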
A skilled and experienced tester will pick the appropriate verification method based on their knowledge about the system under test.
A lovely little real-life example which just recently played its part in saving a customer demo:
https://docs.google.com/document/d/1VKZmdA-6moB-yrdm8hjT2GFzaxY44NOshIH6rixag74/edit#heading=h.jup9zduelq1v
If one had relied exclusively on the nice and shiny little green checkmark icons in the software, they would just have continued to happily live with a totally malformed setup. I don’t want to know how many demos or production systems might already have suffered from “unexplainable” issues caused by problems like this one.