Acceptance Strategies

How much coverage should there be? Are failing tests ever OK?


Writing a complete suite of tests to examine our application code can be a laborious process, and it can produce complex tests even for features that will rarely be used and can't realistically fail. Sometimes code is so straightforward that writing tests for it seems pointless because they're always going to pass. Sometimes our project contains code that we didn't write, or that was generated automatically. How should we write tests for that code?

If we use a code coverage tool to tell us how much of our code is being tested, we face the dilemma of what an acceptable level of coverage looks like. It's unlikely to be 100%, but whether it's 50% or 80% depends on our code.
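Whatever figure we settle on, many coverage tools let us enforce it so the build fails if coverage drops below the agreed level. The sketch below assumes a TypeScript project using Jest; the percentages are illustrative, not a recommendation.

```typescript
// jest.config.ts — a minimal sketch of enforcing a coverage floor with Jest.
// The numbers are illustrative; use whatever level the team has agreed on.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 80, // fail the run if fewer than 80% of statements execute
      branches: 70,
      functions: 80,
      lines: 80,
    },
  },
};

export default config;
```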

How much code should be manually tested when there are automated tests that cover the same set of features?

What happens when our code isn't a straightforward deterministic function, but has elements that are flaky and might fail depending on external APIs? Sometimes it's impossible to mock everything, so is it acceptable to live with a test that could fail without it being our code's fault?
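Where mocking is possible, replacing the external call with a stub keeps the test deterministic. This is a sketch using Jest; fetchExchangeRate and convert are hypothetical names used for illustration.

```typescript
// rates.test.ts — a sketch of isolating a flaky external dependency.
// The module names and functions here are hypothetical.
import { convert } from './convert';
import * as ratesApi from './ratesApi';

test('convert uses the fetched exchange rate', async () => {
  // Replace the real HTTP call with a deterministic stub so the test
  // can't fail because the external service is slow or unavailable.
  jest.spyOn(ratesApi, 'fetchExchangeRate').mockResolvedValue(1.25);

  const result = await convert(100, 'GBP', 'USD');

  expect(result).toBe(125);
});
```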

How much of our code we need to cover with automated tests depends on the complexity and size of the code. Aiming for 100% is a nice thing to do, but realistically there's often too much code and not enough time to make that happen, so which tests do we leave out?

When we write tests, it makes sense to cover things that are hard to understand and prone to breaking. If code is complicated enough to need an explanatory comment or two, it's likely to need some tests as well. If code relies on 'magic' side effects that a developer might miss when reading through it, that's usually a sign we should be covering those effects with a test too. It's also a good idea to cover big things like class constructors, to make sure they instantiate objects properly, and to test that any injected dependencies are handled correctly. It's easy to modify a class that other classes depend on, or inherit from, in a way that makes it incompatible with them, so we need tests to catch those cases, as in the sketch below.
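As a hedged example, the Jest sketch below covers a constructor and its injected dependency; OrderService and PaymentGateway are hypothetical names, not part of any real project.

```typescript
// orderService.test.ts — a sketch of covering a constructor and an injected
// dependency. OrderService and PaymentGateway are hypothetical.
interface PaymentGateway {
  charge(amountInPence: number): Promise<boolean>;
}

class OrderService {
  constructor(private readonly gateway: PaymentGateway) {
    if (!gateway) {
      throw new Error('OrderService requires a payment gateway');
    }
  }

  placeOrder(amountInPence: number): Promise<boolean> {
    return this.gateway.charge(amountInPence);
  }
}

test('the constructor rejects a missing dependency', () => {
  // Guards against a refactor that silently changes what the class expects.
  expect(() => new OrderService(undefined as unknown as PaymentGateway)).toThrow();
});

test('placeOrder delegates to the injected gateway', async () => {
  const gateway: PaymentGateway = { charge: jest.fn().mockResolvedValue(true) };
  const service = new OrderService(gateway);

  await service.placeOrder(500);

  expect(gateway.charge).toHaveBeenCalledWith(500);
});
```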

Getters and setters should be tested if they manipulate the property when it's fetched or changed. These are places where another developer might not realise something else is being affected, so tests will catch changes that break that behaviour.
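For instance, a setter that normalises the value it stores is worth pinning down with a test. The sketch below assumes Jest; the Temperature class is hypothetical.

```typescript
// temperature.test.ts — a sketch of testing a setter that manipulates the
// value it stores. Temperature is a hypothetical class used for illustration.
class Temperature {
  private _celsius = 0;

  // The setter clamps to absolute zero rather than storing the raw value:
  // exactly the kind of hidden manipulation another developer might miss.
  set celsius(value: number) {
    this._celsius = Math.max(value, -273.15);
  }

  get fahrenheit(): number {
    return this._celsius * 1.8 + 32;
  }
}

test('the setter clamps impossible values', () => {
  const t = new Temperature();
  t.celsius = -500;
  expect(t.fahrenheit).toBeCloseTo(-459.67);
});
```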

Critically important features in the application should always be tested thoroughly. If a block of code can delete data, or change it in a way that leaves no way to revert to the previous values, then it needs to be extensively tested. We don't want to deploy code that could eventually lead to data loss.
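One way to do this is to exercise the destructive path against an in-memory stand-in and assert both what was removed and what survived. The sketch below uses Jest; purgeInactiveUsers and the user shape are hypothetical.

```typescript
// purge.test.ts — a sketch of testing destructive behaviour against an
// in-memory stand-in. purgeInactiveUsers and the User type are hypothetical.
type User = { id: string; lastSeen: Date };

function purgeInactiveUsers(users: User[], cutoff: Date): User[] {
  return users.filter((u) => u.lastSeen >= cutoff);
}

test('purge removes only users inactive since the cutoff', () => {
  const users: User[] = [
    { id: 'a', lastSeen: new Date('2024-01-01') },
    { id: 'b', lastSeen: new Date('2020-01-01') },
  ];

  const remaining = purgeInactiveUsers(users, new Date('2023-01-01'));

  // Assert both that the stale record went and that the active one survived:
  // a purge that deletes too much is the costly failure mode.
  expect(remaining.map((u) => u.id)).toEqual(['a']);
});
```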

The same is true of code that impacts the security of both the application and the user's data.

Manual tests

Flaky tests

No right answers
