What to test?

Deciding what actually needs testing can be tricky. Is there one weird trick that can help?

Previous: The aims of testing Next: Manual processes

Knowing that you need to test is obvious enough. The next challenge is knowing what to test, especially given the time necessary to implement good tests.

One thing that's particularly important to remember when you're deciding what to test is that running tests takes time and resources, especially for manual tests. A manual tester's time is a precious resource that shouldn't be squandered. Manual testing should be limited to things that are impossible to test automatically, or that an automated test can't meaningfully cover, such as UX or design implementation.

What aspects of the code your automated testing should cover is more difficult. The simplest answer is to just test everything. If you test everything imaginable then there's no way you'll miss anything out, and you'll discover all the bugs and nothing will ever be able to break when you change code in the future. A test for every possible input to every function and method should do the job, with some extra tests for layouts and network calls. Test all the things!

It's that easy. Of course.

In practice, while you could take this approach, it really means you'll end up spending a huge amount of time writing tests that will probably tell you very little in the future.

Actually, what you need to test is a subjective judgement call that you need to make yourself based on what you're building. Some websites and applications don't need a huge number of tests. Some need to hit 100% coverage. There isn't a simple rule that tells you what to do.

However, there are a few things that can help:

There are coverage tools that will instrument your code and report which parts your tests actually exercise, and which they don't. For JavaScript the most popular tool is Istanbul. It's worth looking into.
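Istanbul is usually driven through its command-line front end, nyc. As a sketch (the threshold numbers here are illustrative, not a recommendation), a project can drop a .nycrc file alongside its package.json to make the test run fail when coverage drops too low:

```json
{
  "check-coverage": true,
  "lines": 80,
  "branches": 70,
  "all": true,
  "reporter": ["text", "html"]
}
```

With that in place, running your test suite via nyc (e.g. npx nyc npm test) prints a per-file coverage table and exits non-zero if the thresholds aren't met.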

One of the greatest challenges in web development is writing code that has to work across different browsers, and tests are a great way to discover what works and what doesn't. A test that passes in one browser might fail in another due to differences in the rendering engine or JavaScript implementation.

It's my belief that tests shouldn't be so brittle as to expect everything to be identical in every browser. Accepting small differences that have no impact on the user's experience of the website can make your life a lot happier, and your testing process a lot easier. For this reason I don't recommend testing CSS properties outside of things like fonts and colors. You can test the position of an element to make sure it's pixel-perfect, but that test is going to break a lot.

Similarly, it's also worthwhile testing that things are rendering correctly when getting them wrong would have a negative impact on the page layout. For example, I test the rendered size of an image or an icon.

CSS property                               Test?
Position (e.g. top, left, offset)          No
Color (e.g. color, border-color)           Yes
Font (font-family)                         Yes
Size (font-size)                           Yes
Weight (font-weight)                       Yes

Font size is particularly important. There are a large number of ways to define the size of text in CSS, and they can cascade through a stylesheet in unpredictable ways if you mix units. Using automated tests to ensure your users will always be able to read what's on screen is a good idea.
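To see why mixed units bite, here's a minimal sketch (plain Node, with a hypothetical helper) of how em-based font sizes compound as elements nest — each em value multiplies the parent's computed size, not the root's:

```javascript
// Each em value is relative to the parent's computed font size,
// so nested em declarations multiply together down the tree.
function computedFontSize(rootPx, emChain) {
  // emChain: the font-size multipliers from the root element down.
  return emChain.reduce((parentPx, em) => parentPx * em, rootPx);
}

// Three nested elements, each styled with font-size: 0.8em,
// starting from a 16px root:
const px = computedFontSize(16, [0.8, 0.8, 0.8]);
console.log(px.toFixed(2)); // 8.19 — barely half the size you started with
```

This is exactly the kind of slow drift that's hard to spot by eye but trivial for an automated check on rendered text size to catch.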

After you've written an extensive suite of tests that your code nimbly passes, you might still find that the occasional bug sneaks through and only gets picked up during a manual test, or gets reported by a user. When this happens you should fix the issue (doh!), but you should also add tests that prove the fix works.

There are a few reasons why you'd do this, but the absolute number one reason is that telling a customer something is fixed, only for them to see the bug again, is pretty much the worst thing you can do for your users' confidence in your ability. Bugs happen. Bugs are forgivable. A fix that doesn't fix the problem is not.
