Building Tests

How to write great tests.


Building end-to-end tests seems like a daunting task at the beginning. There's a lot to do, and a lot of places where things can go wrong. They're quite hard to debug. The benefit, if you're already doing a brilliant job of writing web software, appears trivial. Stick with it though, because it is worthwhile.

Once you've spent a little time writing tests and the process of how everything works has 'clicked', adding tests to a project should be quite painless. Furthermore, the actual tests themselves are simple, and there's plenty of cut'n'paste code to make things go faster. Depending on the framework you're using there may even be generators that can take you 75% of the way to a tested application with just a couple of commands. Building well tested applications shouldn't be a pain - if something is hard to test then that's a sign there's something up with the code. Good code is easy to test.

Before we get to any actual tests though, let's look at what we need to choose first.

The first question that springs to mind with testing is when to write your tests. On the face of it the answer is obvious - how can you write tests before you have anything to test? You can't test if there's no code, right?

Well, actually you can. If there's no code to test then the test will fail. That's obvious. That actually tells us something though - namely that we need to write some code. By writing your tests before you write the code that makes them pass you have to think about the code in a new way. You have to plan the tests to cover every aspect of the feature you're writing that needs to be tested, and then design the code so that the tests pass. This can be a really good way of writing code that works first time.

Writing tests before you write your code is called TDD, or Test Driven Development. The tests are what drives progress on the application forwards. Alternatively you can adopt BDD, or Behaviour Driven Development, where you don't define the tests first but instead define how the software should behave. This is very closely related to TDD but it means you can move forwards more quickly as you're not constantly revisiting your test code as you learn new things about the application you're building.
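
To make that concrete, here's a sketch of what a test written before the feature exists might look like, using the describe/it syntax of the NodeJS and Protractor stack that the examples later in this section use. The page URL and the search-results id are invented for the example - the spec will fail until a page like that actually exists, and that failure is what tells you what to build next.

    // A spec written before the feature exists. The URL and the
    // search-results id are placeholders for this example.
    describe('the search page', function () {
      it('shows a results list after a search is submitted', function () {
        browser.get('/search?q=testing');
        var results = element(by.id('search-results'));
        expect(results.isPresent()).toBe(true);
      });
    });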

Sadly though, at least with TDD, the complexity of HTML and CSS, and the need for things to work in half a dozen different browsers, make it pretty much impossible to write front-end end-to-end tests before you have anything to see in the browser. You can definitely plan your tests, but you won't really be able to get very far with actually coding them until at least the framework of your page is created.

Once you've decided when to write tests or behaviours the next task is to settle on a testing technology stack. At the very least you're going to need to use a scripting language with a test runner, but you may also need a separate assertion framework. As we're going to be writing end-to-end tests in this section it's assumed that you're going to be working with Selenium WebDriver to actually drive your test browsers because, at least at the moment, there isn't really any alternative.

Which technology you should use is a question that you and your team should answer together. There isn't a right answer. There isn't a "best in class" or an "industry standard" to help. The example code in the following chapter is written using NodeJS and Protractor but that shouldn't be taken as a recommendation for those tools. They're just what works for me.

Each part of the end-to-end testing stack has its own specific job to do: the test runner finds and executes your spec files, the assertion framework checks values against your expectations, and WebDriver drives the browser itself.
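
As a concrete illustration of how those pieces fit together, here's a minimal sketch of a Protractor configuration file. The file name, spec path, browser choice and Selenium address are assumptions for the example rather than requirements.

    // conf.js - a minimal sketch of a Protractor configuration.
    exports.config = {
      framework: 'jasmine',                            // the test framework providing describe/it and expect
      seleniumAddress: 'http://localhost:4444/wd/hub', // a Selenium server running locally
      specs: ['specs/*.spec.js'],                      // where the test files live
      capabilities: {
        browserName: 'chrome'                          // the browser WebDriver will launch
      }
    };

With a Selenium server listening on that address you would then run the suite with something like protractor conf.js.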

If you really get into testing then you might need a server to run all these things, and Selenium Grid to talk to different (virtual) machines.

Something that's really important to remember is that your tests are actually code that runs to gather information from a browser and assert whether or not it matches an expectation. This matters because code is powerful - your tests are not simple configuration for a piece of software that does the job of testing, your tests are an application in their own right, and your test code can do really clever stuff.

You can write test code that reacts to failure and retests things automatically. You can write tests that push thousands of different values through a form in order to check every possible variation of things that might break it. You can run tests as different users, on different browsers, at different times of the day. Where you draw the line is really up to you.
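
For example, here's a sketch of the "push lots of values through a form" idea: one spec is generated per awkward input by looping over an array. The page URL, field names and error selector are invented for this example.

    // A data-driven spec: one test per awkward input value.
    var badInputs = ['   ', '<script>alert(1)</script>', new Array(5001).join('a')];

    describe('the signup form', function () {
      badInputs.forEach(function (value, index) {
        it('rejects bad username input #' + index, function () {
          browser.get('/signup');
          element(by.name('username')).sendKeys(value);
          element(by.css('button[type="submit"]')).click();
          expect(element(by.css('.validation-error')).isDisplayed()).toBe(true);
        });
      });
    });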

With that in mind, I would highly recommend starting a library of code snippets that you can regularly reuse during your testing. A code snippet is simply a block of code that you cut'n'paste into your test when you want to do something over and over again. You might have a code snippet for testing the specific page structure that you use on every project, or you might have a library of "bad strings" that you want your validation code to catch every time you test a form, or you might have a login function that you use in every test suite. There's plenty of scope for building a library of your own code to use again and again. This is efficient too, because you won't need to keep writing the same code. Plus, if you have less work to do you're more likely to test things, and that's definitely a good thing.
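
As an example of the sort of snippet that earns its keep, here's a sketch of a login helper that could live in its own file and be required by every suite. The URL, field names and the assumption of a simple username/password form are placeholders to adapt to the application under test.

    // helpers/log-in.js - a sketch of a reusable login helper.
    module.exports = function logIn(username, password) {
      browser.get('/login');
      element(by.name('username')).sendKeys(username);
      element(by.name('password')).sendKeys(password);
      element(by.css('button[type="submit"]')).click();
    };

A suite would then pull it in with var logIn = require('./helpers/log-in') and call it from a beforeEach block.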

The final point to make about building tests, before we actually get around to building some, is that it's important to actually run your tests regularly. It's too easy to write some at the start, then get into the process of building actual features that you can see and play with, and forget to run your tests again. Then when you remember, they're all failing, and that's quite off-putting. Fixing code isn't as interesting or as much fun as writing new code. This is why it's important to run the tests as often as possible.

There are techniques that help with this process. You can tie your test runner to a commit hook in your source control. This means the tests will be run every time you make a new commit. If you do this then you can either have the test runner inform you about what happened via some sort of notification (an email, a Slack message, a desktop notification, etc), or if you're feeling a little braver you can write a commit hook that actually rejects the commit if the tests fail. This is what I would recommend - code that doesn't pass its tests shouldn't get as far as the repo.
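
Here's a sketch of what such a hook might look like, written as a NodeJS script so it stays in the same language as the tests. The npm test command is an assumption - swap in whatever actually runs your suite - and the file needs to be saved as .git/hooks/pre-commit and made executable.

    #!/usr/bin/env node
    // .git/hooks/pre-commit - run the test suite and reject the commit on failure.
    var execSync = require('child_process').execSync;

    try {
      execSync('npm test', { stdio: 'inherit' }); // throws if the command exits non-zero
    } catch (err) {
      console.error('Tests failed - commit rejected.');
      process.exit(1); // a non-zero exit makes git abandon the commit
    }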

Alternatively you can test on the server and have an integration server that builds the application and runs tests on it whenever code is either committed to the repo or merged into a specific branch (e.g. develop or master). This method of running tests also works well, but it tends to mean the tests are run less often - that might be a good thing though, especially if you have a large number of tests to run.
