Manual processes

Checklists

Even manual processes need documentation.


Checklists might seem like an anachronistic choice of testing tool; they're not exactly the cutting edge of tech tooling. But don't underestimate the power of a simple list.

Checklists are used outside of the tech industry with surprising regularity. Airline pilots use them to ensure all flight checks have been made. Surgeons use them to make sure nothing has been left inside a patient. Lawyers use them for complex discovery in litigation. A testing checklist is trivial to implement but can have a surprisingly big impact.

Checklist management is such a simple task that it's practically the second application most developers learn to write after "Hello World". This means there are a lot of checklist apps out there to choose from.

The most popular software-as-a-service checklist manager is Basecamp from 37signals. It does a lot more besides lists, but the heart of the application is creating task lists that users can tick items off as they complete them. The extras, like commenting on tasks, file attachments and collaboration, are useful too, but not strictly necessary if you're good at organising outside of your app.

Alternatively, a spreadsheet does the job well. Google Sheets gives you collaborative features similar to Basecamp's, and you can add comments in an extra column. It's a popular approach.

Lastly, and quite reasonably, if you don't need to share your lists online, don't overlook a piece of paper and a pen. For instant accessibility you can't beat it.

Once you've picked a method of managing your list and written out what you're testing as a list of items, the next step is to start the testing process and tick some things off.

One of the advantages of using a collaborative checklist tool (Basecamp, Google Sheets, etc.) is that several testers can work through the same test run without duplicating effort: marking a test as "in progress" tells other testers not to pick it up. It also means that other people can see the velocity of the test process, and allocate more or fewer resources if testing is taking more or less time than expected.
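If you end up rolling your own lightweight tracker rather than using an off-the-shelf tool, the claim-an-item idea above can be sketched in a few lines. This is a hypothetical model for illustration, not any particular tool's API; the names `ChecklistItem` and `claim` are made up here:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChecklistItem:
    title: str
    status: str = "todo"  # moves through: todo -> in progress -> done
    assignee: Optional[str] = None  # who claimed the item, so others skip it

def claim(item: ChecklistItem, tester: str) -> bool:
    """Mark an item as in progress; returns False if someone already has it."""
    if item.status != "todo":
        return False
    item.status = "in progress"
    item.assignee = tester
    return True

# A tiny test run shared between two testers.
run = [ChecklistItem("Login form accepts valid credentials"),
       ChecklistItem("Password reset email arrives")]

claim(run[0], "alice")       # Alice starts the first test
print(claim(run[0], "bob"))  # Bob sees it's already taken -> False
```

Counting how many items are "done" versus "todo" at any moment gives you the velocity figure mentioned above for free.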

Once a checklist has been completed, if there were any problems it's worthwhile spending some time reviewing the outcomes. A short review meeting between the test team and the development team can often highlight simple fixes; likewise for the design and UX teams. Talking through the test run with someone who has just spent time working their way through a set of features can tease out information about the application that no one has yet realised.

A meeting like this doesn't need to be a formal affair; a five-minute standup with everyone involved is usually enough.

After a checklist has been completed, reviewed, and work has been done to resolve the problems, it's a good idea to revisit the failed tests to make sure the features they relate to now work as expected. If dependent features could have been impacted by the code changes, those should be retested too.

Revisiting a list as soon as possible maintains forward momentum on the project, and means less time is spent getting back up to speed with a feature. There's always an overhead when a tester comes to look at a new feature, so the more you can do to get a feature completed and signed off in one pass, the better. Keep your testing team's context switching to a minimum.

Another approach that can work well, if your software enables it, is item tagging. This can be as simple as including a specific term in the item title, or it can be a first-class feature in your list manager. Either way, it means you can refer back to a list of testing tasks and easily filter it for the tests that apply in a given context. For example, if you tag your tests with a feature name you can later find all of the tests for that feature, or you can tag them with the commit hash that introduced the code being tested.
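To make the filtering idea concrete, here is a minimal sketch of tag-based filtering over a checklist kept as plain data, say an export from your list manager. The item titles, tags, and the `with_tag` helper are invented for this example:

```python
# Each checklist item carries free-form tags: feature names, commit hashes, etc.
items = [
    {"title": "Checkout applies discount code", "tags": {"checkout", "a1b2c3d"}},
    {"title": "Checkout rejects expired card",  "tags": {"checkout", "9f8e7d6"}},
    {"title": "Profile photo upload works",     "tags": {"profile", "a1b2c3d"}},
]

def with_tag(items, tag):
    """Return every test that applies in the given context."""
    return [item for item in items if tag in item["tags"]]

# All tests for the checkout feature:
for item in with_tag(items, "checkout"):
    print(item["title"])

# All tests introduced by (hypothetical) commit a1b2c3d:
print(len(with_tag(items, "a1b2c3d")))  # -> 2
```

The same filter works whatever the tag means, which is what makes tagging flexible: one mechanism covers features, commits, releases, or anything else you care to slice by.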

Simple 'tricks' like tagging don't make a great deal of difference to a single test run, but they can be hugely valuable in the longer term.
