
Test Scripting

How does a new tester know what to test and how to test it?


A test script is a document that gives specific, step-by-step instructions on how to test a feature or replicate a reported bug. Test scripts are useful when someone is testing a piece of code without much familiarity with the way it should work, or how it should be used. They're closely related to the documentation - in fact, documentation can be a great starting point for writing a test script and, vice versa, a test script can be expanded to become the basis of the documentation for a feature.

It might be tempting to tailor your test scripts to the person who'll be doing the testing - adding extra detail for a tester with less technical ability, for example - but this generally makes the scripts less reusable between testers and less useful as a basis for documentation, so I avoid it.

A test script follows the same basic format regardless of what it covers. It's also an interactive document - the tester should record what actually happened as they work through the test.

It can also be useful to have a unique ID for a test, a record of the version of the code that was tested, and the data that was used during the test. In essence, you need to be in a position where any test that finds a problem can be replicated by someone else, usually a developer tasked with fixing the problem.

This sounds like a lot of work. Compared to the way most people test, it is, but there's a lot of repetition between tests, so a standard form or template can make things go a lot faster.

An example test script
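As a sketch of what such a template might look like - the feature, IDs, and data here are invented purely for illustration - a single test record could contain:

Test ID: LOGIN-003
Code version: 1.4.2
Test data: test account "demo@example.com"
Steps:
1. Open the login page.
2. Enter the test account's email address and a deliberately incorrect password.
3. Click "Log in".
Expected result: an error message is shown and the user remains on the login page.
Actual result: (filled in by the tester during the run)
Pass/fail: (filled in by the tester during the run)

Everything above the "Actual result" line can be prepared in advance and reused; the last two lines are what make the script an interactive document.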

We should take a little time to think about exactly what the outcome of a manual test should look like. For a test, or a group of tests, that passes, the outcome is probably little more than a "yes, that works". Testing is about discovering problems, so when there aren't any problems that's great and we can move on to the next test.

What happens when there is a problem though?

A good bug report includes at least enough detail for the developer who comes to fix the bug to replicate the same circumstances and see the same problem. This means the state of the application should be recorded (there are technical solutions for this - see Further Reading for some links), but if that's not possible then the tester needs to be able to say what steps were taken, what data was used, and what the unexpected outcome was. If the problem is visual then a screenshot can be helpful.
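Following the same illustration as the test script above (the details are again invented), a minimal bug report might read:

Test ID: LOGIN-003
Code version: 1.4.2
Steps taken: steps 1-3 of the script, using the "demo@example.com" test account.
Data used: a deliberately incorrect password.
Expected: an error message is shown and the user remains on the login page.
Actual: the page reloads with no error message, giving no indication that the login failed.
Attachment: screenshot of the reloaded page.

The point is not the exact format, but that a developer reading it can set up the same circumstances and see the same problem.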

Sometimes a bug isn't quite so straightforward to replicate. Some bugs don't happen every time a test is run - if the feature relies on an external input, such as the current time or an external API, then it can be difficult to reproduce the issue. This is not a reason to give up, though. Testing against a mock API, or a proxy API, that returns deterministic data (i.e. the same data every time) can remove the variability introduced by external data services. Testing inside a virtual machine can be useful in controlling other factors.
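As a minimal sketch of the mock API idea - assuming the application under test can be pointed at a different base URL, and using an endpoint and payload invented for this example - a deterministic stand-in for an external service can be built with nothing more than the Python standard library:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned data returned on every request, so each test run sees exactly
# the same inputs regardless of what the real service would say.
CANNED_RESPONSE = {
    "current_time": "2024-01-01T09:00:00Z",
    "items": [{"id": 1, "name": "example"}],
}

class MockAPIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED_RESPONSE).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Point the application under test at http://localhost:8000/
    # instead of the real service while the tests are running.
    HTTPServer(("localhost", 8000), MockAPIHandler).serve_forever()

The important property is that the response never changes between runs, so a test that fails against it can be re-run later with exactly the same inputs.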

Even with those things in place, a bug might still not happen on every run. In these cases it's very often due to a race condition - two asynchronous functions are running, and whether the bug occurs depends on which one finishes first. Verbose logging is the tester's friend if that's the case.
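As a deliberately contrived sketch (not taken from any real application), the example below shows how verbose logging makes a race visible: two threads both read a shared balance before either has written its update, and the timestamps and thread names in the log reveal exactly that interleaving.

import logging
import threading
import time

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s.%(msecs)03d %(threadName)s %(message)s",
    datefmt="%H:%M:%S",
)

balance = 0

def deposit(amount):
    global balance
    current = balance
    logging.debug("read balance=%s", current)
    time.sleep(0.01)  # widen the window between read and write
    balance = current + amount
    logging.debug("wrote balance=%s", balance)

threads = [threading.Thread(target=deposit, args=(10,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

logging.debug("final balance=%s (expected 20)", balance)

Without the log lines all you see is a wrong final balance some of the time; with them, the order of reads and writes is right there in the output.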

Signing off (by the test manager, lead developer, etc.) requires that the testing task is complete. That means every test that needs to be run has been run - but it doesn't automatically mean you need to run every test.

A test run should "fail fast". That is to say, if a tester finds a problem with the application that will cause subsequent tests to fail, they should abandon the test run as early as possible. There's no good reason to carry on testing an application that you already know isn't going to pass.

If tests do pass, there should be an impetus to react to the success of the test run as soon as possible too. Success is useful; a passing test run means things can move forward.
