Manual processes
There is no better way to make sure something works than seeing it yourself.
If you want to make a software product that people love, then it has to work, and the only way to really know that it works is to see it working yourself.
In the past, testing has been seen as a final, necessary chore that no one really wants to do. That's understandable: repeatedly trying the same feature again and again gets boring. I think there's a flaw in that thinking, though. Once you get past it, you can reframe testing not as a boring hurdle to signing off on a project, but as a genuinely useful part of the project that makes things better.
The first part of the problem with manual testing is that there's rarely a well-defined strategy. As a developer you're often just told to "test it and find any bugs". That doesn't work. You have to get organised, otherwise it feels like you're stuck in a loop doing the same task over and over. Manual testing is a great place to apply the DRY principle (don't repeat yourself): test each thing exactly once.
Before you start a manual test run, write down the objectives of the tests in a document called a test plan. To build one:
- Write a list of features you need to check are working as expected.
- Write down the acceptance criteria for each feature - what would it look like if the feature is working? What would be a problem?
- Write down what permutations of inputs are available for those features.
- Gather test data that will be input into the feature if necessary.
- Work out how you're going to reset the feature between each test, if necessary.
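The steps above can be sketched as a simple data structure. This is a minimal illustration, not a prescribed format; the feature, criteria, and permutations shown are made up for the example.

```python
# A test plan entry captures the feature, its acceptance criteria,
# the input permutations to try, and how to reset between tests.
from dataclasses import dataclass, field


@dataclass
class TestPlanEntry:
    feature: str                    # the feature to check
    acceptance_criteria: list[str]  # what "working" looks like
    input_permutations: list[dict]  # input combinations to exercise
    test_data: list[dict] = field(default_factory=list)
    reset_steps: list[str] = field(default_factory=list)


plan = [
    TestPlanEntry(
        feature="User sign-up form",
        acceptance_criteria=[
            "Valid details create an account and show a confirmation",
            "An invalid email shows an inline error and no account is created",
        ],
        input_permutations=[
            {"email": "valid", "password": "valid"},
            {"email": "invalid", "password": "valid"},
            {"email": "valid", "password": "too short"},
        ],
        reset_steps=["Delete the created account before the next run"],
    ),
]

for entry in plan:
    print(f"{entry.feature}: {len(entry.input_permutations)} permutations to test")
```

Writing the plan down like this, even informally, is what makes "test things exactly once" possible: every permutation is listed, so nothing gets re-tested by accident or missed entirely.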
If you need test data to ensure a feature is working then it's never a good idea to just make it up. Use a test data generator.
The first reason not to invent your own test data is that creatively generating data is harder than it appears, and test data should look as close to real data as possible.
The second reason is that some developers have a tendency to invent test data that a client might find "inappropriate". Using a test data generator is an easy way to mitigate that problem.
A test data generator is an application that understands the format of specific content types and can build objects from a random collection of valid data. For example, the online test data generator Mockaroo can generate several thousand test data objects instantly, with realistic names, telephone numbers, email addresses and more.
```json
[
  {
    "id": 1,
    "first_name": "Chris",
    "last_name": "Young",
    "email": "cyoung0@bloglines.com",
    "gender": "Male",
    "ip_address": "149.48.145.136"
  },
  {
    "id": 2,
    "first_name": "Joseph",
    "last_name": "White",
    "email": "jwhite1@mashable.com",
    "gender": "Male",
    "ip_address": "42.59.158.216"
  }
]
```
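To show the idea behind such a generator, here is a standard-library-only sketch that produces records in the same shape as the sample above. The name and domain pools are tiny and illustrative; a real generator like Mockaroo draws from far larger pools and many more field types.

```python
# A tiny test data generator: pick valid values at random per field
# and assemble them into realistic-looking records.
import random

FIRST_NAMES = ["Chris", "Joseph", "Maria", "Aisha", "Liam"]
LAST_NAMES = ["Young", "White", "Garcia", "Khan", "Murphy"]
DOMAINS = ["bloglines.com", "mashable.com", "example.org"]
GENDERS = ["Male", "Female", "Non-binary"]


def make_record(record_id: int, rng: random.Random) -> dict:
    first = rng.choice(FIRST_NAMES)
    last = rng.choice(LAST_NAMES)
    return {
        "id": record_id,
        "first_name": first,
        "last_name": last,
        # Derive the email from the name, like the sample data does
        "email": f"{first[0].lower()}{last.lower()}{record_id}@{rng.choice(DOMAINS)}",
        "gender": rng.choice(GENDERS),
        "ip_address": ".".join(str(rng.randint(1, 254)) for _ in range(4)),
    }


rng = random.Random(42)  # seeded so repeated test runs get the same data
records = [make_record(i, rng) for i in range(1, 1001)]
print(records[0])
```

Seeding the random generator is worth copying even if nothing else is: it means a bug found with generated data can be reproduced with exactly the same data.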
Once you have a good test plan that details what to test, and any required testing data, the next task is to get on with the actual testing process.
How you test is a matter of preference. Some developers like to group tests by similarity, testing all the features that work in a similar way together (e.g. testing all the read functions, then all the create functions, then update, and so on). Other developers like to test by user flow, testing features as a user might use them (testing a create, then reading the created data, then updating it, and finally deleting it). So long as the tests are all run, and documented as you test, it makes no difference, so use whichever you prefer.
If you're developing a test plan for someone else to work through, or if you're testing to discover a specific issue with a feature, then it might be necessary to write a test script.
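A test script is more granular than a test plan: numbered steps, the exact action to take, and the expected result, so someone else can execute it without guessing. A minimal sketch, with entirely illustrative steps:

```python
# A test script as data: each row is a numbered step with the action
# to perform and the result the tester should expect to see.
test_script = [
    {"step": 1, "action": "Open the sign-up page",
     "expect": "Form is displayed with empty fields"},
    {"step": 2, "action": "Submit with the email field blank",
     "expect": "An inline 'email is required' error appears"},
    {"step": 3, "action": "Submit with valid details",
     "expect": "A confirmation page is shown"},
]

for row in test_script:
    print(f"{row['step']}. {row['action']} -> expect: {row['expect']}")
```

Because every step states its expected result up front, the person running the script only has to answer "did I see that or not?", which is exactly what makes a script hand-off-able.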