The aims of testing
Is testing simply the act of confirming things work as expected, or is there more to it than that?
We've got this far without actually examining why we need to test web software. On the face of it the question is so obvious it hardly needs to be asked - we test so that we know things work. Is that all there is to it?
There are as many reasons to test as there are possible outcomes of not testing. The most obvious is that your code might be broken, resulting in an application that doesn't do what it's supposed to, and consequently some frustrated and annoyed users. There are further reasons though. Testing makes you think about the software you write. It brings up questions around security, performance, and feature completeness. It can keep you grounded and less prone to scope creep. That isn't to say testing is a panacea that can magically fix a broken development process though - far from it. Equally, if you're building great apps without a good testing process then adding testing isn't going to change very much (well played if this is where you're at - it's not common).
Increasing your emphasis on testing can sometimes result in your team squabbling, complaining and occasionally declaring all-out war with the product team. Testing takes time, and a team that's busy making applications often doesn't have any time to spare. It's hard to ask a team to do more if they're already short of resources. Paradoxically this makes it hard to know when to bring in better testing processes - testing can save time by discovering problems earlier, but only if you spend time writing good tests. The balance is up to you.
Testing is not about simply fixing bugs. Testing is about understanding the application you're building, discovering what does or doesn't work (that could be a bug but also a UX problem or a missing feature), and making sure that new code introduced doesn't have negative side effects on the code that's already there.
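To make that last point concrete, here's a minimal sketch of a regression test using Python's built-in unittest module. The format_price function is invented for the example (in a real project it would live in the application code and the test would import it); the point is that a handful of assertions pin down behaviour you already rely on, so new code can't quietly change it.

```python
import unittest


def format_price(pence):
    """Format a price in pence as a display string, e.g. 1999 -> "£19.99"."""
    return f"£{pence // 100}.{pence % 100:02d}"


class FormatPriceRegressionTest(unittest.TestCase):
    """Pins down existing behaviour so later changes can't break it unnoticed."""

    def test_whole_pounds(self):
        self.assertEqual(format_price(2000), "£20.00")

    def test_pence_are_zero_padded(self):
        self.assertEqual(format_price(1905), "£19.05")

    def test_zero(self):
        self.assertEqual(format_price(0), "£0.00")


if __name__ == "__main__":
    unittest.main()
```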
A moderately sized web application will often have dozens of interconnected parts that a single developer won't be able to hold in their head at once. That means any testing can't be a straightforward matter of the person who wrote the code taking time to go over the code again, plugging in different input values and checking the outputs. Working in a team means testing together.
Testing is really asking the question "Does all of this application really work?" - and for every meaning of the word "work", not just the things the application is supposed to do.
The difficulty with asking whether or not your application works, though, is knowing who can answer that question.
If you're in a large organisation you may have a dedicated test team. This is great, but most developers don't have that level of resource available. Even if you do, testing can't only be the responsibility of the testers - the code that's written needs to be testable, so every developer needs to know how to write code that can be tested.
For smaller organisations the question is more nuanced. Should a developer test their own code? There are good arguments on both sides - a developer is likely to know what they've written best, so maybe they're best placed to make sure the tests that need to be written actually get written. Conversely though, developers can get incredibly close to their code, to the point where they skip over "obvious" things and work around hard problems. This makes tests written by the developer of a piece of code less likely to cover the strange corner cases that would force a rewrite. Those are often precisely the parts of the code that need tests the most.
I've found that the answer to this issue can simply be that a second developer looks through the code and decides what needs to be tested, and then the original developer writes the tests for those cases. If you're working with a language that has mature tools then these test cases can sometimes be generated automatically.
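Property-based testing is one form of this. As an illustration, the Hypothesis library for Python (installed with pip install hypothesis) generates the input values itself rather than relying on the developer to pick them; the slugify function below is a made-up stand-in for whatever code the second developer flagged as needing tests.

```python
import re

from hypothesis import given, strategies as st


def slugify(title: str) -> str:
    """Turn an article title into a URL-safe slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")


@given(st.text())
def test_slug_is_always_url_safe(title):
    # Whatever the input, the output should only contain safe characters.
    assert re.fullmatch(r"[a-z0-9-]*", slugify(title))


@given(st.text())
def test_slug_never_starts_or_ends_with_hyphen(title):
    slug = slugify(title)
    assert not slug.startswith("-")
    assert not slug.endswith("-")
```

Run with pytest, Hypothesis throws hundreds of generated titles at the function and, if either property fails, reports the smallest input it can find that breaks it.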
If a second developer is looking at the code, this also highlights the importance of documentation, comments and docblocks. Without those in place it's incredibly hard to figure out what is going to need testing.
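For instance, a docblock (or a Python docstring, as in this invented example) that spells out the expected inputs, outputs and failure cases hands the second developer a ready-made list of things worth testing: the boundary at zero, an unknown code, mixed-case input.

```python
# A hypothetical helper, documented well enough that a reviewer can see at a
# glance what needs testing without reading the implementation.
VOUCHERS = {"SPRING10": 10, "SUMMER25": 25}  # code -> percentage off


def apply_discount(total_pence: int, voucher_code: str) -> int:
    """Apply a voucher to an order total.

    Args:
        total_pence: Order total in pence; must be zero or greater.
        voucher_code: Case-insensitive voucher code, e.g. "SPRING10".

    Returns:
        The discounted total in pence, rounded down, never below zero.

    Raises:
        ValueError: If the voucher code isn't recognised.
    """
    try:
        percent_off = VOUCHERS[voucher_code.upper()]
    except KeyError:
        raise ValueError(f"Unknown voucher code: {voucher_code}")
    return max(0, total_pence - (total_pence * percent_off) // 100)
```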
Once your developers have figured out a way to divide up who should test their code it's time to bring in more people, particularly non-technical team members (non-technical in the sense that they're not developers themselves).
Having non-technical team members following test scripts, completing user stories, and marking tests on checklists can be a brilliant way to improve an application. If someone without a deep understanding of how an application is supposed to be used struggles to use a feature then that feature needs more work. It's far better to discover that before the feature is in front of the end user.
It's also wise to bring in designers and product team members to test the application. Given guidance that their job is to find the differences between the design and the implementation a developer has built from it, a designer is usually quick to highlight where things aren't quite right, and well placed to judge whether something is close enough to their original vision.
Members of the product team whose job it is to liaise with customers can also be incredibly useful in testing a product. They can utilise their understanding of what the customer or end user needs the application to do, and find areas of the app that don't work as they would expect.
It's important to draw on more than just coding expertise when testing what you build.
Which brings us neatly to the user. The person who really understands an application is the one who has to use it on a daily basis.
There are a number of interesting ways to involve the user in testing your applications. Firstly, and most critically, you should be logging what your application is doing and looking at those logs on a regular basis. There's no point having information you're not utilising. If a feature in your application is throwing up lots of errors then that's a red flag you need to pay attention to. Quite often users won't report a problem, especially if they can work around the error or take a different approach. That doesn't mean there isn't a problem though.
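What that logging looks like depends on your stack, but as a rough sketch in Python (the checkout handler and payment call are invented for the example), the habits that matter are logging enough context to identify the feature involved and recording the full stack trace when something fails.

```python
import logging

# Configure once at application start-up; in production this would usually go
# to a file or a log aggregation service rather than the console.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger("shop.checkout")


def charge_customer(basket_id):
    # Stand-in for a real payment call; raises to demonstrate the error path.
    raise RuntimeError("payment provider timed out")


def checkout(basket_id):
    logger.info("Checkout started for basket %s", basket_id)
    try:
        charge_customer(basket_id)
    except Exception:
        # exc_info=True records the full stack trace alongside the message, so
        # a recurring failure stands out clearly when the logs are reviewed.
        logger.error("Checkout failed for basket %s", basket_id, exc_info=True)
        raise
    logger.info("Checkout completed for basket %s", basket_id)


if __name__ == "__main__":
    try:
        checkout("basket-42")
    except RuntimeError:
        pass  # the failure has already been logged with its stack trace
```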
The next way you can enable the user to become a valuable tester is to give them a bug reporting mechanism. If there's an email address or a form that the user can fill in then you'll get much more visibility into the ways your application isn't working. Better yet, build a bug reporting mechanism into your application as a first-class feature and you'll be able to capture things that the user might leave out of a report. Having a stack trace is fantastically useful for finding edge case problems.
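As a sketch of what "first-class" might mean in practice - the field names here are purely illustrative - the application can gather the technical evidence itself and attach it to whatever the user writes.

```python
import json
import platform
import traceback


def build_bug_report(user_description, exc=None):
    """Assemble a bug report including details the user wouldn't think to send."""
    report = {
        "description": user_description,
        "python_version": platform.python_version(),
        "platform": platform.platform(),
        # The stack trace is the part users can't supply themselves.
        "stack_trace": (
            "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
            if exc is not None
            else None
        ),
    }
    return json.dumps(report, indent=2)


# Typical use: catch an unexpected error, let the user describe what they were
# doing, and attach the technical evidence automatically.
try:
    1 / 0
except ZeroDivisionError as error:
    print(build_bug_report("The total wouldn't update when I clicked save", error))
```

A real report would also want the application version, the current URL or request, and perhaps the last few log lines.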
Lastly, you can test features rather than bugs with tools that give your application A/B split testing or bandit testing. These ideas go a bit beyond the scope of this book though.
One common problem with testing, especially on the web, is that code runs in vastly different environments - the number of things that can make a difference to whether or not code will work is practically endless. This means what works on one computer might not work at all the same way on someone else's computer. This is where testing comes into its own.
A common excuse that developers use when they're trying to find a bug is "it works on my machine!" While it may well be true, it often conceals the fact that the developer doesn't believe the bug report. This is why bug reporting is difficult - if a developer can't replicate the bug (by seeing it happen) then tracking down what's happening and fixing it is all the more tricky. It's another reason why first-class bug reporting that includes a memory dump, a stack trace, or other evidence is particularly useful.