Introduction to testing

What we did in the first two decades of the web, and why it wasn't that good.


This is a book about testing websites.

When you first learn to build a website there's a huge emphasis on structuring your HTML semantically, styling it with CSS, and maybe adding some JavaScript to make things move around or improve the user's experience of interacting with the page. If you're on a more technical path you might learn about writing an API using a server-side scripting language. Then there's learning to be done about UX and UI design. Some courses touch on accessibility.

Very few - far too few - web design and development courses ever go near the topic of testing a website to make sure the thing you make actually works.

It's usually worse if you're self-taught. Hardly any of the blog posts or magazine articles I've seen about a fancy new framework, or a browser feature you might want to use, ever come close to mentioning how you might check that your page actually does what you want it to do.

On the face of it this doesn't seem all that important. You can check your code works by loading it in a browser - or several browsers if you're good - and just using it. If it doesn't do what you want then you go back to the code and tweak it until it does. Awesome. Now you have a working website that you can push to a server, and every user is happy. But we know this isn't the case. Web designers and developers spend a huge amount of time fire-fighting - fixing bugs that only came to light days or weeks or even years after a website went live.

Let's stop that. Let's test things better, and release software we don't have to keep coming back to.

As this is a book aimed at developers, its readers will have a vast range of experience and technical knowledge. That makes it tricky to pitch at the "right" audience - should it be for skilled, experienced developers who understand that they need to raise their testing game, or should it be aimed lower, explaining concepts you might assume an experienced developer would already understand, so that even the most junior team member can add testing knowledge to their toolkit?

In my opinion testing should be something that every developer does, so with that in mind I've written the book to include some things that more senior developers will probably know already. If that's you then that's brilliant - just skip over that stuff. Don't worry that you're reading something that might be aimed at less experienced developers. If you're concerned that your websites aren't tested well enough then there's definitely something here for you. And if you're a less experienced developer reading this then that's brilliant too. The sooner you can bring great tests to your work the better.

The very first website was little more than a page of text that could be loaded across the internet. From those humble beginnings we've moved through adding bold and italic text and including images inline with content, on to tables, iframes and CSS, and finally to the modern web of today that's awash with incredibly complicated JavaScript, asynchronous loading, animation and more. Websites are a marvel. It's frankly amazing that they work at all.

Back when Tim Berners-Lee put his first page of text online it's unlikely there was any formal way to test what he'd done. In the early days of the internet testing really consisted of opening a website in a browser and manually checking whether the content would load. If it did, brilliant. If it didn't, usually you'd forgotten to set the permissions correctly in your FTP app, so you fixed them and tried again, and the website worked.

For the first decade or so that was really everything that most web designers and developers did. If the website looked OK in Netscape Navigator you'd check it looked the same in Internet Explorer. If the layout was close (they were never the same) you'd fill in any forms on the page with some fake details and check they'd return what they were supposed to return. That was all there was to it.

It's little wonder that websites built in those early days were rife with bugs, usability problems, and security holes big enough to drive buses through. Testing was simply a matter of making sure code worked when the right inputs were given.

We needed something better.

The first big change in manual testing was the switch from "testing a website works" to testing to try to break things. There was a realisation that there's no point only checking your email input element accepts valid email addresses - you had to make sure it rejected invalid ones. As obvious as that seems now, it was a paradigm shift at the time. Websites improved immensely.
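To put that shift in mindset in concrete terms, here's a minimal sketch in JavaScript, using a made-up isValidEmail helper (the regex is illustrative, not production-grade). The important part is the last two assertions, which deliberately feed the function input it should reject.

```js
const assert = require('node:assert');

// A deliberately simple validator - purely for illustration.
function isValidEmail(value) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

// "Testing it works": valid addresses are accepted.
assert.strictEqual(isValidEmail('alice@example.com'), true);

// "Testing to break it": invalid addresses must be rejected too.
assert.strictEqual(isValidEmail('not-an-email'), false);
assert.strictEqual(isValidEmail('missing@tld'), false);
```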

The next big change came with a technological improvement. As servers improved in capability and fell in price, running a website on its own server became a reasonable proposition. With that came the ability to configure servers more explicitly, to use frameworks to build out bigger applications, and to orchestrate deployment. That in turn led to the beginnings of unit testing. Developers could, if they wanted, ensure that their code worked.
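As a rough illustration of what unit testing buys you, here's a sketch using Node's built-in test runner. The calculateVat function and its 20% rate are invented for the example, and the syntax is modern JavaScript rather than the server-side languages of the era, but the idea is the same: small, repeatable checks on one piece of logic at a time.

```js
const { test } = require('node:test');
const assert = require('node:assert');

// A hypothetical piece of server-side logic.
function calculateVat(netPrice, rate = 0.2) {
  if (netPrice < 0) throw new RangeError('price cannot be negative');
  return netPrice * rate;
}

test('calculates VAT at the standard rate', () => {
  assert.strictEqual(calculateVat(100), 20);
});

test('rejects negative prices', () => {
  assert.throws(() => calculateVat(-1), RangeError);
});
```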

That was really all there was until around 2010 - unit testing for the server-side code and manual testing for the front end.

As websites grew in complexity, with the advent of Ajax requests, DOM-manipulating jQuery plugins and data-driven pages, it became abundantly clear that we needed to test better. There were an increasing number of ways to interact with a page, and with that came an increased burden on testing.

Larger companies responsible for bigger sites recruited testing teams to manually check everything that a page could do, in every browser, over and over again. Smaller companies couldn't afford such a "luxury".

There needed to be a better way to test bigger and bigger applications. This is when browser automation started becoming more commonplace.

Selenium, and front-end testing using automation, wasn't completely new, but its use for testing was far from mainstream. By 2008 WebDriver had been created internally at Google, and the two projects merged in 2009 to become Selenium WebDriver. At this point front-end testing frameworks began to spring up.
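To give a flavour of what browser automation looks like, here's a small sketch using the selenium-webdriver package for Node. The URL, field name and .error selector are invented for the example - the point is that a script, rather than a person, drives a real browser and checks what happens.

```js
const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  // Launch a real browser that the script can drive programmatically.
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    // Placeholder URL and selectors - swap in your own page.
    await driver.get('https://example.com/signup');
    await driver.findElement(By.name('email')).sendKeys('not-an-email');
    await driver.findElement(By.css('button[type="submit"]')).click();

    // Wait for the validation message to appear, then read it.
    const error = await driver.wait(until.elementLocated(By.css('.error')), 5000);
    console.log(await error.getText());
  } finally {
    await driver.quit();
  }
})();
```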

At the same time as Selenium and WebDriver were taking hold for automated testing, other parts of the testing community were working on web software testing processes to make manual testing more efficient and much easier.

Testers began to approach testing in rigorous ways. The use of user stories made sure that the testing that was done actually covered the whole application. Written test scripts ensured that the tester worked through every possibility, instead of a random scattershot approach that relied on luck to cover everything an application could do. And checklists began to gain popularity as tools to ensure tests were actually done, with a record of what was tested and when. (If you're not doing these things already then check out the Manual Processes section.)

Project management for website development was also moving forwards. Tools like Trello, JIRA and Pivotal Tracker gained popularity, with agile methodologies being seen as a great way to develop a web project without needing to plan everything up front. Source control with Git and Subversion also made testing easier, as features could be written on branches in isolation, thoroughly tested, and only merged in when they were working properly. This reduced the need to manually test everything during development. Lastly, and hand in hand with the improvements in source control, came the rise of issue tracking. Being able to record what problems exist in a project is as important as actually fixing the bugs themselves, so better management was necessary. Most software-as-a-service source control applications came with their own issue trackers, so bugs were recorded alongside the source code they referred to.

The final piece of the testing puzzle, at least for manual (or semi-automatic) testing, was the invention of visual testing tools. These applications take screenshots of a website or web app in various states and automatically calculate the difference between areas of the screen. Anything that has changed is clearly highlighted, so parts of the page that shouldn't have been affected by a change stand out immediately. This gives designers the ability to check whether their HTML and CSS render consistently across different browsers.
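The underlying idea is simple enough to sketch. This example assumes the pngjs and pixelmatch packages (in their CommonJS builds) and two screenshot files named before.png and after.png of identical dimensions; commercial visual testing tools do far more, but the pixel-by-pixel comparison at their core looks roughly like this.

```js
const fs = require('fs');
const { PNG } = require('pngjs');
const pixelmatch = require('pixelmatch');

// Two screenshots of the same page - e.g. before and after a CSS change.
const before = PNG.sync.read(fs.readFileSync('before.png'));
const after = PNG.sync.read(fs.readFileSync('after.png'));

const { width, height } = before;
const diff = new PNG({ width, height });

// Compare pixel by pixel; regions that differ are drawn into diff.data.
const changedPixels = pixelmatch(before.data, after.data, diff.data, width, height, {
  threshold: 0.1,
});

fs.writeFileSync('diff.png', PNG.sync.write(diff));
console.log(`${changedPixels} pixels differ`);
```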

Visual testing tools bring us up to the state of the art for manual testing today.

If there are any aspects of testing covered here that you're not already using then now is the time to start. A lot of developers are put off by the perceived complexity of writing tests, and by the need to write code that is actually testable in the first place. These are terrible reasons not to test - they sacrifice the ability to move fast with the confidence that the changes you (or someone else, if you work in a team) make aren't breaking code that's already been written.

That said, there are strategies for improving the software we write without going straight to the most complicated solution, even if it's the best one. We can, and should, improve the way we do manual testing before anything else.
