When I was a kid, I watched a cartoon called The Jetsons. The futuristic family had a robotic maid named Rosie who took care of the household chores. To make things interesting, she was given a personality, and occasionally she demonstrated emotions. After watching this show, I would think about the future and imagine robots doing the majority of our work for us while we sat back and did more important things. Although many of these imaginings have not materialized, I often think of this when I am writing test automation as a QA engineer for the Church.
Even a simple application can have a large number of test cases that need to be verified. Let’s pretend we have an application with 100 test cases. The sooner we realize a given test case is failing, the better off we will be. Accordingly, we would want to test after each successful build. Let’s suppose that ten new builds are created each day. If we tested all of this functionality with each new build, we would be running 1,000 (100 × 10) test executions day in and day out. Umm . . . I think we need a Rosie!
Taking the construction of Rosie a piece at a time, we first need to decide how our tests will exercise the given application. Unit tests work well for testing individual components and methods. They are typically written in the same language as the application and are run with a framework such as JUnit or TestNG.
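The article’s examples are JUnit and TestNG for Java; the same idea can be sketched in a minimal, self-contained form with Python’s standard-library unittest. The add function here is a hypothetical component under test, not anything from a real application:

```python
import unittest

# A hypothetical method under test, standing in for any application component.
def add(a, b):
    return a + b

class AddTest(unittest.TestCase):
    """Each test exercises one behavior in isolation, as JUnit or TestNG tests do."""

    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_handles_negatives(self):
        self.assertEqual(add(-1, 1), 0)

# Run the suite programmatically and collect the result,
# much as a build server would invoke a JUnit runner.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The payoff is the same in any framework: the suite runs unattended and reports pass/fail, so a machine, not a person, does the checking.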
At times we will want to test at a higher level by writing functional tests. The tools for this vary depending on the application. For Web applications, Selenium can be used because it can drive tests across many browsers. The Watij/WatiN/Watir family can occasionally be used, and some of us are optimistically watching WebDriver. These tools usually consist of a library that automates a browser window: links can be clicked, forms filled in, and buttons pushed. When the overhead of an actual browser is not wanted, tools like HttpUnit can be helpful. Many tools exist to test Windows clients as well.
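A functional test without a real browser, in the spirit of HttpUnit, talks to the application over HTTP and asserts on what comes back. This is a self-contained sketch: the tiny DemoApp server is an assumed stand-in for the application under test, not a real one.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A stand-in application: one page with a link, like a site a
# functional test would drive.
class DemoApp(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><a href='/next'>Next</a></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def fetch(url):
    """The 'browserless browser': issue a GET and return status and body."""
    with urllib.request.urlopen(url) as resp:
        return resp.status, resp.read().decode()

server = HTTPServer(("127.0.0.1", 0), DemoApp)  # port 0 picks any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

status, html = fetch(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()

# The functional check: the page loaded and contains the expected link.
assert status == 200 and "Next" in html
```

A browser-driving tool like Selenium adds clicking, form-filling, and JavaScript on top of this; the basic shape of the test (drive the app, assert on the response) is the same.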
Another critical piece of building a Rosie is a continuous integration system. These tools watch a code repository; when new code is checked in, they grab the latest code, build it, deploy it, run tests against it, and alert us to the results, all without human interaction.
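That watch–build–test–report cycle can be sketched as a toy polling loop. Everything below is simulated: the repo list, the build step, and the test run are assumed stand-ins, not any real CI system’s API.

```python
# A toy sketch of the continuous-integration cycle described above.
def new_commits(seen, repo):
    """Poll the 'repository' for revisions we have not built yet."""
    return [rev for rev in repo if rev not in seen]

def build(rev):
    return f"app-{rev}"  # pretend compile/package step

def run_tests(artifact):
    # Pretend we ran the full automated suite against the artifact.
    return {"artifact": artifact, "passed": 100, "failed": 0}

def notify(result):
    return f"{result['artifact']}: {result['passed']} passed, {result['failed']} failed"

repo = ["r1", "r2", "r3"]  # simulated commit history
seen, reports = set(), []
for rev in new_commits(seen, repo):  # the CI server's polling pass
    seen.add(rev)
    reports.append(notify(run_tests(build(rev))))
```

A real CI server runs this loop forever on a schedule or on repository hooks; the point is that every checked-in revision gets built and tested with no one watching.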
An added benefit to automated testing is measuring how much of our code was executed by our tests (commonly called code coverage). Although having a high coverage number doesn’t guarantee a well-tested application, a low coverage number does indicate more testing is needed.
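The coverage idea can be sketched with Python’s standard-library trace and dis modules: count the lines a test actually executed against the lines that hold code. The classify function and the counting logic here are illustrative assumptions, not how any particular coverage tool is implemented.

```python
import dis
import trace

# A hypothetical function with two branches; the "test" below exercises only one.
def classify(n):
    if n > 0:
        return "positive"
    return "non-positive"

# Record which lines execute, using the stdlib trace module.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 5)  # only the "positive" branch runs

code = classify.__code__
# Every line in classify that holds executable code...
all_lines = {line for _, line in dis.findlinestarts(code) if line is not None}
# ...versus the lines the test actually reached.
hit_lines = {line for (fname, line) in tracer.results().counts
             if fname == code.co_filename and line in all_lines}

coverage = len(hit_lines) / len(all_lines)  # below 1.0: a branch was never tested
```

In practice a coverage tool does this bookkeeping for the whole application, but the principle is the same: the untested "non-positive" branch shows up as a gap, which is exactly the signal a low coverage number gives us.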
I’ve found that using these tools has made me more effective in my role. The tedious work of regression testing is minimized and made more accurate by a well-built and maintained set of automated tests. With the extra time available, we can then focus on improving other tools and processes, in keeping with our recent guidance to do more with less.
Brandon Nicholls is a senior QA engineer for the Church.