Test Driven Development (TDD) is a development process built on very short testing and coding iterations, where the test code for a piece of functionality always precedes the implementation. A test should be written to fail before any implementation is coded; this is the Red-Green-Refactor cycle: write a failing test (red), write just enough code to make it pass (green), then clean up the design (refactor). I was introduced to TDD in the past year or so, and I actually practiced it a little without knowing it while working on college projects.
The development team I belong to consists of Christian Hargraves (a strong proponent of TDD) and me. As a team we’ve been trying to do TDD and pair programming at least a couple of times a week. Our pattern is for one developer to write the unit test and create any classes or methods required to simply make the code compile and the test fail. At this point, the other developer takes the helm and does the bare minimum to make the test pass. He then writes the next failing test. The result is that the simplest solution is implemented first. This cycle continues until both developers are confident that the functionality will meet all the requirements and that the tests are all passing. This method of development is often referred to as ping pong programming.
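One red-green iteration of this ping pong pattern can be sketched in code. This is only an illustrative example, not our actual code base; the `Stack` class and the test name are hypothetical, and I'm using plain Python assertions rather than any particular test framework.

```python
# Step 1 (red): the first developer writes a failing test. When this is
# first written, Stack does not exist yet, so the test cannot pass; the
# IDE (or the developer) then creates just enough to make it compile.
def test_peek_returns_last_pushed_item():
    s = Stack()
    s.push(42)
    assert s.peek() == 42

# Step 2 (green): the second developer writes the bare minimum to pass.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def peek(self):
        # Simplest thing that works; handling an empty stack would be
        # driven out by a later failing test.
        return self._items[-1]

# The test now passes, closing one red-green iteration. The second
# developer would then write the next failing test (e.g. for pop).
test_peek_returns_last_pushed_item()
```

The point of the bare-minimum step is that nothing gets implemented until a failing test demands it, which keeps the simplest solution in front.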
Through my TDD experience I have gained a greater appreciation for several things that help maintain a good coding rhythm and improve productivity. First of all, if you are writing a test for a nonexistent method or class, it can be tedious to jump back and forth between editor tabs creating that code. IDE integration is a huge help here: the ability to create classes and methods on the fly from within a test saves a lot of friction.
Something that may not initially seem important is the execution time of the tests you are writing. If you are constantly executing slow-running tests during these mini development cycles, it can throw off the rhythm. Strategies such as stubbing, mocking, and simple code optimizations can speed up your tests. Another tool is code coverage: a useful metric, though I've always been leery of using it as a measuring stick. That said, code coverage can still be a great tool for identifying weaknesses in the current tests. If you are writing good tests, knowing which code has been thoroughly exercised gives you an extra bit of confidence.
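To illustrate the mocking point, here is a minimal sketch using Python's standard `unittest.mock`. The `convert` function and its `rate_source` dependency are hypothetical; the idea is that a slow external lookup is replaced with a fast fake so the test runs instantly and the rhythm is preserved.

```python
from unittest.mock import Mock

# Hypothetical production code: the exchange-rate lookup is injected so a
# test can substitute a fast fake for the slow real service.
def convert(amount, currency, rate_source):
    return amount * rate_source.rate_for(currency)

# Fast test: a Mock stands in for the slow service. We are testing
# convert()'s logic, not the network, so the test completes in
# microseconds instead of seconds.
def test_convert_applies_rate():
    fake_rates = Mock()
    fake_rates.rate_for.return_value = 2.0
    assert convert(10, "EUR", fake_rates) == 20.0
    # The mock also lets us verify the interaction itself.
    fake_rates.rate_for.assert_called_once_with("EUR")

test_convert_applies_rate()
```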
An issue I’ve struggled with while doing TDD has been my tendency to approach everything with a black-box testing mentality. I found that in order to turn a black box into a white box, I needed to gain more confidence in that code by writing tests at a lower level (not always possible, depending on the available source). As I did this, my overall confidence in the code increased and my need for integration testing went down.
Our team is settling on a good balance in how we use TDD to develop new features. I think simple common sense can help find this balance: simply ask, “Is this test adding any value or confidence?” The TDD process has helped me learn our existing code base faster, and I have a much higher level of confidence in the code we write. I wouldn’t hesitate to refactor any piece of our code.
Dax Haslam is a software engineer for the Church.