This article was originally published at Methods and Tools
What is the one activity or phase that improves the quality of your application? The answer is an easy one: Testing, and plenty of it.
But when do we traditionally perform testing? If you are following a "waterfall" style approach to software development, it’s very likely that you have a testing phase somewhere towards the expected end of the project. This is peculiar, because at this late stage in a software development project the cost of any code or requirement change is known to be much higher. Figure 1 presents the five major phases of the waterfall approach; I have superimposed Barry Boehm’s "Cost of Change" curve on to it.
Figure 1: Traditional "Cost of Change" curve with the waterfall model superimposed
Clearly, if we can move the testing phase further to the left we stand a chance of reducing costs whilst enjoying the natural benefit of a longer testing period. TDD helps us achieve this by embracing agile techniques built around iterative, incremental development. Instead of performing the five waterfall stages noted in Figure 1 horizontally, left to right (i.e. requirements gathering, then analysis & design, and so on), we perform the stages vertically, i.e. a little of each stage in every iteration until the project is complete.
By embracing iterative development, TDD allows us to be much more flexible towards our client’s ever-changing requirements. Instead of being in the position where we have spent a lot of waterfall time developing a product or feature, at which point the cost of change is very high, we find ourselves in the enviable position of being able to demonstrate parts of the product or feature much earlier. By demonstrating features earlier, the cost of the inevitable changes is much lower. Of course, knowing about requirement changes earlier in the process means that important architectural decisions (i.e. those that make requirement changes much harder to implement) have yet to be made, or are at least in their infancy and not yet the brick wall they might become.
Over the course of this article I will demonstrate how TDD can improve your application’s quality by introducing testing much earlier into the process. Additionally, I hope to make clear the agile world’s tenet of flexibility: the acceptance that clients will change their minds during the course of a project. By maintaining a suite of tests that are both repeatable and automated, we enjoy the luxury of being able to make sweeping changes to our code, knowing that any deviations and bugs will be picked up by re-running the test suite.
What is TDD?
TDD is a radical process that promotes the notion of writing test cases that then dictate, or drive, the further development of a class or piece of code. This is often referred to as "writing tests first". Indeed, I will refer to "writing tests first" as one of TDD’s primary principles.
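To sketch what "writing tests first" looks like in practice, consider the following illustrative example (the `Stack` class and its test are hypothetical, invented for this article, not taken from any library):

```csharp
using System;

// A hypothetical test written *first*: the Stack class below only
// exists because this test demanded it.
public class StackTests
{
    public static void TestPushThenPopReturnsSameItem()
    {
        Stack stack = new Stack();      // drives us to create a Stack class
        stack.Push("hello");            // ...with a Push method
        object item = stack.Pop();      // ...and a Pop method

        if (!"hello".Equals(item))
            throw new Exception("Pop should return the last item pushed");
    }

    public static void Main()
    {
        TestPushThenPopReturnsSameItem();
        Console.WriteLine("All tests passed");
    }
}

// The simplest implementation that makes the test pass. TDD encourages
// exactly this: write no more production code than the test requires.
public class Stack
{
    private object _top;
    public void Push(object item) { _top = item; }
    public object Pop() { return _top; }
}
```

Note that the test was conceived before `Stack` had any methods at all; the calls to `Push` and `Pop` are what drove those methods into existence.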
Whilst I refer to TDD as a "process", followers of Agile communities, including XPers (eXtreme Programming practitioners), will know TDD as an integral "core" practice. Although many of the XP-oriented books push TDD as a primary part of their content, we are lucky enough to be able to adopt TDD as a process that can be "bolted on" to our existing software development process, regardless of whether that is Agile, waterfall, or a mix of the two. There may well be entrenched differences in the semantics of the words "practice" and "process". My view is that in an Agile/XP environment, perhaps where Scrum is practised, TDD can be practised too; outside of a strictly Agile environment, TDD is a sub-process that becomes part of the development process.
TDD is radical because traditionally developers either test their own code, or they have a testing group that undertakes this task. Sadly, developers who test their own code are too close to both the problem and the solution; they have a personal ego investment that has to be looked after. Thus developers rarely perform as much testing as we would like.
In fact, the first time a developer tests his/her code, it’s likely to be heavily tested. However, the second time a developer is asked or expected to test the same piece of code or functionality, testing will not be as involved. By the time they are asked to test a third and fourth time, boredom sets in and the quality and coverage of testing reduce considerably. As a developer myself, I know this has happened to me, and an informal poll of my colleagues in the community revealed similar results.
If you are building a product using the C# language, typically TDD suggests that you’ll write and maintain a set of tests also written in C#. Ideally the tests will exercise all of your production code. One of the TDD catch phrases is: "no code goes into production without associated tests". That is a fairly tall order; however, if you are able to achieve it you will be able to realise high-quality code, with any bugs being caught before they are shipped.
If you have been reading Martin Fowler’s works about refactoring, TDD lends itself to the refactoring process. After all, refactoring is about changing a class/method’s internal design without affecting the external behaviour. What better way to ensure your refactoring hasn’t broken anything than with a solid, reusable set of test cases? I know that I’m not alone – we’ve all written some code, tested it, then re-written it (or "improved" it!) only to find that the re-write introduces problems elsewhere in the application.
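To illustrate the safety net a test provides during refactoring, here is a hypothetical example (the `ReportFormatter` class and its method are invented for this sketch): the test pins down external behaviour, so the method’s internals can be rewritten with confidence.

```csharp
using System;

// Hypothetical production class. The test below fixes its external
// behaviour, so we may refactor TotalLength's internals freely.
public class ReportFormatter
{
    // Original version: a hand-rolled indexed loop. A refactoring could
    // replace this with a foreach loop (or any other construct) and the
    // test below would still have to pass unchanged.
    public int TotalLength(string[] lines)
    {
        int total = 0;
        for (int i = 0; i < lines.Length; i++)
            total += lines[i].Length;
        return total;
    }
}

public class ReportFormatterTests
{
    public static void Main()
    {
        ReportFormatter formatter = new ReportFormatter();
        int total = formatter.TotalLength(new string[] { "ab", "cde" });
        if (total != 5)
            throw new Exception("TotalLength changed behaviour!");
        Console.WriteLine("Refactoring safe: test still passes");
    }
}
```

If a refactoring of `TotalLength` accidentally changed its result, re-running this test would catch the regression immediately, which is precisely the guarantee refactoring relies on.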
TDD is a different mind-set. It is the ‘tests’ that ‘drive’ the ‘development’, not the other way around. By writing a test case for a scenario, your test either dictates your implementation or gives you some good pointers. It might not be the ideal implementation, but that’s part of the TDD process – small, fine-grained steps lead to increased developer confidence which results in larger steps downstream.
Removing The Boredom Of Testing
Notice that I used the phrase: "writing tests first". If we are to remove the boredom of the testing process, we need to codify the tests that we would perform. By writing test cases, typically in the same programming language as the application or class under test, we are essentially automating the test process.
Automating the process is the key. If we can make the tests easy to perform once, twice, three and four times, ideally at the click of a button, developers will be happy to run the tests frequently. Of course, each time we run the tests we are performing the same tests over and over again; the tests are said to be repeatable.
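To show what such automation looks like under the hood, here is a minimal sketch of a runner that discovers and executes tests by naming convention, much as the xUnit tools do (the `TemperatureTests` fixture and all its method names are hypothetical, invented for this illustration):

```csharp
using System;
using System.Reflection;

// A hypothetical fixture: every public method whose name starts with
// "Test" is treated as a test case.
public class TemperatureTests
{
    public void TestFreezingPoint()
    {
        if (ToFahrenheit(0) != 32) throw new Exception("expected 32");
    }
    public void TestBoilingPoint()
    {
        if (ToFahrenheit(100) != 212) throw new Exception("expected 212");
    }
    private int ToFahrenheit(int celsius) { return celsius * 9 / 5 + 32; }
}

// A toy runner: discovers the Test* methods via reflection, runs each
// one, and counts passes and failures.
public class TinyRunner
{
    public static void Main()
    {
        TemperatureTests fixture = new TemperatureTests();
        int passed = 0, failed = 0;
        foreach (MethodInfo method in fixture.GetType().GetMethods())
        {
            if (!method.Name.StartsWith("Test")) continue;
            try { method.Invoke(fixture, null); passed++; }
            catch (Exception) { failed++; }
        }
        Console.WriteLine("Passed: " + passed + ", Failed: " + failed);
    }
}
```

Because tests are discovered by convention, running every test again is a single command (or button click); adding a new test is just adding a new `Test...` method, which is what makes the whole suite cheaply repeatable.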
Repeatable testing is also a major benefit. Where a developer performs manual testing, we know from experience that they test less each time. Similarly, when a developer is asked to perform a particular test, unless there is a very precise test script available, the chances are that a human-driven non-trivial test will not be performed identically each and every time.
TDD Frameworks and Tools
TDD has origins in the Java and Smalltalk camps. Originally there was a tool by the name of SUnit (for Smalltalk). That was followed closely by a tool called JUnit (for Java). Since then, the term ‘xUnit framework’ has been used to represent language-agnostic versions of the same tool. The xUnit framework has been ported to many different platforms and languages.
NUnit is a .NET implementation of an xUnit tool or framework that works with C# and Visual Basic.NET amongst others. Outside of the Microsoft IDEs, Borland’s Delphi has its own xUnit testing framework called DUnit. Given the uptake of Microsoft .NET, over the course of this article I will demonstrate NUnit using a C# example.
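As a taste of what is to come, a minimal NUnit test fixture looks like this (NUnit 2.x attribute style, as used in the Visual Studio 2003 era; the `BankAccount` class is hypothetical, and the project must reference `nunit.framework.dll` for this to compile):

```csharp
using NUnit.Framework;

// [TestFixture] marks the class as containing tests; [Test] marks each
// test case. The NUnit GUI or console runner discovers and runs these.
[TestFixture]
public class BankAccountTests
{
    [Test]
    public void DepositIncreasesBalance()
    {
        BankAccount account = new BankAccount();
        account.Deposit(100);
        Assert.AreEqual(100, account.Balance);
    }
}

// A hypothetical production class, kept deliberately small.
public class BankAccount
{
    private int _balance;
    public int Balance { get { return _balance; } }
    public void Deposit(int amount) { _balance += amount; }
}
```

The attributes replace any need for a hand-written runner: NUnit finds `BankAccountTests` by reflection and reports a green or red bar for the whole suite.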
I will assume that you have downloaded and installed NUnit and that you have a C# IDE installed (e.g. Microsoft Visual Studio 2003, Delphi 2005 or C#Builder). I am using Microsoft Visual Studio 2003; if you are using a different environment, the screenshots will vary.