TDD in Practice - Dealing with Hard-To-Test Areas

Introduction

*This article was created (with permission) from posts on Ian's blog on CodeBetter.*

I wanted to talk about the issues people hit when they begin working with TDD – the same issues that tend to make them abandon TDD after an initial experiment. These are the 'hard-to-test' areas: the things production code needs to do that presentations and introductory books just don't seem to explain well.

First, let's start with a quick review of TDD, and then get into why people fail when they start trying to use it.

Quick review of TDD

TDD is an approach to development in which we write our tests before writing production code. The benefits of this are:

  • Tests help us improve quality: Tests give us prompt feedback. We receive immediate confirmation that our code behaves as expected. The cheapest point to fix a defect is at the point you create it.
  • Tests help us spend less time in the debugger: When something breaks, our tests are often granular enough to show us what has gone wrong without requiring us to debug. If they don’t, our tests are probably not granular or well-authored enough. Debugging eats time, so anything that helps us stay out of the debugger helps us deliver at a lower cost.
  • Tests help us produce clean code: We don’t add speculative functionality, only code for which we have a test.
  • Tests help us deliver good design: Our test proves not just our code, but our design, because the act of writing a test forces us to make decisions about the design of the SUT.
  • Tests help us keep a good design: Our tests allow us to refactor – changing the implementation to remove code smells, while confirming that our code continues to work. This allows us to do incremental re-architecture, keeping the design lean and fit while we add new features.
  • Tests help to document our system: If you want to know how the SUT should behave, examples are an effective means of communicating that information. Tests provide those examples.

Automating our tests lowers the cost of running them: we pay the cost of writing a test once, but because we can then re-run it at marginal cost, it keeps delivering those benefits throughout the system’s lifetime. Automated tests are ‘the gift that keeps on giving’. Software spends more of its life in maintenance than in development, so reducing the cost of maintenance lowers the cost of software.

The Steps

The steps in TDD are often described as Red-Green-Refactor (a minimal sketch of one turn of the cycle follows the list):

  1. Red: Write a failing test (there are no tests-for-tests, so this checks your test for you)
  2. Green: Make it pass
  3. Refactor: Clear up any smells in the implementation resulting from the code we just added.
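
As a minimal sketch of one turn of the cycle, assuming NUnit and a hypothetical Basket class:

```csharp
using NUnit.Framework;

// Red: write this test first. It fails (or fails to compile) because
// Basket does not yet exist.
[TestFixture]
public class BasketTests
{
    [Test]
    public void Total_Of_An_Empty_Basket_Is_Zero()
    {
        var basket = new Basket();
        Assert.AreEqual(0m, basket.Total);
    }
}

// Green: the simplest implementation that makes the test pass.
public class Basket
{
    public decimal Total
    {
        get { return 0m; }
    }
}

// Refactor: with the test green, tidy the implementation (rename,
// extract, remove duplication) and re-run the test to confirm that
// behavior is unchanged.
```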

Where to find out more

Kent Beck’s book Test-Driven Development: By Example remains the classic text for learning the basics of TDD.

Quick Definitions

System Under Test (SUT) – Whatever we are testing; this may differ depending on the level of the test. For a unit test this might be a class, or a method on that class. For acceptance tests it may be a slice of the application.

Depended Upon Component (DOC) – Something that the SUT depends on, a class or component.
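
For a concrete (and hypothetical) illustration: in a unit test of an OrderProcessor, the processor is the SUT and the payment gateway it calls is the DOC:

```csharp
// Hypothetical example types, for illustration only.
public interface IPaymentGateway
{
    bool Charge(decimal amount);
}

public class OrderProcessor // the SUT
{
    private readonly IPaymentGateway _gateway; // the DOC

    public OrderProcessor(IPaymentGateway gateway)
    {
        _gateway = gateway;
    }

    public bool Process(decimal amount)
    {
        return _gateway.Charge(amount);
    }
}
```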

What do we mean by hard-to-test?

The Wall

When we start using TDD we rapidly hit a wall of hard-to-test areas. Perhaps the simple red-green-refactor cycle begins to get bogged down when we start working with infrastructure-layer code that talks to the Db or an external web service. Perhaps we don’t know how to drive our UI through an xUnit framework. Or perhaps we have a legacy codebase, and putting even the smallest part under test quickly becomes a marathon instead of a short sprint.

TDD newbies often find that it all gets a bit sticky and, faced with schedule pressure, drop TDD. Having dropped it, they lose faith in its ability to deliver for them while still meeting the schedule. We are all the same: under pressure we fall back on what we know; hit a few difficulties with TDD and developers stop writing tests.

The common thread among hard-to-test areas is that they break the rhythm of our rapid test-and-check-in cycle, and their tests are expensive and time-consuming to write. The tests are often fragile, fail erratically, and are difficult to maintain.

The Database

  • Slow Tests: Database tests run slowly, up to 50 times more slowly than normal tests. This breaks the cycle of TDD. Developers tend to skip running all the tests because it takes too long.
  • Shared Fixture Bugs: A database is an example of a shared fixture. A shared fixture shares state across multiple tests. The danger here is that Test A and Test B each pass in isolation, but running Test A after Test B changes the state of the fixture so that Test A fails unexpectedly. These kinds of bugs are expensive to track down and fix. You end up with a binary-search pattern to resolve shared fixture issues: trying combinations of tests to see which fail. Because that is so time-consuming, developers tend to ignore or delete these tests when they fail.
  • Obscure Tests: To avoid shared fixture issues, people sometimes try to start with a clean database. In the setup for their test they populate the Db with any values they need, and in the teardown they clean them out. These tests become obscure, because the setup and teardown code adds a lot of noise, distracting from what is really under test. This makes tests harder to read and less granular, and thereby harder to find the cause of failure in. The Db setup and teardown code is another point of failure. Remember that the only test we have for our tests themselves is to write a failing test; once there is too much complexity in the test itself, it becomes difficult to know whether the test is functioning correctly. It also makes tests harder to write: you spend a lot of time writing setup and teardown code, which shifts your focus away from the code you are trying to bring under test, breaking the TDD rhythm.
  • Conditional Logic: Database tests also tend to end up with conditional logic – because we cannot be sure what we will get back, we insert a conditional check on the result. Our tests should not contain conditional logic; we should be able to predict the behavior of our tests. Among other issues, we test our tests by making them fail first, and introducing too many paths creates the risk that the errors are in our test, not in the SUT (see the sketch after this list).
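
A hedged illustration of that last point, using NUnit and hypothetical types (CustomerRepository stands in for real Db access and is not defined here). The first test must branch on whatever the shared Db happens to contain; the second controls its own state and can assert a single predicted outcome:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

public class Customer
{
    public readonly string Name;
    public Customer(string name) { Name = name; }
}

// A simple in-memory fake, so the second test controls its own state.
public class InMemoryCustomerRepository
{
    private readonly Dictionary<string, Customer> _store =
        new Dictionary<string, Customer>();

    public void Add(Customer c) { _store[c.Name] = c; }

    public Customer FindByName(string name)
    {
        Customer found;
        return _store.TryGetValue(name, out found) ? found : null;
    }
}

[TestFixture]
public class ConditionalLogicSmell
{
    [Test]
    public void Smell_Branching_On_What_A_Shared_Db_Happens_To_Contain()
    {
        // CustomerRepository is hypothetical and talks to a real, shared Db.
        var repository = new CustomerRepository();
        var customer = repository.FindByName("Smith");

        if (customer != null)  // the test cannot predict its own result
            Assert.AreEqual("Smith", customer.Name);
        else
            Assert.Fail("No row found - the SUT's fault, or the data's?");
    }

    [Test]
    public void Better_A_Deterministic_Test_Against_State_The_Test_Controls()
    {
        var repository = new InMemoryCustomerRepository();
        repository.Add(new Customer("Smith"));

        Assert.AreEqual("Smith", repository.FindByName("Smith").Name);
    }
}
```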

The UI

  • Not an xUnit strength: xUnit tools are great at driving an API, but less good at driving a UI. This tends to be because a UI runs inside a framework that the test runner would need to emulate or interact with: testing a WinForms app needs the message pump; testing a Web Forms app needs the ASP.NET pipeline. Solutions like NUnitAsp have proved less effective at testing UIs than scripting tools like Watir or Selenium, often lacking support for features such as JavaScript on pages.
  • Slow Tests: UI tests tend to be slow tests because they are end-to-end, touching the entire stack down to the Db.
  • Fragile Tests: UI tests tend to be fragile because they often fall foul of attempts to refactor our UI: changing the order or position of fields, or the type of control used, will often break our tests. This makes UI tests expensive to maintain (see the sketch after this list).
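
As a hedged sketch of that fragility – using the modern Selenium WebDriver API rather than the generation of tools named above, against a hypothetical login page – a test that locates a field by its position in the markup breaks as soon as the layout changes, while one that uses a stable id survives:

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

IWebDriver driver = new ChromeDriver();
driver.Navigate().GoToUrl("http://example.com/login"); // hypothetical page

// Fragile: locating the field by its position in the form.
// Reordering the fields breaks this test without any change in behavior.
IWebElement fragile = driver.FindElement(By.XPath("//form/input[2]"));

// Less fragile: a stable id survives layout changes, though swapping
// the control type can still break the interaction.
IWebElement stable = driver.FindElement(By.Id("password"));

stable.SendKeys("secret");
driver.Quit();
```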

The Usual Suspects

We can identify a list of the usual suspects that cause issues for successful unit testing:

  • Communicating across a network
  • Touching the file system
  • Requiring the environment to be configured
  • Making an out-of-process call (including talking to the Db)
  • Driving the UI

Where to find out more

xUnit Test Patterns: Gerard Meszaros' site and book are essential reading if you want to understand the patterns involved in test-driven development.

Working Effectively with Legacy Code: Michael Feathers' book is the definitive guide to test-first development in scenarios where you are working with legacy code that has no tests.

Depend upon Abstractions

The Gang of Four’s first principle is to program against abstractions, not implementations. If we use abstractions then we can solve the hard-to-test problem by implementing the abstraction in terms of the hard-to-test dependency in production, but in terms of a simple-to-test dependency in test. So we could use an IDatabase to abstract our interaction with the Db, using concrete ADO.NET classes in production but replacing them with an in-memory collection for testing. Jeremy Miller summarizes this approach as ‘isolate the ugly stuff’.
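
A minimal sketch of that idea, reusing the hypothetical Customer type from the earlier sketch (the IDatabase members shown are illustrative, not a prescribed API):

```csharp
using System.Collections.Generic;

// The abstraction over the hard-to-test dependency.
public interface IDatabase
{
    void Save(Customer customer);
    Customer FindByName(string name);
}

// Production implementation: wraps ADO.NET and talks to the real Db.
public class SqlDatabase : IDatabase
{
    public void Save(Customer customer) { /* ADO.NET calls here */ }
    public Customer FindByName(string name) { /* ADO.NET calls here */ return null; }
}

// Test implementation: an in-memory collection - fast and deterministic.
public class InMemoryDatabase : IDatabase
{
    private readonly Dictionary<string, Customer> _store =
        new Dictionary<string, Customer>();

    public void Save(Customer customer)
    {
        _store[customer.Name] = customer;
    }

    public Customer FindByName(string name)
    {
        Customer found;
        return _store.TryGetValue(name, out found) ? found : null;
    }
}
```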

We need to show some rationality here. No one wants IString, everyone wants IDatabase, and we probably don’t need an ICustomer – but we might. The advantage of TDD is in flushing out where ICustomer is useful by exercising the SUT. So when designing for testability we need to think about which dependencies we will allow to be concrete and which we won’t. Different schools of programming make different value judgments here. Classicist approaches tend to avoid replacing all dependencies, focusing instead on those needed to support extensibility and layering.

Design principles should help us identify when to use interfaces, and these tend to provide the opportunities we need to use abstractions to isolate hard-to-test code.

Layers

A layered architecture also creates a need for abstractions. The layers must communicate but, like a layer cake, higher layers may depend on lower layers and not vice versa. To effect this, higher layers in our architecture should depend on an abstraction exposed by the lower layer, not on a concrete type. The two layers can communicate via the agreed contract, but the higher layer has no dependency on the lower layer’s concrete types. Robert Martin’s Dependency Inversion Principle states: “High-level modules should not depend upon low-level modules. Both should depend upon abstractions. Abstractions should not depend upon details. Details should depend upon abstractions.”

This dovetails with hard-to-test areas because layer boundaries often coincide with them – the UI, or access to external systems – so using abstractions when we layer helps us achieve testability. To ensure cohesion we often talk to a façade when we cross a layer boundary, which hides the complexity of the other layer; this again simplifies testing, by removing the need to create the objects behind the façade as part of our test setup.
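
As a sketch under the same caveats (all types here are hypothetical), the UI layer depends only on a façade abstraction exposed by the domain layer:

```csharp
// Exposed by the lower (domain) layer as its contract.
public interface IOrderService
{
    void PlaceOrder(string customerName, decimal amount);
}

// The higher (UI) layer depends only on the abstraction, so it can be
// tested with a simple fake and never touches the concrete lower layer.
public class OrderController
{
    private readonly IOrderService _orders;

    public OrderController(IOrderService orders)
    {
        _orders = orders;
    }

    public void Submit(string customerName, decimal amount)
    {
        _orders.PlaceOrder(customerName, amount);
    }
}

// A façade in the lower layer hides its internal complexity
// (repositories, pricing, notifications) behind the single contract.
public class OrderService : IOrderService
{
    public void PlaceOrder(string customerName, decimal amount)
    {
        // coordinate the domain objects here
    }
}
```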

Open-Closed Principle

In Agile Principles, Patterns, and Practices in C#, Robert Martin describes the Open-Closed Principle as “Software entities (classes, modules, functions, etc.) should be open for extension but closed for modification.”

To achieve this ‘impossible thing before breakfast’ we use polymorphism. By creating an abstraction (either explicitly, using an interface or abstract base class, or implicitly, by marking methods as virtual), we define a contract that fixes how we interact with that type – it is closed for modification – while allowing many concrete implementations of that type – it is open for extension.
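
A minimal sketch, assuming a hypothetical pricing example: callers written against the abstraction never change, yet we extend behavior by adding new implementations:

```csharp
// Closed for modification: callers are written against this contract.
public abstract class PriceCalculator
{
    public abstract decimal PriceFor(decimal baseAmount);
}

// Open for extension: new pricing rules arrive as new types,
// without touching the calculator's callers.
public class StandardPricing : PriceCalculator
{
    public override decimal PriceFor(decimal baseAmount)
    {
        return baseAmount;
    }
}

public class DiscountPricing : PriceCalculator
{
    public override decimal PriceFor(decimal baseAmount)
    {
        return baseAmount * 0.9m; // 10% off
    }
}
```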

Seams

The extension points provided by abstractions are ideal for testing, as they allow us to replace depended-upon components for testing. Michael Feathers calls these points seams: “A seam is a place where you can alter behavior in your program without editing in that place.” A virtual method can be an especially useful seam in legacy code, where we cannot reasonably extract or introduce an interface; it gives us a point of extensibility through which test code can alter behavior, as the sketch below shows.
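
A hedged sketch of that subclass-and-override technique (the types are hypothetical): the test subclasses the SUT and overrides the virtual method that wraps the hard-to-test call:

```csharp
// Legacy class we cannot easily refactor to take an interface.
public class InvoiceSender
{
    public void Send(decimal amount)
    {
        var body = "Invoice for " + amount;
        Transmit(body);
    }

    // The virtual method is the seam: production code transmits over
    // the network; tests can override it.
    protected virtual void Transmit(string body)
    {
        // network call in production
    }
}

// Testing subclass: replaces the hard-to-test behavior at the seam.
public class TestableInvoiceSender : InvoiceSender
{
    public string LastBody;

    protected override void Transmit(string body)
    {
        LastBody = body; // record instead of transmitting
    }
}

// In a test:
// var sender = new TestableInvoiceSender();
// sender.Send(10m);
// Assert.AreEqual("Invoice for 10", sender.LastBody);
```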

