Saturday, December 29, 2007

Real world test driven development (TDD) - Perhaps "Requirements Driven Design and Development"?

Test driven development seems to divide programmers into three camps: those who think it is the best thing since sliced bread, those who think it is a good idea but impractical because of the time it takes, and those who have no idea what it actually is.

It seems that if you start using TDD on any project, it very quickly becomes clear why you might want to work this way. Simply put, it isn't so much about writing tests as about evolving a solid, loosely coupled solution that meets the needs of the requirements and nothing more. Looked at from that point of view, surely nobody would argue that doing all of the design up front is the better idea, and in agreeing with that, you are also agreeing with the principles of TDD.

But there's more! As a by-product of evolving this wonderful software design, one that is neither over-engineered nor tightly coupled, we also end up with tests, and code!
It probably seems strange that I mention the tests at all, but a practitioner of TDD very quickly notices that the tests are manifestations of the requirements.

So we're really looking at test driven development from the point of view of what we want the solution to do, which is the software's requirements. The technique doesn't in any way imply that we don't gather requirements, but what it does do is give us a framework within which to capture further requirements as they become apparent. What I mean by that is that we end up with a suite of tests that always exercise the required functionality, and should that functionality ever stop working correctly, the test fails and the requirement is no longer met. This means we can add or change functionality and know whether each particular requirement is or is not still met.
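As a minimal sketch of what I mean, assuming NUnit and an invented Basket class, a requirement such as "a new basket has a total of zero" is written as a test first, followed by the simplest code that makes it pass:

    using NUnit.Framework;

    // The requirement, expressed as an executable test before any code exists.
    [TestFixture]
    public class BasketTests
    {
        [Test]
        public void NewBasket_HasZeroTotal()
        {
            Basket basket = new Basket();

            Assert.AreEqual(0m, basket.Total);
        }
    }

    // The simplest implementation that satisfies the requirement above.
    public class Basket
    {
        public decimal Total
        {
            get { return 0m; }
        }
    }

If somebody later changes Basket so that a new basket no longer totals zero, this test fails and tells us exactly which requirement has been broken.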

Well, I've worked on a few projects in my time, and on every one we are asked how far from completion we are. How can we tell? Well, if we're sensible, we have some unit tests. The number of not-yet-implemented tests and the number of failing tests give us a reasonable measure of how far we are from a successful build of the solution. We now have a technique that lets us design our software according to the requirements, and that produces tests which tell us how we are progressing.
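To illustrate (the test names here are invented), requirements that have been captured but not yet implemented can be parked as ignored or deliberately failing tests, so the test runner's counts of passing, failing and ignored tests become a rough progress report:

    using NUnit.Framework;

    [TestFixture]
    public class CheckoutRequirements
    {
        // Captured but not yet implemented: shows up in the ignored count
        // rather than quietly passing.
        [Test, Ignore("Requirement captured, not yet implemented")]
        public void LoyaltyPoints_AreEarnedOnCheckout()
        {
        }

        // Alternatively, a deliberate failure keeps the outstanding work
        // visible in the failing count on every build.
        [Test]
        public void GiftVouchers_ReduceTheAmountDue()
        {
            Assert.Fail("Requirement captured, implementation outstanding");
        }
    }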

In implementing the code that allows a particular test to pass, we also build up the solution itself. Now, I wonder how we could possibly not want good, mature designs, unit tests and a working solution, when there is a technique aimed directly at producing exactly that?

One major barrier to the use of TDD is design patterns. Don't misquote me here: design patterns are awesome, but what they often do is hand a developer a design before they fully understand the problem space. What that means is that, as developers, we're often tempted to jump straight in and start coding without fully considering the requirements. TDD forces you to implement only what the solution needs and nothing more. Design patterns lead developers to think that TDD is slow because they already know all the answers. What they should really be asking is whether that design pattern comes with code and unit tests already constructed for their requirements. Almost certainly it does not.

Instead of rushing in with a preconceived idea of the implementation, a developer should use TDD to evolve the design. When it becomes apparent that the solution has evolved to a point at which a design pattern is the appropriate way to satisfy a requirement, then it should be used. Working this way, the developer has documented evidence, in the tests that are already in place, that such a pattern really is required by the solution.

Test driven development really requires the use of mock objects, which seem strange, to say the least, until you understand how they work, at which point they become invaluable. When the need for a design pattern becomes apparent, a new set of tests should probably be written to test the pattern separately. In the original test project, where the need for the pattern was discovered, mock objects should be used to 'pretend' that the pattern is being called.

We use mock objects so that a particular test class tests only one class directly; everything else is mocked, and is assumed to be working unless the mocked code's own unit tests fail.

A big part of TDD is that we end up with a very loosely coupled solution, and this is largely down to the mock objects I mentioned earlier. Mock objects should really be written against interfaces, not concrete classes. Mocking a concrete class couples the implementation to that class, making it brittle and easily broken.
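Here is a minimal, hand-rolled sketch of what I mean (a mocking framework such as Rhino Mocks would normally generate the fake for you, and OrderProcessor, IPaymentGateway and friends are invented names): the class under test depends only on an interface, and the test supplies a mock implementation of it.

    using NUnit.Framework;

    // The only thing OrderProcessor knows about payments is this interface.
    public interface IPaymentGateway
    {
        bool Charge(decimal amount);
    }

    public class OrderProcessor
    {
        private readonly IPaymentGateway gateway;

        public OrderProcessor(IPaymentGateway gateway)
        {
            this.gateway = gateway;
        }

        public bool PlaceOrder(decimal amount)
        {
            return gateway.Charge(amount);
        }
    }

    // A hand-rolled mock: records the call so the test can verify it.
    public class MockPaymentGateway : IPaymentGateway
    {
        public decimal AmountCharged;
        public bool ChargeResult = true;

        public bool Charge(decimal amount)
        {
            AmountCharged = amount;
            return ChargeResult;
        }
    }

    [TestFixture]
    public class OrderProcessorTests
    {
        [Test]
        public void PlaceOrder_ChargesTheFullAmount()
        {
            MockPaymentGateway gateway = new MockPaymentGateway();
            OrderProcessor processor = new OrderProcessor(gateway);

            bool placed = processor.PlaceOrder(25.00m);

            Assert.IsTrue(placed);
            Assert.AreEqual(25.00m, gateway.AmountCharged);
        }
    }

The test only ever exercises OrderProcessor directly; the payment gateway could later be swapped for any other implementation of the interface without the test changing.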

In working through code and attempting to use interfaces, a developer will often come across resistance in one way or another. The problem may be that the .NET framework does not offer an interface to a particular object directly. In that instance, what that is actually telling you is that you need to create your own wrapper class and interface. This is actually good news: if you stop and think about it, you are in effect abstracting the functionality away from the framework's implementation. That makes refactoring much simpler, and it also allows the implementation to be swapped out should there ever be the need to do so. It is starting to sound like the beginnings of a Strategy, or Provider, pattern now, isn't it?! Shhh! Don't tell the naysayers!
One thing of note is that when the developer is trying to mock and test a particular aspect of the system and it becomes hard work, that is often a good point at which to reconsider how those requirements are implemented. In the case of an object that has no interface, that leads us to wrapping up the thing that provides the functionality for us, and implementing tests against such a design becomes very straightforward again.
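As a sketch of the kind of wrapper I mean (ISystemClock, FixedClock and Greeter are made-up names for the example), DateTime.Now has no interface of its own, so we hide it behind one that we control and substitute it in the tests:

    using System;
    using NUnit.Framework;

    // Our own interface over a framework feature that offers none.
    public interface ISystemClock
    {
        DateTime Now { get; }
    }

    // Production implementation: a thin wrapper over DateTime.Now.
    public class SystemClock : ISystemClock
    {
        public DateTime Now
        {
            get { return DateTime.Now; }
        }
    }

    // Test double: returns whatever time the test requires.
    public class FixedClock : ISystemClock
    {
        private readonly DateTime now;

        public FixedClock(DateTime now)
        {
            this.now = now;
        }

        public DateTime Now
        {
            get { return now; }
        }
    }

    // The class under test depends on the interface, never on DateTime directly.
    public class Greeter
    {
        private readonly ISystemClock clock;

        public Greeter(ISystemClock clock)
        {
            this.clock = clock;
        }

        public string Greet()
        {
            return clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
        }
    }

    [TestFixture]
    public class GreeterTests
    {
        [Test]
        public void BeforeNoon_GreetsWithGoodMorning()
        {
            ISystemClock clock = new FixedClock(new DateTime(2007, 12, 29, 9, 0, 0));
            Greeter greeter = new Greeter(clock);

            Assert.AreEqual("Good morning", greeter.Greet());
        }
    }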

Generics are something of an exception to the rule. They're great for giving type safety to classes, but unless you can perform all of your tests on the functionality through interfaces alone, you should consider refactoring the design, or hiding the generic interfaces behind non-generic interfaces. If that is still too hard, you may well find yourself wondering whether the generic class is really appropriate at all. This process is healthy, but don't do as many people do and blame TDD because you can't test the code; or at least, don't keep the generic class that you can't test unless you're happy to have every aspect of the system that the generic class touches be untested as well.
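As an illustration of hiding a generic behind a non-generic interface (Repository<T>, ICustomerStore and Customer are invented for the example), the rest of the system, and the mocks in its tests, only ever see the non-generic interface:

    // A generic class that is awkward to mock directly.
    public class Repository<T>
    {
        public void Save(T item)
        {
            // Persistence code would live here.
        }
    }

    public class Customer
    {
        public string Name;
    }

    // The non-generic interface that the rest of the system, and its mocks, depend on.
    public interface ICustomerStore
    {
        void Save(Customer customer);
    }

    // The generic class becomes an implementation detail hidden behind the interface.
    public class CustomerStore : ICustomerStore
    {
        private readonly Repository<Customer> repository = new Repository<Customer>();

        public void Save(Customer customer)
        {
            repository.Save(customer);
        }
    }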

Red - green - refactor. What a shame that somebody came up with that as a mantra for TDD. Write a failing test, implement the code to make it pass, then refactor. It seems pretty straightforward in concept, but as you can see from the discussion above, there is so much more to it.

It is strange that so many people like the idea of TDD, but so few have actually found a use for it. It is also a shame that there is so little information on the use of TDD and its practical implementation. I have seen many a simplistic example, and have given them myself at times, but when the technique is used commercially we want to be able to use it to its full potential and solve every problem we come across with it; otherwise we simply will not use it.

If anybody has questions regarding TDD, please leave comments, and I will do my best to find answers, or offer possible solutions for you.