Thursday, April 3, 2008

"Signature of the body and declaration in a method implementation do not match"

Whilst teaching somebody about unit testing, and in particular mocking, I came across an error with some code that said, "Signature of the body and declaration in a method implementation do not match".

This seemed like quite a bizarre error message, and it worried me, since nothing was immediately obvious as being the problem.

After looking into the problem for a while, finding no help on the net for it, and doing a whole lot of 'binary search commenting', I tracked the problem down to a class that had a generic method.

The lights came on! The problem, it seems, is that whilst the code itself wasn't changing, it was entirely possible for subsequent calls to the generic method to pass different types to that method, thus affecting the method signature.

There's a simple solution: make the whole class generic instead, so that for the lifetime of the class instance it can only ever serve up objects of one type.

Here's a small snippet of code:

What we had first:

public class GenericFactory
{
    public objectType Create<objectType>(string fullTypeName)
    {
        // ...
    }
}

changes to:

public class GenericFactory<objectType>
{
    public objectType Create(string fullTypeName)
    {
        // ...
    }
}
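
For completeness, here's a rough sketch of how the class-level generic version might be fleshed out. The reflection-based body is my own illustration, since the original implementation is elided above:

```csharp
using System;

// Sketch only: the Activator-based body is an assumption, not the
// original implementation.
public class GenericFactory<objectType>
{
    public objectType Create(string fullTypeName)
    {
        // Resolve the type from its full name and instantiate it,
        // casting to the type parameter fixed at class level.
        return (objectType)Activator.CreateInstance(Type.GetType(fullTypeName));
    }
}
```

An instance then only ever creates one kind of object, for example new GenericFactory<object>().Create("System.Text.StringBuilder").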

Nice and easy. Since it is a bizarre error, I hope this helps some people out there. Beware of generics unless you really understand what you're doing with them!

Martin Platt.

Tuesday, March 11, 2008

BUG - VS Code Coverage automatically closes testrunconfig Dialog

This refers to a solution which has a project that is a custom project template, such as BizTalk or Reporting Services.

When the testrunconfig dialog box is opened, either by double-clicking it in Solution Items or through the Test menu's 'Edit Test Run Configurations', the dialog box disappears as soon as the Code Coverage item is selected.

The workaround is to unload the offending projects, which allows the Code Coverage item to be changed; be aware that you might need to reload the projects to allow the tests to run after making the code coverage changes.

I have this logged as a bug with Microsoft, but as yet it has not been officially confirmed.

Monday, February 18, 2008

Unit Testing - Watch out!

I just thought I'd write a little article to discuss some of the things to watch out for when unit testing. People tend to talk more about why it is or isn't good, and about whether you should use it; I'm proposing to tell you what to watch out for when you do!

The first thing: the level of confidence in the testing of an application is only as good as the tests that you write. Seems obvious, right? Using unit testing techniques does not make your code any more manageable or correct if the tests that you write are of poor quality. So all that you can say about unit tests is that the code passes the tests, and never that it 100% works, unless of course you are 100% certain that you have catered for all eventualities in the code.

Using code coverage as a tool to target your tests will not guarantee anything about the confidence you should have in those unit tests. It is tempting to think that having 100% coverage means all the code is tested; it doesn't. What it means is that all the code has been exercised, but not that it has been exercised with every possible value. We tend to use boundary conditions and the like in our tests, reasoning that these values will best test our code. They don't; they're only representative.
Also, you might not be able to exercise 100% of the code, for instance if your code uses some .NET object for which you can't exercise everything the CLI allows. You should find out whether that is the case, though, as it can often be an indication that the boundary conditions by themselves are woefully inadequate.

Integration tests: until you do integration tests, you can only speculate as to how well the code will work in the real environment. What happens if you get data that you didn't expect? It means you write more tests, so that it doesn't happen again, even if the data was in error. The further upstream you go in terms of integration tests, toward a full system test, the better the tests are, and the more likely they are to find failures. You're using real data in real situations, and so can have a certain amount of confidence that the components at least work in that real-world scenario. Look at exercising as much of the code as possible using the coverage tools again; obviously some parts, such as testing exception handling, belong only in the unit tests, but driving data through anything that can take it, to get as much coverage as possible, is good.

If we do things right, then when we find errors we write more tests, so that the tests become more and more complete. Our experience of where the components could fail increases, which gives us a better indication of, and knowledge about, the risks involved in deploying the software.

So long as we bear in mind that we're never going to be able to say with 100% certainty that a component is 100% bug free, we're probably safe. What we do with unit testing techniques is mitigate risk, improve our knowledge of the system and its inner workings, and document part of the system.

These techniques are very useful if used to their full potential, but never make the sweeping statement that the software is 100% bug free because all the tests pass, since that proves nothing.

Happy coding!

Tuesday, February 12, 2008

Software patterns - useful or over-used?

Recently, on the MSDN forums I had someone comment that design patterns are largely over-used. The answer I gave to a question involved the use of a design pattern for a developer who was clearly inexperienced.

The comment was that the developer should learn proper OO techniques rather than simply saying that they "would use that pattern". I would tend to agree that understanding proper OO principles is a pre-requisite to most jobs in the software industry these days. That got me thinking, does using design patterns give rise to a set of developers who understand that they need to use a pattern rather than understanding the intent behind it?

I would say no. If said developer is using a pattern because someone said so rather than because they know that it fits their requirements, perhaps they need some career counselling.

That brings me on to patterns. A pattern is a well-defined solution to a particular programming problem, and that solution may in fact consist of a number of patterns making up the full solution.

It is very true that you could use TDD and evolve such a design, or pattern, and that might be the acceptable way to approach the situation; it certainly would be if you're using pure TDD. It just seems that having an idea of possible approaches, a solution space if you like, is a great idea. It gives you a set of scenarios, or a set of implementations, that you must consider, which is a good thing.

Then, if you're not doing pure TDD, you might want to adopt a pattern, which is a well understood way forward to implementation. If you're an experienced developer, you'll probably have implemented these patterns before without calling them patterns, so in that case, it's a descriptive label to put on an "experience catalogue". It means other developers who have also developed using those same algorithms can have a common understanding of what you're talking about.

Primarily, I see this as the major force behind patterns. The ability to communicate a shared understanding of a design, in a short and succinct manner.

People have often asked me what they need to do to make their code a such-and-such pattern, or which flavour of a pattern this is, or "I want to use this pattern, how can I convert it from a similar pattern to that pattern?" These sorts of questions seem strange to me. The pattern is there to help you think about the problem in terms of isolation, decoupling, and good things like that. Whether it has one name or another is really irrelevant; it's all about whether the software does what it is supposed to, and does it in an efficient manner.

I have also come across situations where people try to follow the pattern to the letter. To me, a pattern is a template, but there is no reason why that template can not be extended to fit the problem space, so long as that design is well considered.

So commenting that software patterns are over-used is really rather like saying that experience is over-used and overrated. Patterns, whilst not a new idea, should be used more, if nothing else as an enabler for communication.

Monday, February 11, 2008

Making a mockery of Testing

How much testing is enough testing?

If, for example, you have to deliver a piece of software, some functionality, and you have tight deadlines to meet, is having 50% of the code tested sufficient? How do you know that you have tested the 50% of the codebase most likely to fail?

That question probably sounds quite silly, but whenever I mention to people that they should be using mock objects to test, they say that it is overkill, and that unit tests are enough. Further probing often reveals that these unit tests are really integration tests: tests such as asking for a particular record, or matching a filter to return records, and if records are returned, the test passes.
That sort of test is as brittle, weak, and unknown a code fragment as the code fragment it actually tests.

The database goes down; is that a code failure? I bet there's no test for the server actually going down, and if there was, how would you be able to discriminate between an infrastructure problem and a coding error?

There are two approaches. One is to write tests, using MSTest, NUnit, or whatever else you like to use, where you pass a value and expect a given response. This makes it incredibly difficult to test all of the code, since you have no way to control the flow through a particular code branch. An example would be testing the exception-handling block of a routine: given a plain unit test, there is no way to inject the exception into the code, so you're stuck with compiler switches to throw exceptions. Compiler switches change the code from what it will be in production into something else, which carries its own set of risks.

The alternative is mocking. The concept behind mocking is that you create objects that are effectively proxies to the interfaces that a particular method or class uses. You then set expectations on those objects and tell them what data to return, which effectively drives the code through the desired execution path.
In that instance, if the database goes down, the tests will still pass; you are just driving the interface that the data layer implements. You can't be touched; nobody can take that code down.
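
As a minimal sketch of the idea, here is a hand-rolled mock rather than one generated by a mocking framework; the repository interface and all the names here are invented for illustration:

```csharp
using System;

// Hypothetical data-layer interface that the code under test depends on.
public interface ICustomerRepository
{
    string GetCustomerName(int id);
}

// Hand-rolled mock: stands in for the real database-backed implementation.
// It can be told to return canned data or to throw, driving the code
// under test through whichever execution path we want to exercise.
public class MockCustomerRepository : ICustomerRepository
{
    public string NameToReturn;
    public Exception ExceptionToThrow;

    public string GetCustomerName(int id)
    {
        if (ExceptionToThrow != null) throw ExceptionToThrow;
        return NameToReturn;
    }
}

// Code under test: depends only on the interface, never on the database.
public class CustomerGreeter
{
    private readonly ICustomerRepository repository;

    public CustomerGreeter(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public string Greet(int id)
    {
        try
        {
            return "Hello, " + repository.GetCustomerName(id);
        }
        catch (Exception)
        {
            // This branch is reachable in a test only because the mock
            // can be told to throw; no compiler switches needed.
            return "Hello, stranger";
        }
    }
}
```

With the mock told to throw, the catch block runs even though no database is involved; with canned data, the happy path runs. Either way, the test passes or fails for reasons entirely inside the code under test.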

As a shortcut to achieve that deadline, how much time would it take to consistently and continually test and retest the code with such rigour?
Another question to ask yourself: is 50% tested code enough now, and will it actually take me less time to resolve all the bugs in the code by using the debugger continually, instead of having repeatable and isolatable test targets to check?

Keep an eye out for a further article on how to do some mocking to get the test coverage statistics up into the 90% area for nearly all classes.