Thursday, November 15, 2007

Agile - Should we all?

In the world of software it should be expected, no, encouraged, that people come up with new ideas to make software more useful. Computing systems are no longer simple, and they can no longer be conceived of in a single effort.

With that idea in mind, why doesn't every project run in an agile manner? Well, I think it's down to the fact that people want to be able to budget, and to constrain their costs. That's also a high priority, and a completely understandable approach.

What happens then? Well, it seems that in most projects the higher levels of management, and those in charge of the purse strings, win the standoff. The project then becomes one that has all requirements defined up front, so that they can be detailed and costs attached to them.

What happens now? Well, unfortunately, the project is now forced into a waterfall-style, up-front design project. That means either the client can't change their mind, and so can't evolve their ideas, or the project has to do work for new features without a budget. Is that good? From the point of view of the project it probably is, but the end product is going to suffer as a consequence.

Can we run a project and constrain it to a budget and a project end date? You could, but you would probably have to put a limit on the budget for each iteration, which will in turn put a limit on the features. That limitation is imposed by the budget, so there's little we can do about it. However, what we gain by running the project this way is that we allow the project to evolve and improve. Now, that's a good thing, isn't it?

Why is it then, that most projects are not agile projects?

Monday, November 5, 2007

Risk, Who needs it?

Hello,

You're a software developer; you love to live by the seat of your pants, right? You can't function without loads of pressure, and the vast amounts of coffee that go with it, right? You'll probably want to stop reading this then!

So many projects and features for a piece of software get into a project plan without consideration of the risks involved. The client wants it, therefore it must go in. Is that a realistic proposition? Well, it might be, depending upon what the feature is, and whether the value of including the feature outweighs the risk involved in its implementation.

Something to consider: if the late inclusion of a feature into an existing project is going to take that software back to a beta version, and the risk is unknown, would you do it? I wouldn't. You need to know the risks, you need to be able to predict the worst-case scenario, and you need to be able to describe which aspects of the software will be affected.

Given these assessments of impact and risk, you can attach a dollar cost to implementing the additional feature versus the cost of not doing so, always looking at the worst case.
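To make that concrete, here is a minimal sketch of the comparison in code. Every figure in it is hypothetical, purely to illustrate weighing an expected worst case against the cost of inaction:

using System;

class RiskComparison
{
    static void Main()
    {
        // All figures are hypothetical, purely to illustrate the comparison.
        double costToImplement = 20000;       // estimated cost to build and test the feature
        double worstCaseImpact = 100000;      // cost if the change drags the product back to beta
        double probabilityOfWorstCase = 0.25; // assessed likelihood of that worst case

        double expectedCostOfDoing = costToImplement + (probabilityOfWorstCase * worstCaseImpact);
        double costOfNotDoing = 15000;        // e.g. estimated business lost by omitting the feature

        Console.WriteLine("Expected cost of implementing: {0:C}", expectedCostOfDoing); // 45,000
        Console.WriteLine("Cost of not implementing:      {0:C}", costOfNotDoing);      // 15,000
    }
}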

Imagine how many features that started life as a throwaway comment, and ended up as a major feature in a piece of software, simply would not be there if the decision makers had known the true cost of doing something versus the cost of not doing it.

Business Head Blog Launched!

Hi!

Please visit the freshly launched "business head" blog, where you can read how to set up and run a business. See it at http://teamplatt-businesshead.blogspot.com

Martin.

Need help with architectures, or coding?

Hi,

If you need help with requirements, architectures, development code and the like, leave me a comment here, and I'll help you out.

Cheers,

Martin.

Architecting solutions based on all the requirements

Gathering great requirements from business analysts is one thing; finding requirements in an existing legacy system is another. The former implies a possible new development, the latter an existing system.

So many questions are asked in forums and the like regarding new developments, and the anti-pattern of redeveloping with new technologies and patterns seems commonplace; yet it also seems that very little weight is given to the existing system. You don't hear of many people immersing themselves in that system, finding out what the real problems are, and testing against those problems to provide a more useful solution.

It seems to me that more weight is given to using in-favour design patterns, and new technologies such as WWF, WCF and WPF, than to whether the redevelopment of the legacy system is even cost effective, or provides any benefit.

Our proofs of concept should be end-to-end solutions that cover all aspects of the system, including those difficult areas that we want to avoid, as well as the 'buzzword-ridden' layers that prove nothing you don't already know.

Considering complete, end-to-end solutions, including as much functionality as is conceivable, will maximise a proof of concept's worth and provide a good benchmark for timescales and cost benefits, as well as giving insight into areas of high risk and potential problems.

I know this bit of text seems to point out the obvious, but from my observations, that point clearly is not being taken on board.

We need to focus on business value, and on making software solutions perform in a way that enables our clients' businesses to succeed, instead of producing systems with all the latest technology, designed in really complex ways, that do nothing really well and most things quite badly.

Your boss is your client, and your client judges you on how well you perform, not on how many acronyms you can place on your curriculum vitae!

Test Driven Development

Test-driven development (TDD) is a particularly useful technique to grow or evolve clean, refactored and readable code with a high degree of confidence. Test-driven development is often confused with a code project that merely has unit tests; the approaches are very different, and so are the outcomes.

Why would I tell you this?
Well, since using TDD, I’m hugely impressed with the outcome, and the quality of code that can be produced. I talk to people about the technique and sometimes get them to give it a try, but often that attempt only gets as far as unit testing, or they don’t perform the process properly, and so they see neither benefit nor point in it at all. I hope to change that view for you.

Where do I start?
To start test driven development, you need tests to drive the development! Those tests are likely to have come from business requirements, so if you don’t have those, stop, and get them; otherwise you have no hope of completing this exercise at all.

The requirements are usually expressed in terms of what something has to do, and so can be quite easily mapped to tests. At this stage, it is quite likely that the tests, or manifested requirements, are expressed in terms of the system as a whole. That’s okay, but we need to know what that means: initially we’re building an integration test, something that tests the system as a whole, the sum of its parts.

When using TDD, there is a common mantra, “Red, Green, Refactor”, which is a basic description of what has to happen after each test is written. All will become clear as you read this article.

Now, let us assume that we have a simple problem: we want to convert numbers to words to be printed on a cheque, or money order. So, for example, if the amount were $599.53 then our output would be “FIVE HUNDRED AND NINETY NINE DOLLARS AND FIFTY THREE CENTS”. Our constraint will limit the amount to one thousand dollars.

We can start with something simple: converting $10 to words.

To create our first test, we need to add a class library for our tests, and add references to the NUnit framework. I will not be going into how NUnit is used here, but will supply a link to some NUnit information pages.

Here is the first test:


using System;
using NUnit.Framework;

namespace WiseLittleBirdie.ChequeWriter
{
    [TestFixture]
    public class ChequeTests
    {
        [Test]
        public void CanConvertTenDollarsTest()
        {
            ChequeWriter chequeWriter = new ChequeWriter();

            double chequeAmount = 10.00;

            string expectation = "TEN DOLLARS AND ZERO CENTS";

            string result = chequeWriter.Convert(chequeAmount);

            Assert.AreEqual(expectation, result, "The conversion of 10.00 dollars did not return the correct result.");
        }
    }
}

Notice that the test so far assumes that the ChequeWriter can be created, and that we’re then testing that the output is what we expect it to be? We will come back to that issue soon.

The code we’re now going to write, will allow the code to compile, but the test to fail due to an incorrect result.

using System;
using System.Collections.Generic;
using System.Text;

namespace WiseLittleBirdie.ChequeWriter
{
    public class ChequeWriter
    {
        public string Convert(double amount)
        {
            return null;
        }
    }
}

Now the code will compile and run, and our test will fail, which checks that the test is indeed valid for at least one failing value. Due to the colour of the circle next to a failed test, this is referred to as “Red”.

Now we need to make the code pass the test, so we implement the minimum needed to make that happen.

using System;
using System.Collections.Generic;
using System.Text;

namespace WiseLittleBirdie.ChequeWriter
{
    public class ChequeWriter
    {
        public string Convert(double amount)
        {
            return "TEN DOLLARS AND ZERO CENTS";
        }
    }
}

Now we see the test will pass, referred to as “Green”. We’re happy now, aren’t we? Well, yeah, but that code really isn’t very useful yet, is it?

Now onto the final phase, refactor. We need to look at the test code, and see if we can make that neat, tidy and economical.

First of all, since we’re going to be continually changing code, there are some things we need to consider:

· Should we refer to objects through interfaces? Doing so isolates us slightly from the changes that we will make.
· Should we find a way to define how we create our objects, so that if we were to change a constructor, it won’t immediately mean a global search and replace?

Do we need to implement either of these techniques yet? The answer should probably be no. As yet we have no requirement to do so, since there’s only one object and only one test. Arguably, though, if you know that you’re going to have to do it, implementing the structure at the beginning may decrease the pain; a sketch of what that might look like follows.
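Purely for illustration, such a refactoring might look roughly like this. IChequeWriter and ChequeWriterFactory are hypothetical names; nothing below exists in our project yet, and ChequeWriter would need to be declared as implementing the interface:

// Hypothetical sketch only: ChequeWriter would be declared as
// 'public class ChequeWriter : IChequeWriter' for this to compile.
public interface IChequeWriter
{
    string Convert(double amount);
}

public static class ChequeWriterFactory
{
    // Tests call this instead of 'new ChequeWriter()', so a future
    // constructor change need only be made in one place.
    public static IChequeWriter Create()
    {
        return new ChequeWriter();
    }
}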

So at the moment, we have no worthwhile refactoring to do.

Let’s expand the tests with a further, similar case, to allow us to show some progress.

We will now test for $999.99, and again compare the result to what we expect.

[Test]
public void CanConvertNineHundredNinetyNineDollarsTest()
{
    ChequeWriter chequeWriter = new ChequeWriter();

    double chequeAmount = 999.99;

    string expectation = "NINE HUNDRED AND NINETY NINE DOLLARS AND NINETY NINE CENTS";

    string result = chequeWriter.Convert(chequeAmount);

    Assert.AreEqual(expectation, result, "The conversion of 999.99 dollars did not return the correct result.");
}

Now, since we have so far only written a conversion for ten dollars, we would expect the new test to fail. We compile the code and run the tests, and it does indeed fail, as expected.

We now need to make the test pass, again, doing the minimum we require to make that happen. We don’t want to implement anything more than we need to, so that we don’t end up with untested code, or code that simply isn’t required.

Here is the newly changed Convert method.

public string Convert(double amount)
{
    if (amount == 10)
    {
        return "TEN DOLLARS AND ZERO CENTS";
    }
    else
    {
        return "NINE HUNDRED AND NINETY NINE DOLLARS AND NINETY NINE CENTS";
    }
}

Now, can we refactor our code yet? Certainly! Our test class has a lot of repetition, so let’s refactor. Here is the new test code.


using System;
using NUnit.Framework;

namespace WiseLittleBirdie.ChequeWriter
{
    [TestFixture]
    public class ChequeTests
    {
        private void TestConversion(double amount, string expected)
        {
            ChequeWriter chequeWriter = new ChequeWriter();

            string result = chequeWriter.Convert(amount);

            Assert.AreEqual(expected, result, "The conversion did not return the correct result.");
        }

        [Test]
        public void CanConvertTenDollarsTest()
        {
            TestConversion(10, "TEN DOLLARS AND ZERO CENTS");
        }

        [Test]
        public void CanConvertNineHundredNinetyNineDollarsTest()
        {
            TestConversion(999.99, "NINE HUNDRED AND NINETY NINE DOLLARS AND NINETY NINE CENTS");
        }
    }
}


It now looks a lot neater, and is much more readable. If we run the tests again, we can see that we still get a green light, which means the refactoring did not change the behaviour of our code up to this point.
I have left out the refactoring of the ChequeWriter object to include an interface and creation in a factory, because that call only happens in one place. It’s also possible to put the creation of the ChequeWriter object into a SetUp method for the test class, which would be equivalent to what you can see above. I decided to do it this way for now because it keeps the code readable.
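For completeness, the SetUp alternative would look roughly like this; NUnit runs the [SetUp] method before every [Test], so each test gets a fresh ChequeWriter without repeating the construction:

[TestFixture]
public class ChequeTests
{
    private ChequeWriter chequeWriter;

    // NUnit calls this before each test method runs.
    [SetUp]
    public void CreateChequeWriter()
    {
        chequeWriter = new ChequeWriter();
    }

    private void TestConversion(double amount, string expected)
    {
        Assert.AreEqual(expected, chequeWriter.Convert(amount),
            "The conversion did not return the correct result.");
    }

    // ...the [Test] methods remain exactly as before...
}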

I know that you’re thinking that this is a pretty stupid program, and I’d have to agree; at this stage it is. One thing we can say about our development effort, though, is that without debugging we can tell that a value of $10 or $999.99 can be converted correctly for printing on our cheque. We can be 100% sure that this is the case.


Now we need to come up with a further case: converting $109.99 into words.

[Test]
public void CanConvertOneHundredAndNineDollarsTest()
{
    TestConversion(109.99, "ONE HUNDRED AND NINE DOLLARS AND NINETY NINE CENTS");
}

That was nice and easy, wasn’t it? So there is a real benefit already.

Now onto the red phase.
public string Convert(double amount)
{
    if (amount == 10)
    {
        return "TEN DOLLARS AND ZERO CENTS";
    }
    else if (amount == 999.99)
    {
        return "NINE HUNDRED AND NINETY NINE DOLLARS AND NINETY NINE CENTS";
    }
    else
    {
        return null;
    }
}

It fails, as we’d expect it to.

And now green.

public string Convert(double amount)
{
    if (amount == 10)
    {
        return "TEN DOLLARS AND ZERO CENTS";
    }
    else if (amount == 999.99)
    {
        return "NINE HUNDRED AND NINETY NINE DOLLARS AND NINETY NINE CENTS";
    }
    else
    {
        return "ONE HUNDRED AND NINE DOLLARS AND NINETY NINE CENTS";
    }
}

And that passes.

Now, refactoring. Since all our tests follow the same shape so far, we have no cause to refactor any further.

If we continue test-driven development in the same way with more numbers, we should expect our code to grow. We should also expect that the more test cases we have, the more the code will become useful for other cases without the need for extra code. That is possible largely because of the refactoring phase, which is about writing more economical code. There will be a point at which all the cases are more efficiently expressed using a method or two that calculate the words dynamically. I won’t go much further at this point, so that we don’t lose focus on the technique, but suffice to say that we should be able to implement a useful piece of code if we continued.
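For the curious, here is a rough sketch of the sort of implementation the tests might eventually drive us towards. This is only one possible shape, not a definitive answer, and it honours our under-one-thousand-dollars constraint:

// A possible end point for Convert after many more red-green-refactor
// cycles; sketch only, limited to amounts below one thousand dollars.
public string Convert(double amount)
{
    int dollars = (int)amount;
    int cents = (int)Math.Round((amount - dollars) * 100);

    return NumberToWords(dollars) + " DOLLARS AND " + NumberToWords(cents) + " CENTS";
}

private static readonly string[] Units =
{
    "ZERO", "ONE", "TWO", "THREE", "FOUR", "FIVE", "SIX", "SEVEN",
    "EIGHT", "NINE", "TEN", "ELEVEN", "TWELVE", "THIRTEEN", "FOURTEEN",
    "FIFTEEN", "SIXTEEN", "SEVENTEEN", "EIGHTEEN", "NINETEEN"
};

private static readonly string[] Tens =
{
    "", "", "TWENTY", "THIRTY", "FORTY", "FIFTY", "SIXTY", "SEVENTY",
    "EIGHTY", "NINETY"
};

private static string NumberToWords(int number)
{
    if (number < 20)
    {
        return Units[number];
    }
    if (number < 100)
    {
        return Tens[number / 10] + (number % 10 != 0 ? " " + Units[number % 10] : "");
    }
    // 100 to 999, e.g. "NINE HUNDRED AND NINETY NINE"
    return Units[number / 100] + " HUNDRED" + (number % 100 != 0 ? " AND " + NumberToWords(number % 100) : "");
}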

After we’ve written code to ultimately pass each test, and refactored, our solution should converge on a result that implements all cases. If it doesn’t then we need more tests.

It is important when writing code using test-driven development that we do not skip stages. Writing a bunch of tests and then making them all pass misses the important stages of testing the tests, and refactoring. If instead we wrote a test, then wrote more code than was necessary to pass it, we would end up with an implementation that is bigger than required (code bloat). That increase in code size, implementing more than can be exercised by any one of our tests, could allow untested code to creep into our project. We don’t want that, as that is uncertainty.

So far we have been developing our tests as a black box, such that we have no idea of the implementation of the class. If we were working on a more complex problem, we would then start to define a set of white-box tests to logically test the workings of the dependent classes. It is important with these unit tests to test only the class on which you are focussed. Any dependencies should be able to be passed into the class constructor, so that they can be replaced with mock objects. Please be on the lookout for new articles on dependency injection and mock objects.
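As a small taste of dependency injection ahead of those articles, here is a hypothetical sketch; ChequePrinter, IChequeWriter and the hand-rolled fake below are invented for illustration and are not part of our cheque-writing code:

// Hypothetical: a class that depends on a cheque writer. The dependency
// comes in through the constructor, so a test can substitute a fake.
public interface IChequeWriter
{
    string Convert(double amount);
}

public class ChequePrinter
{
    private readonly IChequeWriter writer;

    public ChequePrinter(IChequeWriter writer)
    {
        this.writer = writer;
    }

    public string PrintLine(double amount)
    {
        return "PAY: " + writer.Convert(amount);
    }
}

// A hand-rolled fake standing in for a mock object; it returns a
// canned response so the test exercises only ChequePrinter.
public class FakeChequeWriter : IChequeWriter
{
    public string Convert(double amount)
    {
        return "TEN DOLLARS AND ZERO CENTS";
    }
}

[TestFixture]
public class ChequePrinterTests
{
    [Test]
    public void PrintLinePrependsPayTest()
    {
        ChequePrinter printer = new ChequePrinter(new FakeChequeWriter());

        Assert.AreEqual("PAY: TEN DOLLARS AND ZERO CENTS", printer.PrintLine(10));
    }
}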

Earlier I mentioned making an assumption about being able to create a ChequeWriter object. In the unit tests above, we would and should test things such as whether we can create the object. Another thing to look for is testing somewhere whether we can create a factory object, should we require one. Factory objects will also be discussed in a future article.

An example of something that might require white-box testing could be a data layer that is called by our top-level class to save some aspect of the data we are manipulating. This class would again be written using TDD, but the tests would be at more of a logical level: not based on instance data or on the correct operation of other classes, but only on the class under test doing what is expected under the many circumstances defined by the unit tests.

One very useful type of tool for test-driven development, and for unit testing in general, is a code coverage tool. The idea here is to make sure that there are sufficient tests to exercise all code execution paths within the class. A reasonable aim is at least 80% coverage, and even that is quite a challenge to achieve. Users of the tool should not be tempted to fake calls just to make the coverage value go up. You must always make the tests force execution of a given path, thereby testing the real code, and hence reducing the uncertainty that would otherwise be present.


NUnit: http://www.nunit.org/
Coverage Eye.NET: http://www.gotdotnet.com/Community/UserSamples/Details.aspx?SampleGuid=881a36c6-6f45-4485-a94e-060130687151

Certifications (MCTS, MCPD, MCSD, MCAD, and so on)

Well, the question is: are certifications such as the MCSD, the new MCTS qualification, and the MCPD qualification worthwhile? Do they offer anything in the way of learning new things, and do they prove to the wider world that the qualified do indeed know more than the unqualified?

Personally, I did an MCSD back in 2001, and at the time found the desktop and distributed parts to be a case of learning minute detail in order to pass the exam. The exception, and the useful part, was the "analysing requirements and defining solution architectures" exam, which was a real challenge, and meant that you really needed to know your stuff. I think it's a shame that that exam isn't still available.

Having passed the MCSD back in 2001, I gave myself six months off, after which I planned to start studying again. So, after a six-month break that quickly turned into six years, I'm back again doing some certifications.

At first, whilst studying for the MCTS exams, I found that I already knew most of the material; only the names had changed. However, I was pleasantly surprised to find that there was enough tested knowledge in there to make it worth my while studying.

I would say that the MCTS exams are aimed at your bog-standard developer, so if, like me, you've done that sort of thing for some time, you will probably not find them a huge challenge. That said, as mentioned earlier, I believe there's enough coverage in there to allow you to learn new things and improve your breadth of knowledge.

Will sitting these exams lead to more opportunities when looking for employment? Almost certainly: if it's between yourself and someone who doesn't have any certifications, I know which one I'd choose, all else being equal. That's the key: if you have the experience, this adds to it; if you do not, then I wouldn't expect it to give you an all-areas pass to a job. More recently, I do see employers listing certifications as "highly regarded". Listed as mandatory, though, are a whole bunch of other skills and experience that you must have, so that gives you some idea.

Then there are those people who get hold of the answers to questions and cheat the certifications. Personally, I think those people do denigrate the certifications, but as mentioned above, while passing may get them through the door to an interview, when asked questions, or asked to do something practical, those people are going to have a rather public fall.

One interesting thing also happened. Whilst studying for the exam, the practice exam software crashed, it seemed, in the C# certification area, so I switched the language to VB.NET and started the exams again. What happened then was that I was barely aware that I had changed languages, as the knowledge required was, more often than not, the same. I also tried that theory out with a C++.NET exam simulation from the CD, and found the same to be true again. That did impress me, and led me to conclude that a lot of the exams really are language agnostic, and that the certification is concerned more with the framework than with the language.

I hope this little article is of interest to you, and that you will leave me a couple of comments as to what you think, whether you agree, and whether it helps to spur you on to doing the certifications.

Good luck!

New Architecture for legacy systems

I don't know what it is, but so many architects that I meet on the various projects I work on are so focussed on writing new solutions with new technology that they often completely miss the problem in the first place.

I've often seen enthusiastic architects propose solutions that replace the easy part of the system and make it "better" (when often it isn't broken), and then not even attempt, or only partially attempt, to address the real problem.

Our anti-pattern is to replace all this stuff we currently have with new stuff, and still have the same fundamental problems as before.

Why, instead, can't we take a much more agile, iterative approach, and replace things little by little, slowly moving from one solution to a more mature one, without ever putting all our eggs in one basket with a particular "buzz" technology?

What this whole process often involves is moving from one bad decision to a new bad one; or, worse still, starting to implement a solution with a new technology that, by the time it is finished, is an old technology, and then we iterate with the same.

To my mind, architecture is more about an approach, a pattern, that is agnostic of the technology that may be used to implement it. If we work toward a solution that looks like a pattern, but uses a particular technology in a way that completely undoes that pattern by coupling everything through the technology, then it seems a completely useless and expensive exercise.

A big area that seems to suffer from this particular ailment is SOA and web-service-based solutions. Often I hear that the reason for choosing a web-service-based solution is a need to make that solution available, particularly to other platforms. Later, I find out that the solution is intranet based, and could have been implemented a lot better without any need, at least initially, for web services at all. We then get hacks based on that concept to achieve various things, largely borne out of the desire for "cool" technology; things such as WSE are one example. Whilst there is definitely a place for these technologies, that unemotional choice is often not made correctly; a curriculum-vitae-boosting entry wins instead.

There really is a good case for technology-aware-but-agnostic architects out there; my experience is that they are few and far between. An architect should be exploring all possibilities, rather than a set that suits their own purposes, or career.

Unit testing, Test Driven Development (TDD)

It still amazes me that there are so many projects out there with lots of lines of code, interoperating and communicating systems, and no unit tests.

I don't know how projects ship on time, or even only slightly late, without any confidence as to how much of the solution actually works.

I would have expected all greenfield projects, and some new additions to existing software, to be built using TDD, to be refactorable, and for delivery of the project to be done with confidence.

I know that it often scares people to be using MVC and MVP patterns and dependency injection, but I know that it would scare me more to be in charge of a project that didn't have any unit testing.

Another thing: on the projects I have worked on that have had unit tests, often they're not unit tests at all, but system or integration tests. Who cares what they are, right? Well, if you run an integration test and the server goes down, does that mean your code has failed to meet its requirements? Will it take you an age to figure out whether the problem is environmental, with lots of debugging and so on? Quite possibly, as there's so much involved in the real system, so many dependencies. Ultimately you need those tests too, but to be able to write solid code, unit tests are what you need.

What also intrigues me is that you hear so many people talking Agile, and yet so few actually show any evidence of being capable of doing it. It seems implausible to run an agile project without unit tests to constrain your planned and unplanned changes to the code.

I often hear people complaining that test-driven development carries a lot of overhead. Well, yes, it does, but in the long term that will mean less time to finish the project, unless it is the most simple project you've ever seen; and even then, I think you could argue a good case for TDD.

I know there are people out there who use Agile and TDD type methodologies, please show yourselves, and make yourselves known to me, and to the world at large!

Generalised, Generic, Configurable frameworks. Worthwhile?

My opinion is no! I don't think that frameworks should be so generalised as to be unable to do anything particularly well.

Generally, I think these sorts of frameworks are over-engineered. The reason?

Well, what is the point in designing a framework that can be applied to literally any situation, but requires so much configuration and tuning to make it work at all? That doesn't seem useful to me; it seems like a headache! This is especially relevant if you consider the situation where a framework can do all those different things, but in reality is only ever used for one very small and specialised set of tasks.

Personally, I think it's a much better idea to take a test-driven approach, or at least one that includes proper unit testing; then you implement what you need, and refactor to include those "future features" as they become requirements.

Using proper OO, interfaces and the like, and good design should allow the code to be refactored, extended and expanded to fit new scenarios.

The big difference is that the solution then implements only what it needs, and is therefore more likely to be tuned to those tasks, easier to maintain, and better performing to boot!

Service designs... People seem to have the wrong idea...

I hear a lot of people talk about web services like they're something special. What I don't understand is why people think that. All they are is an available transport that allows multiple platforms to talk to the backend.

The business logic shouldn't live in the web service itself; the service should call into some sort of factory, or broker, to do all the work.

We should really view a web service as an interface only; it's a different view, but a similar sort of concept to a web page, mobile page, Windows application, or whatever.

Code sitting behind forms or services should be as minimal as possible, so that we can decouple our implementations from the method we use to move the data around. There's no difference between this concept and the concept of an MVC or MVP implementation.
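As a rough sketch of what I mean, assuming an ASMX-style service; OrderService, OrderBroker and the rest are invented names for illustration only:

using System.Web.Services;

// All of the names below are invented for illustration.
public class OrderSummary
{
    public int OrderId;
    public string Status;
}

public interface IOrderBroker
{
    OrderSummary GetOrderSummary(int orderId);
}

public static class OrderBrokerFactory
{
    public static IOrderBroker Create()
    {
        return new OrderBroker();
    }
}

public class OrderBroker : IOrderBroker
{
    public OrderSummary GetOrderSummary(int orderId)
    {
        // ...the real work: data access, business rules, and so on...
        OrderSummary summary = new OrderSummary();
        summary.OrderId = orderId;
        summary.Status = "SHIPPED";
        return summary;
    }
}

// The web service itself stays paper thin; it only adapts the
// transport and delegates straight to the broker.
public class OrderService : WebService
{
    [WebMethod]
    public OrderSummary GetOrderSummary(int orderId)
    {
        return OrderBrokerFactory.Create().GetOrderSummary(orderId);
    }
}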

Architecting High-Performance, Reusable, Granular Services and SOA.

Sorry people, but I don't believe the above is really a good idea at all.

You cannot, in general, write granular services that perform well. It's a shame that the selling point of SOA has often been this concept of re-usability... It is not! It is availability. You can access the service from many platforms, or that's the idea at least. The more technology-specific the extensions you add to a service, the less you leverage the true power of SOA.

Managers in particular get to hear of SOA and think of it as a great concept that will cut costs, save money and time, and generally be the answer to their prayers. Whilst it may be the latter, the other parts rely on an understanding of the technology. Writing services and expecting them to both perform well and be re-usable isn't going to happen, because in general you want your services to be quite specific to the problem they address in order to make them perform well.

Obviously, if performance isn't a consideration, you'd probably be able to get away with granular services, but not for long! How many systems have you seen that started off as a few macros running in a user's desktop application, then became the company software platform, and then had to be scaled up?! What I'm saying is: if you're writing services, that in itself should mean the problem domain is complicated enough to justify the effort. Don't come up with a bunch of granular services, such as one to get a bunch of person records and another to get the address records for a given person. Then you're going to try to munge them together, right? Please tell me you're joking! The whole process is over-engineered, and would require so many workarounds that it's just not funny! Although I did have a giggle myself...

It's a much better idea to look at what you need, and how fast it realistically needs to perform, then come up with a solution that fits that requirement.

Generally speaking, a distributed architecture means that you should be calling the backend only once if possible, to get better performance. If you're combining and consolidating granular services in the backend, you're introducing overhead and complexity that really isn't needed.
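To illustrate with the person-and-address example from above (all names invented), compare two hypothetical service contracts; the first forces the client into multiple round trips, while the second returns everything in a single call:

public class Person
{
    public int PersonId;
    public string Name;
}

public class Address
{
    public string Line1;
    public string City;
}

public class PersonWithAddresses
{
    public Person Person;
    public Address[] Addresses;
}

// Chatty: the client makes one call for the person, then another
// for the addresses, and munges the results together itself.
public interface IChattyPersonService
{
    Person GetPerson(int personId);
    Address[] GetAddressesForPerson(int personId);
}

// Coarse-grained: one call, one round trip, shaped to the actual need.
public interface IPersonService
{
    PersonWithAddresses GetPersonWithAddresses(int personId);
}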

Can I get an Amen?!

Blogging starts again...

Well, I've been offline for a while, but I'm back!

I'm going to devote some time to writing short blog posts about software. I'm particularly interested in Architecture, Project Methodologies, and Development.

I'm hoping that there's other people out there who share my interests...

Martin.