Friday, July 10, 2009

New technology - If you don't adopt, are you untrendy?

I could have called this "The anti-pattern of choosing new technology" but decided against it in the end.

That probably gives you readers an idea of what I'm talking about in this post.

A lot of the questions I read go something along the lines of "If I choose technology x, how would I make it work with the following scenario...". Spot the obvious mistake there?

The architecture always comes first: start simple, then work towards the more complex or unknown solution only as the requirements rule out the simplest ideas.

If more architects of systems thought like that, a lot of the work I have done over the years certainly would not have been necessary. Somebody believed that a particular technology worked in a particular way, pushed it a little harder, and found they were too far into the implementation to pull back once the technology no longer fitted the requirements at all.

A case in point - and the forums are flooded with questions surrounding it - is LINQ. Don't get me wrong, I have nothing against the technology at all; it is great, in the right place. Honestly though, unless you have a very good justification for using it, it's quite probable that the old data reader / data adapter and sprocs approach is going to be a simpler and more understandable solution.
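
To show what I mean by simpler, here's a minimal sketch of the data adapter and sproc approach - the stored procedure name and parameter here are invented for the example:

using System.Data;
using System.Data.SqlClient;

public class PersonRepository
{
    private readonly string _connectionString;

    public PersonRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // One sproc, one adapter - the SQL lives in the database, where a DBA can tune it.
    public DataTable GetPeopleBySurname(string surname)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("usp_GetPeopleBySurname", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@Surname", surname);

            var table = new DataTable();
            new SqlDataAdapter(command).Fill(table);
            return table;
        }
    }
}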

My problem with the technology is that it uses some logic to try to get the best performance out of a query. It's good, but I'm afraid it isn't as good as me. What I mean is that you have little control over how a statement gets implemented, so how can you possibly tune it? That's for LINQ to do, right? So long as it does, that is fine.

So if you're starting out designing applications, take a bit of advice: keep it simple, and understand a technology fully before using it in big commercial applications, where it otherwise ends up needing to be worked around.

Separation of Concerns

Here's the thing - recently I put together some questions to use for interviewing, and one of the questions on there was simply: what is "separation of concerns"?

None of my team got that question right, even though some of them are quite experienced, and all of them told me that it was a dated question that would only invite a textbook answer.

The point here is that a good design has good separation of concerns, and is coherent - but what does that mean? Simply put, a subsystem with a particular responsibility performs only that set of tasks, and no tasks belonging to other subsystems. It's a little like the normalisation rules for databases, where every piece of data in a table must depend on the key.

This poses an immediate question: what happens when you have a Person subsystem and want to get the addresses that a person resides at, without making a chatty set of calls?

There are a few approaches. If we're talking services, the service call is coarse-grained, and in effect uses a DTO to transport data from multiple sources. In the backend, those multiple sources stay nicely separate.
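
A minimal sketch of that first approach, with the subsystem and DTO names invented for illustration:

using System.Collections.Generic;

public interface IPersonSubsystem { string GetName(int personId); }
public interface IAddressSubsystem { List<string> GetAddressesFor(int personId); }

// The DTO carries data from both subsystems in one coarse-grained call.
public class PersonWithAddressesDto
{
    public string Name { get; set; }
    public List<string> Addresses { get; set; }
}

public class PersonService
{
    private readonly IPersonSubsystem _people;
    private readonly IAddressSubsystem _addresses;

    public PersonService(IPersonSubsystem people, IAddressSubsystem addresses)
    {
        _people = people;
        _addresses = addresses;
    }

    // One call over the wire; behind it, each subsystem answers only for itself.
    public PersonWithAddressesDto GetPersonWithAddresses(int personId)
    {
        return new PersonWithAddressesDto
        {
            Name = _people.GetName(personId),
            Addresses = _addresses.GetAddressesFor(personId)
        };
    }
}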

If the problem is considered from the backend, the Address subsystem would have to be injected into the Person subsystem, or vice versa. In this way you are delegating responsibility to the correct subsystem; however, this could lead to many calls to the database. Do we care? Well, we might, if the data really does have to be realtime - which it wouldn't be in the example above.

The final approach would be to implement the functionality as a nested subsystem - a PersonAddress subsystem within the Person subsystem, or an AddressPerson subsystem within the Address subsystem. These would have a very different focus, or context, but the code is quite likely to be repeated in both. I guess that final point is similar again to the database example, in that generally you should start with 5th normal form, then work backwards, trading redundancy against speed. The same applies to coherence, except the trade-off is "separation of concerns" versus maintainability.

Decoupled design

Okay, so a question got asked on the Microsoft forums about which was more appropriate, an interface or an abstract class, so I thought that I'd bring that discussion over here.

My response was that to decouple, you would definitely use an interface.

The response came back: but don't Inversion of Control containers now take care of abstraction, of de-coupling? I have to agree that they quite probably do.

That got me thinking: if you used inversion of control, and always used abstract classes with no implementation, you would effectively have an interface. So why not use abstract classes everywhere? For me, it all comes down to clarity - the naming of an interface means that it is easy to understand its function. An abstract class, on the other hand, could be anything from effectively an interface, as described above, to something with lots of references to other abstract classes.

This is where I stand firm - my definition would be the correct one, if only to prevent that very thing from happening. When the implementation is spread across a big team, those sorts of things can easily creep in, and before you know it, that beautiful, loosely coupled architecture is reduced to a monolith.

So my advice here is to always use interfaces. If you need an abstract class, create one, but only ever reference it through the interface. This is another place where it is nice to have an interface - what if the base implementation changes? You can create another base implementation class, and so long as the contract is unchanged, only the implementation needs to change.
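
As a quick sketch of what I mean (the names here are invented for the example):

// The contract - the only thing consumers ever reference.
public interface IRepository
{
    void Save(object entity);
}

// The base implementation is a detail; it can be swapped wholesale
// so long as the contract above is unchanged.
public abstract class RepositoryBase : IRepository
{
    public void Save(object entity)
    {
        Validate(entity);
        Persist(entity);
    }

    protected abstract void Validate(object entity);
    protected abstract void Persist(object entity);
}

public class SqlRepository : RepositoryBase
{
    protected override void Validate(object entity) { /* ... */ }
    protected override void Persist(object entity) { /* ... */ }
}

// The consumer couples to the interface, never the abstract class.
public class OrderProcessor
{
    private readonly IRepository _repository;

    public OrderProcessor(IRepository repository)
    {
        _repository = repository;
    }

    public void Process(object order)
    {
        _repository.Save(order);
    }
}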

Coupling is all about tying one implementation to another - the very thing you have to avoid with SOA implementations and the like. The service call must be completely encapsulated.

Sunday, February 22, 2009

Unit Testing in VS

I can't believe, every time I look at it, how comical it is - generating unit tests from code.

Yeah, it allows you to get tests up and running quickly, but it makes one big supposition - that the code you're generating a test for actually works.

Make TDD easier and remove the feature - it's silly.

Martin.

LINQ

Hey guys!

I have to say, I'm a little disappointed.

Having played about with LINQ briefly, I thought it was a really good idea - especially not having to work out complex joins and the like.

If you push the technology to the point of using it in an enterprise solution, though, you find that whilst you don't have to work out the SQL side of things, or the joins, really you do. There are some unexpected side effects within LINQ, such as not being able to mix and match LINQ to SQL and LINQ to Objects, so you run into problems with code re-use, which means the technology requires a much more in-depth knowledge than you would need in order to use stored procedures.
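
To make the re-use problem concrete, here's a simplified sketch (the types are invented): the same predicate can't easily serve both worlds, because LINQ to SQL needs an expression tree it can translate, while a plain delegate quietly drags you back to LINQ to Objects:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class Person { public int Age { get; set; } }

public static class PredicateReuse
{
    // An ordinary compiled delegate - fine for LINQ to Objects.
    static readonly Func<Person, bool> isAdult = p => p.Age >= 18;

    // An expression tree - what LINQ to SQL needs to build the SQL.
    static readonly Expression<Func<Person, bool>> isAdultExpr = p => p.Age >= 18;

    public static void Demonstrate(List<Person> inMemory, IQueryable<Person> table)
    {
        var a = inMemory.Where(isAdult);  // filtered in memory - fine
        var b = table.Where(isAdultExpr); // translated to SQL - fine

        // Compiles, but binds to LINQ to Objects: every row is fetched
        // from the database before the filter ever runs.
        var c = table.Where(isAdult);
    }
}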

I also found that performance in the simple scenario was great but, again, when pushed hard the technology was not great performance-wise. That meant that using LINQ meant mixing and matching the LINQ language with stored procedures, which in my mind at least undoes its purpose - language independence.

One other thing - if you want to keep a traditional data layer, without any technology bleed, and still make LINQ work properly, well, basically you can't. It seems that each layer having its own model, whilst a good idea in theory, isn't something you'd really want to do with LINQ. I might be being unfair to the technology, but from what I could see, the LINQ objects needed to be used to keep state information, which makes it basically a desktop technology - much like the dataset in a lot of ways, but with a fancy language to sit in front of it.

I'm going to keep going with the technology, as I like the concept, to see if I can find a way around these issues and make use of it in a way that is actually useful. At this point, though, it simply seems like a fancy language with some very cool technology but no real follow-through. Really, that's a shame.

Tuesday, February 3, 2009

Static classes and methods?

Hi,

Here's something that I really feel most people miss - questioning why they do something, design-wise.
A lot of people use static classes out of laziness: it's easier to declare something as static than to create an instance and call methods on it.

Here's the thing: static classes generally go against OO design - you can't put an interface on a static class. That means the code is coupled to an implementation, and so is not as easily maintainable.

I love unit testing, ideally with mocking, so my natural inclination is toward dependency injection. The code that results from following this approach with TDD is naturally a lot cleaner.

Any static class should be a thin veneer over an instance class, which then makes it an implementation of the singleton pattern. I can see no other need for a static class - there's an easy way around every other argument I have come across.

Take, for example, the argument that it's nice to have access to a static class, such as MyDataLayer.Save(data). Instead, how about defining a private property within the class, and lazily instantiating the reference within the property getter? For example:

// Backing field - null until first use, or set via a constructor.
private IDataLayer _dataLayer;

private IDataLayer DataLayer
{
    get
    {
        // Lazily create the default implementation on first access.
        if (_dataLayer == null)
        {
            _dataLayer = new MyDataLayer();
        }
        return _dataLayer;
    }
}

The only excuse for not using this approach is laziness. It also allows you to inject dependencies through a special constructor, and the code will still work - so you can still unit test, and you still get the convenience of static classes without all the drawbacks!
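
To complete the picture, the "special constructor" I mean looks something like this - a minimal sketch, with the type names invented for illustration:

public interface IDataLayer { void Save(object data); }
public class MyDataLayer : IDataLayer { public void Save(object data) { /* write to the database */ } }

public class PersonManager
{
    private IDataLayer _dataLayer;

    // Production use - the lazy property below supplies the default implementation.
    public PersonManager() { }

    // The 'special' constructor - unit tests inject a mock IDataLayer here.
    public PersonManager(IDataLayer dataLayer)
    {
        _dataLayer = dataLayer;
    }

    private IDataLayer DataLayer
    {
        get { return _dataLayer ?? (_dataLayer = new MyDataLayer()); }
    }

    public void Save(object data)
    {
        DataLayer.Save(data);
    }
}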

Happy Coding!

Martin.

Saturday, January 31, 2009

Real world architecture

Okay, so here's the thing - are there situations in architecture that really push you hard, that drive you to compromise in order to meet a constraint placed upon you from outside your control?
I know that this particular situation comes up; it's a bit like the 'on the buzzer' round on a game show. You have to think up something really creative to get around the constraint. Yes, I know, the company doesn't have mature software development processes, blah blah blah, but sometimes these things happen and push you that bit harder: come up with a solution that doesn't feel like selling your soul to the devil whilst still meeting the underlying constraint. I've been there many times. Some companies have a better setup, so the situation never arises, but in my experience most do not.
So what do you do when this happens? Is there a core approach to architecture in this situation? I believe that there is - a core process that we can use as a starting point.
My first point is based around agile projects. All software development is an agile project in one form or another - I've never seen a project where the customer is happy with the outcome and doesn't want to add to it. That's not agile, but it should be! Anyway, I digress - what if you could put the architecture together and not care about such change? What I'm saying is: expect it. That changes the mindset somewhat. It is invigorating to be free of the shackles of inflexibility, is it not?

There are a couple of patterns that can help. First and foremost, the factory - delay implementation detail to a later date. This is a great start - it's nice not to care too much about the 'how' of something, only the 'what'. Okay, so most people are saying 'Duh, that's obvious'. Yeah, it probably is obvious by itself.
So I'd like to introduce the strategy pattern - a nice way to again abstract the implementation away from the function. Again, not exactly cutting-edge thinking, I know.
Join those two concepts together and we get something that Microsoft calls the 'Provider' pattern. That is music to my ears - we now have a pluggable solution where we care a lot less about the implementation. Sure, we need to know the intent, but if someone changes their mind, which they likely will, we are happy to accommodate (in the next phase, of course).
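
Roughly the shape I have in mind - a minimal sketch, with the provider name and config key invented for the example:

using System;
using System.Configuration;

// The strategy - callers only ever see the 'what'.
public interface IStorageProvider
{
    void Save(string key, byte[] data);
}

// The factory - the 'how' is deferred to configuration, so swapping
// implementations is a config change, not a code change.
public static class StorageProviderFactory
{
    public static IStorageProvider Create()
    {
        // e.g. <add key="storageProvider" value="MyApp.SqlStorageProvider, MyApp" />
        string typeName = ConfigurationManager.AppSettings["storageProvider"];
        return (IStorageProvider)Activator.CreateInstance(Type.GetType(typeName));
    }
}
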
The second important thing to have in place is unit tests. I talk about them all the time. Going without them is like driving without a seat belt: sure, you probably won't have a really bad crash and cause massive injury, but there's nothing like knowing you're in a safe environment.
The only other constraint is to use proper OO techniques for encapsulation. For example, you never expose anything like an IList; instead you would expose an IPersonList, so you can change its meaning without affecting the solution. Let's face it, half of that List<> functionality is going to go unused anyway, so why complicate matters and expose the whole interface? Yes, it's going overboard, but it largely insulates the project from change, and once you get used to doing things this way, it's not a big deal. It just seems pointless until the penny drops...
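
For example - a sketch, where the members are just whatever this particular solution needs:

using System.Collections.Generic;

public class Person { public string Name { get; set; } }

// Expose only the operations the solution actually uses,
// not the whole IList<T> contract.
public interface IPersonList
{
    void Add(Person person);
    Person FindByName(string name);
    int Count { get; }
}

// The backing List<Person> is an implementation detail that is free to change.
public class PersonList : IPersonList
{
    private readonly List<Person> _people = new List<Person>();

    public void Add(Person person) { _people.Add(person); }
    public Person FindByName(string name) { return _people.Find(p => p.Name == name); }
    public int Count { get { return _people.Count; } }
}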

So, that said, what if you want the architecture to be SOA? I don't care - why not have both, or some intelligent combination?

I'm thinking that there are people out there who have similar ideas and experiences? Please leave a comment if you do!

The best software to do "x"....

Hi,

Been pulling my team together for a new project, and whilst doing so, have been looking out for software that allows us to do things properly.
Now, before I start, I have to admit that I absolutely love TFS and VSTS, and that really makes it difficult to look for something else out there. Being realistic, though, to keep costs down in the current economic climate I have been on the lookout for cheaper or free alternatives. It seems they're few and far between these days.

So, first of all - non-VSTS unit testing? Well, clearly we're going to want a mocking framework, and that one is easy - Rhino Mocks. I've used NMock, which was great, but once you're used to Rhino, it's the way to go. I tried Moq but found it less powerful than Rhino.

Okay, second on the list - code coverage. This one is difficult. Microsoft used to have a product, Coverage Eye, which I used on a few occasions and which worked really well. There's NCover, but that's quite expensive now, it seems - almost to the point where I should be considering VSTS, which seems a lot better. The only alternative I found was PartCover - it works, but the experience isn't at all integrated. And that leads me on to another point...

Unit testing framework... Okay, so the integrated MSTest framework is pretty good, and I get why you'd move the test output to a completely different location from where you would normally run - to run the tests in isolation. No complaints from me there. However, trying to hook code coverage into that build location, from a coverage component that isn't IDE-integrated and so can't easily pick up the test folder location, is a pain. So much so that I'd suggest using NUnit is unfortunately a necessity. The trouble is that we're then doing development from the IDE and running tests from NUnit, which fires up code coverage. Not too bad a solution, but it can be a pain.
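
For what it's worth, the NUnit side of that setup is simple enough. Here's a minimal fixture, re-using the PersonManager sketch from the static classes post above - a hand-rolled fake stands in where Rhino Mocks would normally generate one:

using NUnit.Framework;

[TestFixture]
public class PersonManagerTests
{
    [Test]
    public void Save_PassesTheDataToTheDataLayer()
    {
        var fakeDataLayer = new FakeDataLayer();
        var manager = new PersonManager(fakeDataLayer);

        manager.Save("some data");

        Assert.IsTrue(fakeDataLayer.SaveWasCalled);
    }

    // Hand-rolled fake - a mocking framework would generate this for you.
    private class FakeDataLayer : IDataLayer
    {
        public bool SaveWasCalled;

        public void Save(object data)
        {
            SaveWasCalled = true;
        }
    }
}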

Personally, I'd really like to see something more in the professional versions of VS. What I'm saying is that not every organisation is big enough to afford, or wants to afford, the all-singing, all-dancing solution, much as I'm sure most would if it were cheaper. I'd love to see the integration points of VS brought a little more to the surface, to open things up a little better for the free tools.

As previously mentioned, I love what Microsoft has done with Team System and TFS, really I do, and I think corporations that are able to afford it will go with that model. I would, given the money, no doubt. The alternative at the moment is really not the most compelling, though - I know, you say, "Save up and buy the proper version then", but at the moment that's simply not going to happen, which cheapens the experience.

I think that the opening up of the IDE should be a little like love - you don't know that you've got it until you set it free!

And Microsoft, keep up the good work with these tools - for the first time in a long time, they really are impressive!