Monday, 21 May 2007

Did God use XP or Scrum?

I was wandering around my house in a sleepy haze this morning with thoughts gently pulsating into my peripheral consciousness. I switched on the light and said to myself "Let there be light, and there was light", and then two peripheral thoughts blurred into focus at once, prompting me to rummage through my wife's academic books (she's a teacher of Religion) and read through the first chapter of the Judeo-Christian Old Testament.

Jokes about developers' egos aside, it is quite clear that God used iterative development (or creation). Give the first chapter of Genesis a read for the compelling evidence.

In fact I think there is a clear case for claiming God was a YAGNI type of mono-deity: he creates light on day one but doesn't introduce the Sun until day four (verse 16: "God made two great lights--the greater light to govern the day and the lesser light to govern the night."). Though I've personally never worked on a project of that scale, I can safely guess that there was some serious refactoring going on there! This also shows that the Creator has a real talent for deciding what the next most important thing should be.

Some of the more observant out there may have noticed that God was also doing a bit of Behaviour Driven Development, though being God he uses "Let there be" rather than "Should" before each one of his tests.

You may be wary of reading quite so much into the text, but I think it's safe to say there was no BDUF (Big Design Up Front) going on. Can you imagine it if there was: on the first day he drew up the requirements, on the second day he wrote the design document, on the third day he finally started, on the fourth day he realised that he'd forgotten the light and had to submit a change request...

Tuesday, 15 May 2007

YAGNI battles the Black Swans

There's been a lot of fuss in the economic world about a book called The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb. Taleb uses the metaphor of the discovery of the black swan to explain his theory: basically, before the discovery of Australia people believed there were only white swans in the world and built a whole set of theories around this 'fact'. Then black swans were discovered in Australia, junking the theories.

Using this as a template, Taleb defines a Black Swan as a highly improbable event with three principal characteristics: it is unpredictable; it carries a massive impact; and, after the fact, we concoct an explanation that makes it appear less random and more predictable than it was.

Taleb goes on to criticize businesses, markets and politicians for their overconfidence in prediction. He argues that it is a cold, hard fact that we cannot predict Black Swans, yet by preempting the future based on our belief that only White Swans exist we set ourselves up for disaster. So when a Black Swan does come along - and Taleb argues they come along a lot more often than we expect - we've made things a whole lot worse by planning around White Swans.

Any developer who's ever used the phrase "You Ain't Gonna Need It" will identify with Taleb's theory. YAGNI tells us that we should only implement what we need, not what we foresee we'll need - no matter how sure you are that you're gonna need it. By writing code which preempts the future design we are building systems for White Swans, and when that Black Swan comes along (and it will) we're going to be running around trying to refactor a load of over-engineered code. The difference between a Black Swan developer and a White Swan developer is that the Black Swan developer accepts this and uses YAGNI to battle it, whereas a White Swan developer just re-engineers his predictions by turning the latest Black Swan into another White Swan - that is, until the next Black Swan rears its ugly head.

I think YAGNI developers - either naturally or through experience - believe that life is full of Black Swans. I think this is why some developers don't get YAGNI: they only believe in white ones. Maybe a different approach is not to say to them "You ain't gonna need it" but simply "Black Swan".

Sunday, 13 May 2007

Encapsulating Top Trump objects

Many of the objects we design are like Top Trumps: on first inspection it looks like we should be exposing all their information so we can see it, but when you look closer you realize it's 100% about behaviour.

What do I mean? A game of Top Trumps consists of you looking at your card, choosing what you believe to be the best statistic on it and challenging your opponent. This leads us to expose the data of our Top Trump object using properties so we can see it clearly. In reality, however, the Top Trump card should be keeping its data secret: it exposes its data to you, not to your opponent; you merely ask your opponent whether your card beats his, but you never know what data his card contains.

If we were to model this in an OO world we would probably have a TopTrump class with various comparison methods. We wouldn't want to do the comparisons ourselves. For example, we wouldn't do:

public void ChallengeOnSpeed()
{
    if (me.CurrentTrump.Speed > opponent.CurrentTrump.Speed)
    {
        me.TakeTrumpFrom(opponent);
    }
    else if (me.CurrentTrump.Speed < opponent.CurrentTrump.Speed)
    {
        me.GiveUpTrumpTo(opponent);
    }
    else
    {
        // it's a draw
    }
}
Instead we'd have the Trumps make their own decisions:

public void Challenge(IStatistic statistic)
{
    ChallengeResult challengeResult = myTrump.ChallengeWith(statistic);

    if (challengeResult.Equals(Won))
    {
        TakeTrumpFrom(opponent);
    }
    else if (challengeResult.Equals(Lost))
    {
        GiveUpTrumpTo(opponent);
    }
    else
    {
        // it's a draw
    }
}
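For completeness, here's one way the trump's side of that conversation might look. This is purely a sketch of my own rather than the original design: it assumes IStatistic implements IComparable<IStatistic> so statistics know how to compare themselves, and that the trump keeps its own statistic private.

public enum ChallengeResult { Won, Lost, Drawn }

public class SupercarTrump
{
    // The trump's own statistic never leaves the object; only the result does.
    private IStatistic speed;

    public ChallengeResult ChallengeWith(IStatistic statistic)
    {
        int comparison = speed.CompareTo(statistic);

        if (comparison > 0) return ChallengeResult.Won;
        if (comparison < 0) return ChallengeResult.Lost;
        return ChallengeResult.Drawn;
    }
}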
So what has this gained us? It is no longer down to the player to compare the information the trumps expose; instead the responsibility for comparison sits with the trumps themselves. Now the trumps are autonomous in deciding how they should behave: they are encapsulated. "And the point of that was?" To answer that question let's look at the evolution of our model. The first version of our software only supported one type of Trump: the Supercars pack. Our players could compare directly by choosing ChallengeOnSpeed etc., but now we've designed a new Marvel Comic Heroes pack which has different categories. We'd have to either add new methods or create new Player classes to handle the new Trump classes, and new Game classes to handle the new Player classes. What a load of work! Then what happens when the Dinosaurs pack or the Horror pack comes out? Or what if we go global and speed is measured in KM/H as well as MPH, or prices in dollars as well as sterling? By exposing the data we have overburdened the Player and Game classes with responsibility, causing a maintenance nightmare, so now every time our domain evolves our code breaks.

This is why properties break encapsulation when they expose data directly. When we put the responsibility for comparing data on a class which doesn't own it, any change to the class that owns the data has a ripple effect through your whole codebase, breaking the Single Responsibility Principle. If, on the other hand, the object itself is responsible for carrying out the operations on its data, you only make one change, and in turn you strengthen the concepts of your domain.

"OK" I hear you say "but you have to expose properties so you can render the data on the user interface". Ah but do you? What if the Trump object told the UI what to display? "Surely that would mean putting UI code into the Trump object and that's just wrong and anyway I need that data to save to the database as well does that mean we also end up with data code in the class that's even worse?" Not if we use the Mediator pattern (hey imagine a Design Patterns Top Trumps!).

Mediator to the rescue
On a side note I'd like to discuss why I said Mediator rather than Builder. Some people use the Builder pattern (Dave Astels did in his One Expectation Per Test example), which is a construction pattern used to separate the construction of an object from its representation. The point of a Builder is that at the end you get an object. However, when working with a UI you may not have an object as an end product; instead you may just be sticking your values into existing objects. The Mediator, on the other hand, merely acts as an interface which defines the interaction between objects to maintain loose coupling. You could argue that some implementations of Builder are really a Mediator with a Factory method (or a combination of both, as we shall soon see).

Well, let's start by defining the mediator for our Top Trump. Those who have read my blog before will know I'm a big fan of Behaviour/Test Driven Development, but for the sake of conciseness I'll deal only with the implementations in these examples.

public class TopTrump
{
    int speed;

    public void Mediate(ITopTrumpMediator mediator)
    {
        mediator.SetSpeed(speed);
    }
}

public interface ITopTrumpMediator
{
    void SetSpeed(int speed);
}
Of course you could use a property (what?) for setting the data if you so wished (I think that's perfectly legitimate, as changing the value is itself a behaviour); however, there are good arguments for using a method, one of them being overloading (suppose we wanted SetSpeed(string)).
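For example (purely illustrative - the string overload is a hypothetical addition, not part of the design above), the mediator interface could grow an overload for values that arrive pre-formatted:

public interface ITopTrumpMediator
{
    void SetSpeed(int speed);

    // Hypothetical overload: handy when the speed arrives as text,
    // e.g. "150 MPH" - something a property setter couldn't offer.
    void SetSpeed(string speed);
}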

Now when we implement our concrete Mediator we tie it to the view:

public class UiTopTrumpMediator : ITopTrumpMediator
{
    private readonly ITopTrumpView view;

    public UiTopTrumpMediator(ITopTrumpView view)
    {
        this.view = view;
    }

    public void SetSpeed(int speed)
    {
        view.SpeedTextBox.Text = speed.ToString();
    }
}
That works nicely, and of course you could implement a database Mediator as well, using the same single method regardless of the destination (there's a sketch of one further down - I love OO, don't you?). The only thing is our Mediator is a bit too close to the exact representation of the object. If we were to introduce our new packs we'd have to rewrite our mediator interface and all the classes that consume it. What we need to do is get back to our domain concepts and start dealing in those again:

public interface ITopTrump
{
    void Mediate(ITopTrumpMediator mediator);
}

public interface ITopTrumpMediator
{
    void AddStatistic(IStatistic statistic);
}

public class SupercarTrump : ITopTrump
{
    int speed;
    SpeedUnit speedUnit;

    public void Mediate(ITopTrumpMediator mediator)
    {
        mediator.AddStatistic(new SpeedStatistic(speed, speedUnit));
    }
}

public class DinosaursTrump : ITopTrump
{
    StrengthStatistic strength;

    public void Mediate(ITopTrumpMediator mediator)
    {
        mediator.AddStatistic(strength);
    }
}
Then on IStatistic we'd add a ToString method like so:

public interface IStatistic
{
    string ToString();
}

public class SpeedStatistic : IStatistic
{
    private readonly int speed;
    private readonly SpeedUnit speedUnit;

    public SpeedStatistic(int speed, SpeedUnit speedUnit)
    {
        this.speed = speed;
        this.speedUnit = speedUnit;
    }

    public int Speed { get { return speed; } } // used later by the memento

    public override string ToString()
    {
        return String.Format("{0}{1}", speed, speedUnit);
    }
}
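As promised above, here's a sketch of what a database-facing mediator could look like against the same concept-based interface. ITrumpRecordWriter is a made-up persistence abstraction purely for illustration; substitute whatever data access you actually use.

public class DatabaseTopTrumpMediator : ITopTrumpMediator
{
    // Hypothetical persistence abstraction, not part of the original design.
    private readonly ITrumpRecordWriter writer;

    public DatabaseTopTrumpMediator(ITrumpRecordWriter writer)
    {
        this.writer = writer;
    }

    public void AddStatistic(IStatistic statistic)
    {
        // Same conversation as the UI mediator, different destination:
        // the trump neither knows nor cares where its data ends up.
        writer.WriteStatistic(statistic.ToString());
    }
}

public interface ITrumpRecordWriter
{
    void WriteStatistic(string value);
}

The trump calls Mediate exactly as before; only the mediator it is handed changes.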
Of course we could go on and on refining and refactoring - adding multilingual support to our mediator, etc. - but hopefully you get the picture: by encapsulating the data of the object and placing the stress on its behaviour, we protect its internal representation from being exposed, thus decreasing its coupling and making it more flexible and stable during change.

Now, if I'm honest with you, after having all the above revelations there was one scenario I struggled with, and for which I hadn't found any good examples. Everyone talks about displaying data on the UI but never about changing the object from the UI. Changing the data implies breaking encapsulation in some way, and if the UI isn't allowed to know about the internal representation, how is it supposed to change it? In short, how would our Trump Designer package create new Trump cards and edit existing ones?

Well, creation is easy: we'd use the Builder pattern and have a concrete implementation of ITopTrumpBuilder for each type of Top Trump card. The UI would then simply engage the ITopTrumpBuilder and pass its data across in much the same fashion as with the mediator, just in reverse. The builder could even tell us whether the resulting Trump is valid before we try and get the product.
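I haven't shown that builder anywhere, so here's a rough sketch of the shape it might take. The member names are my own assumptions rather than a definitive design, and it assumes SupercarTrump gains a constructor that accepts its statistic.

public interface ITopTrumpBuilder
{
    void AddStatistic(IStatistic statistic);
    bool IsValid { get; }
    ITopTrump GetTrump();
}

public class SupercarTrumpBuilder : ITopTrumpBuilder
{
    private SpeedStatistic speed;

    public void AddStatistic(IStatistic statistic)
    {
        // Only keep the statistics this pack understands.
        SpeedStatistic speedStatistic = statistic as SpeedStatistic;
        if (speedStatistic != null)
        {
            speed = speedStatistic;
        }
    }

    public bool IsValid
    {
        get { return speed != null; }
    }

    public ITopTrump GetTrump()
    {
        // Assumes SupercarTrump has a constructor taking its statistic.
        return new SupercarTrump(speed);
    }
}

The UI simply calls AddStatistic for each value it has collected, checks IsValid, then asks for the product.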

Remember Memento? (not the film, the pattern)
But still, what about editing an object? There's a pattern called Memento which, thanks to the film, probably has the catchiest of all the pattern names, yet it's still quite a rarity to see it used. That's because Memento's core purpose is undo behaviour, which is rare in enterprise systems, but it is handy for general editing scenarios. Basically a Memento is either a nested private class which holds (or is allowed to manipulate) the state of its container (the originator), or an internal class which the originator loads and extracts its values from. Mementos therefore offer a very nice way of encapsulating edit behaviour if we combine them with Mediator to create a public interface which objects external to the domain can use.

public interface ITopTrumpMemento
{
    void UpdateStatistic(IStatistic statistic);
}

public class SupercarTrump : ITopTrump
{
    private State state = new State();

    // Mediate(ITopTrumpMediator) is implemented as shown earlier; omitted here.

    private class State
    {
        internal int Speed;
        internal SpeedUnit SpeedUnit;
    }

    private class SupercarTrumpMemento : ITopTrumpMemento
    {
        private readonly State state;

        // Internal rather than private so the originator (SupercarTrump) can
        // create the memento and read its state back, while the outside world
        // only ever sees the ITopTrumpMemento interface.
        internal SupercarTrumpMemento(State state)
        {
            this.state = state;
        }

        internal State GetState()
        {
            return state;
        }

        public void UpdateStatistic(IStatistic statistic)
        {
            SpeedStatistic speedStatistic = statistic as SpeedStatistic;
            if (speedStatistic != null)
            {
                state.Speed = speedStatistic.Speed;
            }
        }
    }

    public ITopTrumpMemento CreateMemento()
    {
        // Hand the memento a copy of the current state so edits only take
        // effect when Update is called.
        State copy = new State();
        copy.Speed = state.Speed;
        copy.SpeedUnit = state.SpeedUnit;
        return new SupercarTrumpMemento(copy);
    }

    public void Update(ITopTrumpMemento memento)
    {
        SupercarTrumpMemento supercarMemento = memento as SupercarTrumpMemento;
        if (supercarMemento != null)
        {
            this.state = supercarMemento.GetState();
        }
    }
}
So there you go: now your UI (or database, or web service) can work with the ITopTrumpMemento interface for editing ITopTrump objects, and you can add new TopTrump classes which store their internal data in all sorts of different ways to your heart's content without ever breaking any code!
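To make the edit flow concrete, here's a sketch of how a presenter might use it. The presenter name is hypothetical, and it assumes CreateMemento and Update are promoted onto ITopTrump, which the code above doesn't show.

public class TrumpEditorPresenter
{
    public void ApplySpeedEdit(ITopTrump trump, int newSpeed, SpeedUnit unit)
    {
        // Ask the trump for a memento, describe the change in domain terms,
        // then hand the memento back. The presenter never learns how the
        // trump actually stores its data.
        ITopTrumpMemento memento = trump.CreateMemento();
        memento.UpdateStatistic(new SpeedStatistic(newSpeed, unit));
        trump.Update(memento);
    }
}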

The advantages of this are almost too numerous to mention: loose coupling is promoted as the UI never gets near the domain; testing is made far easier as you can use mocks of the IMediator, IBuilder and IMemento instead of working with the domain objects directly; and reusability is increased as the mediators take responsibility away from your presenters.

Tip:
The trick to maintaining your encapsulation as neatly as possible is to ensure that your IMediators, IBuilders and IMementos all deal with the concepts of their domain (for example IStatistic) and not the structure of the data (e.g. int speed).
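In interface terms the difference looks something like this (an illustrative sketch only; the names are mine):

// Structure-of-the-data style: every new pack forces interface changes.
public interface IDataCentricMediator
{
    void SetSpeed(int speed);
    void SetStrength(int strength);
}

// Concepts-of-the-domain style: new packs just supply new IStatistic types.
public interface IConceptCentricMediator
{
    void AddStatistic(IStatistic statistic);
}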

Friday, 11 May 2007

Eric Evans: Hear his voice

DotNetRocks have published a podcast of an Eric Evans interview discussing Domain Driven Design (Show #236).

There's about 11 minutes of irrelevant chatter at the beginning so just skip forward.

Have a listen and enjoy.

Thursday, 10 May 2007

Don't Expect too much

A long while ago (well over two years) there was a lot of fuss made on the testdrivendevelopment Yahoo group about having only one assertion per test. Dave Astels wrote a great little article and a little more fuss was made. It was one of those things you knew made sense but sounded like a lot of work. I played with one assertion per test anyway and suddenly felt my code developing more fluidly as I focused on only one thing at a time; my tests looked clearer (more behaviour-oriented) and refactoring became a lot simpler too.

Then Dave Astels came along and pushed the envelope further with One Expectation per Example. This really grabbed my attention: I had been enjoying the benefits of keeping my test code clean with one assertion per test, but anything with mocks in it just turned into an out-of-control beast (especially some of the more complex logic such as MVP). Even a simple four-liner such as the one below would end up with four expectations:
public void AddOrder(OrderDto orderDto)
{
    Customer customer = session.GetCurrentCustomer();
    customer.AddOrder(orderFactory.CreateOrder(orderDto));

    workspace.Save(customer);
}
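To show what I mean, here's roughly what such a test ends up looking like in NMock2. The collaborator names (ISession, IOrderFactory, IWorkspace, ICustomer, OrderService) mirror the snippet above but are otherwise my assumptions, including the factory being injected and the customer sitting behind an interface so they can be mocked.

[Test]
public void ShouldAddOrderToCurrentCustomerAndSaveIt()
{
    Mockery mocks = new Mockery();
    ISession session = mocks.NewMock<ISession>();
    IOrderFactory orderFactory = mocks.NewMock<IOrderFactory>();
    IWorkspace workspace = mocks.NewMock<IWorkspace>();
    ICustomer customer = mocks.NewMock<ICustomer>();
    Order order = new Order();
    OrderDto orderDto = new OrderDto();

    // One expectation per collaborator - four before the behaviour even runs.
    Expect.Once.On(session).Method("GetCurrentCustomer").Will(Return.Value(customer));
    Expect.Once.On(orderFactory).Method("CreateOrder").With(orderDto).Will(Return.Value(order));
    Expect.Once.On(customer).Method("AddOrder").With(order);
    Expect.Once.On(workspace).Method("Save").With(customer);

    new OrderService(session, orderFactory, workspace).AddOrder(orderDto);

    mocks.VerifyAllExpectationsHaveBeenMet();
}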
Then every time I needed to add new functionality I had to add more expectations, and anyone who read the test (including myself after 24 hours) would struggle to make head or tail of the monster. And if tests failed it would take a day to find which expectation had gone wrong. If you're not careful you end up with the TDD anti-pattern The Mockery.

I had read Dave Astels' article several times but couldn't fathom how it worked, especially as it was written in Ruby with the behaviour-driven RSpec. In the end I had to rewrite it in .NET myself before I got it.

So here is a breakdown of how I got Dave's One Expectation per Example to work for me:

One Expectation Per Example (C#)
One of the first things to note is that Dave uses the Builder pattern in his example. The idea is that the Address object interacts with a builder and passes its data to it, rather than allowing other objects to see its state directly and thereby breaking encapsulation. I'd like to go into this technique in more detail in another article, but to make the point quickly: imagine you create an HTML builder to display the address on the web.
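As a taster (and purely as a sketch of my own, written against the IBuilder interface we end up with by the end of this walkthrough), such an HTML builder might look like:

public class HtmlBuilder : IBuilder
{
    private string address1;
    private string csp;

    public string Address1
    {
        set { address1 = value; }
    }

    public string Csp
    {
        set { csp = value; }
    }

    public string ToHtml()
    {
        // The Address object never knows its data ends up as markup.
        return String.Format("<p>{0}<br/>{1}</p>", address1, csp);
    }
}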

Well let's start with Dave's first test:
[TestFixture]
public class OneExpectationPerExample
{
    [Test]
    public void ShouldCaptureStreetInformation()
    {
        Address addr = Address.Parse("ADDR1$CITY IL 60563");

        Mockery mocks = new Mockery();
        IBuilder builder = mocks.NewMock<IBuilder>();

        Expect.Once.On(builder).SetProperty("Address1").To("ADDR1");

        addr.Use(builder);

        mocks.VerifyAllExpectationsHaveBeenMet();
    }
}
You may have noticed that I've changed a few things, mainly to make it look consistent with .NET design practices. Basically I've introduced a Parse method rather than the from_string method Dave uses.

Now we need to get this baby to compile. First we need to create the Address class like so:
public class Address
{
    public static Address Parse(string address)
    {
        throw new NotImplementedException();
    }

    public void Use(IBuilder builder)
    {
        throw new NotImplementedException();
    }
}
And the IBuilder interface:
public interface IBuilder {}
Now it compiles but when we run it we get the following:
   mock object builder does not have a setter for property Address1
So we need to add the Address1 property to the IBuilder. Then we run and we get:
    TestCase 'OneExpectationPerExample.ShouldCaptureStreetInformation'
failed: NMock2.Internal.ExpectationException : not all expected invocations were performed
Expected:
1 time: builder.Address1 = (equal to "ADDR1") [called 0 times]
Let's implement some working code then:
public class Address
{
    private readonly string address1;

    private Address(string address1)
    {
        this.address1 = address1;
    }

    public static Address Parse(string address)
    {
        string[] splitAddress = address.Split('$');

        return new Address(splitAddress[0]);
    }

    public void Use(IBuilder builder)
    {
        builder.Address1 = address1;
    }
}
Run the tests again and they pass! So let's move on to the second part: implementing the Csp. Here's the new test:

[Test]
public void ShouldCaptureCspInformation()
{
    Address addr = Address.Parse("ADDR1$CITY IL 60563");

    Mockery mocks = new Mockery();
    IBuilder builder = mocks.NewMock<IBuilder>();

    Expect.Once.On(builder).SetProperty("Csp").To("CITY IL 60563");

    addr.Use(builder);

    mocks.VerifyAllExpectationsHaveBeenMet();
}
Now, with a little refactoring to get rid of our repeated code, we turn it into this:
[TestFixture]
public class OneExpectationPerExample
{
    private IBuilder builder;
    private Address addr;
    private Mockery mocks;

    [SetUp]
    public void SetUp()
    {
        mocks = new Mockery();

        builder = mocks.NewMock<IBuilder>();

        addr = Address.Parse("ADDR1$CITY IL 60563");
    }

    [TearDown]
    public void TearDown()
    {
        mocks.VerifyAllExpectationsHaveBeenMet();
    }

    [Test]
    public void ShouldCaptureStreetInformation()
    {
        Expect.Once.On(builder).SetProperty("Address1").To("ADDR1");

        addr.Use(builder);
    }

    [Test]
    public void ShouldCaptureCspInformation()
    {
        Expect.Once.On(builder).SetProperty("Csp").To("CITY IL 60563");

        addr.Use(builder);
    }
}
Looking good! We run the new test and we get the usual error about the missing Csp property on IBuilder, so we add it:
public interface IBuilder
{
    string Address1 { set; }
    string Csp { set; }
}
Then we run the test again and we get:
   TestCase 'OneExpectationPerExample.ShouldCaptureCspInformation'
failed: NMock2.Internal.ExpectationException : unexpected invocation of builder.Address1 = "ADDR1"
Expected:
1 time: builder.Csp = (equal to "CITY IL 60563") [called 0 times]
Oh no. This is where Dave's article falls apart for .NET. Basically RSpec has an option to create Quiet Mocks, which quietly ignore any unexpected calls. Unfortunately I know of no .NET mock libraries that have such behaviour (though I have since been reliably informed by John Donaldson on the TDD Yahoo group that it is possible with the NUnit mock library). There is a way out, though: stub the whole thing out by using Method(Is.Anything):
[Test]
public void ShouldCaptureCspInformation()
{
    Expect.Once.On(builder).SetProperty("Csp").To("CITY IL 60563");

    // stub it as we're not interested in any other calls.
    Stub.On(builder).Method(Is.Anything);

    addr.Use(builder);
}
Just be careful to put the Stub AFTER the Expect and not before, as otherwise NMock will use the Stub rather than the Expect and your test will keep failing.

So now we run the tests and we get:
   TestCase 'OneExpectationPerExample.ShouldCaptureCspInformation'
failed:
TearDown : System.Reflection.TargetInvocationException : Exception has been thrown by the target of an invocation.
----> NMock2.Internal.ExpectationException : not all expected invocations were performed
Expected:
1 time: builder.Csp = (equal to "CITY IL 60563") [called 0 times]
Excellent, NMock is now behaving correctly, so we can finish implementing the code:
public class Address
{
    private readonly string address1;
    private readonly string csp;

    private Address(string address1, string csp)
    {
        this.address1 = address1;
        this.csp = csp;
    }

    public static Address Parse(string address)
    {
        string[] splitAddress = address.Split('$');

        return new Address(splitAddress[0], splitAddress[1]);
    }

    public void Use(IBuilder builder)
    {
        builder.Address1 = address1;
        builder.Csp = csp;
    }
}
Run the test and it works! Now if we run the whole fixture we get:
   TestCase 'OneExpectationPerExample.ShouldCaptureStreetInformation'
failed: NMock2.Internal.ExpectationException : unexpected invocation of builder.Csp = "CITY IL 60563"
All we need to do is go back and add the Stub code to the street test. That's a bit of a bummer, but we could refactor our tests to make the call in the tear down like so:
[TearDown]
public void UseTheBuilder()
{
    Stub.On(builder).Method(Is.Anything);

    addr.Use(builder);

    mocks.VerifyAllExpectationsHaveBeenMet();
}

[Test]
public void ShouldCaptureStreetInformation()
{
    Expect.Once.On(builder).SetProperty("Address1").To("ADDR1");
}

[Test]
public void ShouldCaptureCspInformation()
{
    Expect.Once.On(builder).SetProperty("Csp").To("CITY IL 60563");
}
This approach comes across as slightly odd because the expectations are set in the test but the code is exercised in the tear down. I actually think it's neater in some ways, as it ensures you have one test class for each set of behaviours; the only off-putting thing is the naming convention of the attributes.

I won't bother continuing with the rest of Dave's article as it's just more of the same from here. The only thing I'd add is that he uses one class per behaviour set (or context), so when he tests the behaviour of a string with a ZIP code he uses a whole new test fixture. This can feel a little extreme in some cases, as you get a bit of test-class explosion, but all in all it does make your life a lot easier.

I hope the translation helps all you .NET B/TDDers out there free yourselves from The Mockery.

Tip:
In more complex behaviours you may need to pass a value from one mock to another. In those instances you can do:
   Stub.On(x).Method(Is.Anything).Will(Return.Value(y));

About Me

West Malling, Kent, United Kingdom
I am a ThoughtWorker and general Memeologist living in the UK. I have worked in IT since 2000 on many projects from public facing websites in media and e-commerce to rich-client banking applications and corporate intranets. I am passionate and committed to making IT a better world.