Wednesday, 8 July 2009

Value Driven Development

At the beginning of the year I blogged about using value as a measure of productivity. Since then I have had the opportunity to refine my thinking and to be influenced by some other ideas, so I want to revisit the original idea of using value as a measure. I may be guilty of repeating myself a little, but it's a small cost for clarity.

The concept focuses on one very simple idea: the primary objective of a project is to deliver value. The more value we deliver, the more successful the project is. If you agree with this then my next idea shouldn't be too hard to grasp: the primary measure of a project is value. Not velocity, not story points, not flow, throughput, cycle time, capacity, cyclomatic complexity or code quality, but value, pure and simple. Of course we can use the other measurements to help us discover more effective ways of delivering value, but on their own they tell us little about how much value is actually being delivered.

Strangely, despite the fact that agile, Lean, Kanban and all their derivatives tell us how important value is, none of them propose explicitly measuring it. Agile proponents often argue that it is their focus on delivering value early and often which sets them apart from more conservative methodologies, but how do they know? I've never picked a story off the wall and been able to tell what worth the customer gains by delivering it: ten, a hundred, a thousand, a million pounds? Hell, who knows? The BA? The PM? The product owner? And how many projects out there could raise their hands to the question "How much value has your project delivered in the last week/month/year?" So if we're all so very value focused in our lean and agile worlds, why aren't we answering these simple questions? Are the statements about delivering more value pure assumption? How do we know that when we make 'improvements' to our processes the effect is to deliver more value? Or is it a case of substituting value with story points mapped against velocity? If so, how do you know that the twenty-five story points spent improving online advertising generated any revenue? Or is it a case of take the points and run?

Before we can confidently improve the amount of value we deliver we need to be measuring the value we are delivering. The first problem that presents itself is how do we do this? Value is hard to quantify and even harder to estimate. These sound awfully like the problems with size, so why not transfer the techniques used to estimate story size to estimating value? Start simple and use relative sizes, but instead of t-shirts maybe use precious metals (aluminium, bronze, silver, gold, platinum etc.) or any other metaphor that helps the team visualise features in relative worth. As with size estimation it's only an estimate; it doesn't have to be exact. The important thing is that we have some sort of gauge, and that we track it and refine it.
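To make the metal metaphor concrete, here is a minimal sketch of a relative value scale. The tier names come from the post; the point values assigned to each tier are purely illustrative assumptions, not part of the technique itself - any monotonically increasing scale the team agrees on would do.

```python
# Illustrative mapping of metal tiers to relative value points.
# The numbers are assumptions; only the relative ordering matters.
VALUE_TIERS = {
    "aluminium": 1,
    "bronze": 2,
    "silver": 5,
    "gold": 13,
    "platinum": 21,
}

def estimate_value(tier: str) -> int:
    """Return the relative value points for a tier name."""
    return VALUE_TIERS[tier.lower()]
```

As with planning poker, the team would refine these numbers over time as real outcomes come in.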

Once stories have value estimates they can go through the normal delivery process: estimate size, queue them up, stick them in iterations (or do some funky Kanban pulling on them). Except now every story card is branded with its most important piece of information: its value. During prioritisation, the platinum card which is the size of a small t-shirt should obviously be delivered before the XXL bronze one. When stories are worked on, people know it's worth putting that extra effort into putting the plating on the platinum one, but perhaps the aluminium story requires a more pragmatic approach than spending two days of cross-functional refactoring to perfect its design. During standups, if someone says a platinum card is blocked the whole team knows it's important to get it sorted. And coming up to a release with three gold stories complete but a stubborn bronze proving difficult to get across the line, you may be more tempted to make the call to leave it out this time, and hell to the size points. Very quickly every part of the process focuses on the amount of value being delivered. Every decision is influenced by the value of the story: not by its size, not by how cool it is, not by how much can be crammed into an iteration, but by how much value this piece of functionality is actually going to bring to the customer. Now that's being agile!
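The prioritisation call above - small platinum before XXL bronze - can be sketched as ordering by value relative to size. This is one plausible scoring, assumed for illustration; the stories and their numbers are made up.

```python
# Hypothetical backlog: each story carries both a value estimate
# and a size estimate (both in relative points).
stories = [
    {"name": "XXL bronze", "value": 2, "size": 40},
    {"name": "small platinum", "value": 21, "size": 3},
    {"name": "medium gold", "value": 13, "size": 8},
]

# Highest value-per-unit-of-size first: the small platinum story
# wins even though its raw value is not the only factor in play.
ordered = sorted(stories, key=lambda s: s["value"] / s["size"], reverse=True)
```

In practice a team might weigh in risk or dependencies as well, but the point stands: the sort key is value, not size alone.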

The purpose of putting values on stories is to measure how much value has been delivered. Now let's be clear about what delivered means: out there, in production, available to everyone who should have it, in full use. Basically, not dev complete, not in UAT, not in a live pilot to a handful of select users, but out there live and in concert, fully functional, fully delivered, bringing value. Until then the functionality is worthless; until it is delivering value you ain't delivered any value. Once it is, though, start totting it up: add the value points onto the delivery (note: delivery, not iteration - iterations cannot deliver value unless they release). Repeat for the next delivery, plotting the value delivered on a chart, and voila: the project's true velocity - how much value, delivery by delivery, it has been delivering.
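The totting-up rule can be sketched in a few lines. The `Story` and `Delivery` shapes here are hypothetical, but the rule they encode is the one above: only stories that are actually live count, and the chart series is value per delivery.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    name: str
    value_points: int
    delivered: bool = False  # live in production and in full use

@dataclass
class Delivery:
    label: str
    stories: list = field(default_factory=list)

    def value_delivered(self) -> int:
        # Only stories that are actually live earn their points.
        return sum(s.value_points for s in self.stories if s.delivered)

# Hypothetical project history: "ads" was released but never went live,
# so its points do not count towards the first delivery.
deliveries = [
    Delivery("release 1", [Story("search", 13, True), Story("ads", 5, False)]),
    Delivery("release 2", [Story("checkout", 21, True)]),
]

# The value-velocity chart is simply this series per delivery.
velocity = [d.value_delivered() for d in deliveries]
```

Note that the unit of accumulation is the delivery, never the iteration, exactly as argued above.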

But what if the value didn't quite get delivered? What if the functionality was released but with a bug which somehow hindered its full capability? Maybe it is too slow, too difficult to use, or doesn't quite do the right thing. Anything that prevents the application from delivering all of the value it promised is an issue (regardless of whether it is a bug or 'a feature'). However, by resolving the issue the lost value can be reclaimed. By fixing the bug, performance or usability problem, the project claws back the value it lost when it didn't get it quite right first time.

It's important that the project's value velocity reflects this. So when a fix is raised it is given a value, which is then deducted from the velocity, because the full value of that feature was not delivered. Once the fix is in use and active, the value points go back on. Personally I find this one of the most powerful ideas behind measuring the velocity of value: the bad habit of delivering bug-ridden functionality iteration after iteration, dismissing fixes in order to keep earning the points, will not get you anywhere, as the value velocity either struggles to move or, in extreme cases, even goes backwards.
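The deduct-and-reclaim rule is simple enough to sketch. The function name and the tuple shape are illustrative assumptions; the arithmetic is the rule described above: open fixes subtract their points, and a fix going live puts them back on.

```python
def value_velocity(delivered_points: int, fixes: list) -> int:
    """Deduct the value of open fixes from a delivery's points.

    `fixes` is a list of (value_points, fixed_and_live) tuples.
    """
    open_deduction = sum(pts for pts, live in fixes if not live)
    return delivered_points - open_deduction

# A release delivered 34 points but shipped two issues worth 5 and 3:
v_open = value_velocity(34, [(5, False), (3, False)])   # 26 while both are open
v_one = value_velocity(34, [(5, True), (3, False)])     # 31 once one fix is live
v_clean = value_velocity(34, [(5, True), (3, True)])    # 34 when all value is reclaimed
```

A team that never fixes anything watches the deductions pile up, which is precisely the incentive the post is after.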

I think there are many other benefits to be gained from using value as the primary measure that are yet to be discovered (possibly placing value onto technical debt? Though I'm not yet convinced of that one). Explicitly placing value at the forefront of the project gives teams the power to optimise themselves and discover new ways to improve their efficiency. And it isn't necessarily limited to individual teams: projects themselves should be measured in this way, with teams 'contributing' to value rather than simply earning points for delivering 'their part', breaking down silo attitudes by finding ways to work effectively as a whole so value can be delivered.

I think it's pretty exciting stuff. Of course all the other metrics out there may or may not find use, depending on whether they contribute to the delivery of value or not, and holes will be found in the technique, but overall I think it shifts the debate and discussion forward to an entirely new level of thinking. It will allow us to reach a whole new level in the Value Enlightenment.


Kurt Häusler said...

Sounds interesting, and reminds me a lot of what the Gilbs write about with their quality attributes. Definitely an idea that can be explored further. I look forward to reading more about it.

Martin P Jackson said...

I woke up this morning with this great new idea of Value Driven Development. Then I Googled it and found that several bright sparks had beaten me to it.

My thoughts come from the fact that, as a tester, one who believes in the tenets of Eric Evans and his Domain Driven Design, and has used FIT and FitNesse to deliver defect-free software using Acceptance Test Driven Development, I really do not want to be referred to as part of a QA team.

That's when I thought up the idea of being in "Value Assurance" or VA. To me QA is the "are we doing it right" and VA "are we doing the right thing".

The danger with the term QA is that it brings to mind all those parallels to the manufacturing process. When you are making lots of copies of something then quality control is important.

Valuable software on the other hand is more like a continuous prototype for the customer. As you touched upon it could be more important to add value than fix a low-value defect. In the end it should be the work that provides most business value that gets done first.

Have you had any further thoughts on this, or found any like-minded people out there?

Michael Brausam said...

Hi there,
I guess VDD is a really good way to prioritise the work that has to be done. Thus, it's an instrument for product owners (or a product owner group) - and not for the (development) team. Do you agree?

About Me

West Malling, Kent, United Kingdom
I am a ThoughtWorker and general Memeologist living in the UK. I have worked in IT since 2000 on many projects from public facing websites in media and e-commerce to rich-client banking applications and corporate intranets. I am passionate and committed to making IT a better world.