Last year my wife and I were married (hence the fact she is now my wife) and, being the independent-thinking, creative couple we are, we planned and organised the whole wedding ourselves, right down to the music. Both of us being passionate and snobbish about our music - and having the luxury of a very large collection - we didn't want some spotty DJ ruining our day by playing dreadfully cheesy tracks and mumbling his way through the evening. So we managed what many said was impossible: a completely cheese-free wedding playlist which got everyone boogieing the night away (not to blow our own trumpets, but we had to hold back drunken protesters when we turned the music off!).
This weekend was my best man's wedding, and so impressed was he with our music that he wanted me to emulate it for him; so I accepted and invited him over to peruse my record collection. Finally, a few days before the wedding, he gave me his list and I looked at it with horror: it was clear he had just given me a list of his personal favourite tracks. His wedding being a typical big family and friends affair (over 150 guests compared to our 50), I could tell instantly the list wouldn't work, but no matter how much I tried to explain this he was adamant that at his wedding he wanted his favourite tracks. So I conceded: I took his laptop, gave him copies of the tracks I had, installed the latest WinAmp with the excellent SqrSoft Advanced Crossfading plugin to give it that ultimate disco feel, and used WinAmp's built-in ReplayGain to level out the volumes of the tracks (to prevent a quiet track being followed by a sudden loud one).
The wedding reception went as I had expected: the groom loved every track that came on and a few close friends got up and jumped around the dance floor with him. But after half an hour or so the groom was off mingling at the bar and the dance floor was empty. His new bride was soon begging us groomsmen to get people up and dancing to fill the depressingly empty dance floor, and although we did drag a few people off the sides, the floor was quickly vacant again.
About an hour later the groom came running up to me, waving his arms and shouting in distress that his new wife had only gone and changed the playlist: she had loaded her emergency backup playlist instead! Before I could even react she had rushed over, the train of her dress picked up in her hands, shouting at him that she'd changed it because no-one was dancing, while he shouted back something to the effect that her girl-band music was rubbish (I can't print the exact words). As I turned to leave them to their first domestic as husband and wife, both their eyes looked pleadingly into mine. So there I stood with both looking to me to take their side: the groom playing to my taste, as his wife's playlist was full of offensively bad music, and his wife looking to my reasonable side and the fact that people were now dancing. It was an easy decision for me to make. I asked them, "Do you want people to dance?" They both replied instantly, "YES". "Were people dancing before you changed the playlist?" "No," the wife replied, and a sharp look at the groom squeezed out a squeaky "no". "OK then," I firmly stated, "if we go into the dance room and people are dancing then we leave her playlist on, and if they're not then we change it." So we marched into the dance room to find a mass of badly dancing relatives, and the decision was made.
There is a moral to this humorous story of marital conflict: my friend had placed his personal preferences before his objective, which was to ensure people danced and had a good time at his wedding. When my wife and I did our playlist we didn't just dump our favourite tracks down; we made lots of compromises - which included a strict no-Smiths rule, or she'd annul our wedding - to ensure that people would want to dance to the songs we played, but we used our principles of good taste to guide us. Our principles meant that there would be no cheese, but the objective meant that although The Rolling Stones' Beast of Burden may be on our personal playlist we knew that Brown Sugar is the one that gets everyone dancing, and that people would bounce around to The Undertones' Teenage Kicks way before Radiohead's Exit Music (For a Film). When we put together our playlist we carefully considered whether each track was appropriate (which is why we didn't have Super Furry Animals' Man Don't Give a Fuck), and we listened to each track trying to imagine our guests dancing to it, ultimately remembering that although not every song would please everyone, everyone should be pleased by most of the songs.
Development is like choosing a playlist: a good developer will always remember the objective. We've all been there: developing our own little features, the ones we think are really cool, and then getting upset when the customer says they don't like it and want to change it. Developers are very guilty of doing what they think the application should do, forgetting what they've been asked to deliver; instead of doing the best thing to meet the customer's requirements they end up writing code to solve problems that don't exist or aren't important. It's a hard thing to avoid and I am as guilty as anyone: I remember all too clearly arguing that a feature shouldn't be developed the way it was requested because I felt it was functionally incomplete, forgetting that what the customer was trying to achieve was to solve a problem quickly, not perfectly (what I call developing Word when all they want is Notepad).
As developers we find it difficult to compromise the 'perfect solution' for what the customer wants, and we get stuck in thinking that not writing a technically or functionally complete solution means writing a technically or functionally bad one (just as my friend believed that not playing his favourite music meant playing bad music). I had a conversation with a developer the other day who, like my friend, didn't want to compromise his design to meet the customer's objective. He wanted to build the feature in a way that was more generic and technically clever than was required, and saw any sacrifice of this principle as bad coding. So entrenched was he in this viewpoint that he was prepared to sacrifice the customer's requirements and the deadline, as he couldn't conceive that writing good, flexible code which was specific to the customer's requirements was a good thing. I struggled to find a way around this impasse, and in the end I tried to explain that the emphasis should be on usability before reusability, not on over-engineering.
The difference between my friend's playlist and ours was that although both of us stuck to our principles of what we believe is "good music" (in the same way that, as a developer, I would stick to mine: Agile, TDD etc.), my wife and I always checked ourselves against the objective and adapted to the situation, whereas my friend just hung on to his principles for principles' sake, forgetting what he was trying to achieve, and ultimately it was this that separated our full dance floor from his empty one.
Thursday, 28 June 2007
Monday, 25 June 2007
Intelligent design
I stumbled across Jay Flowers' blog today and found his mix of eastern philosophy and Agile development very interesting. Eastern philosophy appears to be seeping into a lot of western science (in particular psychology and, to some extent, quantum physics) and it's nice to see these ideas used to argue the benefits of Agile.
However, I don't want to write so much about the intricacies of eastern philosophy (Jay's blog does a good job already); instead I'd like to expand on a particular post of Jay's, and specifically on the quote he provided from Alan Watts.
Religion is embedded deep in our cultures: whether you are an atheist, agnostic, fundamentalist, conservative or liberal believer, our society and culture has been influenced more by religion than by science; legal systems, for example, are based on religion: the Ten Commandments for Christians, Sharia law for Muslims, etc. After reading Jay's blog it occurred to me how much our approach to development is also influenced by religious culture, and to demonstrate this I am going to look at the creationism vs evolution debate.
I don't want to actually get knee-deep into the creationism vs evolution debate (I am not Richard Dawkins); instead I wish to concentrate on the effect of creationism on our western culture. Though many of us take the theory of evolution for granted, less than 150 years ago the main position of the scientific community was a creationist stance. So ingrained was the creationist viewpoint that Darwin took twenty years to publish his theory, fearing the Galileo-like treatment that saw scientists who ran counter to the biblical view ostracised and attacked as heretics.
So what does all this have to do with development? Most of our theories on good development and management methodologies come from those established at the time of the Industrial Revolution and the rise of Victorian engineering as an established discipline in the nineteenth century. If we look at the rise of engineering alongside that of evolution we can see clearly that engineering practices were not informed by evolutionary theory but by a creationist viewpoint. To illustrate this clearly, consider that by the time the first copies of Darwin's groundbreaking publication The Origin of Species went on sale, the great engineer Isambard Kingdom Brunel had been cold in his grave for a good two months.
Creationists (being theists rather than deists) take issue with the evolutionary principle that complex biological forms evolved from simple primitives over countless adaptations (dare I say it: iteratively); instead creationists argue that something (mainly the Abrahamic God) created all living creatures instantaneously and perfectly. Culturally we have been indoctrinated with this viewpoint, resulting in a populist, common-sense feeling that complex systems are only successful if they are designed upfront, created perfect and built to last an indefinable period (Big Design Up Front). This view is compounded by the fact that evolution is often misinterpreted (often deliberately, by creationists) as being based on pure chance.
As engineering and management theories were established under the creationist view of the biological world, so they became polluted with it. After thousands of years of belief in creationism it is not surprising that, 150 years later, even modern practices such as software development find creationist ideas still embedded in them. Though evolutionary design is becoming a more mainstream approach to software development, many people still defend the old creationist cause and often argue against Agile methodologies with the cultural preconditioning that unless something is created perfect its overall success is somehow left to chance, as everyone is left to do what they want. For many developers moving away from the creationist view is a big step; but in the same way that the concept of adaptation in evolution provides answers for a seemingly perfect, complex world, so it does with Agile, and where evolution emphasises the discipline of natural selection over random chance, so Agile has its own disciplines. Fortunately software development is aided by the fact that something intelligent is helping it (or at least most of the time)!
Monday, 11 June 2007
Letting Schrödinger's cat out of the bag
On Friday I journeyed up to the greatest capital city in the world - London (try searching on Google if you don't agree) - and had the final round of interviews with ThoughtWorks. The end result was I got the job (I'm a very happy JMB) - which has already cost me a huge wedge of cash taking my wife out for a celebratory meal in a posh restaurant - and now I am just awaiting the official offer before I start the ball rolling (hand in my notice etc.).
I'd like to say what a great day I had: I have never experienced an interview process so thorough yet at the same time so inclusive. There were several points in the day when I had to stop myself wandering round the office and going up to people, or sitting at a laptop and pretending to be a real ThoughtWorker (which I soon will be!). I have never felt so accommodated and welcome at an interview before; from the moment I walked in the door, and despite the normal sweaty palms, I felt fairly relaxed (or at least only slightly anxious), and even for the written tests I was made to feel as comfortable as possible.
What was really evident was the amount of probing ThoughtWorks do to ensure that you are going to be suitable from an environmental/cultural perspective. Though this sounds obvious (and I did half-dismiss its relevance when being told about it), ThoughtWorks make a real effort to ensure you understand what life as a ThoughtWorker is like - good and bad. They try to build up as full a picture of ThoughtWorks as possible, warts and all. I really appreciated this, and not just for its openness: working for ThoughtWorks can sound like the development equivalent of playing Premiership football or landing a Hollywood part, but you still need to understand that it isn't all red carpets and adoring fans; there is work to do, and sometimes that work may not be too glamorous.
I'd also add that I was surprised at the level of feedback ThoughtWorks give you. With virtually every other company that has offered me a job you get the normal stony-faced interview - with a few smiles and laughs at your anecdotes and Dad jokes - and then the short "we'd like to offer you the position" phone call. ThoughtWorks gave me loads of direct feedback - which I must say was really flattering - and when I walked out of the office - and aimlessly wandered the streets of London for the 30 minutes it took me to come to my senses and realise I was hungry and needed to get home - I honestly can't say whether my head was more full of the excitement of the job offer or of the "they said I was...".
The only downside of the whole thing was that, when I finally did pull myself together and get myself home, I had to attempt to excitedly reiterate nearly five hours of information to my friends and family: but hey, if you can't bore your nearest and dearest half to death with a detailed breakdown of the logic test - which retrospectively I perversely enjoyed - then who can you?
Tuesday, 5 June 2007
Schrödinger's cat
A couple of weeks ago I closed my eyes and clicked the submit button on the ThoughtWorks website, and down went my CV, too late to change, into the ThoughtWorks database. Imagine my excitement when the next day I got an email asking me to ring back to organise a phone interview.
Seemingly the interview went well, as I was asked to complete a coding test. I had to choose one of three tests and I spent hours agonising over what angle to tackle it from. Everyone knows the ThoughtWorks mantra on simplicity, but just how simple is simple? I believe I've kept my code simple, but what if it's too complicated? And if I make it simpler, what if it's then too simple? I can't describe the hours of agonising until I finally came up with a solution I was happy with, which I zipped up, closed my eyes a second time, and pressed my shaky finger down onto the mouse button. I hadn't felt this nervous since jumping off the top diving board or, in my teens, ringing a girl I'd met in a club!
As soon as I went to bed that evening the code went round and round in my head and I cursed myself for the hundreds of dumb decisions I'd made, regretting that I hadn't just slept on it before sending it. "But hey," I said to myself, "this is like the real world: we can always go back over our code improving it - a good artist knows when to stop," and to my reading ThoughtWorks is as much about knowing when to stop as it is about keeping it simple.
Because of the bank holiday I have been waiting a week or so for the feedback on my code and whether I've made it to the next stage. When I first applied for the job I genuinely wanted to work for ThoughtWorks but was open to the possibility that I might not be successful; as the process has gone on, though, I have felt a stronger and stronger desire to get this job, and the potential disappointment has grown exponentially with it.
As I face the twin prospects of elation and despair I have realised that going for jobs is like Schrödinger's cat: your life becomes the cat in the box, the job the nucleus. Until I open the box (i.e. get the final result of the application) there exist two realities: the one in which I get the job and the one in which I don't. So from now until that time I live with the turmoil of flipping between the belief that my code was the most amazing they've ever seen, that I am going to get this job and life's going to be groovy, and the desperate self-doubt which tells me they are so disgusted at the poor quality of my work that it'll take at least five years for them to scrub the horror from their memories!
As the French say, "C'est la vie", and the First Noble Truth is that "suffering exists". This is a period in my life with an outcome which, to a certain extent, is outside of my control: there is a reality as to whether I have, at this stage in my career, what it takes to be a ThoughtWorker, and no matter how much I may believe I do (which I really, really do), that cat in the box is either alive or dead: it is not both.
Building Cathedrals
Within companies large and small, it is very likely that there is an in-house development team working on a system in continuous development. In every company I've worked for I have spent the majority of my role developing a highly strategic system, whether a website or a CMS/CRM (or one of the many other overly general three-letter acronyms out there). These systems are the cathedrals of software: they are by far the largest projects in the company, highly important and grandiose, with great architectural vision.
These systems inevitably end up the bane of senior managers', project managers' and developers' lives. Inevitably a battle rages between the business, demanding bags of "quick wins" (that phrase which makes many a developer quake in their boots), and the developers, who want to "do things properly". Pour on way too much vision from all parties and you end up with the poor old project manager curling into the foetal position and jabbering on about "delivery" as everyone stands around kicking them for whatever failure the business or development team wish to cite for the last release.
In these circumstances I have found Agile to be a godsend: the business gets quick returns, the project managers get to wrap themselves up snugly in their Gantt charts and click away happily at MS Project, and the developers - although they still sit there pushing back deadlines and moaning they haven't got enough resource - actually deliver something. All in all it keeps the whole project from becoming the undeliverable white elephant which the business and development team have managed to concoct between them.
After a couple of years of doing this with success I was shocked to find that a few of the problems of the old waterfall methodology had begun to rear their ugly heads, namely: degradation, fracturing and velocity. After some careful thought, and backtracking through my career, I started to notice some startling similarities between all the developments I had worked on.
Degradation
Degradation is a natural occurrence in any system. Software systems are fortunate in that - unlike many systems in the natural world - they rarely degrade through use. Think of the rail network: the more the rails are used, the worse their condition becomes and the sooner they need replacing. Although external aspects can cause a software system to degrade through use (database size, hardware age etc.), the actual lines of code are in the exact same condition as when they were written.
The biggest cause of degradation in software is change: as new features are written or bugs are fixed, existing code can become messy, out of date or even completely redundant without anyone actually realising it. Also, existing parts of the system which work perfectly can get used in a manner not originally intended, causing bugs and performance issues.
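To make that concrete, here is a minimal hypothetical C# sketch (all names invented for illustration): a change rerouted the discount logic through a new method, and the old one was left behind - still compiling, never called, and quietly misleading the next reader.

```csharp
public class Basket
{
    // Degraded code: since the discount logic was reworked nothing calls
    // this method any more, but it still compiles, so nobody notices it -
    // and the next developer may reasonably assume it is still in use.
    public decimal Discount(decimal total)
    {
        return total * 0.9m; // the old hard-coded 10% discount
    }

    // The replacement the rest of the system actually uses today.
    public decimal ApplyDiscount(decimal total, decimal rate)
    {
        return total * (1 - rate);
    }
}
```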
There are a number of techniques out there in the Agile world to help minimise degradation, including Refactoring, Collective Ownership and Test Driven Development. However, heed the warning: despite the best unit tests in the world and pair programming every hour God sends, the prevention of degradation relies absolutely on two things: time and human intervention. Time is required to fix degraded code, which increases the length each feature takes to implement and thus affects velocity. Human intervention requires someone recognising the degradation and, furthermore, being bothered to do anything about it (collective ownership and pair programming do help here but are in no way a guarantee).
The danger of degradation is that a system can degrade so much that it has a severely negative impact on the progress of a project - sometimes even bringing it to a complete halt - resulting in the development team battling the business for a "complete rewrite". Degradation is not good for developers' morale, mainly due to the fear that admitting to it sounds like an admission of writing bad code. This results in the all too common backlash that it's the business' fault for putting on too much pressure to deliver "quick wins" and not accepting the need to preserve the architectural vision of the developers; and here we are again, in the same spot we were in with the waterfall method.
Fracturing
Fracturing can look very similar to degradation but is actually quite different. Fracturing occurs all the time in any system which works to standards - which is all systems, regardless of whether those standards are documented or are just a team's or an individual's - as those standards shift and move to account for changes. One example is naming conventions on networks: many companies opt for some kind of convention for naming switches, printers, computers etc. which seems to suit until something comes along which no longer fits. For example, when a hypothetical company starts out it has three offices in the UK, so the naming style becomes [OFFICE]-[EQUIPMENT]-[000]. But then the company expands to Mexico and someone decides that the naming convention needs to include the country as well: the convention is now fractured, as new machines are named [COUNTRY]-[OFFICE]-[EQUIPMENT]-[000]. Then an auditor comes along and says that you should obfuscate the names of your servers to make a hacker's life harder (as UK-LON-ACCOUNTSSQL is a nice signpost to what to hack), so the convention has to change, causing even more fracturing. A short sketch of the cost follows below.
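As a small hedged illustration (hostnames hypothetical), any tooling written against the original convention will silently misread names minted under the new one - the fracture spreads from the standard into every script that assumed it:

```csharp
using System;

class AssetNames
{
    // Written for the original [OFFICE]-[EQUIPMENT]-[000] convention:
    // the office is assumed to be the first segment of the name.
    static string OfficeOf(string hostname)
    {
        return hostname.Split('-')[0];
    }

    static void Main()
    {
        Console.WriteLine(OfficeOf("LON-PRINTER-001"));    // "LON" - correct
        Console.WriteLine(OfficeOf("MX-MEX-PRINTER-001")); // "MX" - wrong: that's the country
    }
}
```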
This happens in code all the time as your standards shift like sand: yesterday you were using constructors for everything, then today you read Gang of Four and decide that the Abstract Factory pattern is the way to go. The next day you read Domain Driven Design and decide that you should separate the factory from the class that it creates. Fracturing, fracturing, fracturing. Then you read about NHibernate and decide that ADO.NET is old news, and in a few years LINQ comes out and you swap to that. Fracturing, fracturing, fracturing. Then you discover TDD and start writing the code test-first, but the rest of the team doesn't get it yet. Fracturing, fracturing, fracturing.
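Here is a hedged sketch (all types hypothetical) of what that looks like once it settles into a codebase: three generations of creation style living side by side, each one "better" than the last, none ever fully migrated:

```csharp
public class Report { }

// Generation 1: plain constructors everywhere.
public class MonthlyReportPage
{
    private readonly Report report = new Report();
}

// Generation 2: after reading Gang of Four, an Abstract Factory appears.
public interface IReportFactory
{
    Report Create();
}

public class DefaultReportFactory : IReportFactory
{
    public Report Create() { return new Report(); }
}

// Generation 3: after Domain Driven Design, the factory is separated from
// the class it creates and handed in by the caller instead.
public class ReportService
{
    private readonly IReportFactory factory;

    public ReportService(IReportFactory factory)
    {
        this.factory = factory;
    }

    public Report Generate()
    {
        return factory.Create();
    }
}
```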
Of course you could stand still and code to strict guidelines which never change, the team leader walking the floor with a big stick which comes down across your knuckles every time your code strays slightly in the wrong direction. But who wants to work in an environment like that? Fracturing is a result of many things, but mostly it is a result of a desire to do better, and as such of an improvement in quality. What coders believe is out of sync is rarely their sparkling new code but their old "I'm so embarrassed I wrote that" code. As a result their reaction is a desire to knock down the old and rewrite everything from the ground up using their new knowledge and techniques (though most recognise this is not "a good thing").
Fracturing can also result from negative factors: for example, someone leaves the company taking years of experience with them, and the new bod you get in has to start coding quickly, so the standards go out the window and you have more fracturing. Or there's a lot of pressure to get code out the door, so you drop those new advances you proved in the last project and go back to the old safe and sound method.
Velocity
The obvious thing to say is that project velocity is negatively impacted by degradation and fracturing, but that would only be half the picture. The reality is also the reverse: velocity has an effect on degradation and fracturing. Agile methodologies such as XP place a great deal of stress on maintaining a realistic velocity, and the advice is wise indeed: with too much velocity, more code is written than can be maintained, creating too much degradation. On the other hand, with too little velocity so little code is written between the natural shifts in standards that the amount of fracturing per line of code ends up at a much higher ratio than it would have been had the velocity been higher.
Another consideration is that if velocity is not carefully controlled, development can end up in high and low periods. High periods become stressful, causing mistakes, less time to fix degraded code, negative fracturing and ultimately burn-out. Low periods create boredom and frustration, which cause mistakes too, or they encourage over-engineering, again causing degradation and fracturing.
Getting the velocity correct is a real art form and requires more considerations than I could list. However, one thing that is required to get velocity correct is waste. Waste is the bane of all businesses, and they often spend more money trying to eliminate waste than it originally cost. Developers are expensive, and having a developer potentially sitting around doing nothing is a no-no for many companies; they want every developer outputting 100%, 100% of the time. However, the inability to run to such tight budgets is a reality that virtually every industry has come to terms with. Take a builder: if he's going to build a house he'll approximate how much sand, brick, cement, wood and everything else he'll need, then he'll add a load on top. Ninety-nine times out of a hundred the builder will end up with a huge load of excess materials which he'll probably throw away (or take to the next job). Of course this isn't environmentally friendly, but the principle the builder is working on is that if he ends up just two bricks or half a bag of cement short, he's going to have to order more. That means a delay of a day, which puts the whole project off track: basically it's going to cost him a lot more than the acquisition and disposal of the excess had he ordered an extra pallet of bricks or bag of cement. Thus the builder makes the decision that in order to maintain velocity there will be waste.
What to do
There is an old joke:

"What's the difference between a psychotic and a neurotic? Well, a psychotic thinks 2+2=5, whereas a neurotic knows that 2+2=4, but it makes him mad."

Developers and businesses risk becoming psychotic about degradation and fracturing, believing instead that by coming up with some amazing architecture or definitive strategy all these problems will go away. Neurotics are only slightly less dangerous: as they become over-anxious and fearful of degradation and fracturing they introduce processes, architectures and strategies until they become indistinct from the psychotics.
Eric Evans has many ideas in his book Domain Driven Design and is the very example of a pragmatist. Evans accepts that nasty things happen to systems and that bad code gets written or evolves. To Evans it is not important that all of your code is of shiny platinum quality, but that the code most critical to the business is clean and pure. This is why Evans doesn't slate the Smart UI pattern: to him there are many situations where it is acceptable; it just isn't realistic to build a TVR when all you need is a 2CV.
Chapter IV (Strategic Design) of Domain Driven Design is dedicated to most of the concepts for protecting your system, and is unfortunately the bit which gets the least attention. Concepts such as Bounded Context, Core Domain and Large-Scale Structure are very effective in dealing with degradation and fracturing. Although the best advice I can give is to read the book, one of the ideas which interests me most is the use of Bounded Contexts with Modules (a.k.a. packages - Chapter 5). Evans explains that although the most common method of packaging tends to be around the horizontal layers (infrastructure, domain, application, UI), the best approach may be to have vertical layers based around responsibility (see Chapter 16: Large-Scale Structure). This way loosely coupled packages can be worked on in isolation and brought together within the application: there is almost a little hint of SOA going on. If an unimportant package starts to degrade or becomes fractured (for example it uses NHibernate rather than ADO.NET), it doesn't require the rewriting of an entire horizontal layer to keep it clean: the package can stay inconsistent within its own boundary without having any effect on the other parts of the system. This leaves the development team to concentrate their efforts on maintaining the core domain and bringing business value.
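A minimal sketch of the idea, with hypothetical namespaces (this is my reading of Evans, not an example from the book): each vertical module owns its own persistence choice behind its own boundary, so inconsistency in one module never forces a rewrite across a whole horizontal layer:

```csharp
// Core domain module: gets the team's care and the newer techniques.
namespace Shop.Ordering
{
    public class Order { /* rich domain behaviour lives here */ }

    public class OrderRepository
    {
        // Imagine this backed by NHibernate mappings.
        public Order FindById(int id) { return new Order(); }
    }
}

// Supporting module: allowed to stay plainer, even inconsistent - its raw
// ADO.NET style degrades nothing outside its own boundary.
namespace Shop.Reporting
{
    public class SalesReport { }

    public class SalesReportReader
    {
        public SalesReport ReadMonthly() { return new SalesReport(); }
    }
}
```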
This isn't the only solution though: the use of DDD on big systems will only bring benefit once it hits a certain critical point. If all your development team is able to do is continuously build fractured systems which degrade (whether due to skill or resource shortages), then it may be best just to bite the bullet and accept it. A monolithic system takes a lot to keep going and, although DDD can be applied to prevent many of the issues discussed, if you cannot get the velocity then your domain will not develop rapidly enough to deliver the benefits of a big enterprise system. If that is the case it may be more realistic to abandon the vision and instead opt for a strategy that accurately reflects the business environment. This is where I believe Ruby on Rails is making headway; it isn't going to be apt for all enterprise systems, but it does claim to take most of the pain out of those database-driven web-based applications that are proliferating through companies. The point is, even when a big-boy enterprise system sounds like the right thing to do, trying to develop one in a business which cannot or will not dedicate time, money and people is going to be a bigger disaster than ending up with a half-dozen Ruby on Rails projects delivering real business value. And you never know: once you've delivered a critical mass you may find you can move things up to the next level naturally.
Conclusion
If you ever visit an old building, especially a cathedral such as Canterbury, you'll know that they have often gone through hundreds of iterations: bits have been damaged by fire, weather or pillaging, or deliberately destroyed; they have been extended in ways sympathetic to the original and in ways originally considered abominations; some bits have been built by masters, others are falling down because of their poor quality. The fact is these buildings are incredible not because they are perfect but because of the way they have adapted to use through the ages. Old buildings face huge challenges to adapt through all periods of history, where architecture must be compromised and the original people, tools and materials change or may no longer be available. The challenge for these buildings across the centuries was not to maintain an unrealistic level of architectural perfection but to ensure that they maintained their use - the moment that stops, the building will disappear.
About Me
- Peter Gillard-Moss
- West Malling, Kent, United Kingdom
- I am a ThoughtWorker and general Memeologist living in the UK. I have worked in IT since 2000 on many projects, from public-facing websites in media and e-commerce to rich-client banking applications and corporate intranets. I am passionate about and committed to making IT a better world.