Tag Archives: best practices

goto fail

If you’ve ever worked with me (or had me review your code), you’ll know that I am a stickler for good coding standards1. I consider a Checkstyle failure a build failure, and it really does break the build. The reason for this is that we’re human, we make mistakes, and by adhering to good coding standards we can minimise the number of mistakes we make.
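
Wiring this up is straightforward. With Maven, for instance, something like the following tells the Checkstyle plugin to fail the build on any violation (a sketch; version numbers and the ruleset itself are omitted):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-checkstyle-plugin</artifactId>
    <executions>
        <execution>
            <!-- Run the check goal as part of every build. -->
            <phase>validate</phase>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <!-- Fail the build on any violation, warnings included. -->
        <failOnViolation>true</failOnViolation>
        <violationSeverity>warning</violationSeverity>
    </configuration>
</plugin>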

One of my absolute bugbears is single-line statements with no braces2. My preference is:

if (condition) {

Since this isn’t a post about the ongoing Holy War on where the braces should go, I will also accept:

if (condition)
{

I will grudgingly accept:

if (condition) { action(); }

I’m more inclined to accept the above with languages like JavaScript, where this is a commonly used convention.

Unless you’re coding in Python or similar, I will not accept:

if (condition)
    action();

Why not? Well, it’s just asking for failure. It’s all too easy to add another line and suddenly have something executed that you didn’t want. Python gets round this because indentation is significant; other languages just trip you up.
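
A contrived Java illustration of the trap (the method names are invented for the example):

// Looks harmless: only log() is guarded by the condition.
if (debugEnabled)
    log("About to frobnicate");

// One hurried edit later... audit() now runs on every call,
// despite the indentation suggesting otherwise.
if (debugEnabled)
    log("About to frobnicate");
    audit();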

How bad can it be, I hear you ask? Well, let’s consider the following snippet of C:

if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    goto fail;
    goto fail;

We are quite literally going to fail. This is a snippet from sslKeyExchange.c, which is responsible for this little nasty. Ignoring the fact that the code is a whole series of WTFs piled on top of one another, it’s a bug that could have been spotted so easily just by using braces.

1 The actual phrase used would imply that my policies on coding belong in WWII era Germany.

2 For languages that support them.

Refactoring as a Displacement Activity

Refactoring is good, right? We constantly read about how we should write simple, easy code that does the minimum necessary, refactoring as we go to improve it and expand what it can do. Unfortunately, this can lead to some really bad habits, especially if it’s used as a displacement activity.

What do I mean by displacement activity? Well, let’s consider a hideous project that is under-specced, under-resourced and overdue. Our poor old developer is demotivated and drowning under a sea of requirements that they don’t know how they’re going to implement, but what they have done is write the code that handles the command line arguments. This code works to spec, is even tested, and could be considered complete. But it takes user input. It’s not as neat as it could be. In fact, it’s quite messy.

So what does our developer do? Do they tackle the next item in the long list of requirements and try to get closer to completion, or do they refactor the working code to make it nicer? While we all know they should be doing the former, all too often it’s the latter that happens. Our developer goes off and refactors the working code to make it nicer.

Having seen this happen on a number of occasions, I got to thinking about what was going on here. Why mess about with the stuff that works when there is a pile of stuff that doesn’t work to be getting on with? The answer is brutally simple: it’s a psychological trick.

In order to feel that they have had a productive day, our developer needs to make progress. By tackling the difficult, as yet unwritten features, the developer is taking a risk. They may not be able to complete the task in the time they’ve allotted. They may run into problems. They may get stuck. They may fail. By refactoring the working code, on the other hand, they’re tackling a problem that they know they can solve; after all, they’ve solved it already. What they’re doing now is a refinement.

Of course, once the refinement is completed our developer is still left with the mountain of requirements they didn’t want to tackle, and they now have less time to complete them, but dealing with that problem has been shifted to something we can deal with tomorrow. In the meantime “progress”, for a given definition of progress, has been made because the existing code is now cleaner. A classic displacement activity.

Taken to its extreme, this kind of thinking can lead to making nothing, absolutely perfectly. You tend towards, but never actually achieve, the perfect solution to your problem. This is compounded by the fact that as soon as you start dealing with the edges of software development (input and output) things get messy. Perfection can’t actually be achieved and you’ll end up swapping one compromise for another ad infinitum. Annoyingly, we know all about this pitfall: the KISS principle and MVPs are all about avoiding this type of displacement and producing the one thing that matters, working code. The irony is that this “perfect code” we’ve wasted time crafting will no doubt get hacked about later as the system grows and the missing requirements are added. One day we may learn.

I am not a developer

“I am not a developer anymore” – Me, last night

This (probably not so) startling statement was made by me at yesterday’s Norfolk Developers meeting, as part of a discussion about the Developer to QA ratio in my team. Actually, the statement is probably more accurate if you change it to “I am not a professional developer anymore”. The reality is I’ve not done any real development at work for about a month now. I’m a manager. I don’t code, I… well, that’s the thing. I’m not so sure what I do any more.

“Developers make the worst managers” – Me, c.2003

History has taught me that developers are not good at management. Don’t ask me why this is1, it’s just something I’ve observed during my career. It’s by no means a hard and fast rule, but there is a strong correlation there.

This doesn’t bode well for me. If you’ll excuse the WoW2 analogy: I feel like I’ve gone from being a level 60 Developer with Epic gear to being a level 1 Manager with noobie kit. Oh, everyone says I’m doing a wonderful job, but this is still the honeymoon period and I’ve yet to cock anything up.

“Yesterday I… erm…” – Me, 9:00am most mornings during the standup meeting

Part (most?) of my problem revolves round the fact that I no longer produce anything. As a developer I can point to a list of git commits and say “I wrote this code“; I can point to a feature and say “I implemented this“; I can point to a bug report and say “I fixed this“. My job is now meta. I liaise, guide, advise and facilitate while others do the actual work. Hopefully while doing this I add value.

For a long time now I’ve kept a private work journal detailing what I’ve done during the day. It’s proven useful on more than one occasion, and now that my job is much less tangible it helps me keep track of what it is I actually do. There is a blissful irony here though: I’m now so busy doing… whatever it is I do during my day, that I don’t always have time to note it down at the end of the day.

“Have you tried pairing a QA and a developer together?” – Chris Oldwood, last night, 21:00

One thing I learned being a developer is that peer review is A Good Thing™, and that you can learn new things from the most unlikely places. It’s part of the reason why I am so candid about our development processes at things like nor(DEV): and SyncNorwich, warts and all; the feedback you get is invaluable.

The discussion that ensued after Cat Landin‘s talk last night on why developers are so bad at testing gave some really valuable insight into fixing some of the problems we have in our team. Sometimes all it takes is someone unencumbered by the politics, culture and mindset of an organisation to point out simple, but effective fixes.

After last night I have a number of “bug fixes” for our processes. Let’s hope they’re as easy to refactor as code.

1Trust me, if I knew I’d be cashing in on “How to go from Developer to Manager” courses and books.

2World of Warcraft – and also from 5+ years ago, I’ve been clean a while now.

Chasing 100% Coverage

Unit tests, as we all know, are A Good Thing™. It stands to reason then that 100% unit test coverage must also be A Good Thing™, and therefore is something that should be strived for at all costs. I’d argue this is wrong though.

100% unit1 test coverage can be exceptionally difficult to achieve in real-world applications without jumping through a whole load of hoops that end up being counterproductive. The edges of your system often talk to other systems which can be difficult to isolate or mock out. You can be left chasing the last few percent and making counterintuitive changes to achieve them. At this point I’d say it’s better to leave the clean, readable, untested code in and just accept that 100% coverage isn’t always possible.

This leads to another problem though. Once you’re not hitting 100% coverage you need to be sure that the code that isn’t covered is genuinely code you can’t cover. As your code base gets bigger, the effect of a single missed line on your coverage figure gets smaller: one uncovered line in 1,000 lines drops you to 99.9%, but one in 100,000 lines still shows as 99.999%.

PHPUnit takes a pragmatic approach to this issue: it allows you to mark blocks of code as untestable (via annotations such as @codeCoverageIgnore). The net result is that it simply ignores these blocks in its coverage calculations, allowing you to get 100% coverage of testable code.

Quite a few people I’ve told about this have declared it to be ‘cheating’. However, let’s look at a very real issue I have in one of my bits of Java code. I have code that uses the list of locales that Java can retrieve. It uses the US as the default, since it’s reasonable to assume that the US locale will be set up on a system. While highly improbable, it’s not impossible for a system to lack the US locale, and the code handles this gracefully. Unit testing this code path is impossible as it involves changes to the environment. I could conceivably handle it in functional tests, but it’s not easy. I could remove the check and just let the code fall over in a crumpled heap, but then if it ever does happen someone is going to have a nasty stack trace to deal with rather than a clear and concise error message.
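
The check in question looks roughly like this (a simplified sketch rather than the actual code):

import java.util.Locale;

// Fall back to a clear error message rather than an obscure
// failure further down the line if the US locale is missing.
static Locale usLocaleOrFail() {
    for (Locale available : Locale.getAvailableLocales()) {
        if (Locale.US.equals(available)) {
            return Locale.US;
        }
    }
    // Unit testing this path would mean changing the JVM's
    // installed locales, i.e. the environment.
    throw new IllegalStateException(
            "The US locale is not installed on this system");
}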

If I could mark that single block of code as untestable I would be able to see at a glance whether the rest of my code was 100% covered. As it is, I’ve got 99-point-something coverage and I need to drill into the coverage results to make sure the missing coverage comes from that one class. Royal pain in the behind.

I am willing to concede that the ability to mark code as untestable can be abused, but then that’s what code reviews are for. If someone knows how to unit test a block of code that’s marked as untestable they can fail the review and give advice to the developer that submitted the code.

1 People seem to get unit and functional tests muddled up. A unit test should be able to run in isolation with no dependency on the environment or other systems. Functional tests, on the other hand, can make assumptions about the environment and can require that other systems are running and correctly configured. The classic example is a database. Unit tests should mock out the database connection and use canned data within the tests. Functional tests can connect to a real database pre-populated with test data. Typically functional tests also test much larger blocks of code than individual unit tests do.

Is PHP an enterprise language?

During the last nor(DEV): we tried something a little different: a Question Time style panel where the ‘audience’ asked a panel of guests a series of contentious questions. These were then debated by both the panel and the audience. While the questions may have been pre-prepared the debate that followed was not, which resulted in some interesting discussion.

One of the questions that interested me the most was regarding PHP and its status as an ‘Enterprise’ level language. With my recent experience with PHP I’ve found my attitudes to it have changed drastically. This has resulted in, for me, a startling conclusion.

Historically I’ve been a Java developer. While we know Java is an enterprise language (it says so on the tin), I didn’t start out that way. While I’ve dabbled in Ada (university) and C++ (early graphics programming for fun), my first commercial programming was done in Perl and Tcl. I may have been at a large enterprise but, by my own definition, I come from a Script Kiddie background. Perhaps that makes me a little more open to scripting languages than others.

That said, I’ve always felt that scripting languages were somehow less than ‘proper‘ languages like Java. Still, right tool for the job and all that, and given I’m not getting away from PHP anytime soon1, I decided to look at exactly why PHP wasn’t as ‘good’ as Java.

It’s not Object Oriented

Having done OO for a large part of my programming life I find it hard to use anything that doesn’t support objects. I suspect this is more a failing of the developer than of functional programming; however, it’s a moot point as PHP is a fully OO language, I just needed to RTFM.

It’s not got a proper IDE

Around 1 or 2 KLoC2 I find I’ve got more objects than I can remember the interfaces for, which results in lots of bouncing between files to refresh my memory. This gets problematic when it breaks my train of thought. I’ve tried a number of editors and found Sublime Text was OK, but it wasn’t Eclipse. Thankfully I found PhpStorm, a proper IDE for PHP that works on a Mac. It shares a number of features with Eclipse (including being as buggy as hell) and has massively improved life when writing PHP. It costs money, but the investment is well worth it if you’re doing anything more than dabbling in PHP.

You can’t test it easily

Actually, there’s PHPUnit, which I now prefer to JUnit. OK, so unit testing the code that actually renders your web pages isn’t so easy, but if you’ve got proper separation of responsibilities there should be minimal code there before everything is handed off to your fully tested backend code.

It lacks ‘modern’ features

Such as…
PHP can handle closures, so it’s actually more feature-rich than Java in some respects. The way PHP handles method overloading is convoluted to say the least, but method overloading isn’t exactly modern, and I suspect this stems from the way PHP handles varargs (something else Java only got relatively recently).

The code is hard to read

Yes, a lot of PHP is hard to read (go look at the WordPress codebase) but I don’t think PHP is to blame here. I spent the longest time trying to bludgeon PHP into looking like Java when I should have been bludgeoning my brain into thinking the PHP way – something I’ve finally done.

I think this last point is key when it comes to the problems with PHP. The barriers to entry are very low: you can throw together a usable web app with a UI very quickly, even as a novice developer. The result is a lot of bad PHP out in the wild, lacking the design, testing and layout that we ‘proper’ developers would use with our ‘grownup’ languages. I’ll freely admit that some of that bad PHP belongs to me. Having matured as a PHP developer, I’m hoping my next application will see me being a ‘grownup’ PHP developer.

This is similar to the conclusion the nor(DEV): panel came to. The question isn’t “is PHP3 an enterprise language?“, it’s “are your developers enterprise-level developers?“.

1My hosting provider, while nice and cheap, are rather limited on what I can run, thus PHP for my personal projects.

2KLoC: Kilo-Lines of Code, or thousand lines of code.

3You can replace PHP here with a number of languages.

SyncNorwich Agile August

What is Agile? Such a seemingly simple question can cause a great deal of debate. It’s little wonder then that when you ask three speakers to talk about Agile they answer the question in three slightly different ways.

When I was invited to speak at SyncNorwich I was very conscious of the fact that I was a relative newbie to the scene, and that my team don’t practice a named Agile methodology1. While I could explain our process in detail, I didn’t think it would be beneficial as it’s something we’ve put together to meet our needs – it’s tailor-made for my team and where they are now.

I could have just rolled out the “this is Scrum, this is Kanban” talk, but it’s been done to death and I can’t stand there in good faith and tell people to practice something I don’t actually use. That did get me thinking though. Why don’t I use pure Scrum, or pure Kanban? The reality is that, while both methodologies look great on paper, the real world has a nasty habit of getting in the way. So why not talk about that?

What was really great was that the two other talks that night also focused on how two very different companies have adapted Agile to meet the realities of their world. On the one hand you’ve got Aviva, with a variation of Agile that may look ponderous and lumbering to much smaller companies, but that is surprisingly nimble for such a massive organisation. On the other hand is Axon Active, a very distributed company using Scrum as a fantastic tool for communicating between remote colleagues. By contrast, Virgin Wines is a fairly traditional SME with a single dev team located in one office, although we do suffer from our own unique set of problems.

Each talk brought home the core principles of agile, with small, iterative steps and tight feedback loops, and then looked at how they applied to the realities faced by each organisation. The purists may declare us to be “wrong“, but I’d argue they’re out of touch. That’s not to say our implementations of Agile are necessarily “right” either; we just believe them to be “right” for where our companies are right now.

1What we currently use is a mishmash of Scrum and Kanban which doesn’t fit into the pigeonhole of Scrumban. We’re also sort of Xanpan-like, and we are heading in a more Xanpan direction, but I don’t think we’ll ever end up being true Xanpan. Since names like Scranpan are verging on the ridiculous I would, if I were forced to give our methodology a name, call it Fred. Not that it matters hugely; give it three months and we’ll be practicing something slightly different (which we’ll call Steve for argument’s sake).

Costing and Commitment

One of the hardest aspects of Scrum seems to be the accurate costing of stories. We all know the theory: you break your work into chunks, cost those chunks, and commit to a certain amount of work each week based on costing and velocity. Pure Scrum™ then states that if you do not complete your committed work then Your Sprint Has Failed. All well and good, apart from one small problem: it’s all bollocks.

I’ve long had an issue with the traditional costing/commitment/sprint cycle insofar as it doesn’t translate from theory into the real world. In her recent talk at NorDev, Liz Keogh pointed out that Scrum practitioners are distancing themselves from the word “commitment” and from Scrum Failures as they are causing inherent problems in the process. Allan Kelly recently blogged about how he considers commitment to be harmful. Clearly I’m not alone in my concerns.

As always when it comes to things like this, I’ve simply borrowed ideas from other people and customised the process to suit me and my team. If this means I’m doing it “wrong” then I make absolutely no apologies for it; it works for us and it works well.


Developers suck at costing. I mean really suck. It’s like some mental block, and I have yet to find a quick and effective way to clear it. Story points are supposed to make developers better at costing by removing the time element and grouping issues by complexity instead. That’s lovely on paper, but I’ve found it just confuses developers – oh, we all get the “a Chihuahua is 1 point so a Labrador must be 5 or 8 points” thing, but that’s dogs; visualising the relative size and complexity of code is not quite as simple.

What’s worse is that story points only relate to time once you have a velocity. You can’t have a velocity without points, and developers generally find it really hard to guesstimate1 without times. Chicken and egg. In the end I just loosely correlated story points with time, making everyone happy. I’ve also made story points really short because, once you go past a few days, estimates start becoming pure guesses. What we end up with is:

Points  Meaning
0       Quicker to do it than cost it.
0.5     A one-line change.
1       Easily done in an hour.
2       Will take a couple of hours.
3       A morning's or afternoon's work.
5       Will take most of the day.
8       A day's work.
13      Not confident I can do it in a day.
20      A couple of days' work.
40      Going to take most of the week.
100     Going to take a couple of weeks.

Notice this is a very loose correlation with time, and it gets looser as the story point count grows. Given these vagaries, I will only allow 40 and 100 point costings to be given to bugs. Stories should be broken up into chunks of two days or less, so you’ve got a good understanding of what you’re doing and how long it’s going to take2.

With that in mind 40 points really becomes “this is going to be a bitch to fix” and 100 points is saved for when the entire team looks at you blankly when diagnosing the problem: “Err… let me go look at it for a few days and I’ll get back to you“.

Stopping inflation

Story point inflation is a big problem with Scrum. Developers naturally want to buy some contingency time and are tempted to pad estimates. Story point deflation is even worse, with developers being hopelessly optimistic and then failing to deliver. Throw in The Business trying to game the system and it quickly becomes a mess. I combat this in a few ways.

Firstly, points are loosely correlated with time. In ideal conditions a developer wants to be completing about 8 points a day. This will probably be less once you take meetings, walk-ups and other distractions into account. While an 8 point story should be costed such that the developer can complete it in a normal day with distractions accounted for, the same doesn’t hold true for a series of 1 point stories: if they’re each about an hour long and there’s an hour’s worth of distractions in the day then the developer is only getting 7 points done that day.

Minor fluctuations in average per developer throughput are fine, but when your velocity starts going further out of whack it’s time to speak to everyone and get them to think about how they’re estimating things.

Secondly, points are loosely correlated with time. A developer can track how long it takes them to complete an issue, and if they’re consistently under- or over-estimating it becomes very apparent, as the story points bear no correlation to the effort actually expended. A 5 pointer taking 7 hours isn’t a problem, but any more than that and it probably wanted to be an 8 pointer. Make a note and adjust future estimates accordingly. I encourage all my developers to track how long an issue really takes them and to see how that relates to the initial estimate.

Thirdly, costing is done as a group exercise (we play planning poker) and we work on the premise of an “average developer”. Obviously if we take someone who is unfamiliar with the code it’s going to take them longer. You’ll generally find there are some outlying estimates, with someone very familiar with that part of the code giving low estimates and people unfamiliar with it padding the value. We usually come to a consensus fairly quickly and, if we can’t, I just take an average.

I am aware that this goes against what Traditional Scrum™ teaches us, but then I’m not practicing that, I’m practicing some mongrel Scrumban process that I made up as I went along.

Velocity and commitment

I use the average velocity of the past 7 sprints3, adjusted to take holiday into account, when planning a sprint. We then pile a load of work into the sprint based on this figure and get to work. Traditionally we’ve said that we’ve committed to that number of story points and issues, but only because that’s the terminology I learned with Scrum. Like everything else, it’s a guesstimate. It’s what we hope to do, a line in the sand. There are no sprint failures. What there is is a discussion of why things were not completed and why actual velocity didn’t match expected velocity. Most of the time the reasons are benign and we can move on. If it turns out there are problems or impediments then we can address them. It’s a public discussion and people don’t get to hide.
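
The arithmetic is nothing fancy; a sketch of it in Java (illustrative, not our actual tooling):

import java.util.List;

// Average the last seven sprint velocities, then scale by the
// fraction of developer-days actually available this sprint.
static double plannedVelocity(List<Double> lastSevenVelocities,
                              int devDaysInSprint, int devDaysLost) {
    double average = lastSevenVelocities.stream()
            .mapToDouble(Double::doubleValue)
            .average()
            .orElse(0.0);
    return average * (devDaysInSprint - devDaysLost) / devDaysInSprint;
}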

Epics and Epic Points

The problem with having story points cover such a small time period is that larger pieces of work start costing huge numbers of points. A month is something like 200 points and a year becomes 2500 points. With only 2000 hours in a year we start getting a big disconnect between points and time, which The Business will be all over. They’ll start arguing that if a 1 year project is 2500 points then why can’t we have 2500 1 point issues in that time?

To get round this issue we use epic points, which roughly cost epics before they’re broken down into stories and properly costed. While story points are educated guesstimates, epic points are real finger-in-the-air jobs. They follow the same sequence as story points, but go up to 1000 (1, 2, 3, 5, 8, 13, 20, 40, 100, 250, 500, 1000). We provide a handy table that lets The Business know that if you have an epic with x points and lob y developers at the problem then it will take roughly z days/weeks/months. The figures are deliberately woolly and are used for prioritisation of epics, not delivery dates. We’re also very clear on the fact that if 1 developer can do it in 4 weeks, 2 developers can’t do it in 2; that’s more likely to be 3 weeks.

Epic points are malleable and get revisited regularly during the life of an epic. They can go up, down or remain unchanged based on how the team feel about the work that’s left. It’s only as the project nears completion that the epic points and remaining story points start bearing a relationship to each other. Prior to that epic points allow The Business to know if it’s a long way off, or getting closer.

1 What, you think this is based on some scientific method or something? Lovely idea, we’re making educated guesses.

2 I’ve had developers tell me they can’t cost an issue because they don’t know what’s involved. If you don’t know what’s involved then how can you work on it? Calling it 20 points and hoping for the best isn’t going to work. Instead you need to create a costing task and spend some time working out what’s involved. Then, when you understand the issue, you can cost it properly.

3 A figure based purely on the fact that JIRA shows me the past 7 sprints in the velocity view.

Why you should be documenting your code

It seems there are two schools of thought when it comes to documenting code. In one corner you’ve got those who feel that the “code should be expressive“; those who feel the “unit tests should document the code“; and those who argue that developers are lazy and the documentation gets out of date. These people will argue that there is no point documenting code. And in the other corner there’s me… well, me and the rest of my team, who have all seen sense after I’ve pointed out the error of their ways. 🙂

The rule for Javadoc in our team is “do it properly, or not at all“. I don’t mandate it, because if you do you will just end up with rubbish; I do strongly encourage it though. The key word, however, is properly. I’m not just talking about ending the first sentence with a full stop and making sure you have all the correct param, return and throws tags; I mean well written, well thought out documentation.

I’ll handle the lazy developers argument first. The same argument can be made for not unit testing, and yet I doubt there are many people who would take the stance that not unit testing is OK. It’s part of development. Writing good Javadoc should also be a part of development.

“But it’ll get out of date!“, I hear you cry. So not only has a developer done a half-arsed job by not updating the documentation when they changed the code, everyone who peer reviewed the change missed that too. Please, feel free to hide behind that excuse. You may also want to stop doing reviews because you’re not doing them correctly. Also, what else are you missing?

“The unit tests should document the code.” Let’s consider a hypothetical situation. I’ve got your object and I’m calling a getter. I want to know if you’ll return null or not, because I may have some extra processing to do if I get a null value. You expect me to stop, mid-flow, go and find the unit tests, locate the relevant ones and work out whether null is something that might come back? Really? If there are no tests dealing with null return values can I safely assume that null is never returned, or did you just forget to include that test? I’ve never bought that argument.

As to the code being descriptive, well yes, it should be. But it’s not always evident what’s going on. I can leap into our hypothetical get method and have a poke about to see what’s happening, but what if it gets the value from a delegate? What if that talks to a cache and a DAO? How far do I need to delve into your code to work out what’s going on?

And then we come to the best argument of all: “You only need to document APIs“. It’s all API.

But what is good documentation? To help with that we’ll expand our scenario. Our getter has the following interface:

public String getName(int id);

What would most people put?

/**
 * Gets the name for the given id.
 * @param id the id
 * @return the name
 */
public String getName(int id);

It’s valid Javadoc, I’ll give you that, but it’s this kind of rubbish that has given the anti-documentation lot such great ammo. It’s told me nothing that the code hasn’t already told me. Worse still, it’s told me nothing I couldn’t have worked out from the autocomplete. I’d agree with you that having no Javadoc is better than that. This is why I say do it properly, or not at all.

The documentation needs to tell me things I can’t work out from the interface. The method takes an ID; you may want to be more explicit about what that ID is. Are there lower or upper bounds for it? You return a String; are there any assumptions I can make about that String? Could the method return null?

/**
 * Returns the name of the user identified by the provided
 * user ID, or <code>null</code> if no user with the
 * specified ID exists. The user ID must be valid, that is,
 * greater than zero.
 * @param id a valid user ID
 * @return the name associated with the given ID, or
 * <code>null</code> if no user exists
 * @throws IllegalArgumentException if the user ID is less
 * than 1
 */
public String getName(int id);

This Javadoc does three things.

  1. It means I can easily get all the information I may need about the method from within the autocomplete dialog in my IDE, so I don’t need to leave my code.
  2. It defines a contract for the method so anyone implementing the code (or overriding it in the case of already implemented code) knows the basic assumptions that existing users already have.
  3. It provides me with a number of unit tests (or BDD scenarios) that I already know I have to write. While I may not be able to enforce the contract within the framework of the code (especially with interfaces), I can ensure the basic contract is followed by providing tests that check that IDs of less than 1 are rejected and that some non-existent ID returns null.
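
For instance, the contract above immediately gives us tests along these lines (a JUnit 4 sketch; the class names are assumed for illustration):

import static org.junit.Assert.assertNull;
import org.junit.Test;

public class UserNameLookupTest {

    // Hypothetical implementation of the interface under discussion.
    private final UserDirectory users = new InMemoryUserDirectory();

    @Test(expected = IllegalArgumentException.class)
    public void anIdLessThanOneIsRejected() {
        users.getName(0);
    }

    @Test
    public void anUnknownIdReturnsNull() {
        assertNull(users.getName(987654));
    }
}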

Writing good documentation forces you to think about your code. If you can’t succinctly describe what the method is doing then that in itself is a code smell. It forces you to consider the edge cases and to think about how people are going to use your code. When updating a method it forces you to consider your actions. If you’re changing our get method so that it always returns a default value rather than null, will that affect other people’s logic and flow? What about the other way round, when you’ve got a method that states it doesn’t return null – change that and you’re likely to introduce NPEs in other code. Yes, the unit tests should pick that up, but let’s work it out up front, before you’ve gone hacking about with the code.

Those of you familiar with BDD may spot that what you’re doing here is just performing deliberate self discovery with yourself. The conversation is between you, and you as a potential future developer. If, during that conversation, you can’t find anything to say that the interface doesn’t already tell you, then by all means leave out the Javadoc. To my mind that’s pretty much just the getters and setters on Javabeans where you know it’ll take and return anything. Everything else can and should benefit from good documentation.


Development Strategies

The Development Strategy triangle.

Most [all?] discussions on Agile (or Lean, or XP, or whatever the strategy du jour is) seem to use a sliding scale of “Agileness”, with a pure Waterfall process on the left and a pure Agile process on the right. You then place teams somewhere along this axis, with very few teams being truly pure Waterfall or pure Agile. I don’t buy this. I think it’s a triangle, with Waterfall at one corner, Agile at the second, and Panic Driven Development at the third. Teams live somewhere within this triangle.

So what is Panic Driven Development? Panic Driven Development, or PDD, is the knee-jerk reaction of the business to various external stimuli. There’s no planning process and detailed spec as per Waterfall; there are no discrete, costed chunks as per Agile; there is just “Do It Now!” because “The sky is falling!“; or “All our competitors are doing it!“; or “It’ll make the company £1,000,000“1; or purely “Because I’m the boss and I said so“. Teams high up the PDD axis will often lurch from disaster to disaster, never really finishing anything as the Next Big Thing trumps everything else, but even the most Agile team will have some PDD in their lives; it happens every time there is a major production outage.

If you’re familiar with the Cynefin framework you’ll recognise PDD as living firmly in the chaotic space. As such, a certain amount of PDD in any organisation is absolutely fine – you could even argue it’s vital for handling unexpected emergencies – but beyond this PDD is very harmful to productivity, morale and code quality. Past a certain point it doesn’t matter whether you’re Agile or Waterfall; with high levels of PDD you are probably going to fail.

Sadly, systemic PDD is often something that comes from The Business, and it can be hard for the development team to push back and restore some order. If you find yourself in this situation you need to track all the unplanned incoming work and its effect on the work you should be doing, and feed this data back to the business. Only when they see the harm this sort of indecision is causing, and its effect on the bottom line, will they be able to change.

1 I have worked on quite a few “million pound” projects or deals. The common denominator is that all of them failed to produce the promised million, often by many orders of magnitude.

“PDD” originally appeared as part of Agile In The Real World and appears here in a revised and expanded form.

Three Bin Scrum

Allan Kelly blogged recently about using three backlogs with Scrum rather than the more traditional two. Given that this is a format we currently use at Virgin Wines, he asked if I would do a write-up of how we use it so he could find out more. I’ve already covered our setup in passing, but thought I would write it up in a little more detail and in the context of Allan’s blog.

Our agile adoption has gone from pure PDD, to Scrum-ish, to Kanban, to something vaguely akin to Scrumban, taking the bits we liked from Scrum and Kanban. It works for us and our business, although we do regularly tweak and improve it.

With Kanban we had a “Three-bin System“: the bin on the factory floor was the stuff the team was actively looking at, or about to look at; the bin in the factory store was a WIP-limited set of issues to look at in the near future; and the bin at the supplier was everything else.

When we moved to our hybrid system we really didn’t want to replace our three bins, or backlogs, with just a sprint backlog and a product backlog, because the product backlog would be unworkable (as in 1072-issues-sitting-in-it unworkable!). So we kept our three backlogs.

The Product Backlog

The Product Backlog (what Allan calls the Opportunity Backlog, which is a much better name) is a dumping ground. Every minor bug, every business whim, every request is recorded and, unless it meets certain criteria, dumped in the Product Backlog. There are 925 issues in it at the moment; a terrifyingly large number of those are bugs!

I can already hear people telling me that those aren’t really bugs or feature requests: how can they be when they’re not prioritised and therefore not important? They’re bugs alright. Mostly to do with our internal call centre application or internal processes where there are workarounds. I would dearly love to get those bugs fixed, but this is the Real World and I have finite resources and a demanding business.

I am open and honest about the Product Backlog. An issue goes in there, it’s not coming out again without a business sponsor to champion it. It’s not on any “long term road map”. It’s buried. I am no longer thinking about it.

Our QA team act as the business sponsor for the bugs. Occasionally they’ll do a sweep and close any that have been fixed by other work, and if they get people complaining about a bug in the Product Backlog they’ll prioritise it.

The Product Backlog is too big to view in its entirety, so we use other techniques, such as labels and heat maps, to give an at-a-glance overview of what’s in it.

The Sprint backlog

Bad name, I know, but this equates to Allan’s Validated Backlog. This is the list of issues that could conceivably be picked up and put into the next sprint. The WIP limit for this backlog is roughly 4 × velocity which, with our week-long sprints, puts it at about a month’s work deep.

To make it into the Sprint Backlog an issue must be costed, prioritised and have a business sponsor. Being in this backlog doesn’t guarantee that the work will get done, and certainly doesn’t guarantee it’ll get done within a month. It simply means it’s on our radar and has a reasonable chance of being completed. The more active the product sponsor, the higher that chance.

The Current Sprint

With a WIP limit of last week’s velocity, adjusted for things like holidays, this forms the List Of Things We Hope To Do This Week. We don’t have “Sprint Failures“, so if an issue doesn’t make it all the way to the Completed column it simply gets dumped back into the Sprint Backlog at sprint completion. The majority of uncompleted issues will get picked up in the next sprint, but it’s entirely possible for something to make it all the way to the current sprint, not get worked on, then get demoted all the way back to the Product Backlog, possibly never to be heard from again.

Because issues that span sprints get put back exactly where they were when the last sprint ended, we end up with something akin to punctuated Kanban. It’s not quite the hard stop and reset that pure Scrum advocates, but it’s also not continuous flow.

The current sprint is not set in stone. While I discourage The Business from messing about with it once it’s started (something they’ve taken on board), we are able to react to events. Things can be dropped from the sprint, added to it, or re-costed. Developers who run out of work can help their colleagues, or go to the Sprint Backlog to pull more work into the sprint. Even the sprint end date and the timing of the planning meeting and retrospective are movable if need be.

The Expedited Queue

There is a fourth backlog: the Expedited Queue. This is a pure Kanban setup with WIP limits of 0 on every column, and it should be empty at all times. QA failures and bugs requiring a patch get put in this queue and should be fixed ASAP by the first available developer. Story points are used to record how much work was spent in the Expedited Queue, but they’re not attributed to the sprint’s velocity. The logic here is that this work takes velocity away from the sprint, as it’s caused by items that weren’t quite as “done” as we had hoped.