Tag Archives: solution

K.I.S.S.

One of the talks I do explains how ‘simple’ in the K.I.S.S.1 principle is not the same as ‘easy’. Sometimes the easy option introduces complexity into the system that later turns out to be detrimental. The simplest example is testing: not testing is the easy option, but it introduces uncertainty and complexity into your system that will be difficult to pin down at a later date.

That said, easy and simple are not mutually exclusive, which is an important thing to remember, especially when you’re caught in the throes of trying to fix a supposedly simple problem.

I spent far too many hours yesterday trying to write an install script for one of our components. The underlying issue is simple: “As someone installing the component, I want the configuration files to reflect the idiosyncrasies of my setup so that I can compile the code”. Or, if you prefer BDD format: “Given I am installing the component, when I perform the compile step, then the compilation should work”.

It all boils down to a bindings file which specifies the location of a couple of libraries and includes. The defaults specified in that file don’t work on everyone’s system, although they do work on live2.

My solution was to write some code that iterated through the files, checked if any were missing and, if so, prompted the user for the correct locations. Great, except the format of bindings.gyp is such that I needed to take a complex structure, flatten it, inject extra details so the user prompts made sense, and then reconstruct the complex structure from the flat one. Not wanting to hard-code the format, I then disappeared up my own backside with specially crafted config files and mappings from those to the format used by bindings.gyp.

Nearly 200 lines of code in, deep in callback hell and with grave concerns about whether my script would even pass code review, I discovered some pretty nasty bugs: minor configuration changes to a live server would cause an automated deploy to suddenly require user input, which we didn’t want. Adding logic to provide an override would make things even more complex, and my nice, simple solution was disappearing out of reach.

It was then that it hit me that I was providing the wrong solution. This is something that really needs to be set once per environment and then left alone. It’s not going to be used a huge amount of the time, and it doesn’t need to be gold plated. With that in mind I wrote a simple shell script that checks for the presence of an environment variable. If the variable exists, use the bindings file it points to; if it doesn’t, use the default. Simples.

All told, with error handling and everything, the script is 21 lines long. Not only that, but it provides a nice way to handle upgrades to the environment in live without having to redeploy.
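For illustration, here’s a minimal sketch of that approach. The variable name, file names and error handling below are mine, not the actual script:

#!/bin/sh
# If the override variable is set, use the bindings file it points to;
# otherwise fall back to the default shipped with the component.
if [ -n "$COMPONENT_BINDINGS" ]; then
    if [ ! -f "$COMPONENT_BINDINGS" ]; then
        echo "Error: $COMPONENT_BINDINGS does not exist" >&2
        exit 1
    fi
    cp "$COMPONENT_BINDINGS" bindings.gyp
else
    cp bindings.gyp.default bindings.gyp
fi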


1 Keep It Simple Stupid – in a nutshell, don’t overcomplicate things.

2 Something we wanted to maintain.

Improving Google Drive on the Mac

I’ve been a Dropbox user for ages now, although I’ve never really been hugely active in getting referrals or doing the other activities that increase my free space. Currently my account lets me store 3.8GB of data. Until recently this has been fine; however, I now also have a few shared folders which are beginning to chew up some space. To mitigate this problem I’ve hived off just under a gig of data to Google Drive, which I’ve recently installed. While Google Drive doesn’t really give me the in-app integration of Dropbox, it is quite a useful place to dump lesser-used files and backups.

My problem is that the badges used by Google Drive are hideous and fugly, and that doesn’t appeal to my sense of good style. If you compare the Dropbox folder on the left in the two images below to the Google Drive folder on the right you’ll see what I mean.

[screenshots: Dropbox and Google Drive folder badges, side by side]

Googling to see if there was a way to solve this issue was initially fruitless; however, I came across something telling me where the Dropbox badges live within the application. They’re .icns files, which got me thinking: did Google Drive use something similar? Turns out it does. The solution was simple: copy the Dropbox .icns files over the Google Drive ones, reboot and voilà.

As you can see from the two examples below, it’s not quite perfect. Again, Dropbox is on the left and Google Drive is on the right, and the Google Drive badge is offset lower and to the right. Still, it’s a huge improvement.

[screenshots: Dropbox badge and the new Google Drive badge, side by side]

The commands I used were simple, and include a step to backup the old Google Drive icns files. Obviously, we’re messing with the internals of an application here so Caveat Emptor and all that, and if you end up breaking it you get to keep both halves – please don’t come running to me.

cd /Applications/Google\ Drive.app/Contents/Resources/FinderExt.bundle/Contents/Resources

cp Blacklisted.icns Blacklisted.icns.bak
cp Shared.icns Shared.icns.bak
cp Synced.icns Synced.icns.bak
cp Syncing.icns Syncing.icns.bak

cp /Applications/Dropbox.app/Contents/Resources/emblem-dropbox-unsyncable.icns Blacklisted.icns
cp /Applications/Dropbox.app/Contents/Resources/emblem-dropbox-uptodate.icns Shared.icns
cp /Applications/Dropbox.app/Contents/Resources/emblem-dropbox-uptodate.icns Synced.icns
cp /Applications/Dropbox.app/Contents/Resources/emblem-dropbox-syncing.icns Syncing.icns

Eclipse, OSX and JDK 1.7

Despite being a massive Mac fanboi, I am the first to admit that as soon as you start going a little off-piste with OSX you run into problems that require technical knowledge to fix. Java development on the Mac falls into the off-piste category, and it has always been more than a little fun getting things set up.

Now that Oracle are providing the JDK, it seems that things no longer live quite where they used to, which left me scratching my head when trying to get Eclipse working with JDK 1.7.

Installing JDK 1.7 is easy: go to the Oracle download page, grab the 64-bit OSX DMG, open, run, job done.

$ java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

Now to tell Eclipse where the JDK is:

$ ls -l `which java`
lrwxr-xr-x  1 root  wheel  74 24 Oct 15:37 /usr/bin/java -> /System/Library/Frameworks/JavaVM.framework/Versions/Current/Commands/java

Great… except Eclipse doesn’t recognise /System/Library/Frameworks/JavaVM.framework/Versions/Current/Commands/ or /System/Library/Frameworks/JavaVM.framework/Versions/Current/ as a valid JDK location.

After a bit of Googling I discovered the magic java_home command.

$ /usr/libexec/java_home -v 1.7
/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home

Giving that directory to Eclipse made it happy, and I’m now able to use an up-to-date version of Java for my code.
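As an aside, the same command is handy outside Eclipse too. A common convention (nothing Eclipse itself needs) is to use it to set JAVA_HOME in your shell:

$ export JAVA_HOME=$(/usr/libexec/java_home -v 1.7)
$ echo $JAVA_HOME
/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home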

iOS 7 Music Problems

So, like pretty much every other fanboi out there, I now have iOS 7 installed on my iPad and iPhone. In the main I’ve been quite impressed. I’m still running an iPhone 4S and I was worried it would struggle. Two things I did notice, though, were that music playback and battery life were both shocking. I’ve had this problem in the past, when the iPhone 4 came out: I was on an iPhone 3 and the latest version of iOS struggled to play music without stuttering. I had joked that Apple deliberately caused older hardware to do this to force upgrades, and then duly went and got an iPhone 4, which solved any speed issues. This time round I wasn’t so sure it was hardware related. For one thing, every time I opened the music app there was network access, and music wasn’t stuttering, it was just stopping, or refusing to play.

Googling the situation wasn’t helpful. The internet is rife with stories about iOS 7, the new music player and people having unrelated problems, meaning my searches were bringing up useless news articles and forum posts. To that end I’m going to describe the problems I had so that maybe others might find this and get a solution.

The first problem was music would just stop. Press play again and nothing would happen, or maybe it would play a second or two, and then stop.

Next up was the bizarre behaviour of me pressing ‘next’ and seeing my iPhone keep skipping tracks. It was almost as if it was considering each track, and then discounting it and moving on to the next one. Sometimes it would skip a number of tracks before finally deciding it would play one.

Then there was the issue of near constant network access when the music app was open, and really poor battery life, probably because of the network access.

Lastly there were some odd songs on my iPhone. I rate all my music and have rules that put 5*, 4* and a random selection of 3* tracks onto my phone. This does mean that each time I sync I get a slightly different selection of tunes, but I was sure I’d set some of these tracks not to sync.

It turns out the explanation, and the solution, was very simple. The problems seemed to occur when I had limited network access, so I wondered if iOS 7 was doing anything funky with the music app and phoning home. A quick check of the Music app preferences yielded:

[screenshot: the Music app settings as they were]

It seems my phone was now trying to play music I’d bought from the iTunes Music Store but that wasn’t actually on the phone. A quick change of settings to:

[screenshot: the Music app settings after the change]

And the number of tracks on my phone dropped by 1,000; those that were left played instantly, and the problems all went away.

Costing and Commitment

One of the hardest aspects of Scrum seems to be the accurate costing of stories. We all know the theory: you break your work into chunks, cost those chunks, and commit to a certain amount of work each week based on costing and velocity. Pure Scrum™ then states that if you do not complete your committed work then Your Sprint Has Failed. All well and good, apart from one small problem: it’s all bollocks.

I’ve long had an issue with the traditional costing/commitment/sprint cycle insofar as it doesn’t translate from theory into the real world. In her recent talk at NorDev, Liz Keogh pointed out that Scrum practitioners are distancing themselves from the word “commitment” and from Scrum Failures as they are causing inherent problems in the process. Allan Kelly recently blogged about how he considers commitment to be harmful. Clearly I’m not alone in my concerns.

As always, when it comes to things like this, I’ve simply borrowed ideas from other people and customised the process to suit me and my team. If this means I’m doing it “wrong” then I make absolutely no apologies for it; it works for us and it works well.

Costing

Developers suck at costing. I mean really suck. It’s like some mental block, and I have yet to find a quick and effective way to clear it. Story points are supposed to make developers better at costing because you’re removing the time element and trying to group issues by complexity. That’s lovely on paper, but I’ve found it just confuses developers – oh, we all get the “a Chihuahua is 1 point so a Labrador must be 5 or 8 points” thing, but that’s dogs; visualising the relative size and complexity of code is not quite as simple.

What’s worse is that story points only relate to time once you have a velocity. You can’t have velocity without points and developers generally find it really hard to guesstimate1 without times. Chicken and egg. In the end I just loosely correlated story points and time, making everyone happy. I’ve also made story points really short because once you go past a few days estimates start becoming pure guesses. What we end up with is:

Points – Meaning
0 – Quicker to do it than cost it.
0.5 – A one-line change.
1 – Easily done in an hour.
2 – Will take a couple of hours.
3 – A morning’s or an afternoon’s work.
5 – Will take most of the day.
8 – A day’s work.
13 – Not confident I can do it in a day.
20 – A couple of days’ work.
40 – Going to take most of the week.
100 – Going to take a couple of weeks.

Notice this is a very loose correlation to time, and it gets looser the larger the story point count. Given these vagaries I will only allow 40 and 100 point costings to be given to bugs. Stories should be broken up into chunks of two days or less so you’ve got a good understanding of what you’re doing and how long it’s going to take2.

With that in mind 40 points really becomes “this is going to be a bitch to fix” and 100 points is saved for when the entire team looks at you blankly when diagnosing the problem: “Err… let me go look at it for a few days and I’ll get back to you“.

Stopping inflation

Story point inflation is a big problem with Scrum. Developers naturally want to buy some contingency time and are tempted to pad estimates. Story point deflation is even worse, with developers being hopelessly optimistic and then failing to deliver. Throw in The Business trying to game the system and it quickly becomes a mess. I combat this in a few ways.

Firstly, points are loosely correlated to time. In ideal conditions a developer wants to be completing about 8 points a day. This is probably less once you take meetings, walkups and other distractions into account. While an 8 point story should be costed such that the developer can complete it in a normal day with distractions accounted for, the same doesn’t hold true for a series of 1 point stories: if they’re all about an hour long and there’s an hour’s worth of distractions in the day then the developer is only getting 7 points done in that day.

Minor fluctuations in average per developer throughput are fine, but when your velocity starts going further out of whack it’s time to speak to everyone and get them to think about how they’re estimating things.

Secondly, points are loosely correlated to time. A developer can track how long it takes them to complete an issue, and if they’re consistently under- or over-estimating it becomes very apparent, as the story points bear no correlation to the actual expended effort. A 5 pointer taking 7 hours isn’t a problem, but any more than that and it probably wanted to be an 8 pointer. Make a note, adjust future estimates accordingly. I encourage all my developers to track how long an issue really takes them and to see how that relates to the initial estimate.

Thirdly, costing is done as a group exercise (we play planning poker) and we work on the premise of an “average developer”. Obviously if we take someone who is unfamiliar with the code it’s going to take them longer. You’ll generally find there are some outlying estimates, with someone very familiar with that part of the code giving low estimates and people unfamiliar with it padding the value. We usually come to a consensus fairly quickly and, if we can’t, I just take an average.

I am aware that this goes against what Traditional Scrum™ teaches us, but then I’m not practicing that; I’m practicing some mongrel Scrumban process that I made up as I went along.

Velocity and commitment

I use the average velocity of the past 7 sprints3, adjusted to take holiday into account, when planning a sprint. We then pile a load of work into the sprint based on this figure and get to work. Traditionally we’ve said that we’ve committed to that number of story points and issues, but only because that’s the terminology that I learned with Scrum. Like everything else, it’s a guesstimate. It’s what we hope to do, a line in the sand. There are no sprint failures. What there is is a discussion of why things were not completed and why actual velocity didn’t match expected velocity. Most of the time the reasons are benign and we can move on. If it turns out there are problems or impediments then we can address them. It’s a public discussion and people don’t get to hide.
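To make that concrete with some purely illustrative numbers (these aren’t from a real sprint): if the past 7 sprints delivered 40, 38, 45, 41, 36, 44 and 42 points, the average velocity is roughly 41 points. If one of four developers is then on holiday for the whole of the next sprint, we’d plan for about 41 × 3/4, or around 31 points.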

Epics and Epic Points

The problem with having story points covering such a small time period is that larger pieces of work start costing huge numbers of points. A month is something like 200 points and a year becomes 2,500 points. With only around 2,000 working hours in a year we start getting a big disconnect between points and time, which The Business will be all over. They’ll start arguing that if a 1 year project is 2,500 points then why can’t we have 2,500 one-point issues in that time?

To get round this issue we use epic points, which are used to roughly cost epics before they’re broken down into stories and properly costed. While story points are educated guesstimates, epic points are real finger-in-the-air jobs. They follow the same sequence as story points, but go up to 1000 (1, 2, 3, 5, 8, 13, 20, 40, 100, 250, 500, 1000). We provide a handy table that lets the business know that if you have an epic with x points and you lob y developers at the problem then it will take roughly z days/weeks/months. The figures are deliberately woolly and are used for prioritisation of epics, not delivery dates. We’re also very clear on the fact that if 1 developer can do it in 4 weeks, 2 developers can’t do it in 2; that’s more likely to be 3 weeks.

Epic points are malleable and get revisited regularly during the life of an epic. They can go up, down or remain unchanged based on how the team feel about the work that’s left. It’s only as the project nears completion that the epic points and remaining story points start bearing a relationship to each other. Prior to that epic points allow The Business to know if it’s a long way off, or getting closer.


1 What, you think this is based on some scientific method or something? Lovely idea, we’re making educated guesses.

2 I’ve had developers tell me they can’t cost an issue because they don’t know what’s involved. If you don’t know what’s involved then how can you work on it? Calling it 20 points and hoping for the best isn’t going to work. Instead you need to create a costing task and spend some time working out what’s involved. Then, when you understand the issue, you can cost it properly.

3 A figure based purely on the fact that JIRA shows me the past 7 sprints in the velocity view.

Agile In The Real World

“No plan of operations extends with any certainty beyond the first contact with the main hostile force.” – Helmuth Carl Bernard Graf von Moltke

I’ve been doing eXtreme Programming (XP) and Agile in one guise or another since the early 2000s. During that time I’ve been in big teams, small teams, bureaucratic organisations, lean organisations and chaotic organisations. I have never worked in a top-down Agile organisation and probably never will. Also, no two teams I have worked in have done Agile the same way. I suspect this is partly to do with the organisations the teams were part of, and partly to do with the teams themselves. This is not a bad thing.

Agile is a toolkit, not a rigid set of structures. As with all toolkits, some tools fit better for certain circumstances than others. A good team will adopt an Agile process that fits them, fits the business they work in and then adapt that process as and when things change (and they will). If you’re looking for a post about “How to do Agile” then this is the wrong place. I can’t tell you, I don’t know your team, or your organisation. Instead this explains how I’ve implemented Agile for our team and our organisation in order to get the maximum benefit.

PDD

Most (all?) discussions on Agile seem to use a sliding scale of Agileness, with a pure Waterfall process on the left and a pure Agile process on the right, and then place teams somewhere along this axis, with very few teams being truly pure Waterfall or pure Agile. I don’t buy this. I think it’s a triangle with Waterfall at one point, Agile at the second and Panic Driven Development at the third. Teams live somewhere within this triangle.

So what is Panic Driven Development? Panic Driven Development, or PDD, is the knee-jerk reaction of the business to various external stimuli. There’s no planning process and detailed spec as per Waterfall, there are no discrete chunks and costings as per Agile, there is just “Do It Now!” because “The sky is falling!”, or “All our competitors are doing it!”, or “It’ll make the company £1,000,000”1, or purely “Because I’m the boss and I said so”. Teams high up the PDD axis will often lurch from disaster to disaster, never really finishing anything as the Next Big Thing trumps everything else, but even the most Agile team will have some PDD in their lives; it happens every time there is a major production outage.

When I first joined my current company it was almost pure PDD. Worse still, timescales were being determined by people who didn’t have the first clue about how long things would really take. Projects were late (often by many months) and issue tracking was managed by simply ditching any issues over a certain age. In short, it was chaos. Chuck in a legacy codebase with some interesting “patterns”, a whole bunch of anti-patterns and a serious number of WTFs and you have the perfect storm: low output and poor quality.

Working on the edge of chaos

One thing I realised very early on was that I was not going to be able to change The Business. The onslaught of new things based on half-formed ideas was never going to change, and the rapid changes of direction were part of the company’s DNA. Rather than fight this we embraced it, with some caveats.

Things change for us, fast. Ideas get discarded, updated and changed in days and the development team needs to keep up. To achieve this we use Scrum… except where we don’t, and use Kanban instead. Don’t worry though, it’s not that complex. 🙂

Scheduled work is done using Scrum. Sprints are a week long and start on a Wednesday2. Short, rapid sprints mean we can change direction fast without knocking the sprint planning for six. If the business want to change direction they only have to wait a few days. Releases generally (but not always) consist of two sprints of work. A release undergoes 2 weeks of QA after leaving development, so will generally be in production 4 weeks after the sprint started. If need be we can do a one-sprint release with as little as one week of QA and have a change out within 3 weeks of it being requested.

Sat on top of that we have a Kanban queue which should remain empty at all times. It is populated with QA failures and critical issues that are either blocking the release, or require patching. Every column on the Kanban board has a constraint of 0 items. Put something in it and it goes red, making it pretty obvious that someone needs to fix something sharpish.

The sprint planning meeting, retrospective and costing are all handled in the same Wednesday morning meeting, which lasts an hour. First up we look at the state of the outgoing sprint. We look at what got added to the sprint after it started, and why; what was removed from the sprint, and why; and what wasn’t completed within the timeframe of the sprint, and why. We run a system whereby it’s OK for things to span sprints. Things overrun, things get stalled, and sometimes it’s simply that you had half an hour left in the sprint, added a new issue to work on, but never had enough time to finish it. Any concerns are raised and handled, then the sprint is closed. The next sprint is then planned using a moving average of velocity as guidance for how much work to add. Any time remaining in the meeting is used for costing and curating the backlog. Sadly the business rarely attend these meetings, meaning we need to be creative when it comes to business sponsors.

Finding Business Sponsors

Unlike traditional Scrum we have two backlogs. With over a decade of technical debt and more new development than we can possibly hope to achieve, we have hundreds of issues. Clearly this is unworkable. The majority of these live in the un-prioritised backlog: we know about them, we’ve documented them, but they’re not getting done, and they may not even get costed unless someone champions them and gets them pushed into the scrum backlog. The scrum backlog is the realistic backlog. We aim to keep no more than 4 x average velocity worth of work in this backlog, which means that at any given time it provides a roadmap for the next month. We also make sure everything in the scrum backlog is properly costed, meaning sprint planning is incredibly easy: just put the top 25% of the backlog into the sprint, adjusted for holidays and various other factors.

Using this method you very quickly find sponsors coming out of the woodwork. When work is not done people start asking where it is, and you can then explain to them that it’s not been prioritised, or that it’s being trumped by other work. If they care about the issue then they need to champion it, become the business sponsor and take responsibility for it. They can argue the case for it being moved up the backlog with the business. If they don’t want to do that then clearly the work is not important, so it goes into the un-prioritised backlog to eventually die through lack of interest. Stuff that is already in the un-prioritised backlog can be fished out when a sponsor is found and costing can start.

Bugs generally follow a slightly different process insofar as they will always have a sponsor, even if it’s the testing team. Bugs are never closed unless they are fixed, or cease to be an issue due to other changes. The QA team will regularly revisit all open bugs and re-prioritise or close them as necessary.

Costing

New features are costed using planning poker and we use very small stories. Valid costings are 1 (1 line change), 2, 3, 5, 8, 13 and 20. Our target velocity is between 8 and 13 points per developer per day. Any slower and we’re being too optimistic with our costing, any faster and we’re being too pessimistic. Bearing that in mind a developer should easily handle two 20 point stories in a single sprint with room to spare. Anything larger than 20 points needs to be carved up into multiple stories, or turned into an Epic. We do this because estimates get rapidly poorer once you go past a couple of days work.
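As a rough sanity check on those numbers (my arithmetic, using the week-long sprints described earlier in this post): at the bottom of the target range, 8 points per developer per day over a 5 day sprint is 40 points, which is exactly two 20 point stories; anything towards the 13 points-a-day end of the range is what provides the room to spare.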

Stories are only costed if the team fully understands the issue. If there are questions, the issue is noted and the questions taken to the business sponsor. Yes, it would be great if they were in the costing meeting and could answer the questions there and then, but it can be a little like herding cats sometimes. The cost to the business sponsor is that the issue isn’t costed and can’t go into a sprint until it is, and it’s a cost they’re incurring by not attending, not one that we’re imposing on them.

Stories that exceed 20 points are either quickly split into a couple of stories, or converted to an epic and a costing task raised. This allows time in a sprint for one or more members of the team to find the full set of requirements from the business sponsor and generate the full set of required stories for later costing.

Scope creep can either be added to a story, or a new story can be created for the creep. If it’s added to an existing story, its old costing is discarded, and if that story is in the current sprint it’s ejected from the sprint until it’s been re-costed and space made for it. The costing may happen there and then with the team having a quick huddle, or it may need to wait for the next planning meeting.

It’s not a silver bullet

Nothing is written in stone except the maximum velocity of the team. Sprints can start late, end late or end early. Releases can be held back, or brought forward. Issues can be removed from the sprint and replaced with others. We can react to the business, but it’s not a silver bullet. The more the business change their minds, the slower throughput gets due to the inertia of changing direction; however, they are now better informed and can see and measure the effects of this, which has resulted in a lot less chopping and changing.

Projects are now being delivered on time; however, the timescales are also now realistic, and easily tracked. Projects are becoming better defined as their true cost is realised by those proposing them. The output is similar to what it used to be, but is now more focused. Rather than over-promise, under-deliver and spend months cleaning up the mess, certain projects just aren’t even attempted.

The process is continually evolving. We’ve done pure Scrum and pure Kanban before. The model we use took the most useful aspects of both of these systems. As we try new things we’ll take the best bits and adapt them to suit us. No doubt there are Agile Evangelists out there who will balk at one or more aspects of what we do as being wrong. Maybe they are, all I can say is they work for us and the team is happy with how we work. If they’re not, we change it.


1 I have worked on quite a few “million pound” projects or deals. The common denominator is that all of them failed to produce the promised million, often by many orders of magnitude.

2 Why Wednesday? People are more likely to be in. There isn’t that last minute panic on Friday to get everything finished and the sprint doesn’t start on a day where people are catching up after the weekend.

TDD, BDD and Sonar; or how I learned to stop leaning on a crutch

I should probably start by pointing out that this post is more to help me get something straight in my head than anything else. It’s also covering a subject that I’m not sure I fully understand, so I may have completely missed the point.

One of the things that I was most interested in at Sync Conf was Behaviour Driven Development (BDD). I’ve never been great at following Test Driven Development (TDD), mainly because I couldn’t make the shift required to fully get my head round it. What I practiced, if it has a name, was Javadoc Driven Development; each method would have descriptive Javadoc that defined what happened on various inputs. I found that by doing this I built up a narrative of how a class would work and that provided me with concrete test examples.

This method of testing only really works if you write decent Javadoc. The following is next to useless, and on many levels:

/**
* Sets the name.
*
* @param name the name
*/
public void setName(String name);

What happens if I pass in null? What happens if I call the method twice? Does passing a blank string cause issues? I’ve heard so many arguments that Javadoc is pointless and you should just read the code; after all, it’s just going to get out of date. I find that attitude utterly reprehensible: the behaviour of the method could rely on something that it’s calling, and that could go more than one level deep. I don’t want to have to dive through reams of code to understand if what I’m calling does what I expect, nor do I want to have to guess. For me the Javadoc provides a contract for the method in a narrative form which is then implemented by the code. Change the logic of the code and you have to change the Javadoc, and I’d actually change the Javadoc first.

The mindset I try and put myself in is not “how will this method work logically”, but more “how will this method be used and how does it fit in with what the object is trying to do”. In the above example we’re setting a name. Does it make sense for a name to be null, or blank? If we can have an empty name how is that represented? null, empty string, or both? Is a blank string treated the same as an empty string? Should we trim the string? Are there other things we’re going to limit? You then document the answers to those questions:

/**
* Sets the name to the given value, overwriting any previous value that was
* set. Passing <code>null</code> into this method will have the effect of 
* clearing the name. Passing a blank or empty string will have the effect 
* of clearing the name by setting it to <code>null</code>.
*
* @param name the value to set the name to
*/
public void setName(String name);

All of a sudden you’ve got a load of things you need to test. I’d be writing tests to ensure that:

  • Given a null name, providing a value A sets the name to A.
  • Given a null name, providing null retains the null value.
  • Given a null name, providing a blank value retains the null value.
  • Given a value A, providing the same value A retains the value A.
  • Given a value A, providing a value B sets the name to B.
  • Given a value A, providing null sets the name to null.
  • Given a value A, providing a blank string sets the name to null.
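
To make a couple of those scenarios concrete, here’s how they might look as plain JUnit tests. The Person class, test names and assertions are mine for illustration; the Tumbler tests mentioned below use their own runner and conventions:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

import org.junit.Test;

public class SettingNameTest {

    // Minimal implementation of the setName() contract above, so the sketch is self-contained.
    static class Person {
        private String name;

        public void setName(String name) {
            // null, blank and empty strings all clear the name, as the Javadoc specifies
            this.name = (name == null || name.trim().isEmpty()) ? null : name;
        }

        public String getName() {
            return name;
        }
    }

    @Test
    public void givenANullName_providingAValueSetsTheName() {
        Person person = new Person(); // name starts as null
        person.setName("A");
        assertEquals("A", person.getName());
    }

    @Test
    public void givenAValue_providingABlankStringSetsTheNameToNull() {
        Person person = new Person();
        person.setName("A");
        person.setName("   "); // a blank string should clear the name
        assertNull(person.getName());
    }
}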

Those of you familiar with BDD may already be seeing something familiar here. What I have been doing is something very similar to BDD, albeit in an informal and haphazard way and at a very low level. I was defining functionality as a narrative in the Javadoc with my user in mind (another developer), and then using that to discover the scenarios that I should test. I was actually so used to this method of working that when I used Tumbler to test one of the classes I was working on, it already felt natural and the tests practically wrote themselves. Interestingly enough, the conventions used by BDD and Tumbler freed me from one of the constraints I was facing with testing: that of my test classes getting too big. I was still structuring my test classes as the mirror of my code, so for any object Foo there was a FooTest. By thinking in stories for the tests too, and grouping scenarios that belong to a story, I could break out of this habit and have as many classes as I needed testing the behaviour of Foo.

Happy with my new test scenarios, and the output that Tumbler had produced, I proceeded to run Sonar against it. Sonar did not like the result. Most of what it complained about no longer matters. I can turn off the requirement for Javadoc on all methods because my test methods contain the scenario in the code and don’t require it elsewhere. The need for a descriptive string in my assertions can also be turned off, as the way the tests are displayed by Tumbler provides a much more natural way of reading the test results. One critical marker stood out though: 0% code coverage.

It took me all of 30 seconds to work out what was going on. Tumbler uses its own JUnit test runner, produces its own reports and isn’t spitting out the files that Sonar is looking for, so there’s nothing for it to report on. This may or may not be something I can fix, although Google is yielding nothing on the subject. This got me thinking: do I need to know my coverage? Surely if I’m defining the behaviour of the class, then writing the code for that behaviour, I’ve pretty much got 100% coverage, since I shouldn’t be writing code that produces behaviour that hasn’t been defined. This is where I got stuck.

Liz Keogh, who gave the Sync Conf BDD talk, mentioned that BDD doesn’t work so well for simple or well-understood scenarios. Should I be using TDD here? That way I’d get my test coverage back, but I’d lose my new way of working. Finally, after much Googling, I came across this blog and came to the realisation that I was coming at this all wrong: there is no spoon. What I think Liz meant was that BDD at the level of the business helping to write the scenarios isn’t useful for well-understood scenarios, because they’re well understood and anyone can write them, not that we just give up on BDD and go do something else… or maybe she did, but if I’m understanding Hadi correctly then using TDD instead of BDD is just a semantic shift and I could just as easily use BDD in TDD’s place.

We all know that 100% test coverage means nothing, and that chasing it can end up an exercise in pointlessness. Then there’s the farcical scenario where you have 100% test coverage, all tests running green, and software that doesn’t work. So why the panic over not knowing my test coverage? I think it boils down to the Sonar reports letting me browse the code, see what’s not tested and then think up tests to cover that code. In other words, chasing 100% (or 90%, or 80%) and writing tests for testing’s sake. If I’m doing BDD properly then I’ll be thinking up the narratives, determining the scenarios and then writing the code to meet those scenarios. If my code coverage isn’t above 80% (a level I consider generally acceptable) then I’m doing something wrong, as there are code paths not covered by scenarios, which is, in theory, pointless code.

So how do I solve my Sonar problem? Simple: unhook the alerts on test coverage, remove the test coverage widget and keep my eye out for a plugin for Tumbler reports. In the meantime I can just use the reports generated by Tumbler to keep an eye on my tests and make sure they’re running clean, and read up on getting my Maven builds to fail when Tumbler has a failed test.

Maven and Eclipse

Yesterday’s issue with Maven turned out to be a little more severe than just not having an Internet connection. Not only could Eclipse not create my new project, it couldn’t build an existing one. This problem persisted even with an Internet connection. From the command line everything worked though.

After fiddling about with a few things I tried a software update for Eclipse. It failed updating GWT. The last time I used Eclipse on my laptop I was buggering about with Google Web Toolkit and Maven. Given the failure to update, maybe I’d broken something. I uninstalled GWT. No joy.

Finally I stumbled across something on Google that suggested I blow away a large chunk of my local Maven repository, rebuild clean from the command line, refresh the Eclipse project and run Maven -> Update from within the project. That worked.
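Roughly speaking, the fix amounted to something like this (the path below is illustrative; which chunk of the local repository you need to remove depends on which artifacts are broken):

$ rm -rf ~/.m2/repository/com/google
$ mvn clean install

followed by refreshing the project in Eclipse and running Maven -> Update.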

As far as I can work out, I had newer versions of the jars in my repository than Eclipse wanted and, for whatever reason, it wasn’t downloading the versions it needed. The command line was happy with the versions I had. By deleting and rebuilding, it obviously downloaded versions that everyone was happy with. And people wonder why I like Ant.