Monthly Archives: February 2013

TDD, BDD and Sonar; or how I learned to stop leaning on a crutch

I should probably start by pointing out that this post is more to help me get something straight in my head than anything else. It’s also covering a subject that I’m not sure I fully understand, so I may have completely missed the point.

One of the things that I was most interested in at Sync Conf was Behaviour Driven Development (BDD). I’ve never been great at following Test Driven Development (TDD), mainly because I couldn’t make the shift required to fully get my head round it. What I practiced, if it has a name, was Javadoc Driven Development; each method would have descriptive Javadoc that defined what happened on various inputs. I found that by doing this I built up a narrative of how a class would work and that provided me with concrete test examples.

This method of testing only really works if you write decent Javadoc. The following is next to useless, on many levels:

/**
* Sets the name.
*
* @param name the name
*/
public void setName(String name);

What happens if I pass in null? What happens if I call the method twice? Does passing a blank string cause issues? I’ve heard so many arguments that Javadoc is pointless and you should just read the code because, after all, it’s just going to get out of date. I find that attitude utterly reprehensible; the behaviour of the method could rely on something that it’s calling, and that could go more than one level deep. I don’t want to have to dive through reams of code to understand if what I’m calling does what I expect, nor do I want to have to guess. For me the Javadoc provides a contract for the method in narrative form, which is then implemented by the code. Change the logic of the code and you have to change the Javadoc, and I’d actually change the Javadoc first.

The mindset I try to put myself in is not “how will this method work logically”, but “how will this method be used, and how does it fit in with what the object is trying to do”. In the above example we’re setting a name. Does it make sense for a name to be null, or blank? If we can have an empty name, how is that represented: null, empty string, or both? Is a blank string treated the same as an empty string? Should we trim the string? Are there other things we’re going to limit? You then document the answers to those questions:

/**
* Sets the name to the given value, overwriting any previous value that was
* set. Passing <code>null</code> into this method will have the effect of 
* clearing the name. Passing a blank or empty string will have the effect 
* of clearing the name by setting it to <code>null</code>.
*
* @param name the value to set the name to
*/
public void setName(String name);

All of a sudden you’ve got a load of things you need to test. I’d be writing tests to ensure that:

  • Given a null name, providing a value A sets the name to A.
  • Given a null name, providing null retains the null name.
  • Given a null name, providing a blank value retains the null name.
  • Given the value A, providing the value A retains the value A.
  • Given the value A, providing a value B sets the name to B.
  • Given the value A, providing null sets the name to null.
  • Given the value A, providing a blank string sets the name to null.
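
To make that concrete, here’s a rough sketch of a couple of those scenarios as plain JUnit tests. The Person class is invented for illustration (implementing the Javadoc contract above), and this is vanilla JUnit rather than Tumbler’s own runner and reporting:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

import org.junit.Test;

public class SettingANameScenarios {

    // Minimal Person implementing the Javadoc contract above: null,
    // blank and empty names are all stored as null. Invented for
    // illustration, not taken from any real code base.
    static class Person {
        private String name;

        public void setName(String name) {
            this.name = (name == null || name.trim().isEmpty()) ? null : name;
        }

        public String getName() {
            return name;
        }
    }

    @Test
    public void givenANullNameProvidingAValueSetsTheName() {
        Person person = new Person();
        person.setName("Alice");
        assertEquals("Alice", person.getName());
    }

    @Test
    public void givenANameProvidingABlankStringClearsTheName() {
        Person person = new Person();
        person.setName("Alice");
        person.setName("   ");
        assertNull(person.getName());
    }
}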

Those of you familiar with BDD may already be seeing something familiar here. What I had been doing was something very similar to BDD, albeit in an informal and haphazard way and at a very low level. I was defining functionality as a narrative in the Javadoc with my user in mind (another developer), and then using that to discover the scenarios that I should test. I was actually so used to this method of working that when I used Tumbler to test one of the classes I was working on it already felt natural and the tests practically wrote themselves. Interestingly enough, the conventions used by BDD and Tumbler freed me from one of the constraints I was facing with testing: my test classes were getting too big. I was still structuring my test classes as a mirror of my code, so for any object Foo there was a FooTest. By thinking in stories for the tests too, and grouping scenarios that belong to a story, I could break out of this habit and have as many classes as I needed testing the behaviour of Foo.

Happy with my new test scenarios, and the output that Tumbler had produced, I proceeded to run Sonar against it. Sonar did not like the result. Most of what it complained about no longer matters. I can turn off the requirement for Javadoc on all methods because my test methods contain the scenario in the code and don’t require it elsewhere. The need for a descriptive string in my assertions can also be turned off, as the way the tests are displayed by Tumbler provides a much more natural way of reading the test results. One critical marker stood out though: 0% code coverage.

It took me all of 30 seconds to work out what was going on. Tumbler uses its own JUnit test runner and produces its own reports, so it isn’t spitting out the files that Sonar is looking for, and there’s nothing for Sonar to report on. This may or may not be something I can fix, although Google is yielding nothing on the subject. This got me thinking: do I need to know my coverage? Surely if I’m defining the behaviour of the class and then writing the code for that behaviour, I’ve pretty much got 100% coverage, since I shouldn’t be writing code that produces behaviour that hasn’t been defined. This is where I got stuck.

Liz Keogh, who gave the Sync Conf BDD talk, mentioned that BDD didn’t work so well for simple or well understood scenarios. Should I be using TDD here? That way I’d get my test coverage back, but I’d lose my new way of working. Finally, after much Googling, I came across this blog and came to the realisation that I’m coming at this all wrong: there is no spoon. What I think Liz meant was that BDD at the level of the business helping to write the scenarios isn’t useful for well understood scenarios, because they’re well understood and anyone can write them; not that we just give up on BDD and go do something else… or maybe she did, but if I’m understanding Hadi correctly then using TDD instead of BDD is just a semantic shift, and I could just as easily use BDD in TDD’s place.

We all know that 100% test coverage means nothing, and that chasing it can end up an exercise in pointlessness. Then there’s the farcical scenario where you have 100% test coverage, all tests running green, and software that doesn’t work. So why the panic over not knowing my test coverage? I think it boils down to the fact that the Sonar reports let me browse the code, see what’s not tested and then think up tests to cover that code. In other words, chasing 100% (or 90%, or 80%) and writing tests for testing’s sake. If I’m doing BDD properly then I’ll be thinking up the narratives, determining the scenarios and then writing the code to meet those scenarios. If my code coverage isn’t above 80% (a level I consider to be generally acceptable) then I’m doing something wrong, as there are code paths not covered by scenarios, which is, in theory, pointless code.

So how do I solve my Sonar problem? Simple: unhook the alerts on test coverage, remove the test coverage widget and keep my eye out for a plugin for Tumbler reports. In the meantime I can just use the reports generated by Tumbler to keep an eye on my tests and make sure they’re running clean, and read up on getting my Maven builds to fail when Tumbler has a failed test.

Maven and Eclipse

Yesterday’s issue with Maven turned out to be a little more severe than just not having an Internet connection. Not only could Eclipse not create my new project, it couldn’t build an existing one. This problem persisted even with an Internet connection. From the command line everything worked though.

After fiddling about with a few things I tried a software update for Eclipse. It failed updating GWT. The last time I used Eclipse on my laptop I was buggering about with Google Web Toolkit and Maven. Given the failure to update, maybe I’d broken something. I uninstalled GWT. No joy.

Finally I stumbled across something on Google that suggested I blow away a large chunk of my local Maven repository, rebuild clean from the command line, refresh the Eclipse project and run Maven -> Update from within the project. That worked.

As far as I can work out I had newer versions of the jars in my repository than Eclipse wanted and, for whatever reason, it wasn’t downloading the versions it needed. The command line was happy with the versions I had. By deleting and updating, it obviously downloaded versions that everyone was happy with. And people wonder why I like Ant.

The Beginning Is A Very Difficult Time

This morning was meant to see me start development of one of my project ideas. Maven had other ideas. I’ve been [ab]using Ant for longer than I can remember and would consider myself an advanced hacker. The build system at work will automatically switch between building JARs, WARs and EARs based on project structure, a structure which is itself based on Maven’s. It’s not exactly an elegant build system, and woe betide the poor bastard that inherits it from me, but it works, and adding new projects is very simple.

The use of the Maven project structure stems from a number of years ago, when the team I was working with decided that Maven was far too hideously complex to consider using, but had some sensible ideas about project layout. I maintained this view until recently, when I decided that perhaps there is a better option than thousands of lines of hand-crafted XML to handle our builds. This coincided with me using more and more tools that are built using Maven, prompting a decision to start learning how to use it. Since you learn best by doing, I began to use Maven for any new projects.

This is the fourth time I’ve used Maven in a project from scratch, or at least it would be had I not been tripped up by something that keeps tripping me up: Maven often requires Internet access, something which, at this moment in time, I don’t really have… and certainly not to the level that I suspect Maven is going to need as it runs off and downloads all the dependencies I’m going to want.

Still, all is not lost. I have a name for the project (something that can often take me days to think up), I know what archetype I want to use and I’ve done a little research into a couple of things I’m going to include in the project just so I can have a go at using them. I’ll create the project over lunch and begin coding this afternoon.

An Evening of Game A.I.

On Thursday I’m off to An Evening of Game AI. Now, what I know about writing modern games can be written on the back of a postage stamp. What I am is a consumer of games. And a fussy one at that. So why an evening of Game AI? Well, first off, it’s a subject that I think I’ll find interesting. Even if I never put the knowledge gained into practice I’m liable to walk away thinking “that was cool”. I’m a geek; shoot me. More important is the concept of gamification [hideous word] and how I might apply game AI techniques in a business context. The ideas I have are very embryonic at the moment and it may be that they’re pie in the sky, but hopefully Thursday will prove useful and give me an idea of which directions I should be looking in.

Delivery Tracking

While lamenting the waste of a day waiting for a “7am-7pm” delivery which didn’t turn up, a friend of mine put the following on Facebook:

“The first delivery company to give a live schedule of how they reckon deliveries will pan out during the day, and keep it up-to-date, so you can at least see where you are in the queue… will make a fortune”

That got me thinking: how hard would this be? Superficially, I don’t think this is too hard a problem to solve using current technology. The hardest bit is the route planning, and we can turn to Google for that. So what do we need for an MVP?

I’m thinking of a driver app that contains an ordered list of postcodes, interspersed with any breaks the driver may have, plus the ability to reorder the drops and/or breaks and to mark a drop as done. Here the definition of done is that the delivery was attempted. Each time an update is made the app phones home with the current location and an ordered list of outstanding drops and breaks.

A server sits in the middle listening for these updates. Using the current location as the starting point, the server then traverses the drop-off points, querying Google to find out how long it will take to drive between each pair of points, and stores the cumulative time to each postcode (including any breaks).

A client page can then be used to query the server. It would find the correct ordered list of postcodes, look up your postcode and report how far down the list you are and, roughly, how long until the delivery will be with you.
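
To make the server’s job concrete, here’s a minimal sketch of that traversal in Java. Every name here is invented, and RouteService is a stub where a real implementation would call something like the Google Directions API:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class EtaCalculator {

    // Stub for the routing lookup; a real implementation would query
    // a service such as the Google Directions API.
    interface RouteService {
        int driveTimeMinutes(String fromPostcode, String toPostcode);
    }

    private final RouteService routes;

    public EtaCalculator(RouteService routes) {
        this.routes = routes;
    }

    // Walks the outstanding drops in order from the driver's current
    // location, accumulating drive time, and returns a rough ETA in
    // minutes for each postcode. Breaks would be added as extra
    // minutes between the relevant legs.
    public Map<String, Integer> etaMinutes(String currentLocation, List<String> drops) {
        Map<String, Integer> etas = new LinkedHashMap<String, Integer>();
        int elapsed = 0;
        String from = currentLocation;
        for (String drop : drops) {
            elapsed += routes.driveTimeMinutes(from, drop);
            etas.put(drop, elapsed);
            from = drop;
        }
        return etas;
    }
}

The client page then only has to find your postcode in that map.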

Of course, the devil is in the detail, and integrating this with the drivers’ current handheld units, the parcel tracking software and everything else would take some thought, but the basic premise is there. If you were familiar with the Google APIs you could probably knock together a demo web page showing the driver’s view at the top and the customer’s view at the bottom in a couple of days.

Mobile Office

Part of the reason for relaunching my blog was to document and track the start of a few personal projects I want to undertake. Finding time to work on these projects is always fun, especially now there’s a little one running round the house. What I do have, however, is two consistent blocks of 30+ minutes on the train each and every working day. Can I turn this into a mobile office and develop some applications in my spare time?

Well, let’s look at what I need. Firstly there’s space: somewhere to actually do the work. While I tend to sit in an ‘airline style’ seat (i.e. not a table seat) out of habit, there’s no shortage of table seats, even if I have to share one with someone for part of the journey. This is one of the many perks of no longer working in London and living in the sticks.

Secondly, I need a development environment. I usually have my MacBook Pro with me, and it’s loaded with the tools I generally use (and more can be added), so I’d say that’s sorted.

Thirdly, I’m going to need internet access for reference, looking up libraries, getting unstuck and finding humorous cat pictures when things are going badly. That’s not going to be so easy. While I have two iThings with mobile connectivity, both with the ability to tether my laptop to them, the mobile signal on the train is intermittent at best. Between my iPhone and iPad, which are on two different networks, I have coverage for perhaps 50% of the journey. Annoying if you need to look something up now.

This is not, however, an insurmountable problem. I’m pretty sure that by making sure I have as much as possible available to me locally, and by planning what I’m going to do in each chunk, I can minimise downtime.

First things first though, I need to decide what I’m going to work on first.

Wordle your code base

As part of Kevlin Henney‘s presentation at Sync Conf he showed a technique for visualising your code using Wordle. In a nutshell, you strip the comments from your code (lumping it all into one giant file in the process), pop it into Wordle and see what drops out. The theory behind this process is that, once you get past the scaffolding of the programming language, the language of the domain should become apparent. The results are very interesting.
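
If you want to try it, the stripping-and-lumping step only takes a few lines. This is my own quick-and-dirty sketch rather than anything Kevlin showed; the regexes are naive (they’ll mangle comment-like text inside string literals) but they’re good enough for a Wordle experiment:

import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class WordleFeed {

    // Recursively dumps every .java file under the given directory to
    // stdout with its comments stripped. Redirect the output into one
    // giant file and paste that into Wordle.
    public static void main(String[] args) throws IOException {
        dump(new File(args[0]));
    }

    private static void dump(File file) throws IOException {
        if (file.isDirectory()) {
            File[] children = file.listFiles();
            if (children == null) {
                return; // unreadable directory
            }
            for (File child : children) {
                dump(child);
            }
        } else if (file.getName().endsWith(".java")) {
            String source = new String(Files.readAllBytes(file.toPath()), StandardCharsets.UTF_8);
            System.out.println(source
                    .replaceAll("(?s)/\\*.*?\\*/", " ") // block and Javadoc comments
                    .replaceAll("//.*", " "));          // line comments
        }
    }
}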

I know there are problems with our main code base. I work on it daily and the entire team knows about the issues at a near visceral level, but finding ways to express and visualise the problems can be hard. Today I discovered I work in a domain of java.lang.String and null. Not great for an OO language. Even the language scaffolding told a story: the relative size of the import keyword to the class keyword shows we’ve got a lot of dependency issues.

More surprising was the result from our newer code base. Here the language of the domain started to show through, but it was coloured by the unit tests, which I’d forgotten to remove. On the plus side, @Test features highly, as does final. Tomorrow I’ll generate a few more Wordles without the test code and use them to spark some discussion on our new architecture in our retrospective.

Sleep Sort

I may be late to the party here, but I was introduced to the Sleep Sort algorithm at Sync Conf last week. The code is deceptively simple:

#!/bin/bash
# Sleep for the given number of seconds, then print it. Smaller
# numbers wake up first, so the output comes out sorted.
function f() {
    sleep "$1"
    echo "$1"
}
# Launch one background job per argument...
while [ -n "$1" ]
do
    f "$1" &
    shift
done
# ...then wait for them all to finish.
wait

Of course, it’s also hideously inefficient. Using it to sort 3 1 4 1 5 is fine as it only takes 5 seconds, but I wouldn’t like to sort any number much over 10 as you could be there for some time. It does make me wonder if you could speed it up by using milliseconds rather than seconds… and at what point you’d start getting race conditions.

I hate WordPress

A couple of years ago I suffered quite a major hack on my sites thanks to a vulnerability in one of my WordPress plugins. After quarantining everything and disinfecting some key sites I got massively disillusioned with the whole self-hosted internet presence thing and slunk off to spam Facebook with my drivel instead. Facebook doesn’t scratch the blogging itch though, and recently I’ve been trying out some alternatives to WordPress. Sadly, many years of using WordPress mean that, hate it as I do, I’m familiar with it. The plugins, themes and features I want are all there, and it’s got a very large and active community.

I was looking at using Habari, which showed promise; however, it’s too immature at the moment. I may well return to it later if the issues I found with it and the plugins I wanted to use are resolved, but for the time being it’s a grudging return to WordPress. Fresh install, fresh database, fresh start, but WordPress is on its absolute last chance. If it fails me again I’ll probably go and roll my own.

Meanwhile I’ve still got sites that I haven’t restored from the attack a couple of years ago. I think it’s time to accept that they’ll never be fixed, let the domain names expire and concentrate on slightly fewer things at once 🙂