Tag Archives: BDD

K.I.S.S.

One of the talks I do explains how ‘simple’ in the K.I.S.S.1 principle is not the same as ‘easy’. Sometimes the easy option introduces complexities into the system that turn out to be detrimental. The simplest example is testing: not testing is the easy option, but it introduces uncertainty and complexity into your system that will be difficult to pin down at a later date.

That said, easy and simple are not mutually exclusive, which is an important thing to remember, especially when you’re caught in the throes of trying to fix a supposedly simple problem.

I spent far too many hours yesterday trying to write an install script for one of our components. The underlying issue is simple: “As someone installing the component, I want the configuration files to reflect the idiosyncrasies of my setup so that I can compile the code”. Or, if you prefer BDD format: “Given I am installing the component, when I perform the compile step, then the compilation should work”.

It all boils down to a bindings file which specifies the location of a couple of libraries and includes. The defaults specified in that file don’t work on everyone’s system, although they do work on live2.

My solution was to write some code that iterated through the files, checked whether any were missing, and if so prompted the user for the correct locations. Great, except the format of bindings.gyp is such that I needed to take a complex structure, flatten it, inject extra details so the user prompts made sense, and then reconstruct the complex structure from the flat one. Not wanting to hard-code the format, I then disappeared up my own backside with specially crafted config files and mappings from them to the format used by bindings.gyp.

Nearly 200 lines of code in, deep in callback hell and with grave concerns about whether my script would even pass code review, I discovered some pretty nasty bugs which meant that minor configuration changes to a live server would cause an automated deploy to suddenly require user input, which we didn’t want. Adding logic to provide an override for this would make things even more complex, and my nice, simple solution was disappearing out of reach.

It was then that it hit me that I was providing the wrong solution. This is something that really needs to be set once per environment and then left. It’s not going to be used a huge amount of the time, and it doesn’t need to be gold plated. With that in mind I wrote a simple shell script that checked for the presence of an environment variable. If the variable exists, use the bindings file it points to; if it doesn’t, use the default; simples.
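The core of that approach can be sketched in a few lines. This is a minimal, hypothetical version: the variable name COMPONENT_BINDINGS and the file names are illustrative, not the real ones from the project, and the real script’s error handling is reduced to a comment.

```shell
#!/bin/sh
# Hypothetical sketch: COMPONENT_BINDINGS and the file names are
# illustrative, not the real ones from the project.
DEFAULT="bindings.gyp.default"

# Use the file named by COMPONENT_BINDINGS if the variable is set,
# otherwise fall back to the default shipped with the component.
BINDINGS="${COMPONENT_BINDINGS:-$DEFAULT}"

# The real script also errors out if the chosen file doesn't exist.
echo "Using bindings file: $BINDINGS"
```

The `${VAR:-default}` parameter expansion does all the work: an operator can point an environment at a custom bindings file once and never think about it again, while automated deploys that set nothing get the default with no prompting.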

All told, with error handling and everything, the script is 21 lines long. Not only that, but it provides a nice way to handle upgrades to the environment in live without having to redeploy.


1 Keep It Simple Stupid – in a nutshell, don’t overcomplicate things.

2 Something we wanted to maintain.

Why you should be documenting your code

It seems there are two schools of thought when it comes to documenting code. In one corner you’ve got those who feel that the “code should be expressive”; those who feel the “unit tests should document the code”; and those who argue that developers are lazy and the documentation gets out of date. These people will argue that there is no point documenting code. And in the other corner there’s me… well, me and the rest of my team, who have all seen sense after I’ve pointed out the error of their ways. 🙂

The rule for Javadoc in our team is “do it properly, or not at all”. I don’t mandate it, because if you do you will just end up with rubbish. I do strongly encourage it though. The key word, however, is properly. I’m not just talking about ending the first sentence with a full stop and making sure you have all the correct @param, @return and @throws tags; I mean well written, well thought out documentation.

I’ll handle the lazy developers argument first. The same argument can be made for not unit testing, and yet I doubt there are many people who would take the stance that not unit testing is OK. It’s part of development. Writing good Javadoc should also be a part of development.

“But it’ll get out of date!”, I hear you cry. So not only has a developer done a half-arsed job by not updating the documentation when updating some code, everyone who peer reviewed it missed that too. Please, feel free to hide behind that excuse. You may also want to stop doing reviews because you’re not doing them correctly. Also, what else are you missing?

The unit tests should document the code. Let’s consider a hypothetical situation. I’ve got your object and I’m calling a getter. I want to know if you’ll return null or not, because I may have some extra processing to do if I get a null value back. You expect me to stop, mid flow, go and find the unit tests, locate the relevant tests and work out whether null is something that might come back? Really? If there are no tests dealing with null return values, can I safely assume that null is never returned, or did you just forget to include them? I’ve never bought that argument.

As to the code being descriptive, well yes, it should. But it’s not always evident what’s going on. I can leap into our hypothetical get method and have a poke about to see what’s happening, but what if we’re getting the value from a delegate? What if that talks to a cache and a DAO? How far do I need to delve into your code to work out what’s going on?

And then we come on to the best argument of all: “You only need to document APIs”. It’s all API.

But what is good documentation? To help with that we’ll expand our scenario. Our getter has the following interface:

public String getName(int id);

What would most people put?

/**
 * Gets the name for the given id.
 * 
 * @param id the id
 *
 * @return the name
 */
public String getName(int id);

It’s valid Javadoc, I’ll give you that, but it’s this kind of rubbish that has given the anti-documentation lot such great ammo. It’s told me nothing that the code hasn’t already told me. Worse still, it’s told me nothing I couldn’t have worked out from the autocomplete. I’d agree with you that having no Javadoc is better than that. This is why I say do it properly, or not at all.

The documentation needs to tell me things I can’t work out from the interface. The method takes an ID, you may want to be more explicit what the ID is. Are there lower or upper bounds for the ID? You return a String, are there any assumptions I can make about this String? Could the method return null?

/**
 * Returns the name of the user identified by the provided
 * user ID, or <code>null</code> if no user with the
 * specified ID exists. The user ID must be valid, that is,
 * greater than or equal to 1.
 * 
 * @param id a valid user ID
 *
 * @return the name associated with the given ID, or
 * <code>null</code> if no user exists
 *
 * @throws IllegalArgumentException if the user ID is less
 * than 1
 */
public String getName(int id);

This Javadoc does three things.

  1. It means I can easily get all the information I may need about the method from within the autocomplete dialog in my IDE, so I don’t need to leave my code.
  2. It defines a contract for the method so anyone implementing the code (or overriding it in the case of already implemented code) knows the basic assumptions that existing users already have.
  3. It provides me with a number of unit tests (or BDD scenarios) that I already know I have to write. While I may not be able to enforce the contract within the framework of the code (especially with interfaces), I can ensure the basic contract is followed by providing tests that check an ID less than 1 is rejected and that some non-existent ID returns null.
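As a sketch of point 3, the contract above translates almost mechanically into tests. The UserDirectory class here is a hypothetical in-memory stand-in for whatever would really implement getName(int); it exists only so the contract checks can run.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in so the contract tests can run; the real object
// would be whatever actually implements getName(int).
class UserDirectory {
    private final Map<Integer, String> users = new HashMap<>();

    void addUser(int id, String name) {
        users.put(id, name);
    }

    /** Implements the documented contract for getName. */
    String getName(int id) {
        if (id < 1) {
            throw new IllegalArgumentException("User ID must be >= 1, got " + id);
        }
        return users.get(id); // null when no user with this ID exists
    }
}

public class GetNameContractTest {
    public static void main(String[] args) {
        UserDirectory directory = new UserDirectory();
        directory.addUser(1, "Alice");

        // A valid, known ID returns the associated name.
        check("Alice".equals(directory.getName(1)), "known ID returns name");

        // A valid but non-existent ID returns null, as documented.
        check(directory.getName(42) == null, "unknown ID returns null");

        // An ID below 1 breaks the contract and is rejected.
        boolean threw = false;
        try {
            directory.getName(0);
        } catch (IllegalArgumentException expected) {
            threw = true;
        }
        check(threw, "ID below 1 throws IllegalArgumentException");

        System.out.println("All contract tests passed");
    }

    private static void check(boolean condition, String scenario) {
        if (!condition) {
            throw new AssertionError("Failed: " + scenario);
        }
    }
}
```

Each check corresponds to a sentence in the Javadoc, which is rather the point: the documentation and the tests are two views of the same contract.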

Writing good documentation forces you to think about your code. If you can’t describe what the method is doing succinctly then that in itself is a code smell. It forces you to consider the edge cases and to think about how people are going to use your code. When updating a method it forces you to consider your actions. If you’re changing our get method so that it will always return a default value rather than null, will that affect other people’s logic and flow? What about the other way round, when you’ve got a method that states it doesn’t return null – change that and you’re likely going to introduce NPEs in other code. Yes, the unit tests should pick that up, but let’s work it out up front, before you’ve gone hacking about with the code.

Those of you familiar with BDD may spot that what you’re doing here is just performing deliberate discovery with yourself. The conversation is between you, and you as a potential future developer. If, during that conversation, you can’t find anything to say that the interface doesn’t already tell you, then by all means leave out the Javadoc. To my mind that’s pretty much just the getters and setters on JavaBeans where you know it’ll take and return anything. Everything else can and should benefit from good documentation.

BDD inline questions

I’ve started using BDD scenarios for most things now, as I find the Given/When/Then syntax great for focusing on the problem and generating questions. What I have found, though, is that the questions often need to be written down somewhere so they can be put to the relevant person. Correlating the questions with the scenarios can be a pain, so I’ve started putting them inline in my scenario text using a ?? notation. The format is simple:

  • ?? on its own: denotes an outstanding question about the preceding text.
  • word??: denotes a question about the preceding word.
  • ?option/option?: denotes uncertainty about which option should be used.
  • ?question?: denotes a question being asked.

This syntax can actually be used to write the original scenario:

Given ??
When ??
Then ?? should ??

An example of this usage might be:

Given a user on a web?? form
When the user ?enters/selects? ?should we be thinking about how the user provides the data? their D.O.B.
Then the form should ??

A discussion, which could be via email or some means other than face to face, can then be had to resolve the questions using just the scenario files and the annotations to the scenarios. You could even go one step further and use something like Gollum to track the changes to the scenarios.

Tumbler and low level stories

I’ve run into a bit of a brick wall with Tumbler, in so far as I think I’m using it at far too low a level. I’ve got a fairly simple object at the moment, little more than a bean, which I’m using as a working example to try these new techniques out. While my first story and group of scenarios were easy to write, I started running into issues with the second group. The issues are twofold. Firstly, I’m having to learn to rethink how I group my tests to fit into stories and scenarios. While working this through I started butting into problems with long class and method names, which I can’t really shorten as there will eventually be literally hundreds of tests and I need to be able to distinguish between them.

After fiddling about with different ways of framing the stories and scenarios, I discovered that Tumbler’s annotations aren’t picked up by the JUnit plugin for Eclipse, so I can’t rely on them to make readable tests; I have to use readable class and method names.

Then I discovered that Tumbler isn’t hierarchical. Stories are listed on the index page, then you can drill down into a story and see its scenarios. That’s it. If I had 100 stories I’d have to wade through all of them on the front page. What I need is epics.

This all rather makes sense for a tool that’s going to be used at a higher level, detailing 20 or 30 user stories that constitute an application, but I want the ability to test at different levels. After all, as I understand it, BDD can be considered fractal in its nature and is as easily applied to a user’s interaction with a save dialog box as it is to the save method call on some object somewhere. Yes, the players change, and yes, the granularity and precision of the inputs and outputs change, but it’s the same fundamental thought process when developing the tests.

In order to shed some light on the issue I took a look at the Tumbler source, specifically the test cases, but they were all at the user level. Tumbler itself isn’t that complicated a program, so it may be that these user stories suffice for testing the majority of the code, but I want to know at an object level that they do what they say on the tin.

Sadly, the majority of this discovery has been performed on the train, with its connectivity issues, so performing research into alternative tools is proving hard. That said, given Tumbler isn’t massively complex, I may just put my current project to one side, fork that and get it to work at both the low and high levels. In the meantime it seems like I need to do more research.

TDD, BDD and Sonar; or how I learned to stop leaning on a crutch

I should probably start by pointing out that this post is more to help me get something straight in my head than anything else. It’s also covering a subject that I’m not sure I fully understand, so I may have completely missed the point.

One of the things that I was most interested in at Sync Conf was Behaviour Driven Development (BDD). I’ve never been great at following Test Driven Development (TDD), mainly because I couldn’t make the shift required to fully get my head round it. What I practiced, if it has a name, was Javadoc Driven Development; each method would have descriptive Javadoc that defined what happened on various inputs. I found that by doing this I built up a narrative of how a class would work and that provided me with concrete test examples.

This method of testing only really works if you write decent Javadoc. The following is next to useless, and on many levels:

/**
 * Sets the name.
 *
 * @param name the name
 */
public void setName(String name);

What happens if I pass in null? What happens if I call the method twice? Does passing a blank string cause issues? I’ve heard so many arguments that Javadoc is pointless and you should just read the code, after all, it’s just going to get out of date. I find that attitude utterly reprehensible; the behaviour of the method could rely on something that it’s calling and that could go more than one level deep. I don’t want to have to dive through reams of code to understand if what I’m calling does what I expect, nor do I want to have to guess. For me the Javadoc provides a contract for the method in a narrative form which is then implemented by the code. Change the logic of the code and you have to change the Javadoc, and I’d actually change the Javadoc first.

The mindset I try and put myself in is not “how will this method work logically”, but more “how will this method be used and how does it fit in with what the object is trying to do”. In the above example we’re setting a name. Does it make sense for a name to be null, or blank? If we can have an empty name how is that represented? null, empty string, or both? Is a blank string treated the same as an empty string? Should we trim the string? Are there other things we’re going to limit? You then document the answers to those questions:

/**
 * Sets the name to the given value, overwriting any previous value that was
 * set. Passing <code>null</code> into this method will have the effect of
 * clearing the name. Passing a blank or empty string will have the effect
 * of clearing the name by setting it to <code>null</code>.
 *
 * @param name the value to set the name to
 */
public void setName(String name);

All of a sudden you’ve got a load of things you need to test. I’d be writing tests to ensure that:

  • Given a null name, providing a value A sets the name to A.
  • Given a null name, providing a null value retains the null name.
  • Given a null name, providing a blank value retains the null name.
  • Given a value A, providing the value A retains the value A.
  • Given a value A, providing a value B sets the name to B.
  • Given a value A, providing null sets the name to null.
  • Given a value A, providing a blank string sets the name to null.
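Those scenarios can be checked directly. The NamedThing bean below is a hypothetical implementation of the documented setName contract, written purely so the scenarios above have something to run against:

```java
// Hypothetical bean implementing the setName contract documented above:
// null, blank or empty input clears the name by setting it to null.
public class NamedThing {
    private String name;

    public void setName(String name) {
        if (name == null || name.trim().isEmpty()) {
            this.name = null; // clearing behaviour from the Javadoc
        } else {
            this.name = name;
        }
    }

    public String getName() {
        return name;
    }

    public static void main(String[] args) {
        NamedThing thing = new NamedThing();

        // Given a null name, providing a value A sets the name to A.
        thing.setName("A");
        assertName("A", thing);

        // Given a value A, providing a value B sets the name to B.
        thing.setName("B");
        assertName("B", thing);

        // Given a value, providing null clears the name.
        thing.setName(null);
        assertName(null, thing);

        // Given a null name, providing a blank value retains the null name.
        thing.setName("   ");
        assertName(null, thing);

        System.out.println("All scenarios passed");
    }

    private static void assertName(String expected, NamedThing thing) {
        String actual = thing.getName();
        boolean ok = (expected == null) ? actual == null : expected.equals(actual);
        if (!ok) {
            throw new AssertionError("Expected " + expected + " but got " + actual);
        }
    }
}
```

Each comment in main is one of the bulleted scenarios, which is exactly the narrative-to-test flow described in the text.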

Those of you familiar with BDD may already be seeing something familiar here. What I have been doing is something very similar to BDD, albeit in an informal and haphazard way and at a very low level. I was defining functionality as a narrative in the Javadoc with my user in mind (another developer), and then using that to discover the scenarios that I should test. I was actually so used to this method of working that when I used Tumbler to test one of the classes I was working on, it already felt natural and the tests practically wrote themselves. Interestingly enough, the conventions used by BDD and Tumbler freed me from one of the constraints I was facing with testing: that of my test classes getting too big. I was still structuring my test classes as the mirror of my code, so for any object Foo there was a FooTest. By thinking in stories for the tests too, and grouping scenarios that belong to a story, I could break out of this habit and have as many classes as I needed testing the behaviour of Foo.

Happy with my new test scenario, and the output that Tumbler had produced, I proceeded to run Sonar against it. Sonar did not like the result. Most of what it complained about no longer matters. I can turn off the requirement for Javadoc on all methods because my test methods contain the scenario in the code and don’t require it elsewhere. The need for a descriptive string in my assertions can also be turned off as the way the tests are displayed by Tumbler provide a much more natural way of reading the test results. One critical marker stood out though: 0% code coverage.

It took me all of 30 seconds to work out what was going on. Tumbler uses its own JUnit test runner, produces its own reports and isn’t spitting out the files that Sonar is looking for and so there’s nothing for it to report on. This may or may not be something I can fix, although Google is yielding nothing on the subject. This got me to thinking: Do I need to know my coverage? Surely if I’m defining the behaviour of the class, then writing the code for that then I’ve pretty much got 100% coverage since I shouldn’t be writing code that produces behaviour that hasn’t been defined. This is where I got stuck.

Liz Keogh, who gave the Sync Conf BDD talk, mentioned that BDD didn’t work so well for simple or well understood scenarios. Should I be using TDD here? That way I’d get my test coverage back, but I’d lose my new way of working. Finally, after much Googling, I came across this blog and came to the realisation that I’m coming at this all wrong: there is no spoon. What I think Liz meant was that BDD at the level of the business helping to write the scenarios isn’t useful for well understood scenarios, because they’re well understood and anyone can write them; not that we just give up on BDD and go do something else… or maybe she did, but if I’m understanding Hadi correctly then using TDD instead of BDD is just a semantic shift and I could just as easily use BDD in TDD’s place.

We all know that 100% test coverage means nothing, and that chasing it can end up an exercise in pointlessness. Then there’s the farcical scenario where you have 100% test coverage, all tests running green, and software that doesn’t work. So why the panic over not knowing my test coverage? I think it boils down to the fact that the Sonar reports let me browse the code, see what’s not tested and then think up tests to cover that code. In other words, chasing 100% (or 90%, or 80%) and writing tests for testing’s sake. If I’m doing BDD properly then I’ll be thinking up the narratives, determining the scenarios and then writing the code to meet those scenarios. If my code coverage isn’t above 80% (a level I consider to be generally acceptable) then I’m doing something wrong, as there are code paths not covered by scenarios, which is, in theory, pointless code.

So how do I solve my Sonar problem? Simple: unhook the alerts on test coverage, remove the test coverage widget, and keep my eye out for a plugin for Tumbler reports. In the meantime I can just use the reports generated by Tumbler to keep an eye on my tests and make sure they’re running clean, and read up on getting my Maven builds to fail when Tumbler has a failing test.