Monthly Archives: November 2013

I am not a developer

“I am not a developer anymore” – Me, last night

This (probably not so) startling statement was made by me at yesterday's Norfolk Developers meeting as part of a discussion about the Developer to QA ratio in my team. Actually, the statement is probably more accurate if you change it to “I am not a professional developer anymore”. The reality is I’ve not done any real development at work for about a month now. I’m a manager. I don’t code, I… well, that’s the thing. I’m not so sure what I do any more.

“Developers make the worst managers” – Me, c.2003

History has taught me that developers are not good at management. Don’t ask me why this is1, it’s just something I’ve observed during my career. It’s by no means a hard and fast rule, but there is a strong correlation there.

This doesn’t bode well for me. If you’ll excuse the WoW2 analogy: I feel like I’ve gone from being a level 60 Developer with Epic gear to being a level 1 Manager with noobie kit. Oh, everyone says I’m doing a wonderful job, but this is still the honeymoon period and I’ve yet to cock anything up.

“Yesterday I… erm…” – Me, 9:00am most mornings during the standup meeting

Part (most?) of my problem revolves round the fact that I no longer produce anything. As a developer I can point to a list of git commits and say “I wrote this code”; I can point to a feature and say “I implemented this”; I can point to a bug report and say “I fixed this”. My job is now meta. I liaise, guide, advise and facilitate while others do the actual work. Hopefully while doing this I add value.

For a long time now I’ve kept a private work journal detailing what I’ve done during the day. It’s proven useful on more than one occasion, and now that my job is much less tangible it helps me keep track of what it is I actually do. There is a blissful irony here though: I’m now so busy doing… whatever it is I do during my day, that I don’t always have time to note it down at the end of it.

“Have you tried pairing a QA and a developer together?” – Chris Oldwood, last night, 21:00

One thing I learned as a developer is that peer review is A Good Thing™, and that you can learn new things from the most unlikely places. It’s part of the reason I am so candid about our development processes, warts and all, at things like nor(DEV): and SyncNorwich; the feedback you get is invaluable.

The discussion that ensued after Cat Landin's talk last night on why developers are so bad at testing gave some really valuable insight into fixing some of the problems we have in our team. Sometimes all it takes is someone unencumbered by the politics, culture and mindset of an organisation to point out simple but effective fixes.

After last night I have a number of “bug fixes” for our processes. Let’s hope they’re as easy to refactor as code.


1Trust me, if I knew I’d be cashing in on “How to go from Developer to Manager” courses and books.

2World of Warcraft – and also from 5+ years ago, I’ve been clean a while now.

Chasing 100% Coverage

Unit tests, as we all know, are A Good Thing™. It stands to reason then that 100% unit test coverage must also be A Good Thing™, and therefore something to be striven for at all costs. I’d argue this is wrong, though.

100% unit1 test coverage can be exceptionally difficult to achieve in real-world applications without jumping through a whole load of hoops that end up being counterproductive. The edges of your system often talk to other systems which can be difficult to isolate or mock out. You can be left chasing the last few percent and making counterintuitive changes to get there. At this point I’d say it’s better to leave the clean, readable, untested code in and just accept that 100% coverage isn’t always possible.

This leads to another problem though. Once you’re not hitting 100% coverage you need to be sure that the code which isn’t covered really is code you can’t cover. As your code base gets bigger, the effect a single missed line has on your coverage figure gets smaller: one untested line in a 10,000-line code base still reads as 99.99% coverage, so a genuinely missed line is easy to overlook.

PHPUnit takes a pragmatic approach to this issue: it allows you to mark blocks of code as untestable. It then simply ignores those blocks in its coverage calculations, allowing you to get 100% coverage of the code that can be tested.
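By way of illustration, here's roughly how that looks. The annotations are PHPUnit's own (@codeCoverageIgnore and the Start/End pair); the surrounding class is a made-up sketch:

```php
<?php
// ReportWriter is an invented example; only the annotations are PHPUnit's.
class ReportWriter
{
    public function write(string $path, string $contents): void
    {
        $written = file_put_contents($path, $contents);

        if ($written === false) {
            // @codeCoverageIgnoreStart
            // Only reachable when the filesystem itself misbehaves, so it is
            // excluded from the coverage figures rather than contorted into a test.
            throw new RuntimeException("Could not write report to $path");
            // @codeCoverageIgnoreEnd
        }
    }

    /**
     * Whole methods can be excluded too.
     *
     * @codeCoverageIgnore
     */
    public function debugDump(): void
    {
        var_dump($this);
    }
}
```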

Quite a few people I’ve told about this have declared it to be ‘cheating’. However, let’s look at a very real issue I have in one of my bits of Java code. I have code that uses the list of locales that can be retrieved by Java. It uses the US locale as the default, since it’s reasonable to assume the US locale will be set up on a system. While highly improbable, it’s not impossible for a system to lack the US locale, and the code handles this gracefully. Unit testing this code path is impossible as it involves changes to the environment. I could conceivably handle it in functional tests, but it’s not easy. I could remove the check and just let the code fall over in a crumpled heap in this case, but then if it ever does happen someone is going to have a nasty stack trace to deal with rather than a clear and concise error message.
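To give a feel for the shape of that code, here's a rough sketch (in PHP rather than the original Java, with an invented SystemLocales helper standing in for Java's Locale.getAvailableLocales()):

```php
<?php
// Hypothetical sketch of the problem; the real code is Java and the names
// here are invented. SystemLocales::available() stands in for Java's
// Locale::getAvailableLocales().
function defaultLocale(): string
{
    $available = SystemLocales::available();

    if (in_array('en_US', $available, true)) {
        return 'en_US';
    }

    // This branch only runs on a system with no US locale installed. A unit
    // test can't get here without changing the environment itself, but the
    // clear error message is worth keeping.
    throw new RuntimeException('The en_US locale is not installed on this system.');
}
```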

If I could mark that single block of code as untestable I would be able to see at a glance whether the rest of my code was 100% covered. As it is I’ve got ninety-nine point something percent coverage and I need to drill into the coverage results to make sure the missing coverage comes from that one class. A royal pain in the behind.

I am willing to concede that the ability to mark code as untestable can be abused, but then that’s what code reviews are for. If someone knows how to unit test a block of code that’s been marked as untestable, they can fail the review and give advice to the developer who submitted it.


1 People seem to get unit and functional tests muddled up. A unit test should be able to run in isolation, with no dependency on the environment or other systems. Functional tests, on the other hand, can make assumptions about the environment and can require that other systems are running and correctly configured. The classic example is a database: unit tests should mock out the database connection and use canned data, while functional tests can connect to a real database pre-populated with test data. Typically functional tests also exercise much larger blocks of code than individual unit tests do.
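As a rough sketch of that distinction (the Greeter and UserRepository names are invented), a unit test replaces the database entirely with a test double:

```php
<?php
use PHPUnit\Framework\TestCase;

// Invented names for illustration: UserRepository wraps the database and
// Greeter is the unit under test.
interface UserRepository
{
    public function findName(int $userId): string;
}

class Greeter
{
    public function __construct(private UserRepository $users) {}

    public function greet(int $userId): string
    {
        return 'Hello, ' . $this->users->findName($userId) . '!';
    }
}

class GreeterTest extends TestCase
{
    public function testGreetsUserByName(): void
    {
        // Unit test: no database involved. The repository is replaced with a
        // test double that returns canned data.
        $repository = $this->createMock(UserRepository::class);
        $repository->method('findName')->willReturn('Ada');

        $this->assertSame('Hello, Ada!', (new Greeter($repository))->greet(42));
    }
}
```

A functional test of the same behaviour would instead construct a real UserRepository pointing at a test database pre-populated with a user called Ada, and would typically exercise a much larger slice of the application in the process.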