Tag Archives: research

An Idea For Someone

I’ll be honest here: I don’t really curate my LinkedIn all that much, and I tend to accept any request to link. I do perform a little bit of filtering, but the barriers to entry are low. Very low.

One thing I did notice was that, after my title change, I got a lot more requests from people I didn’t know (mainly recruitment consultants, so no surprises there). This got me thinking:

What if I set up a fake profile with a suitably impressive fake title, like CEO, and just punted it out to LinkedIn? How long until the requests started pouring in?

OK, possibly too easy. Let’s go one step further. Let’s create a fake company with 26 profiles on LinkedIn. Each profile will have a nice, high-ranking title like CIO for the APAC region, or CFO for EMEA. Each profile will also have a first name starting with a different letter of the alphabet. Now, here comes the fun. Each fake profile will only accept requests from people whose names contain the starting letter of the profile’s name (or, if this proves to be wildly successful, only accept requests from people whose names start with the same letter as the fake profile’s). After a year, look at the network the fake company has built.

Sadly I’m far too lazy to do this (I suspect a fake company website may also be in order), but hey, this is the internet; there are people out there with WAY too much time on their hands. Take this idea, go, run with it. Report back in a year 😀

Replacing our Git branching strategy

The branching strategy we use at work is one that has evolved from us tentatively learning how Git works, reacting to our mistakes and avoiding the problems that have arisen. I have, on more than one occasion, had to rebuild the history due to errant merges going into master. Mistakes have also resulted in contamination of release branches, or in failure to update all required branches, including master. While it works for us, the fact that it can occasionally bite us on the bum, and the fact that I’m having difficulty documenting why we do what we do, both point to the conclusion that the strategy is flawed. So I’m binning it, although not until I have a working alternative that we’re happy with.

Key to this new strategy will be how changes are reapplied back to the main and release branches. Rather than simply taking people’s word for why you should or shouldn’t be doing merges or rebases in Git, I’ve gone right back to basics and made sure I fully understand what is going on when you merge or rebase and what the implications are – to the point of setting up parallel Git repositories and testing the same operations with different strategies on each.
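That kind of experiment is easy to reproduce. The sketch below (paths and commit messages are hypothetical, not from our actual repositories) builds the same two-branch history in two throwaway repositories, then merges the feature branch in one and rebases it in the other, so the resulting histories can be compared side by side:

```shell
#!/bin/sh
# Build identical histories in two throwaway repos, then apply a merge
# in one and a rebase in the other to compare the outcomes.
set -e
rm -rf /tmp/git-experiment
mkdir -p /tmp/git-experiment
cd /tmp/git-experiment

for repo in merge-repo rebase-repo; do
  git -c init.defaultBranch=master init -q "$repo"
  (
    cd "$repo"
    git config user.email dev@example.com
    git config user.name Dev
    git commit -q --allow-empty -m "base"
    git checkout -q -b feature
    git commit -q --allow-empty -m "feature work"
    git checkout -q master
    git commit -q --allow-empty -m "master moves on"
  )
done

# A merge keeps both lines of history and adds a merge commit...
(cd merge-repo && git merge -q --no-edit feature)

# ...while a rebase replays the feature commits on top of master,
# giving a linear history with rewritten commit hashes.
(cd rebase-repo && git checkout -q feature && git rebase -q master)
```

Running `git log --graph --oneline` in each repository afterwards makes the difference obvious: the merge repository has a fork-and-join shape with four commits, the rebase repository a straight line of three.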

Secondly, I need to look at putting branches back on origin. Being distributed can make Git a pain in the backside for some things, and sometimes you really do just need one place to go and look for data. A prime example is Fisheye/Crucible, which we use for viewing our Git repositories and performing code reviews. Since our JIRA workflow looks to Fisheye/Crucible for information about code changes and code reviews, we push all branches to origin. Would a move to Stash remove this need?

Thirdly, there’s our habit of keeping all branches. This leads to a lot of clutter and may or may not be related to the first two points; I’ll need to do more investigation on that front. However, I suspect I’ll be able to greatly reduce the number of branches we have.
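One way that audit could start is with Git’s own merged/unmerged bookkeeping. A minimal sketch in a throwaway repository (the branch names are made up for illustration): branches already merged into master are safe candidates for deletion, and `git branch -d` will refuse to delete unmerged ones unless forced with `-D`.

```shell
#!/bin/sh
# Sketch: auditing branch clutter. "merged-work" points at a commit
# already reachable from master; "unmerged-work" carries extra commits.
set -e
rm -rf /tmp/clutter
git -c init.defaultBranch=master init -q /tmp/clutter
cd /tmp/clutter
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "base"
git branch merged-work                  # created at base: already merged
git checkout -q -b unmerged-work
git commit -q --allow-empty -m "wip"
git checkout -q master

git branch --merged master              # lists merged-work (and master)
git branch --no-merged master           # lists unmerged-work
git branch -q -d merged-work            # succeeds: it was fully merged
```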

What I suspect we’ll end up with is a strategy where most branches are taken from master, with master being rebased back into the branch before the branch is merged into master and then deleted. Release branches will also be taken from master as needed. Fixes to release branches will be branched from the release branch, the release branch rebased back in when the work is done, and the fix then merged into the release branch. The release branch will then be merged back into master. At some point the release will be tagged and its branch deleted. Pull requests using Stash will hopefully obviate the need to push feature branches to origin. How well that plan survives contact with reality I don’t know.
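As Git commands, the feature-branch half of that plan might look something like the following sketch (the repository path and the branch name `issue-123` are hypothetical). Rebasing master back into the branch first means the final merge is a fast-forward, keeping master’s history linear:

```shell
#!/bin/sh
# Sketch of the proposed flow: branch from master, rebase master back
# into the branch when work is done, merge as a fast-forward, delete.
set -e
rm -rf /tmp/branch-flow
git -c init.defaultBranch=master init -q /tmp/branch-flow
cd /tmp/branch-flow
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "initial"

git checkout -q -b issue-123 master       # branch taken from master
git commit -q --allow-empty -m "work on issue-123"

git checkout -q master                    # meanwhile, master moves on
git commit -q --allow-empty -m "other work lands on master"

git rebase -q master issue-123            # rebase master back into the branch
git checkout -q master
git merge -q --ff-only issue-123          # merge is now a fast-forward
git branch -q -d issue-123                # branch deleted once merged
```

The `--ff-only` flag is the safety net here: it makes the merge fail outright if someone forgot the rebase step, rather than silently creating a merge commit.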

Revisiting Git

I first discovered Vincent Driessen’s branching model when I was investigating Git a couple of years ago. At the time I was desperate to get away from the merge issues that we were having with Subversion and was looking into both Git, and different branching models using it. Given the steep learning curve with Git we decided not to complicate things further and to stick with our old branching model; albeit with a slight change in naming convention borrowed from Vincent.

Now that I’m a little more comfortable with Git, it’s time to revisit our branching strategy, and our use of Git overall. I’ve looked at Vincent’s branching model again, and at git-flow in great depth, and have concluded it’s still not for us. However, the idea of codifying the workflow, and of providing Git extensions to aid using the workflow, appealed to me immensely.

Currently we’re doing little more than branching off master for issues, with each issue (be it a new feature, or bug fix) living in its own branch. Issues are rebased into master when they’re done. Releases are also cut from master and any patches (either to the release version in QA, or to the release version in live) are cut from the release branch, rebased into that and then rebased into master.

In order to simplify our JIRA and Fisheye/Crucible setup we also push issue branches to origin making them visible from a central point, as well as providing a backup. These branches are considered private and not used by others.

Since we have tight control over all the branches, a rebase-only strategy works fine for our team, although we have had some problems caused by accidental merges. The next step is to improve the workflow and the merge/rebase strategies to survive all situations, codify these steps and then script them – something I’ve already started doing.
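Scripting those steps as Git extensions is straightforward: any executable named `git-<name>` on the `PATH` becomes a `git <name>` subcommand. As a hedged illustration (the command name `git done` and its exact steps are my invention, not our finished tooling), a "finish this branch" step could be packaged like this:

```shell
#!/bin/sh
# Write a hypothetical "git done" extension to /tmp: it rebases master
# into the current branch, merges it back as a fast-forward, and
# deletes it. Putting /tmp on PATH makes "git done" available.
set -e
cat > /tmp/git-done <<'EOF'
#!/bin/sh
set -e
branch=$(git rev-parse --abbrev-ref HEAD)   # the branch being finished
git rebase master "$branch"                 # replay it onto current master
git checkout master
git merge --ff-only "$branch"               # fast-forward only: stays linear
git branch -d "$branch"                     # tidy up the finished branch
EOF
chmod +x /tmp/git-done
```

From a feature branch, `PATH=/tmp:$PATH git done` then performs the whole finishing sequence in one command.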

I’m also looking at this as a potential opportunity to move away from Fisheye and Crucible to Stash, potentially saving some money whilst keeping the tight JIRA integration.

Tumbler and low level stories

I’ve run into a bit of a brick wall with Tumbler, insofar as I think I’m using it at far too low a level. I’ve got a fairly simple object at the moment, little more than a bean, which I’m using as a working example to try these new techniques out. While my first story and group of scenarios were easy to write, I started running into issues with the second group. The issues are twofold. Firstly, I’m having to learn to rethink how I group my tests to fit into stories and scenarios. While working this through, I started butting into problems with long class and method names, which I can’t really shorten as there will eventually be literally hundreds of tests and I need to be able to distinguish between them.

After fiddling about with different ways of framing the stories and scenarios, I discovered that Tumbler’s annotations aren’t picked up by the JUnit plugin for Eclipse, so I can’t rely on them to make readable tests; I have to use readable class and method names.

Then I discovered that Tumbler isn’t hierarchical. Stories are listed on the index page, then you can drill down into a story and see its scenarios. That’s it. If I had 100 stories I’d have to wade through all of them on the front page. What I need is epics.

This all rather makes sense for a tool that’s going to be used at a higher level, detailing the 20 or 30 user stories that constitute an application, but I want the ability to test at different levels. After all, as I understand it, BDD can be considered fractal in nature: it is as easily applied to a user’s interaction with a save dialog box as it is to the save method call on some object somewhere. Yes, the players change, and yes, the granularity and precision of the inputs and outputs change, but it’s the same fundamental thought process when developing the tests.

In order to shed some light on the issue I took a look at the Tumbler source, specifically the test cases, but they were all at the user level. Tumbler itself isn’t that complicated a program, so it may be that these user stories suffice for testing the majority of the code, but I want to know at an object level that they do what they say on the tin.

Sadly, the majority of this discovery has been performed on the train, with its connectivity issues, so researching alternative tools is proving hard. That said, given that Tumbler isn’t massively complex, I may just put my current project to one side, fork Tumbler, and get it to work at both the low and high levels. In the meantime, it seems I need to do more research.

An Evening of Game A.I.

On Thursday I’m off to An Evening of Game AI. Now, what I know about writing modern games could be written on the back of a postage stamp. What I am is a consumer of games. And a fussy one at that. So why an evening of game AI? Well, first off, it’s a subject that I think I’ll find interesting. Even if I never put the knowledge gained into practice, I’m liable to walk away thinking “that was cool”. I’m a geek; shoot me. More important is the concept of gamification [hideous word] and how I might apply game AI techniques in a business context. The ideas I have are very embryonic at the moment and it may be that they’re pie in the sky, but hopefully Thursday will prove useful and give me an idea of which directions I should be looking in.