
Continuous Integration

I’m having lunch with Paul Grenyer today to discuss Continuous Integration, or CI. In a nutshell, CI is an automated process that performs a build on a regular basis, be that every hour, overnight, or on every commit to a major branch. Ideally your build will also run your unit tests and any other tests or analysis you use, meaning that at any given moment you can be confident that your build is sound.

CI has been a part of my coding life for so long that I can’t even remember when I was first introduced to it. What I do remember is that, initially at least, it was set up and handled by others. I simply had to check in my code and hope I didn’t get the dreaded “Build Broken” email from Cruise Control.

CI was so ingrained in me that it was a bit of a shock when I moved to a company that didn’t use it. We couldn’t even use the joke phrase “It compiles, let’s ship it!” because we didn’t know if it actually did compile from commit to commit. A quick Google, much swearing and half a day later, we had Cruise up and running. Now, not only was there the “Build Broken” email to fear, there was also editing the Cruise config file after each release to point to the new release branches. Cruise is (or at least was) not the easiest thing in the world to configure.
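
To give a flavour of what that involved, here’s a from-memory sketch of a CruiseControl config (treat the element and attribute names as illustrative rather than exact); note the release branch path baked into the project definition, which is what needed editing after every release:

    <!-- From-memory sketch of a CruiseControl config.xml; element and
         attribute names are illustrative. The hard-coded branch path is
         what needed editing after every release. -->
    <cruisecontrol>
      <project name="myapp-release-1.2">
        <modificationset quietperiod="60">
          <!-- Poll the release branch for new commits -->
          <svn localWorkingCopy="checkout/branches/release-1.2"/>
        </modificationset>
        <schedule interval="300">
          <!-- Kick off the ant build when a change is found -->
          <ant buildfile="checkout/branches/release-1.2/build.xml" target="deploy"/>
        </schedule>
        <publishers>
          <!-- The dreaded "Build Broken" email -->
          <htmlemail mailhost="smtp.example.com" returnaddress="cruise@example.com">
            <failure address="dev-team@example.com"/>
          </htmlemail>
        </publishers>
      </project>
    </cruisecontrol>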

My tenure as a Cruise admin was mercifully short-lived as I discovered Hudson, which is much, much easier to configure. My fun with release branches continued until we moved to Git. By this time Hudson had forked and we had gone the Jenkins route. Jenkins now runs CI builds, overnight builds and release builds, and has been pressed into service as a handy way to kick off a few scripts, either periodically or on request.

Our Builds

Much as I’d love to use Maven, our legacy code makes that difficult. Instead we have a build project that handles all our builds using a set of quite complex ant scripts (there’s a simplified sketch of the sort of thing after the list). Locally the developers have the option of:

  • clean: Delete all build artefacts. Not sure this is ever used, but it’s there, just in case.
  • compile: For our legacy code this does a local build and puts the build output in the directories required to run everything locally. Thanks to the magic of our system, running things locally is different to running it in any other environment. For the newer code base this just compiles the code locally, allowing you to run it. Given Eclipse does the same anyway, it’s a target that is rarely used in the newer projects.
  • deploy: Perform a full build of the project, including Checkstyle checks, JUnit tests, Cobertura code coverage and packaging the code into its final zip, jar, war or ear (depending on the project). If this completes for all the projects and dependents you have altered you can be reasonably sure that Jenkins will not fail your build. In the rare case that it does, you are exempt from shame and punishment as it’s invariably something you couldn’t have known about.
  • sonar: Perform a deploy, then run Sonar over the code, applying the enhanced set of checks configured in Sonar. Keeping Sonar green keeps me happy, but unlike build failures, chasing a clean Sonar result should not be done at the expense of actually getting work done. Sometimes good enough is fine.
  • verify: The newer code base is split over a number of projects. Verify runs deploy for each project, checking that you’ve not broken anything in another project that may depend on your code.
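
To give a flavour of how those targets hang together, here’s a heavily simplified sketch of the sort of ant file involved; every name and path is invented for illustration, and the real scripts are far hairier:

    <!-- Heavily simplified sketch; names and paths are invented for
         illustration and the real scripts are far more complex. -->
    <project name="myproject" default="deploy" basedir=".">

      <target name="clean" description="Delete all build artefacts">
        <delete dir="build"/>
      </target>

      <target name="compile" description="Local build for running locally">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
      </target>

      <target name="deploy" depends="compile"
              description="Full build: checks, tests, coverage and packaging">
        <!-- In the real scripts Checkstyle, JUnit and Cobertura run here,
             failing the build if any of them complain -->
        <ant antfile="checks.xml" target="checkstyle"/>
        <ant antfile="checks.xml" target="junit-with-cobertura"/>
        <jar destfile="dist/myproject.jar" basedir="build/classes"/>
      </target>

      <target name="sonar" depends="deploy"
              description="Deploy plus the enhanced Sonar checks">
        <ant antfile="checks.xml" target="run-sonar"/>
      </target>

      <target name="verify" description="Run deploy for each project">
        <subant target="deploy">
          <fileset dir=".." includes="*/build.xml"/>
        </subant>
      </target>

    </project>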

Sat on top of this is the set of CI build targets run by Jenkins:

  • ci.build: Run on master and the release branches after each commit; currently Jenkins polls every 60 seconds, though I’d like to change this to a commit hook one day (there’s a sketch of the trigger after this list). It calls deploy on each project. Unlike verify, which is a single ant build that calls deploy on each project, Jenkins runs a new ant build for each project. This has caused issues where verify builds clean and Jenkins fails, and vice versa.

  • push.build: This is a manually run parameterised build that takes the given version number and creates a production release with a unique build number. This calls deploy but overrides a number of parameters so the version details are configured correctly. It also pushes the resultant zip, jar, ear or war into a staging area.

  • promote.build: Another manually run parameterised build that takes the build number generated by push and promotes it to the specified environment (development, one of the QA environments or our pre-production environment). This simply copies the staged files from the previous push, guaranteeing that the same release is tested in each environment.

  • release.build: Identical to promote.build except there is a checkbox that must be ticked, agreeing to the warning that this is going to production. The destination becomes the production staging area.

  • overnight.build: Run overnight by Jenkins, this calls sonar and provides a nightly snapshot of the overall quality of our builds.
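
For the curious, the 60-second poll mentioned under ci.build lives in each job’s configuration; a fragment of a Jenkins job’s config.xml, trimmed to just the trigger, looks something like this (the trigger class is stock Jenkins; the cron spec means “every minute”):

    <!-- Fragment of a Jenkins job's config.xml, trimmed to the trigger.
         The Jenkins cron spec "* * * * *" polls the SCM every minute. -->
    <triggers>
      <hudson.triggers.SCMTrigger>
        <spec>* * * * *</spec>
      </hudson.triggers.SCMTrigger>
    </triggers>

The commit hook version would ditch the poll and have a Git post-receive hook ping the Git plugin’s notifyCommit URL instead, so builds start seconds after a push rather than up to a minute later.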

New projects just need a simple ant file pointing at our build project with a few variables set to gain all of these targets. It’s then just a question of cloning the Jenkins jobs from another project, making them specific to the new project, and you’re away. Maybe not the most elegant of systems, but it’s reliable and adaptable.
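
For illustration, that per-project ant file amounts to little more than this (the property and file names are invented for the example; the real thing is whatever the shared build project exposes):

    <!-- The entire build file for a new project; everything real lives in
         the shared build project. Names are invented for the example. -->
    <project name="new-project" default="deploy" basedir=".">
      <property name="project.name" value="new-project"/>
      <property name="build.project.dir" location="../build"/>
      <!-- Pulls in clean, compile, deploy, sonar and verify -->
      <import file="${build.project.dir}/common-targets.xml"/>
    </project>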