Tuesday, April 30, 2013

Version Numbers in Code Considered Harmful

Time for me to eat crow. Some time ago, I wrote a post titled "Version Numbers in Branch Names Considered Harmful". I'm changing my mind about this, and it's largely due to the way revision control systems have changed over time.

Specifically, in git and mercurial, branches are no longer heavyweight items. In fact, they hardly exist at all until one actually makes a change, so creating a branch in and of itself is not a big deal, whereas in Perforce and Subversion creating branches is a huge deal, and there is a strong motivation to avoid doing it.

Instead, encoding version information in the branch names turns out to be a pretty good technique, since it allows me to remove any hard-coded version strings from the code itself. This has some wonderful properties in our new "agile" world of fast-paced incremental releases.

In my previous post about compiling release notes, I showed how build manifests can be used to create a list of revisions involved in a build. Now the big trick is that in git and in mercurial, the mere act of creating a branch does not touch anything in the revision graph.

This means that if we use the git revision hashes as our source of truth, we can detect whether something really changed, or whether we just rebuilt the same thing, but with a different branch name, and hence a different version number.

A crucial element to make this work is to ensure the build system injects the current version number at build time, usually by extracting it from the branch name.
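A minimal sketch of that injection, assuming branches are named like release-1.2 (the naming scheme and the script are my own illustration, not any particular build system's convention):

import subprocess

# Hypothetical sketch: derive the version string from the current branch name,
# assuming branches are named "release-<version>".
branch = subprocess.check_output(
    ["git", "rev-parse", "--abbrev-ref", "HEAD"], text=True).strip()
version = branch.removeprefix("release-")   # e.g. "1.2"
# The build would then stamp `version` into the artifact it produces,
# rather than reading a hard-coded string from the source.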

Doing this allows a development team to keep branching all repositories as new releases come out, but the branches only matter if something changes. All you need to communicate is the current release version number, and developers know that for any change in any repository, they should use the branch associated with it.

When a release occurs, all you need to do is compare the hashes of the previous release with the current one, and only release the pieces that changed. No thinking required, no risk of rebuilds of essentially unchanged items, it just works.
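As a minimal sketch, assuming each release is summarized as a map from component name to the revision hash it was built from (the hashes below are made up for illustration):

# Hypothetical sketch: decide what to ship by comparing revision hashes.
previous = {"Foo.app": "4f1c2ab", "Bar.app": "9d0e871"}
current  = {"Foo.app": "4f1c2ab", "Bar.app": "5b77c03"}

to_release = [name for name, rev in current.items()
              if previous.get(name) != rev]
print(to_release)   # ['Bar.app'] -- Foo.app was merely rebuilt, not changed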

Now, how the dependencies of those items should be managed at build time is a much more interesting question, with lots of potential for bikeshedding.

Saturday, April 20, 2013

Why is it so hard to collect release notes (part II)

I've written before about this subject. It turns out that in practice it is quite difficult to precisely list all changes made to a large-scale piece of software, even with all the trappings of modern revision control systems available.

To demonstrate the challenges involved with modern "agile" processes, let's take a relatively simple scenario: two applications or services, each depending on a piece of shared code.

In the good old days, this was somewhat of a no-brainer. All pieces lived in the same source tree and were built together.

A single version control repository was used, and a release consisted of all components, built once.

A new release consisted of rebuilding all pieces and delivering them as a unit. So finding the difference between two releases was a pure version control operation, and no additional thinking was required.


The challenge in the bad old days was that every single one of your customers probably had a different version of your system, demanding fixes for their bugs while refusing to upgrade to your latest release. Therefore you had to manage lots of patch branches.

Still, everything was within the confines of a single version control repository, and figuring out the deltas between two releases was essentially running:
 git log <old>..<new>
In the software as a service model, you don't have the patching problem. Instead, you have the challenge of getting your fixes out to production as fast and as safely as possible.

The mantra here is: if it ain't broke, don't fix it. As we've seen before, rebuilding and re-releasing even an unchanged component carries risks, and forcing a working component to be rebuilt because of unrelated changes will at the very least cause delay.

So, to support the new world, we split our single version control repository into separate repositories, one for each component. The components get built separately, and can be released separately.

In this example, Foo.app got updated. This update unfortunately also required a fix to the shared code library. Arguably, Bar.app should be rebuilt to see if the fix broke it, but we are in a hurry (when are we ever not in a hurry?), and since Bar.app is functioning fine as is, we don't rebuild it. We just leave it alone.

As we deploy this to production, we realize we now have two different versions of common there.

That in itself is usually not a problem if we build things right, for example by using "assemblies", using static linking, or just paying attention to the load paths.

I sometimes joke that the first thing everyone does in a C++ project is to override new. The first thing everyone does in a Java project is to write their own class loader. This scenario explains why.

But with this new process, answering the question of "What's new in production?" is no longer a simple version control operation. For one, there isn't a single repository anymore - and then the answer depends on the service or application you're examining.

In this case, the answer would be:
Bar.app: unchanged
Foo.app: git log v1.0..v2.0 # in Foo.app's repo
         git log v1.0..v2.0 # in common's repo
In order to divine this somehow, we need to register the exact revisions used for every piece of the build. I personally like build manifests embedded someplace in the deliverable items. These build manifests would include all the dependency information, and would look somewhat like this:
{ "Name": "Foo.app",
  "Rev":  "v2.0",
  "Includes": [ { "Name": "common",
                  "Rev":  "v2.0" } ] }
The same idea can be used to describe the state of a complete release. We just aggregate all the build manifests into a larger one:
{ "Name": "January Release",
  "Includes: [{ "Name": "Foo.app",
                "Rev":  "v2.0",
                "Includes": [{ "Name": "common",
                               "Rev":  "v2.0" }]}
,
              { "Name": "Bar.app",
                "Rev":  "v1.0",
                "Includes": [{ "Name": "common",
                               "Rev":  "v1.0" }]}]}
Now the development team around Bar.app wasn't idle during all this time, and they also came up with some changes. They too needed to update the shared code in common.

Of course, they also were in a hurry, and even though they saw that the Foo.app folks were busy tweaking the shared code, they decided that a merge was too risky for their schedule, and instead branched the common code for their own use.

When they got it all working, the manifest of the February release looked like this:
{ "Name": "February Release",
  "Includes: [{ "Name": "Foo.app",
                "Rev":  "v2.0",
                "Includes": [{ "Name": "common",
                               "Rev":  "v2.0" }]}
,
              { "Name": "Bar.app",
                "Rev":  "v1.1",
                "Includes": [{ "Name": "common",
                               "Rev":  "v1.1" }]}]}
It should be easy to see how a recursive traversal of both build manifests will yield the answer to the question "What changed between January and February?":
Foo.app: unchanged
Bar.app: git log v1.0..v1.1 # in Bar.app's repo
         git log v1.0..v1.1 # in common's repo
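Here is a minimal sketch of such a traversal, assuming january and february hold the manifests above, parsed with json.loads (the function and variable names are mine, for illustration):

# Hypothetical sketch: recursively diff two build manifests and emit
# the git log commands describing what changed.
def diff(old, new):
    if old is None:
        print(f"{new['Name']}: brand new, list all changes")
    elif old["Rev"] == new["Rev"]:
        print(f"{new['Name']}: unchanged")
    else:
        print(f"{new['Name']}: git log {old['Rev']}..{new['Rev']}  # in {new['Name']}'s repo")
    # Recurse into the dependencies, matching them up by name.
    deps = {d["Name"]: d for d in (old or {}).get("Includes", [])}
    for dep in new.get("Includes", []):
        diff(deps.get(dep["Name"]), dep)

old_items = {i["Name"]: i for i in january["Includes"]}
for item in february["Includes"]:
    diff(old_items.get(item["Name"]), item)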
So far, this wasn't so difficult. Where things get interesting is when a brand new service comes into play.

Here, the folks who developed New.app decided that they will be good citizens and merge the shared code in common. "Might as well", they thought, "someone's gotta deal with the tech debt".

Of course, the folks maintaining Foo.app and Bar.app would have none of it: "if it ain't broke, don't fix it", they said, and so the March release looked like this:
{ "Name": "March Release",
  "Includes: [{ "Name": "Foo.app",
                "Rev":  "v2.0",
                "Includes": [{ "Name": "common",
                               "Rev":  "v2.0" }]}
,
              { "Name": "Bar.app",
                "Rev":  "v1.1",
                "Includes": [{ "Name": "common",
                               "Rev":  "v1.1" }]},
 
             { "Name": "New.app",
                "Rev":  "v1.0",
                "Includes": [{ "Name": "common",
                               "Rev":  "v2.1" }]}
]}
So, what changed between February and March?
Foo.app: unchanged
Bar.app: unchanged
New.app: ????
The first part of the answer is easy. Since New.app's repository is brand new, it only makes sense to include all changes.
Foo.app: unchanged
Bar.app: unchanged
New.app: git log v1.0 # in New.app's repository
         and then ????
One can make a reasonable argument that from the point of view of New.app, all the changes in common are also new, so they should be listed. In practice, though, this could be a huge list, and it wouldn't really be that useful, as most changes would be completely unrelated to New.app, or to anything else within the new release. We need something better.

I think the best answer is: "all changes between the oldest change seen by the other apps, and the change seen by the new app". Or another way to put it: all changes made by the other apps that are ancestors of the changes made by the new app. This would be:
Foo.app: unchanged
Bar.app: unchanged
New.app: git log v1.0             # in New.app
         git log v2.1 ^v2.0 ^v1.1 # in common
But how do we code this?

A single recursive traversal of the build manifests is no longer enough. We need to make two passes.

The first pass compares existing items against existing items, registering the revision ranges used for every repository.

The second pass processes the new items, now using the revision ranges accumulated in the first pass to generate the appropriate git log commands for all the dependencies of each new item.
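Here's a sketch of those two passes, under the same assumptions as the earlier traversal (manifests parsed from the JSON above; all names are mine, for illustration):

from collections import defaultdict

# Hypothetical sketch of the two-pass traversal described above.
def walk(item, visit):
    # Visit an item and, recursively, all of its dependencies.
    visit(item)
    for dep in item.get("Includes", []):
        walk(dep, visit)

def whats_new(old_release, new_release):
    old_items = {i["Name"]: i for i in old_release["Includes"]}
    seen = defaultdict(set)   # repo name -> revisions used by existing items

    # Pass 1: existing items vs existing items. Register every revision,
    # even when nothing changed (see the notes below on why).
    for item in new_release["Includes"]:
        if item["Name"] in old_items:
            walk(old_items[item["Name"]], lambda i: seen[i["Name"]].add(i["Rev"]))
            walk(item, lambda i: seen[i["Name"]].add(i["Rev"]))
            # ...and emit "git log old..new" per repo, as in the earlier sketch.

    # Pass 2: brand new items. Exclude all revisions seen in the first pass.
    for item in new_release["Includes"]:
        if item["Name"] not in old_items:
            def report(i):
                cmd = " ".join(["git log", i["Rev"],
                                *(f"^{r}" for r in sorted(seen[i["Name"]]))])
                print(f"{cmd}  # in {i['Name']}")
            walk(item, report)

Running whats_new(february, march) on the manifests above prints git log v1.0 for New.app and git log v2.1 ^v1.1 ^v2.0 for common, matching (up to ordering) the answer we wanted.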

This method seems to work right in all the corner cases:
  • If a dependency is brand new, it won't get traversed by any other app, so the list of revisions to exclude in the git log command (the ones prefixed with ^) will be empty, resulting in a git command to list all revisions - check.
  • If the new app uses a dependency as is, not modified by any other app, then the result will be revision ranges bound by the same revisions, thereby producing no new changes - check.
I think this method produces reliable changeset information, and allows me to use git commit comments as the primary source of truth on what actually changed. Combining this with conventions to include issue tracker references in git commits, and hooks to validate those references, will go a long way towards automating the release process.

Notes:
  • In the examples here, I used tags instead of git revision hashes. This was simply done for clarity. In practice, I would only ever apply tags at the end of the release process, never earlier. After all, the goal is to create tags reflecting the state as it is, not as it should be.
  • The practice of manually managing your dependencies has unfortunately become quite common in the java ecosystem. Build tools like maven and ivy support this process, but it has a large potential to create technical debt, as the maintainers of individual services can kick the can down the road for a long time before having to reconcile their use of shared code with other folks. Often this only happens when a security fix is made and folks are forced to upgrade, and then suddenly all that accumulated debt comes due. So if you wonder why some well-known exploits still work on some sites, that's one reason...
  • Coding the traversal makes for a nice exercise and interview question. It's particularly tempting to go for premature optimization, which then comes back to bite you. For example, it is tempting to skip processing an item where the revision range is empty (i.e. no change), but in that case you might also forget to register the range, which would result in a new app not seeing that an existing app did use the item...
  • Obviously, collecting release notes is more than just collecting commit comments, but this is, I think, the first step: Commit comments -> Issue tracker references -> Release Notes. The last step will always be a manual one; after all, that's where the judgement comes in.

Tuesday, April 2, 2013

What Is vs What Should Be

Along the lines of "What is it We Do Again?":
  • Product Management is all about what should be;
  • Release Management is all about what is.
It is very easy to get the two confused. For example:
  • You build a piece of software after a developer commits a fix. The bug should be fixed. Is it?
  • You deploy a war file to a tomcat server. The service should be updated. Is it?
  • ...
It is very easy for release managers to go and push buttons, and then infer that the buttons did their job - and 99% of the time they will be correct, but every once in a while, things break.

It's therefore very important to always locate the source of truth for any statement you wish to make.

For example, in my previous post describing my tagging process, I do something which appears to be bizarre: I go and retrieve build manifests from the actual production sites and use those as my source of truth for tagging the source trees.

The reason I do it that way is that I have a fairly bullet-proof chain of evidence to prove that the tagged revisions truly are in production:
  • The build manifests are created at build time, so they reflect the revisions used right then;
  • The build manifests are packaged and travel along with every deploy;
  • The build manifests can only get to the target site via a deploy;
  • The build manifests contain hash signatures that can be checked against the deployed files to validate that the manifest matches what is actually installed.
So, retrieving a build manifest from the production site gives me very high confidence that the revisions referenced are in production.
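As an illustration, that validation could look roughly like this (the "Files" map of relative path to SHA-256 digest is my assumption; the actual manifest format isn't specified here):

import hashlib, json

# Hypothetical sketch: check a deployed tree against its build manifest,
# assuming the manifest carries a "Files" map of path -> sha256 digest.
def verify(manifest_path):
    with open(manifest_path) as f:
        manifest = json.load(f)
    ok = True
    for path, expected in manifest.get("Files", {}).items():
        with open(path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        if actual != expected:
            print(f"MISMATCH: {path}")
            ok = False
    return ok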

Consider common alternatives:
  • Tag at build time. Problem becomes identifying the tag actually deployed to production - so this doesn't really solve anything.
  • Rebuild for deploy. Problem becomes identifying the source tree configs used by QA. In addition, can you really trust your build system to be repeatable?
My method of relying on what is gives our QA team a wide variety of options on how to proceed:
  • They can release pieces incrementally as they pass QA;
  • They can try different combinations of builds;
  • They can roll back to previous builds.
Now there is still a problem: it might take a while to collect the truth, and as we saw in the tagging of shared code, the truth can be quite messy at times. Hence the importance of timestamping your data, so at least you can say "this was the truth as of yesterday at noon".

The complexity of modern software as a service environments isn't going away, and we will need to get used to the idea of many versions of the same thing co-existing in a single installation. The fewer assumptions made in the process of presenting the state of these installations, the better.

Monday, April 1, 2013

Why Don't You Just Tag Your Releases?

In my experience, any utterance beginning with the words "why don't you just..." can be safely ignored.

Then again, ignoring isn't always an option...

So, why don't we just tag the released code?

Back in the day, when software was a small set of executable programs linked from a small set of libraries, this was a simple thing to do. Usually the whole build was done from a single source tree in one pass, and there was never any ambiguity over which version of each file was used in a build.

The modern days aren't that simple anymore. These days we build whole suites of services from a large set of libraries, often using many different versions of the same library at the same time. Why? Mainly because folks don't care to rebuild working services just because one or two dependencies changed.

Furthermore, we use dependency management tools like maven and ivy, and other packaging tools, which allow anyone to specify precisely which version of a piece of shared code they wish to use for their particular service or executable. As a side effect, we also get faster builds, simply because we avoid rebuilding many libraries and dependencies.

Most people will opt for the "what I know can't hurt me, so why take a chance" approach and resist upgrading dependencies until forced to do so, either because they wish to use a new feature, or for security reasons.

Therefore, in any production environment, you will see many different versions of the same logical entity used at the same time, so tagging a single revision in the source tree of a shared piece of code is impossible. Many revisions need to be tagged.

So here's what I currently do:

First, I use Build Manifests. These contain both the dependency relationships between the various build objects and their specific VCS revision ids.

Next, I identify the top level items. These are usually the pieces delivering the actual shippable item, either a service, an executable, or some other package. Every one of these top level items will have a unique human-readable name and a version. This is what I use as the basic tag name.

So my tag names end up looking like this:

<prefix>-<YYYY-MM-DD>-<top-level-name>-<version>[+<dependency-path>]

The date stamp is essentially just there to sort the tags easily, group related versions of related services together, keep tags unique, and help locate any bugs in the tagging process. In a perfect world it could be omitted.

With this in place, I run my tagger once a day, retrieving the build manifests from the final delivery area (could be our production site, could be our download site, or wherever the final released components live). This acts as a cross-check for the release process: if we find something surprising there, we know our release process is broken someplace.

The tagger starts with every top level item and generates the tag name, then traverses the dependency list, adding an identifier for every dependency build used in the top level item. Each dependency revision gets the tag with the dependency path appended, unless that revision has already been tagged.

When checking whether a specific revision is already tagged, I deliberately ignore the date and dependency path portions of the tag and only check the name and version part. This avoids unnecessary duplication of tags.
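A sketch of that tagger logic, under the same manifest assumptions as before (the "Repo" field locating each item's repository, the prefix, and the helper names are mine, for illustration):

import re, subprocess
from datetime import date

PREFIX = "rel"   # hypothetical tag prefix

def already_tagged(repo, rev, name, version):
    # Look at the existing tags on this revision, ignoring the date and
    # dependency path portions and matching only the name/version part.
    tags = subprocess.check_output(
        ["git", "-C", repo, "tag", "--points-at", rev], text=True).split()
    pat = re.compile(rf"^{PREFIX}-\d{{4}}-\d{{2}}-\d{{2}}-"
                     rf"{re.escape(name)}-{re.escape(version)}(\+|$)")
    return any(pat.match(t) for t in tags)

def tag_item(item, top_name, top_version, dep_path=""):
    # <prefix>-<YYYY-MM-DD>-<top-level-name>-<version>[+<dependency-path>]
    tag = f"{PREFIX}-{date.today()}-{top_name}-{top_version}{dep_path}"
    if not already_tagged(item["Repo"], item["Rev"], top_name, top_version):
        subprocess.check_call(["git", "-C", item["Repo"], "tag", tag, item["Rev"]])
    for dep in item.get("Includes", []):
        tag_item(dep, top_name, top_version, dep_path + "+" + dep["Name"])

# One call per top level item found in the retrieved manifests, e.g.:
# tag_item(foo_manifest, "Foo.app", "v2.0")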

In the end, you will get:
  • One tag for every top level source tree
  • At least one tag for every dependency. You might get many tags in any specific shared code source tree, depending on how the dependency was used in the top level item. My current record is three different revisions of the same piece of shared code used in the same top level item. Dozens of different revisions are routinely used in one production release (usually containing many top level items).
So now the problem changed from "Why don't you just..." to "WTF, why are there so many tags??". Progress is slow...

Updated: grammar, formatting and clarity.