Saturday, December 31, 2011

So, What Is It We Do Again?

One of the fallacies of our Release Management profession is its apparent simplicity. Every software release process essentially works like this:


The devil is in the details. Nevertheless, this diagram is helpful, as it structures our work and helps describe what is being done, and when it's done.

The funny thing is: none of the boxes in the diagram are things release management should be doing. Our job is to ensure they get done, but the doing is usually performed by others:
  • Release Criteria are ultimately set by the customer. In most cases, this will be some combination of business development, marketing and product design people who represent the customer. Our job is to collect the input and ensure it's actually written down (lots of opportunities for bikeshedding crop up when debating the differences between functional specifications, technical specifications and test plans, but in the end, these are all supporting documents for the release criteria).
  • Software is built by developers. We may on occasion assist them by providing good build infrastructure and other support tools, but we're doing this partly out of self-interest. After all, we do need to provide some reasonable guess as to what changes were actually made to the software, so that the next step can be performed based on actual data.
  • Deciding whether the software meets the criteria set forth at the outset is mainly QA's job. Our job is to ensure that the people who defined the release criteria don't get involved at this stage. They get their shot after QA fails the release candidate. They can address the failure by relaxing the release criteria - it's their company in the end, but it's important that they take responsibility for it. Our job is to protect QA's integrity by enforcing this process.
  • Even shipping itself is not really our job. Our job is to catch or prevent undocumented last minute tweaks in production.
So what is it again that we do?
  • We're the umpires of the process: we ensure it's followed and tracked.
  • We facilitate the process. We ensure the documents are collected and are transparently accessible to all the players and we ensure that the build and change processes allow proper inferences to be made when making the release/no-go decision.
That's it.

My focus in this blog is on the Build Software box, simply because I consider it the least well understood part of the process, and also the one with the most opportunity for data collection and automation:
  • The build process is the ideal place to collect change data and interact with the revision control system and various other tracking systems.
  • The build process is also the ideal hook for test automation, including a variety of coverage and complexity analysis, again providing supporting data for the release/no-go decision.
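As a concrete illustration, a build script can capture that change data as it runs. A minimal sketch using mercurial (the $LAST_RELEASE tag and the file names are my own assumptions, not a standard):

REV=$(hg id -i)                     # revision being built
hg log --prune "$LAST_RELEASE" --follow --rev "$REV:0" \
    --template '{node|short} {desc|firstline}\n' > changes.txt
echo "revision: $REV" > build-info.txt

Attach both files to the build output, and the release/no-go meeting has actual data to look at.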
It's also in the Build Software box where way too many mistakes are made:
  • Not releasing the bits that were tested (i.e. rebuilding for release)
  • Blindsiding developers by using release build processes that differ from the ones developers use day to day.
  • Failure to track all dependencies
  • ...
But even with the focus on build processes, it is the flow chart above that really matters and that lies at the heart of our job as Release Managers:
  • To ensure that all stakeholders are aware of the process and their role
  • To ensure that comprehensive and correct data is available when making the release/no-go decision.
Happy New Year!

Monday, December 19, 2011

Aggregation Artifacts

When I introduced artifacts, I pointed out that some of these artifacts might not have any actual payload, but purely exist to aggregate other artifacts via dependencies.

One technique along these lines is described at http://blog.artifact-software.com/tech/

A problem I'm still mulling over is when the aggregation should happen:
  • Build time?
  • Packaging time?
  • Installation time?
Build time is probably the right answer - I'll need to play with maven build scoped dependencies soon to see whether the source code for an aggregation artifact (which in maven would consist of just the pom file) works as I'd expect...

Thursday, December 15, 2011

Why is Git More Popular than Mercurial?

There already are a lot of Git vs Mercurial pages out there, so this will focus on what I believe is the main reason why git appears to be winning the mind share battle in the field of distributed version control systems:
Git makes developers look good.
It's not just that wielding the myriad git commands with ease confers substantial bragging rights; it's that git allows developers to present a more polished image of their work.

Let's compare the typical work flow:

In both systems, you start by cloning some repository, and begin work:


Usually, it will take several tries to get your work done, and the checkin comments above are quite typical - the ones shown are safe for work.

Meanwhile, some other changes happen on the main line, and you, being a good citizen, pull in the changes.


And here is where the two systems differ: In mercurial, you merge:


... and that's it. The owner of the parent repo pulls in your change, and all your dirty laundry is visible to the public, exposing you to various forms of ridicule:


Git gives you, the developer, much more control over your image. For one, you probably won't merge, but rebase. This keeps the revision history linear, and gives the warm fuzzies to those who hate revision graphs.


The next thing you do is use the interactive rebase to squash all those embarrassing mistakes, and make it appear as if perfect code flows right off your fingers.
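A minimal sketch of that cleanup, assuming the usual origin/master upstream:

git fetch origin                # grab the upstream changes
git rebase origin/master        # replay your commits on top: linear history, no merge bubble
git rebase -i origin/master     # now mark the "oops" commits as "squash" or "fixup"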

The final result is that git will usually have a much cleaner revision history than mercurial:


... and since there are a lot more developers than release managers, well, you better deal with it, as the likely response to any whining will be:


Wednesday, December 14, 2011

Not Enough Bikeshed?

I'm wondering whether I've been Hoisted by my own Petard. I'm observing that the only blog post that got any kind of reaction was the one about the ant build system.

True to form, I guess that's the one tool everybody knows and has an opinion on, whereas the rest is, well, too much like a nuclear power plant?

Tuesday, December 13, 2011

Introducing Artifacts (part 3)

Slowly, we're getting our pieces together. In Cherrypicking Made Easy, I explained how there really is only one code branch Release Management should care about: the production state.

The only way the production state (main) branch can get updated is by copying in code, after the artifacts built from that code have been successfully released into production.

The next step down the food chain is to consider artifacts built from code that is one copy merge away from the code in the production branch. Specifically, this means that the head of the main branch is a direct ancestor of the head of the code branch from which the artifact is built. I'll call artifacts built under these conditions release candidate artifacts or releasable artifacts.
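This ancestry condition can be checked mechanically. A sketch using a recent version of git (the branch names are assumptions):

# exits 0 when the head of main is an ancestor of the feature head,
# i.e. the feature branch is one copy merge away from production
git merge-base --is-ancestor main feature && echo "releasable"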

Most other artifacts will be non-releasable, and I'll call those development artifacts.

Whenever a new version of an artifact is built, any preceding builds from the same branch need to be demoted to development artifact status.

In part 2 of this series, I explained how artifacts not only depend on the source code, but can also depend on other artifacts. Usually, you will represent your whole product as an artifact that depends on many other artifacts, and therefore the state of the dependencies will influence the state of the artifact being built in obvious ways:
  • A released artifact may only have released artifacts as dependencies.
  • A release candidate artifact may only have either released or release candidate artifacts as dependencies.
Let's examine a simple application, which consists of three artifacts. This is actually the most common pattern:
  • An application artifact, which has no payload, only dependencies to include all the artifacts required for the application to run;
  • The code artifact, which contains the configuration-independent portion of the application ("foo" in the picture);
  • A configuration artifact, which contains the specific configuration files required to run the app ("bar" in the picture).
For example, a "web server" can be considered a combo of the apache binaries, packed up as the code artifact, and the apache configuration files packed up as the configuration artifact. The idea is that different web apps may all share the same binaries for apache, but use different configurations.

Let's assume we're working on two different projects, each modifying foo. Each project constructs a release candidate:

Both release candidates are presented to QA and product management, and let's assume the "right" artifact is selected for release. We promote all the artifacts contributing to the release candidate to the "Released" state, and perform the copy merge in their respective source trees:

Now that there is a new revision on the main branch, the left version of foo no longer qualifies as a release candidate, and therefore we demote the artifact to development status. Note how this demotion needs to be propagated through the dependency chain to ensure that all artifacts depending on the left version of foo are also demoted:

At this point, we have the skeleton of a good release process:
  • We build complex applications by assembling them from simpler pieces via artifacts, each built from a combination of source code and other artifacts;
  • We tie in the merge status of the code branch via the artifact state;
  • We define how the artifact state propagates through the artifact dependency graph;
  • We define how the act of releasing an artifact propagates copy merges through all the source trees of all the dependent artifacts, thereby ensuring that the main branch of every source tree represents the released code of the artifact built there.
From here on, it's mainly decoration. Good decoration is crucial, though, since it is the only way for people to actually understand the process and make sensible decisions about what a release candidate contains and which one to release. 

Why is it so Hard to Collect Release Notes?

I have found it rather astonishing how hard it can be to answer a simple question: "what changed since the last release?". One would think that this is something for which every version control system should have some sort of command or tool, but it isn't so.

Branching and merging will generate a revision graph, which looks a little bit like the diagram on the right. Generating release notes is essentially locating all the revisions contributing to the new release which do not contribute to the old release. This is done by first traversing the revision graph from the revision of the old release, marking all revisions as you recurse through all the ancestors. Then you perform the same traversal starting from the new release, but stop the traversal when you hit on a marked revision.

The mercurial distributed version control system implements this elegantly in a single command:
hg log --prune OldRelease --follow --rev NewRelease:0

Basically the same method is used for revision control systems that use explicit branches instead of an implicit revision graph. Even though every branch has a linear revision history, you need to traverse all the merge arrows stemming from other branches and mark the revisions, much the same way as you would in a revision graph.

The problem is that some very popular revision control systems don't even classify merges as something special. Subversion, for example, has no first class representation of a merge. This is partly because subversion allows you great liberty in selecting revision ranges to merge, and also allows you to squash multiple merges into the same revision. You therefore need to rely on rules and conventions to draw the merge arcs in your revision graph. As people make mistakes or bypass the conventions on occasion, those arcs may be incorrect, resulting in incorrect traversals and therefore incorrect release notes.

Perforce, which for a long time was the darling of the "simple is best" design school, doesn't have anything built in for generating release notes either. At least they do represent merges as a first class object, and their knowledge base recommends implementing the "traverse and mark" process using this script:
p4 changes -l -i //depot/main/p4/...@OldRelease > FILE1
p4 changes -l -i //depot/main/p4/...@NewRelease > FILE2
diff FILE1 FILE2 > CHANGES
Note that on a large repository with a lot of history, this can run for an hour or so. Note also that it's the "-i" flag which causes the recursion through the "integrated" changes, and that the following naive invocation would fail:
p4 changes -l -i //depot/main/p4/...@OldRelease,NewRelease \ 
     > CHANGES

Another popular revision control system creates a different problem: the actual revision graph may change between releases. Yep, "git" is a very powerful system with a lot of "cool" attached to it, but from a release management perspective, it is downright scary, as you can use the "interactive rebase" feature to re-arrange the revision history in many ways.
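To be fair, git expresses the traverse-and-mark query just as concisely as mercurial, as long as nobody has rewritten the graph in the meantime (tag names assumed):

git log --oneline OldRelease..NewRelease    # revisions reachable from NewRelease but not from OldRelease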

There is nothing wrong with developers streamlining and cleaning up their revision history prior to getting their repos pulled into an upstream location, but retroactively editing the change history in an authoritative, shared repository is something to be avoided, even if it means the occasional exposure of embarrassing mistakes.

This is why many shops actually do not use the revision control system to generate release notes. Instead, they will rely on a combination of spreadsheets and issue or work tracking systems. That's quite sad, since this obscures what code changes are effectively included in a build.

As I'll be expanding on the use of build artifacts as precious objects to be tracked, it will become quite important to always know exactly what went into them. Subsequent posts will explore some techniques to incrementally build this information and attach it to the artifact metadata.

Saturday, December 10, 2011

Basic Change Process Revisited


In an earlier post, I explained the basic change process. In this post I'd like to show how it works in a distributed version control system. Besides encouraging the general use of distributed version control systems, I'd like to expose the difference between the classic branching diagrams and the actual meaningful part of the revision history.

As a reminder, here's the branch diagram again:
  1. Start at a stable revision;
  2. Branch, make change;
  3. Meanwhile, some other change makes it into the stable branch;
  4. Pull in the new change, merge;
  5. Add your change to the stable branch (assuming, of course, it passes QA).
Here's what it looks like in the distributed world:

You start with your main repository. This will usually be hosted on some kind of central host, or on a service like GitHub or Bitbucket. As I explained in my Cherrypicking Made Easy post, the best repo to use is the one that represents your current production version of the code. I believe that in the end, this is the only repository that truly matters to Release Management...

If you wish to make a change, you fork or clone your own repository off the main repo. If you use a remote service, you may wish to first fork/clone the repo on that remote service, then clone onto your local machine.

Now edit, compile, test and check in - creating the green revision in the diagram.

Meanwhile, some other change appears in the main repository, marked as the blue change.


In preparation for getting your green change out, you first pull in the blue change from the main repository. Note that this pull operation usually doesn't change any content; it simply creates a new head in your revision graph.
Inside your repository, you merge the two heads, creating a new revision with the merge result. Note how this creates the diamond shape in the revision graph. "git" users may opt to perform a rebase instead. This essentially involves rewriting the revision history to remove the green side of the diamond and to pretend that your change was a simple addition.
Finally, if everything looks good, your change can be pushed back into the main repo. This should be done by the owner of the main repo, via a so-called pull request. This allows the repo owner to examine the changes prior to pulling them in.
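Spelled out in mercurial commands, the whole cycle looks roughly like this (the repository location is an assumption):

hg clone https://main.example.com/repo myrepo   # fork your own copy
cd myrepo
# ... edit, compile, test ...
hg commit -m "my change"                        # the green revision
hg pull https://main.example.com/repo           # the blue change arrives: a new head, no content change
hg merge                                        # close the diamond
hg commit -m "merge in upstream changes"
# the owner of the main repo now pulls from myrepo - the pull request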

The interesting thing here is how the branching diagram of the basic change process simplifies into the diamond-shaped revision graph. This will become more interesting when we examine the basic technique for producing Release Notes: finding the changes contributing to a specific revision (usually a release).

A very good read to get into the mood of distributed version control systems is http://hginit.com, especially if you're a subversion or perforce user.

Wednesday, December 7, 2011

Introducing Artifacts (part 2)

In part 1, I introduced artifacts as being data sets derived from source and other artifacts. In this post, I'll dive into the build cycle.

In its simplest form, the build cycle consists of:
  • Checking out source code and noting the revision id of the checkout;
  • Retrieving all the artifacts the build depends on, noting their revision ids;
  • Performing the build, creating a new artifact;
  • Computing a revision id for that artifact, usually by hashing the source code revision id and the revision ids of all the included artifacts;
  • Publishing the artifact back into the artifact repository.
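A minimal sketch of the revision id computation (the hashing tool and file layout are assumptions):

SRC_REV=$(hg id -i)                         # source code revision
DEP_REVS=$(cat deps/*.revision | sort)      # revision ids of the retrieved artifacts
printf '%s\n%s\n' "$SRC_REV" "$DEP_REVS" | sha256sum | cut -d' ' -f1 > myartifact.revision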

Now in practice, you will keep multiple artifact repositories, only one of which would contain the artifacts actually deployed into production or released to the customer. All the other repositories will contain artifacts in various stages of completion.

When you build your new artifact, your build configuration would specify which repositories to examine.

In some ways, it is similar to branching in a source code repository, and you might even wish to use the branch names to name your artifact repositories.


There is one important way artifact repositories differ from branches: you can't merge them.

In fact, if you do specify multiple repositories, and different versions of the same artifact are found, you have two choices:
  • You can define an order of preference;
  • You can fail the build.
You will choose an order of preference mainly to override the production artifacts with your newly built ones.

You should fail the builds if different versions of the same artifact are present in two non-production repositories. There is no sensible way to resolve this at build time. If you are really attempting to build an artifact that depends on two separate development streams, then those two streams need to be merged at the source code level, creating a single stream, and then you can use a single artifact repository for that stream.


A slightly different approach is to create a separate repository for each artifact build. This approach is supported by some automated build systems, for example TeamCity, via so-called artifact dependencies.

No matter which strategy you use, it will quickly become obvious that you need some sort of artifact registry service to keep track of the artifacts' existence and contents.

How such a registry can work will be the subject of part 3 of this series.

Cherrypicking Made Easy

Branching strategy discussions are bike sheds. Distributed version control systems have brought some relief, as by their very nature they will force you to branch and merge more often.

Traditionally, most shops end up adding one change on top of the previous change, like so:


Invariably, they will end up stuck against a deadline, and find out that Feature A above cannot be fixed in time, so it needs to be pulled out instead. Unfortunately, Feature B relies on code changes made to implement Feature A, and a nasty merge destabilizes the whole branch...

Instead, I advocate using a Production Branch, which only contains revisions of the code as it was when a release was made. These provide a stable baseline from which new work can be started. Every change starts life by being based on a baseline revision of the code:

As the release deadline approaches, you assemble a Release Candidate Branch. Note how natural and trivial this is when you use a distributed version control system: simply clone the production repo, then pull in the desired feature repos and merge:
 
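In mercurial, for instance, the whole assembly is a handful of commands (the repository paths are assumptions):

hg clone /repos/production /repos/rc-next                       # start from the released baseline
cd /repos/rc-next
hg pull /repos/feature-a && hg merge && hg commit -m "merge A"  # pull in each desired feature...
hg pull /repos/feature-b && hg merge && hg commit -m "merge B"  # ...and merge it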
Now, if a problem occurs, for example if Feature B causes trouble and you don't think it can be fixed in time, you simply discard your release candidate branch and create a new one, this time only merging in the features and fixes that worked:

If QA passes, you can release, and you copy your Release Candidate Branch into the Production Branch. The failed Feature B branch can then merge from the production branch (aka "rebase"), and maybe get on the next release:

Now, continuous integration versus parallel development isn't an either/or choice. Usually you will do both. Technically, you can consider the Continuous Integration model an edge case of the Production Branch model, where you always just extrude one single branch after a release.

The point is to be aware of the risks and benefits of the choices you make:
  • Continuous integration reduces merge complexity, but commits you to either completing or manually backing out changes. Another advantage is that fixes propagate to all developers faster, but then again, not all developers appreciate having their world change in the middle of their work, so they might postpone updating their workspaces, thereby nullifying the advantage of simple merges.
  • Parallel development increases the tracking burden and might force you into doing a lot of merging. In really bad cases, many of the merges will be repetitive as you recreate new release candidates using different combinations of features. The big win is that you do not need to commit to completing any particular feature or change at a specific time. You can make riskier changes and decide close to release time which changes to actually release.
The nature of distributed revision control systems will push you towards parallel development. That's a good thing; don't fight it.

Tuesday, December 6, 2011

How Important Are Reproducible Builds (Updated)?

In my opinion: not so much.

Now before you unpack the flame thrower, let me qualify: a process designed around the idea that builds are reproducible will be both slow and risky. It is better to design a process that assumes that builds are not reproducible.

For one, in practice, builds truly are never reproducible:
  • Most tools these days bake timestamps into their binaries. For example, Java .jar files are really just .zip files, and they will include the timestamps of the contained files. Without being able to bitwise compare two binaries, you have no direct indication that the two builds are equivalent. You can at best hope for equivalence by inspecting the build metadata (a quick demonstration follows this list).
  • Distributed build processes are by nature non-deterministic and may cause your compiler or linker to choose different optimization paths depending on the ordering of data received by preceding parallel build steps. This may even cause performance to vary slightly between builds.
  • Your control over the build environment is limited. Yes, in theory, your control should be total, and with the increasing use of virtualization, control over it may increase again, but in general it is practically impossible to reliably reproduce an older build, especially if the hardware on which the original build occurred is no longer available.
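The timestamp point is easy to demonstrate; a sketch, assuming a make-based Java build:

make clean && make && sha256sum app.jar     # first build
make clean && make && sha256sum app.jar     # second build: different hash, same source,
                                            # because the .jar embeds file timestamps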
Does this mean one shouldn't even try? Of course not. Enterprise customers will still refuse to upgrade to the latest version of your product but expect you to construct patches for their ancient version. You still will want to trace and compare multiple older builds, and rebuild them with new fixes, if only to verify possible origins of bugs.

Note that those use cases imply code changes - and that's OK. The battle I'm fighting here is against gratuitous rebuilds of unchanged code.

The main reason for pretending that builds are reproducible is to allow you to treat your build artifacts as disposable, and to permit sloppiness in your build process: "oh, I'll just do a make clean; make".

The risk comes in when you say: "Source didn't change, so no need to test the rebuilt binary".

As an aside, you lose a lot of efficiency if you constantly rebuild artifacts that change rarely.

My conclusion is that you're better off treating your build artifacts as one-of-a-kind, and ensuring that the bits you are releasing are the exact same bits you tested. Hence all the emphasis on artifact management and tracking. Not only will this allow you to reduce your risk, but it will also make your builds a lot faster.
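One cheap way to enforce "ship the bits you tested" is to fingerprint the artifact once at build time and verify the fingerprint at every later stage (the file names are assumptions):

sha256sum app.tar.gz > app.tar.gz.sha256    # at build time, stored with the artifact metadata
sha256sum -c app.tar.gz.sha256              # before QA, and again before shipping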

Update:

It does occur to me that configuration is often baked into the build artifact. Open source projects are especially prone to doing so, as they mix up build configuration with runtime configuration, sometimes for ideological reasons (you are forced to have the source if you wish to configure special features), but mostly out of false convenience (why create a special runtime configuration interface if you can simply edit source files?).

The antidote here is to patch the open source tools such that the configuration can be a separate artifact from the actual application. This way, your test environment build simply aggregates a different set of configuration artifacts with the same application artifact, and you can still test and ship the exact same application bits...
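Deployment then reduces to aggregating the same application artifact with a per-environment configuration artifact; a sketch with assumed names:

tar xzf app-1.4.2.tar.gz     -C /opt/myapp      # identical bits in test and in production
tar xzf config-test-7.tar.gz -C /opt/myapp      # only this piece differs per environment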

Now Complete with its own Domain

Amazingly, http://fortified-bikesheds.com was available, so I grabbed it. A testament to progress, the whole process of grabbing and configuring the new domain name took about 15 minutes, including reading the instructions, and did not involve editing a single file.

Sunday, December 4, 2011

Introducing Artifacts (part 1)

Every software project will eventually include a build process, producing new files from existing files.

Even in the simplest of all cases, for example a static web site, it makes sense to consider the process of copying new files into the doc root as a build process. Very quickly, this build process might grow to include html verification, link validation, and as the site grows, page generation from templates etc...
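Even that simple case is worth scripting; a sketch, assuming tidy, linkchecker and rsync as the tools of choice:

tidy -q -e site/*.html                          # html verification (errors only)
linkchecker site/index.html                     # link validation
rsync -av --delete site/ /var/www/docroot/      # the "build": copy into the doc root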
The generated files are generally called Artifacts. I don't know where the term originated. I first saw it when examining the maven build system.

Artifacts can be anything: executable binaries, packages, simple file sets, machine images. The point of using the term Artifact is to focus on the metadata, and disregard the actual content or shape of the Artifact.

Another property of artifacts is that they will usually depend on other artifacts. Most sites grow into using artifacts organically, by starting to identify and catalogue their dependencies on third party utilities. These can be libraries, tool chains, operating system dependencies - anything that might affect your build and your product.

The next step for growth is to treat your own artifacts as if they were third party artifacts.

A typical SaaS application might wish to define the system as a tree of artifact dependencies, starting with an artifact representing the whole cluster, which depends on host image artifacts, which in turn depend on application artifacts, which pull in configuration artifacts, and so on...

Most shops end up treating artifacts as something incidental. They might store some third party artifacts in their version control system, or simply install them onto their build machines, and systematically rebuild their own artifacts from scratch whenever they need a new build. This is not only time consuming, but it also misses an opportunity to apply one of the antidotes to the objections against the Basic Change Process.

By treating every artifact you build as a precious, reusable entity, you gain the opportunity to split your codebase into small, independent pieces, each branched separately. This will only work, of course, if you have a good system for storing and tracking artifacts.

There are some systems out there built for storing and tracking artifacts:
  • Ivy interacts with the ant build system and provides some basic functionality
  • Maven of course has artifact management built right into it
Even though both tools are adequate, they do suffer from shortcomings, as we will see in a followup post.

The diagram on the right illustrates the build cycle for artifacts.

A new artifact is built from both a source tree and the artifacts it depends on. The build system retrieves those dependencies as needed from an artifact repository and constructs the new artifact, which then gets published back into the repository, to be used by other artifact builds.

The challenges here are:
  • How to track what went into building an artifact
  • How to select which dependency to pull in
  • How to coordinate source code control over multiple artifacts when releasing
I will go into these challenges in part 2 of this post.


Saturday, December 3, 2011

What Is It With "ant"?

I believe build engineering is by far the most important technical discipline in release management. With a good build system, you can do anything; without one, you are constantly hobbled. Over the years, I've had plenty of opportunities to examine and develop build systems.

Twenty years ago, things were a lot simpler: there was make, and not much else. Being the first of its breed, it had a lot of cosmetic and functional flaws. It was also unusual in that it was a non-deterministic, rules-based system. Many developers failed to appreciate the power of that concept, and instead attempted to apply their procedural knowledge to make, and failed. Very few people understood and liked makefiles.

Then came the explosion of UNIX variants, and then came Microsoft DOS and Visual Studio, and they did everything to be different. Software had to be built for dozens of different platforms, all with their own dialect of make and differing shell syntax...

When Java started becoming mainstream, there was an obvious need for a build system to support Java build processes. Make would have been quite adequate, but which version and flavor of make? Or, more importantly, which version of the command line interpreter (shell) should be used to get make to work?

Now it is true that by the time Java became popular, most of the UNIX make variants had been merged into GNU make, nowadays the de facto standard for make on UNIX. Unfortunately, most Java developers worked on Microsoft Windows, with its rather horrid batch file language. Nobody can really blame the Java developers for avoiding it.

Instead, they developed ant: a rule-based specification written in XML, with Java itself as the underlying shell language, exporting a variety of basic functionality in a platform-independent way:
  • Compiling .java files
  • Aggregating them into .jar files
  • Copying files
  • Creating directories
  • Performing version control operations
The idea is that any operation not yet supported by ant could be implemented in Java, and then invoked via the XML based specification file.

Even though it is sad to see that ant in large part retraced the development history of make, with the same stumbling blocks (e.g. Recursive Make Considered Harmful) and a serious lack of features (e.g. Pattern Rules), it is an understandable tradeoff between platform independence and feature set.

So I guess I'll just have to live with it.

Thank God there's Maven.

The Basic Change Process

This diagram illustrates the basic change process:
  1. Start at a known good revision of the code
  2. Branch and make changes
  3. Meanwhile, someone else completes a change, and puts it into the main line (hopefully after passing review, QA, etc)
  4. You pull and merge
  5. You copy back into the main line, again after passing review, QA, etc...
This process should be ingrained in every software engineer. Even the grandfather of concurrent revision control systems, CVS, is built around this model, as long as you consider your checkout to be an effective branch of the code, which it is...

As an aside, I am very pleased to see that Perforce finally, after almost twenty years of procrastination, introduced streams, which implement this exact change process directly, and not as one of many, many options in a free-form depot structure.


The trick is to continue following this model as your code base and your team grow. Common objections are:
  • I don't trust (3), so I don't want to do (4) and risk endangering my code because of (3)'s changes.
  • I just know that (3) is completely orthogonal, so I can do (5) without doing (4).
  • I don't trust (3), so I'll hold off with (4) until I finally have to do (5).
  • Merging other people's code takes up too much of my time, so I'll hold off, especially since the changes seem to be mostly from places I don't care about.
The net effect is that the merges will either be too simple, because people assume orthogonality and that lack of conflict means lack of risk, or too complex, because people hold off forever.

There are two antidotes to these objections:
  • Increase the requirements for performing step (5), in order to ensure that your mainline is always clean and releasable. I have come a long way to finally conclude that the best requirement is not only to have passed review and QA, but to actually have been deployed or released prior to getting copied into your mainline. Your mainline should be a true representation of what is in production, and nothing less than that. Later posts will expand on that theme....
  • Keep the size of your source tree small. Split your source tree into components, modules, packages, artifacts or whatever is convenient and develop, build and release them separately. Yes, this increases the tracking burden, but it will keep the basic change model intact, and help implement the first point of making your mainline represent production. Again, later posts will expand on that theme.
The main point is that the slippery slope begins as soon as you attempt to optimize or otherwise deviate from the basic change process. Resist the temptation and use the two antidotes instead.

Introduction

Greetings, thanks for checking out my blog...

I've been doing release management for over 20 years now, and I thought the time had come to talk about it a little. That's what this blog will be about.

Release management is one of those skunk disciplines that sneak up on you. Nobody really takes it seriously until you start crumbling under the load and wondering why something so simple has suddenly become a nightmare.

It seemed so simple at first - after all, anyone can run a compiler or an IDE, generate the executables and post them someplace, right?

Discussions tend to follow the bike shed pattern, and everyone will be happy to slap paint on it.

Little do they realize that there's a nuclear missile silo hiding beneath. As your project grows, your build suddenly crumbles, your branches multiply, and your merges become hell. In other words, you have just uncovered the nuclear missile silo under your bike shed.

My hope is that I can contribute some insights and spark discussions over best practices that have stood the test of time.