Tuesday, December 6, 2011

How Important Are Reproducible Builds? (Updated)

In my opinion: not so much.

Now before you unpack the flame thrower, let me qualify: a process designed around the idea that builds are reproducible will be both slow and risky. It is better to design a process that assumes that builds are not reproducible.

For one, in practice, builds truly are never reproducible:
  • Most tools these days bake timestamps into their binaries. For example, Java .jar files are really just .zip files, and they include the timestamps of the contained files. Without being able to bitwise compare two binaries, you have no direct indication that the two builds are equivalent; at best you can hope to establish equivalence by inspecting the build metadata (see the sketch after this list).
  • Distributed build processes are by nature non-deterministic and may cause your compiler or linker to choose different optimization paths depending on the ordering of data received from preceding parallel build steps. This may even cause performance to vary slightly between builds.
  • Your control over the build environment is limited. Yes, in theory your control should be total, and with the increasing use of virtualization that control may increase again, but in practice it is nearly impossible to reliably reproduce an older build, especially if the hardware on which the original build ran is no longer available.
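
To make the timestamp point concrete, here is a minimal sketch in Java (the jar paths are hypothetical command-line arguments, not tied to any particular build tool): it walks two jars built from identical sources and will typically find that entry contents match by CRC and size while the per-entry timestamps differ, so the two files are not bitwise identical even though the builds are arguably equivalent.

    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    public class JarCompare {
        // Compare two jars built from the same sources. Entry contents (CRC and
        // size) will usually match, but the per-entry timestamps will not, which
        // is enough to make the two files differ bitwise.
        public static void main(String[] args) throws Exception {
            try (JarFile first = new JarFile(args[0]); JarFile second = new JarFile(args[1])) {
                Enumeration<JarEntry> entries = first.entries();
                while (entries.hasMoreElements()) {
                    JarEntry a = entries.nextElement();
                    JarEntry b = second.getJarEntry(a.getName());
                    if (b == null) {
                        System.out.println(a.getName() + ": only in first jar");
                        continue;
                    }
                    boolean sameContent = a.getCrc() == b.getCrc() && a.getSize() == b.getSize();
                    boolean sameTime = a.getTime() == b.getTime();
                    System.out.printf("%s: content %s, timestamp %s%n",
                            a.getName(),
                            sameContent ? "same" : "DIFFERS",
                            sameTime ? "same" : "DIFFERS");
                }
            }
        }
    }

Run it against two back-to-back builds of the same workspace and the timestamps alone will defeat any bitwise comparison.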
Does this mean one shouldn't even try? Of course not. Enterprise customers will still refuse to upgrade to the latest version of your product yet expect you to construct patches for their ancient version. You will still want to trace and compare multiple older builds, and rebuild them with new fixes, if only to verify possible origins of bugs.

Note that those use cases imply code changes - and that's OK. The battle I'm fighting here is against gratuitous rebuilds of unchanged code.

The main reason for pretending that builds are reproducible is to allow you to treat your build artifacts as disposable, and to permit sloppiness in your build process: "oh, I'll just do a make clean; make".

The risk comes in when you say: "Source didn't change, so no need to test the rebuilt binary".

As an aside, you lose a lot of efficiency if you constantly rebuild artifacts that change rarely.

My conclusion is that you're better off treating your build artifacts as one-of-a-kind, and ensuring that the bits you are releasing are the exact same bits you tested. Hence all the emphasis on artifact management and tracking. Not only will this allow you to reduce your risk, but it will also make your builds a lot faster.
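
To illustrate what that tracking can look like, here is a minimal sketch (the class name and the idea of a recorded fingerprint are illustrations, not a reference to any specific tool): fingerprint each artifact with a cryptographic hash when it enters testing, and verify the same fingerprint at release time to prove that the shipped bits are the tested bits.

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;

    public class ArtifactFingerprint {
        // Compute a SHA-256 fingerprint of a build artifact. Record the value when
        // the artifact enters testing; compare it again at release time to prove
        // that the bits being shipped are the bits that were tested.
        public static String fingerprint(Path artifact) throws Exception {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            try (InputStream in = Files.newInputStream(artifact)) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    digest.update(buffer, 0, read);
                }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest()) {
                hex.append(String.format("%02x", b & 0xff));
            }
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(fingerprint(Path.of(args[0])));
        }
    }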

Update:

It does occur to me that configuration is often baked into the build artifact. Open source projects are especially prone to doing so, as they mix up build configuration with runtime configuration, sometimes for ideological reasons (you are forced to have the source if you wish to configure special features), but mostly out of false convenience (why create a special runtime configuration interface when you can simply edit source files?).

The antidote here is to patch the open source tools such that the configuration can be a separate artifact from the actual application. This way, your test environment build simply aggregates a different set of configuration artifacts with the same application artifact, and you can still test and ship the exact same application bits...
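
As a minimal sketch of that separation (the file name app.properties and the key feature.x.enabled are purely hypothetical), the application reads its configuration from a file delivered as its own artifact, so the same application jar can be aggregated with per-environment configuration without a rebuild:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Properties;

    public class ExternalConfig {
        // Load runtime configuration from a file shipped as a separate artifact,
        // rather than baking it into the application jar at build time.
        public static Properties load(Path configFile) throws Exception {
            Properties props = new Properties();
            try (InputStream in = Files.newInputStream(configFile)) {
                props.load(in);
            }
            return props;
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical default path; in a real deployment the config artifact
            // location would come from the environment or the packaging layout.
            Path configPath = Path.of(args.length > 0 ? args[0] : "app.properties");
            Properties config = load(configPath);
            System.out.println("feature.x.enabled = " + config.getProperty("feature.x.enabled", "false"));
        }
    }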
