Sunday, February 19, 2012

Rake - yet another reinvention of Make

So, why is Rake any better than straight old make?
require 'rake/clean'

task :default => ["hello"]

SRC = FileList['*.c']
OBJ = SRC.ext('o')

CLEAN.include(OBJ)
CLOBBER.include('hello')

rule '.o' => '.c' do |t|
  sh "cc -c -o #{t.name} #{t.source}"
end

file "hello" => OBJ do
  sh "cc -o hello #{OBJ}"
end

# File dependencies go here ...
file 'main.o' => ['main.c', 'greet.h']
file 'greet.o' => ['greet.c']
To compare, GNU make:
SRC := $(wildcard *.c)
OBJ := $(patsubst %.c,%.o,$(SRC)) 

default: hello

clean:
    rm -f *.o

clobber: clean
    rm -f hello

hello: $(OBJ)
    cc -o hello $(OBJ)

%.o: %.c
    cc -c -o $@ $<

# File dependencies go here ...
main.o: main.c greet.h
greet.o: greet.c 
I guess this is a testament to Ruby that it can actually do this and make it look so close to the real thing.

I actually can think of several reasons why the rake version is superior, among them:
  • The ability to use a fully developed programming language instead of the GNU Make $(...) macros, some of which use very bizarre semantics
  • The ability to extend it easily, by adding more Ruby code.
But I am also a little disappointed:
  • You do need to learn Ruby, and be comfortable with some of the Ruby contortions - but considering that Ruby appears to be the current fashion, it would probably be a good thing to learn regardless....
  • In the end, you still use "sh" to invoke a shell. I really wish they had addressed the shell quoting problem in a different way - I bet even after 30 years, things will fail spectacularly if any of the files have a space in them.
  • It doesn't appear to me that rake really has any novel idea - in spite of there being a long history of make clones, some of which do present interesting extensions:
    • Augmenting file timestamps with file hashing, in order to avoid rebuilding aggregate targets when a rebuild of an ingredient doesn't actually produce a different file...
    • Extending rules to include "cascade dependencies", a way to express: "If you need X, you will also need Y". This allows you to express a C include dependency in a more direct way: "anyone who uses main.c (in the example above) will also need to depend on greet.h". This is subtly different from the classic dependencies.
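For what it's worth, the quoting complaint has at least a partial answer inside Ruby itself: the standard Shellwords module escapes arguments before they ever reach the shell (and Rake's sh also accepts a list of arguments instead of one big string, which bypasses the shell entirely). A minimal sketch, with a made-up helper name:

```ruby
require 'shellwords'

# Hypothetical helper: join arguments into a single shell command line,
# escaping each one so file names containing spaces survive.
def quoted(*args)
  args.map { |a| Shellwords.escape(a) }.join(' ')
end

cmd = quoted('cc', '-c', '-o', 'my file.o', 'my file.c')
# cmd now reads: cc -c -o my\ file.o my\ file.c
```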
Granted, rake is open source, and there's nothing to prevent me from adding all this, right?
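As a proof of concept, the "cascade dependencies" idea above can be sketched in a few lines of plain Ruby. This is illustrative only - not Rake's API - and the names cascade and expand are invented:

```ruby
# CASCADES records "if you need X, you will also need Y" relationships.
CASCADES = Hash.new { |h, k| h[k] = [] }

def cascade(file, *extra)
  CASCADES[file].concat(extra)
end

# Expand a prerequisite list with all cascaded dependencies, transitively.
def expand(prereqs)
  seen = []
  queue = prereqs.dup
  until queue.empty?
    f = queue.shift
    next if seen.include?(f)
    seen << f
    queue.concat(CASCADES[f])
  end
  seen
end

cascade 'main.c', 'greet.h'
expand(['main.c', 'greet.c'])  # => ["main.c", "greet.c", "greet.h"]
```

Wiring something like expand into the prerequisite resolution of a rule is exactly the kind of extension that being "just Ruby code" should make easy.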

Monday, February 13, 2012

Other Folks do Code Promotion Too.

This article is a related take on a promotion branching model. It appears that he does name branches using version numbers, though that's not absolutely necessary.

One problem common to both approaches: if you really need to support multiple releases of the same product in parallel in the field, patching an older supported version becomes awkward.

Saturday, February 11, 2012

Oh no! There's code too!

I've decided to revive an ancient project of mine: the autodiscovering build system. I started working on this around 1995, and used it in several companies. It's kind of sad to see it languish, as I believe it to be superior to most systems used to date (of course I would)...

For this purpose, I placed a public mercurial repo on bitbucket, at

On most UNIX boxes, it should be possible to simply clone the repository, then perform the usual
sudo make install
You can then enter the examples subdirectory, and run the "b" (for build) command, installed as part of the system. This should build 4 samples, each representing a common C/C++ project scenario, and pack them up as an RPM.

Big caveat: I'm reviving this after about a six-year break, so some stuff is likely to break (and yes, I realize that these days, Debian packages are cooler than RPMs, so maybe I'll add a Debian packaging module).

My goal is to get this into a presentable state again, do a little bit of refactoring to expose a better plugin architecture, and then take some big C/C++ project (firefox?) and apply it. This might take a while.

The benefits of getting this done are:
  • reliable parallel and incremental builds, even after source tree re-orgs;
  • improved error logging by virtue of every construction step having its own log file;
  • improved build debugging by having the actual build scripts for every step explicitly available;
  • fine grained build configuration with exact tracing of the origins of every setting;
  • simplicity for the developer: "drop code here".
Some drawbacks exist, of course:
  • No IDE support - it's conceivable to add a project file generator to this, or leverage existing project file generators. Of course, once you add a project file generator, the benefits of the incremental builds go away...
  • Modern C/C++ toolchains tend to choke over the link step, rendering the benefit of incremental parallel builds less interesting
  • Expects a "traditional" large source tree. In other posts, I've been arguing that many small source trees, each generating archives or artifacts might be a better way to go.
So, lots of challenges ahead!

Friday, February 10, 2012

Building an Artifact Registry Service (part 2)

In a previous post, I explained how to map changeset ids to a monotonically increasing build number. The basic motivation for this was to create an entity which was usable as a version number, but mapped to a unique source code configuration.
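As a quick refresher, that mapping can be modeled in a few lines of Ruby - an in-memory stand-in for the real service, with invented names:

```ruby
# Hand out a monotonically increasing build number per unique
# source configuration (keyed by changeset id).
BUILD_NUMBERS = {}

def build_number_for(changeset_id)
  BUILD_NUMBERS[changeset_id] ||= BUILD_NUMBERS.size + 1
end

build_number_for('abc123')  # => 1
build_number_for('def456')  # => 2
build_number_for('abc123')  # => 1 (same configuration, same number)
```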

In this post, I'll build on this foundation and show how we can incrementally add changeset information as metadata attached to the build number.

Let's start with a single repo "A", and attach it to some sort of continuous build system.

It will see checkin Nr. 101, and produce build Nr. 1. We will associate checkin Nr. 101 to the build Nr. 1, saying "Build Nr 1 includes change Nr. 101".

Sometime later, checkin Nr. 102 occurs, and it will trigger build Nr. 2. Now, we associate change Nr. 102 to build Nr. 2, and then look at Nr. 102's ancestor, and notice that it has been built by build Nr 1. Now instead of including change Nr. 101, we will associate the build Nr. 1 to build Nr. 2, saying "Build Nr 2 includes build Nr. 1". The idea is that we can stop scanning for further changes at that point, since the previous build already includes all of them.

See how it works when three quick checkins happen in a row, and only at checkin Nr. 105 does the continuous build system kick in and produce build Nr. 3. Now our scan picks up changes Nr 103, 104 and 105 and includes them in build Nr. 3, but then notices that change Nr 102 is in build Nr 2, so it includes that in build Nr 3, and stops the scan.

The real kicker of this method is that we can re-use the "build includes build" relationship to express dependencies to other builds.

For example here: builds done in repo A use an artifact generated in builds done in Repo B. Say we have configured our continuous build system to kick off a build in repo A whenever a build in repo B finishes.

So while developers furiously check in changes, builds keep coming, and every time a build on repo A happens, it uses the latest build from repo B to build the final artifact from repo A. It behooves us to add the relationship that build Nr. 5 includes not only build Nr. 3 but also build Nr 4 from the other repository.

Now if we want to know what the difference between build Nr. 3 and build Nr. 5 is, we can simply start by following all the arrows from build Nr. 3 and cross off all the places we traverse, which would be builds Nr. 1 and 2 and changes Nr. 101, 102 and 201. Then we start at build Nr. 5 and start following arrows until we hit something we've already seen: This would be build Nr. 4 and changes Nr 103, 104, 105 and 202 and 203.
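In code, that walk is a straightforward graph traversal. The sketch below hard-codes the example graph from the figures (the ids and the INCLUDES structure are illustrative, not the actual service schema):

```ruby
# Each build maps to the changes and builds it directly includes.
INCLUDES = {
  'build 1' => ['change 101'],                          # repo A
  'build 2' => ['change 201'],                          # repo B
  'build 3' => ['build 1', 'build 2', 'change 102'],    # repo A
  'build 4' => ['build 2', 'change 202', 'change 203'], # repo B
  'build 5' => ['build 3', 'build 4',
                'change 103', 'change 104', 'change 105'], # repo A
}

# Everything transitively reachable from a build.
def closure(build)
  seen = []
  queue = INCLUDES.fetch(build, []).dup
  until queue.empty?
    item = queue.shift
    next if seen.include?(item)
    seen << item
    queue.concat(INCLUDES.fetch(item, []))
  end
  seen
end

# The delta is whatever the newer build reaches that the older one doesn't.
def delta(old_build, new_build)
  closure(new_build) - (closure(old_build) + [old_build])
end

delta('build 3', 'build 5').sort
# => ["build 4", "change 103", "change 104", "change 105", "change 202", "change 203"]
```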

Now let's assume nothing happens in repo A, but two more changes get put into repo B. This produces build Nr 6, which then kicks off a build on repo A.

This should create a build Nr 7, as shown. It is distinct from the previous build, as it uses a new artifact built on repo B, so a rebuild must occur even though nothing changed in repo A.

This shows that once we use dependent builds like this, we cannot simply map the changeset id (i.e. the number 105) to a build number, but we must use a hash composed of the changeset id of the repo where the build is occurring and all the build numbers the build depends on. In this case we would use "Changeset 105" + "Build 4" to create the hash that maps to build Nr. 5, and subsequently "Changeset 105" + "Build 6" to map to build Nr 7.
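A sketch of that hash, using invented names and SHA-1 purely for illustration:

```ruby
require 'digest'

# The key that identifies a build is a hash over the local changeset id
# plus the build numbers of every build it depends on (sorted, so the
# order in which dependencies are listed doesn't matter).
def build_key(changeset, dependency_builds)
  Digest::SHA1.hexdigest(([changeset] + dependency_builds.sort).join('|'))
end

build_key('changeset 105', ['build 4'])  # maps to build Nr. 5
build_key('changeset 105', ['build 6'])  # same source, new dependency: build Nr. 7
```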

Nothing changes in our "Find the delta between two builds" algorithm described above, it will correctly determine that the difference between build Nr. 5 and Nr. 7 are the changesets 204 and 205.

The beauty of this method is that it scales naturally to complex dependency graphs, and will allow mixing and matching of arbitrary builds from arbitrary repositories, as long as a unique identifier can be used to correctly locate the changeset used in the build in every repository.

In part 3, I'll be talking about additional metadata which we may wish to include in our artifact repository service, and how that service can become the cornerstone of your release management organization.

Thursday, February 9, 2012

Update of the Autodetecting Build System Demo

I recently wanted to show off my automated dependency generating build system and I noticed that the rpmbuild semantics have subtly changed in the past 6 years, so I uploaded a new version with a fix.

Check it out... I know that these days C/C++ based build systems are uncool, but just in case - I still think that in spite of (or rather because of) its age, this demo has value:
  • It shows how to incrementally regenerate dependencies, leading to reliable incremental builds, even after refactorings;
  • It shows how you can get a handle on build logs;
  • It's a great framework for managing build configurations
  • It works well with parallel builds.

Saturday, February 4, 2012

The Seven Deadly Sins of Build Systems

Build systems tend to be among the messiest code around. Arguably, many many coding sins are being perpetrated there, so it's kind of hard to pare it down to seven. 

I do think the following are the most common, and also the simplest to fix.

1. Changing Files In Place

As far as sins go, this one is rather minor. There may even be some good reasons to do this, but in most cases where I've seen it happen, it was due to a reluctance to admit that there actually is a proper, full scale code generation problem at hand.

Best is to avoid doing it: rename the file, for example by appending a .in or a .template suffix, and generate the target file via the build system.

Situations where this may be inconvenient are rare, but do exist. For example, if the generation step is not useful for developers, they might want a way to skip it - especially if the code generation produces files used in an IDE. Better, of course, would be to teach the IDE to generate the files itself, but that may be impractical in some cases.

If you must do it:
  • ensure your modification is idempotent, i.e. it doesn't care whether it has been run before
  • ensure it's OK for the modified files to be checked in as such. 
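As an illustration of the .in approach: the generation step can be as simple as token substitution, and it is naturally idempotent because the expanded output contains no markers left to expand. The @TOKEN@ convention below is an assumption, borrowed from autoconf style:

```ruby
# Expand @TOKEN@ markers in a .in template from a settings hash.
# Unknown tokens raise, so broken templates fail loudly at build time.
def expand_template(text, settings)
  text.gsub(/@(\w+)@/) { settings.fetch($1) }
end

expand_template('version = @VERSION@', 'VERSION' => '1.2.3')
# => "version = 1.2.3"
```

In a Rakefile this would hang off an ordinary file task, e.g. file 'config.mk' => 'config.mk.in' (file names here are examples).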

2. Mixing up Version Control with Builds

This sin is fortunately getting rarer, as people are adopting saner version control systems like mercurial or git. Nevertheless, it is still common that a build system will either access multiple branches of the source tree at the same time, or even perform checkouts.

The remedy is to collect your sources prior to starting the build.

If you are using a version control system that conflates branches with source trees (e.g. perforce and subversion), don't let branches appear in the source tree. Use the version control system to prepare your source tree from the right branches for you and only then let the build system loose.

3. Build Systems Performing Version Control Checkins

Unfortunately very common. The typical case is abusing the version control system to perform duties that should be handled by a different service, for example an artifact repository (to store built binaries) or a registry system (to store generated configurations or logs). If you do this, the obvious next question becomes how you would resolve merge conflicts stemming from those checkins. Assume they never happen?

4. Mixing up Build Configuration with Runtime Configuration

This is very common. This likely stems from the way most open source software is packaged and deployed:
  • Unpack source
  • configure
  • make install
This works brilliantly as long as you don't actually modify the sources (and you're willing to live with having a compiler installed on your production machines), and if your runtime environment is stable.

The important thing people miss is that this is a packaging and installation model, not a development model - and as a distribution model it might not even be good for you, as many shops wouldn't dream of shipping their sources and expecting their customers to build them locally.

Unfortunately, the lore has won, and many build systems continue to be modeled like open source packages, with two main effects:
  • Your build system spends a lot of time moving files around, aka installing them (where?)
  • Any change in the runtime environment (install location, names of other services, host names etc) requires a rebuild
The latter point introduces delays and risks during testing, as your software migrates between different environments to eventually end up in production.

5. Mixing up Build with Packaging and with Deploy

This is mostly just sloppy coding. In this day and age, it shouldn't require convincing that encapsulation, separation of concerns and clean APIs are good things. When designing build systems, it is helpful to consider a couple of little disaster scenarios:
  • Pretend you must change your version control system, now!
  • Pretend you must switch to a different packaging system, now of course!
  • Pretend you're switching deploy strategies (e.g. from an incremental update to wholesale re-imaging)
None of these should require any surgery in the build system itself.

6. Builds Spill Outside Their Designated Area

Otherwise known as "works on my machine". Three sources of evil:
  • Uncatalogued dependencies on outside files
  • Undocumented and unvalidated dependencies on the environment
  • Writing to locations shared by others
The antidote: never write anywhere you don't have exclusive access to, thereby ensuring you do not interfere with any other activities on the build host; always clean out and establish a well defined environment; and specifically test and validate any toolchain elements assumed to be pre-installed on the build host.

7. Labeling Builds

This one epitomizes the strong belief in rituals often found in build engineering.  All version control systems have ways to identify a unique source code configuration. If they didn't, they wouldn't be source code control systems. It is sufficient to register the parameters required to recreate the source code configuration. You need some sort of registry no matter what you do - and using labels for that job can actually be a lot more complicated than one might think at first:
  • You need to generate a label name. The complication here is about what to do when multiple builds are done at the same time, using the same source code configuration (e.g. multi-platform builds): use the same label? use multiple labels?
  • You need to ensure the labeling completes. In some systems, labeling is not an atomic operation. Instead, every file is labeled individually.
The Mercurial version control system exposes the complications of making labels work correctly in a decentralized environment by forcing you to check in and merge labels.

In the end, you will be producing many thousands of builds each year, and the vast majority of them are useless, never to be referenced again. Keeping track of all those labels can be quite a burden.

Instead: label the important builds, after you know that they really are important (e.g. when the products built from those sources are released). Then the presence of a label actually means something, and may help answer questions like "Where did you want me to clone from?".