
Faster build machines, faster end-to-end time

One metric Release Engineering focuses on a lot is the time between a commit to the source tree and when all the builds and tests for that revision are completed. We call this our end-to-end time. Our approach to improving this time has been to identify the longest pieces of the total end-to-end time and make them faster. One big chunk of the end-to-end time used to be wait times: how long a build or test waited for a machine to become free. We've addressed this by adding more build and test slaves to the pool. We've also focused on parallelizing the entire process, so instead of running builds and tests one after the other, we now run them in parallel.

[diagram: before vs. after, the serialized process replaced by the parallelized one (not exactly to scale, but you get the idea)]
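
To make the metric concrete, here's a minimal sketch (not our actual reporting code) of how end-to-end time is measured: the gap between the push and the moment the last build or test job for that revision finishes. The job names and timestamps below are made up.

```python
from datetime import datetime

# Hypothetical push and per-job completion times for one revision.
push_time = datetime(2010, 3, 1, 9, 0)
job_end_times = {
    "win32 build":     datetime(2010, 3, 1, 11, 10),
    "win32 unittests": datetime(2010, 3, 1, 12, 0),
    "win32 talos":     datetime(2010, 3, 1, 11, 45),
}

# End-to-end time is driven by whichever chain of jobs finishes last,
# which is why shortening the build (and running tests in parallel) helps.
end_to_end = max(job_end_times.values()) - push_time
print("end-to-end time:", end_to_end)
```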

Faster builds please!

After splitting up tests and reducing wait times, the longest part of the entire process was the actual build time. Last week we added 25 new machines to the pool of build slaves for Windows. These new machines are real hardware (not VMs), with a quad-core 2.4 GHz processor, 4 GB RAM, and dedicated hard drives. We'd been expecting the new build machines to crank out builds much faster, and they do.

[graph: win32 end-to-end time on mozilla-central; the break in the graph is the period when mozilla-central was closed due to the PGO link failure bug]

We started rolling the new builders into production on February 22nd (where the orange diamonds start on the graph), and you can see the end-to-end time immediately start to drop. From February 1st to the 22nd, our average end-to-end time for win32 on mozilla-central was 4h09. Since the 22nd, the average has dropped to 3h02. That's over an hour better on average, a 26% improvement.

Faster builds mean tests can start sooner, which means tests finish sooner, which means a better end-to-end time. It also means a build machine is free to pick up another build sooner, so we're hoping these faster build machines will also improve our wait time situation (but see below). Currently we're limited to running the build on these machines with -j1 because of random hangs in the build when using -j2 or higher (bug 524149). Once we fix that, or move to pymake, we should see even better improvements.
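
As a rough illustration of that last point, here's a hedged sketch (not our actual build automation) of how the make parallelism on the new builders might be pinned at -j1 until bug 524149 is fixed; the make invocation and flag handling here are illustrative assumptions.

```python
import multiprocessing

# Flip this once the random hangs at -j2 or higher (bug 524149) are fixed,
# or once the build moves to pymake.
PARALLEL_HANGS_FIXED = False

# Use all cores once parallel make is safe; stay at a single job until then.
jobs = multiprocessing.cpu_count() if PARALLEL_HANGS_FIXED else 1
make_cmd = ["make", "-f", "client.mk", "build", "-j%d" % jobs]
print(" ".join(make_cmd))  # currently: make -f client.mk build -j1
```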

What about OSX?

In preparation for deploying these new hardware build machines, we also implemented some prioritization algorithms to choose fast build machines over slow ones, and to try to choose a machine that already has a recent object directory (to make our incremental builds faster). This has helped us out quite a bit on our OSX end-to-end times as well, where we have a mixed pool of xserves and minis doing builds and tests.

[graph: OSX end-to-end time]

Simply selecting the right machine for the job reduced our end-to-end time from 3h12 to 2h13, again almost an hour's improvement, or 30% better.
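
To make the idea concrete, here's a minimal sketch of that kind of machine-selection logic, not our production scheduling code; the slave names and fields are hypothetical.

```python
# Among the idle slaves for a builder, prefer fast hardware first, then the
# slave whose object directory for this builder is most recent, so the
# incremental build has less work to do.
def pick_slave(idle_slaves, builder_name):
    if not idle_slaves:
        return None

    def score(slave):
        fast = slave.get("fast_hardware", False)
        # timestamp of this builder's objdir on the slave, 0 if it has none
        objdir_mtime = slave.get("objdir_times", {}).get(builder_name, 0)
        return (fast, objdir_mtime)

    return max(idle_slaves, key=score)


# Example (made-up machines): an xserve with a warm objdir beats a mini.
slaves = [
    {"name": "mini-017", "fast_hardware": False, "objdir_times": {}},
    {"name": "xserve-05", "fast_hardware": True,
     "objdir_times": {"OS X 10.5 mozilla-central build": 1266861600}},
]
print(pick_slave(slaves, "OS X 10.5 mozilla-central build")["name"])  # xserve-05
```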

What's next?

We have 25 Linux machines that should be coming into our production pools this week. We'll continue to monitor end-to-end and wait times over the next few weeks to see how everything is behaving. One thing I'm watching out for is that faster builds mean we can produce more builds in less time... which means more builds to test! Without enough machines running tests, we could end up making wait times, and therefore our end-to-end times, worse. We've already begun work on handling this: our plan is to start running the unittests on the Talos hardware... but that's another post!
