
Posts about technology (old posts, page 8)

Two great, completely unrelated links

Yesterday was a bit of an overwhelming day. After getting home at 1am from a long bus ride, I was unwinding by catching up on some news and email. I came across these two links, both of which really lifted my mood. The first, Grokking the Zen of the Vi Wu-Wei, talks about a programmer's journey from emacs to BBEdit to vim. This post is a great read in and of itself, but what really makes it worth it is the link around the middle of the post to http://stackoverflow.com/questions/1218390/what-is-your-most-productive-shortcut-with-vim/1220118#1220118. This was truly a joy to read. It's definitely the best answer I've ever seen on Stack Overflow, and quite possibly the best discussion of vi I've ever read. It taught me a lot, but I enjoyed reading it for more than that. It was almost like being on a little adventure, discovering all these little hidden secrets about the neighbourhood you've been living in for years. Like I said, it was 1am. The second, The Pope, the judge, the paedophile priest and The New York Times, gave me some reassurance that things aren't always as the media report them. Regardless of how you feel about the Church or the Pope, it seems that journalistic integrity has fallen by the wayside here. From the article:

Fr Thomas Brundage, the former Archdiocese of Milwaukee Judicial Vicar who presided over the canonical criminal case of the Wisconsin child abuser Fr Lawrence Murphy, has broken his silence to give a devastating account of the scandal – and of the behaviour of The New York Times, which resurrected the story. It looks as if the media were in such a hurry to blame the Pope for this wretched business that not one news organisation contacted Fr Brundage. As a result, crucial details were unreported.
The entire article is worth a read.

Buildbot performance and scaling

It seems like it was ages ago when I posted about profiling buildbot. One of the hot spots identified there was the dataReceived call. This has been sped up a little bit in recent versions of Twisted, but our buildbot masters were still severely overloaded. It turns out that the buildbot slaves make a lot of RPC calls when sending log data, which results in tens of thousands of dataReceived calls. Multiply that by several dozen build slaves sending log data, peaking at a combined throughput of 10 megabits/s, and you've got an awful lot of data to handle. By adding a small slave-side buffer, the number of RPC calls made to send log data is reduced dramatically (by an order of magnitude in some tests), resulting in a much better load situation on the master. This is good for us, because it means the masters are much more responsive, and it's good for everybody else because it means we have fewer failures and less wasted time due to the master being too busy to handle everything. It also means we can throw more build slaves onto the masters! The new code was deployed towards the end of the day on the 26th, or the end of the 12th week.
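To make the idea concrete, here's a minimal sketch of slave-side buffering. This is illustrative only, not the actual buildbot patch: the class name, method names, and thresholds are all invented.

import time

class BufferedLogSender:
    """Accumulate log output on the slave and flush it in larger chunks,
    instead of making one RPC call per small piece of log data.
    Illustrative sketch only; names and thresholds are invented."""

    def __init__(self, remote_log, max_bytes=16 * 1024, max_delay=5.0):
        self.remote_log = remote_log    # hypothetical object with a sendLogChunk() RPC
        self.max_bytes = max_bytes      # flush once this much data is buffered...
        self.max_delay = max_delay      # ...or once this many seconds have passed
        self.buffer = []
        self.buffered_bytes = 0
        self.last_flush = time.time()

    def add(self, data):
        self.buffer.append(data)
        self.buffered_bytes += len(data)
        if (self.buffered_bytes >= self.max_bytes or
                time.time() - self.last_flush >= self.max_delay):
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        # One RPC call for the whole buffered chunk instead of many small ones.
        self.remote_log.sendLogChunk("".join(self.buffer))
        self.buffer = []
        self.buffered_bytes = 0
        self.last_flush = time.time()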

One useful script, a linux version

Johnathan posted links to 3 scripts he finds useful. His sattap script looked handy, so I hacked it up for Linux. Run it to do a screen capture and upload the image to a web server you have ssh access to. The link is printed out and put into the clipboard. Hope you find this useful!


#!/bin/sh

# sattap - Send a thing to a place
# Requires ImageMagick's `import` for the screen capture and xclip
# for putting the URL into the clipboard.

set -e

SCP_USER='catlee'
SCP_HOST='people.mozilla.org'
SCP_PATH='~/public_html/sattap/'

HTTP_URL="http://people.mozilla.org/~catlee/sattap/"

# Generate a short, (mostly) unique filename for the capture
FILENAME=`date | md5sum | head -c 8`.png
FILEPATH=/tmp/$FILENAME

echo "Capturing..."
import $FILEPATH

echo "Copying to $SCP_HOST"
scp $FILEPATH ${SCP_USER}@${SCP_HOST}:$SCP_PATH

echo "Deleting local copy"
rm $FILEPATH

echo $HTTP_URL$FILENAME | xclip -selection clipboard
echo "Your file should be at $HTTP_URL$FILENAME, which is also in your paste buffer"

What happens when you push

As of November 1st, when you push a change to mozilla-central, the following builds and tests get triggered:

  • Linux optimized build
    • mochitest 1/5
    • mochitest 2/5
    • mochitest 3/5
    • mochitest 4/5
    • mochitest 5/5
    • everythingelse
    • Talos
    • Talos nochrome
    • Talos jss
    • Talos dirty
    • Talos tp4
    • Talos cold
  • Linux debug build + leak tests
    • mochitest 1/5
    • mochitest 2/5
    • mochitest 3/5
    • mochitest 4/5
    • mochitest 5/5
    • everythingelse
  • Linux optimized + refcounting build
    • mochitest 1/5
    • mochitest 2/5
    • mochitest 3/5
    • mochitest 4/5
    • mochitest 5/5
    • everythingelse
  • Windows optimized build
    • mochitest 1/5
    • mochitest 2/5
    • mochitest 3/5
    • mochitest 4/5
    • mochitest 5/5
    • everythingelse
    • XP Talos
    • XP Talos nochrome
    • XP Talos jss
    • XP Talos dirty
    • XP Talos tp4
    • Vista Talos
    • Vista Talos nochrome
    • Vista Talos jss
    • Vista Talos dirty
    • Vista Talos tp4
  • Windows debug build + leak tests
    • mochitest 1/5
    • mochitest 2/5
    • mochitest 3/5
    • mochitest 4/5
    • mochitest 5/5
    • everythingelse
  • Windows optimized + refcounting build
    • mochitest 1/5
    • mochitest 2/5
    • mochitest 3/5
    • mochitest 4/5
    • mochitest 5/5
    • everythingelse
  • Mac OSX optimized build
    • mochitest 1/5
    • mochitest 2/5
    • mochitest 3/5
    • mochitest 4/5
    • mochitest 5/5
    • everythingelse
    • Leopard Talos
    • Leopard Talos nochrome
    • Leopard Talos jss
    • Leopard Talos dirty
    • Leopard Talos tp4
    • Leopard Talos cold
  • Mac OSX debug build + leak tests
    • mochitest 1/5
    • mochitest 2/5
    • mochitest 3/5
    • mochitest 4/5
    • mochitest 5/5
    • everythingelse
  • Mac OSX optimized + refcounting build
    • mochitest 1/5
    • mochitest 2/5
    • mochitest 3/5
    • mochitest 4/5
    • mochitest 5/5
    • everythingelse
  • Linux 64-bit build
  • Maemo Build
    • mochitest chrome
    • crashtest
    • mochitest 1/4
    • mochitest 2/4
    • mochitest 3/4
    • mochitest 4/4
    • reftest
    • xpcshell
    • Talos Tdhtml
    • Talos Tgfx
    • Talos Tp3
    • Talos Tp4
    • Talos Tp4 nochrome
    • Talos Ts
    • Talos Tsspider
    • Talos Tsvg
    • Talos Twinopen
    • Talos non-Tp1
    • Talos non-Tp2
  • WinCE build
  • Windows Mobile build
  • Linux Fennec Desktop build
  • Windows Fennec Desktop build
  • Mac OSX Fennec Desktop build
That's 111 distinct build and test jobs spread out across our build and test pools. Each checkin uses a total of about 40 machine hours in our main build, test, and Talos pools, plus an additional 25 machine hours on the mobile devices!!! In addition, we do certain types of jobs on a periodic basis:
  • Nightly builds
  • XULRunner builds
  • Shark builds
  • Code coverage runs
  • L10n repacks for 72 locales and 7 platforms (Windows, Mac OSX, Linux, Windows Fennec, Mac OSX Fennec, Linux Fennec, Maemo); that's 504 individual repacks!
In the course of collecting the data for this post, I've been constantly amazed at the amount of stuff that we're doing, and the scale of the infrastructure! The list above is just for our mozilla-central branch, and I've most likely missed something. We do similar amounts of work for our other branches as well: Try, mozilla-1.9.2, mozilla-1.9.1, TraceMonkey, Electrolysis, and Places. Things have certainly changed a lot in the past year.

When do tests get run?

Continuing our RelEng Blogging Blitz, I'm going to be discussing how and when tests get triggered in our build automation systems. We've got two basic classes of tests right now: unit tests, and performance tests, a.k.a. Talos. The unit tests are run on the same pool of machines that the builds are done on, while the performance tests are run on a separate pool of around 100 Mac Minis. Both kinds of tests are triggered in similar ways.

For refcounting ("unittest") builds, once the compile step is complete, the binaries are packaged up with make package, the tests are packaged up with make package-tests, the symbols are packaged up with make buildsymbols, and then the whole lot is uploaded to stage.mozilla.org using make upload. Once they're uploaded, we have valid URLs that refer to the builds, tests, and symbols. We then trigger the relevant unit test runs on that build. When a slave is assigned this test run, it then downloads the build, tests, and symbols from stage and starts running the tests. On mozilla-central, we've also recently started to run unittests on optimized and debug builds. We're hoping to bring this functionality to mozilla-1.9.2 once all the kinks are worked out.

For regular optimized builds, in addition to unittests, we also trigger performance tests on the freshly minted build. OSX builds are currently tested on Tiger and Leopard for mozilla-1.9.1 and mozilla-1.9.2, and on Leopard only for mozilla-central and project branches. Windows builds are tested on XP and Vista, and Linux builds are tested on Ubuntu.

In addition to having tests triggered automatically by builds, the Release Engineering Sheriff can re-run unittests or performance tests on request!
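As a rough sketch of that flow (not our actual configs; the builder and scheduler names here are invented), a unittest build factory in buildbot terms looks something like this:

from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import Compile, ShellCommand
from buildbot.steps.trigger import Trigger

f = BuildFactory()
f.addStep(Compile(command=["make", "-f", "client.mk", "build"]))
# Package the binaries, tests, and symbols, then upload them so the
# test machines have URLs to download from.
f.addStep(ShellCommand(command=["make", "package"], description="packaging build"))
f.addStep(ShellCommand(command=["make", "package-tests"], description="packaging tests"))
f.addStep(ShellCommand(command=["make", "buildsymbols"], description="packaging symbols"))
f.addStep(ShellCommand(command=["make", "upload"], description="uploading to stage"))
# Once the upload is done, kick off the test runs that will download those URLs.
f.addStep(Trigger(schedulerNames=["linux-opt-unittests"], waitForFinish=False))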

When do builds happen?

As part of our RelEng Blogging Blitz, I'll give a quick overview of when and how builds get triggered on our main build infrastructure. There are three ways builds can be triggered.

The first, and most common, way is when a developer pushes his or her changes to hg.mozilla.org. Our systems check for new changes every minute or so, and put new changes into a queue. Once the tree has been quiet for 3 minutes (i.e. no changes for 3 minutes), a new build request is triggered with all queued changes. If there is a free slave available, then a new build starts immediately; otherwise the build request is put in a queue.

The second way builds are triggered is via a nightly scheduler. We start triggering builds on branches at 3:02am Pacific time (some branches are triggered at 3:32am or 4:02am). We run at 3:02am to avoid problems with the transitions between daylight saving and standard time: in the fall, when we go back to standard time, one hour of the night repeats, and in the spring transition an hour is skipped entirely. The start times are staggered to avoid slamming hg.mozilla.org, or other shared resources.

The last way builds can be triggered is manually. The Release Engineering Sheriff can trigger builds on specific revisions, or rebuild past builds pretty easily, so if you need a build triggered, contact your friendly neighbourhood RelEng Sheriff!
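In buildbot terms, the first two mechanisms correspond roughly to the following sketch (simplified; the real configs are much more involved, and the builder names here are invented):

from buildbot.scheduler import Scheduler, Nightly

# Build on checkin, but only after the tree has been quiet for 3 minutes.
on_checkin = Scheduler(
    name="mozilla-central",
    branch="mozilla-central",
    treeStableTimer=3 * 60,   # seconds of quiet before a build request is created
    builderNames=["linux-opt", "win32-opt", "macosx-opt"],
)

# Nightly builds at 3:02am, staggered per branch, to stay clear of the
# 2-3am daylight saving transitions and avoid slamming shared resources.
nightly = Nightly(
    name="mozilla-central-nightly",
    branch="mozilla-central",
    builderNames=["linux-nightly", "win32-nightly", "macosx-nightly"],
    hour=3,
    minute=2,
)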

Faster signing

Once upon a time, when Mozilla released a new version of Firefox, signing all of the .exes and .dlls for all of the locales took a few hours. As more locales were added and file sizes increased, that time grew to over 8 hours to sign all the locales in a Firefox 3.5 release. This has been a huge bottleneck for getting new releases out the door. For our most recent Firefox releases, we've started using some new signing infrastructure that I've been working on over the past few months. There are quite a few reasons why the new infrastructure is faster:

  • Faster hardware. We've moved from an aging single core system to a new quad-core 2.5 GHz machine with 4 GB of RAM.
  • Concurrent signing of locales. Since we've got more cores on the system, we should take advantage of them! The new signing scripts spawn off 4 child processes, each one grabs one locale at a time to process.
  • In-process compression/decompression. Our complete.mar files use bz2 compression for every file in the archive. The old scripts would call the 'bzip2' and 'bunzip2' binaries to do compression and decompression. It's significantly faster to do these operations in-process.
  • Better caching of signed files. The biggest win came from the simple observation that after you sign a file, and re-compress it to include in a .mar file, you should be caching the compressed version to use later. The old scripts did cache signed files, but only the decompressed versions. So to sign the contents of our mar files, the contents would have to be completely unpacked and decompressed, then the cache was checked, files were signed or pulled from cache as necessary, and then re-compressed again. Now, we unpack the mar file, check the cache, and only decompress / sign / re-compress any files that need signing but aren't in the cache. We don't even bother decompressing files that don't need signing, another difference from the original scripts. Big thanks to Nick Thomas for having the idea for this at the Mozilla All-Hands in April.
As a result of all of this, signing all our locales can be done in less than 15 minutes now! See bug 470146 for the gory details. The main bottleneck at this point is the time it takes to transfer all of the files to the signing machine and back again. For a 3.5 release, there's around 3.5 GB of data to transfer back and forth, which takes on the order of 10 minutes to pull down to the signing machine, and another few minutes to push back after signing is done.
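The caching change in particular is easy to sketch. The following is illustrative only: sign_file stands in for the real signing step, and the cache layout is invented.

import bz2
import hashlib
import os

CACHE_DIR = "/var/cache/signing"   # hypothetical cache location

def sign_mar_member(compressed_data, sign_file):
    # Key the cache on a hash of the compressed, unsigned content.
    digest = hashlib.sha1(compressed_data).hexdigest()
    path = os.path.join(CACHE_DIR, digest)

    # Cache hit: reuse the signed, already re-compressed bytes and skip
    # the decompress / sign / re-compress cycle entirely.
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()

    # Cache miss: decompress, sign, and re-compress in-process with bz2,
    # then store the compressed result for next time.
    signed = sign_file(bz2.decompress(compressed_data))
    recompressed = bz2.compress(signed)
    with open(path, "wb") as f:
        f.write(recompressed)
    return recompressed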