
Gardening Update

My square foot garden is coming along quite nicely now! I put in the last transplants from a local nursery a few weeks ago: strawberries and some sweet red peppers. I'm still a bit undecided on whether it's worth it to start from seed and then transplant or not. The plants that managed to survive the transplanting process and the onslaught of various pests in the spring are much smaller than plants I've seen at the local farmer's market. I've actually picked up some tomatoes and herbs at the farmer's market to supplement the ones I started from seed. All the lettuce, Swiss chard, cucumber and melons that are growing right now I seeded directly into the ground. I think I probably need to make sure the seedlings are getting enough light, and that there are enough nutrients in the soil for the seedlings and transplants.

[Photos: herbs and peppers; onions, carrots and lettuce]

I've managed to harvest quite a few radishes already, and even some cilantro and basil! I'm a bit worried about the garlic...it looks a bit sickly lately. The onions look great though! I haven't had too many problems with pests lately. There are nibbles on some leaves of most plants, but nothing really major. I love going out to check how the plants are growing every day. It looks like I'll be able to start harvesting some lettuce and Swiss chard soon!

Pooling the Talos slaves

One of the big projects for me this quarter was getting our Talos slaves configured as a pool of machines shared across branches; the details are being tracked in bug 488367 for those interested. This is a continuation of our work on pooling our slaves, like we've done over the past year with our build, unittest, and l10n slaves. Up until now each branch has had a dedicated set of Mac Minis to run performance tests for just that branch, on five different operating systems. For example, the Firefox 3.0 branch used to have 19 Mac Minis doing regular Talos tests: 4 of each platform (except for Leopard, which had 3). Across our 4 active branches (Firefox 3.0, 3.5, 3.next, and TraceMonkey), we have around 80 minis in total. That's a lot of minis!

What we've been working towards is to put all the Talos slaves into one pool that is shared between all our active branches. Slaves will be given builds to test in FIFO order, regardless of which branch the build is produced on. This new pool will be....

Faster

With more slaves available to all branches, the time to wait for a free slave will go down, so testing can start more quickly...which means you get your results sooner!

Smarter

It will be able to handle varying load between branches. If there's a lot of activity on one branch, like on the Firefox 3.5 branch before a release, then more slaves will be available to test those builds and won't be sitting idle waiting for builds from low activity branches.

Scalable

We will be able to scale our infrastructure much better using a pooled system. Similar to how moving to pooled build and unittest slaves has allowed us to scale based on the number of checkins rather than the number of branches, having pooled Talos slaves will allow us to scale our capacity based on the number of builds produced rather than the number of branches. In the current setup, each new release or project branch requires an allocation of at least 15 minis dedicated to that branch. Once all our Talos slaves are pooled, we will be able to add Talos support for new project or release branches with a few configuration changes instead of waiting for new minis to be provisioned. This means we can get up and running with new project branches much more quickly!

More Robust

We'll also be in a much better position in terms of maintaining the machines. When a slave goes offline, the test coverage for any one branch won't be jeopardized, since we'll still have the rest of the slaves to test builds from that branch. In the current setup, if one or two machines of the same platform need maintenance on one branch, then our performance test coverage of that branch is significantly impacted. With only one or two machines remaining to run tests on that platform, it can be difficult to determine whether a performance regression is caused by a code change or by some machine issue. Losing two or three machines in this scenario is enough to close the tree, since we no longer have reliable performance data. With pooled slaves we would see a much more gradual decrease in coverage when machines go offline. It's the difference between losing one third of the machines on your branch and losing one tenth.

When is all this going to happen?

Some of it has started already! We have a small pool of slaves testing builds from our four branches right now. If you know how to coerce Tinderbox to show you hidden columns, you can take a look for yourself. They're also reporting to the new graph server using machine names starting with 'talos-rev2'. We have some new minis waiting to be added to the pool; together with the existing slaves, this will give us around 25 machines in total to start off the new pool. This isn't enough yet to test every build from each branch without skipping any, so for the moment the pool will skip to the most recent build per branch if there's any backlog. It's worth pointing out that our current Talos system also skips builds if there's any backlog. However, our goal is to turn off skipping once we have enough slaves in the pool to handle our peak loads comfortably. After this initial batch is up and running, we'll be waiting for a suitable time to start moving the existing Talos slaves into the pool. All in all, this should be a big win for everyone!
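
For the curious, here's a rough sketch in Python of the dispatch behaviour described above: a single pending queue shared by every branch, handed out to free slaves in FIFO order, with the temporary skip-to-newest-build-per-branch behaviour applied when there's a backlog. The class and names here are made up purely for illustration; this isn't the actual buildbot scheduler code.

from collections import deque

class TalosPool(object):
    """Toy model of a shared Talos pool (illustrative only)."""

    def __init__(self, skip_backlog=True):
        # Pending builds from all branches, oldest first.
        self.pending = deque()
        self.skip_backlog = skip_backlog

    def add_build(self, branch, build_id):
        self.pending.append((branch, build_id))

    def next_build(self):
        """Hand the next build to a free slave, FIFO across branches."""
        if not self.pending:
            return None
        branch, build_id = self.pending.popleft()
        if self.skip_backlog:
            # While the pool is small, collapse any backlog: only the most
            # recent pending build for this branch gets tested.
            newer = [b for (br, b) in self.pending if br == branch]
            if newer:
                self.pending = deque(
                    (br, b) for (br, b) in self.pending if br != branch)
                build_id = newer[-1]
        return branch, build_id

pool = TalosPool()
pool.add_build("mozilla-1.9.1", 1)
pool.add_build("tracemonkey", 2)
pool.add_build("mozilla-1.9.1", 3)
print(pool.next_build())   # ('mozilla-1.9.1', 3): the older 1.9.1 build was skipped
print(pool.next_build())   # ('tracemonkey', 2)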

Testing my thumb colour

I've started a little vegetable patch in the backyard this year. I heard some folks at work talking about square foot gardening, so I thought I would give it a shot. So far I've had mixed success. Plants started directly in the garden have done great so far. These include red and white onions, garlic, carrots and radishes. I think 100% of the garlic, radishes and carrots I planted have sprouted, and about 85% of the onions have sprouted.

[Photo: garlic and onions]

Germinating seeds has also gone fairly well. Almost all the seeds I've started indoors have germinated and gotten to the point where I want to transplant them either outside or into a larger container...and it's this transplanting thing that's the hardest part so far. None of my broccoli, cauliflower, lettuce, Swiss chard, cantaloupe or eggplant have survived transplanting. Four (out of four) tomato plants survived going from seedlings into bigger pots, and I just put them in the ground yesterday. This evening 3 of them were still upright, so I'm hopeful there. One of two basil plants is still alive...the other one looks like somebody came along and cut it off at the stem, which is a bit strange. It's not the end of the world; I'm planning on getting some already-started plants at our local farmer's market in June to make up for any of the plants that I couldn't get started. I just had no idea that transplanting was so tricky!

Has anybody else noticed how computer "hackers" also tend to be interested in hacking other parts of their lives? Gardening, cooking, photography - all allow you to have really fine control over parts of complicated processes, and let you play with how changing one piece affects the whole.

Parallelizing Unit Tests

Last week we flipped the switch and turned on running unit tests on packaged builds for our mozilla-1.9.1, mozilla-central, and tracemonkey branches. What this means is that our unit test builds are now uploaded to a web server along with all their unit tests. Another machine will then download the build and tests, and run various test suites on them. Splitting up the tests this way allows us to run the test suites in parallel, so the mochitest suite will run on one machine, and all the other suites will be run on another machine (this group of tests is creatively named 'everythingelse' on Tinderbox).

[Diagram: parallel tests]

Splitting up the tests is a critical step towards reducing our end-to-end time, which is the total time elapsed between when a change is pushed into one of the source repositories and when all of the results from that build are available. Up until now, you had to wait for all the test suites to be completed in sequence, which could take over an hour in total. Now that we can split the tests up, the wait time is determined by the longest test suite. The mochitest suite is currently the biggest chunk here, taking somewhere around 35 minutes to complete, while all of the other tests combined take around 20 minutes. One of the next steps for us is to look at splitting the mochitests into smaller pieces.

For the time being, we will continue to run the existing unit tests on the same machine that creates the build. This is so that we can make sure that running tests on the packaged builds gives us the same results (there are already some known differences: bug 491675, bug 475383).

Parallelizing the unit tests, and the infrastructure required to run them, is the first step towards achieving a few important goals:

  • Reducing end-to-end time.
  • Running unit tests on debug as well as on optimized builds. Once we've got both of these going, we can turn off the builds that are currently done solely to be able to run tests on them.
  • Running unit tests on the same build multiple times, to help isolate intermittent test failures.

All of the gory details can be found in bug 383136.
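
To put some rough numbers on the end-to-end time savings (using the approximate suite durations quoted above, not measured values):

# Rough comparison of end-to-end test time, in minutes, using the
# approximate suite durations mentioned above.
suite_times = {"mochitest": 35, "everythingelse": 20}

sequential = sum(suite_times.values())   # all suites on one machine: ~55 min
parallel = max(suite_times.values())     # suites on separate machines: ~35 min

print("sequential: ~%d min, parallel: ~%d min" % (sequential, parallel))

Splitting mochitest itself into smaller chunks would bring that maximum down further, which is why it's the next target.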

Upcoming Identity Management with Weave

I was really excited to read a recent post about upcoming identity support with Weave on the Mozilla Labs blog. Why is this so cool? Weave lets you securely synchronize parts of your browser profile between different machines. All your bookmarks, AwesomeBar history, and saved passwords can be synchronized between your laptop, desktop and mobile phone. Your data is always encrypted with a private key that only you have access to. Combine this with intelligent form-filling and automatic detection of OpenID-enabled sites, and you've got what is essentially single sign-on to all your websites from all your browsers. Now you'll be able to sign into Firefox, and Firefox will know how to sign into all your websites. Keep up the great work, Labs!

poster 0.4 released

I'm happy to announce the release of poster version 0.4. This is a bug fix release, which fixes problems when trying to use poster over a secure connection (https). I've also reworked some of the code so that it can hopefully work with python 2.4. It passes all the unit tests that I have under python 2.4 now, but since I don't normally use python 2.4, I'd be interested to hear about other people's experience with it. One of the things that I love about working on poster, and about open source software in general, is hearing from users all over the world who have found it helpful in some way. It's always encouraging to hear about how poster is being used, so thank you to all who have e-mailed me! poster can be downloaded from my website, or from the cheeseshop. As always, bug reports, comments, and questions are welcome.
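
If you haven't used poster before, basic usage looks something like this (a minimal sketch based on poster's documented API; the URL and file name are placeholders, and with 0.4 the same pattern should now also work over https):

import urllib2

from poster.encode import multipart_encode
from poster.streaminghttp import register_openers

# Register poster's streaming HTTP handlers with urllib2.
register_openers()

# Build a streaming multipart/form-data body for the file upload.
datagen, headers = multipart_encode({"image1": open("test.jpg", "rb")})

request = urllib2.Request("http://example.com/upload", datagen, headers)
print(urllib2.urlopen(request).read())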

XULRunner Nightlies now available

As Mossop mentioned, I've been working for the past week on getting XULRunner nightly builds up and running. I'm happy to announce that they're now available! The first builds of XULRunner for Linux (i686 and x86_64), Windows and OS X (i386 and ppc) for both mozilla-1.9.1 and mozilla-central (1.9.2) are finishing up, and are available (or will be soon!) at http://ftp.mozilla.org/pub/mozilla.org/xulrunner/nightly/. Fresh builds will be available every night. Enjoy!

Around the Bay Race

I ran my first Around the Bay Race yesterday! I was pretty nervous when I got up Sunday morning at 6. It was pouring rain, and the weather forecast for Hamilton said that the rain wouldn't be letting up until the afternoon. I also had to really ease up on my training over the last 3 weeks due to a cold. I tried pushing myself to do the long runs with the cold, but I could barely manage to finish 5km. I managed to force a bowl of oatmeal down my throat, along with a few cups of coffee, despite my nervous stomach. I knew I'd need the energy from the oatmeal, and I didn't want to deal with caffeine withdrawal during the race!

When we arrived in Hamilton, it was still pouring rain. Ugh. And 5°C. Ugh. At least it wasn't freezing rain! I felt a bit cold in the first few kilometers, but after my body warmed up I was pretty comfortable. I concentrated on keeping a comfortable pace; I didn't want to burn out too early. I started out behind the 3:00 pace bunny, and passed him around 10 kilometers in. Those first 10 kilometers really flew by. I was doing around 5:30 per kilometer and feeling pretty relaxed, with plenty of fuel still left in the tank. The rain was still off and on, but I didn't really notice.

After crossing the bridge and turning west along North Shore Boulevard, I knew the hardest part of the race was coming up. It's about 9km of hills, finishing with one killer hill just past the Woodland Cemetery. I was actually really excited when I got to that hill. I had done it before, so I knew what to expect. I had plenty of energy left, so I really powered up the hill. After getting to the top, I felt like I had all but finished the race! There was a little more than 3km left to go, on pretty much flat ground.

Man, those last 3km were killers. I was running into the wind at that point, and my knees were starting to protest against all this abuse. My average pace had slowed to 5:48 by this point, and my current pace was around 6:10. I tried to speed up a bit, but my legs just weren't listening any more. I had wanted to finish with an average pace of 5:45, but by this point in the race I knew that wasn't going to happen, so I concentrated on just finishing. Those last few kilometers went by very, very slowly.

Finally, the Copps Coliseum was in view. Only a few hundred meters left! Entering the coliseum was a great feeling; I found the energy to pick up my pace a little and make a strong finish. And nothing was better than having Melissa and Thomas there to greet me after I finished!

Overall, a great race despite the weather. Now I need to figure out if I should sign up for the Mississauga Marathon! The official course map is available on Google Maps, and my GPS track is available on MapMyRun.com (hopefully I followed the course!).

Exporting MQ patches

I've been trying to use Mercurial Queues to manage my work on different tasks in several repositories. I try to name all my patches after the bug they're related to; so for my recent work on getting Talos to stop skipping builds, I would call my patch 'bug468731'. I noticed that I was running this series of steps a lot:

cd ~/mozilla/buildbot-configs
hg qdiff > ~/patches/bug468731-buildbot-configs.patch
cd ~/mozilla/buildbotcustom
hg qdiff > ~/patches/bug468731-buildbotcustom.patch

...and then uploading the resulting patch files as attachments to the bug. There's a lot of repetition and extra mental work in those steps:

  • I have to type the bug number manually twice. This is annoying, and error-prone. I've made a typo on more than one occasion and then wasted a few minutes trying to track down where the file went.
  • I have to type the correct repository name for each patch. Again, I've managed to screw this up in the past. Often I have several terminals open, one for each repository, and I can get mixed up as to which repository I've currently got active.
  • Mercurial already knows the bug number, since I've used it in the name of my patch.
  • Mercurial already knows which repository I'm in.
I wrote the mercurial extension below to help with this. It will take the current patch name, and the basename of the current repository, and save a patch in ~/patches called [patch_name]-[repo_name].patch. It will also compare the current patch to any previous ones in the patches directory, and save a new file if the patches are different, or tell you that you've already saved this patch. To enable this extension, save the code below somewhere like ~/.hgext/mkpatch.py, and then add "mkpatch = ~/.hgext/mkpatch.py" to your .hgrc's extensions section. Then you can run 'hg mkpatch' to automatically create a patch for you in your ~/patches directory!

import os, hashlib

from mercurial import commands, util
from hgext import mq

def mkpatch(ui, repo, *pats, **opts):
    """Saves the current patch to a file called -.patch
    in your patch directory (defaults to ~/patches)
    """
    # Use the basename of the repository's default path as the repo name.
    repo_name = os.path.basename(ui.config('paths', 'default'))
    if opts.get('patchdir'):
        patch_dir = opts.get('patchdir')
        del opts['patchdir']
    else:
        patch_dir = os.path.expanduser(ui.config('mkpatch', 'patchdir', "~/patches"))

    # Ask mq for the name of the current (topmost applied) patch.
    ui.pushbuffer()
    mq.top(ui, repo)
    patch_name = ui.popbuffer().strip()

    if not os.path.exists(patch_dir):
        os.makedirs(patch_dir)
    elif not os.path.isdir(patch_dir):
        raise util.Abort("%s is not a directory" % patch_dir)

    # Capture the qdiff output and hash it so identical patches aren't saved twice.
    ui.pushbuffer()
    mq.diff(ui, repo, *pats, **opts)
    patch_data = ui.popbuffer()
    patch_hash = hashlib.new('sha1', patch_data).digest()

    full_name = os.path.join(patch_dir, "%s-%s.patch" % (patch_name, repo_name))
    # If a file with this name already exists, check whether it holds the
    # same patch; otherwise pick a numbered filename that isn't taken yet.
    i = 0
    while os.path.exists(full_name):
        file_hash = hashlib.new('sha1', open(full_name).read()).digest()
        if file_hash == patch_hash:
            ui.status("Patch is identical to ", full_name, "; not saving")
            return
        full_name = os.path.join(patch_dir, "%s-%s.patch.%i" % (patch_name, repo_name, i))
        i += 1

    open(full_name, "w").write(patch_data)
    ui.status("Patch saved to ", full_name)


mkpatch_options = [
        ("", "patchdir", '', "patch directory"),
        ]
cmdtable = {
    "mkpatch": (mkpatch, mkpatch_options + mq.cmdtable['^qdiff'][1], "hg mkpatch [OPTION]... [FILE]...")
}
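
For reference, the .hgrc stanza to enable the extension looks like this (adjust the path to wherever you saved the file; the [mkpatch] section is optional and just overrides the default patch directory read by the code above):

[extensions]
mkpatch = ~/.hgext/mkpatch.py

[mkpatch]
# optional; defaults to ~/patches
patchdir = ~/patches

With that in place and a patch applied in your queue, running 'hg mkpatch' from inside a repository will save something like ~/patches/bug468731-buildbot-configs.patch, ready to attach to the bug.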

Maybe he's right?

The Pope has been taking quite a bit of heat in the press over the past few weeks. The latest media frenzy is over recent statements he made regarding the Church's consistent teaching that condoms are not the answer to the AIDS crisis in Africa, or anywhere else in the world. It seems like most people automatically assume that condoms are an important part of the solution to combating AIDS. It makes sense on some level, I suppose; we're reminded constantly of the importance of having "safe sex", and how using a condom is the responsible thing to do. And I'm sure that condoms do reduce the risk of HIV transmission for any one given sexual encounter. But what are the effects over time?

If condoms have a 99% success rate, that's still 1 failure out of 100. I'm not going to bet my life on a 1% chance of failure. Something with a 1% chance of occurring in a single event has a 63% chance of occurring at least once over 100 events (a quick check of this arithmetic is at the end of the post). Even if the prevention rate is 99.9%, there's still a 9.5% chance of transmission over 100 sexual encounters. Now, big giant disclaimer here: I don't know what the accepted statistics are on the effectiveness of condoms in preventing HIV transmission, either in ideal circumstances or in actual usage. I do know that small chances of failure add up quickly over repeated events. So it shouldn't be a surprise to hear that, "We have found no consistent associations between condom use and lower HIV-infection rates, which, 25 years into the pandemic, we should be seeing if this intervention was working." The full article can be read over at the National Review Online.

In two places I know of that have had success in combating AIDS, Uganda and the Philippines, the primary focus was on faithful, monogamous sexual practices. And it makes sense why this works: if people have fewer sexual partners, then the risk of transmission in the general population is reduced. So maybe the Pope is right when he said, "If the soul is lacking, if Africans do not help one another, the scourge cannot be resolved by distributing condoms; quite the contrary, we risk worsening the problem. The solution can only come through a twofold commitment: firstly, the humanization of sexuality, in other words a spiritual and human renewal bringing a new way of behaving towards one another; and secondly, true friendship, above all with those who are suffering, a readiness - even through personal sacrifice - to be present with those who suffer. And these are the factors that help and bring visible progress." (my emphasis)

I think he is. Thanks to Mulier Fortis for the link to the National Review article.
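
For anyone who wants to check the arithmetic above, it's the standard "at least one occurrence in n independent trials" calculation:

# Probability of at least one failure over n independent events,
# each with per-event failure probability p.
def at_least_once(p, n):
    return 1 - (1 - p) ** n

print(at_least_once(0.01, 100))    # ~0.634, i.e. about 63%
print(at_least_once(0.001, 100))   # ~0.095, i.e. about 9.5%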