As part of one of our goals in Release Engineering this quarter, I'm investigating whether we can automatically detect variance in Talos performance data. Automatically detecting these changes in performance results would be a great help to developers and tree sheriffs. Imagine if the Tinderbox tree could be made to burn when a performance regression was detected! There are lots of possibilities if we can get this working: regressions could cause the tree to burn, firebot could spam #developers with information, try-talos data could be compared more easily to the baseline data, or we could automatically back out changes that cause regressions! :P

This is also exciting because it allows us to consider moving towards a pool-o'-slaves model for the Talos machines, just like we have for builds and unittests right now. Having Talos use a pool-o'-slaves allows us to scale to additional project / release branches much more quickly, and allows us to be more flexible in allocating machines across branches.

I've spent some time over the past few weeks playing around with data from the graph server, bugging Johnathan, and having fun with flot, and I think I've come up with a workable solution.
How it works

I grab all the data for a test/branch/platform combination and merge it into a single data series, ordered by buildid (the closest thing we've got right now to being able to sort the data in the same order in which changes landed). Individual data points are classified into one of four buckets (see the sketch after this list):
- "Good" data. We think these data points are within a certain tolerance of the expected value. Determining what the expected value is a bit tricky, so read on!
- "Spikes". These data points are outside of the specified tolerance, but don't seem to be part of an ongoing problem (yet). Spikes can be caused by having the tolerance set too low, random machine voodoo, or not having enough data to make a definitive call as to if it's a code regression or machine problem.
- "Regressions". When 3 or more data points are outside of the tolerance in the same direction, we assume this is due to a problem with the code, and flag it as a regression.
- "Machine problem". When the last 2 data points from the same machine have been outside of the tolerance, then we assume this is due to a problem with the machine.
Initial Results

As an example, here's how data from Linux Tp3 tests on the Mozilla 1.9.2 branch is categorized. Or, if you have a canvas-enabled browser, check out this interactive graph. A window size of 20 and a standard deviation threshold of 2.5 were used for this data set. The green line represents all the good data. The orange line (which is mostly hidden by the green line) represents the raw data from the 3 Linux machines running that test. The orange circles represent spikes in the data, red circles represent regressions, and blue circles represent possible machine problems. For the most part we can ignore the spikes; too many spikes probably means we need to tighten our tolerance a bit. There are two periods to take notice of on this graph (and a sketch of the windowed tolerance check after the list):
- Jan 12, around noon, a regression was detected. Two orange spike circles are followed by three red regression circles. Recall that we wait for the 3rd data point to confirm an actual regression.
- Jan 30, around noon, a similar case. Two orange spike circles, followed by regression points.
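Since I mentioned a window size of 20 and a threshold of 2.5 standard deviations above, here's a hedged sketch of what such a windowed tolerance check could look like. The statistics the real grokker uses may differ, and hooking this into the earlier `classify` sketch would mean carrying the running history along with each point.

```python
import math

WINDOW = 20      # how many previous points define the expected value
THRESHOLD = 2.5  # this many standard deviations counts as out of tolerance

def deviation(history, value):
    """Return 0 if value is within THRESHOLD standard deviations of the
    mean of the last WINDOW values in history, else +1/-1 for direction."""
    window = history[-WINDOW:]
    if len(window) < 2:
        return 0  # not enough data to make a call yet
    mean = sum(window) / float(len(window))
    variance = sum((v - mean) ** 2 for v in window) / (len(window) - 1)
    stddev = math.sqrt(variance)
    if abs(value - mean) <= THRESHOLD * stddev:
        return 0
    return 1 if value > mean else -1
```

Whether the expected value should be computed from only the "good" points or from everything in the window is part of why determining it is tricky; this sketch naively uses everything.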
Regression: Tp3 decrease from 417.974 to 235.778 (43.59%) on Fri Jan 30 11:34:00 2009. Linux 1.9.2 build 20090130083434
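The percentage in that message matches the change relative to the old value; a quick check (variable names are mine):

```python
# (417.974 - 235.778) / 417.974 * 100 is roughly 43.59
old, new = 417.974, 235.778
print("%.2f%%" % ((old - new) / old * 100))  # prints 43.59%
```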
My code can be found at http://hg.mozilla.org/users/catlee_mozilla.com/talos-grokker.
Patches or comments welcome!