Capturing system and app performance metrics during autopilot test runs

Thomas Voß thomas.voss at
Wed Mar 20 12:40:13 UTC 2013

Hey all,

our current daily quality efforts are focused on functional testing of
the overall (Unity) stack. That is, we execute a large number of
autopilot tests in an automated manner to verify that changes anywhere
in the ecosystem do not break user-facing functionality.

From my perspective, though, we could leverage the existing daily
quality setup even further to record a multitude of metrics describing
the runtime characteristics/behavior of the (Unity) stack. We could rely
on the captured data for in-depth analysis of overall system performance,
or focus on application-specific characteristics, e.g., the average
latency of input event delivery. To this end, we would need a system
that allows us to (remotely) harvest measurements from a multitude of
different sources, and that applications can easily integrate with to
export their specific measurements.
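To make the application-facing side concrete, here is a minimal sketch of what "exporting specific measurements" could look like. All names here are hypothetical and not an existing API: a small in-process registry aggregates recorded samples so that an external harvester could periodically poll a compact snapshot.

```python
import json
import threading
from collections import defaultdict


class MetricsRegistry:
    """Hypothetical in-process metrics registry: the application records
    raw samples, and a (remote) harvester polls an aggregated snapshot."""

    def __init__(self):
        self._lock = threading.Lock()
        self._samples = defaultdict(list)

    def record(self, name, value):
        """Record one sample for the named metric (thread-safe)."""
        with self._lock:
            self._samples[name].append(value)

    def snapshot(self):
        """Aggregate samples to count/mean/max so the wire format stays small."""
        with self._lock:
            return {
                name: {
                    "count": len(vals),
                    "mean": sum(vals) / len(vals),
                    "max": max(vals),
                }
                for name, vals in self._samples.items()
            }


# Example: an application exporting input-event delivery latency (ms).
registry = MetricsRegistry()
for latency_ms in (4.2, 5.1, 3.9):
    registry.record("input.event.delivery_latency_ms", latency_ms)

print(json.dumps(registry.snapshot(), indent=2))
```

In a real setup the snapshot would be exposed over a socket or D-Bus rather than printed, and the harvester would tag each poll with the autopilot test case currently running.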

I'm reaching out to the list to find out whether there has been previous
work on automatically capturing runtime characteristics of the overall
system and of specific applications during full-stack test runs. Second,
I would like to know what technologies are available to implement the
scenario described above. So far, I have been looking at:

  (1.) SGI's Performance CoPilot
  (2.) collectd

Both look promising, but I have a slight preference for Performance
CoPilot as it is more specific to the scenario at hand.
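For collectd specifically, applications can push values through the unixsock plugin's plain-text protocol, which keeps the integration surface to a single formatted line per measurement. A hedged sketch of building one PUTVAL command (the host and plugin names below are made up for illustration; "N" tells collectd to use the current time):

```python
def putval_line(host, plugin, type_, value, interval=10):
    """Format one PUTVAL command for collectd's plain-text protocol,
    as accepted by the unixsock plugin. The identifier follows the
    host/plugin/type scheme; 'N' stands for 'now'."""
    identifier = f"{host}/{plugin}/{type_}"
    return f'PUTVAL "{identifier}" interval={interval} N:{value}'


# Hypothetical example: reporting an input-delivery latency gauge.
line = putval_line("testbed-01", "unity-latency", "gauge-input_delivery", 4.4)
print(line)
```

The resulting line would then be written to collectd's UNIX socket; Performance CoPilot takes a different route, with agents (PMDAs) serving metrics to a central collector on request rather than pushing them.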

Does anyone have previous experience with either of these technologies?



More information about the ubuntu-devel mailing list