brainstorming for UDS-N - Performance

Robert Collins robertc at robertcollins.net
Sat Oct 2 21:48:34 BST 2010


On Sat, Oct 2, 2010 at 10:19 AM, Kees Cook <kees.cook at canonical.com> wrote:
> On Fri, Oct 01, 2010 at 10:25:07AM +0100, Matthew Paul Thomas wrote:
>> Have you asked them why that is? Maybe they don't know how to automate
>> the measurement, where to host it, or who to tell about it.
>
> In discussions at the last UDS, it seems that most teams could not agree
> on what would be valuable to measure. For the teams that did have things
> they wanted to measure (e.g. a specific firefox rendering speed test),
> no one stepped up to automate it.

I'd expect every team to have different concerns here - even on simple
things like Python. For instance, the server team may care about Python
bytecode execution performance, while the desktop team cares more about
interpreter load time. [The two are coupled; this is just intended as an
illustrative example.]

> In a test-driven development style, it really seems like these measurements
> must be defined and automated before work on performance can be done. The
> trouble is that the performance work is rarely being done in the same team
> that will feel the impact, so it's non-trivial to understand the effect on
> another team's performance numbers.

The TDD loop is:
* think of something the code should/shouldn't do that it doesn't/does do
* add a failing test for that
* make the change
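Applied to a performance concern, step two might look something like the
following (the workload and the 50ms budget are made up purely for
illustration):

import time
import unittest


def expensive_operation():
    # Stand-in for whatever code path is being optimised; invented for
    # illustration only.
    return sum(i * i for i in range(200000))


class TestPerformanceBudget(unittest.TestCase):
    # The 50ms budget is arbitrary; the point is that the measurement
    # exists as a failing test *before* the optimisation work starts.
    def test_expensive_operation_stays_under_budget(self):
        start = time.time()
        expensive_operation()
        elapsed = time.time() - start
        self.assertTrue(elapsed < 0.050,
                        "took %.3fs, budget is 50ms" % elapsed)


if __name__ == "__main__":
    unittest.main()

The test fails today, the change makes it pass, and from then on it
guards against regressions.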

You can certainly start working on performance without predefining
your measurements and without automation.

I suggest that we *don't block on those things*. Iterate and add
automation as we go: the measurements being gathered will grow over
time, as will the sophistication of the measurements, the testable
environments and the tests themselves.
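A first iteration of that automation could be as small as a script that
runs one measurement and appends it to a history file; the file name,
workload and format below are placeholders to show the shape of it:

#!/usr/bin/env python
# Minimal sketch of "add automation as we go": run one measurement and
# append it to a CSV so a history builds up over time. The workload is a
# placeholder; each team would substitute whatever it actually cares about.
import csv
import subprocess
import sys
import time


def measure(cmd):
    """Wall-clock one run of cmd and return the elapsed seconds."""
    start = time.time()
    subprocess.check_call(cmd)
    return time.time() - start


def record(results_file, name, seconds):
    """Append a timestamped measurement; the file grows with each run."""
    with open(results_file, "a") as fh:
        csv.writer(fh).writerow(
            [time.strftime("%Y-%m-%d %H:%M:%S"), name, "%.4f" % seconds])


if __name__ == "__main__":
    # Placeholder workload: interpreter start-up time.
    elapsed = measure([sys.executable, "-c", "pass"])
    record("perf-history.csv", "python-startup", elapsed)
    print("recorded %.4fs" % elapsed)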

> Oddly, I want these things not to measure how awesome upcoming performance
> improvements are, but to justify security/performance trade-offs. :P
> "It's only 10% slower, but you'll never have this class of high-risk
> security vulnerability again!" :)

I can certainly see a comprehensive benchmark for the system being of
use to you, but have you considered looking for apps that exercise
$changedcodepath and running their upstream performance metrics,
whatever they are, first?

-Rob


