brainstorming for UDS-N - Performance
Kees Cook
kees.cook at canonical.com
Mon Oct 4 19:25:29 BST 2010
Hi,
On Sun, Oct 03, 2010 at 09:48:34AM +1300, Robert Collins wrote:
> On Sat, Oct 2, 2010 at 10:19 AM, Kees Cook <kees.cook at canonical.com> wrote:
> > In a test-driven development style, it really seems like these measurements
> > must be defined and automated before work on performance can be done. The
> > trouble is that the performance work is rarely being done in the same team
> > that will feel the impact, so it's non-trivial to understand the effect on
> > another team's performance numbers.
>
> The TDD loop is:
> * think of something the code should/shouldn't do that it doesn't/does do
> * Add a failing test for that
> * Make the change
That seems too narrow to me. If I used this for kernel security features,
I could appear successful when I'm not:
* the kernel should not access userspace memory except through the from/to_user helpers.
* write a trivial module that dereferences a userspace pointer (sketched below).
* add TLB-flushing, user/kernel-separated page table support to the kernel.
* run the test module and win!
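For concreteness, here is a minimal sketch of what such a test module could
look like. This is my illustration, not an actual module from our tree; the
names and the debugfs entry are made up. The shape is: expose a write hook
and deliberately touch the userspace buffer directly instead of going
through copy_from_user().

/*
 * bad_deref.c -- hypothetical test module: deliberately dereferences a
 * userspace pointer directly, so a kernel with user/kernel page table
 * separation should fault when it is poked.
 */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/debugfs.h>

static struct dentry *dir;

/*
 * 'buf' is a userspace pointer; a well-behaved driver would use
 * copy_from_user() here.  We dereference it directly instead.
 */
static ssize_t bad_write(struct file *file, const char __user *buf,
			 size_t count, loff_t *ppos)
{
	char first = *(const char __force *)buf;	/* the deliberate violation */

	pr_info("bad_deref: read byte 0x%02x straight from userspace\n",
		(unsigned char)first);
	return count;
}

static const struct file_operations bad_fops = {
	.owner = THIS_MODULE,
	.write = bad_write,
};

static int __init bad_deref_init(void)
{
	dir = debugfs_create_dir("bad_deref", NULL);
	debugfs_create_file("poke", 0200, dir, NULL, &bad_fops);
	return 0;
}

static void __exit bad_deref_exit(void)
{
	debugfs_remove_recursive(dir);
}

module_init(bad_deref_init);
module_exit(bad_deref_exit);
MODULE_LICENSE("GPL");

With the page table separation in place, "echo x > /sys/kernel/debug/bad_deref/poke"
should fault instead of quietly succeeding.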
> You can certainly start working on performance without predefining
> your measurements and without automation.
Right, but I think the real approach to this would be:
* the kernel should not access userspace memory except through the from/to_user helpers.
* write a trivial module that dereferences a userspace pointer.
* collect performance numbers on the current important workloads (a sketch of one such measurement follows this list).
* add TLB-flushing, user/kernel-separated page table support to the kernel.
* re-collect performance numbers on the same workloads.
* win, but also show the performance difference.
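As a sketch of the kind of measurement I mean for the syscall-heavy end of
things (a made-up micro-benchmark of mine, not one of the "important
workloads" above), something like this, run on the same hardware before and
after the kernel change, shows the per-entry cost the extra TLB flushes add:

/*
 * syscall_lat.c -- hypothetical micro-benchmark: times bare getpid()
 * syscalls, since a TLB-flushing user/kernel page table split is paid on
 * every kernel entry/exit.
 * Build: gcc -O2 -o syscall_lat syscall_lat.c -lrt
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

#define ITERATIONS 1000000L

int main(void)
{
	struct timespec start, end;
	double ns;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < ITERATIONS; i++)
		syscall(SYS_getpid);	/* bypass glibc's cached getpid() */
	clock_gettime(CLOCK_MONOTONIC, &end);

	ns = (end.tv_sec - start.tv_sec) * 1e9 +
	     (end.tv_nsec - start.tv_nsec);
	printf("%.1f ns per syscall round trip\n", ns / ITERATIONS);
	return 0;
}

The real workloads matter far more than this one number, of course, but
having even something this small runnable up front makes the before/after
comparison cheap to produce.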
> I can certainly see a comprehensive benchmark for the system being of
> use to you, but have you considered looking for apps that exercise
> $changedcodepath and running their upstream performance metrics,
> whatever they are, first ?
This is the basic problem: _everything_ exercises the changed code path.
The things I poke tend to be compiler output, libc routines, memory access
at the kernel level, and so on. Since it affects all applications,
identifying "upstream performance metrics" is the bulk of the work, and it
tends to be very sensitive to each team's preferences, hence my email. :)
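To give a flavour of what a libc-level metric can reduce to (again a
hypothetical example of mine, not any team's actual benchmark), bulk
memcpy() throughput is the sort of number that moves when compiler
hardening flags or libc routines change:

/*
 * memcpy_bw.c -- hypothetical micro-benchmark: bulk memcpy() throughput.
 * Build: gcc -O2 -o memcpy_bw memcpy_bw.c -lrt
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE (16 * 1024 * 1024)	/* 16 MiB */
#define PASSES   64

int main(void)
{
	char *src = malloc(BUF_SIZE);
	char *dst = malloc(BUF_SIZE);
	struct timespec start, end;
	double secs;
	int i;

	if (!src || !dst)
		return 1;
	memset(src, 0xaa, BUF_SIZE);

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < PASSES; i++)
		memcpy(dst, src, BUF_SIZE);
	clock_gettime(CLOCK_MONOTONIC, &end);

	/* read the destination so the copies are not optimized away */
	if (dst[BUF_SIZE - 1] != (char)0xaa)
		fprintf(stderr, "unexpected data\n");

	secs = (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("%.2f MiB/s\n", (double)BUF_SIZE * PASSES / (1 << 20) / secs);
	return 0;
}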
Having those tests runnable up front saves us from developing something and
then having people scream months later when they finally notice a
performance hit. Moving the review/discussion/whatever as close to
development time as possible is what matters, I think.
-Kees
--
Kees Cook
Ubuntu Security Team