husks vs parts

John Lenton john.lenton at canonical.com
Sun Oct 4 15:17:27 UTC 2015


A few days ago I mentioned in passing that husks (that I was and am
working on) offered a performance improvement over parts on some
codepaths, and mvo seemed to want to know more.

So today I sat down and benchmarked the two approaches in two
scenarios, one for when you need to get a map of all installed snaps;
one for when you need to get a list of the names of all active snaps.
I faked[1] a system with 6601 installed and active snaps (I ran out of
words in my dictionary after that). In a real system with fewer snaps
the difference will not be as noteworthy unless the system is slow,
but I didn't feel like doing this on the bbb because the bbb was waaay
over there on the other side of the room and my bed was nice and warm.
Also, disk I/O is not a consideration at all in this benchmark, though
on a real system it would likely become an issue before the mere
parsing of the yaml came to dominate the results, as it does here.
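The real benchmark code is in [2]; what follows is only a sketch of
its shape, a standard go test benchmark, with loadAllSnaps and
activeSnapNames as made-up stand-ins for the husk and part loading
paths (the real thing runs each scenario once with husks and once
with parts):

    package bnch

    import "testing"

    // loadAllSnaps stands in for "build a map of every installed
    // snap": it would parse the package.yaml of each installed snap.
    func loadAllSnaps() map[string]interface{} {
        return map[string]interface{}{}
    }

    // activeSnapNames stands in for "list the names of the active
    // snaps": it would read only as much as is needed to name them.
    func activeSnapNames() []string {
        return nil
    }

    func BenchmarkAll(b *testing.B) {
        for i := 0; i < b.N; i++ {
            loadAllSnaps()
        }
    }

    func BenchmarkActive(b *testing.B) {
        for i := 0; i < b.N; i++ {
            activeSnapNames()
        }
    }

It is run with go test -bench=. -benchmem, as shown further down.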

Running the benchmark[2], this is what I got:

PASS
BenchmarkHusk-4             1 12867238402 ns/op  844836712 B/op 12222865 allocs/op
BenchmarkPart-4             1 13366845150 ns/op 1023884240 B/op 13068112 allocs/op
BenchmarkActiveHusk-4       2   641433537 ns/op   81248688 B/op   291370 allocs/op
BenchmarkActivePart-4       1 12822966438 ns/op  918116504 B/op 12574341 allocs/op
ok   _/tmp/bnch 80.138s

that's a less-than-10% improvement in time for the "load everything"
code (husks come in at 96% ns/op, 83% B/op, 94% alloc/op of parts); in
this case, once a husk is found it is loaded into a part, so it's
equivalent to the part code except at the beginning, and it's not
particularly surprising they are very close.

The "active" test, on the other hand, runs in 5% ns/op, 8% b/op, and
2% alloc/op. This is code that emulates what needs to happen to check
for updates, for example.

Anyway, I profiled the code after this and found yet another case
where we were compiling a regexp in a loop (in the whitelist code,
fwiw). Fixed that (the fix will be in a branch soonish; the pattern is
sketched after the numbers below), and got

$ GOPATH=~/canonical/snappy go test -bench=. -benchmem
PASS
BenchmarkHusk-4             1 8172991553 ns/op 477554688 B/op 5949169 allocs/op
BenchmarkPart-4             1 9096755188 ns/op 656670872 B/op 6794801 allocs/op
BenchmarkActiveHusk-4       2 575280486 ns/op 81250832 B/op  291350 allocs/op
BenchmarkActivePart-4       1 8238703830 ns/op 550893248 B/op 6300937 allocs/op
ok   _/tmp/bnch 53.857s

that's knocked about ⅓ of the time off the yaml loading code, leaving
husks at 90%/73%/88% of parts for the All code, and at 7%/15%/5% for
the ActiveNames code.
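For the curious, the regexp-in-a-loop issue is the usual one:
regexp.MustCompile being called on every pass through a loop rather
than once up front. A minimal sketch of the before/after (hypothetical
names and pattern, not the actual whitelist code):

    package whitelist

    import "regexp"

    // Slow: recompiles the same pattern on every iteration.
    func allValidSlow(names []string) bool {
        for _, n := range names {
            if !regexp.MustCompile(`^[a-z0-9][a-z0-9._-]*$`).MatchString(n) {
                return false
            }
        }
        return true
    }

    // Fast: compile the pattern once, at package init, and reuse it.
    var validName = regexp.MustCompile(`^[a-z0-9][a-z0-9._-]*$`)

    func allValid(names []string) bool {
        for _, n := range names {
            if !validName.MatchString(n) {
                return false
            }
        }
        return true
    }

Hoisting the compile out of the loop is what accounts for the drop
between the two sets of numbers above.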

HTH,

1. fake packages made with: https://gist.github.com/chipaca/4da751672cb1546aa214
2. benchmark code at https://gist.github.com/chipaca/579fdb70cf1293cc5309


