Creating a roadmap for improving bzr's performance.

Robert Collins robertc at robertcollins.net
Mon May 7 09:26:45 BST 2007


I write this because I realised we don't have a clear roadmap for
carrying our performance improvements across the code base. During the
last release this left me at a bit of a loose end from time to time -
we have quite a significant number of contributors today, and it's
important that folk who want to contribute be able to find a piece of
the puzzle and start putting it together. There's nothing particularly
radical in here; mainly I'm trying to ensure that what *I* think about
all this is somewhere outside my skull, and hopefully that's useful to
us all.

Performance is still a significant problem for us, both for our users
and for folk evaluating bzr. I'm here at the Ubuntu Developer Summit at
the moment in Sevilla, and bzr's performance has come up a number of
times. Particularly of interest to me is the publicity that the Mozilla
benchmarking has generated: even though 0.15 is hugely faster than 0.14
was, the common meme is that bzr is slow slow slow.

The 0.15 and 0.16 releases have both made some very important
performance strides, but at most that buys us some breathing room -
other VCSes are playing catch-up on our usability whilst we are playing
catch-up on performance.

At the moment we are working on many micro-optimisations, but I don't
think we're likely to get to fantastic levels of performance without
getting our API and data storage stack to match the operations we want
to be fast: and this requires end-to-end optimisation, removal of
unnecessary friction in the API stack, and removal of, or other
accommodation for, badly-scaling operations. Aaron's fast tree-compare
operation is an example of a tuned low-friction API - but we need more
of these, and we need to make better use of them.

What should be in the roadmap?
------------------------------

A good roadmap provides a place for contributors to look for tasks, it
gives users a sense of when we will fix the things that are affecting
them, and it allows us all to agree about where we are headed. So the
roadmap should contain enough detail to let all this happen.

I think that it needs to contain the analysis work which is required, a
list of the use cases to be optimised, the disk changes required, and
the broad sense of the API changes required. It also needs to list the
inter-dependencies between these things: we should aim for a large
surface area of 'ready to be worked on' items, which makes it easy to
improve performance without having to work in lockstep with other
developers.

Clearly the analysis step is an immediate bottleneck - we cannot tell if
an optimisation for use case A is a pessimisation for use case B until
we have analysed both A and B. I propose that we complete the analysis
of, say, a dozen core use cases end to end during the upcoming sprint in
London. We should then be able to fork() for much of the detailed design
work and regroup with disk and API changes shortly thereafter.

I suspect that clarity of layering will make a big difference to
developer parallelism, so another proposal I have is for us to look at
the APIs for Branch and Repository in London in the light of what we
have learnt over the last few years.

What should the final system look like, and how does it differ from what we have today?
---------------------------------------------------------------------------------------

One of the things I like the most about bzr is its rich library API, and
I've heard the same from numerous other folk. So anything that would
sacrifice it should be considered a last resort.

Similarly, our relatively excellent cross-platform support is critical
for projects that are themselves cross-platform, and that's a
considerable number these days.

And of course, our focus on doing the right thing is what differentiates
us from some of the other VCSes, so we should be focusing on doing the
right thing quickly :).

What we have today though has grown organically in response to us
identifying bottlenecks over several iterations of back end storage,
branch metadata and the local tree representation. I think we are
largely past that and able to describe the ideal characteristics of the
major actors in the system - primarily Tree, Branch, Repository - based
on what we have learnt.

What use cases?
---------------

My list of use cases is probably not complete - it's just the ones I
happen to see a lot :). I think each should be analysed comprehensively,
so we don't need to say 'push over the network' - it's implied in the
scaling analysis that both semantic and file operation latency will be
considered.

These use cases are ordered roughly by ease of benchmarking and
frequency of use. This ordering means that when people compare bzr they
will hit the use cases we have optimised, and that as we speed things up
our existing users will see the things they do most often optimised
first.

status tree
status subtree
commit
commit to a bound branch
incremental push/pull
log
log path
add
initial push or pull [both to a new repo and an existing repo with
   different data in it]
diff tree
diff subtree

revert tree
revert subtree
merge from a branch
merge from a bundle
annotate
create a bundle against a branch
uncommit
missing
update
cbranch
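Since the ordering above leans on ease of benchmarking, here is a
minimal sketch of a timing harness (the helper name and the repeat
count are my own, not anything bzr ships) that takes the best of a few
repeats to damp out noise from other processes and cold caches:

```python
import time


def best_of(fn, repeats=5):
    """Run `fn` `repeats` times; return the fastest wall-clock duration
    in seconds. Taking the minimum rather than the mean filters out
    interference from other processes and cache warm-up effects.
    """
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return min(timings)


# Example: time a trivial operation standing in for e.g. `status tree`.
elapsed = best_of(lambda: sum(range(10000)))
print("fastest run: %.6fs" % elapsed)
```

Something this small is enough to make the per-use-case comparisons in
the list above repeatable across machines and releases.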

How to coordinate?
------------------

I think we should hold regular get-togethers (on IRC) to coordinate on
our progress, because this is a big task and it's a lot easier to start
helping out in an area which is having trouble if we have kept in
contact about each area's progress. This might be weekly or fortnightly
or some such.

We need a shared space to record the results of the analysis and the
roadmap as we go forward. Given that we'll need to update these as new
features are considered, I propose that we use doc/design as a working
space, and as we analyse use cases we include them in there - including
the normal review process for each patch. We also need documentation
about doing performance tuning - not the minutiae, though that is
needed, but about how to effectively choose things to optimise which
will give the best return on time spent - that is what the roadmap
should help with, but this looks to be a large project and I think an
overview will be of great assistance. We want to help everyone that
wishes to contribute to performance to do so effectively.

Finally, it's important to note that coding is not the only contribution
- testing, giving feedback on current performance, and helping with the
analysis are all extremely important tasks too, and we probably want to
have clear markers of where that should be done to encourage such
contributions.

I'm writing a separate email to kickstart the analysis for the roadmap,
hopefully avoiding conflating the discussions about planning in general
vs the details of optimisation analysis. 

-Rob

-- 
GPG key available at: <http://www.robertcollins.net/keys.txt>.