BDFL decision of Python's DVCS

Andrew Bennetts andrew.bennetts at canonical.com
Wed Apr 1 08:20:52 BST 2009


Matthew D. Fuller wrote:
[...]
> Let's see what committer-stats has to say about bzr.dev.  info -v says
> there are 23k and change revs in the repository.  How do they divide
> up?
> 
>     3780 Aaron Bentley
> 
> Employed by Canonical.

Actually, he wasn't employed while writing many of those revisions.  And
many of those revisions are probably some bzrtools ancestry that was merged
in when cdiff or shelve or something migrated to the core, which perhaps
explains why he's top of the list.

(I see you touch on this later in your message, but I thought I should
mention it explicitly.)

[...]
>     2794 Canonical.com Patch Queue Manager
> 
> Employed by Canonical (and making scandalously low wages, I wager ;).

Indeed... this committer is a frequent and vocal complainer, rejecting
people's patches with alarming regularity.  Perhaps we should up the wages
;)

>      941 Vincent Ladeuil
> 
> Ah, finally somebody not so employed.  Responsible for slightly under
> a quarter the revs of the #1 contributor by volume, and somewhat over
> half that of the 5th place Canonical employee.

Actually, he is employed by Canonical now...

[...]
> I'm well aware that many of those revs from many of those people date
> from times when they WEREN'T Canonical employees.  But still;
> perception trumps reality, even when the numbers DON'T tend to support
> that perception.

Right.  As you say, perception trumps reality, but we can try to fix
perceptions...

IMO, the story here is actually “Canonical is hiring — if you do good work
on a project we care about, we'd love to have you do more of that”, which
strikes me as a positive!

[...]
> see which is larger.  I'm not claiming performance is unimportant
> (quite the contrary), but performance is *VERY* easy to demonstrate
> incontrovertibly.  Usability isn't.

Well, partly.  Benchmarks are only accurate insofar as they reflect
operations that you care about.  It's easy to run a misleading
benchmark when there are many different facets to performance (do you care
about "bzr st" time? "bzr ci" time on an initial commit?  What about "bzr
ci" of one file in a big tree?  What about fetching deep, branch-heavy
ancestry?  What if the direct analogy of a workflow from another tool
performs worse, but there's an alternative workflow that performs better and
may suit your situation better anyway?).
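To make that concrete, here's a minimal sketch of the kind of harness you'd
need to benchmark those facets separately.  It just times arbitrary shell
commands over a few runs and takes the best; the command strings are
stand-ins (substitute real "bzr st" / "bzr ci" invocations against a branch
of your own), not anything bzr ships:

```python
# Sketch of a per-facet benchmark harness: times arbitrary shell commands
# (e.g. "bzr st", "bzr ci") over several runs and reports the fastest.
# The commands below are harmless stand-ins, not real bzr invocations.
import subprocess
import time

def best_time(cmd, runs=3):
    """Run `cmd` `runs` times; return the fastest wall-clock time in seconds."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, shell=True, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        times.append(time.perf_counter() - start)
    return min(times)

# Each facet needs its own measurement; a single "tool X is fast" number
# hides exactly the differences described above.
facets = {
    "status in large tree": "true",   # stand-in for: bzr st
    "commit one file": "true",        # stand-in for: bzr ci FILE -m MSG
    "fetch deep ancestry": "true",    # stand-in for: bzr branch URL
}
for name, cmd in facets.items():
    print("%s: %.3fs" % (name, best_time(cmd)))
```

The point isn't the harness itself but that each row is a separate claim: a
tool can win on one facet and lose badly on another, which is how misleading
single-number comparisons get made.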

So I agree it's easier to produce a benchmark than a UI comparison, but I'd
say it's still quite hard to do well when tools are substantially different
in how they work and what they support.  Especially when every candidate is
a moving target!

> bzr is thus in the quite unenviable position of having its
> disadvantages easily demonstrated, and its advantages difficult to
> convey.  It doesn't make them imaginary, but in the PR sweeps it's the
> next best thing.

I agree with this conclusion, mostly.  Certainly we've had some easily
demonstrated performance shortcomings that other tools don't have, and
that's been a real, and justifiable, barrier to adoption for many.

Happily these relative shortcomings are rapidly diminishing (I'm particularly
thinking of recent network changes and brisbane-core).  It'd be lovely if we
had fixed them 18 months ago and thus gained many more adopters, but I think
we'll still do well in the long run.

> [0] This is seriously *NOT* intended to denigrate ANYBODY.  Absolutely
>     vital and truly wonderful stuff can and does come from people
>     responsible for only a few dozen revs out of a couple dozen
>     thousand.  Quantity isn't a substitute for quality; but it's still
>     quantity.

Yeah.  And lines changed per person may give a different, but still not
actually useful, impression.  It's hard!  (Let's go shopping...)

[...]
>     This also purposefully ignores very important contributions to the
>     bzr world like writing plugins or documentation or problem reports
>     or doing support.  I'm talking about just the people who write the
>     code that gets committed to bzr.dev.  It's unfortunate perhaps
>     that that metric gets more weight for figuring "the people
>     involved with X" than it really merits relative to non-core-code
>     contributions, but perception looks there.

Right.  Measuring a community or contributions in general is a pretty
difficult problem.

-Andrew.

More information about the bazaar mailing list