Introduction to history deltas

Martin Pool mbp at sourcefrog.net
Thu Dec 8 06:52:54 GMT 2005


On  7 Dec 2005, Robert Collins <robertc at robertcollins.net> wrote:
> One thing that occurs to me is that tracking a heavily merged-into
> branch affects the crossover point - it will accrue relatively few revisions
> itself, but each of those hauls in 20 or 30 or more revisions. This
> leads to only needing 6 or 7 commits on the 'mainline' before pulling
> from multiple knits is more efficient (in the bzr.dev case).

Of course a branch which accumulates many large merges is also likely to
have changes to many files.  

The point most in favor of grouping by revision for remote access is
that, when fetching, we never need to check an existing file for
updates or download just part of a file.
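
As a rough illustration of that, compare what a pull over a dumb
transport has to do under the two layouts.  The transport methods,
directory names and helpers below are invented for the sketch, not
bzr's real API:

def pull_revision_grouped(transport, local_revs):
    # Hypothetical layout: one immutable file per revision group under
    # 'revisions/'.  A pull just fetches whole files it doesn't have
    # yet; nothing already downloaded ever needs re-checking.
    for name in transport.list_dir('revisions'):
        if name not in local_revs:
            yield name, transport.get_bytes('revisions/' + name)

def pull_per_file(transport, local_sizes):
    # Per-file layout: every stored file has to be probed for growth,
    # and only the new tail is wanted, which means ranged reads.
    for name in transport.list_dir('knits'):
        remote_size = transport.stat('knits/' + name).st_size
        have = local_sizes.get(name, 0)
        if remote_size > have:
            yield name, transport.get_bytes_range('knits/' + name,
                                                  have, remote_size)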

> > ..That suggests that it's an advantage to optimize for
> > single file access, locally.
> 
> And I agree with this 100%.

Yes, as Aaron said previously, it is very common to need the previous
text of just a few modified files for bzr diff, or to fetch particular
previous inventories.
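
As a sketch of why single-file access dominates locally: a diff of the
working tree needs the stored text only for the files that actually
changed, so the whole operation is a handful of per-file lookups.  The
helper names below are invented for illustration, not bzrlib API:

import sys
import difflib

def show_diff(tree, basis_store):
    # Sketch only: modified_files(), read_lines() and get_lines() are
    # made-up helpers.  The point is that the number of store reads is
    # proportional to the number of modified files, not to the size of
    # the tree or the length of its history.
    for path in tree.modified_files():            # usually a few paths
        old_lines = basis_store.get_lines(path)   # one per-file lookup
        new_lines = tree.read_lines(path)
        sys.stdout.writelines(difflib.unified_diff(
            old_lines, new_lines, 'a/' + path, 'b/' + path))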

> My summary is this:
> We should trade off the actual data storage and transmission
> requirements so that:
> * local operations are very fast
> * data transmitted for push and pull are proportional to changes, not to
> total history size.

> We should treat latency optimisation as a problem orthogonal to those.
> There are many solutions:
>  * smart servers that stream data
>  * async behaviour in the core library
>  * Dedicated optimised routines for hot spots that are async
> internally but encapsulate that asynchrony.

Yes, I agree with that too.
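
To make the "proportional to changes" point concrete, the core of a
push or pull is just a graph walk from the source tip that stops at
anything the target already has; only the revisions found that way
need to be sent.  A rough sketch, with made-up data structures rather
than bzrlib's:

def missing_revisions(graph, target_has, tip):
    # graph maps revision-id -> list of parent revision-ids, and
    # target_has is the set of revision-ids the target already stores;
    # both are illustrative stand-ins.
    to_send = set()
    pending = [tip]
    while pending:
        rev_id = pending.pop()
        if rev_id in target_has or rev_id in to_send:
            continue
        to_send.add(rev_id)
        pending.extend(graph.get(rev_id, ()))
    return to_send

Everything outside that set stays put, so the transfer grows with the
new work rather than with total history size; making it arrive quickly
is then the separate latency problem listed above.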

-- 
Martin