Fwd: fetching size increase rapidly...

John Arbash Meinel john at arbash-meinel.com
Sun Jul 25 00:52:29 BST 2010


Michael Andronov wrote:
> Martin -
> 1. Thank you very much for your ' bzr branch log+sftp://...'
> Running above confirmed that the issue was related to abnormal number
> of .pack transmissions.
> 2. After that I ran 'bzr pack' as you suggested earlier.
> It took a while to re-pack,  and that resolved the issue.
> So, the issue is resolved.
> I'm wondering, though, what may have caused those .pack
> transmissions in the first place? Is there a way to trace it?
> Just to avoid the same situation in the future, if possible.
> Thanks everyone for help and support!
> Michael.

a) You're using sftp, which means we only have simple filesystem access
to the remote repository. To get good compression, we group content
together into 'blocks' and compress them. Most groups are about 2-4MB in
size.

However, if you need, say, only 5000 bytes from the middle of a block,
with SFTP you still have to read the whole 2MB block to get them back.
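To illustrate why, here is a minimal sketch (not bzr's actual code) of
the constraint: once many texts are compressed together into one zlib
stream, there is no way to start decompressing at an arbitrary offset,
so a plain-file transport must fetch the whole block:

```python
import zlib

# Hypothetical "block": many texts concatenated and compressed as one
# zlib stream (bzr's real format differs; this is just the principle).
texts = [bytes([i % 256]) * 50_000 for i in range(50)]  # ~2.5MB raw
block = zlib.compress(b"".join(texts))

# To read just texts[25] (bytes in the middle of the block), a dumb
# transport like SFTP must download and decompress the block from the
# start -- a zlib stream cannot be entered mid-way.
raw = zlib.decompress(block)
start = 25 * 50_000
wanted = raw[start:start + 50_000]
assert wanted == texts[25]
```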

If you were using bzr on the remote side (bzr+ssh), then the server
would notice that you were only using a small portion of the block, and
create a new block for you.
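Conceptually, the smart server's advantage looks something like the
following sketch (function name and sizes are made up for illustration):
the expensive full-block decompression happens on the server, where it
costs no network traffic, and only a freshly compressed small block
crosses the wire.

```python
import zlib

def repack_for_client(block: bytes, start: int, length: int) -> bytes:
    """Hypothetical server-side step: extract only the bytes the
    client asked for and re-compress them into a new, small block."""
    raw = zlib.decompress(block)            # server-side, no network cost
    return zlib.compress(raw[start:start + length])

big_block = zlib.compress(b"x" * 2_000_000)
small_block = repack_for_client(big_block, 1_000_000, 5_000)

# The client downloads the small block instead of the whole ~2MB one.
assert len(small_block) < len(big_block)
assert zlib.decompress(small_block) == b"x" * 5_000
```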

b) cache overflow. If you are measuring how much data gets transferred
for a fresh branch (not into a shared repository), you may be running
into issues with cache size and fragmentation of the source.

Because of issues like (a), we do some local caching of blocks as we
read. However, to avoid consuming too much RAM, we limit the cache
size. It is possible to switch between blocks often enough that all of
the useful ones can't fit in the cache at once. (We also try to
re-order and recompress on the fly.) If that were happening, though, I
would expect to see some note of it in ~/.bzr.log.
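The failure mode described above can be sketched with a toy bounded
cache (class name and sizes are hypothetical, not bzr's internals):
when the working set of blocks exceeds the byte limit, the
least-recently-used entries are evicted, so alternating between too
many blocks forces the same data to be re-fetched repeatedly.

```python
from collections import OrderedDict

class BlockCache:
    """Toy LRU cache of decompressed blocks, bounded by total bytes."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.used = 0
        self._cache = OrderedDict()  # block_key -> decompressed bytes

    def get(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)  # mark as recently used
            return self._cache[key]
        return None                       # miss: caller must re-fetch

    def add(self, key, data: bytes):
        self._cache[key] = data
        self.used += len(data)
        while self.used > self.max_bytes:  # evict least-recently-used
            _, old = self._cache.popitem(last=False)
            self.used -= len(old)

cache = BlockCache(max_bytes=100)
cache.add("a", b"x" * 60)
cache.add("b", b"y" * 60)      # pushes total to 120 bytes, evicts "a"
assert cache.get("a") is None  # "a" must be transferred again
assert cache.get("b") == b"y" * 60
```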


