Smart server plans for 1.7: effort tests, single RPC for branch opening, server-side autopacking.

Andrew Bennetts andrew at
Wed Aug 6 23:34:23 BST 2008

John Arbash Meinel wrote:
> I was actually thinking we could do a memory bound test now that we have
> "bzr -Dmemory". My idea would be to create a say 50MB file, and then
> commit it, and ensure that memory doesn't go above 200MB (or whatever is
> reasonable.) And then ratchet down that upper limit as we get rid of
> extra copies. The idea is to use a number large enough that copying the
> text will show up in memory, but is immune to the little allocations for
> everything else we might do.
> Or say, hardlink 10 of these files, etc. I wouldn't want to be abusive
> on disk space for the test, but I think we could really use some amount
> of memory consumption testing.
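John's ceiling check could be sketched roughly like this, outside bzr itself (a sketch only: it assumes the Unix-only `resource` module and Linux semantics, where `ru_maxrss` is reported in kilobytes; the 50MB/200MB numbers are the ones from the quote, not measured values):

```python
import resource

def peak_rss_mb():
    # ru_maxrss is kilobytes on Linux (bytes on macOS; Linux assumed here).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

baseline = peak_rss_mb()
data = b'x' * (50 * 1024 * 1024)   # a 50MB payload, as in the quoted example
copy = bytes(bytearray(data))      # force a genuine second copy of the text
peak = peak_rss_mb()
# The ratchet: two buffers plus overhead should stay under the 200MB budget.
assert peak - baseline < 200, "memory ceiling exceeded"
```

The point of subtracting a baseline is exactly the one John makes: the file is big enough that an extra copy dominates, while the interpreter's own small allocations disappear into the slack.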

Well, if you make a sparse file (“f = open('foo', 'w'); f.seek(50 * 1024 * 1024);
f.write('x')”), then you won't actually consume significant disk space either.
It's not completely portable (but you're already talking about using hardlinks);
on at least Linux and ext3 that doesn't actually allocate 50MB of space.

The unwritten, unallocated bytes are assumed to be 0s.  Obviously that only
helps certain test cases, but it's a start.
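Concretely, the trick looks like this (a sketch: `st_blocks` counts 512-byte units per POSIX, and the actual savings depend on the filesystem supporting sparse files):

```python
import os
import tempfile

# Create a ~50MB sparse file: seek past the end and write a single byte.
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, 'wb') as f:
        f.seek(50 * 1024 * 1024)
        f.write(b'x')
    st = os.stat(path)
    apparent = st.st_size           # 50MB + 1 byte, as reported to readers
    allocated = st.st_blocks * 512  # what the filesystem actually allocated
finally:
    os.remove(path)
```

On ext3 `allocated` stays tiny, because the hole is never backed by real blocks; reads of the unwritten region simply return zero bytes.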

I suppose we could skip hardlinks and sparse files altogether and implement a
custom transport instead, e.g. one that isn't backed by a filesystem at all and
dynamically creates file contents based on name (so fakefiles:///100000 could
generate a 100000-byte file when read, without necessarily buffering 100000
bytes in memory to do so).
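A minimal sketch of the idea (the class and method names here are hypothetical, not the real bzrlib Transport API; a real implementation would subclass bzrlib.transport.Transport):

```python
class FakeSizedFile(object):
    """Read-only file-like object yielding N bytes of 'x' on demand.

    Contents are generated as read() is called, so a chunked reader never
    holds more than one chunk in memory.
    """
    def __init__(self, size):
        self.size = size
        self.pos = 0

    def read(self, n=-1):
        remaining = self.size - self.pos
        if n < 0 or n > remaining:
            n = remaining
        self.pos += n
        return b'x' * n

class FakeFilesTransport(object):
    """Interprets the path of 'fakefiles:///<size>' as a file size."""
    def get(self, relpath):
        return FakeSizedFile(int(relpath))

t = FakeFilesTransport()
f = t.get('100000')
chunk = f.read(4096)              # only 4096 bytes materialized here
total = len(chunk) + len(f.read())
```

A test built on this pays no disk-space cost at all, and its memory cost is bounded by the reader's chunk size rather than the nominal file size.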

> Any thoughts on how to do it tastefully?

Not really; nothing beyond what's written above :)

I agree that it would be valuable.


More information about the bazaar mailing list