2GB limit

Maritza Mendez martitzam at gmail.com
Fri Oct 8 03:18:02 BST 2010


Sorry... hit send accidentally.  At the risk of asking a dumb question or two...

1. Martin... your note seems to suggest the problem is specific to
Windows.  Is that right, or is it really just a matter of 32-bit
Python (and the compiled pyx extensions)?

2. Would running 64-bit Python with non-compiled bzr (slow, but it
works) be a sufficient proof-of-principle solution?
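
One quick way to check whether a given interpreter is 32- or 64-bit
is to look at its pointer width.  A rough stdlib sketch (Python 2
syntax, to match bzr's era); nothing here is bzr-specific:

    # Print the pointer width of the running interpreter.  A 32-bit
    # process on Windows gets at most 2GB of user address space, no
    # matter how much RAM the machine has.
    import struct
    import sys

    print sys.version
    print '%d-bit' % (struct.calcsize('P') * 8)

Running this under the same python.exe that bzr uses shows which
limit applies.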

Thanks
~M


On 10/7/10, Martin Pool <mbp at canonical.com> wrote:
> On 3 October 2010 06:49, Maritza Mendez <martitzam at gmail.com> wrote:
>
> In some ways the simplest fix for this is for someone to produce and
> publish a 64-bit Windows build of bzr.  (There was a thread recently.)
> The most immediate problem is arguably not that bzr holds the whole
> thing in memory, but that a 32-bit process can't use all of your
> machine's virtual memory to do it.  Even laptops come with more than
> 2GB these days.
>
>> 1. How soon is bug 109114 [https://bugs.launchpad.net/bzr/+bug/109114]
>> likely to get attention from Canonical?
>
> It's not something our internal stakeholders are hitting so far,
> because we just don't tend to deal with trees containing single
> enormous files.  We do deal with enormous trees, and John and others
> have done some work on memory usage there, with good results.
>
> That said, we do fix a lot of other bugs that aren't directly on our
> plan, including performance issues, and we are even more likely to
> help someone else who wants to start working on this.  Karl recently
> put up a patch for http://pad.lv/551391 to report what we're spending
> memory on when we run out, which is a step towards that.
>
>> 3. How hard would it be to add a bzr ignore rule based on file size
>> instead of, or combined with, filename patterns?  My biggest fear is
>> that someone (like me) will accidentally try to commit a file which
>> we know bzr cannot handle and will then have to undo the damage to
>> the repo, hopefully before any more commits are made.  Being able to
>> block commits of files exceeding a configurable size by default would
>> help protect the integrity of the repo and help me keep my developers
>> productive.  I realize this is dangerous (missing content) but not as
>> dangerous as downtime.  As Philip Peitsch mentioned yesterday (what a
>> coincidence!), explaining downtime to management is not pleasing.
>
> I think you could use a precommit hook to check this, and there is a
> bug asking for a builtin feature to warn about committing large files.
> I think that would be quite useful, even if we had no size limits at
> all.
>
> --
> Martin
>
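
A minimal sketch of the precommit-hook approach Martin describes,
written as a small bzr plugin.  It relies on bzrlib's documented
pre_commit hook signature; the file name, the 100 MB threshold, and
the tree_delta/get_file_size usage are assumptions about the API of
the era rather than anything spelled out in the thread:

    # Sketch of a bzr plugin that aborts commits containing oversized
    # files.  Drop it in the plugins directory (e.g.
    # ~/.bazaar/plugins/blocklarge.py -- the name is illustrative).
    from bzrlib import branch, errors

    MAX_BYTES = 100 * 1024 * 1024  # configurable size limit

    def block_large_files(local, master, old_revno, old_revid,
                          future_revno, future_revid, tree_delta,
                          future_tree):
        # tree_delta describes what this commit would change; collect
        # every added or modified file and check it against the limit.
        suspects = [(path, file_id) for path, file_id, kind
                    in tree_delta.added if kind == 'file']
        suspects += [(path, file_id) for path, file_id, kind, _, _
                     in tree_delta.modified if kind == 'file']
        for path, file_id in suspects:
            size = future_tree.get_file_size(file_id)
            if size is not None and size > MAX_BYTES:
                # Raising here aborts the commit before it completes.
                raise errors.BzrError(
                    'refusing to commit %s: %d bytes exceeds the '
                    '%d-byte limit' % (path, size, MAX_BYTES))

    branch.Branch.hooks.install_named_hook(
        'pre_commit', block_large_files, 'Block oversized files')

A plugin like this has to be installed on each developer's machine,
which is exactly why the builtin warning Martin mentions would be the
nicer long-term answer.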


