bug report

John Arbash Meinel john at arbash-meinel.com
Thu Feb 22 14:05:32 GMT 2007


Andrew Voznytsa wrote:
> John Arbash Meinel wrote:
>> Andrew Voznytsa wrote:
> [...]
>>> It seems that bzr failed while committing media/Test002.mpg, which is
>>> about 240MB (media/Test001.mpg was about 143MB).
>>>
>>> PC spec attached.
>>>
>>> bzr worked with shared repository over sftp (sftp server is localhost).
>>
>> Thanks for the report. Do you know how much memory was in use when it
>> failed?
> 
> bzr ate around 680MB and crashed after some time.
> 
> I set the paging file size to 2GB, but nothing changed.

Well, Windows has a soft limit of around 1.2GB because of how it allocates
memory (it depends on the size of each allocation, how fragmented memory
is, etc.). It has a hard limit of 2GB per process unless you do a lot of
hacking (you have to boot with the /3GB flag, as well as set the "large
address aware" bit in the executable).

You don't seem to have been terribly close to that amount, but all it
takes is a single allocation that can't be satisfied (no contiguous block
that big left) to make it fail.
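For what it's worth, three full copies of a 240MB file already come to
roughly 720MB, which is in the same ballpark as the 680MB you saw, so the
numbers are at least consistent with one more big allocation pushing it
over (I'm only guessing at which allocation it actually was). Here is a
quick illustration of the point, in plain Python, nothing bzr-specific,
and the sizes are made up:

def try_allocate(mb):
    # Try to grab one contiguous block of `mb` megabytes.
    try:
        block = b'\x00' * (mb * 1024 * 1024)
        return True
    except MemoryError:
        return False

# On a fragmented 32-bit address space the larger sizes can fail with
# MemoryError well before the 2GB per-process limit is reached.
for size in (100, 300, 600, 900):
    print("%4d MB -> %s" % (size, "ok" if try_allocate(size) else "MemoryError"))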


> 
>>
>> At this point, we have focused on supporting versioning of source code,
>> and making that fast. So we have the explicit requirement that we can
>> hold 3 copies of a file in memory (base, this, and other, for merging).
> 
> I'd mention that large files are quite common in the multimedia software
> field, for example for regression tests. Some of those files can be very
> large (HDTV clips), so loading a whole file into memory (never mind
> multiple copies) may simply be impossible. I believe you know about such
> cases, so I just want to bring it up and ask when (if ever) you plan to
> implement support for large files?
> 

Well, at this point we are not planning on letting you version 4.7GB DVDs
just yet. As Marius said, it would be possible to switch algorithms based
on size, but it adds a lot of complexity. So I see that sort of thing as
post 1.0 (2.0?).
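
Just to sketch what "switch algorithms based on size" could mean in
practice (this is only an illustration, not bzr code; the threshold and
the store_whole/store_chunks names are made up), the idea is to keep the
current whole-text path for small files and stream large ones in
fixed-size chunks so peak memory stays bounded:

import os

LARGE_FILE_THRESHOLD = 32 * 1024 * 1024  # made-up cutoff: 32MB
CHUNK_SIZE = 1024 * 1024                 # stream 1MB at a time

def add_file_text(path, store_whole, store_chunks):
    # store_whole/store_chunks stand in for whatever the storage layer
    # offers; store_chunks is expected to consume the chunks during the
    # call, while the file is still open.
    size = os.path.getsize(path)
    with open(path, 'rb') as f:
        if size <= LARGE_FILE_THRESHOLD:
            # Small file: current behaviour, whole text in memory at once.
            store_whole(f.read())
        else:
            # Large file: feed the store one chunk at a time so peak
            # memory use is bounded regardless of file size.
            def chunks():
                while True:
                    block = f.read(CHUNK_SIZE)
                    if not block:
                        break
                    yield block
            store_chunks(chunks())

Most of the extra complexity is that merge in particular assumes it can
hold the whole texts (that's the base/this/other point above), so a
chunked path can't just be bolted onto the existing code.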


> Best regards,
> Andrew Voznytsa
> 
> 

John
=:->


