Bug report

Vincent Ladeuil v.ladeuil+lp at free.fr
Mon Dec 17 15:52:25 GMT 2007


>>>>> "bialix" == Alexander Belchenko <bialix at ukr.net> writes:

    bialix> John Arbash Meinel writes:
    >> Alexander Belchenko wrote:
    >>> Vincent,
    >>> 
    >>> What do you think: is your patch for http readv, which reads the
    >>> answer in chunks, also applicable to the sftp transport? The symptoms
    >>> look pretty similar at first glance.
    >>> 
    >>> Alexander
    >> 
    >> sftp already reads in chunks. So it is something different.
    >> 
    >> I wonder if it has to do with Andrew's work to have the protocol write in
    >> "chunked encoding".
    >> 
    >> I'm guessing the code might actually be using "get_data_stream()" rather than
    >> "readv". But I'm not positive.
    >> 
    >> If it is using get_data_stream, then it is actually building up a large packet
    >> of data to send over the wire.
    >> 
    >> Having memory problems with 1GB of RAM seems surprising.
    >> 
    >> If he is interested in testing, I suppose he could try something like:
    >> 
    >> bzr branch -r 100, then bzr pull -r 200, bzr pull -r 300, and so on, to
    >> see if that works.
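
(For concreteness, that would look something like the commands below;
the URL and directory name are just placeholders I made up:)

    bzr branch -r 100 sftp://server/path/to/branch work
    cd work
    bzr pull -r 200
    bzr pull -r 300
    # ...and so on, up to the head revision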

    bialix> More details from Vlad (I translated his e-mail from Russian):

    bialix> "This trick (with splitting revisions ranges) cannot be used
    bialix> because error occurs on the first revision. My friend long time
    bialix> working on Windows without version control, and then I
    bialix> (i.e. Vlad) get 1GB of data files and create new repository, and
    bialix> then push it to server. So, repository from the start becomes
    bialix> very big. And because usual checkout command fails, I need to
    bialix> push branch via SMB.

    bialix> More info: the repository contains many big files (10MB and
    bialix> bigger). So your assumption about a large amount of requested
    bialix> data is probably true.

    bialix> I have Python installed and can therefore patch the Python-based
    bialix> version of bzr to test John Meinel's patch.

    bialix> Logs will follow later."

Sooo, in the other mail (forwarded to John too), I noticed that
the .pack file is a 512MB file.

Now that I think about it, John was on the right track with his
patch, but we should limit the size of the coalesced offsets (so
that an offset provided *to* readv never crosses coalesced
offsets), i.e. use the 'max_size' parameter instead of the
'limit' parameter.
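
Roughly, the kind of size-capped coalescing I have in mind looks
like the sketch below. This is illustrative only, not the actual
bzrlib code: the function name, arguments and default values here
are all made up.

    def coalesce_offsets(offsets, fudge=0, max_size=4 * 1024 * 1024):
        """Yield (start, length) ranges covering the requested offsets.

        offsets  -- (offset, length) pairs, assumed sorted by offset
        fudge    -- merge two requests if the gap between them is <= fudge
        max_size -- never let a coalesced range grow past this many bytes
        """
        cur_start = cur_end = None
        for start, length in offsets:
            end = start + length
            if cur_start is None:
                cur_start, cur_end = start, end
            elif start <= cur_end + fudge and end - cur_start <= max_size:
                # Close enough to merge, and still under the size cap.
                cur_end = max(cur_end, end)
            else:
                # Either a hole too big to bridge, or the cap was reached:
                # emit the current range and start a new one.
                yield cur_start, cur_end - cur_start
                cur_start, cur_end = start, end
        if cur_start is not None:
            yield cur_start, cur_end - cur_start

With something along those lines, even a 512MB .pack file would be
read back through a series of bounded requests instead of a few
huge coalesced ranges that have to be buffered whole.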

    Vincent



