Bug report
Alexander Belchenko
bialix at ukr.net
Mon Dec 17 14:14:20 GMT 2007
John Arbash Meinel writes:
> Alexander Belchenko wrote:
>> John Arbash Meinel writes:
>>> Alexander Belchenko wrote:
>>>> Vincent,
>>>>
>>>> What do you think: is your patch for http readv, which reads the answer
>>>> in chunks, applicable to the sftp transport? The symptoms look pretty
>>>> similar at first glance.
>>>>
>>>> Alexander
>>> sftp already reads in chunks. So it is something different.
>>>
>>> I wonder if it has to do with Andrew's work to have the protocol write in
>>> "chunked encoding".
>> Umm? Vlad used the plain sftp protocol. It's unrelated to the bzr smart server, IMO.
>>
>>> Having memory problems with 1GB of RAM seems surprising.
>>>
>>> If he is interested in testing, I suppose he could try one of those
>>>
>>> bzr branch -r 100, bzr pull -r 200, bzr pull -r 300
>>>
>>> sorts of things, to see if that works.
>> This trick works for my problem with the smart server. IMO it should work
>> for sftp too.
>>
>> But it would be nice if we could force this behavior, either automatically
>> at the transport level or from the command line.
>>
>
> If it works for sftp, then I'm curious if something else is going on. Also,
> what version of paramiko are you using? (I recall that <1.6? would make
> requests larger than the SFTP spec required implementations to support.)
As I can see from the traceback, Vlad used the standalone bzr.exe and
therefore Paramiko 1.7 (I bundle my custom ctypes version).
> Vincent has a decent "transportstats" plugin which could be interesting here.
> I'm really surprised to see plain "sftp" failing. The last time I looked at the
> code, no single request would be >32KB, though we might make a lot of async
> requests for lots of ranges.
>
> Maybe that is the problem, that we are queuing up too many requests at once.
>
> We could check some of that from the log file. It should have a line like:
>
> SFTP.readv() %s offsets => %s coalesced => %s requests
>
> Which would at least tell us how many concurrent requests we are trying to make.
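For reference, lines of that shape can be picked out of the trace log with a
small script. This is only a sketch, not part of bzrlib: the regex assumes the
three %s placeholders in the format string above are filled with plain integers.

```python
import re

# Matches a log line of the form John quotes above, e.g.:
#   SFTP.readv() 1200 offsets => 300 coalesced => 300 requests
READV_RE = re.compile(
    r"SFTP\.readv\(\)\s+(\d+)\s+offsets\s+=>\s+(\d+)\s+coalesced"
    r"\s+=>\s+(\d+)\s+requests"
)

def parse_readv_line(line):
    """Return (offsets, coalesced, requests) or None if the line doesn't match."""
    m = READV_RE.search(line)
    if m is None:
        return None
    return tuple(int(g) for g in m.groups())

print(parse_readv_line(
    "SFTP.readv() 1200 offsets => 300 coalesced => 300 requests"))
# → (1200, 300, 300)
```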
>
> You could also try something like the attached patch. Which makes sure that we
> don't asynchronously request more than 10MB of data at a time. (32KB * 300 =
> 9.6MB.)
It's not easy to try your patch in the case of bzr.exe, because all the
Python sources are compiled to *.pyc. But it's doable, if Vlad is willing to try.
Sorry, the next part is for Vlad; translated from Russian:
Vlad, if you want to try the patch John Meinel proposed, I can explain
how to modify the version you have installed. You will need to replace
one file in the zip archive.
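For anyone curious, swapping one file inside a zip archive can be done with the
standard library. The archive and member names in the commented usage are
hypothetical; the actual layout inside a bzr.exe install may differ. A sketch:

```python
import shutil
import zipfile

def replace_in_zip(zip_path, member, new_file):
    """Rewrite zip_path so that `member` holds the contents of new_file.

    zipfile cannot replace an entry in place, so copy every other entry
    into a fresh archive and add the new file under the old name.
    """
    tmp_path = zip_path + ".tmp"
    with zipfile.ZipFile(zip_path) as src, \
         zipfile.ZipFile(tmp_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename != member:
                dst.writestr(item, src.read(item.filename))
        dst.write(new_file, arcname=member)
    shutil.move(tmp_path, zip_path)

# Hypothetical usage against a bzr.exe install:
# replace_in_zip("library.zip", "bzrlib/transport/sftp.pyc",
#                "patched_sftp.pyc")
```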