[MERGE] Updated sftp_readv
Vincent Ladeuil
v.ladeuil+lp at free.fr
Thu Dec 20 20:35:21 GMT 2007
>>>>> "john" == John Arbash Meinel <john at arbash-meinel.com> writes:
<snip/>
john> The problem is that it doesn't break at boundaries. So what I actually need is:
john> data = ''.join([buffer[10][12:]] + buffer[11:15] + [buffer[15][:18]])
john> And the hard part is figuring out what all of those numbers should be. It might
john> be something like:
john> start_block = start_offset = None
john> end_block = end_offset = None
john> bytes_so_far = 0
john> for block_idx, block in enumerate(buffer):
john>     next_bytes_so_far = bytes_so_far + len(block)
john>     if start_block is None:
john>         if next_bytes_so_far > start:
john>             start_block = block_idx
john>             start_offset = start - bytes_so_far
john>     if end_block is None:
john>         if next_bytes_so_far > end:
john>             end_block = block_idx
john>             end_offset = end - bytes_so_far
john>             break # We know we are done
john>     bytes_so_far = next_bytes_so_far # advance past this block
john> if end_block == start_block:
john>     data = buffer[start_block][start_offset:end_offset]
john> else:
john>     data = ''.join([buffer[start_block][start_offset:]]
john>                    + buffer[start_block+1:end_block]
john>                    + [buffer[end_block][:end_offset]])
john> Which I think is correct, but it certainly doesn't fall
john> under the "simple" definition.
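(For concreteness, the arithmetic quoted above amounts to roughly the
following sketch; 'extract_range' is a made-up name, 'buffer' is the
list of already-read blocks, and start/end are absolute byte offsets
into their concatenation:)

    def extract_range(buffer, start, end):
        # Walk the blocks, tracking how many bytes precede each one,
        # until we find where 'start' and 'end' fall.
        start_block = start_offset = None
        bytes_so_far = 0
        for block_idx, block in enumerate(buffer):
            next_bytes_so_far = bytes_so_far + len(block)
            if start_block is None and next_bytes_so_far > start:
                start_block = block_idx
                start_offset = start - bytes_so_far
            if next_bytes_so_far >= end:
                # 'end' falls in (or at the end of) this block.
                if block_idx == start_block:
                    return block[start_offset:end - bytes_so_far]
                return ''.join([buffer[start_block][start_offset:]]
                               + buffer[start_block + 1:block_idx]
                               + [block[:end - bytes_so_far]])
            bytes_so_far = next_bytes_so_far
        raise ValueError('end %d is past the buffered data' % end)

    # e.g. with buffer = ['0123456789'] * 3,
    # extract_range(buffer, 12, 25) returns '2345678901234'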
Eeeek, sure.
But why are you trying to do that?
Because your coalesced offsets are so big that you don't want to
buffer them entirely?
Why not make them smaller, then?
I think the biggest offset a readv can be required to yield can't
be bigger than the full text of a revision of a given file, and
users should have machines configured to handle that (they
versioned the file in the first place, didn't they?).
So I'll go the same way as for http: limit the size of the
coalesced offsets. As long as you buffer the requests, that should
not make any difference in terms of latency.
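(A sketch of what I mean, with made-up names -- 'coalesce_offsets'
below is not the real transport code, just the idea: sorted
(start, length) read requests are merged only while the merged chunk
stays under 'max_size':)

    def coalesce_offsets(offsets, max_size):
        """Merge adjacent (start, length) offsets, capping each chunk.

        offsets must be sorted and non-overlapping; each yielded chunk
        is at most max_size bytes (unless a single request is bigger).
        """
        if not offsets:
            return
        cur_start, cur_len = offsets[0]
        for start, length in offsets[1:]:
            if start == cur_start + cur_len and cur_len + length <= max_size:
                # Contiguous and still under the cap: extend the chunk.
                cur_len += length
            else:
                yield cur_start, cur_len
                cur_start, cur_len = start, length
        yield cur_start, cur_len

    # e.g. list(coalesce_offsets([(0, 10), (10, 10), (20, 10)], 25))
    # gives [(0, 20), (20, 10)]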
In fact, doing:

    cur_coalesced.ranges = new_ranges

is nothing more than doing that after the fact.
Or did I miss something?
Vincent