Rev 3687: Merge in some of the changes from the old sftp_chunked branch. in http://bzr.arbash-meinel.com/branches/bzr/1.7-dev/sftp_chunked
John Arbash Meinel
john at arbash-meinel.com
Wed Sep 3 23:03:17 BST 2008
At http://bzr.arbash-meinel.com/branches/bzr/1.7-dev/sftp_chunked
------------------------------------------------------------
revno: 3687
revision-id: john at arbash-meinel.com-20080903220310-1uwt7qt5p1istebv
parent: pqm at pqm.ubuntu.com-20080903205840-mteswj8dfvld7vo3
parent: john at arbash-meinel.com-20071217233848-pq8zo3fyr9yt1rc1
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: sftp_chunked
timestamp: Wed 2008-09-03 17:03:10 -0500
message:
Merge in some of the changes from the old sftp_chunked branch.
But revert the sftp code itself.
We should start from scratch.
modified:
bzrlib/errors.py errors.py-20050309040759-20512168c4e14fbd
bzrlib/transport/__init__.py transport.py-20050711165921-4978aa7ce1285ad5
------------------------------------------------------------
revno: 3120.2.2
revision-id: john at arbash-meinel.com-20071217233848-pq8zo3fyr9yt1rc1
parent: john at arbash-meinel.com-20071217165633-unoib2xwcy3moixw
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: sftp_chunked
timestamp: Mon 2007-12-17 17:38:48 -0600
message:
finish polishing up the sftp code.
modified:
bzrlib/transport/sftp.py sftp.py-20051019050329-ab48ce71b7e32dfe
------------------------------------------------------------
revno: 3120.2.1
revision-id: john at arbash-meinel.com-20071217165633-unoib2xwcy3moixw
parent: pqm at pqm.ubuntu.com-20071217060447-sictlq5nibqhpuec
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: sftp_chunked
timestamp: Mon 2007-12-17 10:56:33 -0600
message:
Change the sftp_readv loop to buffer even less.
Instead of waiting until we have a whole collapsed range, start trying to
return data as soon as any data arrives.
Also, there is a common case of requests being made in sorted order,
which means that we don't need to buffer much at all. (A sketch of this
buffering strategy follows the file list below.)
modified:
bzrlib/errors.py errors.py-20050309040759-20512168c4e14fbd
bzrlib/transport/__init__.py transport.py-20050711165921-4978aa7ce1285ad5
bzrlib/transport/sftp.py sftp.py-20051019050329-ab48ce71b7e32dfe
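
As a concrete illustration of the buffering strategy described in the
message above, here is a minimal, hypothetical sketch; it is not the
committed sftp.py code (which the merge at the top reverts), and the
names readv_in_order and arrivals are invented for the example:

def readv_in_order(arrivals, offsets):
    """Yield (start, data) for each requested offset, in request order.

    ``arrivals`` yields (start, data) pairs as reads complete on the
    wire, possibly out of order.  Each range is handed back as soon as
    the next wanted range is available, so when reads complete in
    sorted order (the common case) almost nothing is buffered.
    """
    buffered = {}           # start offset -> data, for early arrivals
    wanted = iter(offsets)
    want_start, _want_len = next(wanted)
    for start, data in arrivals:
        buffered[start] = data
        # Drain every range we can now satisfy, in the requested order.
        while want_start in buffered:
            yield want_start, buffered.pop(want_start)
            try:
                want_start, _want_len = next(wanted)
            except StopIteration:
                return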
-------------- next part --------------
=== modified file 'bzrlib/errors.py'
--- a/bzrlib/errors.py 2008-09-02 18:51:03 +0000
+++ b/bzrlib/errors.py 2008-09-03 22:03:10 +0000
@@ -666,6 +666,16 @@
         self.actual = actual
 
 
+class OverlappingReadv(BzrError):
+    """Raised when a readv() requests overlapping chunks of data.
+
+    Not all transports support this, so the API should generally forbid it.
+    (It isn't a feature we need anyway.)
+    """
+
+    _fmt = 'Requested readv ranges overlap'
+
+
 class PathNotChild(PathError):
 
     _fmt = 'Path "%(path)s" is not a child of path "%(base)s"%(extra)s'
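
For reference, BzrError subclasses build their message from the
class-level _fmt template, so the new error needs no constructor
arguments. A quick check (assuming a bzrlib tree on sys.path):

from bzrlib import errors

err = errors.OverlappingReadv()
print(str(err))   # -> Requested readv ranges overlap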
=== modified file 'bzrlib/transport/__init__.py'
--- a/bzrlib/transport/__init__.py 2008-09-03 09:11:20 +0000
+++ b/bzrlib/transport/__init__.py 2008-09-03 22:03:10 +0000
@@ -751,7 +751,8 @@
         return offsets
 
     @staticmethod
-    def _coalesce_offsets(offsets, limit=0, fudge_factor=0, max_size=0):
+    def _coalesce_offsets(offsets, limit=0, fudge_factor=0, max_size=0,
+                          allow_overlap=False):
         """Yield coalesced offsets.
 
         With a long list of neighboring requests, combine them
@@ -760,27 +761,26 @@
         Turns [(15, 10), (25, 10)] => [(15, 20, [(0, 10), (10, 10)])]
 
         :param offsets: A list of (start, length) pairs
-
         :param limit: Only combine a maximum of this many pairs. Some transports
             penalize multiple reads more than others, and sometimes it is
             better to return early.
             0 means no limit
-
         :param fudge_factor: All transports have some level of 'it is
             better to read some more data and throw it away rather
             than seek', so collapse if we are 'close enough'
-
         :param max_size: Create coalesced offsets no bigger than this size.
             When a single offset is bigger than 'max_size', it will keep
             its size and be alone in the coalesced offset.
             0 means no maximum size.
-
-        :return: yield _CoalescedOffset objects, which have members for where
-            to start, how much to read, and how to split those
-            chunks back up
+        :param allow_overlap: If False, raise an error if requested ranges
+            overlap.
+        :return: A list of _CoalescedOffset objects, which have members for
+            where to start, how much to read, and how to split those chunks
+            back up
         """
         last_end = None
         cur = _CoalescedOffset(None, None, [])
+        coalesced_offsets = []
 
         for start, size in offsets:
             end = start + size
@@ -789,18 +789,19 @@
                 and start >= cur.start
                 and (limit <= 0 or len(cur.ranges) < limit)
                 and (max_size <= 0 or end - cur.start <= max_size)):
+                if not allow_overlap and start < last_end:
+                    raise errors.OverlappingReadv()
                 cur.length = end - cur.start
                 cur.ranges.append((start-cur.start, size))
             else:
                 if cur.start is not None:
-                    yield cur
+                    coalesced_offsets.append(cur)
                 cur = _CoalescedOffset(start, size, [(0, size)])
             last_end = end
 
         if cur.start is not None:
-            yield cur
-
-        return
+            coalesced_offsets.append(cur)
+        return coalesced_offsets
 
     def get_multi(self, relpaths, pb=None):
         """Get a list of file-like objects, one for each entry in relpaths.