Rev 23: Only decompress as much of the zlib data as is needed to read the text recipe. in http://people.ubuntu.com/~robertc/baz2.0/plugins/groupcompress/trunk
Robert Collins
robertc at robertcollins.net
Mon Jan 19 05:46:54 GMT 2009
At http://people.ubuntu.com/~robertc/baz2.0/plugins/groupcompress/trunk
------------------------------------------------------------
revno: 23
revision-id: robertc at robertcollins.net-20090119054653-khm0iyeyfv47hzb3
parent: robertc at robertcollins.net-20090108041820-8u4qmcarcv70so32
committer: Robert Collins <robertc at robertcollins.net>
branch nick: trunk
timestamp: Mon 2009-01-19 16:46:53 +1100
message:
Only decompress as much of the zlib data as is needed to read the text recipe.
=== modified file 'TODO'
--- a/TODO 2008-07-05 18:15:40 +0000
+++ b/TODO 2009-01-19 05:46:53 +0000
@@ -3,3 +3,4 @@
* layers - gc reader/writer and vf layers should be separate
* mpdiff usage
* other stuff from design
+ * use byte offsets not line offsets.
=== modified file 'groupcompress.py'
--- a/groupcompress.py 2009-01-07 03:25:15 +0000
+++ b/groupcompress.py 2009-01-19 05:46:53 +0000
@@ -297,8 +297,8 @@
def cleanup_pack_group(versioned_files):
+ versioned_files.writer.end()
versioned_files.stream.close()
- versioned_files.writer.end()
class GroupCompressVersionedFiles(VersionedFiles):
@@ -487,12 +487,15 @@
parents = self._unadded_refs[key]
else:
index_memo, _, parents, (method, _) = locations[key]
- # read
+ # read the group
read_memo = index_memo[0:3]
zdata = self._access.get_raw_records([read_memo]).next()
- # decompress
- plain = zlib.decompress(zdata)
- # parse
+ # decompress - whole thing; this is a bug.
+ decomp = zlib.decompressobj()
+ plain = decomp.decompress(zdata, index_memo[4])
+ # cheapo debugging:
+ # print len(zdata), len(plain)
+ # parse - requires split_lines, better to have byte offsets here.
delta_lines = split_lines(plain[index_memo[3]:index_memo[4]])
label, sha1, delta = parse(delta_lines)
if label != key:
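The core of the change above is switching from `zlib.decompress()`, which inflates the entire group, to `zlib.decompressobj().decompress()`, whose optional `max_length` argument caps how many output bytes are produced. A minimal standalone sketch of the same idea follows; the recipe bytes and sizes here are illustrative, not groupcompress's real on-disk format:

```python
import zlib

# Hypothetical group: a short text "recipe" header followed by a large body.
recipe = b"label: some-key\nsha1: 0123456789abcdef\n"
body = b"x" * 1_000_000
zdata = zlib.compress(recipe + body)

# Old approach: inflate the whole group just to read the header.
# plain = zlib.decompress(zdata)

# New approach: a decompression object stops after max_length output bytes.
decomp = zlib.decompressobj()
plain = decomp.decompress(zdata, len(recipe))
assert plain == recipe

# The unread compressed input is kept in decomp.unconsumed_tail and can be
# fed back in later if more of the group turns out to be needed.
more = decomp.decompress(decomp.unconsumed_tail, 10)
```

Note that `index_memo[4]` in the diff plays the role of `len(recipe)` here: it is the byte offset past the end of the wanted region, so decompression can stop as soon as that much plaintext exists.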