Rev 3677: Document my attempt to use copy() as a look-ahead. in http://bzr.arbash-meinel.com/branches/bzr/1.7-dev/btree
John Arbash Meinel
john at arbash-meinel.com
Fri Aug 22 21:35:53 BST 2008
At http://bzr.arbash-meinel.com/branches/bzr/1.7-dev/btree
------------------------------------------------------------
revno: 3677
revision-id: john at arbash-meinel.com-20080822203551-cnz8r1hpi4wyfamh
parent: john at arbash-meinel.com-20080822203320-y98xykrjms4r5goj
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: btree
timestamp: Fri 2008-08-22 15:35:51 -0500
message:
Document my attempt to use copy() as a look-ahead.
modified:
bzrlib/chunk_writer.py chunk_writer.py-20080630234519-6ggn4id17nipovny-1
=== modified file 'bzrlib/chunk_writer.py'
--- a/bzrlib/chunk_writer.py 2008-08-22 20:33:20 +0000
+++ b/bzrlib/chunk_writer.py 2008-08-22 20:35:51 +0000
@@ -174,6 +174,14 @@
         else:
             # This may or may not fit, try to add it with Z_SYNC_FLUSH
             _stats[8] += 1 # len(bytes)
+            # Note: It is tempting to do this as a look-ahead pass, and to
+            # 'copy()' the compressor before flushing. However, it seems that
+            # 'flush()' is when the compressor actually does most work
+            # (consider it the real compression pass over the data-so-far).
+            # Which means that it is the same thing as increasing repack,
+            # similar cost, same benefit. And this way we still have the
+            # 'repack' knob that can be adjusted, and not depend on a
+            # platform-specific 'copy()' function.
             out = comp.compress(bytes)
             out += comp.flush(Z_SYNC_FLUSH)
             self.unflushed_in_bytes = 0
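The trade-off described in the new comment can be illustrated with a minimal standalone sketch. This is not bzrlib's actual code: the function names (fits_via_copy, fits_via_flush) and the budget parameter are invented for illustration. It contrasts probing a copy() of the zlib compressor against flushing the real one with Z_SYNC_FLUSH; since flush() is where most of the compression work happens, the copy-based look-ahead costs roughly as much as just flushing.

```python
import zlib

def fits_via_copy(comp, data, budget):
    # Look-ahead pass: copy the compressor, compress + flush the copy,
    # and measure the output without disturbing the original stream.
    # copy() works in CPython's zlib, but (as the comment notes) it is
    # a platform-specific capability.
    probe = comp.copy()
    out = probe.compress(data) + probe.flush(zlib.Z_SYNC_FLUSH)
    return len(out) <= budget

def fits_via_flush(comp, data, budget):
    # Direct approach: flush the real compressor. Z_SYNC_FLUSH forces
    # the actual compression pass over the data-so-far, which is where
    # most of the work happens -- so copying first saves little.
    out = comp.compress(data) + comp.flush(zlib.Z_SYNC_FLUSH)
    return len(out) <= budget

comp = zlib.compressobj()
fits_via_copy(comp, b"some highly repetitive data " * 8, 64)
```

Either way the full compression pass runs once per probe, which is why the commit keeps the tunable 'repack' knob instead of relying on copy().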
More information about the bazaar-commits mailing list