Rev 3814: Bring in the tip of brisbane-core, including the remap fixes. in http://bzr.arbash-meinel.com/branches/bzr/brisbane/prefix
John Arbash Meinel
john at arbash-meinel.com
Wed Jan 7 18:37:29 GMT 2009
At http://bzr.arbash-meinel.com/branches/bzr/brisbane/prefix
------------------------------------------------------------
revno: 3814
revision-id: john at arbash-meinel.com-20090107183404-fj6u2hhxl81t0msa
parent: john at arbash-meinel.com-20081220211649-llqcjbt02rkyk6ef
parent: john at arbash-meinel.com-20090107180839-li0vjvvg8bx85bcc
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: prefix
timestamp: Wed 2009-01-07 12:34:04 -0600
message:
Bring in the tip of brisbane-core, including the remap fixes.
modified:
bzrlib/chk_map.py chk_map.py-20081001014447-ue6kkuhofvdecvxa-1
bzrlib/tests/test_chk_map.py test_chk_map.py-20081001014447-ue6kkuhofvdecvxa-2
------------------------------------------------------------
revno: 3802.1.4
revision-id: john at arbash-meinel.com-20090107180839-li0vjvvg8bx85bcc
parent: john at arbash-meinel.com-20081219230732-ri1i1tujtrh2d3sl
parent: john at arbash-meinel.com-20090107180553-5sr2e5lfxcan84u5
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: brisbane-core
timestamp: Wed 2009-01-07 12:08:39 -0600
message:
Change _iter_nodes so that it is a generator.
Change _check_remap to use _iter_nodes to eliminate the redundancy.
We also added a 'batch_size' parameter so that, with large fan-outs,
we don't page in all the nodes unnecessarily, which both adds direct
computation overhead and causes us to thrash the page cache.
(A standalone sketch of the batching pattern follows this entry.)
modified:
bzrlib/chk_map.py chk_map.py-20081001014447-ue6kkuhofvdecvxa-1
bzrlib/tests/test_chk_map.py test_chk_map.py-20081001014447-ue6kkuhofvdecvxa-2
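
A minimal, standalone sketch of the batch_size idea described in the entry
above. The names (iter_nodes_batched, the dict-shaped 'items', the
dict-backed 'store') are illustrative stand-ins, not bzrlib's API:

def iter_nodes_batched(items, store, batch_size=None):
    """Yield child nodes, reading unresolved keys from 'store' in batches.

    items maps prefix -> either an already-loaded node or a key tuple.
    """
    pending = {}  # key -> prefix, for children that still have to be read
    for prefix, node in items.items():
        if isinstance(node, tuple):
            pending[node] = prefix
        else:
            yield node  # already in memory, no I/O required
    if not pending:
        return
    if batch_size is None:
        batch_size = len(pending)  # old behaviour: one request for everything
    keys = list(pending)
    for start in range(0, len(keys), batch_size):
        batch = keys[start:start + batch_size]
        # Resolve a whole batch before yielding anything from it, so the
        # caller never holds a node while a read is still outstanding.
        resolved = [store[key] for key in batch]
        for node in resolved:
            yield node

# Usage: list() consumes everything; a caller that stops early would simply
# never trigger the later batches.
items = {'aa': ('sha1:aa',), 'ab': 'node-ab', 'ac': ('sha1:ac',)}
store = {('sha1:aa',): 'node-aa', ('sha1:ac',): 'node-ac'}
print(list(iter_nodes_batched(items, store, batch_size=1)))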
------------------------------------------------------------
revno: 3802.2.7
revision-id: john at arbash-meinel.com-20090107180553-5sr2e5lfxcan84u5
parent: john at arbash-meinel.com-20090107175432-hw8ozprv36irblz0
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: remap
timestamp: Wed 2009-01-07 12:05:53 -0600
message:
Change _iter_nodes into a generator.
This dramatically simplifies _check_remap, because all of the code to
handle paging in new nodes is already present.
All we needed to add was the ability to 'batch' requests to
get_record_stream(), instead of reading everything in at once.
(See the early-exit sketch after this entry.)
modified:
bzrlib/chk_map.py chk_map.py-20081001014447-ue6kkuhofvdecvxa-1
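
On the caller side, the generator means _check_remap-style code can stop as
soon as the answer is known. A sketch with plain dicts standing in for leaf
nodes (illustrative names, not the real bzrlib code):

def would_fit_in_one_leaf(node_iter, capacity):
    """Return True only if the items of every yielded node fit within capacity."""
    used = 0
    for node in node_iter:
        for value in node.values():
            used += len(value)
            if used > capacity:
                # Abandon the generator here: batches that were never
                # started are never read from the store.
                return False
    return True

leaves = iter([{'a': 'x' * 40}, {'b': 'y' * 40}, {'c': 'z' * 40}])
print(would_fit_in_one_leaf(leaves, capacity=50))  # False, decided after two nodes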
------------------------------------------------------------
revno: 3802.2.6
revision-id: john at arbash-meinel.com-20090107175432-hw8ozprv36irblz0
parent: john at arbash-meinel.com-20090107175252-19qp2rn5cknmlqof
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: remap
timestamp: Wed 2009-01-07 11:54:32 -0600
message:
Fix a test case that exercises the new stop-early code.
modified:
bzrlib/tests/test_chk_map.py test_chk_map.py-20081001014447-ue6kkuhofvdecvxa-2
------------------------------------------------------------
revno: 3802.2.5
revision-id: john at arbash-meinel.com-20090107175252-19qp2rn5cknmlqof
parent: john at arbash-meinel.com-20081224165308-mdwuyh9kxtmijy65
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: remap
timestamp: Wed 2009-01-07 11:52:52 -0600
message:
Change _check_remap to only page in a batch of children at a time.
If we have more than 16 children, it is unlikely that we will be able to fit
everything into a single node anyway, and batching keeps us from requesting
all 255 nodes and thrashing the page cache.
(Some rough numbers follow this entry.)
modified:
bzrlib/chk_map.py chk_map.py-20081001014447-ue6kkuhofvdecvxa-1
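
Some rough numbers behind the batch-of-16 choice, assuming roughly 4KiB per
child page (an assumption for illustration, not a measurement from this branch):

PAGE_BYTES = 4 * 1024
print(255 * PAGE_BYTES)  # ~1MB read if every child of a 255-way node is paged in
print(16 * PAGE_BYTES)   # 64KB for a single batch of 16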
------------------------------------------------------------
revno: 3802.2.4
revision-id: john at arbash-meinel.com-20081224165308-mdwuyh9kxtmijy65
parent: john at arbash-meinel.com-20081209231116-6lm0mt17pxieb18x
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: remap
timestamp: Wed 2008-12-24 10:53:08 -0600
message:
Remove pdb statement.
modified:
bzrlib/chk_map.py chk_map.py-20081001014447-ue6kkuhofvdecvxa-1
------------------------------------------------------------
revno: 3802.2.3
revision-id: john at arbash-meinel.com-20081209231116-6lm0mt17pxieb18x
parent: john at arbash-meinel.com-20081209063920-y7dofjycpl6m946l
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: xml_cache
timestamp: Tue 2008-12-09 17:11:16 -0600
message:
Properly remove keys that are found in the page cache, and add some debugging.
(A small sketch of the removal pattern follows this entry.)
modified:
bzrlib/chk_map.py chk_map.py-20081001014447-ue6kkuhofvdecvxa-1
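
A small standalone sketch of the pattern this fix is about: record cache hits
while iterating over the pending keys, and only delete them afterwards,
because removing entries from a dict during iteration raises "dictionary
changed size during iteration". Names and cache contents are hypothetical:

page_cache = {('sha1:aa',): b'node-aa-bytes'}        # assumed cache contents
pending = {('sha1:aa',): 'aa', ('sha1:ab',): 'ab'}   # key -> prefix still to read

found = set()
for key in pending:
    try:
        cached = page_cache[key]
    except KeyError:
        continue
    # ... a real implementation would deserialise 'cached' into a node here ...
    found.add(key)
for key in found:
    del pending[key]
print(pending)  # only ('sha1:ab',) is left to fetch from the store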
------------------------------------------------------------
revno: 3802.2.2
revision-id: john at arbash-meinel.com-20081209063920-y7dofjycpl6m946l
parent: john at arbash-meinel.com-20081209061005-gz20bp1fke585zll
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: xml_cache
timestamp: Tue 2008-12-09 00:39:20 -0600
message:
Finish using the page cache as part of _check_remap, and add debugging
functions that count what happens during _check_remap().
(A sketch of such counters follows this entry.)
modified:
bzrlib/chk_map.py chk_map.py-20081001014447-ue6kkuhofvdecvxa-1
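
The debugging functions themselves are not visible in the diff below; one
plausible minimal shape for such counters (purely an assumption, not the
code from this commit) is:

_remap_counts = {'page_cache_hit': 0, 'page_cache_miss': 0, 'stopped_early': 0}

def note(event):
    """Bump a counter so a developer can print the totals after a run."""
    _remap_counts[event] = _remap_counts.get(event, 0) + 1

note('page_cache_hit')
note('page_cache_miss')
print(_remap_counts)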
------------------------------------------------------------
revno: 3802.2.1
revision-id: john at arbash-meinel.com-20081209061005-gz20bp1fke585zll
parent: john at arbash-meinel.com-20081219230732-ri1i1tujtrh2d3sl
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: xml_cache
timestamp: Tue 2008-12-09 00:10:05 -0600
message:
Use the page cache as part of _check_remap()
modified:
bzrlib/chk_map.py chk_map.py-20081001014447-ue6kkuhofvdecvxa-1
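
These page-cache entries lead into the diff below, which also replaces the
fixed-entry LRUCache with a size-based cache, since the cached values are
byte strings and len(value) measures them exactly. A minimal standalone
sketch of a byte-size-bounded cache (just the idea, not bzrlib's
lru_cache.LRUSizeCache):

from collections import OrderedDict

class TinyByteSizeCache(object):
    """Evict the oldest entries once the cached values exceed max_bytes in total."""

    def __init__(self, max_bytes):
        self._max_bytes = max_bytes
        self._bytes = 0
        self._items = OrderedDict()

    def add(self, key, value):
        if key in self._items:
            self._bytes -= len(self._items.pop(key))
        self._items[key] = value
        self._bytes += len(value)
        while self._bytes > self._max_bytes:
            _oldest, old_value = self._items.popitem(last=False)
            self._bytes -= len(old_value)

    def __contains__(self, key):
        return key in self._items

cache = TinyByteSizeCache(max_bytes=10)
cache.add(('k1',), b'aaaaaa')  # 6 bytes
cache.add(('k2',), b'bbbbbb')  # total would be 12 > 10, so ('k1',) is evicted
print(('k1',) in cache, ('k2',) in cache)  # False True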
-------------- next part --------------
=== modified file 'bzrlib/chk_map.py'
--- a/bzrlib/chk_map.py 2008-12-20 21:16:49 +0000
+++ b/bzrlib/chk_map.py 2009-01-07 18:34:04 +0000
@@ -43,11 +43,14 @@
lazy_import.lazy_import(globals(), """
from bzrlib import versionedfile
""")
-from bzrlib.lru_cache import LRUCache
+from bzrlib import lru_cache
# approx 2MB
-_PAGE_CACHE_SIZE = 2*1024*1024 / 4*1024
-_page_cache = LRUCache(_PAGE_CACHE_SIZE)
+# If each line is 50 bytes, and you have 255 internal pages, with 255-way fan
+# out, it takes 3.1MB to cache the layer.
+_PAGE_CACHE_SIZE = 4*1024*1024
+# We are caching bytes so len(value) is perfectly accurate
+_page_cache = lru_cache.LRUSizeCache(_PAGE_CACHE_SIZE)
class CHKMap(object):
@@ -529,7 +532,8 @@
elements = line.split('\x00', width)
items[tuple(elements[:-1])] = elements[-1]
if len(items) != length:
- raise AssertionError("item count mismatch")
+ raise AssertionError("item count (%d) mismatch for key %s,"
+ " bytes %r" % (length, key, bytes))
result._items = items
result._len = length
assert length == len(lines) - 5
@@ -800,22 +804,25 @@
for item in node.iteritems(store, key_filter=key_filter):
yield item
- def _iter_nodes(self, store, key_filter=None):
+ def _iter_nodes(self, store, key_filter=None, batch_size=None):
"""Iterate over node objects which match key_filter.
:param store: A store to use for accessing content.
:param key_filter: A key filter to filter nodes. Only nodes that might
contain a key in key_filter will be returned.
- :return: An iterable of nodes.
+ :param batch_size: If not None, then we will return the nodes that had
+ to be read using get_record_stream in batches, rather than reading
+ them all at once.
+ :return: An iterable of nodes. This function does not have to be fully
+ consumed. (There will be no pending I/O when items are being returned.)
"""
- nodes = []
keys = {}
if key_filter is None:
for prefix, node in self._items.iteritems():
if type(node) == tuple:
keys[node] = prefix
else:
- nodes.append(node)
+ yield node
else:
# XXX defaultdict ?
length_filters = {}
@@ -831,7 +838,7 @@
if type(node) == tuple:
keys[node] = prefix
else:
- nodes.append(node)
+ yield node
break
if keys:
# Look in the page cache for some more bytes
@@ -843,21 +850,31 @@
continue
else:
node = _deserialise(bytes, key)
- nodes.append(node)
self._items[keys[key]] = node
found_keys.add(key)
+ yield node
for key in found_keys:
del keys[key]
if keys:
# demand load some pages.
- stream = store.get_record_stream(keys, 'unordered', True)
- for record in stream:
- bytes = record.get_bytes_as('fulltext')
- node = _deserialise(bytes, record.key)
- nodes.append(node)
- self._items[keys[record.key]] = node
- _page_cache.add(record.key, bytes)
- return nodes
+ if batch_size is None:
+ # Read all the keys in
+ batch_size = len(keys)
+ key_order = list(keys)
+ for batch_start in range(0, len(key_order), batch_size):
+ batch = key_order[batch_start:batch_start + batch_size]
+ # We have to fully consume the stream so there is no pending
+ # I/O, so we buffer the nodes for now.
+ stream = store.get_record_stream(batch, 'unordered', True)
+ nodes = []
+ for record in stream:
+ bytes = record.get_bytes_as('fulltext')
+ node = _deserialise(bytes, record.key)
+ nodes.append(node)
+ self._items[keys[record.key]] = node
+ _page_cache.add(record.key, bytes)
+ for node in nodes:
+ yield node
def map(self, store, key, value):
"""Map key to value."""
@@ -877,7 +894,7 @@
new_parent.add_node(self._lookup_prefix[:len(new_prefix)+1],
self)
return new_parent.map(store, key, value)
- children = self._iter_nodes(store, key_filter=[key])
+ children = list(self._iter_nodes(store, key_filter=[key]))
if children:
child = children[0]
else:
@@ -1012,7 +1029,7 @@
"""Remove key from this node and it's children."""
if not len(self._items):
raise AssertionError("cant unmap in an empty InternalNode.")
- children = self._iter_nodes(store, key_filter=[key])
+ children = list(self._iter_nodes(store, key_filter=[key]))
if children:
child = children[0]
else:
@@ -1066,52 +1083,20 @@
new_leaf = LeafNode()
new_leaf.set_maximum_size(self._maximum_size)
new_leaf._key_width = self._key_width
- keys = {}
- # There is some overlap with _iter_nodes here, but not a lot, and it
- # allows us to do quick evaluation without paging everything in
- for prefix, node in self._items.iteritems():
- if type(node) == tuple:
- keys[node] = prefix
- else:
- if isinstance(node, InternalNode):
- # Without looking at any leaf nodes, we are sure
- return self
- for key, value in node._items.iteritems():
- if new_leaf._map_no_split(key, value):
- # Adding this key would cause a split, so we know we
- # don't need to collapse
- return self
- # So far, everything fits. Page in the rest of the nodes, and see if it
- # holds true.
- if keys:
- # TODO: Consider looping over a limited set of keys (like 25 or so
- # at a time). If we have to read more than 25 we almost
- # certainly won't fit them all into a single new LeafNode, so
- # reading the extra nodes is a waste.
- # This will probably matter more with hash serialised keys,
- # as we will get InternalNodes with more references.
- # The other argument is that unmap() is uncommon, so we don't
- # need to optimize it. But map() with a slightly shorter
- # value may happen a lot.
- stream = store.get_record_stream(keys, 'unordered', True)
- nodes = []
- # Fully consume the stream, even if we could determine that we
- # don't need to continue. We requested the bytes, we may as well
- # use them
- for record in stream:
- node = _deserialise(record.get_bytes_as('fulltext'), record.key)
- self._items[keys[record.key]] = node
- nodes.append(node)
- for node in nodes:
- if isinstance(node, InternalNode):
- # We know we won't fit
- return self
- for key, value in node._items.iteritems():
- if new_leaf._map_no_split(key, value):
- return self
-
- # We have gone to every child, and everything fits in a single leaf
- # node, we no longer need this internal node
+ # A batch_size of 16 was chosen because:
+ # a) In testing, a 4k page held 14 times. So if we have more than 16
+ # leaf nodes we are unlikely to hold them in a single new leaf
+ # node. This still allows for 1 round trip
+ # b) With 16-way fan out, we can still do a single round trip
+ # c) With 255-way fan out, we don't want to read all 255 and destroy
+ # the page cache, just to determine that we really don't need it.
+ for node in self._iter_nodes(store, batch_size=16):
+ if isinstance(node, InternalNode):
+ # Without looking at any leaf nodes, we are sure
+ return self
+ for key, value in node._items.iteritems():
+ if new_leaf._map_no_split(key, value):
+ return self
return new_leaf
=== modified file 'bzrlib/tests/test_chk_map.py'
--- a/bzrlib/tests/test_chk_map.py 2008-12-13 00:01:00 +0000
+++ b/bzrlib/tests/test_chk_map.py 2009-01-07 18:34:04 +0000
@@ -571,11 +571,12 @@
self.assertIsInstance(chkmap._root_node._items['aac'], tuple)
self.assertIsInstance(chkmap._root_node._items['aad'], LeafNode)
# Unmapping the new node will check the existing nodes to see if they
- # would fit, and find out that they do not
+ # would fit, and find out that they do not, but it can also find out
+ # before reading all of them.
chkmap.unmap(('aad',))
self.assertIsInstance(chkmap._root_node._items['aaa'], LeafNode)
self.assertIsInstance(chkmap._root_node._items['aab'], LeafNode)
- self.assertIsInstance(chkmap._root_node._items['aac'], LeafNode)
+ self.assertIsInstance(chkmap._root_node._items['aac'], tuple)
def test_iter_changes_empty_ab(self):
# Asking for changes between an empty dict to a dict with keys returns