Rev 4677: Bring in newer bzr.dev, and Robert's 'adjacent-streams' work, in http://bazaar.launchpad.net/~jameinel/bzr/2.1b1-pack-on-the-fly

John Arbash Meinel john at arbash-meinel.com
Thu Sep 3 16:26:38 BST 2009


At http://bazaar.launchpad.net/~jameinel/bzr/2.1b1-pack-on-the-fly

------------------------------------------------------------
revno: 4677 [merge]
revision-id: john at arbash-meinel.com-20090903152627-opz8nb80gwvb0y5l
parent: john at arbash-meinel.com-20090903152536-guqk7hltitdra91w
parent: robertc at robertcollins.net-20090902222955-kwp6gjk0izbna7z1
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: 2.1b1-pack-on-the-fly
timestamp: Thu 2009-09-03 10:26:27 -0500
message:
  Bring in newer bzr.dev, and Robert's 'adjacent-streams' work,
  which allows the smart server to return full streams, rather than one stream
  per block.
removed:
  doc/en/migration/              migration-20090722133816-63ik5s6s5gsnz7zy-7
  doc/en/migration/index.txt     index.txt-20090722133816-63ik5s6s5gsnz7zy-8
modified:
  NEWS                           NEWS-20050323055033-4e00b5db738777ff
  bzrlib/_known_graph_py.py      _known_graph_py.py-20090610185421-vw8vfda2cgnckgb1-1
  bzrlib/_known_graph_pyx.pyx    _known_graph_pyx.pyx-20090610194911-yjk73td9hpjilas0-1
  bzrlib/repository.py           rev_storage.py-20051111201905-119e9401e46257e3
  bzrlib/smart/repository.py     repository.py-20061128022038-vr5wy5bubyb8xttk-1
  bzrlib/tests/test__known_graph.py test__known_graph.py-20090610185421-vw8vfda2cgnckgb1-2
  bzrlib/tests/test_smart.py     test_smart.py-20061122024551-ol0l0o0oofsu9b3t-2
  doc/_templates/index.html      index.html-20090722133849-lus2rzwsmlhpgqhv-1
  doc/contents.txt               contents.txt-20090722133816-63ik5s6s5gsnz7zy-13
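
Before this change, the client-side _byte_stream_to_stream yielded a new
(substream_type, substream) pair for every pack record it decoded, so a fetch
made up of many 'texts' records reached the higher layers as many one-record
streams. With the adjacent-streams work in the attached diff, consecutive
records of the same type are folded into a single substream. A rough sketch of
the decoded stream shape (the values here are placeholders, not real network
bytes):

    # Decoded stream shape, old vs new (illustrative values only):
    old_shape = [('texts', ['record k1']), ('texts', ['record k2'])]
    new_shape = [('texts', ['record k1', 'record k2'])]
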
-------------- next part --------------
=== modified file 'NEWS'
--- a/NEWS	2009-09-01 21:13:16 +0000
+++ b/NEWS	2009-09-03 15:26:27 +0000
@@ -80,6 +80,23 @@
   that has a ghost in the mainline ancestry.
   (John Arbash Meinel, #419241)
 
+* ``groupcompress`` sort order is now more stable, rather than relying on
+  ``topo_sort`` ordering. The implementation is now
+  ``KnownGraph.gc_sort``. (John Arbash Meinel)
+
+* Local data conversion will generate correct deltas. This is a critical
+  bugfix vs 2.0rc1, and all 2.0rc1 users should upgrade to 2.0rc2 before
+  converting repositories. (Robert Collins, #422849)
+
+* Network streams now decode adjacent records of the same type into a
+  single stream, reducing layering churn. (Robert Collins)
+
+Documentation
+*************
+
+* The main table of contents now provides links to the new Migration Docs
+  and Plugins Guide. (Ian Clatworthy)
+
 
 bzr 2.0rc1
 ##########

=== modified file 'bzrlib/_known_graph_py.py'
--- a/bzrlib/_known_graph_py.py	2009-08-17 20:41:26 +0000
+++ b/bzrlib/_known_graph_py.py	2009-08-25 18:45:40 +0000
@@ -97,6 +97,10 @@
         return [node for node in self._nodes.itervalues()
                 if not node.parent_keys]
 
+    def _find_tips(self):
+        return [node for node in self._nodes.itervalues()
+                      if not node.child_keys]
+
     def _find_gdfo(self):
         nodes = self._nodes
         known_parent_gdfos = {}
@@ -218,6 +222,51 @@
         # We started from the parents, so we don't need to do anymore work
         return topo_order
 
+    def gc_sort(self):
+        """Return a reverse topological ordering which is 'stable'.
+
+        There are a few constraints:
+          1) Reverse topological (all children before all parents)
+          2) Grouped by prefix
+          3) 'stable' sorting, so that we get the same result, independent of
+             machine, or extra data.
+        To do this, we use the same basic algorithm as topo_sort, but when we
+        aren't sure what node to access next, we sort them lexicographically.
+        """
+        tips = self._find_tips()
+        # Split the tips based on prefix
+        prefix_tips = {}
+        for node in tips:
+            if node.key.__class__ is str or len(node.key) == 1:
+                prefix = ''
+            else:
+                prefix = node.key[0]
+            prefix_tips.setdefault(prefix, []).append(node)
+
+        num_seen_children = dict.fromkeys(self._nodes, 0)
+
+        result = []
+        for prefix in sorted(prefix_tips):
+            pending = sorted(prefix_tips[prefix], key=lambda n:n.key,
+                             reverse=True)
+            while pending:
+                node = pending.pop()
+                if node.parent_keys is None:
+                    # Ghost node, skip it
+                    continue
+                result.append(node.key)
+                for parent_key in sorted(node.parent_keys, reverse=True):
+                    parent_node = self._nodes[parent_key]
+                    seen_children = num_seen_children[parent_key] + 1
+                    if seen_children == len(parent_node.child_keys):
+                        # All children have been processed, enqueue this parent
+                        pending.append(parent_node)
+                        # This has been queued up, stop tracking it
+                        del num_seen_children[parent_key]
+                    else:
+                        num_seen_children[parent_key] = seen_children
+        return result
+
     def merge_sort(self, tip_key):
         """Compute the merge sorted graph output."""
         from bzrlib import tsort

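As a quick illustration of the ordering gc_sort produces (it mirrors the new
tests further down in this patch): for a simple fork where 'b' and 'c' both
descend from 'a', children come before parents and ties between siblings are
broken lexicographically, so the result is the same on every machine. A
minimal sketch, assuming the pure-Python KnownGraph constructor takes a parent
map the way the tests build it:

    from bzrlib._known_graph_py import KnownGraph

    # 'b' and 'c' are both tips with the single parent 'a'; gc_sort emits
    # children before parents and orders the sibling tips lexicographically.
    kg = KnownGraph({'a': (), 'b': ('a',), 'c': ('a',)})
    print kg.gc_sort()    # expected: ['b', 'c', 'a']
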
=== modified file 'bzrlib/_known_graph_pyx.pyx'
--- a/bzrlib/_known_graph_pyx.pyx	2009-08-26 16:03:59 +0000
+++ b/bzrlib/_known_graph_pyx.pyx	2009-09-02 13:32:52 +0000
@@ -25,11 +25,18 @@
     ctypedef struct PyObject:
         pass
 
+    int PyString_CheckExact(object)
+
+    int PyObject_RichCompareBool(object, object, int)
+    int Py_LT
+
+    int PyTuple_CheckExact(object)
     object PyTuple_New(Py_ssize_t n)
     Py_ssize_t PyTuple_GET_SIZE(object t)
     PyObject * PyTuple_GET_ITEM(object t, Py_ssize_t o)
     void PyTuple_SET_ITEM(object t, Py_ssize_t o, object v)
 
+    int PyList_CheckExact(object)
     Py_ssize_t PyList_GET_SIZE(object l)
     PyObject * PyList_GET_ITEM(object l, Py_ssize_t o)
     int PyList_SetItem(object l, Py_ssize_t o, object l) except -1
@@ -108,14 +115,65 @@
     return <_KnownGraphNode>temp_node
 
 
-cdef _KnownGraphNode _get_parent(parents, Py_ssize_t pos):
+cdef _KnownGraphNode _get_tuple_node(tpl, Py_ssize_t pos):
     cdef PyObject *temp_node
-    cdef _KnownGraphNode node
 
-    temp_node = PyTuple_GET_ITEM(parents, pos)
+    temp_node = PyTuple_GET_ITEM(tpl, pos)
     return <_KnownGraphNode>temp_node
 
 
+def get_key(node):
+    cdef _KnownGraphNode real_node
+    real_node = node
+    return real_node.key
+
+
+cdef object _sort_list_nodes(object lst_or_tpl, int reverse):
+    """Sort a list of _KnownGraphNode objects.
+
+    If lst_or_tpl is a list, it may be mutated in place. The input may also
+    be returned unchanged if everything is already sorted.
+    """
+    cdef _KnownGraphNode node1, node2
+    cdef int do_swap, is_tuple
+    cdef Py_ssize_t length
+
+    is_tuple = PyTuple_CheckExact(lst_or_tpl)
+    if not (is_tuple or PyList_CheckExact(lst_or_tpl)):
+        raise TypeError('lst_or_tpl must be a list or tuple.')
+    length = len(lst_or_tpl)
+    if length == 0 or length == 1:
+        return lst_or_tpl
+    if length == 2:
+        if is_tuple:
+            node1 = _get_tuple_node(lst_or_tpl, 0)
+            node2 = _get_tuple_node(lst_or_tpl, 1)
+        else:
+            node1 = _get_list_node(lst_or_tpl, 0)
+            node2 = _get_list_node(lst_or_tpl, 1)
+        if reverse:
+            do_swap = PyObject_RichCompareBool(node1.key, node2.key, Py_LT)
+        else:
+            do_swap = PyObject_RichCompareBool(node2.key, node1.key, Py_LT)
+        if not do_swap:
+            return lst_or_tpl
+        if is_tuple:
+            return (node2, node1)
+        else:
+            # Swap 'in-place', since lists are mutable
+            Py_INCREF(node1)
+            PyList_SetItem(lst_or_tpl, 1, node1)
+            Py_INCREF(node2)
+            PyList_SetItem(lst_or_tpl, 0, node2)
+            return lst_or_tpl
+    # For all other sizes, we just use 'sorted()'
+    if is_tuple:
+        # Note that sorted() is just list(iterable).sort()
+        lst_or_tpl = list(lst_or_tpl)
+    lst_or_tpl.sort(key=get_key, reverse=reverse)
+    return lst_or_tpl
+
+
 cdef class _MergeSorter
 
 cdef class KnownGraph:
@@ -216,6 +274,19 @@
                 PyList_Append(tails, node)
         return tails
 
+    def _find_tips(self):
+        cdef PyObject *temp_node
+        cdef _KnownGraphNode node
+        cdef Py_ssize_t pos
+
+        tips = []
+        pos = 0
+        while PyDict_Next(self._nodes, &pos, NULL, &temp_node):
+            node = <_KnownGraphNode>temp_node
+            if PyList_GET_SIZE(node.children) == 0:
+                PyList_Append(tips, node)
+        return tips
+
     def _find_gdfo(self):
         cdef _KnownGraphNode node
         cdef _KnownGraphNode child
@@ -322,7 +393,7 @@
                 continue
             if node.parents is not None and PyTuple_GET_SIZE(node.parents) > 0:
                 for pos from 0 <= pos < PyTuple_GET_SIZE(node.parents):
-                    parent_node = _get_parent(node.parents, pos)
+                    parent_node = _get_tuple_node(node.parents, pos)
                     last_item = last_item + 1
                     if last_item < PyList_GET_SIZE(pending):
                         Py_INCREF(parent_node) # SetItem steals a ref
@@ -397,6 +468,77 @@
         # We started from the parents, so we don't need to do anymore work
         return topo_order
 
+    def gc_sort(self):
+        """Return a reverse topological ordering which is 'stable'.
+
+        There are a few constraints:
+          1) Reverse topological (all children before all parents)
+          2) Grouped by prefix
+          3) 'stable' sorting, so that we get the same result, independent of
+             machine, or extra data.
+        To do this, we use the same basic algorithm as topo_sort, but when we
+        aren't sure what node to access next, we sort them lexicographically.
+        """
+        cdef PyObject *temp
+        cdef Py_ssize_t pos, last_item
+        cdef _KnownGraphNode node, node2, parent_node
+
+        tips = self._find_tips()
+        # Split the tips based on prefix
+        prefix_tips = {}
+        for pos from 0 <= pos < PyList_GET_SIZE(tips):
+            node = _get_list_node(tips, pos)
+            if PyString_CheckExact(node.key) or len(node.key) == 1:
+                prefix = ''
+            else:
+                prefix = node.key[0]
+            temp = PyDict_GetItem(prefix_tips, prefix)
+            if temp == NULL:
+                prefix_tips[prefix] = [node]
+            else:
+                tip_nodes = <object>temp
+                PyList_Append(tip_nodes, node)
+
+        result = []
+        for prefix in sorted(prefix_tips):
+            temp = PyDict_GetItem(prefix_tips, prefix)
+            assert temp != NULL
+            tip_nodes = <object>temp
+            pending = _sort_list_nodes(tip_nodes, 1)
+            last_item = PyList_GET_SIZE(pending) - 1
+            while last_item >= 0:
+                node = _get_list_node(pending, last_item)
+                last_item = last_item - 1
+                if node.parents is None:
+                    # Ghost
+                    continue
+                PyList_Append(result, node.key)
+                # Sorting the parent keys isn't strictly necessary for stable
+                # sorting of a given graph. But it does help minimize the
+                # differences between graphs
+                # For bzr.dev ancestry:
+                #   4.73ms  no sort
+                #   7.73ms  RichCompareBool sort
+                parents = _sort_list_nodes(node.parents, 1)
+                for pos from 0 <= pos < len(parents):
+                    if PyTuple_CheckExact(parents):
+                        parent_node = _get_tuple_node(parents, pos)
+                    else:
+                        parent_node = _get_list_node(parents, pos)
+                    # TODO: GraphCycle detection
+                    parent_node.seen = parent_node.seen + 1
+                    if (parent_node.seen
+                        == PyList_GET_SIZE(parent_node.children)):
+                        # All children have been processed, queue up this
+                        # parent
+                        last_item = last_item + 1
+                        if last_item < PyList_GET_SIZE(pending):
+                            Py_INCREF(parent_node) # SetItem steals a ref
+                            PyList_SetItem(pending, last_item, parent_node)
+                        else:
+                            PyList_Append(pending, parent_node)
+                        parent_node.seen = 0
+        return result
 
     def merge_sort(self, tip_key):
         """Compute the merge sorted graph output."""
@@ -522,7 +664,7 @@
             raise RuntimeError('ghost nodes should not be pushed'
                                ' onto the stack: %s' % (node,))
         if PyTuple_GET_SIZE(node.parents) > 0:
-            parent_node = _get_parent(node.parents, 0)
+            parent_node = _get_tuple_node(node.parents, 0)
             ms_node.left_parent = parent_node
             if parent_node.parents is None: # left-hand ghost
                 ms_node.left_pending_parent = None
@@ -532,7 +674,7 @@
         if PyTuple_GET_SIZE(node.parents) > 1:
             ms_node.pending_parents = []
             for pos from 1 <= pos < PyTuple_GET_SIZE(node.parents):
-                parent_node = _get_parent(node.parents, pos)
+                parent_node = _get_tuple_node(node.parents, pos)
                 if parent_node.parents is None: # ghost
                     continue
                 PyList_Append(ms_node.pending_parents, parent_node)

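The two-element fast path in _sort_list_nodes above exists because almost
every node has at most two parents, so building a new list and calling
sorted() each time is avoidable overhead (the comment in gc_sort quotes 4.73ms
with no parent sort versus 7.73ms with the RichCompareBool sort over bzr.dev
ancestry). Behaviourally it is just a key sort on node.key; a pure-Python
equivalent, using a stand-in class rather than the real _KnownGraphNode:

    class FakeNode(object):
        """Stand-in for _KnownGraphNode; only the key attribute matters."""
        def __init__(self, key):
            self.key = key

    def sort_nodes(nodes, reverse):
        # Reference behaviour the Pyrex fast path reproduces for two elements.
        return sorted(nodes, key=lambda n: n.key, reverse=reverse)

    pair = [FakeNode(('F', 'a')), FakeNode(('F', 'b'))]
    print [n.key for n in sort_nodes(pair, reverse=True)]
    # expected: [('F', 'b'), ('F', 'a')]
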
=== modified file 'bzrlib/repository.py'
--- a/bzrlib/repository.py	2009-08-30 23:51:10 +0000
+++ b/bzrlib/repository.py	2009-09-03 15:26:27 +0000
@@ -3844,6 +3844,9 @@
                 possible_trees.append((basis_id, cache[basis_id]))
             basis_id, delta = self._get_delta_for_revision(tree, parent_ids,
                                                            possible_trees)
+            revision = self.source.get_revision(current_revision_id)
+            pending_deltas.append((basis_id, delta,
+                current_revision_id, revision.parent_ids))
             if self._converting_to_rich_root:
                 self._revision_id_to_root_id[current_revision_id] = \
                     tree.get_root_id()
@@ -3878,9 +3881,6 @@
                     if entry.revision == file_revision:
                         texts_possibly_new_in_tree.remove(file_key)
             text_keys.update(texts_possibly_new_in_tree)
-            revision = self.source.get_revision(current_revision_id)
-            pending_deltas.append((basis_id, delta,
-                current_revision_id, revision.parent_ids))
             pending_revisions.append(revision)
             cache[current_revision_id] = tree
             basis_id = current_revision_id

=== modified file 'bzrlib/smart/repository.py'
--- a/bzrlib/smart/repository.py	2009-08-14 00:55:42 +0000
+++ b/bzrlib/smart/repository.py	2009-09-02 22:29:55 +0000
@@ -519,36 +519,92 @@
     yield pack_writer.end()
 
 
+class _ByteStreamDecoder(object):
+    """Helper for _byte_stream_to_stream.
+
+    Broadly this class has to unwrap two layers of iterators:
+    (type, substream)
+    (substream details)
+
+    This is complicated by wishing to return (type, iterator_for_type), but
+    only getting the data for iterator_for_type when we find out the type: we
+    can't simply pass a generator down to the NetworkRecordStream parser;
+    instead we keep a little local state to seed each NetworkRecordStream
+    instance and gather the type that we'll be yielding.
+
+    :ivar byte_stream: The byte stream being decoded.
+    :ivar stream_decoder: A pack parser used to decode the bytestream
+    :ivar current_type: The current type, used to join adjacent records of the
+        same type into a single stream.
+    :ivar first_bytes: The first bytes to give the next NetworkRecordStream.
+    """
+
+    def __init__(self, byte_stream):
+        """Create a _ByteStreamDecoder."""
+        self.stream_decoder = pack.ContainerPushParser()
+        self.current_type = None
+        self.first_bytes = None
+        self.byte_stream = byte_stream
+
+    def iter_stream_decoder(self):
+        """Iterate the contents of the pack from stream_decoder."""
+        # dequeue pending items
+        for record in self.stream_decoder.read_pending_records():
+            yield record
+        # Pull bytes off the wire, decode them to records, yield those records.
+        for bytes in self.byte_stream:
+            self.stream_decoder.accept_bytes(bytes)
+            for record in self.stream_decoder.read_pending_records():
+                yield record
+
+    def iter_substream_bytes(self):
+        if self.first_bytes is not None:
+            yield self.first_bytes
+            # If we run out of pack records, signal the outer layer to stop.
+            self.first_bytes = None
+        for record in self.iter_pack_records:
+            record_names, record_bytes = record
+            record_name, = record_names
+            substream_type = record_name[0]
+            if substream_type != self.current_type:
+                # end of a substream, seed the next substream.
+                self.current_type = substream_type
+                self.first_bytes = record_bytes
+                return
+            yield record_bytes
+
+    def record_stream(self):
+        """Yield substream_type, substream from the byte stream."""
+        self.seed_state()
+        # Make and consume sub generators, one per substream type:
+        while self.first_bytes is not None:
+            substream = NetworkRecordStream(self.iter_substream_bytes())
+            # after substream is fully consumed, self.current_type is set to
+            # the next type, and self.first_bytes is set to the matching bytes.
+            yield self.current_type, substream.read()
+
+    def seed_state(self):
+        """Prepare the _ByteStreamDecoder to decode from the pack stream."""
+        # Set a single generator we can use to get data from the pack stream.
+        self.iter_pack_records = self.iter_stream_decoder()
+        # Seed the very first subiterator with content; after this each one
+        # seeds the next.
+        list(self.iter_substream_bytes())
+
+
 def _byte_stream_to_stream(byte_stream):
     """Convert a byte stream into a format and a stream.
 
     :param byte_stream: A bytes iterator, as output by _stream_to_byte_stream.
     :return: (RepositoryFormat, stream_generator)
     """
-    stream_decoder = pack.ContainerPushParser()
-    def record_stream():
-        """Closure to return the substreams."""
-        # May have fully parsed records already.
-        for record in stream_decoder.read_pending_records():
-            record_names, record_bytes = record
-            record_name, = record_names
-            substream_type = record_name[0]
-            substream = NetworkRecordStream([record_bytes])
-            yield substream_type, substream.read()
-        for bytes in byte_stream:
-            stream_decoder.accept_bytes(bytes)
-            for record in stream_decoder.read_pending_records():
-                record_names, record_bytes = record
-                record_name, = record_names
-                substream_type = record_name[0]
-                substream = NetworkRecordStream([record_bytes])
-                yield substream_type, substream.read()
+    decoder = _ByteStreamDecoder(byte_stream)
     for bytes in byte_stream:
-        stream_decoder.accept_bytes(bytes)
-        for record in stream_decoder.read_pending_records(max=1):
+        decoder.stream_decoder.accept_bytes(bytes)
+        for record in decoder.stream_decoder.read_pending_records(max=1):
             record_names, src_format_name = record
             src_format = network_format_registry.get(src_format_name)
-            return src_format, record_stream()
+            return src_format, decoder.record_stream()
 
 
 class SmartServerRepositoryUnlock(SmartServerRepositoryRequest):

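The handoff between iter_substream_bytes and record_stream above is the subtle
part: each inner generator yields bytes until it meets a record of a different
type, stashes that record's bytes in first_bytes, and returns; the outer loop
then starts the next substream seeded from first_bytes. A stripped-down sketch
of the same seeding pattern using plain (type, bytes) tuples, independent of
the real NetworkRecordStream/pack machinery:

    def group_adjacent(records):
        """Yield (type, iterator) pairs, merging adjacent records of one type.

        records is an iterable of (type, bytes) tuples. This mirrors the
        seeding trick used by _ByteStreamDecoder, not its real API.
        """
        state = {'current_type': None, 'first_bytes': None}
        records = iter(records)

        def substream_bytes():
            if state['first_bytes'] is not None:
                yield state['first_bytes']
                state['first_bytes'] = None
            for kind, data in records:
                if kind != state['current_type']:
                    # End of this substream; seed the next one and stop.
                    state['current_type'] = kind
                    state['first_bytes'] = data
                    return
                yield data

        # Seed the very first substream; after this each one seeds the next.
        list(substream_bytes())
        while state['first_bytes'] is not None:
            yield state['current_type'], substream_bytes()

    # Three adjacent 'texts' records and one 'inventories' record collapse
    # into two substreams (each substream must be consumed before advancing):
    for kind, sub in group_adjacent([('texts', 'a'), ('texts', 'b'),
                                     ('texts', 'c'), ('inventories', 'd')]):
        print kind, list(sub)
    # prints:
    #   texts ['a', 'b', 'c']
    #   inventories ['d']
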
=== modified file 'bzrlib/tests/test__known_graph.py'
--- a/bzrlib/tests/test__known_graph.py	2009-08-26 16:03:59 +0000
+++ b/bzrlib/tests/test__known_graph.py	2009-09-02 13:32:52 +0000
@@ -768,3 +768,70 @@
                 },
                 'E',
                 [])
+
+
+class TestKnownGraphStableReverseTopoSort(TestCaseWithKnownGraph):
+    """Test the sort order returned by gc_sort."""
+
+    def assertSorted(self, expected, parent_map):
+        graph = self.make_known_graph(parent_map)
+        value = graph.gc_sort()
+        if expected != value:
+            self.assertEqualDiff(pprint.pformat(expected),
+                                 pprint.pformat(value))
+
+    def test_empty(self):
+        self.assertSorted([], {})
+
+    def test_single(self):
+        self.assertSorted(['a'], {'a':()})
+        self.assertSorted([('a',)], {('a',):()})
+        self.assertSorted([('F', 'a')], {('F', 'a'):()})
+
+    def test_linear(self):
+        self.assertSorted(['c', 'b', 'a'], {'a':(), 'b':('a',), 'c':('b',)})
+        self.assertSorted([('c',), ('b',), ('a',)],
+                          {('a',):(), ('b',): (('a',),), ('c',): (('b',),)})
+        self.assertSorted([('F', 'c'), ('F', 'b'), ('F', 'a')],
+                          {('F', 'a'):(), ('F', 'b'): (('F', 'a'),),
+                           ('F', 'c'): (('F', 'b'),)})
+
+    def test_mixed_ancestries(self):
+        # Each prefix should be sorted separately
+        self.assertSorted([('F', 'c'), ('F', 'b'), ('F', 'a'),
+                           ('G', 'c'), ('G', 'b'), ('G', 'a'),
+                           ('Q', 'c'), ('Q', 'b'), ('Q', 'a'),
+                          ],
+                          {('F', 'a'):(), ('F', 'b'): (('F', 'a'),),
+                           ('F', 'c'): (('F', 'b'),),
+                           ('G', 'a'):(), ('G', 'b'): (('G', 'a'),),
+                           ('G', 'c'): (('G', 'b'),),
+                           ('Q', 'a'):(), ('Q', 'b'): (('Q', 'a'),),
+                           ('Q', 'c'): (('Q', 'b'),),
+                          })
+
+    def test_stable_sorting(self):
+        # the sort order should be stable even when extra nodes are added
+        self.assertSorted(['b', 'c', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',)})
+        self.assertSorted(['b', 'c', 'd', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',), 'd':('a',)})
+        self.assertSorted(['b', 'c', 'd', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',), 'd':('a',)})
+        self.assertSorted(['Z', 'b', 'c', 'd', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',), 'd':('a',),
+                           'Z':('a',)})
+        self.assertSorted(['e', 'b', 'c', 'f', 'Z', 'd', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',), 'd':('a',),
+                           'Z':('a',),
+                           'e':('b', 'c', 'd'),
+                           'f':('d', 'Z'),
+                           })
+
+    def test_skip_ghost(self):
+        self.assertSorted(['b', 'c', 'a'],
+                          {'a':(), 'b':('a', 'ghost'), 'c':('a',)})
+
+    def test_skip_mainline_ghost(self):
+        self.assertSorted(['b', 'c', 'a'],
+                          {'a':(), 'b':('ghost', 'a'), 'c':('a',)})

=== modified file 'bzrlib/tests/test_smart.py'
--- a/bzrlib/tests/test_smart.py	2009-08-27 22:17:35 +0000
+++ b/bzrlib/tests/test_smart.py	2009-09-03 15:26:27 +0000
@@ -36,6 +36,7 @@
     smart,
     tests,
     urlutils,
+    versionedfile,
     )
 from bzrlib.branch import Branch, BranchReferenceFormat
 import bzrlib.smart.branch
@@ -112,6 +113,25 @@
         return self.get_transport().get_smart_medium()
 
 
+class TestByteStreamToStream(tests.TestCase):
+
+    def test_repeated_substreams_same_kind_are_one_stream(self):
+        # Make a stream - an iterable of bytestrings.
+        stream = [('text', [versionedfile.FulltextContentFactory(('k1',), None,
+            None, 'foo')]),('text', [
+            versionedfile.FulltextContentFactory(('k2',), None, None, 'bar')])]
+        fmt = bzrdir.format_registry.get('pack-0.92')().repository_format
+        bytes = smart.repository._stream_to_byte_stream(stream, fmt)
+        streams = []
+        # Iterate the resulting iterable; checking that we get only one stream
+        # out.
+        fmt, stream = smart.repository._byte_stream_to_stream(bytes)
+        for kind, substream in stream:
+            streams.append((kind, list(substream)))
+        self.assertLength(1, streams)
+        self.assertLength(2, streams[0][1])
+
+
 class TestSmartServerResponse(tests.TestCase):
 
     def test__eq__(self):

=== modified file 'doc/_templates/index.html'
--- a/doc/_templates/index.html	2009-07-22 14:36:38 +0000
+++ b/doc/_templates/index.html	2009-08-18 00:10:19 +0000
@@ -26,19 +26,17 @@
       <p class="biglink"><a class="biglink" href="{{ pathto("en/upgrade-guide/index") }}">Upgrade Guide</a><br/>
       <span class="linkdescr">moving to Bazaar 2.x</span>
       </p>
-      <p class="biglink"><a class="biglink" href="{{ pathto("en/migration/index") }}">Migration Docs</a><br/>
+      <p class="biglink"><a class="biglink" href="http://doc.bazaar-vcs.org/migration/en/">Migration Docs</a><br/>
       <span class="linkdescr">for refugees of other tools</span>
       </p>
-      <p class="biglink"><a class="biglink" href="{{ pathto("developers/index") }}">Developer Docs</a><br/>
-      <span class="linkdescr">polices and tools for giving back</span>
+      <p class="biglink"><a class="biglink" href="http://doc.bazaar-vcs.org/plugins/en/">Plugins Guide</a><br/>
+      <span class="linkdescr">help on popular plugins</span>
       </p>
     </td></tr>
   </table>
 
-  <p>Other languages:
-      <a href="{{ pathto("index.es") }}">Spanish</a>,
-      <a href="{{ pathto("index.ru") }}">Russian</a>
-  </p>
+  <p>Keen to help? See the <a href="{{ pathto("developers/index") }}">Developer Docs</a>
+      for policies and tools on contributing code, tests and documentation.</p>
 
 
   <h2>Related Links</h2>
@@ -59,4 +57,9 @@
     </td></tr>
   </table>
 
+  <hr>
+  <p>Other languages:
+      <a href="{{ pathto("index.es") }}">Spanish</a>,
+      <a href="{{ pathto("index.ru") }}">Russian</a>
+  </p>
 {% endblock %}

=== modified file 'doc/contents.txt'
--- a/doc/contents.txt	2009-07-22 13:41:01 +0000
+++ b/doc/contents.txt	2009-08-18 00:10:19 +0000
@@ -20,7 +20,6 @@
 
    en/release-notes/index
    en/upgrade-guide/index
-   en/migration/index
    developers/index
 
 

=== removed directory 'doc/en/migration'
=== removed file 'doc/en/migration/index.txt'
--- a/doc/en/migration/index.txt	2009-07-22 13:41:01 +0000
+++ b/doc/en/migration/index.txt	1970-01-01 00:00:00 +0000
@@ -1,6 +0,0 @@
-Bazaar Migration Guide
-======================
-
-This guide is under development. For notes collected so far, see
-http://bazaar-vcs.org/BzrMigration/.
-


