Rev 4682: Merge the bzr 2.0 stable branch. in http://bzr.arbash-meinel.com/branches/bzr/jam-integration

John Arbash Meinel john at arbash-meinel.com
Thu Oct 15 20:47:44 BST 2009


At http://bzr.arbash-meinel.com/branches/bzr/jam-integration

------------------------------------------------------------
revno: 4682 [merge]
revision-id: john at arbash-meinel.com-20091015194718-03501qjvwifiy3c5
parent: mbp at sourcefrog.net-20091008070305-57gag3pyh2kec30l
parent: pqm at pqm.ubuntu.com-20091015054058-rs11wsd9qbb78pcn
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: jam-integration
timestamp: Thu 2009-10-15 14:47:18 -0500
message:
  Merge the bzr 2.0 stable branch.
modified:
  NEWS                           NEWS-20050323055033-4e00b5db738777ff
  bzr                            bzr.py-20050313053754-5485f144c7006fa6
  bzrlib/__init__.py             __init__.py-20050309040759-33e65acf91bbcd5d
  bzrlib/btree_index.py          index.py-20080624222253-p0x5f92uyh5hw734-7
  bzrlib/commands.py             bzr.py-20050309040720-d10f4714595cf8c3
  bzrlib/index.py                index.py-20070712131115-lolkarso50vjr64s-1
  bzrlib/osutils.py              osutils.py-20050309040759-eeaff12fbf77ac86
  bzrlib/repofmt/pack_repo.py    pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
  bzrlib/tests/per_repository_chk/test_supported.py test_supported.py-20080925063728-k65ry0n2rhta6t34-1
  bzrlib/tests/test_btree_index.py test_index.py-20080624222253-p0x5f92uyh5hw734-13
  bzrlib/tests/test_index.py     test_index.py-20070712131115-lolkarso50vjr64s-2
  bzrlib/tests/test_osutils.py   test_osutils.py-20051201224856-e48ee24c12182989
  setup.py                       setup.py-20050314065409-02f8a0a6e3f9bc70
=== modified file 'NEWS'
--- a/NEWS	2009-10-08 07:03:05 +0000
+++ b/NEWS	2009-10-15 19:47:18 +0000
@@ -2,8 +2,14 @@
 Bazaar Release Notes
 ####################
 
-2.0 series (not released)
-#########################
+.. contents:: List of Releases
+   :depth: 1
+
+bzr 2.0.2 (not released yet)
+############################
+
+:Codename:
+:2.0.2: ???
 
 Compatibility Breaks
 ********************
@@ -17,6 +23,36 @@
 * Avoid "NoneType has no attribute st_mode" error when files disappear
   from a directory while it's being read.  (Martin Pool, #446033)
 
+Improvements
+************
+
+Documentation
+*************
+
+API Changes
+***********
+
+Internals
+*********
+
+Testing
+*******
+
+
+bzr 2.0.1
+#########
+
+:Codename: Stability First
+:2.0.1: 2009-10-14
+
+The first of our new ongoing bugfix-only stable releases has arrived. It
+includes a collection of 12 bugfixes applied to bzr 2.0.0, but does not
+include any of the feature development in the 2.1.0 series.
+
+
+Bug Fixes
+*********
+
 * ``bzr add`` in a tree that has files with ``\r`` or ``\n`` in the
   filename will issue a warning and skip over those files.
   (Robert Collins, #3918)
@@ -32,6 +68,24 @@
 * Fixed ``ObjectNotLocked`` errors when doing some log and diff operations
   on branches via a smart server.  (Andrew Bennetts, #389413)
 
+* Handle things like ``bzr add foo`` and ``bzr rm foo`` when the tree is
+  at the root of a drive. ``osutils._cicp_canonical_relpath`` always
+  assumed that ``abspath()`` returned a path that did not have a trailing
+  ``/``, but that is not true when working at the root of the filesystem.
+  (John Arbash Meinel, Jason Spashett, #322807)
+
+* Hide deprecation warnings for 'final' releases for python2.6.
+  (John Arbash Meinel, #440062)
+
+* Improve the time for ``bzr log DIR`` for 2a format repositories.
+  We had been using the same code path as for <2a formats, which required
+  iterating over all objects in all revisions.
+  (John Arbash Meinel, #374730)
+
+* Make sure that we unlock the tree if we fail to create a TreeTransform
+  object when doing a merge, and there is limbo, or pending-deletions
+  directory.  (Gary van der Merwe, #427773)
+
 * Occasional IndexError on renamed files have been fixed. Operations that
   set a full inventory in the working tree will now go via the
   apply_inventory_delta code path which is simpler and easier to
@@ -40,12 +94,17 @@
   but such operations are already doing full tree scans, so no radical
   performance change should be observed. (Robert Collins, #403322)
 
+* Retrieving file text or mtime from a _PreviewTree has good performance when
+  there are many changes.  (Aaron Bentley)
+
+* The CHK index pages now use an unlimited cache size. With a limited
+  cache and a large project, the random access of chk pages could cause us
+  to download the entire cix file many times.
+  (John Arbash Meinel, #402623)
+
 * When a file kind becomes unversionable after being added, a sensible
   error will be shown instead of a traceback. (Robert Collins, #438569)
 
-Improvements
-************
-
 Documentation
 *************
 
@@ -54,33 +113,6 @@
 * Improved upgrade documentation for Launchpad branches.
   (Barry Warsaw)
 
-API Changes
-***********
-
-Internals
-*********
-
-Testing
-*******
-
-bzr 2.0.1
-##########
-
-Bug Fixes
-*********
-
-* Improve the time for ``bzr log DIR`` for 2a format repositories.
-  We had been using the same code path as for <2a formats, which required
-  iterating over all objects in all revisions.
-  (John Arbash Meinel, #374730)
-
-* Make sure that we unlock the tree if we fail to create a TreeTransform
-  object when doing a merge, and there is limbo, or pending-deletions
-  directory.  (Gary van der Merwe, #427773)
-
-* Retrieving file text or mtime from a _PreviewTree has good performance when
-  there are many changes.  (Aaron Bentley)
-
 
 bzr 2.0.0
 #########
@@ -10687,5 +10719,38 @@
 * Storage of local versions: init, add, remove, rm, info, log,
   diff, status, etc.
 
+
+bzr ?.?.? (not released yet)
+############################
+
+:Codename: template
+:2.0.2: ???
+
+Compatibility Breaks
+********************
+
+New Features
+************
+
+Bug Fixes
+*********
+
+Improvements
+************
+
+Documentation
+*************
+
+API Changes
+***********
+
+Internals
+*********
+
+Testing
+*******
+
+
+
 ..
    vim: tw=74 ft=rst ff=unix

=== modified file 'bzr'
--- a/bzr	2009-09-10 06:32:42 +0000
+++ b/bzr	2009-10-15 03:47:14 +0000
@@ -23,7 +23,7 @@
 import warnings
 
 # update this on each release
-_script_version = (2, 0, 0)
+_script_version = (2, 0, 2)
 
 if __doc__ is None:
     print "bzr does not support python -OO."

=== modified file 'bzrlib/__init__.py'
--- a/bzrlib/__init__.py	2009-09-25 01:41:37 +0000
+++ b/bzrlib/__init__.py	2009-10-15 03:47:14 +0000
@@ -50,7 +50,7 @@
 # Python version 2.0 is (2, 0, 0, 'final', 0)."  Additionally we use a
 # releaselevel of 'dev' for unreleased under-development code.
 
-version_info = (2, 0, 0, 'final', 0)
+version_info = (2, 0, 2, 'dev', 0)
 
 # API compatibility version: bzrlib is currently API compatible with 1.15.
 api_minimum_version = (1, 17, 0)
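
The (major, minor, micro, releaselevel, serial) convention noted in the diff context drives how a version tuple is rendered as a string. A minimal sketch of that mapping — `format_version_tuple` here is an illustrative helper, not code quoted from the diff:

```python
def format_version_tuple(version_info):
    # Illustrative mapping for the sys.version_info-style tuple described
    # above: 'final' releases print a bare version, 'dev' gets a dev
    # suffix, and other levels get e.g. 'rc1' from level plus serial.
    main = '.'.join(str(part) for part in version_info[:3])
    releaselevel = version_info[3]
    if releaselevel == 'final':
        return main
    if releaselevel == 'dev':
        return main + 'dev'
    return '%s%s%d' % (main, releaselevel, version_info[4])

assert format_version_tuple((2, 0, 2, 'dev', 0)) == '2.0.2dev'
assert format_version_tuple((2, 0, 0, 'final', 0)) == '2.0.0'
```

This is also why the two hunks above must move together: the `bzr` script and `bzrlib/__init__.py` each carry a copy of the version, bumped on every release.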

=== modified file 'bzrlib/btree_index.py'
--- a/bzrlib/btree_index.py	2009-08-17 22:11:06 +0000
+++ b/bzrlib/btree_index.py	2009-09-09 19:32:27 +0000
@@ -628,7 +628,7 @@
     memory except when very large walks are done.
     """
 
-    def __init__(self, transport, name, size):
+    def __init__(self, transport, name, size, unlimited_cache=False):
         """Create a B+Tree index object on the index name.
 
         :param transport: The transport to read data for the index from.
@@ -638,6 +638,9 @@
             the initial read (to read the root node header) can be done
             without over-reading even on empty indices, and on small indices
             allows single-IO to read the entire index.
+        :param unlimited_cache: If set to True, then instead of using an
+            LRUCache with size _NODE_CACHE_SIZE, we will use a dict and always
+            cache all leaf nodes.
         """
         self._transport = transport
         self._name = name
@@ -647,12 +650,15 @@
         self._root_node = None
         # Default max size is 100,000 leave values
         self._leaf_value_cache = None # lru_cache.LRUCache(100*1000)
-        self._leaf_node_cache = lru_cache.LRUCache(_NODE_CACHE_SIZE)
-        # We could limit this, but even a 300k record btree has only 3k leaf
-        # nodes, and only 20 internal nodes. So the default of 100 nodes in an
-        # LRU would mean we always cache everything anyway, no need to pay the
-        # overhead of LRU
-        self._internal_node_cache = fifo_cache.FIFOCache(100)
+        if unlimited_cache:
+            self._leaf_node_cache = {}
+            self._internal_node_cache = {}
+        else:
+            self._leaf_node_cache = lru_cache.LRUCache(_NODE_CACHE_SIZE)
+            # We use a FIFO here just to prevent possible blowout. However, a
+            # 300k record btree has only 3k leaf nodes, and only 20 internal
+            # nodes. A value of 100 scales to ~100*100*100 = 1M records.
+            self._internal_node_cache = fifo_cache.FIFOCache(100)
         self._key_count = None
         self._row_lengths = None
         self._row_offsets = None # Start of each row, [-1] is the end
@@ -690,9 +696,9 @@
                 if start_of_leaves is None:
                     start_of_leaves = self._row_offsets[-2]
                 if node_pos < start_of_leaves:
-                    self._internal_node_cache.add(node_pos, node)
+                    self._internal_node_cache[node_pos] = node
                 else:
-                    self._leaf_node_cache.add(node_pos, node)
+                    self._leaf_node_cache[node_pos] = node
             found[node_pos] = node
         return found
 

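The `unlimited_cache` branch above boils down to choosing between a bounded cache and a plain dict, and the switch from `.add(key, value)` to `cache[key] = value` is what lets both share one interface. A standalone sketch — this `BoundedCache` is a simplified stand-in for bzrlib's `lru_cache.LRUCache`, not its real implementation:

```python
from collections import OrderedDict

class BoundedCache:
    """Simplified stand-in for lru_cache.LRUCache (illustrative only)."""
    def __init__(self, max_cache):
        self._max_cache = max_cache
        self._data = OrderedDict()
    def __setitem__(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        while len(self._data) > self._max_cache:
            self._data.popitem(last=False)  # evict the oldest entry
    def __getitem__(self, key):
        return self._data[key]
    def __len__(self):
        return len(self._data)

def make_node_caches(unlimited_cache):
    # Mirrors the branch in BTreeGraphIndex.__init__: a plain dict never
    # evicts, so every leaf/internal node read stays cached.
    if unlimited_cache:
        return {}, {}
    return BoundedCache(100), BoundedCache(100)

leaf_cache, internal_cache = make_node_caches(unlimited_cache=True)
for node_pos in range(500):
    leaf_cache[node_pos] = 'node-%d' % node_pos  # same syntax either way
assert len(leaf_cache) == 500  # nothing evicted: the whole index stays hot
```

With the bounded variant, the same loop would retain only the last 100 entries — which is the repeated re-download behaviour for large `.cix` files that bug #402623 describes.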
=== modified file 'bzrlib/commands.py'
--- a/bzrlib/commands.py	2009-09-09 14:21:37 +0000
+++ b/bzrlib/commands.py	2009-10-13 17:06:15 +0000
@@ -1097,7 +1097,7 @@
 
     # Is this a final release version? If so, we should suppress warnings
     if bzrlib.version_info[3] == 'final':
-        suppress_deprecation_warnings(override=False)
+        suppress_deprecation_warnings(override=True)
     if argv is None:
         argv = osutils.get_unicode_argv()
     else:
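
The `override=True` fix matters because of warning-filter ordering: a filter inserted at the front of `warnings.filters` wins over filters installed earlier by the interpreter or the user. A minimal model of what the call amounts to — this is a sketch, not bzrlib's actual `suppress_deprecation_warnings`:

```python
import warnings

def suppress_deprecation_warnings(override=True):
    # With override=True the 'ignore' filter is prepended (append=False),
    # so it takes precedence over any DeprecationWarning filters already
    # in place; with override=False it only lands at the end of the list.
    warnings.filterwarnings('ignore', category=DeprecationWarning,
                            append=not override)

version_info = (2, 0, 2, 'dev', 0)  # shape used by bzrlib/__init__.py above
if version_info[3] == 'final':      # only hush warnings for final releases
    suppress_deprecation_warnings(override=True)
```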

=== modified file 'bzrlib/index.py'
--- a/bzrlib/index.py	2009-08-17 22:11:06 +0000
+++ b/bzrlib/index.py	2009-09-09 18:52:56 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2007, 2008 Canonical Ltd
+# Copyright (C) 2007, 2008, 2009 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
@@ -368,7 +368,7 @@
     suitable for production use. :XXX
     """
 
-    def __init__(self, transport, name, size):
+    def __init__(self, transport, name, size, unlimited_cache=False):
         """Open an index called name on transport.
 
         :param transport: A bzrlib.transport.Transport.

=== modified file 'bzrlib/osutils.py'
--- a/bzrlib/osutils.py	2009-07-23 16:01:17 +0000
+++ b/bzrlib/osutils.py	2009-10-08 03:55:30 +0000
@@ -1083,7 +1083,14 @@
     bit_iter = iter(rel.split('/'))
     for bit in bit_iter:
         lbit = bit.lower()
-        for look in _listdir(current):
+        try:
+            next_entries = _listdir(current)
+        except OSError: # enoent, eperm, etc
+            # We can't find this in the filesystem, so just append the
+            # remaining bits.
+            current = pathjoin(current, bit, *list(bit_iter))
+            break
+        for look in next_entries:
             if lbit == look.lower():
                 current = pathjoin(current, look)
                 break
@@ -1093,7 +1100,7 @@
             # the target of a move, for example).
             current = pathjoin(current, bit, *list(bit_iter))
             break
-    return current[len(abs_base)+1:]
+    return current[len(abs_base):].lstrip('/')
 
 # XXX - TODO - we need better detection/integration of case-insensitive
 # file-systems; Linux often sees FAT32 devices (or NFS-mounted OSX
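
The change to the return line is easiest to see in isolation with plain string slicing. The helpers below are illustrative, not bzrlib code:

```python
def relpath_old(abs_base, current):
    # Assumed abs_base never ends with '/'; wrong at a filesystem or
    # drive root, where abspath() returns 'C:/' or '/'.
    return current[len(abs_base) + 1:]

def relpath_new(abs_base, current):
    # Works whether or not abs_base carries a trailing slash.
    return current[len(abs_base):].lstrip('/')

# Normal case: both agree.
assert relpath_old('/home/user/work', '/home/user/work/foo') == 'foo'
assert relpath_new('/home/user/work', '/home/user/work/foo') == 'foo'
# At the root of a drive, the old slice eats the first character of the
# result (bug #322807).
assert relpath_old('C:/', 'C:/foo') == 'oo'
assert relpath_new('C:/', 'C:/foo') == 'foo'
```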

=== modified file 'bzrlib/repofmt/pack_repo.py'
--- a/bzrlib/repofmt/pack_repo.py	2009-09-08 05:51:36 +0000
+++ b/bzrlib/repofmt/pack_repo.py	2009-09-09 18:52:56 +0000
@@ -224,10 +224,14 @@
         return self.index_name('text', name)
 
     def _replace_index_with_readonly(self, index_type):
+        unlimited_cache = False
+        if index_type == 'chk':
+            unlimited_cache = True
         setattr(self, index_type + '_index',
             self.index_class(self.index_transport,
                 self.index_name(index_type, self.name),
-                self.index_sizes[self.index_offset(index_type)]))
+                self.index_sizes[self.index_offset(index_type)],
+                unlimited_cache=unlimited_cache))
 
 
 class ExistingPack(Pack):
@@ -1674,7 +1678,7 @@
             txt_index = self._make_index(name, '.tix')
             sig_index = self._make_index(name, '.six')
             if self.chk_index is not None:
-                chk_index = self._make_index(name, '.cix')
+                chk_index = self._make_index(name, '.cix', unlimited_cache=True)
             else:
                 chk_index = None
             result = ExistingPack(self._pack_transport, name, rev_index,
@@ -1699,7 +1703,8 @@
             txt_index = self._make_index(name, '.tix', resume=True)
             sig_index = self._make_index(name, '.six', resume=True)
             if self.chk_index is not None:
-                chk_index = self._make_index(name, '.cix', resume=True)
+                chk_index = self._make_index(name, '.cix', resume=True,
+                                             unlimited_cache=True)
             else:
                 chk_index = None
             result = self.resumed_pack_factory(name, rev_index, inv_index,
@@ -1735,7 +1740,7 @@
         return self._index_class(self.transport, 'pack-names', None
                 ).iter_all_entries()
 
-    def _make_index(self, name, suffix, resume=False):
+    def _make_index(self, name, suffix, resume=False, unlimited_cache=False):
         size_offset = self._suffix_offsets[suffix]
         index_name = name + suffix
         if resume:
@@ -1744,7 +1749,8 @@
         else:
             transport = self._index_transport
             index_size = self._names[name][size_offset]
-        return self._index_class(transport, index_name, index_size)
+        return self._index_class(transport, index_name, index_size,
+                                 unlimited_cache=unlimited_cache)
 
     def _max_pack_count(self, total_revisions):
         """Return the maximum number of packs to use for total revisions.

=== modified file 'bzrlib/tests/per_repository_chk/test_supported.py'
--- a/bzrlib/tests/per_repository_chk/test_supported.py	2009-09-08 06:25:26 +0000
+++ b/bzrlib/tests/per_repository_chk/test_supported.py	2009-09-09 18:52:56 +0000
@@ -17,8 +17,10 @@
 """Tests for repositories that support CHK indices."""
 
 from bzrlib import (
+    btree_index,
     errors,
     osutils,
+    repository,
     )
 from bzrlib.versionedfile import VersionedFiles
 from bzrlib.tests.per_repository_chk import TestCaseWithRepositoryCHK
@@ -108,6 +110,39 @@
         finally:
             repo.unlock()
 
+    def test_chk_bytes_are_fully_buffered(self):
+        repo = self.make_repository('.')
+        repo.lock_write()
+        self.addCleanup(repo.unlock)
+        repo.start_write_group()
+        try:
+            sha1, len, _ = repo.chk_bytes.add_lines((None,),
+                None, ["foo\n", "bar\n"], random_id=True)
+            self.assertEqual('4e48e2c9a3d2ca8a708cb0cc545700544efb5021',
+                sha1)
+            self.assertEqual(
+                set([('sha1:4e48e2c9a3d2ca8a708cb0cc545700544efb5021',)]),
+                repo.chk_bytes.keys())
+        except:
+            repo.abort_write_group()
+            raise
+        else:
+            repo.commit_write_group()
+        # This may not always be correct if we change away from BTreeGraphIndex
+        # in the future. But for now, lets check that chk_bytes are fully
+        # buffered
+        index = repo.chk_bytes._index._graph_index._indices[0]
+        self.assertIsInstance(index, btree_index.BTreeGraphIndex)
+        self.assertIs(type(index._leaf_node_cache), dict)
+        # Re-opening the repository should also have a repo with everything
+        # fully buffered
+        repo2 = repository.Repository.open(self.get_url())
+        repo2.lock_read()
+        self.addCleanup(repo2.unlock)
+        index = repo2.chk_bytes._index._graph_index._indices[0]
+        self.assertIsInstance(index, btree_index.BTreeGraphIndex)
+        self.assertIs(type(index._leaf_node_cache), dict)
+
 
 class TestCommitWriteGroupIntegrityCheck(TestCaseWithRepositoryCHK):
     """Tests that commit_write_group prevents various kinds of invalid data

=== modified file 'bzrlib/tests/test_btree_index.py'
--- a/bzrlib/tests/test_btree_index.py	2009-08-13 19:56:26 +0000
+++ b/bzrlib/tests/test_btree_index.py	2009-10-09 15:02:19 +0000
@@ -23,6 +23,8 @@
 from bzrlib import (
     btree_index,
     errors,
+    fifo_cache,
+    lru_cache,
     osutils,
     tests,
     )
@@ -1115,6 +1117,43 @@
         self.assertEqual({}, parent_map)
         self.assertEqual(set([('one',), ('two',)]), missing_keys)
 
+    def test_supports_unlimited_cache(self):
+        builder = btree_index.BTreeBuilder(reference_lists=0, key_elements=1)
+        # We need enough nodes to cause a page split (so we have both an
+        # internal node and a couple leaf nodes. 500 seems to be enough.)
+        nodes = self.make_nodes(500, 1, 0)
+        for node in nodes:
+            builder.add_node(*node)
+        stream = builder.finish()
+        trans = get_transport(self.get_url())
+        size = trans.put_file('index', stream)
+        index = btree_index.BTreeGraphIndex(trans, 'index', size)
+        self.assertEqual(500, index.key_count())
+        # We have an internal node
+        self.assertEqual(2, len(index._row_lengths))
+        # We have at least 2 leaf nodes
+        self.assertTrue(index._row_lengths[-1] >= 2)
+        self.assertIsInstance(index._leaf_node_cache, lru_cache.LRUCache)
+        self.assertEqual(btree_index._NODE_CACHE_SIZE,
+                         index._leaf_node_cache._max_cache)
+        self.assertIsInstance(index._internal_node_cache, fifo_cache.FIFOCache)
+        self.assertEqual(100, index._internal_node_cache._max_cache)
+        # No change if unlimited_cache=False is passed
+        index = btree_index.BTreeGraphIndex(trans, 'index', size,
+                                            unlimited_cache=False)
+        self.assertIsInstance(index._leaf_node_cache, lru_cache.LRUCache)
+        self.assertEqual(btree_index._NODE_CACHE_SIZE,
+                         index._leaf_node_cache._max_cache)
+        self.assertIsInstance(index._internal_node_cache, fifo_cache.FIFOCache)
+        self.assertEqual(100, index._internal_node_cache._max_cache)
+        index = btree_index.BTreeGraphIndex(trans, 'index', size,
+                                            unlimited_cache=True)
+        self.assertIsInstance(index._leaf_node_cache, dict)
+        self.assertIs(type(index._internal_node_cache), dict)
+        # Exercise the lookup code
+        entries = set(index.iter_entries([n[0] for n in nodes]))
+        self.assertEqual(500, len(entries))
+
 
 class TestBTreeNodes(BTreeTestCase):
 

=== modified file 'bzrlib/tests/test_index.py'
--- a/bzrlib/tests/test_index.py	2009-08-13 19:56:26 +0000
+++ b/bzrlib/tests/test_index.py	2009-09-09 18:52:56 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2007 Canonical Ltd
+# Copyright (C) 2007, 2009 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
@@ -1006,6 +1006,15 @@
         self.assertEqual(set(), missing_keys)
         self.assertEqual(set(), search_keys)
 
+    def test_supports_unlimited_cache(self):
+        builder = GraphIndexBuilder(0, key_elements=1)
+        stream = builder.finish()
+        trans = get_transport(self.get_url())
+        size = trans.put_file('index', stream)
+        # It doesn't matter what unlimited_cache does here, just that it can be
+        # passed
+        index = GraphIndex(trans, 'index', size, unlimited_cache=True)
+
 
 class TestCombinedGraphIndex(TestCaseWithMemoryTransport):
 

=== modified file 'bzrlib/tests/test_osutils.py'
--- a/bzrlib/tests/test_osutils.py	2009-07-23 16:01:17 +0000
+++ b/bzrlib/tests/test_osutils.py	2009-10-08 17:13:24 +0000
@@ -460,6 +460,49 @@
         self.failUnlessEqual('work/MixedCaseParent/nochild', actual)
 
 
+class Test_CICPCanonicalRelpath(tests.TestCaseWithTransport):
+
+    def assertRelpath(self, expected, base, path):
+        actual = osutils._cicp_canonical_relpath(base, path)
+        self.assertEqual(expected, actual)
+
+    def test_simple(self):
+        self.build_tree(['MixedCaseName'])
+        base = osutils.realpath(self.get_transport('.').local_abspath('.'))
+        self.assertRelpath('MixedCaseName', base, 'mixedcAsename')
+
+    def test_subdir_missing_tail(self):
+        self.build_tree(['MixedCaseParent/', 'MixedCaseParent/a_child'])
+        base = osutils.realpath(self.get_transport('.').local_abspath('.'))
+        self.assertRelpath('MixedCaseParent/a_child', base,
+                           'MixedCaseParent/a_child')
+        self.assertRelpath('MixedCaseParent/a_child', base,
+                           'MixedCaseParent/A_Child')
+        self.assertRelpath('MixedCaseParent/not_child', base,
+                           'MixedCaseParent/not_child')
+
+    def test_at_root_slash(self):
+        # We can't test this on Windows, because it has a 'MIN_ABS_PATHLENGTH'
+        # check...
+        if osutils.MIN_ABS_PATHLENGTH > 1:
+            raise tests.TestSkipped('relpath requires %d chars'
+                                    % osutils.MIN_ABS_PATHLENGTH)
+        self.assertRelpath('foo', '/', '/foo')
+
+    def test_at_root_drive(self):
+        if sys.platform != 'win32':
+            raise tests.TestNotApplicable('we can only test drive-letter relative'
+                                          ' paths on Windows where we have drive'
+                                          ' letters.')
+        # see bug #322807
+        # The specific issue is that when at the root of a drive, 'abspath'
+        # returns "C:/" or just "/". However, the code assumes that abspath
+        # always returns something like "C:/foo" or "/foo" (no trailing slash).
+        self.assertRelpath('foo', 'C:/', 'C:/foo')
+        self.assertRelpath('foo', 'X:/', 'X:/foo')
+        self.assertRelpath('foo', 'X:/', 'X://foo')
+
+
 class TestPumpFile(tests.TestCase):
     """Test pumpfile method."""
 

=== modified file 'setup.py'
--- a/setup.py	2009-09-15 07:39:18 +0000
+++ b/setup.py	2009-10-06 15:32:51 +0000
@@ -327,9 +327,6 @@
     # Ensure tbzrlib itself is on sys.path
     sys.path.append(tbzr_root)
 
-    # Ensure our COM "entry-point" is on sys.path
-    sys.path.append(os.path.join(tbzr_root, "shellext", "python"))
-
     packages.append("tbzrlib")
 
     # collect up our icons.
@@ -357,17 +354,6 @@
     excludes.extend("""pywin pywin.dialogs pywin.dialogs.list
                        win32ui crawler.Crawler""".split())
 
-    # NOTE: We still create a DLL version of the Python implemented shell
-    # extension for testing purposes - but it is *not* registered by
-    # default - our C++ one is instead.  To discourage people thinking
-    # this DLL is still necessary, its called 'tbzr_old.dll'
-    tbzr = dict(
-        modules=["tbzr"],
-        create_exe = False, # we only want a .dll
-        dest_base = 'tbzr_old',
-    )
-    com_targets.append(tbzr)
-
     # tbzrcache executables - a "console" version for debugging and a
     # GUI version that is generally used.
     tbzrcache = dict(
@@ -398,8 +384,7 @@
     console_targets.append(tracer)
 
     # The C++ implemented shell extensions.
-    dist_dir = os.path.join(tbzr_root, "shellext", "cpp", "tbzrshellext",
-                            "build", "dist")
+    dist_dir = os.path.join(tbzr_root, "shellext", "build")
     data_files.append(('', [os.path.join(dist_dir, 'tbzrshellext_x86.dll')]))
     data_files.append(('', [os.path.join(dist_dir, 'tbzrshellext_x64.dll')]))
 
@@ -636,7 +621,6 @@
                        'tools/win32/bzr_postinstall.py',
                        ]
     gui_targets = []
-    com_targets = []
     data_files = topics_files + plugins_files
 
     if 'qbzr' in plugins:
@@ -687,7 +671,6 @@
     setup(options=options_list,
           console=console_targets,
           windows=gui_targets,
-          com_server=com_targets,
           zipfile='lib/library.zip',
           data_files=data_files,
           cmdclass={'install_data': install_data_with_bytecompile},


