Rev 5033: (andrew) Merge lp:bzr/2.1 to lp:bzr. in file:///home/pqm/archives/thelove/bzr/%2Btrunk/

Canonical.com Patch Queue Manager pqm at pqm.ubuntu.com
Fri Feb 12 12:55:38 GMT 2010


At file:///home/pqm/archives/thelove/bzr/%2Btrunk/

------------------------------------------------------------
revno: 5033 [merge]
revision-id: pqm at pqm.ubuntu.com-20100212125536-75ekp8giqbsnunmu
parent: pqm at pqm.ubuntu.com-20100212074209-ih8sj193z5w0aw2f
parent: andrew.bennetts at canonical.com-20100212122211-ygj7fc8u9btuvpbz
committer: Canonical.com Patch Queue Manager <pqm at pqm.ubuntu.com>
branch nick: +trunk
timestamp: Fri 2010-02-12 12:55:36 +0000
message:
  (andrew) Merge lp:bzr/2.1 to lp:bzr.
added:
  bzrlib/help_topics/en/location-alias.txt locationalias.txt-20100211071747-8cyf9n9xw0j3ypaz-1
modified:
  NEWS                           NEWS-20050323055033-4e00b5db738777ff
  bzrlib/cleanup.py              cleanup.py-20090922032110-mv6i6y8t04oon9np-1
  bzrlib/help_topics/__init__.py help_topics.py-20060920210027-rnim90q9e0bwxvy4-1
  bzrlib/log.py                  log.py-20050505065812-c40ce11702fe5fb1
  bzrlib/merge.py                merge.py-20050513021216-953b65a438527106
  bzrlib/push.py                 push.py-20080606021927-5fe39050e8xne9un-1
  bzrlib/repofmt/pack_repo.py    pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
  bzrlib/tests/__init__.py       selftest.py-20050531073622-8d0e3c8845c97a64
  bzrlib/tests/per_tree/test_inv.py test_inv.py-20070312023226-0cdvk5uwhutis9vg-1
  bzrlib/tree.py                 tree.py-20050309040759-9d5f2496be663e77
=== modified file 'NEWS'
--- a/NEWS	2010-02-12 04:33:05 +0000
+++ b/NEWS	2010-02-12 12:55:36 +0000
@@ -103,29 +103,33 @@
 * Fix "AttributeError in Inter1and2Helper" during fetch.
   (Martin Pool, #513432)
 
+* ``bzr update`` performs the two merges in a more logical order and will stop
+  when it encounters conflicts.  
+  (Gerard Krol, #113809)
+
 * Fix ``log`` to better check ancestors even if merged revisions are involved.
   (Vincent Ladeuil, #476293)
 
 * Give a better error message when doing ``bzr bind`` in an already bound
   branch.  (Neil Martinsen-Burrell, #513063)
 
+* Ignore ``KeyError`` from ``remove_index`` during ``_abort_write_group``
+  in a pack repository, which can happen harmlessly if the abort occurs while
+  finishing the write group.  Also use ``bzrlib.cleanup`` so that any
+  other errors that occur while aborting the individual packs won't be
+  hidden by secondary failures when removing the corresponding indices.
+  (Andrew Bennetts, #423015)
+
 * Set the mtime of files exported to a directory by ``bzr export`` all to
   the same value to avoid confusing ``make`` and other date-based build
   systems. (Robert Collins, #515631)
 
-* ``bzr update`` performs the two merges in a more logical order and will stop
-  when it encounters conflicts.  
-  (Gerard Krol, #113809)
-
 Improvements
 ************
 
 * Fetching into experimental formats will now print a warning. (Jelmer
   Vernooij)
 
-Documentation
-*************
-
 API Changes
 ***********
 
@@ -140,11 +144,6 @@
 * ``Repository.serialise_inventory`` has been renamed to 
   ``Repository._serialise_inventory`` to indicate it is private.
 
-Internals
-*********
-
-Testing
-*******
 * Using the ``bzrlib.chk_map`` module from within multiple threads at the
   same time was broken due to race conditions with a module level page
   cache. This shows up as a KeyError in the ``bzrlib.lru_cache`` code with
@@ -158,6 +157,23 @@
   regressions.
   (Vincent Ladeuil, #515597)
 
+Internals
+*********
+
+* Use ``bzrlib.cleanup`` rather than less robust ``try``/``finally``
+  blocks in several places in ``bzrlib.merge``.  This avoids masking prior
+  errors when errors like ``ImmortalPendingDeletion`` occur during cleanup
+  in ``do_merge``.
+  (Andrew Bennetts, #517275)
+
+API Changes
+***********
+
+* The ``remove_index`` method of
+  ``bzrlib.repofmt.pack_repo.AggregateIndex`` no longer takes a ``pack``
+  argument.  This argument was always ignored.
+  (Andrew Bennetts, #423015)
+
 bzr 2.1.0rc2
 ############
 
@@ -416,6 +432,29 @@
   (Martin Pool)
 
 
+bzr 2.0.5 (not released yet)
+############################
+
+:Codename:
+:2.0.5:
+
+Bug Fixes
+*********
+
+* Handle renames correctly when there are files or directories that 
+  differ only in case.  (Chris Jones, Martin Pool, #368931)
+
+* If ``bzr push --create-prefix`` triggers an unexpected ``NoSuchFile``
+  error, report that error rather than failing with an unhelpful
+  ``UnboundLocalError``.
+  (Andrew Bennetts, #423563)
+
+Documentation
+*************
+
+* Added ``location-alias`` help topic.
+  (Andrew Bennetts, #337834)
+
 bzr 2.0.4
 #########
 

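The ``bzrlib.cleanup`` entries above all address the same underlying Python
behaviour: an exception raised while a ``finally`` block runs replaces the
exception that was already propagating, so the original, more interesting
error is lost.  A standalone illustration (plain Python, not bzrlib code)::

    def flaky_cleanup():
        raise RuntimeError('cleanup failed')

    def operation_that_fails():
        try:
            raise ValueError('the real problem')
        finally:
            flaky_cleanup()

    try:
        operation_that_fails()
    except Exception as e:
        # What reaches us here is the RuntimeError from the cleanup, not the
        # ValueError that actually broke the operation (Python 3 keeps the
        # original as __context__, but the reported error is still the
        # cleanup failure).
        print('caught: %r' % (e,))
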
=== modified file 'bzrlib/cleanup.py'
--- a/bzrlib/cleanup.py	2010-01-13 23:16:20 +0000
+++ b/bzrlib/cleanup.py	2010-02-05 03:37:52 +0000
@@ -31,9 +31,9 @@
 If you want to be certain that the first, and only the first, error is raised,
 then use::
 
-    operation = OperationWithCleanups(lambda operation: do_something())
+    operation = OperationWithCleanups(do_something)
     operation.add_cleanup(cleanup_something)
-    operation.run()
+    operation.run_simple()
 
 This is more inconvenient (because you need to make every try block a
 function), but will ensure that the first error encountered is the one raised,

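For readers without ``bzrlib/cleanup.py`` open, a toy stand-in gives a feel
for the calling convention the docstring describes.  Only the interface below
is taken from this merge (a callable passed to the constructor,
``add_cleanup(func, *args, **kwargs)``, ``run_simple`` forwarding its
arguments); the internals are a deliberately simplified sketch, not bzrlib's
implementation::

    class SketchOperation(object):
        """Toy stand-in for the OperationWithCleanups calling convention."""

        def __init__(self, func):
            self.func = func
            self.cleanups = []

        def add_cleanup(self, cleanup, *args, **kwargs):
            # Assumed to run most-recently-added first, mirroring the nested
            # try/finally blocks these objects replace in this merge.
            self.cleanups.insert(0, (cleanup, args, kwargs))

        def run_simple(self, *args, **kwargs):
            try:
                return self.func(*args, **kwargs)
            finally:
                for cleanup, c_args, c_kwargs in self.cleanups:
                    try:
                        cleanup(*c_args, **c_kwargs)
                    except Exception:
                        # Simplification: drop cleanup errors so that any
                        # error from self.func is the one the caller sees.
                        pass

    def do_something(value):
        print('doing something with %r' % (value,))

    def cleanup_something():
        print('cleaning up')

    operation = SketchOperation(do_something)
    operation.add_cleanup(cleanup_something)
    operation.run_simple('example')
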
=== modified file 'bzrlib/help_topics/__init__.py'
--- a/bzrlib/help_topics/__init__.py	2010-02-04 16:06:36 +0000
+++ b/bzrlib/help_topics/__init__.py	2010-02-12 11:58:21 +0000
@@ -269,6 +269,9 @@
   bzr+ssh://remote@shell.example.com/~/myproject/trunk
 
 would refer to ``/home/remote/myproject/trunk``.
+
+Many commands that accept URLs also accept location aliases.  See
+:doc:`location-alias-help`.
 """
 
     return out
@@ -758,6 +761,8 @@
                         'Types of conflicts and what to do about them')
 topic_registry.register('debug-flags', _load_from_file,
                         'Options to show or record debug information')
+topic_registry.register('location-alias', _load_from_file,
+                        'Aliases for remembered locations')
 topic_registry.register('log-formats', _load_from_file,
                         'Details on the logging formats available')
 

=== added file 'bzrlib/help_topics/en/location-alias.txt'
--- a/bzrlib/help_topics/en/location-alias.txt	1970-01-01 00:00:00 +0000
+++ b/bzrlib/help_topics/en/location-alias.txt	2010-02-11 07:18:20 +0000
@@ -0,0 +1,19 @@
+Location aliases
+================
+
+Bazaar defines several aliases for locations associated with a branch.  These
+can be used with most commands that expect a location, such as `bzr push`.
+
+The aliases are::
+
+  :parent    the parent of this branch
+  :submit    the submit branch for this branch
+  :public    the public location of this branch
+  :bound     the branch this branch is bound to, for bound branches
+  :push      the saved location used for `bzr push` with no arguments
+  :this      this branch
+
+For example, to push to the parent location::
+
+    bzr push :parent
+

=== modified file 'bzrlib/log.py'
--- a/bzrlib/log.py	2010-02-04 16:06:36 +0000
+++ b/bzrlib/log.py	2010-02-12 11:58:21 +0000
@@ -1424,7 +1424,7 @@
         """
         # Revision comes directly from a foreign repository
         if isinstance(rev, foreign.ForeignRevision):
-            return rev.mapping.vcs.show_foreign_revid(rev.foreign_revid)
+            return self._format_properties(rev.mapping.vcs.show_foreign_revid(rev.foreign_revid))
 
         # Imported foreign revision revision ids always contain :
         if not ":" in rev.revision_id:

=== modified file 'bzrlib/merge.py'
--- a/bzrlib/merge.py	2010-02-09 19:04:02 +0000
+++ b/bzrlib/merge.py	2010-02-12 12:22:11 +0000
@@ -27,7 +27,6 @@
     merge3,
     osutils,
     patiencediff,
-    progress,
     revision as _mod_revision,
     textfile,
     trace,
@@ -37,6 +36,7 @@
     ui,
     versionedfile
     )
+from bzrlib.cleanup import OperationWithCleanups
 from bzrlib.symbol_versioning import (
     deprecated_in,
     deprecated_method,
@@ -46,11 +46,10 @@
 
 def transform_tree(from_tree, to_tree, interesting_ids=None):
     from_tree.lock_tree_write()
-    try:
-        merge_inner(from_tree.branch, to_tree, from_tree, ignore_zero=True,
-                    interesting_ids=interesting_ids, this_tree=from_tree)
-    finally:
-        from_tree.unlock()
+    operation = OperationWithCleanups(merge_inner)
+    operation.add_cleanup(from_tree.unlock)
+    operation.run_simple(from_tree.branch, to_tree, from_tree,
+        ignore_zero=True, interesting_ids=interesting_ids, this_tree=from_tree)
 
 
 class MergeHooks(hooks.Hooks):
@@ -455,6 +454,7 @@
     def _add_parent(self):
         new_parents = self.this_tree.get_parent_ids() + [self.other_rev_id]
         new_parent_trees = []
+        operation = OperationWithCleanups(self.this_tree.set_parent_trees)
         for revision_id in new_parents:
             try:
                 tree = self.revision_tree(revision_id)
@@ -462,14 +462,9 @@
                 tree = None
             else:
                 tree.lock_read()
+                operation.add_cleanup(tree.unlock)
             new_parent_trees.append((revision_id, tree))
-        try:
-            self.this_tree.set_parent_trees(new_parent_trees,
-                                            allow_leftmost_as_ghost=True)
-        finally:
-            for _revision_id, tree in new_parent_trees:
-                if tree is not None:
-                    tree.unlock()
+        operation.run_simple(new_parent_trees, allow_leftmost_as_ghost=True)
 
     def set_other(self, other_revision, possible_transports=None):
         """Set the revision and tree to merge from.
@@ -626,7 +621,8 @@
                                change_reporter=self.change_reporter,
                                **kwargs)
 
-    def _do_merge_to(self, merge):
+    def _do_merge_to(self):
+        merge = self.make_merger()
         if self.other_branch is not None:
             self.other_branch.update_references(self.this_branch)
         merge.do_merge()
@@ -646,26 +642,19 @@
                     sub_tree.branch.repository.revision_tree(base_revision)
                 sub_merge.base_rev_id = base_revision
                 sub_merge.do_merge()
+        return merge
 
     def do_merge(self):
+        operation = OperationWithCleanups(self._do_merge_to)
         self.this_tree.lock_tree_write()
-        try:
-            if self.base_tree is not None:
-                self.base_tree.lock_read()
-            try:
-                if self.other_tree is not None:
-                    self.other_tree.lock_read()
-                try:
-                    merge = self.make_merger()
-                    self._do_merge_to(merge)
-                finally:
-                    if self.other_tree is not None:
-                        self.other_tree.unlock()
-            finally:
-                if self.base_tree is not None:
-                    self.base_tree.unlock()
-        finally:
-            self.this_tree.unlock()
+        operation.add_cleanup(self.this_tree.unlock)
+        if self.base_tree is not None:
+            self.base_tree.lock_read()
+            operation.add_cleanup(self.base_tree.unlock)
+        if self.other_tree is not None:
+            self.other_tree.lock_read()
+            operation.add_cleanup(self.other_tree.unlock)
+        merge = operation.run_simple()
         if len(merge.cooked_conflicts) == 0:
             if not self.ignore_zero and not trace.is_quiet():
                 trace.note("All changes applied successfully.")
@@ -765,35 +754,37 @@
             warnings.warn("pb argument to Merge3Merger is deprecated")
 
     def do_merge(self):
+        operation = OperationWithCleanups(self._do_merge)
         self.this_tree.lock_tree_write()
+        operation.add_cleanup(self.this_tree.unlock)
         self.base_tree.lock_read()
+        operation.add_cleanup(self.base_tree.unlock)
         self.other_tree.lock_read()
+        operation.add_cleanup(self.other_tree.unlock)
+        operation.run()
+
+    def _do_merge(self, operation):
+        self.tt = transform.TreeTransform(self.this_tree, None)
+        operation.add_cleanup(self.tt.finalize)
+        self._compute_transform()
+        results = self.tt.apply(no_conflicts=True)
+        self.write_modified(results)
         try:
-            self.tt = transform.TreeTransform(self.this_tree, None)
-            try:
-                self._compute_transform()
-                results = self.tt.apply(no_conflicts=True)
-                self.write_modified(results)
-                try:
-                    self.this_tree.add_conflicts(self.cooked_conflicts)
-                except errors.UnsupportedOperation:
-                    pass
-            finally:
-                self.tt.finalize()
-        finally:
-            self.other_tree.unlock()
-            self.base_tree.unlock()
-            self.this_tree.unlock()
+            self.this_tree.add_conflicts(self.cooked_conflicts)
+        except errors.UnsupportedOperation:
+            pass
 
     def make_preview_transform(self):
+        operation = OperationWithCleanups(self._make_preview_transform)
         self.base_tree.lock_read()
+        operation.add_cleanup(self.base_tree.unlock)
         self.other_tree.lock_read()
+        operation.add_cleanup(self.other_tree.unlock)
+        return operation.run_simple()
+
+    def _make_preview_transform(self):
         self.tt = transform.TransformPreview(self.this_tree)
-        try:
-            self._compute_transform()
-        finally:
-            self.other_tree.unlock()
-            self.base_tree.unlock()
+        self._compute_transform()
         return self.tt
 
     def _compute_transform(self):

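Two details of the rewritten merge code are worth spelling out.  First,
``Merge3Merger.do_merge`` uses ``operation.run()`` and its ``_do_merge(self,
operation)`` helper receives the operation object, so further cleanups (here
``self.tt.finalize``) can be registered part-way through the work, whereas
``make_preview_transform`` uses ``run_simple()`` and its helper takes no
operation argument.  Second, each ``unlock`` is registered immediately after
its lock is taken, mirroring the nesting of the removed ``try``/``finally``
blocks.  For readers more used to the modern standard library,
``contextlib.ExitStack`` (Python 3.3+, so not available to this code base)
has the same shape; this is only an analogy, not a suggested change::

    from contextlib import ExitStack

    class FakeTree(object):
        """Minimal fake with the lock/unlock methods used in do_merge."""

        def __init__(self, name):
            self.name = name

        def lock_read(self):
            print('lock %s' % self.name)

        def unlock(self):
            print('unlock %s' % self.name)

    this_tree = FakeTree('this')
    base_tree = FakeTree('base')
    other_tree = FakeTree('other')

    with ExitStack() as stack:
        this_tree.lock_read()
        stack.callback(this_tree.unlock)
        base_tree.lock_read()
        stack.callback(base_tree.unlock)
        other_tree.lock_read()
        stack.callback(other_tree.unlock)
        print('merge work happens here')
    # The callbacks run last-registered first: other, base, this -- the same
    # order in which the removed try/finally blocks released the locks.
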
=== modified file 'bzrlib/push.py'
--- a/bzrlib/push.py	2009-12-25 13:47:23 +0000
+++ b/bzrlib/push.py	2010-02-12 01:35:24 +0000
@@ -110,6 +110,12 @@
                     "\nYou may supply --create-prefix to create all"
                     " leading parent directories."
                     % location)
+            # This shouldn't occur (because create_prefix is true, so
+            # create_clone_on_transport should be catching NoSuchFile and
+            # creating the missing directories) but if it does the original
+            # NoSuchFile error will be more informative than an
+            # UnboundLocalError for br_to.
+            raise
         except errors.TooManyRedirections:
             raise errors.BzrCommandError("Too many redirections trying "
                                          "to make %s." % location)

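The new comment in ``push.py`` is easiest to see with a stripped-down model of
the same control flow; every name below is a stand-in, not bzrlib's::

    class NoSuchFileStandIn(Exception):
        """Stands in for bzrlib's NoSuchFile so the sketch is self-contained."""

    def push_like_operation(create_prefix, fail=True):
        try:
            if fail:
                raise NoSuchFileStandIn('missing parent directory')
            br_to = 'created branch'
        except NoSuchFileStandIn:
            if not create_prefix:
                raise RuntimeError(
                    'You may supply --create-prefix to create all leading'
                    ' parent directories.')
            # The bare raise re-raises the NoSuchFileStandIn being handled.
            # Without it, execution would fall through to the return below
            # and hit an UnboundLocalError for br_to instead of reporting
            # the real problem.
            raise
        return br_to

    try:
        push_like_operation(create_prefix=True)
    except NoSuchFileStandIn as e:
        print('informative error preserved: %s' % e)
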
=== modified file 'bzrlib/repofmt/pack_repo.py'
--- a/bzrlib/repofmt/pack_repo.py	2010-01-29 10:59:12 +0000
+++ b/bzrlib/repofmt/pack_repo.py	2010-02-12 11:58:21 +0000
@@ -24,6 +24,7 @@
 
 from bzrlib import (
     chk_map,
+    cleanup,
     debug,
     graph,
     osutils,
@@ -646,11 +647,10 @@
         del self.combined_index._indices[:]
         self.add_callback = None
 
-    def remove_index(self, index, pack):
+    def remove_index(self, index):
         """Remove index from the indices used to answer queries.
 
         :param index: An index from the pack parameter.
-        :param pack: A Pack instance.
         """
         del self.index_to_pack[index]
         self.combined_index._indices.remove(index)
@@ -1840,14 +1840,22 @@
         self._remove_pack_indices(pack)
         self.packs.remove(pack)
 
-    def _remove_pack_indices(self, pack):
-        """Remove the indices for pack from the aggregated indices."""
-        self.revision_index.remove_index(pack.revision_index, pack)
-        self.inventory_index.remove_index(pack.inventory_index, pack)
-        self.text_index.remove_index(pack.text_index, pack)
-        self.signature_index.remove_index(pack.signature_index, pack)
-        if self.chk_index is not None:
-            self.chk_index.remove_index(pack.chk_index, pack)
+    def _remove_pack_indices(self, pack, ignore_missing=False):
+        """Remove the indices for pack from the aggregated indices.
+        
+        :param ignore_missing: Suppress KeyErrors from calling remove_index.
+        """
+        for index_type in Pack.index_definitions.keys():
+            attr_name = index_type + '_index'
+            aggregate_index = getattr(self, attr_name)
+            if aggregate_index is not None:
+                pack_index = getattr(pack, attr_name)
+                try:
+                    aggregate_index.remove_index(pack_index)
+                except KeyError:
+                    if ignore_missing:
+                        continue
+                    raise
 
     def reset(self):
         """Clear all cached data."""
@@ -2091,24 +2099,21 @@
         # FIXME: just drop the transient index.
         # forget what names there are
         if self._new_pack is not None:
-            try:
-                self._new_pack.abort()
-            finally:
-                # XXX: If we aborted while in the middle of finishing the write
-                # group, _remove_pack_indices can fail because the indexes are
-                # already gone.  If they're not there we shouldn't fail in this
-                # case.  -- mbp 20081113
-                self._remove_pack_indices(self._new_pack)
-                self._new_pack = None
+            operation = cleanup.OperationWithCleanups(self._new_pack.abort)
+            operation.add_cleanup(setattr, self, '_new_pack', None)
+            # If we aborted while in the middle of finishing the write
+            # group, _remove_pack_indices could fail because the indexes are
+            # already gone.  But if they're not there we shouldn't fail in this
+            # case, so we pass ignore_missing=True.
+            operation.add_cleanup(self._remove_pack_indices, self._new_pack,
+                ignore_missing=True)
+            operation.run_simple()
         for resumed_pack in self._resumed_packs:
-            try:
-                resumed_pack.abort()
-            finally:
-                # See comment in previous finally block.
-                try:
-                    self._remove_pack_indices(resumed_pack)
-                except KeyError:
-                    pass
+            operation = cleanup.OperationWithCleanups(resumed_pack.abort)
+            # See the comment above about ignore_missing.
+            operation.add_cleanup(self._remove_pack_indices, resumed_pack,
+                ignore_missing=True)
+            operation.run_simple()
         del self._resumed_packs[:]
 
     def _remove_resumed_pack_indices(self):

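A small idiom in the abort path above: ``add_cleanup`` is handed the cleanup
callable together with the arguments it should eventually be called with
(``setattr, self, '_new_pack', None`` and ``ignore_missing=True``), rather
than a zero-argument wrapper.  The same defer-a-call-with-bound-arguments
shape, shown with only the standard library and hypothetical names::

    import functools

    class Holder(object):
        """Hypothetical object with an attribute to clear later."""

    holder = Holder()
    holder._new_pack = 'pending pack object'

    # Bind the arguments now, call later -- the same shape as
    # operation.add_cleanup(setattr, self, '_new_pack', None) above.
    deferred = functools.partial(setattr, holder, '_new_pack', None)

    # ... the abort work would run here ...

    deferred()
    assert holder._new_pack is None
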
=== modified file 'bzrlib/tests/__init__.py'
--- a/bzrlib/tests/__init__.py	2010-02-11 09:27:55 +0000
+++ b/bzrlib/tests/__init__.py	2010-02-12 11:58:21 +0000
@@ -4372,6 +4372,23 @@
 CaseInsensitiveFilesystemFeature = _CaseInsensitiveFilesystemFeature()
 
 
+class _CaseSensitiveFilesystemFeature(Feature):
+
+    def _probe(self):
+        if CaseInsCasePresFilenameFeature.available():
+            return False
+        elif CaseInsensitiveFilesystemFeature.available():
+            return False
+        else:
+            return True
+
+    def feature_name(self):
+        return 'case-sensitive filesystem'
+
+# new coding style is for feature instances to be lowercase
+case_sensitive_filesystem_feature = _CaseSensitiveFilesystemFeature()
+
+
 # Kept for compatibility, use bzrlib.tests.features.subunit instead
 SubUnitFeature = _CompatabilityThunkFeature(
     deprecated_in((2,1,0)),

=== modified file 'bzrlib/tests/per_tree/test_inv.py'
--- a/bzrlib/tests/per_tree/test_inv.py	2009-07-10 07:14:02 +0000
+++ b/bzrlib/tests/per_tree/test_inv.py	2010-02-12 03:33:36 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2007, 2008 Canonical Ltd
+# Copyright (C) 2007, 2008, 2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
@@ -23,7 +23,10 @@
 from bzrlib import (
     tests,
     )
-from bzrlib.tests import per_tree
+from bzrlib.tests import (
+    features,
+    per_tree,
+    )
 from bzrlib.mutabletree import MutableTree
 from bzrlib.tests import SymlinkFeature, TestSkipped
 from bzrlib.transform import _PreviewTree
@@ -131,6 +134,8 @@
         work_tree.add(['dir', 'dir/file'])
         if commit:
             work_tree.commit('commit 1')
+        # XXX: this isn't actually guaranteed to return the class we want to
+        # test -- mbp 2010-02-12
         return work_tree
 
     def test_canonical_path(self):
@@ -163,3 +168,21 @@
         work_tree = self._make_canonical_test_tree()
         self.assertEqual('dir/None',
                          work_tree.get_canonical_inventory_path('Dir/None'))
+
+    def test_canonical_tree_name_mismatch(self):
+        # see <https://bugs.edge.launchpad.net/bzr/+bug/368931>
+        # some of the trees we want to use can only exist on a disk, not in
+        # memory - therefore we can only test this if the filesystem is
+        # case-sensitive.
+        self.requireFeature(tests.case_sensitive_filesystem_feature)
+        work_tree = self.make_branch_and_tree('.')
+        self.build_tree(['test/', 'test/file', 'Test'])
+        work_tree.add(['test/', 'test/file', 'Test'])
+
+        test_tree = self._convert_tree(work_tree)
+        test_tree.lock_read()
+        self.addCleanup(test_tree.unlock)
+
+        self.assertEqual(['test', 'test/file', 'Test', 'test/foo', 'Test/foo'],
+            test_tree.get_canonical_inventory_paths(
+                ['test', 'test/file', 'Test', 'test/foo', 'Test/foo']))

=== modified file 'bzrlib/tree.py'
--- a/bzrlib/tree.py	2010-02-10 21:36:32 +0000
+++ b/bzrlib/tree.py	2010-02-12 11:58:21 +0000
@@ -405,16 +405,34 @@
             bit_iter = iter(path.split("/"))
             for elt in bit_iter:
                 lelt = elt.lower()
+                new_path = None
                 for child in self.iter_children(cur_id):
                     try:
+                        # XXX: it seems like if the child is known to be in the
+                        # tree, we shouldn't need to go from its id back to
+                        # its path -- mbp 2010-02-11
+                        #
+                        # XXX: it seems like we could be more efficient
+                        # by just directly looking up the original name and
+                        # only then searching all children; also by not
+                        # chopping paths so much. -- mbp 2010-02-11
                         child_base = os.path.basename(self.id2path(child))
-                        if child_base.lower() == lelt:
+                        if (child_base == elt):
+                            # if we found an exact match, we can stop now; if
+                            # we found an approximate match we need to keep
+                            # searching because there might be an exact match
+                            # later.  
                             cur_id = child
-                            cur_path = osutils.pathjoin(cur_path, child_base)
+                            new_path = osutils.pathjoin(cur_path, child_base)
                             break
+                        elif child_base.lower() == lelt:
+                            cur_id = child
+                            new_path = osutils.pathjoin(cur_path, child_base)
                     except NoSuchId:
                         # before a change is committed we can see this error...
                         continue
+                if new_path:
+                    cur_path = new_path
                 else:
                     # got to the end of this directory and no entries matched.
                     # Return what matched so far, plus the rest as specified.

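The behavioural point of the ``tree.py`` change (exercised by the new test in
``test_inv.py``) is that an exact-case match must win even when a
case-insensitive candidate is seen first.  In isolation the rule looks like
this; ``pick_child`` is a hypothetical helper, not bzrlib code::

    def pick_child(children, elt):
        # Prefer an exact-case match and stop as soon as one is found;
        # otherwise remember the last case-insensitive match, but keep
        # scanning because an exact match may still appear later.
        lelt = elt.lower()
        best = None
        for child in children:
            if child == elt:
                return child
            elif child.lower() == lelt:
                best = child
        return best

    print(pick_child(['Test', 'test'], 'test'))  # 'test' -- exact match wins
    print(pick_child(['Test'], 'test'))          # 'Test' -- case-insensitive fallback
    print(pick_child(['other'], 'test'))         # None -- caller keeps the name as given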