Rev 4770: (andrew) Merge lp:bzr/2.0 into lp:bzr. Includes fixes for #436325, in file:///home/pqm/archives/thelove/bzr/%2Btrunk/
Canonical.com Patch Queue Manager
pqm at pqm.ubuntu.com
Mon Oct 26 07:45:32 GMT 2009
At file:///home/pqm/archives/thelove/bzr/%2Btrunk/
------------------------------------------------------------
revno: 4770 [merge]
revision-id: pqm at pqm.ubuntu.com-20091026074529-g0v5lnaqpzksl9kl
parent: pqm at pqm.ubuntu.com-20091026035729-o3aemozkniitzk9k
parent: andrew.bennetts at canonical.com-20091026064440-06u7tpg7l6sjkh8h
committer: Canonical.com Patch Queue Manager <pqm at pqm.ubuntu.com>
branch nick: +trunk
timestamp: Mon 2009-10-26 07:45:29 +0000
message:
(andrew) Merge lp:bzr/2.0 into lp:bzr. Includes fixes for #436325,
#436794, #437626 and documentation improvements.
added:
bzrlib/tests/test_patches_data/binary.patch binary.patch-20091014204249-q507lc8it6nc6xll-1
modified:
Makefile Makefile-20050805140406-d96e3498bb61c5bb
NEWS NEWS-20050323055033-4e00b5db738777ff
bzrlib/knit.py knit.py-20051212171256-f056ac8f0fbe1bd9
bzrlib/patches.py patches.py-20050727183609-378c1cc5972ce908
bzrlib/tests/per_versionedfile.py test_versionedfile.py-20060222045249-db45c9ed14a1c2e5
bzrlib/tests/test_patches.py test_patches.py-20051231203844-f4974d20f6aea09c
bzrlib/tests/test_transform.py test_transaction.py-20060105172520-b3ffb3946550e6c4
bzrlib/transform.py transform.py-20060105172343-dd99e54394d91687
doc/en/user-guide/filtered_views.txt filtered_views.txt-20090226100856-a16ba1v97v91ru58-1
=== modified file 'Makefile'
--- a/Makefile 2009-10-26 03:08:56 +0000
+++ b/Makefile 2009-10-26 06:44:40 +0000
@@ -172,7 +172,7 @@
### Documentation Website ###
# Where to build the website
-DOC_WEBSITE_BUILD := build_doc_website
+DOC_WEBSITE_BUILD = build_doc_website
# Build and package docs into a website, complete with downloads.
doc-website: html-sphinx pdf-sphinx
@@ -188,13 +188,13 @@
# support our "plain" html documentation so that Sphinx is not a hard
# dependency for packagers on older platforms.
-rst2html := $(PYTHON) tools/rst2html.py --link-stylesheet --footnote-references=superscript --halt=warning
+rst2html = $(PYTHON) tools/rst2html.py --link-stylesheet --footnote-references=superscript --halt=warning
# translate txt docs to html
-derived_txt_files := \
+derived_txt_files = \
doc/en/user-reference/bzr_man.txt \
doc/en/release-notes/NEWS.txt
-txt_all := \
+txt_all = \
doc/en/tutorials/tutorial.txt \
doc/en/tutorials/using_bazaar_with_launchpad.txt \
doc/en/tutorials/centralized_workflow.txt \
@@ -207,14 +207,14 @@
doc/en/upgrade-guide/index.txt \
doc/index.txt \
$(wildcard doc/index.*.txt)
-txt_nohtml := \
+txt_nohtml = \
doc/en/user-guide/index.txt \
doc/es/user-guide/index.txt \
doc/ru/user-guide/index.txt
-txt_files := $(filter-out $(txt_nohtml), $(txt_all))
-htm_files := $(patsubst %.txt, %.html, $(txt_files))
+txt_files = $(filter-out $(txt_nohtml), $(txt_all))
+htm_files = $(patsubst %.txt, %.html, $(txt_files))
-non_txt_files := \
+non_txt_files = \
doc/default.css \
$(wildcard doc/*/bzr-en-quick-reference.svg) \
$(wildcard doc/*/bzr-en-quick-reference.png) \
@@ -229,7 +229,7 @@
# doc/developers/*.txt files that should *not* be individually
# converted to HTML
-dev_txt_nohtml := \
+dev_txt_nohtml = \
doc/developers/add.txt \
doc/developers/annotate.txt \
doc/developers/bundle-creation.txt \
@@ -255,9 +255,9 @@
doc/developers/status.txt \
doc/developers/uncommit.txt
-dev_txt_all := $(wildcard $(addsuffix /*.txt, doc/developers))
-dev_txt_files := $(filter-out $(dev_txt_nohtml), $(dev_txt_all))
-dev_htm_files := $(patsubst %.txt, %.html, $(dev_txt_files))
+dev_txt_all = $(wildcard $(addsuffix /*.txt, doc/developers))
+dev_txt_files = $(filter-out $(dev_txt_nohtml), $(dev_txt_all))
+dev_htm_files = $(patsubst %.txt, %.html, $(dev_txt_files))
doc/en/user-guide/index-plain.html: $(wildcard $(addsuffix /*.txt, doc/en/user-guide))
$(rst2html) --stylesheet=../../default.css $(dir $@)index-plain.txt $@
@@ -299,7 +299,7 @@
docs-plain: $(ALL_DOCS)
# produce a tree containing just the final docs, ready for uploading to the web
-HTMLDIR := html_docs
+HTMLDIR = html_docs
html-plain: docs-plain
$(PYTHON) tools/win32/ostools.py copytree $(WEB_DOCS) $(HTMLDIR)
@@ -325,7 +325,7 @@
# These are files that need to be copied into the build location to bootstrap
# the build process.
# Note that the path is relative to tools/win32
-BUILDOUT_FILES := buildout.cfg \
+BUILDOUT_FILES = buildout.cfg \
buildout-templates/bin/build-installer.bat.in \
ostools.py bootstrap.py
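The Makefile hunks above replace simply-expanded `:=` assignments with recursively-expanded `=` ones. In GNU Make, `:=` evaluates the right-hand side once, at definition time, while `=` defers expansion until the variable is used, so later changes to referenced variables are picked up. As a rough Python analogy (helper names are hypothetical, not part of the patch):

```python
# Rough analogy for GNU Make assignment flavours.
# ':=' (simply expanded): right-hand side evaluated immediately.
# '='  (recursively expanded): evaluation deferred to each use.

PYTHON = "python2"  # value available now

eager = "%s tools/rst2html.py" % PYTHON               # like rst2html := $(PYTHON) ...
deferred = lambda: "%s tools/rst2html.py" % PYTHON    # like rst2html = $(PYTHON) ...

PYTHON = "python3"  # later reassignment

# 'eager' still holds the old value; 'deferred' picks up the new one.
```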
=== modified file 'NEWS'
--- a/NEWS 2009-10-23 16:31:03 +0000
+++ b/NEWS 2009-10-26 06:44:40 +0000
@@ -25,6 +25,15 @@
* ``bzr+http`` servers no longer give spurious jail break errors when
serving branches inside a shared repository. (Andrew Bennetts, #348308)
+* Diff parsing handles "Binary files differ" hunks. (Aaron Bentley, #436325)
+
+* Fetching from stacked pre-2a repository via a smart server no longer
+ fails intermittently with "second push failed to complete".
+ (Andrew Bennetts, #437626)
+
+* PreviewTree file names are not limited by the encoding of the temp
+ directory's filesystem. (Aaron Bentley, #436794)
+
* TreeTransform.adjust_path updates the limbo paths of descendants of adjusted
files. (Aaron Bentley)
@@ -42,6 +51,9 @@
Documentation
*************
+* Filtered views user documentation upgraded to refer to format 2a
+ instead of pre-2.0 formats. (Ian Clatworthy)
+
API Changes
***********
@@ -107,12 +119,24 @@
* Avoid "NoneType has no attribute st_mode" error when files disappear
from a directory while it's being read. (Martin Pool, #446033)
+* Diff parsing handles "Binary files differ" hunks. (Aaron Bentley, #436325)
+
+* Fetching from stacked pre-2a repository via a smart server no longer
+ fails intermittently with "second push failed to complete".
+ (Andrew Bennetts, #437626)
+
+* PreviewTree file names are not limited by the encoding of the temp
+ directory's filesystem. (Aaron Bentley, #436794)
+
Improvements
************
Documentation
*************
+* Filtered views user documentation upgraded to refer to format 2a
+ instead of pre-2.0 formats. (Ian Clatworthy)
+
API Changes
***********
=== modified file 'bzrlib/knit.py'
--- a/bzrlib/knit.py 2009-09-15 02:57:23 +0000
+++ b/bzrlib/knit.py 2009-10-26 06:44:40 +0000
@@ -1710,10 +1710,12 @@
# There were index entries buffered at the end of the stream,
# So these need to be added (if the index supports holding such
# entries for later insertion)
+ all_entries = []
for key in buffered_index_entries:
index_entries = buffered_index_entries[key]
- self._index.add_records(index_entries,
- missing_compression_parents=True)
+ all_entries.extend(index_entries)
+ self._index.add_records(
+ all_entries, missing_compression_parents=True)
def get_missing_compression_parent_keys(self):
"""Return an iterable of keys of missing compression parents.
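The knit.py hunk stops issuing one `add_records` call per buffered key and instead accumulates every buffered entry into a single call, so the index receives the complete set of records with missing compression parents at once. A minimal sketch of the batching pattern, using a stand-in index object (the real `_KnitGraphIndex` API is more involved):

```python
# Stand-in for the index object; the real one validates and stores entries.
class FakeIndex(object):
    def __init__(self):
        self.calls = []

    def add_records(self, entries, missing_compression_parents=False):
        # Record each call so we can see how many were made.
        self.calls.append(list(entries))

buffered_index_entries = {
    ('key-a',): [('entry-a1',), ('entry-a2',)],
    ('key-b',): [('entry-b1',)],
}

index = FakeIndex()

# Before the patch: one add_records call per key in the loop.
# After: extend a single list inside the loop, then call once.
all_entries = []
for key in buffered_index_entries:
    all_entries.extend(buffered_index_entries[key])
index.add_records(all_entries, missing_compression_parents=True)

assert len(index.calls) == 1      # a single batched call
assert len(index.calls[0]) == 3   # containing all buffered entries
```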
=== modified file 'bzrlib/patches.py'
--- a/bzrlib/patches.py 2009-03-23 14:59:43 +0000
+++ b/bzrlib/patches.py 2009-10-14 22:08:45 +0000
@@ -14,6 +14,15 @@
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+import re
+
+
+class BinaryFiles(Exception):
+
+ def __init__(self, orig_name, mod_name):
+ self.orig_name = orig_name
+ self.mod_name = mod_name
+ Exception.__init__(self, 'Binary files section encountered.')
class PatchSyntax(Exception):
@@ -57,6 +66,9 @@
def get_patch_names(iter_lines):
try:
line = iter_lines.next()
+ match = re.match('Binary files (.*) and (.*) differ\n', line)
+ if match is not None:
+ raise BinaryFiles(match.group(1), match.group(2))
if not line.startswith("--- "):
raise MalformedPatchHeader("No orig name", line)
else:
@@ -259,10 +271,19 @@
yield hunk
-class Patch:
+class BinaryPatch(object):
def __init__(self, oldname, newname):
self.oldname = oldname
self.newname = newname
+
+ def __str__(self):
+ return 'Binary files %s and %s differ\n' % (self.oldname, self.newname)
+
+
+class Patch(BinaryPatch):
+
+ def __init__(self, oldname, newname):
+ BinaryPatch.__init__(self, oldname, newname)
self.hunks = []
def __str__(self):
@@ -317,11 +338,15 @@
def parse_patch(iter_lines):
iter_lines = iter_lines_handle_nl(iter_lines)
- (orig_name, mod_name) = get_patch_names(iter_lines)
- patch = Patch(orig_name, mod_name)
- for hunk in iter_hunks(iter_lines):
- patch.hunks.append(hunk)
- return patch
+ try:
+ (orig_name, mod_name) = get_patch_names(iter_lines)
+ except BinaryFiles, e:
+ return BinaryPatch(e.orig_name, e.mod_name)
+ else:
+ patch = Patch(orig_name, mod_name)
+ for hunk in iter_hunks(iter_lines):
+ patch.hunks.append(hunk)
+ return patch
def iter_file_patch(iter_lines):
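The patches.py change recognizes a "Binary files X and Y differ" header via a regex, raises `BinaryFiles`, and has `parse_patch` convert that into a `BinaryPatch` object instead of failing. A self-contained sketch of the flow (written for Python 3, while bzrlib itself is Python 2; the real `get_patch_names` also parses `---`/`+++` headers):

```python
import re

class BinaryFiles(Exception):
    def __init__(self, orig_name, mod_name):
        self.orig_name = orig_name
        self.mod_name = mod_name
        Exception.__init__(self, 'Binary files section encountered.')

class BinaryPatch(object):
    def __init__(self, oldname, newname):
        self.oldname = oldname
        self.newname = newname

    def __str__(self):
        return 'Binary files %s and %s differ\n' % (self.oldname, self.newname)

def check_binary_header(line):
    # Mirrors the new check at the top of bzrlib's get_patch_names.
    match = re.match('Binary files (.*) and (.*) differ\n', line)
    if match is not None:
        raise BinaryFiles(match.group(1), match.group(2))

line = 'Binary files bar and qux differ\n'
try:
    check_binary_header(line)
except BinaryFiles as e:
    patch = BinaryPatch(e.orig_name, e.mod_name)

# The header round-trips: str(BinaryPatch) reproduces the original line,
# which is what the new test_roundtrip_binary test relies on.
assert str(patch) == line
```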
=== modified file 'bzrlib/tests/per_versionedfile.py'
--- a/bzrlib/tests/per_versionedfile.py 2009-10-19 15:06:58 +0000
+++ b/bzrlib/tests/per_versionedfile.py 2009-10-26 06:44:40 +0000
@@ -2438,6 +2438,43 @@
else:
self.assertIdenticalVersionedFile(source, files)
+ def test_insert_record_stream_long_parent_chain_out_of_order(self):
+ """An out of order stream can either error or work."""
+ if not self.graph:
+ raise TestNotApplicable('ancestry info only relevant with graph.')
+ # Create a reasonably long chain of records based on each other, where
+ # most will be deltas.
+ source = self.get_versionedfiles('source')
+ parents = ()
+ keys = []
+ content = [('same same %d\n' % n) for n in range(500)]
+ for letter in 'abcdefghijklmnopqrstuvwxyz':
+ key = ('key-' + letter,)
+ if self.key_length == 2:
+ key = ('prefix',) + key
+ content.append('content for ' + letter + '\n')
+ source.add_lines(key, parents, content)
+ keys.append(key)
+ parents = (key,)
+ # Create a stream of these records, excluding the first record that the
+ # rest ultimately depend upon, and insert it into a new vf.
+ streams = []
+ for key in reversed(keys):
+ streams.append(source.get_record_stream([key], 'unordered', False))
+ deltas = chain(*streams[:-1])
+ files = self.get_versionedfiles()
+ try:
+ files.insert_record_stream(deltas)
+ except RevisionNotPresent:
+ # Must not have corrupted the file.
+ files.check()
+ else:
+ # Must only report either just the first key as a missing parent,
+ # or no keys at all as missing (for no-delta scenarios).
+ missing = set(files.get_missing_compression_parent_keys())
+ missing.discard(keys[0])
+ self.assertEqual(set(), missing)
+
def get_knit_delta_source(self):
"""Get a source that can produce a stream with knit delta records,
regardless of this test's scenario.
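The new test builds its out-of-order stream by fetching one single-key stream per record, newest first, then chaining all but the last stream together — which drops the base record every delta ultimately depends on. A minimal sketch of that stream construction (the `get_record_stream` stand-in is hypothetical; the real `VersionedFiles` method yields record objects):

```python
from itertools import chain

keys = [('key-%s' % letter,) for letter in 'abc']  # oldest -> newest

def get_record_stream(wanted):
    # Stand-in for VersionedFiles.get_record_stream: just yields the keys.
    for key in wanted:
        yield key

# One stream per key, in reverse (newest-first) order.
streams = []
for key in reversed(keys):
    streams.append(get_record_stream([key]))

# streams[:-1] omits the stream for keys[0], the record everything else
# is ultimately built on, so the chained stream is missing its base.
deltas = list(chain(*streams[:-1]))

assert deltas == [('key-c',), ('key-b',)]
```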
=== modified file 'bzrlib/tests/test_patches.py'
--- a/bzrlib/tests/test_patches.py 2009-03-23 14:59:43 +0000
+++ b/bzrlib/tests/test_patches.py 2009-10-15 15:25:20 +0000
@@ -24,6 +24,9 @@
from bzrlib.patches import (MalformedLine,
MalformedHunkHeader,
MalformedPatchHeader,
+ BinaryPatch,
+ BinaryFiles,
+ Patch,
ContextLine,
InsertLine,
RemoveLine,
@@ -45,6 +48,13 @@
"test_patches_data", filename)
return file(data_path, "rb")
+ def data_lines(self, filename):
+ datafile = self.datafile(filename)
+ try:
+ return datafile.readlines()
+ finally:
+ datafile.close()
+
def testValidPatchHeader(self):
"""Parse a valid patch header"""
lines = "--- orig/commands.py\n+++ mod/dommands.py\n".split('\n')
@@ -136,6 +146,21 @@
patchtext = self.datafile("patchtext.patch").read()
self.compare_parsed(patchtext)
+ def test_parse_binary(self):
+ """Test parsing a binary patch"""
+ patches = parse_patches(self.data_lines("binary.patch"))
+ self.assertIs(BinaryPatch, patches[0].__class__)
+ self.assertIs(Patch, patches[1].__class__)
+ self.assertContainsRe(patches[0].oldname, '^bar\t')
+ self.assertContainsRe(patches[0].newname, '^qux\t')
+ self.assertContainsRe(str(patches[0]),
+ 'Binary files bar\t.* and qux\t.* differ\n')
+
+ def test_roundtrip_binary(self):
+ patchtext = ''.join(self.data_lines("binary.patch"))
+ patches = parse_patches(patchtext.splitlines(True))
+ self.assertEqual(patchtext, ''.join(str(p) for p in patches))
+
def testInit(self):
"""Handle patches missing half the position, range tuple"""
patchtext = \
@@ -194,6 +219,11 @@
count += 1
self.assertEqual(count, len(mod_lines))
+ def test_iter_patched_binary(self):
+ binary_lines = self.data_lines('binary.patch')
+ e = self.assertRaises(BinaryFiles, iter_patched, [], binary_lines)
+
+
def test_iter_patched_from_hunks(self):
"""Test a few patch files, and make sure they work."""
files = [
=== added file 'bzrlib/tests/test_patches_data/binary.patch'
--- a/bzrlib/tests/test_patches_data/binary.patch 1970-01-01 00:00:00 +0000
+++ b/bzrlib/tests/test_patches_data/binary.patch 2009-10-14 22:08:45 +0000
@@ -0,0 +1,6 @@
+Binary files bar 2009-10-14 19:49:59 +0000 and qux 2009-10-14 19:50:35 +0000 differ
+--- baz 2009-10-14 19:49:59 +0000
++++ quxx 2009-10-14 19:51:00 +0000
+@@ -1 +1 @@
+-hello
++goodbye
=== modified file 'bzrlib/tests/test_transform.py'
--- a/bzrlib/tests/test_transform.py 2009-10-16 15:25:24 +0000
+++ b/bzrlib/tests/test_transform.py 2009-10-26 06:44:40 +0000
@@ -2754,6 +2754,16 @@
rev2_tree = tree.branch.repository.revision_tree(rev2_id)
self.assertEqual('contents', rev2_tree.get_file_text('file_id'))
+ def test_ascii_limbo_paths(self):
+ self.requireFeature(tests.UnicodeFilenameFeature)
+ branch = self.make_branch('any')
+ tree = branch.repository.revision_tree(_mod_revision.NULL_REVISION)
+ tt = TransformPreview(tree)
+ foo_id = tt.new_directory('', ROOT_PARENT)
+ bar_id = tt.new_file(u'\u1234bar', foo_id, 'contents')
+ limbo_path = tt._limbo_name(bar_id)
+ self.assertEqual(limbo_path.encode('ascii', 'replace'), limbo_path)
+
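The assertion in `test_ascii_limbo_paths` checks that the limbo path is pure ASCII: encoding with the `'replace'` error handler substitutes `?` for unencodable characters, so the result equals the input only when nothing was replaced. The test above is Python 2 (where a `str`/`unicode` comparison works directly); a Python 3 rendering of the same check:

```python
def is_pure_ascii(path):
    # 'replace' swaps '?' in for characters that cannot be encoded, so the
    # round trip equals the input only when no substitution happened.
    return path.encode('ascii', 'replace').decode('ascii') == path

assert is_pure_ascii('limbo/new-1')
assert not is_pure_ascii(u'\u1234bar')
```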
class FakeSerializer(object):
"""Serializer implementation that simply returns the input.
=== modified file 'bzrlib/transform.py'
--- a/bzrlib/transform.py 2009-10-16 15:29:50 +0000
+++ b/bzrlib/transform.py 2009-10-26 06:44:40 +0000
@@ -1054,47 +1054,20 @@
def _limbo_name(self, trans_id):
"""Generate the limbo name of a file"""
limbo_name = self._limbo_files.get(trans_id)
- if limbo_name is not None:
- return limbo_name
- parent = self._new_parent.get(trans_id)
- # if the parent directory is already in limbo (e.g. when building a
- # tree), choose a limbo name inside the parent, to reduce further
- # renames.
- use_direct_path = False
- if self._new_contents.get(parent) == 'directory':
- filename = self._new_name.get(trans_id)
- if filename is not None:
- if parent not in self._limbo_children:
- self._limbo_children[parent] = set()
- self._limbo_children_names[parent] = {}
- use_direct_path = True
- # the direct path can only be used if no other file has
- # already taken this pathname, i.e. if the name is unused, or
- # if it is already associated with this trans_id.
- elif self._case_sensitive_target:
- if (self._limbo_children_names[parent].get(filename)
- in (trans_id, None)):
- use_direct_path = True
- else:
- for l_filename, l_trans_id in\
- self._limbo_children_names[parent].iteritems():
- if l_trans_id == trans_id:
- continue
- if l_filename.lower() == filename.lower():
- break
- else:
- use_direct_path = True
-
- if use_direct_path:
- limbo_name = pathjoin(self._limbo_files[parent], filename)
- self._limbo_children[parent].add(trans_id)
- self._limbo_children_names[parent][filename] = trans_id
- else:
- limbo_name = pathjoin(self._limbodir, trans_id)
- self._needs_rename.add(trans_id)
- self._limbo_files[trans_id] = limbo_name
+ if limbo_name is None:
+ limbo_name = self._generate_limbo_path(trans_id)
+ self._limbo_files[trans_id] = limbo_name
return limbo_name
+ def _generate_limbo_path(self, trans_id):
+ """Generate a limbo path using the trans_id as the relative path.
+
+ This is suitable as a fallback, and when the transform should not be
+ sensitive to the path encoding of the limbo directory.
+ """
+ self._needs_rename.add(trans_id)
+ return pathjoin(self._limbodir, trans_id)
+
def adjust_path(self, name, parent, trans_id):
previous_parent = self._new_parent.get(trans_id)
previous_name = self._new_name.get(trans_id)
@@ -1407,6 +1380,54 @@
continue
yield self.trans_id_tree_path(childpath)
+ def _generate_limbo_path(self, trans_id):
+ """Generate a limbo path using the final path if possible.
+
+ This optimizes the performance of applying the tree transform by
+ avoiding renames. These renames can be avoided only when the parent
+ directory is already scheduled for creation.
+
+ If the final path cannot be used, falls back to using the trans_id as
+ the relpath.
+ """
+ parent = self._new_parent.get(trans_id)
+ # if the parent directory is already in limbo (e.g. when building a
+ # tree), choose a limbo name inside the parent, to reduce further
+ # renames.
+ use_direct_path = False
+ if self._new_contents.get(parent) == 'directory':
+ filename = self._new_name.get(trans_id)
+ if filename is not None:
+ if parent not in self._limbo_children:
+ self._limbo_children[parent] = set()
+ self._limbo_children_names[parent] = {}
+ use_direct_path = True
+ # the direct path can only be used if no other file has
+ # already taken this pathname, i.e. if the name is unused, or
+ # if it is already associated with this trans_id.
+ elif self._case_sensitive_target:
+ if (self._limbo_children_names[parent].get(filename)
+ in (trans_id, None)):
+ use_direct_path = True
+ else:
+ for l_filename, l_trans_id in\
+ self._limbo_children_names[parent].iteritems():
+ if l_trans_id == trans_id:
+ continue
+ if l_filename.lower() == filename.lower():
+ break
+ else:
+ use_direct_path = True
+
+ if not use_direct_path:
+ return DiskTreeTransform._generate_limbo_path(self, trans_id)
+
+ limbo_name = pathjoin(self._limbo_files[parent], filename)
+ self._limbo_children[parent].add(trans_id)
+ self._limbo_children_names[parent][filename] = trans_id
+ return limbo_name
+
+
def apply(self, no_conflicts=False, precomputed_delta=None, _mover=None):
"""Apply all changes to the inventory and filesystem.
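The transform.py hunks are a template-method refactoring: `_limbo_name` becomes a thin cache around a new `_generate_limbo_path` hook, whose base implementation names the file after its trans_id (requiring a later rename), while the tree-building subclass overrides it to use the final path directly when the parent directory is already in limbo. A simplified sketch of the pattern, with stand-in class names and a flattened lookup in place of the real parent/children bookkeeping:

```python
import os

class BaseTransform(object):
    """Stand-in for DiskTreeTransform: cache plus fallback path generation."""

    def __init__(self, limbodir):
        self._limbodir = limbodir
        self._limbo_files = {}
        self._needs_rename = set()

    def _limbo_name(self, trans_id):
        # Thin cache: generate once, then reuse.
        limbo_name = self._limbo_files.get(trans_id)
        if limbo_name is None:
            limbo_name = self._generate_limbo_path(trans_id)
            self._limbo_files[trans_id] = limbo_name
        return limbo_name

    def _generate_limbo_path(self, trans_id):
        # Fallback: name the file after its trans_id; a later rename
        # moves it to its final location.
        self._needs_rename.add(trans_id)
        return os.path.join(self._limbodir, trans_id)

class OptimizedTransform(BaseTransform):
    """Stand-in for TreeTransform: use the final path when it is known."""

    def __init__(self, limbodir, final_paths):
        BaseTransform.__init__(self, limbodir)
        self._final_paths = final_paths  # trans_id -> final relpath, if known

    def _generate_limbo_path(self, trans_id):
        final = self._final_paths.get(trans_id)
        if final is None:
            # Defer to the base-class fallback (and its rename).
            return BaseTransform._generate_limbo_path(self, trans_id)
        # Direct path: no rename needed at apply time.
        return os.path.join(self._limbodir, final)

tt = OptimizedTransform('limbo', {'new-1': 'dir/file.txt'})
assert tt._limbo_name('new-1') == os.path.join('limbo', 'dir/file.txt')
assert tt._limbo_name('new-2') == os.path.join('limbo', 'new-2')
assert tt._needs_rename == {'new-2'}  # only the fallback path needs a rename
```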
=== modified file 'doc/en/user-guide/filtered_views.txt'
--- a/doc/en/user-guide/filtered_views.txt 2009-03-31 14:12:27 +0000
+++ b/doc/en/user-guide/filtered_views.txt 2009-10-19 04:55:02 +0000
@@ -25,10 +25,8 @@
a view implicitly so that it is clear that the operation or output is
being masked accordingly.
-Note: Bazaar's default format does not yet support filtered views. That
-is likely to change in the near future. To use filtered views in the
-meantime, you currently need to upgrade to ``development-wt6`` (or
-``development-wt6-rich-root``) format first.
+Note: Filtered views are only supported in format 2a, the default in
+Bazaar 2.0, or later.
Creating a view