Rev 3847: (mbp) Fix bug #288751 by teaching fetch to expand to fulltexts if it would cause a delta to span repo boundaries. in file:///home/pqm/archives/thelove/bzr/%2Btrunk/

Canonical.com Patch Queue Manager pqm at pqm.ubuntu.com
Fri Nov 21 22:19:37 GMT 2008


At file:///home/pqm/archives/thelove/bzr/%2Btrunk/

------------------------------------------------------------
revno: 3847
revision-id: pqm at pqm.ubuntu.com-20081121221932-44m8c85k5ri8h5hg
parent: pqm at pqm.ubuntu.com-20081121202743-dhg79sf8sf0wryfe
parent: john at arbash-meinel.com-20081121214103-gry7l3tuq9apgerx
committer: Canonical.com Patch Queue Manager <pqm at pqm.ubuntu.com>
branch nick: +trunk
timestamp: Fri 2008-11-21 22:19:32 +0000
message:
  (mbp) Fix bug #288751 by teaching fetch to expand to fulltexts if it
  	would cause a delta to span repo boundaries.
modified:
  NEWS                           NEWS-20050323055033-4e00b5db738777ff
  bzrlib/commit.py               commit.py-20050511101309-79ec1a0168e0e825
  bzrlib/fetch.py                fetch.py-20050818234941-26fea6105696365d
  bzrlib/index.py                index.py-20070712131115-lolkarso50vjr64s-1
  bzrlib/knit.py                 knit.py-20051212171256-f056ac8f0fbe1bd9
  bzrlib/repofmt/pack_repo.py    pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
  bzrlib/repository.py           rev_storage.py-20051111201905-119e9401e46257e3
  bzrlib/tests/branch_implementations/test_stacking.py test_stacking.py-20080214020755-msjlkb7urobwly0f-1
  bzrlib/tests/interrepository_implementations/test_fetch.py test_fetch.py-20080425213627-j60cjh782ufm83ry-1
  bzrlib/tests/test_knit.py      test_knit.py-20051212171302-95d4c00dd5f11f2b
  bzrlib/tests/test_repository.py test_repository.py-20060131075918-65c555b881612f4d
  bzrlib/tests/test_revision.py  testrevision.py-20050804210559-46f5e1eb67b01289
  bzrlib/versionedfile.py        versionedfile.py-20060222045106-5039c71ee3b65490
    ------------------------------------------------------------
    revno: 3830.3.25
    revision-id: john at arbash-meinel.com-20081121214103-gry7l3tuq9apgerx
    parent: john at arbash-meinel.com-20081121212106-erewpn8s9353f411
    committer: John Arbash Meinel <john at arbash-meinel.com>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-21 15:41:03 -0600
    message:
      We changed the error that is raised when fetching from a broken repo.
      
      The test only really cares that an assertion that will cancel the fetch is being
      raised, to ensure that we don't actually allow fetching when broken.
    modified:
      bzrlib/tests/test_repository.py test_repository.py-20060131075918-65c555b881612f4d
    ------------------------------------------------------------
    revno: 3830.3.24
    revision-id: john at arbash-meinel.com-20081121212106-erewpn8s9353f411
    parent: john at arbash-meinel.com-20081121211151-yqloy3xyzu7qgsm4
    committer: John Arbash Meinel <john at arbash-meinel.com>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-21 15:21:06 -0600
    message:
      We don't require all parents to be present, just the compression parent.
    modified:
      bzrlib/knit.py                 knit.py-20051212171256-f056ac8f0fbe1bd9
    ------------------------------------------------------------
    revno: 3830.3.23
    revision-id: john at arbash-meinel.com-20081121211151-yqloy3xyzu7qgsm4
    parent: john at arbash-meinel.com-20081121204849-widj20hs0s24c9g5
    committer: John Arbash Meinel <john at arbash-meinel.com>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-21 15:11:51 -0600
    message:
      Use Branch.sprout rather than Branch.clone.
      
      It allows the test to pass on Windows, and we are starting a new branch, which
      is '.sprout' anyway.
    modified:
      bzrlib/tests/test_revision.py  testrevision.py-20050804210559-46f5e1eb67b01289
    ------------------------------------------------------------
    revno: 3830.3.22
    revision-id: john at arbash-meinel.com-20081121204849-widj20hs0s24c9g5
    parent: john at arbash-meinel.com-20081121202415-30ok91cqouwa7f7e
    committer: John Arbash Meinel <john at arbash-meinel.com>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-21 14:48:49 -0600
    message:
      Restore the ability to use deltas in the generic fetch code.
    modified:
      bzrlib/repofmt/knitrepo.py     knitrepo.py-20070206081537-pyy4a00xdas0j4pf-1
    ------------------------------------------------------------
    revno: 3830.3.21
    revision-id: john at arbash-meinel.com-20081121202415-30ok91cqouwa7f7e
    parent: john at arbash-meinel.com-20081121202052-vs6gqscjphfmxmo8
    parent: pqm at pqm.ubuntu.com-20081121044450-xgyehkv3u1da37wg
    committer: John Arbash Meinel <john at arbash-meinel.com>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-21 14:24:15 -0600
    message:
      Merge in bzr.dev 3845 and handle the trivial conflicts.
    modified:
      NEWS                           NEWS-20050323055033-4e00b5db738777ff
      bzrlib/__init__.py             __init__.py-20050309040759-33e65acf91bbcd5d
      bzrlib/_readdir_pyx.pyx        readdir.pyx-20060609152855-rm6v321vuaqyh9tu-1
      bzrlib/btree_index.py          index.py-20080624222253-p0x5f92uyh5hw734-7
      bzrlib/builtins.py             builtins.py-20050830033751-fc01482b9ca23183
      bzrlib/fetch.py                fetch.py-20050818234941-26fea6105696365d
      bzrlib/lockable_files.py       control_files.py-20051111201905-bb88546e799d669f
      bzrlib/option.py               option.py-20051014052914-661fb36e76e7362f
      bzrlib/plugin.py               plugin.py-20050622060424-829b654519533d69
      bzrlib/plugins/launchpad/account.py account.py-20071011033320-50y6vfftywf4yllw-1
      bzrlib/plugins/launchpad/lp_directory.py lp_indirect.py-20070126012204-de5rugwlt22c7u7e-1
      bzrlib/plugins/launchpad/test_account.py test_account.py-20071011033320-50y6vfftywf4yllw-2
      bzrlib/remote.py               remote.py-20060720103555-yeeg2x51vn0rbtdp-1
      bzrlib/repofmt/pack_repo.py    pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
      bzrlib/repofmt/weaverepo.py    presplitout.py-20070125045333-wfav3tsh73oxu3zk-1
      bzrlib/repository.py           rev_storage.py-20051111201905-119e9401e46257e3
      bzrlib/shelf_ui.py             shelver.py-20081005210102-33worgzwrtdw0yrm-1
      bzrlib/tests/__init__.py       selftest.py-20050531073622-8d0e3c8845c97a64
      bzrlib/tests/branch_implementations/test_stacking.py test_stacking.py-20080214020755-msjlkb7urobwly0f-1
      bzrlib/tests/per_repository/test_write_group.py test_write_group.py-20070716105516-89n34xtogq5frn0m-1
      bzrlib/tests/test_btree_index.py test_index.py-20080624222253-p0x5f92uyh5hw734-13
      bzrlib/tests/test_pack_repository.py test_pack_repository-20080801043947-eaw0e6h2gu75kwmy-1
      bzrlib/tests/test_permissions.py test_permissions.py-20051215004520-ccf475789c80e80c
      bzrlib/tests/test_plugins.py   plugins.py-20050622075746-32002b55e5e943e9
      bzrlib/tests/test_remote.py    test_remote.py-20060720103555-yeeg2x51vn0rbtdp-2
      bzrlib/tests/test_shelf_ui.py  test_shelf_ui.py-20081027155203-wtcuazg85wp9u4fv-1
      bzrlib/transport/remote.py     ssh.py-20060608202016-c25gvf1ob7ypbus6-1
      bzrlib/workingtree.py          workingtree.py-20050511021032-29b6ec0a681e02e3
      bzrlib/workingtree_4.py        workingtree_4.py-20070208044105-5fgpc5j3ljlh5q6c-1
    ------------------------------------------------------------
    revno: 3830.3.20
    revision-id: john at arbash-meinel.com-20081121202052-vs6gqscjphfmxmo8
    parent: john at arbash-meinel.com-20081121201903-d9gj476w20s59580
    committer: John Arbash Meinel <john at arbash-meinel.com>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-21 14:20:52 -0600
    message:
      Minor PEP8 and copyright updates.
    modified:
      bzrlib/index.py                index.py-20070712131115-lolkarso50vjr64s-1
      bzrlib/versionedfile.py        versionedfile.py-20060222045106-5039c71ee3b65490
    ------------------------------------------------------------
    revno: 3830.3.19
    revision-id: john at arbash-meinel.com-20081121201903-d9gj476w20s59580
    parent: mbp at sourcefrog.net-20081121043538-b6bfhh84ckku26ge
    committer: John Arbash Meinel <john at arbash-meinel.com>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-21 14:19:03 -0600
    message:
      Small update to GraphIndexBuilder._external_references
      
      Add a comment describing the assumptions, and add a check for when
      there cannot be external compression references.
    modified:
      bzrlib/index.py                index.py-20070712131115-lolkarso50vjr64s-1
    ------------------------------------------------------------
    revno: 3830.3.18
    revision-id: mbp at sourcefrog.net-20081121043538-b6bfhh84ckku26ge
    parent: mbp at sourcefrog.net-20081120122941-vsgl27z58crm4qe1
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-21 15:35:38 +1100
    message:
      Faster expression evaluation order
    modified:
      bzrlib/knit.py                 knit.py-20051212171256-f056ac8f0fbe1bd9
    ------------------------------------------------------------
    revno: 3830.3.17
    revision-id: mbp at sourcefrog.net-20081120122941-vsgl27z58crm4qe1
    parent: mbp at sourcefrog.net-20081120055632-dhju4121e5i1p3t3
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Thu 2008-11-20 23:29:41 +1100
    message:
      Don't assume versions being unmentioned by iter_lines_added_or_changed implies the versions aren't present
    modified:
      bzrlib/knit.py                 knit.py-20051212171256-f056ac8f0fbe1bd9
    ------------------------------------------------------------
    revno: 3830.3.16
    revision-id: mbp at sourcefrog.net-20081120055632-dhju4121e5i1p3t3
    parent: mbp at sourcefrog.net-20081120034745-wayfz2zxr7ypqjdc
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Thu 2008-11-20 16:56:32 +1100
    message:
      Add passing tests for iter_lines_added_or_present in stacked repos
    modified:
      bzrlib/tests/branch_implementations/test_stacking.py test_stacking.py-20080214020755-msjlkb7urobwly0f-1
    ------------------------------------------------------------
    revno: 3830.3.15
    revision-id: mbp at sourcefrog.net-20081120034745-wayfz2zxr7ypqjdc
    parent: mbp at sourcefrog.net-20081119074922-f7xxrzpkce9ytyal
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Thu 2008-11-20 14:47:45 +1100
    message:
      Check against all parents when deciding whether to store a fulltext in a stacked repository
    modified:
      bzrlib/knit.py                 knit.py-20051212171256-f056ac8f0fbe1bd9
    ------------------------------------------------------------
    revno: 3830.3.14
    revision-id: mbp at sourcefrog.net-20081119074922-f7xxrzpkce9ytyal
    parent: mbp at sourcefrog.net-20081119074746-2uiwh2i46qmyqk3d
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Wed 2008-11-19 18:49:22 +1100
    message:
      Return to setting _fetch_uses_deltas from initializers
    modified:
      bzrlib/repofmt/knitrepo.py     knitrepo.py-20070206081537-pyy4a00xdas0j4pf-1
      bzrlib/repository.py           rev_storage.py-20051111201905-119e9401e46257e3
    ------------------------------------------------------------
    revno: 3830.3.13
    revision-id: mbp at sourcefrog.net-20081119074746-2uiwh2i46qmyqk3d
    parent: mbp at sourcefrog.net-20081119074451-4kn0r1khwnhh7hxs
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Wed 2008-11-19 18:47:46 +1100
    message:
      review cleanups to insert_record_stream
    modified:
      bzrlib/knit.py                 knit.py-20051212171256-f056ac8f0fbe1bd9
    ------------------------------------------------------------
    revno: 3830.3.12
    revision-id: mbp at sourcefrog.net-20081119074451-4kn0r1khwnhh7hxs
    parent: mbp at sourcefrog.net-20081114080450-7myrrjm45lqk8p2x
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Wed 2008-11-19 18:44:51 +1100
    message:
      Review cleanups: unify has_key impls, add missing_keys(), clean up exception blocks
    modified:
      bzrlib/index.py                index.py-20070712131115-lolkarso50vjr64s-1
      bzrlib/knit.py                 knit.py-20051212171256-f056ac8f0fbe1bd9
      bzrlib/tests/interrepository_implementations/test_fetch.py test_fetch.py-20080425213627-j60cjh782ufm83ry-1
      bzrlib/versionedfile.py        versionedfile.py-20060222045106-5039c71ee3b65490
    ------------------------------------------------------------
    revno: 3830.3.11
    revision-id: mbp at sourcefrog.net-20081114080450-7myrrjm45lqk8p2x
    parent: mbp at sourcefrog.net-20081114075216-0rk6sowvvsgvlkag
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-14 18:04:50 +1000
    message:
      Update news for 288751
    modified:
      NEWS                           NEWS-20050323055033-4e00b5db738777ff
    ------------------------------------------------------------
    revno: 3830.3.10
    revision-id: mbp at sourcefrog.net-20081114075216-0rk6sowvvsgvlkag
    parent: mbp at sourcefrog.net-20081114074457-91gxtjmqa0b1x1h3
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-14 17:52:16 +1000
    message:
      Update more stacking effort tests
    modified:
      bzrlib/tests/test_knit.py      test_knit.py-20051212171302-95d4c00dd5f11f2b
    ------------------------------------------------------------
    revno: 3830.3.9
    revision-id: mbp at sourcefrog.net-20081114074457-91gxtjmqa0b1x1h3
    parent: mbp at sourcefrog.net-20081114061912-0px2cudfp3rugmxh
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-14 17:44:57 +1000
    message:
      Simplify kvf insert_record_stream; add has_key shorthand methods; update stacking effort tests
    modified:
      bzrlib/index.py                index.py-20070712131115-lolkarso50vjr64s-1
      bzrlib/knit.py                 knit.py-20051212171256-f056ac8f0fbe1bd9
      bzrlib/tests/interrepository_implementations/test_fetch.py test_fetch.py-20080425213627-j60cjh782ufm83ry-1
      bzrlib/tests/test_knit.py      test_knit.py-20051212171302-95d4c00dd5f11f2b
    ------------------------------------------------------------
    revno: 3830.3.8
    revision-id: mbp at sourcefrog.net-20081114061912-0px2cudfp3rugmxh
    parent: mbp at sourcefrog.net-20081114053027-qf3ns0kgszqfyjus
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-14 16:19:12 +1000
    message:
      KnitVersionedFiles.insert_record_stream rebuilds repository-spanning deltas when needed
    modified:
      bzrlib/knit.py                 knit.py-20051212171256-f056ac8f0fbe1bd9
    ------------------------------------------------------------
    revno: 3830.3.7
    revision-id: mbp at sourcefrog.net-20081114053027-qf3ns0kgszqfyjus
    parent: mbp at sourcefrog.net-20081114050933-kgm4kr84eba7800m
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-14 15:30:27 +1000
    message:
      KnitVersionedFiles.insert_record_stream checks that compression parents are in the same kvf, not in a fallback
    modified:
      bzrlib/knit.py                 knit.py-20051212171256-f056ac8f0fbe1bd9
    ------------------------------------------------------------
    revno: 3830.3.6
    revision-id: mbp at sourcefrog.net-20081114050933-kgm4kr84eba7800m
    parent: mbp at sourcefrog.net-20081113062150-ppqewoxejts3gvzt
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Fri 2008-11-14 15:09:33 +1000
    message:
      Document _fetch_uses_delta and make it a class attribute
    modified:
      bzrlib/fetch.py                fetch.py-20050818234941-26fea6105696365d
      bzrlib/repofmt/knitrepo.py     knitrepo.py-20070206081537-pyy4a00xdas0j4pf-1
      bzrlib/repository.py           rev_storage.py-20051111201905-119e9401e46257e3
    ------------------------------------------------------------
    revno: 3830.3.5
    revision-id: mbp at sourcefrog.net-20081113062150-ppqewoxejts3gvzt
    parent: mbp at sourcefrog.net-20081113061416-pf1kfgznvnwtiuyj
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Thu 2008-11-13 16:21:50 +1000
    message:
      GraphIndexBuilder shouldn't know references are for compression so rename
    modified:
      bzrlib/index.py                index.py-20070712131115-lolkarso50vjr64s-1
      bzrlib/repofmt/pack_repo.py    pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
    ------------------------------------------------------------
    revno: 3830.3.4
    revision-id: mbp at sourcefrog.net-20081113061416-pf1kfgznvnwtiuyj
    parent: mbp at sourcefrog.net-20081113054826-kp55q8ob8ipt0jsn
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Thu 2008-11-13 16:14:16 +1000
    message:
      Move _external_compression_references onto the GraphIndexBuilder, and check them for inventories too
    modified:
      bzrlib/index.py                index.py-20070712131115-lolkarso50vjr64s-1
      bzrlib/repofmt/pack_repo.py    pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
    ------------------------------------------------------------
    revno: 3830.3.3
    revision-id: mbp at sourcefrog.net-20081113054826-kp55q8ob8ipt0jsn
    parent: mbp at sourcefrog.net-20081112092834-ve92du17xgnvaztg
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Thu 2008-11-13 15:48:26 +1000
    message:
      commit should log original exception when aborting write group
    modified:
      bzrlib/commit.py               commit.py-20050511101309-79ec1a0168e0e825
    ------------------------------------------------------------
    revno: 3830.3.2
    revision-id: mbp at sourcefrog.net-20081112092834-ve92du17xgnvaztg
    parent: mbp at sourcefrog.net-20081112091307-yvchhvmcgr58aca7
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Wed 2008-11-12 19:28:34 +1000
    message:
      Check that newly created packs don't have missing delta bases.
      
      This should cause an error in cases that would lead to bug #288751.
    modified:
      bzrlib/repofmt/pack_repo.py    pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
    ------------------------------------------------------------
    revno: 3830.3.1
    revision-id: mbp at sourcefrog.net-20081112091307-yvchhvmcgr58aca7
    parent: pqm at pqm.ubuntu.com-20081111045205-junyogmq9uajfg6z
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: 288751-pack-deltas
    timestamp: Wed 2008-11-12 19:13:07 +1000
    message:
      NewPack should be constructed from the PackCollection, rather than attributes of it
    modified:
      NEWS                           NEWS-20050323055033-4e00b5db738777ff
      bzrlib/repofmt/pack_repo.py    pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
      bzrlib/tests/test_repository.py test_repository.py-20060131075918-65c555b881612f4d
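
[Editor's note: a toy illustration of the failure mode this branch fixes. All names here are invented; real knit storage is far more involved. Reconstructing a text walks its compression-parent chain, so if a delta's basis lives only in a fallback (stacked-on) repository, the chain breaks as "Revision X not present" when the stacked repository is used on its own.]

```python
def reconstruct(key, local_records):
    """Resolve a text by walking its compression-parent chain.

    local_records maps key -> (kind, payload, parent); 'fulltext' records
    stand alone, 'delta' records need their parent's text first.  A missing
    parent surfaces as KeyError, the toy analogue of "Revision X not
    present in Y".
    """
    kind, payload, parent = local_records[key]
    if kind == 'fulltext':
        return payload
    # A delta can only be applied once its basis text is reconstructed.
    base = reconstruct(parent, local_records)
    return base + payload  # toy "delta application": append the new lines
```

With the basis present locally the chain resolves; if the basis was never copied (because it already existed in the fallback when the delta was fetched), reconstruction fails, which is why fetch must expand such deltas to fulltexts.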
=== modified file 'NEWS'
--- a/NEWS	2008-11-21 04:44:50 +0000
+++ b/NEWS	2008-11-21 20:24:15 +0000
@@ -35,8 +35,12 @@
       being able to allocate a new buffer, which is a gnu extension.  
       (John Arbash Meinel, Martin Pool, Harry Hirsch, #297831)
 
+    * Don't create text deltas spanning stacked repositories; this could
+      cause "Revision X not present in Y" when later accessing them.
+      (Martin Pool, #288751)
+
     * PermissionDenied errors from smart servers no longer cause
-      “PermissionDenied: "None"” on the client.
+      "PermissionDenied: "None"" on the client.
       (Andrew Bennetts, #299254)
       
     * TooManyConcurrentRequests no longer occur when a fetch fails and
@@ -50,6 +54,9 @@
 
   API CHANGES:
 
+    * Constructor parameters for NewPack (internal to pack repositories)
+      have changed incompatibly.
+
     * ``Repository.abort_write_group`` now accepts an optional
       ``suppress_errors`` flag.  Repository implementations that override
       ``abort_write_group`` will need to be updated to accept the new

=== modified file 'bzrlib/commit.py'
--- a/bzrlib/commit.py	2008-11-10 08:26:13 +0000
+++ b/bzrlib/commit.py	2008-11-13 05:48:26 +0000
@@ -60,6 +60,7 @@
     debug,
     errors,
     revision,
+    trace,
     tree,
     )
 from bzrlib.branch import Branch
@@ -384,7 +385,10 @@
                 # Add revision data to the local branch
                 self.rev_id = self.builder.commit(self.message)
 
-            except:
+            except Exception, e:
+                mutter("aborting commit write group because of exception:")
+                trace.log_exception_quietly()
+                note("aborting commit write group: %r" % (e,))
                 self.builder.abort()
                 raise
 

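[Editor's note: the commit.py hunk above (3830.3.3) makes sure the original failure is logged before the write group is aborted, since the abort itself can raise and mask the root cause. A minimal standalone sketch of that pattern, using the stdlib `logging` module in place of bzrlib's `trace` (all names here are invented):]

```python
import logging

def commit_or_abort(builder, message, log=logging.getLogger(__name__)):
    """Commit via `builder`; on failure, log the root cause first, then
    abort the write group and re-raise the original exception."""
    try:
        return builder.commit(message)
    except Exception as e:
        # Log before aborting: builder.abort() may itself raise and would
        # otherwise hide the exception that actually caused the failure.
        log.exception("aborting commit write group: %r", e)
        builder.abort()
        raise
```

The ordering is the point: cleanup runs only after the original exception has been recorded.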
=== modified file 'bzrlib/fetch.py'
--- a/bzrlib/fetch.py	2008-11-12 02:29:03 +0000
+++ b/bzrlib/fetch.py	2008-11-21 20:24:15 +0000
@@ -248,6 +248,7 @@
             child_pb.finished()
 
     def _fetch_revision_texts(self, revs, pb):
+        # fetch signatures first and then the revision texts
         # may need to be a InterRevisionStore call here.
         to_sf = self.to_repository.signatures
         from_sf = self.from_repository.signatures

=== modified file 'bzrlib/index.py'
--- a/bzrlib/index.py	2008-10-29 21:39:27 +0000
+++ b/bzrlib/index.py	2008-11-21 20:20:52 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2007 Canonical Ltd
+# Copyright (C) 2007, 2008 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
@@ -53,6 +53,19 @@
 _newline_null_re = re.compile('[\n\0]')
 
 
+def _has_key_from_parent_map(self, key):
+    """Check if this index has one key.
+
+    If it's possible to check for multiple keys at once through 
+    calling get_parent_map that should be faster.
+    """
+    return (key in self.get_parent_map([key]))
+
+
+def _missing_keys_from_parent_map(self, keys):
+    return set(keys) - set(self.get_parent_map(keys))
+
+
 class GraphIndexBuilder(object):
     """A builder that can build a GraphIndex.
     
@@ -97,6 +110,28 @@
             if not element or _whitespace_re.search(element) is not None:
                 raise errors.BadIndexKey(element)
 
+    def _external_references(self):
+        """Return references that are not present in this index.
+        """
+        keys = set()
+        refs = set()
+        # TODO: JAM 2008-11-21 This makes an assumption about how the reference
+        #       lists are used. It is currently correct for pack-0.92 through
+        #       1.9, which use the node references (3rd column) second
+        #       reference list as the compression parent. Perhaps this should
+        #       be moved into something higher up the stack, since it
+        #       makes assumptions about how the index is used.
+        if self.reference_lists > 1:
+            for node in self.iter_all_entries():
+                keys.add(node[1])
+                refs.update(node[3][1])
+            return refs - keys
+        else:
+            # If reference_lists == 0 there can be no external references, and
+            # if reference_lists == 1, then there isn't a place to store the
+            # compression parent
+            return set()
+
     def _get_nodes_by_key(self):
         if self._nodes_by_key is None:
             nodes_by_key = {}
@@ -1167,6 +1202,8 @@
             found_parents[key] = parents
         return found_parents
 
+    has_key = _has_key_from_parent_map
+
     def insert_index(self, pos, index):
         """Insert a new index in the list of indices to query.
 
@@ -1271,6 +1308,8 @@
             except errors.NoSuchFile:
                 self._reload_or_raise()
 
+    missing_keys = _missing_keys_from_parent_map
+
     def _reload_or_raise(self):
         """We just got a NoSuchFile exception.
 

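[Editor's note: the `_external_references` method added to GraphIndexBuilder above boils down to a set difference: collect every key the index defines and every key its second reference list points at, and report the references that are never defined. A simplified standalone sketch, assuming as the TODO comment notes that the second reference list holds the compression parent (pack formats 0.92 through 1.9):]

```python
def external_references(nodes, reference_lists):
    """Return compression parents referenced but not defined by `nodes`.

    `nodes` is an iterable of (key, ref_lists) pairs, where ref_lists[1]
    holds the compression-parent references.
    """
    if reference_lists <= 1:
        # With 0 lists there are no references at all; with exactly 1
        # there is nowhere to record a compression parent.
        return set()
    keys = set()
    refs = set()
    for key, ref_lists in nodes:
        keys.add(key)
        refs.update(ref_lists[1])
    return refs - keys
```

A non-empty result means some delta in the pack depends on a text the pack does not contain, exactly the condition the new pack-creation check guards against.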
=== modified file 'bzrlib/knit.py'
--- a/bzrlib/knit.py	2008-10-21 03:36:14 +0000
+++ b/bzrlib/knit.py	2008-11-21 21:21:06 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2005, 2006, 2007 Canonical Ltd
+# Copyright (C) 2005, 2006, 2007, 2008 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
@@ -770,8 +770,9 @@
         present_parents = []
         if parent_texts is None:
             parent_texts = {}
-        # Do a single query to ascertain parent presence.
-        present_parent_map = self.get_parent_map(parents)
+        # Do a single query to ascertain parent presence; we only compress
+        # against parents in the same kvf.
+        present_parent_map = self._index.get_parent_map(parents)
         for parent in parents:
             if parent in present_parent_map:
                 present_parents.append(parent)
@@ -1323,6 +1324,11 @@
         # can't generate annotations from new deltas until their basis parent
         # is present anyway, so we get away with not needing an index that
         # includes the new keys.
+        #
+        # See <http://launchpad.net/bugs/300177> about ordering of compression
+        # parents in the records - to be conservative, we insist that all
+        # parents must be present to avoid expanding to a fulltext.
+        #
         # key = basis_parent, value = index entry to add
         buffered_index_entries = {}
         for record in stream:
@@ -1330,7 +1336,21 @@
             # Raise an error when a record is missing.
             if record.storage_kind == 'absent':
                 raise RevisionNotPresent([record.key], self)
-            if record.storage_kind in knit_types:
+            elif ((record.storage_kind in knit_types)
+                  and (not parents
+                       or not self._fallback_vfs
+                       or not self._index.missing_keys(parents)
+                       or self.missing_keys(parents))):
+                # we can insert the knit record literally if either it has no
+                # compression parent OR we already have its basis in this kvf
+                # OR the basis is not present even in the fallbacks.  In the
+                # last case it will either turn up later in the stream and all
+                # will be well, or it won't turn up at all and we'll raise an
+                # error at the end.
+                #
+                # TODO: self.has_key is somewhat redundant with
+                # self._index.has_key; we really want something that directly
+                # asks if it's only present in the fallbacks. -- mbp 20081119
                 if record.storage_kind not in native_types:
                     try:
                         adapter_key = (record.storage_kind, "knit-delta-gz")
@@ -1358,15 +1378,20 @@
                 index_entry = (record.key, options, access_memo, parents)
                 buffered = False
                 if 'fulltext' not in options:
-                    basis_parent = parents[0]
+                    # Not a fulltext, so we need to make sure the compression
+                    # parent will also be present.
                     # Note that pack backed knits don't need to buffer here
                     # because they buffer all writes to the transaction level,
                     # but we don't expose that difference at the index level. If
                     # the query here has sufficient cost to show up in
                     # profiling we should do that.
-                    if basis_parent not in self.get_parent_map([basis_parent]):
+                    #
+                    # They're required to be physically in this
+                    # KnitVersionedFiles, not in a fallback.
+                    compression_parent = parents[0]
+                    if self.missing_keys([compression_parent]):
                         pending = buffered_index_entries.setdefault(
-                            basis_parent, [])
+                            compression_parent, [])
                         pending.append(index_entry)
                         buffered = True
                 if not buffered:
@@ -1375,6 +1400,9 @@
                 self.add_lines(record.key, parents,
                     split_lines(record.get_bytes_as('fulltext')))
             else:
+                # Not a fulltext, and not suitable for direct insertion as a
+                # delta, either because it's not the right format, or because
+                # it depends on a base only present in the fallback kvfs.
                 adapter_key = record.storage_kind, 'fulltext'
                 adapter = get_adapter(adapter_key)
                 lines = split_lines(adapter.get_bytes(
@@ -1395,8 +1423,10 @@
                     del buffered_index_entries[key]
         # If there were any deltas which had a missing basis parent, error.
         if buffered_index_entries:
-            raise errors.RevisionNotPresent(buffered_index_entries.keys()[0],
-                self)
+            from pprint import pformat
+            raise errors.BzrCheckError(
+                "record_stream refers to compression parents not in %r:\n%s"
+                % (self, pformat(sorted(buffered_index_entries.keys()))))
 
     def iter_lines_added_or_present_in_keys(self, keys, pb=None):
         """Iterate over the lines in the versioned files from keys.
@@ -1413,9 +1443,11 @@
         is an iterator).
 
         NOTES:
-         * Lines are normalised by the underlying store: they will all have \n
+         * Lines are normalised by the underlying store: they will all have \\n
            terminators.
          * Lines are returned in arbitrary order.
+         * If a requested key did not change any lines (or didn't have any
+           lines), it may not be mentioned at all in the result.
 
         :return: An iterator over (line, key).
         """
@@ -1447,6 +1479,14 @@
             # change to integrate into the rest of the codebase. RBC 20071110
             for line in line_iterator:
                 yield line, key
+        # If there are still keys we've not yet found, we look in the fallback
+        # vfs, and hope to find them there.  Note that if the keys are found
+        # but had no changes or no content, the fallback may not return
+        # anything.  
+        if keys and not self._fallback_vfs:
+            # XXX: strictly the second parameter is meant to be the file id
+            # but it's not easily accessible here.
+            raise RevisionNotPresent(keys, repr(self))
         for source in self._fallback_vfs:
             if not keys:
                 break
@@ -1455,10 +1495,6 @@
                 source_keys.add(key)
                 yield line, key
             keys.difference_update(source_keys)
-        if keys:
-            # XXX: strictly the second parameter is meant to be the file id
-            # but it's not easily accessible here.
-            raise RevisionNotPresent(keys, repr(self))
         pb.update('Walking content.', total, total)
 
     def _make_line_delta(self, delta_seq, new_content):
@@ -1930,6 +1966,8 @@
         entry = self._kndx_cache[prefix][0][suffix]
         return key, entry[2], entry[3]
 
+    has_key = _mod_index._has_key_from_parent_map
+    
     def _init_index(self, path, extra_lines=[]):
         """Initialize an index."""
         sio = StringIO()
@@ -1995,6 +2033,8 @@
                     del self._filename
                     del self._history
 
+    missing_keys = _mod_index._missing_keys_from_parent_map
+
     def _partition_keys(self, keys):
         """Turn keys into a dict of prefix:suffix_list."""
         result = {}
@@ -2281,6 +2321,8 @@
         node = self._get_node(key)
         return self._node_to_position(node)
 
+    has_key = _mod_index._has_key_from_parent_map
+
     def keys(self):
         """Get all the keys in the collection.
         
@@ -2289,6 +2331,8 @@
         self._check_read()
         return [node[1] for node in self._graph_index.iter_all_entries()]
     
+    missing_keys = _mod_index._missing_keys_from_parent_map
+
     def _node_to_position(self, node):
         """Convert an index value to position details."""
         bits = node[2][1:].split(' ')

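The knit.py hunks above move the RevisionNotPresent check so that it fires only when there are no fallback stores to consult; otherwise each fallback gets a chance to satisfy the remaining keys. A minimal sketch of that lookup pattern, using illustrative stand-in names (SimpleStore and this RevisionNotPresent are not bzrlib's own classes):

```python
class RevisionNotPresent(Exception):
    pass


class SimpleStore(object):
    def __init__(self, lines_by_key, fallbacks=()):
        self._lines = lines_by_key          # {key: [line, ...]}
        self._fallback_vfs = list(fallbacks)

    def _local_iter(self, keys):
        # Yield (line, key) for keys we hold, discarding them from the set.
        for key in list(keys):
            if key in self._lines:
                keys.discard(key)
                for line in self._lines[key]:
                    yield line, key

    def iter_lines_added_or_present_in_keys(self, keys):
        keys = set(keys)
        for item in self._local_iter(keys):
            yield item
        # As in the patch: with no fallbacks, leftover keys are an error...
        if keys and not self._fallback_vfs:
            raise RevisionNotPresent(keys, repr(self))
        # ...otherwise ask each fallback for whatever is still missing.
        for source in self._fallback_vfs:
            if not keys:
                break
            found = set()
            for line, key in source.iter_lines_added_or_present_in_keys(keys):
                found.add(key)
                yield line, key
            keys.difference_update(found)
```

The recursion mirrors stacked repositories: a fallback that is itself missing keys, and has no fallbacks of its own, raises when its generator is consumed.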
=== modified file 'bzrlib/repofmt/pack_repo.py'
--- a/bzrlib/repofmt/pack_repo.py	2008-11-17 22:40:17 +0000
+++ b/bzrlib/repofmt/pack_repo.py	2008-11-21 20:24:15 +0000
@@ -172,14 +172,6 @@
         """The text index is the name + .tix."""
         return self.index_name('text', name)
 
-    def _external_compression_parents_of_texts(self):
-        keys = set()
-        refs = set()
-        for node in self.text_index.iter_all_entries():
-            keys.add(node[1])
-            refs.update(node[3][1])
-        return refs - keys
-
 
 class ExistingPack(Pack):
     """An in memory proxy for an existing .pack and its disk indices."""
@@ -222,28 +214,17 @@
         'signature': ('.six', 3),
         }
 
-    def __init__(self, upload_transport, index_transport, pack_transport,
-        upload_suffix='', file_mode=None, index_builder_class=None,
-        index_class=None):
+    def __init__(self, pack_collection, upload_suffix='', file_mode=None):
         """Create a NewPack instance.
 
-        :param upload_transport: A writable transport for the pack to be
-            incrementally uploaded to.
-        :param index_transport: A writable transport for the pack's indices to
-            be written to when the pack is finished.
-        :param pack_transport: A writable transport for the pack to be renamed
-            to when the upload is complete. This *must* be the same as
-            upload_transport.clone('../packs').
+        :param pack_collection: A PackCollection into which this is being inserted.
         :param upload_suffix: An optional suffix to be given to any temporary
+            files created during the pack creation. e.g. '.autopack'
-        :param file_mode: An optional file mode to create the new files with.
-        :param index_builder_class: Required keyword parameter - the class of
-            index builder to use.
-        :param index_class: Required keyword parameter - the class of index
-            object to use.
+        :param file_mode: Unix permissions for newly created file.
         """
         # The relative locations of the packs are constrained, but all are
         # passed in because the caller has them, so as to avoid object churn.
+        index_builder_class = pack_collection._index_builder_class
         Pack.__init__(self,
             # Revisions: parents list, no text compression.
             index_builder_class(reference_lists=1),
@@ -259,14 +240,15 @@
             # listing.
             index_builder_class(reference_lists=0),
             )
+        self._pack_collection = pack_collection
         # When we make readonly indices, we need this.
-        self.index_class = index_class
+        self.index_class = pack_collection._index_class
         # where should the new pack be opened
-        self.upload_transport = upload_transport
+        self.upload_transport = pack_collection._upload_transport
         # where are indices written out to
-        self.index_transport = index_transport
+        self.index_transport = pack_collection._index_transport
         # where is the pack renamed to when it is finished?
-        self.pack_transport = pack_transport
+        self.pack_transport = pack_collection._pack_transport
         # What file mode to upload the pack and indices with.
         self._file_mode = file_mode
         # tracks the content written to the .pack file.
@@ -334,6 +316,35 @@
         else:
             raise AssertionError(self._state)
 
+    def _check_references(self):
+        """Make sure our external references are present.
+        
+        Packs are allowed to have deltas whose base is not in the pack, but it
+        must be present somewhere in this collection.  It is not allowed to
+        have deltas based on a fallback repository. 
+        (See <https://bugs.launchpad.net/bzr/+bug/288751>)
+        """
+        missing_items = {}
+        for (index_name, external_refs, index) in [
+            ('texts',
+                self.text_index._external_references(),
+                self._pack_collection.text_index.combined_index),
+            ('inventories',
+                self.inventory_index._external_references(),
+                self._pack_collection.inventory_index.combined_index),
+            ]:
+            missing = external_refs.difference(
+                k for (idx, k, v, r) in 
+                index.iter_entries(external_refs))
+            if missing:
+                missing_items[index_name] = sorted(list(missing))
+        if missing_items:
+            from pprint import pformat
+            raise errors.BzrCheckError(
+                "Newly created pack file %r has delta references to "
+                "items not in its repository:\n%s"
+                % (self, pformat(missing_items)))
+
     def data_inserted(self):
         """True if data has been added to this pack."""
         return bool(self.get_revision_count() or
@@ -356,6 +367,7 @@
         if self._buffer[1]:
             self._write_data('', flush=True)
         self.name = self._hash.hexdigest()
+        self._check_references()
         # write indices
         # XXX: It'd be better to write them all to temporary names, then
         # rename them all into place, so that the window when only some are
@@ -609,12 +621,8 @@
 
     def open_pack(self):
         """Open a pack for the pack we are creating."""
-        return NewPack(self._pack_collection._upload_transport,
-            self._pack_collection._index_transport,
-            self._pack_collection._pack_transport, upload_suffix=self.suffix,
-            file_mode=self._pack_collection.repo.bzrdir._get_file_mode(),
-            index_builder_class=self._pack_collection._index_builder_class,
-            index_class=self._pack_collection._index_class)
+        return NewPack(self._pack_collection, upload_suffix=self.suffix,
+                file_mode=self._pack_collection.repo.bzrdir._get_file_mode())
 
     def _copy_revision_texts(self):
         """Copy revision data to the new pack."""
@@ -704,19 +712,6 @@
             self.new_pack.text_index, readv_group_iter, total_items))
         self._log_copied_texts()
 
-    def _check_references(self):
-        """Make sure our external refereneces are present."""
-        external_refs = self.new_pack._external_compression_parents_of_texts()
-        if external_refs:
-            index = self._pack_collection.text_index.combined_index
-            found_items = list(index.iter_entries(external_refs))
-            if len(found_items) != len(external_refs):
-                found_keys = set(k for idx, k, refs, value in found_items)
-                missing_items = external_refs - found_keys
-                missing_file_id, missing_revision_id = missing_items.pop()
-                raise errors.RevisionNotPresent(missing_revision_id,
-                                                missing_file_id)
-
     def _create_pack_from_packs(self):
         self.pb.update("Opening pack", 0, 5)
         self.new_pack = self.open_pack()
@@ -753,7 +748,7 @@
                 time.ctime(), self._pack_collection._upload_transport.base, new_pack.random_name,
                 new_pack.signature_index.key_count(),
                 time.time() - new_pack.start_time)
-        self._check_references()
+        new_pack._check_references()
         if not self._use_pack(new_pack):
             new_pack.abort()
             return None
@@ -1109,9 +1104,9 @@
             output_texts.add_lines(key, parent_keys, text_lines,
                 random_id=True, check_content=False)
         # 5) check that nothing inserted has a reference outside the keyspace.
-        missing_text_keys = self.new_pack._external_compression_parents_of_texts()
+        missing_text_keys = self.new_pack.text_index._external_references()
         if missing_text_keys:
-            raise errors.BzrError('Reference to missing compression parents %r'
+            raise errors.BzrCheckError('Reference to missing compression parents %r'
                 % (missing_text_keys,))
         self._log_copied_texts()
 
@@ -1708,11 +1703,8 @@
         # Do not permit preparation for writing if we're not in a 'write lock'.
         if not self.repo.is_write_locked():
             raise errors.NotWriteLocked(self)
-        self._new_pack = NewPack(self._upload_transport, self._index_transport,
-            self._pack_transport, upload_suffix='.pack',
-            file_mode=self.repo.bzrdir._get_file_mode(),
-            index_builder_class=self._index_builder_class,
-            index_class=self._index_class)
+        self._new_pack = NewPack(self, upload_suffix='.pack',
+            file_mode=self.repo.bzrdir._get_file_mode())
         # allow writing: queue writes to a new index
         self.revision_index.add_writable_index(self._new_pack.revision_index,
             self._new_pack)
@@ -1735,6 +1727,10 @@
             try:
                 self._new_pack.abort()
             finally:
+                # XXX: If we aborted while in the middle of finishing the write
+                # group, _remove_pack_indices can fail because the indexes are
+                # already gone.  If they're not there we shouldn't fail in this
+                # case.  -- mbp 20081113
                 self._remove_pack_indices(self._new_pack)
                 self._new_pack = None
         self.repo._text_knit = None

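The new `NewPack._check_references` above verifies that every delta (compression) parent referenced from outside the new pack still resolves somewhere in the collection's combined indices. A hedged sketch of that check, with plain dicts and sets standing in for bzrlib's GraphIndex objects (this BzrCheckError and the function name are illustrative):

```python
from pprint import pformat


class BzrCheckError(Exception):
    pass


def check_references(pack_name, external_refs_by_index, present_keys_by_index):
    """Raise BzrCheckError if any external reference is unsatisfied."""
    missing_items = {}
    for index_name, external_refs in sorted(external_refs_by_index.items()):
        present = present_keys_by_index.get(index_name, set())
        missing = set(external_refs) - present
        if missing:
            missing_items[index_name] = sorted(missing)
    if missing_items:
        # Report every missing item at once, as the patch's error does.
        raise BzrCheckError(
            "Newly created pack file %r has delta references to "
            "items not in its repository:\n%s"
            % (pack_name, pformat(missing_items)))
```

Collecting all missing items before raising (rather than failing on the first, as the removed RevisionNotPresent-based check did) is what lets the error message enumerate everything at once.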
=== modified file 'bzrlib/repository.py'
--- a/bzrlib/repository.py	2008-11-13 07:11:38 +0000
+++ b/bzrlib/repository.py	2008-11-21 20:24:15 +0000
@@ -2819,6 +2819,9 @@
             # fetching from a stacked repository or into a stacked repository
             # we use the generic fetch logic which uses the VersionedFiles
             # attributes on repository.
+            #
+            # XXX: Andrew suggests removing the check on the target
+            # repository.
             from bzrlib.fetch import RepoFetcher
             fetcher = RepoFetcher(self.target, self.source, revision_id,
                                   pb, find_ghosts)

=== modified file 'bzrlib/tests/branch_implementations/test_stacking.py'
--- a/bzrlib/tests/branch_implementations/test_stacking.py	2008-11-12 06:58:47 +0000
+++ b/bzrlib/tests/branch_implementations/test_stacking.py	2008-11-21 20:24:15 +0000
@@ -30,6 +30,16 @@
 
 class TestStacking(TestCaseWithBranch):
 
+    def check_lines_added_or_present(self, stacked_branch, revid):
+        # similar to a failure seen in bug 288751 by mbp 20081120
+        stacked_repo = stacked_branch.repository
+        stacked_repo.lock_read()
+        try:
+            list(stacked_repo.inventories.iter_lines_added_or_present_in_keys(
+                    [(revid,)]))
+        finally:
+            stacked_repo.unlock()
+
     def test_get_set_stacked_on_url(self):
         # branches must either:
         # raise UnstackableBranchFormat or
@@ -293,6 +303,8 @@
         unstacked.fetch(stacked.branch.repository, 'rev2')
         unstacked.get_revision('rev1')
         unstacked.get_revision('rev2')
+        self.check_lines_added_or_present(stacked.branch, 'rev1')
+        self.check_lines_added_or_present(stacked.branch, 'rev2')
 
     def test_autopack_when_stacked(self):
         # in bzr.dev as of 20080730, autopack was reported to fail in stacked
@@ -334,12 +346,13 @@
         other_tree = other_dir.open_workingtree()
         text_lines[9] = 'changed in other\n'
         self.build_tree_contents([('other/a', ''.join(text_lines))])
-        other_tree.commit('commit in other')
+        stacked_revid = other_tree.commit('commit in other')
         # this should have generated a delta; try to pull that across
         # bug 252821 caused a RevisionNotPresent here...
         stacked_tree.pull(other_tree.branch)
         stacked_tree.branch.repository.pack()
         stacked_tree.branch.check()
+        self.check_lines_added_or_present(stacked_tree.branch, stacked_revid)
 
     def test_fetch_revisions_with_file_changes(self):
         # Fetching revisions including file changes into a stacked branch
@@ -371,6 +384,7 @@
         rtree.lock_read()
         self.addCleanup(rtree.unlock)
         self.assertEqual('new content', rtree.get_file_by_path('a').read())
+        self.check_lines_added_or_present(target, 'rev2')
 
     def test_transform_fallback_location_hook(self):
         # The 'transform_fallback_location' branch hook allows us to inspect

=== modified file 'bzrlib/tests/interrepository_implementations/test_fetch.py'
--- a/bzrlib/tests/interrepository_implementations/test_fetch.py	2008-08-10 07:00:59 +0000
+++ b/bzrlib/tests/interrepository_implementations/test_fetch.py	2008-11-19 07:44:51 +0000
@@ -101,9 +101,12 @@
         # generally do).
         try:
             to_repo.fetch(tree.branch.repository, 'rev-two')
-        except errors.RevisionNotPresent, e:
+        except (errors.BzrCheckError, errors.RevisionNotPresent), e:
             # If an exception is raised, the revision should not be in the
             # target.
+            # 
+            # Can also just raise a generic check error; stream insertion
+            # does this to include all the missing data
             self.assertRaises((errors.NoSuchRevision, errors.RevisionNotPresent),
                               to_repo.revision_tree, 'rev-two')
         else:

=== modified file 'bzrlib/tests/test_knit.py'
--- a/bzrlib/tests/test_knit.py	2008-10-21 03:47:13 +0000
+++ b/bzrlib/tests/test_knit.py	2008-11-14 07:52:16 +0000
@@ -1435,7 +1435,9 @@
         basis.calls = []
         test.add_lines(key_cross_border, (key_basis,), ['foo\n'])
         self.assertEqual('fulltext', test._index.get_method(key_cross_border))
-        self.assertEqual([("get_parent_map", set([key_basis]))], basis.calls)
+        # we don't even need to look at the basis to see that this should be
+        # stored as a fulltext
+        self.assertEqual([], basis.calls)
         # Subsequent adds do delta.
         basis.calls = []
         test.add_lines(key_delta, (key_cross_border,), ['foo\n'])
@@ -1696,7 +1698,11 @@
         source.add_lines(key_delta, (key_basis,), ['bar\n'])
         stream = source.get_record_stream([key_delta], 'unordered', False)
         test.insert_record_stream(stream)
-        self.assertEqual([("get_parent_map", set([key_basis]))],
+        # XXX: this does somewhat too many calls in making sure of whether it
+        # has to recreate the full text.
+        self.assertEqual([("get_parent_map", set([key_basis])),
+             ('get_parent_map', set([key_basis])),
+             ('get_record_stream', [key_basis], 'unordered', True)],
             basis.calls)
         self.assertEqual({key_delta:(key_basis,)},
             test.get_parent_map([key_delta]))
@@ -1763,8 +1769,7 @@
         test.add_mpdiffs([(key_delta, (key_basis,),
             source.get_sha1s([key_delta])[key_delta], diffs[0])])
         self.assertEqual([("get_parent_map", set([key_basis])),
-            ('get_record_stream', [key_basis], 'unordered', True),
-            ('get_parent_map', set([key_basis]))],
+            ('get_record_stream', [key_basis], 'unordered', True),],
             basis.calls)
         self.assertEqual({key_delta:(key_basis,)},
             test.get_parent_map([key_delta]))
@@ -1789,14 +1794,13 @@
                 multiparent.NewText(['foo\n']),
                 multiparent.ParentText(1, 0, 2, 1)])],
             diffs)
-        self.assertEqual(4, len(basis.calls))
+        self.assertEqual(3, len(basis.calls))
         self.assertEqual([
             ("get_parent_map", set([key_left, key_right])),
             ("get_parent_map", set([key_left, key_right])),
-            ("get_parent_map", set([key_left, key_right])),
             ],
-            basis.calls[:3])
-        last_call = basis.calls[3]
+            basis.calls[:-1])
+        last_call = basis.calls[-1]
         self.assertEqual('get_record_stream', last_call[0])
         self.assertEqual(set([key_left, key_right]), set(last_call[1]))
         self.assertEqual('unordered', last_call[2])

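The test_knit.py changes above assert on `basis.calls`, a list of recorded method invocations on a test double. A minimal recording double in that style (the class name is illustrative; bzrlib's tests use a richer fake):

```python
class RecordingBasis(object):
    """Answers get_parent_map from a dict and logs every call."""

    def __init__(self, parent_map):
        self._parent_map = parent_map  # {key: (parent_key, ...)}
        self.calls = []

    def get_parent_map(self, keys):
        # Record the call exactly as the tests above expect to see it.
        self.calls.append(("get_parent_map", set(keys)))
        return dict((k, self._parent_map[k]) for k in keys
                    if k in self._parent_map)
```

Resetting `calls = []` between phases, as the tests do, keeps each assertion scoped to one operation.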
=== modified file 'bzrlib/tests/test_repository.py'
--- a/bzrlib/tests/test_repository.py	2008-10-29 21:39:27 +0000
+++ b/bzrlib/tests/test_repository.py	2008-11-21 21:41:03 +0000
@@ -744,7 +744,8 @@
         """
         broken_repo = self.make_broken_repository()
         empty_repo = self.make_repository('empty-repo')
-        self.assertRaises(errors.RevisionNotPresent, empty_repo.fetch, broken_repo)
+        self.assertRaises((errors.RevisionNotPresent, errors.BzrCheckError),
+                          empty_repo.fetch, broken_repo)
 
 
 class TestRepositoryPackCollection(TestCaseWithTransport):
@@ -1031,9 +1032,14 @@
         pack_transport = self.get_transport('pack')
         index_transport = self.get_transport('index')
         upload_transport.mkdir('.')
-        pack = pack_repo.NewPack(upload_transport, index_transport,
-            pack_transport, index_builder_class=BTreeBuilder,
+        collection = pack_repo.RepositoryPackCollection(repo=None,
+            transport=self.get_transport('.'),
+            index_transport=index_transport,
+            upload_transport=upload_transport,
+            pack_transport=pack_transport,
+            index_builder_class=BTreeBuilder,
             index_class=BTreeGraphIndex)
+        pack = pack_repo.NewPack(collection)
         self.assertIsInstance(pack.revision_index, BTreeBuilder)
         self.assertIsInstance(pack.inventory_index, BTreeBuilder)
         self.assertIsInstance(pack._hash, type(osutils.md5()))

=== modified file 'bzrlib/tests/test_revision.py'
--- a/bzrlib/tests/test_revision.py	2008-07-21 08:07:23 +0000
+++ b/bzrlib/tests/test_revision.py	2008-11-21 21:11:51 +0000
@@ -47,7 +47,7 @@
 
     the object graph is
     B:     A:
-    a..0   a..0 
+    a..0   a..0
     a..1   a..1
     a..2   a..2
     b..3   a..3 merges b..4
@@ -60,30 +60,30 @@
     """
     tree1 = self.make_branch_and_tree("branch1", format=format)
     br1 = tree1.branch
-    
+
     tree1.commit("Commit one", rev_id="a at u-0-0")
     tree1.commit("Commit two", rev_id="a at u-0-1")
     tree1.commit("Commit three", rev_id="a at u-0-2")
 
-    tree2 = tree1.bzrdir.clone("branch2").open_workingtree()
+    tree2 = tree1.bzrdir.sprout("branch2").open_workingtree()
     br2 = tree2.branch
     tree2.commit("Commit four", rev_id="b at u-0-3")
     tree2.commit("Commit five", rev_id="b at u-0-4")
     revisions_2 = br2.revision_history()
     self.assertEquals(revisions_2[-1], 'b at u-0-4')
-    
+
     tree1.merge_from_branch(br2)
     tree1.commit("Commit six", rev_id="a at u-0-3")
     tree1.commit("Commit seven", rev_id="a at u-0-4")
     tree2.commit("Commit eight", rev_id="b at u-0-5")
     self.assertEquals(br2.revision_history()[-1], 'b at u-0-5')
-    
+
     tree1.merge_from_branch(br2)
     tree1.commit("Commit nine", rev_id="a at u-0-5")
     # DO NOT MERGE HERE - we WANT a GHOST.
     tree2.add_parent_tree_id(br1.revision_history()[4])
     tree2.commit("Commit ten - ghost merge", rev_id="b at u-0-6")
-    
+
     return br1, br2
 
 

=== modified file 'bzrlib/versionedfile.py'
--- a/bzrlib/versionedfile.py	2008-09-08 13:35:44 +0000
+++ b/bzrlib/versionedfile.py	2008-11-21 20:20:52 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2005, 2006 Canonical Ltd
+# Copyright (C) 2005, 2006, 2007, 2008 Canonical Ltd
 #
 # Authors:
 #   Johan Rydberg <jrydberg at gnu.org>
@@ -30,6 +30,7 @@
 
 from bzrlib import (
     errors,
+    index,
     osutils,
     multiparent,
     tsort,
@@ -846,6 +847,8 @@
         """
         raise NotImplementedError(self.get_sha1s)
 
+    has_key = index._has_key_from_parent_map
+
     def insert_record_stream(self, stream):
         """Insert a record stream into this container.
 
@@ -922,6 +925,8 @@
                 parent_lines, left_parent_blocks))
         return diffs
 
+    missing_keys = index._missing_keys_from_parent_map
+
     def _extract_blocks(self, version_id, source, target):
         return None
 

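The versionedfile.py hunks assign `index._has_key_from_parent_map` and `index._missing_keys_from_parent_map` directly as methods, so any class exposing get_parent_map picks up has_key/missing_keys without duplicating the logic. A sketch of that shared-helper pattern (the helper bodies and the DictBackedFiles class are stand-ins, not bzrlib code):

```python
def _has_key_from_parent_map(self, key):
    # A key is present iff get_parent_map knows about it.
    return key in self.get_parent_map([key])


def _missing_keys_from_parent_map(self, keys):
    return set(keys) - set(self.get_parent_map(keys))


class DictBackedFiles(object):
    def __init__(self, parents):
        self._parents = parents  # {key: (parent_key, ...)}

    def get_parent_map(self, keys):
        return dict((k, self._parents[k]) for k in keys
                    if k in self._parents)

    # Module-level functions become methods when assigned as class
    # attributes, so every store reuses one implementation.
    has_key = _has_key_from_parent_map
    missing_keys = _missing_keys_from_parent_map
```

This is why the patch only needs two one-line `+` additions per index class to grow the new API.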


