Rev 2779: Merge Martin's refactoring work. in http://people.ubuntu.com/~robertc/baz2.0/repository
Robert Collins
robertc at robertcollins.net
Tue Sep 25 02:48:25 BST 2007
At http://people.ubuntu.com/~robertc/baz2.0/repository
------------------------------------------------------------
revno: 2779
revision-id: robertc at robertcollins.net-20070925014751-skcw309yznr5czka
parent: robertc at robertcollins.net-20070924055425-ihuk0s7nfwxqi2wn
parent: mbp at sourcefrog.net-20070921082817-mklp3xizm3397wry
committer: Robert Collins <robertc at robertcollins.net>
branch nick: repository
timestamp: Tue 2007-09-25 11:47:51 +1000
message:
Merge Martin's refactoring work.
modified:
bzrlib/index.py index.py-20070712131115-lolkarso50vjr64s-1
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
bzrlib/repository.py rev_storage.py-20051111201905-119e9401e46257e3
------------------------------------------------------------
revno: 2745.1.17
revision-id: mbp at sourcefrog.net-20070921082817-mklp3xizm3397wry
parent: mbp at sourcefrog.net-20070921082141-9334ez9op3439jvj
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Fri 2007-09-21 18:28:17 +1000
message:
Remove more duplicated code
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
------------------------------------------------------------
revno: 2745.1.16
revision-id: mbp at sourcefrog.net-20070921082141-9334ez9op3439jvj
parent: mbp at sourcefrog.net-20070921080858-gwydequ77jjaxxz8
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Fri 2007-09-21 18:21:41 +1000
message:
Split out more common code for making index maps
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
------------------------------------------------------------
revno: 2745.1.15
revision-id: mbp at sourcefrog.net-20070921080858-gwydequ77jjaxxz8
parent: mbp at sourcefrog.net-20070921064936-c61wh4wihojgyvg1
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Fri 2007-09-21 18:08:58 +1000
message:
Split out common code for making index maps
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
------------------------------------------------------------
revno: 2745.1.14
revision-id: mbp at sourcefrog.net-20070921064936-c61wh4wihojgyvg1
parent: mbp at sourcefrog.net-20070921064711-6cghit2xbstwq8gr
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Fri 2007-09-21 16:49:36 +1000
message:
Fix up one remaining caller to RepositoryPackCollection.save
modified:
bzrlib/repository.py rev_storage.py-20051111201905-119e9401e46257e3
------------------------------------------------------------
revno: 2745.1.13
revision-id: mbp at sourcefrog.net-20070921064711-6cghit2xbstwq8gr
parent: mbp at sourcefrog.net-20070921064308-3ffuutrtw7tyzhhd
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Fri 2007-09-21 16:47:11 +1000
message:
Clean up duplicate index_transport variables
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
------------------------------------------------------------
revno: 2745.1.12
revision-id: mbp at sourcefrog.net-20070921064308-3ffuutrtw7tyzhhd
parent: mbp at sourcefrog.net-20070921063646-m15hirv0dkrxfwb0
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Fri 2007-09-21 16:43:08 +1000
message:
Move pack_transport and pack_name onto RepositoryPackCollection
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
------------------------------------------------------------
revno: 2745.1.11
revision-id: mbp at sourcefrog.net-20070921063646-m15hirv0dkrxfwb0
parent: mbp at sourcefrog.net-20070921054103-35zlfela1bbsafwo
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Fri 2007-09-21 16:36:46 +1000
message:
Move upload_transport from pack repositories to the pack collection
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
------------------------------------------------------------
revno: 2745.1.10
revision-id: mbp at sourcefrog.net-20070921054103-35zlfela1bbsafwo
parent: mbp at sourcefrog.net-20070920080458-fhr70d0gbnd82fio
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Fri 2007-09-21 15:41:03 +1000
message:
Rename RepositoryPackCollection.save to _save_pack_names
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
------------------------------------------------------------
revno: 2745.1.9
revision-id: mbp at sourcefrog.net-20070920080458-fhr70d0gbnd82fio
parent: mbp at sourcefrog.net-20070920074323-xm81gimzuqqi0ogt
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Thu 2007-09-20 18:04:58 +1000
message:
Move some more bits that seem to belong in RepositoryPackCollection into there
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
------------------------------------------------------------
revno: 2745.1.8
revision-id: mbp at sourcefrog.net-20070920074323-xm81gimzuqqi0ogt
parent: mbp at sourcefrog.net-20070919131232-0gtp1q90fxz10ctn
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Thu 2007-09-20 17:43:23 +1000
message:
Delegate abort_write_group to RepositoryPackCollection
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
------------------------------------------------------------
revno: 2745.1.7
revision-id: mbp at sourcefrog.net-20070919131232-0gtp1q90fxz10ctn
parent: mbp at sourcefrog.net-20070919051252-trwq0anc4rqzf6pr
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Wed 2007-09-19 23:12:32 +1000
message:
move commit_write_group to RepositoryPackCollection
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
------------------------------------------------------------
revno: 2745.1.6
revision-id: mbp at sourcefrog.net-20070919051252-trwq0anc4rqzf6pr
parent: mbp at sourcefrog.net-20070919042142-6816rhjjc6apvrii
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Wed 2007-09-19 15:12:52 +1000
message:
Move pack repository start_write_group to pack collection object
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
------------------------------------------------------------
revno: 2745.1.5
revision-id: mbp at sourcefrog.net-20070919042142-6816rhjjc6apvrii
parent: mbp at sourcefrog.net-20070919035909-yeq9cyvpr1e5y46t
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Wed 2007-09-19 14:21:42 +1000
message:
Make RepositoryPackCollection remember the index transport, and responsible for getting a map of indexes
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
------------------------------------------------------------
revno: 2745.1.4
revision-id: mbp at sourcefrog.net-20070919035909-yeq9cyvpr1e5y46t
parent: mbp at sourcefrog.net-20070919030529-v5pam6fom6s6iqxa
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Wed 2007-09-19 13:59:09 +1000
message:
Add CombinedGraphIndex repr
modified:
bzrlib/index.py index.py-20070712131115-lolkarso50vjr64s-1
------------------------------------------------------------
revno: 2745.1.3
revision-id: mbp at sourcefrog.net-20070919030529-v5pam6fom6s6iqxa
parent: mbp at sourcefrog.net-20070914031101-xd4qllczi7pomemm
parent: robertc at robertcollins.net-20070914024636-6olqfc532pyt61kl
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Wed 2007-09-19 13:05:29 +1000
message:
Merge Robert and indirectly trunk
added:
bzrlib/_patiencediff_c.c _patiencediff_c.c-20070721205602-q3imkipwlgagp3cy-1
bzrlib/benchmarks/bench_pack.py bench_pack.py-20070903042947-0wphp878xr6wkw7t-1
bzrlib/patiencediff.py patiencediff.py-20070721205536-jz8gaykeb7xtampk-1
bzrlib/tests/blackbox/test_unknowns.py test_unknowns.py-20070905015344-74tg6s1synijo2oe-1
bzrlib/tests/commands/test_update.py test_update.py-20070910091045-8uyp8v73j926l1g2-1
bzrlib/tests/inventory_implementations/ bzrlibtestsinventory-20070820060653-4mjbbmwhp74dsf3x-1
bzrlib/tests/inventory_implementations/__init__.py __init__.py-20070821044532-olbadbokgv3qv1yd-1
bzrlib/tests/inventory_implementations/basics.py basics.py-20070903044446-kdjwbiu1p1zi9phs-1
bzrlib/tests/tree_implementations/test_path_content_summary.py test_path_content_su-20070904100855-3vrwedz6akn34kl5-1
doc/developers/missing.txt missing.txt-20070718093412-eqjvfwo0oacov5sn-1
doc/en/user-guide/hooks.txt hooks.txt-20070829200551-7nr6e5a1io6x78uf-1
doc/en/user-reference/hooks.txt hooks.txt-20070830033044-xxu2rced13f72dka-1
doc/en/user-reference/index.txt index.txt-20070830033353-ud9e03xsh24053oo-1
renamed:
bzrlib/patiencediff.py => bzrlib/_patiencediff_py.py cdvdifflib.py-20051106064558-f8f8097fbf0db4e4
modified:
Makefile Makefile-20050805140406-d96e3498bb61c5bb
NEWS NEWS-20050323055033-4e00b5db738777ff
bzr bzr.py-20050313053754-5485f144c7006fa6
bzrlib/__init__.py __init__.py-20050309040759-33e65acf91bbcd5d
bzrlib/annotate.py annotate.py-20050922133147-7c60541d2614f022
bzrlib/benchmarks/__init__.py __init__.py-20060516064526-eb0d37c78e86065d
bzrlib/benchmarks/tree_creator/kernel_like.py kernel_like.py-20060815024128-b16a7pn542u6b13k-1
bzrlib/branch.py branch.py-20050309040759-e4baf4e0d046576e
bzrlib/builtins.py builtins.py-20050830033751-fc01482b9ca23183
bzrlib/bzrdir.py bzrdir.py-20060131065624-156dfea39c4387cb
bzrlib/commands.py bzr.py-20050309040720-d10f4714595cf8c3
bzrlib/commit.py commit.py-20050511101309-79ec1a0168e0e825
bzrlib/debug.py debug.py-20061102062349-vdhrw9qdpck8cl35-1
bzrlib/diff.py diff.py-20050309040759-26944fbbf2ebbf36
bzrlib/dirstate.py dirstate.py-20060728012006-d6mvoihjb3je9peu-1
bzrlib/errors.py errors.py-20050309040759-20512168c4e14fbd
bzrlib/fetch.py fetch.py-20050818234941-26fea6105696365d
bzrlib/graph.py graph_walker.py-20070525030359-y852guab65d4wtn0-1
bzrlib/help.py help.py-20050505025907-4dd7a6d63912f894
bzrlib/help_topics.py help_topics.py-20060920210027-rnim90q9e0bwxvy4-1
bzrlib/inventory.py inventory.py-20050309040759-6648b84ca2005b37
bzrlib/knit.py knit.py-20051212171256-f056ac8f0fbe1bd9
bzrlib/log.py log.py-20050505065812-c40ce11702fe5fb1
bzrlib/lsprof.py lsprof.py-20051208071030-833790916798ceed
bzrlib/memorytree.py memorytree.py-20060906023413-4wlkalbdpsxi2r4y-1
bzrlib/msgeditor.py msgeditor.py-20050901111708-ef6d8de98f5d8f2f
bzrlib/option.py option.py-20051014052914-661fb36e76e7362f
bzrlib/pack.py container.py-20070607160755-tr8zc26q18rn0jnb-1
bzrlib/plugin.py plugin.py-20050622060424-829b654519533d69
bzrlib/reconcile.py reweave_inventory.py-20051108164726-1e5e0934febac06e
bzrlib/repofmt/knitrepo.py knitrepo.py-20070206081537-pyy4a00xdas0j4pf-1
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
bzrlib/repofmt/weaverepo.py presplitout.py-20070125045333-wfav3tsh73oxu3zk-1
bzrlib/repository.py rev_storage.py-20051111201905-119e9401e46257e3
bzrlib/revision.py revision.py-20050309040759-e77802c08f3999d5
bzrlib/revisiontree.py revisiontree.py-20060724012533-bg8xyryhxd0o0i0h-1
bzrlib/store/revision/knit.py knit.py-20060303020652-de5fa299e941a3c7
bzrlib/tag.py tag.py-20070212110532-91cw79inah2cfozx-1
bzrlib/tests/__init__.py selftest.py-20050531073622-8d0e3c8845c97a64
bzrlib/tests/blackbox/__init__.py __init__.py-20051128053524-eba30d8255e08dc3
bzrlib/tests/blackbox/test_annotate.py testannotate.py-20051013044000-457f44801bfa9d39
bzrlib/tests/blackbox/test_cat.py test_cat.py-20051201162916-f0937e4e19ea24b3
bzrlib/tests/blackbox/test_commit.py test_commit.py-20060212094538-ae88fc861d969db0
bzrlib/tests/blackbox/test_conflicts.py test_conflicts.py-20060228151432-9723ebb925b999cf
bzrlib/tests/blackbox/test_diff.py test_diff.py-20060110203741-aa99ac93e633d971
bzrlib/tests/blackbox/test_help.py test_help.py-20060216004358-4ee8a2a338f75a62
bzrlib/tests/blackbox/test_ignore.py test_ignore.py-20060703063225-4tm8dc2pa7wwg2t3-1
bzrlib/tests/blackbox/test_info.py test_info.py-20060215045507-bbdd2d34efab9e0a
bzrlib/tests/blackbox/test_locale.py test_lang.py-20060824204205-80v50j25qkuop7yn-1
bzrlib/tests/blackbox/test_merge.py test_merge.py-20060323225809-9bc0459c19917f41
bzrlib/tests/blackbox/test_mv.py test_mv.py-20060705114902-33tkxz0o9cdshemo-1
bzrlib/tests/blackbox/test_nick.py test_nick.py-20061105141046-p7zovcsit44uj4w9-1
bzrlib/tests/blackbox/test_outside_wt.py test_outside_wt.py-20060116200058-98edd33e7db8bdde
bzrlib/tests/blackbox/test_pull.py test_pull.py-20051201144907-64959364f629947f
bzrlib/tests/blackbox/test_remove.py test_remove.py-20060530011439-fika5rm84lon0goe-1
bzrlib/tests/blackbox/test_send.py test_bundle.py-20060616222707-c21c8b7ea5ef57b1
bzrlib/tests/blackbox/test_status.py teststatus.py-20050712014354-508855eb9f29f7dc
bzrlib/tests/blackbox/test_version.py test_version.py-20070312060045-ol7th9z035r3im3d-1
bzrlib/tests/branch_implementations/test_commit.py test_commit.py-20070206022134-117z1i5b644p63r0-1
bzrlib/tests/branch_implementations/test_sprout.py test_sprout.py-20070521151739-b8t8p7axw1h966ws-1
bzrlib/tests/bzrdir_implementations/test_bzrdir.py test_bzrdir.py-20060131065642-0ebeca5e30e30866
bzrlib/tests/commands/__init__.py __init__.py-20070520095518-ecfl8531fxgjeycj-1
bzrlib/tests/commands/test_missing.py test_missing.py-20070525171057-qr1z4sleurlp9b5v-1
bzrlib/tests/interrepository_implementations/test_interrepository.py test_interrepository.py-20060220061411-1ec13fa99e5e3eee
bzrlib/tests/intertree_implementations/test_compare.py test_compare.py-20060724101752-09ysswo1a92uqyoz-2
bzrlib/tests/repository_implementations/test_commit_builder.py test_commit_builder.py-20060606110838-76e3ra5slucqus81-1
bzrlib/tests/test_annotate.py test_annotate.py-20061213215015-sttc9agsxomls7q0-1
bzrlib/tests/test_branch.py test_branch.py-20060116013032-97819aa07b8ab3b5
bzrlib/tests/test_commit.py test_commit.py-20050914060732-279f057f8c295434
bzrlib/tests/test_diff.py testdiff.py-20050727164403-d1a3496ebb12e339
bzrlib/tests/test_dirstate.py test_dirstate.py-20060728012006-d6mvoihjb3je9peu-2
bzrlib/tests/test_errors.py test_errors.py-20060210110251-41aba2deddf936a8
bzrlib/tests/test_ftp_transport.py test_aftp_transport.-20060823221619-98mwjzxtwtkt527k-1
bzrlib/tests/test_graph.py test_graph_walker.py-20070525030405-enq4r60hhi9xrujc-1
bzrlib/tests/test_help.py test_help.py-20070419045354-6q6rq15j9e2n5fna-1
bzrlib/tests/test_inv.py testinv.py-20050722220913-1dc326138d1a5892
bzrlib/tests/test_knit.py test_knit.py-20051212171302-95d4c00dd5f11f2b
bzrlib/tests/test_log.py testlog.py-20050728115707-1a514809d7d49309
bzrlib/tests/test_lsprof.py test_lsprof.py-20070606095601-bctdndm8yhc0cqnc-1
bzrlib/tests/test_merge.py testmerge.py-20050905070950-c1b5aa49ff911024
bzrlib/tests/test_merge_core.py test_merge_core.py-20050824132511-eb99b23a0eec641b
bzrlib/tests/test_msgeditor.py test_msgeditor.py-20051202041359-920315ec6011ee51
bzrlib/tests/test_options.py testoptions.py-20051014093702-96457cfc86319a8f
bzrlib/tests/test_osutils.py test_osutils.py-20051201224856-e48ee24c12182989
bzrlib/tests/test_plugins.py plugins.py-20050622075746-32002b55e5e943e9
bzrlib/tests/test_repository.py test_repository.py-20060131075918-65c555b881612f4d
bzrlib/tests/test_revert.py test_revert.py-20060828180832-fqb1v6ecpyvnlitj-1
bzrlib/tests/test_revision.py testrevision.py-20050804210559-46f5e1eb67b01289
bzrlib/tests/test_selftest.py test_selftest.py-20051202044319-c110a115d8c0456a
bzrlib/tests/test_tag.py test_tag.py-20070212110532-91cw79inah2cfozx-2
bzrlib/tests/test_trace.py testtrace.py-20051110225523-a21117fc7a07eeff
bzrlib/tests/test_transform.py test_transaction.py-20060105172520-b3ffb3946550e6c4
bzrlib/tests/test_versionedfile.py test_versionedfile.py-20060222045249-db45c9ed14a1c2e5
bzrlib/tests/test_weave.py testknit.py-20050627023648-9833cc5562ffb785
bzrlib/tests/test_xml.py test_xml.py-20050905091053-80b45588931a9b35
bzrlib/tests/tree_implementations/__init__.py __init__.py-20060717075546-420s7b0bj9hzeowi-2
bzrlib/tests/tree_implementations/test_inv.py test_inv.py-20070312023226-0cdvk5uwhutis9vg-1
bzrlib/tests/tree_implementations/test_tree.py test_tree.py-20061215160206-usu7lwcj8aq2n3br-1
bzrlib/tests/workingtree_implementations/test_commit.py test_commit.py-20060421013633-1610ec2331c8190f
bzrlib/tests/workingtree_implementations/test_executable.py test_executable.py-20060628162557-tr7h57kl80l3ma8i-1
bzrlib/tests/workingtree_implementations/test_inv.py test_inv.py-20070311221604-ighlq8tbn5xq0kuo-1
bzrlib/tests/workingtree_implementations/test_remove.py test_remove.py-20070413183901-rvnp85rtc0q0sclp-1
bzrlib/tests/workingtree_implementations/test_workingtree.py test_workingtree.py-20060203003124-817757d3e31444fb
bzrlib/trace.py trace.py-20050309040759-c8ed824bdcd4748a
bzrlib/transform.py transform.py-20060105172343-dd99e54394d91687
bzrlib/transport/__init__.py transport.py-20050711165921-4978aa7ce1285ad5
bzrlib/transport/ftp.py ftp.py-20051116161804-58dc9506548c2a53
bzrlib/transport/ssh.py ssh.py-20060824042150-0s9787kng6zv1nwq-1
bzrlib/tree.py tree.py-20050309040759-9d5f2496be663e77
bzrlib/tuned_gzip.py tuned_gzip.py-20060407014720-5aadc518e928e8d2
bzrlib/version.py version.py-20060816024207-ves6ult9a11taj9t-1
bzrlib/versionedfile.py versionedfile.py-20060222045106-5039c71ee3b65490
bzrlib/weave.py knit.py-20050627021749-759c29984154256b
bzrlib/workingtree.py workingtree.py-20050511021032-29b6ec0a681e02e3
bzrlib/workingtree_4.py workingtree_4.py-20070208044105-5fgpc5j3ljlh5q6c-1
bzrlib/xml4.py xml4.py-20050916091259-db5ab55e7e6ca324
bzrlib/xml5.py xml5.py-20050907032657-aac8f960815b66b1
bzrlib/xml6.py xml6.py-20060823042456-dbaaq4atrche7xy5-1
bzrlib/xml_serializer.py xml.py-20050309040759-57d51586fdec365d
doc/developers/HACKING.txt HACKING-20050805200004-2a5dc975d870f78c
doc/developers/performance-roadmap.txt performanceroadmap.t-20070507174912-mwv3xv517cs4sisd-2
doc/developers/performance.dot performance.dot-20070527173558-rqaqxn1al7vzgcto-3
doc/developers/update.txt update.txt-20070713074325-vtxf9eb5c6keg30j-1
doc/en/user-guide/index.txt index.txt-20060622101119-tgwtdci8z769bjb9-2
doc/en/user-guide/server.txt server.txt-20060913044801-h939fvbwzz39gf7g-1
doc/en/user-guide/tutorial.txt tutorial.txt-20050804190939-9dcbba2ef053bc84
setup.py setup.py-20050314065409-02f8a0a6e3f9bc70
tools/doc_generate/autodoc_man.py bzrman.py-20050601153041-0ff7f74de456d15e
bzrlib/_patiencediff_py.py cdvdifflib.py-20051106064558-f8f8097fbf0db4e4
------------------------------------------------------------
revno: 2745.1.2
revision-id: mbp at sourcefrog.net-20070914031101-xd4qllczi7pomemm
parent: mbp at sourcefrog.net-20070829084144-p3h18xqaj1hkfzgm
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Fri 2007-09-14 13:11:01 +1000
message:
Add docstring
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
------------------------------------------------------------
revno: 2745.1.1
revision-id: mbp at sourcefrog.net-20070829084144-p3h18xqaj1hkfzgm
parent: robertc at robertcollins.net-20070828221915-fkprmgth95yihpyw
committer: Martin Pool <mbp at sourcefrog.net>
branch nick: pack-repository
timestamp: Wed 2007-08-29 18:41:44 +1000
message:
Fix docstrings for Index.iter_entries etc
modified:
bzrlib/index.py index.py-20070712131115-lolkarso50vjr64s-1
=== modified file 'bzrlib/index.py'
--- a/bzrlib/index.py 2007-09-24 05:54:25 +0000
+++ b/bzrlib/index.py 2007-09-25 01:47:51 +0000
@@ -496,6 +496,11 @@
"""
self._indices = indices
+ def __repr__(self):
+ return "%s(%s)" % (
+ self.__class__.__name__,
+ ', '.join(map(repr, self._indices)))
+
def insert_index(self, pos, index):
"""Insert a new index in the list of indices to query.
=== modified file 'bzrlib/repofmt/pack_repo.py'
--- a/bzrlib/repofmt/pack_repo.py 2007-09-23 22:33:57 +0000
+++ b/bzrlib/repofmt/pack_repo.py 2007-09-25 01:47:51 +0000
@@ -106,10 +106,24 @@
class RepositoryPackCollection(object):
-
- def __init__(self, repo, transport):
+ """Management of packs within a repository."""
+
+ def __init__(self, repo, transport, index_transport, upload_transport,
+ pack_transport):
+ """Create a new RepositoryPackCollection.
+
+ :param transport: Addresses the repository base directory
+ (typically .bzr/repository/).
+ :param index_transport: Addresses the directory containing indexes.
+ :param upload_transport: Addresses the directory into which packs are written
+ while they're being created.
+ :param pack_transport: Addresses the directory of existing complete packs.
+ """
self.repo = repo
self.transport = transport
+ self._index_transport = index_transport
+ self._upload_transport = upload_transport
+ self._pack_transport = pack_transport
self.packs = []
def add_pack_to_memory(self, pack):
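The new constructor takes four separate transports, one per directory the docstring describes. A toy sketch of that layout, with a hypothetical `DummyTransport` that only tracks a path (the directory names `indices`, `upload`, and `packs` are taken from the docstring and diff context above, not from running bzrlib):

```python
import posixpath


class DummyTransport(object):
    """Toy stand-in for a bzrlib transport: it only tracks a base path."""
    def __init__(self, base):
        self.base = base

    def clone(self, relpath):
        # resolve a relative path against the current base, as transports do
        return DummyTransport(posixpath.normpath(posixpath.join(self.base, relpath)))


# The layout implied by the docstring: everything hangs off .bzr/repository/.
repo_transport = DummyTransport('.bzr/repository')
index_transport = repo_transport.clone('indices')   # completed index files
upload_transport = repo_transport.clone('upload')   # packs still being written
pack_transport = repo_transport.clone('packs')      # finished, complete packs
```

Passing these in explicitly, rather than cloning them from `self.repo._upload_transport` at each call site, is what lets the later hunks delete the repeated `clone('../indices')` calls.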
@@ -259,13 +273,13 @@
rev_count = 'all'
mutter('%s: create_pack: creating pack from source packs: '
'%s%s %s revisions wanted %s t=0',
- time.ctime(), self.repo._upload_transport.base, random_name,
+ time.ctime(), self._upload_transport.base, random_name,
plain_pack_list, rev_count)
start_time = time.time()
- write_stream = self.repo._upload_transport.open_write_stream(random_name)
+ write_stream = self._upload_transport.open_write_stream(random_name)
if 'fetch' in debug.debug_flags:
mutter('%s: create_pack: pack stream open: %s%s t+%6.3fs',
- time.ctime(), self.repo._upload_transport.base, random_name,
+ time.ctime(), self._upload_transport.base, random_name,
time.time() - start_time)
pack_hash = md5.new()
buffer = []
@@ -294,7 +308,7 @@
revision_index))
if 'fetch' in debug.debug_flags:
mutter('%s: create_pack: revisions copied: %s%s %d items t+%6.3fs',
- time.ctime(), self.repo._upload_transport.base, random_name,
+ time.ctime(), self._upload_transport.base, random_name,
revision_index.key_count(),
time.time() - start_time)
# select inventory keys
@@ -321,7 +335,7 @@
text_filter = None
if 'fetch' in debug.debug_flags:
mutter('%s: create_pack: inventories copied: %s%s %d items t+%6.3fs',
- time.ctime(), self.repo._upload_transport.base, random_name,
+ time.ctime(), self._upload_transport.base, random_name,
inv_index.key_count(),
time.time() - start_time)
# select text keys
@@ -347,7 +361,7 @@
text_index))
if 'fetch' in debug.debug_flags:
mutter('%s: create_pack: file texts copied: %s%s %d items t+%6.3fs',
- time.ctime(), self.repo._upload_transport.base, random_name,
+ time.ctime(), self._upload_transport.base, random_name,
text_index.key_count(),
time.time() - start_time)
# select signature keys
@@ -358,7 +372,7 @@
self._copy_nodes(signature_nodes, signature_index_map, writer, signature_index)
if 'fetch' in debug.debug_flags:
mutter('%s: create_pack: revision signatures copied: %s%s %d items t+%6.3fs',
- time.ctime(), self.repo._upload_transport.base, random_name,
+ time.ctime(), self._upload_transport.base, random_name,
signature_index.key_count(),
time.time() - start_time)
# finish the pack
@@ -374,20 +388,20 @@
text_index.key_count(),
signature_index.key_count(),
)):
- self.repo._upload_transport.delete(random_name)
+ self._upload_transport.delete(random_name)
return None
result = Pack()
result.name = new_name
- result.transport = self.repo._upload_transport.clone('../packs/')
+ result.transport = self._upload_transport.clone('../packs/')
# write indices
- index_transport = self.repo._upload_transport.clone('../indices')
+ index_transport = self._index_transport
rev_index_name = self.repo._revision_store.name_to_revision_index_name(new_name)
revision_index_length = index_transport.put_file(rev_index_name,
revision_index.finish())
if 'fetch' in debug.debug_flags:
# XXX: size might be interesting?
mutter('%s: create_pack: wrote revision index: %s%s t+%6.3fs',
- time.ctime(), self.repo._upload_transport.base, random_name,
+ time.ctime(), self._upload_transport.base, random_name,
time.time() - start_time)
inv_index_name = self.repo._inv_thunk.name_to_inv_index_name(new_name)
inventory_index_length = index_transport.put_file(inv_index_name,
@@ -395,7 +409,7 @@
if 'fetch' in debug.debug_flags:
# XXX: size might be interesting?
mutter('%s: create_pack: wrote inventory index: %s%s t+%6.3fs',
- time.ctime(), self.repo._upload_transport.base, random_name,
+ time.ctime(), self._upload_transport.base, random_name,
time.time() - start_time)
text_index_name = self.repo.weave_store.name_to_text_index_name(new_name)
text_index_length = index_transport.put_file(text_index_name,
@@ -403,7 +417,7 @@
if 'fetch' in debug.debug_flags:
# XXX: size might be interesting?
mutter('%s: create_pack: wrote file texts index: %s%s t+%6.3fs',
- time.ctime(), self.repo._upload_transport.base, random_name,
+ time.ctime(), self._upload_transport.base, random_name,
time.time() - start_time)
signature_index_name = self.repo._revision_store.name_to_signature_index_name(new_name)
signature_index_length = index_transport.put_file(signature_index_name,
@@ -411,7 +425,7 @@
if 'fetch' in debug.debug_flags:
# XXX: size might be interesting?
mutter('%s: create_pack: wrote revision signatures index: %s%s t+%6.3fs',
- time.ctime(), self.repo._upload_transport.base, random_name,
+ time.ctime(), self._upload_transport.base, random_name,
time.time() - start_time)
# add to name
self.allocate(new_name, revision_index_length, inventory_index_length,
@@ -419,11 +433,11 @@
# rename into place. XXX: should rename each index too rather than just
# uploading blind under the chosen name.
write_stream.close()
- self.repo._upload_transport.rename(random_name, '../packs/' + new_name + '.pack')
+ self._upload_transport.rename(random_name, '../packs/' + new_name + '.pack')
if 'fetch' in debug.debug_flags:
# XXX: size might be interesting?
mutter('%s: create_pack: pack renamed into place: %s%s->%s%s t+%6.3fs',
- time.ctime(), self.repo._upload_transport.base, random_name,
+ time.ctime(), self._upload_transport.base, random_name,
result.transport, result.name,
time.time() - start_time)
result.revision_index = revision_index
@@ -433,7 +447,7 @@
if 'fetch' in debug.debug_flags:
# XXX: size might be interesting?
mutter('%s: create_pack: finished: %s%s t+%6.3fs',
- time.ctime(), self.repo._upload_transport.base, random_name,
+ time.ctime(), self._upload_transport.base, random_name,
time.time() - start_time)
return result
@@ -453,7 +467,7 @@
self._remove_pack_by_name(pack_detail[1])
# record the newly available packs and stop advertising the old
# packs
- self.save()
+ self._save_pack_names()
# move the old packs out of the way
for revision_count, pack_details in pack_operations:
self._obsolete_packs(pack_details)
@@ -663,6 +677,27 @@
self._names[name] = (revision_index_length, inventory_index_length,
text_index_length, signature_index_length)
+ def _make_index_map(self, suffix):
+ """Return information on existing indexes.
+
+ :param suffix: Index suffix added to pack name.
+
+ :returns: (pack_map, indices) where indices is a list of GraphIndex
+ objects, and pack_map is a mapping from those objects to the
+ pack tuple they describe.
+ """
+ indices = []
+ pack_map = {}
+ self.ensure_loaded()
+ for name in self.names():
+ # TODO: maybe this should expose size to us to allow
+ # sorting of the indices for better performance ?
+ index_name = name + suffix
+ new_index = GraphIndex(self._index_transport, index_name)
+ indices.append(new_index)
+ pack_map[new_index] = self._pack_tuple(name)
+ return pack_map, indices
+
def _max_pack_count(self, total_revisions):
"""Return the maximum number of packs to use for total revisions.
@@ -698,9 +733,11 @@
pack_detail[0].rename(pack_detail[1],
'../obsolete_packs/' + pack_detail[1])
basename = pack_detail[1][:-4]
- index_transport = pack_detail[0].clone('../indices')
+ # TODO: Probably needs to know all possible indexes for this pack
+ # - or maybe list the directory and move all indexes matching this
+ # name whether we recognize it or not?
for suffix in ('iix', 'six', 'tix', 'rix'):
- index_transport.rename(basename + suffix,
+ self._index_transport.rename(basename + suffix,
'../obsolete_packs/' + basename + suffix)
def pack_distribution(self, total_revisions):
@@ -719,6 +756,10 @@
result.append(size)
return list(reversed(result))
+ def _pack_tuple(self, name):
+ """Return a tuple with the transport and file name for a pack name."""
+ return self._pack_transport, name + '.pack'
+
def _remove_pack_by_name(self, name):
# strip .pack
self._names.pop(name[:-5])
@@ -727,61 +768,34 @@
self._names = None
self.packs = []
+ def _make_index_to_pack_map(self, pack_details, index_suffix):
+ """Given a list (transport,name), return a map of (index)->(transport, name)."""
+ # the simplest thing for now is to create new index objects.
+ # this should really reuse the existing index objects for these
+ # packs - this means making the way they are managed in the repo be
+ # more sane.
+ indices = {}
+ for transport, name in pack_details:
+ index_name = name[:-5] + index_suffix
+ indices[GraphIndex(self._index_transport, index_name)] = \
+ (transport, name)
+ return indices
+
def _inv_index_map(self, pack_details):
"""Get a map of inv index -> packs for pack_details."""
- # the simplest thing for now is to create new index objects.
- # this should really reuse the existing index objects for these
- # packs - this means making the way they are managed in the repo be
- # more sane.
- indices = {}
- for transport, name in pack_details:
- index_name = name[:-5]
- index_name = self.repo._inv_thunk.name_to_inv_index_name(index_name)
- indices[GraphIndex(transport.clone('../indices'), index_name)] = \
- (transport, name)
- return indices
+ return self._make_index_to_pack_map(pack_details, '.iix')
def _revision_index_map(self, pack_details):
"""Get a map of revision index -> packs for pack_details."""
- # the simplest thing for now is to create new index objects.
- # this should really reuse the existing index objects for these
- # packs - this means making the way they are managed in the repo be
- # more sane.
- indices = {}
- for transport, name in pack_details:
- index_name = name[:-5]
- index_name = self.repo._revision_store.name_to_revision_index_name(index_name)
- indices[GraphIndex(transport.clone('../indices'), index_name)] = \
- (transport, name)
- return indices
+ return self._make_index_to_pack_map(pack_details, '.rix')
def _signature_index_map(self, pack_details):
"""Get a map of signature index -> packs for pack_details."""
- # the simplest thing for now is to create new index objects.
- # this should really reuse the existing index objects for these
- # packs - this means making the way they are managed in the repo be
- # more sane.
- indices = {}
- for transport, name in pack_details:
- index_name = name[:-5]
- index_name = self.repo._revision_store.name_to_signature_index_name(index_name)
- indices[GraphIndex(transport.clone('../indices'), index_name)] = \
- (transport, name)
- return indices
+ return self._make_index_to_pack_map(pack_details, '.six')
def _text_index_map(self, pack_details):
"""Get a map of text index -> packs for pack_details."""
- # the simplest thing for now is to create new index objects.
- # this should really reuse the existing index objects for these
- # packs - this means making the way they are managed in the repo be
- # more sane.
- indices = {}
- for transport, name in pack_details:
- index_name = name[:-5]
- index_name = self.repo.weave_store.name_to_text_index_name(index_name)
- indices[GraphIndex(transport.clone('../indices'), index_name)] = \
- (transport, name)
- return indices
+ return self._make_index_to_pack_map(pack_details, '.tix')
def _index_contents(self, pack_map, key_filter=None):
"""Get an iterable of the index contents from a pack_map.
@@ -797,7 +811,7 @@
else:
return all_index.iter_entries(key_filter)
- def save(self):
+ def _save_pack_names(self):
builder = GraphIndexBuilder()
for name, sizes in self._names.iteritems():
builder.add_node((name, ), ' '.join(str(size) for size in sizes))
@@ -808,6 +822,77 @@
if self.repo.control_files._lock_mode != 'w':
raise errors.NotWriteLocked(self)
+ def _start_write_group(self):
+ random_name = self.repo.control_files._lock.nonce
+ self.repo._open_pack_tuple = (self._upload_transport, random_name + '.pack')
+ write_stream = self._upload_transport.open_write_stream(random_name + '.pack')
+ self._write_stream = write_stream
+ self._open_pack_hash = md5.new()
+ def write_data(bytes, write=write_stream.write,
+ update=self._open_pack_hash.update):
+ write(bytes)
+ update(bytes)
+ self._open_pack_writer = pack.ContainerWriter(write_data)
+ self._open_pack_writer.begin()
+ self.setup()
+ self.repo._revision_store.setup()
+ self.repo.weave_store.setup()
+ self.repo._inv_thunk.setup()
+
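[Editor's aside: `_start_write_group` tees every byte through both the upload stream and a running MD5, so the finished pack can later be named by its content hash without re-reading the file. A standalone sketch of that tee pattern, using `hashlib` rather than the legacy `md5` module; the helper name is illustrative, not bzrlib API.]

```python
import hashlib
import io

def make_tee_writer(stream):
    """Return (write_data, hexdigest): write_data writes bytes to the
    stream while folding them into a running MD5, mirroring the
    write/update closure built in _start_write_group."""
    digest = hashlib.md5()
    # Default-argument binding avoids attribute lookups per call, as in
    # the original closure.
    def write_data(data, write=stream.write, update=digest.update):
        write(data)
        update(data)
    return write_data, digest.hexdigest

buf = io.BytesIO()
write_data, hexdigest = make_tee_writer(buf)
write_data(b'pack bytes')
```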
+ def _abort_write_group(self):
+ # FIXME: just drop the transient index.
+ self.repo._revision_store.reset()
+ self.repo.weave_store.reset()
+ self.repo._inv_thunk.reset()
+ # forget what names there are
+ self.reset()
+ self._open_pack_hash = None
+
+ def _commit_write_group(self):
+ data_inserted = (self.repo._revision_store.data_inserted() or
+ self.repo.weave_store.data_inserted() or
+ self.repo._inv_thunk.data_inserted())
+ if data_inserted:
+ self._open_pack_writer.end()
+ new_name = self._open_pack_hash.hexdigest()
+ new_pack = Pack()
+ new_pack.name = new_name
+ new_pack.transport = self._upload_transport.clone('../packs/')
+ # To populate:
+ # new_pack.revision_index =
+ # new_pack.inventory_index =
+ # new_pack.text_index =
+ # new_pack.signature_index =
+ self.repo.weave_store.flush(new_name, new_pack)
+ self.repo._inv_thunk.flush(new_name, new_pack)
+ self.repo._revision_store.flush(new_name, new_pack)
+ self._write_stream.close()
+ self._upload_transport.rename(self.repo._open_pack_tuple[1],
+ '../packs/' + new_name + '.pack')
+ # If this fails, it's a hash collision. We should:
+ # - determine if it's a collision or
+ # - the same content or
+ # - the existing name is not the actual hash - e.g.
+ # it's a deliberate attack or data corruption has
+ # occurred during the write of that file.
+ self.allocate(new_name, new_pack.revision_index_length,
+ new_pack.inventory_index_length, new_pack.text_index_length,
+ new_pack.signature_index_length)
+ self.repo._open_pack_tuple = None
+ if not self.autopack():
+ self._save_pack_names()
+ else:
+ # remove the pending upload
+ self._upload_transport.delete(self.repo._open_pack_tuple[1])
+ self.repo._revision_store.reset()
+ self.repo.weave_store.reset()
+ self.repo._inv_thunk.reset()
+ # forget what names there are - should just refresh and deal with the
+ # delta.
+ self.reset()
+ self._open_pack_hash = None
+ self._write_stream = None
+
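[Editor's aside: `_commit_write_group` writes the pack under a random nonce in upload/ and only renames it into packs/ under its md5 hexdigest once everything is flushed, so readers never observe a half-written pack. A simplified filesystem stand-in for that rename dance; `commit_upload` is a hypothetical helper, not the bzrlib transport API.]

```python
import hashlib
import os
import tempfile

def commit_upload(upload_dir, packs_dir, nonce, data):
    """Write data under a random nonce, then rename it to its
    content-derived name once complete, mirroring the upload/ ->
    packs/ move in _commit_write_group."""
    tmp_path = os.path.join(upload_dir, nonce + '.pack')
    digest = hashlib.md5()
    with open(tmp_path, 'wb') as f:
        f.write(data)
        digest.update(data)
    # The pack only becomes visible under its final, hash-derived name
    # after the write has fully completed.
    final_name = digest.hexdigest() + '.pack'
    os.rename(tmp_path, os.path.join(packs_dir, final_name))
    return final_name
```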
class GraphKnitRevisionStore(KnitRevisionStore):
"""An object to adapt access from RevisionStore's to use GraphKnits.
@@ -839,13 +924,12 @@
"""Get the revision versioned file object."""
if getattr(self.repo, '_revision_knit', None) is not None:
return self.repo._revision_knit
- self.repo._packs.ensure_loaded()
- pack_map, indices = self._make_rev_pack_map()
+ pack_map, indices = self.repo._packs._make_index_map('.rix')
if self.repo.is_in_write_group():
# allow writing: queue writes to a new index
indices.insert(0, self.repo._revision_write_index)
pack_map[self.repo._revision_write_index] = self.repo._open_pack_tuple
- writer = self.repo._open_pack_writer, self.repo._revision_write_index
+ writer = self.repo._packs._open_pack_writer, self.repo._revision_write_index
add_callback = self.repo._revision_write_index.add_nodes
else:
writer = None
@@ -864,35 +948,16 @@
access_method=knit_access)
return self.repo._revision_knit
- def _make_rev_pack_map(self):
- indices = []
- pack_map = {}
- for name in self.repo._packs.names():
- # TODO: maybe this should expose size to us to allow
- # sorting of the indices for better performance ?
- index_name = self.name_to_revision_index_name(name)
- indices.append(GraphIndex(self.transport, index_name))
- pack_map[indices[-1]] = (self.repo._pack_tuple(name))
- return pack_map, indices
-
def get_signature_file(self, transaction):
"""Get the signature versioned file object."""
if getattr(self.repo, '_signature_knit', None) is not None:
return self.repo._signature_knit
- indices = []
- self.repo._packs.ensure_loaded()
- pack_map = {}
- for name in self.repo._packs.names():
- # TODO: maybe this should expose size to us to allow
- # sorting of the indices for better performance ?
- index_name = self.name_to_signature_index_name(name)
- indices.append(GraphIndex(self.transport, index_name))
- pack_map[indices[-1]] = (self.repo._pack_tuple(name))
+ pack_map, indices = self.repo._packs._make_index_map('.six')
if self.repo.is_in_write_group():
# allow writing: queue writes to a new index
indices.insert(0, self.repo._signature_write_index)
pack_map[self.repo._signature_write_index] = self.repo._open_pack_tuple
- writer = self.repo._open_pack_writer, self.repo._signature_write_index
+ writer = self.repo._packs._open_pack_writer, self.repo._signature_write_index
add_callback = self.repo._signature_write_index.add_nodes
else:
writer = None
@@ -929,12 +994,12 @@
# create a pack map for the autopack code - XXX finish
# making a clear managed list of packs, indices and use
# that in these mapping classes
- self.repo._revision_pack_map = self._make_rev_pack_map()[0]
+ self.repo._revision_pack_map = self.repo._packs._make_index_map('.rix')[0]
else:
del self.repo._revision_pack_map[self.repo._revision_write_index]
self.repo._revision_write_index = None
new_index = GraphIndex(self.transport, new_index_name)
- self.repo._revision_pack_map[new_index] = (self.repo._pack_tuple(new_name))
+ self.repo._revision_pack_map[new_index] = (self.repo._packs._pack_tuple(new_name))
# revisions 'knit' accessed : update it.
self.repo._revision_all_indices.insert_index(0, new_index)
# remove the write buffering index. XXX: API break
@@ -988,12 +1053,14 @@
if self.repo._revision_knit is not None:
self.repo._revision_all_indices.insert_index(0, self.repo._revision_write_index)
self.repo._revision_knit._index._add_callback = self.repo._revision_write_index.add_nodes
- self.repo._revision_knit_access.set_writer(self.repo._open_pack_writer,
+ self.repo._revision_knit_access.set_writer(
+ self.repo._packs._open_pack_writer,
self.repo._revision_write_index, self.repo._open_pack_tuple)
if self.repo._signature_knit is not None:
self.repo._signature_all_indices.insert_index(0, self.repo._signature_write_index)
self.repo._signature_knit._index._add_callback = self.repo._signature_write_index.add_nodes
- self.repo._signature_knit_access.set_writer(self.repo._open_pack_writer,
+ self.repo._signature_knit_access.set_writer(
+ self.repo._packs._open_pack_writer,
self.repo._signature_write_index, self.repo._open_pack_tuple)
@@ -1039,15 +1106,8 @@
"""Create the combined index for all texts."""
if getattr(self.repo, '_text_all_indices', None) is not None:
return
- indices = []
- self.repo._packs.ensure_loaded()
- self.repo._text_pack_map = {}
- for name in self.repo._packs.names():
- # TODO: maybe this should expose size to us to allow
- # sorting of the indices for better performance ?
- index_name = self.name_to_text_index_name(name)
- indices.append(GraphIndex(self.transport, index_name))
- self.repo._text_pack_map[indices[-1]] = (self.repo._pack_tuple(name))
+ pack_map, indices = self.repo._packs._make_index_map('.tix')
+ self.repo._text_pack_map = pack_map
if for_write or self.repo.is_in_write_group():
# allow writing: queue writes to a new index
indices.insert(0, self.repo._text_write_index)
@@ -1135,7 +1195,7 @@
def _setup_knit(self, for_write):
if for_write:
- writer = (self.repo._open_pack_writer, self.repo._text_write_index)
+ writer = (self.repo._packs._open_pack_writer, self.repo._text_write_index)
else:
writer = None
self.repo._text_knit_access = _PackAccess(
@@ -1171,15 +1231,7 @@
"""Create the combined index for all inventories."""
if getattr(self.repo, '_inv_all_indices', None) is not None:
return
- indices = []
- self.repo._packs.ensure_loaded()
- pack_map = {}
- for name in self.repo._packs.names():
- # TODO: maybe this should expose size to us to allow
- # sorting of the indices for better performance ?
- index_name = self.name_to_inv_index_name(name)
- indices.append(GraphIndex(self.transport, index_name))
- pack_map[indices[-1]] = (self.repo._pack_tuple(name))
+ pack_map, indices = self.repo._packs._make_index_map('.iix')
if self.repo.is_in_write_group():
# allow writing: queue writes to a new index
indices.append(self.repo._inv_write_index)
@@ -1212,7 +1264,7 @@
if self.repo.is_in_write_group():
add_callback = self.repo._inv_write_index.add_nodes
self.repo._inv_pack_map[self.repo._inv_write_index] = self.repo._open_pack_tuple
- writer = self.repo._open_pack_writer, self.repo._inv_write_index
+ writer = self.repo._packs._open_pack_writer, self.repo._inv_write_index
else:
add_callback = None # no data-adding permitted.
writer = None
@@ -1266,27 +1318,18 @@
KnitRepository.__init__(self, _format, a_bzrdir, control_files,
_revision_store, control_store, text_store)
index_transport = control_files._transport.clone('indices')
- self._packs = RepositoryPackCollection(self, control_files._transport)
+ self._packs = RepositoryPackCollection(self, control_files._transport,
+ index_transport,
+ control_files._transport.clone('upload'),
+ control_files._transport.clone('packs'))
self._revision_store = GraphKnitRevisionStore(self, index_transport, self._revision_store)
self.weave_store = GraphKnitTextStore(self, index_transport, self.weave_store)
self._inv_thunk = InventoryKnitThunk(self, index_transport)
- self._upload_transport = control_files._transport.clone('upload')
- self._pack_transport = control_files._transport.clone('packs')
# for tests
self._reconcile_does_inventory_gc = False
def _abort_write_group(self):
- # FIXME: just drop the transient index.
- self._revision_store.reset()
- self.weave_store.reset()
- self._inv_thunk.reset()
- # forget what names there are
- self._packs.reset()
- self._open_pack_hash = None
-
- def _pack_tuple(self, name):
- """Return a tuple with the transport and file name for a pack name."""
- return self._pack_transport, name + '.pack'
+ self._packs._abort_write_group()
def _refresh_data(self):
if self.control_files._lock_count==1:
@@ -1297,65 +1340,10 @@
self._packs.reset()
def _start_write_group(self):
- random_name = self.control_files._lock.nonce
- self._open_pack_tuple = (self._upload_transport, random_name + '.pack')
- write_stream = self._upload_transport.open_write_stream(random_name + '.pack')
- self._write_stream = write_stream
- self._open_pack_hash = md5.new()
- def write_data(bytes, write=write_stream.write, update=self._open_pack_hash.update):
- write(bytes)
- update(bytes)
- self._open_pack_writer = pack.ContainerWriter(write_data)
- self._open_pack_writer.begin()
- self._packs.setup()
- self._revision_store.setup()
- self.weave_store.setup()
- self._inv_thunk.setup()
+ self._packs._start_write_group()
def _commit_write_group(self):
- data_inserted = (self._revision_store.data_inserted() or
- self.weave_store.data_inserted() or
- self._inv_thunk.data_inserted())
- if data_inserted:
- self._open_pack_writer.end()
- new_name = self._open_pack_hash.hexdigest()
- new_pack = Pack()
- new_pack.name = new_name
- new_pack.transport = self._upload_transport.clone('../packs/')
- # To populate:
- # new_pack.revision_index =
- # new_pack.inventory_index =
- # new_pack.text_index =
- # new_pack.signature_index =
- self.weave_store.flush(new_name, new_pack)
- self._inv_thunk.flush(new_name, new_pack)
- self._revision_store.flush(new_name, new_pack)
- self._write_stream.close()
- self._upload_transport.rename(self._open_pack_tuple[1],
- '../packs/' + new_name + '.pack')
- # If this fails, its a hash collision. We should:
- # - determine if its a collision or
- # - the same content or
- # - the existing name is not the actual hash - e.g.
- # its a deliberate attack or data corruption has
- # occuring during the write of that file.
- self._packs.allocate(new_name, new_pack.revision_index_length,
- new_pack.inventory_index_length, new_pack.text_index_length,
- new_pack.signature_index_length)
- self._open_pack_tuple = None
- if not self._packs.autopack():
- self._packs.save()
- else:
- # remove the pending upload
- self._upload_transport.delete(self._open_pack_tuple[1])
- self._revision_store.reset()
- self.weave_store.reset()
- self._inv_thunk.reset()
- # forget what names there are - should just refresh and deal with the
- # delta.
- self._packs.reset()
- self._open_pack_hash = None
- self._write_stream = None
+ return self._packs._commit_write_group()
def get_inventory_weave(self):
return self._inv_thunk.get_weave()
@@ -1397,27 +1385,18 @@
KnitRepository3.__init__(self, _format, a_bzrdir, control_files,
_revision_store, control_store, text_store)
index_transport = control_files._transport.clone('indices')
- self._packs = RepositoryPackCollection(self, control_files._transport)
+ self._packs = RepositoryPackCollection(self, control_files._transport,
+ index_transport,
+ control_files._transport.clone('upload'),
+ control_files._transport.clone('packs'))
self._revision_store = GraphKnitRevisionStore(self, index_transport, self._revision_store)
self.weave_store = GraphKnitTextStore(self, index_transport, self.weave_store)
self._inv_thunk = InventoryKnitThunk(self, index_transport)
- self._upload_transport = control_files._transport.clone('upload')
- self._pack_transport = control_files._transport.clone('packs')
# for tests
self._reconcile_does_inventory_gc = False
def _abort_write_group(self):
- # FIXME: just drop the transient index.
- self._revision_store.reset()
- self.weave_store.reset()
- self._inv_thunk.reset()
- # forget what names there are
- self._packs.reset()
- self._open_pack_hash = None
-
- def _pack_tuple(self, name):
- """Return a tuple with the transport and file name for a pack name."""
- return self._pack_transport, name + '.pack'
+ return self._packs._abort_write_group()
def _refresh_data(self):
if self.control_files._lock_count==1:
@@ -1428,65 +1407,10 @@
self._packs.reset()
def _start_write_group(self):
- random_name = self.control_files._lock.nonce
- self._open_pack_tuple = (self._upload_transport, random_name + '.pack')
- write_stream = self._upload_transport.open_write_stream(random_name + '.pack')
- self._write_stream = write_stream
- self._open_pack_hash = md5.new()
- def write_data(bytes, write=write_stream.write, update=self._open_pack_hash.update):
- write(bytes)
- update(bytes)
- self._open_pack_writer = pack.ContainerWriter(write_data)
- self._open_pack_writer.begin()
- self._packs.setup()
- self._revision_store.setup()
- self.weave_store.setup()
- self._inv_thunk.setup()
+ self._packs._start_write_group()
def _commit_write_group(self):
- data_inserted = (self._revision_store.data_inserted() or
- self.weave_store.data_inserted() or
- self._inv_thunk.data_inserted())
- if data_inserted:
- self._open_pack_writer.end()
- new_name = self._open_pack_hash.hexdigest()
- new_pack = Pack()
- new_pack.name = new_name
- new_pack.transport = self._upload_transport.clone('../packs/')
- # To populate:
- # new_pack.revision_index =
- # new_pack.inventory_index =
- # new_pack.text_index =
- # new_pack.signature_index =
- self.weave_store.flush(new_name, new_pack)
- self._inv_thunk.flush(new_name, new_pack)
- self._revision_store.flush(new_name, new_pack)
- self._write_stream.close()
- self._upload_transport.rename(self._open_pack_tuple[1],
- '../packs/' + new_name + '.pack')
- # If this fails, its a hash collision. We should:
- # - determine if its a collision or
- # - the same content or
- # - the existing name is not the actual hash - e.g.
- # its a deliberate attack or data corruption has
- # occuring during the write of that file.
- self._packs.allocate(new_name, new_pack.revision_index_length,
- new_pack.inventory_index_length, new_pack.text_index_length,
- new_pack.signature_index_length)
- self._open_pack_tuple = None
- if not self._packs.autopack():
- self._packs.save()
- else:
- # remove the pending upload
- self._upload_transport.delete(self._open_pack_tuple[1])
- self._revision_store.reset()
- self.weave_store.reset()
- self._inv_thunk.reset()
- # forget what names there are - should just refresh and deal with the
- # delta.
- self._packs.reset()
- self._open_pack_hash = None
- self._write_stream = None
+ return self._packs._commit_write_group()
def get_inventory_weave(self):
return self._inv_thunk.get_weave()
=== modified file 'bzrlib/repository.py'
--- a/bzrlib/repository.py 2007-09-23 22:43:32 +0000
+++ b/bzrlib/repository.py 2007-09-25 01:47:51 +0000
@@ -2375,7 +2375,7 @@
revision_ids,
)
if pack is not None:
- self.target._packs.save()
+ self.target._packs._save_pack_names()
self.target._packs.add_pack_to_memory(pack)
# Trigger an autopack. This may duplicate effort as we've just done
# a pack creation, but for now it is simpler to think about as