Rev 4149: (robertc) Add a new repository method refresh_data to allow clean in file:///home/pqm/archives/thelove/bzr/%2Btrunk/
Canonical.com Patch Queue Manager
pqm at pqm.ubuntu.com
Mon Mar 16 08:26:33 GMT 2009
At file:///home/pqm/archives/thelove/bzr/%2Btrunk/
------------------------------------------------------------
revno: 4149
revision-id: pqm at pqm.ubuntu.com-20090316082629-xuzqut3b3ur5bn3b
parent: pqm at pqm.ubuntu.com-20090316065136-1g3udxbvvlgtsbup
parent: robertc at robertcollins.net-20090316074405-t9guvf13rj4mlhuk
committer: Canonical.com Patch Queue Manager <pqm at pqm.ubuntu.com>
branch nick: +trunk
timestamp: Mon 2009-03-16 08:26:29 +0000
message:
(robertc) Add a new repository method refresh_data to allow clean
handling of _real_repositories after inserting a stream via the
smart server. (Robert Collins, Andrew Bennetts)
added:
bzrlib/tests/per_repository/test_refresh_data.py test_refresh_data.py-20090316045630-5sw0ipqwk7rvpn3h-1
modified:
NEWS NEWS-20050323055033-4e00b5db738777ff
bzrlib/lockable_files.py control_files.py-20051111201905-bb88546e799d669f
bzrlib/remote.py remote.py-20060720103555-yeeg2x51vn0rbtdp-1
bzrlib/repofmt/knitrepo.py knitrepo.py-20070206081537-pyy4a00xdas0j4pf-1
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
bzrlib/repository.py rev_storage.py-20051111201905-119e9401e46257e3
bzrlib/tests/__init__.py selftest.py-20050531073622-8d0e3c8845c97a64
bzrlib/tests/blackbox/test_push.py test_push.py-20060329002750-929af230d5d22663
bzrlib/tests/blackbox/test_shared_repository.py test_shared_repository.py-20060317053531-ed30c0d79325e483
bzrlib/tests/branch_implementations/test_stacking.py test_stacking.py-20080214020755-msjlkb7urobwly0f-1
bzrlib/tests/per_repository/__init__.py __init__.py-20060131092037-9564957a7d4a841b
bzrlib/tests/per_repository/test_fetch.py test_fetch.py-20070814052151-5cxha9slx4c93uog-1
bzrlib/tests/test_knit.py test_knit.py-20051212171302-95d4c00dd5f11f2b
bzrlib/tests/test_remote.py test_remote.py-20060720103555-yeeg2x51vn0rbtdp-2
bzrlib/tests/test_repository.py test_repository.py-20060131075918-65c555b881612f4d
bzrlib/tests/test_smart.py test_smart.py-20061122024551-ol0l0o0oofsu9b3t-2
------------------------------------------------------------
revno: 4145.1.6
revision-id: robertc at robertcollins.net-20090316074405-t9guvf13rj4mlhuk
parent: robertc at robertcollins.net-20090316055532-n2i9etfgbf8sd13y
committer: Robert Collins <robertc at robertcollins.net>
branch nick: integration
timestamp: Mon 2009-03-16 18:44:05 +1100
message:
More test fallout, but all caught now.
modified:
bzrlib/tests/test_knit.py test_knit.py-20051212171302-95d4c00dd5f11f2b
bzrlib/tests/test_remote.py test_remote.py-20060720103555-yeeg2x51vn0rbtdp-2
bzrlib/tests/test_repository.py test_repository.py-20060131075918-65c555b881612f4d
bzrlib/tests/test_smart.py test_smart.py-20061122024551-ol0l0o0oofsu9b3t-2
------------------------------------------------------------
revno: 4145.1.5
revision-id: robertc at robertcollins.net-20090316055532-n2i9etfgbf8sd13y
parent: robertc at robertcollins.net-20090316053331-84t8k9hv0xw7kak4
committer: Robert Collins <robertc at robertcollins.net>
branch nick: integration
timestamp: Mon 2009-03-16 16:55:32 +1100
message:
More fixes from grabbing the Repository implementation of fetch for RemoteRepository.
modified:
bzrlib/remote.py remote.py-20060720103555-yeeg2x51vn0rbtdp-1
------------------------------------------------------------
revno: 4145.1.4
revision-id: robertc at robertcollins.net-20090316053331-84t8k9hv0xw7kak4
parent: robertc at robertcollins.net-20090316050819-unhkpvsvu3mzaore
committer: Robert Collins <robertc at robertcollins.net>
branch nick: integration
timestamp: Mon 2009-03-16 16:33:31 +1100
message:
Prevent regression to overhead of lock_read on pack repositories.
modified:
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
bzrlib/tests/blackbox/test_push.py test_push.py-20060329002750-929af230d5d22663
bzrlib/tests/blackbox/test_shared_repository.py test_shared_repository.py-20060317053531-ed30c0d79325e483
------------------------------------------------------------
revno: 4145.1.3
revision-id: robertc at robertcollins.net-20090316050819-unhkpvsvu3mzaore
parent: robertc at robertcollins.net-20090316050552-hqcgx49ugew0facc
parent: pqm at pqm.ubuntu.com-20090316041621-taek91nogxt42bfy
committer: Robert Collins <robertc at robertcollins.net>
branch nick: repository.refresh_data
timestamp: Mon 2009-03-16 16:08:19 +1100
message:
NEWS conflicts.
modified:
NEWS NEWS-20050323055033-4e00b5db738777ff
bzrlib/builtins.py builtins.py-20050830033751-fc01482b9ca23183
bzrlib/remote.py remote.py-20060720103555-yeeg2x51vn0rbtdp-1
bzrlib/repository.py rev_storage.py-20051111201905-119e9401e46257e3
bzrlib/tests/blackbox/test_merge.py test_merge.py-20060323225809-9bc0459c19917f41
------------------------------------------------------------
revno: 4145.1.2
revision-id: robertc at robertcollins.net-20090316050552-hqcgx49ugew0facc
parent: robertc at robertcollins.net-20090316033405-wpq3do6shludp1bh
committer: Robert Collins <robertc at robertcollins.net>
branch nick: repository.refresh_data
timestamp: Mon 2009-03-16 16:05:52 +1100
message:
Add a refresh_data method on Repository allowing cleaner handling of insertions into RemoteRepository objects with _real_repository instances.
added:
bzrlib/tests/per_repository/test_refresh_data.py test_refresh_data.py-20090316045630-5sw0ipqwk7rvpn3h-1
modified:
NEWS NEWS-20050323055033-4e00b5db738777ff
bzrlib/lockable_files.py control_files.py-20051111201905-bb88546e799d669f
bzrlib/remote.py remote.py-20060720103555-yeeg2x51vn0rbtdp-1
bzrlib/repofmt/knitrepo.py knitrepo.py-20070206081537-pyy4a00xdas0j4pf-1
bzrlib/repofmt/pack_repo.py pack_repo.py-20070813041115-gjv5ma7ktfqwsjgn-1
bzrlib/repository.py rev_storage.py-20051111201905-119e9401e46257e3
bzrlib/tests/__init__.py selftest.py-20050531073622-8d0e3c8845c97a64
bzrlib/tests/branch_implementations/test_stacking.py test_stacking.py-20080214020755-msjlkb7urobwly0f-1
bzrlib/tests/per_repository/__init__.py __init__.py-20060131092037-9564957a7d4a841b
------------------------------------------------------------
revno: 4145.1.1
revision-id: robertc at robertcollins.net-20090316033405-wpq3do6shludp1bh
parent: pqm at pqm.ubuntu.com-20090316024046-58qc87pfdgu2ugok
committer: Robert Collins <robertc at robertcollins.net>
branch nick: repository.refresh_data
timestamp: Mon 2009-03-16 14:34:05 +1100
message:
Explicitly prevent fetching while the target repository is in a write group.
modified:
NEWS NEWS-20050323055033-4e00b5db738777ff
bzrlib/remote.py remote.py-20060720103555-yeeg2x51vn0rbtdp-1
bzrlib/repository.py rev_storage.py-20051111201905-119e9401e46257e3
bzrlib/tests/per_repository/test_fetch.py test_fetch.py-20070814052151-5cxha9slx4c93uog-1
=== modified file 'NEWS'
--- a/NEWS 2009-03-16 05:58:42 +0000
+++ b/NEWS 2009-03-16 08:26:29 +0000
@@ -48,6 +48,11 @@
* Fixed incorrect "Source format does not support stacking" warning
when pushing to a smart server. (Andrew Bennetts, #334114)
+ * It is no longer possible to fetch between repositories while the
+ target repository is in a write group. This prevents race conditions
+ that previously blocked the use of RPCs to perform fetch, and thus
+ allows more operations to be optimised. (Robert Collins, Andrew Bennetts)
+
* ``merge --force`` works again. (Robert Collins, #342105)
* The GNU Changelog formatter is slightly improved in the case where
@@ -78,6 +83,10 @@
* New ``assertLength`` method based on one Martin has squirreled away
somewhere. (Robert Collins, Martin Pool)
+ * New repository method ``refresh_data`` that causes any repository to
+ make data inserted by a smart server fetch operation visible.
+ (Robert Collins, Andrew Bennetts)
+
* ``_walk_to_common_revisions`` will now batch up at least 50
revisions before calling ``get_parent_map`` on the target,
regardless of ``InterRepository``.
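
For readers skimming the two NEWS entries above, here is a minimal sketch of
the new rules as a toy class. The method names mirror the patch; the class
itself and the AssertionError are illustrative only (the real code raises
errors.InternalBzrError):

    class SketchRepository(object):
        """Toy model of the two guards described in NEWS (not bzrlib code)."""

        def __init__(self):
            self._in_write_group = False

        def is_in_write_group(self):
            return self._in_write_group

        def start_write_group(self):
            self._in_write_group = True

        def commit_write_group(self):
            self._in_write_group = False

        def fetch(self, source):
            # New rule: fetching into an active write group is an error.
            if self.is_in_write_group():
                raise AssertionError("May not fetch while in a write group.")
            # ... stream revisions from source ...

        def refresh_data(self):
            # New rule: a refresh is only legal outside a write group.
            if self.is_in_write_group():
                raise AssertionError(
                    "May not refresh_data while in a write group.")
            self._refresh_data()

        def _refresh_data(self):
            # Format-specific hook: re-read whatever on-disk state another
            # repository instance may have changed.
            pass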
=== modified file 'bzrlib/lockable_files.py'
--- a/bzrlib/lockable_files.py 2009-03-11 21:44:21 +0000
+++ b/bzrlib/lockable_files.py 2009-03-16 05:05:52 +0000
@@ -271,7 +271,7 @@
#traceback.print_stack()
self._lock_mode = 'w'
self._lock_warner.lock_count = 1
- self._set_transaction(transactions.WriteTransaction())
+ self._set_write_transaction()
self._token_from_lock = token_from_lock
return token_from_lock
@@ -285,9 +285,17 @@
#traceback.print_stack()
self._lock_mode = 'r'
self._lock_warner.lock_count = 1
- self._set_transaction(transactions.ReadOnlyTransaction())
- # 5K may be excessive, but hey, its a knob.
- self.get_transaction().set_cache_size(5000)
+ self._set_read_transaction()
+
+ def _set_read_transaction(self):
+ """Set up a read transaction."""
+ self._set_transaction(transactions.ReadOnlyTransaction())
+ # 5K may be excessive, but hey, it's a knob.
+ self.get_transaction().set_cache_size(5000)
+
+ def _set_write_transaction(self):
+ """Set up a write transaction."""
+ self._set_transaction(transactions.WriteTransaction())
def unlock(self):
if not self._lock_mode:
=== modified file 'bzrlib/remote.py'
--- a/bzrlib/remote.py 2009-03-16 05:47:53 +0000
+++ b/bzrlib/remote.py 2009-03-16 08:26:29 +0000
@@ -643,6 +643,14 @@
"""Ensure that there is a _real_repository set.
Used before calls to self._real_repository.
+
+ Note that _ensure_real causes many roundtrips to the server, which are
+ not desirable, and prevents the use of smart one-roundtrip RPCs to
+ perform complex operations (such as accessing parent data, streaming
+ revisions, etc.). Calls to _ensure_real should only be added when
+ bringing up new functionality or adding fallbacks for smart methods
+ that require a fallback path, and never to replace an existing smart
+ method invocation. If in doubt, chat to the bzr network team.
"""
if self._real_repository is None:
self.bzrdir._ensure_real()
@@ -1082,6 +1090,20 @@
self._ensure_real()
return self._real_repository.make_working_trees()
+ def refresh_data(self):
+ """Re-read any data needed to synchronise with disk.
+
+ This method is intended to be called after another repository instance
+ (such as one used by a smart server) has inserted data into the
+ repository. It may not be called during a write group, but may be
+ called at any other time.
+ """
+ if self.is_in_write_group():
+ raise errors.InternalBzrError(
+ "May not refresh_data while in a write group.")
+ if self._real_repository is not None:
+ self._real_repository.refresh_data()
+
def revision_ids_to_search_result(self, result_set):
"""Convert a set of revision ids to a graph SearchResult."""
result_parents = set()
@@ -1108,11 +1130,15 @@
def fetch(self, source, revision_id=None, pb=None, find_ghosts=False,
fetch_spec=None):
+ # No base implementation to use as RemoteRepository is not a subclass
+ # of Repository; so this is a copy of Repository.fetch().
if fetch_spec is not None and revision_id is not None:
raise AssertionError(
"fetch_spec and revision_id are mutually exclusive.")
- # Not delegated to _real_repository so that InterRepository.get has a
- # chance to find an InterRepository specialised for RemoteRepository.
+ if self.is_in_write_group():
+ raise errors.InternalBzrError(
+ "May not fetch while in a write group.")
+ # fast path same-url fetch operations
if self.has_same_location(source) and fetch_spec is None:
# check that last_revision is in 'from' and then return a
# no-operation.
@@ -1120,12 +1146,12 @@
not revision.is_null(revision_id)):
self.get_revision(revision_id)
return 0, []
+ # if there is no specific appropriate InterRepository, this will get
+ # the InterRepository base class, which raises an
+ # IncompatibleRepositories when asked to fetch.
inter = repository.InterRepository.get(source, self)
- try:
- return inter.fetch(revision_id=revision_id, pb=pb,
- find_ghosts=find_ghosts, fetch_spec=fetch_spec)
- except NotImplementedError:
- raise errors.IncompatibleRepositories(source, self)
+ return inter.fetch(revision_id=revision_id, pb=pb,
+ find_ghosts=find_ghosts, fetch_spec=fetch_spec)
def create_bundle(self, target, base, fileobj, format=None):
self._ensure_real()
@@ -1501,12 +1527,7 @@
self._ensure_real()
self._real_repository._pack_collection.autopack()
return
- if self._real_repository is not None:
- # Reset the real repository's cache of pack names.
- # XXX: At some point we may be able to skip this and just rely on
- # the automatic retry logic to do the right thing, but for now we
- # err on the side of being correct rather than being optimal.
- self._real_repository._pack_collection.reload_pack_names()
+ self.refresh_data()
if response[0] != 'ok':
raise errors.UnexpectedSmartServerResponse(response)
@@ -1565,11 +1586,7 @@
resume_tokens = tokens
return resume_tokens, missing_keys
else:
- if self.target_repo._real_repository is not None:
- collection = getattr(self.target_repo._real_repository,
- '_pack_collection', None)
- if collection is not None:
- collection.reload_pack_names()
+ self.target_repo.refresh_data()
return [], set()
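
Restating the shape of the RemoteRepository change as a standalone sketch:
refresh_data deliberately avoids _ensure_real, because a client that has
never materialised a real repository has nothing stale to re-read. Toy
class, illustrative only:

    class RemoteRepositorySketch(object):
        """Toy model of RemoteRepository.refresh_data's delegation."""

        def __init__(self):
            # Populated lazily by _ensure_real in the real class.
            self._real_repository = None

        def is_in_write_group(self):
            # Simplified; the real class consults its write-group state.
            return False

        def refresh_data(self):
            if self.is_in_write_group():
                raise AssertionError(
                    "May not refresh_data while in a write group.")
            # Crucially, no _ensure_real() here: forcing a real repository
            # into existence just to refresh it would add roundtrips for no
            # benefit. Only refresh what already exists.
            if self._real_repository is not None:
                self._real_repository.refresh_data()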
=== modified file 'bzrlib/repofmt/knitrepo.py'
--- a/bzrlib/repofmt/knitrepo.py 2009-02-27 01:02:40 +0000
+++ b/bzrlib/repofmt/knitrepo.py 2009-03-16 05:05:52 +0000
@@ -213,6 +213,17 @@
revision_id = osutils.safe_revision_id(revision_id)
return self.get_revision_reconcile(revision_id)
+ def _refresh_data(self):
+ if not self.is_locked():
+ return
+ # Create a new transaction to force all knits to see the scope change.
+ # This is safe because we're outside a write group.
+ self.control_files._finish_transaction()
+ if self.is_write_locked():
+ self.control_files._set_write_transaction()
+ else:
+ self.control_files._set_read_transaction()
+
@needs_write_lock
def reconcile(self, other=None, thorough=False):
"""Reconcile this repository."""
=== modified file 'bzrlib/repofmt/pack_repo.py'
--- a/bzrlib/repofmt/pack_repo.py 2009-03-12 02:45:17 +0000
+++ b/bzrlib/repofmt/pack_repo.py 2009-03-16 05:33:31 +0000
@@ -1297,6 +1297,7 @@
:param index_builder_class: The index builder class to use.
:param index_class: The index class to use.
"""
+ # XXX: This should call self.reset()
self.repo = repo
self.transport = transport
self._index_transport = index_transport
@@ -1307,6 +1308,7 @@
self._suffix_offsets = {'.rix': 0, '.iix': 1, '.tix': 2, '.six': 3}
self.packs = []
# name:Pack mapping
+ self._names = None
self._packs_by_name = {}
# the previous pack-names content
self._packs_at_load = None
@@ -1527,6 +1529,10 @@
return [[final_rev_count, final_pack_list]]
def ensure_loaded(self):
+ """Ensure we have read names from disk.
+
+ :return: True if the disk names had not been previously read.
+ """
# NB: if you see an assertion error here, its probably access against
# an unlocked repo. Naughty.
if not self.repo.is_locked():
@@ -1538,8 +1544,12 @@
name = key[0]
self._names[name] = self._parse_index_sizes(value)
self._packs_at_load.add((key, value))
+ result = True
+ else:
+ result = False
# populate all the metadata.
self.all_packs()
+ return result
def _parse_index_sizes(self, value):
"""Parse a string of index sizes."""
@@ -1838,8 +1848,17 @@
This should be called when we find out that something we thought was
present is now missing. This happens when another process re-packs the
repository, etc.
+
+ :return: True if the in-memory list of packs has been altered at all.
"""
- # This is functionally similar to _save_pack_names, but we don't write
+ # The ensure_loaded call handles the case where the first call made
+ # involving the collection was reload_pack_names, in which case we
+ # don't yet have a view of the disk contents. It's a bit of a band-aid,
+ # and causes two reads of pack-names, but it's a rare corner case not
+ # hit by regular push/pull etc.
+ first_read = self.ensure_loaded()
+ if first_read:
+ return True
# out the new value.
disk_nodes, _, _ = self._diff_pack_names()
self._packs_at_load = disk_nodes
@@ -2115,14 +2134,9 @@
return graph.CachingParentsProvider(self)
def _refresh_data(self):
- if self._write_lock_count == 1 or (
- self.control_files._lock_count == 1 and
- self.control_files._lock_mode == 'r'):
- # forget what names there are
- self._pack_collection.reset()
- # XXX: Better to do an in-memory merge when acquiring a new lock -
- # factor out code from _save_pack_names.
- self._pack_collection.ensure_loaded()
+ if not self.is_locked():
+ return
+ self._pack_collection.reload_pack_names()
def _start_write_group(self):
self._pack_collection._start_write_group()
@@ -2153,7 +2167,8 @@
return self._write_lock_count
def lock_write(self, token=None):
- if not self._write_lock_count and self.is_locked():
+ locked = self.is_locked()
+ if not self._write_lock_count and locked:
raise errors.ReadOnlyError(self)
self._write_lock_count += 1
if self._write_lock_count == 1:
@@ -2161,9 +2176,11 @@
for repo in self._fallback_repositories:
# Writes don't affect fallback repos
repo.lock_read()
- self._refresh_data()
+ if not locked:
+ self._refresh_data()
def lock_read(self):
+ locked = self.is_locked()
if self._write_lock_count:
self._write_lock_count += 1
else:
@@ -2171,7 +2188,8 @@
for repo in self._fallback_repositories:
# Writes don't affect fallback repos
repo.lock_read()
- self._refresh_data()
+ if not locked:
+ self._refresh_data()
def leave_lock_in_place(self):
# not supported - raise an error
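
The locking changes just above establish a pattern that repository.py below
adopts as well: only the outermost lock acquisition pays for a refresh, so
re-entrant lock calls stay cheap. A distilled sketch of that pattern with a
toy class (a bare counter stands in for the real lock bookkeeping):

    class LockRefreshSketch(object):
        """Toy model of the refresh-on-first-lock pattern."""

        def __init__(self):
            self._lock_count = 0

        def is_locked(self):
            return self._lock_count > 0

        def lock_read(self):
            locked = self.is_locked()
            self._lock_count += 1
            if not locked:
                # Only the first (outermost) lock triggers a possibly
                # costly re-read of on-disk state.
                self._refresh_data()

        def unlock(self):
            self._lock_count -= 1

        def _refresh_data(self):
            if not self.is_locked():
                return
            # Pack repositories route this to reload_pack_names(), which
            # diffs the pack-names file against the in-memory pack list.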
=== modified file 'bzrlib/repository.py'
--- a/bzrlib/repository.py 2009-03-16 02:21:04 +0000
+++ b/bzrlib/repository.py 2009-03-16 05:08:19 +0000
@@ -884,18 +884,22 @@
XXX: this docstring is duplicated in many places, e.g. lockable_files.py
"""
+ locked = self.is_locked()
result = self.control_files.lock_write(token=token)
for repo in self._fallback_repositories:
# Writes don't affect fallback repos
repo.lock_read()
- self._refresh_data()
+ if not locked:
+ self._refresh_data()
return result
def lock_read(self):
+ locked = self.is_locked()
self.control_files.lock_read()
for repo in self._fallback_repositories:
repo.lock_read()
- self._refresh_data()
+ if not locked:
+ self._refresh_data()
def get_physical_lock_status(self):
return self.control_files.get_physical_lock_status()
@@ -1084,6 +1088,19 @@
def suspend_write_group(self):
raise errors.UnsuspendableWriteGroup(self)
+ def refresh_data(self):
+ """Re-read any data needed to synchronise with disk.
+
+ This method is intended to be called after another repository instance
+ (such as one used by a smart server) has inserted data into the
+ repository. It may not be called during a write group, but may be
+ called at any other time.
+ """
+ if self.is_in_write_group():
+ raise errors.InternalBzrError(
+ "May not refresh_data while in a write group.")
+ self._refresh_data()
+
def resume_write_group(self, tokens):
if not self.is_write_locked():
raise errors.NotWriteLocked(self)
@@ -1103,6 +1120,10 @@
If revision_id is None and fetch_spec is None, then all content is
copied.
+ fetch() may not be used when the repository is in a write group -
+ either finish the current write group before using fetch, or use
+ fetch before starting the write group.
+
:param find_ghosts: Find and copy revisions in the source that are
ghosts in the target (and not reachable directly by walking out to
the first-present revision in target from revision_id).
@@ -1117,6 +1138,9 @@
if fetch_spec is not None and revision_id is not None:
raise AssertionError(
"fetch_spec and revision_id are mutually exclusive.")
+ if self.is_in_write_group():
+ raise errors.InternalBzrError(
+ "May not fetch while in a write group.")
# fast path same-url fetch operations
if self.has_same_location(source) and fetch_spec is None:
# check that last_revision is in 'from' and then return a
@@ -1838,7 +1862,11 @@
for repositories to maintain loaded indices across multiple locks
by checking inside their implementation of this method to see
whether their indices are still valid. This depends of course on
- the disk format being validatable in this manner.
+ the disk format being validatable in this manner. This method is
+ also called by the refresh_data() public interface to cause a refresh
+ to occur while in a write lock so that data inserted by a smart server
+ push operation is visible on the client's instance of the physical
+ repository.
"""
@needs_read_lock
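
Putting the repository.py pieces together, the scenario this API serves
looks roughly as follows. This is a hedged sketch mirroring the new
per_repository test further down; repo, server_repo, source_repo and revid
are assumed to exist, and token locking may be unsupported by some formats:

    # repo: a Repository instance held by the client process.
    # server_repo: a second instance of the same on-disk repository, e.g.
    # the one a smart server uses, sharing the client's lock token.
    token = repo.lock_write()
    try:
        server_repo.lock_write(token)
        try:
            server_repo.fetch(source_repo, revision_id=revid)
        finally:
            server_repo.unlock()
        # Make the newly inserted revisions visible on the client's
        # instance; legal any time except inside a write group.
        repo.refresh_data()
    finally:
        repo.unlock()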
=== modified file 'bzrlib/tests/__init__.py'
--- a/bzrlib/tests/__init__.py 2009-03-16 01:56:18 +0000
+++ b/bzrlib/tests/__init__.py 2009-03-16 05:05:52 +0000
@@ -2113,6 +2113,13 @@
made_control = self.make_bzrdir(relpath, format=format)
return made_control.create_repository(shared=shared)
+ def make_smart_server(self, path):
+ smart_server = server.SmartTCPServer_for_testing()
+ smart_server.setUp(self.get_server())
+ remote_transport = get_transport(smart_server.get_url()).clone(path)
+ self.addCleanup(smart_server.tearDown)
+ return remote_transport
+
def make_branch_and_memory_tree(self, relpath, format=None):
"""Create a branch on the default transport and a MemoryTree for it."""
b = self.make_branch(relpath, format=format)
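
make_smart_server moved here from test_stacking.py (see the removal below)
so that any TestCaseWithTransport can reach its test data over HPSS. A
hypothetical use in a test, for illustration (the test name and body are
not part of this patch):

    from bzrlib.bzrdir import BzrDir

    def test_open_repository_over_hpss(self):
        self.make_branch_and_tree('local')
        remote_transport = self.make_smart_server('local')
        # Everything reached via remote_transport now speaks the smart
        # protocol instead of touching the filesystem directly.
        repo = BzrDir.open_from_transport(remote_transport).open_repository()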
=== modified file 'bzrlib/tests/blackbox/test_push.py'
--- a/bzrlib/tests/blackbox/test_push.py 2009-03-06 11:26:37 +0000
+++ b/bzrlib/tests/blackbox/test_push.py 2009-03-16 05:33:31 +0000
@@ -196,13 +196,12 @@
t.commit(allow_pointless=True, message='first commit')
self.reset_smart_call_log()
self.run_bzr(['push', self.get_url('to-one')], working_dir='from')
- rpc_count = len(self.hpss_calls)
# This figure represent the amount of work to perform this use case. It
# is entirely ok to reduce this number if a test fails due to rpc_count
# being too low. If rpc_count increases, more network roundtrips have
# become necessary for this use case. Please do not adjust this number
# upwards without agreement from bzr's network support maintainers.
- self.assertEqual(20, rpc_count)
+ self.assertLength(20, self.hpss_calls)
def test_push_smart_stacked_streaming_acceptance(self):
self.setup_smart_server_with_call_log()
@@ -213,13 +212,12 @@
self.reset_smart_call_log()
self.run_bzr(['push', '--stacked', '--stacked-on', '../parent',
self.get_url('public')], working_dir='local')
- rpc_count = len(self.hpss_calls)
# This figure represent the amount of work to perform this use case. It
# is entirely ok to reduce this number if a test fails due to rpc_count
# being too low. If rpc_count increases, more network roundtrips have
# become necessary for this use case. Please do not adjust this number
# upwards without agreement from bzr's network support maintainers.
- self.assertEqual(56, rpc_count)
+ self.assertLength(56, self.hpss_calls)
remote = Branch.open('public')
self.assertEndsWith(remote.get_stacked_on_url(), '/parent')
=== modified file 'bzrlib/tests/blackbox/test_shared_repository.py'
--- a/bzrlib/tests/blackbox/test_shared_repository.py 2009-02-23 15:29:35 +0000
+++ b/bzrlib/tests/blackbox/test_shared_repository.py 2009-03-16 05:33:31 +0000
@@ -114,10 +114,9 @@
# be fixed.
self.setup_smart_server_with_call_log()
self.run_bzr(['init-repo', self.get_url('repo')])
- rpc_count = len(self.hpss_calls)
# This figure represent the amount of work to perform this use case. It
# is entirely ok to reduce this number if a test fails due to rpc_count
# being too low. If rpc_count increases, more network roundtrips have
# become necessary for this use case. Please do not adjust this number
# upwards without agreement from bzr's network support maintainers.
- self.assertEqual(16, rpc_count)
+ self.assertLength(16, self.hpss_calls)
=== modified file 'bzrlib/tests/branch_implementations/test_stacking.py'
--- a/bzrlib/tests/branch_implementations/test_stacking.py 2009-03-11 06:50:16 +0000
+++ b/bzrlib/tests/branch_implementations/test_stacking.py 2009-03-16 05:05:52 +0000
@@ -126,14 +126,6 @@
self.assertRevisionNotInRepository('mainline', new_branch_revid)
self.assertRevisionInRepository('newbranch', new_branch_revid)
- # XXX: this helper probably belongs on TestCaseWithTransport
- def make_smart_server(self, path):
- smart_server = server.SmartTCPServer_for_testing()
- smart_server.setUp(self.get_server())
- remote_transport = get_transport(smart_server.get_url()).clone(path)
- self.addCleanup(smart_server.tearDown)
- return remote_transport
-
def test_sprout_stacked_from_smart_server(self):
if isinstance(self.branch_format, branch.BzrBranchFormat4):
raise TestNotApplicable('Branch format 4 is not usable via HPSS.')
=== modified file 'bzrlib/tests/per_repository/__init__.py'
--- a/bzrlib/tests/per_repository/__init__.py 2009-03-07 06:58:17 +0000
+++ b/bzrlib/tests/per_repository/__init__.py 2009-03-16 05:05:52 +0000
@@ -869,6 +869,7 @@
'test_iter_reverse_revision_history',
'test_pack',
'test_reconcile',
+ 'test_refresh_data',
'test_repository',
'test_revision',
'test_statistics',
=== modified file 'bzrlib/tests/per_repository/test_fetch.py'
--- a/bzrlib/tests/per_repository/test_fetch.py 2009-03-11 07:55:14 +0000
+++ b/bzrlib/tests/per_repository/test_fetch.py 2009-03-16 03:34:05 +0000
@@ -45,6 +45,17 @@
repo.fetch(tree_a.branch.repository,
revision_id=None)
+ def test_fetch_fails_in_write_group(self):
+ # fetch() manages a write group itself; fetching within one isn't safe.
+ repo = self.make_repository('a')
+ repo.lock_write()
+ self.addCleanup(repo.unlock)
+ repo.start_write_group()
+ self.addCleanup(repo.abort_write_group)
+ # Don't need a specific class - not expecting flow control based on
+ # this.
+ self.assertRaises(errors.BzrError, repo.fetch, repo)
+
def test_fetch_to_knit3(self):
# create a repository of the sort we are testing.
tree_a = self.make_branch_and_tree('a')
=== added file 'bzrlib/tests/per_repository/test_refresh_data.py'
--- a/bzrlib/tests/per_repository/test_refresh_data.py 1970-01-01 00:00:00 +0000
+++ b/bzrlib/tests/per_repository/test_refresh_data.py 2009-03-16 05:05:52 +0000
@@ -0,0 +1,79 @@
+# Copyright (C) 2009 Canonical Ltd
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""Tests for Repository.refresh_data."""
+
+from bzrlib import (
+ errors,
+ remote,
+ )
+from bzrlib.tests import TestSkipped
+from bzrlib.tests.per_repository import TestCaseWithRepository
+
+
+class TestRefreshData(TestCaseWithRepository):
+
+ def test_refresh_data_unlocked(self):
+ # While not interesting, it should not error.
+ repo = self.make_repository('.')
+ repo.refresh_data()
+
+ def test_refresh_data_read_locked(self):
+ # While not interesting, it should not error.
+ repo = self.make_repository('.')
+ repo.lock_read()
+ self.addCleanup(repo.unlock)
+ repo.refresh_data()
+
+ def test_refresh_data_write_locked(self):
+ # While not interesting, it should not error.
+ repo = self.make_repository('.')
+ repo.lock_write()
+ self.addCleanup(repo.unlock)
+ repo.refresh_data()
+
+ def test_refresh_data_in_write_group_errors(self):
+ repo = self.make_repository('.')
+ repo.lock_write()
+ self.addCleanup(repo.unlock)
+ repo.start_write_group()
+ self.addCleanup(repo.abort_write_group)
+ # No flow control anticipated, BzrError is enough
+ self.assertRaises(errors.BzrError, repo.refresh_data)
+
+ def test_refresh_data_after_fetch_new_data_visible(self):
+ source = self.make_branch_and_tree('source')
+ revid = source.commit('foo')
+ repo = self.make_repository('target')
+ token = repo.lock_write()
+ self.addCleanup(repo.unlock)
+ # Force data reading on weaves/knits
+ repo.revisions.keys()
+ repo.inventories.keys()
+ # server repo is the instance a smart server might hold for this
+ # repository.
+ server_repo = repo.bzrdir.open_repository()
+ try:
+ server_repo.lock_write(token)
+ except errors.TokenLockingNotSupported:
+ raise TestSkipped('Cannot concurrently insert into repo format %r'
+ % self.repository_format)
+ try:
+ server_repo.fetch(source.branch.repository, revid)
+ finally:
+ server_repo.unlock()
+ repo.refresh_data()
+ self.assertNotEqual({}, repo.get_graph().get_parent_map([revid]))
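
To run just this new test across every registered repository format,
something like the following selftest invocation should work (exact
selection syntax varies between bzr versions):

    bzr selftest bzrlib.tests.per_repository.test_refresh_data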
=== modified file 'bzrlib/tests/test_knit.py'
--- a/bzrlib/tests/test_knit.py 2009-02-25 21:13:22 +0000
+++ b/bzrlib/tests/test_knit.py 2009-03-16 07:44:05 +0000
@@ -368,25 +368,23 @@
"""
tree = self.make_branch_and_memory_tree('tree')
tree.lock_write()
- try:
- tree.add([''], ['root-id'])
- tree.commit('one', rev_id='rev-1')
- tree.commit('two', rev_id='rev-2')
- tree.commit('three', rev_id='rev-3')
- # Pack these two revisions into another pack file, but don't remove
- # the originials
- repo = tree.branch.repository
- collection = repo._pack_collection
- collection.ensure_loaded()
- orig_packs = collection.packs
- packer = pack_repo.Packer(collection, orig_packs, '.testpack')
- new_pack = packer.pack()
-
- vf = tree.branch.repository.revisions
- finally:
- tree.unlock()
- tree.branch.repository.lock_read()
self.addCleanup(tree.branch.repository.unlock)
+ tree.add([''], ['root-id'])
+ tree.commit('one', rev_id='rev-1')
+ tree.commit('two', rev_id='rev-2')
+ tree.commit('three', rev_id='rev-3')
+ # Pack these three revisions into another pack file, but don't remove
+ # the originals
+ repo = tree.branch.repository
+ collection = repo._pack_collection
+ collection.ensure_loaded()
+ orig_packs = collection.packs
+ packer = pack_repo.Packer(collection, orig_packs, '.testpack')
+ new_pack = packer.pack()
+ # forget about the new pack
+ collection.reset()
+ repo.refresh_data()
+ vf = tree.branch.repository.revisions
del tree
# Set up a reload() function that switches to using the new pack file
new_index = new_pack.revision_index
=== modified file 'bzrlib/tests/test_remote.py'
--- a/bzrlib/tests/test_remote.py 2009-03-16 05:55:42 +0000
+++ b/bzrlib/tests/test_remote.py 2009-03-16 08:26:29 +0000
@@ -1945,8 +1945,15 @@
class _StubRealPackRepository(object):
def __init__(self, calls):
+ self.calls = calls
self._pack_collection = _StubPackCollection(calls)
+ def is_in_write_group(self):
+ return False
+
+ def refresh_data(self):
+ self.calls.append(('pack collection reload_pack_names',))
+
class _StubPackCollection(object):
@@ -1956,9 +1963,6 @@
def autopack(self):
self.calls.append(('pack collection autopack',))
- def reload_pack_names(self):
- self.calls.append(('pack collection reload_pack_names',))
-
class TestRemotePackRepositoryAutoPack(TestRemoteRepository):
"""Tests for RemoteRepository.autopack implementation."""
=== modified file 'bzrlib/tests/test_repository.py'
--- a/bzrlib/tests/test_repository.py 2009-03-13 05:40:03 +0000
+++ b/bzrlib/tests/test_repository.py 2009-03-16 07:44:05 +0000
@@ -929,6 +929,7 @@
tree.lock_read()
self.addCleanup(tree.unlock)
packs = tree.branch.repository._pack_collection
+ packs.reset()
packs.ensure_loaded()
name = packs.names()[0]
pack_1 = packs.get_pack_by_name(name)
=== modified file 'bzrlib/tests/test_smart.py'
--- a/bzrlib/tests/test_smart.py 2009-03-16 04:48:09 +0000
+++ b/bzrlib/tests/test_smart.py 2009-03-16 08:26:29 +0000
@@ -1282,6 +1282,8 @@
def test_autopack_needed(self):
repo = self.make_repo_needing_autopacking()
+ repo.lock_write()
+ self.addCleanup(repo.unlock)
backing = self.get_transport()
request = smart.packrepository.SmartServerPackRepositoryAutopack(
backing)
@@ -1293,6 +1295,8 @@
def test_autopack_not_needed(self):
tree = self.make_branch_and_tree('.', format='pack-0.92')
repo = tree.branch.repository
+ repo.lock_write()
+ self.addCleanup(repo.unlock)
for x in range(9):
tree.commit('commit %s' % x)
backing = self.get_transport()