[MERGE] Enable caching of negative revision lookups in RemoteRepository write locks when no _real_repository has been constructed.
John Arbash Meinel
john at arbash-meinel.com
Wed Apr 29 22:26:17 BST 2009
John Arbash Meinel has voted tweak.
Status is now: Conditionally approved
Comment:
     if self._real_repository is None:
+        self._unstacked_provider.missing_keys.clear()
         self.bzrdir._ensure_real()
         self._set_real_repository(
             self.bzrdir._real_bzrdir.open_repository())
^- Don't you need to call
self._unstacked_provider.enable_cache(cache_misses=True) at this point?
Then it seems consistent with:
+        cache_misses = self._real_repository is None
+
         self._unstacked_provider.enable_cache(cache_misses=cache_misses)
(so on the first request, if the real_repo isn't present, you cache
misses, until something calls _ensure_real, then you stop caching
misses).
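The lifecycle described above (cache misses only while no real repository
exists, drop the cached misses once _ensure_real constructs one) can be
sketched with stand-in classes. This is not bzrlib code; StubProvider and
StubRemoteRepository are hypothetical stand-ins for _unstacked_provider and
RemoteRepository, kept only to illustrate the proposed behaviour:

```python
class StubProvider:
    """Stand-in for _unstacked_provider: caches negative lookups."""

    def __init__(self):
        self.missing_keys = set()
        self._cache_misses = False

    def enable_cache(self, cache_misses=False):
        self._cache_misses = cache_misses

    def lookup(self, key, present_keys):
        # A key already recorded as missing is answered from the cache.
        if key in self.missing_keys:
            return None
        if key in present_keys:
            return key
        # Only record the miss while miss-caching is enabled.
        if self._cache_misses:
            self.missing_keys.add(key)
        return None


class StubRemoteRepository:
    """Stand-in for RemoteRepository; names are illustrative."""

    def __init__(self):
        self._real_repository = None
        self._unstacked_provider = StubProvider()

    def lock_write(self):
        # Cache misses only while no real repository has been constructed.
        cache_misses = self._real_repository is None
        self._unstacked_provider.enable_cache(cache_misses=cache_misses)

    def _ensure_real(self):
        # Constructing the real repository invalidates the cached misses.
        if self._real_repository is None:
            self._unstacked_provider.missing_keys.clear()
            self._real_repository = object()
```

So a lookup made before _ensure_real caches the miss, and _ensure_real then
discards those cached misses rather than letting them go stale.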
I'm not 100% sure about this one:
     def insert_stream(self, stream, src_format, resume_tokens):
         target = self.target_repo
+        target._unstacked_provider.missing_keys.clear()
         if target._lock_token:
             verb = 'Repository.insert_stream_locked'
             extra_args = (target._lock_token or '',)
Though my guess is that you wouldn't have to stop caching misses as long
as _real_repo hasn't been invoked yet. You could even get away with
looping over the stream and removing only the keys that are actually
seen, but clearing the whole cache is certainly simpler.
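The finer-grained alternative mentioned above could look roughly like the
sketch below: instead of clearing the whole miss cache when a stream is
inserted, discard only the keys the stream actually delivers. The function
name and the (key, record) stream shape are illustrative assumptions, not
bzrlib's actual API:

```python
def prune_miss_cache(missing_keys, stream):
    """Drop only the inserted revisions from the miss cache.

    missing_keys: a set of keys previously cached as absent.
    stream: an iterable of (key, record) pairs being inserted.
    Yields the stream unchanged so insertion can proceed downstream.
    """
    for key, record in stream:
        # A key that arrives in the stream is no longer missing;
        # discard() is a no-op for keys that were never cached.
        missing_keys.discard(key)
        yield key, record
```

Wrapping the stream this way keeps valid negative cache entries alive, at
the cost of touching the cache once per record, whereas a single clear()
is O(1) and obviously correct.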
For details, see:
http://bundlebuggy.aaronbentley.com/project/bzr/request/%3C1240901973.5830.19.camel%40lifeless-64%3E
Project: Bazaar