[MERGE] Enable caching of negative revision lookups in RemoteRepository write locks when no _real_repository has been constructed.
John Arbash Meinel
john at arbash-meinel.com
Fri May 1 15:23:08 BST 2009
Robert Collins wrote:
> On Wed, 2009-04-29 at 17:26 -0400, John Arbash Meinel wrote:
>> John Arbash Meinel has voted tweak.
>> Status is now: Conditionally approved
>> Comment:
>>          if self._real_repository is None:
>> +            self._unstacked_provider.missing_keys.clear()
>>              self.bzrdir._ensure_real()
>>              self._set_real_repository(
>>                  self.bzrdir._real_bzrdir.open_repository())
>>
>> ^- Don't you need to call
>> self._unstacked_provider.enable_cache(cache_misses=True) at this point?
>
> We already have a populated cache; enable_cache clears it. Clearing the
> missing keys isn't quite enough - I'll follow up with a patch to caching
> provider to allow a 'disable_cache_misses' method.
>
I think I meant "enable_cache(cache_misses=False)" — so yes, some sort
of "we know we need to stop caching misses" operation, whether or not
that is a separate function.
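To make the distinction concrete, here is a minimal sketch (not bzrlib's
actual class; the method bodies and internals are assumptions for
illustration) of a parents provider that caches negative lookups, where
enable_cache() starts a fresh cache but a separate disable_cache_misses()
keeps the populated cache while dropping only the recorded misses:

```python
# Illustrative sketch, not bzrlib's real CachingParentsProvider.
class CachingParentsProvider:
    def __init__(self, real_provider):
        self._real = real_provider
        self.cache = None          # key -> parents tuple (positive cache)
        self.missing_keys = set()  # negative-lookup cache
        self._cache_misses = True

    def enable_cache(self, cache_misses=True):
        # Starts a *fresh* cache, discarding any already-populated one --
        # which is why calling it mid-operation is the wrong tool here.
        self.cache = {}
        self.missing_keys = set()
        self._cache_misses = cache_misses

    def disable_cache_misses(self):
        # Keep the populated positive cache, but stop recording new
        # misses and forget the ones recorded so far.
        self._cache_misses = False
        self.missing_keys.clear()

    def get_parent_map(self, keys):
        result = {}
        needed = []
        for key in keys:
            if self.cache is not None and key in self.cache:
                result[key] = self.cache[key]
            elif key in self.missing_keys:
                pass  # known missing; skip asking the real provider
            else:
                needed.append(key)
        if needed:
            found = self._real.get_parent_map(needed)
            result.update(found)
            if self.cache is not None:
                self.cache.update(found)
            if self._cache_misses:
                self.missing_keys.update(
                    k for k in needed if k not in found)
        return result
```

The point of the separate method is that after disable_cache_misses(),
a key previously recorded as missing is looked up again rather than
answered from the stale negative cache.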
>
>> I'm not 100% sure about this one:
>>      def insert_stream(self, stream, src_format, resume_tokens):
>>          target = self.target_repo
>> +        target._unstacked_provider.missing_keys.clear()
>>          if target._lock_token:
>>              verb = 'Repository.insert_stream_locked'
>>              extra_args = (target._lock_token or '',)
>>
>>
>> Though my guess is that you wouldn't have to stop caching misses as long
>> as _real_repo isn't invoked yet. You could even get away with looping
>> over the stream and only removing ones that are seen, but this is
>> certainly simpler.
>
> Long term we may do that, but it's more work today, and not directly
> beneficial.
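The two strategies being weighed can be sketched as follows (hypothetical
helper names; the provider here is any object with a missing_keys set,
not bzrlib's actual implementation):

```python
# Hypothetical sketch of the two invalidation strategies discussed.

def clear_all_misses(provider):
    # Simple and safe: any key previously recorded as missing may now
    # exist after the stream lands, so forget all negative entries.
    provider.missing_keys.clear()

def evict_seen_keys(provider, stream_keys):
    # More precise: keep the negative cache and evict only the keys the
    # stream actually delivered. Requires walking the stream to collect
    # those keys, which is the extra work deferred above.
    provider.missing_keys.difference_update(stream_keys)
```

Wholesale clearing over-invalidates (keys not in the stream are
re-queried), but avoids iterating the stream twice, which is why it is
the simpler choice for now.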
>
> -Rob
John
=:->