[MERGE][0.11] Waiting on locks
Robey Pointer
robey at lag.net
Sun Aug 27 08:01:01 BST 2006
On 25 Aug 2006, at 13:54, John Arbash Meinel wrote:
> Recently there was a discussion that we should actually try to wait on a
> remote lock, rather than immediately failing. I think Martin mentioned
> that it should already be doing that, but experience said otherwise, and
> looking closer, I can see why Martin might have thought that.
>
> We have:
>
> LockDir.attempt_lock(), which actually attempts to lock the remote file,
> and fails otherwise.
> LockDir.wait_lock(), which will spin for a while, trying to obtain the
> lock every X seconds for Y total seconds before failing.
> and
> LockDir.lock_write(), which is what LockableFiles calls.
>
> The documentation on lock_write says:
>     def lock_write(self):
>         """Wait for and acquire the lock."""
>         self.attempt_lock()
>
> So I think it was intended that lock_write() would call
> 'self.wait_lock()' rather than calling 'self.attempt_lock()'.
>
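For illustration, the fix being described might look roughly like the
sketch below. The keyword names are an assumption about wait_lock()'s
actual signature, and 300 seconds matches the 5-minute default
mentioned just below:

    def lock_write(self):
        """Wait for and acquire the lock."""
        # Spin, retrying every `poll` seconds until `timeout` seconds
        # have elapsed, instead of failing on the first contention.
        self.wait_lock(timeout=300, poll=1.0)
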
> However, the default lock timeout of 5 minutes seems a little bit long.
>
> If we wanted to be really nice, we would make it configurable per
> branch, since we might expect to wait longer on some branches than
> others. However, that brings up lots of layering issues: we would need
> to pass that information from the Branch down into the LockableFiles,
> which would have to know that it is using a LockDir, since it needs to
> use wait_lock() instead of lock_write().
>
> To be slightly less helpful, we could make it globally configurable,
> but I still feel that having LockDir instantiate a GlobalConfig()
> object, so that it can figure out the default lock timeout, is not
> optimal either.
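For concreteness, that global-config approach might look something like
this sketch. The 'lock_timeout' option name and the helper function are
hypothetical; GlobalConfig and get_user_option come from bzrlib's
config module:

    from bzrlib import config

    _DEFAULT_LOCK_TIMEOUT = 300  # seconds; the 5-minute default above

    def _get_lock_timeout():
        """Look up a hypothetical 'lock_timeout' option in the user's
        global configuration, falling back to the default."""
        value = config.GlobalConfig().get_user_option('lock_timeout')
        if value is None:
            return _DEFAULT_LOCK_TIMEOUT
        return int(value)

The layering objection stands either way: LockDir would still have to
know about configuration at all.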
As I mentioned last week, I think the default timeout should be
basically infinite. Let the user hit ^C if things aren't going well
-- network lag can be pretty unpredictable.
For the bzrlib API, a timeout arg would be a good thing, but for the
CLI, I think it should try until interrupted.
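A minimal sketch of that "try until interrupted" behavior (the helper
name is made up; LockContention is the error bzrlib raises when the
lock is already held):

    import time

    from bzrlib.errors import LockContention

    def wait_lock_until_interrupted(lockdir, poll=1.0):
        """Retry indefinitely, sleeping `poll` seconds between tries.
        A ^C raises KeyboardInterrupt out of time.sleep(), so the user,
        not a timer, decides when to give up."""
        while True:
            try:
                return lockdir.attempt_lock()
            except LockContention:
                time.sleep(poll)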
robey