Branch.lock_read() rather expensive

Martin Pool martinpool at gmail.com
Sun Oct 16 03:16:49 BST 2005


On 16/10/05, Aaron Bentley <aaron.bentley at utoronto.ca> wrote:

> John Arbash Meinel wrote:
> > What I found was that quite a bit of time was taken up, just in
> > lock_read(). I figured part of this is because nothing takes out a read
> > lock beforehand, so it takes out 1 read lock for each revision it parses.
> > To prove this, I wrapped the common_ancestor() call with a lock/unlock
> > statement, and this is what I found

It's interesting that it is so slow.  I wonder if it is the trace call
or something else.

> Ah.  I didn't understand the mechanics here.  I assumed if I didn't
> explicitly lock it, it wouldn't be locked at all.

The policy is meant to be this:

 - Read locks are never required, because they can't be enforced over http

 - Read locks turn on caching of some objects in memory, so they can
make operations faster and give repeatable reads.  It's a good idea to
take and release the locks at a high level so that the cache is
effective and so there's not too much overhead.  Typically this should
be just below the command level; a rough sketch of the pattern follows
below.
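
To make that concrete, here is a rough sketch of the pattern (the
helper name is made up, and it assumes common_ancestor() lives in
bzrlib.revision and accepts the branch as its revision source; only
lock_read()/unlock() and common_ancestor() itself come from the
discussion above):

    from bzrlib.revision import common_ancestor

    def find_merge_base(branch, rev_a, rev_b):
        # Take a single read lock for the whole operation, just below
        # the command level, so the per-revision work below can use the
        # branch's in-memory caches instead of re-locking every time.
        branch.lock_read()
        try:
            # Everything that parses revisions happens under one lock.
            return common_ancestor(rev_a, rev_b, branch)
        finally:
            # Always release the lock, even if the work above raises.
            branch.unlock()

Wrapping the call like this is essentially what John did above; the
point is just that the lock/unlock pair sits around the whole
operation rather than being taken once per revision parsed.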

--
Martin
