rich roots conversion

Robert Collins robert.collins at
Wed Apr 15 13:49:37 BST 2009

On Wed, 2009-04-15 at 22:28 +1000, Ian Clatworthy wrote:
> While chatting at the last sprint, I was grumbling to poolie about the
> limitations (e.g. no info on end of lifecycle) and overhead of keeping
> the per-file graphs updated. He suggested that, being derived metadata,
> we didn't need to strictly store them and could calculate them and
> cache them on demand if required. I'm not sure how serious he was, but
> I thought I'd throw the idea out there for wider discussion.

It's been an ongoing theme in inventory design for some years. Serious,
yes; as for being on the table for current work, I don't think we have
the bandwidth. I considered it for brisbane-core, but decided against it
for a number of reasons, not least that it raised the complexity of the
project by at least one order of magnitude.
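For concreteness, the "derive on demand" idea could look roughly like
this: filter the whole-tree revision graph down to the revisions that
touched a given file, walking each parent line back to the nearest
revision that also touched it, and memoise the result. This is only a
sketch of the technique, not bzrlib's actual API; every name and data
shape below is invented for illustration.

```python
def build_file_graph(revision_graph, changed_files, file_id):
    """Derive the per-file graph for file_id from whole-tree data.

    revision_graph: dict rev_id -> list of parent rev_ids (tree-wide DAG)
    changed_files:  dict rev_id -> set of file_ids modified in that rev
    Returns a dict mapping each revision that touched file_id to the
    nearest ancestors that also touched it.
    """
    graph = {}
    for rev, parents in revision_graph.items():
        if file_id not in changed_files.get(rev, set()):
            continue
        ancestors = []
        # Walk each parent line back to the nearest revision that also
        # modified file_id; those are this version's per-file parents.
        stack = list(parents)
        seen = set()
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            if file_id in changed_files.get(cur, set()):
                if cur not in ancestors:
                    ancestors.append(cur)
            else:
                stack.extend(revision_graph.get(cur, []))
        graph[rev] = ancestors
    return graph


_file_graph_cache = {}

def file_graph(revision_graph, changed_files, file_id):
    """Cache derived graphs so repeated log/annotate calls stay cheap."""
    if file_id not in _file_graph_cache:
        _file_graph_cache[file_id] = build_file_graph(
            revision_graph, changed_files, file_id)
    return _file_graph_cache[file_id]
```

The walk is linear in the ancestry it visits, which is exactly the cost
that storing the graph at commit time avoids; hence the benchmarking
question below.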

> Among other reasons, an advantage of late calculation is that we can
> change the algorithm over time as our needs change. And this is
> *non-trivial* logic already, as jam explained to me in Brisbane.
> Adding features like proper copy tracking makes me shiver when I
> think how it might complicate the rules further still.

I don't know that it would.

> Outside log, what parts of our application use this metadata?

annotate, merge, search

> If we didn't calculate it as part of every commit, how much faster
> would commit of a merge be, say? Do we *really* need per-file graphs
> now, given other advances we've made like the CHK-based format?

I can't answer this without serious benchmarking :).

> IIRC, poolie said that git didn't store per-file-graph metadata but
> hg did. Can anyone confirm that?

That's correct.
