[MERGE] Graph.heads()
John Arbash Meinel
john at arbash-meinel.com
Wed Aug 22 18:19:40 BST 2007
Robert Collins wrote:
> Robert Collins has voted resubmit.
> Status is now: Resubmit
> Comment:
> In packs we have a larger graph space.
> Rather than one graph per file, we have one graph for the repository,
> with pairs (file_id, revision_id) as keys.
>
> It would be nice to cut a couple of layers out and work directly in that
> space. If not, then at least the repository should be able to do that
> itself and translate only the result - so I'd really prefer to see this
> functionality on repository.
Well, a few bits to consider...
Are you saying that the actual .heads() call should not be on Graph?
I would *like* to have the file graph call be:
g = Repository.get_file_graph(file_id)
heads = g.heads(revision_ids)
Alternatively, we could put the whole thing on Repository with
heads = Repository.get_file_graph_heads(file_id, revision_ids)
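Either way, the heads computation itself is conceptually simple: a candidate revision is a head if it is not a strict ancestor of any other candidate. A minimal sketch, assuming a plain parent_map dict (child -> list of parents) rather than bzrlib's real graph objects:

```python
def heads(parent_map, revision_ids):
    """Return the subset of revision_ids that are graph heads.

    A revision is a head if it is not a strict ancestor of any
    candidate.  parent_map maps revision_id -> list of parent ids.
    """
    candidates = set(revision_ids)
    ancestors = set()
    # Walk the ancestry from every candidate, collecting strict ancestors.
    for rev in candidates:
        pending = list(parent_map.get(rev, ()))
        while pending:
            node = pending.pop()
            if node in ancestors:
                continue
            ancestors.add(node)
            pending.extend(parent_map.get(node, ()))
    return candidates - ancestors
```

A real implementation would want to stop walking early once all but one candidate is dominated, but the result is the same.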
However, all of this is moot, because it can't actually be used by
InventoryEntry.find_previous_heads: that function is passed a
"versioned_file_store", not a repository.
Now, I suppose we could push it further up the stack, so that these
lines in CommitBuilder:

    previous_entries = ie.find_previous_heads(
        parent_invs,
        self.repository.weave_store,
        self.repository.get_transaction())
    # we are creating a new revision for ie in the history store
    # and inventory.
    ie.snapshot(self._new_revision_id, path, previous_entries, tree, self)
would change to something like:

    candidates = {}
    for inv in parent_invs:
        if ie.file_id in inv:
            ...  # Reproduce all the sanity checking in find_previous_heads
            pie = inv[ie.file_id]
            candidates[pie.revision] = pie
    heads = self.repository.get_file_graph_heads(ie.file_id,
                                                 candidates.keys())
    previous_entries = [candidates[rev] for rev in heads]
    ie.snapshot(self._new_revision_id, path, previous_entries, tree, self)

(Note the candidate entries have to come from the parent inventories,
not from ie itself, since ie is the new entry being committed.)
Now, it would make the most sense to have a new function on ie that acts
*just like* InventoryEntry.find_previous_heads, but that takes a
Repository, rather than taking a versioned file store.
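That new helper might look roughly like this. This is a hedged sketch: `get_file_graph_heads` is the assumed repository method from above, not an existing bzrlib API, and the inventory/entry interfaces are reduced to the bits used here:

```python
def find_previous_heads_from_repo(ie, parent_invs, repository):
    """Find the head entries for ie among the parent inventories.

    Collect the candidate entries for ie.file_id from each parent
    inventory, keyed by the revision that introduced them, then let
    the repository reduce the candidate revisions to graph heads.
    """
    candidates = {}
    for inv in parent_invs:
        if ie.file_id in inv:
            pie = inv[ie.file_id]
            candidates[pie.revision] = pie
    head_revs = repository.get_file_graph_heads(ie.file_id,
                                                list(candidates))
    return [candidates[rev] for rev in head_revs]
```

The sanity checking that find_previous_heads currently does would slot in next to the candidate collection, same as today.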
Also, I think Robert really wants to rewrite the whole code so he can do
things in "batches". I'm not really sure how to make that all work,
though. I think it is better to have an intelligently cached index,
rather than trying to figure out how to pre-compute all the files that
you are going to care about, and then make requests for all of those in
a streaming fashion.
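For what it's worth, the sort of cached index I have in mind could be as simple as memoizing ancestor sets per revision, so that repeated heads() queries during a commit don't re-walk the same graph regions (a toy sketch, not bzrlib code):

```python
class CachedGraphIndex:
    """Memoize ancestor sets so repeated heads() queries stay cheap."""

    def __init__(self, parent_map):
        self._parent_map = parent_map
        self._ancestors = {}  # revision_id -> frozenset of strict ancestors

    def ancestors(self, rev):
        cached = self._ancestors.get(rev)
        if cached is None:
            result = set()
            for parent in self._parent_map.get(rev, ()):
                result.add(parent)
                result.update(self.ancestors(parent))
            cached = frozenset(result)
            self._ancestors[rev] = cached
        return cached

    def heads(self, revision_ids):
        # A candidate is a head if no other candidate has it as an ancestor.
        candidates = set(revision_ids)
        dominated = set()
        for rev in candidates:
            dominated.update(self.ancestors(rev))
        return candidates - dominated
```

The second and later queries touching the same file graph hit the cache instead of the storage layer, which is the property I care about.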
I would really like to get something like this merged, so it would be
nice to understand what you think it would take to do so.
John
=:->