On remote weaves

Aaron Bentley aaron.bentley at utoronto.ca
Tue Aug 2 17:36:13 BST 2005



John A Meinel wrote:
> We probably need to work out some of the interfaces to a Storage
> location, since you seem very keen on using Weave for merging, we
> probably need a get_weave() request for store.

I think this adds an unnecessary requirement, since all of the data in a
weave can also be produced from flat stores.
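
Roughly, rebuilding a weave from flat texts could look like this (a
sketch only; build_weave, flat_texts, parent_map and the weave
object's add() method are made-up names here, not the real bzrlib
API):

    # Sketch, not the real bzrlib API: reconstruct a per-file weave
    # from a flat store.  flat_texts maps revision id -> full text,
    # parent_map maps revision id -> tuple of parent revision ids, and
    # weave is any object with an add(version, parents, lines) method.
    def build_weave(flat_texts, parent_map, weave):
        done = set()

        def add_version(revision_id):
            if revision_id in done:
                return
            for parent in parent_map[revision_id]:
                add_version(parent)    # parents must be woven in first
            lines = flat_texts[revision_id].splitlines(True)
            weave.add(revision_id, parent_map[revision_id], lines)
            done.add(revision_id)

        for revision_id in flat_texts:
            add_version(revision_id)
        return weave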

So one approach is to copy all the relevant revisions from the remote
branch to the local branch, and then do the weave using local data.  So
you don't have to worry about the RemoteStore interface-- only
Branch.update_revisions needs to.  And you don't need to convert data,
because we'll assume your local storage is already a weave.
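
Sketched out, that copy step could look something like this (the
method names are placeholders, not the actual Branch/Store interface):

    # Hypothetical sketch of the copy-then-merge flow described above;
    # the method names are placeholders, not the real interface.
    def update_revisions(local_branch, remote_branch):
        missing = [rev_id for rev_id in remote_branch.revision_history()
                   if not local_branch.has_revision(rev_id)]
        for rev_id in missing:
            # Copy the revision and inventory records; file texts would
            # be copied the same way, driven by what the inventory lists.
            local_branch.revision_store.add(
                remote_branch.revision_store.get(rev_id), rev_id)
            local_branch.inventory_store.add(
                remote_branch.inventory_store.get(rev_id), rev_id)
        # Everything needed for the merge is now local, so the weave
        # can be built without touching the remote store again.
        return missing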

Since we don't need a lot of weave data for a merge, just the history
since the last common ancestor, I'm not convinced that we need local
weave storage at all-- we can generate weaves as needed, and the cost
is proportional to the number of commits to that file since the last merge.
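
Picking the versions to weave is cheap too.  A rough sketch, where
parent_map and the tip revision ids are stand-ins for the real
ancestry data:

    # Hypothetical sketch of picking which versions would go into a
    # throw-away weave: everything on either side that is not already
    # shared history.
    def versions_for_merge(parent_map, this_tip, other_tip):
        def ancestry(tip):
            seen, stack = set(), [tip]
            while stack:
                rev_id = stack.pop()
                if rev_id not in seen:
                    seen.add(rev_id)
                    stack.extend(parent_map.get(rev_id, ()))
            return seen

        ours, theirs = ancestry(this_tip), ancestry(other_tip)
        common = ours & theirs
        # Cost is proportional to the commits unique to each side.
        return (ours | theirs) - common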

> The other possibility was to make a SmartBranch, which started to
> override more of the Branch operations. Aaron was thinking that it could
> serve up the .text_store and .inventory_store, etc. on its own (not a
> separate Storage + Transport class). He feels that they are part of the
> public interface of Branch, and thus need to be preserved.

The reason I say they're part of the public interface is because their
names don't begin with '_'.  That makes them public variables, and part
of the interface.

> From my experience, though, the *_store members are more of an
> implementation detail. Everyone else should be going through the
> get_revision() type interfaces.

The problem with that is commands like update_revisions, which operate
on another branch's *_store members.

> (Otherwise they just get files, rather
> than getting Revision objects).

Actually, they get the texts assigned to those ids.  The underlying
files may be different.

> There might be some places that go directly to the store, but I feel
> those probably should just be cleaned up.

It's conceivable, but to me it looks like there's not a lot of return
for the effort.

> A SmartBranch would take quite a bit more to implement. It's probably
> worth it, but I think we could have nice remote operations working over
> SmartTransport a lot sooner.

I don't see that.  XML-RPC looks quite easy to layer onto existing objects.

Like this (from
http://www.onlamp.com/pub/a/python/2001/01/17/xmlrpcserver.html):
    def call(self, method, params):
        # Dispatch an incoming XML-RPC call to the method of the same
        # name on this object.
        print "Dispatching: ", method, params
        try:
            server_method = getattr(self, method)
        except AttributeError:
            raise AttributeError("Server does not have XML-RPC "
                                 "procedure %s" % method)
        return server_method(method, params)
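
Hooking a dispatcher like that up to a listening server is only a few
more lines with the standard library (SimpleXMLRPCServer is the
Python 2 stdlib module; BranchServer is a made-up example class):

    # Sketch only: expose an existing object's methods over XML-RPC
    # using the standard library.
    import SimpleXMLRPCServer

    class BranchServer:
        def revision_history(self):
            return ['rev-1', 'rev-2']    # stand-in data

    server = SimpleXMLRPCServer.SimpleXMLRPCServer(('localhost', 8000))
    server.register_instance(BranchServer())   # getattr-based dispatch
    server.serve_forever()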

I don't see what's lacking in the SFTP protocol.  Locking and pipelining
are both explicitly supported, as well as lstat.  It's just that by
using the command-line sftp client you limit yourself to what it
supports, and you aren't able to take full advantage of the protocol.
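
For instance, a Python SFTP library such as paramiko gets at those
protocol features directly.  A sketch; the host, credentials and path
below are placeholders:

    # Sketch: talking the SFTP protocol directly (here via paramiko)
    # rather than driving the command-line sftp client.
    import paramiko

    transport = paramiko.Transport(('example.com', 22))
    transport.connect(username='user', password='secret')  # placeholders
    sftp = paramiko.SFTPClient.from_transport(transport)

    # lstat is part of the protocol, not something the client adds.
    info = sftp.lstat('.bzr/branch-format')
    print info.st_size

    sftp.close()
    transport.close()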

I can understand that it would be fun to implement your own remote
filesystem protocol, but unless there are capabilities SFTP lacks, I
think it makes more sense to use what's already out there.

Aaron



