Persistence in charms

Gary Poster gary.poster at canonical.com
Fri Feb 24 02:36:22 UTC 2012


On 02/22/12 11:55, Clint Byrum wrote:
> Excerpts from Brad Crittenden's message of Tue Feb 21 14:11:42 -0800 2012:
>> Hi,
>>
>> For our work with the buildbot master charm we'd need the ability to save history before the master service is torn down and restore it when it is re-deployed.  I've been told there has been some discussion about direct support in Juju but I've not been able to find the threads.  A pointer to that discussion or a summary as to the plan would be very helpful.
>>
>
> I think we've just had wild, beer/redbull fueled discussions, but nobody has
> been willing to go too deep into actual planning because there is so much we
> have to do.
>
>> I tried rolling my own store via scp but find there are too many pitfalls (key management, issues with ssh/scp and user identities, etc) and unattractive aspects.  I'm switching gears to provide persistence using boto to an S3-like store.
>>
>
> We *do* need to abstract object storage just like we do with compute for
> machines. It should be fairly easy, and those discussions, while also
> verbal, did produce some notes that need publishing at some point. The
> general idea is that you'd have two commands:
>
> get-object http://some-website/foo
>
> And it would download foo from that url, and cache it in object storage
> for the next time somebody requests that url. This makes the system more
> robust and independent of remote network failures.
>
> put-object /var/lib/mything/myfile.txt
>
> Would return a url that would be fetchable with get-object. This would
> be useful for things like sharing big data blobs across relations, as
> you could stick that url into the relationship and the other side can
> then fetch it.
>
> The files would be in S3 for EC2 (including whatever you've specified as
> your S3 provider for OpenStack) and the webdav service for Orchestra and
> the local provider. Because all providers already have to define object
> storage for charm storage, this is a pretty low hanging fruit to make
> charms more useful and robust.
>
> Unfortunately, we haven't even had time to write this down in a spec. :-P

:-)

This sounds like a potentially interesting feature, but, if we 
understand it correctly, it doesn't seem to fit the use case we had in 
mind.  If put-object returns a URL when the charm stops, we still have 
to stash that URL somewhere for the charm to use when it restarts. 
We'd rather have a more seamless approach.

Moreover, wanting a public URL for these kinds of persistent 
files--logs, database files, and the like--seems like it would be the 
exception rather than the rule.  Sharing them privately seems more 
commonly desirable.
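To make the stashing problem concrete, here is a toy mock of the 
get-object/put-object semantics described above, with a local directory 
standing in for the provider's object store.  Everything here is 
illustrative -- these commands don't exist in Juju today:

```shell
#!/bin/sh
# Toy mock of the proposed object-storage commands.  A local
# directory stands in for S3/WebDAV; purely illustrative.
STORE=$(mktemp -d)

put_object() {
    # Store a file and print the URL the charm would have to stash.
    key=$(echo "$1" | tr '/' '_')
    cp "$1" "$STORE/$key"
    echo "objstore://demo/$key"
}

get_object() {
    # Fetch by URL; the caching behavior is omitted for brevity.
    key=${1#objstore://demo/}
    cat "$STORE/$key"
}

echo "hello" > /tmp/myfile.txt
url=$(put_object /tmp/myfile.txt)  # the charm must persist $url itself
get_object "$url"                  # prints: hello
```

Note the last two lines: the URL only exists in a shell variable, so a 
stop hook would have to write it somewhere durable before the unit goes 
away, which is exactly the seam we'd rather not expose to charm authors.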

>
>> When running a non-local environment (ec2 or openstack), do charms have access to the access-key and secret-key?  Would it be possible, even for lxc deployments, to add support for an external store via the environments.yaml file that the charm can access?  Providing that configuration via the environments file would incur less exposure to secrets than putting them in a config file.
>>
>
> Technically no, they should not have access to those details. In practice,
> they're in Zookeeper, and everybody has full read access to Zookeeper,
> so you could get them out. However, that is not a guarantee, and will most
> certainly be shut down soon as we get more serious about security in juju.
>
> For now, your best bet is probably to wait for subordinates to land (VERY
> SOON!) and then you can write a subordinate charm that will send any file
> on the filesystem elsewhere. I plan to use this to write a bacula-client
> subordinate charm and then use that for backups on a cluster of machines.
>

Subordinate charms will be a valuable tool, and I look forward to 
their landing.

Unless I misunderstand, they cannot abstract away the environment--LXC 
or EC2 or MAAS.  In our discussions, we think charms ought to be able to 
say "stash this file" and "give me back this file" without having to 
worry about whether the stashing happened in an S3 bucket, a directory 
somewhere in the LXC host's filesystem, or whatever the MAAS equivalent is.

The approach to this that I like best would be for Juju to simply 
provide a mounted directory in a known/documented location that promises 
to be a persistent store for the machine, if such has been configured. 
The charm just looks for that directory and uses it as it wishes.  That 
could be easy to use, powerful, and fairly easy to implement (I 
suspect).  It would not even require writing any hooks.
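As a sketch of what a charm might do under this model (the mount point 
/srv/juju/persistent is invented for illustration; nothing like it 
exists today):

```shell
#!/bin/sh
# Hypothetical charm snippet: prefer a Juju-provided persistent
# mount if one exists, otherwise fall back to ordinary local
# storage.  Both paths are made up for illustration.
choose_data_dir() {
    persist=$1
    fallback=$2
    if [ -d "$persist" ]; then
        echo "$persist/buildbot"
    else
        echo "$fallback/buildbot"
    fi
}

# No hooks needed: the charm just checks the documented location.
DATA_DIR=$(choose_data_dir /srv/juju/persistent /var/lib)
echo "data dir: $DATA_DIR"
```

The appeal is that the charm's code is identical in every environment; 
only the presence or absence of the directory changes.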

Alternatively, we could have something like the two commands you 
mentioned, but put-file would not return anything, and get-file would 
simply take the same name.  For instance, if you wanted to stash 
/var/lib/some_program/some.log, you would be able to say "put-file 
/var/lib/some_program/some.log" in a stop hook and it would do so in 
whatever way was appropriate for the environment, or give an error if 
the functionality was not configured or available.  In an install hook, 
you could say "get-file /var/lib/some_program/some.log" and it would get 
the previously saved file, or give an error if there was no previously 
saved file for whatever reason.  Perhaps it would support globs....
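A toy model of those put-file/get-file semantics, again with a local 
directory standing in for whatever store the environment provides (the 
names and behavior are our proposal, not an existing Juju feature):

```shell
#!/bin/sh
# Toy put-file/get-file: the key is the path itself, so get-file
# needs no stashed URL -- just the same name used at put time.
STORE=$(mktemp -d)

put_file() {
    mkdir -p "$STORE/$(dirname "$1")"
    cp "$1" "$STORE/$1"
}

get_file() {
    [ -f "$STORE/$1" ] || { echo "no saved copy of $1" >&2; return 1; }
    cp "$STORE/$1" "$1"
}

# In a stop hook:
mkdir -p /tmp/demo && echo "history" > /tmp/demo/some.log
put_file /tmp/demo/some.log

# In an install hook after redeploy:
rm /tmp/demo/some.log
get_file /tmp/demo/some.log
cat /tmp/demo/some.log    # prints: history
```

Because the path is the key, the install hook needs no state from the 
previous life of the unit beyond knowing its own filenames.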

We think either of these could work with EC2 (an optionally configured 
S3 bucket in environments.yaml, for instance), LXC (an optionally 
configured additional data directory on the host), and presumably with 
MAAS (because they do something like this now, but I don't know what it 
is :-P ).
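The configuration might look something like this in environments.yaml 
(the persistent-storage key is invented purely to show the shape; it is 
not a real Juju option):

```yaml
environments:
  sample:
    type: ec2
    # hypothetical: where put-file/get-file would stash data
    persistent-storage: s3://my-charm-persistence-bucket
  sample-local:
    type: local
    persistent-storage: /srv/juju-persist
```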

We'd like to have this and could produce a prototype (and maybe more) of 
one of these in slack time if there's a bit of interest and encouragement.

Pertinent to all of these stories, Brad raised the possible issue of 
charms using more S3/disk/whatever storage than a deployer might 
desire.  That could presumably be addressed with constraints at a later 
date.

Gary


