Reliably share a persistent filesystem between units

Samuel Cozannet samuel.cozannet at canonical.com
Thu Jul 7 10:37:48 UTC 2016


Hi Robin,

Interesting question, I'm expecting a lot of answers and comments :). Here
are mine:

* If the volume of data is small and long-lived, with few writes but
potentially lots of reads (config files, for example), you may want to use
a simple system like etcd or consul. They are primarily used for service
discovery, but this can also be a valid use case for the technology.
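As a sketch of that first option, pushing a small config file into a
cluster KV store from the shell could look like this (key names and file
paths are purely illustrative, and it assumes etcdctl / consul already
point at a reachable cluster):

```shell
# Store a small config file in etcd (v3 API) and read it back on
# another unit.
etcdctl put /config/myapp/app.conf "$(cat app.conf)"
etcdctl get /config/myapp/app.conf --print-value-only > app.conf

# The Consul equivalent ('@' reads the value from a file):
consul kv put config/myapp/app.conf @app.conf
consul kv get config/myapp/app.conf > app.conf
```

Note that both stores cap value sizes (around 1 MB by default), which is
another reason this fits small config data rather than bulk files.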
* If you need faster access or larger data that you can represent as
key/value pairs, Redis would provide the speed, and some architectures
offer HA, though by default it doesn't scale much, at least for writes.
Here you would essentially use Redis with value = $(cat document)
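The Redis variant of that pattern, as a rough shell sketch (assumes a
reachable redis-server; the key name is made up):

```shell
# Store the file's contents as the value under a well-known key.
# 'redis-cli -x' reads the final argument (the value) from stdin.
redis-cli -x SET config:app.conf < app.conf

# Any other unit can then rebuild the file from the key:
redis-cli GET config:app.conf > app.conf
```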
* If you absolutely need file system storage,
  * There are a few p2p sync technologies like https://getsync.com/ that
would give you this functionality
  * I wouldn't focus on NFS being the SPOF. Actually, this is not your
problem; it is the storage provider's problem to provide a valid, reachable
storage endpoint. Your app should not be aware of the underlying storage,
and NFS is a service you just consume. The shiny new EFS on Amazon is a
distributed storage system with an NFS endpoint, Ceph can offer NFS
endpoints as well, and so can a lot of other storage backends. So NFS is
just a way to present a mounted file system to the network.
  * S3 offers a lot of tooling to sync files, so if speed, latency and cost
are not a problem, you may want to use that. It has the drawback of
requiring secrets, however.
  * Samba could also be an option.
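To make the NFS and S3 points concrete, this is roughly what the consuming
side looks like; the EFS hostname and bucket name below are placeholders,
not real endpoints:

```shell
# Mount an NFS endpoint (EFS-style) so the app just sees a local folder.
sudo mkdir -p /mnt/shared
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-0123abcd.efs.eu-west-1.amazonaws.com:/ /mnt/shared

# Or keep a local folder in sync with S3 (requires AWS credentials):
aws s3 sync /mnt/shared s3://example-bucket/shared
aws s3 sync s3://example-bucket/shared /mnt/shared
```

Either way the app only ever touches /mnt/shared, which is exactly the
"not aware of the underlying storage" property discussed above.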
* If your workload is containerized, ClusterHQ's Flocker offers a
semi-viable option for container storage: it allocates cloud block storage
to containers and manages resiliency. As most clouds offer block storage
backup / snapshots, you would get the feature you want. However, Flocker
currently has a SPOF itself, as its master is not HA.

My 2 cents,
Best,
Sam




--
Samuel Cozannet
Cloud, Big Data and IoT Strategy Team
Business Development - Cloud and ISV Ecosystem
Changing the Future of Cloud
Ubuntu <http://ubuntu.com>  / Canonical UK LTD <http://canonical.com> / Juju
<https://jujucharms.com>
samuel.cozannet at canonical.com
mob: +33 616 702 389
skype: samnco
Twitter: @SaMnCo_23
LinkedIn: <https://es.linkedin.com/in/scozannet>

On Thu, Jul 7, 2016 at 12:02 PM, Robin Winslow <robin at canonical.com> wrote:

> Does anyone know of the best way to share a folder between Juju units in a
> persistent and reliable way?
>
> Up until now we've been dealing with shared data that needs persistence by
> storing it in Swift, and so I have written my applications to interact
> with a Swift server directly (and therefore the need to be provided with
> the Swift credentials).
>
> However, I now have a situation where it would be much neater if the
> application could simply interact with a local folder without any knowledge
> of the underlying storage system. So I need a way to have the data in that
> folder reliably shared between all the application units with Juju, and
> also persisted somewhere outside the deployment in case the environment is
> destroyed.
>
> I assume the sharing could be simply achieved using NFS
> <https://jujucharms.com/nfs/> or similar, but AFAIK that doesn't in and
> of itself provide any redundancy or help with persisting the data.
>
> Has anyone done anything like this?
>
> From looking into it, it looks like CephFS
> <http://docs.ceph.com/docs/master/cephfs/> might do what I want (and it
> looks like
> <http://docs.ceph.com/docs/master/cephfs/best-practices/#which-ceph-version>
> it became officially "stable" in the Jewel release
> <http://ceph.com/releases/v10-2-2-jewel-released/> on June 15th),
> allowing me to mount a remote Ceph setup at a specific folder within my
> unit (correct me if I'm wrong here). However I have no prior experience
> with setting up or interacting with Ceph either within or outside Juju.
>
> Has anyone implemented CephFS before? Does anyone know if it can be done
> with existing Juju charms? If not, I'd be happy to try writing something,
> although Ceph will be a significant learning curve for me. It looks like
> it needs a Ceph Metadata Server (MDS) which I couldn't see a charm for
> <https://jujucharms.com/q/ceph>.
>
> Or is there a simpler solution that makes sense? Perhaps I'm
> overestimating the problems with using the NFS charm for this?
>
> Any help would be much appreciated. Thanks.
>
> --
> Juju mailing list
> Juju at lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>


More information about the Juju mailing list