Juju storage - early access
Andrew Wilkins
andrew.wilkins at canonical.com
Thu Apr 2 11:07:21 UTC 2015
Hello,
Hot on the heels of health/status is a pre-release look into the "storage"
feature. If you're interested in seeing where things are at and/or in being
a guinea pig, read on. There are still significant chunks of work to be
done, but what's there should be usable. If you do have a play, please let
us know of any issues or suggestions that you have.
If you are so inclined to test this feature out, you will need to build
from source and enable the feature with:
export JUJU_DEV_FEATURE_FLAGS=storage
prior to bootstrapping.
In a nutshell, the storage feature will enable you to create charms that
declare storage requirements; when you deploy the charm, you choose how
those requirements are fulfilled by specifying a storage "pool", a size
(i.e. the size of each volume/filesystem), and a count (the number of
volumes/filesystems). Many charm storage requirements will be singular, but
others may request, for example, multiple block devices (e.g. for redundant
storage).
EXAMPLE
--------------
A while back I modified the PostgreSQL charm to use the storage feature.
You can find my branch at
https://code.launchpad.net/~axwalk/charms/trusty/postgresql/trunk. If
you're interested in seeing the changes required to the charm, they're
here:
http://bazaar.launchpad.net/~axwalk/charms/trusty/postgresql/trunk/revision/112
(4 lines of code, 4 lines of YAML - not bad!)
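For reference, the storage stanza in the charm's metadata.yaml looks
roughly like this (reconstructed from memory, so treat it as illustrative;
the real lines are in the diff linked above):

storage:
  data:
    type: filesystem
    location: /srv/data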
Anyway, here's how you can go about using the new feature.
$ export JUJU_DEV_FEATURE_FLAGS=storage
$ juju bootstrap --upload-tools # only because it's from source!
$ juju deploy cs:~axwalk/postgresql pg-rootfs
$ juju deploy cs:~axwalk/postgresql --storage data=loop,1G pg-loop
$ juju deploy cs:~axwalk/postgresql --storage data=ebs,10G pg-magnetic
$ juju deploy cs:~axwalk/postgresql --storage data=ebs-ssd,10G pg-ssd
$ juju storage pool create ebs-iops ebs \
      volume-type=provisioned-iops iops=300
$ juju deploy cs:~axwalk/postgresql --storage data=ebs-iops,10G pg-iops
$ sleep $AWHILE
$ juju storage list
[Storage]
UNIT           ID      LOCATION   STATUS    PERSISTENT
pg-iops/0      data/4             pending   false
pg-loop/0      data/1  /srv/data  attached  false
pg-magnetic/0  data/2  /srv/data  attached  false
pg-rootfs/0    data/0  /srv/data  attached  false
pg-ssd/0       data/3  /srv/data  attached  false
If no storage constraints are specified, then Juju will place
filesystem-kind storage on the root filesystem. If the charm specifies a
minimum size, that size is used; otherwise Juju defaults the size to 1GiB.
The "pg-rootfs" service above is an example of this.
If you specify size and/or count, but no pool, then Juju will choose the
default storage provider for the environment (e.g. "ebs" for "ec2"). This
appears to be broken at the moment.
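Once that's fixed, the invocation should look something like this
(illustrative; size only, no pool):

$ juju deploy cs:~axwalk/postgresql --storage data=10G pg-default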
Each storage provider has an implicit pool which is the storage provider's
name (e.g. "ebs"), with no configuration. Each storage provider may
register additional default storage pools, e.g. "ebs-ssd" as you can see
used above. If the provided pools are not sufficient, you can specify your
own via the CLI.
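For example, to inspect the available pools and define your own (the pool
listing subcommand is shown as I recall it; the output format is still in
flux):

$ juju storage pool list
$ juju storage pool create my-ssd ebs volume-type=ssd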
IMPLEMENTED FEATURES
---------------------------------------
- deploy services with storage (requires a charm that declares storage
requirements)
* block-device storage, i.e. no filesystem; the charm can do what it
wants with the block device
* filesystem storage, i.e. a mounted filesystem, which may be local or
remote
* volume-backed filesystems, where Juju manages a filesystem on
block-device storage
- add machine with volumes (mostly used for testing)
* syntax is "juju add-machine --disks=<pool,size,count>
- X-storage-attached hook, notifying units of storage attachment
- storage-get hook, enabling units to enquire about properties of the
attached storage (see the hook sketch after this list)
- "juju storage" CLI:
* list storage instances/attachments
* list volumes/attachments
* list and create storage pools
- probably other things which elude me
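To give a flavour of the hooks, here's a minimal sketch of what a
<name>-storage-attached hook might look like for the PostgreSQL charm's
"data" storage (illustrative only; treat the storage-get key name as an
assumption, not a stable interface):

#!/bin/sh
# hooks/data-storage-attached
set -e
# storage-get reports properties of the storage that triggered this hook;
# "location" (assumed key) is where the filesystem is mounted.
data_dir=$(storage-get location)
juju-log "storage attached at ${data_dir}"
# ...point the service's data directory at ${data_dir} and restart...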
PROVIDER SUPPORT
-------------------------------
All environment providers support the following storage providers:
- loop: block-kind, creates a file in the agent data-dir and attaches a
loop device to it. See the caveats section below for a comment on using the
loop storage provider with local/LXC.
- rootfs: filesystem-kind, creates a sub-directory in the agent's data-dir
for the unit/charm to use
We have implemented support for creating volumes in the ec2 provider, via
the "ebs" storage provider. By default, the ebs provider will create cheap
and nasty magnetic volumes. There is also an "ebs-ssd" storage pool
provided OOTB that will create SSD (gp2) volumes. Finally, you can create
your own pools if you like; the parameters for ebs are:
- volume-type: may be "magnetic", "ssd", or "provisioned-iops"
- iops: number of provisioned IOPS (requires volume-type=provisioned-iops)
Some storage providers also support a "persistent=<bool>" pool attribute.
If you use this, Juju will not tie the lifetime of storage entities
(volumes, filesystems) to the lifetime of the machines that they are
attached to. In EC2/EBS terms, a persistent volume is one whose attachments
do not have the DeleteOnTermination flag set. Juju will not allow you to
cleanly destroy an environment with persistent volumes; you may use
"--force" to override this as usual, but please be aware that this will
leak the resources.
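For example, assuming ebs supports the attribute, you could create and use
a persistent pool like so (illustrative):

$ juju storage pool create ebs-persist ebs persistent=true
$ juju deploy cs:~axwalk/postgresql --storage data=ebs-persist,10G pg-persist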
UNIMPLEMENTED/CAVEATS
-----------------------------------------
- Storage destruction. Unit and machine destruction are prevented if they
have attached storage, so if you're playing with storage expect to have to
"destroy-environment --force".
- The unit agent does not yet wait for required storage before installing.
- Unit/machine placement is currently disabled if storage is specified.
- This doesn't matter at the moment due to the above point, but charm
deployment currently does not check for mount-point conflicts.
- Charm upgrade does not currently check for incompatible changes to
storage requirements in deployed charms.
- No X-storage-detach(ing|ed) hook yet; it hopefully goes without saying,
but anyway: since storage destruction isn't yet done, you won't be notified
when storage is destroyed.
- storage-add command: this is being worked on now, and will hopefully be
ready soon.
- OpenStack/Cinder storage provider. This is well under way, and should be
ready within the next couple of weeks.
- MAAS storage provider. We've been syncing up with the MAAS team, but
unfortunately we have not yet been able to schedule time to do the work.
Work will commence as soon as the Cinder provider lands.
- For LXC (local provider or not), you must currently set
"allow-lxc-loop-mounts" for the loop storage provider to work. With the
default AppArmor profile, LXC does not permit containers to mount loop
devices. By setting allow-lxc-loop-mounts=true, you are explicitly enabling
this, along with access to all loop devices on the host.
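That's an environment setting, so something like the following should do
the trick on a running environment (assuming you're not setting it in
environments.yaml up front):

$ juju set-env allow-lxc-loop-mounts=true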
- For LXC only, loop devices should be, but are not currently, marked as
"persistent". Loop devices remain in use even after the container is
destroyed, so you will need to use "losetup" to detach loop devices that
were allocated by containers (see the example below).
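Standard losetup usage suffices for that cleanup, e.g. on the host:

$ sudo losetup -a             # list attached loop devices
$ sudo losetup -d /dev/loop0  # detach a leaked device (adjust as needed)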
(Note, that's all scheduled for "phase 1"; there's still much more to do
beyond all of that!)
Whew.
Cheers,
Andrew