Deploy openstack-core bundle without maas (manual provider)

Mark Shuttleworth mark at ubuntu.com
Fri Sep 25 13:46:45 UTC 2015


On 25/09/15 13:25, Merlijn Sebrechts wrote:
> I suspect this is because some charms are deployed in lxc containers. These
> containers are not accessible from other machines since they are using a
> virtual network shared with the host, unknown to other machines.

That would definitely prevent them from serving one another, yes.

We work around this often by having the LXC containers take DHCP
addresses on the base network. It's not ideal to have dynamic
addressing for critical infrastructure, but for a PoC it's fine and
works. Just set a relatively short lease time if you are deploying and
redeploying many times, because you will otherwise quickly exhaust
your address space.
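
Roughly, the host-side pieces look something like this on 14.04
(interface and bridge names are placeholders, and depending on your
Juju version you may also need to point Juju's container bridge at
br0):

    # /etc/network/interfaces -- bridge the host NIC so containers
    # sit on the real network (bridge-utils)
    auto br0
    iface br0 inet dhcp
        bridge_ports eth0

    # /etc/lxc/default.conf -- attach new containers to that bridge
    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.flags = up

    # on the network's DHCP server (dnsmasq shown), keep leases short
    dhcp-range=10.0.0.100,10.0.0.200,2h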

> So my questions are:
>
> 1. Am I correct to think that the lxc network issues are the source of the
> problem?

It's plausible, yes.

> 2. If so, what would be the best course of action to solve this problem on
> a manual provider? Preferably using juju/charms.

If you have real machines, you could try co-locating the different
service units on the same machine, without LXC containers to isolate them.
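
For example, something like this (machine numbers and charms are
illustrative):

    # place units on the machine itself instead of in an LXC container
    juju deploy cs:trusty/mysql --to 1
    juju deploy cs:trusty/keystone --to 1
    juju add-unit nova-cloud-controller --to 2    # rather than --to lxc:2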

> 3. Does ceph accept "/dev/sda4" (an unmounted, unformatted partition) as
> osd-device?

A little birdie told me... no. Ceph wants to keep track of the whole
disk, literally, with the ability to format it, set its labels and so
on. That way you can rip the disk out of one machine, stick it in
another, and Ceph will see it, remember it, and find all the data on
it as if nothing had changed.
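
So point the charm at whole, unused disks rather than a partition,
something like this (device names are examples; check the charm's
config options for your release):

    juju set ceph osd-devices="/dev/sdb /dev/sdc"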

> 4. Will I encounter other problems trying to run this bundle on a manual
> provider?

It should be fine for a proof of concept. In fact I'd go further and say
I'd like us to commit that it is doable this way, as we've seen on this
list that folks often don't have capacity for a full dynamic MAAS
cluster early in their scale-out journey. We should have a
"manual-friendly" openstack bundle, which includes all the sorts of
changes you're making. So please persist, and thank you!

Mark



