[ubuntu-cloud] Fwd: Notes on using OpenStack with Juju

Jorge O. Castro jorge at ubuntu.com
Mon Nov 26 14:58:00 UTC 2012

For those of you not on the juju list...

---------- Forwarded message ----------
From: Thomas Leonard <tal at it-innovation.soton.ac.uk>
Date: Fri, Nov 23, 2012 at 11:38 AM
Subject: Notes on using OpenStack with Juju
To: juju <juju at lists.ubuntu.com>


We decided to set up an OpenStack system on a hosted machine to run
Juju services. I'm new to OpenStack and found it fairly
difficult/confusing. I'm sending a (slightly sanitised) version of the
notes I made during the process in case it's useful for others or
inspires someone to make it easier in future...


Set the desired hostname first. OpenStack doesn't like it if you try
to change it later (like I did): stale references to the old hostname
linger in the database.
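
On 12.04 that means picking the name before installing anything, e.g.:

sudo hostname openstack
echo openstack | sudo tee /etc/hostname

(and making sure /etc/hosts maps the name; "openstack" here is just
the hostname that appears later in these notes).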


OpenStack requires an LVM volume group, which must be named
“nova-volumes”. Create it on a spare partition with e.g.

vgcreate nova-volumes /dev/sda7


We don’t have enough IP addresses to give every VM a public IP.
Instead, the users must establish a VPN connection and use that to
manage the VMs. The few services which need to be public can be
assigned one of the limited public IPs available.

OpenStack supports a system called cloudpipe for automatically creating VPNs:


However, the instructions have many typos and other errors and I
couldn’t get it to work, so I installed OpenVPN manually instead:


I initially installed it in a VM, but there were some complex routing
issues with that, so now it's running on the main OpenStack host. Note
that the VPN is not for security (if we had IPv6, we’d probably just
make them all public IPs anyway), so I made a single client key-pair
for everyone (and configured the server to accept multiple clients
with the same certificate).
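
The server-side option for that is OpenVPN's duplicate-cn; a minimal
sketch of the change (config path assumed):

# in /etc/openvpn/server.conf
# allow several clients to connect with the same certificate/key pair
duplicate-cn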


Ubuntu provides a guide for installing OpenStack:


It gives three options:

- a live image: I imagine this won't persist after a reboot
- using MAAS: requires at least 10 physical machines and we only have one
- "the hard way": not documented

This seems to be the best guide for setting up OpenStack on Ubuntu
12.04 manually:


Fixed IPs

These are the internal (VPN-only) IPs assigned automatically to
machines. I created a 192.168.0.* network:

nova-manage network create private 192.168.0.0/24 1 256 --bridge=br100

Note: if you get it wrong and have to delete the network, it doesn’t
update the database properly and you'll get all kinds of weird bugs
about the old network no longer existing. You have to go in and edit
the database table like this:

delete from fixed_ips where network_id = 1;

Floating IPs

These are public IP addresses which can be assigned to machines as
needed. Register them with:

nova-manage floating create <public-ip-range>

To list them:

nova-manage floating list

Warning: if you delete a project, its floating IPs will not be
released. Use some SQL to null out the project_id field to recover them.
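
For example (the project ID is a placeholder; back up the nova
database first):

mysql -uroot nova

update floating_ips set project_id = NULL where project_id = '<old-project-id>';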


You can get a list of running services like this:

# nova-manage service list
Binary           Host       Zone  Status   State  Updated_At
nova-consoleauth localhost  nova  enabled  XXX    2012-11-06 09:35:58
nova-scheduler   localhost  nova  enabled  XXX    2012-11-06 09:36:02
nova-cert        localhost  nova  enabled  XXX    2012-11-06 09:36:02
nova-compute     localhost  nova  enabled  XXX    2012-11-06 09:35:53
nova-network     localhost  nova  enabled  XXX    2012-11-06 09:36:01
nova-cert        openstack  nova  enabled  :-)    2012-11-07 11:00:34
nova-consoleauth openstack  nova  enabled  :-)    2012-11-07 11:00:34
nova-scheduler   openstack  nova  enabled  :-)    2012-11-07 11:00:34
nova-compute     openstack  nova  enabled  :-)    2012-11-07 11:00:33
nova-network     openstack  nova  enabled  :-)    2012-11-07 11:00:34
nova-volume      openstack  nova  enabled  :-)    2012-11-07 11:00:34

Note that on my system it thinks there should be a set of "localhost"
services too because I renamed the machine after installing.


Nova will detect the LVM group created above and allow creating
volumes via the web interface. However, you can’t attach them to nodes
until you follow this guide (it will just hang in the “attaching”
state otherwise).


To unjam the volumes, some SQL is needed (see linked instructions
above). I was never able to get this working with Essex, but upgrading
to Folsom fixed the problems attaching volumes to instances.


Folsom is available here:


Note: the dashboard moves from http://myhost/ to http://myhost/horizon/

It requires a reboot before you can log in (restarting Apache,
keystone, etc, is not enough).



Created a partition to hold the Swift data (I used LVM but you don't have to):

vgcreate storage /dev/sdb1
lvcreate --name swift --size 100G storage

Formatted as ext4 and added to fstab:

/dev/storage/swift /srv/node/storage/swift ext4 noatime,nodiratime,user_xattr 0 0

Create the rings. Make sure you set "1" for the number of replicas
(not 3 as in the tutorial) or it will fail silently later because you
only have one server:

cd /etc/swift

swift-ring-builder account.builder create 18 1 1
swift-ring-builder container.builder create 18 1 1
swift-ring-builder object.builder create 18 1 1

swift-ring-builder account.builder add z1-<IP>:6002/<device> 100
swift-ring-builder container.builder add z1-<IP>:6001/<device> 100
swift-ring-builder object.builder add z1-<IP>:6000/<device> 100

(the IP and device name were sanitised out of my notes; 6002/6001/6000
are the default account, container and object server ports)

Note: DO NOT put a “/” in the drive name (e.g. storage/swift). It will
appear to work, but the Swift service will fail silently when you try
to use it.
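
One step from the standard Swift setup that's easy to miss here:
rebalance the rings after adding the devices, before starting anything:

swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance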

Start the server:

# swift-init proxy start
OSError: [Errno 13] Permission denied: '/root/keystone-signing'

See: https://bugs.launchpad.net/keystone/+bug/1036847

chown swift /var/cache/swift

Note: change port to match the one in
/etc/keystone/default_catalog.templates (8888 -> 8080).


swift -V 2.0 -A http://localhost:5000/v2.0 -U admin -K $OS_PASSWORD stat
  Account: AUTH_02ac15eeaab246a5a1ccfdcf8b9fc2d4
Containers: 0
  Objects: 0
    Bytes: 0
Accept-Ranges: bytes
X-Timestamp: 1352368359.21927
X-Trans-Id: txe265c083c6604d15b519b8f4c18d484c

Note: "401 Unauthorized" just means "an error occurred". Not
necessarily an authorisation failure (could be e.g. insufficient
storage available because the storage partition isn’t mounted, because
there's a "/" in the name).

Ignore the curl-based auth checks. They don’t work.

swift -V 2.0 -A http://localhost:5000/v2.0 -U admin -K $OS_PASSWORD upload mycontainer myfile.tgz

The file uploads, but the web GUI fails to display it (“Unable to
retrieve container list").

keystone role-create --name swiftoperator
keystone user-role-add --user-id 67bb173cd78a47798c1a755884d1ccd3 --role-id 2e8bbe57a65c41a7a0a0898a1a4dd978 --tenant_id <tenant-id>

An unexpected error prevented the server from fulfilling your request.
'NoneType' object has no attribute 'get' (HTTP 500)

Despite the error, it does add the role.

Containers and folders can then be created by ordinary users via the web GUI.



Entry point 'swift3' not found in egg 'swift'.

Instructions are wrong. The entry point should be:

use = egg:swift3#swift3
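
i.e. the relevant section of /etc/swift/proxy-server.conf should read
(section name as in the standard swift3 setup):

[filter:swift3]
use = egg:swift3#swift3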

Trying to download Juju config from Web GUI gives:

“Error: Could not generate Juju environment config: Invalid service
catalog service: s3”

To check:

keystone service-list
+----------------------------------+----------+--------------+---------------------------+
|                id                |   name   |     type     |        description        |
+----------------------------------+----------+--------------+---------------------------+
| 0ecd251795fb408da5d30ac562149d9e |   nova   |   compute    | OpenStack Compute Service |
| 303e58c465ec458591091902f54bc913 |  glance  |    image     | OpenStack Image Service   |
| 3054d7477f3c4a1b9e3c4c6873d81075 |  swift   | object-store | OpenStack Storage Service |
| ac57776cbd6f43bf87b8d3874fa92098 |    s3    |      s3      | s3 objectstore service    |
| ea4a7a9829264f689ef5493f34c0b880 |  volume  |    volume    | OpenStack Volume Service  |
| eaa016ede163470ca62220b905c2884e |   ec2    |     ec2      | OpenStack EC2 service     |
| fb7ef4dbf4b44fffaa921b7d83fe24d8 | keystone |   identity   | OpenStack Identity        |
+----------------------------------+----------+--------------+---------------------------+

Shows "s3". But: "keystone catalog" doesn't.

note: /etc/keystone/default_catalog.templates is a decoy.

keystone endpoint-create --service-id ac57776cbd6f43bf87b8d3874fa92098

This will make Keystone stop working ('NoneType' object has no
attribute 'replace'). To fix that:

mysql -uroot keystone

update endpoint set extra='{"adminurl": "http://myhost:8080",
"internalurl": "http://myhost:8080", "publicurl":
"http://myhost:8080"}' where id='4de64caaaa6343d4beb67da8cdebd06f';

"keystone catalog" now includes "s3". However, the GUI gives the same error.

Downloads OK for admin user and admin project, but not for admin user
on regular project.

Mysteriously started working after a while.


After downloading the environments.yaml file from the web GUI
(settings), juju is able to bootstrap:

$ juju bootstrap
2012-11-08 11:31:06,467 WARNING ssl-hostname-verification is disabled
for this environment
2012-11-08 11:31:06,468 WARNING EC2 API calls not using secure transport
2012-11-08 11:31:06,468 WARNING S3 API calls not using secure transport
2012-11-08 11:31:06,468 WARNING Ubuntu Cloud Image lookups encrypted
but not authenticated
2012-11-08 11:31:06,469 INFO Bootstrapping environment 'openstack'
(origin: distro type: ec2)...
2012-11-08 11:31:10,146 INFO 'bootstrap' command finished successfully

However, it can’t connect to the new instance:

2012-11-08 11:31:48,832 ERROR Invalid host for SSH forwarding: ssh:
Could not resolve hostname
server-49fe7773-7ec4-4ddc-b947-762bde2ec9cf: Name or service not known

After adding the name to /etc/hosts manually:

$ juju status
2012-11-08 11:35:21,942 WARNING ssl-hostname-verification is disabled
for this environment
2012-11-08 11:35:21,943 WARNING EC2 API calls not using secure transport
2012-11-08 11:35:21,943 WARNING S3 API calls not using secure transport
2012-11-08 11:35:21,943 WARNING Ubuntu Cloud Image lookups encrypted
but not authenticated
2012-11-08 11:35:21,944 INFO Connecting to environment...
The authenticity of host 'server-49fe7773-7ec4-4ddc-b947-762bde2ec9cf
(' can't be established.
ECDSA key fingerprint is 9c:08:95:ea:a2:87:2c:10:eb:ce:f6:6b:e4:ab:e6:59.
Are you sure you want to continue connecting (yes/no)? yes
Warning: the ECDSA host key for
'server-49fe7773-7ec4-4ddc-b947-762bde2ec9cf' differs from the key for
the IP address ''
Offending key for IP in /home/tal/.ssh/known_hosts:237
Are you sure you want to continue connecting (yes/no)? yes
2012-11-08 11:35:31,846 INFO Connected to environment.
    agent-state: running
    dns-name: server-49fe7773-7ec4-4ddc-b947-762bde2ec9cf
    instance-id: i-0000001f
    instance-state: running
services: {}
2012-11-08 11:35:32,094 INFO 'status' command finished successfully

Gets stuck: S3 downloads to VMs aren’t working.

Note: the web-GUI caches your AWS keys. If you recreate a project then
it will continue to serve up the old keys in the environments.yaml.
Restart Apache.

There are two ways to fetch a file with S3: you can include an HTTP
header, or put the parameters in the HTTP GET URL. Seems that the
latter doesn’t work.

This is due to a problem checking the date. Swift expects a parsable
date, but S3 uses seconds since the epoch. Edit
/usr/lib/python2.7/dist-packages/swift3/middleware.py:483 to fix.
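
I haven't reproduced the actual patch, but the shape of the fix is to
accept both date forms; a sketch in Python (the function name is mine,
not the middleware's):

```python
import email.utils

def expires_to_epoch(value):
    """Return an expiry time as seconds-since-epoch.

    Accepts either the S3 query-string form (plain epoch seconds,
    e.g. "1352368359") or an HTTP date string, which is what Swift
    expects; returns None if the value parses as neither.
    """
    try:
        return float(value)                       # S3 style: epoch seconds
    except ValueError:
        parsed = email.utils.parsedate_tz(value)  # HTTP date style
        if parsed is None:
            return None
        return float(email.utils.mktime_tz(parsed))
```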

Am able to deploy the Juju wordpress/mysql example and make it public
(note that "juju expose" updates the firewall but doesn't assign a
public IP; use the web GUI to do that).

EC2 vs OpenStack API

By default, the EC2 API (used by the default environment.yaml returned
by horizon) returns made-up non-routable names and therefore doesn't
work. Changing nova to assign only IP addresses makes it mostly work,
but then the machines get numeric hostnames, which confuses e.g. the
postgresql charm because it then can't configure the mail server [ why
is it doing this anyway? ].

An alternative is to use the OpenStack provider type. A suitable
configuration looks like this:

default: openstack
environments:
  openstack:
    type: openstack_s3
    control-bucket: juju-openstack-myproject-95ec-8c2083e67721
    admin-secret: ...
    auth-mode: userpass
    auth-url: http://myhost:5000/v2.0/
    username: tal
    password: ...
    project-name: myproject
    default-series: precise
    default-instance-type: m1.small
    default-image-id: 5d7c1800-d778-47dd-946d-ebd22ee21e34
    s3-uri: http://myhost:8080
    combined-key: ...    # EC2 access-key
    secret-key: ...      # as for EC2

The image ID is from "glance index". The provider must be
"openstack_s3", not "openstack". Otherwise, it fails to download the
metadata (Authentication failure).

The zookeeper machine still fails to boot because it gets an access
denied error trying to contact Swift (check the cloud-init output on
the instance):

juju-admin: error: unrecognized arguments: <head> <title>401
Unauthorized</title> </head> <body> <h1>401 Unauthorized</h1> This
server could not verify that you are authorized to access the document
you requested. Either you supplied the wrong credentials (e.g., bad
password), or your browser does not understand how to supply the
credentials required.<br /><br /> Authentication required </body>

The command it tries is in /var/lib/cloud/instance/cloud-config.txt:

juju-admin initialize --instance-id=$(curl

The problem is a lack of quoting in the curl command. Edit
juju/providers/openstack/launch.py to add quotes.
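
The shape of the fixed line (the metadata URL is the standard EC2 one):

juju-admin initialize --instance-id="$(curl http://169.254.169.254/latest/meta-data/instance-id)" ...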

It then bootstraps correctly.

Adding a new user

Create a new user using the dash. Make them a "swiftoperator". The
user should log in to the dash and go to Settings -> Download Juju
Environment Config.

They save this as ~/.juju/environments.yaml, adding their password in
the indicated places.

Then just "juju bootstrap".


Quotas can be set in /etc/nova/nova.conf. It’s worth increasing some
of them a bit, e.g.
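
For reference, the flag names in nova.conf look like this (the values
here are purely illustrative):

quota_instances=50
quota_cores=100
quota_ram=512000
quota_volumes=20
quota_floating_ips=10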


To change the quota for an existing project, use e.g.

nova quota-update 7ed23fe65dd24434838cef60cff37c75  --instances 50


Keystone has an SSL option, but enabling it breaks everything since it
just switches the existing port to SSL, which nothing else is expecting.

The recommendation seems to be to put it behind Apache SSL.

Enable Apache SSL as usual (you can reuse the VPN certificates here).
This at least gets SSL for the dashboard.
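
“As usual” on 12.04 amounts to:

sudo a2enmod ssl
sudo a2ensite default-ssl
sudo service apache2 restart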


To use SSL with keystone, enable the mod_proxy and mod_proxy_http
modules and then add this to default-ssl:

ProxyPass /v2.0/ http://localhost:5000/v2.0/

Note: you have to use /v2.0/ (not e.g. /keystone/v2.0/).
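
Enabling the modules is the usual invocation:

sudo a2enmod proxy proxy_http
sudo service apache2 restart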

Then, in your environments.yaml:

auth-url: https://myhost/v2.0/

Put the certificate in /usr/local/share/ca-certificates/myproject.crt
and run update-ca-certificates so Juju will find it. Note: MUST end
with ".crt".

S3 doesn’t have a fixed prefix like /v2.0/, so you need to make Apache
listen on a new port (e.g. 8443) and proxy that to 8080.
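
A sketch of the extra Apache config (certificate paths are whatever
you used for default-ssl):

Listen 8443
<VirtualHost _default_:8443>
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/myhost.pem
    SSLCertificateKeyFile /etc/ssl/private/myhost.key
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>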

Problem: Juju fails to deploy successfully when using SSL with S3 (SSL
with keystone is fine):

2012-11-15 15:18:48,473 ERROR Machine provider information missing: machine 0

Cause: the cloudinit script tries to use https to download into the
initial VM, but it doesn’t trust the certificate. To make the VMs
trust our certificate, edit the template VM (see below):

    Put certificates in /usr/local/share/ca-certificates/

    sudo update-ca-certificates

Then curl works:

$ curl https://myhost/
<body><h1>It works!</h1>

(that it works with keystone suggests that keystone doesn't bother to
verify that the certificate is valid)

Next, proxy port 18774 (https) to 8774 (http) in the same way for
nova. Update the publicurl in the keystone/endpoint table. Make sure
you use the DNS name in the endpoint, not the IP address, or
certificate verification will fail.

Finally, update the endpoint table for the "object-store" (use
"keystone catalog" to get the ID). Redirect this to 8443 (which we
already forwarded for S3).

Now Juju runs without printing any warnings about security.

You also need to add the CA certificate to the main OpenStack machine
(the one running Horizon) or that will stop working.

Note: possible problem uploading large charms with SSL ("Connection to
the other side was lost in a non-clean fashion"). Needs investigating.


Horizon is the web GUI dashboard for OpenStack. I had to make a few
changes to the Juju template generation:

The Juju configuration download was including the wrong credentials.
I’ve fixed it and filed a bug:

I also updated the forms code to pass the username and project name,
allowing those to be filled in automatically too.

I changed /usr/lib/python2.7/dist-packages/horizon/dashboards/settings/templates/settings/juju/environments.yaml.template

environments:
  {{ username }}:
    type: openstack_s3
    control-bucket: juju-{{ username }}
    admin-secret: {{ juju_admin_secret }}
    auth-mode: userpass
    auth-url: https://myhost/v2.0/            # HTTPS URL
    username: {{ username }}
    password: USER_PASSWORD
    project-name: {{ project_name }}
    default-series: precise
    default-instance-type: m1.small
    default-image-id: 247e6ec1-0d38-4927-a6ea-601a78d16c7f    # not sure this is needed actually
    s3-uri: https://myhost:8443/              # HTTPS S3 API
    combined-key: {{ ec2_access_key }}
    secret-key: {{ ec2_secret_key }}
    ssl-hostname-verification: true
    juju-origin: ppa                          # Juju 0.5 doesn't support openstack_s3

Custom image

Deploying all the VMs with Juju is quite slow (~4 min per VM). Also,
the machines need to be pre-configured to trust the CA certificate. To
do this:

    Create a new instance
    Add all packages likely to be needed (dependencies, not the actual services)
    Sync and snapshot
    nova image-list
    Wait until it finishes SAVING
    The new image is in /var/lib/glance/images (it will be the most recent one)
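
The “sync and snapshot” step itself can be done from the CLI (instance
and snapshot names here are placeholders):

sync
nova image-create <instance-name> juju-worker-snapshot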

glance add name="Ubuntu 12.04 Juju worker" is_public=true container_format=ovf disk_format=qcow2 < image-file

This will display the new image ID. Update your environments.yaml to
refer to the new image.

Using the Juju client (these are the instructions we give to our users)

These instructions assume an Ubuntu 12.10 client.

 1. In "Access & Security", ensure at least ICMP (ping) and SSH are enabled.
 2. Enable the OpenStack VPN on your own machine
 3. apt-get install juju
 4. Fix it:

    sudo vi /usr/lib/python2.7/dist-packages/juju/providers/openstack/launch.py

    At line 74, add quotes around the URL in the curl command (the
    same quoting fix described above).

 5. Download your Juju configuration from the web GUI (Settings)
        Save as ~/.juju/environments.yaml
        Edit to include your password where indicated
 6. juju bootstrap
    Wait a few minutes for the new machine to boot and install everything
 7. juju status

Dr Thomas Leonard
IT Innovation Centre
Gamma House, Enterprise Road,
Southampton SO16 7NS, UK

tel: +44 23 8059 8866

mailto:tal at it-innovation.soton.ac.uk

Jorge Castro
Canonical Ltd.
