Juju architecture questions - nonprovider

Thomas Leonard tal@it-innovation.soton.ac.uk
Fri Sep 21 14:16:53 UTC 2012


On 2012-09-20 10:41, Thomas Leonard wrote:
> On 2012-09-17 17:57, Clint Byrum wrote:
>> Excerpts from Thomas Leonard's message of 2012-09-17 02:06:03 -0700:
> [...]
>>> It's also usually uncertain where we will be deploying, so being able to
>>> handle several different cloud types is useful. Often there is no cloud at
>>> all, so having something which could deploy LXC containers over a fixed set
>>> of existing machines would be very useful (like an extended "local"
>>> deployment method).
>>>
>>
>> I've often thought about a more static "nonprovider" environment type
>> where the addresses of machines are just listed somewhere and SSH is
>> used to install the juju agents. It comes up often enough that I think
>> it's time we put this on our "experiments to try soon" list.
>>
>> https://bugs.launchpad.net/juju/+bug/1052065
>
> OK, I had a look into this. How should the cloud init stuff work? For EC2,
> EC2LaunchMachine creates a CloudInit and passes that to the new machine,
> which causes everything to be installed and the machine agents started.
[...]

OK, I've got a prototype of this working now:

   bzr branch lp:~tal-it-innovation/juju/provided

Using this I was able to follow the tutorial to deploy mysql + wordpress 
onto a pre-allocated machine.
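
For reference, the steps were essentially the ones from the standard 
tutorial, run against the environment shown below as the default:

   juju bootstrap
   juju deploy mysql
   juju deploy wordpress
   juju add-relation wordpress mysql
   juju expose wordpress
   juju status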

Of course, it's very rough and hacky. I just wanted to see if it would work.

You will need:

1. A web-server to serve the charms (replacing S3), plus ssh access for 
uploading files. Only the admin machine needs ssh access.

2. A fresh Ubuntu 12.04 machine for Juju to use. Use a new VM, because the 
prototype doesn't clean up after itself (and there are no LXC containers to 
isolate things).

It's best to use separate VMs for these two roles; otherwise anything that 
tries to run a web-server (e.g. wordpress) will fight over port 80 with the 
storage server...
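
To set up the store from (1), something like this on the web-server host 
should do (the paths here just match the example config below; adjust as 
needed):

   TOKEN=$(pwgen 20 1)
   mkdir -p /var/www/store/$TOKEN
   echo $TOKEN    # goes into storage-url and storage-ssh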

Your ~/.juju/environments.yaml should use the "provided" provider type, e.g.

default: fixed
environments:
   fixed:
     type: provided
     admin-secret: 6608267bbd6b447b8c90934167b2a294999
     storage-url: http://192.9.206.68/store/aigi5exaivee5Aetairu/
     storage-ssh: root@192.9.206.68:/var/www/store/aigi5exaivee5Aetairu/
     machines: [192.9.206.63]
     default-series: precise
     juju-origin: lp:~tal-it-innovation/juju/provided

storage-url is a directory served over HTTP (e.g. by Apache).

(Security note: the token in the URL, which you should generate using e.g. 
pwgen, is there to prevent unauthorised reads. Apache indexes should be off 
for "store", and HTTPS should be used if possible to prevent sniffing of the 
token. The prototype currently uses Python's built-in urllib2, with its 
well-known SSL certificate verification flaws. This is just for testing.)
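
For example, with Apache something along these lines keeps indexes off for 
the store directory (illustrative only; any web-server will do):

   <Directory /var/www/store>
       Options -Indexes
   </Directory>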

storage-ssh is used in an scp command to upload things.
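
For instance, uploading a charm bundle ends up as roughly this (the filename 
here is made up):

   scp wordpress-charm.zip root@192.9.206.68:/var/www/store/aigi5exaivee5Aetairu/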

machines is the list of VMs to host units. I've only tested with the default 
placement policy, which is "local" (everything on the first machine).

juju-origin needs to point at my branch; otherwise the remote juju doesn't 
recognise the new "provided" type.


juju bootstrap will install Juju, Zookeeper and the two agents on the first 
machine. I didn't do any async stuff, so you can just sit and watch as the 
output goes by.
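
Conceptually it's just the EC2 cloud-init steps run over ssh. A 
much-simplified sketch of what bootstrap amounts to (not the actual code; 
see the branch for the details):

   ssh root@192.9.206.63 apt-get install -y bzr zookeeperd
   ssh root@192.9.206.63 bzr branch lp:~tal-it-innovation/juju/provided /usr/lib/juju/juju
   # ...then write the environment config and start the machine and
   # provisioning agents, as cloud-init would on EC2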

juju destroy-environment doesn't do anything except update the storage 
server to say that the environment doesn't exist. You'll have to get a fresh 
VM if you want to bootstrap again, because otherwise bootstrap will fail 
when it tries to create directories that already exist.


-- 
Dr Thomas Leonard
IT Innovation Centre
Gamma House, Enterprise Road,
Southampton SO16 7NS, UK


tel: +44 23 8059 8866

mailto:tal@it-innovation.soton.ac.uk
http://www.it-innovation.soton.ac.uk/


