Thoughts about Juju local as Dev

Simon Davy bloodearnest at gmail.com
Fri Dec 5 19:32:53 UTC 2014


On 5 December 2014 at 01:54, Sebastian <sebas5384 at gmail.com> wrote:
> Hey guys!

Hi Sebastian

I've been looking at using a juju charm as the default dev env for one
of our services, login.ubuntu.com.

It's mostly working, but we haven't switched to it yet, as it needs
some more polish. We'd like to at some point, though.

> Behold the Vagrant workflow
> For some of the developers, this flow was terribly mysterious. Just the
> list of things you have to install, or knowing what each piece of software
> does, is daunting. Here's the list:
> Vagrant and then downloading a box, VirtualBox, Juju, Juju-local, Juju-gui
> (yes, it's important to separate client <> server <> gui), LXC and the
> containers paradigm, SSHFS (which is quite difficult on Mac OS X) to access
> and edit the files on the container, and finally the slow sshuttle to
> access the containers via ssh from the host.
> That's a lot to understand, so many things, and we haven't even started to
> explain how to use the charms, relations, and things like charm hooks.

Right. We're not using vagrant much, as everyone except our front-end
guy is on ubuntu, so we're using lxc directly. Plus, we have been
using a per-project lxc for development for a long time, with the
user's home dir bind-mounted, so our devs are familiar with this
already.

> Accessing your app services:
> Sshuttle is not the best solution, so let's use VirtualBox's networking
> features. I created a private network interface (containerseth0), and then
> pointed the networking configuration of the lxc container at it
> (containerseth0). That's the best solution I've come up with so far, but I
> know this is not the right way to do it; I don't know that much about
> network bridges.

Right. Again, this is not an issue when using lxc directly on your
host, as the 10.0.3.* addresses are already locally routable. But for
vagrant, that sounds like the way to go.

Having said that, I do all my juju work on a remote box, and I use
sshuttle on my local machine to access the 10.0.3.* addresses on that
box without any problems.
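
For reference, the invocation is just (substitute your own host):

    sshuttle -r me@my-remote-box 10.0.3.0/24

which tunnels the whole lxc address range over the ssh connection.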


> Download, install and configure = Waste
> When you want to be more efficient, the first thing you have to do is
> identify the waste and try to reduce it as much as you can. For example, a
> Drupal developer who wants to start on a new or existing project has to
> wait for the downloads (apt-proxy is being worked on right now, I think),
> the installation of all the dependencies, and then the configuration
> process, again, for the same service (charm).
> So, one idea is to clone the service unit (container) before the charm
> reaches the "started" status, so that whenever I want a new project I
> don't have to wait for all of that, just the config-changed and start
> process.
> Today the only thing that uses cloning (not 100% sure) is unit scaling.

As others have mentioned, using btrfs and squid-deb-proxy or similar
really takes the sting out of this.
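
For reference, the setup is roughly this (the lxc-clone key needs a
reasonably recent juju 1.x, 1.18 or later if I remember right, and
/var/lib/lxc wants to be on btrfs for the cheap snapshot cloning):

    # on the host
    sudo apt-get install squid-deb-proxy

    # environments.yaml, local provider section
    local:
        type: local
        lxc-clone: true   # clone a cached template container rather
                          # than building every unit from scratch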

> Why is my machine so slow!! /O\
> Every developer has more than two projects cloned in their workspace, and
> that results in a lot of deployed running charms, with all their services
> like Nginx, Php-fpm, Varnish and MySQL.

Do you need varnish in a dev environment? Perhaps you do, but I'm
curious as to your use case.

For development, we strip the env down to the base service and its
dependencies (like postgres and memcached).
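
As an illustration, a stripped-down v3-format bundle for that looks
something like this (the service names, charm URLs, and relation
endpoints here are made up; they depend on your charm's metadata):

    login-dev:
      services:
        myapp:
          charm: local:trusty/myapp
          num_units: 1
        postgresql:
          charm: cs:trusty/postgresql
        memcached:
          charm: cs:trusty/memcached
      relations:
        - ["myapp:db", "postgresql:db"]
        - ["myapp:cache", "memcached:cache"]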

> So,
> It's natural that the machine, and consequently the applications, appear
> to be very slow; there are too many containers running at the same time.
> We haven't settled on a solution yet, but we are trying to:
> - Use one Vagrant VM for each project, but that's painful when you need to
> see other projects running.

I think the vagrant usage may be key to this slowness. I regularly
have upwards of 20 lxc containers running on a 4 core, 8G box without
any noticeable performance issues or memory exhaustion.

The other issue of course is disk thrashing, which btrfs clone and the
upcoming os-upgrade options will help with.

> - Manually turn off all the containers using lxc-stop, which is another
> painful process.

Yeah, you can script it though.
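
Something like this does it (assuming lxc 1.0's lxc-ls flags):

    # stop every running container
    for c in $(sudo lxc-ls --running); do
        sudo lxc-stop -n "$c"
    done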

> - Run parallel local-type environments, one env per project, but that
> needs tweaks to avoid port conflicts, and we still had to manually
> stop/start all the containers.
> So we haven't figured it out yet.

Yeah, I've done this, but it is hacky. I'd love a tool that would set
up a separate local env for me.
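
From memory, the main things to tweak per environment in
environments.yaml are the state and API ports, e.g.:

    environments:
      project-a:
        type: local
        state-port: 37017   # the defaults...
        api-port: 17070
      project-b:
        type: local
        state-port: 37018   # ...bumped so both can run at once
        api-port: 17071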

> juju set mysql dataset-size="20%"
> Fuuuu.... why isn't MySQL starting? Telling the developers, and making
> predefined bundles and config files, was not enough; they forgot to set
> the MySQL dataset-size when working in a local environment. The charms
> could react better to the environment type.

Urg, sounds fun :(
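
Agreed the charm could be smarter there, though. A hypothetical
config-changed fragment for that idea might clamp the value based on
the machine it lands on (this is not what the real mysql charm does,
and 80% is its default only if I remember right):

    #!/bin/sh
    # sketch: cap dataset-size on small machines (e.g. a local lxc)
    # rather than letting mysqld fail to allocate its buffers
    mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    dataset_size=$(config-get dataset-size)
    if [ "$mem_kb" -lt 4194304 ] && [ "$dataset_size" = "80%" ]; then
        juju-log "small machine, capping dataset-size at 20%"
        dataset_size="20%"
    fi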

> Charm development is a slow and complex process
> You are developing a complex charm and guess what? An error in the logs,
> the charm's deploy failed. So you modify the charm code and repeat the
> whole process all over again.

Are you using debug-hooks? I find it not too slow to make local
changes to the charm on the unit and then copy the changes back to my
host when I'm done.

There's a plugin I wrote for this:
https://github.com/juju/plugins/blob/master/juju-sync-charm
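
For anyone following along, the loop is roughly (substitute your own
unit name):

    juju debug-hooks mysql/0
    # a tmux session opens on the unit; when the next hook fires you
    # get a shell in the hook's context, so you can edit the charm in
    # place and re-run the hook until it passes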


> This is the workflow today if you don't know of other approaches, for
> example:
> - Connect to the service unit via ssh, find the charm's code on the
> server, edit it in Vi or Nano, and retry. And if it works, you have to
> remember to replicate all your changes into your beautiful versioned
> source code, which is generally on GitHub. So go learn how to mix bzr and
> git, because you're going to need it if you want to put your charm in the
> charm store.

I use the above juju sync-charm plugin to do this.

> - Since you are using linux containers, you can create a symbolic link
> between your source code and where the charm code needs to be in the
> container's juju agent (/var/lib/juju/...). But! this doesn't work, and I
> currently don't know why.
>
> - I wish to know if there's another, better way...

You can't symlink, but you can bind mount.

If you add a bind-mount entry to the container's fstab, at
/var/lib/lxc/<container name>/fstab, and then reboot the container,
that path should be mounted inside the container. See
https://www.stgraber.org/2013/12/21/lxc-1-0-advanced-container-usage/
for more info.
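
Concretely, an entry looks like this (paths here are just examples;
note the mount target is relative to the container's rootfs):

    # /var/lib/lxc/<container name>/fstab
    /home/me/src/mycharm var/lib/juju/agents/unit-myapp-0/charm none bind 0 0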

A bit awkward, but can be scripted and works for me so far.

HTH

-- 
Simon


