Relations need config too

Clint Byrum clint at
Mon Jul 23 17:14:20 UTC 2012

Excerpts from Tom Haddon's message of 2012-07-23 07:03:14 -0700:
> On 19/07/12 02:57, Robert Collins wrote:
> > On Thu, Jul 19, 2012 at 12:44 PM, Clint Byrum <clint at> wrote:
> >> I"ve run into this problem a few times, and I think there's a hole in
> >> the current model that needs filling.
> >>
> >> Recently Juan Negron and Tom Haddon submitted a really awesome set of
> >> enhancements for the haproxy charm.
> >>
> >> You can see there that a lot of it is some nice abstraction for juju
> >> commands, which I think belongs in its own package. But that's not what
> >> I'm writing about.
> >>
> >> Tom and Juan felt that there needed to be a way to assign units of
> >> a backend web service to services, which were created by dumping raw
> >> haproxy service configs into a config option.
> > ...
> > 
> > So one thing that may need clarifying: LP runs about 6 vhosts on a
> > single backend. Would what you're proposing entail generating 6
> > metadata relations per backend? (,
> > etc)
> > 
> > The endpoint host is implied by the relation; the reason that custom
> > endpoints are needed is cluster management for LP - the topology we
> > run in production is this:
> > internet -> 2x apache (SSL unwrap), IP round robin -> 2x haproxy using
> > apache LB -> (the same) 2x haproxy using fail-over mode -> 90 or so LP
> > backends spread over 4 hosts.
> > 
> > The double bounce via haproxy is so that one haproxy has a precise
> > view of the work each backend is doing; only if it goes down does the
> > other haproxy end up forwarding traffic to backends. This works very
> > well in working around python GIL limitations. [because any (C)python
> > appserver isn't really threaded, no matter what the packaging on the
> > box says].
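
For reference, the "fail-over mode" above maps onto haproxy's "backup"
server flag: a server marked backup only receives traffic when every
non-backup server is down. A minimal sketch of what a first-tier stanza
might look like (names and addresses are invented):

    # sketch only; names and addresses invented
    listen lp-haproxy-tier 0.0.0.0:80
        mode http
        server haproxy-a 10.0.0.2:80 check
        server haproxy-b 10.0.0.3:80 check backup
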
> So I'm not sure this would actually be covered, even by the
> implementation we're talking about here. Let me try and elaborate. For
> the Launchpad appservers, we have two haproxy frontends, but one is
> designated as the primary, and one as the backup. This way all traffic
> flows through just one haproxy instance at a time so that:
> - We can control the maxconn for each appserver to be "1" - if we had
> two haproxy instances each configured with "maxconn 1" and in an
> active-active configuration, we'd have at least two connections to each
> appserver
> - One haproxy instance can be authoritative for the number of
> connections it's sending to the appserver(s). In other words, if you
> have two haproxy instances in active-active mode, I'm not aware of a way
> of sharing information between them on how many connections they're
> sending to the backends, which makes fine-tuning the number of
> connections difficult.
> I don't think there's a way of doing this in Juju currently, as multiple
> haproxy instances that are performing the same role are all configured
> the same, so I don't know how we'd say "make this one primary, and this
> other one backup".

This comes up a lot, and relates to the need for juju to have simple
leader election support:

Basically, if you have two units, it's really impossible, even with peer
relations, to make them aware of the exact number of units and have one
of them decide to be the master/leader/etc.
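
To illustrate the race (a sketch; the peer relation name "cluster" and
the election logic are invented for this example):

    #!/bin/sh
    # hooks/cluster-relation-joined
    # Naive leader election: lowest unit number wins.
    MY_NUM=${JUJU_UNIT_NAME##*/}
    LEADER=true
    for peer in $(relation-list); do
        if [ "${peer##*/}" -lt "$MY_NUM" ]; then
            LEADER=false
        fi
    done
    # Broken: relation-list only shows peers that have joined so far,
    # so the first unit up sees an empty list and elects itself, and a
    # later unit with a lower number will do the same.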

The simplest workaround to automate this is to delay any configuration
of any peer relations until you've done something like this (assuming
the haproxy service is deployed under the name "proxy"):

juju set proxy primary-unit=proxy/4
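
The charm would then key off that option in its hooks before touching
any peer relation state; a minimal sketch (the option name and the
rendering steps are assumptions):

    #!/bin/sh
    # hooks/config-changed
    PRIMARY=$(config-get primary-unit)
    if [ -z "$PRIMARY" ]; then
        exit 0  # not designated yet; wait for juju set
    fi
    if [ "$JUJU_UNIT_NAME" = "$PRIMARY" ]; then
        : # render the active haproxy config here
    else
        : # render the fail-over (backup) config here
    fi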

Another one is to use an out-of-band leader election protocol. There
are a number of these, from ucarp to pacemaker to whackamole (old school).
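
For example, ucarp floats a virtual IP between the units and runs a
script on promotion or demotion (all of these values are invented):

    # illustrative values only:
    ucarp -i eth0 -s 10.0.0.2 -v 42 -p secret -a 10.0.0.100 \
          -u /etc/ucarp/vip-up.sh -d /etc/ucarp/vip-down.sh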

Anyway, I suspect there is a floating IP that really designates who is
the primary. In that case, however you're doing that leader election
now is what you would use in the juju charm.
