Relations need config too

Tom Haddon tom.haddon at canonical.com
Mon Jul 23 14:03:14 UTC 2012


On 19/07/12 02:57, Robert Collins wrote:
> On Thu, Jul 19, 2012 at 12:44 PM, Clint Byrum <clint at ubuntu.com> wrote:
>> I've run into this problem a few times, and I think there's a hole in
>> the current model that needs filling.
>>
>> Recently Juan Negron and Tom Haddon submitted a really awesome set of
>> enhancements for the haproxy charm.
>>
>> https://code.launchpad.net/~mthaddon/charms/precise/haproxy/mini-sprint-sf/+merge/114951
>>
>> You can see that a lot of it is some nice abstraction for juju commands,
>> which I think belongs in its own package. But that's not what I'm writing
>> about.
>>
>> Tom and Juan felt that there needed to be a way to assign units of
>> a backend web service to services, which were created by dumping raw
>> haproxy service configs into a config option.
> ...
> 
> So one thing that may need clarifying: LP runs about 6 vhosts on a
> single backend. Would what you're proposing entail generating 6
> metadata relations per backend? (launchpad.net, code.launchpad.net
> etc)
> 
> The endpoint host is implied by the relation; the reason that custom
> endpoints are needed is cluster management for LP - the topology we
> run in production is this:
> internet -> 2x apache (SSL unwrap), IP round robin -> 2x haproxy using
> apache LB -> (the same) 2x haproxy in fail-over mode -> 90 or so LP
> backends spread over 4 hosts.
> 
> The double bounce via haproxy is so that one haproxy has a precise
> view of the work each backend is doing; only if it goes down does the
> other haproxy end up forwarding traffic to backends. This works very
> well for working around Python GIL limitations [because any (C)Python
> appserver isn't really threaded, no matter what the packaging on the
> box says].

So I'm not sure this would actually be covered, even by the
implementation we're talking about here. Let me try and elaborate. For
the Launchpad appservers, we have two haproxy frontends, but one is
designated as the primary, and one as the backup. This way all traffic
flows through just one haproxy instance at a time so that:

- We can set the maxconn for each appserver to "1" - if we had two
haproxy instances, each configured with "maxconn 1", in an
active-active configuration, we'd have at least two connections to each
appserver.
- One haproxy instance can be authoritative for the number of
connections it's sending to the appserver(s). In other words, if you
have two haproxy instances in active-active mode, I'm not aware of any
way of sharing information between them about how many connections
they're sending to the backends, which makes fine-tuning the number of
connections difficult (a rough config sketch follows below).
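
To make that concrete, here's a rough sketch of the sort of haproxy
config we end up wanting (the backend names, addresses and ports are
made up for illustration - only the "backup" and "maxconn 1" parts
matter):

    # failover layer: the second haproxy only takes traffic if the
    # first one goes down, so a single instance stays authoritative
    backend lp-via-haproxy
        server haproxy-0 10.0.0.11:8080 check
        server haproxy-1 10.0.0.12:8080 check backup

    # on the active haproxy, each appserver is capped at one connection
    backend lp-appservers
        server appserver-0 10.0.1.10:8085 check maxconn 1
        server appserver-1 10.0.1.11:8085 check maxconn 1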

I don't think there's a way of doing this in Juju currently, as multiple
haproxy instances that are performing the same role are all configured
the same, so I don't know how we'd say "make this one primary, and this
other one backup".

> So I wonder if there are really three distinct factors here:
>  - running multiple backends on a single node
>  - determining the double-layer config for haproxy.
>  - the vhosts that each backend is serving
> 
> AFAIK we don't use Host: in the haproxy or LP configs - each LP
> appserver serves all the domains that all the other appservers do, so
> the third point, at least for LP, is entirely moot. That's not to say
> we don't have regular vhosts, but as often as not we have dedicated IPs
> for them too.
> 
> -Rob
> 
