Relations need config too
Clint Byrum
clint at ubuntu.com
Thu Jul 19 07:06:10 UTC 2012
Excerpts from Robert Collins's message of 2012-07-18 18:57:04 -0700:
> On Thu, Jul 19, 2012 at 12:44 PM, Clint Byrum <clint at ubuntu.com> wrote:
> > I've run into this problem a few times, and I think there's a hole in
> > the current model that needs filling.
> >
> > Recently Juan Negron and Tom Haddon submitted a really awesome set of
> > enhancements for the haproxy charm.
> >
> > https://code.launchpad.net/~mthaddon/charms/precise/haproxy/mini-sprint-sf/+merge/114951
> >
> > You can see a lot of it was some nice abstraction for juju commands,
> > which I think belongs in its own package. But that's not what I'm
> > writing about.
> >
> > Tom and Juan felt that there needed to be a way to assign units of
> > a backend web service to services, which were created by dumping raw
> > haproxy service configs into a config option.
> ...
>
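The raw-config approach described above might look roughly like this: a single charm config option carrying whole haproxy service stanzas as YAML. (The option name and field names here are a sketch of the pattern, not the merge proposal's exact schema.)

```yaml
# Hypothetical shape of a charm config option that dumps raw haproxy
# service definitions in as data, rather than modeling them as relations.
services: |
  - service_name: app_web
    service_host: 0.0.0.0
    service_port: 80
    service_options:
      - balance leastconn
    server_options: maxconn 100
```

The friction is visible right away: the units of the backend service have to be wired into this blob by hand, which is exactly the kind of bookkeeping relations are supposed to do.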
> So one thing that may need clarifying: LP runs about 6 vhosts on a
> single backend. Would what you're proposing would entail generating 6
> metadata relations per backend? (launchpad.net, code.launchpad.net
> etc)
>
> The endpoint host is implied by the relation, the reason that custom
> end points are needed is cluster management for LP - the topology we
> run in production is this:
> internet-> 2x apache (SSL unwrap), IP round robin -> 2xhaproxy using
> apache LB -> (the same) 2xhaproxy using fail-over mode -> 90 or so LP
> backends spread over 4 hosts.
>
If every actual hostname points to the single SSL->haproxy->haproxy
cluster IP(s), then no, you don't care about Host. I'm also not sure why
you would then need multiple service entries in haproxy, since it sounds
like everything that arrives in haproxy gets sprayed to every app server.
> The double bounce via haproxy is so that one haproxy has a precise
> view of the work each backend is doing; IFF it goes down does the
> other haproxy end up forwarding traffic to backends. This works very
> well in working around python GIL limitations. [because any (C)python
> appserver isn't really threaded, no matter what the packaging on the
> box says].
>
juju models the proxying behavior you describe above pretty well. Provide
and require the reverse and forward proxy interfaces for the
haproxy<->haproxy relationship, then add a 'fallback-backend' relation,
and you can configure this, I think, exactly as you describe.
Perhaps that's what the multiple service entries are for?
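As a sketch, the charm metadata for that topology could look like the
fragment below. (The interface and relation names are illustrative, not
the actual haproxy charm's metadata.)

```yaml
# Hypothetical metadata.yaml fragment for an haproxy charm that can sit
# on either side of an haproxy<->haproxy pair.
provides:
  reverseproxy:
    interface: http        # traffic we accept and forward onward
requires:
  backend:
    interface: http        # the normal app-server relation
  fallback-backend:
    interface: http        # peer haproxy, only used when it goes down
```

Relating the two haproxy services over 'fallback-backend' would then
express the fail-over hop in the model itself, instead of in pasted
config.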
> So I wonder if there are really three distinct factors here:
> - running multiple backends on a single node
> - determining the double-layer config for haproxy.
> - the vhosts that each backend is serving
>
> AFAIK we don't use Host: in the haproxy or LP configs - each lp
> appserver serves all the domains that all the other appservers do, so
> the third point, at least for LP, is entirely moot. Thats not to say
> we don't have regular vhosts, but often as not we have dedicated ips
> for them too.
Right, in this case you can just tell haproxy to listen on dedicated IPs
X, Y, and Z, and spray traffic from all of those to all backend servers,
passing Host: along untouched. I think that can be done with current
haproxy right now (though multiple IPs is not something juju understands).
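For example, an haproxy config along these lines listens on several
dedicated IPs and sends everything to the same backend pool; in HTTP mode
haproxy forwards the Host: header unchanged. (All addresses and server
names are placeholders.)

```
# Sketch: one frontend bound to dedicated IPs X, Y, Z, spraying to every
# backend server. Host: passes through untouched in mode http.
frontend vhosts
    bind 10.0.0.10:80    # dedicated IP X
    bind 10.0.0.11:80    # dedicated IP Y
    bind 10.0.0.12:80    # dedicated IP Z
    mode http
    default_backend appservers

backend appservers
    mode http
    balance roundrobin
    server app1 10.0.1.1:8080 check
    server app2 10.0.1.2:8080 check
```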