AWS Load Balancer Use Case

Luis Arias kaaloo at gmail.com
Wed Oct 26 13:59:39 UTC 2011


Implemented elb-relation-broken today.  I haven't had a chance to see
elb-relation-departed fire yet; maybe I need to add or remove units to
trigger it, so it may need implementing too.
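
The heart of it is roughly the following (a simplified Groovy sketch
against the AWS Java SDK; lbName and instanceId are placeholders for
values the real hook derives from config-get and the relation):

    import com.amazonaws.auth.BasicAWSCredentials
    import com.amazonaws.services.elasticloadbalancing.AmazonElasticLoadBalancingClient
    import com.amazonaws.services.elasticloadbalancing.model.*

    def accessKey  = System.getenv('AWS_ACCESS_KEY_ID')
    def secretKey  = System.getenv('AWS_SECRET_ACCESS_KEY')
    def lbName     = 'my-elb'   // placeholder, really from config-get
    def instanceId = args[0]    // placeholder, resolved from the relation

    def client = new AmazonElasticLoadBalancingClient(
        new BasicAWSCredentials(accessKey, secretKey))

    // Remove the departing unit's instance from the load balancer.
    client.deregisterInstancesFromLoadBalancer(
        new DeregisterInstancesFromLoadBalancerRequest(lbName,
            [new Instance(instanceId)]))

    // Disabling availability zones left with no registered instances
    // needs an extra DescribeInstances pass to map the remaining
    // instances to zones; the elb api also insists on keeping at
    // least one zone enabled.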

I studied the placement: local idea in environments.yaml.  This seems
to be only partly implemented: the environment definitely has a
placement property, but it is only used in a test and doesn't seem to
be consulted by either add-unit or deploy.  So I think it would be
good either to bring back the notion of a --policy flag when deploying
a new unit, or to let a charm specify constraints on placement that
override the provider's policy.  The latter sounds more flexible to
me, since some charms would like to be placed together; others, like
the ELB charm, don't require a new machine and would be happy to be
co-located with any other charm on an existing machine; and some
charms may want a tiny instance while others want one with a lot of
CPU or memory, etc.

In any case I need some way to work around this issue so I can go
beyond my initial testing.  I'll work on elb-relation-departed
tomorrow.

Luis

On Tue, Oct 25, 2011 at 6:49 PM, Luis Arias <kaaloo at gmail.com> wrote:
> So here is my progress today:
>
> https://code.launchpad.net/~kaaloo/charm/oneiric/elb/trunk
>
> The charm currently expects an existing load balancer instance and
> will not attempt to create one.  There is some configuration involved
> in creating one that is best left in my case to the AWS Console.
> There are also issues with respect to using Route 53 with the load
> balancer that I don't want to go into right now.
>
> I still need to write the elb-relation-broken hook (that is the name
> I noticed being called in debug-hooks; should I still use
> elb-relation-departed?), which would then remove the unit's instance
> from the ELB.
>
> I had to worry about availability zones, because simply registering
> an instance with the elb apparently does not enable serving the
> availability zone the instance is in.  Similarly, when removing an
> instance, I think I'll clean up any availability zones which are
> enabled but have no instances registered.
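>
> The joined side boils down to something like this (a Groovy sketch
> against the AWS Java SDK; credentials elided, and lbName, instanceId
> and instanceZone are placeholders, with instanceZone coming from an
> ec2 DescribeInstances lookup on the unit's instance):
>
>     import com.amazonaws.auth.BasicAWSCredentials
>     import com.amazonaws.services.elasticloadbalancing.AmazonElasticLoadBalancingClient
>     import com.amazonaws.services.elasticloadbalancing.model.*
>
>     def client = new AmazonElasticLoadBalancingClient(
>         new BasicAWSCredentials(accessKey, secretKey))
>
>     // Registering the instance alone is not enough...
>     client.registerInstancesWithLoadBalancer(
>         new RegisterInstancesWithLoadBalancerRequest(lbName,
>             [new Instance(instanceId)]))
>
>     // ...its availability zone must also be enabled on the elb,
>     // otherwise the instance never receives traffic.
>     client.enableAvailabilityZonesForLoadBalancer(
>         new EnableAvailabilityZonesForLoadBalancerRequest(lbName,
>             [instanceZone]))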
>
> I did not have time today to try setting a policy: local in the
> environments.yaml file.  I'm not quite sure this is what I need,
> since it gives me the impression that all service units would then
> be deployed on machine 0.  Maybe that's not the case; I will try it
> tomorrow.
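>
> If I do try it, I believe the incantation is something like this in
> environments.yaml ('sample' is just a placeholder environment name):
>
>     environments:
>       sample:
>         type: ec2
>         placement: local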
>
> I'm not quite sure what exit code to use from the groovy helper
> script in bin to signal that the hook has failed.  I'm currently
> exiting with code 2 on failure and exiting normally on success,
> although this needs more thorough testing, because the AWS Java SDK
> tends to throw an exception when something goes wrong instead of
> returning an error value.
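>
> Concretely, the helper's entry point ends up wrapped roughly like
> this (a sketch; doRelationWork stands in for the actual hook logic,
> and I don't know yet whether juju cares which non-zero code is used):
>
>     import com.amazonaws.AmazonClientException
>
>     try {
>         doRelationWork()
>     } catch (AmazonClientException e) {
>         // The SDK reports both client- and service-side failures as
>         // exceptions rather than error return values, so translate
>         // them into a non-zero exit status for juju.
>         System.err.println e.message
>         System.exit 2
>     }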
>
> Luis
>
> On Tue, Oct 25, 2011 at 10:39 AM, Luis Arias <kaaloo at gmail.com> wrote:
>> Thank you, Kapil, for the roadmap!  I will start working on this
>> today (very excited about getting this working).  I currently do my
>> AWS scripting in groovy through the AWS Java SDK; I hope that's ok.
>> I'm on irc as kaaloo.  I'll publish on launchpad so you can review.
>> Thanks for explaining the --policy=local deployment option.  I think
>> it will be interesting to see what support is needed in the core for
>> this, since I think it's unfortunate that the charm cannot set
>> constraints on its deployment policy.  I suppose the consequences
>> are not that bad, but it seems a waste of resources to instantiate a
>> machine just to set up the ELB instance.
>>
>> Luis
>>
>> On Mon, Oct 24, 2011 at 6:18 PM, Kapil Thangavelu
>> <kapil.thangavelu at canonical.com> wrote:
>>>
>>> So I'd try to see what you can achieve just within the charm
>>> interface, which is juju's public interface.
>>>
>>> If you think there are things missing and want to contribute to the
>>> core, that's great as well.  It's probably a good idea in such a
>>> case to come meet the developers on the freenode irc channel #juju
>>> so we can help you figure out the best way to do something.
>>>
>>> So back to an ELB charm
>>>
>>>  - The fact that the elb charm is external doesn't need to be noted
>>>   in the metadata or config.
>>>  - Placement is a deploy-time decision, not a property of the charm;
>>>   you can deploy it with --policy=local if you want to avoid an
>>>   extra machine allocation for the elb service.
>>>
>>> I'd start off with something that uses your favorite language's ec2 library and
>>>
>>>  on install hook
>>>    installs ec2 library and language deps
>>>
>>>  on ?-relation-joined  hook
>>>    create an elb for the related service if it doesn't exist
>>>    add the remote unit to the elb
>>>
>>>  on ?-relation-departed  hook
>>>    remove the remote unit from the elb
>>>
>>>  on ?-relation-changed  hook
>>>    double check the unit's elb entry against current port/host, might not
>>>    be needed in a first cut.
>>>
>>>  on stop hook
>>>    destroy the elb.
>>>
>>> For the config metadata: the ec2 credentials needed to set up an elb.
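>>>
>>> e.g. a minimal config.yaml might look like this (option names are
>>> just a suggestion):
>>>
>>>     options:
>>>       access-key:
>>>         type: string
>>>         description: AWS access key id used to manage the elb.
>>>       secret-key:
>>>         type: string
>>>         description: AWS secret access key used to manage the elb.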
>>>
>>> I'd start with just an elb relation for the 'http' interface.
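>>>
>>> The one genuinely fiddly step in the joined hook is mapping the
>>> remote unit's private-address to an ec2 instance id.  A sketch, in
>>> Groovy against the AWS Java SDK, though any ec2 library exposes the
>>> same DescribeInstances filter (credentials elided, e.g. read via
>>> config-get):
>>>
>>>     import com.amazonaws.auth.BasicAWSCredentials
>>>     import com.amazonaws.services.ec2.AmazonEC2Client
>>>     import com.amazonaws.services.ec2.model.*
>>>
>>>     // On ec2 a unit's private-address is its private dns name.
>>>     def privateAddress =
>>>         'relation-get private-address'.execute().text.trim()
>>>
>>>     def ec2 = new AmazonEC2Client(
>>>         new BasicAWSCredentials(accessKey, secretKey))
>>>     def result = ec2.describeInstances(new DescribeInstancesRequest()
>>>         .withFilters(new Filter('private-dns-name', [privateAddress])))
>>>     def instanceId = result.reservations[0].instances[0].instanceId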
>>>
>>> I should warn that we currently have a bug that prevents the stop
>>> hook from being called, so you'll need to be careful to use the aws
>>> console to shut down the elb and its instances: it is created
>>> outside of the environment and won't be destroyed as part of
>>> destroy-environment.
>>>
>>> cheers,
>>>
>>> Kapil
>>>
>>> Excerpts from Luis Arias's message of Mon Oct 24 08:38:16 -0400 2011:
>>>> Hi all,
>>>>
>>>> Thank you so much for the bug (feature) report and discussion!  I
>>>> spent some time this morning looking at the juju codebase to try to
>>>> understand how to implement an external service.  I would like to
>>>> be able to use this feature and would like to help.  I don't know
>>>> what I would be allowed to change, though.  For instance, would the
>>>> fact that an ELB charm relies on an external service be something
>>>> new in the charm's metadata, or would it be better to place that
>>>> information in the charm's config?  It seems to me it would be nice
>>>> to have it in the metadata, but maybe that data structure is not
>>>> meant to be changed too often.  I was thinking of updating the
>>>> place_unit function in placement.py to force the placement policy
>>>> to local in the case of an external service, but I don't quite see
>>>> how to get the charm metadata or config from the unit_state.  This
>>>> is probably somewhat over my head, but maybe I could help out with
>>>> some mentoring, and I can always test.  I'm certainly glad to have
>>>> learned a bit more about juju's magic already! :)
>>>>
>>>> Luis
>>>>
>>>> On Fri, Oct 21, 2011 at 7:40 PM, Clint Byrum <clint at ubuntu.com> wrote:
>>>> > Excerpts from Kapil Thangavelu's message of Thu Oct 20 12:12:45 -0700 2011:
>>>> >> Excerpts from Clint Byrum's message of Thu Oct 20 14:51:00 -0400 2011:
>>>> >> > Excerpts from Kapil Thangavelu's message of Thu Oct 20 10:47:23 -0700 2011:
>>>> >> > > Excerpts from Clint Byrum's message of Thu Oct 20 12:22:51 -0400 2011:
>>>> >> > > > The haproxy charm could probably use some refactoring (it's
>>>> >> > > > about the third charm I ever wrote, maybe the 5th charm ever
>>>> >> > > > published ;) so that these were safer operations (relating
>>>> >> > > > two things to haproxy at once results in a race condition)
>>>> >> > >
>>>> >> > > Just adding a few minors.
>>>> >> > >
>>>> >> > > hooks are executed serially on a unit, so there is no race on
>>>> >> > > haproxy with concurrent changes.
>>>> >> >
>>>> >> > Indeed; however, haproxy can only listen on port 80 for one set
>>>> >> > of backing servers without the help of HTTP Host header
>>>> >> > checking.  We have no way of knowing what Host header goes
>>>> >> > where with the current simple 'http' interface used.  There is
>>>> >> > a 'url' interface that needs to be implemented whereby a
>>>> >> > backing service can feed what host and sub-path it expects to
>>>> >> > be hosted under.
>>>> >> >
>>>> >> > So it's a race because the last service that gets related ends
>>>> >> > up being the only one that gets served by haproxy.
>>>> >>
>>>> >> ic. thanks for clarifying.
>>>> >>
>>>> >> >
>>>> >> > >
>>>> >> > > >
>>>> >> > > > But anyway, thats how I would do it.
>>>> >> > > >
>>>> >> > > > To address how to relate to ELB/RDS, we need something like a virtual
>>>> >> > > > charm which runs its hooks somewhere, but doesn't actually take up
>>>> >> > > > resources in the juju environment itself, and can return different values
>>>> >> > > > for private-address and public-address when related to. That would be
>>>> >> > > > useful for not only AWS things like ELB and RDS, but also for things
>>>> >> > > > like domain registrar DNS services and other business units within the
>>>> >> > > > same organization.
>>>> >> > > >
>>>> >> > > > I think this may be a duplicate, but I opened this bug
>>>> >> > > > because I know we've been talking about the concept for a
>>>> >> > > > while without actually doing anything about it:
>>>> >> > > >
>>>> >> > > > https://bugs.launchpad.net/juju/+bug/878948
>>>> >> > > >
>>>> >> > >
>>>> >> > > As I see it, ELB is effectively just another charm without a
>>>> >> > > service backing it; the hooks for it directly interact with
>>>> >> > > ELB via the aws api, and its service config takes aws
>>>> >> > > credentials.
>>>> >> > >
>>>> >> >
>>>> >> > Agreed.  The important details are that this charm needs to be
>>>> >> > deployed somewhere regardless of resources available (machine 0
>>>> >> > and its soon-to-be-realized contemporaries, maybe) and needs to
>>>> >> > allow overriding of private-address and public-address, so that
>>>> >> > it can implement identical interfaces to regular charms.  For
>>>> >> > instance, just like haproxy, it would provide an http interface
>>>> >> > so you could relate it to a monitoring service.
>>>> >> >
>>>> >>
>>>> >> ic, good points.  It's useful to note that the private-address
>>>> >> exposed to the related services is overridable with relation-set;
>>>> >> the value present in the relation is just a default.  The public
>>>> >> address as exposed by status would need manipulation; introducing
>>>> >> a unit-set equivalent to unit-get might be adequate, with support
>>>> >> for just a few keys going to zk and the rest going to local disk
>>>> >> (akin to the faceter usage).  The invocation of stop hooks here
>>>> >> is critical, as the elb creates resources outside of the purview
>>>> >> of the environment, which would persist post
>>>> >> environment-destruction.
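>>>> >>
>>>> >> On the relation-set point: the elb charm's joined hook could just
>>>> >> call something like
>>>> >>
>>>> >>     relation-set private-address=my-elb-123.us-east-1.elb.amazonaws.com
>>>> >>
>>>> >> (the dns name here is made up; the real one would be read back
>>>> >> from the elb api), and related units would then see the elb
>>>> >> endpoint instead of the unit's own address.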
>>>> >>
>>>> >
>>>> > Yeah, that does seem critical for anything that allocates resources
>>>> > external to the machine it is running on.
>>>> >