constraints - observations and questions (bringing to list)

Tim Penhey tim.penhey at canonical.com
Tue Feb 12 22:40:25 UTC 2013


Hi folks,

I started a conversation yesterday with William around constraints as
I'm new to the project, and grappling with many of the underlying
concepts.  I didn't want to come straight to the list as I was very
unsure of what I was talking about.  I think I'm getting a better
understanding (slowly) and it was also requested to bring this to the
list instead of having a discussion in a private silo.  Since this topic
is very timely, I think it may help others in their understanding of the
constraint process too.

This is by no means any form of requirements document, just me throwing
around ideas.

Much of the original email and reply has been cut, but it wouldn't have
added value to you folks anyway (IMO).

On 13/02/13 00:31, William Reade wrote:
> On Tue, 2013-02-12 at 11:29 +1300, Tim Penhey wrote:
>> Hi guys,
>>
>> I spent a lot of time yesterday reading through the recent constraint
>> discussions on the juju-dev list.  I have a few observations and
>> questions around it.

[snippity snip]

OK, my understanding around all this is evolving.

If I grab some info from the current documentation, it shows that the
python version had the following constraint options:
 * cpu
 * mem
 * arch
 * instance-type
 * ec2-zone
 * maas-name
 * orchestra-classes

After some more reading and poking about, it seems that cpu isn't CPU at
all, but instead refers to ECU (EC2 Compute Units).
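
(For reference, constraints in the python version were passed as a
space-separated string, something like the following; I'm writing this
from memory, so the exact flag spelling may be off.)

   $ juju deploy --constraints "cpu=2 mem=4G" mysql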

>> I'm still unclear around the expected inheritance of constraints as
>> mentioned in the constraints threads, where the constraints specified
>> with the bootstrap command were then applied to future deploy commands.
> 
> Sorry, I was almost certainly assuming too much context there.
> 
> Constraints can be specified both at an environment level, and at a
> service level; when you specify constraints at bootstrap time, you're
> specifying what you want the bootstrap node to run on and *also* the
> environment constraints. I'd be open to arguments that this is crack,
> and that bootstrap constraints should not be used as starting
> environment constraints, but I'm not sure.

It is my current understanding that right now we instantiate a machine
for each service, plus one for the agent.  If I'm at the point where I
want larger instances for some of my charms, isn't it a waste to have a
big machine for the agent?

I'm not sure that having the bootstrap/agent machine be big by default
makes a huge amount of sense.

Do we allow the user to specify environment constraints in the
environments.yaml file?  Instead of necessarily setting the constraints
for the deploy commands on the command line, does it make sense to have
the constraints specified in a file?  Whether that is the environments
file or another one I don't much care, but I can envision a situation
where you might want something like:

 * bootstrap machine is smallish
 * 3 load balanced medium machines with service X
 * 1 large machine for db Y

Having the ability to put the deployment constraints in a file gives
you a simpler way to manage the default constraints for particular
services.
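
Purely to illustrate what I mean (this is not an existing format, and
the key names and numbers are invented), I'm imagining something along
the lines of:

   constraints:
     environment: mem=1G              # smallish bootstrap/default machines
     services:
       X: instance-type=m1.medium     # the 3 load balanced units
       Y: mem=32G                     # the big db machine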

> Given that environment constraints exist, however they're manipulated,
> services without constraints will use environment constraints to make
> deployment decisions. If a service *does* specify constraints, those
> constraints should be used rather than the environment constraints; but
> any that are not specifically set on the service will be taken from the
> environment instead.
> 
> When we were originally figuring the feature out, this seemed like the
> behaviour most likely to accurately reflect user intention; for example:
> 
>   $ juju set-constraints arch=amd64
>   $ juju set-constraints -s s instance-type=m1.small

What is the "-s s" for?

> ...should not deploy an i386 m1.small; while:
> 
>   $ juju set-constraints instance-type=m1.medium
>   $ juju set-constraints -s s mem=16G
> 
> ...seems to naturally indicate that the env-level instance-type
> constraint is irrelevant when deploying units of s, which require at
> least 16G of memory.

Oh, is it for a service called "s"?

Let me see if I've got the above right.

The first line:

   $ juju set-constraints instance-type=m1.medium

specifies an environment-level constraint saying that deployments in
this environment should use m1.medium, and then

   $ juju set-constraints -s s mem=16G

says that for the service "s" in that environment, make sure we have at
least 16G of memory.  Is this right?
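
If so, then in code the merge is basically field-wise.  A minimal
sketch, assuming something like the following (the type and names here
are my guesses, not actual juju-core code):

  // Service-level constraints override environment-level ones field by
  // field; unset (nil) fields fall back to the environment value.
  type Value struct {
          Arch         *string
          CpuPower     *uint64
          Mem          *uint64 // megabytes
          InstanceType *string
  }

  // withFallbacks fills any unset fields of svc in from env.
  func withFallbacks(svc, env Value) Value {
          merged := env
          if svc.Arch != nil {
                  merged.Arch = svc.Arch
          }
          if svc.CpuPower != nil {
                  merged.CpuPower = svc.CpuPower
          }
          if svc.Mem != nil {
                  merged.Mem = svc.Mem
          }
          if svc.InstanceType != nil {
                  merged.InstanceType = svc.InstanceType
          }
          return merged
  }

Note that this naive version doesn't capture your second example, where
a service-level mem should also make an environment-level instance-type
irrelevant; that would need instance-type and mem/power to be treated as
one group rather than as independent fields.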


> Originally, the following was considered to be crazy:
> 
>   $ juju set-constraints mem=2G instance-type=m1.small
> 
> ...but I think it's actually meaningful if you consider mem/cores to be
> a cross-cloud fallback that should not override an explicit request for
> a recognised instance type.

Here I think we can get a little smarter.  Again I'm going to ramble a
bit, so tell me if I've got obvious misunderstandings.

We have back-end environment providers like AWS, OpenStack, LXC, etc.
Cloud providers like AWS and OpenStack (and soon Azure ...) have defined
instance types (I'm taking a stab in the dark that OpenStack actually
has defined instance types - or is it worse than this, in that each
OpenStack provider like HP Cloud, Rackspace, etc. sets its own
individual names?).  Can we, with some rationality, convert the named
instance types for the particular providers into some set of known,
meaningful constraints?
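
A minimal sketch of what I have in mind (the numbers are from memory and
purely illustrative; nothing like this exists yet as far as I know):

  // A normalised description of an instance type.
  type instanceSpec struct {
          MemGB float64
          Power int // rough compute units, ECU-like (see below)
  }

  // Each provider would publish (or, worst case, hard-code) its own table.
  var ec2InstanceTypes = map[string]instanceSpec{
          "m1.small":  {MemGB: 1.7, Power: 1},
          "m1.medium": {MemGB: 3.75, Power: 2},
          "m1.large":  {MemGB: 7.5, Power: 4},
  }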

Actually, this leads me to another tangent.  The more I read around
this, the more both CPU (in the traditional sense) and cores look
crackful.  I think that we should have some measure of computational
power (like ECU), with each provider supplying a calculated (albeit
rough) value for each of its instance types, which juju then uses when
making deployment decisions.

> Are these looking roughly sane to you?

Generally...

[snippity]

> I *think* that allowing optional specification of nebulous "cpu power"
> is not such a bad idea -- ec2, hp, and gce all expose roughly-equivalent
> concepts, give or take some conversion fudge factor.
> 
> Possibly I should call it "vcores", but I'm not sure that actually wins
> in terms of explanatory power -- 8 effective cores on bare metal is
> generally not the same, in practice, as 8 effective cores on openstack.

This is exactly what I just mentioned above, and that we have both come
to it independently makes it more likely to be a useful measure,
especially if there is a public, easily found translation mechanism that
we use to say:

  power 1 ~= X Hz, 1 core
  power 5 ~= Y Hz, 4 core
  power 10 ~= Z Hz, 8 core
  etc.

As much fun as it may be to have power defined by "small, medium, large,
OMG huge", I think a numeric value is better, and more understandable.

Let me come back to the instance-type concept.

Instance types make sense only for deployment into a particular
environment, but AFAICS the constraints only make sense at deploy time.
When I'm deploying, I know what type of provider I am deploying to, and
what's more, I may well have particular instance types in mind for those
services on that provider.  To me it makes sense to allow the user to be
explicit in their instance-type requests.

Which brings us back to the above crackful constraint request:

  $ juju set-constraints mem=2G instance-type=m1.small

If we are able at the provider level to translate the "m1.small"
instance request to { mem=1.7, power=1 }, then we should be able to tell
the user that the constraint fails, as 1.7 < 2.
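
Continuing the sketch from earlier (instanceSpec and ec2InstanceTypes;
the function itself is invented, and needs "fmt"):

  // checkInstanceTypeConstraints rejects constraints that the requested
  // instance type cannot possibly satisfy.
  func checkInstanceTypeConstraints(instanceType string, wantMemGB float64) error {
          spec, ok := ec2InstanceTypes[instanceType]
          if !ok {
                  return fmt.Errorf("unknown instance type %q", instanceType)
          }
          if spec.MemGB < wantMemGB {
                  return fmt.Errorf("instance type %q only has %.1fG of memory; %.1fG requested",
                          instanceType, spec.MemGB, wantMemGB)
          }
          return nil
  }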

I'd also expect the provider to be able to tell me if I ask for an
instance type that doesn't exist.  Is this possible?  Are we hard-coding
values, or do we have a way somewhere to get the underlying provider to
tell us what its instance types are?
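
The sort of interface I'd hope each provider could implement is
something like this (the name is invented; instanceSpec is from the
earlier sketch):

  type InstanceTypeSource interface {
          // InstanceTypes reports the instance types the provider knows
          // about, mapped to their normalised constraint values.
          InstanceTypes() (map[string]instanceSpec, error)
  }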

Obviously a lot of these constraints make no sense at all for a local
deployment, but that is another topic altogether.

Cheers,
Tim




