Juju 2.3 beta2 is here!
Dmitrii Shcherbakov
dmitrii.shcherbakov at canonical.com
Fri Nov 10 10:40:11 UTC 2017
This might not be an ideal example after all. However, I encountered
something else in this case: the final model machine IDs are not what I
would expect from looking at the bundle. This is Juju 2.2.6, MAAS 2.2.2. I
am not sure any guarantees can be made about that, due to the
parallelization of the provisioner and the AZ-spread code, but I expected
that during an initial deployment on a *clean model* (no prior machine
number allocation) the bundle and model machine numbers would match;
apparently they don't.
If model machine numbers matched the bundle ones after a clean-model
deployment, my example would be about slicing a huge bundle into multiple
smaller ones and deploying them in steps (rabbitmq is a bad example of
something to deploy that way).
So:
bundle.yaml machine definitions:
http://paste.ubuntu.com/25930362/
...
'0':
  series: xenial
  constraints: tags=gateway
  zone: z01
'1':
  series: xenial
  constraints: tags=gateway
  zone: z02
'2':
  series: xenial
  constraints: tags=gateway
  zone: z03
'3': &compute-z1
  series: xenial
  constraints: tags=compute
  zone: z01
'4': &compute-z2
  series: xenial
  constraints: tags=compute
  zone: z02
'5': &compute-z3
  series: xenial
  constraints: tags=compute
  zone: z03
'6': *compute-z1
'7': *compute-z2
'8': *compute-z3
'9': *compute-z1
'10': *compute-z2
'11': *compute-z3
'12': *compute-z1
...
They were eventually enumerated as follows:
http://paste.ubuntu.com/25930364/
0         started  198.51.105.129  7nc46s                xenial  z01      Deployed
0/lxd/0   started  198.51.105.152  juju-ada6ad-0-lxd-0   xenial  z01      Container started
...
1         started  198.51.105.130  hxgx7c                xenial  z02      Deployed
1/lxd/0   started  198.51.105.150  juju-ada6ad-1-lxd-0   xenial  z02      Container started
...
2         started  198.51.105.131  wdskcy                xenial  z03      Deployed
...
4         started  198.51.105.134  f3e4gm                xenial  z02      Deployed
5         started  10.30.21.22     k8spqw                xenial  default  Deployed
6         started  10.30.21.47     feethe                xenial  default  Deployed
7         started  10.30.21.77     gesdy6                xenial  default  Deployed
8         started  10.30.21.45     46rp4s                xenial  default  Deployed
9         started  10.30.21.55     ce6e38                xenial  default  Deployed
10        started  10.30.21.46     yqew8f                xenial  default  Deployed
11        started  10.30.21.50     bdfn4y                xenial  default  Deployed
12        started  198.51.105.132  prbbg4                xenial  z03      Deployed
12/lxd/0  started  198.51.105.142  juju-ada6ad-12-lxd-0  xenial  z03      Container started
...
13        started  10.30.21.88     xrmf76                xenial  default  Deployed
14        started  10.30.21.81     sbqwex                xenial  default  Deployed
15        started  10.30.21.17     hkgcf6                xenial  default  Deployed
16        started  10.30.21.62     sdtfmm                xenial  default  Deployed
17        started  10.30.21.48     ghnxww                xenial  default  Deployed
18        started  10.30.21.30     xe8da4                xenial  default  Deployed
19        started  10.30.21.65     nhd773                xenial  default  Deployed
20        started  198.51.105.135  433afe                xenial  z01      Deployed
...
Note how machines in the "default" AZ in MAAS took precedence in integer ID
allocation after machines 4 and 12 and, more importantly, how machine 2 in
the bundle became machine 12 in the model.
So, instead of 0, 1, 2 from the bundle we get 0, 1, 12 and have to use --to
lxd:0,lxd:1,lxd:12 for placement.
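That kind of renumbering can at least be applied mechanically instead of by
hand. A minimal sketch (not a Juju feature; the id_map below is just the
mapping observed above):

```python
# Sketch: rewrite "lxd:<bundle-id>" placement directives using a known
# bundle-machine-id -> model-machine-id mapping. The mapping here is the
# one observed above (bundle machine '2' became model machine '12').
def remap_placements(placements, id_map):
    remapped = []
    for p in placements:
        if ":" in p:  # e.g. "lxd:2" -> container type + machine id
            ctype, mid = p.split(":", 1)
            remapped.append("%s:%s" % (ctype, id_map.get(mid, mid)))
        else:         # bare machine id, e.g. "2"
            remapped.append(id_map.get(p, p))
    return remapped

id_map = {"0": "0", "1": "1", "2": "12"}
print(remap_placements(["lxd:0", "lxd:1", "lxd:2"], id_map))
# -> ['lxd:0', 'lxd:1', 'lxd:12']
```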
I understand that there is AZ-spread code in Juju (https://git.io/vF2nu,
https://git.io/vF2nD), so I can see why the original bundle numbering does
not match.
The end result is that the big-bundle separation use case requires an
additional step:
1. deploy a core.yaml bundle part;
2. get the resulting machine numbers and apply them to bundle-add-ons.yaml
(no juju export-bundle?);
3. deploy bundle-add-ons.yaml.
The manual processing in step 2 is something to consider, because otherwise
we have no reliable means of addressing machines in dependent bundles.
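For step 2, the model's machine numbering can at least be recovered
programmatically from status output rather than by eyeballing it. A sketch,
assuming the Juju 2.x JSON status layout where each machine entry carries a
hardware string containing an availability-zone= item:

```python
import json

# Sketch: pull machine-id -> availability-zone groupings out of
# "juju status --format=json" output, so the model numbering can be
# matched back to the zones the bundle intended. The JSON shape
# (machines.<id>.hardware with "availability-zone=...") follows
# Juju 2.x status output; treat it as an assumption.
def machines_by_zone(status_json):
    zones = {}
    for mid, m in json.loads(status_json)["machines"].items():
        for item in m.get("hardware", "").split():
            if item.startswith("availability-zone="):
                zones.setdefault(item.split("=", 1)[1], []).append(mid)
    return zones

# Stand-in for real status output, mirroring the deployment above.
sample = json.dumps({"machines": {
    "0": {"hardware": "arch=amd64 availability-zone=z01"},
    "1": {"hardware": "arch=amd64 availability-zone=z02"},
    "12": {"hardware": "arch=amd64 availability-zone=z03"},
}})
print(machines_by_zone(sample))
# -> {'z01': ['0'], 'z02': ['1'], 'z03': ['12']}
```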
Would reusing existing unit names as placement targets be possible? (The
feature under discussion is about existing machine definitions, so it's
worth clarifying.)
bundle-add-ons.yaml:
...
to:
  - old-app/0
  - old-app/1
  - old-app/2
This is different from the <machine-id>=<unit-id> mapping, which, as you
mentioned, isn't being considered. It is rather a reference to the symbolic
machine name idea.
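Until something like unit-name placement exists in bundles, it can be
approximated by resolving unit names to machines from status output and
generating the equivalent --to argument. A sketch, assuming the Juju 2.x
JSON status layout (applications.<app>.units.<unit>.machine); the helper
name is mine:

```python
import json

# Sketch: emulate "to: [old-app/0, ...]" today by resolving unit names
# to their model machines via "juju status --format=json" output and
# emitting the corresponding --to argument for juju deploy.
def placement_for_units(status_json, units, container="lxd"):
    apps = json.loads(status_json)["applications"]
    machines = []
    for unit in units:
        app = unit.split("/")[0]
        machines.append(apps[app]["units"][unit]["machine"])
    return "--to " + ",".join("%s:%s" % (container, m) for m in machines)

# Stand-in for real status output, mirroring the deployment above.
sample = json.dumps({"applications": {"old-app": {"units": {
    "old-app/0": {"machine": "0"},
    "old-app/1": {"machine": "1"},
    "old-app/2": {"machine": "12"},
}}}})
print(placement_for_units(sample, ["old-app/0", "old-app/1", "old-app/2"]))
# -> --to lxd:0,lxd:1,lxd:12
```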
Best Regards,
Dmitrii Shcherbakov
Field Software Engineer
IRC (freenode): Dmitrii-Sh
On Fri, Nov 10, 2017 at 2:20 AM, Tim Penhey <tim.penhey at canonical.com>
wrote:
> On 10/11/17 12:12, Dmitrii Shcherbakov wrote:
> > It's situations like the following that I am trying to avoid:
> >
> >   rabbitmq-server:
> >     charm: cs:xenial/rabbitmq-server
> >     bindings:
> >       "": *oam-space
> >       amqp: *internal-space
> >       cluster: *internal-space
> >     options:
> >       source: *openstack-origin
> >       min-cluster-size: 3
> >       cluster-partition-handling: pause_minority
> >     num_units: 3
> >     to:
> >       - lxd:0
> >       - lxd:1
> >       - lxd:2
> >
> > serialized by hand to: juju deploy rabbitmq-server --config
> > ../rabbitmq.yaml -n 3 --to lxd:0,lxd:1,lxd:12 --bind "space-oam
> > amqp=space-oam cluster=space-oam"
> >
> > cat ../rabbitmq.yaml
> > rabbitmq-server:
> >   source: cloud:xenial-ocata
> >   min-cluster-size: 3
> >   cluster-partition-handling: pause_minority
> >
> > Which includes brain-parsing the definition, serializing it to
> > <appname>.yaml + -n <n> --to <placement> --bind <spaces>
>
> I don't understand what it is you are trying to avoid here?
>
> What is it you are trying to do?
>
> Tim
>