juju devel 2.0-beta1 is available for testing
Curtis Hovey-Canonical
curtis@canonical.com
Sat Feb 20 16:06:43 UTC 2016
# juju-core 2.0-beta1
A new development release of Juju, juju-core 2.0-beta1, is now available.
This release replaces version 2.0-alpha2.
## Getting Juju
juju-core 2.0-beta1 is available for Xenial and backported to earlier
series in the following PPA:
https://launchpad.net/~juju/+archive/devel
The juju-core2 package is co-installable with the juju-core 1.25.3
package. You can choose which juju is the system juju by running
sudo update-alternatives --config juju
Windows, CentOS, and OS X users will find installers at:
https://launchpad.net/juju-core/+milestone/2.0-beta1
Development releases use the "devel" simple-streams. You must configure
the `agent-stream` option in your environments.yaml to use the matching
juju agents.
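For example, a minimal sketch of the relevant environments.yaml entry
(alongside your other settings):
    agent-stream: devel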
Upgrading from other releases is not supported.
# Notable Changes
* Important Limitation
* Terminology
* Command Name Changes
* New Juju Home Directory
* Multi-Model Support Active by Default
* New Bootstrap and Cloud Management Experience
* Native Support for Charm Bundles
* Multi Series Charms
* Improved Local Charm Deployment
* LXC Local Provider No Longer Available
* LXD Provider
* Microsoft Azure Resource Manager Provider
* Bootstrap Constraints, Series
* Juju Logging Improvements
* Unit Agent Improvements
* API Login with Macaroons
* MAAS Spaces
* Resources
* Juju Status Improvements
* Relation get-config and set-config compatibility
* Support for new EC2 M4 Instance Types
* Support for win10 and win2016
* Known Issues
### Important Limitation
GUI, Landscape and Deployer won't initially work with this beta1
release. These products are being updated to support the new 2.0 APIs
and compatible versions will be released shortly after this beta1 ships.
Until then, please continue to use the alpha 1 release found in
ppa:juju/experimental.
### Terminology
In Juju 2.0, environments will now be referred to as "models". Commands
which referenced "environments" will now reference "models". Example:
juju get-environment
will become
juju get-model
The "state-server" from Juju 1.x becomes a "controller" in 2.0.
### Command Name Changes
After experimenting with nested command structures for a while, the
decision was made to return to a flat command namespace: nested
commands seemed like a good idea, but they always felt clumsy and
awkward in use.
So, we have the following changes:
1.25 command                          2.0 command
juju environment destroy              juju destroy-model * ***
juju environment get                  juju get-model ** ***
juju environment get-constraints      juju get-model-constraints **
juju environment retry-provisioning   juju retry-provisioning
juju environment set                  juju set-model ** ***
juju environment set-constraints      juju set-model-constraints **
juju environment share                juju share-model ***
juju environment unset                juju unset-model ** ***
juju environment unshare              juju unshare-model ***
juju environment users                juju list-shares
juju user add                         juju add-user
juju user change-password             juju change-user-password
juju user credentials                 juju get-user-credentials
juju user disable                     juju disable-user
juju user enable                      juju enable-user
juju user info                        juju show-user
juju user list                        juju list-users
juju machine add                      juju add-machine **
juju machine remove                   juju remove-machine **
<new in 2.0>                          juju list-machines
<new in 2.0>                          juju show-machines
juju authorised-keys add              juju add-ssh-key
juju authorised-keys list             juju list-ssh-keys
juju authorised-keys delete           juju remove-ssh-key
juju authorised-keys import           juju import-ssh-key
juju get                              juju get-config
juju set                              juju set-config
juju get-constraints                  juju get-model-constraints
juju set-constraints                  juju set-model-constraints
juju get-constraints <service>        juju get-constraints
juju set-constraints <service>        juju set-constraints
juju backups create                   juju create-backup ***
juju backups restore                  juju restore-backup ***
juju action do                        juju run-action ***
juju action defined                   juju list-actions ***
juju action fetch                     juju show-action-output ***
juju action status                    juju show-action-status ***
juju storage list                     juju list-storage ***
juju storage show                     juju show-storage ***
juju storage add                      juju add-storage ***
juju space create                     juju add-space ***
juju space list                       juju list-spaces ***
juju subnet add                       juju add-subnet ***
juju ensure-availability              juju enable-ha ***
* the behaviour of destroy-environment/destroy-model has changed;
  see the section on controllers below.
** these commands existed at the top level before and are again the
   recommended approach.
*** alias, but primary name going forward.
And for the extra commands previously under the "jes" feature flag but
now available out of the box:
juju system create-environment        juju create-model
juju system destroy                   juju destroy-controller
juju system environments              juju list-models
juju system kill                      juju kill-controller
juju system list                      juju list-controllers
juju system list-blocks               juju list-all-blocks
juju system login                     juju login
juju system remove-blocks             juju remove-all-blocks
juju system use-environment           juju use-model
Fundamentally, listing things should start with 'list-', and looking at
an individual thing should start with 'show-'. 'remove' is generally
used for things that can be easily added back, whereas 'destroy' is used
when it is not so easy to add back.
### New Juju Home Directory
The directory where Juju stores its working data has changed. We now
follow the XDG directory specification. By default, the Juju data
(formerly home) directory is located at ~/.local/share/juju. This may be
overridden by setting the JUJU_DATA environment variable.
Juju 2.0's data is not compatible with Juju 1.x. Do not set JUJU_DATA
to the old JUJU_HOME (~/.juju).
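For example, to point a Juju 2.0 client at a non-default data
directory (the path here is just an illustration):
    export JUJU_DATA=$HOME/juju2-data
    juju list-controllers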
### Multi-Model Support Active by Default
The multiple model support that was previously behind the "jes"
developer feature flag is now enabled by default. Along with the
enabling:
A new concept has been introduced, that of a "controller".
A Juju Controller, also sometimes called the "controller model",
describes the model that runs and manages the Juju API servers and the
underlying database.
The controller model is what is created when the bootstrap command is
used. This controller model is a normal Juju model that just happens to
have machines that manage Juju. A single Juju controller can manage many
Juju models, meaning fewer resources are needed for Juju's management
infrastructure, and new models can be created almost instantly.
In order to keep a clean separation of concerns, it is now considered
best practice to create additional models for deploying workloads,
leaving the controller model for Juju's own infrastructure. Services can
still be deployed to the controller model, but it is generally expected
that these be only for management and monitoring purposes (e.g Landscape
and Nagios).
When creating a Juju controller that is going to be used by more than
one person, it is good practice to create users for each individual that
will be accessing the models.
The main new commands of note are:
juju list-models
juju create-model
juju share-model
juju list-shares
juju use-model
Also see:
juju help controllers
juju help users
Also, since controllers are now special in that they can host multiple
other models, destroying controllers now needs to be done with more
care.
juju destroy-model
no longer works on controllers, only on hosted models (those models
that the controller looks after).
juju destroy-controller
is the way to do an orderly takedown.
juju kill-controller
will attempt to do an orderly takedown, but if the API server is
unreachable it will force a takedown through the provider. However,
forcibly taking down a controller could leave other models running with
no way to talk to an API server.
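A minimal sketch of the usual takedown sequence, with illustrative
model and controller names:
    juju destroy-model mymodel
    juju destroy-controller mycontroller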
### New Bootstrap and Cloud Management Experience
This release introduces a new way of bootstrapping and managing clouds
and credentials that involves less editing of files and makes Juju work
out of the box with major public clouds like AWS, Azure, Joyent,
Rackspace, Google, and CloudSigma.
Firstly, there is no more environments.yaml file to edit. Clouds and
credentials are defined in separate files, and the only cloud
information that requires editing is for private MAAS and Openstack
deployments.
#### Public Clouds
So, we've installed Juju, let's see what clouds are available:
juju list-clouds
CLOUD         TYPE         REGIONS
aws           ec2          us-east-1, us-west-1, us-west-2, ...
aws-china     ec2          cn-north-1
aws-gov       ec2          us-gov-west-1
azure         azure        japanwest, centralindia, eastus2, ...
azure-china   azure        chinaeast, chinanorth
cloudsigma    cloudsigma   mia, sjc, wdc, zrh, hnl
google        gce          us-east1, us-central1, europe-west1, ...
joyent        joyent       us-east-1, us-east-2, us-east-3, ...
rackspace     rackspace    LON, SYD, HKG, DFW, ORD, IAD
To see more detail on a particular cloud, use show-cloud:
juju show-cloud azure
We want to bootstrap a controller on AWS. In this case, as is possible
with previous versions of Juju, our credentials are already set up as
environment variables so we can just get straight to it:
juju bootstrap mycontroller aws
The default region is shown first in the list-clouds output so we'll get
a controller called "mycontroller" on us-east-1. But we can also specify
a different region.
juju bootstrap mycontroller aws/us-west-2
(Note that in the "list-clouds" output above, the order of regions is
incorrect. There is a bug in the command that will be resolved in
2.0-beta2.)
#### Managing Controllers and Models
We can see which controllers we can talk to:
juju list-controllers
CONTROLLER     MODEL         USER         SERVER
mycontroller*  mycontroller  admin@local  10.0.1.12:17070
test           test          admin@local  10.0.3.13:17070
The default controller is indicated with an *.
Note: currently the controller model (see multi-model above) is named
after the controller. You would then create a new hosted model in which
workloads are run. The next Juju beta will create the controller model
as "admin" and an initial hosted model as part of bootstrap.
Note: locally bootstrapped controllers will be prefixed by the "local."
label in the next Juju beta. So "mycontroller" above becomes
"local.mycontroller".
It's possible to use juju switch to select the current controller and/or
model.
juju switch mymodel              (switch to mymodel on current controller)
juju switch mycontroller         (switch current controller to mycontroller)
juju switch mycontroller:mymodel (switch to mymodel on mycontroller)
To see the name of the current controller (and model), run switch with
no arguments:
juju switch
To see the full details of the current controller, run show-controller
with no arguments:
juju show-controller
Note: The model commands used for multi-model support, as outlined in
the previous section, work across multiple controllers also.
juju create-model mynewmodel -c mycontroller
The above command creates a new model on the nominated controller and
switches to that controller and model as the default for subsequent
commands.
#### LXD and Manual Providers
Bootstrapping models using the LXD or manual providers also Just Works.
juju bootstrap mycontroller lxd
juju bootstrap mycontroller manual/<hostname>
For now, LXD only supports localhost as the LXD host, which is the
default. The manual provider takes the hostname (or IP address)
specified on the command line as shown above, rather than a config
setting.
#### Private Clouds
For MAAS and Openstack clouds, it's necessary to edit a clouds.yaml
file. This can be done from anywhere and then the juju add-cloud command
is used to add these to Juju.
    clouds:
      homestack:
        type: openstack
        auth-types: [userpass, access-key]
        regions:
          london:
            endpoint: http://london/1.0
      homemaas:
        type: maas
        auth-types: [oauth1]
        endpoint: http://homemaas/MAAS
Assuming you save the above to personal-clouds.yaml, you can add the
openstack cloud to Juju:
juju add-cloud homestack personal-clouds.yaml
Then when you run 'juju list-clouds':
CLOUD            TYPE         REGIONS
aws              ec2          us-west-2, eu-west-1, ap-southeast-2, ...
aws-china        ec2          cn-north-1
aws-gov          ec2          us-gov-west-1
azure            azure        eastus, southeastasia, centralindia, ...
azure-china      azure        chinaeast, chinanorth
cloudsigma       cloudsigma   hnl, mia, sjc, wdc, zrh
google           gce          us-east1, us-central1, europe-west1, ...
joyent           joyent       eu-ams-1, us-sw-1, us-east-1, us-east-2, ...
rackspace        rackspace    ORD, IAD, LON, SYD, HKG, DFW
local:homestack  openstack    london
And now you can bootstrap that openstack cloud:
juju bootstrap mycontroller homestack
#### Credential Management
Credentials are managed in a separate credentials.yaml file located in
~/.local/share/juju. For now, we don't support auto discovery of
credentials so this file needs to be maintained by hand. Credentials are
per cloud. If there's only one credential, that's what's used. Or you
can define the default credential in the file. Otherwise you can specify
the credential name when bootstrapping:
juju bootstrap mycontroller aws --credential mysecrets
The credentials file supports the necessary credential types for each
cloud. This is where it's also possible to define the default region to
use for a cloud if none is specified when bootstrapping. An example
credentials.yaml file:
    credentials:
      aws:
        default-credential: peter
        default-region: us-west-2
        peter:
          auth-type: access-key
          access-key: key
          secret-key: secret
        paul:
          auth-type: access-key
          access-key: key
          secret-key: secret
      homemaas:
        peter:
          auth-type: oauth1
          maas-oauth: maas-oauth-key
      homestack:
        default-region: region-a
        peter:
          auth-type: userpass
          password: secret
          tenant-name: tenant
          username: user
      google:
        peter:
          auth-type: jsonfile
          file: path-to-json-file
      azure:
        peter:
          auth-type: userpass
          application-id: blah
          subscription-id: blah
          application-password: blah
      joyent:
        peter:
          auth-type: userpass
          sdc-user: blah
          sdc-key-id: blah
          private-key: blah (or private-key-path)
          manta-user: blah
          manta-key-id: blah
          algorithm: blah
#### Model Configuration at Bootstrap
When bootstrapping, it's sometimes also necessary to pass in
configuration values. This was previously done via the
environments.yaml file. For this release, you can specify config values
as bootstrap arguments or via a file:
    juju bootstrap mycontroller aws/us-west-2 \
      --config key1=value1 --config key2=value2 --config /path/to/file
Values given as name=value pairs take precedence over the content of
any file specified. Example:
juju bootstrap mycontroller aws --config image-stream=daily
#### Sharing Models
You can now easily give other people access to your models, even if that
user did not previously have the ability to login to the controller
hosting the model.
juju add-user bob --share mymodel
User "bob" added
Model "mymodel" is now shared
Please send this command to bob:
juju register MDoTA2JvYjAREw8xMC4wLjEuMTI6MTcwNzAEIMZ7bVxwiApr
Now all bob has to do is run the register command. He will be prompted
to enter a new password and a name for the controller, after which he
is logged into the controller with access to the shared model.
juju register MDoTA2JvYjAREw8xMC4wLjEuMTI6MTcwNzAEIMZ7bVxwiApr
Please set a name for this controller: controller
Enter password:
Confirm password:
Welcome, bob. You are now logged into "controller".
The above process is cryptographically secure with end-to-end
encryption and message signing/authentication, and is immune to
man-in-the-middle attacks.
#### Known Issues
Two Joyent regions are misconfigured in this release: us-east-1 and
us-east-2. If you want to use the us-east region, select us-east-3.
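For example, using the cloud/region syntax shown earlier:
    juju bootstrap mycontroller joyent/us-east-3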
#### Coming in Subsequent Betas
juju default-region
juju default-credential
juju autoload-credentials
juju update-clouds
juju add-credential
juju login
juju logout
controller model will be called "admin"
better current status when listing controllers/models
logged in, default, shared etc
### Native Support for Charm Bundles
The Juju 'deploy' command can now deploy a bundle. The Juju Quickstart
or Deployer plugins are not needed to deploy a bundle of charms. You can
deploy the mediawiki-single bundle like so:
juju deploy cs:bundle/mediawiki-single
Local bundles can be deployed by passing the path to the bundle. For
example:
juju deploy ./openstack/bundle.yaml
Local bundles can also be deployed from a local repository. Bundles
reside in the "bundle" subdirectory. For example, your local juju
repository might look like this:
    juju-repo/
    |-- trusty/
    `-- bundle/
        `-- openstack/
            `-- bundle.yaml
and you can deploy the bundle like so:
export JUJU_REPOSITORY="$HOME/juju-repo"
juju deploy local:bundle/openstack
Bundles, when deployed from the command line like this, now support
storage constraints. To specify how to allocate storage for a service,
you can add a 'storage' key underneath a service, and under 'storage'
add a key for each store you want to allocate, along with the
constraints. e.g. say you're deploying ceph-osd, and you want each unit
to have a 50GiB disk:
    ceph-osd:
      ...
      storage:
        osd-devices: 50G
Because a bundle should work across cloud providers, the constraints in
the bundle should not specify a pool/storage provider, and just use the
default for the cloud. To customize how storage is allocated, you can use
the '--storage' option with a new bundle-specific format: --storage
service:store=constraints. e.g. say you're deploying OpenStack, and
you want each unit of ceph-osd to have 3x50GiB disks:
juju deploy ./openstack/bundle.yaml --storage ceph-osd:osd-devices=3,50G
### Multi Series Charms
Charms now have the capability to declare that they support more than
one series. Previously a separate copy of the charm was required for
each series. An important constraint here is that for a given charm,
all of the listed series must be for the same distro/OS; it is not
allowed to offer a single charm for Ubuntu and CentOS for example.
Supported series are added to charm metadata as follows:
    name: mycharm
    summary: "Great software"
    description: It works
    maintainer: Some One <some.one@example.com>
    categories:
      - databases
    series:
      - precise
      - trusty
      - wily
    provides:
      db:
        interface: pgsql
    requires:
      syslog:
        interface: syslog
The default series is the first in the list:
juju deploy mycharm
will deploy a mycharm service running on precise.
A different, non-default series may be specified:
juju deploy mycharm --series trusty
It is possible to force the charm to deploy using an unsupported series
(so long as the underlying OS is compatible):
juju deploy mycharm --series xenial --force
or
juju add-machine --series xenial
Machine 1 added.
juju deploy mycharm --to 1 --force
'--force' is required in the above deploy command because the target
machine is running xenial which is not supported by the charm.
The 'force' option may also be required when upgrading charms. Consider
the case where a service is initially deployed with a charm supporting
precise and trusty. A new version of the charm is published which only
supports trusty and xenial. For services deployed on precise, upgrading
to the newer charm revision is allowed, but only using force (note the
use of '--force-series', since upgrade-charm also supports
'--force-units'):
juju upgrade-charm mycharm --force-series
### Improved Local Charm Deployment
Local charms can be deployed directly from their source directory
without having to set up a pre-determined local repository file
structure. This feature makes it more convenient to hack on a charm and
just deploy it, and it is also necessary for developing local charms
that support multiple series.
Assuming a local charm exists in directory /home/user/charms/mycharm:
juju deploy ~/charms/mycharm
will deploy the charm using the default series.
juju deploy ~/charms/mycharm --series trusty
will deploy the charm using trusty.
Note that it is no longer necessary to define a JUJU_REPOSITORY nor
locate the charms in a directory named after a series. Any directory
structure can be used, including simply pulling the charm source from a
VCS, hacking on the code, and deploying directly from the local repo.
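A minimal sketch of that workflow (the repository URL is hypothetical):
    git clone https://example.com/mycharm.git ~/charms/mycharm
    # hack on the charm source ...
    juju deploy ~/charms/mycharm --series trusty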
### LXC Local Provider No Longer Available
With the introduction of the LXD provider (below), the LXC version of
the Local Provider is no longer supported.
### LXD Provider
The new LXD provider is the best way to use Juju locally.
The controller is no longer your host machine; it is now an LXC
container. This keeps your host machine clean and allows you to utilize
your local model more like a traditional Juju model. Because
of this, you can test things like Juju high-availability without needing
to utilize a cloud provider.
#### Requirements
- Run Wily (LXD is installed by default).
- Import the LXD cloud-images that you intend to deploy and register
  an alias:
      lxd-images import ubuntu trusty --alias ubuntu-trusty
      lxd-images import ubuntu wily --alias ubuntu-wily
      lxd-images import ubuntu xenial --alias ubuntu-xenial
  or register an alias for your existing cloud-images:
      lxc image alias create ubuntu-trusty <fingerprint>
      lxc image alias create ubuntu-wily <fingerprint>
      lxc image alias create ubuntu-xenial <fingerprint>
- For this release, you must specify the "--upload-tools" flag when
  bootstrapping a controller that will use trusty cloud-images.
  This is because most of Juju's charms are for Trusty, and the
  agent-tools for Trusty don't yet have LXD support compiled in.
      juju bootstrap --upload-tools
  "--upload-tools" is not required for deploying a wily or xenial
  controller and services.
Logs are located at '/var/log/lxd/juju-{uuid}-machine-#/'.
#### Setting up LXD on older Environments
Note: Juju does not set up remotes for you. Run the following
commands on an LXD remote's host to install LXD:
add-apt-repository ppa:ubuntu-lxc/lxd-stable
apt-get update
apt-get install lxd
After installing a locally running LXD (the default for this
provider), and before using it either through Juju or the LXD CLI
("lxc"), you must either log out and back in or run this command:
newgrp lxd
See: https://linuxcontainers.org/lxd/getting-started-cli/
### Microsoft Azure Resource Manager Provider
Juju now supports Microsoft Azure's new Resource Manager API. The Azure
provider has effectively been rewritten, but old models are still
supported. To use the new provider support, you must bootstrap a new
model with new configuration. There is no automated method for
migrating.
The new provider supports everything the old provider did, but now also
supports several additional features, as well as support for unit
placement (i.e. you can specify existing machines to which units are
deployed). As before, units of a service will be allocated to machines
in a service-specific Availability Set if no machine is specified.
In the initial release of this provider, each machine will be allocated
a public IP address. In a future release, we will only allocate public
IP addresses to machines that have exposed services, to enable
allocating more machines than there are public IP addresses.
Each model is represented as a "resource group" in Azure, with the VMs,
subnets, disks, etc. being contained within that resource group. This
enables guarantees about ensuring resources are not leaked when
destroying a model, which means we are now able to support persistent
volumes in the Azure storage provider.
Finally, as well as Ubuntu support, the new Azure provider supports
Microsoft Windows Server 2012 (series "win2012"), Windows Server 2012 R2
(series "win2012r2"), and CentOS 7 (series "centos7") natively.
To use the new Azure support, you need the following configuration in
environments.yaml:
type: azure
application-id: <Azure-AD-application-ID>
application-password: <Azure-AD-application-password>
subscription-id: <Azure-account-subscription-ID>
tenant-id: <Azure-AD-tenant-ID>
location: westus # or any other Azure location
To obtain these values, it is recommended that you use the Azure CLI:
https://azure.microsoft.com/en-us/documentation/articles/xplat-cli/.
You will need to create an "application" in Azure Active Directory for
Juju to use, per the following documentation:
https://azure.microsoft.com/en-us/documentation/articles/resource-group-authenticate-service-principal/#authenticate-service-principal-with-password---azure-cli
(NOTE: you should assign the role "Owner", not "Reader", to the
application.)
Take a note of the "Application Id" output when issuing "azure ad app
create". This is the value that you must use in the 'application-id'
configuration for Juju. The password you specify is the value to use in
'application-password'.
To obtain your subscription ID, you can use "azure account list" to list
your account subscriptions and their IDs. To obtain your tenant ID, you
should use "azure account show", passing in the ID of the account
subscription you will use.
You may need to register some resources using the azure CLI when
updating an existing Azure account:
azure provider register Microsoft.Compute
azure provider register Microsoft.Network
azure provider register Microsoft.Storage
### New Support for Rackspace
A new provider has been added that supports hosting a Juju model in
Rackspace Public Cloud. As Rackspace Cloud is based on OpenStack, the
Rackspace provider internally uses the OpenStack provider, and most of
the features and configuration options of the two providers are
identical.
The basic config options in your environments.yaml will look like this:
    rackspace:
      type: rackspace
      tenant-name: "<your tenant name>"
      region: <IAD, DFW, ORD, LON, HKG, or SYD>
      auth-url: https://identity.api.rackspacecloud.com/v2.0
      auth-mode: <userpass or keypair>
      username: <your username>
      password: <secret>
      # access-key: <secret>
      # secret-key: <secret>
The values in angle brackets need to be replaced with your rackspace
information.
'tenant-name' must contain the Rackspace account number. 'region' must
contain the Rackspace region (IAD, DFW, ORD, LON, HKG, or SYD). The
'auth-mode' parameter can be either 'userpass' or 'keypair', and
determines the authentication mode the provider will use. If you use
'userpass' mode, you must also provide the 'username' and 'password'
parameters. If you use 'keypair' mode, the 'access-key' and
'secret-key' parameters must be provided.
### Bootstrap Constraints, Series
While bootstrapping, you can now specify constraints for the bootstrap
machine independently of the service constraints:
    juju bootstrap --constraints <service-constraints> \
      --bootstrap-constraints <bootstrap-machine-constraints>
You can also specify the series of the bootstrap machine:
juju bootstrap --bootstrap-series trusty
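For example, a sketch combining the two flags with standard Juju
constraints (mem, cpu-cores):
    juju bootstrap mycontroller aws \
      --constraints mem=2G \
      --bootstrap-constraints "mem=4G cpu-cores=2" \
      --bootstrap-series trusty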
### Juju Logging Improvements
Logs from Juju's machine and unit agents are now streamed to the Juju
controllers over the Juju API in preference to using rsyslogd. This is
more robust and is a requirement now that multi-model support is enabled
by default. Additionally, the centralised logs are now stored in Juju's
database instead of the all-machines.log file. This improves log query
flexibility and performance as well as opening up the possibility of
structured log output in future Juju releases.
Logging to rsyslogd is currently still in place with logs being sent
both to rsyslogd and Juju's DB. Logging to rsyslogd will be removed
before the final Juju 2.0 release.
The 'juju debug-log' command will continue to function as before and
should be used as the default way of accessing Juju's logs.
This change does not affect the per machine (machine-N.log) and per unit
(unit-*-N.log) log files that exist on each Juju managed host. These
continue to function as they did before.
A new 'juju-dumplogs' tool is also now available. This can be run on
Juju controllers to extract the logs from Juju's database even when the
Juju server isn't available. It is intended to be used as a last resort
in emergency situations. 'juju-dumplogs' will be available on the system
$PATH and requires no command line options in typical usage.
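For example, on a controller machine it can simply be run as:
    juju-dumplogs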
### API Login with Macaroons
Juju 2.0 supports an alternate API login method based on macaroons.
This will support the new charm publishing workflow coming in future
releases.
### Unit Agent Improvements
We've made improvements to worker lifecycle management in the unit agent
in this release. The resource dependencies (API connections, locks,
etc.) shared among concurrent workers that comprise the agent are now
well-defined, modeled and coordinated by an engine, in a design inspired
by Erlang supervisor trees.
This improves the long-term testability of the unit agent, and should
improve the agent's resilience to failure. This work also allows hook
contexts to execute concurrently, which supports features in development
targeting 2.0.
### Experimental address-allocation feature flag is no longer supported
In earlier releases, it was possible to get Juju to use static IP
addresses for containers from the same subnet as their host machine,
using the following development feature flag:
$ JUJU_DEV_FEATURE_FLAGS=address-allocation juju bootstrap ...
This flag is no longer supported and will not be accepted.
### Support for MAAS 1.9+ Spaces and Related APIs
Juju 2.0 now natively supports the new spaces API in MAAS 1.9+. Spaces
are automatically discovered from MAAS (1.9+) on bootstrap and available
for use with service endpoint bindings or machine provisioning
constraints (see below). Space discovery works for the controller model
as well as any model created later using "juju create-model".
Currently there is no command to update the spaces in Juju if their
corresponding MAAS spaces change. As a workaround, restarting the
controller machine agent (jujud) discovers any new spaces.
#### Binding Service Endpoints to Spaces
When deploying a service, you can use the optional --bind argument to
specify to which space individual charm endpoints should be bound. The
syntax for the --bind argument is a whitespace-separated list of
endpoint and space names, separated by "=".
Binding means the "bound" endpoints will get addresses from subnets
belonging to the space the endpoint is bound to. When --bind is not
specified, all endpoints will use the same address (backwards-compatible
behavior) which is the host machine's preferred private address (as
returned by "unit-get private-address"). Additionally, a service-default
space can be specified by omitting the "<endpoint>=" prefix before the
space name. This space will be used for binding all endpoints that are
not explicitly specified.
Examples:
juju deploy mysql --bind "db=database server=internal"
Bind "db" endpoint to an address part of the "database" space (i.e. the
address is coming from one of the "database" space's subnets in MAAS).
juju deploy wordpress --bind internal-apps
Bind *all* endpoints of wordpress to the "internal-apps" space.
juju deploy haproxy --bind "url=public internal"
Bind "url" to "public", and all other endpoints to "internal".
#### Binding Endpoints of Services Within a Bundle to Spaces
Bindings can be specified for services within a bundle the same way as
when deploying individual charms. The bundle YAML can include a section
called "bindings", defining the map of endpoint names to space names.
Example bundle.yaml excerpt:
    ...
    mysql:
      charm: cs:trusty/mysql-53
      num_units: 1
      constraints: mem=4G
      bindings:
        server: database
        cluster: internal
    ...
Deploying a bundle that includes a section like the example above is
equivalent to running:
juju deploy mysql --bind "server=database cluster=internal"
There is currently no way to declare a service-default space for all
endpoints in a bundle's bindings section. A workaround is to list all
endpoints explicitly.
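A sketch of that workaround for the mysql example above, assuming
"server" and "cluster" are its only endpoints:
    bindings:
      server: internal
      cluster: internal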
#### New Hook Command: network-get
When deploying a service with endpoint bindings specified, charm authors
can use the new "network-get" hook command to determine which address to
advertise for a given endpoint. This approach will eventually replace
"unit-get private-address" as well as various other ways to get the
address to use for a given unit.
There is currently a mandatory --primary-address argument to
"network-get", which guarantees a single IP address is returned.
Example (within a charm hook):
relation-ids cluster
url:2
network-get -r url:2 --primary-address
10.20.30.23
(assuming the service was deployed with e.g. --bind url=internal, and
10.20.30.0/24 is one of the subnets in that "internal" space).
#### Multiple Positive and Negative Spaces Supported in Constraints
Earlier releases which introduced spaces constraints ignored all but the
first positive space in the list. While the AWS provider still does
that, for MAAS deployments all spaces constraints, positive and
negative, are applied during machine selection.
Example:
juju add-machine --constraints spaces=public,internal,^db,^admin
When used on a MAAS-based model, Juju will select a machine which has
access to both the "public" and "internal" spaces, but neither the
"db" nor the "admin" spaces.
#### mediawiki demo bundle using bindings
A customised version of the mediawiki bundle[1] is available that
deploys haproxy, mediawiki and mysql. Traffic between haproxy and
mediawiki is on a space called "internal", and traffic between
mediawiki and mysql is on a space called "db". The haproxy website
endpoint is bound to the "public" space.
[1] - http://juju-sapphire.github.io/MAAS%20Spaces%20Demo/
### Resources
A new concept has been introduced into Charms called "resources".
Resources are binary blobs that the charm can utilize, and are declared
in the metadata for the Charm. All declared resources will have a
version stored in the charm store; however, updates to these can be
uploaded from an admin's local machine to the controller.
#### Change to Metadata
A new clause has been added to metadata.yaml for resources. Resources
can be declared as follows:
    resources:
      name:
        type: file   # the only type initially
        filename: filename.tgz
        description: "One line that is useful when operators need to push it."
#### New User Commands
Three new commands have been introduced:
1. juju list-resources

   usage: juju list-resources [options] service-or-unit
   purpose: show the resources for a service or unit

   options:
   --format (= tabular)
       specify output format (json|tabular|yaml)
   -m, --model (= "")
       juju model to operate in
   -o, --output (= "")
       specify an output file

   This command shows the resources required by and those in use by an
   existing service or unit in your model.

   aliases: resources
2. juju push-resource

   usage: juju push-resource [options] service name=file
   purpose: upload a file as a resource for a service

   options:
   -m, --model (= "")
       juju model to operate in

   This command uploads a file from your local disk to the juju
   controller to be used as a resource for a service.
3. juju charm list-resources

   usage: juju charm [options] <command> ...
   purpose: interact with charms

   options:
   --description (= false)
   -h, --help (= false)
       show help on a command or other topic

   "juju charm" is the juju CLI equivalent of the "charm" command used
   by charm authors, though only applicable functionality is mirrored.

   commands:
   help           - show help on a command or other topic
   list-resources - display the resources for a charm in the charm store
   resources      - alias for 'list-resources'
In addition, resources may be uploaded at deploy time by passing the
--resource flag to the deploy command, followed by a name=filepath
pair. The flag may be repeated to upload more than one resource:
juju deploy foo --resource bar=/some/file.tgz --resource baz=./docs/cfg.xml
Where bar and baz are resources named in the metadata for the foo charm.
#### New Charmer Concepts
##### Expansion of the upgrade-charm hook
The concept of what makes up a charm has now expanded to be the tuple of
the charm workload and the resources it is bundled with. For this
reason, if a new resource is uploaded to the charm store or controller,
the 'upgrade-charm' hook will now fire.
##### resource-get
There is a new hook tool 'resource-get' which is used while a hook is
running to get the local path to the file for the identified resource.
This file is an fs-local copy, unique to the unit for which the hook is
running. It is downloaded from the controller, if necessary.
If 'resource-get' for a resource has not been run before (for the unit)
then the resource is downloaded from the controller at the revision
associated with the unit's service. That file is stored in the unit's
local cache. If 'resource-get' **has** been run before then each
subsequent run syncs the resource with the controller. This ensures that
the revision of the unit-local copy of the resource matches the revision
of the resource associated with the unit's service.
Either way, the path provided by 'resource-get' references the up-to-
date file for the resource. Note that the resource may get updated on
the controller for the service at any time, meaning the cached copy
**may** be out of date at any time after you call 'resource-get'.
Consequently, the command should be run at every point where it is
critical that the resource be up to date.
The 'upgrade-charm' hook will be fired whenever a new resource has
become available. Thus, in the hook the charm may call 'resource-get',
forcing an update if the resource has changed.
Note that 'resource-get' only provides an FS path to the resource file.
It does not provide any information about the resource (e.g. revision).
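A minimal sketch of calling it from a hook, assuming a resource named
"bar" is declared in metadata.yaml (the destination path is
illustrative):
    #!/bin/sh
    # resource-get prints the local path to the up-to-date resource file
    resource_path=$(resource-get bar)
    cp "$resource_path" /srv/myapp/bar.tgz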
### Juju Status Improvements
The default Juju status format is now tabular (not yaml). Yaml can
still be output by using the "--format yaml" argument. The deprecated
agent-state and associated yaml attributes have been removed (these
have been replaced since 1.24 by the agent-status and workload-status
attributes).
The tabular status output now also includes relation information. This
was previously only shown in the yaml and json output formats.
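To get the yaml output as before:
    juju status --format yaml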
### Relation get-config and set-config compatibility
See https://bugs.launchpad.net/juju-core/+bug/1382274
If 'juju get-config' is used to save YAML output to a file, the same
file can now be used as input to 'juju set-config'. The functions are
now reciprocal such that the output of one can be used as the input of
the other with no changes required, so that:
1. complex configuration data containing multiple levels of quotes can
be modified via YAML without needing to be escaped on shell command
lines, and
2. large amounts of config data can be transported from one juju model
to another in a trivial fashion.
### Support for new EC2 M4 Instance Types
Juju now supports m4 instances on EC2.
m4.large
m4.xlarge
m4.2xlarge
m4.4xlarge
m4.10xlarge
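One way to request a specific instance type is via constraints (a
sketch; 'instance-type' is an existing Juju constraint on EC2):
    juju deploy mysql --constraints instance-type=m4.large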
### Support for win10 and win2016
Juju now supports Windows 10 Enterprise and Windows Server 2016.
### Known issues
* Some providers release wrong resources when destroying hosted models
Lp 1536792
* Local provider using KVM is broken
Lp 1547665
## Resolved issues
* Juju ignores index2.sjson in favour of index.json
Lp 1542131
* New ec2 korea region
Lp 1530957
* Maas bridge script handles vlan nics incorrectly
Lp 1532167
* Controller create model doesn't work with rax provider
Lp 1536446
* Juju's maas bridging script needs to de-duplicate dns-* iface
options
Lp 1536728
* Cpc sjson triggers failed to parse public key: openpgp: invalid
argument: no armored data found
Lp 1542127
Finally
We encourage everyone to subscribe to the mailing list at
juju-dev@lists.canonical.com, or join us on #juju-dev on freenode.
--
Curtis Hovey
Canonical Cloud Development and Operations
http://launchpad.net/~sinzui