Kubernetes charms now support 1.6!

Matt Bruzek matthew.bruzek at canonical.com
Wed Apr 12 22:34:42 UTC 2017


We are proud to release the latest Charms supporting Kubernetes version
1.6.1!


Kubernetes 1.6 is a major milestone for the community. We’ve got a full
write-up of features and support on our blog:
<https://insights.ubuntu.com/2017/04/12/general-availability-of-kubernetes-1-6-on-ubuntu/>

Getting Started

Here’s the simplest way to get a Kubernetes cluster up and running on an
Ubuntu 16.04 system:

sudo snap install conjure-up --classic
conjure-up kubernetes


During the installation, conjure-up will ask you what cloud you want to
deploy on and prompt you for the proper credentials. If you’re deploying to
local containers (LXD), see these instructions
<https://kubernetes.io/docs/getting-started-guides/ubuntu/local/> for
localhost-specific considerations.
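
Once conjure-up finishes, you can sanity-check the cluster from the same
machine. This is a minimal sketch; conjure-up drives the deployment through
Juju and sets up kubectl with the cluster credentials as part of its
post-deploy steps, so your unit names and output may differ:

# All applications (kubernetes-master, kubernetes-worker, etcd, ...) should
# eventually report an "active" status.
juju status

# The worker nodes should be registered and Ready.
kubectl get nodes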

For production-grade deployments and cluster lifecycle management, we
recommend reading the full Canonical Distribution of Kubernetes
documentation <https://kubernetes.io/docs/getting-started-guides/ubuntu/>.
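
If you already use Juju directly, the same cluster can also be deployed from
the charm store without conjure-up. A minimal sketch, assuming a bootstrapped
Juju controller (the linked documentation covers the details):

# Deploy the Canonical Distribution of Kubernetes bundle
juju deploy canonical-kubernetes

# Watch the applications converge
juju status

# Copy the cluster credentials from the master so kubectl can reach the API
juju scp kubernetes-master/0:config ~/.kube/config
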
Upgrading an existing cluster

If you’ve got a cluster already deployed, we’ve got instructions to help
you upgrade. If possible, deploying a new cluster will be the easiest
route. Otherwise, the instructions for upgrading are outlined here:
https://insights.ubuntu.com/2017/04/12/general-availability-of-kubernetes-1-6-on-ubuntu/#upgrades
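
At the charm level, an upgrade is driven with `juju upgrade-charm`; this is
only a rough sketch, and the post linked above covers the full 1.6 procedure,
including the migration of components to snaps:

# Upgrade the charms in place, one application at a time
juju upgrade-charm kubernetes-master
juju upgrade-charm kubernetes-worker
juju upgrade-charm etcd
juju upgrade-charm flannel
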
Changes in this release

   - Support for Kubernetes v1.6, with the current release being 1.6.1.

   - Installation of components via snaps: kubectl, kube-apiserver,
     kube-controller-manager, kube-scheduler, kubelet, and kube-proxy. To learn
     more about snaps: https://snapcraft.io

   - Added ‘allow-privileged’ config option on the kubernetes-master and
     kubernetes-worker charms (see the example after this list). Valid values
     are true|false|auto (default: auto). If the value is ‘auto’, containers
     will run in unprivileged mode unless GPU hardware is detected on a worker
     node. If there are GPUs, or the value is true, Kubernetes will set
     `--allow-privileged=true`. Otherwise the flag is set to false.

   - Added GPU support (beta). If Nvidia GPU hardware is detected on a worker
     node, Nvidia drivers and CUDA packages will be installed, and kubelet will
     be restarted with the flags required to use the GPU hardware. The
     ‘allow-privileged’ config option must be ‘true’ or ‘auto’.

       - Nvidia driver version = 375.26; CUDA version = 8.0.61; these will be
         configurable in future charm releases.

       - GPU support does not currently work on lxd.

       - This feature is beta - feedback on the implementation is welcomed.

   - Added support for running your own private registry; see the docs here
     <https://github.com/juju-solutions/kubernetes/tree/1.6-staging/cluster/juju/layers/kubernetes-worker#private-registry>
     for instructions.
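
For example, the allow-privileged option described above can be toggled with
`juju config`; a minimal sketch, using the application names from the standard
bundle:

# Force privileged containers on, regardless of GPU detection
juju config kubernetes-master allow-privileged=true
juju config kubernetes-worker allow-privileged=true

# Or return to the default behaviour
juju config kubernetes-worker allow-privileged=auto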

General Fixes:

   - Fixed a bug in the kubeapi-load-balancer not properly forwarding
     SPDY/HTTP2 traffic for `kubectl exec` commands.
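
A quick way to confirm this against a load-balanced API endpoint; the pod
name here is just a placeholder for any running pod in your cluster:

# Interactive exec traffic now passes through kubeapi-load-balancer correctly
kubectl exec -it my-pod -- /bin/sh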

Etcd-specific changes:

   - Installation of etcd and etcdctl is now done using the `snap install`
     command.

   - We support upgrading from the previous etcd charm to the latest charm
     with the snap delivery mechanism. See the manual upgrade process for
     updating existing etcd clusters.
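
To confirm that etcd on an existing unit is now snap-delivered, you can list
the snaps on one of the etcd units; a minimal sketch, assuming the standard
application name from the bundle:

# etcd should appear in the list of installed snaps on the unit
juju ssh etcd/0 snap list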

Changes to the bundles and layers:

   - Added a registry action to the kubernetes-worker layer, which deploys a
     Docker registry in Kubernetes (see the example after this list).

   - Added support for the kube-proxy cluster-cidr option.
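
For instance, the registry action can be invoked with `juju run-action`; this
is only a sketch, and the action's parameters (registry credentials, TLS
material, and so on) are documented in the kubernetes-worker layer docs linked
above:

# Queue the action against a worker unit, then inspect its result using the
# action id that juju prints
juju run-action kubernetes-worker/0 registry
juju show-action-output <action-id>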

Test results

The Canonical Distribution of Kubernetes runs daily tests to verify that it
works with the upstream code. As part of the Kubernetes test infrastructure
we upload daily test runs. The test results are available on the dashboard;
follow along with our progress here:

https://k8s-gubernator.appspot.com/builds/canonical-kubernetes-tests/logs/kubernetes-gce-e2e-node/

How to contact us

We're normally found in the Kubernetes Slack channels and attend these
Special Interest Group (SIG) meetings regularly:

  - sig-cluster-lifecycle <https://kubernetes.slack.com/messages/sig-cluster-lifecycle/>
  - sig-cluster-ops <https://kubernetes.slack.com/messages/sig-cluster-ops/>
  - sig-onprem <https://kubernetes.slack.com/messages/sig-onprem/>


Operators are an important part of Kubernetes; we encourage you to
participate with other members of the Kubernetes community!

We also monitor the Kubernetes mailing lists and other community channels
<http://kubernetes.io/community/>, so feel free to reach out to us. As always,
PRs, recommendations, and bug reports are welcome:

https://github.com/juju-solutions/bundle-canonical-kubernetes