lxd server guide section

Serge Hallyn serge.hallyn at ubuntu.com
Wed Mar 30 23:53:29 UTC 2016


Awesome, thanks.

Quoting Simon Quigley (tsimonq2 at ubuntu.com):
> I'll find a way to add a signed-off-by and push for you, if you don't
> mind. Thanks again! :)
> 
> On 03/30/16 17:22, Serge Hallyn wrote:
> > Attached is the diff to C/serverguide/virtualization.xml in lp:serverguide.
> > I've also tossed it into my same git tree as serverguide.lxd.diff.  (bzr
> > push is failing for me and the sun finally came out so I'm not dealing with
> > that now :)
> > 
> > === modified file 'serverguide/C/virtualization.xml'
> > --- serverguide/C/virtualization.xml	2016-02-07 19:02:01 +0000
> > +++ serverguide/C/virtualization.xml	2016-03-30 21:52:58 +0000
> > @@ -771,6 +771,880 @@
> >  
> >    </sect1>
> >  
> > +  <sect1 id="lxd" status="review">
> > +    <title>LXD</title>
> > +
> > +    <para>
> > +    LXD (pronounced lex-dee) is the lightervisor, or lightweight container
> > +    hypervisor.  While this claim has been controversial, it has been <ulink
> > +    url="http://blog.dustinkirkland.com/2015/09/container-summit-presentation-and-live.html">quite
> > +    well justified</ulink> based on the original academic paper.  It also
> > +    nicely distinguishes LXD from <ulink
> > +    url="https://help.ubuntu.com/lts/serverguide/lxc.html">LXC</ulink>.
> > +    </para>
> > +
> > +    <para>
> > +    LXC (lex-see) is a program which creates and administers "containers" on a
> > +    local system.  It also provides an API to allow higher level managers, such
> > +    as LXD, to administer containers.  In a sense, one could compare LXC to
> > +    QEMU, while comparing LXD to libvirt.
> > +    </para>
> > +
> > +    <para>
> > +    The LXC API deals with a 'container'.  The LXD API deals with 'remotes,'
> > +    which serve images and containers.  This extends the LXC functionality over
> > +    the network, and allows concise management of tasks like container
> > +    migration and container image publishing.
> > +    </para>
> > +
> > +    <para>
> > +    LXD uses LXC under the covers for some container management tasks.
> > +    However, it keeps its own container configuration information and has its
> > +    own conventions, so that it is best not to use classic LXC commands by hand
> > +    with LXD containers.  This document will focus on how to configure and
> > +    administer LXD on Ubuntu systems.
> > +    </para>
> > +
> > +    <sect2 id="lxd-resources"> <title>Online Resources</title>
> > +
> > +      <para>
> > +      There is excellent documentation for <ulink url="http://github.com/lxc/lxd">getting
> > +      started with LXD</ulink> in the online LXD README.  There is also an online
> > +      server allowing you to <ulink url="http://linuxcontainers.org/lxd/try-it">try
> > +      out LXD remotely</ulink>.  Stéphane Graber also has an <ulink
> > +      url="https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/">excellent
> > +      blog series</ulink> on LXD 2.0.  Finally, there is great documentation on how
> > +      to <ulink url="https://jujucharms.com/docs/devel/config-LXD">drive LXD using
> > +      Juju</ulink>.
> > +      </para>
> > +
> > +      <para>
> > +      This document will offer an Ubuntu Server-specific view of LXD, focusing
> > +      on administration.
> > +      </para>
> > +    </sect2>
> > +
> > +    <sect2 id="lxd-installation"> <title>Installation</title>
> > +
> > +      <para>
> > +      LXD is pre-installed on Ubuntu Server cloud images.  On other systems, the lxd
> > +      package can be installed using:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +sudo apt install lxd
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +      This will install LXD as well as the recommended dependencies, including the LXC
> > +      library and lxcfs.
> > +      </para>
> > +    </sect2>
> > +
> > +    <sect2 id="lxd-kernel-prep"> <title> Kernel preparation </title>
> > +
> > +      <para>
> > +      In general, Ubuntu 16.04 should have all the desired features enabled by
> > +      default.  One exception is that in order to enable swap accounting, the
> > +      boot argument <command>swapaccount=1</command> must be set.  This can be
> > +      done by appending it to the <command>GRUB_CMDLINE_LINUX_DEFAULT=</command> variable in
> > +      <filename>/etc/default/grub</filename>, then running 'update-grub' as root and rebooting.
> > +      </para>
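> > +
> > +      <para>
> > +      For example, assuming no other default boot arguments are already set
> > +      (if there are, append swapaccount=1 to the existing value instead), the
> > +      change would look roughly like:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +# in /etc/default/grub:
> > +GRUB_CMDLINE_LINUX_DEFAULT="swapaccount=1"
> > +
> > +# then, as root:
> > +update-grub
> > +reboot
> > +</command>
> > +</screen>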
> > +
> > +    </sect2>
> > +
> > +    <sect2 id="lxd-configuration"> <title> Configuration </title>
> > +
> > +      <para>
> > +      By default, LXD is installed listening on a local UNIX socket, which
> > +      members of group 'lxd' can talk to.  It has no trust password set up,
> > +      and it uses the filesystem at <filename>/var/lib/lxd</filename> to store
> > +      containers.  To configure LXD with different settings, use <command>lxd
> > +      init</command>.  This will allow you to choose:
> > +      </para>
> > +
> > +      <itemizedlist>
> > +      <listitem>
> > +        Directory or <ulink url="http://open-zfs.org">ZFS</ulink> container
> > +        backend.  If you choose ZFS, you can choose which block devices to use,
> > +        or the size of a file to use as backing store.
> > +        </listitem>
> > +      <listitem> Availability over the network
> > +        </listitem>
> > +      <listitem> A 'trust password' used by remote clients to vouch for their client certificate
> > +        </listitem>
> > +      </itemizedlist>
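> > +
> > +      <para>
> > +      For example, to walk through these choices interactively:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +sudo lxd init
> > +</command>
> > +</screen>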
> > +
> > +      <para>
> > +      You must run 'lxd init' as root.  'lxc' commands can be run as any
> > +      user who is a member of group 'lxd'.  If user joe is not a member of
> > +      group 'lxd', you may run:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +adduser joe lxd
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +      as root to change it.  The new membership will take effect on the next login, or after
> > +      running 'newgrp lxd' from an existing login.
> > +      </para>
> > +
> > +      <para>
> > +      For more information on server, container, profile, and device configuration,
> > +      please refer to the definitive configuration documentation provided with the
> > +      source code, which can be found
> > +      <ulink url="https://github.com/lxc/lxd/blob/master/doc/configuration.md">online</ulink>.
> > +      </para>
> > +
> > +    </sect2>
> > +
> > +    <sect2 id="lxd-first-container"> <title> Creating your first container </title>
> > +
> > +      <para>
> > +      This section will describe the simplest container tasks.
> > +      </para>
> > +
> > +      <sect3> <title> Creating a container </title>
> > +
> > +        <para>
> > +        Every new container is created based on either an image, an existing container,
> > +        or a container snapshot.  At install time, LXD is configured with the following
> > +        image servers:
> > +        </para>
> > +
> > +        <itemizedlist>
> > +          <listitem>
> > +          <filename>ubuntu</filename>: this serves official Ubuntu server cloud image releases.
> > +          </listitem>
> > +          <listitem>
> > +          <filename>ubuntu-daily</filename>: this serves official Ubuntu server cloud images of the daily
> > +            development releases.
> > +          </listitem>
> > +          <listitem>
> > +          <filename>images</filename>: this is a default-installed alias for images.linuxcontainers.org.
> > +            It serves classic LXC images built using the same images which the
> > +            LXC 'download' template uses.  This includes various distributions and
> > +            minimal custom-made Ubuntu images.  This is not the recommended
> > +            server for Ubuntu images.
> > +          </listitem>
> > +        </itemizedlist>
> > +
> > +        <para>
> > +        The command to create and start a container is
> > +        </para>
> > +
> > +<screen>
> > +<command>
> > +lxc launch remote:image containername
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        Images are identified by their hash, but are also aliased.  The 'ubuntu'
> > +        server knows many aliases such as '16.04' and 'xenial'.  A list of all
> > +        images available on the 'ubuntu' server can be seen using:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc image list ubuntu:
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        To see more information about a particular image, including all the aliases it
> > +        is known by, you can use:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc image info ubuntu:xenial
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        You can generally refer to an Ubuntu image using the release name ('xenial')
> > +        or the release number ('16.04').  In addition, 'lts' is an alias for the
> > +        latest supported LTS release.  To choose a different architecture, you can
> > +        append the desired architecture to the alias:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc image info ubuntu:lts/arm64
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        Now, let's start our first container:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc launch ubuntu:xenial x1
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        This will download the official current Xenial cloud image for your current
> > +        architecture, then create a container using that image, and finally start it.
> > +        Once the command returns, you can see it using:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc list
> > +lxc info x1
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        and open a shell in it using:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc exec x1 bash
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        The try-it page gives a full synopsis of the commands you can use to administer
> > +        containers.
> > +      </para>
> > +
> > +      <para>
> > +        Now that the 'xenial' image has been downloaded, it will be kept in sync
> > +        until no new containers have been created from it for (by default) 10
> > +        days.  After that, it will be deleted.
> > +      </para>
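> > +
> > +      <para>
> > +        If desired, the expiry interval can be changed through the
> > +        images.remote_cache_expiry server configuration key, which takes a
> > +        number of days.  For example, to keep unused images for 30 days:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config set images.remote_cache_expiry 30
> > +</command>
> > +</screen>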
> > +    </sect3>
> > +  </sect2>
> > +
> > +  <sect2 id="lxd-server-config"> <title> LXD Server Configuration </title>
> > +
> > +      <para>
> > +        By default, LXD is socket activated and configured to listen only on a
> > +        local UNIX socket.  While LXD may not be running when you first look at the
> > +        process listing, any LXC command will start it up.  For instance:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc list
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        This will create your client certificate and contact the LXD server for a
> > +        list of containers.  To make the server accessible over the network, you
> > +        can set the https address and port using:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config set core.https_address :8443
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        This will tell LXD to listen on port 8443 on all addresses.
> > +      </para>
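> > +
> > +      <para>
> > +        To listen on a single address instead, the address can be given
> > +        explicitly (192.0.2.10 below is only a placeholder):
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config set core.https_address 192.0.2.10:8443
> > +</command>
> > +</screen>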
> > +
> > +      <sect3> <title> Authentication</title>
> > +
> > +      <para>
> > +        By default, LXD will allow all members of group 'lxd' (which by default includes
> > +        all members of group admin) to talk to it over the UNIX socket.  Communication
> > +        over the network is authorized using server and client certificates.
> > +      </para>
> > +
> > +      <para>
> > +        Before a client c1 can use a remote server r1, r1 must be registered using:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc remote add r1 r1.example.com:8443
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        The fingerprint of r1's certificate will be shown, to allow the user at
> > +        c1 to reject a false certificate.  The server in turn will verify that
> > +        c1 may be trusted in one of two ways.  The first is to register it in advance
> > +        from any already-registered client, using:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config trust add r1 certfile.crt
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        Now when the client adds r1 as a known remote, it will not need to provide
> > +        a password as it is already trusted by the server.
> > +      </para>
> > +
> > +      <para>
> > +        The other is to configure a 'trust password' with r1, either at initial
> > +        configuration using 'lxd init', or after the fact using
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config set core.trust_password PASSWORD
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        The password can then be provided when the client registers
> > +        r1 as a known remote.
> > +      </para>
> > +
> > +      </sect3>
> > +
> > +      <sect3> <title> Backing store </title>
> > +
> > +      <para>
> > +      LXD supports several backing stores.  The recommended backing store is
> > +      ZFS, however this is not available on all platforms.  Supported backing
> > +      stores include:
> > +      </para>
> > +
> > +      <itemizedlist>
> > +      <listitem>
> > +        <para>
> > +          ext4: this is the default, and easiest to use.  With an ext4 backing store,
> > +          containers and images are simply stored as directories on the host filesystem.
> > +          Launching new containers requires copying a whole filesystem, and 10 containers
> > +          will take up 10 times as much space as one container.
> > +        </para>
> > +      </listitem>
> > +
> > +      <listitem>
> > +        <para>
> > +          ZFS: if ZFS is supported on your architecture (amd64, arm64, or ppc64le), you
> > +          can set LXD up to use it using 'lxd init'.  If you already have a ZFS pool
> > +          configured, you can tell LXD to use it by setting the zfs_pool_name configuration
> > +          key:
> > +        </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config set storage.zfs_pool_name lxd
> > +</command>
> > +</screen>
> > +
> > +        <para>
> > +          With ZFS, launching a new container is fast because the filesystem
> > +          starts as a copy-on-write clone of the image's filesystem.  Note that
> > +          unless the container is privileged (see below), LXD will need to change
> > +          ownership of all files before the container can start; however, this is
> > +          fast and changes very little of the actual filesystem data.
> > +        </para>
> > +      </listitem>
> > +
> > +      <listitem>
> > +        <para>
> > +          Btrfs: Btrfs can be used with many of the same advantages as
> > +          ZFS.  To use Btrfs as an LXD backing store, simply mount a Btrfs
> > +          filesystem under <filename>/var/lib/lxd</filename> (see the sketch
> > +          after this list).  LXD will detect this and exploit the Btrfs
> > +          subvolume feature whenever launching a new container or
> > +          snapshotting a container.
> > +        </para>
> > +      </listitem>
> > +
> > +      <listitem>
> > +        <para>
> > +          LVM: To use an LVM volume group called 'lxd', you may tell LXD to use it
> > +          for containers and images using the command:
> > +        </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config set storage.lvm_vg_name lxd
> > +</command>
> > +</screen>
> > +
> > +        <para>
> > +          When launching a new container, its rootfs will start as an LV clone.  It
> > +          is immediately mounted so that the file uids can be shifted, then
> > +          unmounted.  Container snapshots are also created as LV snapshots.
> > +        </para>
> > +      </listitem>
> > +      </itemizedlist>
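> > +
> > +      <para>
> > +        As an example of the Btrfs case, a dedicated block device could be
> > +        formatted and mounted under /var/lib/lxd before any containers are
> > +        created.  This sketch assumes a spare device /dev/sdc; adjust for
> > +        your system:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +sudo mkfs.btrfs /dev/sdc
> > +sudo mount /dev/sdc /var/lib/lxd
> > +</command>
> > +</screen>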
> > +      </sect3>
> > +    </sect2>
> > +
> > +    <sect2 id="lxd-container-config"> <title> Container configuration </title>
> > +
> > +      <para>
> > +        Containers are configured according to a set of profiles, described in the
> > +        next section, and a set of container-specific configuration.  Profiles are
> > +        applied first, so that container-specific configuration can override profile
> > +        configuration.
> > +      </para>
> > +
> > +      <para>
> > +        Container configuration includes properties like the architecture, limits
> > +        on resources such as CPU and RAM, security details including apparmor
> > +        restriction overrides, and devices to apply to the container.
> > +      </para>
> > +
> > +      <para>
> > +        Devices can be of several types, including UNIX character, UNIX block,
> > +        network interface, or 'disk'.  In order to insert a host mount into a
> > +        container, a 'disk' device type would be used.  For instance, to mount
> > +        /opt in container c1 at /opt, you could use:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config device add c1 opt disk source=/opt path=opt
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        See:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc help config
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        for more information about editing container configurations.  You may
> > +        also use:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config edit c1
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        to edit the whole of c1's configuration in your specified $EDITOR.
> > +        Comments at the top of the configuration will show examples of
> > +        correct syntax to help administrators hit the ground running.  If
> > +        the edited configuration is not valid when you exit $EDITOR, then
> > +        $EDITOR will be restarted.
> > +      </para>
> > +
> > +    </sect2>
> > +
> > +    <sect2 id="lxd-profiles"> <title> Profiles </title>
> > +
> > +      <para>
> > +      Profiles are named collections of configurations which may be applied
> > +      to more than one container.  For instance, all containers created with
> > +      'lxc launch', by default, include the 'default' profile, which provides a
> > +      network interface 'eth0'.
> > +      </para>
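> > +
> > +      <para>
> > +      For example, a custom profile can be created and edited, then applied
> > +      alongside the default profile at launch time.  The profile name
> > +      'myprofile' here is only an illustration:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc profile create myprofile
> > +lxc profile edit myprofile
> > +lxc launch ubuntu:xenial c1 -p default -p myprofile
> > +</command>
> > +</screen>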
> > +
> > +      <para>
> > +        To mask a device which would be inherited from a profile but which should
> > +        not be in the final container, define a device by the same name but of
> > +        type 'none':
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config device add c1 eth1 none
> > +</command>
> > +</screen>
> > +
> > +    </sect2>
> > +    <sect2 id="lxd-nesting"> <title> Nesting </title>
> > +
> > +      <para>
> > +        Containers all share the same host kernel.  This means that there is always
> > +        an inherent trade-off between features exposed to the container and host
> > +        security from malicious containers.  Containers by default are therefore
> > +        restricted from features needed to nest child containers.  In order to
> > +        run LXC or LXD containers under an LXD container, the
> > +        'security.nesting' feature must be set to true:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config set container1 security.nesting true
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        Once this is done, container1 will be able to start sub-containers.
> > +      </para>
> > +
> > +      <para>
> > +        In order to run unprivileged (the default in LXD) containers nested under an
> > +        unprivileged container, you will need to ensure a wide enough UID mapping.
> > +        Please see the 'UID mapping' section below.
> > +      </para>
> > +
> > +      <sect3> <title> Docker </title>
> > +
> > +      <para>
> > +        In order to facilitate running Docker containers inside an LXD container,
> > +        a 'docker' profile is provided.  To launch a new container with the
> > +        docker profile, you can run:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc launch ubuntu:xenial container1 -p default -p docker
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        Note that currently the docker package in Ubuntu 16.04 is patched to
> > +        facilitate running in a container.  This support is expected to land
> > +        upstream soon.
> > +      </para>
> > +
> > +      <para>
> > +        Note that 'cgroup namespace' support is also required.  This is
> > +        available in the 16.04 kernel as well as in the 4.6 upstream
> > +        source.
> > +      </para>
> > +
> > +      </sect3>
> > +    </sect2>
> > +
> > +    <sect2 id="lxd-limits"> <title> Limits </title>
> > +
> > +      <para>
> > +        LXD supports flexible constraints on the resources which containers
> > +        can consume.  The limits come in the following categories:
> > +      </para>
> > +
> > +      <itemizedlist>
> > +      <listitem>
> > +        CPU: limit the CPU available to the container in several ways.
> > +      </listitem>
> > +      <listitem>
> > +        Disk: configure the priority of I/O requests under load.
> > +      </listitem>
> > +      <listitem>
> > +        RAM: configure memory and swap availability.
> > +      </listitem>
> > +      <listitem>
> > +        Network: configure the network priority under load.
> > +      </listitem>
> > +      <listitem>
> > +        Processes: limit the number of concurrent processes in the container.
> > +      </listitem>
> > +      </itemizedlist>
> > +
> > +      <para>
> > +        For a full list of limits known to LXD, see
> > +        <ulink url="https://github.com/lxc/lxd/blob/master/doc/configuration.md">
> > +        the configuration documentation</ulink>.
> > +      </para>
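> > +
> > +      <para>
> > +        As a brief sketch, assuming a container named c1, a memory cap and a
> > +        CPU count can be set using the limits.memory and limits.cpu
> > +        configuration keys:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config set c1 limits.memory 512MB
> > +lxc config set c1 limits.cpu 2
> > +</command>
> > +</screen>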
> > +
> > +    </sect2>
> > +
> > +    <sect2 id="lxd-uid"> <title> UID mappings and Privileged containers </title>
> > +
> > +      <para>
> > +        By default, LXD creates unprivileged containers.  This means that root
> > +        in the container is a non-root UID on the host.  It is privileged against
> > +        the resources owned by the container, but unprivileged with respect to
> > +        the host, making root in a container roughly equivalent to an unprivileged
> > +        user on the host.  (The main exception is the increased attack surface
> > +        exposed through the system call interface.)
> > +      </para>
> > +
> > +      <para>
> > +        Briefly, in an unprivileged container, 65536 UIDs are 'shifted' into the
> > +        container.  For instance, UID 0 in the container may be 100000 on the host,
> > +        UID 1 in the container is 100001, etc, up to 165535.  The starting value
> > +        for UIDs and GIDs, respectively, is determined by the 'root' entry in the
> > +        <filename>/etc/subuid</filename> and <filename>/etc/subgid</filename> files.  (See the
> > +        <ulink url="http://manpages.ubuntu.com/manpages/xenial/en/man5/subuid.5.html">
> > +        subuid(5) manual page</ulink>.)
> > +      </para>
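> > +
> > +      <para>
> > +        For instance, on a typical Ubuntu system the default 'root' entries
> > +        look roughly like the following (the exact ranges may differ on your
> > +        system):
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +$ grep root /etc/subuid /etc/subgid
> > +/etc/subuid:root:100000:65536
> > +/etc/subgid:root:100000:65536
> > +</command>
> > +</screen>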
> > +
> > +      <para>
> > +        It is possible to request a container to run without a UID mapping by
> > +        setting the security.privileged flag to true:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config set c1 security.privileged true
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        Note, however, that in this case the root user in the container is the
> > +        root user on the host.
> > +      </para>
> > +
> > +      </sect2>
> > +
> > +      <sect2 id="lxd-aa"> <title> Apparmor </title>
> > +
> > +      <para>
> > +        LXD confines containers by default with an apparmor profile which protects
> > +        containers from each other and the host from containers.  For instance,
> > +        this will prevent root in one container from signaling root in another
> > +        container, even though they have the same uid mapping.  It also prevents
> > +        writing to dangerous, un-namespaced files such as many sysctls and
> > +        <filename>/proc/sysrq-trigger</filename>.
> > +      </para>
> > +
> > +      <para>
> > +        If the apparmor policy needs to be modified for a container c1,
> > +        specific apparmor policy lines can be added via the 'raw.apparmor'
> > +        configuration key.
> > +      </para>
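> > +
> > +      <para>
> > +        As an illustrative sketch, a policy line allowing NFS mounts inside
> > +        c1 could be appended as follows; the exact line to add depends on
> > +        what you need to permit:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config set c1 raw.apparmor 'mount fstype=nfs,'
> > +</command>
> > +</screen>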
> > +
> > +      </sect2>
> > +
> > +      <sect2 id="lxd-seccomp"> <title> Seccomp </title>
> > +
> > +      <para>
> > +        All containers are confined by a default seccomp policy.  This policy
> > +        prevents some dangerous actions such as forced umounts, kernel module
> > +        loading and unloading, kexec, and the open_by_handle_at system call.
> > +        The seccomp configuration cannot be modified; however, a completely
> > +        different seccomp policy (or none at all) can be requested using raw.lxc
> > +        (see below).
> > +      </para>
> > +
> > +      </sect2>
> > +      <sect2> <title> Raw LXC configuration </title>
> > +
> > +      <para>
> > +        LXD configures containers for the best balance of host safety and
> > +        container usability.  Whenever possible it is highly recommended to
> > +        use the defaults, and to use the LXD configuration keys to request the
> > +        needed changes.  Sometimes, however, it may be necessary to talk
> > +        to the underlying lxc driver itself.  This can be done by specifying
> > +        LXC configuration items in the 'raw.lxc' LXD configuration key.  These
> > +        must be valid items as documented in
> > +        <ulink url="http://manpages.ubuntu.com/manpages/xenial/en/man5/lxc.container.conf.5.html">
> > +        the lxc.container.conf(5) manual page</ulink>.
> > +      </para>
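> > +
> > +      <para>
> > +        For example, to run a container c1 without apparmor confinement
> > +        (generally not recommended), the underlying lxc.aa_profile key could
> > +        be set through raw.lxc:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc config set c1 raw.lxc 'lxc.aa_profile=unconfined'
> > +</command>
> > +</screen>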
> > +
> > +      </sect2>
> > +<!-- TODO
> > +[//]: # (## Networking)
> > +
> > +[//]: # (Todo Once the ipv6 changes are implemented.)
> > +-->
> > +
> > +      <sect2> <title> Images and containers </title>
> > +
> > +      <para>
> > +      LXD is image based.  When you create your first container, you will
> > +      generally do so using an existing image.  LXD comes pre-configured
> > +      with three default image remotes:
> > +      </para>
> > +
> > +      <itemizedlist>
> > +        <listitem>
> > +          ubuntu: This is a <ulink url="https://launchpad.net/simplestreams">simplestreams-based</ulink>
> > +          remote serving released Ubuntu cloud images.
> > +        </listitem>
> > +
> > +        <listitem>
> > +          ubuntu-daily: This is another simplestreams-based remote which serves
> > +          'daily' Ubuntu cloud images.  These provide more up-to-date but
> > +          potentially less stable images.
> > +        </listitem>
> > +
> > +        <listitem>
> > +          images: This is a remote publishing best-effort container images for
> > +          many distributions, created using community-provided build scripts.
> > +        </listitem>
> > +      </itemizedlist>
> > +
> > +      <para>
> > +        To view the images available on one of these servers, you can use:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc image list ubuntu:
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        Most of the images are known by several aliases for easier reference.  To
> > +        see the full list of aliases, you can use
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc image alias list images:
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        Any alias or image fingerprint can be used to specify how to create the new
> > +        container.  For instance, to create an amd64 Ubuntu 14.04 container, some
> > +        options are:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc launch ubuntu:14.04 trusty1
> > +lxc launch ubuntu:trusty trusty1
> > +lxc launch ubuntu:trusty/amd64 trusty1
> > +lxc launch ubuntu:lts trusty1
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        The 'lts' alias always refers to the latest released LTS image.
> > +      </para>
> > +
> > +        <sect3> <title> Snapshots </title>
> > +
> > +        <para>
> > +          Containers can be renamed and live-migrated using the 'lxc move' command:
> > +        </para>
> > +
> > +<screen>
> > +<command>
> > +lxc move c1 final-beta
> > +</command>
> > +</screen>
> > +
> > +        <para>
> > +          They can also be snapshotted:
> > +        </para>
> > +
> > +<screen>
> > +<command>
> > +lxc snapshot c1 YYYY-MM-DD
> > +</command>
> > +</screen>
> > +
> > +        <para>
> > +          Later changes to c1 can then be reverted by restoring the snapshot:
> > +        </para>
> > +
> > +<screen>
> > +<command>
> > +lxc restore c1 YYYY-MM-DD
> > +</command>
> > +</screen>
> > +
> > +        <para>
> > +          New containers can also be created by copying a container or snapshot:
> > +        </para>
> > +
> > +<screen>
> > +<command>
> > +lxc copy c1/YYYY-MM-DD testcontainer
> > +</command>
> > +</screen>
> > +
> > +        </sect3>
> > +
> > +        <sect3> <title> Publishing images </title>
> > +
> > +        <para>
> > +          When a container or container snapshot is ready for consumption by others,
> > +          it can be published as a new image using:
> > +        </para>
> > +
> > +<screen>
> > +<command>
> > +lxc publish c1/YYYY-MM-DD --alias foo-2.0
> > +</command>
> > +</screen>
> > +
> > +        <para>
> > +          The published image will be private by default, meaning that LXD will not
> > +          allow clients without a trusted certificate to see it.  If the image
> > +          is safe for public viewing (i.e. contains no private information), then
> > +          the 'public' flag can be set, either at publish time using
> > +        </para>
> > +
> > +<screen>
> > +<command>
> > +lxc publish c1/YYYY-MM-DD --alias foo-2.0 public=true
> > +</command>
> > +</screen>
> > +
> > +        <para>
> > +          or after the fact using
> > +        </para>
> > +
> > +<screen>
> > +<command>
> > +lxc image edit foo-2.0
> > +</command>
> > +</screen>
> > +
> > +        <para>
> > +          and changing the value of the public field.
> > +        </para>
> > +
> > +      </sect3>
> > +
> > +      <sect3> <title> Image export and import </title>
> > +
> > +        <para>
> > +          Images can be exported as, and imported from, tarballs:
> > +        </para>
> > +
> > +<screen>
> > +<command>
> > +lxc image export foo-2.0 foo-2.0.tar.gz
> > +lxc image import foo-2.0.tar.gz --alias foo-2.0 --public
> > +</command>
> > +</screen>
> > +
> > +        </sect3>
> > +      </sect2>
> > +
> > +      <sect2 id="lxd-troubleshooting"> <title> Troubleshooting </title>
> > +
> > +      <para>
> > +        To view debug information about LXD itself, on a systemd-based host use:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +journalctl -u lxd
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        On an Upstart-based system, you can find the log in
> > +        <filename>/var/log/upstart/lxd.log</filename>.  To make LXD provide
> > +        much more information about requests it is serving, add '--debug' to
> > +        LXD's arguments.  In systemd, append '--debug' to the 'ExecStart=' line
> > +        in <filename>/lib/systemd/system/lxd.service</filename>.  In Upstart,
> > +        append it to the <command>exec /usr/bin/lxd</command> line in
> > +        <filename>/etc/init/lxd.conf</filename>.
> > +      </para>
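> > +
> > +      <para>
> > +        For instance, on a systemd host the edited line might look roughly
> > +        like the following; the exact set of default arguments varies by
> > +        release, so keep whatever is already there and simply append the flag:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +ExecStart=/usr/bin/lxd --group lxd --debug
> > +</command>
> > +</screen>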
> > +
> > +      <para>
> > +        Container logfiles for container c1 may be seen using:
> > +      </para>
> > +
> > +<screen>
> > +<command>
> > +lxc info c1 --show-log
> > +</command>
> > +</screen>
> > +
> > +      <para>
> > +        The configuration file which was used may be found under
> > +        <filename>/var/log/lxd/c1/lxc.conf</filename>, while apparmor profiles
> > +        can be found in <filename>/var/lib/lxd/security/apparmor/profiles/c1</filename>
> > +        and seccomp profiles in <filename>/var/lib/lxd/security/seccomp/c1</filename>.
> > +      </para>
> > +    </sect2>
> > +
> > +  </sect1>
> > +
> >    <sect1 id="lxc" status="review">
> >      <title>LXC</title>
> >  
> > 
> > 




