lxd hook failed: config-changed

Adam Stokes adam.stokes at canonical.com
Fri Oct 21 15:45:55 UTC 2016


Heather,

A bug has been filed about this issue:
https://bugs.launchpad.net/charm-lxd/+bug/1635659

I've asked that it be given top priority so the fix can be pushed to the
charm store ASAP. I'll let you know once the updated charm is available
for deployment again.

Sorry for the inconvenience.
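
For reference, the traceback below fails on an IOError while writing to
/sys/module/ext4/parameters/userns_mounts, which is read-only because the
unit is running inside a container. A minimal sketch of the kind of guard
the fix could add in lxd_utils.configure_lxd_host (hypothetical; the actual
patch in the charm may differ, and the value written is assumed):

    import os

    EXT4_USERNS_MOUNTS = '/sys/module/ext4/parameters/userns_mounts'

    def enable_ext4_userns_mounts():
        # Writing to /sys fails with EROFS (errno 30) when the charm runs
        # inside a container, where /sys is bind-mounted read-only.
        if not os.access(EXT4_USERNS_MOUNTS, os.W_OK):
            return  # read-only /sys: skip rather than crash the hook
        with open(EXT4_USERNS_MOUNTS, 'w') as userns_mounts:
            userns_mounts.write('Y')  # assumed value; enables userns mounts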

On Thu, Oct 20, 2016 at 9:57 PM Heather Lanigan <hmlanigan at gmail.com> wrote:

> I used conjure-up to deploy openstack-novalxd on a Xenial system. Before
> deploying, the operating system was updated. LXD init was set up with dir,
> not xfs. All but one of the charms has a status of "unit is ready".
>
> The lxd/0 subordinate charm has a status of: hook failed:
> "config-changed". See details below.
>
> I can boot an instance within this OpenStack deployment. However, deleting
> the instance fails. A side effect of the lxd/0 issues?
>
> Juju version 2.0.0-xenial-amd64
> conjure-up version 2.0.2
> lxd charm version 2.0.5
>
> Any ideas?
>
> Thanks in advance,
> Heather
>
> ++++++++++++++++++++++++++++++++++++++++++++++
>
> The /var/log/juju/unit-lxd-0.log on the unit reports:
>
> 2016-10-21 01:09:33 INFO config-changed Traceback (most recent call last):
> 2016-10-21 01:09:33 INFO config-changed   File "/var/lib/juju/agents/unit-lxd-0/charm/hooks/config-changed", line 140, in <module>
> 2016-10-21 01:09:33 INFO config-changed     main()
> 2016-10-21 01:09:33 INFO config-changed   File "/var/lib/juju/agents/unit-lxd-0/charm/hooks/config-changed", line 134, in main
> 2016-10-21 01:09:33 INFO config-changed     hooks.execute(sys.argv)
> 2016-10-21 01:09:33 INFO config-changed   File "/var/lib/juju/agents/unit-lxd-0/charm/hooks/charmhelpers/core/hookenv.py", line 715, in execute
> 2016-10-21 01:09:33 INFO config-changed     self._hooks[hook_name]()
> 2016-10-21 01:09:33 INFO config-changed   File "/var/lib/juju/agents/unit-lxd-0/charm/hooks/config-changed", line 78, in config_changed
> 2016-10-21 01:09:33 INFO config-changed     configure_lxd_host()
> 2016-10-21 01:09:33 INFO config-changed   File "/var/lib/juju/agents/unit-lxd-0/charm/hooks/charmhelpers/core/decorators.py", line 40, in _retry_on_exception_inner_2
> 2016-10-21 01:09:33 INFO config-changed     return f(*args, **kwargs)
> 2016-10-21 01:09:33 INFO config-changed   File "/var/lib/juju/agents/unit-lxd-0/charm/hooks/lxd_utils.py", line 429, in configure_lxd_host
> 2016-10-21 01:09:33 INFO config-changed     with open(EXT4_USERNS_MOUNTS, 'w') as userns_mounts:
> 2016-10-21 01:09:33 INFO config-changed IOError: [Errno 30] Read-only file system: '/sys/module/ext4/parameters/userns_mounts'
> 2016-10-21 01:09:33 ERROR juju.worker.uniter.operation runhook.go:107 hook "config-changed" failed: exit status 1
>
>
> root@juju-456efd-13:~# touch /sys/module/ext4/parameters/temp-file
> touch: cannot touch '/sys/module/ext4/parameters/temp-file': Read-only file system
> root@juju-456efd-13:~# df -h /sys/module/ext4/parameters/userns_mounts
> Filesystem      Size  Used Avail Use% Mounted on
> sys                0     0     0    - /dev/.lxc/sys
> root@juju-456efd-13:~# touch /home/ubuntu/temp-file
> root@juju-456efd-13:~# ls /home/ubuntu/temp-file
> /home/ubuntu/temp-file
> root@juju-456efd-13:~# df -h
> Filesystem                   Size  Used Avail Use% Mounted on
> /dev/mapper/mitaka--vg-root  165G   47G  110G  30% /
> none                         492K     0  492K   0% /dev
> udev                          16G     0   16G   0% /dev/fuse
> tmpfs                         16G     0   16G   0% /dev/shm
> tmpfs                         16G   49M   16G   1% /run
> tmpfs                        5.0M     0  5.0M   0% /run/lock
> tmpfs                         16G     0   16G   0% /sys/fs/cgroup
> tmpfs                        3.2G     0  3.2G   0% /run/user/112
> tmpfs                        3.2G     0  3.2G   0% /run/user/1000
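>
> The same read-only condition can be confirmed from Python by scanning
> /proc/mounts for the mount that covers /sys (a sketch, not part of the
> charm):
>
>     def sys_is_readonly(path='/sys'):
>         # /proc/mounts fields: device, mountpoint, fstype, options, ...
>         with open('/proc/mounts') as mounts:
>             entries = [line.split() for line in mounts]
>         # The longest mountpoint that is a prefix of `path` is in effect.
>         best = max((e for e in entries if path.startswith(e[1])),
>                    key=lambda e: len(e[1]))
>         return 'ro' in best[3].split(',')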
>
> +++++++++++++++++++++++++++++++++++++++++
>
> heather@mitaka:~$ nova boot --image d2eba22a-e1b1-4a2b-aa87-450ee9d9e492 --flavor d --nic net-name=ubuntu-net --key-name keypair-admin xenial-instance
> heather@mitaka:~/goose-work/src/gopkg.in/goose.v1$ nova list
>
> +--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
> | ID                                   | Name            | Status | Task State | Power State | Networks              |
> +--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
> | 80424b94-f24d-45ff-a330-7b67a911fbc6 | xenial-instance | ACTIVE | -          | Running     | ubuntu-net=10.101.0.8 |
> +--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
>
> heather@mitaka:~$ nova delete 80424b94-f24d-45ff-a330-7b67a911fbc6
> Request to delete server 80424b94-f24d-45ff-a330-7b67a911fbc6 has been accepted.
> heather@mitaka:~$ nova list
>
> +--------------------------------------+-----------------+--------+------------+-------------+----------+
> | ID                                   | Name            | Status | Task State | Power State | Networks |
> +--------------------------------------+-----------------+--------+------------+-------------+----------+
> | 80424b94-f24d-45ff-a330-7b67a911fbc6 | xenial-instance | ERROR  | -          | Running     |          |
> +--------------------------------------+-----------------+--------+------------+-------------+----------+
> heather@mitaka:~$ nova show 80424b94-f24d-45ff-a330-7b67a911fbc6
> | fault | {"message": "Failed to communicate with LXD API instance-00000006: Error 400 - Profile is currently in use.", "code": 500, "details": "  File \"/usr/lib/python2.7/dist-packages/nova/compute/manager.py\", line 375, in decorated_function |
> ...
>
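
On the failed delete: the 400 "Profile is currently in use" from the LXD
API suggests nova-lxd could not remove the profile associated with
instance-00000006, which is plausibly a knock-on effect of the broken
lxd/0 unit. A quick way to inspect the profile from the compute host is
sketched below, driving the standard lxc CLI from Python; the profile
name is taken from the error above and is an assumption:

    import subprocess

    def inspect_profile(name='instance-00000006'):
        # List all LXD profiles, then dump the one named after the
        # failed instance to see which containers still reference it.
        print(subprocess.check_output(['lxc', 'profile', 'list']).decode())
        print(subprocess.check_output(['lxc', 'profile', 'show', name]).decode())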