[Bug 1636605] Re: juju controller bootstrapped to openstack-novalxd cloud leaves servers in error state

Heather Lanigan 1636605 at bugs.launchpad.net
Wed Oct 26 14:46:42 UTC 2016


Not sure if this is related... I destroyed the openstack deployment
listed above so I could create a new one and move forward with my work.

I followed the same steps to deploy as above.  However, I am now unable
to create instances in the OpenStack cloud.  On nova-cloud-controller/0,
LXC containers are never created.

2016-10-26 14:39:30.610 9898 WARNING nova.scheduler.utils [req-e88125f3-85cc-43b3-9002-dd992ebdc064 4c3d997d1fd74737902bbe63b789d5fc aa73e4bdafed4dada414b1f91484ce51 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):

  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 150, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 104, in select_destinations
    dests = self.driver.select_destinations(ctxt, spec_obj)

  File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
    raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts
available.

2016-10-26 14:39:30.612 9898 WARNING nova.scheduler.utils [req-e88125f3-85cc-43b3-9002-dd992ebdc064 4c3d997d1fd74737902bbe63b789d5fc aa73e4bdafed4dada414b1f91484ce51 - - -] [instance: a6be37be-5907-4f35-a35c-e5bb7508327a] Setting instance to ERROR state.
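A rough sketch of how one might check whether the NoValidHost error above comes from exhausted capacity or from a downed compute service (the nova CLI subcommands and the juju ssh invocation are assumptions based on a Mitaka-era deployment, not verified against this environment):

```shell
# On the client machine, with OpenStack credentials sourced.

# NoValidHost often means vCPUs/RAM/disk are exhausted or every
# host is disabled; hypervisor-stats shows the scheduler's view.
nova hypervisor-stats

# Confirm the nova-compute service is up and enabled.
nova service-list

# Inspect the compute host's log for the underlying failure;
# the unit name matches the juju status output in this report.
juju ssh nova-compute/0 "tail -n 50 /var/log/nova/nova-compute.log"
```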

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to cinder in Juju Charms Collection.
Matching subscriptions: charm-bugs
https://bugs.launchpad.net/bugs/1636605

Title:
  juju controller bootstrapped to openstack-novalxd cloud leaves servers
  in error state

Status in cinder package in Juju Charms Collection:
  New

Bug description:
On a Xenial box, I used conjure-up to deploy the openstack-novalxd
  bundle.  I then bootstrapped a juju controller against the Mitaka
  OpenStack cloud that was created.  However, after removing the models
  and controllers, the OpenStack instances were not deleted and were
  left in an error state.

  ubuntu1-admin at ubuntu1:~$ nova list
  +--------------------------------------+--------------------------+--------+------------+-------------+------------------------------------+
  | ID                                   | Name                     | Status | Task State | Power State | Networks                           |
  +--------------------------------------+--------------------------+--------+------------+-------------+------------------------------------+
  | 7a5693bd-49e8-4030-abc4-483c264cb270 | juju-3b5290-controller-0 | ERROR  | -          | Running     |                                    |
  ..... 
  +--------------------------------------+--------------------------+--------+------------+-------------+------------------------------------+
  ubuntu1-admin at ubuntu1:~$ nova show 7a5693bd-49e8-4030-abc4-483c264cb270 | grep fault
  | fault                                | {"message": "Failed to communicate with LXD API instance-00000009: Error 400 - Profile is currently in use.", "code": 500, "created": "2016-10-25T17:54:39Z"} |

  When the count of instances reached 10, no more were created.

  On the nova-compute/0 unit, /var/log/nova/nova-compute.log had the following: 
  2016-10-25 17:54:37.804 3311 ERROR nova_lxd.nova.virt.lxd.operations [req-b92b1fd0-d890-42de-9f23-55284b81dc08 102e8e51492d4411ab4665514147affb 704f9c9d33624095964b782a40023eaf - - -] [instance: 7a5693bd-49e8-4030-abc4-483c264cb270] Failed to remove container for instance-00000009: Failed to communicate with LXD API instance-00000009: Error 400 - Profile is currently in use.
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [req-b92b1fd0-d890-42de-9f23-55284b81dc08 102e8e51492d4411ab4665514147affb 704f9c9d33624095964b782a40023eaf - - -] [instance: 7a5693bd-49e8-4030-abc4-483c264cb270] Setting instance vm_state to ERROR
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270] Traceback (most recent call last):
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2510, in do_terminate_instance
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     self._delete_instance(context, instance, bdms, quotas)
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/nova/hooks.py", line 154, in inner
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     rv = f(*args, **kwargs)
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2473, in _delete_instance
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     quotas.rollback()
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     self.force_reraise()
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     six.reraise(self.type_, self.value, self.tb)
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2437, in _delete_instance
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     self._shutdown_instance(context, instance, bdms)
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2346, in _shutdown_instance
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     requested_networks)
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     self.force_reraise()
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     six.reraise(self.type_, self.value, self.tb)
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2333, in _shutdown_instance
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     block_device_info)
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/nova_lxd/nova/virt/lxd/driver.py", line 120, in destroy
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     migrate_data)
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/nova_lxd/nova/virt/lxd/operations.py", line 361, in destroy
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     instance=instance)
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     self.force_reraise()
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     six.reraise(self.type_, self.value, self.tb)
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/nova_lxd/nova/virt/lxd/operations.py", line 352, in destroy
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     self.session.profile_delete(instance)
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]   File "/usr/lib/python2.7/dist-packages/nova_lxd/nova/virt/lxd/session.py", line 801, in profile_delete
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]     raise exception.NovaException(msg)
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270] NovaException: Failed to communicate with LXD API instance-00000009: Error 400 - Profile is currently in use.
  2016-10-25 17:54:38.694 3311 ERROR nova.compute.manager [instance: 7a5693bd-49e8-4030-abc4-483c264cb270]

Running 'lxc list' on the nova-compute/0 unit showed that the
  containers were not removed.  I was able to delete the related
  container and then delete the nova instance.  However, I am unable to
  create more instances with juju.
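  The manual cleanup described above can be sketched roughly as follows (the container/profile name and instance ID are taken from the log excerpts in this report; exact names will differ, and the lxc/nova invocations are assumed, not verified against this deployment):

```shell
# On the nova-compute/0 unit: find and remove the stale container
# that nova-lxd failed to clean up.
lxc list
lxc delete --force instance-00000009    # name from the log above

# The "Profile is currently in use" error clears once the container
# is gone, so the matching profile can then be removed.
lxc profile delete instance-00000009

# Back on the client: the nova instance can now be deleted.
nova delete 7a5693bd-49e8-4030-abc4-483c264cb270
```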

  The last line of /var/log/nova/nova-compute.log is:
  2016-10-25 18:08:48.565 3311 WARNING oslo_messaging._drivers.amqpdriver [-] Number of call queues is greater than warning threshold: 10. There could be a leak. Increasing threshold to: 20

  I've seen this behavior on 2 systems so far.

  Here are the versions of the OpenStack cloud:
  ubuntu1-admin at ubuntu1:~/work/src/gopkg.in/goose.v1/neutron$ juju status
  Model       Controller  Cloud/Region         Version
  conjure-up  spider      localhost/localhost  2.0.0

  App                    Version      Status   Scale  Charm                  Store       Rev  OS      Notes
  ceph-mon               10.2.2       active       3  ceph-mon               jujucharms    6  ubuntu
  ceph-osd               10.2.2       active       3  ceph-osd               jujucharms  239  ubuntu
  ceph-radosgw           10.2.2       active       1  ceph-radosgw           jujucharms  245  ubuntu
  glance                 12.0.0       active       1  glance                 jujucharms  253  ubuntu
  keystone               9.2.0        active       1  keystone               jujucharms  258  ubuntu
  lxd                    2.0.5        active       1  lxd                    jujucharms    5  ubuntu
  mysql                  5.6.21-25.8  active       1  percona-cluster        jujucharms  246  ubuntu
  neutron-api            8.2.0        active       1  neutron-api            jujucharms  246  ubuntu
  neutron-gateway        8.2.0        active       1  neutron-gateway        jujucharms  232  ubuntu
  neutron-openvswitch    8.2.0        active       1  neutron-openvswitch    jujucharms  238  ubuntu
  nova-cloud-controller  13.1.1       active       1  nova-cloud-controller  jujucharms  292  ubuntu
  nova-compute           13.1.1       active       1  nova-compute           jujucharms  259  ubuntu
  ntp                                 waiting      0  ntp                    jujucharms   16  ubuntu
  openstack-dashboard    9.1.0        active       1  openstack-dashboard    jujucharms  243  ubuntu  exposed
  rabbitmq-server        3.5.7        active       1  rabbitmq-server        jujucharms   54  ubuntu

I can start an OpenStack instance with a nova command, but I can no
  longer juju deploy, add-unit, add-model, bootstrap, etc.  Deleting the
  nova-created instance puts it in an error state like the others.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charms/+source/cinder/+bug/1636605/+subscriptions



More information about the Ubuntu-openstack-bugs mailing list