[Bug 954692] Re: cannot detach volume from terminated instance
Adam Gandelman
954692 at bugs.launchpad.net
Thu Mar 15 17:40:04 UTC 2012
Bogged down with other things ATM, but spent some time looking at this
last night.
nova.compute.manager._shutdown_instance() raises an exception if the
instance is already in the SHUTOFF power state. It looks like this
conditional has existed forever:
    if current_power_state == power_state.SHUTOFF:
        self.db.instance_destroy(context, instance_id)
        _msg = _('trying to destroy already destroyed instance: %s')
        raise exception.Invalid(_msg % instance_uuid)
It currently does nothing to clean up BDMs or to inform nova-volume that
the volume is free. We can certainly do that from the compute manager
when the condition is met, so the volumes are freed up to be used
elsewhere. The problem is that the iSCSI sessions are never cleaned up
on the compute host. Reattaching the volume to another instance on the
same compute node works okay since
https://review.openstack.org/#change,4611, but leaving dangling iSCSI
sessions around seems dirty.
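For illustration, here's a rough sketch of what that compute-manager
cleanup might look like. The helper names (_get_instance_volume_bdms(),
volume_api.get()/detach(), block_device_mapping_destroy()) are my
assumptions about what the manager exposes, not a tested patch:

    if current_power_state == power_state.SHUTOFF:
        # Hypothetical sketch: mark the volumes free before raising,
        # so nova-volume can hand them out again.
        for bdm in self._get_instance_volume_bdms(context, instance_id):
            volume = self.volume_api.get(context, bdm['volume_id'])
            # Tells nova-volume the volume is detached, but does NOT
            # tear down the iSCSI session on this compute host, which
            # is exactly the dangling-session problem described above.
            self.volume_api.detach(context, volume)
            self.db.block_device_mapping_destroy(context, bdm['id'])
        self.db.instance_destroy(context, instance_id)
        _msg = _('trying to destroy already destroyed instance: %s')
        raise exception.Invalid(_msg % instance_uuid)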
Looking at the libvirt compute driver, it appears that
_shutdown_instance()'s later call to driver.destroy() handles
terminating an already SHUTOFF'd instance just fine, and also properly
cleans up its iSCSI connections, among other things. It would appear
that, in the case of libvirt, the condition raised above is obsolete.
But I'm unsure whether this is true for the other compute drivers and
hesitant to propose dropping it without confirmation.
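For what it's worth, the change I'd be inclined to propose, if the
other drivers can confirm the same behavior, is simply dropping the
raise and falling through to the driver. A minimal sketch, with the
driver.destroy() argument list approximated from the current manager
code:

    if current_power_state == power_state.SHUTOFF:
        # Hypothetical: warn instead of raising, and let the driver
        # handle the already-SHUTOFF instance. libvirt's destroy()
        # tears down the domain and its iSCSI sessions; other drivers
        # would need to guarantee the same.
        LOG.warn(_('Destroying instance that is already shut off: %s'),
                 instance_uuid, context=context)
    self.driver.destroy(instance, network_info, block_device_info)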
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/954692
Title:
cannot detach volume from terminated instance