[Bug 1825356] Re: libvirt silently fails to attach a cinder ceph volume
Christian Ehrhardt
1825356 at bugs.launchpad.net
Thu Apr 25 10:35:02 UTC 2019
Interesting that you have no cinder-ceph volume at all from libvirt's
POV.
That means we no longer need to look at or inside the guest (e.g. with lsblk).
The breaking point seems to be between OpenStack and libvirt:
OpenStack thinks it has told libvirt to attach the volume
| volumes_attached | id='541bf46c-8ccf-4158-8565-6204a1d8350f' |
but libvirt does not know about it, as your `virsh domblklist` is empty.
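For the record, the cross-check on the compute host would be something
like this (instance name taken from the report below); a correctly
attached cinder-ceph volume should show up in the domain XML as a
<disk type='network'> element with an rbd source:

$ virsh domblklist instance-00000237
$ virsh dumpxml instance-00000237 | grep -A 6 "disk type"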
@James/Corey - I'd like to reiterate my question from comment #2: have
you seen such a thing before?
Furthermore, is there a place in OpenStack that would let us trace what
OpenStack told libvirt to do to attach the device (and what the answer
was)?
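One way to capture that exchange (an assumption on my part - nothing in
this report suggests it is already enabled) is to turn on libvirtd debug
logging, which records every API call the nova-compute libvirt driver
makes, including the device attach:

# in /etc/libvirt/libvirtd.conf
log_filters="1:libvirt 1:qemu"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"

$ sudo systemctl restart libvirtd

On the nova side, setting `debug = True` in the [DEFAULT] section of
/etc/nova/nova.conf should make nova-compute log the disk XML it hands
to libvirt.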
** Also affects: cloud-archive
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1825356
Title:
libvirt silently fails to attach a cinder ceph volume
Status in Ubuntu Cloud Archive:
New
Status in ceph package in Ubuntu:
New
Status in libvirt package in Ubuntu:
New
Bug description:
Hi,
On a new OpenStack environment based on Ubuntu Bionic + OpenStack
Queens, I created a new volume that looks like this:
$ openstack volume show test-volume
+--------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+--------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| attachments | [{u'server_id': u'1ee3f5f3-bf0c-4ffb-8e25-68f4bb2cbfb7', u'attachment_id': u'027fe5f2-9189-4ea7-b064-7bcd188b19dc', u'attached_at': |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2019-04-15T08:30:09.000000 |
| description | None |
| encrypted | False |
| id | 541bf46c-8ccf-4158-8565-6204a1d8350f |
| migration_status | None |
| multiattach | False |
| name | test-volume |
| os-vol-host-attr:host | juju-4301a5-1-lxd-4@cinder-ceph#cinder-ceph |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 28eb75e9561645cb8ec3ca747cb751f9 |
| properties | attached_mode='rw' |
| replication_status | None |
| size | 10 |
| snapshot_id | None |
| source_volid | None |
| status | in-use |
| type | None |
| updated_at | 2019-04-17T14:44:54.000000 |
| user_id | 4d16d9a873144ccaa902cb16083a06dd |
+--------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
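The attach step itself is not shown above; it would have been something
along the lines of:

$ openstack server add volume volume-test test-volume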
After attaching it to a server I see:
$ openstack server show volume-test
+-------------------------------------+----------------------------------------------------------+
| Field | Value |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | myhypervisor001 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | myhypervisor001.mydomain |
| OS-EXT-SRV-ATTR:instance_name | instance-00000237 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2019-04-17T14:39:03.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | ext-net=10.XX.YYY.74 |
| config_drive | |
| created | 2019-04-17T14:38:55Z |
| flavor | m1.medium (5) |
| hostId | 068b0a66512dc04d718b0d6859ca7ddd33c4393d65f0825f4fc5478f |
| id | 1ee3f5f3-bf0c-4ffb-8e25-68f4bb2cbfb7 |
| image | xenial-kvm (3a72625c-cc4c-4a8d-8106-67510cdb7050) |
| key_name | keypair |
| name | volume-test |
| progress | 0 |
| project_id | 28eb75e9561645cb8ec3ca747cb751f9 |
| properties | |
| security_groups | name='ssh' |
| status | ACTIVE |
| updated | 2019-04-17T14:51:13Z |
| user_id | 4d16d9a873144ccaa902cb16083a06dd |
| volumes_attached | id='541bf46c-8ccf-4158-8565-6204a1d8350f' |
+-------------------------------------+----------------------------------------------------------+
Although `openstack volume list` says it's in use by this VM as vdb,
lsblk on the VM looks like:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 40G 0 disk
└─vda1 253:1 0 40G 0 part /
The pieces of the libvirt log that reference the ID of this volume will
be attached soon.
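(In the meantime, grepping the volume ID out of the default Ubuntu log
locations should find those pieces - the libvirtd log path is an
assumption and only exists if file logging is enabled there:

# grep 541bf46c /var/log/nova/nova-compute.log
# grep 541bf46c /var/log/libvirt/libvirtd.log
)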
Thanks!
José.
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1825356/+subscriptions