[Bug 1336568] Re: LVMiSCSI driver can't issue direct I/O through tgtd to iscsi volume
Billy Olsen
billy.olsen at canonical.com
Mon Sep 14 22:43:39 UTC 2015
Was able to verify the fix for this bug today. Installed cinder from the
trusty-proposed pocket and ran the following tests to confirm:
# Test one, ensure the default behavior remains write-cache on
1. create volume
2. attach iscsi volume to instance
3. Verify the generated xml in /var/lib/cinder/volumes/<volume-id> specifies write-cache on (a quick check is sketched below).
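A quick way to spot-check step 3 (just a sketch, not part of the verification output; the write-cache line is the one referenced above):
$ sudo grep write-cache /var/lib/cinder/volumes/<volume-id>
    write-cache on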
# Test two, set iscsi_write_cache to off and restart cinder-volume
1. Set iscsi_write_cache = off in /etc/cinder/cinder.conf
2. Create lvm volume
3. Attach via iscsi to instance
4. Verify the generated xml in /var/lib/cinder/volumes/<volume-id> specifies write-cache off (see the configuration sketch below).
** Tags removed: verification-needed
** Tags added: verification-done
--
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to cinder in Ubuntu.
https://bugs.launchpad.net/bugs/1336568
Title:
LVMiSCSI driver can't issue direct I/O through tgtd to iscsi volume
Status in Cinder:
Fix Released
Status in cinder package in Ubuntu:
Confirmed
Status in cinder source package in Trusty:
Fix Committed
Bug description:
I found a problem where the LVMiSCSI driver can't issue direct I/O through
tgtd to an iSCSI volume.
In the current implementation, qemu-kvm opens the device (storage volume) with
cache='none' on the nova side; however, tgtd opens the device without
"--bsoflags direct" on the cinder side.
Therefore, I/O from guest instances is cached on the control node even though the compute node issues O_DIRECT I/O to the iSCSI volume.
As a result, if the control node crashes, the cached data will be lost, causing data loss for the guest instance.
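For illustration, a rough sketch of a tgt target definition that requests direct I/O on the backing store (bsoflags is the tgt-side knob in question; the IQN and device path are the ones from the confirmation below):
<target iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5>
    backing-store /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
    bsoflags direct
</target>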
I will propose a fix for this issue.
Here are the test environment and confirmation results.
(1) Control node
- Control node has nova, cinder(c-sch, c-api, c-vol), glance, horizon
services.
- Use LVMiSCSI driver for cinder backend.
(2) Compute node has n-cpu and n-net services.
[Confirmation at compute node]
On the compute node, qemu opens the device file with cache='none'. This means the instance can issue direct I/O from the guest to the device.
[root at compute ~]# cat /etc/libvirt/qemu/instance-0000000b.xml
<domain type='kvm'>
<name>instance-0000000b</name>
<uuid>9e1eb5cc-4c40-4023-bca5-a7d1720c6f51</uuid>
....
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
######### Open the device without cache.
<source dev='/dev/disk/by-path/ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1'/>
<target dev='vdc' bus='virtio'/>
<serial>fdd23217-6e95-4aee-a586-6ef174567ba5</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
....
</domain>
Check the file descriptor to confirm whether the device is opened with O_DIRECT
at the compute node.
=> qemu Process ID is "24836"
[root at compute ~]# ps uax | grep qemu
root 11421 0.0 0.0 112672 912 pts/6 S+ 17:13 0:00 grep --color=auto qemu
qemu 24836 13.8 16.0 4638484 1312668 ? Sl Jun29 462:12 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name instance-0000000b .....
=> Device file of iscsi cinder volume is "/dev/sde".
[root at compute ~]# ls -la /dev/disk/by-path/
.....
-rw-r--r-- 1 root root 349525333 Jun 27 10:01 ip-10.16.42.67
lrwxrwxrwx 1 root root 9 Jun 30 00:16 ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1 -> ../../sde
=> "fd18" is infomation of /dev/sde
[root at compute ~]# ls -la /proc/24836/fd
total 0
dr-x------ 2 qemu qemu 0 Jun 29 09:40 .
dr-xr-xr-x 9 qemu qemu 0 Jun 29 09:31 ..
.....
lrwx------ 1 qemu qemu 64 Jun 29 09:40 18 -> /dev/sde
=> The flags is "02140002". O_DIRECT flag is "0x40000". This flag is raised at compute node side.
[root at compute ~]# cat /proc/24836/fdinfo/18
pos: 10737418240
flags: 02140002
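The O_DIRECT bit can be checked directly against the octal flags value from fdinfo; a non-zero result means the bit is set (a sketch of the arithmetic only, 040000 being O_DIRECT on x86):
$ echo $(( 02140002 & 040000 ))
16384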
[Confirmation at control node]
Confirm iscsi target status and exported disk.
Backing store path is /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
[mtanino at control ~]$ sudo tgt-admin -s
.....
Target 3: iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5
System information:
Driver: iscsi
...
LUN: 1
...
Backing store path: /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
Backing store flags:
Account information:
ACL information:
ALL
=>Condirm device mapper file of the backing store
[mtanino at control ~]$ ls -la /dev/disk/by-id/ | grep stack
lrwxrwxrwx 1 root root 10 Jun 30 00:16 dm-name-stack--volumes-volume--fdd23217--6e95--4aee--a586--6ef174567ba5 -> ../../dm-0
=> tgtd Process ID is "31010"
[mtanino at control ~]$ ps aux | grep tgtd
root 31010 2.0 0.0 476584 900 ? Ssl Jun30 52:27 /usr/sbin/tgtd -f
=> "fd11" is infomation of /dev/dm-0
[mtanino at control ~]$ sudo ls -la /proc/31010/fd
...
lrwx------ 1 root root 64 Jul 1 16:11 11 -> /dev/dm-0
lrwx------ 1 root root 64 Jul 1 16:11 12 -> /dev/sdb2
...
=> The flags is "0100002". O_DIRECT flag is "0x40000". This flag is not raised at control node side.
[mtanino at control ~]$ sudo cat /proc/31010/fdinfo/11
pos: 0
flags: 0100002
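The same arithmetic on the control node value shows the O_DIRECT bit is absent (again, just a sketch):
$ echo $(( 0100002 & 040000 ))
0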
Regards,
Mitsuhiro Tanino
========================================================================
[Impact]
* Data loss may occur without the ability to use the write-through caching
  (write-cache off) option instead of the write-back (write-cache on)
  option for iSCSI targets.
[Test Case]
* Configure Cinder to use LVMiSCSIDriver
* Create a cinder volume (cinder create --display-name foo 1)
* Attach volume to nova instance (nova volume-attach my-instance <vol-uuid>)
* Observe the write-cache policy specified for the cinder volume (found in):
- /var/lib/cinder/volumes/volume-<uuid>
* Observe the information above (as detailed by Mitsuhiro)
[Regression Potential]
* Low risk of regression, as the feature is enabled through a
  configuration option whose default value preserves the original behavior.
To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1336568/+subscriptions