[Bug 1793137] Re: [SRU] Fix for KeyError: 'storage.zfs_pool_name' only partially successful -- needs changes
David Ames
david.ames at canonical.com
Tue Nov 20 00:28:11 UTC 2018
** Changed in: charm-lxd
Milestone: None => 19.04
--
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to Ubuntu Cloud Archive.
https://bugs.launchpad.net/bugs/1793137
Title:
[SRU] Fix for KeyError: 'storage.zfs_pool_name' only partially
successful -- needs changes
Status in OpenStack LXD Charm:
Fix Committed
Status in Ubuntu Cloud Archive:
Triaged
Status in Ubuntu Cloud Archive queens series:
Triaged
Status in Ubuntu Cloud Archive rocky series:
Triaged
Status in nova-lxd:
New
Status in nova-lxd package in Ubuntu:
Triaged
Status in nova-lxd source package in Bionic:
Triaged
Status in nova-lxd source package in Cosmic:
Triaged
Bug description:
[Impact]
  The issue is that the fix was only partially successful: while it
  avoids the KeyError on 'storage.zfs_pool_name', the fallback branch
  does not get the zfs pool name but the lxd pool name; if the two
  differ, it fails.
  The LXD charm used different names (it is now being patched to use
  the same name for the lxd pool and the zfs pool), which broke
  nova-lxd on bionic.
The code in question is in nova/virt/lxd/driver.py in
get_available_resource(self, nodename) around line 1057:
      try:
          pool_name = lxd_config['config']['storage.zfs_pool_name']
      except KeyError:
          pool_name = CONF.lxd.pool
      local_disk_info = _get_zpool_info(pool_name)
i.e. storage.zfs_pool_name vs CONF.lxd.pool
When nova-lxd is properly refactored for storage pools, this issue
should be resolved.
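  A hedged sketch of the lookup order the bug implies: try the legacy
  'storage.zfs_pool_name' key first, then the storage pool's own config
  (where the zfs driver records the backing zpool, which may differ
  from the LXD pool name), and only then fall back to CONF.lxd.pool.
  The function name resolve_zfs_pool and the plain dicts standing in
  for the LXD API responses are illustrative, not the actual nova-lxd
  refactor:

  ```python
  # Hypothetical helper; lxd_config and pool_config are plain dicts
  # standing in for the daemon config and storage-pool config that
  # the real LXD API would return.
  def resolve_zfs_pool(lxd_config, pool_config, conf_pool):
      # Legacy pre-storage-pools key (the one the KeyError was on).
      name = lxd_config.get('config', {}).get('storage.zfs_pool_name')
      if name:
          return name
      # Storage-pool era: the zfs driver keeps the backing zpool
      # under 'zfs.pool_name'; it need not match the LXD pool name.
      name = pool_config.get('zfs.pool_name')
      if name:
          return name
      # Last resort: assume the LXD pool name matches the zpool name,
      # which is exactly the assumption that broke on bionic.
      return conf_pool

  # Example: LXD pool 'default' backed by a differently-named zpool.
  print(resolve_zfs_pool({'config': {}},
                         {'zfs.pool_name': 'lxd-zfs'},
                         'default'))
  # → lxd-zfs
  ```

  With such a lookup, _get_zpool_info() would receive the actual zpool
  name even when the charm configures distinct names for the two pools.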
[Test Case]
[Regression Potential]
[Discussion]
To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-lxd/+bug/1793137/+subscriptions
More information about the Ubuntu-openstack-bugs
mailing list