[Bug 1994002] Re: [SRU] migration was active, but no RAM info was set
Brett Milford
1994002@bugs.launchpad.net
Thu Mar 30 04:47:34 UTC 2023
Verification done on jammy-proposed.
Followed the instructions as per:
https://bugs.launchpad.net/cloud-archive/+bug/1994002/comments/26
The only exception was that I had to install the debug symbols package for
-proposed, as per https://wiki.ubuntu.com/Debug%20Symbol%20Packages.
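For reference, the transcripts below assume two QEMU instances on the same
host: a source with its HMP monitor on TCP 3333, and a destination listening
for the migration stream on 4444. The ports match the sessions below; the
remaining flags are illustrative, not the exact comment-26 invocation:
~~~
# source guest, monitor reachable via: nc 127.0.0.1 3333
qemu-system-x86_64 -m 1024 -display none \
    -monitor tcp:127.0.0.1:3333,server,nowait &

# destination, waiting for the incoming migration stream on 4444
qemu-system-x86_64 -m 1024 -display none \
    -monitor tcp:127.0.0.1:5555,server,nowait \
    -incoming tcp:127.0.0.1:4444 &
~~~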
jammy-updates -- Fail:
ubuntu@qemu-j:~$ nc 127.0.0.1 3333
QEMU 6.2.0 monitor - type 'help' for more information
(qemu) migrate -d tcp:127.0.0.1:4444
migrate -d tcp:127.0.0.1:4444
(qemu)
(qemu) info migrate
info migrate
globals:
store-global-state: on
only-migratable: off
send-configuration: on
send-section-footer: on
decompress-error-check: on
clear-bitmap-shift: 18
Migration status: active
total time: 0 ms
(qemu)
(qemu) quit
quit
jammy-proposed -- Pass:
ubuntu@qemu-j2:~$ nc 127.0.0.1 3333
QEMU 6.2.0 monitor - type 'help' for more information
(qemu) migrate -d tcp:127.0.0.1:4444
migrate -d tcp:127.0.0.1:4444
(qemu) info migrate
info migrate
globals:
store-global-state: on
only-migratable: off
send-configuration: on
send-section-footer: on
decompress-error-check: on
clear-bitmap-shift: 18
Migration status: setup
total time: 0 ms
(qemu)
Full gdb session output: https://pastebin.ubuntu.com/p/mkhQzCXKdk/
** Tags removed: verification-needed-jammy
** Tags added: verification-done-jammy
--
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to Ubuntu Cloud Archive.
https://bugs.launchpad.net/bugs/1994002
Title:
[SRU] migration was active, but no RAM info was set
Status in Ubuntu Cloud Archive:
New
Status in Ubuntu Cloud Archive ussuri series:
New
Status in qemu package in Ubuntu:
Fix Released
Status in qemu source package in Bionic:
Fix Committed
Status in qemu source package in Focal:
Fix Committed
Status in qemu source package in Jammy:
Fix Committed
Status in qemu source package in Kinetic:
Fix Released
Bug description:
[Impact]
* While live-migrating many instances concurrently, libvirt sometimes
returns `internal error: migration was active, but no RAM info was
set`.
* Effects of this bug are mostly observed in large-scale clusters
with a lot of live-migration activity.
* Has second-order effects for consumers of the migration monitor,
such as libvirt and OpenStack (see the QMP exchange sketched below).
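For illustration, when the race hits, the reply libvirt gets from QMP's
query-migrate looks roughly like this (field set trimmed; the missing "ram"
member is what triggers the libvirt error quoted above):
~~~
-> {"execute": "query-migrate"}
<- {"return": {"status": "active", "total-time": 0}}
~~~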
[Test Case]
A synthetic reproducer using GDB is in comment #21; a rough sketch of its
shape follows the steps below.
Steps to Reproduce:
1. Live-evacuate a compute host.
2. Live migration of one or more instances fails with the above error.
N.B. Due to the nature of this bug it is difficult to reproduce consistently.
In an environment where it has been observed, it is estimated to occur in approximately 1 in 1000 migrations.
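As I understand the race, fill_source_migration_info() in QEMU's migration
code reads the migration state more than once, so the reported status can
flip from setup to active after the point where the RAM counters would have
been populated. The GDB session widens that window; comment #21 has the
authoritative steps, and the breakpoint and stepping details here are only
illustrative:
~~~
$ sudo gdb -p $(pidof qemu-system-x86_64)
(gdb) break fill_source_migration_info
(gdb) continue
# On the monitor: migrate -d tcp:127.0.0.1:4444, then info migrate.
# The breakpoint fires in the monitor thread while the state is still
# "setup". Single-stepping lets the migration thread run and flip the
# state to "active" before the status field is filled in:
(gdb) next
(gdb) continue
# Unfixed QEMU now reports "Migration status: active" with no RAM counters.
~~~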
[Where problems could occur]
* In the event of a regression, the migration monitor may report an inconsistent state.
[Original Bug Description]
While live-migrating many instances concurrently, libvirt sometimes returns `internal error: migration was active, but no RAM info was set`:
~~~
2022-03-30 06:08:37.197 7 WARNING nova.virt.libvirt.driver [req-5c3296cf-88ee-4af6-ae6a-ddba99935e23 - - - - -] [instance: af339c99-1182-4489-b15c-21e52f50f724] Error monitoring migration: internal error: migration was active, but no RAM info was set: libvirt.libvirtError: internal error: migration was active, but no RAM info was set
~~~
From upstream bug: https://bugzilla.redhat.com/show_bug.cgi?id=2074205
[Other Information]
Related bug: https://bugs.launchpad.net/nova/+bug/1982284
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1994002/+subscriptions