[Bug 1994002] Re: [SRU] migration was active, but no RAM info was set

Mauricio Faria de Oliveira 1994002 at bugs.launchpad.net
Thu Nov 17 12:34:18 UTC 2022


Hi Brett,

Thanks for the debdiffs!

I just reviewed them, and there are changes that should be made.

I could make those changes myself, but then it wouldn't be an
opportunity for you to learn/practice some SRU details, so I'll add notes.

*However*, if you're too busy and can't do that, do let me know.

cheers,
Mauricio

...

qemu.git

$ git describe --contains 552de79bfdd5e9e53847eb3c6d6e4cd898a4370e
v7.1.0-rc0~136^2

ubuntu archive:

$ rmadison -a source qemu
...
 qemu | 1:2.11+dfsg-1ubuntu7    | bionic          | source
 qemu | 1:2.11+dfsg-1ubuntu7.40 | bionic-security | source
 qemu | 1:2.11+dfsg-1ubuntu7.40 | bionic-updates  | source
 qemu | 1:4.2-3ubuntu6          | focal           | source
 qemu | 1:4.2-3ubuntu6.23       | focal-security  | source
 qemu | 1:4.2-3ubuntu6.23       | focal-updates   | source
 qemu | 1:6.2+dfsg-2ubuntu6     | jammy           | source
 qemu | 1:6.2+dfsg-2ubuntu6.2   | jammy-security  | source
 qemu | 1:6.2+dfsg-2ubuntu6.5   | jammy-updates   | source
 qemu | 1:7.0+dfsg-7ubuntu2     | kinetic         | source
 qemu | 1:7.0+dfsg-7ubuntu2     | lunar           | source

0) Development release

The development release (lunar) still doesn't have the patch.
Having the fix there first is required for SRUs to stable releases.

We'll need a debdiff for lunar, slightly different from kinetic's
(release name, and a greater version string for the upgrade path).

I just checked w/ Christian, and we shouldn't wait on the qemu 7.1
merge from Debian (sid), which would include the patch, since that
merge should only happen in January, in order to get qemu 7.2.
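
For example, a minimal sketch of preparing the lunar debdiff,
assuming the kinetic patch applies cleanly on the lunar source
(patch filename hypothetical):

$ pull-lp-source qemu lunar
$ cd qemu-7.0+dfsg/
# (add the quilt patch + series entry first, as in the kinetic debdiff)
$ dch -v 1:7.0+dfsg-7ubuntu3 -D lunar \
      "migration: fix 'was active, but no RAM info was set' (LP: #1994002)"
$ debuild -S -d
$ debdiff ../qemu_7.0+dfsg-7ubuntu2.dsc ../qemu_7.0+dfsg-7ubuntu3.dsc > lunar.debdiff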


1) Oldest LTS in standard support

Would Bionic benefit from this fix in the long run as well,
just before it moves out of standard support into expanded
security maintenance?

Apparently, some deployments/clouds still use Bionic on kvm
compute nodes.

If so, the backport target is qmp_query_migrate(), in the same file,
per commit 65ace0604551 ("migration: add postcopy total blocktime into query-migrate").
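
E.g., to see how far back that context goes, and what the fix
touches (output omitted):

$ git describe --contains 65ace0604551
$ git show 552de79bfdd5e9e53847eb3c6d6e4cd898a4370e -- migration/migration.c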


2) Debdiffs:

- version strings: the 'lp*' version suffix is fine for
test builds, but for official packages (see [1]) you usually
just append '.1' on stable releases, and bump the 'ubuntuN'
number on the devel release.

example:
kinetic (sru):  1:7.0+dfsg-7ubuntu2 -> 1:7.0+dfsg-7ubuntu2.1
lunar (devel):  1:7.0+dfsg-7ubuntu2 -> 1:7.0+dfsg-7ubuntu3
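
A quick way to sanity-check the upgrade path between those two:

$ dpkg --compare-versions 1:7.0+dfsg-7ubuntu2.1 lt 1:7.0+dfsg-7ubuntu3 && echo ok
ok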

- changelog: mostly good! (d/p/file.patch; LP: #number?; releases).

The LP bug number 1982284 refers to another (openstack) bug,
but the Ubuntu SRUs are coming through this bug, apparently.

Since this is the bug that the Ubuntu Archive/Cloud Archive
packages/series are tracked on, to be closed when the SRUs
land in -proposed and -updates (and UCA), we should change:
1) the LP bug number in the changelog
2) the patch file names
3) also, it's a good idea to link to the other LP bug
in the SRU template '[Other Info]' section.

(you could also just move the SRU template/packages/
series/tracks to the other LP bug, I guess. Up to you.)
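
For reference, a sketch of what the kinetic changelog entry could
look like (patch filename hypothetical; maintainer trailer omitted):

qemu (1:7.0+dfsg-7ubuntu2.1) kinetic; urgency=medium

  * d/p/ubuntu/lp-1994002-migration-no-ram-info.patch: fix
    "migration was active, but no RAM info was set" seen during
    concurrent live migrations (LP: #1994002)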

- quilt patch: add DEP3 headers [2] (Origin:/Bug-Ubuntu:)
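
e.g., something along these lines at the top of the patch
(Description line illustrative):

Description: migration: fix "was active, but no RAM info was set"
Origin: upstream, https://gitlab.com/qemu-project/qemu/-/commit/552de79bfdd5e9e53847eb3c6d6e4cd898a4370e
Bug: https://bugzilla.redhat.com/show_bug.cgi?id=2074205
Bug-Ubuntu: https://bugs.launchpad.net/bugs/1994002
Last-Update: 2022-11-17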

- quilt series: missing 'ubuntu/' dir on k/j (not on f)
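
i.e., on kinetic/jammy the series entry should look like
(filename hypothetical, matching the sketches above):

$ grep ^ubuntu/ debian/patches/series
ubuntu/lp-1994002-migration-no-ram-info.patch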

- duplications: the jammy debdiff has duplicated messages, and the
focal one has that plus duplicated changelog entries? -- for HA? x)

[1] https://wiki.ubuntu.com/SecurityTeam/UpdatePreparation#Update_the_packaging
[2] https://dep-team.pages.debian.net/deps/dep3/

** Changed in: qemu (Ubuntu)
       Status: New => Incomplete

** Description changed:

- While live-migrating many instances concurrently, libvirt sometimes return internal error: migration was active, but no RAM info was set:
- ~~~
- 2022-03-30 06:08:37.197 7 WARNING nova.virt.libvirt.driver [req-5c3296cf-88ee-4af6-ae6a-ddba99935e23 - - - - -] [instance: af339c99-1182-4489-b15c-21e52f50f724] Error monitoring migration: internal error: migration was active, but no RAM info was set: libvirt.libvirtError: internal error: migration was active, but no RAM info was set
- ~~~
- 
- From upstream bug: https://bugzilla.redhat.com/show_bug.cgi?id=2074205
- 
  [Impact]
  
-  * Effects of this bug are mostly observed in large scale clusters with a lot of live migration activity.
-  * Has second order effects for consumers of migration monitor such as libvirt and openstack.
+  * While live-migrating many instances concurrently, libvirt sometimes
+ returns `internal error: migration was active, but no RAM info was set:`
+ 
+  * Effects of this bug are mostly observed in large-scale clusters with
+ a lot of live migration activity.
+ 
+  * Has second-order effects for consumers of the migration monitor,
+ such as libvirt and openstack.
  
  [Test Case]
  Steps to Reproduce:
  1. live evacuate a compute
  2. live migration of one or more instances fails with the above error
  
  N.B. Due to the nature of this bug it is difficult to
  consistently reproduce.
  
  [Where problems could occur]
-  * In the event of a regression the migration monitor may report an inconsistent state.
+  * In the event of a regression the migration monitor may report an inconsistent state.
+ 
+ [Original Bug Description]
+ 
+ While live-migrating many instances concurrently, libvirt sometimes return internal error: migration was active, but no RAM info was set:
+ ~~~
+ 2022-03-30 06:08:37.197 7 WARNING nova.virt.libvirt.driver [req-5c3296cf-88ee-4af6-ae6a-ddba99935e23 - - - - -] [instance: af339c99-1182-4489-b15c-21e52f50f724] Error monitoring migration: internal error: migration was active, but no RAM info was set: libvirt.libvirtError: internal error: migration was active, but no RAM info was set
+ ~~~
+ 
+ From upstream bug: https://bugzilla.redhat.com/show_bug.cgi?id=2074205

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to Ubuntu Cloud Archive.
https://bugs.launchpad.net/bugs/1994002

Title:
  [SRU] migration was active, but no RAM info was set

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in qemu package in Ubuntu:
  Incomplete
Status in qemu source package in Focal:
  New
Status in qemu source package in Jammy:
  New
Status in qemu source package in Kinetic:
  New

Bug description:
  [Impact]

   * While live-migrating many instances concurrently, libvirt sometimes
  returns `internal error: migration was active, but no RAM info was
  set:`

   * Effects of this bug are mostly observed in large-scale clusters
  with a lot of live migration activity.

   * Has second-order effects for consumers of the migration monitor,
  such as libvirt and openstack.

  [Test Case]
  Steps to Reproduce:
  1. live evacuate a compute
  2. live migration of one or more instances fails with the above error

  N.B. Due to the nature of this bug it is difficult to consistently
  reproduce.

  [Where problems could occur]
   * In the event of a regression the migration monitor may report an inconsistent state.

  [Original Bug Description]

  While live-migrating many instances concurrently, libvirt sometimes return internal error: migration was active, but no RAM info was set:
  ~~~
  2022-03-30 06:08:37.197 7 WARNING nova.virt.libvirt.driver [req-5c3296cf-88ee-4af6-ae6a-ddba99935e23 - - - - -] [instance: af339c99-1182-4489-b15c-21e52f50f724] Error monitoring migration: internal error: migration was active, but no RAM info was set: libvirt.libvirtError: internal error: migration was active, but no RAM info was set
  ~~~

  From upstream bug: https://bugzilla.redhat.com/show_bug.cgi?id=2074205

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1994002/+subscriptions



