[Bug 1783203] Re: Upgrade to RabbitMQ 3.6.10 causes beam lockup in clustered deployment

Michael Klishin mklishin at pivotal.io
Fri Mar 15 04:09:14 UTC 2019


There isn't much detail about node state here, but GM (a multicast
module) has nothing to do with the management plugin.

According to the above comments, this is on Erlang 18.3, which has
known bugs that stop all activity on a node that has accepted any TCP
connections (including HTTP requests) [1][2]. They were reported by
the RabbitMQ team in mid-2017 and fixed shortly after. Erlang 19.3.6.4
is the minimum supported version for RabbitMQ 3.6.16 primarily because
of those issues.

Somewhat related: RabbitMQ 3.6.x is out of support [3][4], and since
January 2018 [4] Erlang 19.3.6.4 has been the minimum supported
version even for 3.6.x.

1. https://bugs.erlang.org/browse/ERL-430
2. https://bugs.erlang.org/browse/ERL-448
3. https://www.rabbitmq.com/which-erlang.html#old-timers
4. http://www.rabbitmq.com/changelog.html
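
For anyone affected: a quick way to confirm which Erlang/OTP release a
node is actually running is to query the management plugin's
/api/overview endpoint, which reports erlang_version. The following is
a minimal Python sketch, not an official tool; it assumes the
management plugin is listening on localhost:15672 with the default
guest/guest credentials, so adjust host, port, and credentials for
your deployment.

    import base64
    import json
    import urllib.request

    # Assumed connection details for a single node; adjust as needed.
    HOST = "localhost"
    PORT = 15672
    USER = "guest"
    PASSWORD = "guest"

    # Minimum supported Erlang for 3.6.x per the changelog [4].
    MIN_OTP = (19, 3, 6, 4)

    def parse_version(v):
        """Turn a string like '19.3.6.4' into a comparable int tuple."""
        parts = []
        for piece in v.split("."):
            digits = "".join(ch for ch in piece if ch.isdigit())
            parts.append(int(digits) if digits else 0)
        return tuple(parts)

    def erlang_version(host, port, user, password):
        """Read erlang_version from the management API's /api/overview."""
        req = urllib.request.Request(f"http://{host}:{port}/api/overview")
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        req.add_header("Authorization", f"Basic {token}")
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.load(resp)["erlang_version"]

    if __name__ == "__main__":
        version = erlang_version(HOST, PORT, USER, PASSWORD)
        if parse_version(version) < MIN_OTP:
            print(f"Erlang {version} is below the supported minimum "
                  "19.3.6.4; upgrade Erlang before troubleshooting further.")
        else:
            print(f"Erlang {version} meets the minimum requirement.")

The same information is also available from rabbitmqctl status on the
node itself, so the HTTP check is only a convenience when the
management plugin is enabled.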

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to rabbitmq-server in Ubuntu.
https://bugs.launchpad.net/bugs/1783203

Title:
  Upgrade to RabbitMQ 3.6.10 causes beam lockup in clustered deployment

Status in OpenStack rabbitmq-server charm:
  New
Status in rabbitmq-server package in Ubuntu:
  Confirmed

Bug description:
  While performing an OpenStack release upgrade from Pike to Queens
  following the charmers' guide, we upgraded Ceph-* and MySQL. After
  setting source=cloud:xenial-queens on the rabbitmq-server charm and
  letting the cluster re-stabilize, the RabbitMQ beam process locks up
  on one cluster node, causing a complete denial of service on the
  openstack vhost across all 3 members of the cluster. Killing the
  beam process on that node causes another node to lock up within a
  short timeframe.

  We have reproduced this twice in the same environment by re-deploying
  a fresh Pike RabbitMQ cluster and upgrading to Queens. The issue is
  not reproducible with generic workloads such as creating/deleting
  Nova instances and creating/attaching/detaching Cinder volumes;
  however, we can reproduce it when running a full Heat stack.

  This is happening on two of the three clouds at this site when
  RabbitMQ is upgraded to Queens. The third cloud was able to upgrade
  to Queens without issue, but it was upgraded on the 18.02 charms.
  Heat templates forthcoming.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1783203/+subscriptions


