[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load
Seyeong Kim
1789177 at bugs.launchpad.net
Wed Feb 24 05:33:33 UTC 2021
1. deployed rocky
2. installed the updated oslo.messaging package on the nodes below
- neutron-api
- neutron-gateway
- nova-compute
  - restarted neutron-openvswitch-agent only
3. tried to reproduce with the config below
- created 3000 test queues, exchanges, and bindings
- juju config rabbitmq-server min-cluster-size=1
- juju config rabbitmq-server connection-backlog=200 (to make all rabbitmq-server units restart)
- shut down one of the rabbitmq-server nodes via the MAAS controller
- powered it back on via the MAAS controller
I'm able to see the channel not found error for nova, and for
neutron-openvswitch-agent on the nova-compute node. The
neutron-openvswitch-agent on the nova-compute node has the fixed
package, but rabbitmq-server still shows the channel not found error.
However, I can't launch or delete instances in this environment, so
I'm not sure how to interpret this result.
Also, reproduction itself is quite hard: it took a lot of time to look
for a consistent pattern, and I'm not sure there is one.
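For reference, a hedged way to spot these errors on a rabbitmq-server
unit (the log path assumes default Ubuntu packaging):

# Look for the exchange lookup failures reported by RabbitMQ.
grep "not_found: no exchange" /var/log/rabbitmq/rabbit@*.log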
https://bugs.launchpad.net/bugs/1789177
Title:
RabbitMQ fails to synchronize exchanges under high load
Status in Ubuntu Cloud Archive:
Invalid
Status in Ubuntu Cloud Archive mitaka series:
Triaged
Status in Ubuntu Cloud Archive queens series:
Triaged
Status in Ubuntu Cloud Archive rocky series:
Fix Committed
Status in Ubuntu Cloud Archive stein series:
Fix Released
Status in Ubuntu Cloud Archive train series:
Fix Released
Status in oslo.messaging:
Fix Released
Status in python-oslo.messaging package in Ubuntu:
Fix Released
Status in python-oslo.messaging source package in Xenial:
In Progress
Status in python-oslo.messaging source package in Bionic:
Triaged
Bug description:
[Impact]
If there are many exchanges and queues, then after a failover
rabbitmq-server reports errors saying the exchanges cannot be found.
Affected:
- Bionic (Queens)
Not affected:
- Focal
[Test Case]
1. deploy a simple rabbitmq cluster
- https://pastebin.ubuntu.com/p/MR76VbMwY5/
2. juju ssh neutron-gateway/0
- for i in {1..1000}; do systemctl restart neutron-metering-agent; sleep 2; done
3. it would be better to add more exchanges, queues, and bindings
- rabbitmq-plugins enable rabbitmq_management
- rabbitmqctl add_user test password
- rabbitmqctl set_user_tags test administrator
- rabbitmqctl set_permissions -p openstack test ".*" ".*" ".*"
- https://pastebin.ubuntu.com/p/brw7rSXD7q/ (save this as create.sh) [1]
- for i in {1..2000}; do ./create.sh test_$i; done
4. restart the rabbitmq-server service, or shut the machine down and power it back on, several times.
5. you will see the exchange not found error
[1] create.sh (pasting here because pastebins don't last forever)
#!/bin/bash
# Create one exchange, one queue, and a binding between them in the
# openstack vhost.
rabbitmqadmin -V openstack -u test -p password declare exchange name="$1" type=direct
rabbitmqadmin -V openstack -u test -p password declare queue name="$1" durable=false 'arguments={"x-expires":1800000}'
rabbitmqadmin -V openstack -u test -p password declare binding source="$1" destination_type="queue" destination="$1" routing_key=""
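A quick sanity check before failing the node over (a hedged sketch;
counts include OpenStack's own objects on top of the ~2000 created
above):

# Count the objects in the openstack vhost.
rabbitmqctl list_exchanges -p openstack | wc -l
rabbitmqctl list_queues -p openstack | wc -l
rabbitmqctl list_bindings -p openstack | wc -l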
[Where problems could occur]
1. Every service that uses oslo.messaging needs to be restarted (see the sketch after this list).
2. Message transfer could be an issue.
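For example, a restart sweep on a compute node might look like this
(a hedged sketch; unit names vary by release and deployment):

# Restart the oslo.messaging consumers on this node so they pick up
# the updated library.
for svc in nova-compute neutron-openvswitch-agent; do
    sudo systemctl restart "$svc"
done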
[Others]
// original description
Input:
- OpenStack Pike cluster with ~500 nodes
- DVR enabled in neutron
- Lots of messages
Scenario: failover of one rabbit node in a cluster
Issue: after the failed rabbit node comes back online, some RPC communications appear broken
Logs from rabbit:
=ERROR REPORT==== 10-Aug-2018::17:24:37 ===
Channel error on connection <0.14839.1> (10.200.0.24:55834 -> 10.200.0.31:5672, vhost: '/openstack', user: 'openstack'), channel 1:
operation basic.publish caused a channel exception not_found: no exchange 'reply_5675d7991b4a4fb7af5d239f4decb19f' in vhost '/openstack'
Investigation:
After the failed rabbit node comes back online it immediately receives many new connections and, for some reason, fails to synchronize exchanges (the cluster had ~1600 exchanges); the exchange count on that node stays low and does not increase.
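A hedged way to compare exchange counts across nodes and spot the one
that failed to synchronize (uses the management API on port 15672;
node addresses are placeholders, jq is assumed to be installed, and
the '/openstack' vhost is URL-encoded as %2Fopenstack):

# Print the number of exchanges each node reports for the vhost.
for node in 10.200.0.31 10.200.0.32 10.200.0.33; do
    echo -n "$node: "
    curl -s -u openstack:PASSWORD \
        "http://$node:15672/api/exchanges/%2Fopenstack" | jq length
done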
Workaround: let the recovered node synchronize all exchanges first -
forbid new connections with iptables rules for some time (30 sec)
after the failed node comes back online.
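A hedged sketch of that workaround (5672 is the AMQP port seen in the
logs above; run on the recovering node):

# Reject new AMQP connection attempts (--syn matches only new
# connections) while the node resynchronizes, then allow them again.
iptables -I INPUT -p tcp --dport 5672 --syn -j REJECT
sleep 30
iptables -D INPUT -p tcp --dport 5672 --syn -j REJECT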
Proposal: do not create new exchanges (use the default exchange) for
direct messages - this also fixes the issue.
Is there a good reason for creating new exchanges for direct messages?
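For illustration, publishing through the default exchange needs no
exchange declaration at all, since every queue is automatically bound
to it under its own name (a hedged sketch reusing the test user from
the [Test Case] above):

# Deliver a message straight to queue test_1 via the default exchange.
rabbitmqadmin -V openstack -u test -p password publish \
    exchange=amq.default routing_key=test_1 payload="hello"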
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions