[Merge] ~mirespace/ubuntu/+source/corosync:sru-corosync-bionic-lp1677684-lp1437359 into ubuntu/+source/corosync:ubuntu/bionic-devel

Miriam España Acebal mp+409319 at code.launchpad.net
Tue Sep 28 20:33:34 UTC 2021


The proposal to merge ~mirespace/ubuntu/+source/corosync:sru-corosync-bionic-lp1677684-lp1437359 into ubuntu/+source/corosync:ubuntu/bionic-devel has been updated.

Description changed to:

Hi,

PPA for this is ppa:mirespace/sru-corosync-bionic-lp1677684-lp1437359 .

I have some doubts about the SRU templates (especially for LP #1437359), and the build shows a lintian error, so I'd rather raise both here so we can discuss how to handle them (that is, I'm asking for advice on this).

With this, two corosync bugs are going to be fixed in Bionic: LP #1677684 and LP #1437359. Both fixes are cherry-picks of work that Rafael David Tinoco and Jorge Niedbalski did for the Focal series:

  [Jorge Niedbalski] - dd471ac791ee8f522d6c792de45c56a13db5a28f
  * d/control: corosync binary depends on libqb-dev (LP: #1677684)

  [Rafael David Tinoco] - 16a37d42582913cc04921b268e2fbb008b135d82
  * debian/corosync-notifyd.init: fix for 2 PIDFILEs declared (LP: #1437359)
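
The shape of the init-script fix can be sketched like this (a minimal illustration only, not the exact Bionic diff; the variable values are assumptions):

```shell
# Before the fix, debian/corosync-notifyd.init effectively declared
# PIDFILE twice, and the second declaration silently overrode the
# first, so start/stop and status handling could disagree on which
# pidfile to use. The fix keeps a single authoritative declaration:
NAME=corosync-notifyd
PIDFILE=/var/run/$NAME.pid   # the one and only PIDFILE declaration
echo "$PIDFILE"
```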

For the first one, we can verify that, with the fix applied, corosync-blackbox now works:

ubuntu at bionic:~/tmp$ sudo corosync-blackbox
Dumping the contents of /var/lib/corosync/fdata
[debug] shm size:8392704; real_size:8392704; rb->word_size:2098176
[debug] read total of: 8392724
Ringbuffer:
 ->NORMAL
 ->write_pt [2866]
 ->read_pt [0]
 ->size [2098176 words]
 =>free [8381236 bytes]
 =>used [11464 bytes]
debug   Sep 28 20:21:50 totempg_waiting_trans_ack_cb(286):14: waiting_trans_ack changed to 1
debug   Sep 28 20:21:50 totemsrp_initialize(900):14: Token Timeout (3000 ms) retransmit timeout (294 ms)
debug   Sep 28 20:21:50 totemsrp_initialize(903):14: token hold (225 ms) retransmits before loss (10 retrans)
debug   Sep 28 20:21:50 totemsrp_initialize(910):14: join (50 ms) send_join (0 ms) consensus (3600 ms) merge (200 ms)
debug   Sep 28 20:21:50 totemsrp_initialize(913):14: downcheck (1000 ms) fail to recv const (2500 msgs)
debug   Sep 28 20:21:50 totemsrp_initialize(915):14: seqno unchanged const (30 rotations) Maximum network MTU 1401
debug   Sep 28 20:21:50 totemsrp_initialize(919):14: window size per rotation (50 messages) maximum messages per rotation (17 messages)
debug   Sep 28 20:21:50 totemsrp_initialize(923):14: missed count const (5 messages)
debug   Sep 28 20:21:50 totemsrp_initialize(926):14: send threads (0 threads)
debug   Sep 28 20:21:50 totemsrp_initialize(929):14: RRP token expired timeout (294 ms)
debug   Sep 28 20:21:50 totemsrp_initialize(932):14: RRP token problem counter (2000 ms)
debug   Sep 28 20:21:50 totemsrp_initialize(935):14: RRP threshold (10 problem count)
debug   Sep 28 20:21:50 totemsrp_initialize(938):14: RRP multicast threshold (100 problem count)
debug   Sep 28 20:21:50 totemsrp_initialize(941):14: RRP automatic recovery check timeout (1000 ms)
debug   Sep 28 20:21:50 totemsrp_initialize(943):14: RRP mode set to none.
debug   Sep 28 20:21:50 totemsrp_initialize(946):14: heartbeat_failures_allowed (0)
debug   Sep 28 20:21:50 totemsrp_initialize(948):14: max_network_delay (50 ms)
debug   Sep 28 20:21:50 totemsrp_initialize(971):14: HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
notice  Sep 28 20:21:50 totemnet_instance_initialize(248):14: Initializing transport (UDP/IP Multicast).
notice  Sep 28 20:21:50 init_nss(688):14: Initializing transmit/receive security (NSS) crypto: none hash: none
debug   Sep 28 20:21:50 totemudp_build_sockets_ip(923):14: Receive multicast socket recv buffer size (320000 bytes).
debug   Sep 28 20:21:50 totemudp_build_sockets_ip(929):14: Transmit multicast socket send buffer size (320000 bytes).
debug   Sep 28 20:21:50 totemudp_build_sockets_ip(935):14: Local receive multicast loop socket recv buffer size (320000 bytes).
debug   Sep 28 20:21:50 totemudp_build_sockets_ip(941):14: Local transmit multicast loop socket send buffer size (320000 bytes).
trace   Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 2 for FD 8
trace   Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 3 for FD 9
trace   Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 4 for FD 12
notice  Sep 28 20:21:50 timer_function_netif_check_timeout(669):14: The network interface [127.0.0.1] is now up.
debug   Sep 28 20:21:50 main_iface_change_fn(5101):14: Created or loaded sequence id 8.127.0.0.1 for this ring.
info    Sep 28 20:21:50 qb_ipcs_us_publish(537):9: server name: cmap
trace   Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 5 for FD 13
info    Sep 28 20:21:50 qb_ipcs_us_publish(537):9: server name: cfg
trace   Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 6 for FD 14
info    Sep 28 20:21:50 qb_ipcs_us_publish(537):9: server name: cpg
trace   Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 7 for FD 15
info    Sep 28 20:21:50 qb_ipcs_us_publish(537):9: server name: votequorum
trace   Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 8 for FD 16
info    Sep 28 20:21:50 qb_ipcs_us_publish(537):9: server name: quorum
trace   Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 9 for FD 17
debug   Sep 28 20:21:50 memb_state_gather_enter(2222):14: entering GATHER state from 15(interface change).
debug   Sep 28 20:21:50 memb_state_commit_token_create(3274):14: Creating commit token because I am the rep.
debug   Sep 28 20:21:50 old_ring_state_save(1605):14: Saving state aru 0 high seq received 0
debug   Sep 28 20:21:50 memb_state_commit_enter(2271):14: entering COMMIT state.
debug   Sep 28 20:21:50 message_handler_memb_commit_token(4929):14: got commit token
debug   Sep 28 20:21:50 memb_state_recovery_enter(2308):14: entering RECOVERY state.
debug   Sep 28 20:21:50 memb_state_recovery_enter(2354):14: position [0] member 127.0.0.1:
debug   Sep 28 20:21:50 memb_state_recovery_enter(2358):14: previous ring seq 8 rep 127.0.0.1
debug   Sep 28 20:21:50 memb_state_recovery_enter(2364):14: aru 0 high delivered 0 received flag 1
debug   Sep 28 20:21:50 memb_state_recovery_enter(2462):14: Did not need to originate any messages in recovery.
debug   Sep 28 20:21:50 message_handler_memb_commit_token(4929):14: got commit token
debug   Sep 28 20:21:50 message_handler_memb_commit_token(4994):14: Sending initial ORF token
trace   Sep 28 20:21:50 totemsrp_mcast(2547):14: mcasted message added to pending queue
debug   Sep 28 20:21:50 message_handler_orf_token(4176):14: token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
debug   Sep 28 20:21:50 message_handler_orf_token(4187):14: install seq 0 aru 0 high seq received 0
debug   Sep 28 20:21:50 message_handler_orf_token(4176):14: token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
debug   Sep 28 20:21:50 message_handler_orf_token(4187):14: install seq 0 aru 0 high seq received 0
debug   Sep 28 20:21:50 message_handler_orf_token(4176):14: token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
debug   Sep 28 20:21:50 message_handler_orf_token(4187):14: install seq 0 aru 0 high seq received 0
debug   Sep 28 20:21:50 message_handler_orf_token(4176):14: token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
debug   Sep 28 20:21:50 message_handler_orf_token(4187):14: install seq 0 aru 0 high seq received 0
debug   Sep 28 20:21:50 message_handler_orf_token(4206):14: retrans flag count 4 token aru 0 install seq 0 aru 0 0
debug   Sep 28 20:21:50 old_ring_state_reset(1621):14: Resetting old ring state
debug   Sep 28 20:21:50 deliver_messages_from_recovery_to_regular(1852):14: recovery to regular 1-0
trace   Sep 28 20:21:50 memb_state_operational_enter(1943):14: Delivering to app 1 to 0
debug   Sep 28 20:21:50 totempg_waiting_trans_ack_cb(286):14: waiting_trans_ack changed to 1
debug   Sep 28 20:21:50 memb_state_operational_enter(2128):14: entering OPERATIONAL state.
notice  Sep 28 20:21:50 memb_state_operational_enter(2134):14: A new membership (127.0.0.1:12) was formed. Members joined: 2130706433
trace   Sep 28 20:21:50 totemsrp_mcast(2547):14: mcasted message added to pending queue
trace   Sep 28 20:21:50 messages_deliver_to_app(4278):14: Delivering 0 to 2
trace   Sep 28 20:21:50 messages_deliver_to_app(4347):14: Delivering MCAST message with seq 1 to pending delivery queue
trace   Sep 28 20:21:50 messages_deliver_to_app(4347):14: Delivering MCAST message with seq 2 to pending delivery queue
trace   Sep 28 20:21:50 totemsrp_mcast(2547):14: mcasted message added to pending queue
trace   Sep 28 20:21:50 messages_deliver_to_app(4278):14: Delivering 2 to 3
trace   Sep 28 20:21:50 messages_deliver_to_app(4347):14: Delivering MCAST message with seq 3 to pending delivery queue
trace   Sep 28 20:21:50 totemsrp_mcast(2547):14: mcasted message added to pending queue
trace   Sep 28 20:21:50 totemsrp_mcast(2547):14: mcasted message added to pending queue
trace   Sep 28 20:21:50 messages_free(2676):14: releasing messages up to and including 2
trace   Sep 28 20:21:50 messages_deliver_to_app(4278):14: Delivering 3 to 5
trace   Sep 28 20:21:50 messages_deliver_to_app(4347):14: Delivering MCAST message with seq 4 to pending delivery queue
trace   Sep 28 20:21:50 messages_deliver_to_app(4347):14: Delivering MCAST message with seq 5 to pending delivery queue
trace   Sep 28 20:21:50 totemsrp_mcast(2547):14: mcasted message added to pending queue
trace   Sep 28 20:21:50 messages_free(2676):14: releasing messages up to and including 3
trace   Sep 28 20:21:50 messages_deliver_to_app(4278):14: Delivering 5 to 6
trace   Sep 28 20:21:50 messages_deliver_to_app(4347):14: Delivering MCAST message with seq 6 to pending delivery queue
debug   Sep 28 20:21:50 totempg_waiting_trans_ack_cb(286):14: waiting_trans_ack changed to 0
trace   Sep 28 20:21:50 messages_free(2676):14: releasing messages up to and including 5
trace   Sep 28 20:21:50 messages_free(2676):14: releasing messages up to and including 6
trace   Sep 28 20:22:03 qb_loop_poll_add(368):9: grown poll array to 10 for FD 18
debug   Sep 28 20:22:03 handle_new_connection(647):9: IPC credentials authenticated (3202-3255-18)
debug   Sep 28 20:22:03 qb_ipcs_shm_connect(285):9: connecting to client [3255]
debug   Sep 28 20:22:03 qb_rb_open_2(238):9: shm size:1048589; real_size:1052672; rb->word_size:263168
debug   Sep 28 20:22:03 qb_rb_open_2(238):9: shm size:1048589; real_size:1052672; rb->word_size:263168
debug   Sep 28 20:22:03 qb_rb_open_2(238):9: shm size:1048589; real_size:1052672; rb->word_size:263168
trace   Sep 28 20:22:03 qb_loop_poll_add(368):9: grown poll array to 11 for FD 18
debug   Sep 28 20:22:03 qb_ipcs_dispatch_connection_request(759):9: HUP conn (3202-3255-18)
debug   Sep 28 20:22:03 qb_ipcs_disconnect(606):9: qb_ipcs_disconnect(3202-3255-18) state:2
trace   Sep 28 20:22:03 qb_rb_close(290):9: ENTERING qb_rb_close()
debug   Sep 28 20:22:03 qb_rb_close_helper(337):9: Free'ing ringbuffer: /dev/shm/qb-cmap-response-3202-3255-18-header
trace   Sep 28 20:22:03 my_posix_sem_destroy(91):9: ENTERING my_posix_sem_destroy()
trace   Sep 28 20:22:03 qb_rb_close(290):9: ENTERING qb_rb_close()
debug   Sep 28 20:22:03 qb_rb_close_helper(337):9: Free'ing ringbuffer: /dev/shm/qb-cmap-event-3202-3255-18-header
trace   Sep 28 20:22:03 my_posix_sem_destroy(91):9: ENTERING my_posix_sem_destroy()
trace   Sep 28 20:22:03 qb_rb_close(290):9: ENTERING qb_rb_close()
debug   Sep 28 20:22:03 qb_rb_close_helper(337):9: Free'ing ringbuffer: /dev/shm/qb-cmap-request-3202-3255-18-header
trace   Sep 28 20:22:03 my_posix_sem_destroy(91):9: ENTERING my_posix_sem_destroy()
debug   Sep 28 20:22:03 handle_new_connection(647):9: IPC credentials authenticated (3202-3257-18)
debug   Sep 28 20:22:03 qb_ipcs_shm_connect(285):9: connecting to client [3257]
debug   Sep 28 20:22:03 qb_rb_open_2(238):9: shm size:1048589; real_size:1052672; rb->word_size:263168
debug   Sep 28 20:22:03 qb_rb_open_2(238):9: shm size:1048589; real_size:1052672; rb->word_size:263168
debug   Sep 28 20:22:03 qb_rb_open_2(238):9: shm size:1048589; real_size:1052672; rb->word_size:263168
ERROR: qb_rb_chunk_read failed: Connection timed out
[trace] ENTERING qb_rb_close()
[debug] Free'ing ringbuffer: /dev/shm/qb-create_from_file-header

For the second one I didn't find an existing check (I suppose because the bug depends on SysV)... I can force the situation and verify the PID handling change (in fact, I held off writing the SRU template for it pending your opinion).
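
To make that manual check concrete, this is roughly what I have in mind (the path is an assumption, and the pidfile is simulated here so the snippet is self-contained):

```shell
# On a real system: restart corosync-notifyd via its SysV init script
# and confirm the PID recorded in the single remaining PIDFILE matches
# the live daemon process. Simulated below with a stand-in pidfile
# and the current shell's PID in place of the daemon's.
pidfile=$(mktemp)        # stands in for /var/run/corosync-notifyd.pid
echo "$$" > "$pidfile"   # stands in for the daemon writing its PID
recorded=$(cat "$pidfile")
if [ "$recorded" = "$$" ]; then
    echo "PID file consistent"
fi
rm -f "$pidfile"
```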

About the SRU templates: there is a rough previous one for LP #1677684, and none for the PID bug... I'm still thinking about the steps to reproduce for the latter (as I said before).

Also, I noticed a lintian error when building (which I suppose has to be resolved before the SRU can be processed):

E: libtotem-pg5: symbols-file-contains-current-version-with-debian-revision on symbol crypto_get_current_sec_header_size at Base
E: Lintian run failed (policy violation)
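
If I understand the tag correctly, it means debian/libtotem-pg5.symbols records the new symbol with the full Debian revision as its minimal version, and the usual fix is to drop the revision so rebuilds and backports still satisfy the dependency. Something like this (the version strings below are assumptions, not taken from the actual package):

```
--- a/debian/libtotem-pg5.symbols
+++ b/debian/libtotem-pg5.symbols
- crypto_get_current_sec_header_size@Base 2.4.3-0ubuntu1.1
+ crypto_get_current_sec_header_size@Base 2.4.3
```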

Autopkgtest results (OK):

autopkgtest [20:28:15]: test corosync: [-----------------------
+ corosync-cfgtool -s
+ grep -20 ring 0 active with no faults
Printing ring status.
Local node ID 2130706433
RING ID 0
 id	= 127.0.0.1
 status	= ring 0 active with no faults
+ corosync-quorumtool
+ grep -20  1 localhost (local)
Quorum information
------------------
Date:             Tue Sep 28 18:28:16 2021
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          2130706433
Ring ID:          2130706433/4
Quorate:          No

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
2130706433          1 localhost (local)
autopkgtest [20:28:16]: test corosync: -----------------------]
autopkgtest [20:28:16]: test corosync:  - - - - - - - - - - results - - - - - - - - - -
corosync             PASS
autopkgtest [20:28:17]: @@@@@@@@@@@@@@@@@@@@ summary
corosync             PASS

Thanks in advance for your time reviewing this, and for any hints about what needs to be done!

For more details, see:
https://code.launchpad.net/~mirespace/ubuntu/+source/corosync/+git/corosync/+merge/409319
-- 
Your team Ubuntu Core Development Team is requested to review the proposed merge of ~mirespace/ubuntu/+source/corosync:sru-corosync-bionic-lp1677684-lp1437359 into ubuntu/+source/corosync:ubuntu/bionic-devel.
