[Bug 799711] Re: o2cb[11796]: ERROR: ocfs2_controld.pcmk did not come up

HenningMalzahn 799711 at bugs.launchpad.net
Wed Aug 31 08:47:58 UTC 2011


Hello Jacob,

you were right. Of course the leftover from the Master/Slave setup was utterly wrong, and I removed it.
Other than that, I changed the following Pacemaker objects:

- Object that defines the multi state resource:
  - ms msDrbd2 resDrbd2 meta resource-stickiness="100" master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" globally-unique="false" interleave="true"
  - Previously defined without the interleave="true" option

- Object that defines the primitive for the DLM
  - primitive resDlm ocf:pacemaker:controld op monitor interval="120s"
  - Now added WITHOUT the previously used options: op start interval="0" timeout="90" op stop interval="0" timeout="100"

- Object that defines the primitive for the O2CB service
  - primitive resO2CB ocf:pacemaker:o2cb op monitor interval="120s"
  - Now added WITHOUT the previously used options: op start interval="0" timeout="90" op stop interval="0" timeout="100"

- Object that defines the primitive for the filesystem object
  - primitive resFs2 ocf:heartbeat:Filesystem params device="/dev/drbd2" fstype="ocfs2" directory="/var/www" op monitor interval="120s"
  - Now added WITHOUT the previously used options: op start interval="0" timeout="90" op stop interval="0" timeout="100" meta target-role="stopped"

So the overall configuration that works is the following:

primitive resDrbd2 ocf:linbit:drbd params drbd_resource="r2" operations $id="resDrbd2-operations" op monitor interval="20s" role="Master" timeout="20s" op monitor interval="30s" role="Slave" timeout="20s"

ms msDrbd2 resDrbd2 meta resource-stickiness="100" master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" globally-unique="false" interleave="true"
location locDrbd2AllowedNodes msDrbd2 rule 200: #uname eq node1 or #uname eq node2

primitive resDlm ocf:pacemaker:controld op monitor interval="120s"
clone cloneDlm resDlm meta globally-unique="false" interleave="true"
colocation colDlmDrbd inf: cloneDlm msDrbd2:Master
order ordDrbdDlm 0: msDrbd2:promote cloneDlm
location locCloneDlmAllowedNodes cloneDlm rule 200: #uname eq node1 or #uname eq node2

primitive resO2CB ocf:pacemaker:o2cb op monitor interval="120s"
clone cloneO2CB resO2CB meta globally-unique="false" interleave="true"
colocation colO2CBDlm inf: cloneO2CB cloneDlm
order ordDlmO2CB 0: cloneDlm cloneO2CB
location locCloneO2CBAllowedNodes cloneO2CB rule 200: #uname eq node1 or #uname eq node2

primitive resFs2 ocf:heartbeat:Filesystem params device="/dev/drbd2" fstype="ocfs2" directory="/var/www" op monitor interval="120s"
clone cloneFs2 resFs2 meta globally-unique="false" interleave="true"
colocation colFs2-on-CloneO2CB inf: cloneFs2 cloneO2CB
order ordFs2-after-cloneO2CB inf: cloneO2CB cloneFs2
location locFs2AllowedNodes cloneFs2 rule 200: #uname eq node1 or #uname eq node2
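
For anyone reproducing this, nothing beyond the standard tools is needed to check the result, for example (the resource names are the ones from the configuration above):

crm configure verify
crm_mon -1

crm_mon should report msDrbd2 as Master on both node1 and node2, and cloneDlm, cloneO2CB and cloneFs2 as Started on both nodes.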


I have monitored this configuration for the last several weeks, and the only thing left to figure out is why the network connection between the nodes drops from time to time. The "uptime" before it happens varies; when it does, the following is logged on the first node:

Aug 26 11:27:42 node1 kernel: [93305.714992] block drbd2: sock was shut down by peer
Aug 26 11:27:42 node1 kernel: [93305.714998] block drbd2: peer( Primary -> Unknown ) conn( Connected -> BrokenPipe ) pdsk( UpToDate -> DUnknown )
Aug 26 11:27:42 node1 kernel: [93305.715031] block drbd2: short read expecting header on sock: r=0
Aug 26 11:27:42 node1 kernel: [93305.717147] block drbd2: meta connection shut down by peer.
Aug 26 11:27:42 node1 kernel: [93305.717150] block drbd2: asender terminated
Aug 26 11:27:42 node1 kernel: [93305.717154] block drbd2: Terminating asender thread
Aug 26 11:27:42 node1 kernel: [93305.717180] block drbd2: Creating new current UUID
Aug 26 11:27:42 node1 kernel: [93305.749481] block drbd2: Connection closed
Aug 26 11:27:42 node1 kernel: [93305.749486] block drbd2: conn( BrokenPipe -> Unconnected )
Aug 26 11:27:42 node1 kernel: [93305.749489] block drbd2: receiver terminated
Aug 26 11:27:42 node1 kernel: [93305.749491] block drbd2: Restarting receiver thread
Aug 26 11:27:42 node1 kernel: [93305.749493] block drbd2: receiver (re)started
Aug 26 11:27:42 node1 kernel: [93305.749496] block drbd2: conn( Unconnected -> WFConnection )
Aug 26 11:27:42 node1 kernel: [93306.044319] block drbd2: Handshake successful: Agreed network protocol version 91
Aug 26 11:27:42 node1 kernel: [93306.045138] block drbd2: Peer authenticated using 20 bytes of 'sha1' HMAC
Aug 26 11:27:42 node1 kernel: [93306.045143] block drbd2: conn( WFConnection -> WFReportParams )
Aug 26 11:27:42 node1 kernel: [93306.045155] block drbd2: Starting asender thread (from drbd2_receiver [3210])
Aug 26 11:27:42 node1 kernel: [93306.045296] block drbd2: data-integrity-alg: sha1
Aug 26 11:27:42 node1 kernel: [93306.045575] block drbd2: drbd_sync_handshake:
Aug 26 11:27:42 node1 kernel: [93306.045577] block drbd2: self FC552462665D4783:BDD65C832186D577:9F7DA9B62FDA6714:2AF446A4F5219D1C bits:0 flags:0
Aug 26 11:27:42 node1 kernel: [93306.045579] block drbd2: peer 8CA84B52E8B891A7:BDD65C832186D577:9F7DA9B62FDA6714:2AF446A4F5219D1C bits:0 flags:0
Aug 26 11:27:42 node1 kernel: [93306.045581] block drbd2: uuid_compare()=100 by rule 90
Aug 26 11:27:42 node1 kernel: [93306.045583] block drbd2: Split-Brain detected, dropping connection!
Aug 26 11:27:42 node1 kernel: [93306.062952] block drbd2: helper command: /sbin/drbdadm split-brain minor-2
Aug 26 11:27:42 node1 kernel: [93306.064522] block drbd2: helper command: /sbin/drbdadm split-brain minor-2 exit code 0 (0x0)
Aug 26 11:27:42 node1 kernel: [93306.064526] block drbd2: conn( WFReportParams -> Disconnecting )
Aug 26 11:27:42 node1 kernel: [93306.064530] block drbd2: error receiving ReportState, l: 4!
Aug 26 11:27:42 node1 kernel: [93306.078690] block drbd2: meta connection shut down by peer.
Aug 26 11:27:42 node1 kernel: [93306.078693] block drbd2: asender terminated
Aug 26 11:27:42 node1 kernel: [93306.078694] block drbd2: Terminating asender thread
Aug 26 11:27:42 node1 kernel: [93306.098559] block drbd2: Connection closed
Aug 26 11:27:42 node1 kernel: [93306.098567] block drbd2: conn( Disconnecting -> StandAlone )
Aug 26 11:27:42 node1 kernel: [93306.098597] block drbd2: receiver terminated
Aug 26 11:27:42 node1 kernel: [93306.098598] block drbd2: Terminating receiver thread

and on the second node:

Aug 26 07:40:59 node2 lrmd: [1782]: info: rsc:resDrbd2:1:39: monitor
Aug 26 08:41:05 node2 lrmd: [1782]: info: rsc:resDrbd2:1:39: monitor
Aug 26 09:41:11 node2 lrmd: [1782]: info: rsc:resDrbd2:1:39: monitor
Aug 26 10:41:17 node2 lrmd: [1782]: info: rsc:resDrbd2:1:39: monitor
Aug 26 11:27:42 node2 kernel: [92955.613719] block drbd2: PingAck did not arrive in time.
Aug 26 11:27:42 node2 kernel: [92955.629774] block drbd2: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown )
Aug 26 11:27:42 node2 kernel: [92955.629785] block drbd2: asender terminated
Aug 26 11:27:42 node2 kernel: [92955.629787] block drbd2: Terminating asender thread
Aug 26 11:27:42 node2 kernel: [92955.629812] block drbd2: short read expecting header on sock: r=-512
Aug 26 11:27:42 node2 kernel: [92955.630751] block drbd2: Creating new current UUID
Aug 26 11:27:42 node2 kernel: [92955.645302] block drbd2: Connection closed
Aug 26 11:27:42 node2 kernel: [92955.645306] block drbd2: conn( NetworkFailure -> Unconnected )
Aug 26 11:27:42 node2 kernel: [92955.645315] block drbd2: receiver terminated
Aug 26 11:27:42 node2 kernel: [92955.645317] block drbd2: Restarting receiver thread
Aug 26 11:27:42 node2 kernel: [92955.645318] block drbd2: receiver (re)started
Aug 26 11:27:42 node2 kernel: [92955.645321] block drbd2: conn( Unconnected -> WFConnection )
Aug 26 11:27:42 node2 kernel: [92955.974572] block drbd2: Handshake successful: Agreed network protocol version 91
Aug 26 11:27:42 node2 kernel: [92955.975297] block drbd2: Peer authenticated using 20 bytes of 'sha1' HMAC
Aug 26 11:27:42 node2 kernel: [92955.975302] block drbd2: conn( WFConnection -> WFReportParams )
Aug 26 11:27:42 node2 kernel: [92955.975346] block drbd2: Starting asender thread (from drbd2_receiver [3100])
Aug 26 11:27:42 node2 kernel: [92955.976133] block drbd2: data-integrity-alg: sha1
Aug 26 11:27:42 node2 kernel: [92955.976246] block drbd2: drbd_sync_handshake:
Aug 26 11:27:42 node2 kernel: [92955.976249] block drbd2: self 8CA84B52E8B891A7:BDD65C832186D577:9F7DA9B62FDA6714:2AF446A4F5219D1C bits:0 flags:0
Aug 26 11:27:42 node2 kernel: [92955.976251] block drbd2: peer FC552462665D4783:BDD65C832186D577:9F7DA9B62FDA6714:2AF446A4F5219D1C bits:0 flags:0
Aug 26 11:27:42 node2 kernel: [92955.976253] block drbd2: uuid_compare()=100 by rule 90
Aug 26 11:27:42 node2 kernel: [92955.976254] block drbd2: Split-Brain detected, dropping connection!
Aug 26 11:27:42 node2 kernel: [92955.991397] block drbd2: helper command: /sbin/drbdadm split-brain minor-2
Aug 26 11:27:42 node2 kernel: [92955.992758] block drbd2: helper command: /sbin/drbdadm split-brain minor-2 exit code 0 (0x0)
Aug 26 11:27:42 node2 kernel: [92955.992761] block drbd2: conn( WFReportParams -> Disconnecting )
Aug 26 11:27:42 node2 kernel: [92955.992764] block drbd2: error receiving ReportState, l: 4!
Aug 26 11:27:42 node2 kernel: [92956.008897] block drbd2: asender terminated
Aug 26 11:27:42 node2 kernel: [92956.008906] block drbd2: Terminating asender thread
Aug 26 11:27:42 node2 kernel: [92956.008986] block drbd2: Connection closed
Aug 26 11:27:42 node2 kernel: [92956.008994] block drbd2: conn( Disconnecting -> StandAlone )
Aug 26 11:27:42 node2 kernel: [92956.008998] block drbd2: receiver terminated
Aug 26 11:27:42 node2 kernel: [92956.008999] block drbd2: Terminating receiver thread
Aug 26 11:41:24 node2 lrmd: [1782]: info: rsc:resDrbd2:1:39: monitor

Every time this happens it causes a DRBD split brain, even though there are
no writes on the devices yet.
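
For reference while I keep looking for the cause: the disconnect on node2 starts with "PingAck did not arrive in time", which is governed by the DRBD net timeouts. The values below are, as far as I can tell, just the DRBD 8.3 defaults; they only document what I would look at tuning, not anything I have changed:

resource r2 {
  net {
    ping-int     10;   # seconds between keep-alive pings to the peer
    ping-timeout  5;   # tenths of a second to wait for the PingAck
    timeout      60;   # tenths of a second before the link is considered dead
    ko-count      0;   # 0 disables expelling a stalled peer
  }
}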

Stopping the Apache clone that uses the resource and the multi state resource itself, followed by the following commands on the second node:
- drbdadm attach r2
- drbdadm -- --discard-my-data connect r2

and the following on the first node:
- drbdadm attach r2
- drbdadm connect r2

brings up the resources successfully again. Starting the multi state resource afterwards also succeeds without any problems, and the setup works again, sometimes for several hours, sometimes for two to three days, until the manual resync has to happen again.
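
What I will probably look at next, to at least take the surprise out of it: DRBD has split-brain auto-recovery policies and a notification handler. As far as I understand, a split brain can only be auto-resolved when at most one side was Primary, so for this dual-Primary setup it mostly buys a notification. The snippet below is only a sketch of what that would look like in the r2 resource definition (the handler path is the one I would expect from the drbd8-utils package), not something I have in place:

resource r2 {
  handlers {
    # mail root as soon as a split brain is detected
    split-brain "/usr/lib/drbd/notify-split-brain.sh root";
  }
  net {
    allow-two-primaries;                 # already required for the dual-Primary setup
    after-sb-0pri discard-zero-changes;  # auto-resolve if neither side was Primary
    after-sb-1pri discard-secondary;     # auto-resolve if only one side was Primary
    after-sb-2pri disconnect;            # both Primary (this setup): no auto-resolve
  }
}
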
I don't know if this is still the right place to discuss this, as it might no longer have anything to do with the original issue.

Thanks again to everyone who helped me out!

Henning

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/799711

Title:
  o2cb[11796]: ERROR: ocfs2_controld.pcmk did not come up

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/799711/+subscriptions


