Red Hat Cluster Suite / Handling of shutdown and mounting of GFS
Ante Karamatic
ivoks at grad.hr
Fri Sep 5 13:36:12 UTC 2008
On Fri, 5 Sep 2008 12:51:42 +0200
Chris Joelly <chris-m-lists at joelly.net> wrote:
> ack. But the error I get has nothing to do with split brain. And I'm
> trying to figure out what device RHCS uses, and thus cannot remove the
> node. The services which were hosted on store02 were successfully moved
> to store01, so this must be a mistake from cman_tool? But how can I
> find this device? Using strace and lsof I was not able to track it
> down :-/
Moving services isn't an issue here (you could remove all services from
a node with /etc/init.d/rgmanager stop). This problem is related to
cluster membership. I don't know exactly where the problem is (I'm
just a user, not a developer :).
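For example, to drain a node before taking it down (a rough sketch;
clustat just confirms where the services ended up):

  # on the node you want to empty:
  /etc/init.d/rgmanager stop
  # from any remaining node, check that services moved over:
  clustat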
I'll repeat once more: having only two nodes in a cluster is the worst
possible scenario for RHCS. Lose one node and you've also lost quorum,
unless you run in the special two-node mode.
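That mode is the <cman> special case in cluster.conf (a sketch of just
the relevant line, not a full config):

  <cman two_node="1" expected_votes="1"/>

With it, either node alone stays quorate, which is exactly why
membership and fencing get racy when the two nodes stop seeing each
other.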
> Which means that the whole RHCS stuff is rather useless? Or may I
> assume that the RHCS stuff in RH/CentOS is much better integrated
> and tested than in Ubuntu server? And that it's therefore worth the
> subscription costs at RH, or the switch to CentOS?
I wouldn't use it on a two-node cluster if I didn't really have to (but
I do in one case), but it's far from useless. It's great :) The same
problem exists on all distributions (FWIW my crappy two-node cluster is
on RedHat and all the others are on Ubuntu).
> > These things should be easier once we put upstart in use.
>
> upstart? aha. sounds interesting... never heard of this before.
upstart is a replacement for the oldest part of Unix - SysV init
scripts :D Check it out:
http://upstart.ubuntu.com
Best thing since sliced bread. Really.
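Instead of shell scripts, jobs are small event-driven files. Something
like this (a sketch in the 0.3-era /etc/event.d syntax; the daemon name
is made up):

  # /etc/event.d/mydaemon
  start on runlevel 2
  stop on runlevel 0
  respawn
  exec /usr/sbin/mydaemon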
> this is the way I use DRBD-LVM2-GFS on my 2-node cluster. But as I
> understand cluster.conf and system-config-cluster, I have to define
> resources for a service. If e.g. I want to create 2 services which
> both rely on the same GFS mount and are expected to run on the same
> node, then I don't know how to share this GFS resource. Does the
> resource manager notice that the GFS resource is already mounted
> for service1 on node1 when it decides to bring up service2 on node1
> too? Or say I set up the cluster so that each node is the failover
> node for the other, and the services have a GFS resource defined
> which would trigger a GFS mount that is already present on the
> failover node. Would that 'double' mount cause a failed start of
> the failed-over service?
Since RHCS isn't aware of DRBD, you can't really rely on it to handle
the GFS mount. This is why I don't manage GFS mounts with RHCS. Instead,
I mount GFS on both machines and let the services read it when they
need to. For example:
If I have two apache nodes, then I mount /var/www as GFS on both
(underneath this GFS is a DRBD device with both nodes in
primary-primary). As soon as the first node dies, the service is
started on the other node. RHCS doesn't manage my /var/www mount.
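Roughly, the setup looks like this (a sketch; the resource name r0,
the device and the mount point are placeholders, and the exact options
depend on your DRBD and GFS versions):

  # /etc/drbd.conf - let both nodes be primary at once (DRBD 8)
  resource r0 {
    net     { allow-two-primaries; }
    startup { become-primary-on both; }
    # plus your existing device/disk/address sections per node
  }

  # /etc/fstab, identical on both nodes - GFS needs the cluster
  # locking stack (cman/dlm) up before it can mount
  /dev/drbd0  /var/www  gfs  defaults,noatime  0  0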