Part 1: Setting up the servers
yum install ntp
ntpdate time.nist.gov
service ntpd start
Edit the /etc/ntp.conf file to use the following servers:
server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
server 3.pool.ntp.org
service ntpd restart
chkconfig ntpd on
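To confirm ntpd is actually syncing after the restart, check the peer list; one of the pool servers should eventually be marked with an asterisk:
ntpq -p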
The Red Hat Cluster must be set up before the GFS2 file systems can be created and mounted.
yum install openais cman rgmanager lvm2-cluster gfs2-utils ccs
Edit the /etc/cluster/cluster.conf file. REMEMBER: Always increment the "config_version" parameter in the cluster tag!
<?xml version="1.0"?>
<cluster config_version="24" name="web-production">
  <cman expected_votes="1" two_node="1"/>
  <fence_daemon clean_start="1" post_fail_delay="6" post_join_delay="3"/>
  <totem rrp_mode="none" secauth="off"/>
  <clusternodes>
    <clusternode name="bill" nodeid="1">
      <fence>
        <method name="ipmi">
          <device action="reboot" name="ipmi_bill"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="ted" nodeid="2">
      <fence>
        <method name="ipmi">
          <device action="reboot" name="ipmi_ted"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" ipaddr="billsp" login="root" name="ipmi_bill" passwd="PASSWORD-HERE"/>
    <fencedevice agent="fence_ipmilan" ipaddr="tedsp" login="root" name="ipmi_ted" passwd="PASSWORD-HERE"/>
  </fencedevices>
  <rm log_level="5">
    <resources>
      <clusterfs device="/dev/mapper/StorageTek2530-sites" fstype="gfs2" mountpoint="/sites" name="sites"/>
      <clusterfs device="/dev/mapper/StorageTek2530-databases" fstype="gfs2" mountpoint="/databases" name="databases"/>
      <clusterfs device="/dev/mapper/StorageTek2530-logs" fstype="gfs2" mountpoint="/logs" name="logs"/>
    </resources>
    <failoverdomains>
      <failoverdomain name="bill-only" nofailback="1" ordered="0" restricted="1">
        <failoverdomainnode name="bill"/>
      </failoverdomain>
      <failoverdomain name="ted-only" nofailback="1" ordered="0" restricted="1">
        <failoverdomainnode name="ted"/>
      </failoverdomain>
    </failoverdomains>
  </rm>
</cluster>
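Before relying on fencing, it can be worth testing each IPMI device by hand. A sketch, assuming this version of the fence_ipmilan agent takes the classic short options (-a address, -l login, -p password, -o action):
fence_ipmilan -a billsp -l root -p PASSWORD-HERE -o status
fence_ipmilan -a tedsp -l root -p PASSWORD-HERE -o status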
ccs_config_validate
passwd ricci
service ricci start
chkconfig ricci on
service modclusterd start
chkconfig modclusterd on
Push the configuration file to the other node (here, from bill to ted):
ccs -f /etc/cluster/cluster.conf -h ted --setconf
service cman start
chkconfig cman on
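Once cman is running on both nodes, membership and quorum can be checked with:
cman_tool nodes
clustat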
Create a partition on the new SCSI device /dev/mapper/mpatha using parted. NOTE: This part only needs to be done once on one server.
parted /dev/mapper/mpatha
mklabel gpt
mkpart primary 1 -1
set 1 lvm on
quit
parted -l
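If the new mpatha1 mapping does not appear under /dev/mapper on the second node, re-reading the partition table there may help (assuming device-mapper-multipath is in use, as the mpatha name suggests):
kpartx -a /dev/mapper/mpatha
partprobe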
Edit the /etc/lvm/lvm.conf file and set locking_type = 3 to allow for cluster locking.
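For reference, the setting lives in the global section of lvm.conf:
global {
    locking_type = 3
}
On RHEL-style systems the lvmconf script shipped with lvm2-cluster should make the same change:
lvmconf --enable-cluster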
In order to enable the LVM volumes you are creating in a cluster, the cluster infrastructure must be running and the cluster must be quorate.
service clvmd start
chkconfig clvmd on
chkconfig gfs2 on
Create LVM partitions on the raw drive available from the StorageTek. NOTE: This part only needs to be done once on one server.
pvcreate /dev/mapper/mpatha1
vgcreate -c y StorageTek2530 /dev/mapper/mpatha1
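The -c y flag marks the volume group as clustered; that can be confirmed by looking for a c in the last position of the Attr column of:
vgs StorageTek2530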
Now create the different partitions for the system: sites, databases, logs, home, root
lvcreate --name sites --size 350GB StorageTek2530
lvcreate --name databases --size 100GB StorageTek2530
lvcreate --name logs --size 50GB StorageTek2530
lvcreate --name root --size 50GB StorageTek2530
Make a temporary directory /root-b and copy everything from root's home directory into it, because it will be erased when we make the GFS2 file system.
Copy /root/.ssh/known_hosts to /etc/ssh/root_known_hosts so the file can be different on each server.
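A minimal sketch of those two steps, using the paths named above (cp -a preserves permissions and picks up dot files):
mkdir /root-b
cp -a /root/. /root-b/
cp /root/.ssh/known_hosts /etc/ssh/root_known_hosts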
Before doing the home directory, we have to remove it from the local LVM.
umount /home
lvremove bill_local/home
and on ted:
lvremove ted_local/home
Remove the entry in /etc/fstab referring to the /home directory on the local LVM. Then create the home partition on the StorageTek:
lvcreate --name home --size 50GB StorageTek2530
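At this point all five logical volumes should exist on the shared storage; lvs and vgs will show them and the space left in the volume group:
lvs StorageTek2530
vgs StorageTek2530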
Create GFS2 file systems on the LVM partitions created on the StorageTek. Make sure they are unmounted first. NOTE: This part only needs to be done once on one server.
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:sites /dev/mapper/StorageTek2530-sites
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:databases /dev/mapper/StorageTek2530-databases
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:logs /dev/mapper/StorageTek2530-logs
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:root /dev/mapper/StorageTek2530-root
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:home /dev/mapper/StorageTek2530-home
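The cluster name in each -t lock table (web-production here) has to match the name attribute in cluster.conf, otherwise the mounts will be refused. Depending on the gfs2-utils version, the superblock can be inspected with something like:
gfs2_tool sb /dev/mapper/StorageTek2530-sites table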
Mount the GFS2 partitions
Make the appropriate folders on each node (/home is already there).
mkdir /sites /logs /databases
Make sure the appropriate lines are in /etc/fstab
#GFS2 partitions shared in the cluster
/dev/mapper/StorageTek2530-root       /root       gfs2  defaults,acl  0 0
/dev/mapper/StorageTek2530-home       /home       gfs2  defaults,acl  0 0
/dev/mapper/StorageTek2530-databases  /databases  gfs2  defaults,acl  0 0
/dev/mapper/StorageTek2530-logs       /logs       gfs2  defaults,acl  0 0
/dev/mapper/StorageTek2530-sites      /sites      gfs2  defaults,acl  0 0
Once the GFS2 partitions are set up and in /etc/fstab, rgmanager can be started. This will mount the GFS2 partitions.
service rgmanager start
chkconfig rgmanager on
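After rgmanager comes up, the mounts can be confirmed with:
mount -t gfs2
df -h /sites /databases /logs /home /root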
To start the cluster software on a node, type the following commands in this order:
service cman start
service clvmd start
service gfs2 start
service rgmanager start
To stop the cluster software on a node, type the following commands in this order:
service ossec-hids stop
service rgmanager stop
service gfs2 stop
umount -at gfs2
service clvmd stop
service cman stop
If a service shows as 'failed' when checking services with clustat, disable it and then re-enable it:
clusvcadm -d service-name
clusvcadm -e service-name
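Running clustat again afterwards should show the service back in the started state:
clustat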
Have Shorewall start sooner in the boot process. Edit /etc/init.d/shorewall and change the line near the top from
# chkconfig: - 28 90
to
# chkconfig: - 18 90
Then re-register the service so the new start priority takes effect:
chkconfig shorewall off
chkconfig shorewall on
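The new priority can be verified from the runlevel symlink, which should now read S18shorewall instead of S28shorewall:
ls /etc/rc3.d/ | grep shorewall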