Setting up a Hosting Environment, Part 3: RedHat Cluster and GFS2 (mossiso.com, published 2013-02-01)<\/p>\n
Previous posts in this series:<\/p>\n
Part 1: Setting up the servers<\/p>\n Part 2: Connecting the Array<\/p>\n
Required reading: https:\/\/alteeve.com\/w\/2-Node_Red_Hat_KVM_Cluster_Tutorial. Seriously. Don’t do anything until you read his tutorial a couple of times. Might as well read a bunch of his other tutorials, too.<\/p>\n
Unless mentioned, the following commands should be \u2026<\/p>\n
RedHat Cluster must be set up before the GFS2 file systems can be created and mounted.<\/p>\n
RedHat Cluster and GFS2 Setup<\/h2>\n
\n
Set date\/time to be accurate and within a few minutes of each other.<\/h3>\n
\n
\n
yum install ntp<\/code><\/li>\n
ntpdate time.nist.gov<\/code><\/li>\n<\/ul>\n<\/li>\n
\n
service ntpd start<\/code><\/li>\n
Edit the \/etc\/ntp.conf<\/code> file to use the following servers:<\/li>\n
server 0.pool.ntp.org\r\nserver 1.pool.ntp.org\r\nserver 2.pool.ntp.org\r\nserver 3.pool.ntp.org<\/pre>\n<\/li>\n<\/ul>\n<\/li>\n
\n
service ntpd restart<\/code><\/li>\n
chkconfig ntpd on<\/code><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n
Cluster setup<\/h3>\n
\n
\n
yum install openais cman rgmanager lvm2-cluster gfs2-utils ccs<\/code><\/li>\n
Create \/etc\/cluster\/cluster.conf<\/code>. REMEMBER: Always increment the \u201cconfig_version\u201d parameter in the cluster<\/code> tag!\n
\n
<?xml version=\"1.0\"?>\r\n <cluster config_version=\"24\" name=\"web-production\">\r\n <cman expected_votes=\"1\" two_node=\"1\"\/>\r\n <fence_daemon clean_start=\"1\" post_fail_delay=\"6\" post_join_delay=\"3\"\/>\r\n <totem rrp_mode=\"none\" secauth=\"off\"\/>\r\n <clusternodes>\r\n <clusternode name=\"bill\" nodeid=\"1\">\r\n <fence>\r\n <method name=\"ipmi\">\r\n <device action=\"reboot\" name=\"ipmi_bill\"\/>\r\n <\/method>\r\n <\/fence>\r\n <\/clusternode>\r\n <clusternode name=\"ted\" nodeid=\"2\">\r\n <fence>\r\n <method name=\"ipmi\">\r\n <device action=\"reboot\" name=\"ipmi_ted\"\/>\r\n <\/method>\r\n <\/fence>\r\n <\/clusternode>\r\n <\/clusternodes>\r\n <fencedevices>\r\n <fencedevice agent=\"fence_ipmilan\" ipaddr=\"billsp\" login=\"root\" name=\"ipmi_bill\" passwd=\"PASSWORD-HERE\"\/>\r\n <fencedevice agent=\"fence_ipmilan\" ipaddr=\"tedsp\" login=\"root\" name=\"ipmi_ted\" passwd=\"PASSWORD-HERE\"\/>\r\n <\/fencedevices>\r\n <rm log_level=\"5\">\r\n <resources>\r\n <clusterfs device=\"\/dev\/mapper\/StorageTek2530-sites\" fstype=\"gfs2\" mountpoint=\"\/sites\" name=\"sites\"\/>\r\n <clusterfs device=\"\/dev\/mapper\/StorageTek2530-databases\" fstype=\"gfs2\" mountpoint=\"\/databases\" name=\"databases\"\/>\r\n <clusterfs device=\"\/dev\/mapper\/StorageTek2530-logs\" fstype=\"gfs2\" mountpoint=\"\/logs\" name=\"logs\"\/>\r\n <\/resources>\r\n <failoverdomains>\r\n <failoverdomain name=\"bill-only\" nofailback=\"1\" ordered=\"0\" restricted=\"1\">\r\n <failoverdomainnode name=\"bill\"\/>\r\n <\/failoverdomain>\r\n <failoverdomain name=\"ted-only\" nofailback=\"1\" ordered=\"0\" restricted=\"1\">\r\n <failoverdomainnode name=\"ted\"\/>\r\n <\/failoverdomain>\r\n <\/failoverdomains>\r\n <\/rm>\r\n <\/cluster><\/pre>\n<\/div>\n<\/li>\n<\/ul>\n<\/li>\n
\n
ccs_config_validate<\/code><\/li>\n<\/ul>\n<\/li>\n
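Forgetting to bump config_version<\/code> is a common reason a pushed config is ignored. Below is a hypothetical helper (not from the original post) that reads the current value and increments it, demonstrated on a scratch file; for real use, point conf<\/code> at \/etc\/cluster\/cluster.conf and run ccs_config_validate<\/code> afterwards.<\/p>\n

```shell
# Bump the config_version attribute in a cluster.conf before pushing it.
# Demonstrated on a scratch file standing in for /etc/cluster/cluster.conf.
conf=$(mktemp)
printf '<cluster config_version="24" name="web-production">\n</cluster>\n' > "$conf"
# Pull out the current version number...
ver=$(sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' "$conf")
# ...and rewrite the attribute with version + 1.
sed -i "s/config_version=\"$ver\"/config_version=\"$((ver + 1))\"/" "$conf"
grep -o 'config_version="[0-9]*"' "$conf"
```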
\n
passwd ricci<\/code><\/li>\n<\/ul>\n<\/li>\n
\n
service ricci start<\/code><\/li>\n
chkconfig ricci on<\/code><\/li>\n<\/ul>\n<\/li>\n
\n
service modclusterd start<\/code><\/li>\n
chkconfig modclusterd on<\/code><\/li>\n<\/ul>\n<\/li>\n
\n
ccs -f \/etc\/cluster\/cluster.conf -h ted --setconf<\/code><\/li>\n<\/ul>\n<\/li>\n
\n
service cman start<\/code><\/li>\n<\/ul>\n<\/li>\n
\n
chkconfig cman on<\/code><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n
\n
Create GFS2 partitions<\/h3>\n
Create a partition on the new scsi device \/dev\/mapper\/mpatha using parted. NOTE: This part only needs to be done once on one server.<\/p>\n
\n
parted \/dev\/mapper\/mpatha<\/code><\/li>\n
mklabel gpt<\/code><\/li>\n
mkpart primary 1 -1<\/code><\/li>\n
set 1 lvm on<\/code><\/li>\n
quit<\/code><\/li>\n
\n
parted -l<\/code><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n
Edit the \/etc\/lvm\/lvm.conf<\/code> file and set locking_type = 3<\/code> to allow for cluster locking.<\/p>\n
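That edit can also be scripted; here is a sketch on a scratch copy (swap in the real \/etc\/lvm\/lvm.conf for actual use).<\/p>\n

```shell
# Switch LVM from local locking (1) to cluster locking (3).
# Shown on a scratch file standing in for /etc/lvm/lvm.conf.
lvmconf_copy=$(mktemp)
echo '    locking_type = 1' > "$lvmconf_copy"
sed -i 's/locking_type = 1/locking_type = 3/' "$lvmconf_copy"
grep locking_type "$lvmconf_copy"
```

The lvm2-cluster package also ships an lvmconf --enable-cluster<\/code> helper that makes the same change, which may be safer than hand-editing.<\/p>\n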
\n
In order to enable the LVM volumes you are creating in a cluster, the cluster infrastructure must be running and the cluster must be quorate.<\/p>\n
service clvmd start<\/code><\/li>\n
chkconfig clvmd on<\/code><\/li>\n
chkconfig gfs2 on<\/code><\/li>\n<\/ul>\n
Create LVM partitions on the raw drive available from the StorageTek. NOTE: This part only needs to be done once on one server.<\/p>\n
\n
pvcreate \/dev\/mapper\/mpatha1<\/code><\/li>\n
vgcreate -c y StorageTek2530 \/dev\/mapper\/mpatha1<\/code><\/li>\n<\/ul>\n
Now create the different partitions for the system: sites, databases, logs, home, root<\/p>\n
\n
lvcreate --name sites --size 350GB StorageTek2530<\/code><\/li>\n
lvcreate --name databases --size 100GB StorageTek2530<\/code><\/li>\n
lvcreate --name logs --size 50GB StorageTek2530<\/code><\/li>\n
lvcreate --name root --size 50GB StorageTek2530<\/code><\/li>\n<\/ul>\n
Make a temporary directory \/root-b<\/code> and copy everything from root\u2019s home directory to there, because it will be erased when we make the GFS2 file system.<\/p>\n Copy \/root\/.ssh\/known_hosts to \/etc\/ssh\/root_known_hosts so the file is different for both servers.<\/p>\n
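The copy commands themselves are not shown in the post; here is one possible sketch. For real use the paths would be \/root and \/root-b; scratch paths are used below so the example can run anywhere.<\/p>\n

```shell
# Back up root's home directory before mkfs.gfs2 erases it.
# ROOT stands in for /root and BACKUP for /root-b.
ROOT=$(mktemp -d)
BACKUP="${ROOT}-b"
mkdir -p "$ROOT/.ssh"
echo "example" > "$ROOT/.ssh/known_hosts"
mkdir -p "$BACKUP"
cp -a "$ROOT/." "$BACKUP/"   # the trailing /. copies dotfiles too
ls -A "$BACKUP"
```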
Before doing the home directory, we have to remove it from the local LVM.<\/p>\n
\n
umount \/home<\/code><\/li>\n
lvremove bill_local\/home<\/code> and on ted
lvremove ted_local\/home<\/code><\/li>\n
Remove the line in \/etc\/fstab<\/code> referring to the \/home directory on the local LVM<\/li>\n
\n
lvcreate --name home --size 50GB StorageTek2530<\/code><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n
Create GFS2 file systems on the LVM partitions created on the StorageTek. Make sure they are unmounted first. NOTE: This part only needs to be done once on one server.<\/p>\n
\n
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:sites \/dev\/mapper\/StorageTek2530-sites<\/code><\/li>\n
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:databases \/dev\/mapper\/StorageTek2530-databases<\/code><\/li>\n
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:logs \/dev\/mapper\/StorageTek2530-logs<\/code><\/li>\n
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:root \/dev\/mapper\/StorageTek2530-root<\/code><\/li>\n
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:home \/dev\/mapper\/StorageTek2530-home<\/code><\/li>\n<\/ul>\n
Mount the GFS2 partitions<\/p>\n
\n
\n
\n
Make the appropriate folders on each node (\/home is already there).<\/p>\n
mkdir \/sites \/logs \/databases<\/code><\/li>\n<\/ul>\n
Make sure the appropriate lines are in \/etc\/fstab<\/p>\n
#GFS2 partitions shared in the cluster\r\n\/dev\/mapper\/StorageTek2530-root \/root gfs2 defaults,acl 0 0\r\n\/dev\/mapper\/StorageTek2530-home \/home gfs2 defaults,acl 0 0\r\n\/dev\/mapper\/StorageTek2530-databases \/databases gfs2 defaults,acl 0 0\r\n\/dev\/mapper\/StorageTek2530-logs \/logs gfs2 defaults,acl 0 0\r\n\/dev\/mapper\/StorageTek2530-sites \/sites gfs2 defaults,acl 0 0<\/pre>\n
Once the GFS2 partitions are set up and in \/etc\/fstab<\/code>, rgmanager can be started. This will mount the GFS2 partitions.<\/p>\n
\n
service rgmanager start<\/code><\/li>\n
chkconfig rgmanager on<\/code><\/li>\n<\/ul>\n
Starting Cluster Software<\/h3>\n
To start the cluster software on a node, type the following commands in this order:<\/p>\n
\n
service cman start<\/code><\/li>\n
service clvmd start<\/code><\/li>\n
service gfs2 start<\/code><\/li>\n
service rgmanager start<\/code><\/li>\n<\/ul>\n
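The ordering matters (cman must be up before clvmd, clvmd before gfs2, and so on), so the sequence above can be wrapped in a small helper. This is a sketch, not from the original post; service<\/code> is the stock RHEL init wrapper.<\/p>\n

```shell
# Start the cluster stack in dependency order, aborting on the first failure
# so a later service is never started on top of a broken earlier one.
start_cluster() {
  local svc
  for svc in cman clvmd gfs2 rgmanager; do
    service "$svc" start || { echo "failed to start $svc" >&2; return 1; }
  done
}
```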
Stopping Cluster Software<\/h3>\n
To stop the cluster software on a node, type the following commands in this order:<\/p>\n
\n
service ossec-hids stop<\/code><\/li>\n
\n
service rgmanager stop<\/code><\/li>\n
service gfs2 stop<\/code><\/li>\n
umount -at gfs2<\/code><\/li>\n
service clvmd stop<\/code><\/li>\n
service cman stop<\/code><\/li>\n<\/ul>\n
Cluster tips<\/h3>\n
If a service shows as \u2018failed\u2019 when checking on services with clustat<\/code>, disable and then re-enable it:<\/p>\n
\n
clusvcadm -d service-name<\/code><\/li>\n
clusvcadm -e service-name<\/code><\/li>\n<\/ul>\n
\n
Have Shorewall start sooner in the boot process. Edit \/etc\/init.d\/shorewall<\/code> and change the line near the top from # chkconfig: - 28 90<\/code> to\n
\n
# chkconfig: - 18 90<\/code><\/li>\n<\/ul>\n<\/li>\n
\n
chkconfig shorewall off<\/code><\/li>\n
chkconfig shorewall on<\/code><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n
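That header edit can be done with a one-line sed; here is a sketch on a scratch copy (the real file is \/etc\/init.d\/shorewall, and the chkconfig off\/on steps are still needed afterwards so the new priority takes effect).<\/p>\n

```shell
# Lower shorewall's start priority from 28 to 18 in the init-script
# chkconfig header. Shown on a scratch file, not the real init script.
initscript=$(mktemp)
printf '#!/bin/sh\n# chkconfig: - 28 90\n# description: firewall\n' > "$initscript"
sed -i 's/^# chkconfig: - 28 90/# chkconfig: - 18 90/' "$initscript"
grep '^# chkconfig' "$initscript"
```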