This document describes and tests a three-node DRBD (single-primary) configuration managed by Pacemaker.
At any given time only one node holds /dev/drbd0 as primary; DRBD 9's auto-promote feature switches each node between the primary and secondary roles automatically, depending on whether the device is in use.
一 OS environment
Prepare three operating system instances, each with two network interfaces and one dedicated disk for DRBD.
# cat /etc/openEuler-release
openEuler release 20.03 (LTS-SP1)
# uname -a
Linux hatest3 4.19.90-2012.5.0.0054.oe1.x86_64 #1 SMP Tue Dec 22 15:58:47 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.90.172 hatest1
172.16.90.173 hatest2
172.16.90.174 hatest3
192.168.1.11 hatest11
192.168.1.12 hatest22
192.168.1.13 hatest33
# cat /etc/selinux/config
SELINUX=disabled
SELINUXTYPE=targeted
# systemctl stop firewalld
# systemctl disable firewalld
Each system has one dedicated disk for DRBD; throughout this document it is /dev/sdb, 1 GiB in size.
# fdisk -l /dev/sdb
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
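Before configuring DRBD, make sure the DRBD 9 userland, kernel module, and pcs are installed on every node (the exact package names, e.g. drbd-utils and a matching kmod, vary by repository and are an assumption here, not an openEuler-specific fact). A minimal preflight sketch:

```shell
#!/bin/bash
# Preflight sketch: report whether the tools used in this document are present.
check_cmd() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "ok: $1"
    else
        echo "missing: $1"
    fi
}

for c in drbdadm drbdsetup pcs mkfs.xfs; do
    check_cmd "$c"
done
# Kernel module check (commented out; requires the module to be installed):
# modinfo drbd >/dev/null 2>&1 || echo "missing: drbd kernel module"
```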
二 Configure three-node DRBD
Create the resource file r0.res on all three nodes, with identical content:
# cat /etc/drbd.d/r0.res
resource r0 {
    options {
        auto-promote yes;
    }
    volume 0 {
        device /dev/drbd0;
        disk /dev/sdb;
        meta-disk internal;
    }
    on hatest1 {
        address 192.168.1.11:7788;
        node-id 0;
    }
    on hatest2 {
        address 192.168.1.12:7788;
        node-id 1;
    }
    on hatest3 {
        address 192.168.1.13:7788;
        node-id 2;
    }
    connection-mesh {
        hosts hatest1 hatest2 hatest3;
    }
}
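With auto-promote, the first node to open /dev/drbd0 becomes primary without an explicit drbdadm primary call. With three nodes, DRBD 9 can additionally use its own quorum mechanism to guard against split-brain; a hedged fragment of the options section (the quorum options assume a DRBD version of 9.0.7 or later and are not part of the tested configuration above):

```
resource r0 {
    options {
        auto-promote yes;
        quorum majority;        # writes require 2 of the 3 nodes reachable
        on-no-quorum io-error;  # fail I/O rather than let data diverge
    }
    # volume, on, and connection-mesh sections unchanged
}
```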
Run on all three nodes:
# drbdadm create-md r0
# drbdadm up r0
On hatest1, run:
# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
0:r0/0 Connected(3*) Secondary(3*) Inco(hatest1)/Inco(hatest3,hatest2)
# drbdsetup primary r0 --force
# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
0:r0/0 Connected(3*) Prim(hatest1)/Seco(hatest3,hatest2) UpTo(hatest1)/Inco(hatest2,hatest3)
Inco(hatest2,hatest3) indicates that the data on hatest2 and hatest3 is still inconsistent; wait a while for the initial sync to finish, then check again.
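Rather than re-running drbd-overview by hand, the wait can be scripted. The check below is a sketch that keys on the Inconsistent keyword in drbdadm status output (drbd-utils 9.x output format assumed); the sample strings are abridged illustrations, not captured output:

```shell
#!/bin/bash
# Return success when the given status text reports no Inconsistent disk.
all_uptodate() {
    ! grep -q 'Inconsistent' <<<"$1"
}

# Abridged sample lines in the style of `drbdadm status r0`:
syncing='hatest2 role:Secondary peer-disk:Inconsistent done:42.16'
synced='hatest2 role:Secondary peer-disk:UpToDate'

all_uptodate "$syncing" || echo "still syncing"
all_uptodate "$synced" && echo "in sync"

# On the real cluster one would poll:
#   until all_uptodate "$(drbdadm status r0)"; do sleep 5; done
```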
Once the sync completes, the three nodes report the following states:
[root@hatest1 ~]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
0:r0/0 Connected(3*) Prim(hatest1)/Seco(hatest3,hatest2) UpTo(hatest1)/UpTo(hatest2,hatest3)
[root@hatest2 ~]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
0:r0/0 Connected(3*) Seco(hatest2,hatest3)/Prim(hatest1) UpTo(hatest2)/UpTo(hatest1,hatest3)
[root@hatest3 ~]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
0:r0/0 Connected(3*) Seco(hatest3,hatest2)/Prim(hatest1) UpTo(hatest3)/UpTo(hatest1,hatest2)
Detailed status on the primary:
[root@hatest1 ~]# drbdsetup status r0 --verbose --statistics
r0 node-id:0 role:Primary suspended:no
write-ordering:flush
volume:0 minor:0 disk:UpToDate quorum:yes
size:2096988 read:4194104 written:0 al-writes:0 bm-writes:0 upper-pending:0 lower-pending:0 al-suspended:no blocked:no
hatest2 node-id:1 connection:Connected role:Secondary congested:no ap-in-flight:0 rs-in-flight:0
volume:0 replication:Established peer-disk:UpToDate resync-suspended:no
received:0 sent:2096988 out-of-sync:0 pending:0 unacked:0
hatest3 node-id:2 connection:Connected role:Secondary congested:no ap-in-flight:0 rs-in-flight:0
volume:0 replication:Established peer-disk:UpToDate resync-suspended:no
received:0 sent:2096988 out-of-sync:0 pending:0 unacked:0
[root@hatest1 ~]# mkfs.xfs -f /dev/drbd0
meta-data=/dev/drbd0 isize=512 agcount=4, agsize=131062 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=524247, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Verification:
Whichever node opens /dev/drbd0 is automatically promoted to primary; when no node is using the device, all nodes remain secondary.
[root@hatest1 ~]# mount /dev/drbd0 /media/
[root@hatest1 ~]# umount /media
[root@hatest1 ~]# drbdsetup secondary r0
[root@hatest2 ~]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
0:r0/0 Connected(3*) Secondary(3*) UpTo(hatest2)/UpTo(hatest3,hatest1)
[root@hatest2 ~]# mount /dev/drbd0 /media/
[root@hatest2 ~]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
0:r0/0 Connected(3*) Prim(hatest2)/Seco(hatest3,hatest1) UpTo(hatest2)/UpTo(hatest3,hatest1) /media xfs 2.0G 47M 2.0G 3%
[root@hatest2 ~]# umount /media/
[root@hatest2 ~]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
0:r0/0 Connected(3*) Secondary(3*) UpTo(hatest2)/UpTo(hatest1,hatest3)
Finally, bring the resource down on every node; from here on Pacemaker will manage it:
# drbdadm down r0
三 Configure the high-availability cluster
Run on all three nodes (the hacluster user must also have a password set, since pcs host auth below authenticates as that user):
# systemctl start pcsd
# systemctl enable pcsd
# mkdir /data
On hatest1, run:
[root@hatest1 ~]# pcs host auth hatest1 hatest2 hatest3
Username: hacluster
Password:
hatest1: Authorized
hatest2: Authorized
hatest3: Authorized
[root@hatest1 ~]# pcs cluster setup hacluster hatest1 addr=172.16.90.172 hatest2 addr=172.16.90.173 hatest3 addr=172.16.90.174
Destroying cluster on hosts: 'hatest1', 'hatest2', 'hatest3'...
hatest1: Successfully destroyed cluster
hatest2: Successfully destroyed cluster
hatest3: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'hatest1', 'hatest2', 'hatest3'
hatest2: successful removal of the file 'pcsd settings'
hatest1: successful removal of the file 'pcsd settings'
hatest3: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'hatest1', 'hatest2', 'hatest3'
hatest1: successful distribution of the file 'corosync authkey'
hatest1: successful distribution of the file 'pacemaker authkey'
hatest2: successful distribution of the file 'corosync authkey'
hatest2: successful distribution of the file 'pacemaker authkey'
hatest3: successful distribution of the file 'corosync authkey'
hatest3: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'hatest1', 'hatest2', 'hatest3'
hatest1: successful distribution of the file 'corosync.conf'
hatest2: successful distribution of the file 'corosync.conf'
hatest3: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
[root@hatest1 ~]# pcs cluster start --all
hatest1: Starting Cluster...
hatest2: Starting Cluster...
hatest3: Starting Cluster...
[root@hatest1 ~]# pcs cluster enable --all
hatest1: Cluster Enabled
hatest2: Cluster Enabled
hatest3: Cluster Enabled
[root@hatest1 ~]# pcs property set stonith-enabled=false
Note: fencing is disabled here only to simplify the test; a production DRBD cluster should keep stonith enabled to prevent data divergence.
[root@hatest1 ~]# pcs resource create drbd_dev systemd:drbd clone clone-max=3 clone-node-max=1
[root@hatest1 ~]# pcs resource create drbd_mount ocf:heartbeat:Filesystem device=/dev/drbd0 directory=/data fstype=xfs
[root@hatest1 ~]# pcs status
Cluster name: hacluster
Cluster Summary:
* Stack: corosync
* Current DC: hatest3 (version 2.0.4-6.oe1-2deceaa3ae) - partition with quorum
* Last updated: Tue Feb 2 12:05:57 2021
* Last change: Tue Feb 2 12:05:36 2021 by root via cibadmin on hatest1
* 3 nodes configured
* 4 resource instances configured
Node List:
* Online: [ hatest1 hatest2 hatest3 ]
Full List of Resources:
* Clone Set: drbd_dev-clone [drbd_dev]:
* Started: [ hatest1 hatest2 hatest3 ]
* drbd_mount (ocf::heartbeat:Filesystem): Started hatest1
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@hatest1 ~]# pcs constraint order start drbd_dev-clone then drbd_mount
四 Verification
[root@hatest1 ~]# pcs resource move drbd_mount hatest3
As expected, /dev/drbd0 is no longer mounted on hatest1, while on hatest3 it is now mounted at /data.
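To confirm the move without reading the full status output, the local role can be extracted from the first line of drbdsetup status output; a small sketch (GNU grep with -P support assumed, and the sample string is an illustration in the format shown earlier, not captured output):

```shell
#!/bin/bash
# Extract the first role:... value, i.e. the local node's role.
role_of() {
    grep -oP 'role:\K\w+' <<<"$1" | head -n1
}

# Sample first line in the style of `drbdsetup status r0` on hatest3:
sample='r0 node-id:2 role:Primary suspended:no'
role_of "$sample"    # prints: Primary

# On the cluster: role_of "$(drbdsetup status r0)"
```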