pacemaker fence_xvm: libvirtd

2024-05-31 13:48
Tags: fence pacemaker libvirtd xvm

This article describes how to configure fence_xvm fencing for libvirt-managed virtual machines in a Pacemaker cluster.

Fence devices fall into two categories, internal and external. Common internal fence devices include the IBM RSA II card, the HP iLO card, and IPMI-based devices; external fence devices include UPSes, SAN switches, network switches, and so on.

This example covers virtual machines managed by libvirtd, with a Linux host.

1. OS environment

The virtual machines run openEuler 20.03 LTS SP1 (aarch64).

# cat /etc/openEuler-release
openEuler release 20.03 (LTS-SP1)
# uname -r
4.19.90-2012.4.0.0053.oe1.aarch64

# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.16.72.223 hatest1
172.16.72.224 hatest2
172.16.72.229 server

Disable SELinux and firewalld, configure NTP, and set up the everything and EPOL yum repositories.
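A minimal sketch of these preparation steps, run on every node; it assumes chrony provides the NTP service and that the repository files are already in place:

# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# setenforce 0                       (takes effect immediately; the config change applies after a reboot)
# systemctl disable --now firewalld
# systemctl enable --now chronyd     (assumption: chrony is the NTP implementation in use)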


2. Host setup

Host: 172.16.72.229
# dnf search fence
# dnf install fence-virt fence-virtd    (install the fence-virt / fence-virtd packages and their dependencies)

# mkdir /etc/cluster
# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
    (reads random data from /dev/urandom and writes a 128-byte random key)
# fence_virtd -c  (note this step: it is an interactive configuration, see the example below)
# systemctl restart fence_virtd.service
# systemctl status fence_virtd.service
# systemctl enable fence_virtd.service
# systemctl disable --now firewalld
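A quick sanity check on the host after the steps above (a sketch using only commands already referenced in this article):

# ls -l /etc/cluster/fence_xvm.key      (expect a 128-byte file)
# systemctl is-active fence_virtd       (expect "active")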

Example:
# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:

Available backends:
    libvirt 0.3
Available listeners:
    multicast 1.2
    tcp 0.1
    serial 0.4

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0  # the VMs' gateway device, i.e. the interface the host uses to communicate with the VMs

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:

Configuration complete.

=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }

}

listeners {
    multicast {
        port = "1229";
        family = "ipv4";
        interface = "br0";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }

}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y


# fence_xvm -h
usage: fence_xvm [args]
  -d                    Specify (stdin) or increment (command line) debug level
  -i <family>           IP Family ([auto], ipv4, ipv6)                         
  -a <address>          Multicast address (default=225.0.0.12 / ff05::3:1)     
  -p <port>             TCP, Multicast, VMChannel, or VM socket port           
                        (default=1229)                                         
  -r <retrans>          Multicast retransmit time (in 1/10sec; default=20)     
  -C <auth>             Authentication (none, sha1, [sha256], sha512)          
  -c <hash>             Packet hash strength (none, sha1, [sha256], sha512)    
  -k <file>             Shared key file (default=/etc/cluster/fence_xvm.key)   
  -H <domain>           Virtual Machine (domain name) to fence                 
  -u                    Treat [domain] as UUID instead of domain name. This is
                        provided for compatibility with older fence_xvmd       
                        installations.                                         
  -o <operation>        Fencing action (null, off, on, [reboot], status, list,
                        list-status, monitor, validate-all, metadata)          
  -t <timeout>          Fencing timeout (in seconds; default=30)               
  -?                    Help (alternate)                                       
  -h                    Help                                                   
  -V                    Display version and exit                               
  -w <delay>            Fencing delay (in seconds; default=0)                  

With no command line argument, arguments are read from standard input.
Arguments read from standard input take the form of:

    arg1=value1
    arg2=value2

  debug                 Specify (stdin) or increment (command line) debug level
  ip_family             IP Family ([auto], ipv4, ipv6)                         
  multicast_address     Multicast address (default=225.0.0.12 / ff05::3:1)     
  ipport                TCP, Multicast, VMChannel, or VM socket port           
                        (default=1229)                                         
  retrans               Multicast retransmit time (in 1/10sec; default=20)     
  auth                  Authentication (none, sha1, [sha256], sha512)          
  hash                  Packet hash strength (none, sha1, [sha256], sha512)    
  key_file              Shared key file (default=/etc/cluster/fence_xvm.key)   
  port                  Virtual Machine (domain name) to fence                 
  use_uuid              Treat [domain] as UUID instead of domain name. This is
                        provided for compatibility with older fence_xvmd       
                        installations.                                         
  action                Fencing action (null, off, on, [reboot], status, list,
                        list-status, monitor, validate-all, metadata)          
  timeout               Fencing timeout (in seconds; default=30)               
  delay                 Fencing delay (in seconds; default=0)          

 

3. Virtual machine setup

Virtual machines: 172.16.72.223 / 172.16.72.224

mkdir /etc/cluster
scp root@172.16.72.229:/etc/cluster/* /etc/cluster
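Putting the guest-side steps together as one sketch, run on each cluster node (it assumes the fence-virt package is available from the configured repositories and that root ssh from the guests to the host works):

# dnf install -y fence-virt
# mkdir -p /etc/cluster
# scp root@172.16.72.229:/etc/cluster/fence_xvm.key /etc/cluster/
# ls -l /etc/cluster/fence_xvm.key      (must be byte-identical to the copy on the host)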

The available fence agents can be listed on a node:
 stonith_admin -I
 stonith_admin -M -a fence_xvm    (show detailed metadata for fence_xvm)

# which fence_xvm
/usr/sbin/fence_xvm
# rpm -qf /usr/sbin/fence_xvm
fence-virt-1.0.0-1.oe1.aarch64
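Before configuring the cluster it is worth verifying, from a guest, that fence_xvm can actually reach fence_virtd on the host over multicast; a sketch using only the defaults configured above:

# fence_xvm -o list          (run on hatest1 or hatest2; should list the same domains the host sees)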

4. Cluster configuration

Run on hatest1:

Authenticate the nodes
# systemctl start pcsd
# echo '111111' | passwd --stdin hacluster
# pcs host auth hatest1 hatest2
Username: hacluster
Password:
hatest2: Authorized
hatest1: Authorized
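Note that the hacluster password has to be set on both nodes, not only on hatest1. The authentication can also be done non-interactively; a sketch assuming the pcs 0.10 syntax used above:

# echo '111111' | passwd --stdin hacluster           (run on both hatest1 and hatest2)
# pcs host auth hatest1 hatest2 -u hacluster -p 111111    (run on hatest1 only)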

Create the cluster
# pcs cluster setup hacluster hatest1 addr=172.16.72.223 hatest2 addr=172.16.72.224
Destroying cluster on hosts: 'hatest1', 'hatest2'...
hatest2: Successfully destroyed cluster
hatest1: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'hatest1', 'hatest2'
hatest1: successful removal of the file 'pcsd settings'
hatest2: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'hatest1', 'hatest2'
hatest2: successful distribution of the file 'corosync authkey'
hatest2: successful distribution of the file 'pacemaker authkey'
hatest1: successful distribution of the file 'corosync authkey'
hatest1: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'hatest1', 'hatest2'
hatest1: successful distribution of the file 'corosync.conf'
hatest2: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.

Start the cluster and enable it at boot
# pcs cluster start --all
hatest2: Starting Cluster...
hatest1: Starting Cluster...
# pcs cluster enable --all
hatest1: Cluster Enabled
hatest2: Cluster Enabled

Set cluster properties
# pcs property set no-quorum-policy=ignore
# pcs property --all |grep stonith-enabled
 stonith-enabled: true
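stonith-enabled already defaults to true here; if it had been switched off at some point, fencing can be re-enabled and both properties checked with a short sketch:

# pcs property set stonith-enabled=true
# pcs property --all | grep -E 'stonith-enabled|no-quorum-policy'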


View the fence agent type information
# pcs stonith describe fence_xvm
fence_xvm - Fence agent for virtual machines

fence_xvm is an I/O Fencing agent which can be used with virtual machines.

Stonith options:
  debug: Specify (stdin) or increment (command line) debug level
  ip_family: IP Family ([auto], ipv4, ipv6)
  multicast_address: Multicast address (default=225.0.0.12 / ff05::3:1)
  ipport: TCP, Multicast, VMChannel, or VM socket port (default=1229)
  retrans: Multicast retransmit time (in 1/10sec; default=20)
  auth: Authentication (none, sha1, [sha256], sha512)
  hash: Packet hash strength (none, sha1, [sha256], sha512)
  key_file: Shared key file (default=/etc/cluster/fence_xvm.key)
  port: Virtual Machine (domain name) to fence
  use_uuid: Treat [domain] as UUID instead of domain name. This is provided for compatibility with older fence_xvmd installations.
  timeout: Fencing timeout (in seconds; default=30)
  delay: Fencing delay (in seconds; default=0)
  domain: Virtual Machine (domain name) to fence (deprecated; use port)
  pcmk_host_map: A mapping of host names to ports numbers for devices that do not support host names. Eg. node1:1;node2:2,3 would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2
  pcmk_host_list: A list of machines controlled by this device (Optional unless pcmk_host_check=static-list).
  pcmk_host_check: How to determine which machines are controlled by the device. Allowed values: dynamic-list (query the device via the 'list' command), static-list (check the pcmk_host_list attribute), status
                   (query the device via the 'status' command), none (assume every device can fence every machine)
  pcmk_delay_max: Enable a random delay for stonith actions and specify the maximum of random delay. This prevents double fencing when using slow devices such as sbd. Use this to enable a random delay for
                  stonith actions. The overall delay is derived from this random delay value adding a static delay so that the sum is kept below the maximum delay.
  pcmk_delay_base: Enable a base delay for stonith actions and specify base delay value. This prevents double fencing when different delays are configured on the nodes. Use this to enable a static delay for
                   stonith actions. The overall delay is derived from a random delay value adding this static delay so that the sum is kept below the maximum delay.
  pcmk_action_limit: The maximum number of actions can be performed in parallel on this device Cluster property concurrent-fencing=true needs to be configured first. Then use this to specify the maximum number
                     of actions can be performed in parallel on this device. -1 is unlimited.

Default operations:
  monitor: interval=60s

Add an ordinary cluster resource
# pcs resource create dummy ocf:heartbeat:Dummy
# pcs status resources
  * dummy    (ocf::heartbeat:Dummy):     Started hatest1


Run on the host:
List the domains:
# fence_xvm -o list
hatest1                          8f38adce-fbbf-46ec-be0c-77a88a30a7e9 on
hatest2                          7d8e4f03-9d7e-4177-8e3f-98bf879e8ff3 on
The names shown here are the libvirt domain names.
Confirm fencing works:
Operations such as on, off, reboot, and status are supported.
# fence_xvm -o off -H hatest1
# fence_xvm -o off -H hatest2
Then start both virtual machines again.
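On the host, the domain list can also be cross-checked against libvirt directly; a sketch that assumes virsh talks to the same qemu:///system URI configured in the libvirt backend:

# virsh -c qemu:///system list --all        (domain names must match the fence_xvm -o list output)
# fence_xvm -o status -H hatest1; echo $?   (exit code 0 is expected while the domain is on)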


Create a fence_xvm stonith resource named vmfence

pcs stonith create vmfence fence_xvm pcmk_host_map="hatest1:1;hatest2:2,3" op monitor interval=30s # for devices that do not know the host names, map each host name to the port(s) that control it
or
pcs stonith create vmfence fence_xvm pcmk_host_map="hatest1:hatest1;hatest2:hatest2" op monitor interval=30s # the form used in this example
Note: the mapping format is "hostname:domain", with multiple entries separated by ';'.
The domain is the virtual machine name as listed by fence_xvm -o list on the host.
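Once the stonith resource exists, the mapping can also be exercised through the cluster itself rather than by calling fence_xvm manually; a sketch (this really fences, i.e. reboots, the target node, so only run it in a test window):

# pcs stonith fence hatest2       (ask the cluster to fence hatest2 through vmfence)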

# pcs stonith show vmfence
Warning: This command is deprecated and will be removed. Please use 'pcs stonith config' instead.
 Resource: vmfence (class=stonith type=fence_xvm)
  Attributes: pcmk_host_map=hatest1:hatest1;hatest2:hatest2
  Operations: monitor interval=30s (vmfence-monitor-interval-30s)


# pcs status
Cluster name: hacluster
Cluster Summary:
  * Stack: corosync
  * Current DC: hatest1 (version 2.0.4-6.oe1-2deceaa3ae) - partition with quorum
  * Last updated: Mon Feb  1 11:27:14 2021
  * Last change:  Mon Feb  1 11:27:11 2021 by hacluster via crmd on hatest2
  * 2 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ hatest1 hatest2 ]

Full List of Resources:
  * dummy    (ocf::heartbeat:Dummy):     Started hatest1
  * vmfence    (stonith:fence_xvm):     Starting hatest2

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/disabled

5. Verifying the fence device

At this point the dummy resource is running on hatest1. Running fence_xvm -H hatest1 on hatest2 reboots the hatest1 machine, which makes the resource fail over to hatest2.
The -H option takes the virtual machine's domain name.
# fence_xvm -H hatest1 -d -o reboot
-- args @ 0xffffe92dada8 --
  args->domain = hatest1
  args->op = 2
  args->mode = 0
  args->debug = 1
  args->timeout = 30
  args->delay = 0
  args->retr_time = 20
  args->flags = 0
  args->net.addr = 225.0.0.12
  args->net.ipaddr = (null)
  args->net.cid = 0
  args->net.key_file = /etc/cluster/fence_xvm.key
  args->net.port = 1229
  args->net.hash = 2
  args->net.auth = 2
  args->net.family = 2
  args->net.ifindex = 0
  args->serial.device = (null)
  args->serial.speed = 115200,8N1
  args->serial.address = 10.0.2.179
-- end args --
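While hatest1 is rebooting, the failover can be watched from hatest2; a sketch:

# crm_mon -1          (one-shot cluster status; dummy should move to hatest2 once hatest1 is declared lost)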

After the hatest1 virtual machine has booted again, the status looks like this:
# pcs status
Cluster name: hacluster
Cluster Summary:
  * Stack: corosync
  * Current DC: hatest2 (version 2.0.4-6.oe1-2deceaa3ae) - partition with quorum
  * Last updated: Mon Feb  1 11:59:54 2021
  * Last change:  Mon Feb  1 11:57:42 2021 by root via cibadmin on hatest1
  * 2 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ hatest1 hatest2 ]

Full List of Resources:
  * dummy    (ocf::heartbeat:Dummy):     Started hatest2
  * vmfence    (stonith:fence_xvm):     Started hatest1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

Whether the service is killed or the NIC is taken down, the affected virtual machine is rebooted automatically:
Kill the service hard: kill -9 `pidof httpd`
On the node running the resource: ifconfig enp1s0 down
Crash the kernel: echo "c" > /proc/sysrq-trigger

Result: the node running the resource is rebooted automatically and the resource fails over (see the sketch below).
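For example, bringing the NIC down on the node that currently runs dummy and watching from the other node (a sketch reusing the interface name from the list above):

# ifconfig enp1s0 down        (on the node running dummy)
# watch -n 2 pcs status       (on the surviving node; the faulty node is fenced and dummy restarts on the survivor)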

 



