Keepalived Master/Backup and Master/Master Architecture Examples

2023-11-22 01:30

This article walks through building a Keepalived master/backup setup and a master/master setup on top of LVS; hopefully it serves as a useful reference for anyone who needs to do the same.

Example topology:

DR1 and DR2 run Keepalived and LVS as either a master/backup or a master/master pair of directors; RS1 and RS2 run nginx and host the web site.
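
The original topology diagram is not reproduced here. As a rough substitute, the addresses used throughout the walkthrough lay out like this (a plain-text sketch assembled from the IPs that appear below):

                              client
                                |
                 VIP 192.168.4.120  (plus VIP 192.168.4.121 in the master/master setup)
                   /                         \
    DR1  192.168.4.116            DR2  192.168.4.117     (Keepalived + LVS-DR directors)
                   \                         /
    RS1  192.168.4.118            RS2  192.168.4.119     (nginx web servers)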

Note: the clocks on all nodes must be kept in sync (ntpdate ntp1.aliyun.com); firewalld must be stopped and disabled (systemctl stop firewalld.service, systemctl disable firewalld.service) and SELinux set to permissive (setenforce 0); also make sure the NICs on DR1 and DR2 support MULTICAST (multicast) traffic. Whether MULTICAST is enabled can be checked with ifconfig, as sketched below.
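
A condensed sketch of those prep commands, to be run on each node as appropriate (the generic [root@node ~]# prompt is a placeholder; eno16777736 is the interface name used on DR1/DR2 later in the article):

[root@node ~]# ntpdate ntp1.aliyun.com                    # sync the clock on every node
[root@node ~]# systemctl stop firewalld.service           # stop firewalld
[root@node ~]# systemctl disable firewalld.service        # and keep it disabled across reboots
[root@node ~]# setenforce 0                               # set SELinux to permissive
[root@node ~]# ifconfig eno16777736 | grep -o MULTICAST   # on DR1/DR2: the flag should be printed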


Keepalived master/backup architecture

Setting up RS1:
[root@RS1 ~]# yum -y install nginx   # install nginx
[root@RS1 ~]# vim /usr/share/nginx/html/index.html   # set the home page to <h1> 192.168.4.118 RS1 server </h1>
[root@RS1 ~]# systemctl start nginx.service   # start nginx
[root@RS1 ~]# vim RS.sh   # script that prepares the real server for lvs-dr
#!/bin/bash
#
vip=192.168.4.120
mask=255.255.255.255

case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig lo:0 $vip netmask $mask broadcast $vip up
    route add -host $vip dev lo:0
    ;;
stop)
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
*)
    echo "Usage $(basename $0) start|stop"
    exit 1
    ;;
esac
[root@RS1 ~]# bash RS.sh start
Set up RS2 following the same steps as RS1; a condensed sketch is given below.
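The RS2 commands mirror RS1 (a sketch; the home-page text matches what the curl tests below return, and RS.sh is assumed to have been copied over from RS1 first, e.g. with scp):

[root@RS2 ~]# yum -y install nginx
[root@RS2 ~]# vim /usr/share/nginx/html/index.html   # <h1> 192.168.4.119 RS2 server</h1>
[root@RS2 ~]# systemctl start nginx.service
[root@RS2 ~]# bash RS.sh start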
Setting up DR1:
[root@DR1 ~]# yum -y install ipvsadm keepalived   # install ipvsadm and keepalived
[root@DR1 ~]# vim /etc/keepalived/keepalived.conf   # edit the keepalived.conf configuration file
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id 192.168.4.116
    vrrp_skip_check_adv_addr
    vrrp_mcast_group4 224.0.0.10
}

vrrp_instance VIP_1 {
    state MASTER
    interface eno16777736
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass %&hhjj99
    }
    virtual_ipaddress {
        192.168.4.120/24 dev eno16777736 label eno16777736:0
    }
}

virtual_server 192.168.4.120 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.4.118 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.4.119 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@DR1 ~]# systemctl start keepalived
[root@DR1 ~]# ifconfig
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.116  netmask 255.255.255.0  broadcast 192.168.4.255
        inet6 fe80::20c:29ff:fe93:270f  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
        RX packets 14604  bytes 1376647 (1.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6722  bytes 653961 (638.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.120  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
[root@DR1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.4.120:80 rr
  -> 192.168.4.118:80             Route   1      0          0
  -> 192.168.4.119:80             Route   1      0          0
DR2 is set up the same way as DR1; the only changes in /etc/keepalived/keepalived.conf are the state and priority of the VRRP instance: state BACKUP and priority 90, as sketched below. Note also that DR2, being the backup, does not bring up the eno16777736:0 alias while DR1 holds the VIP.
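DR2's VRRP instance would therefore look roughly like this (a sketch; only the two marked fields differ from DR1, the rest of the file is unchanged), and a quick grep confirms the backup holds no VIP alias:

vrrp_instance VIP_1 {
    state BACKUP        # MASTER on DR1
    interface eno16777736
    virtual_router_id 1
    priority 90         # 100 on DR1
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass %&hhjj99
    }
    virtual_ipaddress {
        192.168.4.120/24 dev eno16777736 label eno16777736:0
    }
}

[root@DR2 ~]# ifconfig | grep eno16777736:0   # prints nothing while DR2 stays in BACKUP state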

Testing from the client:
[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done   # client requests are served normally
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
[root@DR1 ~]# systemctl stop keepalived.service   # stop the keepalived service on DR1
[root@DR2 ~]# systemctl status keepalived.service   # check DR2: it has transitioned to the MASTER state
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-09-04 11:33:04 CST; 7min ago
  Process: 12983 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 12985 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─12985 /usr/sbin/keepalived -D
           ├─12988 /usr/sbin/keepalived -D
           └─12989 /usr/sbin/keepalived -D

Sep 04 11:37:41 happiness Keepalived_healthcheckers[12988]: SMTP alert successfully sent.
Sep 04 11:40:22 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Transition to MASTER STATE
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Entering MASTER STATE
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) setting protocol VIPs.
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Sending/queueing gratuitous ARPs on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done   # the client can still reach the site
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
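Because DR1 is configured as MASTER with the higher priority (100 vs. 90) and keepalived preempts by default, bringing its service back up should move the VIP back to DR1; a quick way to verify (a sketch, assuming the default preemption behaviour):

[root@DR1 ~]# systemctl start keepalived.service
[root@DR1 ~]# ifconfig | grep -A1 eno16777736:0   # the 192.168.4.120 alias should reappear on DR1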

Keepalived master/master architecture

Modify RS1 and RS2 to add the new VIP:
[root@RS1 ~]# cp RS.sh RS_bak.sh
[root@RS1 ~]# vim RS_bak.sh   # add the new VIP
#!/bin/bash
#
vip=192.168.4.121
mask=255.255.255.255

case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig lo:1 $vip netmask $mask broadcast $vip up
    route add -host $vip dev lo:1
    ;;
stop)
    ifconfig lo:1 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
*)
    echo "Usage $(basename $0) start|stop"
    exit 1
    ;;
esac
[root@RS1 ~]# bash RS_bak.sh start
[root@RS1 ~]# ifconfig
...
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.4.120  netmask 255.255.255.255
        loop  txqueuelen 0  (Local Loopback)

lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.4.121  netmask 255.255.255.255
        loop  txqueuelen 0  (Local Loopback)
[root@RS1 ~]# scp RS_bak.sh root@192.168.4.119:~
root@192.168.4.119's password: 
RS_bak.sh                100%  693     0.7KB/s   00:00
[root@RS2 ~]# bash RS_bak.sh start   # run the script on RS2 to add the new VIP
[root@RS2 ~]# ifconfig
...
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.4.120  netmask 255.255.255.255
        loop  txqueuelen 0  (Local Loopback)

lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.4.121  netmask 255.255.255.255
        loop  txqueuelen 0  (Local Loopback)
Modify DR1 and DR2:
[root@DR1 ~]# vim /etc/keepalived/keepalived.conf   # edit DR1's config: add a second VRRP instance and define a virtual server group
...
vrrp_instance VIP_2 {
    state BACKUP
    interface eno16777736
    virtual_router_id 2
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass UU**99^^
    }
    virtual_ipaddress {
        192.168.4.121/24 dev eno16777736 label eno16777736:1
    }
}

virtual_server_group ngxsrvs {
    192.168.4.120 80
    192.168.4.121 80
}

virtual_server group ngxsrvs {
    ...
}
[root@DR1 ~]# systemctl restart keepalived.service   # restart the service
[root@DR1 ~]# ifconfig   # eno16777736:1 already shows up here because DR2 has not been configured yet
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.116  netmask 255.255.255.0  broadcast 192.168.4.255
        inet6 fe80::20c:29ff:fe93:270f  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
        RX packets 54318  bytes 5480463 (5.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 38301  bytes 3274990 (3.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.120  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)

eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.121  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
[root@DR1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.4.120:80 rr
  -> 192.168.4.118:80             Route   1      0          0
  -> 192.168.4.119:80             Route   1      0          0
TCP  192.168.4.121:80 rr
  -> 192.168.4.118:80             Route   1      0          0
  -> 192.168.4.119:80             Route   1      0          0

[root@DR2 ~]# vim /etc/keepalived/keepalived.conf   # edit DR2's config: add the second instance and the same virtual server group
...
vrrp_instance VIP_2 {
    state MASTER
    interface eno16777736
    virtual_router_id 2
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass UU**99^^
    }
    virtual_ipaddress {
        192.168.4.121/24 dev eno16777736 label eno16777736:1
    }
}

virtual_server_group ngxsrvs {
    192.168.4.120 80
    192.168.4.121 80
}

virtual_server group ngxsrvs {
    ...
}
[root@DR2 ~]# systemctl restart keepalived.service   # restart the service
[root@DR2 ~]# ifconfig
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.117  netmask 255.255.255.0  broadcast 192.168.4.255
        inet6 fe80::20c:29ff:fe3d:a31b  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:3d:a3:1b  txqueuelen 1000  (Ethernet)
        RX packets 67943  bytes 6314537 (6.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23250  bytes 2153847 (2.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.121  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:3d:a3:1b  txqueuelen 1000  (Ethernet)
[root@DR2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.4.120:80 rr
  -> 192.168.4.118:80             Route   1      0          0
  -> 192.168.4.119:80             Route   1      0          0
TCP  192.168.4.121:80 rr
  -> 192.168.4.118:80             Route   1      0          0
  -> 192.168.4.119:80             Route   1      0          0
Testing from the client:
[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
[root@client ~]# for i in {1..20};do curl http://192.168.4.121;done
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
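
In the master/master layout each director is MASTER for one VIP and BACKUP for the other, so losing either director should leave both VIPs reachable; a sketch of that check, mirroring the failover test from the master/backup section:

[root@DR1 ~]# systemctl stop keepalived.service   # take DR1 out of service
[root@DR2 ~]# ifconfig | grep 192.168.4.12        # DR2 should now hold both 192.168.4.120 and 192.168.4.121
[root@client ~]# curl http://192.168.4.120        # both VIPs should still answer
[root@client ~]# curl http://192.168.4.121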


Reposted from: https://www.cnblogs.com/walk1314/p/9578468.html
