This article walks through deploying a highly available Kubernetes 1.26.0 cluster on CentOS 7 using keepalived and nginx. Kubernetes is an open-source container orchestration platform for automating the deployment, scaling, and management of containerized applications; in production, a highly available control plane keeps the cluster usable even when a master node fails.
Cluster role | IP address      | Hostname
master       | 192.168.209.116 | k8s-master1
master       | 192.168.209.117 | k8s-master2
master       | 192.168.209.118 | k8s-master3
node         | 192.168.209.119 | k8s-node1
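Later steps copy files between the masters by hostname (e.g. `scp ... k8s-master2:`), so every node must be able to resolve the others. A minimal sketch that prints the name-to-IP mappings from the table above so they can be appended to `/etc/hosts` on each node (the IP/hostname pairs come from the table; everything else is illustrative):

```shell
# Name-to-IP mappings for the cluster, taken from the table above.
hosts_entries="192.168.209.116 k8s-master1
192.168.209.117 k8s-master2
192.168.209.118 k8s-master3
192.168.209.119 k8s-node1"

# Print the entries for review; once verified, append them on every node with:
#   printf '%s\n' "$hosts_entries" | sudo tee -a /etc/hosts
printf '%s\n' "$hosts_entries"
```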
Set the hostnames:
# On 192.168.209.116:
hostnamectl set-hostname k8s-master1 && bash
# On 192.168.209.117:
hostnamectl set-hostname k8s-master2 && bash
# On 192.168.209.118:
hostnamectl set-hostname k8s-master3 && bash
# On 192.168.209.119:
hostnamectl set-hostname k8s-node1 && bash
1. Initialization (run on all nodes)
echo "Configuring the Aliyun yum mirror..."
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache
echo "Updating the system and installing required tools..."
yum update -y
yum install -y yum-utils device-mapper-persistent-data lvm2 bash-completion
echo "Disabling SELinux and firewalld..."
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
systemctl disable --now firewalld
echo "Disabling swap..."
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
echo "Tuning kernel parameters (IP forwarding, bridge netfilter, swap behaviour)..."
cat <<EOF | tee /etc/sysctl.d/k8s.conf
vm.swappiness = 0
vm.panic_on_oom = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.netfilter.nf_conntrack_max = 2310720
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
fs.file-max = 52706963
fs.nr_open = 52706963
EOF
sysctl -p /etc/sysctl.d/k8s.conf
echo "Loading the br_netfilter module..."
modprobe br_netfilter
lsmod | grep br_netfilter
echo "Installing ipset and ipvsadm..."
yum -y install ipset ipvsadm
echo "Configuring IPVS module loading..."
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack
2. Install containerd (run on all nodes)
echo "Installing containerd..."
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y containerd.io
containerd config default > /etc/containerd/config.toml
# Edit /etc/containerd/config.toml:
# 1. Change SystemdCgroup = false to SystemdCgroup = true
# 2. Change sandbox_image = "k8s.gcr.io/pause:3.6" to
#    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
# 3. Add these four lines under [plugins."io.containerd.grpc.v1.cri".registry.mirrors]:
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."swr.cn-north-4.myhuaweicloud.com"]
  endpoint = ["https://swr.cn-north-4.myhuaweicloud.com"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io"]
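The three manual edits above can also be scripted. A sketch with sed, assuming the default layout emitted by `containerd config default` (verify the result before restarting containerd; the function name is illustrative):

```shell
# Apply the three config.toml edits described above in one shot.
patch_containerd_config() {
    local cfg="$1"
    # 1. Use the systemd cgroup driver, matching the kubelet.
    sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$cfg"
    # 2. Pull the pause sandbox image from the Aliyun mirror.
    sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"|' "$cfg"
    # 3. Register the Huawei Cloud mirror for docker.io pulls, right below
    #    the registry.mirrors section header.
    sed -i '/^\s*\[plugins."io.containerd.grpc.v1.cri".registry.mirrors\]\s*$/a\[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]\n  endpoint = ["https://swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io"]' "$cfg"
}

# Usage: patch_containerd_config /etc/containerd/config.toml
```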
systemctl enable --now containerd
3. Install docker-ce (run on all nodes)
echo "Stopping any old Docker version..."
sudo systemctl stop docker
echo "Removing old Docker packages..."
# yum remove -y docker \
#   docker-client \
#   docker-client-latest \
#   docker-common \
#   docker-latest \
#   docker-latest-logrotate \
#   docker-logrotate \
#   docker-selinux \
#   docker-engine-selinux \
#   docker-engine
sudo rm -rf /var/lib/docker /run/docker /var/run/docker /etc/docker
echo "Installing docker-ce..."
yum install -y yum-utils device-mapper-persistent-data lvm2 git
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce -y
cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://2a6bf1988cb6428c877f723ec7530dbc.mirror.swr.myhuaweicloud.com",
    "https://docker.m.daocloud.io",
    "https://hub-mirror.c.163.com",
    "https://mirror.baidubce.com",
    "https://your_preferred_mirror",
    "https://dockerhub.icu",
    "https://docker.registry.cyou",
    "https://docker-cf.registry.cyou",
    "https://dockercf.jsdelivr.fyi",
    "https://docker.jsdelivr.fyi",
    "https://dockertest.jsdelivr.fyi",
    "https://mirror.aliyuncs.com",
    "https://dockerproxy.com",
    "https://docker.nju.edu.cn",
    "https://docker.mirrors.sjtug.sjtu.edu.cn",
    "https://docker.mirrors.ustc.edu.cn",
    "https://mirror.iscas.ac.cn",
    "https://docker.rainbond.cc"
  ]
}
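A malformed daemon.json keeps docker from starting at all, so it is worth validating the JSON before the restart in the next step. A small sketch, assuming python3 is available on the node (the function name is illustrative):

```shell
# Sanity-check a daemon.json file before restarting docker.
# python3 -m json.tool exits non-zero on invalid JSON.
check_daemon_json() {
    if python3 -m json.tool "$1" > /dev/null 2>&1; then
        echo "daemon.json OK"
    else
        echo "daemon.json is NOT valid JSON" >&2
        return 1
    fi
}

# Usage: check_daemon_json /etc/docker/daemon.json
```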
systemctl enable --now docker
systemctl restart docker
4. Install kubelet, kubeadm and kubectl (run on all nodes)
echo "Installing the Kubernetes tools..."
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
systemctl enable kubelet
systemctl restart kubelet
5. Install keepalived and nginx (masters only)
sudo yum install -y epel-release
sudo yum install -y nginx keepalived
echo "Configuring nginx..."
vim /etc/nginx/nginx.conf
# Add a stream block above the existing http block:
...
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.209.118:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.209.117:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.209.116:6443 weight=5 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 16443;   # nginx shares the masters with kube-apiserver, so it must not listen on 6443
        proxy_pass k8s-apiserver;
    }
}
http {
echo "Configuring keepalived"
echo "-------- k8s-master1 --------"
cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33          # change to the actual NIC name
    virtual_router_id 51     # VRRP router ID, unique per instance
    priority 100             # priority; the backups use 90 and 80
    advert_int 1             # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        192.168.209.111/24
    }
    track_script {
        check_nginx
    }
}
echo "Configuring keepalived"
echo "-------- k8s-master2 --------"
cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33          # change to the actual NIC name
    virtual_router_id 51     # VRRP router ID, unique per instance
    priority 90              # lower than the MASTER's 100
    advert_int 1             # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        192.168.209.111/24
    }
    track_script {
        check_nginx
    }
}
echo "Configuring keepalived"
echo "-------- k8s-master3 --------"
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33          # change to the actual NIC name
    virtual_router_id 51     # VRRP router ID, unique per instance
    priority 80              # lower than both other masters
    advert_int 1             # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        192.168.209.111/24
    }
    track_script {
        check_nginx
    }
}
cat /etc/keepalived/check_nginx.sh
#!/bin/bash
# 1. Check whether nginx is alive
counter=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
if [ $counter -eq 0 ]; then
    # 2. If not, try to start it
    service nginx start
    sleep 2
    # 3. Check again after 2 seconds
    counter=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
    # 4. If nginx is still down, stop keepalived so the VIP fails over
    if [ $counter -eq 0 ]; then
        service keepalived stop
    fi
fi
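The heart of check_nginx.sh is the process-counting pipeline. It can be pulled into a function (minus the `$$` self-PID exclusion, which only matters against live `ps` output) and exercised on canned `ps -ef`-style text to confirm it counts the nginx master and ignores the grep process itself; a sketch with illustrative names:

```shell
# The same filter used in check_nginx.sh, fed from stdin instead of live ps.
# `|| true` keeps the exit status clean, since grep -c exits non-zero
# when the count is 0.
count_nginx_masters() {
    grep nginx | grep sbin | grep -cv grep || true
}

# Two fake ps snapshots: one with nginx running, one without.
with_nginx="root  1001  1 0 10:00 ?  00:00:00 nginx: master process /usr/sbin/nginx
root  1002  1 0 10:00 ?  00:00:00 grep nginx"
without_nginx="root  1002  1 0 10:00 ?  00:00:00 grep nginx"

printf '%s\n' "$with_nginx"    | count_nginx_masters   # prints 1
printf '%s\n' "$without_nginx" | count_nginx_masters   # prints 0
```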
echo "Starting nginx and keepalived..."
chmod +x /etc/keepalived/check_nginx.sh
systemctl daemon-reload && systemctl restart nginx
systemctl restart keepalived && systemctl enable nginx keepalived
6. Initialize the cluster with kubeadm (k8s-master1 only)
Change the places marked with comments so they look like the following:
kubeadm config print init-defaults > kubeadm.yaml
cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
#localAPIEndpoint:              # comment out
#  advertiseAddress: 1.2.3.4    # comment out
#  bindPort: 6443               # comment out
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock   # point at the containerd socket
  imagePullPolicy: IfNotPresent
#  name: node                   # comment out
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   # use the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
controlPlaneEndpoint: 192.168.209.111:16443   # VIP + nginx port
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16     # pod network CIDR
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
At this point the first control-plane node is installed.
-----------------------------
Note: create this directory on k8s-master2 and k8s-master3 in advance:
mkdir -p /etc/kubernetes/pki/etcd/
echo "Copying the certificates from k8s-master1 to k8s-master2 and k8s-master3..."
cd /etc/kubernetes/pki/
scp ca.* k8s-master2:/etc/kubernetes/pki/
scp sa.* k8s-master2:/etc/kubernetes/pki/
scp front-proxy-ca.* k8s-master2:/etc/kubernetes/pki/
scp etcd/ca.* k8s-master2:/etc/kubernetes/pki/etcd/
scp ca.* k8s-master3:/etc/kubernetes/pki/
scp sa.* k8s-master3:/etc/kubernetes/pki/
scp front-proxy-ca.* k8s-master3:/etc/kubernetes/pki/
scp etcd/ca.* k8s-master3:/etc/kubernetes/pki/etcd/
7. Join the other masters to the cluster (run on k8s-master2 and k8s-master3)
Append --control-plane --ignore-preflight-errors=SystemVerification to the join command printed by kubeadm init:
kubeadm join 192.168.209.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ec363a06d7681d941e7969fb6e994f4a4c1c4ef0d154c7290131c1e830b4bec5 \
    --control-plane --ignore-preflight-errors=SystemVerification
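The sha256 value in the join command is not magic: it is the SHA-256 digest of the cluster CA's DER-encoded public key, so it can be regenerated on k8s-master1 whenever the printed join command has been lost. A sketch, assuming openssl is installed and the CA sits at the standard /etc/kubernetes/pki path (the function name is illustrative):

```shell
# Recompute the --discovery-token-ca-cert-hash value from a CA certificate:
# extract the public key, convert it to DER, and hash it with SHA-256.
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" |
        openssl pkey -pubin -outform der 2>/dev/null |
        openssl dgst -sha256 -hex | awk '{print "sha256:" $NF}'
}

# Usage (on k8s-master1):
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
```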
8. Join the worker node to the cluster (k8s-node1 only)
kubeadm join 192.168.209.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ec363a06d7681d941e7969fb6e994f4a4c1c4ef0d154c7290131c1e830b4bec5 \
    --ignore-preflight-errors=SystemVerification
9. Deploy the Calico network plugin (k8s-master1 only)
curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
sed -i 's|docker.io/calico/cni:v3.25.0|swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/cni:v3.25.0|g' calico.yaml
sed -i 's|docker.io/calico/node:v3.25.0|swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/node:v3.25.0|g' calico.yaml
sed -i 's|docker.io/calico/kube-controllers:v3.25.0|swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/kube-controllers:v3.25.0|g' calico.yaml
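The sed invocations above differ only in the image name, so they can be collapsed into a single substitution on the registry prefix. A sketch, assuming every Calico image in the manifest lives under docker.io/calico/ (run it once on a fresh manifest; it does not guard against double-prefixing):

```shell
# Point every docker.io/calico/* image in a manifest at the Huawei Cloud
# mirror in one pass instead of one sed per image.
mirror_calico_images() {
    sed -i 's|docker.io/calico/|swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/|g' "$1"
}

# Usage: mirror_calico_images calico.yaml
```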
cat calico.yaml
# Add these two lines at the matching place in the calico-node env section:
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens33"
# And these two lines:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
kubectl apply -f calico.yaml
Once the calico pods are Running, the network plugin is working.
Test DNS resolution and pod networking with a temporary busybox pod:
kubectl run busybox --image swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/library/busybox:1.28 \
    --image-pull-policy=IfNotPresent --restart=Never --rm -it -- sh
# inside the pod, for example: nslookup kubernetes.default.svc.cluster.local
10. Configure etcd for high availability
vim /etc/kubernetes/manifests/etcd.yaml
# Change
#   - --initial-cluster=k8s-master1=https://192.168.209.116:2380
# to
#   - --initial-cluster=k8s-master1=https://192.168.209.116:2380,k8s-master2=https://192.168.209.117:2380,k8s-master3=https://192.168.209.118:2380
Verify that the etcd cluster is configured correctly:
docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes \
    registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl \
    --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key \
    --cacert /etc/kubernetes/pki/etcd/ca.crt member list
docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes \
    registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl \
    --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key \
    --cacert /etc/kubernetes/pki/etcd/ca.crt \
    --endpoints=https://192.168.209.116:2379,https://192.168.209.117:2379,https://192.168.209.118:2379 \
    endpoint health --cluster
If every endpoint reports "successfully", the cluster is healthy. (Note: the author had shut down k8s-master3 to save machine resources, so one endpoint is missing in the screenshots.)
docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes \
    registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl -w table \
    --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key \
    --cacert /etc/kubernetes/pki/etcd/ca.crt \
    --endpoints=https://192.168.209.116:2379,https://192.168.209.117:2379,https://192.168.209.118:2379 \
    endpoint status --cluster
Seeing all three endpoints in the table means the cluster is working (again, only two appear here because k8s-master3 was powered off).
11. Summary
This concludes the walkthrough of deploying a highly available Kubernetes 1.26.0 cluster on CentOS 7 with keepalived and nginx.