a29. Ansible production case study -- installing kubernetes v1.20 from binary packages -- cluster upgrade (part 2)

This article is part 2 of upgrading a kubernetes v1.20 cluster that was installed from binary packages, driven end to end by Ansible, and is intended as a practical reference.

18. Upgrade kubernetes

First, back up the existing yaml files:

[root@k8s-master01 ~]# mkdir bak
[root@k8s-master01 ~]# mv *.yaml bak/
[root@k8s-master01 ~]# ls bak/
admin.yaml  bootstrap.secret.yaml  calico-etcd.yaml  components.yaml  coredns.yaml  recommended.yaml

18.1 etcd

18.1.1 Upgrade etcd

[root@ansible-server ansible]# mkdir -p roles/etcd-update/{files,tasks}
[root@ansible-server ansible]# cd roles/etcd-update/
[root@ansible-server etcd-update]# ls
files  tasks

[root@ansible-server etcd-update]# wget https://github.com/etcd-io/etcd/releases/download/v3.5.0/etcd-v3.5.0-linux-amd64.tar.gz
[root@ansible-server etcd-update]# tar -xf etcd-v3.5.0-linux-amd64.tar.gz --strip-components=1 -C files/ etcd-v3.5.0-linux-amd64/etcd{,ctl}
[root@ansible-server etcd-update]# ls files/
etcd  etcdctl
[root@ansible-server etcd-update]# rm -f etcd-v3.5.0-linux-amd64.tar.gz

[root@ansible-server etcd-update]# vim tasks/upgrade_etcd01.yml
- name: stop etcd
  systemd:
    name: etcd
    state: stopped
  when:
    - ansible_hostname=="k8s-etcd01"

- name: copy etcd files to etcd01
  copy:
    src: "{{ item }}"
    dest: /usr/local/bin/
    mode: 0755
  loop:
    - etcd
    - etcdctl
  when:
    - ansible_hostname=="k8s-etcd01"

- name: start etcd
  systemd:
    name: etcd
    state: restarted
  when:
    - ansible_hostname=="k8s-etcd01"

[root@ansible-server etcd-update]# vim tasks/upgrade_etcd02.yml
- name: stop etcd
  systemd:
    name: etcd
    state: stopped
  when:
    - ansible_hostname=="k8s-etcd02"

- name: copy etcd files to etcd02
  copy:
    src: "{{ item }}"
    dest: /usr/local/bin/
    mode: 0755
  loop:
    - etcd
    - etcdctl
  when:
    - ansible_hostname=="k8s-etcd02"

- name: start etcd
  systemd:
    name: etcd
    state: restarted
  when:
    - ansible_hostname=="k8s-etcd02"

[root@ansible-server etcd-update]# vim tasks/upgrade_etcd03.yml
- name: stop etcd
  systemd:
    name: etcd
    state: stopped
  when:
    - ansible_hostname=="k8s-etcd03"

- name: copy etcd files to etcd03
  copy:
    src: "{{ item }}"
    dest: /usr/local/bin/
    mode: 0755
  loop:
    - etcd
    - etcdctl
  when:
    - ansible_hostname=="k8s-etcd03"

- name: start etcd
  systemd:
    name: etcd
    state: restarted
  when:
    - ansible_hostname=="k8s-etcd03"

[root@ansible-server etcd-update]# vim tasks/main.yml
- include: upgrade_etcd01.yml
- include: upgrade_etcd02.yml
- include: upgrade_etcd03.yml

[root@ansible-server ansible]# tree roles/etcd-update/
roles/etcd-update/
├── files
│   ├── etcd
│   └── etcdctl
└── tasks
    ├── main.yml
    ├── upgrade_etcd01.yml
    ├── upgrade_etcd02.yml
    └── upgrade_etcd03.yml

2 directories, 6 files

[root@ansible-server ansible]# vim etcd_update_role.yml
---
- hosts: etcd
  roles:
    - role: etcd-update

[root@ansible-server ansible]# ansible-playbook etcd_update_role.yml
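The three upgrade_etcdXX.yml task files repeat the same three steps once per member. As an optional alternative (not part of the original role), the same one-member-at-a-time behaviour could be expressed once with Ansible's serial keyword; this is only a rough sketch, and the roles/etcd-update/files path assumes the playbook is run from the /data/ansible directory used throughout this series:

#etcd_update_rolling.yml -- hypothetical alternative playbook, upgrades one etcd member per batch
---
- hosts: etcd
  serial: 1
  tasks:
    - name: stop etcd
      systemd:
        name: etcd
        state: stopped
    - name: copy new etcd binaries
      copy:
        src: "roles/etcd-update/files/{{ item }}"   #assumed path, relative to the playbook directory
        dest: /usr/local/bin/
        mode: 0755
      loop:
        - etcd
        - etcdctl
    - name: start etcd
      systemd:
        name: etcd
        state: restarted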

18.1.2 Verify etcd

[root@k8s-etcd01 ~]# etcdctl --endpoints="172.31.3.108:2379,172.31.3.109:2379,172.31.3.110:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 172.31.3.108:2379 | a9fef56ff96ed75c |   3.5.0 |  6.0 MB |      true |      false |         4 |      43156 |              43156 |        |
| 172.31.3.109:2379 | 8319ef09e8b3d277 |   3.5.0 |  6.1 MB |     false |      false |         4 |      43156 |              43156 |        |
| 172.31.3.110:2379 | 209a1f57c506dba2 |   3.5.0 |  6.0 MB |     false |      false |         4 |      43156 |              43156 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
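If you want the role itself to fail fast when a member does not come back, a health-check task could be appended after the three upgrade includes. A minimal sketch, assuming the same endpoints and certificate paths as the command above; etcdctl exits non-zero when any endpoint is unhealthy, which marks the task failed:

- name: check etcd endpoint health
  shell:
    cmd: etcdctl --endpoints="172.31.3.108:2379,172.31.3.109:2379,172.31.3.110:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health
  when:
    - ansible_hostname=="k8s-etcd01"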

18.2 master

18.2.1 Upgrade master

[root@ansible-server ansible]# mkdir -p roles/kubernetes-master-update/{files,tasks,templates,vars}
[root@ansible-server ansible]# cd roles/kubernetes-master-update/
[root@ansible-server kubernetes-master-update]# ls
files  tasks  templates  vars

[root@ansible-server kubernetes-master-update]# wget https://dl.k8s.io/v1.22.6/kubernetes-server-linux-amd64.tar.gz
[root@ansible-server kubernetes-master-update]# tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C files/ kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
[root@ansible-server kubernetes-master-update]# ls files/
kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler
[root@ansible-server kubernetes-master-update]# rm -f kubernetes-server-linux-amd64.tar.gz

#Change the MASTER01, MASTER02 and MASTER03 IP addresses below to your own, and set HARBOR_DOMAIN to your own harbor domain name
[root@ansible-server kubernetes-master-update]# vim vars/main.yml
MASTER01: 172.31.3.101
MASTER02: 172.31.3.102
MASTER03: 172.31.3.103
HARBOR_DOMAIN: harbor.raymonds.cc
PAUSE_VERSION: 3.5
MASTER_SERVICE:
  - kube-apiserver
  - kube-controller-manager
  - kube-scheduler
  - kube-proxy
  - kubelet

[root@ansible-server kubernetes-master-update]# vim templates/10-kubelet.conf.j2
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image={{ HARBOR_DOMAIN }}/google_containers/pause:{{ PAUSE_VERSION }}"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

[root@ansible-server kubernetes-master-update]# vim tasks/upgrade_master01.yml
- name: install CentOS or Rocky socat
  yum:
    name: socat
  when:
    - (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
    - inventory_hostname in groups.ha

- name: install Ubuntu socat
  apt:
    name: socat
    force: yes
  when:
    - ansible_distribution=="Ubuntu"
    - inventory_hostname in groups.ha

- name: download pause image
  shell: |
    docker pull registry.aliyuncs.com/google_containers/pause:{{ PAUSE_VERSION }}
    docker tag registry.aliyuncs.com/google_containers/pause:{{ PAUSE_VERSION }} {{ HARBOR_DOMAIN }}/google_containers/pause:{{ PAUSE_VERSION }}
    docker rmi registry.aliyuncs.com/google_containers/pause:{{ PAUSE_VERSION }}
    docker push {{ HARBOR_DOMAIN }}/google_containers/pause:{{ PAUSE_VERSION }}
  when:
    - ansible_hostname=="k8s-master01"

- name: down master01
  shell:
    cmd: ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "disable server kubernetes-6443/{{ MASTER01 }}" | socat stdio /var/lib/haproxy/haproxy.sock"
  when:
    - ansible_hostname=="k8s-master01"

- name: stop service
  systemd:
    name: "{{ item }}"
    state: stopped
  loop:
    "{{ MASTER_SERVICE }}"
  when:
    - ansible_hostname=="k8s-master01"

- name: copy kubernetes files to master01
  copy:
    src: "{{ item }}"
    dest: /usr/local/bin/
    mode: 0755
  loop:
    - kube-apiserver
    - kube-controller-manager
    - kubectl
    - kubelet
    - kube-proxy
    - kube-scheduler
  when:
    - ansible_hostname=="k8s-master01"

- name: copy 10-kubelet.conf to master01
  template:
    src: 10-kubelet.conf.j2
    dest: /etc/systemd/system/kubelet.service.d/10-kubelet.conf
  when:
    - ansible_hostname=="k8s-master01"

- name: start service
  systemd:
    name: "{{ item }}"
    state: restarted
    daemon_reload: yes
  loop:
    "{{ MASTER_SERVICE }}"
  when:
    - ansible_hostname=="k8s-master01"

- name: up master01
  shell:
    cmd: ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "enable server kubernetes-6443/{{ MASTER01 }}" | socat stdio /var/lib/haproxy/haproxy.sock"
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server kubernetes-master-update]# vim tasks/upgrade_master02.yml
- name: down master02
  shell:
    cmd: ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "disable server kubernetes-6443/{{ MASTER02 }}" | socat stdio /var/lib/haproxy/haproxy.sock"
  when:
    - ansible_hostname=="k8s-master01"

- name: stop service
  systemd:
    name: "{{ item }}"
    state: stopped
  loop:
    "{{ MASTER_SERVICE }}"
  when:
    - ansible_hostname=="k8s-master02"

- name: copy kubernetes files to master02
  copy:
    src: "{{ item }}"
    dest: /usr/local/bin/
    mode: 0755
  loop:
    - kube-apiserver
    - kube-controller-manager
    - kubectl
    - kubelet
    - kube-proxy
    - kube-scheduler
  when:
    - ansible_hostname=="k8s-master02"

- name: copy 10-kubelet.conf to master02
  template:
    src: 10-kubelet.conf.j2
    dest: /etc/systemd/system/kubelet.service.d/10-kubelet.conf
  when:
    - ansible_hostname=="k8s-master02"

- name: start service
  systemd:
    name: "{{ item }}"
    state: restarted
    daemon_reload: yes
  loop:
    "{{ MASTER_SERVICE }}"
  when:
    - ansible_hostname=="k8s-master02"

- name: up master02
  shell:
    cmd: ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "enable server kubernetes-6443/{{ MASTER02 }}" | socat stdio /var/lib/haproxy/haproxy.sock"
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server kubernetes-master-update]# vim tasks/upgrade_master03.yml
- name: down master03
  shell:
    cmd: ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "disable server kubernetes-6443/{{ MASTER03 }}" | socat stdio /var/lib/haproxy/haproxy.sock"
  when:
    - ansible_hostname=="k8s-master01"

- name: stop service
  systemd:
    name: "{{ item }}"
    state: stopped
  loop:
    "{{ MASTER_SERVICE }}"
  when:
    - ansible_hostname=="k8s-master03"

- name: copy kubernetes files to master03
  copy:
    src: "{{ item }}"
    dest: /usr/local/bin/
    mode: 0755
  loop:
    - kube-apiserver
    - kube-controller-manager
    - kubectl
    - kubelet
    - kube-proxy
    - kube-scheduler
  when:
    - ansible_hostname=="k8s-master03"

- name: copy 10-kubelet.conf to master03
  template:
    src: 10-kubelet.conf.j2
    dest: /etc/systemd/system/kubelet.service.d/10-kubelet.conf
  when:
    - ansible_hostname=="k8s-master03"

- name: start service
  systemd:
    name: "{{ item }}"
    state: restarted
    daemon_reload: yes
  loop:
    "{{ MASTER_SERVICE }}"
  when:
    - ansible_hostname=="k8s-master03"

- name: up master03
  shell:
    cmd: ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "enable server kubernetes-6443/{{ MASTER03 }}" | socat stdio /var/lib/haproxy/haproxy.sock"
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server kubernetes-master-update]# vim tasks/main.yml
- include: upgrade_master01.yml
- include: upgrade_master02.yml
- include: upgrade_master03.yml

[root@ansible-server kubernetes-master-update]# cd ../../
[root@ansible-server ansible]# tree roles/kubernetes-master-update/
roles/kubernetes-master-update/
├── files
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubectl
│   ├── kubelet
│   ├── kube-proxy
│   └── kube-scheduler
├── tasks
│   ├── main.yml
│   ├── upgrade_master01.yml
│   ├── upgrade_master02.yml
│   └── upgrade_master03.yml
├── templates
│   └── 10-kubelet.conf.j2
└── vars
    └── main.yml

4 directories, 12 files

[root@ansible-server ansible]# vim kubernetes_master_update_role.yml
---
- hosts: master:ha
  roles:
    - role: kubernetes-master-update

[root@ansible-server ansible]# ansible-playbook kubernetes_master_update_role.yml
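Before checking the nodes from kubectl, the upgrade can also be confirmed directly from the binaries. The sketch below is a hypothetical extra playbook (not part of the original role) that asserts every host in the master group now runs the v1.22.6 kubelet:

#verify_master_version.yml -- hypothetical check playbook
---
- hosts: master
  tasks:
    - name: read installed kubelet version
      shell:
        cmd: kubelet --version
      register: KUBELET_VERSION
      changed_when: false
    - name: fail if kubelet was not upgraded
      assert:
        that:
          - "'v1.22.6' in KUBELET_VERSION.stdout"
        fail_msg: "{{ inventory_hostname }} still reports {{ KUBELET_VERSION.stdout }}"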

18.2.2 Verify master

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES    AGE   VERSION
k8s-master01.example.local   Ready    <none>   46m   v1.22.6
k8s-master02.example.local   Ready    <none>   46m   v1.22.6
k8s-master03.example.local   Ready    <none>   46m   v1.22.6
k8s-node01.example.local     Ready    <none>   46m   v1.20.14
k8s-node02.example.local     Ready    <none>   46m   v1.20.14
k8s-node03.example.local     Ready    <none>   46m   v1.20.14

18.3 Upgrade calico

18.3.1 Upgrade calico

[root@ansible-server ansible]# mkdir -p roles/calico-update/{tasks,vars,templates}
[root@ansible-server ansible]# cd roles/calico-update
[root@ansible-server calico-update]# ls
tasks  templates  vars

#Set HARBOR_DOMAIN below to your own harbor domain name, change POD_SUBNET to the pod network you planned, and change the MASTER01, MASTER02 and MASTER03 IP addresses to your own
[root@ansible-server calico-update]# vim vars/main.yml
HARBOR_DOMAIN: harbor.raymonds.cc
POD_SUBNET: 192.168.0.0/12
MASTER01: 172.31.3.101
MASTER02: 172.31.3.102
MASTER03: 172.31.3.103

[root@ansible-server calico-update]# wget https://docs.projectcalico.org/manifests/calico-etcd.yaml -O templates/calico-etcd.yaml.j2
[root@ansible-server calico-update]# vim templates/calico-etcd.yaml.j2
...
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: OnDelete #Change this: calico will not rolling-update; a pod is only replaced after its kubelet is restarted or the pod is deleted
  template:
    metadata:
      labels:
        k8s-app: calico-node
...

#Modify the following content
[root@ansible-server calico-update]# grep "etcd_endpoints:.*" templates/calico-etcd.yaml.j2
  etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"

[root@ansible-server calico-update]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "{% for i in groups.etcd %}https://{{ hostvars[i].ansible_default_ipv4.address }}:2379{% if not loop.last %},{% endif %}{% endfor %}"#g' templates/calico-etcd.yaml.j2

[root@ansible-server calico-update]# grep "etcd_endpoints:.*" templates/calico-etcd.yaml.j2
  etcd_endpoints: "{% for i in groups.etcd %}https://{{ hostvars[i].ansible_default_ipv4.address }}:2379{% if not loop.last %},{% endif %}{% endfor %}"
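For reference, with the three etcd hosts used in this series (172.31.3.108, 172.31.3.109 and 172.31.3.110) the Jinja2 loop above would render to a single line roughly like the following, assuming ansible_default_ipv4 resolves to those addresses:

  etcd_endpoints: "https://172.31.3.108:2379,https://172.31.3.109:2379,https://172.31.3.110:2379"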
[root@ansible-server calico-update]# vim tasks/calico_file.yml
- name: copy calico-etcd.yaml file
  template:
    src: calico-etcd.yaml.j2
    dest: /root/calico-etcd.yaml
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server calico-update]# vim tasks/config.yml
- name: get ETCD_KEY key
  shell:
    cmd: cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'
  register: ETCD_KEY
  when:
    - ansible_hostname=="k8s-master01"

- name: Modify the ".*etcd-key:.*" line
  replace:
    path: /root/calico-etcd.yaml
    regexp: '# (etcd-key:) null'
    replace: '\1 {{ ETCD_KEY.stdout }}'
  when:
    - ansible_hostname=="k8s-master01"

- name: get ETCD_CERT key
  shell:
    cmd: cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'
  register: ETCD_CERT
  when:
    - ansible_hostname=="k8s-master01"

- name: Modify the ".*etcd-cert:.*" line
  replace:
    path: /root/calico-etcd.yaml
    regexp: '# (etcd-cert:) null'
    replace: '\1 {{ ETCD_CERT.stdout }}'
  when:
    - ansible_hostname=="k8s-master01"

- name: get ETCD_CA key
  shell:
    cmd: cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'
  register: ETCD_CA
  when:
    - ansible_hostname=="k8s-master01"

- name: Modify the ".*etcd-ca:.*" line
  replace:
    path: /root/calico-etcd.yaml
    regexp: '# (etcd-ca:) null'
    replace: '\1 {{ ETCD_CA.stdout }}'
  when:
    - ansible_hostname=="k8s-master01"

- name: Modify the ".*etcd_ca:.*" line
  replace:
    path: /root/calico-etcd.yaml
    regexp: '(etcd_ca:) ""'
    replace: '\1 "/calico-secrets/etcd-ca"'
  when:
    - ansible_hostname=="k8s-master01"

- name: Modify the ".*etcd_cert:.*" line
  replace:
    path: /root/calico-etcd.yaml
    regexp: '(etcd_cert:) ""'
    replace: '\1 "/calico-secrets/etcd-cert"'
  when:
    - ansible_hostname=="k8s-master01"

- name: Modify the ".*etcd_key:.*" line
  replace:
    path: /root/calico-etcd.yaml
    regexp: '(etcd_key:) ""'
    replace: '\1 "/calico-secrets/etcd-key"'
  when:
    - ansible_hostname=="k8s-master01"

- name: Modify the ".*CALICO_IPV4POOL_CIDR.*" line
  replace:
    path: /root/calico-etcd.yaml
    regexp: '# (- name: CALICO_IPV4POOL_CIDR)'
    replace: '\1'
  when:
    - ansible_hostname=="k8s-master01"

- name: Modify the ".*192.168.0.0.*" line
  replace:
    path: /root/calico-etcd.yaml
    regexp: '#   (value:) "192.168.0.0/16"'
    replace: '  \1 "{{ POD_SUBNET }}"'
  when:
    - ansible_hostname=="k8s-master01"

- name: Modify the "image:" line
  replace:
    path: /root/calico-etcd.yaml
    regexp: '(.*image:) docker.io/calico(/.*)'
    replace: '\1 {{ HARBOR_DOMAIN }}/google_containers\2'
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server calico-update]# vim tasks/download_images.yml
- name: get calico version
  shell:
    chdir: /root
    cmd: awk -F "/"  '/image:/{print $NF}' calico-etcd.yaml
  register: CALICO_VERSION
  when:
    - ansible_hostname=="k8s-master01"

- name: download calico image
  shell: |
    {% for i in CALICO_VERSION.stdout_lines %}
    docker pull registry.cn-beijing.aliyuncs.com/raymond9/{{ i }}
    docker tag registry.cn-beijing.aliyuncs.com/raymond9/{{ i }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    docker rmi registry.cn-beijing.aliyuncs.com/raymond9/{{ i }}
    docker push {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    {% endfor %}
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server calico-update]# vim tasks/install_calico.yml
- name: install calico
  shell:
    chdir: /root
    cmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f calico-etcd.yaml"
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server calico-update]# vim tasks/delete_master01_calico_container.yml
- name: down master01
  shell:
    cmd: ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "disable server kubernetes-6443/{{ MASTER01 }}" | socat stdio /var/lib/haproxy/haproxy.sock"
  when:
    - ansible_hostname=="k8s-master01"

- name: get calico container
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig get pod -n kube-system -o wide|grep calico |grep master01 |awk -F " " '{print $1}'
  register: CALICO_CONTAINER
  when:
    - ansible_hostname=="k8s-master01"

- name: delete calico container
  shell: |
    kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig delete pod {{ CALICO_CONTAINER.stdout }} -n kube-system
    sleep 30s
  when:
    - ansible_hostname=="k8s-master01"

- name: up master01
  shell:
    cmd: ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "enable server kubernetes-6443/{{ MASTER01 }}" | socat stdio /var/lib/haproxy/haproxy.sock"
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server calico-update]# vim tasks/delete_master02_calico_container.yml
- name: down master02
  shell:
    cmd: ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "disable server kubernetes-6443/{{ MASTER02 }}" | socat stdio /var/lib/haproxy/haproxy.sock"
  when:
    - ansible_hostname=="k8s-master01"

- name: get calico container
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig get pod -n kube-system -o wide|grep calico |grep master02 |awk -F " " '{print $1}'
  register: CALICO_CONTAINER
  when:
    - ansible_hostname=="k8s-master01"

- name: delete calico container
  shell: |
    kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig delete pod {{ CALICO_CONTAINER.stdout }} -n kube-system
    sleep 30s
  when:
    - ansible_hostname=="k8s-master01"

- name: up master02
  shell:
    cmd: ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "enable server kubernetes-6443/{{ MASTER02 }}" | socat stdio /var/lib/haproxy/haproxy.sock"
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server calico-update]# vim tasks/delete_master03_calico_container.yml
- name: down master03
  shell:
    cmd: ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "disable server kubernetes-6443/{{ MASTER03 }}" | socat stdio /var/lib/haproxy/haproxy.sock"
  when:
    - ansible_hostname=="k8s-master01"

- name: get calico container
  shell:
    cmd: kubectl get --kubeconfig=/etc/kubernetes/admin.kubeconfig pod -n kube-system -o wide|grep calico |grep master03 |awk -F " " '{print $1}'
  register: CALICO_CONTAINER
  when:
    - ansible_hostname=="k8s-master01"

- name: delete calico container
  shell: |
    kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig delete pod {{ CALICO_CONTAINER.stdout }} -n kube-system
    sleep 30s
  when:
    - ansible_hostname=="k8s-master01"

- name: up master03
  shell:
    cmd: ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "enable server kubernetes-6443/{{ MASTER03 }}" | socat stdio /var/lib/haproxy/haproxy.sock"
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server calico-update]# vim tasks/main.yml
- include: calico_file.yml
- include: config.yml
- include: download_images.yml
- include: install_calico.yml
- include: delete_master01_calico_container.yml
- include: delete_master02_calico_container.yml
- include: delete_master03_calico_container.yml

[root@ansible-server calico-update]# cd ../../
[root@ansible-server ansible]# tree roles/calico-update/
roles/calico-update/
├── tasks
│   ├── calico_file.yml
│   ├── config.yml
│   ├── delete_master01_calico_container.yml
│   ├── delete_master02_calico_container.yml
│   ├── delete_master03_calico_container.yml
│   ├── download_images.yml
│   ├── install_calico.yml
│   └── main.yml
├── templates
│   └── calico-etcd.yaml.j2
└── vars
    └── main.yml

3 directories, 10 files

[root@ansible-server ansible]# vim calico_update_role.yml
---
- hosts: master:etcd
  roles:
    - role: calico-update

[root@ansible-server ansible]# ansible-playbook calico_update_role.yml
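After the playbook has finished, you can optionally confirm from master01 that the applied manifest really carries the OnDelete strategy and the harbor registry before relying on the per-node pod deletions. A rough sketch of such a check (not part of the original role), assuming the DaemonSet keeps its default calico-node name:

- name: confirm calico-node update strategy and image registry
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig get ds calico-node -n kube-system -o jsonpath='{.spec.updateStrategy.type} {.spec.template.spec.containers[0].image}'
  register: CALICO_DS
  changed_when: false
  when:
    - ansible_hostname=="k8s-master01"

- name: show update strategy and image
  debug:
    msg: "{{ CALICO_DS.stdout }}"   #expected output: OnDelete followed by the harbor image reference
  when:
    - ansible_hostname=="k8s-master01"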

18.3.2 Verify calico

[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide|grep calico|grep master01|tail -n1
calico-node-pkx8j                          1/1     Running   0          7m14s   172.31.3.101      k8s-master01.example.local   <none>           <none>
[root@k8s-master01 ~]# kubectl get pod  calico-node-pkx8j -n kube-system -o yaml|grep "image"
    image: harbor.raymonds.cc/google_containers/node:v3.21.4
    imagePullPolicy: IfNotPresent
    image: harbor.raymonds.cc/google_containers/cni:v3.21.4
    imagePullPolicy: IfNotPresent
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
    imagePullPolicy: IfNotPresent
    image: harbor.raymonds.cc/google_containers/node:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/node@sha256:1ce80e8524bc68b29593e4e7a6186ad0c6986a0e68f3cd55ccef8637bdd2e922
    image: harbor.raymonds.cc/google_containers/cni:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/cni@sha256:8dd84a8c73929a6b1038774d2cf5fd669856e09eaf3d960fd321df433dc1f05b
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/pod2daemon-flexvol@sha256:5b5fcca78d54341bfbd729ba8199624af61f7144a980bc46fcd1347d20bd8eef

[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide|grep calico|grep master02|tail -n1
calico-node-zzpwt                          1/1     Running   0          30m     172.31.3.102      k8s-master02.example.local   <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-zzpwt -n kube-system -o yaml|grep "image"
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    imagePullPolicy: IfNotPresent
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
    imagePullPolicy: IfNotPresent
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
    imagePullPolicy: IfNotPresent
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/node@sha256:988095dbe39d2066b1964aafaa4a302a1286b149a4a80c9a1eb85544f2a0cdd0
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/cni@sha256:a559d264c7a75a7528560d11778dba2d3b55c588228aed99be401fd2baa9b607
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/pod2daemon-flexvol@sha256:6bd1246d0ea1e573a6a050902995b1666ec0852339e5bda3051f583540361b55

[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide|grep calico|grep master03|tail -n1
calico-node-kncgx                          1/1     Running   0          6m47s   172.31.3.103      k8s-master03.example.local   <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-kncgx -n kube-system -o yaml|grep "image"
    image: harbor.raymonds.cc/google_containers/node:v3.21.4
    imagePullPolicy: IfNotPresent
    image: harbor.raymonds.cc/google_containers/cni:v3.21.4
    imagePullPolicy: IfNotPresent
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
    imagePullPolicy: IfNotPresent
    image: harbor.raymonds.cc/google_containers/node:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/node@sha256:1ce80e8524bc68b29593e4e7a6186ad0c6986a0e68f3cd55ccef8637bdd2e922
    image: harbor.raymonds.cc/google_containers/cni:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/cni@sha256:8dd84a8c73929a6b1038774d2cf5fd669856e09eaf3d960fd321df433dc1f05b
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/pod2daemon-flexvol@sha256:5b5fcca78d54341bfbd729ba8199624af61f7144a980bc46fcd1347d20bd8eef

18.4 node

18.4.1 Upgrade node

[root@ansible-server ansible]# mkdir -p roles/kubernetes-node-update/{files,tasks,templates,vars}
[root@ansible-server ansible]# cd roles/kubernetes-node-update/
[root@ansible-server kubernetes-node-update]# ls
files  tasks  templates  vars

[root@ansible-server kubernetes-node-update]# cp /data/ansible/roles/kubernetes-master-update/files/{kubelet,kube-proxy} files/
[root@ansible-server kubernetes-node-update]# ls files/
kubelet  kube-proxy

#Set HARBOR_DOMAIN below to your own harbor domain name
[root@ansible-server kubernetes-node-update]# vim vars/main.yml
HARBOR_DOMAIN: harbor.raymonds.cc
PAUSE_VERSION: 3.5
NODE_SERVICE:
  - kube-proxy
  - kubelet

[root@ansible-server kubernetes-node-update]# vim templates/10-kubelet.conf.j2
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image={{ HARBOR_DOMAIN }}/google_containers/pause:{{ PAUSE_VERSION }}"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

[root@ansible-server kubernetes-node-update]# vim tasks/upgrade_node01.yml
- name: drain node01
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig drain k8s-node01.example.local --delete-emptydir-data --force --ignore-daemonsets
  when:
    - ansible_hostname=="k8s-master01"

- name: stop service
  systemd:
    name: "{{ item }}"
    state: stopped
  loop:
    "{{ NODE_SERVICE }}"
  when:
    - ansible_hostname=="k8s-node01"

- name: copy kubernetes files to node01
  copy:
    src: "{{ item }}"
    dest: /usr/local/bin/
    mode: 0755
  loop:
    - kubelet
    - kube-proxy
  when:
    - ansible_hostname=="k8s-node01"

- name: copy 10-kubelet.conf to node01
  template:
    src: 10-kubelet.conf.j2
    dest: /etc/systemd/system/kubelet.service.d/10-kubelet.conf
  when:
    - ansible_hostname=="k8s-node01"

- name: start service
  systemd:
    name: "{{ item }}"
    state: restarted
    daemon_reload: yes
  loop:
    "{{ NODE_SERVICE }}"
  when:
    - ansible_hostname=="k8s-node01"

- name: get calico container
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig get pod -n kube-system -o wide|grep calico |grep node01 |tail -n1|awk -F " " '{print $1}'
  register: CALICO_CONTAINER
  when:
    - ansible_hostname=="k8s-master01"

- name: delete calico container
  shell: |
    kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig delete pod {{ CALICO_CONTAINER.stdout }} -n kube-system
    sleep 60s
  when:
    - ansible_hostname=="k8s-master01"

- name: uncordon node01
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig uncordon k8s-node01.example.local
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server kubernetes-node-update]# vim tasks/upgrade_node02.yml
- name: drain node02
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig drain k8s-node02.example.local --delete-emptydir-data --force --ignore-daemonsets
  when:
    - ansible_hostname=="k8s-master01"

- name: stop service
  systemd:
    name: "{{ item }}"
    state: stopped
  loop:
    "{{ NODE_SERVICE }}"
  when:
    - ansible_hostname=="k8s-node02"

- name: copy kubernetes files to node02
  copy:
    src: "{{ item }}"
    dest: /usr/local/bin/
    mode: 0755
  loop:
    - kubelet
    - kube-proxy
  when:
    - ansible_hostname=="k8s-node02"

- name: copy 10-kubelet.conf to node02
  template:
    src: 10-kubelet.conf.j2
    dest: /etc/systemd/system/kubelet.service.d/10-kubelet.conf
  when:
    - ansible_hostname=="k8s-node02"

- name: start service
  systemd:
    name: "{{ item }}"
    state: restarted
    daemon_reload: yes
  loop:
    "{{ NODE_SERVICE }}"
  when:
    - ansible_hostname=="k8s-node02"

- name: get calico container
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig get pod -n kube-system -o wide|grep calico |grep node02 |tail -n1|awk -F " " '{print $1}'
  register: CALICO_CONTAINER
  when:
    - ansible_hostname=="k8s-master01"

- name: delete calico container
  shell: |
    kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig delete pod {{ CALICO_CONTAINER.stdout }} -n kube-system
    sleep 60s
  when:
    - ansible_hostname=="k8s-master01"

- name: uncordon node02
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig uncordon k8s-node02.example.local
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server kubernetes-node-update]# vim tasks/upgrade_node03.yml
- name: drain node03
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig drain k8s-node03.example.local --delete-emptydir-data --force --ignore-daemonsets
  when:
    - ansible_hostname=="k8s-master01"

- name: stop service
  systemd:
    name: "{{ item }}"
    state: stopped
  loop:
    "{{ NODE_SERVICE }}"
  when:
    - ansible_hostname=="k8s-node03"

- name: copy kubernetes files to node03
  copy:
    src: "{{ item }}"
    dest: /usr/local/bin/
    mode: 0755
  loop:
    - kubelet
    - kube-proxy
  when:
    - ansible_hostname=="k8s-node03"

- name: copy 10-kubelet.conf to node03
  template:
    src: 10-kubelet.conf.j2
    dest: /etc/systemd/system/kubelet.service.d/10-kubelet.conf
  when:
    - ansible_hostname=="k8s-node03"

- name: start service
  systemd:
    name: "{{ item }}"
    state: restarted
    daemon_reload: yes
  loop:
    "{{ NODE_SERVICE }}"
  when:
    - ansible_hostname=="k8s-node03"

- name: get calico container
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig get pod -n kube-system -o wide|grep calico |grep node03 |tail -n1|awk -F " " '{print $1}'
  register: CALICO_CONTAINER
  when:
    - ansible_hostname=="k8s-master01"

- name: delete calico container
  shell: |
    kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig delete pod {{ CALICO_CONTAINER.stdout }} -n kube-system
    sleep 60s
  when:
    - ansible_hostname=="k8s-master01"

- name: uncordon node03
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig uncordon k8s-node03.example.local
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server kubernetes-node-update]# vim tasks/main.yml
- include: upgrade_node01.yml
- include: upgrade_node02.yml
- include: upgrade_node03.yml

[root@ansible-server kubernetes-node-update]# cd ../../
[root@ansible-server ansible]# tree roles/kubernetes-node-update/
roles/kubernetes-node-update/
├── files
│   ├── kubelet
│   └── kube-proxy
├── tasks
│   ├── main.yml
│   ├── upgrade_node01.yml
│   ├── upgrade_node02.yml
│   └── upgrade_node03.yml
├── templates
│   └── 10-kubelet.conf.j2
└── vars
    └── main.yml

4 directories, 8 files

[root@ansible-server ansible]# vim kubernetes_node_update_role.yml
---
- hosts: master01:node
  roles:
    - role: kubernetes-node-update

[root@ansible-server ansible]# ansible-playbook kubernetes_node_update_role.yml
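The fixed `sleep 60s` after deleting the calico pod is a pragmatic pause. An optional refinement (only a sketch, reusing the same admin.kubeconfig) would be to wait explicitly until the drained node reports Ready again before uncordoning it, for example for node01:

- name: wait for node01 to report Ready before uncordoning
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig wait --for=condition=Ready node/k8s-node01.example.local --timeout=300s
  when:
    - ansible_hostname=="k8s-master01"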

18.4.2 Verify node

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES    AGE   VERSION
k8s-master01.example.local   Ready    <none>   99m   v1.22.6
k8s-master02.example.local   Ready    <none>   99m   v1.22.6
k8s-master03.example.local   Ready    <none>   99m   v1.22.6
k8s-node01.example.local     Ready    <none>   99m   v1.22.6
k8s-node02.example.local     Ready    <none>   99m   v1.22.6
k8s-node03.example.local     Ready    <none>   99m   v1.22.6

[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide|grep calico|grep node01|tail -n1
calico-node-xthsn                          1/1     Running   0          4m34s   172.31.3.111      k8s-node01.example.local     <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-xthsn -n kube-system -o yaml |grep image
    image: harbor.raymonds.cc/google_containers/node:v3.21.4
    imagePullPolicy: IfNotPresent
    image: harbor.raymonds.cc/google_containers/cni:v3.21.4
    imagePullPolicy: IfNotPresent
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
    imagePullPolicy: IfNotPresent
    image: harbor.raymonds.cc/google_containers/node:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/node@sha256:1ce80e8524bc68b29593e4e7a6186ad0c6986a0e68f3cd55ccef8637bdd2e922
    image: harbor.raymonds.cc/google_containers/cni:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/cni@sha256:8dd84a8c73929a6b1038774d2cf5fd669856e09eaf3d960fd321df433dc1f05b
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/pod2daemon-flexvol@sha256:5b5fcca78d54341bfbd729ba8199624af61f7144a980bc46fcd1347d20bd8eef

[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide|grep calico|grep node02|tail -n1
calico-node-7jp6h                          1/1     Running   0          3m52s   172.31.3.112      k8s-node02.example.local     <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-7jp6h -n kube-system -o yaml |grep image
    image: harbor.raymonds.cc/google_containers/node:v3.21.4
    imagePullPolicy: IfNotPresent
    image: harbor.raymonds.cc/google_containers/cni:v3.21.4
    imagePullPolicy: IfNotPresent
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
    imagePullPolicy: IfNotPresent
    image: harbor.raymonds.cc/google_containers/node:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/node@sha256:1ce80e8524bc68b29593e4e7a6186ad0c6986a0e68f3cd55ccef8637bdd2e922
    image: harbor.raymonds.cc/google_containers/cni:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/cni@sha256:8dd84a8c73929a6b1038774d2cf5fd669856e09eaf3d960fd321df433dc1f05b
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/pod2daemon-flexvol@sha256:5b5fcca78d54341bfbd729ba8199624af61f7144a980bc46fcd1347d20bd8eef

[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide|grep calico|grep node03|tail -n1
calico-node-wfxmr                          1/1     Running   0          3m2s    172.31.3.113      k8s-node03.example.local     <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-wfxmr -n kube-system -o yaml |grep image
    image: harbor.raymonds.cc/google_containers/node:v3.21.4
    imagePullPolicy: IfNotPresent
    image: harbor.raymonds.cc/google_containers/cni:v3.21.4
    imagePullPolicy: IfNotPresent
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
    imagePullPolicy: IfNotPresent
    image: harbor.raymonds.cc/google_containers/node:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/node@sha256:1ce80e8524bc68b29593e4e7a6186ad0c6986a0e68f3cd55ccef8637bdd2e922
    image: harbor.raymonds.cc/google_containers/cni:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/cni@sha256:8dd84a8c73929a6b1038774d2cf5fd669856e09eaf3d960fd321df433dc1f05b
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
    imageID: docker-pullable://harbor.raymonds.cc/google_containers/pod2daemon-flexvol@sha256:5b5fcca78d54341bfbd729ba8199624af61f7144a980bc46fcd1347d20bd8eef

18.5 coredns

18.5.1 Upgrade coredns

[root@ansible-server ansible]# mkdir -p roles/coredns-update/{tasks,templates,vars}
[root@ansible-server ansible]# cd roles/coredns-update/
[root@ansible-server coredns-update]# ls
tasks  templates  vars

#Change CLUSTERDNS below to the 10th IP of the service network you planned, and set HARBOR_DOMAIN to your own harbor domain name
[root@ansible-server coredns-update]# vim vars/main.yml 
CLUSTERDNS: 10.96.0.10                                 
HARBOR_DOMAIN: harbor.raymonds.cc

[root@ansible-server coredns-update]# cat templates/coredns.yaml.j2
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: k8s-app
                operator: In
                values: ["kube-dns"]
            topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
    - name: metrics
      port: 9153
      protocol: TCP

[root@ansible-server coredns-update]# vim templates/coredns.yaml.j2
...
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop ##Delete the loop plugin here to avoid internal forwarding loops
        reload
        loadbalance
    }
...
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: {{ CLUSTERDNS }} #Change this
...

[root@ansible-server coredns-update]# vim tasks/coredns_file.yml
- name: copy coredns.yaml file
  template:
    src: coredns.yaml.j2
    dest: /root/coredns.yaml
  when:
    - ansible_hostname=="k8s-master01"

[root@ansible-server coredns-update]# vim tasks/config.yml
- name: Modify the "image:" line
  replace:
    path: /root/coredns.yaml
    regexp: '(.*image:) coredns(/.*)'
    replace: '\1 {{ HARBOR_DOMAIN }}/google_containers\2'

[root@ansible-server coredns-update]# vim tasks/download_images.yml
- name: get coredns version
  shell:
    chdir: /root
    cmd: awk -F "/"  '/image:/{print $NF}' coredns.yaml
  register: COREDNS_VERSION

- name: download coredns image
  shell: |
    {% for i in COREDNS_VERSION.stdout_lines %}
    docker pull registry.aliyuncs.com/google_containers/{{ i }}
    docker tag registry.aliyuncs.com/google_containers/{{ i }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    docker rmi registry.aliyuncs.com/google_containers/{{ i }}
    docker push {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    {% endfor %}

[root@ansible-server coredns-update]# vim tasks/install_coredns.yml
- name: install coredns
  shell:
    chdir: /root
    cmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f coredns.yaml"

[root@ansible-server coredns-update]# vim tasks/main.yml
- include: coredns_file.yml
- include: config.yml
- include: download_images.yml
- include: install_coredns.yml

[root@ansible-server coredns-update]# cd ../../
[root@ansible-server ansible]# tree roles/coredns-update/
roles/coredns-update/
├── tasks
│   ├── config.yml
│   ├── coredns_file.yml
│   ├── download_images.yml
│   ├── install_coredns.yml
│   └── main.yml
├── templates
│   └── coredns.yaml.j2
└── vars
    └── main.yml

3 directories, 7 files

[root@ansible-server ansible]# vim coredns_update_role.yml
---
- hosts: master01
  roles:
    - role: coredns-update

[root@ansible-server ansible]# ansible-playbook coredns_update_role.yml
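Since later steps rely on cluster DNS, it can be worth waiting for the new Deployment to finish rolling out before moving on. A possible extra task for the role (a sketch only, using the Deployment name from the manifest above):

- name: wait for the coredns Deployment to roll out
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig rollout status deployment coredns -n kube-system --timeout=300s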

18.5.2 Verify coredns

[root@k8s-master01 ~]# kubectl get po -n kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-867b8c5ddf-2nnfd   1/1     Running   0          33s

18.6 metrics

18.6.1 Upgrade metrics

[root@ansible-server ansible]# mkdir -p roles/metrics-update/{files,vars,tasks}
[root@ansible-server ansible]# cd roles/metrics-update/
[root@ansible-server metrics-update]# ls
files  tasks  vars

[root@ansible-server metrics-update]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -P files/

#Set HARBOR_DOMAIN below to your own harbor domain name
[root@ansible-server metrics-update]# vim vars/main.yml
HARBOR_DOMAIN: harbor.raymonds.cc

[root@ansible-server metrics-update]# vim files/components.yaml
...
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
#Add the following two lines
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
...
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
#Add the following two lines
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
...
      volumes:
      - emptyDir: {}
        name: tmp-dir
#Add the following three lines
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
...

[root@ansible-server metrics-update]# vim tasks/metrics_file.yml
- name: copy components.yaml file
  copy:
    src: components.yaml
    dest: /root/components.yaml

[root@ansible-server metrics-update]# vim tasks/config.yml
- name: Modify the "image:" line
  replace:
    path: /root/components.yaml
    regexp: '(.*image:) k8s.gcr.io/metrics-server(/.*)'
    replace: '\1 {{ HARBOR_DOMAIN }}/google_containers\2'

[root@ansible-server metrics-update]# vim tasks/download_images.yml
- name: get metrics version
  shell:
    chdir: /root
    cmd: awk -F "/"  '/image:/{print $NF}' components.yaml
  register: METRICS_VERSION

- name: download metrics image
  shell: |
    {% for i in METRICS_VERSION.stdout_lines %}
    docker pull registry.aliyuncs.com/google_containers/{{ i }}
    docker tag registry.aliyuncs.com/google_containers/{{ i }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    docker rmi registry.aliyuncs.com/google_containers/{{ i }}
    docker push {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    {% endfor %}

[root@ansible-server metrics-update]# vim tasks/install_metrics.yml
- name: install metrics
  shell:
    chdir: /root
    cmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f components.yaml"

[root@ansible-server metrics-update]# vim tasks/main.yml
- include: metrics_file.yml
- include: config.yml
- include: download_images.yml
- include: install_metrics.yml

[root@ansible-server metrics-update]# cd ../../
[root@ansible-server ansible]# tree roles/metrics-update/
roles/metrics-update/
├── files
│   └── components.yaml
├── tasks
│   ├── config.yml
│   ├── download_images.yml
│   ├── install_metrics.yml
│   ├── main.yml
│   └── metrics_file.yml
└── vars
    └── main.yml

3 directories, 7 files

[root@ansible-server ansible]# vim metrics_update_role.yml
---
- hosts: master01
  roles:
    - role: metrics-update

[root@ansible-server ansible]# ansible-playbook metrics_update_role.yml
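kubectl top only works once the aggregated metrics API is serving. If you want the role to block until that point instead of checking by hand, a task along these lines could be appended (a sketch; v1beta1.metrics.k8s.io is the APIService registered by the upstream components.yaml):

- name: wait until the metrics APIService reports Available
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig wait --for=condition=Available apiservice/v1beta1.metrics.k8s.io --timeout=300s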

18.6.2 Verify metrics

[root@k8s-master01 ~]# kubectl get pod -A|grep metrics-server
kube-system            metrics-server-648f84647-2l5bk               1/1     Running   0          34s

[root@k8s-master01 ~]# kubectl top node
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01.example.local   143m         7%     1853Mi          48%       
k8s-master02.example.local   140m         7%     1561Mi          40%       
k8s-master03.example.local   129m         6%     1446Mi          37%       
k8s-node01.example.local     97m          4%     863Mi           22%       
k8s-node02.example.local     80m          4%     854Mi           22%       
k8s-node03.example.local     80m          4%     884Mi           23%

18.7 dashboard

18.7.1 Upgrade dashboard

[root@ansible-server ansible]# mkdir -p roles/dashboard-update/{files,templates,vars,tasks}
[root@ansible-server ansible]# cd roles/dashboard-update/
[root@ansible-server dashboard-update]# ls
files  tasks  templates  vars

[root@ansible-server dashboard-update]# vim files/admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

[root@ansible-server dashboard-update]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml -O templates/recommended.yaml.j2
[root@ansible-server dashboard-update]# vim templates/recommended.yaml.j2
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort #Add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: {{ NODEPORT }} #Add this line
  selector:
    k8s-app: kubernetes-dashboard
...

#Set HARBOR_DOMAIN below to your own harbor domain name
[root@ansible-server dashboard-update]# vim vars/main.yml
HARBOR_DOMAIN: harbor.raymonds.cc
NODEPORT: 30005

[root@ansible-server dashboard-update]# vim tasks/dashboard_file.yml
- name: copy recommended.yaml file
  template:
    src: recommended.yaml.j2
    dest: /root/recommended.yaml

- name: copy admin.yaml file
  copy:
    src: admin.yaml
    dest: /root/admin.yaml

[root@ansible-server dashboard-update]# vim tasks/config.yml
- name: Modify the "image:" line
  replace:
    path: /root/recommended.yaml
    regexp: '(.*image:) kubernetesui(/.*)'
    replace: '\1 {{ HARBOR_DOMAIN }}/google_containers\2'

[root@ansible-server dashboard-update]# vim tasks/download_images.yml
- name: get dashboard version
  shell:
    chdir: /root
    cmd: awk -F "/"  '/image:/{print $NF}' recommended.yaml
  register: DASHBOARD_VERSION

- name: download dashboard image
  shell: |
    {% for i in DASHBOARD_VERSION.stdout_lines %}
    docker pull kubernetesui/{{ i }}
    docker tag kubernetesui/{{ i }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    docker rmi kubernetesui/{{ i }}
    docker push {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    {% endfor %}

[root@ansible-server dashboard-update]# vim tasks/install_dashboard.yml
- name: install dashboard
  shell:
    chdir: /root
    cmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f recommended.yaml -f admin.yaml"

[root@ansible-server dashboard-update]# vim tasks/main.yml
- include: dashboard_file.yml
- include: config.yml
- include: download_images.yml
- include: install_dashboard.yml

[root@ansible-server dashboard-update]# cd ../../
[root@ansible-server ansible]# tree roles/dashboard-update/
roles/dashboard-update/
├── files
│   └── admin.yaml
├── tasks
│   ├── config.yml
│   ├── dashboard_file.yml
│   ├── download_images.yml
│   ├── install_dashboard.yml
│   └── main.yml
├── templates
│   └── recommended.yaml.j2
└── vars
    └── main.yml

4 directories, 8 files

[root@ansible-server ansible]# vim dashboard_update_role.yml
---
- hosts: master01
  roles:
    - role: dashboard-update

[root@ansible-server ansible]# ansible-playbook dashboard_update_role.yml
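The login token that is retrieved manually in the next step could also be printed by the role itself. A rough sketch of two extra tasks (not part of the original role), using the same admin-user ServiceAccount created by admin.yaml:

- name: look up the admin-user token secret name
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig -n kube-system get secret | awk '/admin-user/{print $1}'
  register: ADMIN_SECRET

- name: print the dashboard login token
  shell:
    cmd: kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig -n kube-system get secret {{ ADMIN_SECRET.stdout }} -o jsonpath='{.data.token}' | base64 -d
  register: ADMIN_TOKEN

- name: show token
  debug:
    msg: "{{ ADMIN_TOKEN.stdout }}"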

18.7.2 Log in to the dashboard

https://172.31.3.101:30005

[root@k8s-master01 ~]#  kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-s426j
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 1eb8f4ab-1ef8-46e0-ab60-bd8592da66b9

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1411 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InV4dHpkbGRoSGZnbEJHU0dLRnE4TmNzYzU5cTlLazN6SFpUdGFPYjczbXcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXM0MjZqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxZWI4ZjRhYi0xZWY4LTQ2ZTAtYWI2MC1iZDg1OTJkYTY2YjkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Hde0T5KKcVOD4SBr44WMsGAX96jKLGywE9m3ML-J3bMeZzHmLbRgfSkFaJ_Gu_vR8gBbY_HoeoSdq136WlkJzr8hEjRPypoWoWNHiYL3Xl-9U-tZ4SKCmrFqGpYQV38rG4rzX086M2nSpLL880dZKc8i_PtuKeAtvVTqJ6V_-ozyuO7BWiU4vpvTk-U6tFNTegCjtfT3mavveAUcfYN3mTrFRr0E-KHkjxdCf6i-bsJk46xPC8ZvkaqY4VL3kGGJSkx2NwWlC9B_Eq0YsajGqdIr9gSJWQ2KFdHlsOWgT-um8sW7oBSYoTsZ4ZFjDfbEvAgFjbSaRfPGGQidXNvKmg

