Kubernetes v1.11 production binary deployment, end to end


Original content; please credit the source when reposting.

Author's blog: https://aronligithub.github.io/

A few words up front

After I wrote the full Kubernetes deployment walkthrough a while back, a friend told me it was far too long and asked whether I could split it into chapters. Work and self-study plans kept delaying the rewrite and the new chapters, but to make reading easier I have now split the content into the chapters below.

Kubernetes v1.11 binary deployment series: table of contents

  • Kubernetes v1.11 binary deployment
    • (1) Environment overview
    • (2) Self-signed TLS certificates with OpenSSL
    • (3) Master component deployment
    • (4) Node component deployment
    • (5) Integrating Calico as the Kubernetes CNI network, with self-signed CA enabled

Preface

With the groundwork laid in the previous chapter on Kubernetes fundamentals, and with the etcd cluster deployed, we can now deploy the Kubernetes cluster services themselves.

If you arrived at this chapter directly and are unsure how etcd was deployed, or how the earlier posts in this Kubernetes series lead up to it, you can start from those first.



Overview of the deployment steps

  • Download the Kubernetes binary executables
  • Generate the CA certificates with openssl
  • Deploy the Kubernetes master services
  • Deploy the Kubernetes node services

Environment preparation

Server topology


Host name    Server IP       Services
Server81     172.16.5.81     master, node, etcd
Server86     172.16.5.86     node, etcd
Server87     172.16.5.87     node, etcd

Server pre-configuration

  1. Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
  2. Disable SELinux
setenforce 0        # turns enforcement off for the current boot
Check the SELinux status:
/usr/sbin/sestatus -v    (or simply: sestatus)
To disable it permanently, edit the config file (requires a reboot):
in /etc/selinux/config, change SELINUX=enforcing to SELINUX=disabled,
then reboot the machine.
  3. Configure NTP so the servers' clocks stay in sync
yum install ntp ntpdate -y
timedatectl status
timedatectl list-timezones | grep Shanghai
timedatectl set-timezone Asia/Hong_Kong
timedatectl set-ntp yes
date
  4. Turn off the swap partition
sudo swapoff -a
# To disable swap permanently, comment out the swap line in /etc/fstab:
sudo vi /etc/fstab
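If you prefer not to edit /etc/fstab by hand, the swap entry can be commented out with a one-liner; a minimal sketch, assuming GNU sed:

# Comment out every active fstab line that mentions swap:
sudo sed -ri 's/^[^#].*swap.*/#&/' /etc/fstab
# Verify that nothing is swapped in anymore:
free -h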

Downloading the k8s 1.11 binaries

Download the pre-built binary packages from the official Kubernetes GitHub

Open the Kubernetes GitHub page:

Download kubernetes.tar.gz, which contains the Kubernetes service binaries, documentation, and examples.
Note: these downloads now require a proxy to reach from mainland China (as I recall they may still work without one, just very slowly).

Unpack the archive and download the server and client binaries

  1. Upload and unpack the binary tarball



2. Download the client and server binaries


The kubernetes/client README tells you what to run:
Run cluster/get-kube-binaries.sh to download client and server binaries.

3. Check the downloaded server files


Good; all the Kubernetes binaries we need are downloaded. The next step is to create the TLS certificate files the cluster requires.


Creating the CA certificates with OpenSSL


The certificates needed to deploy the Kubernetes services are:

Name                         Public key / Private key
Root CA                      ca.pem / ca.key
API Server                   apiserver.pem / apiserver.key
Cluster administrator        admin.pem / admin.key
Node proxy                   proxy.pem / proxy.key

The kubelet key pair on each node is not created here: it is generated automatically through the bootstrap response mechanism when kubelet starts, and issued once the CSR request is approved on the master.
With these basics understood, the certificate creation steps follow.
Before that, here is what the end result will look like:

(Figure: the generated certificate files)

(Figure: the automatically generated kubelet certificates)

Create the root CA

# Generate the root CA.
# Generate the RSA private key (unencrypted):
openssl genrsa -out ca.key 2048
# Generate the self-signed root certificate:
openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.pem -subj "/CN=kubernetes/O=k8s"

# Option notes:
-new    generate a certificate request
-x509   output a certificate directly (self-signed)
-key    the private key file to use
-days   certificate validity, here 10000 days
-out    the certificate file to write
-subj   the certificate subject; here we set the CN and O values

# The important CN and O parameters:
The CN and O values set via -subj matter: Kubernetes derives the user name and the user's group from these two fields:
"CN" (Common Name): kube-apiserver extracts this field as the requesting User Name; browsers also use it to verify whether a website is legitimate;
"O" (Organization): kube-apiserver extracts this field as the Group the requesting user belongs to.

Generating the apiserver certificate

The master needs the following certificates:
the root CA private key and certificate (ca.key, ca.pem);
the apiserver certificate apiserver.pem and its private key apiserver.key.

1. Create openssl.cnf

openssl.cnf template

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = ${K8S_SERVICE_IP}
IP.2 = ${MASTER_IPV4}
Notes on the template:
Replace ${MASTER_IPV4} with the IP address of the master that serves the API, and replace ${K8S_SERVICE_IP} with the first IP of the range you planned as the Kubernetes service range. For example, with 10.100.0.0/16 as the service range, use 10.100.0.1 for ${K8S_SERVICE_IP}.
If you deploy multiple master nodes in a high-availability setup, you need to add more TLS subjectAltNames (SANs). The proper SANs for each certificate depend on how nodes and kubectl users communicate with the masters: directly by IP address, through a load balancer, or by resolving a DNS name, e.g.:
DNS.5 = ${MASTER_DNS_NAME}
IP.3 = ${MASTER_IP}
IP.4 = ${MASTER_LOADBALANCER_IP}
Nodes would then reach the load balancer through ${MASTER_DNS_NAME}.

Following the template above, here is the openssl.cnf for this setup, with server81 as the master.


Create the openssl.cnf file

[root@server81 openssl]# vim openssl.cnf 
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = k8s_master
IP.1 = 10.0.6.1              # first IP of the cluster service range
IP.2 = 172.16.5.81           # master IP
IP.3 = 10.1.0.1              # docker bridge IP
IP.4 = 10.0.6.200            # kubernetes DNS IP

2. Generate the apiserver key pair

# Generate the API server keypair.
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -out apiserver.csr -subj "/CN=kubernetes/O=k8s" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out apiserver.pem -days 3650 -extensions v3_req -extfile openssl.cnf

The root CA (ca.key, ca.pem) and the apiserver certificate (apiserver.key, apiserver.pem) usually go under /etc/kubernetes/kubernetesTLS/ on the master node (this path is customizable; you do not have to use mine).


3. How the certificates are referenced

The apiserver configuration must specify the following flags:

## Kubernetes access certificates:
--token-auth-file=/etc/kubernetes/token.csv
--tls-cert-file=/etc/kubernetes/kubernetesTLS/apiserver.pem
--tls-private-key-file=/etc/kubernetes/kubernetesTLS/apiserver.key
--client-ca-file=/etc/kubernetes/kubernetesTLS/ca.pem
--service-account-key-file=/etc/kubernetes/kubernetesTLS/ca.key

## Etcd access certificates:
--storage-backend=etcd3  
--etcd-cafile=/etc/etcd/etcdSSL/ca.pem  
--etcd-certfile=/etc/etcd/etcdSSL/etcd.pem  
--etcd-keyfile=/etc/etcd/etcdSSL/etcd-key.pem  

The controller-manager configuration must specify the following flags:

## Kubernetes access certificates:
--cluster-name=kubernetes  
--cluster-signing-cert-file=/etc/kubernetes/kubernetesTLS/ca.pem  
--cluster-signing-key-file=/etc/kubernetes/kubernetesTLS/ca.key  
--service-account-private-key-file=/etc/kubernetes/kubernetesTLS/ca.key  
--root-ca-file=/etc/kubernetes/kubernetesTLS/ca.pem

Generating the admin (cluster administrator) certificate

## This certificate is used by kubectl; generate it as follows:
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -out admin.csr -subj "/CN=admin/O=system:masters/OU=System"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out admin.pem -days 3650

Notes:

Once kube-apiserver runs with RBAC enabled, clients (such as kubelet, kube-proxy, and Pods) are authorized based on an authenticated user name and group.
So where are that user name and group defined?
Look again at the openssl command that created the certificate:

openssl req -new -key admin.key -out admin.csr -subj "/CN=admin/O=system:masters/OU=System"

Here /CN=admin/O=system:masters/OU=System means: CN defines the user as admin, O defines the user's group as system:masters, and OU marks the certificate's Group as system:masters.

With that defined, how does Kubernetes actually use it?

kube-apiserver predefines a number of RoleBindings used by RBAC. For example, the ClusterRoleBinding cluster-admin binds the Group system:masters to the ClusterRole cluster-admin, which grants permission to call every kube-apiserver API.
Naturally, when we create the admin certificate, we must define its group and user following that convention.

And what happens when a client such as kubelet presents a certificate signed this way to kube-apiserver?

In the certificate's subject, OU marks the certificate's Group as system:masters. When the client uses this certificate against kube-apiserver, authentication succeeds because the certificate is signed by our CA; and since the certificate's group is the pre-authorized system:masters, it is granted access to all APIs.
Likewise, if you sign certificates with CFSSL you must configure the user and group the same way; I won't separately cover signing the Kubernetes certificates with CFSSL here.
What matters is understanding the relationship between certificate signing and Kubernetes RBAC role bindings.
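A quick way to confirm the user and group really landed in the admin certificate is to print its subject; this is standard openssl usage:

openssl x509 -in admin.pem -noout -subject
# subject= /CN=admin/O=system:masters/OU=System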


Generating the node proxy certificate

openssl genrsa -out proxy.key 2048
openssl req -new -key proxy.key -out proxy.csr -subj "/CN=system:kube-proxy"
openssl x509 -req -in proxy.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out proxy.pem -days 3650

Notes:

Given how admin's CN maps to role bindings above, you can see at a glance that CN here defines the proxy's user.

CN sets the certificate's requesting User to system:kube-proxy;
in the default Kubernetes RBAC bindings, kube-apiserver's predefined ClusterRoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs.
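Once the apiserver is running you can inspect that predefined binding yourself; system:node-proxier is one of the bindings kube-apiserver bootstraps by default:

kubectl get clusterrolebinding system:node-proxier -o yaml
# subjects: a User named system:kube-proxy
# roleRef: the ClusterRole system:node-proxier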


Copy the generated certificates into the target deployment directory


These are all the certificate files needed to deploy the master node.

Through this whole CA process you may wonder why I bother writing so many detailed notes and explanations, and after reading it all, the number of steps may feel tedious.
Don't worry: the detail is there so you can understand each step, and against the tedium I have already written a script that signs everything automatically.
Here is the source:

  • Step 1: create the openssl cnf file
[root@server81 openssl]# cat create_openssl_cnf.sh 
#!/bin/bash
basedir=$(cd `dirname $0`;pwd)

################## Set PARAMS ######################
MASTER_IP=`python -c "import socket;print([(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1])"`
DockerServiceIP="10.1.0.1"  ## 10.1.0.0/16
ClusterServiceIP="10.0.6.1" ## 10.0.6.0/24
kubeDnsIP="10.0.6.200"

## function
function create_openssl_cnf(){
cat <<EOF > $basedir/openssl.cnf 
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = k8s_master
IP.1 = $ClusterServiceIP              # first IP of the cluster service range
IP.2 = $MASTER_IP                     # master IP
IP.3 = $DockerServiceIP               # docker bridge IP
IP.4 = $kubeDnsIP                     # kubernetes DNS IP
EOF
}
create_openssl_cnf
[root@server81 openssl]# 

- Step 2: create the TLS certificates the master needs

[root@server81 install_k8s_master]# ls
configDir     Step1_create_CA.sh  Step2_create_token.sh       Step4_install_controller.sh  Step6_create_kubeconfig_file.sh
Implement.sh  Step1_file          Step3_install_apiserver.sh  Step5_install_scheduler.sh   Step7_set_master_info.sh
[root@server81 install_k8s_master]# 
[root@server81 install_k8s_master]# vim Step1_create_CA.sh 
[root@server81 install_k8s_master]# cat Step1_create_CA.sh 
#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
configdir=$basedir/Step1_file
openssldir=$configdir/openssl
ssldir=$configdir/kubernetesTLS
kubernetsDir=/etc/kubernetes
kubernetsTLSDir=/etc/kubernetes/kubernetesTLS

################## Set PARAMS ######################
MASTER_IP=`python -c "import socket;print([(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1])"`

## function and implements
function check_firewalld_selinux(){
    systemctl status firewalld
    /usr/sbin/sestatus -v
    swapoff -a
}
check_firewalld_selinux

function create_ssl(){
    cd $configdir && rm -rf $ssldir && mkdir -p $ssldir
    cd $ssldir
    # Generate the root CA.
    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.pem -subj "/CN=kubernetes/O=k8s"
    ls $ssldir
}
create_ssl

function create_openssl_cnf(){
    sh $openssldir/create_openssl_cnf.sh
    cat $openssldir/openssl.cnf > $ssldir/openssl.cnf
}
create_openssl_cnf

function create_apiserver_key_pem(){
    cd $ssldir
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -out apiserver.csr -subj "/CN=kubernetes/O=k8s" -config openssl.cnf
    openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out apiserver.pem -days 3650 -extensions v3_req -extfile openssl.cnf
    ls $ssldir
}
create_apiserver_key_pem

function create_admin_key_pem(){
    cd $ssldir
    openssl genrsa -out admin.key 2048
    openssl req -new -key admin.key -out admin.csr -subj "/CN=admin/O=system:masters/OU=System"
    openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out admin.pem -days 3650
    ls $ssldir
}
create_admin_key_pem

function create_proxy_key_pem(){
    cd $ssldir
    openssl genrsa -out proxy.key 2048
    openssl req -new -key proxy.key -out proxy.csr -subj "/CN=system:kube-proxy"
    openssl x509 -req -in proxy.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out proxy.pem -days 3650
    ls $ssldir
}
create_proxy_key_pem

function setup_ca(){
    rm -rf $kubernetsDir
    mkdir -p $kubernetsTLSDir
    cat $ssldir/ca.pem > $kubernetsTLSDir/ca.pem
    cat $ssldir/ca.key > $kubernetsTLSDir/ca.key
    cat $ssldir/apiserver.pem > $kubernetsTLSDir/apiserver.pem
    cat $ssldir/apiserver.key > $kubernetsTLSDir/apiserver.key
    cat $ssldir/admin.pem > $kubernetsTLSDir/admin.pem
    cat $ssldir/admin.key > $kubernetsTLSDir/admin.key
    cat $ssldir/proxy.pem > $kubernetsTLSDir/proxy.pem
    cat $ssldir/proxy.key > $kubernetsTLSDir/proxy.key
    echo "checking TLS file:"
    ls $kubernetsTLSDir
}
setup_ca
[root@server81 install_k8s_master]# 

Running it generates the certificates:

[root@server81 install_k8s_master]# ./Step1_create_CA.sh 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
SELinux status:                 disabled
Generating RSA private key, 2048 bit long modulus
................................................+++
............................................................................+++
e is 65537 (0x10001)
ca.key  ca.pem
Generating RSA private key, 2048 bit long modulus
.......................................................................................+++
.............+++
e is 65537 (0x10001)
Signature ok
subject=/CN=kubernetes/O=k8s
Getting CA Private Key
apiserver.csr  apiserver.key  apiserver.pem  ca.key  ca.pem  ca.srl  openssl.cnf
Generating RSA private key, 2048 bit long modulus
.......................................+++
...........+++
e is 65537 (0x10001)
Signature ok
subject=/CN=admin/O=system:masters/OU=System
Getting CA Private Key
admin.csr  admin.key  admin.pem  apiserver.csr  apiserver.key  apiserver.pem  ca.key  ca.pem  ca.srl  openssl.cnf
Generating RSA private key, 2048 bit long modulus
...+++
..+++
e is 65537 (0x10001)
Signature ok
subject=/CN=system:kube-proxy
Getting CA Private Key
admin.csr  admin.pem      apiserver.key  ca.key  ca.srl       proxy.csr  proxy.pem
admin.key  apiserver.csr  apiserver.pem  ca.pem  openssl.cnf  proxy.key
checking TLS file:
admin.key  admin.pem  apiserver.key  apiserver.pem  ca.key  ca.pem  proxy.key  proxy.pem
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# ls
configDir     Step1_create_CA.sh  Step2_create_token.sh       Step4_install_controller.sh  Step6_create_kubeconfig_file.sh
Implement.sh  Step1_file          Step3_install_apiserver.sh  Step5_install_scheduler.sh   Step7_set_master_info.sh
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# ls /etc/kubernetes/
kubernetesTLS
[root@server81 install_k8s_master]# ls /etc/kubernetes/kubernetesTLS/
admin.key  admin.pem  apiserver.key  apiserver.pem  ca.key  ca.pem  proxy.key  proxy.pem
[root@server81 install_k8s_master]# 
[root@server81 install_k8s_master]# ls -ll /etc/kubernetes/kubernetesTLS/
total 32
-rw-r--r-- 1 root root 1675 Aug 19 22:21 admin.key
-rw-r--r-- 1 root root 1050 Aug 19 22:21 admin.pem
-rw-r--r-- 1 root root 1675 Aug 19 22:21 apiserver.key
-rw-r--r-- 1 root root 1302 Aug 19 22:21 apiserver.pem
-rw-r--r-- 1 root root 1679 Aug 19 22:21 ca.key
-rw-r--r-- 1 root root 1135 Aug 19 22:21 ca.pem
-rw-r--r-- 1 root root 1679 Aug 19 22:21 proxy.key
-rw-r--r-- 1 root root 1009 Aug 19 22:21 proxy.pem
[root@server81 install_k8s_master]# 

See? With this script the world suddenly looks brighter. Once you understand the detailed configuration steps, a single run of the script buys you a lot more coffee time.



Deploying the master

Copy the master binaries into /usr/bin

#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
softwaredir=$basedir/../install_kubernetes_software
binDir=/usr/bin

function copy_bin(){
    cp -v $softwaredir/kube-apiserver $binDir
    cp -v $softwaredir/kube-controller-manager $binDir
    cp -v $softwaredir/kube-scheduler $binDir
    cp -v $softwaredir/kubectl $binDir
}
copy_bin

API Server access control overview

API Server access control has three layers:
Authentication, Authorization, and Admission Control.

Authentication:
When a client sends an API request to a non-read-only Kubernetes port, Kubernetes validates the user's legitimacy in one of three ways: certificate authentication, token authentication, or basic authentication.

① Certificate authentication

   Set the apiserver flag --client_ca_file=SOMEFILE; the referenced file contains the CA used to verify client certificates. If verification succeeds, the subject recorded in the certificate becomes the request's username.

② Token authentication (the method used in this article; see the curl sketch after this list)

   Set the apiserver flag --token_auth_file=SOMEFILE. The token file has three columns: token, username, userid. With token authentication, HTTP requests to the apiserver carry an extra header, Authorization, set to: Bearer SOMETOKEN.

③ Basic authentication

   Set the apiserver flag --basic_auth_file=SOMEFILE; if you change a password in that file, it only takes effect after restarting the apiserver. The file has three columns: password, username, userid. With basic authentication, requests carry an Authorization header set to: Basic BASE64ENCODED(USER:PASSWORD).

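To see token authentication in action once the apiserver is up, you can call the API directly; a minimal sketch, assuming the token.csv and CA paths used throughout this article:

# Read the token (first column of token.csv) and call the secure port with it:
TOKEN=$(cut -d, -f1 /etc/kubernetes/token.csv)
curl --cacert /etc/kubernetes/kubernetesTLS/ca.pem \
     -H "Authorization: Bearer $TOKEN" \
     https://172.16.5.81:6443/api
# An authenticated request returns the APIVersions object instead of 401 Unauthorized.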

Creating the TLS bootstrapping token

Token auth file
The token can be any string containing 128 bits of entropy, generated with a secure random source:

#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
configConfDir=$basedir/configDir/conf    # matches the layout used by the other scripts
kubernetesDir=/etc/kubernetes

## set param 
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

## function and implements
function save_BOOTSTRAP_TOKEN(){
cat > $configConfDir/BOOTSTRAP_TOKEN <<EOF
$BOOTSTRAP_TOKEN
EOF
}
save_BOOTSTRAP_TOKEN

function create_token(){
cat > $kubernetesDir/token.csv <<EOF
$BOOTSTRAP_TOKEN,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
}
create_token

Afterwards, distribute token.csv into /etc/kubernetes/ on every machine (master and nodes); a small sketch follows.
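A distribution sketch, assuming /etc/kubernetes already exists on the targets and the hostnames from this article:

for node in server86 server87; do
    scp /etc/kubernetes/token.csv root@$node:/etc/kubernetes/
done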


Setting the admin user's cluster parameters

The user and group were already signed into the certificates during the OpenSSL steps above; the next step is to define the admin user's parameters within the cluster.

#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
kubernetesTLSDir=/etc/kubernetes/kubernetesTLS   # as defined in the earlier scripts

## set param 
MASTER_IP=`python -c "import socket;print([(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1])"`
KUBE_APISERVER="https://$MASTER_IP:6443"

# Set the cluster parameters
function config_cluster_param(){
    kubectl config set-cluster kubernetes \
    --certificate-authority=$kubernetesTLSDir/ca.pem \
    --embed-certs=true \
    --server=$KUBE_APISERVER
}
config_cluster_param

# Set the administrator credentials
function config_admin_credentials(){
    kubectl config set-credentials admin \
    --client-certificate=$kubernetesTLSDir/admin.pem \
    --client-key=$kubernetesTLSDir/admin.key \
    --embed-certs=true
}
config_admin_credentials

# Set the administrator context
function config_admin_context(){
    kubectl config set-context kubernetes --cluster=kubernetes --user=admin
}
config_admin_context

# Use this context as the cluster default
function config_default_context(){
    kubectl config use-context kubernetes
}
config_default_context

Note that with token authentication, Kubernetes later needs a bootstrap.kubeconfig file, and the relevant TLS material must be written into it.

How do the TLS parameters get into bootstrap.kubeconfig?

This is where the --embed-certs flag comes in: when it is true, the certificate-authority certificate data is written directly into the generated bootstrap.kubeconfig file.
With that flag specified, the node certificates themselves are generated automatically by kube-apiserver later.
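Once the file exists, you can verify the certificates were embedded rather than referenced by path; --kubeconfig and --raw are standard kubectl flags:

kubectl config view --kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --raw
# With --embed-certs=true, certificate-authority-data carries the base64 CA
# instead of a certificate-authority file path.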


Installing kube-apiserver

  1. Write kube-apiserver.service (/usr/lib/systemd/system)

Write kube-apiserver.service into /usr/lib/systemd/system/ so systemd can start the binary later:

[Unit]
Description=Kube-apiserver Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
Type=notify
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=always
LimitNOFILE=65536

[Install]
WantedBy=default.target

kube-apiserver.service notes


EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver

This defines the two configuration files the apiserver loads.


ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS

This sets the binary to launch (/usr/bin/kube-apiserver) and the variables holding its many startup flags; the variables all come from the configuration files.


2. Write the config file (/etc/kubernetes)

The config file provides the common Kubernetes parameters read by the apiserver, controller-manager, and scheduler services.
It is written into /etc/kubernetes; that directory is also customizable, as long as you adjust the EnvironmentFile paths in the service units accordingly.

[root@server81 kubernetes]# vim config 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
# (i.e. whether errors go to log files or to stderr)
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://172.16.5.81:8080"

  3. Write the apiserver configuration file (/etc/kubernetes)

The apiserver file provides the parameters read only by the apiserver service.
It is written into /etc/kubernetes.

###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=172.16.5.81 --bind-address=172.16.5.81 --insecure-bind-address=172.16.5.81"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://172.16.5.81:2379,https://172.16.5.86:2379,https://172.16.5.87:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.6.0/24"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota,NodeRestriction"

## Add your own!
KUBE_API_ARGS="--authorization-mode=Node,RBAC  --runtime-config=rbac.authorization.k8s.io/v1beta1  --kubelet-https=true  --token-auth-file=/etc/kubernetes/token.csv  --service-node-port-range=30000-32767  --tls-cert-file=/etc/kubernetes/kubernetesTLS/apiserver.pem  --tls-private-key-file=/etc/kubernetes/kubernetesTLS/apiserver.key  --client-ca-file=/etc/kubernetes/kubernetesTLS/ca.pem  --service-account-key-file=/etc/kubernetes/kubernetesTLS/ca.key  --storage-backend=etcd3  --etcd-cafile=/etc/etcd/etcdSSL/ca.pem  --etcd-certfile=/etc/etcd/etcdSSL/etcd.pem  --etcd-keyfile=/etc/etcd/etcdSSL/etcd-key.pem  --enable-swagger-ui=true  --apiserver-count=3  --audit-log-maxage=30  --audit-log-maxbackup=3  --audit-log-maxsize=100  --audit-log-path=/var/lib/audit.log  --event-ttl=1h"

Notes on the parameters:

Binding the master IP and listen addresses

## The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=$MASTER_IP --bind-address=$MASTER_IP --insecure-bind-address=$MASTER_IP"

MASTER_IP is the IP address of the master node, for example:
--advertise-address=172.16.5.81 --bind-address=172.16.5.81 --insecure-bind-address=172.16.5.81

The etcd cluster endpoints

## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=$ETCD_ENDPOINT"

ETCD_ENDPOINT is how the etcd cluster is reached, for example:
--etcd-servers=https://172.16.5.81:2379,https://172.16.5.86:2379,https://172.16.5.87:2379
For a single etcd node, one IP is enough:
--etcd-servers=https://172.16.5.81:2379
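Before pointing the apiserver at etcd, it is worth confirming the endpoints answer with these same certificates; a sketch using etcdctl's v3 API and the etcd TLS paths from this series:

ETCDCTL_API=3 etcdctl \
    --endpoints=https://172.16.5.81:2379,https://172.16.5.86:2379,https://172.16.5.87:2379 \
    --cacert=/etc/etcd/etcdSSL/ca.pem \
    --cert=/etc/etcd/etcdSSL/etcd.pem \
    --key=/etc/etcd/etcdSSL/etcd-key.pem \
    endpoint health
# Each endpoint should report that it is healthy.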

The virtual network range for Kubernetes services

Kubernetes has separate IP ranges for pods and for services; this defines the virtual service range.

## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.6.0/24"

Configuring the admission control plugins

## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota,NodeRestriction"

Configuring additional custom flags

## Add your own!
KUBE_API_ARGS="--authorization-mode=Node,RBAC \
    --runtime-config=rbac.authorization.k8s.io/v1beta1 \
    --kubelet-https=true \
    --token-auth-file=$kubernetesDir/token.csv \
    --service-node-port-range=30000-32767 \
    --tls-cert-file=$kubernetesTLSDir/apiserver.pem \
    --tls-private-key-file=$kubernetesTLSDir/apiserver.key \
    --client-ca-file=$kubernetesTLSDir/ca.pem \
    --service-account-key-file=$kubernetesTLSDir/ca.key \
    --storage-backend=etcd3 \
    --etcd-cafile=$etcdCaPem \
    --etcd-certfile=$etcdPem \
    --etcd-keyfile=$etcdKeyPem \
    --enable-swagger-ui=true \
    --apiserver-count=3 \
    --audit-log-maxage=30 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-path=/var/lib/audit.log \
    --event-ttl=1h"
Parameter notes:
--authorization-mode=Node,RBAC : enable the Node and RBAC authorization plugins
--runtime-config=rbac.authorization.k8s.io/v1beta1 : enable the rbac.authorization.k8s.io/v1beta1 API group
--kubelet-https=true : use HTTPS for connections to the kubelets
--token-auth-file=$kubernetesDir/token.csv : the token file generated earlier
--service-node-port-range=30000-32767 : restrict NodePorts to the range 30000-32767
--tls-cert-file=$kubernetesTLSDir/apiserver.pem : the apiserver TLS certificate
--tls-private-key-file=$kubernetesTLSDir/apiserver.key : the apiserver TLS private key
--client-ca-file=$kubernetesTLSDir/ca.pem : the CA root certificate used to verify client TLS certificates
--service-account-key-file=$kubernetesTLSDir/ca.key : the key used to validate service-account tokens
--storage-backend=etcd3 : use the etcd version 3 storage backend
--etcd-cafile=$etcdCaPem : the CA root certificate for etcd access
--etcd-certfile=$etcdPem : the TLS client certificate for etcd access
--etcd-keyfile=$etcdKeyPem : the TLS client key for etcd access
--enable-swagger-ui=true : enable swagger-ui; Kubernetes uses it to offer online API browsing
--apiserver-count=3 : the number of API servers in the cluster; harmless even with a single one
--event-ttl=1h : the API server retains events for one hour

That covers the apiserver service unit and its configuration files. If anything is still unclear, leave me a comment.

4. Start the apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

Execution looks like this:

[root@server81 install_kubernetes]# systemctl daemon-reload
[root@server81 install_kubernetes]# systemctl enable kube-apiserver
[root@server81 install_kubernetes]# systemctl start kube-apiserver
[root@server81 install_kubernetes]# systemctl status kube-apiserver
● kube-apiserver.service - Kube-apiserver Service
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2018-08-19 22:57:48 HKT; 11h ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 1688 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─1688 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=https://172.16.5.81:2379,https://172.16.5.86:2379,...
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.415631    1688 storage_rbac.go:246] created role.rbac.authorizat...public
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.448673    1688 controller.go:597] quota admission added evaluato...dings}
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.454356    1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.496380    1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.534031    1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.579370    1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.612662    1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.652351    1688 storage_rbac.go:276] created rolebinding.rbac.aut...public
Aug 20 01:00:00 server81 kube-apiserver[1688]: I0820 01:00:00.330487    1688 trace.go:76] Trace[864267216]: "GuaranteedUpdate ...75ms):
Aug 20 01:00:00 server81 kube-apiserver[1688]: Trace[864267216]: [683.232535ms] [674.763984ms] Transaction prepared
Hint: Some lines were ellipsized, use -l to show in full.
[root@server81 install_kubernetes]# 
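Besides systemctl status, a quick liveness probe also works; this setup keeps the insecure port bound on 8080:

curl http://172.16.5.81:8080/healthz
# ok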

Installing kube-controller-manager

1. Write kube-controller-manager.service (/usr/lib/systemd/system)

Write kube-controller-manager.service into /usr/lib/systemd/system as the service unit that starts the binary.

[root@server81 install_k8s_master]# cat /usr/lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kube-controller-manager Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
Type=simple
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=always
LimitNOFILE=65536

[Install]
WantedBy=default.target
[root@server81 install_k8s_master]# 

kube-controller-manager.service notes

EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager

These are the environment files the kube-controller-manager.service unit loads.


ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS

This defines the service's binary path (/usr/bin/kube-controller-manager) and the flags the Go service starts with, all read from the configuration files.


2. The controller-manager configuration file (/etc/kubernetes)

The controller-manager file is written into /etc/kubernetes.

[root@server81 install_k8s_master]# cat /etc/kubernetes/
apiserver           config              controller-manager  kubernetesTLS/      token.csv           
[root@server81 install_k8s_master]# cat /etc/kubernetes/controller-manager 
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://172.16.5.81:8080  --address=127.0.0.1  --service-cluster-ip-range=10.0.6.0/24  --cluster-name=kubernetes  --cluster-signing-cert-file=/etc/kubernetes/kubernetesTLS/ca.pem  --cluster-signing-key-file=/etc/kubernetes/kubernetesTLS/ca.key  --service-account-private-key-file=/etc/kubernetes/kubernetesTLS/ca.key  --root-ca-file=/etc/kubernetes/kubernetesTLS/ca.pem  --leader-elect=true  --cluster-cidr=10.1.0.0/16"
[root@server81 install_k8s_master]# 

controller-manager parameter notes

Parameter notes:
--master=http://172.16.5.81:8080 : the master's access address
--address=127.0.0.1 : listen on the local address; it must be 127.0.0.1 because kube-apiserver expects scheduler and controller-manager on the same machine
--service-cluster-ip-range=10.0.6.0/24 : the Kubernetes service IP range
--cluster-name=kubernetes : the cluster name
--cluster-signing-cert-file=$kubernetesTLSDir/ca.pem : the CA certificate used to sign the certificates created for TLS bootstrap
--cluster-signing-key-file=$kubernetesTLSDir/ca.key : the CA private key used to sign the certificates created for TLS bootstrap
--service-account-private-key-file=$kubernetesTLSDir/ca.key : the private key used to sign service-account tokens
--root-ca-file=$kubernetesTLSDir/ca.pem : the root CA used to verify the kube-apiserver certificate; when set, this CA file is also placed into each Pod container's ServiceAccount
--leader-elect=true : enable leader election; with a single instance there is nothing to elect yet, but it matters when there are multiple API servers
--cluster-cidr=$podClusterIP : the cluster's pod IP range

  3. Start the controller-manager service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

The result:

[root@server81 conf]# systemctl daemon-reload
[root@server81 conf]# systemctl enable kube-controller-manager
[root@server81 conf]# systemctl start kube-controller-manager
[root@server81 conf]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kube-controller-manager Service
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-08-20 10:22:37 HKT; 33min ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 2246 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─2246 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://172.16.5.81:8080 --master=http://172.16....
Aug 20 10:22:37 server81 kube-controller-manager[2246]: I0820 10:22:37.577898    2246 controller_utils.go:1032] Caches are sync...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.548284    2246 controller_utils.go:1025] Waiting for cac...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.568248    2246 controller_utils.go:1025] Waiting for cac...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.595675    2246 controller_utils.go:1032] Caches are sync...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.595716    2246 garbagecollector.go:142] Garbage collecto...rbage
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.650186    2246 controller_utils.go:1032] Caches are sync...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.668935    2246 controller_utils.go:1032] Caches are sync...oller
Aug 20 10:29:56 server81 kube-controller-manager[2246]: W0820 10:29:56.356490    2246 reflector.go:341] k8s.io/kubernetes/vendo... old.
Aug 20 10:39:47 server81 kube-controller-manager[2246]: W0820 10:39:47.125097    2246 reflector.go:341] k8s.io/kubernetes/vendo... old.
Aug 20 10:51:45 server81 kube-controller-manager[2246]: W0820 10:51:45.878609    2246 reflector.go:341] k8s.io/kubernetes/vendo... old.
Hint: Some lines were ellipsized, use -l to show in full.
[root@server81 conf]# 

Installing kube-scheduler

1. Write kube-scheduler.service (/usr/lib/systemd/system)

Write kube-scheduler.service into /usr/lib/systemd/system.

[root@server81 install_k8s_master]# cat /usr/lib/systemd/system/kube-scheduler.service 
[Unit]
Description=Kube-scheduler Service
After=network.target

[Service]
Type=simple
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=always
LimitNOFILE=65536

[Install]
WantedBy=default.target
[root@server81 install_k8s_master]# 

kube-scheduler.service notes

EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler

These define the two configuration files the service reads.


ExecStart=/usr/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS

This defines the binary to launch (/usr/bin/kube-scheduler) and its startup flags.


2. The scheduler configuration file (/etc/kubernetes)

The scheduler file is written into /etc/kubernetes.

[root@server81 install_k8s_master]# cat /etc/kubernetes/
apiserver           config              controller-manager  kubernetesTLS/      scheduler           token.csv
[root@server81 install_k8s_master]# cat /etc/kubernetes/scheduler 
###
# The following values are used to configure the kubernetes scheduler
# defaults from config and scheduler should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--master=http://172.16.5.81:8080 --leader-elect=true --address=127.0.0.1"
[root@server81 install_k8s_master]# 

scheduler parameter notes

Parameter notes:
--master=http://172.16.5.81:8080 : the master apiserver's access address
--leader-elect=true : enable leader election; with a single instance there is nothing to elect yet, but it matters when there are multiple API servers
--address=127.0.0.1 : listen on the local address; it must be 127.0.0.1 because kube-apiserver expects scheduler and controller-manager on the same machine

3. Start the scheduler service

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
systemctl status kube-scheduler

The result:

[root@server81 install_k8s_master]# systemctl daemon-reload
[root@server81 install_k8s_master]# systemctl enable kube-scheduler
[root@server81 install_k8s_master]# systemctl restart kube-scheduler
[root@server81 install_k8s_master]# systemctl status kube-scheduler
● kube-scheduler.service - Kube-scheduler Service
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-08-20 11:12:28 HKT; 686ms ago
 Main PID: 2459 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─2459 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://172.16.5.81:8080 --master=http://172.16.5.81:8080...
Aug 20 11:12:28 server81 systemd[1]: Started Kube-scheduler Service.
Aug 20 11:12:28 server81 systemd[1]: Starting Kube-scheduler Service...
Aug 20 11:12:28 server81 kube-scheduler[2459]: W0820 11:12:28.724918    2459 options.go:148] WARNING: all flags other than --c... ASAP.
Aug 20 11:12:28 server81 kube-scheduler[2459]: I0820 11:12:28.727302    2459 server.go:126] Version: v1.11.0
Aug 20 11:12:28 server81 kube-scheduler[2459]: W0820 11:12:28.728311    2459 authorization.go:47] Authorization is disabled
Aug 20 11:12:28 server81 kube-scheduler[2459]: W0820 11:12:28.728332    2459 authentication.go:55] Authentication is disabled
Aug 20 11:12:28 server81 kube-scheduler[2459]: I0820 11:12:28.728341    2459 insecure_serving.go:47] Serving healthz insecurel...:10251
Hint: Some lines were ellipsized, use -l to show in full.
[root@server81 install_k8s_master]# 

At this point every service the master needs is installed. Let's check the component status:

[root@server81 install_k8s_master]# ls /etc/kubernetes/
apiserver  config  controller-manager  kubernetesTLS  scheduler  token.csv
[root@server81 install_k8s_master]# 
[root@server81 install_k8s_master]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
[root@server81 install_k8s_master]# 

As you can see, every component, etcd included, is running normally.

Next we create the kube-proxy kubeconfig and kubelet bootstrapping kubeconfig files that provide TLS authentication for the node services.
These two files are what kube-proxy and kubelet use to access the apiserver.


Creating the kube-proxy kubeconfig file and its cluster parameters

The kube-proxy kubeconfig file carries the cluster parameters for the kube-proxy user's requests against the apiserver APIs.
After running the commands below, it is generated under /etc/kubernetes.

#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
serviceDir=/usr/lib/systemd/system
binDir=/usr/bin
kubernetesDir=/etc/kubernetes
kubernetesTLSDir=/etc/kubernetes/kubernetesTLS

## set param 
MASTER_IP=`python -c "import socket;print([(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1])"`
BOOTSTRAP_TOKEN=        ## fill in the token recorded earlier; it must match token.csv

## proxy
## Set the kube-proxy cluster parameters.
## --embed-certs=true writes the certificate data into the kubeconfig;
## --server is the master address; --kubeconfig is where the file is generated.
kubectl config set-cluster kubernetes \
    --certificate-authority=$kubernetesTLSDir/ca.pem \
    --embed-certs=true \
    --server=https://$MASTER_IP:6443 \
    --kubeconfig=$kubernetesDir/kube-proxy.kubeconfig

## Set the kube-proxy user parameters
kubectl config set-credentials kube-proxy \
    --client-certificate=$kubernetesTLSDir/proxy.pem \
    --client-key=$kubernetesTLSDir/proxy.key \
    --embed-certs=true \
    --kubeconfig=$kubernetesDir/kube-proxy.kubeconfig

## Set the kube-proxy user's context in the kubernetes cluster
kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=$kubernetesDir/kube-proxy.kubeconfig

## Use the default context for the kube-proxy user
kubectl config use-context default --kubeconfig=$kubernetesDir/kube-proxy.kubeconfig

Creating the kubelet bootstrapping kubeconfig file and its cluster parameters

This creates the kubeconfig kubelet uses for the bootstrap flow, through which the apiserver later produces the node's kubeconfig and key pair automatically.
Once this file exists, starting kubelet on a node automatically creates three files; that is explained in the node deployment part.

## Set the kubelet cluster parameters
kubectl config set-cluster kubernetes \
    --certificate-authority=$kubernetesTLSDir/ca.pem \
    --embed-certs=true \
    --server=https://$MASTER_IP:6443 \
    --kubeconfig=$kubernetesDir/bootstrap.kubeconfig

## Set the kubelet-bootstrap user parameters
kubectl config set-credentials kubelet-bootstrap \
    --token=$BOOTSTRAP_TOKEN \
    --kubeconfig=$kubernetesDir/bootstrap.kubeconfig

## Set the kubelet user's default context in the kubernetes cluster
kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=$kubernetesDir/bootstrap.kubeconfig

## Use the default context for the kubelet user
kubectl config use-context default \
    --kubeconfig=$kubernetesDir/bootstrap.kubeconfig

## Create the RBAC role binding for kubelet
kubectl create --insecure-skip-tls-verify clusterrolebinding kubelet-bootstrap \
    --clusterrole=system:node-bootstrapper \
    --user=kubelet-bootstrap

## Notes on the last command:
# 1. --insecure-skip-tls-verify skips TLS verification while creating the kubelet-bootstrap binding
# 2. cluster role: system:node-bootstrapper
# 3. cluster user: kubelet-bootstrap

Scripting the kube-proxy kubeconfig and kubelet bootstrapping kubeconfig files

If you are reading this far and thinking that is a lot of commands and a lot of hassle, no problem; here is another piece of coffee-time code:

[root@server81 install_k8s_master]# cat configDir/conf/BOOTSTRAP_TOKEN 
4b395732894828d5a34737d83c334330
[root@server81 install_k8s_master]# 
[root@server81 install_k8s_master]# cat Step6_create_kubeconfig_file.sh 
#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
serviceDir=/usr/lib/systemd/system
binDir=/usr/bin
kubernetesDir=/etc/kubernetes
kubernetesTLSDir=/etc/kubernetes/kubernetesTLS

configdir=$basedir/configDir
configServiceDir=$configdir/service
configConfDir=$configdir/conf

## set param 
MASTER_IP=`python -c "import socket;print([(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1])"`
BOOTSTRAP_TOKEN=`cat $configConfDir/BOOTSTRAP_TOKEN`
#echo $BOOTSTRAP_TOKEN

## function and implements
# set proxy
function create_proxy_kubeconfig(){
    kubectl config set-cluster kubernetes \
    --certificate-authority=$kubernetesTLSDir/ca.pem \
    --embed-certs=true \
    --server=https://$MASTER_IP:6443 \
    --kubeconfig=$kubernetesDir/kube-proxy.kubeconfig
}
create_proxy_kubeconfig

function config_proxy_credentials(){
    kubectl config set-credentials kube-proxy \
    --client-certificate=$kubernetesTLSDir/proxy.pem \
    --client-key=$kubernetesTLSDir/proxy.key \
    --embed-certs=true \
    --kubeconfig=$kubernetesDir/kube-proxy.kubeconfig
}
config_proxy_credentials

function config_proxy_context(){
    kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=$kubernetesDir/kube-proxy.kubeconfig
}
config_proxy_context

function set_proxy_context(){
    kubectl config use-context default --kubeconfig=$kubernetesDir/kube-proxy.kubeconfig
}
set_proxy_context

## set bootstrapping
function create_kubelet_bootstrapping_kubeconfig(){
    kubectl config set-cluster kubernetes \
    --certificate-authority=$kubernetesTLSDir/ca.pem \
    --embed-certs=true \
    --server=https://$MASTER_IP:6443 \
    --kubeconfig=$kubernetesDir/bootstrap.kubeconfig
}
create_kubelet_bootstrapping_kubeconfig

function config_kubelet_bootstrapping_credentials(){
    kubectl config set-credentials kubelet-bootstrap \
    --token=$BOOTSTRAP_TOKEN \
    --kubeconfig=$kubernetesDir/bootstrap.kubeconfig
}
config_kubelet_bootstrapping_credentials

function config_kubernetes_bootstrap_kubeconfig(){
    kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=$kubernetesDir/bootstrap.kubeconfig
}
config_kubernetes_bootstrap_kubeconfig

function set_bootstrap_context(){
    kubectl config use-context default \
    --kubeconfig=$kubernetesDir/bootstrap.kubeconfig
}
set_bootstrap_context

## create rolebinding
function create_cluster_rolebinding(){
    kubectl create --insecure-skip-tls-verify clusterrolebinding kubelet-bootstrap \
    --clusterrole=system:node-bootstrapper \
    --user=kubelet-bootstrap
}
create_cluster_rolebinding
[root@server81 install_k8s_master]# 

Running it:

[root@server81 install_k8s_master]# ./Step6_create_kubeconfig_file.sh 
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
[root@server81 install_k8s_master]# 
[root@server81 install_k8s_master]# ls /etc/kubernetes/
apiserver              config                 kube-proxy.kubeconfig  scheduler              
bootstrap.kubeconfig   controller-manager     kubernetesTLS/         token.csv              
[root@server81 install_k8s_master]# ls /etc/kubernetes/
apiserver  bootstrap.kubeconfig  config  controller-manager  kube-proxy.kubeconfig  kubernetesTLS  scheduler  token.csv
[root@server81 install_k8s_master]# 

The generated kube-proxy.kubeconfig looks like this:

[root@server81 install_k8s_master]# cat /etc/kubernetes/kube-proxy.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHVENDQWdHZ0F3SUJBZ0lKQVAxbEpzOTFHbG9wTUEwR0NTcUdTSWIzRFFFQkN3VUFNQ014RXpBUkJnTlYKQkFNTUNtdDFZbVZ5Ym1WMFpYTXhEREFLQmdOVkJBb01BMnM0Y3pBZUZ3MHhPREE0TVRreE5ESXhORFJhRncwMApOakF4TURReE5ESXhORFJhTUNNeEV6QVJCZ05WQkFNTUNtdDFZbVZ5Ym1WMFpYTXhEREFLQmdOVkJBb01BMnM0CmN6Q0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5BejJxOUNsVWozZmNTY20wVTYKWnhrTVFCVVJzSFpFeUpIbXhMWUR1RmNzbGlyUjZxZHFSbExjM3Z1SnlVSHB3dUF5QzZxYzlaZE52clNCUkhOegpxUVFSREVuUENMQXQ0ZFVkUjh2NnQvOVhKbnJ0Y0k3My94U0RKNno2eFh3K2MvTy95c0NET3pQNkFDcmE5cHlPCmJpQ1ZRSEJ4eEI3bGxuM0ErUEFaRWEzOHZSNmhTSklzRndxVjAwKy9iNSt5K3FvVVdtNWFtcS83OWNIM2Zwd0kKNnRmUlZIeHAweXBKNi9TckYyZWVWVU1KVlJxZWtiNjBuZkJRUUNEZ2YyL3lSOGNxVDZlV3VDdmZnVEdCV01QSQpPSjVVM1VxekNMVGNpNHpDSFhaTUlra25EWVFuNFR6Qm05MitzTGhXMlpFZk5DOUxycFZYWHpzTm45alFzeTA3ClliOENBd0VBQWFOUU1FNHdIUVlEVlIwT0JCWUVGRWQ0bUxtN292MFdxL2FUTVJLUnlaaVVMOTFNTUI4R0ExVWQKSXdRWU1CYUFGRWQ0bUxtN292MFdxL2FUTVJLUnlaaVVMOTFNTUF3R0ExVWRFd1FGTUFNQkFmOHdEUVlKS29aSQpodmNOQVFFTEJRQURnZ0VCQUtNVGJXcng5WXJmSXByY3RHMThTanJCZHVTYkhLL05FRGcySHNCb1BrU2YwbE1TCmdGTnNzOGZURlliKzY3UWhmTnA1MjBodnk3M3JKU29OVkJweWpBWDR1SnRjVG9aZDdCZVhyUHdNVWVjNXRjQWoKSFdvY1dKaXNpck0vdFV4cUxLekdRdnFhVDhmQy9UUW5kTGUxTkJ0cEFQbjM5RzE5VFVialMvUTlKVE1qZVdMWAo0dU5MVExGUVUrYTAwTWMrMGVSWjdFYUVRSks2U0h1OUNuSEtNZnhIVC81UTdvbXBrZlBtTTZLT0VOVndaK0Q5Clh0ZzlIUmlrampFMGtsNHB3TmlHRnZQYVhuY0V5RDlwVW5vdWI0RGc2UHJ1MU9zTjYxakwyd2VneVY4WU1nUVEKWEdkVTIveExMcEh2cVlPVDNRay9mNWw5MHpackQvYm5vZGhxNS84PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://172.16.5.81:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN1ekNDQWFNQ0NRRFZDSG9rSldveEdEQU5CZ2txaGtpRzl3MEJBUXNGQURBak1STXdFUVlEVlFRRERBcHIKZFdKbGNtNWxkR1Z6TVF3d0NnWURWUVFLREFOck9ITXdIaGNOTVRnd09ERTVNVFF5TVRRMFdoY05Namd3T0RFMgpNVFF5TVRRMFdqQWNNUm93R0FZRFZRUUREQkZ6ZVhOMFpXMDZhM1ZpWlMxd2NtOTRlVENDQVNJd0RRWUpLb1pJCmh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWpOVitwVGVFU2d6di9rcDZvQ3Z2T3NoUXFYS0t3RWFrTWEKcDRvNEdoZUZySzVUbW53eTc4YWpJdHM4b0Nyb3l2Q1lVR2VVcVJqaG1xSUdRWWJxWVFPTy9NZ21pZmdFMVFlego3RzNYKzJsQ25qRThOVnZBd011QXpYU0w4L3dkU1NEUTZDdGdvUkVCcFhTQUJWYStaMldXVy9VSm53ZFlFWHlGClh2N3ZERWRJZG1pUWNjWEtMcHRuMWFzV25nek1aVG9EMDVjMWxQSTlZZ1ZqMFVsNldWMkVMdHhxdGVqdXJHT2kKN3R0K3hRanY0ckdQZ01udTNqOEF1QTNLZXpSUFJ0TVA1RkF6SHZ4WVQ3RU0rRzVmU2JGWFY0ZVVMb0czS3pzWQo3eitDYlF1bnYyNmhXMFM5dWtZT0lNWnA4eVJtcHJ6cGxSVnh5d0dJUUw2ajhqdndkcXNDQXdFQUFUQU5CZ2txCmhraUc5dzBCQVFzRkFBT0NBUUVBQmNUazU0TUY5YnNpaDZaVXJiakh0MmFXR3VaTzZBODlZa3ZUL21VcTRoTHUKd2lUcHRKZWNJWEh5RkZYemVCSDJkUGZIZ1lldEMrQTJGS0dsZFJ1SHJuUW1iTWFkdjN6bGNjbEl2ald6dU1GUQpnenhUQUJ0dGVNYkYvL2M5cE9TL2ZmQS9OcVV0akVEUzlJVXZUTDdjUEs3Z0dMSzRrQWY2N2hPTERLb1NGT2ZjCnp0bEpXWkhPaEpGRjM0bkQySytXMmZzb0g4WFdTeDd1N3FmSHFFRkFNOW5BRjRyQjNZdUFHKzdIOUxMbmVaK1IKbHBTeThLNzBVZUdUVFpFdW5yMzJwMmJEZWxQN0tCTWsvbmUxV01PbzRnL01QUUhOTm5XZHlNeFJ6bHBOeTBregpOekVydVlhbHpINDVTVHIrNytCMkNhcS9sWDFTSWpENXBYVDhZMXRtSFE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBeU0xWDZsTjRSS0RPLytTbnFnSys4NnlGQ3Bjb3JBUnFReHFuaWpnYUY0V3NybE9hCmZETHZ4cU1pMnp5Z0t1aks4SmhRWjVTcEdPR2FvZ1pCaHVwaEE0Nzh5Q2FKK0FUVkI3UHNiZGY3YVVLZU1UdzEKVzhEQXk0RE5kSXZ6L0IxSklORG9LMkNoRVFHbGRJQUZWcjVuWlpaYjlRbWZCMWdSZklWZS91OE1SMGgyYUpCeAp4Y291bTJmVnF4YWVETXhsT2dQVGx6V1U4ajFpQldQUlNYcFpYWVF1M0dxMTZPNnNZNkx1MjM3RkNPL2lzWStBCnllN2VQd0M0RGNwN05FOUcwdy9rVURNZS9GaFBzUXo0Ymw5SnNWZFhoNVF1Z2Jjck94anZQNEp0QzZlL2JxRmIKUkwyNlJnNGd4bW56SkdhbXZPbVZGWEhMQVloQXZxUHlPL0IycXdJREFRQUJBb0lCQVFDeU5KcmJXT3laYTJXSgo4REZrVGorTkhnU01XNDQ2NjBncStaTEt0Zk5pQUw0NWovVEFXS3czU3p4NStSbmtPdWt3RU56NnNCSktCSjRwClFRZ1NaaHRtL3hVVHhEQVpycUFveitMNXNQNXNjalRXV1NxNW5SejgvZmhZZ0lRdHNRZmZXY2RTQjlXcHRCNVUKZi9FOUJJbmF2RkFyN1RmM1dvOWFSVHNEWUw4eTJtVjJrakNpMkd4S3U4K3BQWXN3ZUIrbGZjc1QyNlB3ODBsRgpXTmZVODRzdDE1SjBCNitRSmhEQnNDb3NpbGxrcFZnaDhPMzVNNmE3WjZlL3IrZnZuYjcycXd2MkdGQm0rNEpmCmRydVJtTHRLdHUxVGhzUGQ4YkQ2MXpTblMrSXoyUGxGWnk0RkY3cFhWU2RwbjVlSm00dkJMM3NOem9HWGlGUmIKOTAydFo5d1JBb0dCQVB6ZXZEZWhEYVBiZ1FLTU5hMFBzN2dlNDZIUkF6Rzl4RDh2RXk4dEVXcVVVY2c3Mndqawp6MGFvLzZvRkFDM0tkM3VkUmZXdmhrV2RrcE9CMXIzMml6Y29Ka3lOQmxDc2YxSDF2dVJDb0gwNTZwM3VCa3dHCjFsZjFWeDV0cjVHMU5laXdzQjdsTklDa2pPNTg2b3F6M3NNWmZMcHM1ZlMxeVZFUExrVmErL2N0QW9HQkFNdEoKbnhpQXNCMnZKaXRaTTdrTjZjTzJ1S0lwNHp0WjZDMFhBZmtuNnd5Zk9zd3lyRHdNUnA2Yk56OTNCZzk0azE4aQpIdlJ3YzJPVVBkeXVrU2YyVGZVbXN6L0h1OWY0emRCdFdYM2lkOE50b29MYUd6RnVVN3hObVlrUWJaL2Y1ZmpNCmtpZzlVZVJYdng5THJTa3RDdEdyRWMvK0JubHNrRk1xc2IrZ1FVdzNBb0dCQUs0SzA3cnFFNHhMQVNGeXhXTG0KNHNpQUlpWjJ5RjhOQUt5SVJ3ajZXUGxsT21DNXFja1dTditVUTl1T2M1QVF3V29JVm1XQ09NVmpiY1l1NEZHQgpCbEtoUkxMOWdYSTNONjUrbUxOY2xEOThoRm5Nd1BMRTVmUkdQWDhJK1lVdEZ2eWYxNmg4RTBYVGU5aU5pNVNKCnRuSEw4Z2dSK2JnVEFvdlRDZ0xjVzMzRkFvR0FSZWFYelM0YTRPb2ovczNhYWl4dGtEMlpPVEdjRUFGM1EySGcKN05LY0VTZ0RhTW1YemNJTzJtVFcxM3pPMmEwRlI3WU0zTko1NnVqRGFNbWg0aExnZFlhTUprZEF3Uit0YlpqYwpKOXdpZ0ZHSGl1VUNhcm5jRXlpL3ZaQ25rVXpFNEFzL3lwUmpQMWdvd05NZHhNWFhMWWRjUlorOGpDNFhabkdNCjB5NkFwWHNDZ1lFQXh6aUkyK2tUekNJcENnOGh3WXdiQ21sTVBaM3RBNXRLRHhKZmNjdWpXSExHVkNnMVd6QTAKdHZuUmxJbnZxdzFXOWtsSGlHTlhmTUpqczhpeXk5WUl4S0NKeTdhUU85WXZ1SVR6OC9PMHVCRURlQ1gvOHFDTwpzRGJ0eHpsa3A2NVdaYTFmR2FLRWVwcHFtWUU2NUdiZk91eHNxRENDSG1WWXcvZmR0M2NnMjI0PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
[root@server81 install_k8s_master]# 

And the generated bootstrap.kubeconfig:

[root@server81 install_k8s_master]# cat /etc/kubernetes/bootstrap.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHVENDQWdHZ0F3SUJBZ0lKQVAxbEpzOTFHbG9wTUEwR0NTcUdTSWIzRFFFQkN3VUFNQ014RXpBUkJnTlYKQkFNTUNtdDFZbVZ5Ym1WMFpYTXhEREFLQmdOVkJBb01BMnM0Y3pBZUZ3MHhPREE0TVRreE5ESXhORFJhRncwMApOakF4TURReE5ESXhORFJhTUNNeEV6QVJCZ05WQkFNTUNtdDFZbVZ5Ym1WMFpYTXhEREFLQmdOVkJBb01BMnM0CmN6Q0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5BejJxOUNsVWozZmNTY20wVTYKWnhrTVFCVVJzSFpFeUpIbXhMWUR1RmNzbGlyUjZxZHFSbExjM3Z1SnlVSHB3dUF5QzZxYzlaZE52clNCUkhOegpxUVFSREVuUENMQXQ0ZFVkUjh2NnQvOVhKbnJ0Y0k3My94U0RKNno2eFh3K2MvTy95c0NET3pQNkFDcmE5cHlPCmJpQ1ZRSEJ4eEI3bGxuM0ErUEFaRWEzOHZSNmhTSklzRndxVjAwKy9iNSt5K3FvVVdtNWFtcS83OWNIM2Zwd0kKNnRmUlZIeHAweXBKNi9TckYyZWVWVU1KVlJxZWtiNjBuZkJRUUNEZ2YyL3lSOGNxVDZlV3VDdmZnVEdCV01QSQpPSjVVM1VxekNMVGNpNHpDSFhaTUlra25EWVFuNFR6Qm05MitzTGhXMlpFZk5DOUxycFZYWHpzTm45alFzeTA3ClliOENBd0VBQWFOUU1FNHdIUVlEVlIwT0JCWUVGRWQ0bUxtN292MFdxL2FUTVJLUnlaaVVMOTFNTUI4R0ExVWQKSXdRWU1CYUFGRWQ0bUxtN292MFdxL2FUTVJLUnlaaVVMOTFNTUF3R0ExVWRFd1FGTUFNQkFmOHdEUVlKS29aSQpodmNOQVFFTEJRQURnZ0VCQUtNVGJXcng5WXJmSXByY3RHMThTanJCZHVTYkhLL05FRGcySHNCb1BrU2YwbE1TCmdGTnNzOGZURlliKzY3UWhmTnA1MjBodnk3M3JKU29OVkJweWpBWDR1SnRjVG9aZDdCZVhyUHdNVWVjNXRjQWoKSFdvY1dKaXNpck0vdFV4cUxLekdRdnFhVDhmQy9UUW5kTGUxTkJ0cEFQbjM5RzE5VFVialMvUTlKVE1qZVdMWAo0dU5MVExGUVUrYTAwTWMrMGVSWjdFYUVRSks2U0h1OUNuSEtNZnhIVC81UTdvbXBrZlBtTTZLT0VOVndaK0Q5Clh0ZzlIUmlrampFMGtsNHB3TmlHRnZQYVhuY0V5RDlwVW5vdWI0RGc2UHJ1MU9zTjYxakwyd2VneVY4WU1nUVEKWEdkVTIveExMcEh2cVlPVDNRay9mNWw5MHpackQvYm5vZGhxNS84PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://172.16.5.81:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: 4b395732894828d5a34737d83c334330
[root@server81 install_k8s_master]# 

Wrapping up the master deployment

Check the master components and cluster status

[root@server81 install_k8s_master]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
[root@server81 install_k8s_master]# 
[root@server81 install_k8s_master]# kubectl cluster-info
Kubernetes master is running at https://172.16.5.81:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@server81 install_k8s_master]# 

Confirm the files the master must later copy to the nodes

When deploying the nodes, kube-proxy and kubelet need the certificates and kubeconfig files generated above, listed here:

[root@server81 install_k8s_master]# tree /etc/kubernetes/
/etc/kubernetes/
├── apiserver
├── bootstrap.kubeconfig
├── config
├── controller-manager
├── kube-proxy.kubeconfig
├── kubernetesTLS
│   ├── admin.key
│   ├── admin.pem
│   ├── apiserver.key
│   ├── apiserver.pem
│   ├── ca.key
│   ├── ca.pem
│   ├── proxy.key
│   └── proxy.pem
├── scheduler
└── token.csv

1 directory, 15 files
[root@server81 install_k8s_master]# 

The apiserver, controller-manager, and scheduler configuration files do not need to be copied to the node servers, but I am lazy and simply copy the whole directory over (a minimal alternative is sketched below).
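If you would rather copy only what the nodes actually need, here is a minimal sketch (the kubeconfigs embed the proxy client certificates, so only the CA needs to travel separately; hostnames as in this article):

for node in server86 server87; do
    ssh root@$node "mkdir -p /etc/kubernetes/kubernetesTLS"
    scp /etc/kubernetes/bootstrap.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
        /etc/kubernetes/token.csv root@$node:/etc/kubernetes/
    scp /etc/kubernetes/kubernetesTLS/ca.pem root@$node:/etc/kubernetes/kubernetesTLS/
done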

Good; that covers deploying the master and everything you need to know about its certificates. Next we switch to the node deployment.



Deploying the node services

With the steps above finished we can deploy the node services. Before doing so, calmly copy the TLS files and kubeconfig files created during the master deployment onto every node.

Node server topology


A lot of ground has been covered above, and scrolling back to the topology is a hassle, so here it is again before we deploy the node services.

1. The three-node etcd cluster service was deployed in an earlier chapter.
2. The master services are deployed on server81.
3. The next step is to deploy the node services on Server81, 86, and 87.

Let's get started deploying the node services.


Copy the master-created TLS and kubeconfig files to the node services

Server81 is itself the master node, so it needs no copying.
Server86 and 87 do; the commands are:

[root@server81 etc]# scp -r kubernetes root@server86:/etc
ca.pem                                                                                               100% 1135   243.9KB/s   00:00    
ca.key                                                                                               100% 1679   383.9KB/s   00:00    
apiserver.pem                                                                                        100% 1302   342.6KB/s   00:00    
apiserver.key                                                                                        100% 1675   378.4KB/s   00:00    
admin.pem                                                                                            100% 1050   250.3KB/s   00:00    
admin.key                                                                                            100% 1675   401.5KB/s   00:00    
proxy.pem                                                                                            100% 1009   253.2KB/s   00:00    
proxy.key                                                                                            100% 1679    74.5KB/s   00:00    
token.csv                                                                                            100%   84     4.5KB/s   00:00    
config                                                                                               100%  656    45.9KB/s   00:00    
apiserver                                                                                            100% 1656   484.7KB/s   00:00    
controller-manager                                                                                   100%  615   163.8KB/s   00:00    
scheduler                                                                                            100%  243    10.9KB/s   00:00    
kube-proxy.kubeconfig                                                                                100% 5451   335.3KB/s   00:00    
bootstrap.kubeconfig                                                                                 100% 1869   468.9KB/s   00:00    
[root@server81 etc]# 
[root@server81 etc]# scp -r kubernetes root@server87:/etc
ca.pem                                                                                               100% 1135   373.4KB/s   00:00    
ca.key                                                                                               100% 1679   470.8KB/s   00:00    
apiserver.pem                                                                                        100% 1302   511.5KB/s   00:00    
apiserver.key                                                                                        100% 1675   565.6KB/s   00:00    
admin.pem                                                                                            100% 1050   340.2KB/s   00:00    
admin.key                                                                                            100% 1675   468.4KB/s   00:00    
proxy.pem                                                                                            100% 1009   247.8KB/s   00:00    
proxy.key                                                                                            100% 1679   516.4KB/s   00:00    
token.csv                                                                                            100%   84    30.2KB/s   00:00    
config                                                                                               100%  656   217.0KB/s   00:00    
apiserver                                                                                            100% 1656   415.7KB/s   00:00    
controller-manager                                                                                   100%  615   240.0KB/s   00:00    
scheduler                                                                                            100%  243    92.1KB/s   00:00    
kube-proxy.kubeconfig                                                                                100% 5451     1.3MB/s   00:00    
bootstrap.kubeconfig                                                                                 100% 1869   614.0KB/s   00:00    
[root@server81 etc]# 

Check the copied files on Server86:

[root@server86 etc]# pwd
/etc
[root@server86 etc]# 
[root@server86 etc]# tree kubernetes/
kubernetes/
├── apiserver
├── bootstrap.kubeconfig
├── config
├── controller-manager
├── kube-proxy.kubeconfig
├── kubernetesTLS
│   ├── admin.key
│   ├── admin.pem
│   ├── apiserver.key
│   ├── apiserver.pem
│   ├── ca.key
│   ├── ca.pem
│   ├── proxy.key
│   └── proxy.pem
├── scheduler
└── token.csv

1 directory, 15 files
[root@server86 etc]# 
[root@server86 etc]# cd kubernetes/
[root@server86 kubernetes]# ls
apiserver  bootstrap.kubeconfig  config  controller-manager  kube-proxy.kubeconfig  kubernetesTLS  scheduler  token.csv
[root@server86 kubernetes]# ls kubernetesTLS/
admin.key  admin.pem  apiserver.key  apiserver.pem  ca.key  ca.pem  proxy.key  proxy.pem
[root@server86 kubernetes]# 

Check the copied files on Server87:

[root@server87 ~]# cd /etc/
[root@server87 etc]# pwd
/etc
[root@server87 etc]# tree kubernetes/
kubernetes/
├── apiserver
├── bootstrap.kubeconfig
├── config
├── controller-manager
├── kube-proxy.kubeconfig
├── kubernetesTLS
│   ├── admin.key
│   ├── admin.pem
│   ├── apiserver.key
│   ├── apiserver.pem
│   ├── ca.key
│   ├── ca.pem
│   ├── proxy.key
│   └── proxy.pem
├── scheduler
└── token.csv

1 directory, 15 files
[root@server87 etc]# cd kubernetes/
[root@server87 kubernetes]# ls
apiserver  bootstrap.kubeconfig  config  controller-manager  kube-proxy.kubeconfig  kubernetesTLS  scheduler  token.csv
[root@server87 kubernetes]# 
[root@server87 kubernetes]# ls kubernetesTLS/
admin.key  admin.pem  apiserver.key  apiserver.pem  ca.key  ca.pem  proxy.key  proxy.pem
[root@server87 kubernetes]# 

Copy the TLS certificates used to access the etcd cluster

  • Every Node needs to access the etcd cluster: when Calico or flanneld networking is deployed later, certificates are required for etcd access. That part is covered in the network deployment section.
  • Since Server81, 86 and 87 happen to be the three servers of my etcd cluster, the certificate directories were already copied into place when etcd was deployed.
  • However, if a new server is added as a Node, the certificates must be copied to the corresponding directory on that server separately; a minimal sketch follows below.
    Here is where the etcd cluster's TLS certificate files should live on a Node.
    As mentioned during the etcd deployment, the directory itself is customizable; it just has to be the same path on every Node.
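
For a brand-new Node, the certificate copy might look like this minimal sketch (NEW_NODE is a hypothetical placeholder, and I assume the certs live on server81 under /etc/etcd/etcdSSL as shown below):

#!/bin/bash
# Sketch: seed the etcd TLS files onto a hypothetical new Node.
NEW_NODE=172.16.5.88                      # placeholder IP for the new server
ssh root@${NEW_NODE} "mkdir -p /etc/etcd/etcdSSL"
# ca.pem, etcd.pem and etcd-key.pem are what etcd clients such as Calico need:
scp /etc/etcd/etcdSSL/{ca.pem,etcd.pem,etcd-key.pem} root@${NEW_NODE}:/etc/etcd/etcdSSL/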

Path of the etcd TLS files on Server81 (/etc/etcd/etcdSSL)

[root@server81 etc]# cd etcd/
[root@server81 etcd]# ls
etcd.conf  etcdSSL
[root@server81 etcd]# 
[root@server81 etcd]# cd etcdSSL/
[root@server81 etcdSSL]# 
[root@server81 etcdSSL]# pwd
/etc/etcd/etcdSSL
[root@server81 etcdSSL]# 
[root@server81 etcdSSL]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem
[root@server81 etcdSSL]# 
[root@server81 etcdSSL]# ls -ll
total 36
-rw-r--r-- 1 root root  288 Aug 14 14:05 ca-config.json
-rw-r--r-- 1 root root  997 Aug 14 14:05 ca.csr
-rw-r--r-- 1 root root  205 Aug 14 14:05 ca-csr.json
-rw------- 1 root root 1675 Aug 14 14:05 ca-key.pem
-rw-r--r-- 1 root root 1350 Aug 14 14:05 ca.pem
-rw-r--r-- 1 root root 1066 Aug 14 14:05 etcd.csr
-rw-r--r-- 1 root root  296 Aug 14 14:05 etcd-csr.json
-rw------- 1 root root 1675 Aug 14 14:05 etcd-key.pem
-rw-r--r-- 1 root root 1436 Aug 14 14:05 etcd.pem
[root@server81 etcdSSL]# 

Path of the etcd TLS files on Server86 (/etc/etcd/etcdSSL)

[root@server86 etcd]# cd etcdSSL/
[root@server86 etcdSSL]# 
[root@server86 etcdSSL]# pwd
/etc/etcd/etcdSSL
[root@server86 etcdSSL]# ls -ll
total 36
-rw-r--r-- 1 root root  288 Aug 14 16:42 ca-config.json
-rw-r--r-- 1 root root  997 Aug 14 16:42 ca.csr
-rw-r--r-- 1 root root  205 Aug 14 16:42 ca-csr.json
-rw------- 1 root root 1675 Aug 14 16:42 ca-key.pem
-rw-r--r-- 1 root root 1350 Aug 14 16:42 ca.pem
-rw-r--r-- 1 root root 1066 Aug 14 16:42 etcd.csr
-rw-r--r-- 1 root root  296 Aug 14 16:42 etcd-csr.json
-rw------- 1 root root 1675 Aug 14 16:42 etcd-key.pem
-rw-r--r-- 1 root root 1436 Aug 14 16:42 etcd.pem
[root@server86 etcdSSL]# 

Path of the etcd TLS files on Server87 (/etc/etcd/etcdSSL)

[root@server87 etcd]# cd etcdSSL/
[root@server87 etcdSSL]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem
[root@server87 etcdSSL]# 
[root@server87 etcdSSL]# pwd
/etc/etcd/etcdSSL
[root@server87 etcdSSL]# 
[root@server87 etcdSSL]# ls -ll
total 36
-rw-r--r-- 1 root root  288 Aug 14 16:52 ca-config.json
-rw-r--r-- 1 root root  997 Aug 14 16:52 ca.csr
-rw-r--r-- 1 root root  205 Aug 14 16:52 ca-csr.json
-rw------- 1 root root 1675 Aug 14 16:52 ca-key.pem
-rw-r--r-- 1 root root 1350 Aug 14 16:52 ca.pem
-rw-r--r-- 1 root root 1066 Aug 14 16:52 etcd.csr
-rw-r--r-- 1 root root  296 Aug 14 16:52 etcd-csr.json
-rw------- 1 root root 1675 Aug 14 16:52 etcd-key.pem
-rw-r--r-- 1 root root 1436 Aug 14 16:52 etcd.pem
[root@server87 etcdSSL]# 

Node deployment steps

  • Deploy docker-ce (with plain docker you would have to enable the cgroup parameter; docker-ce does not need it)
  • Deploy the kubelet service
  • Deploy the kube-proxy service

Every Node needs these three services. I will walk through a single node, Server81, first; the deployment on Server86 and 87 is identical.


Deploy docker-ce

If you are not familiar with installing Docker, see the official deployment documentation (reaching the site may require a proxy).

13423234-800b034e476c0159.png


1. Download the docker-ce rpm package

Click here to download the docker-ce rpm package.

13423234-68cf69c73e13898d.png


2. Install docker-ce

yum install docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm -y

The installation runs as follows:

[root@server81 docker]# ls
certs.d      docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm  docker.service.simple  install_docker-ce.sh  set_docker_network.sh
daemon.json  docker.service                                erase_docker-ce.sh     login_registry.sh     test.sh
[root@server81 docker]# 
[root@server81 docker]# yum install docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm -y
Loaded plugins: fastestmirror
Examining docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm: docker-ce-18.03.0.ce-1.el7.centos.x86_64
Marking docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 0:18.03.0.ce-1.el7.centos will be installed
--> Processing Dependency: container-selinux >= 2.9 for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
Loading mirror speeds from cached hostfile
......
Installed:
  docker-ce.x86_64 0:18.03.0.ce-1.el7.centos

Dependency Installed:
  audit-libs-python.x86_64 0:2.8.1-3.el7     checkpolicy.x86_64 0:2.5-6.el7                 container-selinux.noarch 2:2.66-1.el7
  libcgroup.x86_64 0:0.41-15.el7             libseccomp.x86_64 0:2.3.1-3.el7                libsemanage-python.x86_64 0:2.5-11.el7
  pigz.x86_64 0:2.3.4-1.el7                  policycoreutils-python.x86_64 0:2.5-22.el7     python-IPy.noarch 0:0.75-6.el7
  setools-libs.x86_64 0:3.3.8-2.el7

Dependency Updated:
  audit.x86_64 0:2.8.1-3.el7                 audit-libs.x86_64 0:2.8.1-3.el7                libselinux.x86_64 0:2.5-12.el7
  libselinux-python.x86_64 0:2.5-12.el7      libselinux-utils.x86_64 0:2.5-12.el7           libsemanage.x86_64 0:2.5-11.el7
  libsepol.x86_64 0:2.5-8.1.el7              policycoreutils.x86_64 0:2.5-22.el7            selinux-policy.noarch 0:3.13.1-192.el7_5.4
  selinux-policy-targeted.noarch 0:3.13.1-192.el7_5.4

Complete!
[root@server81 docker]# 

3. Enable and start docker-ce

systemctl daemon-reload
systemctl enable docker
systemctl restart docker
systemctl status docker

Execution output:

[root@server81 install_k8s_node]# systemctl daemon-reload
[root@server81 install_k8s_node]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@server81 install_k8s_node]# systemctl restart docker
[root@server81 install_k8s_node]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-08-20 14:11:17 HKT; 639ms ago
     Docs: https://docs.docker.com
 Main PID: 3014 (dockerd)
   Memory: 36.4M
   CGroup: /system.slice/docker.service
           ├─3014 /usr/bin/dockerd
           └─3021 docker-containerd --config /var/run/docker/containerd/containerd.toml

Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17+08:00" level=info msg=serving... address="/var/run/docker/c...d/grpc"
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17+08:00" level=info msg="containerd successfully booted in 0....tainerd
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.492174891+08:00" level=info msg="Graph migration to content...econds"
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.493087053+08:00" level=info msg="Loading containers: start."
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.608563905+08:00" level=info msg="Default bridge (docker0) i...ddress"
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.645395453+08:00" level=info msg="Loading containers: done."
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.659457843+08:00" level=info msg="Docker daemon" commit=0520...03.0-ce
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.659619134+08:00" level=info msg="Daemon has completed initialization"
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.669961967+08:00" level=info msg="API listen on /var/run/docker.sock"
Aug 20 14:11:17 server81 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
[root@server81 install_k8s_node]# 
[root@server81 install_k8s_node]# docker version
Client:
 Version:      18.03.0-ce
 API version:  1.37
 Go version:   go1.9.4
 Git commit:   0520e24
 Built:        Wed Mar 21 23:09:15 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.03.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.4
  Git commit:   0520e24
  Built:        Wed Mar 21 23:13:03 2018
  OS/Arch:      linux/amd64
  Experimental: false
[root@server81 install_k8s_node]# 
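
One detail worth checking before moving on: the pause image used later is served from the plain-HTTP registry 172.16.5.81:5000, so docker must be told to trust it. A minimal sketch of /etc/docker/daemon.json, assuming no other keys are needed (the daemon.json prepared in the listing above may already cover this):

# Sketch: mark the private HTTP registry as insecure, then restart docker.
cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["172.16.5.81:5000"]
}
EOF
systemctl restart docker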

Copy the binaries to the Node server (/usr/bin)

#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
softwareDir=$basedir/../install_kubernetes_software
binDir=/usr/bin

## functions and implementation
function check_firewalld_selinux(){
    systemctl status firewalld
    /usr/sbin/sestatus -v
    swapoff -a
}
check_firewalld_selinux

function copy_bin(){
    cp -v $softwareDir/kubectl $binDir
    cp -v $softwareDir/kubelet $binDir
    cp -v $softwareDir/kube-proxy $binDir
}
copy_bin

Execution output:

[root@server81 install_k8s_node]# ./Step1_config.sh 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
SELinux status:                 disabled
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kubectl’ -> ‘/usr/bin/kubectl’
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kubelet’ -> ‘/usr/bin/kubelet’
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kube-proxy’ -> ‘/usr/bin/kube-proxy’
[root@server81 install_k8s_node]# 
[root@server81 install_k8s_node]# ls -ll /usr/bin/kube*
-rwxr-xr-x 1 root root 185471375 Aug 19 22:57 /usr/bin/kube-apiserver
-rwxr-xr-x 1 root root 154056749 Aug 19 22:57 /usr/bin/kube-controller-manager
-rwxr-xr-x 1 root root  55421261 Aug 20 14:14 /usr/bin/kubectl
-rwxr-xr-x 1 root root 162998216 Aug 20 14:14 /usr/bin/kubelet
-rwxr-xr-x 1 root root  52055519 Aug 20 14:14 /usr/bin/kube-proxy
-rwxr-xr-x 1 root root  55610654 Aug 19 22:57 /usr/bin/kube-scheduler
[root@server81 install_k8s_node]# 

First, disable the swap partition, the firewall, and SELinux on each Node server, then copy the binaries into /usr/bin.
Now deploy the Node's kubelet and kube-proxy services.


部署kubelet服务

1. Write the kubelet.service file (/usr/lib/systemd/system)

Write kubelet.service into the /usr/lib/systemd/system directory:

[root@server81 install_k8s_node]# cat /usr/lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_CONFIG \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@server81 install_k8s_node]# 

kubelet.service parameter notes

EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet

Note: these lines make kubelet read two configuration files, config and kubelet. The config file was already written during the master deployment and is shared by all components; the kubelet-specific configuration file is written separately below.


ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_CONFIG \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS

Note: this defines the binary the service runs (/usr/bin/kubelet) together with the startup parameters, which are read from the configuration files above.


The kubelet configuration file (/etc/kubernetes)

Write the kubelet configuration file into the /etc/kubernetes/ directory:

[root@server81 install_k8s_node]# cat /etc/kubernetes/
apiserver              config                 kubelet                kubernetesTLS/         token.csv              
bootstrap.kubeconfig   controller-manager     kube-proxy.kubeconfig  scheduler              
[root@server81 install_k8s_node]# cat /etc/kubernetes/kubelet 
###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
#KUBELET_ADDRESS="--address=0.0.0.0"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=172.16.5.81"
#
## location of the api-server
KUBELET_CONFIG="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=172.16.5.81:5000/pause-amd64:3.1"
#
## Add your own!
KUBELET_ARGS="--cluster-dns=10.0.6.200  --serialize-image-pulls=false  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig  --cert-dir=/etc/kubernetes/kubernetesTLS  --cluster-domain=cluster.local.  --hairpin-mode promiscuous-bridge  --network-plugin=cni"[root@server81 install_k8s_node]# 

Notes on the kubelet configuration parameters

## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=172.16.5.81"

Note: this sets the Node's name; I override it with the server's IP address. When deploying on Server86 or Server87, change the IP accordingly.
After deployment, run kubectl get node and you will see the node names you defined here.
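
Rather than editing the file by hand on every Node, the override can be stamped in with sed; a sketch, assuming the file was first copied from server81 and that the NIC name is known:

# Sketch: replace the master's IP in the copied kubelet config with this Node's own IP.
NODE_IP=$(ip -4 addr show ens33 | awk '/inet /{sub(/\/.*/,"",$2); print $2}')   # the NIC name ens33 is an assumption
sed -i "s/--hostname-override=172.16.5.81/--hostname-override=${NODE_IP}/" /etc/kubernetes/kubelet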


## location of the api-server
KUBELET_CONFIG="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig"

Note: this sets the path of the kubelet kubeconfig file, which was created earlier during the master deployment.


## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=172.16.5.81:5000/pause-amd64:3.1"

Notes:

  • When creating workloads, kubelet depends on the pause image; without it, containers fail to start.
  • Every Node therefore needs the pause image. By default it is pulled from the official registry, which may require a proxy and slows container startup, so I pushed the pause image into my private registry for fast intranet pulls.
    The private address of the pause image here: 172.16.5.81:5000/pause-amd64:3.1
  • Readers can pull the pause image from the address below and set up their own private registry (the mirror is provided by another blogger, to whom thanks are due). That author's kubernetes deployment is a minimal setup without RBAC enabled; it is worth a read if you are interested. A consolidated sketch of seeding a private registry follows after this list.
docker pull  mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
  • If you can reach the official registry directly, just pull it from there:
docker pull   k8s.gcr.io/pause-amd64:3.1
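
Putting the pieces together, a sketch of seeding the pause image into the private registry (this assumes a standard registry service is already listening on 172.16.5.81:5000, as in this setup):

# Sketch: pull the mirrored pause image, retag it for the private registry, and push it.
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 172.16.5.81:5000/pause-amd64:3.1
docker push 172.16.5.81:5000/pause-amd64:3.1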

## Add your own!
KUBELET_ARGS="--cluster-dns=10.0.6.200  \
--serialize-image-pulls=false \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig  \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--cert-dir=/etc/kubernetes/kubernetesTLS \
--cluster-domain=cluster.local. \
--hairpin-mode promiscuous-bridge \
--network-plugin=cni"
Parameter notes (a quick way to verify these flags took effect follows below):

  • --cluster-dns=10.0.6.200: the cluster-internal DNS service IP, used later by CoreDNS.
  • --serialize-image-pulls=false: pull images in parallel instead of one at a time, which speeds up Node startup.
  • --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig: the path of the bootstrap.kubeconfig file.
  • --cert-dir=/etc/kubernetes/kubernetesTLS: the TLS directory; once the kubelet service starts, it automatically creates the kubelet keypair files in this folder.
  • --cluster-domain=cluster.local.: the DNS domain of the kubernetes cluster.
  • --hairpin-mode promiscuous-bridge: the hairpin mode for the pod bridge network.
  • --network-plugin=cni: enable the CNI network plugin; required because Calico is used later.
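
A quick sanity check once kubelet is running (a sketch; it only inspects what is already on the box):

# Sketch: confirm the running kubelet picked up the flags from /etc/kubernetes/kubelet.
ps -o args= -C kubelet | tr ' ' '\n' | grep -E 'cluster-dns|cluster-domain|network-plugin'
# The --cert-dir output shows up here after the bootstrap completes:
ls -l /etc/kubernetes/kubernetesTLS/ | grep kubelet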

For a more detailed look at kubelet's configuration flags, see the official documentation (click here).


Start the kubelet service

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

Execution output:

[root@server81 kubernetesTLS]# ls -ll
total 32
-rw-r--r-- 1 root root 1675 Aug 19 22:21 admin.key
-rw-r--r-- 1 root root 1050 Aug 19 22:21 admin.pem
-rw-r--r-- 1 root root 1675 Aug 19 22:21 apiserver.key
-rw-r--r-- 1 root root 1302 Aug 19 22:21 apiserver.pem
-rw-r--r-- 1 root root 1679 Aug 19 22:21 ca.key
-rw-r--r-- 1 root root 1135 Aug 19 22:21 ca.pem
-rw-r--r-- 1 root root 1679 Aug 19 22:21 proxy.key
-rw-r--r-- 1 root root 1009 Aug 19 22:21 proxy.pem
[root@server81 kubernetesTLS]# 
[root@server81 kubernetesTLS]# systemctl daemon-reload
[root@server81 kubernetesTLS]# systemctl enable kubelet
[root@server81 kubernetesTLS]# systemctl start kubelet
[root@server81 kubernetesTLS]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-08-20 15:07:26 HKT; 640ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 3589 (kubelet)
   Memory: 16.1M
   CGroup: /system.slice/kubelet.service
           └─3589 /usr/bin/kubelet --logtostderr=true --v=0 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --hostname-override=172.16.5.81 --pod-infra-container-image=172.16.5.81:5000/...

Aug 20 15:07:26 server81 systemd[1]: Started Kubernetes Kubelet Server.
Aug 20 15:07:26 server81 systemd[1]: Starting Kubernetes Kubelet Server...
Aug 20 15:07:26 server81 kubelet[3589]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. Se...information.
Aug 20 15:07:26 server81 kubelet[3589]: Flag --serialize-image-pulls has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi...information.
Aug 20 15:07:26 server81 kubelet[3589]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag....information.
Aug 20 15:07:26 server81 kubelet[3589]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. S...information.
Aug 20 15:07:26 server81 kubelet[3589]: I0820 15:07:26.364083    3589 feature_gate.go:230] feature gates: &{map[]}
Aug 20 15:07:26 server81 kubelet[3589]: I0820 15:07:26.364224    3589 feature_gate.go:230] feature gates: &{map[]}
Hint: Some lines were ellipsized, use -l to show in full.
[root@server81 kubernetesTLS]# 
[root@server81 kubernetesTLS]# ls -ll
total 44
-rw-r--r-- 1 root root 1675 Aug 19 22:21 admin.key
-rw-r--r-- 1 root root 1050 Aug 19 22:21 admin.pem
-rw-r--r-- 1 root root 1675 Aug 19 22:21 apiserver.key
-rw-r--r-- 1 root root 1302 Aug 19 22:21 apiserver.pem
-rw-r--r-- 1 root root 1679 Aug 19 22:21 ca.key
-rw-r--r-- 1 root root 1135 Aug 19 22:21 ca.pem
-rw------- 1 root root  227 Aug 20 15:07 kubelet-client.key.tmp
-rw-r--r-- 1 root root 2177 Aug 20 15:07 kubelet.crt
-rw------- 1 root root 1679 Aug 20 15:07 kubelet.key
-rw-r--r-- 1 root root 1679 Aug 19 22:21 proxy.key
-rw-r--r-- 1 root root 1009 Aug 19 22:21 proxy.pem
[root@server81 kubernetesTLS]# 

Note

  • As the folder listing shows, once the kubelet service starts it automatically generates three files: kubelet-client.key.tmp, kubelet.crt, and kubelet.key.
  • To redeploy the kubelet service, delete these three files first; otherwise the service complains that the certificates have expired and fails to start (see the sketch below).
  • Also note that kubelet-client.key.tmp is still a temporary file: kubelet has sent a CSR to the apiserver, and the apiserver has not approved it yet.
  • So the next step is to go back to the master and approve the CSR.
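
A sketch of that cleanup, in case a Node's kubelet ever has to be bootstrapped again (removing kubelet.kubeconfig is my assumption, to force a completely fresh CSR):

# Sketch: wipe the bootstrap artifacts so kubelet re-issues a CSR on next start.
systemctl stop kubelet
rm -f /etc/kubernetes/kubernetesTLS/kubelet-client.key.tmp \
      /etc/kubernetes/kubernetesTLS/kubelet.crt \
      /etc/kubernetes/kubernetesTLS/kubelet.key
rm -f /etc/kubernetes/kubelet.kubeconfig    # assumption: forces a fresh bootstrap
systemctl start kubelet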

Approve the CSR on the master node

The CSR approval script on the master looks like this:

#!/bin/bash
basedir=$(cd `dirname $0`;pwd)

## function
function node_approve_csr(){
    CSR=`kubectl get csr | grep csr | grep Pending | awk '{print $1}' | head -n 1`
    kubectl certificate approve $CSR
    kubectl get nodes
}
node_approve_csr

The CSR approval runs as follows:

[root@server81 kubernetesTLS]# ls
admin.key  admin.pem  apiserver.key  apiserver.pem  ca.key  ca.pem  kubelet-client.key.tmp  kubelet.crt  kubelet.key  proxy.key  proxy.pem
[root@server81 kubernetesTLS]# 
[root@server81 kubernetesTLS]# kubectl get node
No resources found.
[root@server81 kubernetesTLS]# 
[root@server81 kubernetesTLS]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-_xuU6rY0NNn9v2kgY58dOI86X_F1PBcbziXByJXnB7s   54m       kubelet-bootstrap   Pending
node-csr-fH4Ct4Fg4TgzFV0dP-SlfVCtTo9XNCJjajzPohDVxHE   6m        kubelet-bootstrap   Pending
[root@server81 kubernetesTLS]# 
[root@server81 kubernetesTLS]# kubectl certificate approve node-csr-fH4Ct4Fg4TgzFV0dP-SlfVCtTo9XNCJjajzPohDVxHE
certificatesigningrequest.certificates.k8s.io/node-csr-fH4Ct4Fg4TgzFV0dP-SlfVCtTo9XNCJjajzPohDVxHE approved
[root@server81 kubernetesTLS]# 
[root@server81 kubernetesTLS]# kubectl get node
NAME          STATUS     ROLES     AGE       VERSION
172.16.5.81   NotReady   <none>    5s        v1.11.0
[root@server81 kubernetesTLS]# 
[root@server81 kubernetesTLS]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-_xuU6rY0NNn9v2kgY58dOI86X_F1PBcbziXByJXnB7s   54m       kubelet-bootstrap   Pending
node-csr-fH4Ct4Fg4TgzFV0dP-SlfVCtTo9XNCJjajzPohDVxHE   7m        kubelet-bootstrap   Approved,Issued
[root@server81 kubernetesTLS]# 
[root@server81 kubernetesTLS]# ls -ll
total 44
-rw-r--r-- 1 root root 1675 Aug 19 22:21 admin.key
-rw-r--r-- 1 root root 1050 Aug 19 22:21 admin.pem
-rw-r--r-- 1 root root 1675 Aug 19 22:21 apiserver.key
-rw-r--r-- 1 root root 1302 Aug 19 22:21 apiserver.pem
-rw-r--r-- 1 root root 1679 Aug 19 22:21 ca.key
-rw-r--r-- 1 root root 1135 Aug 19 22:21 ca.pem
-rw------- 1 root root 1183 Aug 20 15:14 kubelet-client-2018-08-20-15-14-35.pem
lrwxrwxrwx 1 root root   68 Aug 20 15:14 kubelet-client-current.pem -> /etc/kubernetes/kubernetesTLS/kubelet-client-2018-08-20-15-14-35.pem
-rw-r--r-- 1 root root 2177 Aug 20 15:07 kubelet.crt
-rw------- 1 root root 1679 Aug 20 15:07 kubelet.key
-rw-r--r-- 1 root root 1679 Aug 19 22:21 proxy.key
-rw-r--r-- 1 root root 1009 Aug 19 22:21 proxy.pem
[root@server81 kubernetesTLS]# 

Notes:

  • After running kubectl certificate approve node-csr-fH4Ct4Fg4TgzFV0dP-SlfVCtTo9XNCJjajzPohDVxHE (a loop for approving several Pending CSRs at once is sketched below),
  • running kubectl get csr again shows that the node-csr's condition has changed to Approved,Issued,
  • and kubectl get node now lists the node, although its status is still NotReady.
  • Also, looking at the TLS folder, the temporary kubelet-client.key.tmp file was replaced after approval by:
-rw------- 1 root root 1183 Aug 20 15:14 kubelet-client-2018-08-20-15-14-35.pem
lrwxrwxrwx 1 root root   68 Aug 20 15:14 kubelet-client-current.pem -> /etc/kubernetes/kubernetesTLS/kubelet-client-2018-08-20-15-14-35.pem
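
Since each Node that bootstraps leaves its own Pending request, a small variant of the approval script that handles all of them at once can be handy (a sketch):

#!/bin/bash
# Sketch: approve every Pending CSR instead of only the first one.
for csr in $(kubectl get csr | awk '/Pending/{print $1}'); do
    kubectl certificate approve "$csr"
done
kubectl get nodes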

Finally, take a look at the kubelet logs after startup:

[root@server81 install_k8s_node]# journalctl -f -u kubelet
-- Logs begin at Sun 2018-08-19 21:26:42 HKT. --
Aug 20 15:20:51 server81 kubelet[3589]: W0820 15:20:51.476453    3589 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 20 15:20:51 server81 kubelet[3589]: E0820 15:20:51.477201    3589 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 20 15:20:56 server81 kubelet[3589]: W0820 15:20:56.479691    3589 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 20 15:20:56 server81 kubelet[3589]: E0820 15:20:56.480061    3589 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 20 15:21:01 server81 kubelet[3589]: W0820 15:21:01.483272    3589 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 20 15:21:01 server81 kubelet[3589]: E0820 15:21:01.484824    3589 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 20 15:21:06 server81 kubelet[3589]: W0820 15:21:06.488203    3589 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 20 15:21:06 server81 kubelet[3589]: E0820 15:21:06.489788    3589 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 20 15:21:11 server81 kubelet[3589]: W0820 15:21:11.497281    3589 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 20 15:21:11 server81 kubelet[3589]: E0820 15:21:11.497941    3589 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 20 15:21:16 server81 kubelet[3589]: W0820 15:21:16.502290    3589 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 20 15:21:16 server81 kubelet[3589]: E0820 15:21:16.502733    3589 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Note: the logs report that no CNI network is configured yet; this is addressed later when the Calico network is installed.


Deploy the kube-proxy service

Write the kube-proxy.service file (/usr/lib/systemd/system)

[root@server81 install_k8s_node]# cat /usr/lib/systemd/system/kube-proxy.service 
[Unit]
Description=Kube Proxy Service
After=network.target

[Service]
Type=simple
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=always
LimitNOFILE=65536

[Install]
WantedBy=default.target
[root@server81 install_k8s_node]# 

kube-proxy.service notes:

EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy

These lines make kube-proxy read two configuration files, config and proxy. The config file was already written during the master deployment and is shared by all components; the proxy-specific configuration file is written separately below.


ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS

This defines the binary the service runs (/usr/bin/kube-proxy) together with its startup parameters.


The proxy configuration file (/etc/kubernetes)

[root@server81 install_k8s_node]# cat /etc/kubernetes/
apiserver              config                 kubelet                kube-proxy.kubeconfig  proxy                  token.csv              
bootstrap.kubeconfig   controller-manager     kubelet.kubeconfig     kubernetesTLS/         scheduler              
[root@server81 install_k8s_node]# cat /etc/kubernetes/proxy 
###
# kubernetes proxy config

# defaults from config and proxy should be adequate

# Add your own!
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig  --cluster-cidr=10.1.0.0/16"
[root@server81 install_k8s_node]# 

Parameter notes:

  • --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig: the kubeconfig file kube-proxy runs with.
  • --cluster-cidr=10.1.0.0/16: the virtual IP range kubernetes assigns to pods (the CNI network); Calico consumes this value later.

Start the kube-proxy service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

Execution output:

[root@server81 install_k8s_node]# systemctl daemon-reload
[root@server81 install_k8s_node]# systemctl enable kube-proxy
[root@server81 install_k8s_node]# systemctl start kube-proxy
[root@server81 install_k8s_node]# systemctl status kube-proxy
● kube-proxy.service - Kube Proxy Service
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-08-20 15:32:10 HKT; 11min ago
 Main PID: 3988 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           ‣ 3988 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://172.16.5.81:8080 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.1.0.0/16

Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.742562    3988 conntrack.go:52] Setting nf_conntrack_max to 131072
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.748678    3988 conntrack.go:83] Setting conntrack hashsize to 32768
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.749216    3988 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.749266    3988 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.749762    3988 config.go:102] Starting endpoints config controller
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.749807    3988 controller_utils.go:1025] Waiting for caches to sync for endpoints config controller
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.749838    3988 config.go:202] Starting service config controller
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.749845    3988 controller_utils.go:1025] Waiting for caches to sync for service config controller
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.850911    3988 controller_utils.go:1032] Caches are synced for endpoints config controller
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.850959    3988 controller_utils.go:1032] Caches are synced for service config controller
[root@server81 install_k8s_node]# 
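
With kube-proxy up, two quick sanity checks (a sketch; kube-proxy in its default iptables mode programs the KUBE-SERVICES chain, and 10.0.6.1 is assumed to be the apiserver's service VIP, the first address of the service range):

# Sketch: verify kube-proxy has programmed service rules and the service VIP answers.
iptables -t nat -L KUBE-SERVICES -n | head
curl -k https://10.0.6.1:443/version    # assumption: 10.0.6.1 is the kubernetes service ClusterIP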

At this point, the Node services are fully deployed. For Server86 and 87 I use scripts for a quick deployment; the process is identical to Server81's.
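
Driving those per-Node step scripts remotely keeps it to one loop; a sketch, assuming /opt/install_kubernetes was already copied to each server as shown below:

#!/bin/bash
# Sketch: run the Node step scripts on the remaining servers over ssh.
for n in 172.16.5.86 172.16.5.87; do
    ssh root@$n "cd /opt/install_kubernetes/install_k8s_node && \
        ./Step1_config.sh && ./Step2_install_docker.sh && \
        ./Step3_install_kubelet.sh && ./Step4_install_proxy.sh"
done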

Quick deployment on Server86 using the scripts

[root@server86 kubernetesTLS]# cd /opt/
[root@server86 opt]# ls
install_etcd_cluster  install_kubernetes  rh
[root@server86 opt]# 
[root@server86 opt]# 
[root@server86 opt]# cd install_kubernetes/
[root@server86 install_kubernetes]# ls
check_etcd  install_Calico  install_CoreDNS  install_k8s_master  install_k8s_node  install_kubernetes_software  install_RAS_node  MASTER_INFO  reademe.txt
[root@server86 install_kubernetes]# 
[root@server86 install_kubernetes]# cd install_k8s_node/
[root@server86 install_k8s_node]# ls
nodefile  Step1_config.sh  Step2_install_docker.sh  Step3_install_kubelet.sh  Step4_install_proxy.sh  Step5_node_approve_csr.sh  Step6_master_node_context.sh
[root@server86 install_k8s_node]# 
[root@server86 install_k8s_node]# ./Step1_config.sh 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
SELinux status:                 disabled
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kubectl’ -> ‘/usr/bin/kubectl’
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kubelet’ -> ‘/usr/bin/kubelet’
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kube-proxy’ -> ‘/usr/bin/kube-proxy’
[root@server86 install_k8s_node]# 
[root@server86 install_k8s_node]# ./Step2_install_docker.sh 
Loaded plugins: fastestmirror, langpacks
Examining /opt/install_kubernetes/install_k8s_node/nodefile/docker/docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm: docker-ce-18.03.0.ce-1.el7.centos.x86_64
Marking /opt/install_kubernetes/install_k8s_node/nodefile/docker/docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 0:18.03.0.ce-1.el7.centos will be installed
--> Processing Dependency: container-selinux >= 2.9 for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * epel: mirrors.tongji.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.163.com
--> Processing Dependency: pigz for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.66-1.el7 will be installed
--> Processing Dependency: selinux-policy-targeted >= 3.13.1-192 for package: 2:container-selinux-2.66-1.el7.noarch
--> Processing Dependency: selinux-policy-base >= 3.13.1-192 for package: 2:container-selinux-2.66-1.el7.noarch
--> Processing Dependency: selinux-policy >= 3.13.1-192 for package: 2:container-selinux-2.66-1.el7.noarch
---> Package pigz.x86_64 0:2.3.4-1.el7 will be installed
--> Running transaction check
---> Package selinux-policy.noarch 0:3.13.1-166.el7_4.5 will be updated
---> Package selinux-policy.noarch 0:3.13.1-192.el7_5.4 will be an update
--> Processing Dependency: policycoreutils >= 2.5-18 for package: selinux-policy-3.13.1-192.el7_5.4.noarch
---> Package selinux-policy-targeted.noarch 0:3.13.1-166.el7_4.5 will be updated
---> Package selinux-policy-targeted.noarch 0:3.13.1-192.el7_5.4 will be an update
--> Running transaction check
---> Package policycoreutils.x86_64 0:2.5-17.1.el7 will be updated
--> Processing Dependency: policycoreutils = 2.5-17.1.el7 for package: policycoreutils-python-2.5-17.1.el7.x86_64
---> Package policycoreutils.x86_64 0:2.5-22.el7 will be an update
--> Processing Dependency: libsepol >= 2.5-8 for package: policycoreutils-2.5-22.el7.x86_64
--> Processing Dependency: libselinux-utils >= 2.5-12 for package: policycoreutils-2.5-22.el7.x86_64
--> Running transaction check
---> Package libselinux-utils.x86_64 0:2.5-11.el7 will be updated
---> Package libselinux-utils.x86_64 0:2.5-12.el7 will be an update
--> Processing Dependency: libselinux(x86-64) = 2.5-12.el7 for package: libselinux-utils-2.5-12.el7.x86_64
---> Package libsepol.i686 0:2.5-6.el7 will be updated
---> Package libsepol.x86_64 0:2.5-6.el7 will be updated
---> Package libsepol.i686 0:2.5-8.1.el7 will be an update
---> Package libsepol.x86_64 0:2.5-8.1.el7 will be an update
---> Package policycoreutils-python.x86_64 0:2.5-17.1.el7 will be updated
---> Package policycoreutils-python.x86_64 0:2.5-22.el7 will be an update
--> Processing Dependency: setools-libs >= 3.3.8-2 for package: policycoreutils-python-2.5-22.el7.x86_64
--> Processing Dependency: libsemanage-python >= 2.5-9 for package: policycoreutils-python-2.5-22.el7.x86_64
--> Running transaction check
---> Package libselinux.i686 0:2.5-11.el7 will be updated
---> Package libselinux.x86_64 0:2.5-11.el7 will be updated
--> Processing Dependency: libselinux(x86-64) = 2.5-11.el7 for package: libselinux-python-2.5-11.el7.x86_64
---> Package libselinux.i686 0:2.5-12.el7 will be an update
---> Package libselinux.x86_64 0:2.5-12.el7 will be an update
---> Package libsemanage-python.x86_64 0:2.5-8.el7 will be updated
---> Package libsemanage-python.x86_64 0:2.5-11.el7 will be an update
--> Processing Dependency: libsemanage = 2.5-11.el7 for package: libsemanage-python-2.5-11.el7.x86_64
---> Package setools-libs.x86_64 0:3.3.8-1.1.el7 will be updated
---> Package setools-libs.x86_64 0:3.3.8-2.el7 will be an update
--> Running transaction check
---> Package libselinux-python.x86_64 0:2.5-11.el7 will be updated
---> Package libselinux-python.x86_64 0:2.5-12.el7 will be an update
---> Package libsemanage.x86_64 0:2.5-8.el7 will be updated
---> Package libsemanage.x86_64 0:2.5-11.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================================================================================================================
 Package                                 Arch                   Version                                    Repository                                                 Size
===========================================================================================================================================================================
Installing:
 docker-ce                               x86_64                 18.03.0.ce-1.el7.centos                    /docker-ce-18.03.0.ce-1.el7.centos.x86_64                 151 M
Installing for dependencies:
 container-selinux                       noarch                 2:2.66-1.el7                               extras                                                     35 k
 pigz                                    x86_64                 2.3.4-1.el7                                epel                                                       81 k
Updating for dependencies:
 libselinux                              i686                   2.5-12.el7                                 base                                                      166 k
 libselinux                              x86_64                 2.5-12.el7                                 base                                                      162 k
 libselinux-python                       x86_64                 2.5-12.el7                                 base                                                      235 k
 libselinux-utils                        x86_64                 2.5-12.el7                                 base                                                      151 k
 libsemanage                             x86_64                 2.5-11.el7                                 base                                                      150 k
 libsemanage-python                      x86_64                 2.5-11.el7                                 base                                                      112 k
 libsepol                                i686                   2.5-8.1.el7                                base                                                      293 k
 libsepol                                x86_64                 2.5-8.1.el7                                base                                                      297 k
 policycoreutils                         x86_64                 2.5-22.el7                                 base                                                      867 k
 policycoreutils-python                  x86_64                 2.5-22.el7                                 base                                                      454 k
 selinux-policy                          noarch                 3.13.1-192.el7_5.4                         updates                                                   453 k
 selinux-policy-targeted                 noarch                 3.13.1-192.el7_5.4                         updates                                                   6.6 M
 setools-libs                            x86_64                 3.3.8-2.el7                                base                                                      619 k

Transaction Summary
===========================================================================================================================================================================
Install  1 Package  (+ 2 Dependent packages)
Upgrade             ( 13 Dependent packages)

Total size: 161 M
Total download size: 11 M
Downloading packages:
No Presto metadata available for base
updates/7/x86_64/prestodelta                                                                                                                        | 420 kB  00:00:00     
(1/15): container-selinux-2.66-1.el7.noarch.rpm                                                                                                     |  35 kB  00:00:00     
(2/15): libselinux-2.5-12.el7.i686.rpm                                                                                                              | 166 kB  00:00:00     
(3/15): libsemanage-2.5-11.el7.x86_64.rpm                                                                                                           | 150 kB  00:00:00     
(4/15): libsemanage-python-2.5-11.el7.x86_64.rpm                                                                                                    | 112 kB  00:00:00     
(5/15): libselinux-utils-2.5-12.el7.x86_64.rpm                                                                                                      | 151 kB  00:00:00     
(6/15): libselinux-2.5-12.el7.x86_64.rpm                                                                                                            | 162 kB  00:00:00     
(7/15): libsepol-2.5-8.1.el7.i686.rpm                                                                                                               | 293 kB  00:00:00     
(8/15): libsepol-2.5-8.1.el7.x86_64.rpm                                                                                                             | 297 kB  00:00:00     
(9/15): selinux-policy-3.13.1-192.el7_5.4.noarch.rpm                                                                                                | 453 kB  00:00:00     
(10/15): policycoreutils-2.5-22.el7.x86_64.rpm                                                                                                      | 867 kB  00:00:00     
(11/15): selinux-policy-targeted-3.13.1-192.el7_5.4.noarch.rpm                                                                                      | 6.6 MB  00:00:00     
(12/15): policycoreutils-python-2.5-22.el7.x86_64.rpm                                                                                               | 454 kB  00:00:01     
(13/15): setools-libs-3.3.8-2.el7.x86_64.rpm                                                                                                        | 619 kB  00:00:00     
(14/15): pigz-2.3.4-1.el7.x86_64.rpm                                                                                                                |  81 kB  00:00:01     
(15/15): libselinux-python-2.5-12.el7.x86_64.rpm                                                                                                    | 235 kB  00:00:01     
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                      4.7 MB/s |  11 MB  00:00:02     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : libsepol-2.5-8.1.el7.x86_64                             1/29 
  Updating   : libselinux-2.5-12.el7.x86_64                            2/29 
  Updating   : libsemanage-2.5-11.el7.x86_64                           3/29 
  Updating   : libselinux-utils-2.5-12.el7.x86_64                      4/29 
  Updating   : policycoreutils-2.5-22.el7.x86_64                       5/29 
  Updating   : selinux-policy-3.13.1-192.el7_5.4.noarch                6/29 
  Updating   : selinux-policy-targeted-3.13.1-192.el7_5.4.noarch       7/29 
  Updating   : libsemanage-python-2.5-11.el7.x86_64                    8/29 
  Updating   : libselinux-python-2.5-12.el7.x86_64                     9/29 
  Updating   : setools-libs-3.3.8-2.el7.x86_64                        10/29 
  Updating   : policycoreutils-python-2.5-22.el7.x86_64               11/29 
  Installing : 2:container-selinux-2.66-1.el7.noarch                  12/29 
setsebool:  SELinux is disabled.
  Installing : pigz-2.3.4-1.el7.x86_64                                13/29 
  Updating   : libsepol-2.5-8.1.el7.i686                              14/29 
  Installing : docker-ce-18.03.0.ce-1.el7.centos.x86_64               15/29 
  Updating   : libselinux-2.5-12.el7.i686                             16/29 
  Cleanup    : selinux-policy-targeted-3.13.1-166.el7_4.5.noarch      17/29 
  Cleanup    : policycoreutils-python-2.5-17.1.el7.x86_64             18/29 
  Cleanup    : selinux-policy-3.13.1-166.el7_4.5.noarch               19/29 
  Cleanup    : libselinux-2.5-11.el7                                  20/29 
  Cleanup    : policycoreutils-2.5-17.1.el7.x86_64                    21/29 
  Cleanup    : libselinux-utils-2.5-11.el7.x86_64                     22/29 
  Cleanup    : setools-libs-3.3.8-1.1.el7.x86_64                      23/29 
  Cleanup    : libselinux-python-2.5-11.el7.x86_64                    24/29 
  Cleanup    : libsemanage-python-2.5-8.el7.x86_64                    25/29 
  Cleanup    : libsepol-2.5-6.el7                                     26/29 
  Cleanup    : libsemanage-2.5-8.el7.x86_64                           27/29 
  Cleanup    : libselinux-2.5-11.el7                                  28/29 
  Cleanup    : libsepol-2.5-6.el7                                     29/29 
  Verifying  : libselinux-python-2.5-12.el7.x86_64                     1/29 
  Verifying  : selinux-policy-3.13.1-192.el7_5.4.noarch                2/29 
  Verifying  : setools-libs-3.3.8-2.el7.x86_64                         3/29 
  Verifying  : libsemanage-python-2.5-11.el7.x86_64                    4/29 
  Verifying  : policycoreutils-2.5-22.el7.x86_64                       5/29 
  Verifying  : libsepol-2.5-8.1.el7.i686                               6/29 
  Verifying  : libsemanage-2.5-11.el7.x86_64                           7/29 
  Verifying  : selinux-policy-targeted-3.13.1-192.el7_5.4.noarch       8/29 
  Verifying  : pigz-2.3.4-1.el7.x86_64                                 9/29 
  Verifying  : policycoreutils-python-2.5-22.el7.x86_64               10/29 
  Verifying  : 2:container-selinux-2.66-1.el7.noarch                  11/29 
  Verifying  : libselinux-2.5-12.el7.i686                             12/29 
  Verifying  : libsepol-2.5-8.1.el7.x86_64                            13/29 
  Verifying  : libselinux-2.5-12.el7.x86_64                           14/29 
  Verifying  : docker-ce-18.03.0.ce-1.el7.centos.x86_64               15/29 
  Verifying  : libselinux-utils-2.5-12.el7.x86_64                     16/29 
  Verifying  : libselinux-utils-2.5-11.el7.x86_64                     17/29 
  Verifying  : libsepol-2.5-6.el7.i686                                18/29 
  Verifying  : libselinux-2.5-11.el7.x86_64                           19/29 
  Verifying  : libsepol-2.5-6.el7.x86_64                              20/29 
  Verifying  : policycoreutils-python-2.5-17.1.el7.x86_64             21/29 
  Verifying  : selinux-policy-targeted-3.13.1-166.el7_4.5.noarch      22/29 
  Verifying  : policycoreutils-2.5-17.1.el7.x86_64                    23/29 
  Verifying  : libsemanage-python-2.5-8.el7.x86_64                    24/29 
  Verifying  : libselinux-2.5-11.el7.i686                             25/29 
  Verifying  : libsemanage-2.5-8.el7.x86_64                           26/29 
  Verifying  : selinux-policy-3.13.1-166.el7_4.5.noarch               27/29 
  Verifying  : libselinux-python-2.5-11.el7.x86_64                    28/29 
  Verifying  : setools-libs-3.3.8-1.1.el7.x86_64                      29/29 

Installed:
  docker-ce.x86_64 0:18.03.0.ce-1.el7.centos

Dependency Installed:
  container-selinux.noarch 2:2.66-1.el7                pigz.x86_64 0:2.3.4-1.el7

Dependency Updated:
  libselinux.i686 0:2.5-12.el7                         libselinux.x86_64 0:2.5-12.el7                       libselinux-python.x86_64 0:2.5-12.el7
  libselinux-utils.x86_64 0:2.5-12.el7                 libsemanage.x86_64 0:2.5-11.el7                      libsemanage-python.x86_64 0:2.5-11.el7
  libsepol.i686 0:2.5-8.1.el7                          libsepol.x86_64 0:2.5-8.1.el7                        policycoreutils.x86_64 0:2.5-22.el7
  policycoreutils-python.x86_64 0:2.5-22.el7           selinux-policy.noarch 0:3.13.1-192.el7_5.4           selinux-policy-targeted.noarch 0:3.13.1-192.el7_5.4
  setools-libs.x86_64 0:3.3.8-2.el7

Complete!
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-08-20 15:57:07 HKT; 21ms ago
     Docs: https://docs.docker.com
 Main PID: 2955 (dockerd)
   Memory: 39.0M
   CGroup: /system.slice/docker.service
           ├─2955 /usr/bin/dockerd
           └─2964 docker-containerd --config /var/run/docker/containerd/containerd.toml

Aug 20 15:57:06 server86 dockerd[2955]: time="2018-08-20T15:57:06.737217664+08:00" level=info msg="devmapper: Creating filesystem xfs on device docker-8:3-67...8927-base]"
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.045640563+08:00" level=info msg="devmapper: Successfully created filesystem xfs on device d...18927-base"
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.257682803+08:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.260865731+08:00" level=info msg="Loading containers: start."
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.603658334+08:00" level=info msg="Default bridge (docker0) is assigned with an IP address 17...IP address"
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.763307367+08:00" level=info msg="Loading containers: done."
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.812802202+08:00" level=info msg="Docker daemon" commit=0520e24 graphdriver(s)=devicemapper ...=18.03.0-ce
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.813732684+08:00" level=info msg="Daemon has completed initialization"
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.866979598+08:00" level=info msg="API listen on /var/run/docker.sock"
Aug 20 15:57:07 server86 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
[root@server86 install_k8s_node]# 
[root@server86 install_k8s_node]# ls
nodefile  Step1_config.sh  Step2_install_docker.sh  Step3_install_kubelet.sh  Step4_install_proxy.sh  Step5_node_approve_csr.sh  Step6_master_node_context.sh
[root@server86 install_k8s_node]# 
[root@server86 install_k8s_node]# ./Step3_install_kubelet.sh 
MASTER_IP=172.16.5.81
cat: /opt/ETCD_CLUSER_INFO: No such file or directory
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-08-20 15:57:15 HKT; 142ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 3195 (kubelet)
   Memory: 5.8M
   CGroup: /system.slice/kubelet.service
           └─3195 /usr/bin/kubelet --logtostderr=true --v=0 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --hostname-override=172.16.5.86 --pod-infra-container-image=...

Aug 20 15:57:15 server86 systemd[1]: Started Kubernetes Kubelet Server.
Aug 20 15:57:15 server86 systemd[1]: Starting Kubernetes Kubelet Server...
[root@server86 install_k8s_node]# 
[root@server86 install_k8s_node]# ./Step4_install_proxy.sh 
Created symlink from /etc/systemd/system/default.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
● kube-proxy.service - Kube Proxy Service
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-08-20 15:57:19 HKT; 97ms ago
 Main PID: 3282 (kube-proxy)
   Memory: 5.5M
   CGroup: /system.slice/kube-proxy.service
           └─3282 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://172.16.5.81:8080 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.1.0...

Aug 20 15:57:19 server86 systemd[1]: Started Kube Proxy Service.
Aug 20 15:57:19 server86 systemd[1]: Starting Kube Proxy Service...
[root@server86 install_k8s_node]# 
[root@server86 install_k8s_node]# 

Quick deployment on Server87 using the scripts

[root@server87 ~]# cd /opt/
[root@server87 opt]# ls
install_etcd_cluster  install_kubernetes  rh
[root@server87 opt]# cd install_kubernetes/
[root@server87 install_kubernetes]# ls
check_etcd  install_Calico  install_CoreDNS  install_k8s_master  install_k8s_node  install_kubernetes_software  install_RAS_node  MASTER_INFO  reademe.txt
[root@server87 install_kubernetes]# cd install_k8s_node/
[root@server87 install_k8s_node]# ls
nodefile  Step1_config.sh  Step2_install_docker.sh  Step3_install_kubelet.sh  Step4_install_proxy.sh  Step5_node_approve_csr.sh  Step6_master_node_context.sh
[root@server87 install_k8s_node]# ./Step1_config.sh 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
SELinux status:                 disabled
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kubectl’ -> ‘/usr/bin/kubectl’
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kubelet’ -> ‘/usr/bin/kubelet’
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kube-proxy’ -> ‘/usr/bin/kube-proxy’
[root@server87 install_k8s_node]# ./Step2_install_docker.sh 
Loaded plugins: fastestmirror, langpacks
Examining /opt/install_kubernetes/install_k8s_node/nodefile/docker/docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm: docker-ce-18.03.0.ce-1.el7.centos.x86_64
Marking /opt/install_kubernetes/install_k8s_node/nodefile/docker/docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 0:18.03.0.ce-1.el7.centos will be installed
--> Processing Dependency: container-selinux >= 2.9 for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * epel: mirrors.tongji.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.163.com
--> Processing Dependency: libseccomp >= 2.3 for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
--> Processing Dependency: pigz for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.66-1.el7 will be installed
--> Processing Dependency: selinux-policy-targeted >= 3.13.1-192 for package: 2:container-selinux-2.66-1.el7.noarch
--> Processing Dependency: selinux-policy-base >= 3.13.1-192 for package: 2:container-selinux-2.66-1.el7.noarch
--> Processing Dependency: selinux-policy >= 3.13.1-192 for package: 2:container-selinux-2.66-1.el7.noarch
--> Processing Dependency: policycoreutils >= 2.5-11 for package: 2:container-selinux-2.66-1.el7.noarch
---> Package libseccomp.x86_64 0:2.2.1-1.el7 will be updated
---> Package libseccomp.x86_64 0:2.3.1-3.el7 will be an update
---> Package pigz.x86_64 0:2.3.4-1.el7 will be installed
--> Running transaction check
---> Package policycoreutils.x86_64 0:2.2.5-20.el7 will be updated
--> Processing Dependency: policycoreutils = 2.2.5-20.el7 for package: policycoreutils-python-2.2.5-20.el7.x86_64
---> Package policycoreutils.x86_64 0:2.5-22.el7 will be an update
--> Processing Dependency: libsepol >= 2.5-8 for package: policycoreutils-2.5-22.el7.x86_64
--> Processing Dependency: libselinux-utils >= 2.5-12 for package: policycoreutils-2.5-22.el7.x86_64
--> Processing Dependency: libsepol.so.1(LIBSEPOL_1.1)(64bit) for package: policycoreutils-2.5-22.el7.x86_64
--> Processing Dependency: libsepol.so.1(LIBSEPOL_1.0)(64bit) for package: policycoreutils-2.5-22.el7.x86_64
--> Processing Dependency: libsemanage.so.1(LIBSEMANAGE_1.1)(64bit) for package: policycoreutils-2.5-22.el7.x86_64
---> Package selinux-policy.noarch 0:3.13.1-60.el7 will be updated
---> Package selinux-policy.noarch 0:3.13.1-192.el7_5.4 will be an update
---> Package selinux-policy-targeted.noarch 0:3.13.1-60.el7 will be updated
---> Package selinux-policy-targeted.noarch 0:3.13.1-192.el7_5.4 will be an update
--> Running transaction check
---> Package libselinux-utils.x86_64 0:2.2.2-6.el7 will be updated
---> Package libselinux-utils.x86_64 0:2.5-12.el7 will be an update
--> Processing Dependency: libselinux(x86-64) = 2.5-12.el7 for package: libselinux-utils-2.5-12.el7.x86_64
---> Package libsemanage.x86_64 0:2.1.10-18.el7 will be updated
--> Processing Dependency: libsemanage = 2.1.10-18.el7 for package: libsemanage-python-2.1.10-18.el7.x86_64
---> Package libsemanage.x86_64 0:2.5-11.el7 will be an update
---> Package libsepol.x86_64 0:2.1.9-3.el7 will be updated
---> Package libsepol.x86_64 0:2.5-8.1.el7 will be an update
---> Package policycoreutils-python.x86_64 0:2.2.5-20.el7 will be updated
---> Package policycoreutils-python.x86_64 0:2.5-22.el7 will be an update
--> Processing Dependency: setools-libs >= 3.3.8-2 for package: policycoreutils-python-2.5-22.el7.x86_64
--> Running transaction check
---> Package libselinux.x86_64 0:2.2.2-6.el7 will be updated
--> Processing Dependency: libselinux = 2.2.2-6.el7 for package: libselinux-python-2.2.2-6.el7.x86_64
---> Package libselinux.x86_64 0:2.5-12.el7 will be an update
---> Package libsemanage-python.x86_64 0:2.1.10-18.el7 will be updated
---> Package libsemanage-python.x86_64 0:2.5-11.el7 will be an update
---> Package setools-libs.x86_64 0:3.3.7-46.el7 will be updated
---> Package setools-libs.x86_64 0:3.3.8-2.el7 will be an update
--> Running transaction check
---> Package libselinux-python.x86_64 0:2.2.2-6.el7 will be updated
---> Package libselinux-python.x86_64 0:2.5-12.el7 will be an update
--> Processing Conflict: libselinux-2.5-12.el7.x86_64 conflicts systemd < 219-20
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package systemd.x86_64 0:219-19.el7 will be updated
--> Processing Dependency: systemd = 219-19.el7 for package: systemd-python-219-19.el7.x86_64
--> Processing Dependency: systemd = 219-19.el7 for package: systemd-sysv-219-19.el7.x86_64
---> Package systemd.x86_64 0:219-57.el7 will be an update
--> Processing Dependency: systemd-libs = 219-57.el7 for package: systemd-219-57.el7.x86_64
--> Processing Dependency: liblz4.so.1()(64bit) for package: systemd-219-57.el7.x86_64
--> Running transaction check
---> Package lz4.x86_64 0:1.7.5-2.el7 will be installed
---> Package systemd-libs.x86_64 0:219-19.el7 will be updated
--> Processing Dependency: systemd-libs = 219-19.el7 for package: libgudev1-219-19.el7.x86_64
---> Package systemd-libs.x86_64 0:219-57.el7 will be an update
---> Package systemd-python.x86_64 0:219-19.el7 will be updated
---> Package systemd-python.x86_64 0:219-57.el7 will be an update
---> Package systemd-sysv.x86_64 0:219-19.el7 will be updated
---> Package systemd-sysv.x86_64 0:219-57.el7 will be an update
--> Running transaction check
---> Package libgudev1.x86_64 0:219-19.el7 will be updated
---> Package libgudev1.x86_64 0:219-57.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================================================================================================================
 Package                      Arch      Version                    Repository                                   Size
===========================================================================================================================================================================
Installing:
 docker-ce                    x86_64    18.03.0.ce-1.el7.centos    /docker-ce-18.03.0.ce-1.el7.centos.x86_64   151 M
Updating:
 systemd                      x86_64    219-57.el7                 base                                        5.0 M
Installing for dependencies:
 container-selinux            noarch    2:2.66-1.el7               extras                                       35 k
 lz4                          x86_64    1.7.5-2.el7                base                                         98 k
 pigz                         x86_64    2.3.4-1.el7                epel                                         81 k
Updating for dependencies:
 libgudev1                    x86_64    219-57.el7                 base                                         92 k
 libseccomp                   x86_64    2.3.1-3.el7                base                                         56 k
 libselinux                   x86_64    2.5-12.el7                 base                                        162 k
 libselinux-python            x86_64    2.5-12.el7                 base                                        235 k
 libselinux-utils             x86_64    2.5-12.el7                 base                                        151 k
 libsemanage                  x86_64    2.5-11.el7                 base                                        150 k
 libsemanage-python           x86_64    2.5-11.el7                 base                                        112 k
 libsepol                     x86_64    2.5-8.1.el7                base                                        297 k
 policycoreutils              x86_64    2.5-22.el7                 base                                        867 k
 policycoreutils-python       x86_64    2.5-22.el7                 base                                        454 k
 selinux-policy               noarch    3.13.1-192.el7_5.4         updates                                     453 k
 selinux-policy-targeted      noarch    3.13.1-192.el7_5.4         updates                                     6.6 M
 setools-libs                 x86_64    3.3.8-2.el7                base                                        619 k
 systemd-libs                 x86_64    219-57.el7                 base                                        402 k
 systemd-python               x86_64    219-57.el7                 base                                        128 k
 systemd-sysv                 x86_64    219-57.el7                 base                                         79 k

Transaction Summary
===========================================================================================================================================================================
Install  1 Package  (+ 3 Dependent packages)
Upgrade  1 Package  (+16 Dependent packages)

Total size: 166 M
Total download size: 16 M
Downloading packages:
No Presto metadata available for base
updates/7/x86_64/prestodelta                                                                                                                        | 420 kB  00:00:01     
(1/19): libselinux-2.5-12.el7.x86_64.rpm                                                                                                            | 162 kB  00:00:00     
(2/19): libselinux-utils-2.5-12.el7.x86_64.rpm                                                                                                      | 151 kB  00:00:00     
(3/19): libsemanage-2.5-11.el7.x86_64.rpm                                                                                                           | 150 kB  00:00:00     
(4/19): libgudev1-219-57.el7.x86_64.rpm                                                                                                             |  92 kB  00:00:00     
(5/19): libsemanage-python-2.5-11.el7.x86_64.rpm                                                                                                    | 112 kB  00:00:00     
(6/19): libsepol-2.5-8.1.el7.x86_64.rpm                                                                                                             | 297 kB  00:00:00     
(7/19): lz4-1.7.5-2.el7.x86_64.rpm                                                                                                                  |  98 kB  00:00:00     
(8/19): libselinux-python-2.5-12.el7.x86_64.rpm                                                                                                     | 235 kB  00:00:00     
(9/19): selinux-policy-3.13.1-192.el7_5.4.noarch.rpm                                                                                                | 453 kB  00:00:00     
(10/19): policycoreutils-python-2.5-22.el7.x86_64.rpm                                                                                               | 454 kB  00:00:00     
(11/19): setools-libs-3.3.8-2.el7.x86_64.rpm                                                                                                        | 619 kB  00:00:00     
(12/19): systemd-219-57.el7.x86_64.rpm                                                                                                              | 5.0 MB  00:00:00     
(13/19): container-selinux-2.66-1.el7.noarch.rpm                                                                                                    |  35 kB  00:00:01     
(14/19): systemd-libs-219-57.el7.x86_64.rpm                                                                                                         | 402 kB  00:00:00     
(15/19): systemd-sysv-219-57.el7.x86_64.rpm                                                                                                         |  79 kB  00:00:00     
(16/19): selinux-policy-targeted-3.13.1-192.el7_5.4.noarch.rpm                                                                                      | 6.6 MB  00:00:01     
(17/19): systemd-python-219-57.el7.x86_64.rpm                                                                                                       | 128 kB  00:00:00     
(18/19): pigz-2.3.4-1.el7.x86_64.rpm                                                                                                                |  81 kB  00:00:01     
(19/19): policycoreutils-2.5-22.el7.x86_64.rpm                                                                                                      | 867 kB  00:00:01     
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                      7.4 MB/s |  16 MB  00:00:02     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : libsepol-2.5-8.1.el7.x86_64                          1/38
  Updating   : libselinux-2.5-12.el7.x86_64                         2/38
  Updating   : libsemanage-2.5-11.el7.x86_64                        3/38
  Installing : lz4-1.7.5-2.el7.x86_64                               4/38
  Updating   : systemd-libs-219-57.el7.x86_64                       5/38
  Updating   : systemd-219-57.el7.x86_64                            6/38
  Updating   : libselinux-utils-2.5-12.el7.x86_64                   7/38
  Updating   : policycoreutils-2.5-22.el7.x86_64                    8/38
  Updating   : selinux-policy-3.13.1-192.el7_5.4.noarch             9/38
  Updating   : selinux-policy-targeted-3.13.1-192.el7_5.4.noarch   10/38
  Updating   : libsemanage-python-2.5-11.el7.x86_64                11/38
  Updating   : libselinux-python-2.5-12.el7.x86_64                 12/38
  Updating   : setools-libs-3.3.8-2.el7.x86_64                     13/38
  Updating   : policycoreutils-python-2.5-22.el7.x86_64            14/38
  Installing : 2:container-selinux-2.66-1.el7.noarch               15/38
setsebool:  SELinux is disabled.
  Installing : pigz-2.3.4-1.el7.x86_64                             16/38
  Updating   : libseccomp-2.3.1-3.el7.x86_64                       17/38
  Installing : docker-ce-18.03.0.ce-1.el7.centos.x86_64            18/38
  Updating   : systemd-sysv-219-57.el7.x86_64                      19/38
  Updating   : systemd-python-219-57.el7.x86_64                    20/38
  Updating   : libgudev1-219-57.el7.x86_64                         21/38
  Cleanup    : selinux-policy-targeted-3.13.1-60.el7.noarch        22/38
  Cleanup    : policycoreutils-python-2.2.5-20.el7.x86_64          23/38
  Cleanup    : selinux-policy-3.13.1-60.el7.noarch                 24/38
  Cleanup    : systemd-sysv-219-19.el7.x86_64                      25/38
  Cleanup    : policycoreutils-2.2.5-20.el7.x86_64                 26/38
  Cleanup    : systemd-python-219-19.el7.x86_64                    27/38
  Cleanup    : systemd-219-19.el7.x86_64                           28/38
  Cleanup    : setools-libs-3.3.7-46.el7.x86_64                    29/38
  Cleanup    : libselinux-utils-2.2.2-6.el7.x86_64                 30/38
  Cleanup    : libselinux-python-2.2.2-6.el7.x86_64                31/38
  Cleanup    : libsemanage-python-2.1.10-18.el7.x86_64             32/38
  Cleanup    : libsemanage-2.1.10-18.el7.x86_64                    33/38
  Cleanup    : libgudev1-219-19.el7.x86_64                         34/38
  Cleanup    : systemd-libs-219-19.el7.x86_64                      35/38
  Cleanup    : libselinux-2.2.2-6.el7.x86_64                       36/38
  Cleanup    : libsepol-2.1.9-3.el7.x86_64                         37/38
  Cleanup    : libseccomp-2.2.1-1.el7.x86_64                       38/38
  Verifying  : libsemanage-python-2.5-11.el7.x86_64                 1/38
  Verifying  : libsemanage-2.5-11.el7.x86_64                        2/38
  Verifying  : libselinux-python-2.5-12.el7.x86_64                  3/38
  Verifying  : selinux-policy-3.13.1-192.el7_5.4.noarch             4/38
  Verifying  : setools-libs-3.3.8-2.el7.x86_64                      5/38
  Verifying  : libseccomp-2.3.1-3.el7.x86_64                        6/38
  Verifying  : policycoreutils-2.5-22.el7.x86_64                    7/38
  Verifying  : selinux-policy-targeted-3.13.1-192.el7_5.4.noarch    8/38
  Verifying  : pigz-2.3.4-1.el7.x86_64                              9/38
  Verifying  : policycoreutils-python-2.5-22.el7.x86_64            10/38
  Verifying  : libgudev1-219-57.el7.x86_64                         11/38
  Verifying  : 2:container-selinux-2.66-1.el7.noarch               12/38
  Verifying  : systemd-sysv-219-57.el7.x86_64                      13/38
  Verifying  : lz4-1.7.5-2.el7.x86_64                              14/38
  Verifying  : systemd-219-57.el7.x86_64                           15/38
  Verifying  : libsepol-2.5-8.1.el7.x86_64                         16/38
  Verifying  : systemd-libs-219-57.el7.x86_64                      17/38
  Verifying  : libselinux-2.5-12.el7.x86_64                        18/38
  Verifying  : docker-ce-18.03.0.ce-1.el7.centos.x86_64            19/38
  Verifying  : libselinux-utils-2.5-12.el7.x86_64                  20/38
  Verifying  : systemd-python-219-57.el7.x86_64                    21/38
  Verifying  : libsemanage-python-2.1.10-18.el7.x86_64             22/38
  Verifying  : selinux-policy-targeted-3.13.1-60.el7.noarch        23/38
  Verifying  : setools-libs-3.3.7-46.el7.x86_64                    24/38
  Verifying  : libsemanage-2.1.10-18.el7.x86_64                    25/38
  Verifying  : systemd-sysv-219-19.el7.x86_64                      26/38
  Verifying  : libgudev1-219-19.el7.x86_64                         27/38
  Verifying  : systemd-219-19.el7.x86_64                           28/38
  Verifying  : selinux-policy-3.13.1-60.el7.noarch                 29/38
  Verifying  : systemd-libs-219-19.el7.x86_64                      30/38
  Verifying  : libselinux-utils-2.2.2-6.el7.x86_64                 31/38
  Verifying  : libseccomp-2.2.1-1.el7.x86_64                       32/38
  Verifying  : libsepol-2.1.9-3.el7.x86_64                         33/38
  Verifying  : libselinux-python-2.2.2-6.el7.x86_64                34/38
  Verifying  : policycoreutils-2.2.5-20.el7.x86_64                 35/38
  Verifying  : systemd-python-219-19.el7.x86_64                    36/38
  Verifying  : libselinux-2.2.2-6.el7.x86_64                       37/38
  Verifying  : policycoreutils-python-2.2.5-20.el7.x86_64          38/38

Installed:
  docker-ce.x86_64 0:18.03.0.ce-1.el7.centos

Dependency Installed:
  container-selinux.noarch 2:2.66-1.el7    lz4.x86_64 0:1.7.5-2.el7    pigz.x86_64 0:2.3.4-1.el7

Updated:
  systemd.x86_64 0:219-57.el7

Dependency Updated:
  libgudev1.x86_64 0:219-57.el7                  libseccomp.x86_64 0:2.3.1-3.el7                       libselinux.x86_64 0:2.5-12.el7
  libselinux-python.x86_64 0:2.5-12.el7          libselinux-utils.x86_64 0:2.5-12.el7                  libsemanage.x86_64 0:2.5-11.el7
  libsemanage-python.x86_64 0:2.5-11.el7         libsepol.x86_64 0:2.5-8.1.el7                         policycoreutils.x86_64 0:2.5-22.el7
  policycoreutils-python.x86_64 0:2.5-22.el7     selinux-policy.noarch 0:3.13.1-192.el7_5.4            selinux-policy-targeted.noarch 0:3.13.1-192.el7_5.4
  setools-libs.x86_64 0:3.3.8-2.el7              systemd-libs.x86_64 0:219-57.el7                      systemd-python.x86_64 0:219-57.el7
  systemd-sysv.x86_64 0:219-57.el7

Complete!
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-08-20 15:51:50 HKT; 9ms ago
     Docs: https://docs.docker.com
 Main PID: 42077 (dockerd)
   Memory: 40.8M
   CGroup: /system.slice/docker.service
           ├─42077 /usr/bin/dockerd
           └─42086 docker-containerd --config /var/run/docker/containerd/containerd.toml

Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.337814778+08:00" level=info msg="devmapper: Successfully created filesystem xfs on device d...5123-base"
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.463516508+08:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.463782799+08:00" level=warning msg="mountpoint for pids not found"
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.464461343+08:00" level=info msg="Loading containers: start."
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.601643093+08:00" level=info msg="Default bridge (docker0) is assigned with an IP address 17...P address"
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.677859724+08:00" level=info msg="Loading containers: done."
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.696315433+08:00" level=info msg="Docker daemon" commit=0520e24 graphdriver(s)=devicemapper ...18.03.0-ce
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.696473183+08:00" level=info msg="Daemon has completed initialization"
Aug 20 15:51:50 server87 systemd[1]: Started Docker Application Container Engine.
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.714102886+08:00" level=info msg="API listen on /var/run/docker.sock"
Hint: Some lines were ellipsized, use -l to show in full.
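Step2_install_docker.sh is also easy to infer from its output: it installs the bundled docker-ce rpm with yum (which pulls the dependencies above from the configured mirrors), then enables and starts the service. A minimal sketch under those assumptions (not the author's original script):

#!/bin/bash
# sketch (assumed): reproduce what Step2_install_docker.sh appears to do
yum localinstall -y nodefile/docker/docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm
systemctl enable docker    # creates the multi-user.target.wants symlink shown above
systemctl start docker
systemctl status docker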
[root@server87 install_k8s_node]# ls
nodefile  Step1_config.sh  Step2_install_docker.sh  Step3_install_kubelet.sh  Step4_install_proxy.sh  Step5_node_approve_csr.sh  Step6_master_node_context.sh
[root@server87 install_k8s_node]# ./Step3_install_kubelet.sh 
MASTER_IP=172.16.5.81
cat: /opt/ETCD_CLUSER_INFO: No such file or directory
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-08-20 15:52:13 HKT; 46ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 42486 (kubelet)
   Memory: 6.4M
   CGroup: /system.slice/kubelet.service
           └─42486 /usr/bin/kubelet --logtostderr=true --v=0 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --hostname-override=172.16.5.87 --pod-infra-container-image...

Aug 20 15:52:13 server87 systemd[1]: Started Kubernetes Kubelet Server.
Aug 20 15:52:13 server87 systemd[1]: Starting Kubernetes Kubelet Server...
[root@server87 install_k8s_node]# 
[root@server87 install_k8s_node]# ./Step4_install_proxy.sh 
Created symlink from /etc/systemd/system/default.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
● kube-proxy.service - Kube Proxy Service
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-08-20 15:52:18 HKT; 38ms ago
 Main PID: 42814 (kube-proxy)
   Memory: 5.8M
   CGroup: /system.slice/kube-proxy.service
           └─42814 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://172.16.5.81:8080 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.1....

Aug 20 15:52:18 server87 systemd[1]: Started Kube Proxy Service.
Aug 20 15:52:18 server87 systemd[1]: Starting Kube Proxy Service...
[root@server87 install_k8s_node]# 

Back on the master: approve the kubelet CSR requests from Server86 and Server87

[root@server81 opt]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-1behv8FXfoDXo6SLgRlwtJ7EwOnMMqIoo7c5YI4q0Yc   1m        kubelet-bootstrap   Pending
node-csr-fH4Ct4Fg4TgzFV0dP-SlfVCtTo9XNCJjajzPohDVxHE   50m       kubelet-bootstrap   Approved,Issued
node-csr-tO2dsRk01-qNWJkeDYARuIkeV24QsX2M8txYmkXs96M   6m        kubelet-bootstrap   Pending
[root@server81 opt]# 
[root@server81 opt]# kubectl certificate approve node-csr-1behv8FXfoDXo6SLgRlwtJ7EwOnMMqIoo7c5YI4q0Yc
certificatesigningrequest.certificates.k8s.io/node-csr-1behv8FXfoDXo6SLgRlwtJ7EwOnMMqIoo7c5YI4q0Yc approved
[root@server81 opt]# 
[root@server81 opt]# kubectl certificate approve node-csr-tO2dsRk01-qNWJkeDYARuIkeV24QsX2M8txYmkXs96M
certificatesigningrequest.certificates.k8s.io/node-csr-tO2dsRk01-qNWJkeDYARuIkeV24QsX2M8txYmkXs96M approved
[root@server81 opt]# 
[root@server81 opt]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-1behv8FXfoDXo6SLgRlwtJ7EwOnMMqIoo7c5YI4q0Yc   1m        kubelet-bootstrap   Approved,Issued
node-csr-fH4Ct4Fg4TgzFV0dP-SlfVCtTo9XNCJjajzPohDVxHE   51m       kubelet-bootstrap   Approved,Issued
node-csr-tO2dsRk01-qNWJkeDYARuIkeV24QsX2M8txYmkXs96M   6m        kubelet-bootstrap   Approved,Issued
[root@server81 opt]# 
[root@server81 opt]# kubectl get node
NAME          STATUS     ROLES     AGE       VERSION
172.16.5.81   NotReady   <none>    44m       v1.11.0
172.16.5.86   NotReady   <none>    13s       v1.11.0
172.16.5.87   NotReady   <none>    6s        v1.11.0
[root@server81 opt]# 
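Approving each CSR by name gets tedious as the node count grows. Since the `kubectl get csr` output puts the condition in the last column, all pending requests can be approved in one pass; this one-liner is a convenience of mine, not part of the original scripts:

# approve every CSR still in the Pending state
kubectl get csr | awk '/Pending/ {print $1}' | xargs -r kubectl certificate approve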

At this point the Kubernetes node services are fully deployed as well. The nodes show NotReady only because no CNI network plugin is installed yet; they will switch to Ready once the Calico network is deployed.
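If you want to confirm that NotReady really is caused by the missing CNI plugin rather than a kubelet problem, the node conditions spell it out; on a node without a network plugin the Ready condition usually reports that the runtime network is not ready. For example:

# show the Ready condition and its message for one node
kubectl describe node 172.16.5.86 | grep -A 6 'Conditions:'
# watch the nodes flip to Ready once Calico is in place
kubectl get node -w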

Final summary

To sum up, the binary deployment of a production Kubernetes environment with RBAC enabled is now complete.
Deploying the Calico network on the nodes will be covered in the next article.

Click here to jump to: Calico CNI network deployment for kubernetes and enabling a self-signed CA


kubernetes v1.11 binary deployment series

  • kubernetes v1.11 binary deployment
    • (1) Environment introduction
    • (2) Self-signed TLS certificates with OpenSSL
    • (3) Master component deployment
    • (4) Node component deployment
    • (5) Calico CNI network deployment for kubernetes and enabling a self-signed CA

Directions for improvement

  • Deploying kubernetes in an offline environment
  • A fully automated deployment project
  • Documentation and automated deployment of components outside the server cluster
    I plan to write these up step by step as time allows; give me a like for some motivation.

If you want to see the overall index of my article series, you can visit the kubernetes and ops-development article index.
