1. Why does K8s need shared storage?
Answer: A replication controller in k8s guarantees that Pods keep running, but it cannot guarantee the data inside a Pod: once a replacement Pod is started, the data in the previous Pod is lost along with its deleted containers. An RC keeps the specified number of Pods running, and when a Pod dies it starts a new one, possibly on a different node. To achieve data persistence in k8s, you use shared storage: the Pod's data is mounted onto the shared storage, so even when a new Pod starts on a new node, the data is not lost.
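The idea can be illustrated with a minimal sketch of a Pod that mounts the NFS share set up later in this article directly as a volume. The pod and image names here are illustrative, not from the original setup:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-demo                 # illustrative name
spec:
  containers:
  - name: app
    image: nginx                 # hypothetical image; any image that writes data works
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html   # the pod's data lives on the NFS share
  volumes:
  - name: shared-data
    nfs:                         # shared storage: survives pod restarts and rescheduling
      server: 192.168.110.133
      path: /data/k8s

If this Pod is recreated on another node, the new Pod mounts the same NFS path and finds the data intact. The rest of this article achieves the same thing through PV/PVC instead of hard-coding the NFS details into every Pod.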
2. The concepts of PV and PVC in k8s.
Answer: A PersistentVolume (PV) is a description of storage added by an administrator. It is a cluster-wide resource, not restricted to any namespace, and includes the storage type, size, access modes, and so on. Its lifecycle is independent of any Pod: for example, destroying a Pod that uses a PV has no effect on the PV itself.
A PersistentVolumeClaim (PVC) is a namespaced resource that describes a request for a PV, including the requested storage size, access modes, and so on. Note that a PV and a PVC bind one-to-one.
3. The PV created here is backed by NFS, so first install NFS, as shown below:
The master node needs the NFS server installed, and all Node machines need the NFS client, otherwise Pods cannot mount the share. Run the following command on all three machines:
[root@k8s-master ~]# yum install nfs-utils.x86_64
Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos, subscription-manager

This system is not registered with an entitlement server. You can use subscription-manager to register.

Loading mirror speeds from cached hostfile
 * base: mirrors.tuna.tsinghua.edu.cn
 * extras: mirrors.bfsu.edu.cn
 * updates: mirrors.bfsu.edu.cn
base                                                     | 3.6 kB  00:00:00
extras                                                   | 2.9 kB  00:00:00
updates                                                  | 2.9 kB  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.x86_64 1:1.3.0-0.61.el7 will be updated
---> Package nfs-utils.x86_64 1:1.3.0-0.66.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package            Arch         Version                  Repository      Size
================================================================================
Updating:
 nfs-utils          x86_64       1:1.3.0-0.66.el7         base           412 k

Transaction Summary
================================================================================
Upgrade  1 Package

Total size: 412 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : 1:nfs-utils-1.3.0-0.66.el7.x86_64                          1/2
  Cleanup    : 1:nfs-utils-1.3.0-0.61.el7.x86_64                          2/2
  Verifying  : 1:nfs-utils-1.3.0-0.66.el7.x86_64                          1/2
  Verifying  : 1:nfs-utils-1.3.0-0.61.el7.x86_64                          2/2

Updated:
  nfs-utils.x86_64 1:1.3.0-0.66.el7

Complete!
[root@k8s-master ~]#
Install the nfs-utils package on all three machines. Then, on the server side, edit the configuration file /etc/exports, as shown below:
[root@k8s-master ~]# vim /etc/exports
The configuration content is as follows:
# Share the /data directory; allow access from 192.168.110.*, i.e. the whole 110 subnet.
# rw = read/write, async = asynchronous writes, no_root_squash/no_all_squash = no UID remapping for root or other users
/data 192.168.110.0/24(rw,async,no_root_squash,no_all_squash)
Create a subdirectory under /data/ (note that /data itself does not exist yet, hence the -p):
[root@k8s-master ~]# cat /etc/exports
# Share the /data directory; allow access from 192.168.110.*, i.e. the whole 110 subnet.
# rw = read/write, async = asynchronous writes, no_root_squash/no_all_squash = no UID remapping for root or other users
/data 192.168.110.0/24(rw,async,no_root_squash,no_all_squash)

[root@k8s-master ~]# mkdir /data/k8s
mkdir: cannot create directory ‘/data/k8s’: No such file or directory
[root@k8s-master ~]# mkdir /data/k8s -p
[root@k8s-master ~]#
Then restart rpcbind and nfs, as shown below:
[root@k8s-master ~]# systemctl restart rpcbind.service
[root@k8s-master ~]# systemctl restart nfs.service
[root@k8s-master ~]#
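Two optional extras may be useful here (a sketch; on CentOS 7 the NFS unit is nfs-server.service, with nfs.service as an alias):

# Make the services start on boot
systemctl enable rpcbind
systemctl enable nfs-server

# After later edits to /etc/exports, re-export shares without a full restart
exportfs -rav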
Check from the other two nodes whether the NFS export is visible, as shown below:
[root@k8s-node2 ~]# showmount -e 192.168.110.133
Export list for 192.168.110.133:
/data 192.168.110.0/24
[root@k8s-node2 ~]#

[root@k8s-node3 ~]# showmount -e 192.168.110.133
Export list for 192.168.110.133:
/data 192.168.110.0/24
[root@k8s-node3 ~]#
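Before involving k8s at all, you can also sanity-check the export from a node with a manual mount and a test write (the mount point /mnt/nfs-test is illustrative):

[root@k8s-node2 ~]# mkdir -p /mnt/nfs-test
[root@k8s-node2 ~]# mount -t nfs 192.168.110.133:/data/k8s /mnt/nfs-test
[root@k8s-node2 ~]# touch /mnt/nfs-test/hello && ls /mnt/nfs-test
[root@k8s-node2 ~]# umount /mnt/nfs-test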
Now create a PV, as shown below:
[root@k8s-master ~]# cd k8s/
[root@k8s-master k8s]# ls
book-master.war dashboard dashboard.zip deploy health heapster hpa metrics namespace pod rc skydns skydns.zip svc tomcat_demo tomcat_demo.zip
[root@k8s-master k8s]# mkdir volume
[root@k8s-master k8s]# cd volume/
[root@k8s-master volume]# vim test-pv.yaml
The configuration content is as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: testpv
  labels:
    type: testpv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/k8s"
    server: 192.168.110.133
    readOnly: false
Create this PV and inspect it, as shown below:
[root@k8s-master volume]# kubectl create -f test-pv.yaml
persistentvolume "testpv" created
[root@k8s-master volume]# kubectl get pv -o wide
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
testpv    10Gi       RWX           Recycle         Available                       10s
[root@k8s-master volume]#
Next, create a second PV of 5Gi: modify the same configuration file and create it again. The PV name must be different, though the labels can stay the same. For example:
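A sketch of the modified manifest, assuming only the name and capacity change (values taken from the kubectl output below):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: testpv2
  labels:
    type: testpv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/k8s"
    server: 192.168.110.133
    readOnly: false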
[root@k8s-master volume]# vim test-pv.yaml
[root@k8s-master volume]# kubectl create -f test-pv.yaml
persistentvolume "testpv2" created
[root@k8s-master volume]# kubectl get pv -o wide
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
testpv    10Gi       RWX           Recycle         Available                       3m
testpv2   5Gi        RWX           Recycle         Available                       3s
[root@k8s-master volume]#
Now create a PVC, as shown below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
[root@k8s-master volume]# kubectl create -f test-pvc.yaml
persistentvolumeclaim "nfs" created
[root@k8s-master volume]# kubectl get pvc -o wide
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs       Bound     testpv2   5Gi        RWX           9s
[root@k8s-master volume]#
At this point the PVC is bound to the PV testpv2: among the PVs that can satisfy the request, a PVC binds to the one with the smallest capacity. You can verify this yourself by creating a 6Gi PV and then a 7Gi PVC; that PVC will bind to the 10Gi PV, since the 6Gi one is too small (see the sketch below).
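To try that experiment, the extra manifests could look like this (the names testpv3 and nfs2 are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: testpv3            # illustrative name
  labels:
    type: testpv
spec:
  capacity:
    storage: 6Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/k8s"
    server: 192.168.110.133
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs2               # illustrative name
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 7Gi

After creating both, kubectl get pv should show nfs2 bound to the 10Gi testpv, because the 6Gi testpv3 cannot satisfy the 7Gi request.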
4. Hands-on persistence in k8s. For reference, see https://www.cnblogs.com/wangxu01/articles/11411113.html.
This builds on https://www.cnblogs.com/biehongli/p/13150609.html, which runs a Java web project on k8s. Picking up where that post left off, create a MySQL instance, as shown below:
[root@k8s-master tomcat_demo]# kubectl create -f mysql-rc.yml
replicationcontroller "mysql" created
[root@k8s-master tomcat_demo]# kubectl create -f mysql-svc.yml
service "mysql" created
[root@k8s-master tomcat_demo]# kubectl get all -o wide
NAME       DESIRED   CURRENT   READY     AGE       CONTAINER(S)   IMAGE(S)                             SELECTOR
rc/mysql   1         1         1         14s       mysql          192.168.110.133:5000/mysql:5.7.30   app=mysql

NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE       SELECTOR
svc/kubernetes   10.254.0.1      <none>        443/TCP    24d       <none>
svc/mysql        10.254.234.52   <none>        3306/TCP   11s       app=mysql

NAME             READY     STATUS    RESTARTS   AGE       IP            NODE
po/mysql-4j7qk   1/1       Running   0          14s       172.16.16.3   k8s-node3
[root@k8s-master tomcat_demo]#
Below is the mysql-rc.yml configuration file:
[root@k8s-master tomcat_demo]# cat mysql-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: 192.168.110.133:5000/mysql:5.7.30
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: '123456'
[root@k8s-master tomcat_demo]#
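To tie this back to the PV/PVC topic: to actually persist MySQL's data on the NFS share, the pod template could mount the PVC created earlier at /var/lib/mysql. A sketch (the volume name mysql-data is illustrative, and the claim must live in the same namespace as the pod):

apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: 192.168.110.133:5000/mysql:5.7.30
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: '123456'
        volumeMounts:
        - name: mysql-data              # illustrative volume name
          mountPath: /var/lib/mysql     # MySQL's data directory lands on NFS
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: nfs                # the PVC created above

With this in place, deleting the mysql pod and letting the RC recreate it, even on another node, keeps the database files.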
Below is the mysql-svc.yml configuration file:
[root@k8s-master tomcat_demo]# cat mysql-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql
[root@k8s-master tomcat_demo]#
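A quick way to confirm the database is reachable, using the pod name from the kubectl output above (the root password comes from the RC's env; the mysql client ships inside the image):

[root@k8s-master tomcat_demo]# kubectl exec -it mysql-4j7qk -- mysql -uroot -p123456 -e 'show databases;'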
Below is the tomcat-rc.yml configuration file (first checking that both the mysql and myweb RCs are running):
[root@k8s-master tomcat_demo]# kubectl get all -o wide
NAME       DESIRED   CURRENT   READY     AGE       CONTAINER(S)   IMAGE(S)                                  SELECTOR
rc/mysql   1         1         1         2h        mysql          192.168.110.133:5000/mysql:5.7.30        app=mysql
rc/myweb   1         1         1         11m       myweb          192.168.110.133:5000/tomcat-book:latest   app=myweb

NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE       SELECTOR
svc/kubernetes   10.254.0.1      <none>        443/TCP          24d       <none>
svc/mysql        10.254.234.52   <none>        3306/TCP         2h        app=mysql
svc/myweb        10.254.14.249   <nodes>       8080:30008/TCP   10m       app=myweb

NAME             READY     STATUS    RESTARTS   AGE       IP            NODE
po/mysql-4j7qk   1/1       Running   0          2h        172.16.16.3   k8s-node3
po/myweb-f2dqn   1/1       Running   0          11m       172.16.16.5   k8s-node3
[root@k8s-master tomcat_demo]# vim tomcat-rc.yml
[root@k8s-master tomcat_demo]# cat tomcat-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        # image: 192.168.110.133:5000/tomcat:lates