This post walks through deploying KubeSphere on a Kubernetes cluster built with kubeasz, and may be a useful reference for anyone doing the same.
Deploying KubeSphere
1. Deploy Kubernetes
https://github.com/easzlab/kubeasz
1.1 Environment initialization
- Disable the firewall
- On CentOS, disable SELinux
- Synchronize time across all nodes
- Set up passwordless SSH from the deploy node to the master and node hosts
- Install Ansible (a command sketch for these steps follows this list)
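A minimal sketch of these initialization steps on an Ubuntu deploy node (the package names, the choice of chrony, and the example IP are assumptions; adapt them to your own node plan):
systemctl disable --now ufw                                # disable the firewall (Ubuntu); use firewalld on CentOS
# On CentOS additionally disable SELinux:
#   setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
apt -y install chrony && systemctl enable --now chrony     # time synchronization
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa                   # passwordless SSH to every master/node host
ssh-copy-id root@192.168.4.11                              # example IP, repeat for each node
apt -y install ansible                                     # install ansible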
1.2 Prepare the k8s installation resources on the deploy node
1.2.1 Download the binaries and offline images
export release=3.6.2
wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
chmod +x ./ezdown
1. Install Docker and download the offline images:
./ezdown -D
2. Configure an HTTP proxy for the Docker registry:
root@k8s-master-1:~# cat /etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
Environment="PATH=/opt/kube/bin:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStart=/opt/kube/bin/dockerd
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
Environment=HTTP_PROXY=http://$proxy_ip    # add
Environment=HTTPS_PROXY=http://$proxy_ip   # add
Environment=NO_PROXY=localhost,127.0.0.1,easzlab.io.local   # add (without this, pushing images to the local registry will fail later)
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
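After adding the proxy variables, reload systemd and restart Docker so they take effect (standard systemd/Docker commands, shown for completeness):
systemctl daemon-reload
systemctl restart docker
docker info | grep -i proxy    # confirm the proxy settings were picked up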
1.2.2 Create the k8s cluster
# Run the create command from the directory containing ezctl (/etc/kubeasz by default)
ezctl new k8s-01
Edit /etc/kubeasz/clusters/k8s-01/hosts and /etc/kubeasz/clusters/k8s-01/config.yml: adjust the hosts file and the main cluster-level options according to the node plan described earlier; the remaining cluster component options can be changed in config.yml.
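For orientation, a trimmed, illustrative excerpt of the hosts inventory that ezctl new generates (the IPs are placeholders, and the SERVICE_CIDR value is only assumed to match the 10.10.0.1 API address mentioned later; the real file contains more sections and variables):
[etcd]
192.168.4.11
[kube_master]
192.168.4.11
[kube_node]
192.168.4.12
192.168.4.13
[all:vars]
CONTAINER_RUNTIME="containerd"
CLUSTER_NETWORK="calico"
SERVICE_CIDR="10.10.0.0/16"
CLUSTER_CIDR="172.20.0.0/16"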
# Deploy k8s
ezctl setup k8s-01 all
1.2.3 Verify that the cluster was deployed successfully
This is only a quick recap; for details on how to edit /etc/kubeasz/clusters/k8s-01/hosts and /etc/kubeasz/clusters/k8s-01/config.yml, see my earlier blog posts, which cover it in detail.
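A quick verification can be done with standard kubectl commands on a master node, for example:
kubectl get node          # all nodes should be Ready
kubectl get pod -A        # system pods (calico, coredns, ...) should be Running
kubectl cluster-info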
1.2.4 Deploy the nerdctl tool
Kubernetes 1.24 and later no longer supports Docker as the container runtime and uses containerd instead, but containerd's built-in client is awkward to use, so we deploy a third-party client tool to make later operations easier.
root@k8s-master-1:~/nerdctl# ll
total 73312
drwxr-xr-x 2 root root 4096 Aug 21 03:18 ./
drwx------ 9 root root 4096 Aug 21 05:36 ../
-rw-r--r-- 1 root root 202 Aug 21 03:17 buildkit.service
-rw-r--r-- 1 root root 65775728 Aug 21 03:17 buildkit-v0.12.5.linux-amd64.tar.gz
-rwxr-xr-x 1 root root 21622 Jul 31 2023 containerd-rootless-setuptool.sh*
-rwxr-xr-x 1 root root 7187 Jul 31 2023 containerd-rootless.sh*
-rwxr-xr-x 1 root root 827 Aug 21 03:17 install_nerdctl.sh*
-rw-r--r-- 1 root root 9242800 Aug 21 03:17 nerdctl-1.5.0-linux-amd64.tar.gz
root@k8s-master-1:~/nerdctl# cat buildkit.service
[Unit]
Description=BuildKit
Documentation=https://github.com/moby/buildkit
[Service]
ExecStart=/usr/local/bin/buildkitd --oci-worker=false --containerd-worker=true
[Install]
WantedBy=multi-user.target
root@k8s-master-1:~/nerdctl# cat install_nerdctl.sh
#!/bin/bash
# Install nerdctl
function install_nerdctl() {
  mkdir -p /usr/local/containerd/bin/
  tar -xvf ./nerdctl-1.5.0-linux-amd64.tar.gz
  mv ./nerdctl /usr/local/containerd/bin/
  ln -s /usr/local/containerd/bin/nerdctl /usr/local/bin/nerdctl
}

# Install buildkit
function install_buildkit() {
  mkdir -p /usr/local/buildctl
  tar -zxvf ./buildkit-v0.12.5.linux-amd64.tar.gz -C /usr/local/buildctl
  ln -s /usr/local/buildctl/bin/buildkitd /usr/local/bin/buildkitd
  ln -s /usr/local/buildctl/bin/buildctl /usr/local/bin/buildctl
  cp ./buildkit.service /etc/systemd/system/buildkit.service
  systemctl daemon-reload
  systemctl enable buildkit --now
}

# Main function
function main() {
  install_nerdctl
  install_buildkit
}

# Call the main function
main
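After running the script, a quick sanity check; nerdctl talks to containerd, and the cluster's containers and images live in the k8s.io namespace:
bash install_nerdctl.sh
nerdctl version
nerdctl -n k8s.io ps        # containers managed by the cluster
nerdctl -n k8s.io images    # images pulled by the kubelet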
2. Deploy KubeSphere
2.1 Deploy the NFS shared storage directory
apt -y install nfs-server
mkdir -p /data/kubesphere    # make sure the export directory exists
echo "/data/kubesphere *(rw,sync,no_root_squash)" >> /etc/exports
systemctl restart nfs-server
# Verify
showmount -e 192.168.4.253
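Every Kubernetes node that will mount these volumes also needs an NFS client; assuming Ubuntu/Debian nodes, that is the nfs-common package:
apt -y install nfs-common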
2.2 Deploy the nfs-storageclass
root@k8s-master-1:~/kubesphere# cat nfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, but it must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
root@k8s-master-1:~/kubesphere# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/qinge/nfs-subdir-external-provisioner:v1
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.4.253
            - name: NFS_PATH
              value: /data/kubesphere
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.4.253   # NFS server address
            path: /data/kubesphere  # shared directory
root@k8s-master-1:~/kubesphere# cat rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default # make sure this matches the namespace the nfs-client-provisioner ServiceAccount lives in
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
root@k8s-master-1:~/kubesphere# kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-664ff76565-krktp 1/1 Running 0 101m
nfs-client-provisioner-664ff76565-ljjmq 1/1 Running 0 101m
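Note that the Deployment references a ServiceAccount named nfs-client-provisioner that is not shown in the listings above; assuming it is not created elsewhere, a minimal manifest would be:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
The manifests can then be applied together with kubectl apply -f . from this directory.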
2.3 Set the deployed nfs-storageclass as the default StorageClass
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
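To confirm the patch took effect, list the StorageClasses; the default one is marked with (default):
kubectl get storageclass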
2.4 Download the KubeSphere resources and deploy them
wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/cluster-configuration.yaml
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
# Check the deployment progress
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
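Besides following the installer log, you can watch the pods come up across the KubeSphere namespaces (standard kubectl commands, shown for reference):
kubectl get pod -A | grep kubesphere
kubectl get svc -n kubesphere-system ks-console    # the console Service exposes NodePort 30880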
Once all the pods are running, you can open port 30880 on any node in a browser. The default username/password is admin/P@88w0rd.
2.5 Enable the App Store
In the console, open CRDs (custom resource definitions) and search for ClusterConfiguration; fuzzy search works, so typing clusterconfig or just cluster is fine as long as you land on the right resource. Edit the ks-installer resource and set openpitrix.store.enabled to true, then save.
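The same change can also be made from the command line; this is an equivalent sketch, with the openpitrix block following the cluster-configuration.yaml downloaded earlier:
kubectl -n kubesphere-system edit clusterconfiguration ks-installer
# change the following section:
#   openpitrix:
#     store:
#       enabled: true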
Then run kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f again to watch the App Store deployment progress.
3. Issues encountered during deployment
- Because Rancher and other services had been deployed in this environment before, it was not clean, and the KubeSphere deployment kept reporting that 10.10.0.1:443 could not be reached.
Troubleshooting approach:
1. 10.10.0.1:443 is the in-cluster address of the Kubernetes API (the kubernetes Service), so first check whether the Calico pods are running normally.
2. The Calico components were all running normally, so I went on to check whether kube-proxy was healthy; after some investigation everything looked fine, so I suspected the environment itself. I destroyed the cluster and redeployed it (at that point nothing else was installed in my environment), and the problem was gone.
- The automatically created PVCs stayed in the Pending state even though the StorageClass looked fine. I remembered hitting something similar before, caused by the nfs-storageclass (provisioner) version being incompatible with the Kubernetes version, and unsurprisingly that was the problem again; redeploying the nfs-storageclass with a compatible version fixed it. (A few diagnostic commands for both issues are sketched below.)
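The checks behind both issues come down to a few standard commands (a sketch; the PVC name is a placeholder, and kube-proxy is assumed to run as a systemd service, which is the kubeasz default):
kubectl get svc kubernetes                              # the ClusterIP behind 10.10.0.1:443
kubectl get pod -n kube-system -o wide | grep calico
systemctl status kube-proxy                             # run on each node
kubectl describe pvc <pvc-name>                         # check the Events section of a Pending claim
kubectl logs deploy/nfs-client-provisioner | tail -n 20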