Deploying a k8s cluster and KubeSphere with KubeKey, and installing KubeSphere on an existing k8s cluster

2023-10-19 04:44

This article covers deploying a k8s cluster and KubeSphere with KubeKey, as well as installing KubeSphere on an existing k8s cluster.

Contents

    • Preface
    • What is KubeKey (kk for short)
    • Installing KubeSphere on a single node (all-in-one, a quick way to get familiar with KubeSphere)
      • Deploying Kubernetes and KubeSphere
    • Multi-node installation
      • Deploying Kubernetes and KubeSphere
    • Offline installation of k8s v1.22.17 and KubeSphere v3.3.2
    • Online: installing KubeSphere v3.3.0 on an existing k8s cluster
    • Offline: installing KubeSphere v3.3.0 on an existing k8s cluster
    • Uninstalling KubeSphere
    • Adding a node
    • Deleting a node
    • kk command syntax

Preface

Environment: CentOS 7.6, k8s 1.22.17, KubeSphere v3.3.0
This walkthrough uses KubeSphere v3.3.0.
KubeSphere's stated vision is a cloud-native distributed operating system with Kubernetes as its kernel: its architecture lets third-party applications integrate with cloud-native ecosystem components in a plug-and-play fashion, and it supports unified distribution and operation of cloud-native applications across multiple clouds and clusters. In practical terms, the KubeKey tool can install a k8s cluster and KubeSphere together on Linux servers, or install KubeSphere into an existing k8s cluster. KubeSphere itself is a graphical console for k8s that lets you deploy k8s resources quickly and also provides DevOps features.
In a KubeKey-installed cluster, etcd and kubelet are managed by systemd, while the other control-plane components (kube-apiserver, kube-controller-manager and kube-scheduler) run as static pods.

What is KubeKey (kk for short)

KubeSphere official site: https://www.kubesphere.io/zh/
KubeKey is a new installer written in Go that replaces the earlier Ansible-based installer. It offers flexible installation options: you can install KubeSphere and Kubernetes separately or both at once, which is convenient and efficient. Installing KubeSphere requires a default StorageClass in the k8s cluster; if the cluster has none, KubeSphere installs OpenEBS by default, whose StorageClass is essentially hostPath-style local storage.

Installing KubeSphere on a single node (all-in-one, a quick way to get familiar with KubeSphere)

Use kk to quickly deploy KubeSphere and Kubernetes on a single server.
Official docs: https://www.kubesphere.io/zh/docs/v3.3/quick-start/all-in-one-on-linux/

Deploying Kubernetes and KubeSphere

# Install basic dependencies and do basic OS configuration
yum install  socat conntrack ebtables ipset -y
yum install vim lsof net-tools zip unzip tree wget curl bash-completion pciutils gcc make lrzsz tcpdump bind-utils -y
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
echo "Check that SELinux is disabled:";getenforce && grep 'SELINUX=disabled' /etc/selinux/config
systemctl stop firewalld.service && systemctl disable firewalld.service
echo "Check that the firewall is stopped:";systemctl status firewalld.service | grep -E 'Active|disabled'
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
echo "Check that swap is off:";grep -i 'swap' /etc/fstab;free -h | grep -i 'swap'
# You may install docker manually in advance; otherwise KubeKey automatically installs the latest docker version matching the k8s release
# Manual docker install reference: https://blog.csdn.net/MssGuo/article/details/122694156
# Download and install KubeKey v3.0.7
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
chmod +x kk
# If the download above fails, you can fetch the tarball directly and extract it
wget https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.0.7/kubekey-v3.0.7-linux-amd64.tar.gz
tar xf kubekey-v3.0.7-linux-amd64.tar.gz && chmod a+x kk
# Install Kubernetes and KubeSphere
# List the k8s versions this KubeKey release can install
./kk version --show-supported-k8s
# Syntax of the cluster-creation command
./kk create cluster [--with-kubernetes version] [--with-kubesphere version]
# Create the k8s cluster and install KubeSphere in one go; with no configuration file, kk defaults to a single-node cluster on the current node
./kk create cluster --with-kubernetes v1.22.17 --with-kubesphere v3.3.0
# Wait for the install to finish, then check that all pods are up and ready
kubectl get pod --all-namespaces
# Tail the KubeSphere installer log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# Configure kubectl bash completion
yum -y install bash-completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl
# Log in to KubeSphere with the console info printed at the end of the install
Console: http://192.168.xx.xx:30880
Account: admin
Password: P@88w0rd
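The `sed -ri 's/.*swap.*/#&/' /etc/fstab` line in the setup above rewrites /etc/fstab in place. A minimal sketch of what it does, run against a throwaway copy with hypothetical contents so nothing on the system is touched:

```shell
# Hypothetical fstab contents, written to a scratch file so /etc/fstab is untouched
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
# Same command as in the walkthrough: comment out any line mentioning swap (& re-inserts the matched line)
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo
# The swap entry is now commented out, so it will not be mounted on reboot
grep swap /tmp/fstab.demo
# → #/dev/mapper/centos-swap swap  swap defaults 0 0
```

The root filesystem line is left alone; only lines containing "swap" are prefixed with `#`.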

Multi-node installation

Prepare at least three servers.
Official docs: https://www.kubesphere.io/zh/docs/v3.3/installing-on-linux/introduction/multioverview/

Deploying Kubernetes and KubeSphere

# Install basic dependencies and do basic OS configuration
yum install  socat conntrack ebtables ipset -y
yum install vim lsof net-tools zip unzip tree wget curl bash-completion pciutils gcc make lrzsz tcpdump bind-utils -y
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
echo "Check that SELinux is disabled:";getenforce && grep 'SELINUX=disabled' /etc/selinux/config
systemctl stop firewalld.service && systemctl disable firewalld.service
echo "Check that the firewall is stopped:";systemctl status firewalld.service | grep -E 'Active|disabled'
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
echo "Check that swap is off:";grep -i 'swap' /etc/fstab;free -h | grep -i 'swap'
# You may install docker manually in advance; otherwise KubeKey automatically installs the latest docker version matching the k8s release
# Manual docker install reference: https://blog.csdn.net/MssGuo/article/details/122694156
# Download and install KubeKey v3.0.7
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
chmod +x kk
# If the download above fails, you can fetch the tarball directly and extract it
wget https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.0.7/kubekey-v3.0.7-linux-amd64.tar.gz
tar xf kubekey-v3.0.7-linux-amd64.tar.gz && chmod +x kk
# Install Kubernetes and KubeSphere
# For a multi-node install, create the cluster from a configuration file
# List the k8s versions this KubeKey release can install
./kk version --show-supported-k8s
# 1. Create the configuration file
# Without --with-kubesphere, KubeSphere is not deployed; you would then have to install it via the addons field in the configuration file, or pass the flag again when running ./kk create cluster later
# If --with-kubesphere is given without a version, the latest KubeSphere is installed
# Syntax: ./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
# Create the configuration file
./kk create config --with-kubernetes v1.22.17 --with-kubesphere v3.3.0 -f config.yaml
# 2. Edit the configuration file
# A complete example with field descriptions: https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md
# Adjust the parameters: host information, node roles, and so on
vim config.yaml
# 3. Create the cluster from the configuration file
./kk create cluster -f config.yaml
# Check that all pods are up and ready
kubectl get pod --all-namespaces
# Tail the KubeSphere installer log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# Configure kubectl bash completion
yum -y install bash-completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl
# Log in to KubeSphere with the console info printed at the end of the install
Console: http://192.168.xx.xx:30880
Account: admin
Password: P@88w0rd
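For reference, the parts of config.yaml you typically touch in step 2 look roughly like this. This is a sketch of the generated file's shape; the host names, addresses and password below are placeholders, not values from this article:

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  # one entry per server: ssh address, user and credentials kk uses to reach it
  - {name: master1, address: 192.168.xx.11, internalAddress: 192.168.xx.11, user: root, password: "yourpassword"}
  - {name: node1, address: 192.168.xx.12, internalAddress: 192.168.xx.12, user: root, password: "yourpassword"}
  - {name: node2, address: 192.168.xx.13, internalAddress: 192.168.xx.13, user: root, password: "yourpassword"}
  roleGroups:
    etcd:
    - master1
    control-plane:
    - master1
    worker:
    - node1
    - node2
```

The role assignment under roleGroups is what turns the flat host list into a cluster topology: the same host name can appear under several roles.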

Offline installation of k8s v1.22.17 and KubeSphere v3.3.2

# Offline installation docs: https://www.kubesphere.io/zh/docs/v3.3/installing-on-linux/introduction/air-gapped-installation/
Manifest: a text file that describes the current Kubernetes cluster and defines what the artifact must contain.
Artifact: a package exported from a manifest, bundling the image tarballs and related binaries.
You write a manifest describing everything the offline cluster will need, export the artifact from it with the ./kk artifact export command, then upload the kk binary and the artifact to the air-gapped server. For the offline deployment itself, KubeKey plus the artifact are all you need to stand up the image registry and the Kubernetes cluster quickly and simply.
Both KubeKey and the artifact are downloaded on a machine with internet access, so find any connected server for this part.
# Online: download KubeKey
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
# If the download above fails, you can fetch the tarball directly and extract it
wget https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.0.7/kubekey-v3.0.7-linux-amd64.tar.gz
tar xf kubekey-v3.0.7-linux-amd64.tar.gz && chmod +x kk
# Write a manifest.yaml describing the artifact to export
# Field reference: https://github.com/kubesphere/kubekey/blob/master/docs/manifest-example.md
# The manifest below targets kubernetes v1.22.17 and kubesphere v3.3.2
cat > manifest.yaml <<'EOF'
---
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: k8s
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    repository:
      iso:
        localPath:
        url: https://github.com/kubesphere/kubekey/releases/download/v3.0.7/centos7-rpms-amd64.iso
  kubernetesDistributions:
  - type: kubernetes
    version: v1.22.17
  components:
    helm:
      version: v3.9.0
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
    ## For now, if your cluster container runtime is containerd, kubekey will add a docker 20.10.8 container runtime in the below list.
    ## The reason is kubekey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
    containerRuntimes:
    - type: docker
      version: 20.10.8
    crictl:
      version: v1.24.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.5.3
    docker-compose:
      version: v2.2.2
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-upgrade:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:ks-v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:ks-v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:ks-v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.3.0-2.319.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd:v2.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wget:1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v2:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-details-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-ratings-v1:1.16.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/scope:1.13.0
EOF
# Export the artifact. kubesphere.tar.gz is very large; its size mostly depends on how many images the manifest lists and how big they are
export KKZONE=cn
./kk artifact export -m manifest.yaml -o kubesphere.tar.gz
# Upload the kk binary and kubesphere.tar.gz to the offline server
# Create the configuration file; the versions must match the ones baked into the artifact
./kk create config --with-kubesphere v3.3.2 --with-kubernetes v1.22.17 -f config.yaml
# Edit the configuration file
# A complete example with field descriptions: https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md
vim config.yaml
# Configure the image registry:
#   roleGroups:
#     registry:              # add a registry role group and name the registry node
#     - ks1
#   registry:
#     type: harbor           # set the type to harbor; if unset, a docker registry is installed by default
#     privateRegistry: ""
#     namespaceOverride: ""
#     registryMirrors: []
#     insecureRegistries: []
# Install the image registry; the configuration file says harbor, so harbor is what gets installed
./kk init registry -f config.yaml -a kubesphere.tar.gz
# Create the harbor projects, since harbor only accepts image pushes into existing projects
curl -O https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh
vim create_project_harbor.sh           # edit the script:
#   add kubesphereio to the project list
#   set url="https://dockerhub.kubekey.local"
#   append -k to the end of the curl command (the line ending in ...\"project_name\": \"${project}\", \"public\": true}" gets -k after it)
# Make the script executable
chmod +x create_project_harbor.sh
# Run it to create the projects. harbor itself, once installed, is managed by systemd and lives under /opt/harbor, where you can maintain it yourself
./create_project_harbor.sh
# Log in to harbor; the script creates every project as public so that all users can pull images
https://192.168.xx.xx:80 admin/Harbor12345
# Edit the cluster configuration file again and add the registry details
vim config.yaml
#   ...
#   registry:
#     type: harbor
#     auths:                                      # add an auths entry for dockerhub.kubekey.local with the account and password
#       "dockerhub.kubekey.local":
#         username: admin
#         password: Harbor12345
#     privateRegistry: "dockerhub.kubekey.local"  # set privateRegistry to dockerhub.kubekey.local
#     namespaceOverride: "kubesphereio"           # set namespaceOverride to the project name inside the registry
#     registryMirrors: []
#     insecureRegistries: []
#   addons: []
# Now actually create the k8s cluster and KubeSphere
./kk create cluster -f config.yaml -a kubesphere.tar.gz --with-packages
# Afterwards you will find every image pushed into the kubesphereio project: during the install, kk rewrites the artifact's image names to the harbor domain plus project name and pushes
# them to harbor; the project name is whatever namespaceOverride in config.yaml says
# Check that all pods are up and ready
kubectl get pod --all-namespaces
# Tail the KubeSphere installer log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system \
-l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# Configure kubectl bash completion
yum -y install bash-completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl
# Log in to KubeSphere with the console info printed at the end of the install
Console: http://192.168.xx.xx:30880
Account: admin
Password: P@88w0rd
# Offline installation: recap
1. On a connected server, download kk, write a manifest, and export the artifact with ./kk artifact export; the artifact is mostly image tarballs, so it is large.
2. Upload only kk and the artifact to the offline server.
3. Create a configuration file with ./kk create config and edit it: host information, node role assignments, registry type, and so on.
4. Run ./kk init registry to create an image registry from the configuration file and the artifact. The registry type is whatever the configuration file defines, harbor or docker registry, and either way it is managed by systemd. The registry exists so the images in the artifact have somewhere to be pushed. harbor installs under /opt/harbor by default; the configuration files there spell out the registry name, ports and so on, and you can maintain harbor yourself.
5. After harbor is up, run the helper script to create the projects, since harbor only accepts pushes into existing projects.
6. Edit the configuration file again, this time with the registry details: the harbor credentials and the default project name, because while creating the cluster kk must rewrite the artifact's image names and push them into that project.
7. Create the cluster.
8. Check that all pods are ready and that the KubeSphere installer log reports the install as complete.
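At step 5, the helper script ultimately POSTs one small JSON body per project to harbor's API. A sketch of how that payload is built in shell; the project name and variable name here are illustrative, not taken from the script verbatim:

```shell
# Hypothetical project name; the real script loops over a list of projects
project="kubesphereio"
# Build the JSON body the same way a curl -d argument would receive it
payload="{\"project_name\": \"${project}\", \"public\": true}"
echo "$payload"
# → {"project_name": "kubesphereio", "public": true}
```

Setting "public": true is what makes every user able to pull from the project without logging in.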

Online: installing KubeSphere v3.3.0 on an existing k8s cluster

# Minimal KubeSphere install; you can enable KubeSphere's pluggable components yourself afterwards
https://kubesphere.io/zh/docs/v3.3/quick-start/minimal-kubesphere-on-k8s/
https://www.kubesphere.io/zh/docs/v3.3/installing-on-kubernetes/introduction/overview/
# Prerequisites
1. The Kubernetes version must be v1.20.x, v1.21.x, *v1.22.x, *v1.23.x, *v1.24.x, *v1.25.x or *v1.26.x. On the starred versions, some edge-node features may be unavailable, so if you need edge nodes, v1.21.x is recommended.
2. Make sure your machine meets the minimum hardware requirements: more than 1 CPU core and more than 2 GB of memory.
3. Before installing, configure a default storage type in the Kubernetes cluster.
So first install a suitable Kubernetes version, then make sure the cluster has a default StorageClass with dynamic provisioning.
Installing a k8s cluster: https://blog.csdn.net/MssGuo/article/details/122773155
Configuring NFS as the k8s default storage: https://blog.csdn.net/MssGuo/article/details/116381308 and https://blog.csdn.net/MssGuo/article/details/123611986
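If the cluster already has a dynamic StorageClass that simply is not the default, prerequisite 3 usually comes down to one annotation. A hedged sketch; the class name nfs-client and the NFS provisioner below are placeholders for whatever your cluster actually runs:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  annotations:
    # this annotation is what marks the class as the cluster default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
```

Equivalently, for an existing class: `kubectl patch storageclass nfs-client -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'`.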
# Deploy KubeSphere v3.3.0
wget -c https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f kubesphere-installer.yaml
wget -c https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
# vim cluster-configuration.yaml   to edit the file and adjust it as needed
kubectl apply -f cluster-configuration.yaml
# For KubeSphere v3.3.2 instead:
# kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
# kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml
# Check the log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# The log output includes the KubeSphere login info
Console: http://192.168.xx.xx:30880
Account: admin
Password: P@88w0rd
# Basic KubeSphere usage
Log in as admin, create user A and grant it the right to create workspaces;
user A logs in, creates a workspace and invites other users into it; every member can view all workspace resources, but only user B, the project director, can create projects;
user B logs in, creates a project, then invites the developers and assigns their project roles;
each project created in KubeSphere gets a corresponding namespace in k8s.

Offline: installing KubeSphere v3.3.0 on an existing k8s cluster

# Official docs: https://www.kubesphere.io/zh/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped/
# The difference from the online install on an existing cluster: offline, you must stand up a local registry on the offline server to host the Docker images
# This tutorial shows how to install KubeSphere onto Kubernetes in an air-gapped environment
# First you need an image registry on the offline server or on the same network segment; either link below works for setting one up, and if you already have a registry you can skip this
# Setting up a docker registry: https://blog.csdn.net/MssGuo/article/details/128945312
# Setting up a harbor registry: https://blog.csdn.net/MssGuo/article/details/126210184
# Find a server with a docker environment and internet access
# Online: download the KubeSphere image list
wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt
# Pull only the images you need; if you already have a Kubernetes cluster, you can delete ##k8s-images and the images below it from images-list.txt
vim images-list.txt
# Online: download offline-installation-tool.sh
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/offline-installation-tool.sh
chmod +x offline-installation-tool.sh
./offline-installation-tool.sh -h
# Online: pull the images (the server needs internet access and docker); the result is a set of .tar.gz files under the kubesphere-images directory
./offline-installation-tool.sh -s -l images-list.txt -d ./kubesphere-images
# Download the KubeSphere deployment files
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
# Edit the cluster configuration file and set spec.local_registry to your registry's address (IP + port)
vim cluster-configuration.yaml
# Rewrite the installer image name; dockerhub.kubekey.local is the registry domain
sed -i "s#^\s*image: kubesphere.*/ks-installer:.*#        image: dockerhub.kubekey.local/kubesphere/ks-installer:v3.3.0#" kubesphere-installer.yaml
# Upload everything downloaded online to the offline server
# Push the images to the private registry; -r specifies the registry domain and port
# Note: with harbor you must create the projects first; derive the project names from images-list.txt, e.g. for kubesphere/tomcat85-java8-centos7:v3.2.0 the project is kubesphere
./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r dockerhub.kubekey.local
# Install KubeSphere
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
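The sed above that points ks-installer at the private registry can be tried safely on a scratch copy first. A minimal sketch; the file contents below are a hypothetical fragment of kubesphere-installer.yaml, not the real file:

```shell
# Hypothetical fragment of kubesphere-installer.yaml, written to a scratch file
cat > /tmp/installer.demo.yaml <<'EOF'
      containers:
        - name: installer
          image: kubesphere/ks-installer:v3.3.0
EOF
# Same rewrite as in the walkthrough, run against the scratch copy
sed -i "s#^\s*image: kubesphere.*/ks-installer:.*#        image: dockerhub.kubekey.local/kubesphere/ks-installer:v3.3.0#" /tmp/installer.demo.yaml
# The image now points at the private registry
grep 'image:' /tmp/installer.demo.yaml
```

Note the `#` delimiter in the sed expression: it avoids escaping the `/` characters inside the image paths.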

Uninstalling KubeSphere

To uninstall KubeSphere from a k8s cluster, see the official uninstall script:
https://www.kubesphere.io/zh/docs/v3.4/installing-on-kubernetes/uninstall-kubesphere-from-k8s/
# Download the script and run it on a master node
wget -c https://raw.githubusercontent.com/kubesphere/ks-installer/release-3.1/scripts/kubesphere-delete.sh
bash kubesphere-delete.sh

Adding a node

Official docs: https://www.kubesphere.io/zh/docs/v3.3/installing-on-linux/cluster-operation/add-new-nodes/
# Add worker nodes
# Retrieve the cluster info and generate sample.yaml; the generated file may be incomplete and need filling in by hand; if the original configuration file is still on the machine, skip this step
# Edit the configuration file and put the new node's information under hosts and roleGroups
./kk create config --from-cluster
vim sample.yaml
# Add the nodes
./kk add nodes -f sample.yaml
# Add master nodes for high availability
# Similar to the above, except you configure the master-related information
./kk create config --from-cluster
# Add the new nodes and the load balancer information to sample.yaml
vim sample.yaml
# Add the nodes
./kk add nodes -f sample.yaml

Deleting a node

# Find the configuration file used when the cluster was set up; if it is gone, use KubeKey to retrieve the cluster info, which creates sample.yaml by default
./kk create config --from-cluster
# Delete the node
./kk delete node <nodeName> -f sample.yaml

kk command syntax

[root@ks1 ~]# ./kk --help
Deploy a kubernetes or kubesphere cluster efficiently, flexibly and easily. There are three scenarios to use kubekey.
1. Install Kubernetes only
2. Install Kubernetes and KubeSphere together in one command
3. Install Kubernetes first, then deploy KubeSphere on top of it with ks-installer (https://github.com/kubesphere/ks-installer)

Usage:
  kk [command]

Available Commands:
  add         Add nodes to the k8s cluster
  alpha       Commands for features in alpha
  artifact    Manage KubeKey offline installation packages
  certs       Manage cluster certs
  completion  Generate shell completion scripts
  create      Create a cluster or a cluster configuration file
  delete      Delete a node or a cluster
  help        Help about any command
  init        Initialize the installation environment
  plugin      Provides utilities for interacting with plugins
  upgrade     Upgrade the cluster smoothly
  version     Print the kk version information

Flags:
  -h, --help   help for kk

Use "kk [command] --help" for more information about a command.
[root@ks1 ~]#
# Install a k8s cluster and KubeSphere; with no configuration file, kk defaults to a single-node cluster on the current node
./kk create cluster --with-kubernetes v1.22.17 --with-kubesphere v3.3.0
# Install only a k8s cluster; with no configuration file, kk defaults to a single-node cluster on the current node
./kk create cluster --with-kubernetes v1.22.17
# Create a configuration file
./kk create config --with-kubesphere v3.3.2 --with-kubernetes v1.22.17 -f config.yaml
# Read the configuration file and create the k8s cluster and KubeSphere
./kk create cluster -f config.yaml
# Delete a k8s node
./kk delete node ks2 -f config.yaml
# Delete the k8s cluster
./kk delete cluster

That wraps up deploying a k8s cluster and KubeSphere with KubeKey, and installing KubeSphere on an existing k8s cluster; hopefully it serves as a useful reference.



