Setting up Kubernetes with kubeadm

2024-06-12 03:48
Tags: kubernetes, setup, kubeadm


This is a record of something I have been tinkering with over the past few days: getting Kubernetes set up to see how it works. The process was fairly painful, because I could not find particularly reliable documentation, and some versions were incompatible with each other.

I. Setup approaches

There are several ways to set up Kubernetes; here is a quick assessment of each:

  1. Running Kubernetes locally on Docker
    Prerequisites:
    http://www.cnblogs.com/zhangeamon/p/5197655.html
    Reference:
    https://github.com/kubernetes/community/blob/master/contributors/devel/local-cluster/docker.md
    Install kubectl and shell auto-completion:
    Assessment: I never got this approach working; I kept running into a "cannot connect to 127.0.0.1:8080" error, which in hindsight was probably because the .kube config directory had never been created (see the sketch after this list). I did not try it again.
  2. Using minikube
    minikube is well suited to a single-machine setup: it creates a virtual machine to run the cluster, and upstream Kubernetes appears to have stopped supporting running Kubernetes locally on Docker (see https://github.com/kubernetes/minikube). However, minikube works best with VirtualBox as the underlying virtualization driver, and my bare-metal host already runs KVM; when I tried it the two conflicted, so I did not install this way either.
  3. Using kubeadm
    kubeadm is a fairly convenient tool for installing a Kubernetes cluster, and it is the approach that worked for me. The rest of this post records it in detail.
  4. Installing step by step
    Install each component one at a time. I have not tried this yet; you can follow https://github.com/opsnull/follow-me-install-kubernetes-cluster, but it is rather involved.
    Personally I recommend the third approach, since it is the easiest way to get started; I experimented with each of these methods myself.
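About the "cannot connect to 127.0.0.1:8080" error in option 1: kubectl falls back to that insecure local address when it has no kubeconfig, so the usual fix (untested in that setup; the paths below are kubectl's standard defaults, and admin.conf only exists on a kubeadm-built cluster like the one later in this post) is to create the .kube directory and point kubectl at a real kubeconfig:

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl cluster-info    # should now reach the real API server instead of 127.0.0.1:8080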

II. Setting up Kubernetes with kubeadm

References:
OpenStack: https://docs.openstack.org/developer/kolla-kubernetes/deployment-guide.html
Kubernetes: https://kubernetes.io/docs/getting-started-guides/kubeadm/
Environment: a CentOS 7 virtual machine running on KVM
1. Turn off SELinux

sudo setenforce 0
sudo sed -i 's/enforcing/permissive/g' /etc/selinux/config
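A quick check that the change took effect, both for the running system and across reboots:

getenforce                             # expect: Permissive
grep ^SELINUX= /etc/selinux/config     # expect: SELINUX=permissive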

2. Turn off firewalld

sudo systemctl stop firewalld
sudo systemctl disable firewalld
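To confirm firewalld is really stopped and will stay off after a reboot:

sudo systemctl is-active firewalld     # expect: inactive
sudo systemctl is-enabled firewalld    # expect: disabled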

3. Write the Kubernetes repository file

cat <<EOF > kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo mv kubernetes.repo /etc/yum.repos.d
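Before installing, you can verify that yum sees the new repository (both are standard yum commands):

sudo yum repolist | grep -i kubernetes
sudo yum makecache fast    # refresh metadata for the new repo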

4. Install Kubernetes 1.6.1 or later and other dependencies

sudo yum install -y docker ebtables kubeadm kubectl kubelet kubernetes-cni
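It is worth recording exactly which versions were installed, since kubeadm behaviour varies a lot between releases and this guide assumes 1.6.1 or later:

kubeadm version
kubectl version --client
rpm -q kubelet kubernetes-cni docker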

5. To enable the proper cgroup driver, start Docker and disable CRI

sudo systemctl enable docker
sudo systemctl start docker
CGROUP_DRIVER=$(sudo docker info | grep "Cgroup Driver" | awk '{print $3}')
sudo sed -i "s|KUBELET_KUBECONFIG_ARGS=|KUBELET_KUBECONFIG_ARGS=--cgroup-driver=$CGROUP_DRIVER --enable-cri=false |g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo sed -i "s|\$KUBELET_NETWORK_ARGS| |g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

6. Set up the DNS server with the service CIDR:

sudo sed -i 's/10.96.0.10/10.3.3.10/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
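Verify that the kubelet's cluster DNS address now matches the service CIDR that will be used below (10.3.3.0/24):

grep cluster-dns /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # should show --cluster-dns=10.3.3.10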

7. Reload kubelet

sudo systemctl daemon-reload
sudo systemctl stop kubelet
sudo systemctl enable kubelet
sudo systemctl start kubelet
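Until kubeadm init has written its configuration the kubelet may keep restarting; that is expected at this stage. These commands just let you watch what it is doing:

sudo systemctl status kubelet --no-pager
sudo journalctl -u kubelet -n 50 --no-pager    # recent kubelet log lines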

8. Deploy Kubernetes with kubeadm

sudo kubeadm init --pod-network-cidr=10.1.0.0/16 --service-cidr=10.3.3.0/24

A problem you may run into: if your outbound traffic goes through a corporate proxy, make sure you add your VM's address to no_proxy; otherwise kubeadm will hang at the lines below. If the run fails, execute sudo kubeadm reset before retrying:

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
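A sketch of the proxy workaround, assuming the proxy is configured through the conventional http_proxy/https_proxy environment variables (the VM address is the one used in this deployment; adjust to yours):

export no_proxy=127.0.0.1,localhost,192.168.122.29    # make sure traffic to the VM itself bypasses the proxy
sudo kubeadm reset                                    # clean up the failed run before retrying
sudo -E kubeadm init --pod-network-cidr=10.1.0.0/16 --service-cidr=10.3.3.0/24    # -E keeps the proxy variables for root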

Note:
pod-network-cidr is a network private to Kubernetes that the pods within Kubernetes communicate on. The service-cidr is where IP addresses for Kubernetes services are allocated. Upstream documentation makes no recommendation that the pod network should be a /16, but the Kolla developers have found through experience that each node consumes an entire /24 network, so this configuration would permit 255 Kubernetes nodes.
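A quick sanity check on the subnet arithmetic behind that note:

# A /16 pod network split into per-node /24 subnets yields 2^(24-16) subnets
echo $((2 ** (24 - 16)))    # 256, in line with the roughly 255 nodes mentioned above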
When kubeadm init completes successfully, the output looks like this:

[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.122.29]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 23.768335 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 4.022721 seconds
[token] Using token: 5e0896.4cced9c43904d4d0
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 5e0896.4cced9c43904d4d0 192.168.122.29:6443

Remember the last line: worker (slave) nodes can use that kubeadm join command to join the Kubernetes cluster.
Then:

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

Load the kubeadm credentials into the system:

mkdir -p $HOME/.kube
sudo -H cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo -H chown $(id -u):$(id -g) $HOME/.kube/config
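Two standard kubectl commands to confirm the credentials are picked up and the client is talking to the right cluster:

kubectl config view --minify    # shows the admin context that was just copied in
kubectl cluster-info            # should report the API server at https://192.168.122.29:6443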

After that, use kubectl to check the status:

kubectl get nodes
kubectl get pods -n kube-system

9. Deploy a CNI driver
CNI networking options: https://linux.cn/thread-15315-1-1.html
Using Flannel:
Flannel is based on VXLAN; because VXLAN adds packet overhead it is relatively less efficient, but it is the approach recommended by Kubernetes.

kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This approach did not work for me; the flannel pod kept restarting.
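I did not dig deeper at the time, but if you hit the same restart loop, this is how I would start debugging it (standard kubectl commands; the grep pattern and the kube-flannel container name come from the upstream manifest):

kubectl -n kube-system get pods -o wide | grep flannel    # restart count and the node it runs on
FLANNEL_POD=$(kubectl -n kube-system get pods | grep flannel | awk '{print $1}' | head -1)
kubectl -n kube-system describe pod $FLANNEL_POD          # events: image pulls, crash reasons
kubectl -n kube-system logs $FLANNEL_POD -c kube-flannel  # logs of the flannel container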
Using Canal:

wget http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
sed -i "s@192.168.0.0/16@10.1.0.0/16@" calico.yaml
sed -i "s@10.96.232.136@10.3.3.100@" calico.yaml
kubectl apply -f calico.yaml

Finally untaint the node (mark the master node as schedulable) so that PODs can be scheduled to this AIO deployment:

kubectl taint nodes --all=true  node-role.kubernetes.io/master:NoSchedule-
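To confirm the master is now schedulable, the taint should no longer appear in the node description:

kubectl describe nodes | grep -i taints    # node-role.kubernetes.io/master:NoSchedule should be gone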

10. Restore $KUBELET_NETWORK_ARGS

sudo sed -i "s|\$KUBELET_EXTRA_ARGS|\$KUBELET_EXTRA_ARGS \$KUBELET_NETWORK_ARGS|g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl daemon-reload
sudo systemctl restart kubelet
OLD_DNS_POD=$(kubectl get pods -n kube-system | grep dns | awk '{print $1}')
kubectl delete pod $OLD_DNS_POD -n kube-system

Wait for the old DNS pod to be deleted and a new one to be started automatically:
kubectl get pods,svc,deploy,ds --all-namespaces

11. Set up a sample application
Ref: the "Installing a sample application" section of http://janetkuo.github.io/docs/getting-started-guides/kubeadm/
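As a minimal alternative smoke test, something like the following should work on a cluster of this vintage (the Deployment uses the old extensions/v1beta1 API group that Kubernetes 1.6 still serves; the nginx-demo name, image tag, and ports are arbitrary choices of mine):

cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
EOF
kubectl expose deployment nginx-demo --port=80 --type=NodePort    # reachable on a NodePort of every node
kubectl get pods -l app=nginx-demo -o wide
kubectl get svc nginx-demo    # note the allocated NodePort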

Summary

  1. kubectl command auto-completion
    Ref: https://kubernetes.io/docs/tasks/kubectl/install/
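The reference above boils down to a couple of commands; this is the standard bash setup (the completion subcommand ships with kubectl):

source <(kubectl completion bash)                        # enable completion in the current shell
echo "source <(kubectl completion bash)" >> ~/.bashrc    # make it persistent for new shells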



