Kubernetes 1.26.1 single-node deployment with containerd and nerdctl

2023-11-06 08:50

This article walks through a single-node deployment of Kubernetes 1.26.1 with kubeadm, containerd, and nerdctl, as a reference for anyone running into the same setup problems.

Installing Kubernetes 1.26 with kubeadm and containerd


Kernel tuning: pass bridged IPv4 traffic to the iptables chains

[root@k8s-master ~]# vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Apply the settings with sysctl --system (see the sketch below).
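These bridge sysctls only exist once the br_netfilter kernel module is loaded; a minimal sketch of loading it and applying the settings (module and file names are the usual conventions, adjust for your distribution):

# load the bridge netfilter module now and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf

# apply all sysctl fragments, including /etc/sysctl.d/k8s.conf
sysctl --system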
About the CRI plugin

Since containerd 1.1, cri-containerd has been merged into containerd as its built-in CRI plugin. With the CRI plugin living inside containerd, the communication path when Kubernetes starts a Pod is shorter and more efficient: the kubelet on a node calls containerd's CRI plugin directly, and containerd creates the containers.
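To confirm which runtime the node is actually talking to over CRI, crictl can report it (assuming /etc/crictl.yaml already points at containerd, as configured later in this article):

crictl version    # RuntimeName should report containerd, along with its version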
Expose a service using the LoadBalancer type
kubectl expose deployment nginx --port=80 --type=LoadBalancer -n dev
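On a bare-metal single-node cluster there is no cloud controller to provision an external load balancer, so the EXTERNAL-IP of a LoadBalancer service usually stays <pending> (unless something like MetalLB is installed); the service is still reachable through the node port it allocates. A quick check, reusing the dev namespace from the command above:

kubectl get service nginx -n dev    # EXTERNAL-IP shows <pending> without a cloud or MetalLB load balancer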

Building container images with containerd
https://www.cnblogs.com/liy36/p/16595301.html
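As a sketch of doing this with nerdctl (the image name and Dockerfile are placeholders, and nerdctl build requires a running buildkitd), building directly into containerd's k8s.io namespace makes the image visible to the kubelet without a separate import step:

# build an image against containerd's k8s.io namespace (the one the kubelet uses)
nerdctl -n k8s.io build -t demo/company:latest .

# confirm the CRI runtime can see it
nerdctl -n k8s.io images | grep company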


kubeadm init: initialize the cluster

[root@k8s-master /]# kubeadm init --kubernetes-version=1.26.1 --apiserver-advertise-address=172.29.128.182 --pod-network-cidr=10.244.0.0/16 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers  --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.29.128.182]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.29.128.182 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.29.128.182 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.502001 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ixjeeo.ai7504k72eeulqst
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.29.128.182:6443 --token ixjeeo.ai7504k72eeulqst \
	--discovery-token-ca-cert-hash sha256:312fe3f11864591ecb59f081a9c9a16f6a9aac965914af6016f1706a0f210807

Allow the master node to schedule pods

# On a single-node cluster, the master node is tainted and will not schedule pods by default, so the taint has to be removed:

kubectl taint nodes --all node-role.kubernetes.io/master-

If pods stay Pending with the error "1 node(s) had taint {node-role.kubernetes.io/master: } that the pod didn't tolerate", it is because, for safety, Kubernetes does not schedule workloads on the master node by default.

On Kubernetes 1.26 the taint is node-role.kubernetes.io/control-plane. For the error "1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/1 nodes are available", remove that taint instead:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-

Deploy flannel

kubectl apply -f kube-flannel.yml
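The manifest can be fetched from the flannel project first; the URL below is the upstream location at the time of writing and may change:

# download and apply the flannel manifest
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

# wait until the flannel DaemonSet pod is Running
kubectl get pods -n kube-flannel -w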

Deploy a container

kubectl apply -f company-deployment.yml
kubectl apply -f company-service.yml
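The two manifests are not reproduced in this article. A minimal sketch of what they might contain, applied via a heredoc (the image, port 8099, NodePort 30099 and resource names are taken from the outputs below; the labels and everything else are assumptions):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: company-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: company-pod        # assumed label, matching the pod labels shown later
  template:
    metadata:
      labels:
        app: company-pod
    spec:
      containers:
      - name: company
        image: lyndon1107/company:latest
        ports:
        - containerPort: 8099
---
apiVersion: v1
kind: Service
metadata:
  name: company-service
spec:
  type: NodePort
  selector:
    app: company-pod
  ports:
  - port: 8099
    targetPort: 8099
    nodePort: 30099
EOF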

List containers with crictl ps

The first entry is the company test microservice.

[root@k8s-master /]# crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
0e37494f486e4       eb84a8b41b798       6 minutes ago       Running             company                   0                   b5d8745c98007       company-deployment-56bdfc8778-4s6ww
bd21c3b9dbeb6       7b7f3acab868d       21 minutes ago      Running             kube-flannel              0                   5388e6ce11b60       kube-flannel-ds-85ksj
f42e19dfa9047       5185b96f0becf       29 minutes ago      Running             coredns                   0                   4c20ebab1ff72       coredns-567c556887-rztwj
ed883c6974c08       5185b96f0becf       29 minutes ago      Running             coredns                   0                   a1a0bfd0c4713       coredns-567c556887-h4vgb
e091a14cc1470       46a6bb3c77ce0       29 minutes ago      Running             kube-proxy                0                   61d5eb5e4d94b       kube-proxy-k89bn
d647b82eef1e4       fce326961ae2d       29 minutes ago      Running             etcd                      4                   0632df1d28797       etcd-k8s-master
869f490a5843c       e9c08e11b07f6       29 minutes ago      Running             kube-controller-manager   4                   eafa771afef51       kube-controller-manager-k8s-master
d2c18c9f82a93       deb04688c4a35       29 minutes ago      Running             kube-apiserver            4                   b0d0d2b3b578f       kube-apiserver-k8s-master
dc3a6dec2dbbe       655493523f607       29 minutes ago      Running             kube-scheduler            4                   f6302fcd0a8ff       kube-scheduler-k8s-master

kubectl get pod --all-namespaces -o wide

kubectl get all -o wide

[root@k8s-master /]# kubectl get pod --all-namespaces -o wide
NAMESPACE      NAME                                  READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
default        company-deployment-56bdfc8778-4s6ww   1/1     Running   0          41m   10.244.0.14      k8s-master   <none>           <none>
kube-flannel   kube-flannel-ds-85ksj                 1/1     Running   0          47m   172.29.128.182   k8s-master   <none>           <none>
kube-system    coredns-567c556887-h4vgb              1/1     Running   0          55m   10.244.0.12      k8s-master   <none>           <none>
kube-system    coredns-567c556887-rztwj              1/1     Running   0          55m   10.244.0.13      k8s-master   <none>           <none>
kube-system    etcd-k8s-master                       1/1     Running   4          55m   172.29.128.182   k8s-master   <none>           <none>
kube-system    kube-apiserver-k8s-master             1/1     Running   4          55m   172.29.128.182   k8s-master   <none>           <none>
kube-system    kube-controller-manager-k8s-master    1/1     Running   4          55m   172.29.128.182   k8s-master   <none>           <none>
kube-system    kube-proxy-k89bn                      1/1     Running   0          55m   172.29.128.182   k8s-master   <none>           <none>
kube-system    kube-scheduler-k8s-master             1/1     Running   4          55m   172.29.128.182   k8s-master   <none>           <none>

Test access

[root@k8s-master /]# curl -XPOST http://10.244.0.14:8099/test/queryList
{"success":true,"result":"SUCCESS","code":0,"msg":"成功","obj":""}

end

Troubleshooting

containerd error during kubeadm init

failed to pull image "k8s.gcr.io/pause:3.6": the log shows that containerd tries to pull the sandbox (pause) image from k8s.gcr.io/pause:3.6, which it cannot reach, instead of registry.k8s.io/pause:3.9.

[root@k8s-master containerd]# kubeadm init --kubernetes-version=1.26.1 --pod-network-cidr=10.244.0.0/16 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers  --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.29.128.182]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.29.128.182 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.29.128.182 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Check the error log with journalctl -xeu kubelet:

[root@k8s-master ~]# journalctl -xeu kubelet

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.6": failed to pull image "k8s.gcr.io/pause:3.6": failed to pull and unpack image "k8s.gcr.io/pause:3.6": failed to resolve reference "k8s.gcr.io/pause:3.6": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.6": dial tcp 172.29.128.182:443: connect: connection refused

The core Kubernetes pods cannot be created because pulling the pause (sandbox) image fails: containerd keeps pulling it from k8s.gcr.io.
Kubernetes 1.26 enabled configuration support for the CRI sandbox (pause) image.
The image repository set via kubeadm init --image-repository is no longer passed to the CRI runtime for downloading the pause image;
it has to be configured in the CRI runtime's own configuration file instead.

Workaround:

ctr -n k8s.io image pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
ctr -n k8s.io image tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
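Re-tagging works, but the tag disappears if the image is ever garbage-collected. A more durable fix, assuming /etc/containerd/config.toml was generated with containerd config default, is to point containerd's own sandbox_image setting at a reachable mirror and restart containerd:

# set the CRI sandbox image under [plugins."io.containerd.grpc.v1.cri"] in /etc/containerd/config.toml
sed -i 's|sandbox_image = .*|sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"|' /etc/containerd/config.toml
systemctl restart containerd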

About the crictl ps warning "runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]": crictl probes these runtimes in order, and since the first one (unix:///var/run/dockershim.sock) no longer exists, the runtime endpoint of the container runtime actually in use has to be set manually. For example:

Since Kubernetes 1.24 removed dockershim, if you use cri-dockerd run:

crictl config runtime-endpoint unix:///var/run/cri-dockerd.sock

If your container runtime is containerd (as in this setup), run:

crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock

The generated configuration is saved to /etc/crictl.yaml and can be edited at any time.
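The resulting /etc/crictl.yaml is a small file; a typical version looks like the following (the timeout and debug values are common defaults and may differ by crictl version):

cat /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 2
debug: false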

kubectl get pods keeps showing ContainerCreating

READY shows 0/1 and the pod never starts normally.
Fix it according to the error messages below.

[root@k8s-master flannel]# vi /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

Creating the file above on every node (master and workers) does make the problem go away. Note that the subnet must not conflict with your real internal network, otherwise machines will no longer be able to ping each other.

However, with this workaround the pods fail to deploy again after a reboot, because the file is lost on restart; it does not fix the root cause.

The real cause is that the cluster was initialized without a pod network CIDR. The thorough fix is to kubeadm reset all nodes and re-initialize the cluster with the CIDR specified (a reset sketch follows the command below):

kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
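A minimal reset-and-reinitialize sketch for a single node running flannel (the cleanup paths are the usual defaults; check before deleting anything):

# tear down the existing cluster state on this node
kubeadm reset -f

# remove leftover CNI configuration, kubeconfig and iptables rules
rm -rf /etc/cni/net.d $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# re-initialize with a pod network CIDR that matches kube-flannel.yml (10.244.0.0/16)
kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16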

[root@k8s-master /]# kubectl get all
NAME                                      READY   STATUS              RESTARTS   AGE
pod/company-deployment-56bdfc8778-q7j5z   0/1     ContainerCreating   0          11m

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/company-service   NodePort    10.107.95.79   <none>        8099:30099/TCP   8m48s
service/kubernetes        ClusterIP   10.96.0.1      <none>        443/TCP          15d

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/company-deployment   0/1     1            0           11m

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/company-deployment-56bdfc8778   1         1         0       11m

Inspect the pod's details

[root@k8s-master /]# kubectl describe pod company-deployment
Name:             company-deployment-56bdfc8778-tfffg
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-master/172.29.128.182
Start Time:       Mon, 13 Mar 2023 10:21:50 +0800
Labels:           app=company-pod
                  pod-template-hash=56bdfc8778
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/company-deployment-56bdfc8778
Containers:
  company:
    Container ID:   
    Image:          lyndon1107/company:latest
    Image ID:       
    Port:           8099/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fqvks (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-fqvks:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                   From               Message
  ----     ------                  ----                  ----               -------
  Normal   Scheduled               5m44s                 default-scheduler  Successfully assigned default/company-deployment-56bdfc8778-tfffg to k8s-master
  Warning  FailedCreatePodSandBox  5m44s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a77e286ca34423a9a7670987d8d338ab6285507488610bb9eaf5c792a1dd004c": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  5m33s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e3aa346f0347902f60d9f3a329052a81e376f700a88e8b1b06f200acc6b2a739": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  5m17s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "03377d24cbbbedded0402f49dc2591db42b49abcf25daa18095387c336309d59": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  5m2s                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c0e1a712476ed33396c464ee231d8cb8e4e2434a55a8ec45df96525c79532be5": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  4m50s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "659910ded6698b4268d032fb274b70b3c5da2de275dd8aefd6d9f70d3efe15c0": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  4m38s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "2f820ee439dce2bc5f0d3045bb241471af9e51ee2a6429c131b97631c47ac17b": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  4m26s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "aaa696310150c5801d92267fbc90aee021569b8af4f40244b31847e8cf84ca54": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  4m14s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5395b4dfb0e7ffc10d4dca6a100ac173bb85b2f3af11ef94d20c47e452d8b0ef": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  3m59s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7d199897a29766c886a6ebd042f64a3f09e42e1bc0ad3af4e0755060d29518c2": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  16s (x17 over 3m46s)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "795feeca99496be3b43d1006202d8d20d824379931db04ba3fd693b76e18ca3f": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory

# The next error is CrashLoopBackOff: the container is created now, but it keeps exiting while running and is restarted repeatedly.

Run kubectl apply -f company-deployment.yml again:

[root@k8s-master /]# kubectl get pods
NAME                                  READY   STATUS             RESTARTS      AGE
company-deployment-56bdfc8778-vfmgf   0/1     CrashLoopBackOff   5 (52s ago)   6m41s

[root@k8s-master /]# kubectl describe pod company-deployment-56bdfc8778-vfmgf
Name:             company-deployment-56bdfc8778-vfmgf
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-master/172.29.128.182
Start Time:       Mon, 13 Mar 2023 10:32:29 +0800
Labels:           app=company-pod
                  pod-template-hash=56bdfc8778
Annotations:      <none>
Status:           Running
IP:               10.244.0.3
IPs:
  IP:           10.244.0.3
Controlled By:  ReplicaSet/company-deployment-56bdfc8778
Containers:
  company:
    Container ID:   containerd://2938f11ba9a182a1c4377524e07666f6c9e0f9eeec81bce211a35bd1ddeb8843
    Image:          lyndon1107/company:latest
    Image ID:       docker.io/lyndon1107/company@sha256:83d08d938bbe766d5d2aa15022296543ff82deb8f1080f488385a8c4d268d75b
    Port:           8099/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 13 Mar 2023 10:37:51 +0800
      Finished:     Mon, 13 Mar 2023 10:38:18 +0800
    Ready:          False
    Restart Count:  5
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p2thd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-p2thd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  7m                     default-scheduler  Successfully assigned default/company-deployment-56bdfc8778-vfmgf to k8s-master
  Normal   Pulled     6m58s                  kubelet            Successfully pulled image "lyndon1107/company:latest" in 1.660856913s (1.660873652s including waiting)
  Normal   Pulled     6m30s                  kubelet            Successfully pulled image "lyndon1107/company:latest" in 1.641259608s (1.64128084s including waiting)
  Normal   Pulled     5m50s                  kubelet            Successfully pulled image "lyndon1107/company:latest" in 1.654300412s (1.654317447s including waiting)
  Normal   Created    4m53s (x4 over 6m58s)  kubelet            Created container company
  Normal   Started    4m53s (x4 over 6m58s)  kubelet            Started container company
  Normal   Pulled     4m53s                  kubelet            Successfully pulled image "lyndon1107/company:latest" in 1.668901691s (1.668914347s including waiting)
  Normal   Pulling    3m37s (x5 over 7m)     kubelet            Pulling image "lyndon1107/company:latest"
  Normal   Pulled     3m35s                  kubelet            Successfully pulled image "lyndon1107/company:latest" in 1.751819753s (1.751831323s including waiting)
  Warning  BackOff    113s (x14 over 6m2s)   kubelet            Back-off restarting failed container company in pod company-deployment-56bdfc8778-vfmgf_default(31b38cbc-e526-4d9b-9623-06ee97813672)

Redeploy the deployment and service

[root@k8s-master /]# kubectl delete deployment company-deployment 
deployment.apps "company-deployment" deleted

[root@k8s-master /]# kubectl apply -f company-deployment.yml   # or: kubectl create -f deployment.yaml
deployment.apps/company-deployment created

[root@k8s-master /]# kubectl get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/company-deployment-56bdfc8778-q46z4   1/1     Running   0          4s

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/company-service   NodePort    10.107.95.79   <none>        8099:30099/TCP   46m
service/kubernetes        ClusterIP   10.96.0.1      <none>        443/TCP          15d

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/company-deployment   1/1     1            1           4s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/company-deployment-56bdfc8778   1         1         1       4s

Miscellaneous

Exec into a pod

Exec into the nginx Pod:
kubectl exec -it nginx-deployment-67656986d9-f74lb -- bash

Check the apiserver container

Check whether the apiserver container is up, for troubleshooting:

[root@k8s-master /]# systemctl status containerd
[root@k8s-master /]# nerdctl -n k8s.io ps | grep kube-apiserver
[root@k8s-master /]# nerdctl -n k8s.io ps | grep etcd

[root@k8s-master /]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}

Certificates

A Kubernetes cluster installed with kubeadm automatically generates all the certificates the cluster needs.

How to configure certificates:
https://www.cnblogs.com/tylerzhou/p/11120347.html

Check certificate expiration

[root@k8s-master /]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Mar 25, 2024 09:45 UTC   350d            ca                      no      
apiserver                  Mar 25, 2024 09:45 UTC   350d            ca                      no      
apiserver-etcd-client      Mar 25, 2024 09:45 UTC   350d            etcd-ca                 no      
apiserver-kubelet-client   Mar 25, 2024 09:45 UTC   350d            ca                      no      
controller-manager.conf    Mar 25, 2024 09:45 UTC   350d            ca                      no      
etcd-healthcheck-client    Mar 25, 2024 09:45 UTC   350d            etcd-ca                 no      
etcd-peer                  Mar 25, 2024 09:45 UTC   350d            etcd-ca                 no      
etcd-server                Mar 25, 2024 09:45 UTC   350d            etcd-ca                 no      
front-proxy-client         Mar 25, 2024 09:45 UTC   350d            front-proxy-ca          no      
scheduler.conf             Mar 25, 2024 09:45 UTC   350d            ca                      no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Mar 23, 2033 09:45 UTC   9y              no      
etcd-ca                 Mar 23, 2033 09:45 UTC   9y              no      
front-proxy-ca          Mar 23, 2033 09:45 UTC   9y              no      

If any certificate shows "invalid", it has expired.
On clusters running older versions this command may fail; use kubeadm alpha certs check-expiration instead.
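When certificates are close to expiry, kubeadm can renew all of them in one step; the control-plane static pods have to be restarted afterwards so they pick up the new certificates:

# renew every certificate managed by kubeadm
kubeadm certs renew all

# then restart the control-plane static pods, e.g. by temporarily moving their
# manifests out of /etc/kubernetes/manifests and back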

Accessing a Service via DNS

Inside the cluster, the service is reached at cluster-ip:8099.

[root@k8s-master /]# kubectl get service -o wide
NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE     SELECTOR
company-service   NodePort    10.101.28.2   <none>        8099:30099/TCP   6d23h   app=company

Within the cluster, besides reaching a Service through its Cluster IP, Kubernetes also provides a more convenient option: DNS.

The DNS component

A kubeadm deployment installs the cluster DNS add-on by default (CoreDNS in current versions, which replaced kube-dns).

[root@k8s-master /]# kubectl get deployment -n kube-system
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           6d23h

coredns is a DNS server. Whenever a new Service is created, coredns adds a DNS record for it, and Pods in the cluster can then reach the Service by the name <SERVICE_NAME>.<NAMESPACE_NAME>.
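A quick way to verify the DNS record from inside the cluster is a throwaway pod; the busybox image and tag here are just an example:

# resolve the service name from a temporary pod that is removed afterwards
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup company-service.default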

Accessing a Service from outside the cluster

Kubernetes offers several Service types; the default is ClusterIP.
The main types are ClusterIP, NodePort, and LoadBalancer.

Viewing Spring Boot logs

kubectl logs --tail=200 -f deployment/hello-deployment

kubectl proxy

[root@k8s-master /]# kubectl proxy
Starting to serve on 127.0.0.1:8001

kubectl cluster-info

[root@k8s-master ~]# kubectl cluster-info
Kubernetes control plane is running at https://172.29.128.182:6443
CoreDNS is running at https://172.29.128.182:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

For ClusterIP the traffic path is "cluster IP --> Pod IP";
for NodePort it is "node port --> cluster IP --> Pod IP".
How are these hops translated? The underlying mechanism is NAT translation with iptables rules programmed by kube-proxy.
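One way to see these rules on the node, assuming kube-proxy runs in its default iptables mode, is to inspect the NAT table (KUBE-SERVICES and KUBE-NODEPORTS are the standard kube-proxy chains; the grep patterns are just the service name and node port used in this article):

# service-level DNAT rules programmed by kube-proxy
iptables -t nat -L KUBE-SERVICES -n | grep company-service

# NodePort rules (company-service is exposed on 30099)
iptables -t nat -L KUBE-NODEPORTS -n | grep 30099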
View the kube-apiserver logs:
kubectl logs --tail 200 -f kube-apiserver-k8s-master -n kube-system

References

Common kubectl commands
https://www.cnblogs.com/ophui/p/15001410.html

This concludes the article on deploying Kubernetes 1.26.1 as a single node with containerd and nerdctl; hopefully it serves as a useful reference.


