kubernetes-1.26.1 single-node deployment with containerd and nerdctl

2023-11-06 08:50

This article walks through a single-node deployment of Kubernetes 1.26.1 with containerd and nerdctl, and is intended as a reference for developers setting up the same environment.

k8s 1.26 installation with kubeadm and containerd


Kernel tuning: pass bridged IPv4 traffic to the iptables chains

[root@k8s-master ~]# vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the settings with: sysctl --system
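The preflight warning shown later ("/proc/sys/net/bridge/bridge-nf-call-iptables does not exist") usually means the br_netfilter kernel module is not loaded. A minimal sketch, assuming a systemd host with modules-load.d support:

# load br_netfilter so the bridge sysctls exist, and keep it loaded across reboots
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/k8s.conf
# apply all sysctl configuration, including /etc/sysctl.d/k8s.conf
sysctl --system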
About the CRI plugin

As of containerd 1.1, cri-containerd became containerd's built-in CRI plugin. Because the CRI plugin lives inside containerd, communication when Kubernetes starts a Pod is more efficient; the kubelet on a node starts containers roughly following the flow in the figure below:
(figure: container start flow from the kubelet through containerd's CRI plugin; original image not preserved)
Publish a service in LoadBalancer mode
kubectl expose deployment nginx --port=80 --type=LoadBalancer -n dev

Building container images with containerd
https://www.cnblogs.com/liy36/p/16595301.html


kubeadm init: initialize the cluster

[root@k8s-master /]# kubeadm init --kubernetes-version=1.26.1 --apiserver-advertise-address=172.29.128.182 --pod-network-cidr=10.244.0.0/16 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers  --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.29.128.182]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.29.128.182 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.29.128.182 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.502001 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ixjeeo.ai7504k72eeulqst
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.29.128.182:6443 --token ixjeeo.ai7504k72eeulqst \
        --discovery-token-ca-cert-hash sha256:312fe3f11864591ecb59f081a9c9a16f6a9aac965914af6016f1706a0f210807

Allow the master node to schedule pods

# On a single-node cluster, the master node does not schedule pods by default, so run:

kubectl taint nodes --all node-role.kubernetes.io/master-

Error: 1 node(s) had taint {node-role.kubernetes.io/master: } that the pod didn't tolerate.
This happens because, for security reasons, Kubernetes does not schedule pods on the master node by default, which leaves pods stuck in Pending.
If the error instead reads "1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/1 nodes are available", remove the control-plane taint:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
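A quick sanity check (sketch) that the taint is really gone:

kubectl describe node k8s-master | grep Taints
# expected after removal: Taints: <none>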

Deploy flannel
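The kube-flannel.yml manifest is not included in this article; it is normally downloaded from the flannel project, for example (assuming the upstream path is still current):

wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml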

kubectl apply -f kube-flannel.yml

Deploy a container

kubectl apply -f company-deployment.yml
kubectl apply -f company-service.yml
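The two manifests above are not reproduced in this article. Reconstructed from the image, port, and labels that appear later in the kubectl describe / kubectl get output (lyndon1107/company:latest, container port 8099, NodePort 30099, label app=company-pod), they might look roughly like this; every name and selector here is a guess, not the original file:

cat > company-deployment.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: company-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: company-pod
  template:
    metadata:
      labels:
        app: company-pod
    spec:
      containers:
      - name: company
        image: lyndon1107/company:latest
        ports:
        - containerPort: 8099
EOF

cat > company-service.yml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: company-service
spec:
  type: NodePort
  selector:
    app: company-pod
  ports:
  - port: 8099
    targetPort: 8099
    nodePort: 30099
EOF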

crictl ps: list containers

The first one is the company test microservice.

[root@k8s-master /]# crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
0e37494f486e4       eb84a8b41b798       6 minutes ago       Running             company                   0                   b5d8745c98007       company-deployment-56bdfc8778-4s6ww
bd21c3b9dbeb6       7b7f3acab868d       21 minutes ago      Running             kube-flannel              0                   5388e6ce11b60       kube-flannel-ds-85ksj
f42e19dfa9047       5185b96f0becf       29 minutes ago      Running             coredns                   0                   4c20ebab1ff72       coredns-567c556887-rztwj
ed883c6974c08       5185b96f0becf       29 minutes ago      Running             coredns                   0                   a1a0bfd0c4713       coredns-567c556887-h4vgb
e091a14cc1470       46a6bb3c77ce0       29 minutes ago      Running             kube-proxy                0                   61d5eb5e4d94b       kube-proxy-k89bn
d647b82eef1e4       fce326961ae2d       29 minutes ago      Running             etcd                      4                   0632df1d28797       etcd-k8s-master
869f490a5843c       e9c08e11b07f6       29 minutes ago      Running             kube-controller-manager   4                   eafa771afef51       kube-controller-manager-k8s-master
d2c18c9f82a93       deb04688c4a35       29 minutes ago      Running             kube-apiserver            4                   b0d0d2b3b578f       kube-apiserver-k8s-master
dc3a6dec2dbbe       655493523f607       29 minutes ago      Running             kube-scheduler            4                   f6302fcd0a8ff       kube-scheduler-k8s-master

kubectl get pod --all-namespaces -o wide

kubectl get all -o wide

[root@k8s-master /]# kubectl get pod --all-namespaces -o wide
NAMESPACE      NAME                                  READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
default        company-deployment-56bdfc8778-4s6ww   1/1     Running   0          41m   10.244.0.14      k8s-master   <none>           <none>
kube-flannel   kube-flannel-ds-85ksj                 1/1     Running   0          47m   172.29.128.182   k8s-master   <none>           <none>
kube-system    coredns-567c556887-h4vgb              1/1     Running   0          55m   10.244.0.12      k8s-master   <none>           <none>
kube-system    coredns-567c556887-rztwj              1/1     Running   0          55m   10.244.0.13      k8s-master   <none>           <none>
kube-system    etcd-k8s-master                       1/1     Running   4          55m   172.29.128.182   k8s-master   <none>           <none>
kube-system    kube-apiserver-k8s-master             1/1     Running   4          55m   172.29.128.182   k8s-master   <none>           <none>
kube-system    kube-controller-manager-k8s-master    1/1     Running   4          55m   172.29.128.182   k8s-master   <none>           <none>
kube-system    kube-proxy-k89bn                      1/1     Running   0          55m   172.29.128.182   k8s-master   <none>           <none>
kube-system    kube-scheduler-k8s-master             1/1     Running   4          55m   172.29.128.182   k8s-master   <none>           <none>

Test access

[root@k8s-master /]# curl -XPOST http://10.244.0.14:8099/test/queryList
{"success":true,"result":"SUCCESS","code":0,"msg":"成功","obj":""}

end

Troubleshooting

containerd error during kubeadm init

The failure is: failed to pull image "k8s.gcr.io/pause:3.6". The error log shows that, when pulling, containerd looks for the pause image on k8s.gcr.io and cannot find that version, instead of pulling registry.k8s.io/pause:3.9.

[root@k8s-master containerd]# kubeadm init --kubernetes-version=1.26.1 --pod-network-cidr=10.244.0.0/16 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers  --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.29.128.182]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.29.128.182 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.29.128.182 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Use journalctl -xeu kubelet to view the error log

[root@k8s-master ~]# journalctl -xeu kubelet

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.6": failed to pull image "k8s.gcr.io/pause:3.6": failed to pull and unpack image "k8s.gcr.io/pause:3.6": failed to resolve reference "k8s.gcr.io/pause:3.6": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.6": dial tcp 172.29.128.182:443: connect: connection refused

The core Kubernetes pods fail to be created because the pause image cannot be pulled: containerd keeps trying to download it from k8s.gcr.io.
Kubernetes 1.26 enabled configuration support for the CRI sandbox (pause) image.
The image repository previously set with kubeadm init --image-repository is no longer passed down to the CRI runtime for pulling the pause image;
it has to be set in the CRI runtime's own configuration file instead.

Workaround:

ctr -n k8s.io image pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
ctr -n k8s.io image tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
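The two ctr commands above fix the current node once, but the tag is lost if the image gets cleaned up. A more permanent alternative (a sketch, assuming the default /etc/containerd/config.toml layout where the CRI plugin has a sandbox_image entry) is to point the sandbox image at the mirror and restart containerd:

# rewrite the sandbox_image line in containerd's CRI plugin config
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
systemctl restart containerd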

About the crictl ps error: runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. crictl tries the container runtimes in that order, and when the first endpoint unix:///var/run/dockershim.sock is not found, you need to tell it explicitly which runtime your cluster uses, for example:
In Kubernetes 1.24+, dockershim has been replaced by cri-dockerd, so in that case run:
crictl config runtime-endpoint unix:///var/run/cri-dockerd.sock
If your container runtime is containerd instead, run:
crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
The generated configuration is stored in /etc/crictl.yaml and can be edited at any time.

kubectl get pods keeps showing ContainerCreating

READY 0/1: the pod does not start properly.
Fix it according to the error messages below.

[root@k8s-master flannel]# vi /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

After creating the file above on every node (master and workers), the problem is indeed resolved. Make sure the subnet specified above does not conflict with your real internal network segment, otherwise machines will no longer be able to ping each other.

However, with this fix the pods fail to deploy again after a reboot, because the file is lost on restart; the root cause is not addressed.

The real cause is that the cluster was never given a pod network CIDR. The thorough fix is to run kubeadm reset on every node of the cluster and re-initialize it, specifying the pod network CIDR at init time:
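A possible cleanup sequence before re-initializing (a sketch; it wipes the existing cluster state on the node, so only use it when rebuilding on purpose):

kubeadm reset -f
# kubeadm reset does not remove these by itself
rm -rf /etc/cni/net.d $HOME/.kube/config
systemctl restart containerd kubelet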

kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16

[root@k8s-master /]# kubectl get all
NAME                                      READY   STATUS              RESTARTS   AGE
pod/company-deployment-56bdfc8778-q7j5z   0/1     ContainerCreating   0          11m

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/company-service   NodePort    10.107.95.79   <none>        8099:30099/TCP   8m48s
service/kubernetes        ClusterIP   10.96.0.1      <none>        443/TCP          15d

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/company-deployment   0/1     1            0           11m

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/company-deployment-56bdfc8778   1         1         0       11m

View the pod's details

[root@k8s-master /]# kubectl describe pod company-deployment
Name:             company-deployment-56bdfc8778-tfffg
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-master/172.29.128.182
Start Time:       Mon, 13 Mar 2023 10:21:50 +0800
Labels:           app=company-pod
                  pod-template-hash=56bdfc8778
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/company-deployment-56bdfc8778
Containers:
  company:
    Container ID:   
    Image:          lyndon1107/company:latest
    Image ID:       
    Port:           8099/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fqvks (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-fqvks:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                   From               Message
  ----     ------                  ----                  ----               -------
  Normal   Scheduled               5m44s                 default-scheduler  Successfully assigned default/company-deployment-56bdfc8778-tfffg to k8s-master
  Warning  FailedCreatePodSandBox  5m44s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a77e286ca34423a9a7670987d8d338ab6285507488610bb9eaf5c792a1dd004c": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  5m33s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e3aa346f0347902f60d9f3a329052a81e376f700a88e8b1b06f200acc6b2a739": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  5m17s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "03377d24cbbbedded0402f49dc2591db42b49abcf25daa18095387c336309d59": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  5m2s                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c0e1a712476ed33396c464ee231d8cb8e4e2434a55a8ec45df96525c79532be5": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  4m50s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "659910ded6698b4268d032fb274b70b3c5da2de275dd8aefd6d9f70d3efe15c0": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  4m38s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "2f820ee439dce2bc5f0d3045bb241471af9e51ee2a6429c131b97631c47ac17b": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  4m26s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "aaa696310150c5801d92267fbc90aee021569b8af4f40244b31847e8cf84ca54": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  4m14s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5395b4dfb0e7ffc10d4dca6a100ac173bb85b2f3af11ef94d20c47e452d8b0ef": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  3m59s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7d199897a29766c886a6ebd042f64a3f09e42e1bc0ad3af4e0755060d29518c2": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  16s (x17 over 3m46s)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "795feeca99496be3b43d1006202d8d20d824379931db04ba3fd693b76e18ca3f": plugin type="flannel" name="cbr0" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory

# Next error: CrashLoopBackOff. The cause is that the container from the earlier attempt is still running.

Re-run kubectl apply -f company-deployment.yml

[root@k8s-master /]# kubectl get pods
NAME                                  READY   STATUS             RESTARTS      AGE
company-deployment-56bdfc8778-vfmgf   0/1     CrashLoopBackOff   5 (52s ago)   6m41s

[root@k8s-master /]# kubectl describe pod company-deployment-56bdfc8778-vfmgf
Name:             company-deployment-56bdfc8778-vfmgf
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-master/172.29.128.182
Start Time:       Mon, 13 Mar 2023 10:32:29 +0800
Labels:           app=company-pod
                  pod-template-hash=56bdfc8778
Annotations:      <none>
Status:           Running
IP:               10.244.0.3
IPs:
  IP:           10.244.0.3
Controlled By:  ReplicaSet/company-deployment-56bdfc8778
Containers:
  company:
    Container ID:   containerd://2938f11ba9a182a1c4377524e07666f6c9e0f9eeec81bce211a35bd1ddeb8843
    Image:          lyndon1107/company:latest
    Image ID:       docker.io/lyndon1107/company@sha256:83d08d938bbe766d5d2aa15022296543ff82deb8f1080f488385a8c4d268d75b
    Port:           8099/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 13 Mar 2023 10:37:51 +0800
      Finished:     Mon, 13 Mar 2023 10:38:18 +0800
    Ready:          False
    Restart Count:  5
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p2thd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-p2thd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  7m                     default-scheduler  Successfully assigned default/company-deployment-56bdfc8778-vfmgf to k8s-master
  Normal   Pulled     6m58s                  kubelet            Successfully pulled image "lyndon1107/company:latest" in 1.660856913s (1.660873652s including waiting)
  Normal   Pulled     6m30s                  kubelet            Successfully pulled image "lyndon1107/company:latest" in 1.641259608s (1.64128084s including waiting)
  Normal   Pulled     5m50s                  kubelet            Successfully pulled image "lyndon1107/company:latest" in 1.654300412s (1.654317447s including waiting)
  Normal   Created    4m53s (x4 over 6m58s)  kubelet            Created container company
  Normal   Started    4m53s (x4 over 6m58s)  kubelet            Started container company
  Normal   Pulled     4m53s                  kubelet            Successfully pulled image "lyndon1107/company:latest" in 1.668901691s (1.668914347s including waiting)
  Normal   Pulling    3m37s (x5 over 7m)     kubelet            Pulling image "lyndon1107/company:latest"
  Normal   Pulled     3m35s                  kubelet            Successfully pulled image "lyndon1107/company:latest" in 1.751819753s (1.751831323s including waiting)
  Warning  BackOff    113s (x14 over 6m2s)   kubelet            Back-off restarting failed container company in pod company-deployment-56bdfc8778-vfmgf_default(31b38cbc-e526-4d9b-9623-06ee97813672)

Redeploy the deployment and service

[root@k8s-master /]# kubectl delete deployment company-deployment 
deployment.apps "company-deployment" deleted[root@k8s-master /]# kubectl apply -f company-deployment.yml  #或 kubectl create -f deployment.yaml
deployment.apps/company-deployment created[root@k8s-master /]# kubectl get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/company-deployment-56bdfc8778-q46z4   1/1     Running   0          4sNAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/company-service   NodePort    10.107.95.79   <none>        8099:30099/TCP   46m
service/kubernetes        ClusterIP   10.96.0.1      <none>        443/TCP          15dNAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/company-deployment   1/1     1            1           4sNAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/company-deployment-56bdfc8778   1         1         1       4s

Miscellaneous

Exec into a pod

# Exec into the nginx Pod
kubectl exec -it nginx-deployment-67656986d9-f74lb -- bash

Check the apiserver container

Check whether the apiserver container is up (troubleshooting).

[root@k8s-master /]# systemctl status containerd
[root@k8s-master /]# nerdctl -n k8s.io ps | grep kube-apiserver
[root@k8s-master /]# nerdctl -n k8s.io ps | grep etcd

[root@k8s-master /]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}

Certificates

A Kubernetes cluster installed with kubeadm automatically generates the certificates the cluster needs.

How to set up certificates:
https://www.cnblogs.com/tylerzhou/p/11120347.html

Check certificate expiration

[root@k8s-master /]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Mar 25, 2024 09:45 UTC   350d            ca                      no      
apiserver                  Mar 25, 2024 09:45 UTC   350d            ca                      no      
apiserver-etcd-client      Mar 25, 2024 09:45 UTC   350d            etcd-ca                 no      
apiserver-kubelet-client   Mar 25, 2024 09:45 UTC   350d            ca                      no      
controller-manager.conf    Mar 25, 2024 09:45 UTC   350d            ca                      no      
etcd-healthcheck-client    Mar 25, 2024 09:45 UTC   350d            etcd-ca                 no      
etcd-peer                  Mar 25, 2024 09:45 UTC   350d            etcd-ca                 no      
etcd-server                Mar 25, 2024 09:45 UTC   350d            etcd-ca                 no      
front-proxy-client         Mar 25, 2024 09:45 UTC   350d            front-proxy-ca          no      
scheduler.conf             Mar 25, 2024 09:45 UTC   350d            ca                      no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Mar 23, 2033 09:45 UTC   9y              no      
etcd-ca                 Mar 23, 2033 09:45 UTC   9y              no      
front-proxy-ca          Mar 23, 2033 09:45 UTC   9y              no      

If any entry shows "invalid", that certificate has expired.
On older cluster versions this command may fail; use kubeadm alpha certs check-expiration instead.
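If a certificate does turn out to be expired, kubeadm can also renew it; a sketch (the control-plane components must be restarted afterwards so they pick up the new certificates):

kubeadm certs renew all
# then restart the control-plane static pods, e.g. by temporarily moving the
# manifests out of /etc/kubernetes/manifests and back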

Accessing a Service via DNS

Inside the cluster, we access the service at cluster-ip:8099.

[root@k8s-master /]# kubectl get service -o wide
NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE     SELECTOR
company-service   NodePort    10.101.28.2   <none>        8099:30099/TCP   6d23h   app=company

Within the cluster, besides reaching a Service through its Cluster IP, Kubernetes also provides more convenient DNS-based access.

The DNS component

A kubeadm deployment installs the kube-dns component (implemented by CoreDNS) by default.

[root@k8s-master /]# kubectl get deployment -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 6d23h

coredns is a DNS server. Whenever a new Service is created, coredns adds a DNS record for it. Pods in the cluster can then reach the Service via <SERVICE_NAME>.<NAMESPACE_NAME>.
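A quick DNS check from inside the cluster (a sketch using a throwaway busybox pod; company-service and the default namespace are the names used earlier in this article):

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup company-service.default
# the service can then be reached by name, e.g. http://company-service.default:8099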

Accessing a Service from outside the cluster

Kubernetes provides several types of Service; the default is ClusterIP.
The available types are ClusterIP, NodePort, and LoadBalancer.
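Since company-service is already of type NodePort, it can be reached from outside the cluster through any node IP on the node port (a sketch using the node IP and port 30099 shown in the outputs above, assuming no firewall in between):

curl -XPOST http://172.29.128.182:30099/test/queryList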

Viewing Spring Boot logs

kubectl logs --tail 200 -f deployment/hello-deployment

kubectl proxy

[root@k8s-master /]# kubectl proxy
Starting to serve on 127.0.0.1:8001

kubectl cluster-info

[root@k8s-master ~]# kubectl cluster-info
Kubernetes control plane is running at https://172.29.128.182:6443
CoreDNS is running at https://172.29.128.182:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

For ClusterIP, the traffic path is "cluster IP --> Pod IP";
for NodePort, the path is "node port --> cluster IP --> Pod IP".
So how are these hops translated? The underlying mechanism is NAT rewriting performed by iptables.
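The NAT rules that kube-proxy programs can be inspected directly (a sketch, assuming kube-proxy runs in its default iptables mode):

iptables -t nat -L KUBE-SERVICES -n | grep company
iptables -t nat -L KUBE-NODEPORTS -n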
View the kube-apiserver logs
kubectl logs --tail 200 -f kube-apiserver-k8s-master -n kube-system

References

Common kubectl commands
https://www.cnblogs.com/ophui/p/15001410.html
