This article walks through adapting kubernetes-1.18.8 to UOS on the Loongson mips64le architecture, as a reference for developers facing the same task.
Adapting kubernetes-1.18.8 to UOS on Loongson (mips64le)
I. Environment
OS: UOS 20
CPU architecture: mips64le
Server vendor:
Kubernetes version: v1.18.8
Docker version: docker-ce 19.03
II. Adaptation Steps
1. Install Docker
UOS has already been adapted to work with Docker, so it can be installed from the official UOS package repository; the version provided there is docker-ce 19.03. Other versions require building from source, which this document does not cover. To install from the official UOS repository:
apt-get install -y docker-ce
Note:
On earlier UOS builds, Docker failed to run after docker-ce was installed; UOS engineers confirmed this as a kernel bug, and the fix has been merged into the latest kernel.
2. Building from Source
2.1 Install build dependencies
apt-get install -y gcc make
apt-get install -y rsync jq
Install the Go toolchain:
wget -c https://golang.google.cn/dl/go1.14.6.linux-amd64.tar.gz -P /opt/
cd /opt/
tar -C /usr/local -xzf go1.14.6.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile && source /etc/profile
echo 'export GOPATH=/home/go' >> /etc/profile && source /etc/profile # set GOPATH (single quotes so $PATH expands at login, not here)
mkdir -p $GOPATH
2.2 Download the source
Download whichever release you need; this document builds v1.18.8:
mkdir -p $GOPATH/src/k8s.io
cd $GOPATH/src/k8s.io
git clone https://github.com/kubernetes/kubernetes -b v1.18.8
cd kubernetes
Due to network limitations in mainland China, cloning from GitHub is likely to fail. The kubernetes repository can instead be mirrored to Gitee; other users have already published such mirrors, which can be cloned directly:
cd $GOPATH/src/k8s.io
git clone https://gitee.com/mirrors/Kubernetes.git -b v1.18.8
cd Kubernetes
2.3 Build resources
2.3.1 mips64le base images
- Check the kube-cross TAG:
root@b529f9ce0ca9:/go/src/k8s.io/kubernetes# cat ./build/build-image/cross/VERSION
v1.13.15-1
- Check debian_iptables_version:
root@b529f9ce0ca9:/go/src/k8s.io/kubernetes# egrep -Rn "debian_iptables_version=" ./
./build/common.sh:98: local debian_iptables_version=v12.1.2
./build/dependencies.yaml:112: match: debian_iptables_version=
- Check debian_base_version:
root@b529f9ce0ca9:/go/src/k8s.io/kubernetes# egrep -Rn "debian_base_version=" ./
./build/common.sh:97: local debian_base_version=v2.1.3
./build/dependencies.yaml:84: match: debian_base_version=
Images with these exact tags are not published upstream for mips64le, so substitutes have to be used; pull the following images:
docker pull loongnixk8s/debian-iptables-mips64le:v12.1.0
docker pull loongnixk8s/debian-base-mips64le:v2.1.0
docker pull registry.aliyuncs.com/google_containers/kube-cross:v1.13.6-1
docker pull loongnixk8s/pause-mips64le:3.1
docker tag loongnixk8s/pause-mips64le:3.1 k8s.gcr.io/pause-mips64le:3.2
docker tag registry.aliyuncs.com/google_containers/kube-cross:v1.13.6-1 us.gcr.io/k8s-artifacts-prod/build-image/kube-cross:v1.13.15-1
docker tag loongnixk8s/debian-base-mips64le:v2.1.0 k8s.gcr.io/debian-base-mips64le:v2.1.3
docker tag loongnixk8s/debian-iptables-mips64le:v12.1.0 k8s.gcr.io/debian-iptables-mips64le:v12.1.2
docker rmi loongnixk8s/debian-iptables-mips64le:v12.1.0
docker rmi loongnixk8s/debian-base-mips64le:v2.1.0
docker rmi registry.aliyuncs.com/google_containers/kube-cross:v1.13.6-1
docker rmi loongnixk8s/pause-mips64le:3.1
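The pull/tag/rmi sequence above repeats one pattern per image. A dry-run sketch of that pattern (the pairs below restate the mappings above; change `DOCKER` to `docker` on the build host to run it for real):

```shell
# Each line: "substitute-image  expected-name" (same pairs as above).
retags='
loongnixk8s/debian-iptables-mips64le:v12.1.0 k8s.gcr.io/debian-iptables-mips64le:v12.1.2
loongnixk8s/debian-base-mips64le:v2.1.0 k8s.gcr.io/debian-base-mips64le:v2.1.3
registry.aliyuncs.com/google_containers/kube-cross:v1.13.6-1 us.gcr.io/k8s-artifacts-prod/build-image/kube-cross:v1.13.15-1
loongnixk8s/pause-mips64le:3.1 k8s.gcr.io/pause-mips64le:3.2
'

DOCKER="echo docker"   # dry run; set DOCKER=docker to actually pull/tag
echo "$retags" | while read -r src dst; do
  [ -n "$src" ] || continue
  $DOCKER pull "$src"
  $DOCKER tag "$src" "$dst"
  $DOCKER rmi "$src"
done
```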
- Modify the build scripts
Upstream Kubernetes has no support for the mips64le instruction set, so the build scripts cannot produce images for this architecture as-is; the following files need to be modified.
vim hack/lib/version.sh
if [[ -z ${KUBE_GIT_TREE_STATE-} ]]; then
  # Check if the tree is dirty. default to dirty
  if git_status=$("${git[@]}" status --porcelain 2>/dev/null) && [[ -z ${git_status} ]]; then
    KUBE_GIT_TREE_STATE="clean"
  else
    KUBE_GIT_TREE_STATE="clean"  # changed from "dirty" to "clean" so locally modified code does not get a -dirty version suffix
  fi
fi
In addition, in vendor/github.com/google/cadvisor/fs/fs.go there is a data type that is incompatible on the mips64le architecture: change buf.Dev to uint64(buf.Dev).
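A hypothetical helper for that edit (a plain textual rewrite; the sed expression is an assumption, so inspect the file afterwards). It first unwraps any existing cast and then wraps every occurrence, which makes it safe to run twice:

```shell
# Wrap every buf.Dev in the given file in a uint64() cast (idempotent).
cast_buf_dev() {
  sed -i 's/uint64(buf\.Dev)/buf.Dev/g; s/buf\.Dev/uint64(buf.Dev)/g' "$1"
}

# intended use against the source tree:
#   cast_buf_dev vendor/github.com/google/cadvisor/fs/fs.go
```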
Add mips64le to KUBE_SUPPORTED_SERVER_PLATFORMS, KUBE_SUPPORTED_NODE_PLATFORMS, KUBE_SUPPORTED_CLIENT_PLATFORMS, and KUBE_SUPPORTED_TEST_PLATFORMS in hack/lib/golang.sh.
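After the edit, each of those lists should contain the new platform. An illustrative excerpt of what one array might look like (the exact set of platforms varies by release, so treat this as a sketch, not the literal file contents):

```shell
# hack/lib/golang.sh -- after adding mips64le (illustrative excerpt)
KUBE_SUPPORTED_SERVER_PLATFORMS=(
  linux/amd64
  linux/arm
  linux/arm64
  linux/s390x
  linux/ppc64le
  linux/mips64le   # added for this port
)
```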
- Run the build command.
One error occurs during compilation that requires a forced type cast (the cadvisor change described above).
KUBE_BASE_IMAGE_REGISTRY=k8s.gcr.io GOOS=linux GOARCH=mips64le KUBE_BUILD_PLATFORMS=linux/mips64le KUBE_BUILD_CONFORMANCE=n KUBE_BUILD_HYPERKUBE=n make release-images GOFLAGS=-v GOGCFLAGS="-N -l" KUBE_BUILD_PULL_LATEST_IMAGES=false
2.3.2 Build the kubelet, kubeadm, and kubectl binaries
Run the following build commands:
docker run --rm -v /home/go/src/k8s.io/Kubernetes:/go/src/k8s.io/kubernetes -it us.gcr.io/k8s-artifacts-prod/build-image/kube-cross:v1.13.15-1 bash
cd /go/src/k8s.io/kubernetes
GOOS=linux GOARCH=mips64le KUBE_BUILD_PLATFORMS=linux/mips64le make all GOFLAGS=-v GOGCFLAGS="-N -l" WHAT=cmd/kubeadm # then build kubectl and kubelet the same way
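The same invocation can be looped over all three components. The `echo` below makes this a dry-run sketch; drop it inside the kube-cross container to actually build (the binaries should then appear under `_output/local/bin/linux/mips64le/`):

```shell
# Dry-run sketch of the three builds; remove the leading "echo" to run.
for what in cmd/kubeadm cmd/kubelet cmd/kubectl; do
  echo GOOS=linux GOARCH=mips64le KUBE_BUILD_PLATFORMS=linux/mips64le \
    make all GOFLAGS=-v GOGCFLAGS="-N -l" WHAT="$what"
done
```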
2.3.3 Use the built images
Upload the built images to the target server, load the kube-apiserver.tar image, and update the kube-apiserver image deployed in the environment; the other components are handled the same way.
# docker load -i kube-apiserver.tar
b1d170ccb364: Loading layer [==================================================>] 162.4MB/162.4MB
3. Installation and Deployment
3.1 Install kubeadm, kubelet, and kubectl
Upload the binaries built above to the target server, move them into /usr/bin, and create the systemd unit and configuration files for kubelet.
[root@k8s-master ~]# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
[root@k8s-master ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
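With both unit files in place, systemd has to reload them and kubelet has to be enabled before running kubeadm init (kubelet will restart in a loop until init completes, which is normal). A sketch, echo-guarded so it can be previewed; drop the guard on the real node:

```shell
SYSTEMCTL="echo systemctl"   # dry run; set SYSTEMCTL=systemctl on the node
$SYSTEMCTL daemon-reload
$SYSTEMCTL enable --now kubelet
$SYSTEMCTL status kubelet --no-pager
```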
3.2 Deploy the Kubernetes cluster
3.2.1 Pull the required images
The following images need to be pulled in advance:
docker pull gebilaoyao/pause-mips64le:3.1
docker pull gebilaoyao/etcd-mips64le:3.3.11
docker pull gebilaoyao/flannel-mips64le:0.10.0
docker pull gebilaoyao/coredns-mips64le:v1.6.7
docker pull gebilaoyao/kube-scheduler-mips64le:v1.18.8
docker pull gebilaoyao/kube-apiserver-mips64le:v1.18.8
docker pull gebilaoyao/kube-controller-manager-mips64le:v1.18.8
docker pull gebilaoyao/kube-proxy-mips64le:v1.18.8
After pulling, retag the images to the names and versions the deployment expects; the exact versions can be seen in the logs during kubeadm init. Once the tags are corrected, proceed with the cluster deployment.
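For the four control-plane images the retagging can be sketched as below. The target names are assumptions (confirm the exact list with `kubeadm config images list --kubernetes-version v1.18.8`), and the `echo` keeps it a dry run:

```shell
for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  src="gebilaoyao/${img}-mips64le:v1.18.8"
  dst="k8s.gcr.io/${img}:v1.18.8"   # assumed target name; verify with kubeadm
  echo docker tag "$src" "$dst"     # drop echo to retag for real
done
```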
3.2.2 Deployment
Deployment will later be handled by automation tooling; for this adaptation it was done manually on the command line, so the individual steps are not detailed here.
3.2.3 Install the network plugin (flannel)
Once the core components are deployed, kubectl get pod -A shows the coredns pod stuck in Pending because no network plugin has been installed; kubectl get node likewise shows the nodes as NotReady. The flannel network plugin can be installed with the YAML file below.
vim flannel-mips64le.yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-mips64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - mips64le
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: gebilaoyao/flannel-mips64le:v0.10.0
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: gebilaoyao/flannel-mips64le:v0.10.0
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "300m"
              memory: "500Mi"
            limits:
              cpu: "300m"
              memory: "500Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
Run kubectl apply -f flannel-mips64le.yaml to create the pods. The flannel pods now run normally, but coredns still does not, and kubectl get node shows the nodes still NotReady. systemctl status kubelet reports the error no valid networks found in /etc/cni/net.d, along with errors that no suitable plugin can be found under /opt/cni/bin. This can be fixed as follows:
cd $GOPATH/src
git clone https://github.com/containernetworking/plugins.git
cd plugins
./build_linux.sh
mkdir -p /opt/cni/bin
cp bin/* /opt/cni/bin/
Adaptation Results
The Kubernetes core components have been brought up successfully on a Loongson CPU (mips64le architecture) server running UOS.
This concludes the walkthrough of adapting kubernetes-1.18.8 to UOS on the Loongson mips64le architecture; hopefully it is useful to other developers.