Setting up a local Kubernetes cluster and implementing automated DevOps builds with GitLab, Harbor, Jenkins and CI/CD

This article walks through setting up a local Kubernetes cluster and using GitLab, Harbor, Jenkins and a CI/CD pipeline to automate DevOps builds. Hopefully it provides some useful reference for solving similar problems; interested developers are welcome to follow along.

Local Kubernetes cluster setup

Preparation: a single Windows machine is enough. Mine has 32 GB of RAM and 6 cores with 2 threads per core. Everything is done from a terminal over SSH; I actually control the Windows box from a Mac, and Windows connects to the virtual machines. Remember to adjust the VM network adapters so the IPs stay fixed later on.

Overview: local Kubernetes cluster (1 master, 2 nodes), then using DevOps tooling (GitLab, Harbor, Jenkins, CI/CD) to automate service builds.

Contents:
Part 1: Environment setup
1. Environment settings  2. Adjust the network adapter  3. Docker setup  4. Install kubelet, kubeadm, kubectl  5. Initialize the master  6. Join worker nodes  7. Deploy the CNI network plugin  8. Test the cluster
Part 2: Working with Kubernetes
1. Deploy nginx  2. Deploy fluentd log collection  3. View container resource usage  4. Install Helm and ingress  5. Bind a ConfigMap to nginx  6. Install NFS and create a StorageClass  7. Install a Redis cluster with Helm  8. Install Prometheus monitoring  9. ELK log collection and search  10. KubeSphere web console
Part 3: Building services with DevOps
1. Install GitLab  2. Install the Harbor image registry  3. Jenkins  4. Build services with Jenkins CI/CD

Environment settings:

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent
# After disabling swap, make sure to reboot the VM!

# Set the hostname according to your plan
hostnamectl set-hostname <hostname>

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.113.120 k8s-master
192.168.113.121 k8s-node1
192.168.113.122 k8s-node2
EOF

Change the network adapter to a static IP (a static IP is optional):
vi /etc/sysconfig/network-scripts/ifcfg-ens33
Change it to the following:
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="7ae1e162-35b2-45d4-b642-65b8452b6bfe"   # any unique UUID
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.190.12"   # your own IP address, it must belong to your subnet
PREFIX="24"
GATEWAY="192.168.190.2"   # gateway
DNS1="192.168.190.2"      # your own gateway / DNS
IPV6_PRIVACY="no"

# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

Step 1: Fix the CentOS repositories
On July 1, 2024 CentOS 7 reached end of life and the CentOS team moved its repositories to the archive at vault.centos.org. Without updating the repository URLs you cannot update or verify packages, which leads to errors. Run this command to download and execute a repair script:
curl -fsSL https://autoinstall.plesk.com/PSA_18.0.62/examiners/repository_check.sh | bash -s -- update >/dev/null

Step 2: Mount the CentOS ISO
Attach the CentOS 7 ISO as a CD drive in the VM and make sure it is connected, otherwise some packages cannot be pulled.
mkdir /mnt/cdrom
mount /dev/cdrom /mnt/cdrom
vi /etc/yum.repos.d/dvd.repo
[dvd]
name=dvd
baseurl=file:///mnt/cdrom/
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-CentOS-7

Step 3: Install tools
https://github.com/wjw1758548031/resource (resource repository)
conntrack-tools has to be installed separately: download the rpm and install it locally; it is required by the kubelet/kubectl installation.
sudo yum localinstall conntrack-tools-1.4.4-7.el7.x86_64.rpm

Step 4: Configure a proxy for Docker
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.1.5:7890"
Environment="HTTPS_PROXY=http://192.168.1.5:7890"
# Put your own node IPs here
Environment="NO_PROXY=localhost,127.0.0.1,192.168.190.10,192.168.190.11,192.168.190.12"
sudo systemctl daemon-reload
sudo systemctl restart docker
# Print the configured proxy
systemctl show --property=Environment docker
export https_proxy=http://192.168.1.5:7890 http_proxy=http://192.168.1.5:7890 all_proxy=socks5://192.168.1.5:7890
Use the IP and port of your own proxy. If you can already pull images, you can skip this step.

Step 5: Install Docker
# 1. Install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# 2. Add the repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Note: if you see an error like the following:
#   Loaded plugins: fastestmirror
#   adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#   grabbing file https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
#   Could not fetch/save url https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to file /etc/yum.repos.d/docker-ce.repo: [Errno 14] curl#60 - "Peer's Certificate issuer is not recognized."
# edit /etc/yum.conf and add sslverify=0 under [main]:
vi /etc/yum.conf
[main]
sslverify=0
# 3. Refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
# 4. Start the Docker service
sudo systemctl enable docker
sudo service docker start

Step 6: Add the Aliyun Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Step 7: Install kubelet, kubeadm and kubectl
yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
systemctl enable kubelet
systemctl start kubelet
# Set Docker's cgroup driver: edit /etc/docker/daemon.json and add the following
# (Docker must use the same driver as Kubernetes)
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# Restart docker
systemctl daemon-reload
systemctl restart docker
-------- Everything above must be executed on every VM --------

Step 8: Initialize the master (run on the master VM)
kubeadm init \
  --apiserver-advertise-address=192.168.190.10 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.6 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
# "initialized successfully" in the output means it worked; copy and save the output.
# --service-cidr and --pod-network-cidr can be any subnets, as long as they are not already in use.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

Step 9: Join the worker nodes (run on each node)
kubeadm join 192.168.113.120:6443 --token w34ha2.66if2c8nwmeat9o7 --discovery-token-ca-cert-hash sha256:20e2227554f8883811c01edd850f0cf2f396589d32b57b9984de3353a7389477
# Get the token with:
kubeadm token list
# Get the hash with:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'

Step 10: Deploy the CNI network plugin (run on the master)
wget https://calico-v3-25.netlify.app/archive/v3.25/manifests/calico.yaml
Edit calico.yaml as follows:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"   # must match --pod-network-cidr from kubeadm init
sed -i 's#docker.io/##g' calico.yaml
grep image calico.yaml   # list the images this yaml will pull
On every node, pull all of these images with docker pull. With a proxy:
HTTP_PROXY=http://192.168.1.6:7890 HTTPS_PROXY=http://192.168.1.6:7890 docker pull docker.io/calico/cni:v3.25.0
If they still cannot be pulled, configure registry mirrors for Docker:
vi /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://docker.m.daocloud.io",
    "https://dockerproxy.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://docker.nju.edu.cn"
  ]
}
Deploy:
kubectl apply -f calico.yaml
Check whether the nodes and pods are healthy and whether anything failed to start or has no network:
kubectl get nodes
kubectl get pod -n kube-system
Use these commands to troubleshoot:
kubectl describe pod calico-node-68dr4 -n kube-system
kubectl logs -l k8s-app=calico-node -n kube-system
Troubleshooting note: sometimes the node service is not running and calico cannot reach the node, so connections to port 10250 fail. That is usually because kubelet is not running on the node; check the logs to see why. Most of the time Docker is not running or a configuration file is wrong.
journalctl -u kubelet   # view the kubelet logs

Test the Kubernetes cluster
# Create a deployment
kubectl create deployment nginx --image=nginx
# Expose a port
kubectl expose deployment nginx --port=80 --type=NodePort
# Check the pod and service information
kubectl get pod,svc
Open the mapped NodePort in a browser to verify.
# Once verified, clean up
kubectl delete services nginx
kubectl delete deploy nginx

Make kubectl usable on all nodes (run on every node):
# Copy the file from the master to the node
scp root@192.168.190.10:/etc/kubernetes/admin.conf /etc/kubernetes
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
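If you later add another worker, or the token printed by kubeadm init has expired (tokens are only valid for 24 hours by default), a fresh join command can be generated on the master instead of reusing the one above; a minimal sketch:

# Run on the master: prints a ready-to-use "kubeadm join ..." command with a new token and the current CA hash
kubeadm token create --print-join-command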
--------- The Kubernetes cluster setup is complete ---------
--------- The following covers working inside the cluster ---------

Step 1: Deploy nginx
# Deploy nginx
kubectl create deploy nginx-deploy --image=nginx
# Dump the deploy configuration
kubectl get deploy nginx-deploy -o yaml
Copy the whole output (drop everything under status:), paste it into nginx-deploy.yaml with vim, and after removing the unneeded settings it looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy
  name: nginx-deploy
  namespace: default
spec:
  replicas: 1
  revisionHistoryLimit: 10   # number of old revisions kept after rolling updates, so you can roll back
  selector:
    matchLabels:
      app: nginx-deploy
  strategy:                  # update strategy
    rollingUpdate:           # rolling-update settings
      maxSurge: 25%          # during a rolling update, how many pods (count or percentage) may exceed the desired replicas
      maxUnavailable: 25%    # during the update, up to 25% of the pods may be unavailable
    type: RollingUpdate      # the strategy type is rolling update
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deploy
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources:
          limits:
            cpu: 200m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
      restartPolicy: Always
      terminationGracePeriodSeconds: 30   # a container that does not exit cleanly gets 30 seconds before it is killed
# Start it
kubectl create -f nginx-deploy.yaml

vi nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx-svc
spec:
  selector:
    app: nginx-deploy
  ports:
  - name: http        # name of this service port
    port: 80          # the service's own port
    targetPort: 80    # the target pod port
  type: NodePort
# Start it
kubectl create -f nginx-svc.yaml

Step 2: Deploy fluentd for log collection
Create fluentd.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      app: logging
  template:
    metadata:
      labels:
        app: logging
        id: fluentd
      name: fluentd
    spec:
      containers:
      - name: fluentd-es
        image: agilestacks/fluentd-elasticsearch:v1.3.0
        env:
        - name: FLUENTD_ARGS
          value: -qq
        volumeMounts:          # paths inside the container; the node's directories are mapped here
        - name: containers
          mountPath: /var/lib/docker/containers
        - name: varlog
          mountPath: /varlog
      volumes:                 # host paths on the node to mount
      - hostPath:
          path: /var/lib/docker/containers
        name: containers
      - hostPath:
          path: /var/log
        name: varlog
Run kubectl create -f fluentd.yaml
Check that the pods started with kubectl get pod.

Step 3: View container resource usage
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server-components.yaml
vi metrics-server-components.yaml
Find the line - --metric-resolution=15s and add below it:
- --kubelet-insecure-tls
kubectl apply -f metrics-server-components.yaml
kubectl get pods --all-namespaces | grep metrics
# Now you can see the CPU and memory used by each pod
kubectl top pods
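Once metrics-server is reporting, the same data can also be viewed per node and sorted to spot the heaviest workloads; a small example:

# Node-level CPU and memory usage
kubectl top nodes
# Pod usage across all namespaces, heaviest memory consumers first
kubectl top pods -A --sort-by=memory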
Step 4: Install Helm and ingress-nginx
wget https://get.helm.sh/helm-v3.10.2-linux-amd64.tar.gz
tar -zxvf helm-v3.10.2-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/helm
# The helm command should now work
helm

# Add the repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# Search and download
helm search repo ingress-nginx
helm pull ingress-nginx/ingress-nginx --version 4.0.1
tar -xf ingress-nginx-4.0.1.tgz
cd ingress-nginx
vi values.yaml
Modify the following:
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
Change kind: Deployment to kind: DaemonSet
nodeSelector:
  ingress: "true"    # add this selector so the controller only runs on nodes labelled ingress=true
Set admissionWebhooks.enabled to false
Change the service type from LoadBalancer to ClusterIP (LoadBalancer only makes sense on a cloud provider)

# Create a dedicated namespace for ingress
kubectl create ns ingress-nginx
# Label the nodes that should run ingress
kubectl label node k8s-node1 ingress=true
# Deploy the ingress controller
helm install ingress-nginx -n ingress-nginx .
# Check that it is running
kubectl get pod -n ingress-nginx

vi wolfcode-nginx-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress              # the resource type is Ingress
metadata:
  name: wolfcode-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:                   # ingress rules; several can be configured
  - host: k8s.wolfcode.cn  # host name; wildcards (*) are allowed
    http:
      paths:               # equivalent to nginx location blocks; several can be configured
      - pathType: Prefix   # path type. ImplementationSpecific defers matching to the IngressClass; Exact requires the URL to match the path exactly (case sensitive); Prefix matches by prefix split on /
        backend:
          service:
            name: nginx-svc    # the service to proxy to
            port:
              number: 80       # the service port
        path: /api             # equivalent to an nginx location prefix match

# Create the ingress
kubectl create -f wolfcode-nginx-ingress.yaml
kubectl get ingress
On the Windows host add a hosts entry:
192.168.190.11 k8s.wolfcode.cn
Then open k8s.wolfcode.cn in a browser and you should reach nginx.
# You may need to turn your proxy/VPN off for this to work

Step 5: Bind a ConfigMap to nginx
# Enter one of the nginx pods
kubectl exec -it nginx-deploy-6b4db948c6-9sw6k -- sh
# Copy the contents of the config
cat nginx.conf
# Exit the pod
exit
# Edit nginx.conf locally and paste the contents in
vi nginx.conf
user  nginx;
worker_processes  auto;
error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;
events {
    worker_connections  1024;
}
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    include /etc/nginx/conf.d/*.conf;
}

# Create the configmap
kubectl create configmap nginx-conf-cm --from-file=./nginx.conf
# Check that it was created
kubectl describe cm nginx-conf-cm

# Edit the earlier nginx-deploy.yaml and replace everything under template.spec with:
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources:
          limits:
            cpu: 200m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf
          name: nginx-conf
          subPath: nginx.conf
      volumes:
      - name: nginx-conf          # mind the indentation here
        configMap:
          name: nginx-conf-cm     # mind the indentation here
          defaultMode: 420
          items:
          - key: nginx.conf
            path: nginx.conf

# Reload it; if apply does not work, delete it and create it again
kubectl apply -f nginx-deploy.yaml

Step 6: Install NFS and create a StorageClass
# Install nfs
yum install nfs-utils -y
# Start nfs
systemctl start nfs-server
# Start on boot
systemctl enable nfs-server
# Check the nfs version
cat /proc/fs/nfsd/versions
# Create the shared directories
mkdir -p /data/nfs
cd /data/nfs
mkdir rw
mkdir ro
# Export the shared directories
vim /etc/exports
/data/nfs/rw 192.168.190.0/24(rw,sync,no_subtree_check,no_root_squash)
/data/nfs/ro 192.168.190.0/24(ro,sync,no_subtree_check,no_root_squash)
# Reload
exportfs -f
systemctl reload nfs-server

vi nfs-provisioner-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

kubectl apply -f nfs-provisioner-rbac.yaml
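Before wiring the provisioner to NFS, it is worth confirming that the export is actually visible from a worker node; a quick check (it assumes nfs-utils is installed on the node and that 192.168.190.10 is the NFS server used in this guide):

# On k8s-node1 / k8s-node2: list the exports published by the NFS server
showmount -e 192.168.190.10
# Optional: mount it temporarily to confirm read/write access, then unmount
mount -t nfs 192.168.190.10:/data/nfs/rw /mnt && touch /mnt/test && rm /mnt/test && umount /mnt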
vi nfs-provisioner-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
  labels:
    app: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.190.10
        - name: NFS_PATH
          value: /data/nfs/rw
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.190.10
          path: /data/nfs/rw

kubectl apply -f nfs-provisioner-deployment.yaml

vi nfs-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  namespace: kube-system
provisioner: fuseim.pri/ifs        # external provisioner name; must match the provisioner deployed above
parameters:
  archiveOnDelete: "false"         # whether to archive; false deletes the data under oldPath, true archives it by renaming the path
reclaimPolicy: Retain              # reclaim policy; the default is Delete, it can be set to Retain
volumeBindingMode: Immediate       # the default Immediate binds as soon as the PVC is created; only azuredisk and AWS EBS support other values

kubectl apply -f nfs-storage-class.yaml
# Check that the StorageClass was created
kubectl get sc
# Check that the provisioner pod is running
kubectl get pod -n kube-system

Step 7: Install a Redis cluster with Helm
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo redis
# First pull the chart locally
helm pull bitnami/redis --version 17.4.3
# Extract it, then edit the parameters in values.yaml
tar -xvf redis-17.4.3.tgz
cd redis/
vi values.yaml
# Set storageClass to managed-nfs-storage
# Set the redis password
# Create the namespace
kubectl create namespace redis
cd ..
# Install
helm install redis ./redis -n redis
# Check that everything started
kubectl get pod -n redis
kubectl get pvc -n redis
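To confirm that Redis actually answers, a throwaway client pod can be used. This is only a sketch: it assumes the bitnami chart's default master Service name (usually redis-master) and the password you set in values.yaml; adjust both if your values differ.

# Run a temporary redis client in the redis namespace and ping the master
kubectl run redis-client --rm -it --image=redis:7 -n redis -- \
  redis-cli -h redis-master -a <password-from-values.yaml> ping
# Expected output: PONG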
Step 8: Install Prometheus monitoring
wget https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.10.0.tar.gz
tar -zxvf v0.10.0.tar.gz
cd kube-prometheus-0.10.0/
kubectl create -f manifests/setup
# then create the monitoring components themselves
kubectl create -f manifests/
# Check that all resources are healthy
kubectl get all -n monitoring
# Check the services
kubectl get svc -n monitoring

# Create prometheus-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: monitoring
  name: prometheus-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.wolfcode.cn        # Grafana domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
  - host: prometheus.wolfcode.cn     # Prometheus domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-k8s
            port:
              number: 9090
  - host: alertmanager.wolfcode.cn   # Alertmanager domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: alertmanager-main
            port:
              number: 9093

# Create the ingress
kubectl apply -f prometheus-ingress.yaml

# Add hosts entries on the Windows host
192.168.190.11 grafana.wolfcode.cn
192.168.190.11 prometheus.wolfcode.cn
192.168.190.11 alertmanager.wolfcode.cn
# These domains now open the corresponding monitoring UIs
# grafana.wolfcode.cn shows the Kubernetes resource dashboards
# The default username and password are both admin; configure the dashboards after logging in
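If the ingress is not reachable yet, the UIs can also be checked with a temporary port-forward from the master; for example (the --address flag makes it reachable from the Windows host rather than only from localhost on the VM):

# Grafana on http://<master-ip>:3000
kubectl port-forward svc/grafana -n monitoring --address 0.0.0.0 3000:3000
# Prometheus on http://<master-ip>:9090
kubectl port-forward svc/prometheus-k8s -n monitoring --address 0.0.0.0 9090:9090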
Step 9: ELK log collection and search
kubectl label node k8s-node1 es=data

vi es.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
---
# RBAC authn and authz
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: kube-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups: [""]
  resources: ["services", "namespaces", "endpoints"]
  verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-logging
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: kube-logging
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""
---
# Elasticsearch deployment itself
apiVersion: apps/v1
kind: StatefulSet                    # pods are created with a StatefulSet
metadata:
  name: elasticsearch-logging        # pods created by a StatefulSet are ordered and numbered
  namespace: kube-logging            # namespace
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    srv: srv-elasticsearch
spec:
  serviceName: elasticsearch-logging # ties the StatefulSet to the svc, so every pod gets a DNS name (es-cluster-[0,1,2].elasticsearch.elk.svc.cluster.local)
  replicas: 1                        # single node
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging # must match the pod template labels
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: docker.io/library/elasticsearch:7.9.3
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 500Mi
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /usr/share/elasticsearch/data/   # mount point
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "discovery.type"     # single-node discovery
          value: "single-node"
        - name: ES_JAVA_OPTS         # JVM memory settings; increase them if needed
          value: "-Xms512m -Xmx2g"
      volumes:
      - name: elasticsearch-logging
        hostPath:
          path: /data/es/
      nodeSelector:                  # add a nodeSelector if the data should land on a specific node
        es: data
      tolerations:
      - effect: NoSchedule
        operator: Exists
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      initContainers:                # run before the main containers start
      - name: elasticsearch-logging-init
        image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]   # raise the mmap count limit; if it is too low ES can fail with memory errors
        securityContext:             # applies only to this container, not to the volumes
          privileged: true           # run as a privileged container
      - name: increase-fd-ulimit
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536"]   # raise the maximum number of file descriptors
        securityContext:
          privileged: true
      - name: elasticsearch-volume-init    # initialize the data directory with 777 permissions
        image: alpine:3.6
        command:
        - chmod
        - -R
        - "777"
        - /usr/share/elasticsearch/data/
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /usr/share/elasticsearch/data/

# Create the namespace
kubectl create ns kube-logging
# Create the service
kubectl create -f es.yaml
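Once the Elasticsearch pod is Running, its health can be checked from inside the cluster before moving on. A sketch using a throwaway curl pod (the curlimages/curl image is an assumption, any image with curl works):

# Query the cluster health through the elasticsearch-logging Service
kubectl run -it --rm curl-test --image=curlimages/curl -n kube-logging -- \
  curl -s "http://elasticsearch-logging:9200/_cluster/health?pretty"
# A "status" of green or yellow means the single-node cluster is usable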
remove_field => "[ingress_log][entry]" }}}}# 处理以srv进行开头的业务服务日志 if [kubernetes][container][name] =~ /^srv*/ { json { source => "message" target => "tmp" } if [kubernetes][namespace] == "kube-logging" { drop{} } if [tmp][level] { mutate{ add_field => {"[applog][level]" => "%{[tmp][level]}"} } if [applog][level] == "debug"{ drop{} } } if [tmp][msg] { mutate { add_field => {"[applog][msg]" => "%{[tmp][msg]}"} } } if [tmp][func] { mutate { add_field => {"[applog][func]" => "%{[tmp][func]}"} } } if [tmp][cost]{ if "ms" in [tmp][cost] { mutate { split => ["[tmp][cost]","m"] add_field => {"[applog][cost]" => "%{[tmp][cost][0]}"} convert => ["[applog][cost]", "float"] } } else { mutate { add_field => {"[applog][cost]" => "%{[tmp][cost]}"} }}}if [tmp][method] { mutate { add_field => {"[applog][method]" => "%{[tmp][method]}"} }}if [tmp][request_url] { mutate { add_field => {"[applog][request_url]" => "%{[tmp][request_url]}"} } }if [tmp][meta._id] { mutate { add_field => {"[applog][traceId]" => "%{[tmp][meta._id]}"} } } if [tmp][project] { mutate { add_field => {"[applog][project]" => "%{[tmp][project]}"} }}if [tmp][time] { mutate { add_field => {"[applog][time]" => "%{[tmp][time]}"} }}if [tmp][status] { mutate { add_field => {"[applog][status]" => "%{[tmp][status]}"} convert => ["[applog][status]", "float"] }}}mutate { rename => ["kubernetes", "k8s"] remove_field => "beat" remove_field => "tmp" remove_field => "[k8s][labels][app]" }}output { elasticsearch { hosts => ["http://elasticsearch-logging:9200"] codec => json index => "logstash-%{+YYYY.MM.dd}" #索引名称以logstash+日志进行每日新建 } } ---apiVersion: v1 kind: ConfigMap metadata: name: logstash-yml namespace: kube-logging labels: type: logstash data: logstash.yml: |- http.host: "0.0.0.0" xpack.monitoring.elasticsearch.hosts: http://elasticsearch-logging:9200kubectl create -f logstash.yamlvi filebeat.yaml --- apiVersion: v1 kind: ConfigMap metadata: name: filebeat-config namespace: kube-logging labels: k8s-app: filebeat data: filebeat.yml: |- filebeat.inputs: - type: container enable: truepaths: - /var/log/containers/*.log #这里是filebeat采集挂载到pod中的日志目录 processors: - add_kubernetes_metadata: #添加k8s的字段用于后续的数据清洗 host: ${NODE_NAME}matchers: - logs_path: logs_path: "/var/log/containers/" #output.kafka:  #如果日志量较大,es中的日志有延迟,可以选择在filebeat和logstash中间加入kafka #  hosts: ["kafka-log-01:9092", "kafka-log-02:9092", "kafka-log-03:9092"] # topic: 'topic-test-log' #  version: 2.0.0 output.logstash: #因为还需要部署logstash进行数据的清洗,因此filebeat是把数据推到logstash中 hosts: ["logstash:5044"] enabled: true --- apiVersion: v1 kind: ServiceAccount metadata: name: filebeat namespace: kube-logging labels: k8s-app: filebeat--- apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRole metadata: name: filebeat labels: k8s-app: filebeat rules: - apiGroups: [""] # "" indicates the core API group resources: - namespaces - pods verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBinding metadata: name: filebeat subjects: - kind: ServiceAccount name: filebeat namespace: kube-logging roleRef: kind: ClusterRole name: filebeat apiGroup: rbac.authorization.k8s.io --- apiVersion: apps/v1 kind: DaemonSet metadata: name: filebeat namespace: kube-logging labels: k8s-app: filebeat spec: selector: matchLabels: k8s-app: filebeat template: metadata: labels: k8s-app: filebeat spec: serviceAccountName: filebeat terminationGracePeriodSeconds: 30 containers: - name: filebeat image: docker.io/kubeimages/filebeat:7.9.3 #该镜像支持arm64和amd64两种架构 args: [ "-c", "/etc/filebeat.yml", 
"-e","-httpprof","0.0.0.0:6060" ] #ports: #  - containerPort: 6060 #    hostPort: 6068 env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: ELASTICSEARCH_HOST value: elasticsearch-logging - name: ELASTICSEARCH_PORT value: "9200" securityContext: runAsUser: 0 # If using Red Hat OpenShift uncomment this: #privileged: true resources: limits: memory: 1000Mi cpu: 1000m requests: memory: 100Mi cpu: 100m volumeMounts: - name: config #挂载的是filebeat的配置文件 mountPath: /etc/filebeat.yml readOnly: true subPath: filebeat.yml - name: data #持久化filebeat数据到宿主机上 mountPath: /usr/share/filebeat/data - name: varlibdockercontainers #这里主要是把宿主机上的源日志目录挂载到filebeat容器中,如果没有修改docker或者containerd的runtime进行了标准的日志落盘路径,可以把mountPath改为/var/lib mountPath: /var/libreadOnly: true - name: varlog #这里主要是把宿主机上/var/log/pods和/var/log/containers的软链接挂载到filebeat容器中 mountPath: /var/log/ readOnly: true - name: timezone mountPath: /etc/localtime volumes: - name: config configMap: defaultMode: 0600 name: filebeat-config - name: varlibdockercontainers hostPath: #如果没有修改docker或者containerd的runtime进行了标准的日志落盘路径,可以把path改为/var/lib path: /var/lib- name: varlog hostPath: path: /var/log/ # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart - name: inputs configMap: defaultMode: 0600 name: filebeat-inputs - name: data hostPath: path: /data/filebeat-data type: DirectoryOrCreate - name: timezone hostPath: path: /etc/localtime tolerations: #加入容忍能够调度到每一个节点 - effect: NoExecute key: dedicated operator: Equal value: gpu - effect: NoSchedule operator: Existskubectl create -f filebeat.yaml vi kibana.yaml---apiVersion: v1kind: ConfigMapmetadata:namespace: kube-loggingname: kibana-configlabels:k8s-app: kibanadata:kibana.yml: |-server.name: kibanaserver.host: "0"i18n.locale: zh-CN                      #设置默认语言为中文elasticsearch:hosts: ${ELASTICSEARCH_HOSTS}         #es集群连接地址,由于我这都都是k8s部署且在一个ns下,可以直接使用service name连接--- apiVersion: v1 kind: Service metadata: name: kibana namespace: kube-logging labels: k8s-app: kibana kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: "Kibana" srv: srv-kibana spec: type: NodePortports: - port: 5601 protocol: TCP targetPort: ui selector: k8s-app: kibana --- apiVersion: apps/v1 kind: Deployment metadata: name: kibana namespace: kube-logging labels: k8s-app: kibana kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile srv: srv-kibana spec: replicas: 1 selector: matchLabels: k8s-app: kibana template: metadata: labels: k8s-app: kibana spec: containers: - name: kibana image: docker.io/kubeimages/kibana:7.9.3 #该镜像支持arm64和amd64两种架构 resources: # need more cpu upon initialization, therefore burstable class limits: cpu: 1000m requests: cpu: 100m env: - name: ELASTICSEARCH_HOSTS value: http://elasticsearch-logging:9200 ports: - containerPort: 5601 name: ui protocol: TCP volumeMounts:- name: configmountPath: /usr/share/kibana/config/kibana.ymlreadOnly: truesubPath: kibana.ymlvolumes:- name: configconfigMap:name: kibana-config--- apiVersion: networking.k8s.io/v1kind: Ingress metadata: name: kibana namespace: kube-logging spec: ingressClassName: nginxrules: - host: kibana.wolfcode.cnhttp: paths: - path: / pathType: Prefixbackend: service:name: kibana port:number: 5601kubectl create -f kibana.yaml# 查看 pod 启用情况kubectl get pod -n kube-logging# 查看svc 复制kibana的端口kubectl get svc -n kube-logging//通过节点的ip加端口就能访问192.168.190.11:32036第一步创建Stack Management索引第二步点击discover 
# Check that the pods are up
kubectl get pod -n kube-logging
# Check the services and note kibana's NodePort
kubectl get svc -n kube-logging
# Kibana is reachable via a node IP plus that port, e.g.
192.168.190.11:32036
Step 1: create an index pattern under Stack Management.
Step 2: open Discover, select the index you created, and all logs become visible.

Step 10: KubeSphere web console
wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml
vi cluster-configuration.yaml
# change storageClass to "managed-nfs-storage"
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
# Follow the installation log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# Check the console port
kubectl get svc/ks-console -n kubesphere-system
# The default port is 30880; on a cloud provider or behind a firewall, remember to open it
# Log in to the console with admin / P@88w0rd
--------- This concludes the Kubernetes operations part ---------
--------- The following uses DevOps tooling to build and deploy services ---------

Step 1: Install GitLab
# Download the package
wget https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7/gitlab-ce-15.9.1-ce.0.el7.x86_64.rpm
# Install
rpm -i gitlab-ce-15.9.1-ce.0.el7.x86_64.rpm
# Edit /etc/gitlab/gitlab.rb
# Set external_url to http://<ip>:28080
# Other settings:
gitlab_rails['time_zone'] = 'Asia/Shanghai'
puma['worker_processes'] = 2
sidekiq['max_concurrency'] = 8
postgresql['shared_buffers'] = "128MB"
postgresql['max_worker_processes'] = 4
prometheus_monitoring['enable'] = false
# Apply the configuration and restart
gitlab-ctl reconfigure
gitlab-ctl restart
sudo systemctl enable gitlab-runsvdir.service

Visit 192.168.190.10:28080
Username: root
Password: cat /etc/gitlab/initial_root_password

In the web UI:
# After logging in, change the default password: avatar in the top-right corner > Preferences > Password; set it to wolfcode
# Allow webhooks to reach the local network:
# Settings > Network > Outbound requests > tick "Allow requests to the local network from web hooks and services"
# Set the default language (global): Settings > Preferences > Localization > Default language > Simplified Chinese > Save changes
# Set the current user's language: avatar in the top-right corner > Preferences > Localization > Language > Simplified Chinese > Save changes

Step 2: Install the Harbor image registry
sudo curl -L "https://github.com/docker/compose/releases/download/v2.20.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
wget https://github.com/goharbor/harbor/releases/download/v2.5.0/harbor-offline-installer-v2.5.0.tgz
tar -xzf harbor-offline-installer-v2.5.0.tgz
cd harbor
cp harbor.yml.tmpl harbor.yml
vi harbor.yml
hostname: 192.168.190.10
port: 8858
# Comment out everything related to https:
# https related config
#https:
  # https port for harbor, default is 443
  # port: 443
  # The path of cert and key files for nginx
  # certificate: /your/certificate/path
  # private_key: /your/private/key/path
harbor_admin_password: wolfcode
./install.sh

# Create the namespace used by the DevOps components, then the pull secret
kubectl create ns kube-devops
kubectl create secret docker-registry harbor-secret --docker-server=192.168.190.10:8858 --docker-username=admin --docker-password=wolfcode -n kube-devops

Visit 192.168.190.10:8858
Username: admin
Password: wolfcode
Create a project named wolfcode.

# Do this on every machine:
vi /etc/docker/daemon.json
# add
"insecure-registries": ["192.168.190.10:8858","registry.cn-hangzhou.aliyuncs.com"],
sudo systemctl daemon-reload
sudo systemctl restart docker
# Try docker login; if it succeeds, the registry is working
docker login -uadmin 192.168.190.10:8858
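After docker login succeeds, a test push confirms that the registry and the wolfcode project are usable end to end; for example:

# Tag any local image for the Harbor project and push it
docker pull nginx:alpine
docker tag nginx:alpine 192.168.190.10:8858/wolfcode/nginx:test
docker push 192.168.190.10:8858/wolfcode/nginx:test
# The image should now appear under the wolfcode project in the Harbor UI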
Step 3: Jenkins
mkdir jenkins
cd jenkins
vi Dockerfile
FROM jenkins/jenkins:lts-jdk11
ADD ./sonar-scanner-cli-4.8.0.2856-linux.zip /usr/local/
USER root
WORKDIR /usr/local/
RUN unzip sonar-scanner-cli-4.8.0.2856-linux.zip
RUN mv sonar-scanner-4.8.0.2856-linux sonar-scanner-cli
RUN ln -s /usr/local/sonar-scanner-cli/bin/sonar-scanner /usr/bin/sonar-scanner
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
USER jenkins

wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.8.0.2856-linux.zip
unzip sonar-scanner-cli-4.8.0.2856-linux.zip
# Build the jenkins image
docker build -t 192.168.190.10:8858/library/jenkins-jdk11:jdk-11 .
# Log in to harbor
docker login --username=admin 192.168.190.10:8858
# Push the image to harbor
docker push 192.168.190.10:8858/library/jenkins-jdk11:jdk-11

kubectl create secret docker-registry aliyum-secret --docker-server=192.168.190.10:8858 --docker-username=admin --docker-password=wolfcode -n kube-devops

vi jenkins-serviceAccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: kube-devops
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: jenkins-admin
  namespace: kube-devops

vi jenkins-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  namespace: kube-devops
spec:
  storageClassName: managed-nfs-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

vi jenkins-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
  namespace: kube-devops
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: jenkins-server
  type: NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: agent
    port: 50000
    protocol: TCP
    targetPort: 50000

vi jenkins-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: kube-devops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      serviceAccountName: jenkins-admin
      imagePullSecrets:
      - name: harbor-secret   # harbor pull secret; change it to your own Aliyun registry secret if needed
      containers:
      - name: jenkins
        image: 192.168.190.10:8858/library/jenkins-jdk11:jdk-11
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
          runAsUser: 0        # run the container as root
        resources:
          limits:
            memory: "2Gi"
            cpu: "1000m"
          requests:
            memory: "500Mi"
            cpu: "500m"
        ports:
        - name: httpport
          containerPort: 8080
        - name: jnlpport
          containerPort: 50000
        livenessProbe:
          httpGet:
            path: "/login"
            port: 8080
          initialDelaySeconds: 90
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: "/login"
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        volumeMounts:
        - name: jenkins-data
          mountPath: /var/jenkins_home
        - name: docker
          mountPath: /run/docker.sock
        - name: docker-home
          mountPath: /usr/bin/docker
        - name: daemon
          mountPath: /etc/docker/daemon.json
          subPath: daemon.json
        - name: kubectl
          mountPath: /usr/bin/kubectl
      volumes:
      - name: kubectl
        hostPath:
          path: /usr/bin/kubectl
      - name: jenkins-data
        persistentVolumeClaim:
          claimName: jenkins-pvc
      - name: docker
        hostPath:
          path: /run/docker.sock   # map the host's docker socket into the container
      - name: docker-home
        hostPath:
          path: /usr/bin/docker
      - name: mvn-setting
        configMap:
          name: mvn-settings
          items:
          - key: settings.xml
            path: settings.xml
      - name: daemon
        hostPath:
          path: /etc/docker/

# Enter the jenkins directory and install jenkins
kubectl apply -f manifests/
# Check that it is running
kubectl get po -n kube-devops
# Check the service port, then open it in a browser
kubectl get svc -n kube-devops
# Read the container log to get the initial admin password
# (it is printed above "This may also be found at: /var/jenkins_home/secrets/initialAdminPassword")
kubectl logs -f <pod-name> -n kube-devops
# Open the UI via a node IP plus the mapped port, e.g.
192.168.190.11:31697
Choose "Install suggested plugins" and set the account/password to admin / wolfcode.
Then install the plugins you need via Manage Jenkins > Plugins > Available plugins:
Build Authorization Token Root
Gitlab
Node and Label parameter
Kubernetes
Config File Provider
Git Parameter

Jenkins + Kubernetes configuration
Go to Dashboard > Manage Jenkins > Nodes > Clouds > New cloud and configure the cluster:
Name: kubernetes
Kubernetes URL: https://kubernetes.default
Disable the https certificate check
Jenkins URL: http://jenkins-service.kube-devops:8080
Save the configuration.
Credentials > System > Global credentials > Add credentials:
Kind "Username with password", fill in the GitLab account and password, ID git-user-pass.
Add the Harbor credentials the same way.

Step 4: Build services with Jenkins CI/CD
Import https://github.com/wjw1758548031/resource/k8s-go-demo into your own GitLab instance; my repository is called resource.
cat ~/.kube/config
Manage Jenkins > Managed files > create a new config and paste the contents in.
In Jenkins, create a new Pipeline job:
- Select the default GitLab connection.
- Under "Build when a change is pushed to GitLab", tick Push Events and Opened Merge Request Events, open Advanced and generate a Secret token.
- Pipeline: select "Pipeline script from SCM", fill in the Git information, and set the script path to Jenkinsfile.
- Copy the URL shown next to "Build when ...".
In the GitLab project go to Settings > Webhooks: paste that URL, disable SSL verification, enable push events, and add the Secret token.
From now on every push to the project builds the image and the container, and the service is reachable at:
http://192.168.190.11:30001/
The Jenkinsfile, Dockerfile and deployment.yaml are already included in the project; use them as they are.
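The Jenkinsfile in the demo repository essentially automates the same steps you could run by hand; a rough shell sketch of that flow (the image tag, manifest path and deployment name here are hypothetical placeholders, not taken from the repository):

# Build the application image and push it to Harbor
docker build -t 192.168.190.10:8858/wolfcode/k8s-go-demo:v1 .
docker push 192.168.190.10:8858/wolfcode/k8s-go-demo:v1
# Apply the manifest that references the new tag and wait for the rollout
kubectl apply -f deployment.yaml
kubectl rollout status deployment/k8s-go-demo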

That concludes this article on setting up a local Kubernetes cluster and implementing automated DevOps builds with GitLab, Harbor, Jenkins and CI/CD. I hope it is helpful.


