This article walks through a quick EFK (Elasticsearch, Fluentd, Kibana) setup on Kubernetes.
1. Introduction
Most readers are already familiar with the ELK stack:
- ElasticSearch: a distributed search and analytics engine.
- Logstash: a data collection engine with real-time pipelining capabilities.
- Kibana: a web platform providing analytics and visualization for Elasticsearch. It can query and interact with data in Elasticsearch indices and generate tables and charts across many dimensions.
Here we replace Logstash with Fluentd and deploy the resulting EFK stack.
1.1 Logstash vs. Fluentd

| Name | Strengths | Weaknesses |
|---|---|---|
| Logstash | Very flexible: a large plugin ecosystem, detailed documentation, and a straightforward configuration format make it usable in many scenarios, and abundant online resources mean almost any problem can be solved. | Weaker performance; large data volumes can become a problem. |
| Fluentd | Good performance, lightweight. | Less flexible. |
1.2 About Fluentd
1.2.1 Fluentd introduction
Fluentd is an efficient log aggregator, written in Ruby, that scales well. For most enterprises it is efficient enough and consumes relatively few resources. A related tool, Fluent Bit, is even more lightweight and uses fewer resources, but its plugin ecosystem is smaller than Fluentd's. Overall, Fluentd is more mature and more widely used, so it is the log collector used here.
1.2.2 How it works
Fluentd scrapes log data from a given set of sources, processes it (converting it into a structured data format), and forwards it to other services such as Elasticsearch or object storage. Fluentd supports more than 300 log storage and analysis integrations, so it is very flexible in this respect. The main steps are:
- First, Fluentd collects data from multiple log sources
- It structures and tags that data
- It then routes the data to multiple target services according to the matching tags
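The collect → structure/tag → route flow above can be sketched in a few lines of Python. This is a toy model, not Fluentd's actual implementation: the tags, patterns, and record shapes are invented for illustration, and `fnmatch` globbing is looser than Fluentd's real `<match>` pattern syntax (where `*` does not cross dots but `**` does).

```python
# Toy model of Fluentd's collect -> structure/tag -> route pipeline.
from fnmatch import fnmatch

def route(event, routes):
    """Return every output whose tag pattern matches the event's tag."""
    return [output for pattern, output in routes if fnmatch(event["tag"], pattern)]

routes = [
    ("kubernetes.*", "elasticsearch"),  # container logs -> Elasticsearch
    ("docker", "elasticsearch"),        # docker daemon logs -> Elasticsearch
    ("kernel", "object-storage"),       # kernel logs -> object storage
]

# A structured, tagged record, as Fluentd would produce from a log source.
event = {"tag": "kubernetes.var.log.containers.app.log", "log": "hello\n"}
print(route(event, routes))  # ['elasticsearch']
```

The key point is that routing is driven entirely by the tag attached at collection time, which is why the tag derivation described in the Fluentd config comments below matters.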
1.2.3 Architecture diagram
2. Deployment
This walkthrough uses a test environment: one master node and four worker nodes.
- Kubernetes: v1.17.3
- Kibana image: kibana:7.6.2
- Elasticsearch image: elasticsearch:7.6.2
- Fluentd image: willdockerhub/fluentd-elasticsearch:v2.3.2
2.1 Elasticsearch
2.1.1 es-statefulset.yaml
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es
  namespace: logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      nodeSelector:
        es: log
      initContainers:
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: elasticsearch:7.6.2
        ports:
        - name: rest
          containerPort: 9200
        - name: inter
          containerPort: 9300
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 1000m
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: cluster.initial_master_nodes
          value: "es-0,es-1,es-2"
        - name: discovery.zen.minimum_master_nodes
          value: "2"
        - name: discovery.seed_hosts
          value: "elasticsearch"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        - name: network.host
          value: "0.0.0.0"
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-logging
      annotations:
        volume.beta.kubernetes.io/storage-class: course-nfs-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```
The volumeClaimTemplates setup follows a blog post on Jianshu.
2.1.2 es-svc.yaml
```yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node
```
2.1.3 Applying the Elasticsearch manifests
If the logging namespace does not exist yet, create it first with kubectl create namespace logging, then apply the manifests:
```shell
$ kubectl create -f es-statefulset.yaml -n logging
$ kubectl create -f es-svc.yaml -n logging
$ kubectl get pods -n logging
NAME   READY   STATUS    RESTARTS   AGE
es-0   1/1     Running   0          6h9m
es-1   1/1     Running   0          6h9m
es-2   1/1     Running   0          6h8m
```

PS: the StatefulSet above uses nodeSelector es: log, so the target nodes must carry that label first (e.g. kubectl label nodes <node-name> es=log).

```shell
$ kubectl get svc -n logging
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   6h17m
```
2.1.4 Testing Elasticsearch
```shell
$ kubectl port-forward es-0 9200:9200 --namespace=logging
Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200
```
Then, in a new terminal:

```shell
$ curl http://localhost:9200/_cluster/state?pretty
```

If the cluster is healthy, this returns the full cluster state as JSON.
2.2 Kibana
2.2.1 kibana.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  type: NodePort
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      nodeSelector:
        es: log
      containers:
      - name: kibana
        image: kibana:7.6.2
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 1000m
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
```
2.2.2 Deploying the Kibana manifests
```shell
$ kubectl create -f kibana.yaml -n logging
$ kubectl get pods -n logging
NAME                      READY   STATUS    RESTARTS   AGE
es-0                      1/1     Running   0          6h9m
es-1                      1/1     Running   0          6h9m
es-2                      1/1     Running   0          6h8m
kibana-7fc6d9dcbf-9twkd   1/1     Running   0          5h22m
$ kubectl get svc -n logging
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None            <none>        9200/TCP,9300/TCP   6h9m
kibana          NodePort    10.108.169.43   <none>        5601:32355/TCP      5h14m
```
2.2.3 Testing Kibana
Open 10.104.61.249:32355 (a node IP plus the NodePort shown above) in a browser to reach the Kibana UI.
2.3 Fluentd
2.3.1 fluentd-es-configmap.yaml
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-es-config-v0.2.0
  namespace: logging
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>

  containers.input.conf: |-
    # This configuration file for Fluentd / td-agent is used
    # to watch changes to Docker log files. The kubelet creates symlinks that
    # capture the pod name, namespace, container name & Docker container ID
    # to the docker logs for pods in the /var/log/containers directory on the host.
    # If running this fluentd configuration in a Docker container, the /var/log
    # directory should be mounted in the container.
    #
    # These logs are then submitted to Elasticsearch which assumes the
    # installation of the fluent-plugin-elasticsearch & the
    # fluent-plugin-kubernetes_metadata_filter plugins.
    # See https://github.com/uken/fluent-plugin-elasticsearch &
    # https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
    # more information about the plugins.
    #
    # Example
    # =======
    # A line in the Docker log file might look like this JSON:
    #
    # {"log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #  "stream":"stderr",
    #  "time":"2014-09-25T21:15:03.499185026Z"}
    #
    # The time_format specification below makes sure we properly
    # parse the time format produced by Docker. This will be
    # submitted to Elasticsearch and should appear like:
    # $ curl 'http://elasticsearch-logging:9200/_search?pretty'
    # ...
    # {
    #   "_index" : "logstash-2014.09.25",
    #   "_type" : "fluentd",
    #   "_id" : "VBrbor2QTuGpsQyTCdfzqA",
    #   "_score" : 1.0,
    #   "_source":{"log":"2014/09/25 22:45:50 Got request with path wombat\n",
    #              "stream":"stderr","tag":"docker.container.all",
    #              "@timestamp":"2014-09-25T22:45:50+00:00"}
    # },
    # ...
    #
    # The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
    # record & add labels to the log record if properly configured. This enables users
    # to filter & search logs on any metadata.
    # For example a Docker container's logs might be in the directory:
    #
    #  /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
    #
    # and in the file:
    #
    #  997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # where 997599971ee6... is the Docker ID of the running container.
    # The Kubernetes kubelet makes a symbolic link to this file on the host machine
    # in the /var/log/containers directory which includes the pod name and the Kubernetes
    # container name:
    #
    #   synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #   ->
    #   /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # The /var/log directory on the host is mapped to the /var/log directory in the container
    # running this instance of Fluentd and we end up collecting the file:
    #
    #   /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # This results in the tag:
    #
    #  var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # The Kubernetes fluentd plugin is used to extract the namespace, pod name & container name
    # which are added to the log message as a kubernetes field object & the Docker container ID
    # is also added under the docker field object.
    # The final tag is:
    #
    #   kubernetes.var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # And the final log record look like:
    #
    # {
    #   "log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #   "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z",
    #   "kubernetes": {
    #     "namespace": "default",
    #     "pod_name": "synthetic-logger-0.25lps-pod",
    #     "container_name": "synth-lgr"
    #   },
    #   "docker": {
    #     "container_id": "997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b"
    #   }
    # }
    #
    # This makes it easier for users to search for logs by pod name or by
    # the name of the Kubernetes container regardless of how many times the
    # Kubernetes pod has been restarted (resulting in a several Docker container IDs).
    #
    # Json Log Example:
    # {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
    # CRI Log Example:
    # 2016-02-17T00:04:05.931087621Z stdout F [info:2016-02-16T16:04:05.930-08:00] Some log text here
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>

    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    <filter kubernetes.**>
      @type record_transformer
      remove_keys $.docker.container_id,$.kubernetes.container_image_id,$.kubernetes.pod_id,$.kubernetes.namespace_id,$.kubernetes.master_url,$.kubernetes.labels.pod-template-hash,$.kubernetes.pod_name,$.stream,$.tag
    </filter>

    <filter kubernetes.**>
      @id filter_log
      @type grep
      <regexp>
        key $.kubernetes.labels.logging
        pattern ^true$
      </regexp>
    </filter>

    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>

  system.input.conf: |-
    # Example:
    # 2015-12-21 23:17:22,066 [salt.state       ][INFO    ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081
    <source>
      @id minion
      @type tail
      format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
      time_format %Y-%m-%d %H:%M:%S
      path /var/log/salt/minion
      pos_file /var/log/salt.pos
      tag salt
    </source>

    # Example:
    # Dec 21 23:17:22 gke-foo-1-1-4b5cbd14-node-4eoj startupscript: Finished running startup script /var/run/google.startup.script
    <source>
      @id startupscript.log
      @type tail
      format syslog
      path /var/log/startupscript.log
      pos_file /var/log/es-startupscript.log.pos
      tag startupscript
    </source>

    # Examples:
    # time="2016-02-04T06:51:03.053580605Z" level=info msg="GET /containers/json"
    # time="2016-02-04T07:53:57.505612354Z" level=error msg="HTTP Error" err="No such image: -f" statusCode=404
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      @id docker.log
      @type tail
      format /^time="(?<time>[^"]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
      path /var/log/docker.log
      pos_file /var/log/es-docker.log.pos
      tag docker
    </source>

    # Example:
    # 2016/02/04 06:52:38 filePurge: successfully removed file /var/etcd/data/member/wal/00000000000006d0-00000000010a23d1.wal
    <source>
      @id etcd.log
      @type tail
      # Not parsing this, because it doesn't have anything particularly useful to
      # parse out of it (like severities).
      format none
      path /var/log/etcd.log
      pos_file /var/log/es-etcd.log.pos
      tag etcd
    </source>

    # Multi-line parsing is required for all the kube logs because very large log
    # statements, such as those that include entire object bodies, get split into
    # multiple lines by glog.
    # Example:
    # I0204 07:32:30.020537    3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537]
    <source>
      @id kubelet.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kubelet.log
      pos_file /var/log/es-kubelet.log.pos
      tag kubelet
    </source>

    # Example:
    # I1118 21:26:53.975789       6 proxier.go:1096] Port "nodePort for kube-system/default-http-backend:http" (:31429/tcp) was open before and is still needed
    <source>
      @id kube-proxy.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-proxy.log
      pos_file /var/log/es-kube-proxy.log.pos
      tag kube-proxy
    </source>

    # Example:
    # I0204 07:00:19.604280       5 handlers.go:131] GET /api/v1/nodes: (1.624207ms) 200 [[kube-controller-manager/v1.1.3 (linux/amd64) kubernetes/6a81b50] 127.0.0.1:38266]
    <source>
      @id kube-apiserver.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-apiserver.log
      pos_file /var/log/es-kube-apiserver.log.pos
      tag kube-apiserver
    </source>

    # Example:
    # I0204 06:55:31.872680       5 servicecontroller.go:277] LB already exists and doesn't need update for service kube-system/kube-ui
    <source>
      @id kube-controller-manager.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-controller-manager.log
      pos_file /var/log/es-kube-controller-manager.log.pos
      tag kube-controller-manager
    </source>

    # Example:
    # W0204 06:49:18.239674       7 reflector.go:245] pkg/scheduler/factory/factory.go:193: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [2578313/2577886]) [2579312]
    <source>
      @id kube-scheduler.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-scheduler.log
      pos_file /var/log/es-kube-scheduler.log.pos
      tag kube-scheduler
    </source>

    # Example:
    # I0603 15:31:05.793605       6 cluster_manager.go:230] Reading config from path /etc/gce.conf
    <source>
      @id glbc.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/glbc.log
      pos_file /var/log/es-glbc.log.pos
      tag glbc
    </source>

    # Example:
    # I0603 15:31:05.793605       6 cluster_manager.go:230] Reading config from path /etc/gce.conf
    <source>
      @id cluster-autoscaler.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/cluster-autoscaler.log
      pos_file /var/log/es-cluster-autoscaler.log.pos
      tag cluster-autoscaler
    </source>

    # Logs from systemd-journal for interesting services.
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      @id journald-docker
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "docker.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-docker.pos
      </storage>
      read_from_head true
      tag docker
    </source>

    <source>
      @id journald-container-runtime
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "{{ fluentd_container_runtime_service }}.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-container-runtime.pos
      </storage>
      read_from_head true
      tag container-runtime
    </source>

    <source>
      @id journald-kubelet
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-kubelet.pos
      </storage>
      read_from_head true
      tag kubelet
    </source>

    <source>
      @id journald-node-problem-detector
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "node-problem-detector.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-node-problem-detector.pos
      </storage>
      read_from_head true
      tag node-problem-detector
    </source>

    <source>
      @id kernel
      @type systemd
      matches [{ "_TRANSPORT": "kernel" }]
      <storage>
        @type local
        persistent true
        path /var/log/kernel.pos
      </storage>
      <entry>
        fields_strip_underscores true
        fields_lowercase true
      </entry>
      read_from_head true
      tag kernel
    </source>

  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      @id forward
      @type forward
    </source>

  monitoring.conf: |-
    # Prometheus Exporter Plugin
    # input plugin that exports metrics
    <source>
      @id prometheus
      @type prometheus
    </source>

    <source>
      @id monitor_agent
      @type monitor_agent
    </source>

    # input plugin that collects metrics from MonitorAgent
    <source>
      @id prometheus_monitor
      @type prometheus_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for output plugin
    <source>
      @id prometheus_output_monitor
      @type prometheus_output_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for in_tail plugin
    <source>
      @id prometheus_tail_monitor
      @type prometheus_tail_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

  output.conf: |-
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      include_tag_key true
      host elasticsearch
      port 9200
      logstash_format true
      logstash_prefix k8s   # set the index prefix to k8s
      request_timeout 30s
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        total_limit_size 500M
        overflow_action block
      </buffer>
    </match>
```
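The `<parse>` block in containers.input.conf above tries Docker's JSON log format first and falls back to a CRI-style regex. A rough Python equivalent of that fallback logic (illustrative only; the real work is done by the fluent-plugin-multi-format-parser plugin, and the regex here mirrors the one in the config):

```python
import json
import re

# CRI log lines look like: "<time> <stream> <flags> <log>"
CRI_RE = re.compile(r'^(?P<time>.+) (?P<stream>stdout|stderr) [^ ]* (?P<log>.*)$')

def parse_line(line):
    """Try Docker's JSON log format first, then the CRI regex, like multi_format."""
    try:
        rec = json.loads(line)
        if isinstance(rec, dict) and "log" in rec:
            return rec
    except json.JSONDecodeError:
        pass
    m = CRI_RE.match(line)
    return m.groupdict() if m else None

docker_line = '{"log":"hello\\n","stream":"stderr","time":"2014-09-25T21:15:03.499185026Z"}'
cri_line = '2016-02-17T00:04:05.931087621Z stdout F some log text'

print(parse_line(docker_line)["stream"])  # stderr
print(parse_line(cri_line)["log"])        # some log text
```

Either way the result is a structured record with time, stream, and log fields, which the downstream filters then enrich and prune.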
2.3.2 fluentd-es-ds.yaml
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: logging
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es-v2.3.2
  namespace: logging
  labels:
    k8s-app: fluentd-es
    version: v2.3.2
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.3.2
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        version: v2.3.2
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: willdockerhub/fluentd-elasticsearch:v2.3.2
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /home/docker/data/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
        ports:
        - containerPort: 24231
          name: prometheus
          protocol: TCP
        livenessProbe:
          tcpSocket:
            port: prometheus
          initialDelaySeconds: 5
          timeoutSeconds: 10
        readinessProbe:
          tcpSocket:
            port: prometheus
          initialDelaySeconds: 5
          timeoutSeconds: 10
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /home/docker/data/containers
      - name: config-volume
        configMap:
          name: fluentd-es-config-v0.2.0
```
To allow flexible control over which nodes' logs are collected, a nodeSelector is also added to the DaemonSet's pod template:

```yaml
nodeSelector:
  beta.kubernetes.io/fluentd-ds-ready: "true"
```
To collect a node's logs, the node must carry the label above. Here every node is labeled:
```shell
$ kubectl label nodes <node-name> beta.kubernetes.io/fluentd-ds-ready=true
$ kubectl get nodes --show-labels
NAME              STATUS   ROLES    AGE   VERSION   LABELS
v10-104-141-164   Ready    <none>   69d   v1.17.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=v10-104-141-164,kubernetes.io/os=linux
v10-104-61-249    Ready    master   69d   v1.17.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=v10-104-61-249,kubernetes.io/os=linux,node-role.kubernetes.io/master=
v10-104-61-251    Ready    <none>   69d   v1.17.3   app=ingress,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/os=linux,es=log,kubernetes.io/arch=amd64,kubernetes.io/hostname=v10-104-61-251,kubernetes.io/os=linux
v10-104-61-252    Ready    <none>   69d   v1.17.3   app=ingress,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/os=linux,es=log,kubernetes.io/arch=amd64,kubernetes.io/hostname=v10-104-61-252,kubernetes.io/os=linux
v10-104-61-253    Ready    <none>   69d   v1.17.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/os=linux,es=log,kubernetes.io/arch=amd64,kubernetes.io/hostname=v10-104-61-253,kubernetes.io/os=linux
```
One thing to watch out for here is the Docker root directory:

```shell
$ docker info
Docker Root Dir: /home/docker/data
```

Because the root directory was changed on these nodes, the hostPath used to reach container logs above must be /home/docker/data/containers. This detail is critical: if you have not changed the Docker root directory, use the default /var/lib/docker/containers instead.
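The mapping from Docker Root Dir to the DaemonSet hostPath can be sanity-checked with a one-liner. DOCKER_ROOT is hard-coded here for illustration; on a live node it would come from `docker info` (e.g. `docker info -f '{{.DockerRootDir}}'` on recent Docker versions; verify the flag on yours):

```shell
# Derive the container-log hostPath from the Docker root directory.
DOCKER_ROOT=/home/docker/data   # on a node: DOCKER_ROOT=$(docker info -f '{{.DockerRootDir}}')
CONTAINERS_DIR="${DOCKER_ROOT}/containers"
echo "$CONTAINERS_DIR"
```

Whatever this prints is the path that both the volumeMounts and volumes entries of the DaemonSet must use.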
2.3.3 Deploying Fluentd
```shell
$ kubectl create -f fluentd-es-configmap.yaml -n logging
$ kubectl create -f fluentd-es-ds.yaml -n logging
$ kubectl get pods -n logging
NAME                      READY   STATUS    RESTARTS   AGE
es-0                      1/1     Running   0          6h14m
es-1                      1/1     Running   0          6h13m
es-2                      1/1     Running   0          6h13m
fluentd-es-v2.3.2-64phl   1/1     Running   0          5h18m
fluentd-es-v2.3.2-fqtpf   1/1     Running   0          5h18m
fluentd-es-v2.3.2-gjk72   1/1     Running   0          5h18m
fluentd-es-v2.3.2-m8bv5   1/1     Running   0          5h18m
fluentd-es-v2.3.2-rb6z6   1/1     Running   0          5h18m
kibana-7fc6d9dcbf-9twkd   1/1     Running   0          5h26m
```
2.3.4 Testing Fluentd
Deploy a simple test application. Create a counter.yaml file with the following contents:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
  labels:
    logging: "true"  # required for collection: the grep filter in the fluentd config only keeps pods with this label
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
```
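To preview what those log lines look like without deploying anything, a bounded version of the same loop can be run locally (three iterations instead of an infinite loop):

```shell
# Same output format as the counter pod, limited to 3 lines.
i=0
while [ "$i" -lt 3 ]; do
  echo "$i: $(date)"
  i=$((i+1))
done
```

In Kibana these lines should then show up under the k8s-* index pattern once the pod is running.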
At this point, the EFK stack is basically ready to use.
References:
- Yangming's blog
- A simple single-node example project on GitHub
- A quick-setup blog post
- A guide to Kubernetes YAML files
- NFS-related material
- Project repository
All the materials from this setup are collected in the project repository; corrections and feedback are welcome.
Since this was a quick setup, there is certainly room for improvement, and the write-up will continue to be updated.