K8S Log Collection Solution: EFK Deployment

2024-03-16 06:44

This article walks through deploying an EFK (Elasticsearch, Fluentd, Kibana) log collection stack on Kubernetes using the ECK operator. It is intended as a practical reference for developers who need cluster-wide log collection.

EFK Architecture and Workflow

(Architecture diagram omitted: Fluentd runs as a DaemonSet on every node, tails container logs, and ships them to Elasticsearch; Kibana reads from Elasticsearch for search and visualization.)

Deployment Notes

ECK (Elastic Cloud on Kubernetes): 2.7
Kubernetes: 1.23.0

Preparing the Files

crds.yaml
Download: https://download.elastic.co/downloads/eck/2.7.0/crds.yaml

operator.yaml
Download: https://download.elastic.co/downloads/eck/2.7.0/operator.yaml
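
A quick way to fetch both manifests ahead of time (assuming wget is available on the machine where kubectl runs):

# Download the ECK 2.7.0 CRDs and operator manifests
wget https://download.elastic.co/downloads/eck/2.7.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.7.0/operator.yaml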

elastic.yaml

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: efk-elastic
  namespace: elastic-system
spec:
  version: 8.12.2
  nodeSets:
  - name: default
    count: 3
    config:
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
        storageClassName: openebs-hostpath
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms2g -Xmx2g
          resources:
            requests:
              memory: 2Gi
              cpu: 2
            limits:
              memory: 4Gi
              cpu: 2
        # https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-virtual-memory.html
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ["sh", "-c", "sysctl -w vm.max_map_count=262144"]
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic-system
  labels:
    app: efk-elastic-nodeport
  name: efk-elastic-nodeport
spec:
  sessionAffinity: None
  selector:
    common.k8s.elastic.co/type: elasticsearch
    elasticsearch.k8s.elastic.co/cluster-name: efk-elastic
  ports:
  - name: http-9200
    protocol: TCP
    targetPort: 9200
    port: 9200
    nodePort: 30920
  - name: http-9300
    protocol: TCP
    targetPort: 9300
    port: 9300
    nodePort: 30921
  type: NodePort
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: efk-kibana
  namespace: elastic-system
spec:
  version: 8.12.2
  count: 1
  elasticsearchRef:
    name: efk-elastic
    namespace: elastic-system
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic-system
  labels:
    app: efk-kibana-nodeport
  name: efk-kibana-nodeport
spec:
  sessionAffinity: None
  selector:
    common.k8s.elastic.co/type: kibana
    kibana.k8s.elastic.co/name: efk-kibana
  ports:
  - name: http-5601
    protocol: TCP
    targetPort: 5601
    port: 5601
    nodePort: 30922
  type: NodePort

fluentd-es-configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-es-config
  namespace: elastic-system
data:
  fluent.conf: |-
    # https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/docker-image/v1.11/debian-elasticsearch7/conf/fluent.conf
    @include "#{ENV['FLUENTD_SYSTEMD_CONF'] || 'systemd'}.conf"
    @include "#{ENV['FLUENTD_PROMETHEUS_CONF'] || 'prometheus'}.conf"
    @include kubernetes.conf
    @include conf.d/*.conf
    <match kubernetes.**>
      # https://github.com/kubernetes/kubernetes/issues/23001
      @type elasticsearch_dynamic
      @id kubernetes_elasticsearch
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
      reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
      reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
      log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
      logstash_prefix logstash-${record['kubernetes']['namespace_name']}
      logstash_dateformat "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_DATEFORMAT'] || '%Y.%m.%d'}"
      logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
      index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'logstash'}"
      target_index_key "#{ENV['FLUENT_ELASTICSEARCH_TARGET_INDEX_KEY'] || use_nil}"
      type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
      include_timestamp "#{ENV['FLUENT_ELASTICSEARCH_INCLUDE_TIMESTAMP'] || 'false'}"
      template_name "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_NAME'] || use_nil}"
      template_file "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_FILE'] || use_nil}"
      template_overwrite "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_OVERWRITE'] || use_default}"
      sniffer_class_name "#{ENV['FLUENT_SNIFFER_CLASS_NAME'] || 'Fluent::Plugin::ElasticsearchSimpleSniffer'}"
      request_timeout "#{ENV['FLUENT_ELASTICSEARCH_REQUEST_TIMEOUT'] || '5s'}"
      suppress_type_name "#{ENV['FLUENT_ELASTICSEARCH_SUPPRESS_TYPE_NAME'] || 'true'}"
      enable_ilm "#{ENV['FLUENT_ELASTICSEARCH_ENABLE_ILM'] || 'false'}"
      ilm_policy_id "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_ID'] || use_default}"
      ilm_policy "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY'] || use_default}"
      ilm_policy_overwrite "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_OVERWRITE'] || 'false'}"
      <buffer>
        flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
        chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
        queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
    </match>
    <match **>
      @type elasticsearch
      @id out_es
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
      reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
      reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
      log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
      logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
      logstash_dateformat "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_DATEFORMAT'] || '%Y.%m.%d'}"
      logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
      index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'logstash'}"
      target_index_key "#{ENV['FLUENT_ELASTICSEARCH_TARGET_INDEX_KEY'] || use_nil}"
      type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
      include_timestamp "#{ENV['FLUENT_ELASTICSEARCH_INCLUDE_TIMESTAMP'] || 'false'}"
      template_name "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_NAME'] || use_nil}"
      template_file "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_FILE'] || use_nil}"
      template_overwrite "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_OVERWRITE'] || use_default}"
      sniffer_class_name "#{ENV['FLUENT_SNIFFER_CLASS_NAME'] || 'Fluent::Plugin::ElasticsearchSimpleSniffer'}"
      request_timeout "#{ENV['FLUENT_ELASTICSEARCH_REQUEST_TIMEOUT'] || '5s'}"
      suppress_type_name "#{ENV['FLUENT_ELASTICSEARCH_SUPPRESS_TYPE_NAME'] || 'true'}"
      enable_ilm "#{ENV['FLUENT_ELASTICSEARCH_ENABLE_ILM'] || 'false'}"
      ilm_policy_id "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_ID'] || use_default}"
      ilm_policy "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY'] || use_default}"
      ilm_policy_overwrite "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_OVERWRITE'] || 'false'}"
      <buffer>
        flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
        chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
        queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
    </match>
  kubernetes.conf: |-
    # https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/docker-image/v1.11/debian-elasticsearch7/conf/kubernetes.conf
    <label @FLUENT_LOG>
      <match fluent.**>
        @type null
        @id ignore_fluent_logs
      </match>
    </label>
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>
    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>
    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>
    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>
    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>
    <source>
      @type tail
      @id in_tail_minion
      path /var/log/salt/minion
      pos_file /var/log/fluentd-salt.pos
      tag salt
      <parse>
        @type regexp
        expression /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
        time_format %Y-%m-%d %H:%M:%S
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_startupscript
      path /var/log/startupscript.log
      pos_file /var/log/fluentd-startupscript.log.pos
      tag startupscript
      <parse>
        @type syslog
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_docker
      path /var/log/docker.log
      pos_file /var/log/fluentd-docker.log.pos
      tag docker
      <parse>
        @type regexp
        expression /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_etcd
      path /var/log/etcd.log
      pos_file /var/log/fluentd-etcd.log.pos
      tag etcd
      <parse>
        @type none
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_kubelet
      multiline_flush_interval 5s
      path /var/log/kubelet.log
      pos_file /var/log/fluentd-kubelet.log.pos
      tag kubelet
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_kube_proxy
      multiline_flush_interval 5s
      path /var/log/kube-proxy.log
      pos_file /var/log/fluentd-kube-proxy.log.pos
      tag kube-proxy
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_kube_apiserver
      multiline_flush_interval 5s
      path /var/log/kube-apiserver.log
      pos_file /var/log/fluentd-kube-apiserver.log.pos
      tag kube-apiserver
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_kube_controller_manager
      multiline_flush_interval 5s
      path /var/log/kube-controller-manager.log
      pos_file /var/log/fluentd-kube-controller-manager.log.pos
      tag kube-controller-manager
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_kube_scheduler
      multiline_flush_interval 5s
      path /var/log/kube-scheduler.log
      pos_file /var/log/fluentd-kube-scheduler.log.pos
      tag kube-scheduler
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_rescheduler
      multiline_flush_interval 5s
      path /var/log/rescheduler.log
      pos_file /var/log/fluentd-rescheduler.log.pos
      tag rescheduler
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_glbc
      multiline_flush_interval 5s
      path /var/log/glbc.log
      pos_file /var/log/fluentd-glbc.log.pos
      tag glbc
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_cluster_autoscaler
      multiline_flush_interval 5s
      path /var/log/cluster-autoscaler.log
      pos_file /var/log/fluentd-cluster-autoscaler.log.pos
      tag cluster-autoscaler
      <parse>
        @type kubernetes
      </parse>
    </source>
    # Example:
    # 2017-02-09T00:15:57.992775796Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" ip="104.132.1.72" method="GET" user="kubecfg" as="<self>" asgroups="<lookup>" namespace="default" uri="/api/v1/namespaces/default/pods"
    # 2017-02-09T00:15:57.993528822Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" response="200"
    <source>
      @type tail
      @id in_tail_kube_apiserver_audit
      multiline_flush_interval 5s
      path /var/log/kubernetes/kube-apiserver-audit.log
      pos_file /var/log/kube-apiserver-audit.log.pos
      tag kube-apiserver-audit
      <parse>
        @type multiline
        format_firstline /^\S+\s+AUDIT:/
        # Fields must be explicitly captured by name to be parsed into the record.
        # Fields may not always be present, and order may change, so this just looks
        # for a list of key="\"quoted\" value" pairs separated by spaces.
        # Unknown fields are ignored.
        # Note: We can't separate query/response lines as format1/format2 because
        #       they don't always come one after the other for a given query.
        format1 /^(?<time>\S+) AUDIT:(?: (?:id="(?<id>(?:[^"\\]|\\.)*)"|ip="(?<ip>(?:[^"\\]|\\.)*)"|method="(?<method>(?:[^"\\]|\\.)*)"|user="(?<user>(?:[^"\\]|\\.)*)"|groups="(?<groups>(?:[^"\\]|\\.)*)"|as="(?<as>(?:[^"\\]|\\.)*)"|asgroups="(?<asgroups>(?:[^"\\]|\\.)*)"|namespace="(?<namespace>(?:[^"\\]|\\.)*)"|uri="(?<uri>(?:[^"\\]|\\.)*)"|response="(?<response>(?:[^"\\]|\\.)*)"|\w+="(?:[^"\\]|\\.)*"))*/
        time_format %Y-%m-%dT%T.%L%Z
      </parse>
    </source>
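
Because the kubernetes.** match uses the elasticsearch_dynamic output with logstash_prefix logstash-${record['kubernetes']['namespace_name']}, every namespace is written to its own daily index, with the date suffix coming from logstash_dateformat (%Y.%m.%d). As an illustration, logs from the default and kube-system namespaces would land in indices named like:

logstash-default-2024.03.16
logstash-kube-system-2024.03.16

This per-namespace naming is also why the index cleaner below matches on the prefix logstash-*.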

fluentd-es-ds.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: elastic-system
  labels:
    app: fluentd-es
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    app: fluentd-es
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    app: fluentd-es
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: elastic-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: elastic-system
  labels:
    app: fluentd-es
spec:
  selector:
    matchLabels:
      app: fluentd-es
  template:
    metadata:
      labels:
        app: fluentd-es
    spec:
      serviceAccount: fluentd-es
      serviceAccountName: fluentd-es
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-es
        image: fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch8-2
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: efk-elastic-es-http
        # default user
        - name: FLUENT_ELASTICSEARCH_USER
          value: elastic
        # the password is already present from the Elasticsearch deployment
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: efk-elastic-es-elastic-user
              key: elastic
        # Elasticsearch standard port
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        # the ECK operator serves HTTPS by default
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "https"
        # don't need systemd logs for now
        - name: FLUENTD_SYSTEMD_CONF
          value: disable
        # the certificates are self-signed, so TLS verification must be disabled
        - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
          value: "false"
        # to avoid issue https://github.com/uken/fluent-plugin-elasticsearch/issues/525
        - name: FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS
          value: "false"
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /fluentd/etc
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-es-config

index-cleaner.yaml

apiVersion: batch/v1  
kind: CronJob  
metadata:
  name: index-cleaner
  namespace: elastic-system
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      backoffLimit: 2
      template:
        spec:
          containers:
          - name: index-cleaner
            image: hcd1129/es-index-cleaner:latest
            env:
            - name: DAYS
              value: "2"
            - name: PREFIX
              value: logstash-*
            - name: ES_HOST
              value: https://efk-elastic-es-http:9200
            - name: ES_USER
              value: elastic
            - name: ES_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: elastic
                  name: efk-elastic-es-elastic-user
          restartPolicy: Never

es-index-cleaner image: hcd1129/es-index-cleaner (referenced by the CronJob above)
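
The internals of the third-party hcd1129/es-index-cleaner image are not shown here. As a rough, illustrative sketch of what such a cleaner does (assuming bash, GNU date, and curl are available, and using the same DAYS/PREFIX/ES_HOST/ES_USER/ES_PASSWORD variables the CronJob sets), deleting old daily indices boils down to the Elasticsearch delete-index API:

#!/bin/bash
# Illustrative sketch only -- not the actual contents of hcd1129/es-index-cleaner.
CUTOFF=$(date -d "${DAYS} days ago" +%Y.%m.%d)    # requires GNU date

# List matching indices; -k skips verification of the self-signed certificate
for IDX in $(curl -sk -u "${ES_USER}:${ES_PASSWORD}" "${ES_HOST}/_cat/indices/${PREFIX}?h=index"); do
  SUFFIX=${IDX##*-}                               # trailing %Y.%m.%d date stamp
  if [[ "${SUFFIX}" < "${CUTOFF}" ]]; then        # dates in this format sort lexicographically
    echo "Deleting old index ${IDX}"
    curl -sk -u "${ES_USER}:${ES_PASSWORD}" -X DELETE "${ES_HOST}/${IDX}"
  fi
done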

Running the Deployment

# Run the following in order
# Install the ECK operator
kubectl create -f crds.yaml
kubectl apply -f operator.yaml
# Install Elasticsearch and Kibana
kubectl apply -f elastic.yaml
# Install Fluentd
kubectl apply -f fluentd-es-configmap.yaml
kubectl apply -f fluentd-es-ds.yaml
# Deploy the CronJob that cleans up old log indices
kubectl apply -f index-cleaner.yaml

Deployment Results

kubectl get all -n elastic-system

(Screenshot of the command output omitted.)
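
In addition to kubectl get all, the ECK custom resources and the Fluentd DaemonSet can be checked directly (resource names as defined in the manifests above):

# Health/phase of the Elasticsearch cluster and Kibana as reported by the ECK operator
kubectl get elasticsearch,kibana -n elastic-system
# The Fluentd DaemonSet should run one pod per node; check its logs for connection errors
kubectl get daemonset fluentd-es -n elastic-system
kubectl logs -n elastic-system -l app=fluentd-es --tail=20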

Credentials

# ES username: elastic
# Retrieve the ES password
kubectl get secret efk-elastic-es-elastic-user -n elastic-system -o=jsonpath='{.data.elastic}' | base64 --decode; echo
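
With these credentials, a quick connectivity check can be run against the Elasticsearch NodePort defined in elastic.yaml (a sketch; 192.168.1.180 is the node IP used in the next section, and -k skips verification of the self-signed certificate):

PASSWORD=$(kubectl get secret efk-elastic-es-elastic-user -n elastic-system -o=jsonpath='{.data.elastic}' | base64 --decode)
curl -k -u "elastic:${PASSWORD}" "https://192.168.1.180:30920/_cluster/health?pretty"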

Viewing the Results

Elasticsearch (NodePort 30920): https://192.168.1.180:30920/
(Screenshot omitted.)
Kibana (NodePort 30922): https://192.168.1.180:30922/
(Screenshot omitted.)
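
To confirm that logs are actually flowing into the per-namespace indices, the _cat API can be queried through the same NodePort (reusing the PASSWORD variable from the credentials section):

curl -k -u "elastic:${PASSWORD}" "https://192.168.1.180:30920/_cat/indices/logstash-*?v"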

That concludes this walkthrough of deploying EFK for Kubernetes log collection; hopefully the manifests above serve as a useful reference.



