Building an EFK log collection platform on a Kubernetes cluster
- I. Introduction to EFK
- 1. EFK overview
- 2. Elasticsearch
- ① What Elasticsearch is
- ② Key characteristics of Elasticsearch
- 3. Filebeat and Fluentd
- ① What Filebeat is
- ② What Fluentd is
- ③ What Fluentd does
- 4. Kibana
- 5. EFK architecture diagram
- II. Check the local Kubernetes cluster status
- III. Configure the default storage class
- 1. Check NFS
- 2. Write the sc.yaml file
- 3. Apply the sc.yaml file
- 4. Check the provisioner pod
- 5. Test PV provisioning
- ① Write pv.yaml
- ② Apply pv.yaml
- ③ Check the PV and PVC status
- IV. Install the helm tool
- 1. Download the helm binary package
- 2. Extract the helm archive
- 3. Copy the helm binary
- 4. Check the helm version
- V. Configure helm repositories
- 1. Add the helm repositories for the EFK components
- 2. List the helm repositories
- VI. Install Elasticsearch
- 1. Pull the Elasticsearch chart
- 2. Extract the tar package
- 3. Modify values.yaml
- ① Change replicas
- ② Disable persistent storage (optional)
- 4. Install Elasticsearch
- 5. Check the running pods
- VII. Install Filebeat
- 1. Pull the Filebeat chart
- 2. Extract the tar package
- 3. Inspect the values.yaml file
- 4. Install Filebeat
- 5. Check the Filebeat pods
- VIII. Install Metricbeat
- 1. Pull the Metricbeat chart
- 2. Extract the tar package
- 3. Install Metricbeat
- 4. Check the Metricbeat pods
- IX. Install Kibana
- 1. Pull the Kibana chart
- 2. Extract the Kibana tar package
- 3. Change the service type
- 4. Configure the Elasticsearch address
- 5. Install Kibana
- 6. Check the pod
- X. Access the Kibana web UI
- 1. Check the svc
- 2. Log in to Kibana
I. Introduction to EFK
1. EFK overview
Kubernetes provides an Elasticsearch add-on for cluster-level log management. It is a combination of Elasticsearch, Filebeat (or Fluentd), and Kibana.
2. Elasticsearch
① What Elasticsearch is
Elasticsearch is an open-source search and analytics engine built on Apache Lucene™. It is written in Java and uses Lucene internally to implement all of its indexing and search capabilities.
② Key characteristics of Elasticsearch
1. Elasticsearch is a real-time, distributed, scalable search engine.
2. It supports full-text and structured search as well as log analytics.
3. In this stack it acts as the search engine that stores the logs and provides the query interface.
4. It is typically used to index and search large volumes of log data, but it can also search many other kinds of documents.
3. Filebeat and Fluentd
① What Filebeat is
Filebeat is a lightweight shipper for forwarding and centralizing log data. It watches the log files or locations you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing.
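For orientation, a minimal standalone filebeat.yml illustrating this flow could look like the sketch below. The log path and the Elasticsearch host are assumptions for illustration only; the Helm chart installed in section VII generates a more complete configuration automatically.

# Minimal filebeat.yml sketch (paths and hosts are assumed values)
filebeat.inputs:
  - type: log                              # tail plain log files
    paths:
      - /var/log/*.log                     # hypothetical files to watch
output.elasticsearch:
  hosts: ["elasticsearch-master:9200"]     # assumed in-cluster Elasticsearch endpoint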
② What Fluentd is
Fluentd is an open-source data collector that unifies how log data is collected and consumed, making the data easier to use and understand.
③ What Fluentd does
1. Run Fluentd on every node of the Kubernetes cluster (typically as a DaemonSet).
2. Tail the container log files, then filter and transform the log data.
3. Ship the data to the Elasticsearch cluster, where it is indexed and stored.
4. Kibana
Kibana is an open-source analytics and visualization platform designed to work with Elasticsearch. With Kibana you can search, view, and interact with the data stored in Elasticsearch, and analyze and visualize it using a variety of charts, tables, and maps.
5. EFK architecture diagram
II. Check the local Kubernetes cluster status
[root@k8s-master ~]# kubectl get nodes -owide
NAME         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   10d   v1.23.1   192.168.3.201   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64       containerd://1.6.6
k8s-node01   Ready    <none>                 10d   v1.23.1   192.168.3.202   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64       containerd://1.6.6
k8s-node02   Ready    <none>                 10d   v1.23.1   192.168.3.203   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64       containerd://1.6.6
III. Configure the default storage class
1. Check NFS
[root@k8s-master efk]# showmount -e 192.168.3.201
Export list for 192.168.3.201:
/nfs/data *
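For reference, an export like the one above is typically declared on the NFS server in /etc/exports. The mount options below are an assumption; adjust them to your environment.

# /etc/exports on 192.168.3.201 (options are assumed)
/nfs/data *(rw,sync,no_root_squash)
# reload the export table after editing
exportfs -r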
2. Write the sc.yaml file
[root@k8s-master efk]# cat sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"   ## whether to archive the PV contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.3.201   ## your NFS server address
            - name: NFS_PATH
              value: /nfs/data       ## directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.3.201
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
3. Apply the sc.yaml file
[root@k8s-master efk]# kubectl apply -f sc.yaml
4. Check the provisioner pod
[root@k8s-master efk]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-779b7f4dfd-zpqmt   1/1     Running   0          8s
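Besides the provisioner pod, it is worth confirming that the StorageClass itself was created and is marked as the cluster default. If the is-default-class annotation took effect, nfs-storage should be listed with a (default) marker.

kubectl get storageclass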
5. Test PV provisioning
① Write pv.yaml
Although the file is named pv.yaml, it defines a PersistentVolumeClaim; the default StorageClass above should then provision the matching PV automatically.
[root@k8s-master efk]# cat pv.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
② Apply pv.yaml
kubectl apply -f pv.yaml
③ Check the PV and PVC status
[root@k8s-master efk]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-939faa36-9c19-4fd9-adc9-cb30b270de75   200Mi      RWX            Delete           Bound    default/nginx-pvc   nfs-storage             40s
[root@k8s-master efk]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-939faa36-9c19-4fd9-adc9-cb30b270de75   200Mi      RWX            nfs-storage    44
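Once the Bound status is confirmed, the test claim is no longer needed and can optionally be removed so it does not linger in the default namespace:

kubectl delete -f pv.yaml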
IV. Install the helm tool
1. Download the helm binary package
wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz
2. Extract the helm archive
tar -xzf helm-v3.9.0-linux-amd64.tar.gz
3. Copy the helm binary
cp -a linux-amd64/helm /usr/bin/helm
4. Check the helm version
[root@k8s-master addons]# helm version
version.BuildInfo{Version:"v3.9.0", GitCommit:"7ceeda6c585217a19a1131663d8cd1f7d641b2a7", GitTreeState:"clean", GoVersion:"go1.17.5"}
V. Configure helm repositories
1. Add the helm repositories for the EFK components
[root@k8s-master ~]# helm repo add stable https://apphub.aliyuncs.com
"stable" has been added to your repositories
[root@k8s-master ~]# helm repo add elastic https://helm.elastic.co
"elastic" has been added to your repositories
[root@k8s-master ~]# helm repo add azure http://mirror.azure.cn/kubernetes/charts/
"azure" has been added to your repositories
2. List the helm repositories
[root@k8s-master ~]# helm repo list
NAME      URL
stable    https://apphub.aliyuncs.com
elastic   https://helm.elastic.co
azure     http://mirror.azure.cn/kubernetes/charts/
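Optionally, refresh the repository indexes and confirm the charts used in the following sections are resolvable; the chart versions shown by the search will depend on when you run it.

helm repo update
helm search repo elastic/elasticsearch
helm search repo elastic/filebeat
helm search repo elastic/kibana
helm search repo stable/metricbeat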
VI. Install Elasticsearch
1. Pull the Elasticsearch chart
[root@k8s-master efk]# helm pull elastic/elasticsearch
2. Extract the tar package
[root@k8s-master efk]# tar -xzf elasticsearch-7.17.3.tgz
3. Modify values.yaml
① Change replicas
vim elasticsearch/values.yaml
replicas: 2
minimumMasterNodes: 1
esMajorVersion: ""
② Disable persistent storage (optional)
persistence:
  enabled: false
  labels:
    # Add default labels for the volumeClaimTemplate of the StatefulSet
    enabled: false
  annotations: {}
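As an alternative to editing values.yaml in place, the same changes can be passed on the command line when installing in the next step. This is only a sketch; the flag names match the values shown above.

helm install elastic elasticsearch \
  --set replicas=2 \
  --set minimumMasterNodes=1 \
  --set persistence.enabled=false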
4. Install Elasticsearch
helm install elastic elasticsearch
5. Check the running pods
[root@k8s-master efk]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
cirror-28253                              1/1     Running   0          135m
elasticsearch-master-0                    1/1     Running   0          2m11s
elasticsearch-master-1                    1/1     Running   0          2m11s
nfs-client-provisioner-779b7f4dfd-p7xsz   1/1     Running   0          3h31m
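Before moving on, it can be useful to confirm the cluster actually answers queries. A sketch using a temporary port-forward to the elasticsearch-master service (the service name appears in the svc listing in section X); a green or yellow status indicates the cluster is up.

# terminal 1: forward the Elasticsearch service to localhost
kubectl port-forward svc/elasticsearch-master 9200:9200
# terminal 2: query the cluster health
curl 'http://localhost:9200/_cluster/health?pretty'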
VII. Install Filebeat
1. Pull the Filebeat chart
[root@k8s-master efk]# helm pull elastic/filebeat
2. Extract the tar package
[root@k8s-master efk]# tar -xzf filebeat-7.17.3.tgz
3. Inspect the values.yaml file
[root@k8s-master filebeat]# cat values.yaml
---
daemonset:
  # Annotations to apply to the daemonset
  annotations: {}
  # additionals labels
  labels: {}
  affinity: {}
  # Include the daemonset
  enabled: true
  # Extra environment variables for Filebeat container.
  envFrom: []
  # - configMapRef:
  #     name: config-secret
  extraEnvs: []
  #  - name: MY_ENVIRONMENT_VAR
  #    value: the_value_goes_here
  extraVolumes:
    []
    # - name: extras
    #   emptyDir: {}
  extraVolumeMounts:
    []
    # - name: extras
    #   mountPath: /usr/share/extras
    #   readOnly: true
  hostNetworking: false
  # Allows you to add any config files in /usr/share/filebeat
  # such as filebeat.yml for daemonset
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

      output.elasticsearch:
        host: '${NODE_NAME}'
        hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
  # Only used when updateStrategy is set to "RollingUpdate"
  maxUnavailable: 1
  nodeSelector: {}
  # A list of secrets and their paths to mount inside the pod
  # This is useful for mounting certificates for security other sensitive values
  secretMounts: []
  #  - name: filebeat-certificates
  #    secretName: filebeat-certificates
  #    path: /usr/share/filebeat/certs
  # Various pod security context settings. Bear in mind that many of these have an impact on Filebeat functioning properly.
  #
  # - User that the container will execute as. Typically necessary to run as root (0) in order to properly collect host container logs.
  # - Whether to execute the Filebeat containers as privileged containers. Typically not necessarily unless running within environments such as OpenShift.
  securityContext:
    runAsUser: 0
    privileged: false
  resources:
    requests:
      cpu: "100m"
      memory: "100Mi"
    limits:
      cpu: "1000m"
      memory: "200Mi"
  tolerations: []

deployment:
  # Annotations to apply to the deployment
  annotations: {}
  # additionals labels
  labels: {}
  affinity: {}
  # Include the deployment
  enabled: false
  # Extra environment variables for Filebeat container.
  envFrom: []
  # - configMapRef:
  #     name: config-secret
  extraEnvs: []
  #  - name: MY_ENVIRONMENT_VAR
  #    value: the_value_goes_here
  # Allows you to add any config files in /usr/share/filebeat
  extraVolumes: []
  # - name: extras
  #   emptyDir: {}
  extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true
  # such as filebeat.yml for deployment
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
      - type: tcp
        max_message_size: 10MiB
        host: "localhost:9000"

      output.elasticsearch:
        host: '${NODE_NAME}'
        hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
  nodeSelector: {}
  # A list of secrets and their paths to mount inside the pod
  # This is useful for mounting certificates for security other sensitive values
  secretMounts: []
  #  - name: filebeat-certificates
  #    secretName: filebeat-certificates
  #    path: /usr/share/filebeat/certs
  #
  # - User that the container will execute as.
  #   Not necessary to run as root (0) as the Filebeat Deployment use cases do not need access to Kubernetes Node internals
  # - Typically not necessarily unless running within environments such as OpenShift.
  securityContext:
    runAsUser: 0
    privileged: false
  resources:
    requests:
      cpu: "100m"
      memory: "100Mi"
    limits:
      cpu: "1000m"
      memory: "200Mi"
  tolerations: []

# Replicas being used for the filebeat deployment
replicas: 1

extraContainers: ""
# - name: dummy-init
#   image: busybox
#   command: ['echo', 'hey']

extraInitContainers: []
# - name: dummy-init

# Root directory where Filebeat will write data to in order to persist registry data across pod restarts (file position and other metadata).
hostPathRoot: /var/lib

dnsConfig: {}
# options:
#   - name: ndots
#     value: "2"
hostAliases: []
#- ip: "127.0.0.1"
#  hostnames:
#  - "foo.local"
#  - "bar.local"
image: "docker.elastic.co/beats/filebeat"
imageTag: "7.17.3"
imagePullPolicy: "IfNotPresent"
imagePullSecrets: []

livenessProbe:
  exec:
    command:
      - sh
      - -c
      - |
        #!/usr/bin/env bash -e
        curl --fail 127.0.0.1:5066
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5

readinessProbe:
  exec:
    command:
      - sh
      - -c
      - |
        #!/usr/bin/env bash -e
        filebeat test output
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5

# Whether this chart should self-manage its service account, role, and associated role binding.
managedServiceAccount: true

clusterRoleRules:
  - apiGroups:
      - ""
    resources:
      - namespaces
      - nodes
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "apps"
    resources:
      - replicasets
    verbs:
      - get
      - list
      - watch

podAnnotations:
  {}
  # iam.amazonaws.com/role: es-cluster

# Custom service account override that the pod will use
serviceAccount: ""

# Annotations to add to the ServiceAccount that is created if the serviceAccount value isn't set.
serviceAccountAnnotations:
  {}
  # eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/k8s.clustername.namespace.serviceaccount

# How long to wait for Filebeat pods to stop gracefully
terminationGracePeriod: 30
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""

updateStrategy: RollingUpdate

# Override various naming aspects of this chart
# Only edit these if you know what you're doing
nameOverride: ""
fullnameOverride: ""

# DEPRECATED
affinity: {}
envFrom: []
extraEnvs: []
extraVolumes: []
extraVolumeMounts: []
# Allows you to add any config files in /usr/share/filebeat
# such as filebeat.yml for both daemonset and deployment
filebeatConfig: {}
nodeSelector: {}
podSecurityContext: {}
resources: {}
secretMounts: []
tolerations: []
labels: {}
4. Install Filebeat
[root@k8s-master efk]# helm install fb filebeat
NAME: fb
LAST DEPLOYED: Sun Jul  3 13:03:21 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Watch all containers come up.
  $ kubectl get pods --namespace=default -l app=fb-filebeat -w
5. Check the Filebeat pods
[root@k8s-master efk]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
cirror-28253                              1/1     Running   0          151m
elasticsearch-master-0                    1/1     Running   0          18m
elasticsearch-master-1                    1/1     Running   0          18m
fb-filebeat-8fhg7                         1/1     Running   0          5m17s
fb-filebeat-lj5p7                         1/1     Running   0          5m17s
nfs-client-provisioner-779b7f4dfd-p7xsz   1/1     Running   0          3h47m
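To verify that Filebeat is actually shipping logs, you can check for filebeat-* indices in Elasticsearch. A sketch that runs curl inside one of the Elasticsearch pods shown above (assumes curl is available in the Elasticsearch image):

kubectl exec elasticsearch-master-0 -- curl -s 'http://localhost:9200/_cat/indices?v' | grep filebeat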
VIII. Install Metricbeat
1. Pull the Metricbeat chart
helm pull stable/metricbeat
2. Extract the tar package
[root@k8s-master efk]# tar -xzf metricbeat-1.7.1.tgz
3. Install Metricbeat
[root@k8s-master efk]# helm install metric metricbeat
4. Check the Metricbeat pods
[root@k8s-master efk]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
cirror-28253                              1/1     Running   0          3h26m
elasticsearch-master-0                    1/1     Running   0          73m
elasticsearch-master-1                    1/1     Running   0          73m
fb-filebeat-8fhg7                         1/1     Running   0          60m
fb-filebeat-lj5p7                         1/1     Running   0          60m
metric-metricbeat-4jbkk                   1/1     Running   0          22s
metric-metricbeat-5h5g5                   1/1     Running   0          22s
metric-metricbeat-758c5c674-ldgg4         1/1     Running   0          22s
metric-metricbeat-bdth2                   1/1     Running   0          22s
nfs-client-provisioner-779b7f4dfd-p7xsz   1/1     Running   0          4h42m
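If the Metricbeat pods are Running but you want to confirm data is flowing, the logs of one of the pods listed above can be inspected, and the same index check used for Filebeat can be reused (the pod name below is taken from the listing above and will differ in your cluster):

kubectl logs metric-metricbeat-4jbkk | tail -n 20
kubectl exec elasticsearch-master-0 -- curl -s 'http://localhost:9200/_cat/indices?v' | grep metricbeat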
IX. Install Kibana
1. Pull the Kibana chart
helm pull elastic/kibana
2. Extract the Kibana tar package
tar -xzf kibana-7.17.3.tgz
3. Change the service type
[root@k8s-master kibana]# vim values.yaml
service:
  port: 80
  type: NodePort
  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
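If you prefer not to edit values.yaml, the service type can also be overridden at install time. A sketch for the install step below; the flag name matches the service.type value shown above.

helm install kb kibana --set service.type=NodePort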
4. Configure the Elasticsearch address
## Properties for Elasticsearch
##
elasticsearch:
  hosts:
    - elastic-elasticsearch-coordinating-only.default.svc.cluster.local
    # - elasticsearch-1
    # - elasticsearch-2
  port: 9200
Note that in the cluster built here the Elasticsearch service is named elasticsearch-master (see the svc listing in section X), so if your chart's values.yaml exposes this setting, the host entry should point at that service instead.
5. Install Kibana
[root@k8s-master stable]# helm install kb kibana
6. Check the pod
[root@k8s-master efk]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS        AGE
cirror-28253                              1/1     Running   1 (6m28s ago)   5h50m
elasticsearch-master-0                    1/1     Running   1 (6m24s ago)   3h37m
elasticsearch-master-1                    1/1     Running   1 (6m27s ago)   3h37m
fb-filebeat-8fhg7                         1/1     Running   1 (6m28s ago)   3h24m
fb-filebeat-lj5p7                         1/1     Running   1 (6m24s ago)   3h24m
kb-kibana-5c46dbc5dd-htw7n                1/1     Running   0               2m23s
metric-metricbeat-4jbkk                   1/1     Running   1 (6m41s ago)   145m
metric-metricbeat-5h5g5                   1/1     Running   1 (6m24s ago)   145m
metric-metricbeat-758c5c674-ldgg4         1/1     Running   1 (6m24s ago)   145m
metric-metricbeat-bdth2                   1/1     Running   1 (6m27s ago)   145m
nfs-client-provisioner-779b7f4dfd-p7xsz   1/1     Running   2 (4m40s ago)   7h7m
X. Access the Kibana web UI
1. Check the svc
[root@k8s-master efk]# kubectl get svc
NAME                            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
elasticsearch-master            ClusterIP   10.96.73.127   <none>        9200/TCP,9300/TCP   3h38m
elasticsearch-master-headless   ClusterIP   None           <none>        9200/TCP,9300/TCP   3h38m
kb-kibana                       NodePort    10.102.85.68   <none>        5601:31372/TCP      3m4s
kubernetes                      ClusterIP   10.96.0.1      <none>        443/TCP             15
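The 5601:31372/TCP column means Kibana's internal port 5601 is exposed on port 31372 of every node. The assigned NodePort can also be read programmatically (a sketch):

kubectl get svc kb-kibana -o jsonpath='{.spec.ports[0].nodePort}'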
2. Log in to Kibana
Open a browser and visit http://192.168.3.202:31372/ (any node IP works, combined with the NodePort shown above).