ELK 8.8.1 + Kafka 2.5.0 Log Collection Architecture Deployment
- Service Versions
- Node Role Layout
- Service Compatibility
- JDK Notes
- Official Documentation
- System Tuning
- Deploying the ES Cluster
- 1. Download the ES tarball
- 2. Adjust the configuration file $ES_HOME/config/elasticsearch.yml
- 3. Adjust the configuration file $ES_HOME/config/jvm.options
- 4. Distribute the configuration to all nodes and adjust it per node
- 5. Start the ES cluster
- 6. Verify the ES cluster status
- Deploying the ES-Head Plugin
- Deploying Filebeat
- 1. Check the Kafka cluster status
- 2. Download the Filebeat tarball
- 3. Adjust the configuration file $FILEBEAT_HOME/filebeat.yml
- 4. Start Filebeat
- 5. Verify Filebeat
- Deploying Logstash
- 1. Download the Logstash tarball
- 2. Adjust the configuration file $LOGSTASH_HOME/config/pipelines.yml
- 3. Create the configuration file /data/service/elk/logstash/conf.d/system-log.conf
- 4. Start Logstash
- 5. Verify Logstash
- Deploying Kibana
- 1. Download the Kibana tarball
- 2. Adjust the configuration file $KIBANA_HOME/config/kibana.yml
- 3. Start Kibana
- 4. Verify Kibana
- Adding Security
- Overview of Security Levels
- Differences Between ES Versions
- ES Cluster Configuration
- 1. Create a directory for the key and certificates
- 2. Generate the CA file
- 3. Generate a certificate from the CA
- 4. Copy the key and certificates to all nodes
- 5. Adjust the configuration file $ES_HOME/config/elasticsearch.yml
- 6. Add the certificate passwords to the Elasticsearch keystore
- 7. Restart ES and set the built-in user passwords
- 8. Restart and verify ES-Head
- Kibana Configuration
- 1. Adjust the configuration file $KIBANA_HOME/config/kibana.yml
- 2. Create the Kibana keystore
- 3. Add the ES password to the Kibana keystore
- 4. Restart and check the Kibana status
Service Versions
| Service       | Version   |
| ------------- | --------- |
| JDK           | 1.8.0_322 |
| Elasticsearch | 8.8.1     |
| Logstash      | 8.8.1     |
| Filebeat      | 8.8.1     |
| Kibana        | 8.8.1     |
| Kafka         | 2.5.0     |
| ZooKeeper     | 3.5.7     |
Node Role Layout
| Hostname | IP Address   | Roles                                                                      |
| -------- | ------------ | -------------------------------------------------------------------------- |
| redis1   | 10.10.10.21  | Elasticsearch Master / Elasticsearch Worker / Filebeat / Logstash / Kibana |
| redis2   | 10.10.10.22  | Elasticsearch Master / Elasticsearch Worker                                 |
| redis3   | 10.10.10.23  | Elasticsearch Master / Elasticsearch Worker                                 |
| hadoop1  | 10.10.10.131 | Kafka Broker / ZooKeeper Server                                             |
| hadoop2  | 10.10.10.132 | Kafka Broker / ZooKeeper Server                                             |
| hadoop3  | 10.10.10.133 | Kafka Broker / ZooKeeper Server                                             |
Service Compatibility
- Elastic publishes compatibility matrices for its products, covering: operating systems, JVMs, Kubernetes, browsers, compatibility between the ELK components, and Logstash plugins
- See: https://www.elastic.co/cn/support/matrix#matrix_compatibility
JDK Notes
- Every elasticsearch.tar.gz package ships with a jdk directory; the JDK it contains can be used directly
- The exact JDK bundled in the tarball can be identified as follows
cat $ES_HOME/jdk/release
IMPLEMENTOR="Oracle Corporation"
JAVA_VERSION="20.0.1"
JAVA_VERSION_DATE="2023-04-18"
LIBC="gnu"
MODULES="java.base java.compiler java.datatransfer java.xml java.prefs java.desktop java.instrument java.logging java.management java.security.sasl java.naming java.rmi java.management.rmi java.net.http java.scripting java.security.jgss java.transaction.xa java.sql java.sql.rowset java.xml.crypto java.se java.smartcardio jdk.accessibility jdk.internal.jvmstat jdk.attach jdk.charsets jdk.zipfs jdk.compiler jdk.crypto.ec jdk.crypto.cryptoki jdk.dynalink jdk.internal.ed jdk.editpad jdk.hotspot.agent jdk.httpserver jdk.incubator.concurrent jdk.incubator.vector jdk.internal.le jdk.internal.opt jdk.internal.vm.ci jdk.internal.vm.compiler jdk.internal.vm.compiler.management jdk.jartool jdk.javadoc jdk.jcmd jdk.management jdk.management.agent jdk.jconsole jdk.jdeps jdk.jdwp.agent jdk.jdi jdk.jfr jdk.jlink jdk.jpackage jdk.jshell jdk.jsobject jdk.jstatd jdk.localedata jdk.management.jfr jdk.naming.dns jdk.naming.rmi jdk.net jdk.nio.mapmode jdk.random jdk.sctp jdk.security.auth jdk.security.jgss jdk.unsupported jdk.unsupported.desktop jdk.xml.dom"
OS_ARCH="x86_64"
OS_NAME="Linux"
SOURCE=".:git:a0bfbb14e326"
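- As a quick sanity check, the bundled JDK can be invoked directly; Elasticsearch 8.x uses it automatically unless ES_JAVA_HOME points somewhere else. A minimal sketch (the override path shown is purely illustrative):
# Print the version of the JDK bundled with the ES tarball
$ES_HOME/jdk/bin/java -version
# To force an external JDK instead of the bundled one, set ES_JAVA_HOME before starting ES
# (the path below is a hypothetical example)
# export ES_JAVA_HOME=/usr/lib/jvm/jdk-20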
Official Documentation
- Elasticsearch 8.8.1: https://www.elastic.co/guide/en/elasticsearch/reference/index.html
System Tuning
- The following tuning steps must be performed on every Elasticsearch node
- Official system tuning recommendations: https://www.elastic.co/guide/en/elasticsearch/reference/current/setting-system-settings.html
systemctl stop firewalld && systemctl disable firewalld
systemctl status firewalld
iptables -nL
iptables -F
iptables -X
cp /etc/selinux/config /etc/selinux/config.bkup
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
cat /etc/selinux/config
getenforce
useradd es && useradd kibana
echo "es" | passwd --stdin es && echo "kibana" | passwd --stdin kibana
cp /etc/security/limits.conf /etc/security/limits.conf.bkup
cat > /etc/security/limits.conf << EOF
* soft nofile 6553537
* hard nofile 6553538
* soft nproc 6553539
* hard nproc 6553540
* soft memlock unlimited
* hard memlock unlimited
EOF
cat /etc/security/limits.conf
swapoff -a
cp /etc/fstab /etc/fstab.bkup
sed -i '/^[^#].*swap/s/^/#/' /etc/fstab
cat /etc/fstab
cp /etc/sysctl.conf /etc/sysctl.conf.bkup
echo "vm.swappiness=1" >> /etc/sysctl.conf
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
echo "net.ipv4.tcp_retries2=5" >> /etc/sysctl.conf
echo "net.ipv4.tcp_abort_on_overflow=1" >> /etc/sysctl.conf
echo "net.core.somaxconn = 2048" >> /etc/sysctl.conf
sysctl -p
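- After applying the settings (and logging in again so the new limits are picked up), the following quick checks can confirm the tuning took effect; a minimal sketch using standard tools:
ulimit -n            # max open files
ulimit -u            # max user processes
ulimit -l            # max locked memory, should report unlimited
sysctl vm.max_map_count vm.swappiness
getenforce           # reports Disabled after a reboot
swapon --show        # no output means swap is fully disabled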
Deploying the ES Cluster
1. Download the ES tarball
mkdir -p /data/service/elk/elasticsearch/{tmp,data1,data2,data3,logs,jvm}
mkdir -p /data/service/elk/elasticsearch/jvm/{tmp,dump,logs}
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.8.1-linux-x86_64.tar.gz -P /data/service/elk/elasticsearch/
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.8.1-linux-x86_64.tar.gz.sha512 -P /data/service/elk/elasticsearch/
(cd /data/service/elk/elasticsearch/ && shasum -a 512 -c elasticsearch-8.8.1-linux-x86_64.tar.gz.sha512)
tar -zxvf /data/service/elk/elasticsearch/elasticsearch-8.8.1-linux-x86_64.tar.gz -C /data/service/elk/elasticsearch/
chown -R es:es /data/service/elk/elasticsearch
scp -r -p /data/service/elk root@redis2:/data/service/
scp -r -p /data/service/elk root@redis3:/data/service/
cat >> /etc/profile << 'EOF'
# Elasticsearch 8.8.1 2023-06-12
export ES_HOME=/data/service/elk/elasticsearch/elasticsearch-8.8.1
export PATH=$PATH:$ES_HOME/bin:$ES_HOME/lib
EOF
source /etc/profile
2. Adjust the configuration file $ES_HOME/config/elasticsearch.yml
cp $ES_HOME/config/elasticsearch.yml $ES_HOME/config/elasticsearch.yml.bkup
cat > $ES_HOME/config/elasticsearch.yml << EOF
cluster.name: es-test
node.name: node1
node.roles: [ master, data ]
path.data: /data/service/elk/elasticsearch/data1,/data/service/elk/elasticsearch/data2,/data/service/elk/elasticsearch/data3
path.logs: /data/service/elk/elasticsearch/logs
network.host: 10.10.10.21
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["10.10.10.21:9300","10.10.10.22:9300","10.10.10.23:9300"]
cluster.initial_master_nodes: ["node1","node2","node3"]
bootstrap.memory_lock: true
xpack.security.enabled: false
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
3. Adjust the configuration file $ES_HOME/config/jvm.options
cp $ES_HOME/config/jvm.options $ES_HOME/config/jvm.options.bkup
cat > $ES_HOME/config/jvm.options << EOF
# JVM configuration
# The heap defaults to 4 GB
# Size the heap to roughly half of the instance's physical memory; leave the other half to the underlying Lucene
# Because of compressed object pointers, do not set the heap to 32 GB or more. If a host has more than 32 GB of RAM, split it into multiple Elasticsearch instances
# Set the initial heap size equal to the maximum heap size
-Xms1g
-Xmx1g
# GC configuration
# The default collector is G1
-XX:+UseG1GC
-Djava.io.tmpdir=/data/service/elk/elasticsearch/jvm/tmp
-XX:+HeapDumpOnOutOfMemoryError
-XX:+ExitOnOutOfMemoryError
-XX:HeapDumpPath=/data/service/elk/elasticsearch/jvm/dump
-XX:ErrorFile=/data/service/elk/elasticsearch/jvm/logs/hs_err_pid%p.log
# GC details are written to the gc.log configured below (the -Xlog option replaces -XX:+PrintGCDetails on recent JDKs)
-Xlog:gc*,gc+age=trace,safepoint:file=/data/service/elk/elasticsearch/jvm/logs/gc.log:utctime,level,pid,tags:filecount=10,filesize=100m
EOF
4. Distribute the configuration to all nodes and adjust it per node
scp -pr $ES_HOME/config root@redis2:$ES_HOME
scp -pr $ES_HOME/config root@redis3:$ES_HOME
# On redis2
sed -i 's/^node.name:.*/node.name: node2/' $ES_HOME/config/elasticsearch.yml
sed -i 's/^network.host:.*/network.host: 10.10.10.22/' $ES_HOME/config/elasticsearch.yml
# On redis3
sed -i 's/^node.name:.*/node.name: node3/' $ES_HOME/config/elasticsearch.yml
sed -i 's/^network.host:.*/network.host: 10.10.10.23/' $ES_HOME/config/elasticsearch.yml
5. Start the ES cluster
- In the internal test environment the first start may fail because of transient network issues; simply start the service again
# On redis1
su - es
cd $ES_HOME
bin/elasticsearch -d
# On redis2
su - es
cd $ES_HOME
bin/elasticsearch -d
# On redis3
su - es
cd $ES_HOME
bin/elasticsearch -d
6. Verify the ES cluster status
curl http://10.10.10.21:9200
{
  "name" : "node1",
  "cluster_name" : "es-test",
  "cluster_uuid" : "OKMi6WrSRf-Vgk_oz-oqJQ",
  "version" : {
    "number" : "8.8.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "f8edfccba429b6477927a7c1ce1bc6729521305e",
    "build_date" : "2023-06-05T21:32:25.188464208Z",
    "build_snapshot" : false,
    "lucene_version" : "9.6.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
curl http://10.10.10.22:9200
{
  "name" : "node2",
  "cluster_name" : "es-test",
  "cluster_uuid" : "OKMi6WrSRf-Vgk_oz-oqJQ",
  "version" : {
    "number" : "8.8.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "f8edfccba429b6477927a7c1ce1bc6729521305e",
    "build_date" : "2023-06-05T21:32:25.188464208Z",
    "build_snapshot" : false,
    "lucene_version" : "9.6.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
curl http://10.10.10.23:9200
{
  "name" : "node3",
  "cluster_name" : "es-test",
  "cluster_uuid" : "OKMi6WrSRf-Vgk_oz-oqJQ",
  "version" : {
    "number" : "8.8.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "f8edfccba429b6477927a7c1ce1bc6729521305e",
    "build_date" : "2023-06-05T21:32:25.188464208Z",
    "build_snapshot" : false,
    "lucene_version" : "9.6.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
curl http://10.10.10.21:9200/_cat/nodes?v
ip          heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.10.10.21           39          37   3    0.31    0.23     0.12 dm        -      node1
10.10.10.23           66          24   4    0.07    0.36     0.20 dm        *      node3
10.10.10.22           44          24   5    0.47    0.54     0.26 dm        -      node2
curl http://10.10.10.21:9200/_cluster/health?pretty
{
  "cluster_name" : "es-test",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 24,
  "active_shards" : 49,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
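- Because bootstrap.memory_lock is enabled in elasticsearch.yml, it is worth confirming that memory locking and the heap size actually took effect; a minimal sketch against the same node addresses:
# Every node should report "mlockall" : true
curl 'http://10.10.10.21:9200/_nodes?filter_path=**.mlockall&pretty'
# The heap maximum should match the -Xmx value in jvm.options
curl 'http://10.10.10.21:9200/_cat/nodes?v&h=name,heap.max'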
Deploying the ES-Head Plugin
wget https://nodejs.org/dist/v12.17.0/node-v12.17.0-linux-x64.tar.xz -P /data/service/elk/elasticsearch/
tar -xvJf /data/service/elk/elasticsearch/node-v12.17.0-linux-x64.tar.xz -C /data/service/elk/elasticsearch/
cat >> /etc/profile << 'EOF'
# Node 12.17.0 2023-06-12
export NODE_HOME=/data/service/elk/elasticsearch/node-v12.17.0-linux-x64
export PATH=$JAVA_HOME/bin:$NODE_HOME/bin:$PATH
EOF
source /etc/profile
node -v
wget https://github.com/mobz/elasticsearch-head/archive/master.zip -P /data/service/elk/elasticsearch/
unzip /data/service/elk/elasticsearch/master.zip -d /data/service/elk/elasticsearch/
cd /data/service/elk/elasticsearch/elasticsearch-head-master/
npm install
nohup npm run start &
- Open in a browser: http://10.10.10.21:9100
Deploying Filebeat
1. Check the Kafka cluster status
$KAFKA_HOME/bin/kafka-topics.sh --bootstrap-server hadoop1:9092 --list
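- If the system-log topic does not exist yet and automatic topic creation is disabled on the brokers, it can be created up front. A minimal sketch using the Kafka 2.5.0 CLI; the partition and replication counts below are illustrative assumptions, not values taken from this environment:
$KAFKA_HOME/bin/kafka-topics.sh --bootstrap-server hadoop1:9092 --create --topic system-log --partitions 3 --replication-factor 2
$KAFKA_HOME/bin/kafka-topics.sh --bootstrap-server hadoop1:9092 --describe --topic system-log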
2. Download the Filebeat tarball
mkdir -p /data/service/elk/filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.8.1-linux-x86_64.tar.gz -P /data/service/elk/filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.8.1-linux-x86_64.tar.gz.sha512 -P /data/service/elk/filebeat
(cd /data/service/elk/filebeat && shasum -a 512 -c filebeat-8.8.1-linux-x86_64.tar.gz.sha512)
tar -zxvf /data/service/elk/filebeat/filebeat-8.8.1-linux-x86_64.tar.gz -C /data/service/elk/filebeat/
3. Adjust the configuration file $FILEBEAT_HOME/filebeat.yml
cp /data/service/elk/filebeat/filebeat-8.8.1-linux-x86_64/filebeat.yml /data/service/elk/filebeat/filebeat-8.8.1-linux-x86_64/filebeat.yml.bkup
cat > /data/service/elk/filebeat/filebeat-8.8.1-linux-x86_64/filebeat.yml << EOF
filebeat.inputs:
# Input type: log
- type: log
  enabled: true
  # Paths of the files to collect
  paths:
    - /var/log/messages
  # Enable JSON parsing
  json.keys_under_root: true
  json.overwrite_keys: true
  # Add a field named log_topic with the value system-log; Logstash uses this field to store the logs in the corresponding ES index
  fields:
    log_topic: system-log
  # Tail the files: start collecting from the last line of each log
  tail_files: true

# Output to Kafka
output.kafka:
  enabled: true
  # Kafka broker list
  hosts: ["10.10.10.131:9092","10.10.10.132:9092","10.10.10.133:9092"]
  # Topic to write to. The value references the log_topic field defined under inputs, so logs from different paths can be routed to different topics
  topic: '%{[fields][log_topic]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
EOF
4. Start Filebeat
cd /data/service/elk/filebeat/filebeat-8.8.1-linux-x86_64/
nohup ./filebeat &
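- Filebeat's built-in self-tests are a quick way to catch YAML mistakes and unreachable Kafka brokers before trusting the pipeline; a minimal sketch run from the same directory:
./filebeat test config -c filebeat.yml    # validate filebeat.yml
./filebeat test output -c filebeat.yml    # check connectivity to the configured Kafka brokers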
5. Verify Filebeat
- Check that the expected topic has been created in Kafka and is already receiving data (a console-consumer check is sketched below)
$KAFKA_HOME/bin/kafka-topics.sh --bootstrap-server hadoop1:9092 --list
- Since Kafka Manager is already deployed, the topic can also be inspected through the Kafka Manager web UI
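- The console-consumer check mentioned above is a minimal sketch for peeking at the messages Filebeat produced (broker address and topic name as used earlier):
$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server hadoop1:9092 --topic system-log --from-beginning --max-messages 5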
Deploying Logstash
1. Download the Logstash tarball
mkdir -p /data/service/elk/logstash/conf.d
wget https://artifacts.elastic.co/downloads/logstash/logstash-8.8.1-linux-x86_64.tar.gz -P /data/service/elk/logstash
wget https://artifacts.elastic.co/downloads/logstash/logstash-8.8.1-linux-x86_64.tar.gz.sha512 -P /data/service/elk/logstash
(cd /data/service/elk/logstash && shasum -a 512 -c logstash-8.8.1-linux-x86_64.tar.gz.sha512)
tar -zxvf /data/service/elk/logstash/logstash-8.8.1-linux-x86_64.tar.gz -C /data/service/elk/logstash/
cat >> /etc/profile << 'EOF'
# Logstash 8.8.1 2023-06-12
export LOGSTASH_HOME=/data/service/elk/logstash/logstash-8.8.1
export PATH=$PATH:$LOGSTASH_HOME/bin:$LOGSTASH_HOME/lib
EOF
source /etc/profile
2. Adjust the configuration file $LOGSTASH_HOME/config/pipelines.yml
cp $LOGSTASH_HOME/config/pipelines.yml $LOGSTASH_HOME/config/pipelines.yml.bkup
cat > $LOGSTASH_HOME/config/pipelines.yml << EOF
# Pipeline name
- pipeline.id: pipeline-system-log
  # Path of the pipeline configuration file
  path.config: "/data/service/elk/logstash/conf.d/system-log.conf"
EOF
3. Create the configuration file /data/service/elk/logstash/conf.d/system-log.conf
cat > /data/service/elk/logstash/conf.d/system-log.conf << EOF
input {
  kafka {
    # Kafka broker list
    bootstrap_servers => "10.10.10.131:9092,10.10.10.132:9092,10.10.10.133:9092"
    topics => ["system-log"]
    codec => "json"
  }
}
output {
  elasticsearch {
    # ES cluster addresses
    hosts => ["10.10.10.21:9200","10.10.10.22:9200","10.10.10.23:9200"]
    index => "system-log-%{+YYYY.MM.dd}"
  }
}
EOF
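- Logstash can validate a pipeline configuration without starting it, which catches syntax errors early; a minimal sketch:
$LOGSTASH_HOME/bin/logstash -f /data/service/elk/logstash/conf.d/system-log.conf --config.test_and_exit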
4. Start Logstash
cd $LOGSTASH_HOME
nohup bin/logstash &
5. Verify Logstash
curl http://10.10.10.21:9200/_cat/indices?v
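- Once a system-log-* index appears in the listing, a small search confirms documents are flowing end to end; a minimal sketch (index pattern as defined in the pipeline above):
curl 'http://10.10.10.21:9200/system-log-*/_search?size=1&pretty'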
Deploying Kibana
1. Download the Kibana tarball
mkdir -p /data/service/elk/kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.8.1-linux-x86_64.tar.gz -P /data/service/elk/kibana/
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.8.1-linux-x86_64.tar.gz.sha512 -P /data/service/elk/kibana/
(cd /data/service/elk/kibana && shasum -a 512 -c kibana-8.8.1-linux-x86_64.tar.gz.sha512)
tar -zxvf /data/service/elk/kibana/kibana-8.8.1-linux-x86_64.tar.gz -C /data/service/elk/kibana/
chown -R kibana:kibana /data/service/elk/kibana
cat >> /etc/profile << 'EOF'
# Kibana 8.8.1 2023-06-12
export KIBANA_HOME=/data/service/elk/kibana/kibana-8.8.1
export PATH=$PATH:$KIBANA_HOME/bin:$KIBANA_HOME/lib
EOF
source /etc/profile
2. Adjust the configuration file $KIBANA_HOME/config/kibana.yml
cp $KIBANA_HOME/config/kibana.yml $KIBANA_HOME/config/kibana.yml.bkup
cat > $KIBANA_HOME/config/kibana.yml << EOF
server.port: 5601
server.host: 10.10.10.21
server.name: kibana-test
elasticsearch.hosts: ["http://10.10.10.21:9200","http://10.10.10.22:9200","http://10.10.10.23:9200"]
i18n.locale: "zh-CN"
EOF
3. Start Kibana
- Startup is relatively slow; expect to wait roughly 5 minutes
- Startup progress can be followed in the nohup.out file under $KIBANA_HOME
su - kibana
cd $KIBANA_HOME
nohup ./bin/kibana &
4. Verify Kibana
- Open in a browser: http://10.10.10.21:5601 (a command-line status check is sketched below)
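- The status check referenced above can also be done from the shell via Kibana's status API; a minimal sketch (address as configured in kibana.yml):
# The response should contain an overall status level of "available" once startup has finished
curl -s http://10.10.10.21:5601/api/status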
Adding Security
Overview of Security Levels
- Minimal security: intended for local development. In this mode a single-node deployment can be protected by setting passwords, but a multi-node cluster additionally requires TLS to secure communication between the nodes
- Basic security:
- The minimal level is only suitable for a single-node deployment and does not meet the minimum security requirements of a cluster. If the cluster has more than one node, TLS must be configured between the nodes; a production cluster will not start without it
- The transport layer relies on mutual TLS to encrypt traffic and authenticate nodes. Correctly applied, TLS ensures that a malicious node cannot join the cluster and exchange data with the other nodes. Username/password authentication on the HTTP layer is useful for protecting a local cluster, but securing the traffic between nodes requires TLS
- "Transport" is the name of the protocol Elasticsearch nodes use to talk to each other. The name is specific to Elasticsearch and distinguishes the transport port (default 9300) from the HTTP port (default 9200): nodes communicate with each other over the transport port, while REST clients talk to Elasticsearch over the HTTP port
- Basic security + HTTPS: in production, some security features, such as tokens and API keys, are unavailable unless TLS is also enabled on the HTTP layer
Differences Between ES Versions
- ES 6.x and earlier: the x-pack security plugin must be installed manually
- ES 6.8 and 7.x: x-pack is bundled, but the security configuration is disabled by default
- ES 8.x and later: the security configuration is enabled by default
ES Cluster Configuration
- This document only covers the basic security level
- All of the following commands must be run as the es user so that file ownership remains correct
1. Create a directory for the key and certificates
- By default ES looks for keys and certificates in its configuration directory, $ES_HOME/config
- If a dedicated directory is used, it must be a subdirectory of $ES_HOME/config
mkdir -p $ES_HOME/config/certs
2. Generate the CA file
$ES_HOME/bin/elasticsearch-certutil ca --out $ES_HOME/config/certs/elastic-stack-ca.p12 --pass "es"
3. Generate a certificate from the CA
$ES_HOME/bin/elasticsearch-certutil cert --ca $ES_HOME/config/certs/elastic-stack-ca.p12 --out $ES_HOME/config/certs/elastic-certificates.p12 --ca-pass "es" --pass "es"
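- To inspect what was just generated, the PKCS#12 files can be listed with the keytool shipped in the bundled JDK; a minimal sketch, assuming the password "es" chosen above:
$ES_HOME/jdk/bin/keytool -list -keystore $ES_HOME/config/certs/elastic-certificates.p12 -storetype PKCS12 -storepass "es"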
4. Copy the key and certificates to all nodes
scp -rp $ES_HOME/config/certs es@redis2:$ES_HOME/config/
scp -rp $ES_HOME/config/certs es@redis3:$ES_HOME/config/
5. Adjust the configuration file $ES_HOME/config/elasticsearch.yml
cp $ES_HOME/config/elasticsearch.yml $ES_HOME/config/elasticsearch.yml.bkup
sed -i '/^[^#]*xpack.security.enabled:/s/^\(xpack.security.enabled:\s*\).*/\1true/' $ES_HOME/config/elasticsearch.yml
cat >> $ES_HOME/config/elasticsearch.yml << EOF
# Enable SSL/TLS on the ES transport layer
xpack.security.transport.ssl.enabled: true
# Use certificate verification mode for SSL/TLS
xpack.security.transport.ssl.verification_mode: certificate
# Require SSL/TLS client authentication
xpack.security.transport.ssl.client_authentication: required
# Path of the SSL/TLS keystore, relative to the config directory
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
# Path of the SSL/TLS truststore, relative to the config directory
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
# HTTP headers allowed for cross-origin requests
# This parameter is only needed when es-head is installed
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
EOF
6. Add the certificate passwords to the Elasticsearch keystore
- Run the following on every node
- Each command prompts interactively; enter the password that was set when generating the certificate above
- If no password was set when generating the CA and certificate, this step can be skipped
$ES_HOME/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
$ES_HOME/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
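- Listing the Elasticsearch keystore confirms that both secure passwords were stored; a minimal sketch:
$ES_HOME/bin/elasticsearch-keystore list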
7. Restart ES and set the built-in user passwords
- The command below runs interactively and asks for the passwords of several built-in users
- Superuser elastic : elastic
- APM (Application Performance Monitoring) user apm_system : apmsystem
- Kibana user kibana_system : kibana
- Logstash user logstash_system : logstash
- Beats user beats_system : beatssystem
- Remote monitoring user remote_monitoring_user : monitor
- Passwords must be at least 6 characters long
- In a test environment all users can share one password for convenience
# Restart every ES node first so the new security settings take effect, then run:
$ES_HOME/bin/elasticsearch-setup-passwords interactive
curl http://10.10.10.21:9200/_cat/nodes?v -u elastic:elastic
8. Restart and verify ES-Head
- Open in a browser: http://10.10.10.21:9100/?auth_user=elastic&auth_password=elastic
Kibana Configuration
1. Adjust the configuration file $KIBANA_HOME/config/kibana.yml
cp $KIBANA_HOME/config/kibana.yml $KIBANA_HOME/config/kibana.yml.bkup
# The kibana.yml written earlier no longer contains the commented elasticsearch.username line, so append the setting instead
echo 'elasticsearch.username: "kibana_system"' >> $KIBANA_HOME/config/kibana.yml
2. Create the Kibana keystore
$KIBANA_HOME/bin/kibana-keystore create
3. Add the ES password to the Kibana keystore
- The command prompts interactively; enter the password of the kibana_system user
$KIBANA_HOME/bin/kibana-keystore add elasticsearch.password
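- Listing the Kibana keystore confirms the entry was stored (the value itself is not printed); a minimal sketch:
$KIBANA_HOME/bin/kibana-keystore list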
4. Restart and check the Kibana status
- Open in a browser: http://10.10.10.21:5601
- Log in with: elastic : elastic