Flink Log Collection and Visualization with ELK

1. Component Versions

Component       Version
Flink           1.16.1
Kafka           2.0.0
Logstash        6.5.4
Elasticsearch   6.3.1
Kibana          6.3.1

  With size-based rolling log files, diagnosing a single error can mean digging through many files. Worse, when tasks are submitted in Flink on YARN mode, the containers are reclaimed once a task finishes or fails, and their logs are lost with them. To make troubleshooting easier, the log output can be written to Kafka through a KafkaAppender and then searched, analyzed, or even alerted on with ELK.

2. Configuring Flink to Write Logs to Kafka

2.1 Add the following two lines to flink-conf.yaml

env.java.opts.taskmanager: -DyarnContainerId=$CONTAINER_ID
env.java.opts.jobmanager: -DyarnContainerId=$CONTAINER_ID
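
These two options pass the YARN container ID into the JobManager and TaskManager JVMs as a system property, so the Log4j configuration below can reference it as ${sys:yarnContainerId}.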

2.2 Example log4j.properties configuration

##################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
# limitations under the License.
##################################################################
# Allows this configuration to be modified at runtime. The file will be checked every 30 seconds.
monitorInterval=30

# This affects logging for both user code and Flink
#rootLogger.appenderRef.file.ref = MainAppender
rootLogger.level = INFO
rootLogger.appenderRef.kafka.ref = Kafka
rootLogger.appenderRef.file.ref = RollingFileAppender

# Uncomment this if you want to _only_ change Flink's logging
#logger.flink.name = org.apache.flink
#logger.flink.level = INFO

# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to manually
# change the log levels here.
logger.akka.name = akka
logger.akka.level = INFO
logger.kafka.name= org.apache.kafka
logger.kafka.level = INFO
logger.hadoop.name = org.apache.hadoop
logger.hadoop.level = INFO
logger.zookeeper.name = org.apache.zookeeper
logger.zookeeper.level = INFO
logger.shaded_zookeeper.name = org.apache.flink.shaded.zookeeper3
logger.shaded_zookeeper.level = INFO

# Log all infos in the given file
appender.rolling.name = RollingFileAppender
appender.rolling.type = RollingFile
appender.rolling.append = false
appender.rolling.fileName = ${sys:log.file}
appender.rolling.filePattern = ${sys:log.file}.%i
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 500MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 10

#appender.main.name = MainAppender
#appender.main.type = RollingFile
#appender.main.append = true
#appender.main.fileName = ${sys:log.file}
#appender.main.filePattern = ${sys:log.file}.%i
#appender.main.layout.type = PatternLayout
#appender.main.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
#appender.main.policies.type = Policies
#appender.main.policies.size.type = SizeBasedTriggeringPolicy
#appender.main.policies.size.size = 100MB
#appender.main.policies.startup.type = OnStartupTriggeringPolicy
#appender.main.strategy.type = DefaultRolloverStrategy
#appender.main.strategy.max = ${env:MAX_LOG_FILE_NUMBER:-10}

# kafka
appender.kafka.type = Kafka
appender.kafka.name = Kafka
appender.kafka.syncSend = true
appender.kafka.ignoreExceptions = false
appender.kafka.topic = flink_logs
appender.kafka.property.type = Property
appender.kafka.property.name = bootstrap.servers
appender.kafka.property.value = xxx1:9092,xxx2:9092,xxx3:9092
appender.kafka.layout.type = JSONLayout
appender.kafka.layout.compact = true
appender.kafka.layout.complete = false

# Suppress the irrelevant (wrong) warnings from the Netty channel handler
#logger.netty.name = org.jboss.netty.channel.DefaultChannelPipeline
logger.netty.name = org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline
logger.netty.level = OFF

# In Flink on YARN mode you can also add custom fields to each log event
# log file path
appender.kafka.layout.additionalField1.type = KeyValuePair
appender.kafka.layout.additionalField1.key = logdir
appender.kafka.layout.additionalField1.value = ${sys:log.file}
# flink-job-name
appender.kafka.layout.additionalField2.type = KeyValuePair
appender.kafka.layout.additionalField2.key = flinkJobName
appender.kafka.layout.additionalField2.value = ${sys:flinkJobName}
# YARN container ID the task was submitted to
appender.kafka.layout.additionalField3.type = KeyValuePair
appender.kafka.layout.additionalField3.key = yarnContainerId
appender.kafka.layout.additionalField3.value = ${sys:yarnContainerId}
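
With JSONLayout, each KeyValuePair above appears as a top-level field in the emitted JSON event (see the sample message in section 2.4), which makes it easy to filter by job name or container in Kibana.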

  The appender.kafka.layout.type above can be JSONLayout, or you can use a custom layout.

  To customize it, change appender.kafka.layout.type to PatternLayout and define the pattern as follows:

appender.kafka.layout.type = PatternLayout
appender.kafka.layout.pattern = {"log_level":"%p","log_timestamp":"%d{ISO8601}","log_thread":"%t","log_file":"%F","log_line":"%L","log_message":"'%m'","log_path":"%X{log_path}","job_name":"${sys:flink_job_name}"}%n
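
  Note that job code needs no Kafka-specific logic either way: anything logged through SLF4J is picked up by this Log4j configuration and shipped to both the rolling file and the Kafka topic. A minimal sketch (the class name and message are illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogDemo {
    // A standard SLF4J logger; inside a Flink job it is backed by the Log4j 2 config above.
    private static final Logger LOG = LoggerFactory.getLogger(LogDemo.class);

    public static void main(String[] args) {
        // Goes to the RollingFileAppender and, through the Kafka appender, to flink_logs.
        LOG.info("Job started; this event is shipped to Kafka.");
    }
}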

2.3 Prerequisites for submitting tasks in Flink on YARN mode

2.3.1 Place a kafka-clients jar matching your Kafka version under flink/lib


2.3.2 Kafka must be up and running

2.3.3 For a Flink standalone cluster, place the following jars under flink/lib

# kafka-clients jar matching your Kafka version
kafka-clients-3.1.0.jar
# the corresponding Jackson jars (JSONLayout depends on Jackson)
jackson-annotations-2.13.3.jar
jackson-core-2.13.3.jar
jackson-databind-2.13.3.jar

2.4 Example: submitting a task with Flink on YARN

/root/software/flink-1.16.1/bin/flink run-application \
-t yarn-application \
-D yarn.application.name=TopSpeedWindowing \
-D parallelism.default=3 \
-D jobmanager.memory.process.size=2g \
-D taskmanager.memory.process.size=2g \
-D env.java.opts="-DflinkJobName=TopSpeedWindowing" \
/root/software/flink-1.16.1/examples/streaming/TopSpeedWindowing.jar

[Note] The launch command must include this parameter (-D env.java.opts="-DflinkJobName=xxx"); otherwise the job name cannot be captured in the logs.
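
  env.java.opts is applied to all Flink JVM processes, so the flinkJobName system property is visible to the Kafka appender in every container.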

  Example message consumed from the flink_logs topic:

{
  "instant": {"epochSecond": 1698723428, "nanoOfSecond": 544000000},
  "thread": "flink-akka.actor.default-dispatcher-17",
  "level": "INFO",
  "loggerName": "org.apache.flink.runtime.rpc.akka.AkkaRpcService",
  "message": "Stopped Akka RPC service.",
  "endOfBatch": false,
  "loggerFqcn": "org.apache.logging.slf4j.Log4jLogger",
  "threadId": 68,
  "threadPriority": 5,
  "logdir": "/yarn/container-logs/application_1697779774806_0046/container_1697779774806_0046_01_000002/taskmanager.log",
  "flinkJobName": "flink-log-collect-test",
  "yarnContainerId": "container_1697779774806_0046_01_000002"
}
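
  Before wiring up Logstash, the topic can be sanity-checked with a plain Kafka consumer. A minimal sketch using kafka-clients (the broker list and group id are placeholders, reusing the addresses from the appender config):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FlinkLogTailer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder brokers, matching appender.kafka.property.value above.
        props.put("bootstrap.servers", "xxx1:9092,xxx2:9092,xxx3:9092");
        props.put("group.id", "flink-log-check");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("flink_logs"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Each value is one JSON log event produced by the Log4j Kafka appender.
                    System.out.println(record.value());
                }
            }
        }
    }
}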

  Once the logs are written to Kafka, they can be ingested into Elasticsearch through Logstash and then queried and searched with Kibana.

3. Deploying Logstash

  Deployment steps are omitted here; they are widely documented online.

  Note that the kafka-clients library bundled with Logstash must be compatible with your Kafka version; choose the Logstash release accordingly.

  Write the following into the config/logstash-sample.conf file:

input {
  kafka {
    bootstrap_servers => "xxx1:9092,xxx2:9092,xxx3:9092"
    group_id => "logstash-group"
    topics => ["flink_logs"]
    consumer_threads => 3
    type => "flink-logs"
    codec => "json"
    auto_offset_reset => "latest"
  }
}

output {
  elasticsearch {
    hosts => ["xxx:9200"]
    index => "flink-log-%{+YYYY-MM-dd}"
  }
}
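
  The index option uses Logstash's date formatting, so one index is created per day (for example flink-log-2023-10-31), which keeps individual indices small and easy to expire.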

  Start Logstash:

logstash-6.5.4/bin/logstash -f logstash-6.5.4/config/logstash-sample.conf > logstash-6.5.4/logs/logstash.log 2>&1 &

4. Deploying Elasticsearch

  Deployment steps are omitted here; they are widely documented online.

  Note that Elasticsearch must be started by a user other than root.

  Startup commands:

su elasticsearchlogtest
elasticsearch-6.3.1/bin/elasticsearch


  For accessing ES from Windows, ElasticHD is a recommended client; run it locally and it connects to ES directly.

5. Deploying Kibana

  Deployment steps are omitted here; they are widely documented online.

  Startup command:

  kibana-6.3.1-linux-x86_64/bin/kibana

5.1 Configuring index patterns


5.2 Log analysis


