ELK Log Analysis System

Table of Contents

Getting Started

Installing the elasticsearch-head plugin on node1

Installing Logstash on node1

Configuring system log collection on node1

Installing Kibana on node1

The third host

Overview:

Log analysis is one of an operations engineer's main tools for troubleshooting failures and spotting problems. Logs fall broadly into system logs, application logs, and security logs. From them, operations and development staff can learn about a server's hardware and software, find configuration mistakes and their causes, and, by reviewing logs regularly, track server load, performance, and security so that issues can be corrected promptly.
Log data is large and usually scattered across many machines, which makes hunting down the relevant log tedious when something breaks.
A dedicated log-processing system is therefore very useful. This article walks through one such system: the ELK stack (Elasticsearch + Logstash + Kibana).

Environment:

Three hosts

Disable the firewall and SELinux:

[root@bogon ~]# iptables -F

[root@bogon ~]# setenforce 0

[root@bogon ~]# systemctl stop firewalld

Set the hostnames:

Host 1: elk-node1

Host 2: elk-node2

Host 3: leave it alone for now; it is configured later

Configure host name mapping

/etc/hosts

192.168.1.117 elk-node1
192.168.1.120 elk-node2

A note before starting: this service tends to lose its ports partway through setup. Adjust the kernel parameters and memory limits first, or you will waste a great deal of time.
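A minimal sketch of the kernel and limits settings Elasticsearch 5.x expects, run as root; these are the documented minimums, not values tuned for this particular cluster:

```shell
# Elasticsearch needs a high mmap count for its mmapfs indices;
# it refuses to bootstrap in production mode below 262144.
sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf

# Raise the open-file limit for the elasticsearch user.
cat >> /etc/security/limits.conf <<'EOF'
elasticsearch soft nofile 65536
elasticsearch hard nofile 65536
EOF
```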

Getting Started

Upload the packages

[root@elk-node1 elk软件包]# rpm -ivh elasticsearch-5.5.0.rpm

warning: elasticsearch-5.5.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY

Preparing...                          ################################# [100%]

Creating elasticsearch group... OK

Creating elasticsearch user... OK

Updating / installing...

   1:elasticsearch-0:5.5.0-1          ################################# [100%]

### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd

 sudo systemctl daemon-reload

 sudo systemctl enable elasticsearch.service

### You can start elasticsearch service by executing

 sudo systemctl start elasticsearch.service

The installer has already told us the next steps:

[root@elk-node1 elk软件包]# systemctl daemon-reload

[root@elk-node1 elk软件包]#  sudo systemctl enable elasticsearch.service

Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.

Edit the configuration file on node1 and node2 (on node2, set node.name: elk-node2)

vim /etc/elasticsearch/elasticsearch.yml

17 cluster.name: my-elk-cluster

23 node.name: elk-node1

33 path.data: /data/elk_data

37 path.logs: /var/log/elasticsearch

55 network.host: 0.0.0.0

59 http.port: 9200

68 discovery.zen.ping.unicast.hosts: ["elk-node1", "elk-node2"]

90 http.cors.enabled: true                 these two lines are added on node1 only

91 http.cors.allow-origin: "*"

Create the data directory (on both nodes) and change its ownership to elasticsearch

[root@elk-node2 ~]# mkdir -p /data/elk_data

[root@elk-node2 ~]#  chown elasticsearch:elasticsearch /data/elk_data/

Start the service and check the port

[root@elk-node1 ~]# systemctl start elasticsearch.service

[root@elk-node1 ~]#  netstat -anpt | grep 9200

tcp6       0      0 :::9200                 :::*                    LISTEN      56622/java          

[root@elk-node2 ~]# systemctl restart elasticsearch.service

[root@elk-node2 ~]# netstat -anpt | grep 9200

tcp6       0      0 :::9200                 :::*                    LISTEN      55553/java          

Visit each node's IP on port 9200 in a browser to verify
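The same check can be done from the shell; these curl probes (IPs taken from this setup) give a quick sanity check of the two-node cluster:

```shell
# Basic node info: name, cluster_name, and version
curl http://192.168.1.117:9200

# Cluster health: expect "number_of_nodes" : 2 and status green or yellow
curl http://192.168.1.117:9200/_cluster/health?pretty
```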

Installing the elasticsearch-head plugin on node1

Unpack the Node.js package:    tar xf node-v8.2.1-linux-x64.tar.gz -C /usr/local/

Create the symlinks (note: node and npm each need their own link; linking npm to /usr/bin/node would break node)

[root@elk-node1 elk软件包]# ln -s /usr/local/node-v8.2.1-linux-x64/bin/node /usr/bin/node

[root@elk-node1 elk软件包]# ln -s /usr/local/node-v8.2.1-linux-x64/bin/npm /usr/local/bin/

Unpack the head package

[root@elk-node1 elk软件包]# tar xf elasticsearch-head.tar.gz -C /data/elk_data/

cd into /data/elk_data

[root@elk-node1 elk软件包]# cd /data/elk_data/

Change the owner and group

[root@elk-node1 elk_data]# chown -R elasticsearch:elasticsearch elasticsearch-head/

cd into elasticsearch-head/

[root@elk-node1 elk_data]# cd elasticsearch-head/

Install the dependencies with npm

[root@elk-node1 elasticsearch-head]# npm install

npm WARN deprecated fsevents@1.2.13: The v1 package contains DANGEROUS / INSECURE binaries. Upgrade to safe fsevents v2

npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.0.0 (node_modules/karma/node_modules/chokidar/node_modules/fsevents):

npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression

up to date in 8.357s

cd into _site, back up app.js, and edit it

[root@elk-node1 elasticsearch-head]# cd _site/

[root@elk-node1 _site]#  cp app.js{,.bak}

[root@elk-node1 _site]#  vim app.js

Inside vim, type 4329 then G (i.e. 4329G) to jump to line 4329.

Line 4329:    this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.1.117:9200";
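The same edit can be made non-interactively. This sketch applies the substitution to a throwaway copy of the line so it can be run anywhere; on the real host you would point sed at _site/app.js after backing the file up:

```shell
# Demonstrate the app.js substitution on a sample line
APP_JS=$(mktemp)
cat > "$APP_JS" <<'EOF'
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://localhost:9200";
EOF
# Rewrite the default localhost URI to point at node1
sed -i 's#http://localhost:9200#http://192.168.1.117:9200#' "$APP_JS"
grep '192.168.1.117:9200' "$APP_JS"
```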

Start the head server with npm

[root@elk-node1 _site]#  npm run start &

[1] 4423

[root@elk-node1 _site]#

> elasticsearch-head@0.0.0 start /data/elk_data/elasticsearch-head

> grunt server

Running "connect:server" (connect) task

Waiting forever...

Started connect web server on http://localhost:9100

Start Elasticsearch

[root@elk-node1 _site]# systemctl start elasticsearch

Check the port

[root@elk-node1 _site]# netstat -lnpt | grep 9100

tcp        0      0 0.0.0.0:9100            0.0.0.0:*               LISTEN      4433/grunt      

Browse to http://192.168.1.117:9100 to open the head UI

Insert a test document (type test)

[root@elk-node1 _site]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{ "user": "zhangsan","mesg":"hello word" }'

{

  "_index" : "index-demo",

  "_type" : "test",

  "_id" : "1",

  "_version" : 1,

  "result" : "created",

  "_shards" : {

    "total" : 2,

    "successful" : 2,

    "failed" : 0

  },

  "created" : true

}

Refresh the head page to see the change
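The document can also be read back with a GET, and the index listing confirms what the head page shows:

```shell
# Fetch the document just indexed
curl 'http://192.168.1.117:9200/index-demo/test/1?pretty'

# List all indices; index-demo should appear here
curl 'http://192.168.1.117:9200/_cat/indices?v'
```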

Installing Logstash on node1

rpm -ivh logstash-5.5.1.rpm  

Start the service and create a symlink

systemctl start logstash

ln -s /usr/share/logstash/bin/logstash /usr/local/bin/

Start a Logstash instance with -e, reading from standard input

[root@elk-node1 elk软件包]#  logstash -e 'input { stdin{} } output { stdout{} }'

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults

Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console

20:53:11.284 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}

20:53:11.294 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}

20:53:11.378 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"a184b92b-62d6-416a-86bc-d496e1d07fbc", :path=>"/usr/share/logstash/data/uuid"}

20:53:11.590 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}

20:53:11.681 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started

The stdin plugin is now waiting for input:

20:53:11.825 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

www.baidu.com      (typed by hand)

2023-07-06T12:53:26.772Z elk-node1 www.baidu.com

www.slan.com.cn      (typed by hand)

2023-07-06T12:53:48.435Z elk-node1 www.slan.com.cn

Show detailed output with the rubydebug codec

[root@elk-node1 elk软件包]# logstash -e 'input { stdin{} } output { stdout{ codec =>rubydebug} }'

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults

Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console

20:54:54.115 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}

20:54:54.241 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started

The stdin plugin is now waiting for input:

20:54:54.387 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

www.baidu.com      (typed by hand)

{

    "@timestamp" => 2023-07-06T12:55:04.406Z,

      "@version" => "1",

          "host" => "elk-node1",

       "message" => "www.baidu.com"

}

Configuring system log collection on node1

[root@elk-node1 conf.d]# vim system.conf

input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.1.117:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
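Before running it, the pipeline file can be syntax-checked; Logstash 5.x accepts -t (short for --config.test_and_exit) to parse a config without starting it:

```shell
# Parse the config only; should report "Configuration OK" on success
logstash -f /etc/logstash/conf.d/system.conf -t
```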

Restart Logstash

[root@elk-node1 _site]#  systemctl restart logstash

Load the file and check that events reach Elasticsearch:

[root@elk-node1 conf.d]# logstash -f system.conf

Installing Kibana on node1

[root@elk-node1 elk软件包]#  rpm -ivh kibana-5.5.1-x86_64.rpm
warning: kibana-5.5.1-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:kibana-5.5.1-1                   ################################# [100%]

Enable it at boot
[root@elk-node1 elk软件包]#  systemctl enable kibana.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.

Edit the configuration file and restart
[root@elk-node1 elk软件包]#  vim /etc/kibana/kibana.yml

 2 server.port: 5601

 7 server.host: "0.0.0.0"

21 elasticsearch.url: "http://192.168.1.117:9200"

30 kibana.index: ".kibana"

[root@localhost elk软件包]#  systemctl restart kibana.service

Check the port

[root@elk-node1 elk软件包]#  netstat -lnpt | grep 5601
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      4370/node    
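Kibana 5.x also exposes a status endpoint, handy for checking the server from the shell before opening the browser:

```shell
# Returns a JSON status report when Kibana is up
curl http://192.168.1.117:5601/api/status
```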

[root@elk-node1 elk软件包]# logstash -f /etc/logstash/conf.d/system.conf

Open Kibana in the browser

Create an index pattern; note the index name, e.g. system-2023.07.07

Enter the index name into the field

Check that the pattern was created successfully

The third host

Set the hostname

[root@localhost ~]#  hostname apache

Reload the shell

[root@localhost ~]# bash

Disable the firewall and SELinux

[root@apache ~]#  iptables -F

[root@apache ~]#  systemctl stop firewalld

[root@apache ~]#  setenforce 0

Install httpd

[root@apache ~]# yum -y install httpd

Start it

[root@apache ~]# systemctl start httpd

Upload the package

[root@apache ~]#  rpm -ivh logstash-5.5.1.rpm

warning: logstash-5.5.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY

Preparing...                          ################################# [100%]

Updating / installing...

   1:logstash-1:5.5.1-1               ################################# [100%]

Using provided startup.options file: /etc/logstash/startup.options

OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N

Successfully created system startup script for Logstash

Enable it at boot

[root@apache ~]#  systemctl enable logstash.service

Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.

cd into the Logstash config directory

[root@apache ~]#  cd /etc/logstash/conf.d/

Edit the configuration file

[root@apache conf.d]#  vim apache_log.conf

input {
    file {
        path => "/var/log/httpd/access_log"
        type => "access"
        start_position => "beginning"
    }
    file {
        path => "/var/log/httpd/error_log"
        type => "error"
        start_position => "beginning"
    }
}

output {
    if [type] == "access" {
        elasticsearch {
            hosts => ["192.168.1.117:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["192.168.1.117:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}
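As with the system pipeline, the file can be syntax-checked first, and once some page hits have been generated the new indices can be listed from any host:

```shell
# Parse-check the pipeline (Logstash 5.x)
logstash -f /etc/logstash/conf.d/apache_log.conf -t

# Both apache_* indices should appear once traffic reaches httpd
curl 'http://192.168.1.117:9200/_cat/indices?v' | grep apache
```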

Create the symlink

[root@apache bin]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/

[root@apache bin]# ll

total 0

lrwxrwxrwx. 1 root root 32 Jul  7 11:36 logstash -> /usr/share/logstash/bin/logstash

[root@apache bin]#  cd /etc/logstash/conf.d/

Load the file

[root@apache conf.d]# logstash -f apache_log.conf

OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults

Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console

11:37:25.689 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}

11:37:25.693 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}

11:37:25.909 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"e79973b7-8439-4aaf-9f4a-248acb91eb3f", :path=>"/usr/share/logstash/data/uuid"}

11:37:26.125 [LogStash::Runner] ERROR logstash.agent - Cannot create pipeline {:reason=>"Expected one of #, input, filter, output at line 1, column 1 (byte 1) after "}

This error means the first token of the config file is not a valid section keyword, typically a typo such as `nput` for `input`. After fixing the file and rerunning, the pipeline starts:

:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x54a65874>]}

12:04:31.849 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}

12:04:32.199 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started

12:04:32.380 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9601}      seeing this line confirms success

Type some arbitrary input:
bvnc vch df hdf dfh hfd

aefssdfgsfdg

Hit the httpd page in a browser a few times to generate access-log entries

Then check http://192.168.1.117:9100 in the browser

Create the index patterns in Kibana

Check that they were created successfully

Open them

View the data

Done.

