【Kafka】A hands-on session with Kafka's native command-line tools on Linux

The environment is Linux with four machines and Kafka version 3.6. Kafka is installed on node1, node2, and node3; ZooKeeper is installed on node2, node3, and node4.

After installing Kafka, go into the bin directory. You will see a number of .sh scripts there; they are the basis for all the commands below.

Start Kafka. The command below is given the relative path of the broker configuration file:

kafka-server-start.sh ./server.properties
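
If you do not want the broker to hold the terminal, kafka-server-start.sh also accepts a -daemon flag, and there is a matching stop script. A small sketch (the properties path is whatever your installation uses):

# start the broker in the background (output goes to the installation's logs directory instead of the terminal)
kafka-server-start.sh -daemon ./server.properties

# stop the broker running on this machine
kafka-server-stop.sh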

If you run into an .sh script you are not familiar with, just type its name and press Enter: it will print the available options. If you pass the wrong arguments, Kafka will also print a corresponding error message.

[root@localhost bin]# kafka-topics.sh
Create, delete, describe, or change a topic.
Option                                   Description                            
------                                   -----------                            
--alter
    Alter the number of partitions and replica assignment. Update the configuration of an existing topic via --alter is no longer supported here (the kafka-configs CLI supports altering topic configs with a --bootstrap-server option).
--at-min-isr-partitions
    if set when describing topics, only show partitions whose isr count is equal to the configured minimum.
--bootstrap-server <String: server to connect to>
    REQUIRED: The Kafka server to connect to.
--command-config <String: command config property file>
    Property file containing configs to be passed to Admin Client. This is used only with --bootstrap-server option for describing and altering broker configs.
--config <String: name=value>
    A topic configuration override for the topic being created or altered. The following is a list of valid configurations: cleanup.policy, compression.type, delete.retention.ms, file.delete.delay.ms, flush.messages, flush.ms, follower.replication.throttled.replicas, index.interval.bytes, leader.replication.throttled.replicas, local.retention.bytes, local.retention.ms, max.compaction.lag.ms, max.message.bytes, message.downconversion.enable, message.format.version, message.timestamp.after.max.ms, message.timestamp.before.max.ms, message.timestamp.difference.max.ms, message.timestamp.type, min.cleanable.dirty.ratio, min.compaction.lag.ms, min.insync.replicas, preallocate, remote.storage.enable, retention.bytes, retention.ms, segment.bytes, segment.index.bytes, segment.jitter.ms, segment.ms, unclean.leader.election.enable. See the Kafka documentation for full details on the topic configs. It is supported only in combination with --create if --bootstrap-server option is used (the kafka-configs CLI supports altering topic configs with a --bootstrap-server option).
--create
    Create a new topic.
--delete
    Delete a topic
--delete-config <String: name>
    A topic configuration override to be removed for an existing topic (see the list of configurations under the --config option). Not supported with the --bootstrap-server option.
--describe
    List details for the given topics.
--exclude-internal
    exclude internal topics when running list or describe command. The internal topics will be listed by default
--help
    Print usage information.
--if-exists
    if set when altering or deleting or describing topics, the action will only execute if the topic exists.
--if-not-exists
    if set when creating topics, the action will only execute if the topic does not already exist.
--list
    List all available topics.
--partitions <Integer: # of partitions>
    The number of partitions for the topic being created or altered (WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected). If not supplied for create, defaults to the cluster default.
--replica-assignment <String: broker_id_for_part1_replica1 : broker_id_for_part1_replica2 , broker_id_for_part2_replica1 : broker_id_for_part2_replica2 , ...>
    A list of manual partition-to-broker assignments for the topic being created or altered.
--replication-factor <Integer: replication factor>
    The replication factor for each partition in the topic being created. If not supplied, defaults to the cluster default.
--topic <String: topic>
    The topic to create, alter, describe or delete. It also accepts a regular expression, except for --create option. Put topic name in double quotes and use the '\' prefix to escape regular expression symbols; e.g. "test\.topic".
--topic-id <String: topic-id>
    The topic-id to describe. This is used only with --bootstrap-server option for describing topics.
--topics-with-overrides
    if set when describing topics, only show topics that have overridden configs
--unavailable-partitions
    if set when describing topics, only show partitions whose leader is not available
--under-min-isr-partitions
    if set when describing topics, only show partitions whose isr count is less than the configured minimum.
--under-replicated-partitions
    if set when describing topics, only show under replicated partitions
--version
    Display Kafka version.

For example, here we create a topic named test.

kafka-topics.sh --create --topic test  --bootstrap-server node1:9092 --partitions 2 --replication-factor 2
Created topic test.
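
The other flags from the help output above combine as you would expect. For example, a hedged sketch that creates a second (hypothetical) topic idempotently and overrides a couple of the topic configs listed under --config:

# --if-not-exists turns the command into a no-op if the topic already exists;
# the retention.ms / cleanup.policy values are illustrative, not recommendations
kafka-topics.sh --create --topic test2 --if-not-exists --bootstrap-server node1:9092 \
  --partitions 2 --replication-factor 2 \
  --config retention.ms=604800000 --config cleanup.policy=delete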

Connect to the Kafka broker on node1 and fetch the topic list from the cluster metadata:

[root@localhost bin]# kafka-topics.sh --list --bootstrap-server node1:9092
test
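
--list also accepts --exclude-internal (see the help output above) to hide Kafka's internal topics such as __consumer_offsets once they exist. A quick sketch:

# list only user-created topics
kafka-topics.sh --list --exclude-internal --bootstrap-server node1:9092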

Look at the details of a specific topic:

[root@localhost bin]# kafka-topics.sh --describe --topic test --bootstrap-server node1:9092
Topic: test	TopicId: WgjG4Ou_Q7iQvzgipRgzjg	PartitionCount: 2	ReplicationFactor: 2	Configs:
	Topic: test	Partition: 0	Leader: 2	Replicas: 2,1	Isr: 2,1
	Topic: test	Partition: 1	Leader: 3	Replicas: 3,2	Isr: 3,2
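
--describe also accepts the health filters shown in the help output; leaving out --topic checks every topic, which is handy for spotting trouble across the cluster. A sketch:

# show only partitions that are currently missing replicas
kafka-topics.sh --describe --under-replicated-partitions --bootstrap-server node1:9092

# show only partitions whose leader is unavailable
kafka-topics.sh --describe --unavailable-partitions --bootstrap-server node1:9092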

Start a producer on one of the machines and two consumers on the other two machines, both consumers in the same group. (The console producer's --broker-list option used below is the older spelling; recent Kafka versions prefer --bootstrap-server.)

[root@localhost bin]# kafka-console-producer.sh --broker-list node1:9092 --topic test
>hello 03
>1
>2
>3
>4
>5
>6
>7
>8

You can see that within the same group, as long as the group membership does not change, one and the same consumer is the only one receiving this data: each partition is assigned to exactly one consumer in a group, and here all the messages landed in a single partition (compare the group describe output below). This suits cases where messages must be consumed in order and concurrent consumption is not acceptable.

[root@localhost bin]# kafka-console-consumer.sh --bootstrap-server node1:9092 --topic test --group msb
hello 03
1
2
3
4
5
6
7
8
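
By default a console consumer only sees messages produced after it starts; the --from-beginning flag replays the topic from the earliest retained offset. A small sketch that does not join the msb group, so its offsets are unaffected:

# re-read everything currently retained in the topic; without --group the tool
# generates a throwaway console-consumer-* group id of its own
kafka-console-consumer.sh --bootstrap-server node1:9092 --topic test --from-beginning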

Check the state of a specific consumer group:

[root@localhost bin]# kafka-consumer-groups.sh --bootstrap-server node2:9092 --group msb --describe
GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                           HOST             CLIENT-ID
msb             test            1          24              24              0               console-consumer-4987804d-6e59-4f4d-9952-9afb9aff6cbe /192.168.184.130 console-consumer
msb             test            0          0               0               0               console-consumer-242992e4-7801-4a38-a8f3-8b44056ed4b6 /192.168.184.130 console-consumer
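
Beyond --describe, kafka-consumer-groups.sh can list groups and rewind offsets. A hedged sketch (offset resets are only accepted while the group has no active members, so stop the console consumers first):

# list every consumer group known to the cluster
kafka-consumer-groups.sh --bootstrap-server node2:9092 --list

# rewind group msb to the earliest offsets of topic test;
# swap --execute for --dry-run to preview the change without applying it
kafka-consumer-groups.sh --bootstrap-server node2:9092 --group msb --topic test --reset-offsets --to-earliest --execute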

Finally, let's take a look at ZooKeeper. An extra kafka node has appeared under the ZooKeeper root; presumably the brokers' zookeeper.connect points at a /kafka chroot (see the sketch after the listing below).

[zk: localhost:2181(CONNECTED) 1] ls /
[kafka, node1, node6, node7, testLock, zookeeper]
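
The server.properties for this cluster was not shown above, so the exact value is an assumption, but a zookeeper.connect line like the following sketch would produce the /kafka entry:

# server.properties (assumed, not shown earlier in this post):
# the trailing /kafka is the ZooKeeper chroot, which is why all of Kafka's
# metadata lives under /kafka rather than under the ZooKeeper root
zookeeper.connect=node2:2181,node3:2181,node4:2181/kafka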

Under /kafka there is plenty of metadata spread across these child znodes, for example:

[zk: localhost:2181(CONNECTED) 2] ls /kafka
[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification]
# the cluster id
[zk: localhost:2181(CONNECTED) 3] ls /kafka/cluster 
[id]
[zk: localhost:2181(CONNECTED) 5] get /kafka/cluster/id 
{"version":"1","id":"8t14lxoAS1SdXapY6ysw_A"}
# the id of the controller broker
[zk: localhost:2181(CONNECTED) 6] get /kafka/controller
{"version":2,"brokerid":3,"timestamp":"1698841142070","kraftControllerEpoch":-1}

You can see that brokers/topics contains a __consumer_offsets entry, the internal topic Kafka uses to store consumer offsets.

[zk: localhost:2181(CONNECTED) 10] ls /kafka/brokers/topics 
[__consumer_offsets, test]
[zk: localhost:2181(CONNECTED) 12] get /kafka/brokers/topics/__consumer_offsets 
{"partitions":{"44":[1],"45":[2],"46":[3],"47":[1],"48":[2],"49":[3],"10":[3],"11":[1],"12":[2],"13":[3],"14":[1],"15":[2],"16":[3],"17":[1],"18":[2],"19":[3],"0":[2],"1":[3],"2":[1],"3":[2],"4":[3],"5":[1],"6":[2],"7":[3],"8":[1],"9":[2],"20":[1],"21":[2],"22":[3],"23":[1],"24":[2],"25":[3],"26":[1],"27":[2],"28":[3],"29":[1],"30":[2],"31":[3],"32":[1],"33":[2],"34":[3],"35":[1],"36":[2],"37":[3],"38":[1],"39":[2],"40":[3],"41":[1],"42":[2],"43":[3]},"topic_id":"RGxJyefAQlKrmY3LTVbKGw","adding_replicas":{},"removing_replicas":{},"version":3}
