Middleware - Kafka

Setting up and using Kafka on Windows

Notes

  1. Press Shift + right-click and choose "Open command window here" to open a command prompt in the current directory
  2. The steps below change path settings in the configuration files; if these paths are left unchanged, on Windows the data and log files are written to the root of the drive, so they need to be updated

1. Install ZooKeeper

  1. Download: https://zookeeper.apache.org/releases.html https://www.apache.org/dyn/closer.cgi/zookeeper/
  2. Extract: unpack it to a path without spaces, e.g. X:/xxxpath
  3. Create the config: go to path/conf; only a sample config is shipped by default, so copy zoo_sample.cfg and name the copy zoo.cfg
  4. Edit zoo.cfg and change the following properties
    1. dataDir=/tmp/zookeeper to dataDir=X:/xxxpath/zookeeper-3.4.13/data or dataDir=X:\xxxpath\zookeeper-3.4.13\data
    2. optionally change the client port: clientPort=2181 (can be left as is)
  5. Configure the environment variable: append ;X:\xxxpath\zookeeper-3.4.13\bin to the PATH environment variable
  6. Open cmd and run zkserver to start the server (see the command sketch after this list)
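
Steps 3 and 6 can also be done from the command prompt. A minimal sketch, assuming the placeholder install path X:\xxxpath\zookeeper-3.4.13 used above:

cd /d X:\xxxpath\zookeeper-3.4.13
copy conf\zoo_sample.cfg conf\zoo.cfg
rem edit conf\zoo.cfg as described in step 4, then start the server
zkserver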

Complete configuration

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=E:/IdeaProjects/common/zookeeper-3.4.13/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
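
After zkserver is running, a quick sanity check (an extra step, not in the original notes) is to confirm from a second command window that ZooKeeper is listening on the clientPort configured above and that a client can connect:

netstat -ano | findstr "2181"
zkCli.cmd -server localhost:2181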

2. Install Kafka

  1. Download: http://kafka.apache.org/downloads, the binary distribution
  2. Extract: unpack it to X:/xxxpath
  3. Edit server.properties in X:/xxxpath/kafka_2.12-2.0.0/config (see the excerpt after this list)
    1. listeners=PLAINTEXT://:9092, the default port is 9092
    2. log.dirs=X:/xxxpath/kafka_2.12-2.0.0/kafka-logs
    3. zookeeper.connect=localhost:2181, the address of the ZooKeeper instance Kafka depends on
  4. Edit log4j.properties in X:/xxxpath/kafka_2.12-2.0.0/config and change the /tmp/xxx paths to X:/xxxpath/kafka_2.12-2.0.0/logs/
  5. Start Kafka: in cmd, run .\bin\windows\kafka-server-start.bat .\config\server.properties
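
For reference, the three edited lines in server.properties from step 3 would look like this, with the paths following the X:/xxxpath placeholder used above:

listeners=PLAINTEXT://:9092
log.dirs=X:/xxxpath/kafka_2.12-2.0.0/kafka-logs
zookeeper.connect=localhost:2181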

Using Kafka

Topic operations

  1. Create a kafka-topics.bat file with the content: kafka-run-class.bat kafka.admin.TopicCommand %*
  2. List topics: kafka-topics.bat --list --zookeeper localhost:2181
  3. Create a topic: kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic accounts
  4. Inspect the topic in ZooKeeper: zookeeper-shell.bat localhost:2181 opens a shell into ZooKeeper's node tree, where you can browse the data Kafka stores in ZooKeeper (see the session sketch after this list)
    1. Read a node's data: get /
    2. List child nodes: ls /  ls /brokers/topics  ls /brokers/topics/{topic name}/partitions/0/state
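
A minimal sketch of such a ZooKeeper session, assuming the accounts topic created above (output omitted):

zookeeper-shell.bat localhost:2181
ls /
ls /brokers/topics
ls /brokers/topics/accounts/partitions/0/state
get /brokers/topics/accounts/partitions/0/state

The same partition and leader information can also be read without going through ZooKeeper directly, via kafka-topics.bat --describe --zookeeper localhost:2181 --topic accounts.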

Message operations: simulating a producer and a consumer

  1. X:\xxxpath\kafka_2.12-2.0.0\bin\windows>kafka-console-producer.bat --broker-list localhost:9092 --topic accounts
    1. Enters send mode; everything typed is sent to Kafka
  2. X:\xxxpath\kafka_2.12-2.0.0\bin\windows>kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic accounts --from-beginning
    1. Enters listening mode; all messages in the topic are printed to the cmd console (a combined session sketch follows this list)
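
Put together, a minimal test run could look roughly like this, with producer and consumer in two separate command windows (the message text is only an example):

rem Window 1: producer, type messages after the > prompt
kafka-console-producer.bat --broker-list localhost:9092 --topic accounts
>hello kafka
>second message

rem Window 2: consumer, prints every message in the topic from the beginning
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic accounts --from-beginning
hello kafka
second message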