Kafka Cluster Deployment and Setup [Detailed Version]
Preface
A Kafka cluster needed to be set up for verification work on a project.
I. Environment Overview
The system environment really matters: it directly affects how efficiently problems can be diagnosed. Most blog posts leave this information out, which is why some of their commands do not work unchanged in other environments.
1. Environment details
#1 OS version
[root@cnode1][/opt]
$cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
#2 JDK version
[root@cnode1][/opt]
$java -version
java version "1.8.0_271"
Java(TM) SE Runtime Environment (build 1.8.0_271-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.271-b09, mixed mode)
2. Deployment inventory
No. | IP | Hostname | Deployed services |
---|---|---|---|
1 | 192.168.56.101 | cnode1 | jdk,zookeeper,kafka |
2 | 192.168.56.103 | cnode2 | jdk,zookeeper,kafka |
3 | 192.168.56.104 | cnode3 | jdk,zookeeper,kafka |
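The later steps copy files with scp to root@cnode2 and root@cnode3, so every host must be able to resolve the others by name. A minimal sketch of the /etc/hosts entries this assumes (add them on all three nodes):
192.168.56.101 cnode1
192.168.56.103 cnode2
192.168.56.104 cnode3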
3. Component versions
Component | Version |
---|---|
jdk | 1.8.0_271 |
zookeeper | bundled with Kafka |
kafka | 2.12-3.6.0 |
II. Deployment Steps
1. JDK deployment
JDK installation is routine, so the detailed steps are not recorded here; only the configuration is shown.
[root@cnode1][~]
$ cat /etc/profile
export JAVA_HOME=/usr/lib/java/jdk1.8.0_271
export CLASSPATH=.:$JAVA_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH
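Reload the profile (or open a new login shell) so the variables take effect before verifying:
[root@cnode1][~]
$ source /etc/profile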
[root@cnode1][~]
$ java -version
java version "1.8.0_271"
Java(TM) SE Runtime Environment (build 1.8.0_271-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.271-b09, mixed mode)
2. ZooKeeper installation
2.1 cnode1 node
#1 Download the package
[root@cnode1][~/package]
$ wget https://downloads.apache.org/kafka/3.6.0/kafka_2.12-3.6.0.tgz
#2 Extract the archive
[root@cnode1][~/package]
$ tar -xzvf kafka_2.12-3.6.0.tgz
#3 Move it to /opt
[root@cnode1][~/package]
$ mv kafka_2.12-3.6.0 /opt/
[root@cnode1][~/package]
$ ll /opt/kafka_2.12-3.6.0
#4 Create the storage directories
mkdir -p /opt/software/kafka/log
mkdir -p /opt/software/kafka/zookeeper
#5 Edit the configuration file
[root@cnode1][/opt/kafka_2.12-3.6.0/config]
$ vim zookeeper.properties
dataDir=/opt/software/kafka/zookeeper
dataLogDir=/opt/software/kafka/zookeeper/log
clientPort=2181
maxClientCnxns=100
tickTime=2000
initLimit=10
syncLimit=5
server.0=192.168.56.101:2888:3888
server.1=192.168.56.103:2888:3888
server.2=192.168.56.104:2888:3888
#6 Write the myid file
[root@cnode1][/opt/kafka_2.12-3.6.0/config]
$ vim /opt/software/kafka/zookeeper/myid
0
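The number in myid must match the N of the server.N entry whose IP belongs to this host (server.0 is 192.168.56.101, so cnode1 gets 0). Instead of vim, the file can also be written in one command, for example:
[root@cnode1][/opt/kafka_2.12-3.6.0/config]
$ echo 0 > /opt/software/kafka/zookeeper/myid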
#7 Distribute to cnode2 and cnode3
[root@cnode1][/opt]
$ scp -r kafka_2.12-3.6.0 root@cnode2:$PWD
[root@cnode1][/opt]
$ scp -r kafka_2.12-3.6.0 root@cnode3:$PWD
2.2 cnode2 node
#1 Change to the config directory
[root@cnode2][/opt/kafka_2.12-3.6.0/config]
$cd /opt/kafka_2.12-3.6.0/config
#2 Create the storage directories
mkdir -p /opt/software/kafka/log
mkdir -p /opt/software/kafka/zookeeper
#3 Edit myid
[root@cnode2][/opt/kafka_2.12-3.6.0/config]
$vim /opt/software/kafka/zookeeper/myid
1
2.3 cnode3 node
#1 Change to the config directory
[root@cnode3][/opt/kafka_2.12-3.6.0/config]
$cd /opt/kafka_2.12-3.6.0/config
#2 Create the storage directories
mkdir -p /opt/software/kafka/log
mkdir -p /opt/software/kafka/zookeeper
#3 Edit myid
[root@cnode3][/opt/kafka_2.12-3.6.0/config]
$vim /opt/software/kafka/zookeeper/myid
2
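If firewalld is enabled on these CentOS 7 hosts, the ZooKeeper ports (2181, 2888, 3888) and the Kafka port used later (9092) must be reachable between the nodes. A rough sketch, assuming firewalld, to run on each node:
firewall-cmd --permanent --add-port=2181/tcp --add-port=2888/tcp --add-port=3888/tcp --add-port=9092/tcp
firewall-cmd --reload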
3. Kafka installation
3.1 cnode1 node
#1 Change to the config directory
[root@cnode1][/opt/kafka_2.12-3.6.0/config]
$cd /opt/kafka_2.12-3.6.0/config
#2 Configuration as follows (replace the IPs with your own):
[root@cnode1][/opt/kafka_2.12-3.6.0/config]
$cat server.properties |grep -v "^#\|^$"
broker.id=0
listeners=PLAINTEXT://192.168.56.101:9092
advertised.listeners=PLAINTEXT://192.168.56.101:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.56.101:2181,192.168.56.103:2181,192.168.56.104:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
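Note that log.dirs still points to /tmp/kafka-logs, which may be cleaned on reboot. If the /opt/software/kafka/log directory created earlier was meant to hold the broker data, it is safer to point log.dirs there instead, e.g.:
log.dirs=/opt/software/kafka/log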
3.2 cnode2 node
#1 Change to the config directory
[root@cnode2][/opt/kafka_2.12-3.6.0/config]
$cd /opt/kafka_2.12-3.6.0/config
#2 Configuration as follows (replace the IPs with your own):
[root@cnode2][/opt/kafka_2.12-3.6.0/config]
$cat server.properties |grep -v "^#\|^$"
broker.id=1
listeners=PLAINTEXT://192.168.56.103:9092
advertised.listeners=PLAINTEXT://192.168.56.103:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.56.101:2181,192.168.56.103:2181,192.168.56.104:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
3.3 cnode3 node
#1 Change to the config directory
[root@cnode3][/opt/kafka_2.12-3.6.0/config]
$cd /opt/kafka_2.12-3.6.0/config
#2 Configuration as follows (replace the IPs with your own):
[root@cnode3][/opt/kafka_2.12-3.6.0/config]
$cat server.properties |grep -v "^#\|^$"
broker.id=2
listeners=PLAINTEXT://192.168.56.104:9092
advertised.listeners=PLAINTEXT://192.168.56.104:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.56.101:2181,192.168.56.103:2181,192.168.56.104:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
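The three server.properties files differ only in broker.id, listeners, and advertised.listeners. Rather than editing each file by hand, a copy of the cnode1 file can be adjusted per node; a rough sketch for cnode2 (adapt the values for cnode3):
[root@cnode2][/opt/kafka_2.12-3.6.0/config]
$ sed -i 's/^broker.id=0/broker.id=1/; s/192.168.56.101:9092/192.168.56.103:9092/g' server.properties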
III. Starting the Services
#1 Wrap the service start and stop commands in scripts
[root@cnode1][/opt/kafka_2.12-3.6.0]
$ ll
-rwxrwxrwx 1 root root 215 Nov 14 11:40 kafkaStart.sh
-rwxrwxrwx 1 root root 211 Nov 13 16:33 kafkaStop.sh
#1.1 ZooKeeper and Kafka start script
[root@cnode1][/opt/kafka_2.12-3.6.0]
$ cat kafkaStart.sh
/opt/kafka_2.12-3.6.0/bin/zookeeper-server-start.sh /opt/kafka_2.12-3.6.0/config/zookeeper.properties &
sleep 10
/opt/kafka_2.12-3.6.0/bin/kafka-server-start.sh /opt/kafka_2.12-3.6.0/config/server.properties &
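With the trailing & the ZooKeeper and broker output stays attached to the launching console. Both start scripts also accept a -daemon flag that runs them in the background and sends console output to the logs directory, so an alternative version of the script could be:
/opt/kafka_2.12-3.6.0/bin/zookeeper-server-start.sh -daemon /opt/kafka_2.12-3.6.0/config/zookeeper.properties
sleep 10
/opt/kafka_2.12-3.6.0/bin/kafka-server-start.sh -daemon /opt/kafka_2.12-3.6.0/config/server.properties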
#1.2 ZooKeeper and Kafka stop script (stop Kafka before ZooKeeper; the stop scripts take no config argument)
[root@cnode1][/opt/kafka_2.12-3.6.0]
$ cat kafkaStop.sh
/opt/kafka_2.12-3.6.0/bin/kafka-server-stop.sh
sleep 3
/opt/kafka_2.12-3.6.0/bin/zookeeper-server-stop.sh
#2 Distribute the scripts to the other nodes
[root@cnode1][/opt/kafka_2.12-3.6.0]
$ scp kafkaStart.sh kafkaStop.sh root@cnode2:$PWD
[root@cnode1][/opt/kafka_2.12-3.6.0]
$ scp kafkaStart.sh kafkaStop.sh root@cnode3:$PWD
#3 Start the services (run on all 3 nodes)
[root@cnode1][/opt/kafka_2.12-3.6.0]
$ sh /opt/kafka_2.12-3.6.0/kafkaStart.sh
#4 Verify the services
[root@cnode2][/opt/kafka_2.12-3.6.0]
$jps
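jps on each node should show a QuorumPeerMain process (ZooKeeper) and a Kafka process (the broker). For an end-to-end check, a replicated topic can be created and described against any one broker, for example (the topic name "test" is just an illustration):
[root@cnode1][/opt/kafka_2.12-3.6.0]
$ bin/kafka-topics.sh --create --topic test --partitions 3 --replication-factor 3 --bootstrap-server 192.168.56.101:9092
[root@cnode1][/opt/kafka_2.12-3.6.0]
$ bin/kafka-topics.sh --describe --topic test --bootstrap-server 192.168.56.101:9092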
IV. Summary
At this point the Kafka cluster is up and running. If you have any questions, feel free to reach out and discuss.
Original article: https://blog.csdn.net/xgysimida/article/details/143848767