
Linux Cloud Computing | [Stage 4] NoSQL - DAY 2

Topics covered:

Redis cluster overview; deploying a Redis cluster (configuring the manager host, creating the cluster, accessing the cluster, adding nodes, removing nodes)

I. Redis Cluster Overview

1. Cluster Overview

A cluster adds servers that all provide the same service, making that service stable and efficient. A single Redis instance is a single point of failure: if it goes down there is no service left, and one instance's read/write capacity is limited.

Redis Cluster is a distributed database solution that exists to scale Redis's read/write capacity. It shards data across multiple Redis nodes to improve performance, scalability, and fault tolerance. By distributing data over many nodes, the system can handle larger datasets and higher concurrency.

  • Each Redis instance in the cluster is called a node; all nodes are interconnected (PING-PONG mechanism) and use a binary protocol internally to optimize transfer speed and bandwidth.
  • A node is marked as failed only when more than half of the cluster's nodes detect the failure.
  • Clients connect directly to Redis nodes with no intermediate proxy layer; a client does not need to connect to every node in the cluster, any single reachable node is enough.
  • A Redis cluster has two types of nodes: masters and slaves (replicas).
  • Redis Cluster is built on Redis master-slave replication (when the cluster is created from the management host, master/slave roles are assigned automatically).
  • In the replication model there are multiple Redis instances; exactly one is the master, and it can have several slaves. As long as the network is up, the master continuously pushes its data changes to the slaves to keep them in sync. The master is read-write; slaves are read-only.

2. Deploying a Redis Cluster

Basic principle: the cluster should have an odd number of master nodes (at least three) so that failure votes have a clear majority, and each master should have at least one replica. (In this lab, one master-slave pair counts as one logical node.)

Lab network topology:

Server roles: Manager, redis1, redis2, redis3, redis4, redis5, redis6

  • Manager: IP 192.168.2.20; install redis-3.2.1.gem (the cluster tooling) and deploy the cluster management script
  • Redis1: IP 192.168.2.11; compile and install Redis, enable cluster mode, bind open to all hosts
  • Redis2: IP 192.168.2.12; compile and install Redis, enable cluster mode, bind open to all hosts
  • Redis3: IP 192.168.2.13; compile and install Redis, enable cluster mode, bind open to all hosts
  • Redis4: IP 192.168.2.14; compile and install Redis, enable cluster mode, bind open to all hosts
  • Redis5: IP 192.168.2.15; compile and install Redis, enable cluster mode, bind open to all hosts
  • Redis6: IP 192.168.2.16; compile and install Redis, enable cluster mode, bind open to all hosts

Storage model:

- Redis Cluster maps all physical nodes onto the slot range [0-16383] (not necessarily evenly); the cluster maintains the node <-> slot <-> value mapping.

- The cluster pre-allocates 16384 hash slots. When a key-value pair is placed in the cluster, Redis computes CRC16(key) mod 16384, and the result decides which hash slot the key goes into.

For example: to store a key-value pair, the cluster runs the key through CRC16 and takes the remainder mod 16384 (if CRC16 of the key is 10000, then 10000 % 16384 = 10000, so the key lands in slot 10000). The slot range identifies the storage node; inside that node, data is written to the master and then replicated to its slave.
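The slot computation can be reproduced outside Redis. The sketch below implements CRC16 (XModem variant: polynomial 0x1021, initial value 0x0000, which is what Redis Cluster uses) in plain Bash; the `crc16` and `keyslot` function names are our own, not Redis commands. On a live cluster, `CLUSTER KEYSLOT <key>` gives the authoritative answer.

```shell
#!/bin/bash
# CRC16 (XModem: poly 0x1021, init 0x0000) -- the checksum Redis Cluster hashes keys with
crc16() {
    local s=$1 crc=0 c i j
    for ((i = 0; i < ${#s}; i++)); do
        printf -v c '%d' "'${s:i:1}"          # byte value of the current character
        crc=$(( crc ^ (c << 8) ))
        for ((j = 0; j < 8; j++)); do
            if (( crc & 0x8000 )); then
                crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
            else
                crc=$(( (crc << 1) & 0xFFFF ))
            fi
        done
    done
    echo "$crc"
}

keyslot() {   # slot = CRC16(key) mod 16384
    echo $(( $(crc16 "$1") % 16384 ))
}

crc16 123456789   # prints 12739 (0x31C3, the check value given in the Redis Cluster spec)
keyslot name      # compare with the "Redirected to slot [...]" output in the access step
```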

Cluster-related settings in /etc/redis/6379.conf:

  • cluster-enabled yes     //enable cluster mode
  • cluster-config-file nodes-6379.conf   //location of the auto-generated cluster state file
  • cluster-node-timeout 15000    //heartbeat timeout (milliseconds)
  • Cluster bus port: 16379
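The 16379 is not configured anywhere: in this Redis version the cluster bus always listens on the client port plus a fixed offset of 10000.

```shell
# The cluster bus port is derived from the client port, not set separately
port=6379
echo $(( port + 10000 ))   # prints 16379, matching the ss output later in this lab
```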

Step 1: Configure the management host (run on manager)

① Set up the script runtime environment

[root@manager ~]# yum -y install rubygems
[root@manager ~]# gem install redis-3.2.1.gem     //package available under /linux-soft/4/redis/
Successfully installed redis-3.2.1
Parsing documentation for redis-3.2.1
Installing ri documentation for redis-3.2.1
1 gem installed

Note: RubyGems is Ruby's package manager; it defines a standard format for distributing Ruby programs and libraries and provides a tool for managing package installation.

Note: redis-3.2.1.gem provides the Ruby Redis bindings that the cluster management script needs.

② Deploy the cluster management script (redis-trib.rb)

[root@manager ~]# tar -xf redis-4.0.8.tar.gz
[root@manager ~]# cp redis-4.0.8/src/redis-trib.rb /usr/local/bin/
[root@manager ~]# chmod +x /usr/local/bin/redis-trib.rb   //make the script executable

# View help

[root@manager ~]# redis-trib.rb help    //help for the Redis cluster management script

Step 2: Create the cluster

① Enable cluster mode on redis1 (run on redis1)

# Stop the service

[root@redis1 ~]# service redis_6379 stop
Stopping ...
Redis stopped

# Edit the configuration file

[root@redis1 ~]# vim /etc/redis/6379.conf
protected-mode no     //disable protected mode: serve clients without a password or an explicit bind address
# bind 127.0.0.1      //commented out: allow connections from all hosts
# requirepass tedu.cn   //commented out: no password
cluster-enabled yes     //enable cluster mode
cluster-config-file nodes-6379.conf   //cluster state file location
cluster-node-timeout 5000    //heartbeat timeout (milliseconds)

# Clear any existing data from the Redis database

[root@redis1 ~]# rm -rf /var/lib/redis/6379/*

# Edit the service init script so the stop action no longer passes a password

[root@redis1 ~]# vim +43 /etc/init.d/redis_6379
...
            $CLIEXEC -p $REDISPORT shutdown
...

# Start the service

[root@redis1 ~]# service redis_6379 start
Starting Redis server...

# Check listening ports; the cluster bus runs on port 16379

[root@redis1 ~]# ss -tlnp | grep redis
LISTEN     0      128          *:6379                     *:*                   users:(("redis-server",pid=1867,fd=7))
LISTEN     0      128          *:16379                    *:*                   users:(("redis-server",pid=1867,fd=10))
LISTEN     0      128         :::6379                    :::*                   users:(("redis-server",pid=1867,fd=6))
LISTEN     0      128         :::16379                   :::*                   users:(("redis-server",pid=1867,fd=9))

② From redis1, enable cluster mode on redis2, redis3, redis4, redis5, and redis6

# Set up passwordless SSH from redis1

[root@redis1 ~]# ssh-keygen -f /root/.ssh/id_rsa -N ''
[root@redis1 ~]# for i in {12..16}
> do
> ssh-copy-id 192.168.2.$i
> done

# Copy the compiled Redis installation directory from redis1 to the other nodes (/usr/local/redis/)

[root@redis1 ~]# for i in {12..16}
> do
> scp -r /usr/local/redis/ 192.168.2.$i:/usr/local/
> done

# Add the Redis command directory to the PATH environment variable (/usr/local/redis/bin)

[root@redis1 ~]# for i in {12..16}
> do
> ssh 192.168.2.$i "echo 'export PATH=$PATH:/usr/local/redis/bin' >> /etc/bashrc"
> done

Note: afterwards, run source /etc/bashrc on redis2, redis3, redis4, redis5, and redis6
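A quoting subtlety in the loop above: the single quotes are nested inside double quotes, so $PATH is expanded on redis1 before ssh runs, and redis1's PATH value is what lands in each remote /etc/bashrc. That works in this lab, but if you want the remote shell to keep a literal $PATH (escape it as \$PATH inside double quotes), the difference is easy to demonstrate locally:

```shell
# Inside double quotes, nested single quotes do NOT prevent expansion: $PATH is replaced immediately
expanded="export PATH=$PATH:/usr/local/redis/bin"

# Inside single quotes (or escaped as \$PATH in double quotes) it stays literal
literal='export PATH=$PATH:/usr/local/redis/bin'

echo "$literal"   # the remote login shell would expand $PATH itself, at login time
```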

# Copy the source directory from redis1 to the other nodes (~/redis-4.0.8)

[root@redis1 ~]# for i in {12..16}
> do
> scp -r redis-4.0.8/ 192.168.2.$i:/root/
> done

# Run the server initialization script (install_server.sh) on redis2, redis3, redis4, redis5, and redis6

[root@redis2 ~]# cd redis-4.0.8/
[root@redis2 redis-4.0.8]# ./utils/install_server.sh   //accept the defaults: just press Enter
 
[root@redis3 ~]# cd redis-4.0.8/
[root@redis3 redis-4.0.8]# ./utils/install_server.sh
 
[root@redis4 ~]# cd redis-4.0.8/
[root@redis4 redis-4.0.8]# ./utils/install_server.sh
 
[root@redis5 ~]# cd redis-4.0.8/
[root@redis5 redis-4.0.8]# ./utils/install_server.sh
 
[root@redis6 ~]# cd redis-4.0.8/
[root@redis6 redis-4.0.8]# ./utils/install_server.sh

# Stop the redis service on redis2 through redis6

[root@redis1 ~]# for i in {12..16}
> do
> ssh 192.168.2.$i "service redis_6379 stop"
> done
Stopping ...
Redis stopped
...

# Copy redis1's cluster-enabled configuration file to the other nodes (/etc/redis/6379.conf)

[root@redis1 ~]# for i in {12..16}
> do
> scp -r /etc/redis/6379.conf 192.168.2.$i:/etc/redis/
> done

# Clear the Redis data directory on each node (/var/lib/redis/6379/*)

[root@redis1 ~]# for i in {12..16}
> do
> ssh 192.168.2.$i "rm -rf /var/lib/redis/6379/*"
> done

# Start the redis service on redis2 through redis6

[root@redis1 ~]# for i in {12..16}
> do
> ssh 192.168.2.$i "service redis_6379 start"
> done
Starting Redis server...
...

③ Create the cluster on the management host (run on manager)

  • Create the cluster with the redis-trib.rb script; --replicas sets how many replica (slave) servers each master gets
  • Syntax: redis-trib.rb create --replicas N server-list
[root@manager ~]# redis-trib.rb create --replicas 1 \
> 192.168.2.11:6379 192.168.2.12:6379 192.168.2.13:6379 \
> 192.168.2.14:6379 192.168.2.15:6379 192.168.2.16:6379
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.2.11:6379
192.168.2.12:6379
192.168.2.13:6379
Adding replica 192.168.2.15:6379 to 192.168.2.11:6379
Adding replica 192.168.2.16:6379 to 192.168.2.12:6379
Adding replica 192.168.2.14:6379 to 192.168.2.13:6379
M: a214edd97f82c20c301982b9debdeb001278e415 192.168.2.11:6379
   slots:0-5460 (5461 slots) master
M: f56fcee490e7297f1d23573c42414e66ece2abcf 192.168.2.12:6379
   slots:5461-10922 (5462 slots) master
M: 3fb5339593859f53a77bab2df1b59ca95c8384bf 192.168.2.13:6379
   slots:10923-16383 (5461 slots) master
S: 44835570bde2c47c3820e6f1f069dbf8056b82c1 192.168.2.14:6379
   replicates 3fb5339593859f53a77bab2df1b59ca95c8384bf
S: 1b82e1c945bd46249d76b28436a453a8b9e57cc4 192.168.2.15:6379
   replicates a214edd97f82c20c301982b9debdeb001278e415
S: 6e0df6a7261cb6120b12f9fe11a434fc138eab77 192.168.2.16:6379
   replicates f56fcee490e7297f1d23573c42414e66ece2abcf
Can I set the above configuration? (type 'yes' to accept):yes    //type yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 192.168.2.11:6379)
M: a214edd97f82c20c301982b9debdeb001278e415 192.168.2.11:6379
   slots:0-5460 (5461 slots) master        //hash slots 0-5460 (5461 slots)
   1 additional replica(s)
S: 44835570bde2c47c3820e6f1f069dbf8056b82c1 192.168.2.14:6379
   slots: (0 slots) slave
   replicates 3fb5339593859f53a77bab2df1b59ca95c8384bf
M: f56fcee490e7297f1d23573c42414e66ece2abcf 192.168.2.12:6379
   slots:5461-10922 (5462 slots) master   //hash slots 5461-10922 (5462 slots)
   1 additional replica(s)
S: 6e0df6a7261cb6120b12f9fe11a434fc138eab77 192.168.2.16:6379
   slots: (0 slots) slave
   replicates f56fcee490e7297f1d23573c42414e66ece2abcf
M: 3fb5339593859f53a77bab2df1b59ca95c8384bf 192.168.2.13:6379
   slots:10923-16383 (5461 slots) master   //hash slots 10923-16383 (5461 slots)
   1 additional replica(s)
S: 1b82e1c945bd46249d76b28436a453a8b9e57cc4 192.168.2.15:6379
   slots: (0 slots) slave
   replicates a214edd97f82c20c301982b9debdeb001278e415
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.   //all 16384 hash slots (0-16383) assigned

Common error: *** ERROR: Invalid configuration for cluster creation.

Troubleshooting:

  • ① Check the firewall and SELinux
  • ② Check whether bind 127.0.0.1 is commented out in /etc/redis/6379.conf
  • ③ Check network connectivity between the nodes

If the management host cannot create the cluster, copy redis-cli to it and test connectivity with PING:

[root@redis1 ~]# scp /usr/local/redis/bin/redis-cli 192.168.2.20:/usr/local/bin/
[root@manager ~]# redis-cli -h 192.168.2.11
192.168.2.11:6379> ping
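Cluster creation needs both port 6379 and the 16379 bus port reachable on every node, so a quick sweep from the manager can narrow the problem down. This is our own helper sketch (the port_open name and the /dev/tcp probe are Bash features, not Redis tooling):

```shell
# Probe a TCP port using Bash's /dev/tcp pseudo-device (no extra tools needed)
port_open() {   # usage: port_open HOST PORT
    timeout 1 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

for i in {11..16}; do
    for p in 6379 16379; do
        if port_open 192.168.2.$i $p; then
            echo "192.168.2.$i:$p open"
        else
            echo "192.168.2.$i:$p CLOSED -- check firewalld/SELinux on that node"
        fi
    done
done
```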

④ View cluster information from the management host

  • Syntax: redis-trib.rb info cluster-node-ip:port
  • Syntax: redis-trib.rb check cluster-node-ip:port
[root@manager ~]# redis-trib.rb info 192.168.2.11:6379
192.168.2.11:6379 (a214edd9...) -> 0 keys | 5461 slots | 1 slaves.
192.168.2.12:6379 (f56fcee4...) -> 0 keys | 5462 slots | 1 slaves.
192.168.2.13:6379 (3fb53395...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.    //3 masters, 0 keys
0.00 keys per slot on average.
 
[root@manager ~]# redis-trib.rb check 192.168.2.11:6379
>>> Performing Cluster Check (using node 192.168.2.11:6379)
M: a214edd97f82c20c301982b9debdeb001278e415 192.168.2.11:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 44835570bde2c47c3820e6f1f069dbf8056b82c1 192.168.2.14:6379
   slots: (0 slots) slave
   replicates 3fb5339593859f53a77bab2df1b59ca95c8384bf
M: f56fcee490e7297f1d23573c42414e66ece2abcf 192.168.2.12:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 6e0df6a7261cb6120b12f9fe11a434fc138eab77 192.168.2.16:6379
   slots: (0 slots) slave
   replicates f56fcee490e7297f1d23573c42414e66ece2abcf
M: 3fb5339593859f53a77bab2df1b59ca95c8384bf 192.168.2.13:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 1b82e1c945bd46249d76b28436a453a8b9e57cc4 192.168.2.15:6379
   slots: (0 slots) slave
   replicates a214edd97f82c20c301982b9debdeb001278e415
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

⑤ View cluster information from a cluster node

# Copy redis-cli from a server into /usr/local/bin on the client

[root@redis1 ~]# scp /usr/local/redis/bin/redis-cli 192.168.2.10:/usr/local/bin

# Log in to a server from the client and check the cluster status

[root@clinet ~]# redis-cli -h 192.168.2.11
192.168.2.11:6379> PING
PONG
192.168.2.11:6379> CLUSTER INFO    //check cluster status
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:3726
cluster_stats_messages_pong_sent:4086
cluster_stats_messages_sent:7812
cluster_stats_messages_ping_received:4081
cluster_stats_messages_pong_received:3726
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:7812
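For monitoring, the one field that matters most in the output above is cluster_state. A small sketch of extracting it (cluster_state here is our own helper function, not a Redis tool); note that redis-cli output lines are CRLF-terminated, hence the tr:

```shell
# Extract cluster_state from `redis-cli CLUSTER INFO` output (reads stdin)
cluster_state() {
    tr -d '\r' | awk -F: '/^cluster_state/ {print $2}'
}

# Typical use on a live node (assumes redis-cli is in PATH):
#   redis-cli -h 192.168.2.11 CLUSTER INFO | cluster_state
# A cron-style health check would compare the result against "ok".

printf 'cluster_state:ok\r\ncluster_slots_assigned:16384\r\n' | cluster_state   # prints: ok
```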

Step 3: Access the cluster

A client can connect to any server in the cluster. When data is stored, the CRC16 hash decides which node owns the key, and the client is automatically redirected to that server.

  • Syntax: redis-cli -c -h node-ip -p port
[root@clinet ~]# redis-cli -c -h 192.168.2.11    //the -c flag enables cluster mode
192.168.2.11:6379> SET name tom
-> Redirected to slot [5798] located at 192.168.2.12:6379    //hashed to master redis2
OK
192.168.2.12:6379> SET gender boy
-> Redirected to slot [15355] located at 192.168.2.13:6379   //hashed to master redis3
OK
192.168.2.13:6379> SET email tom@tedu.cn
-> Redirected to slot [10780] located at 192.168.2.12:6379    //hashed to master redis2
OK
192.168.2.12:6379> SET phone 13511223344  //no redirect: this slot also lives on redis2
OK
192.168.2.12:6379> SET address Beijing     //hashed to master redis1
-> Redirected to slot [3680] located at 192.168.2.11:6379
OK

Note: without the -c cluster option, the command fails with an error instead of following the redirect

192.168.2.11:6379> SET name tom
(error) MOVED 5798 192.168.2.12:6379
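The -c flag simply makes redis-cli follow this redirect for you: it parses the slot and the host:port out of the MOVED reply and reissues the command there. The parsing itself is plain shell string handling (sketch with a canned reply; no server needed):

```shell
# What `redis-cli -c` does with a MOVED reply, in shell terms
reply='MOVED 5798 192.168.2.12:6379'

slot=$(echo "$reply" | awk '{print $2}')
target=${reply##* }          # last field: host:port
host=${target%%:*}
port=${target##*:}

echo "retry on $host port $port (slot $slot)"
# a cluster-aware client would now resend: redis-cli -h $host -p $port SET name tom
```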

3. Adding New Nodes (a master and a slave server)

Lab network topology:

  • Redis7: IP 192.168.2.17; compile and install Redis, enable cluster mode, bind open to all hosts
  • Redis8: IP 192.168.2.18; compile and install Redis, enable cluster mode, bind open to all hosts

Step 1: Prepare two initialized Redis servers with cluster mode enabled

① Set up passwordless SSH from redis1

[root@redis1 ~]# ssh-keygen -f /root/.ssh/id_rsa -N ''
[root@redis1 ~]# for i in {17,18}; do ssh-copy-id 192.168.2.$i; done

② Copy the compiled Redis installation directory from redis1 to the new nodes (/usr/local/redis/)

[root@redis1 ~]# for i in {17,18}; do scp -r /usr/local/redis/ 192.168.2.$i:/usr/local/; done

③ Add the Redis command directory to the PATH environment variable (/usr/local/redis/bin)

[root@redis1 ~]# for i in {17,18}; do ssh 192.168.2.$i "echo 'export PATH=$PATH:/usr/local/redis/bin' >> /etc/bashrc"; done

Note: afterwards, run source /etc/bashrc on redis7 and redis8

④ Copy the source directory from redis1 to the new nodes (~/redis-4.0.8)

[root@redis1 ~]# for i in {17,18}; do scp -r redis-4.0.8/ 192.168.2.$i:/root/; done

⑤ Run the server initialization script (install_server.sh) on redis7 and redis8

[root@redis7 ~]# cd redis-4.0.8/
[root@redis7 redis-4.0.8]# ./utils/install_server.sh     //accept the defaults: just press Enter
[root@redis8 ~]# cd redis-4.0.8/
[root@redis8 redis-4.0.8]# ./utils/install_server.sh

⑥ Stop the redis service on redis7 and redis8

[root@redis1 ~]# for i in {17..18}; do ssh 192.168.2.$i "service redis_6379 stop"; done

⑦ Copy redis1's configuration file to redis7 and redis8 (/etc/redis/6379.conf)

[root@redis1 ~]# for i in {17..18}; do scp -r /etc/redis/6379.conf 192.168.2.$i:/etc/redis/; done

⑧ Clear the Redis data directory on redis7 and redis8 (/var/lib/redis/6379/*)

[root@redis1 ~]# for i in {17,18}; do ssh 192.168.2.$i "rm -rf /var/lib/redis/6379/*"; done

⑨ Start the redis service on redis7 and redis8

[root@redis1 ~]# for i in {17..18}; do ssh 192.168.2.$i "service redis_6379 start"; done

Step 2: Add master redis7 (192.168.2.17) to the cluster

① On the management host, add a master-role node to the cluster

  • Syntax: redis-trib.rb add-node new-master-ip:port existing-cluster-node:port
[root@manager ~]# redis-trib.rb add-node 192.168.2.17:6379 192.168.2.11:6379
>>> Adding node 192.168.2.17:6379 to cluster 192.168.2.11:6379
>>> Performing Cluster Check (using node 192.168.2.11:6379)
M: b7924a95154fa41ecc2ae2e94bc34fe68ef1f518 192.168.2.11:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 106d710f372d2962ea0fa084a4e60db4f1ad8abe 192.168.2.12:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 4cf754de67a6e5cb145e6f15bf6305ba341c1ed9 192.168.2.14:6379
   slots: (0 slots) slave
   replicates d777a09222ca342ea45b4f2e07c8ffcab895cfd9
M: d777a09222ca342ea45b4f2e07c8ffcab895cfd9 192.168.2.13:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 5cb27996eb18adefe5edc5de5f057e8cbe47cb87 192.168.2.15:6379
   slots: (0 slots) slave
   replicates b7924a95154fa41ecc2ae2e94bc34fe68ef1f518
S: 64429734fa6bf0314b88a85954e0da10d19c1be7 192.168.2.16:6379
   slots: (0 slots) slave
   replicates 106d710f372d2962ea0fa084a4e60db4f1ad8abe
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.2.17:6379 to make it join the cluster.
[OK] New node added correctly.

# View cluster information from the management host

[root@manager ~]# redis-trib.rb info 192.168.2.11:6379
192.168.2.11:6379 (b7924a95...) -> 0 keys | 5461 slots | 1 slaves.
192.168.2.17:6379 (c92fbd8d...) -> 0 keys | 0 slots | 0 slaves.    //no hash slots or slaves assigned yet
192.168.2.12:6379 (106d710f...) -> 0 keys | 5462 slots | 1 slaves.
192.168.2.13:6379 (d777a092...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.

# Check the cluster from the management host

[root@manager ~]# redis-trib.rb check 192.168.2.11:6379
>>> Performing Cluster Check (using node 192.168.2.11:6379)
M: b7924a95154fa41ecc2ae2e94bc34fe68ef1f518 192.168.2.11:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: c92fbd8d3c41f3bccf4313019181c049e914b70f 192.168.2.17:6379
   slots: (0 slots) master    //no hash slots or slaves assigned yet
   0 additional replica(s)
M: 106d710f372d2962ea0fa084a4e60db4f1ad8abe 192.168.2.12:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 4cf754de67a6e5cb145e6f15bf6305ba341c1ed9 192.168.2.14:6379
   slots: (0 slots) slave
   replicates d777a09222ca342ea45b4f2e07c8ffcab895cfd9
M: d777a09222ca342ea45b4f2e07c8ffcab895cfd9 192.168.2.13:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 5cb27996eb18adefe5edc5de5f057e8cbe47cb87 192.168.2.15:6379
   slots: (0 slots) slave
   replicates b7924a95154fa41ecc2ae2e94bc34fe68ef1f518
S: 64429734fa6bf0314b88a85954e0da10d19c1be7 192.168.2.16:6379
   slots: (0 slots) slave
   replicates 106d710f372d2962ea0fa084a4e60db4f1ad8abe
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

② On the management host, allocate hash slots to the new master

[root@manager ~]# redis-trib.rb reshard 192.168.2.11:6379
>>> Performing Cluster Check (using node 192.168.2.11:6379)
M: b7924a95154fa41ecc2ae2e94bc34fe68ef1f518 192.168.2.11:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: c92fbd8d3c41f3bccf4313019181c049e914b70f 192.168.2.17:6379
   slots: (0 slots) master
   0 additional replica(s)
M: 106d710f372d2962ea0fa084a4e60db4f1ad8abe 192.168.2.12:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 4cf754de67a6e5cb145e6f15bf6305ba341c1ed9 192.168.2.14:6379
   slots: (0 slots) slave
   replicates d777a09222ca342ea45b4f2e07c8ffcab895cfd9
M: d777a09222ca342ea45b4f2e07c8ffcab895cfd9 192.168.2.13:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 5cb27996eb18adefe5edc5de5f057e8cbe47cb87 192.168.2.15:6379
   slots: (0 slots) slave
   replicates b7924a95154fa41ecc2ae2e94bc34fe68ef1f518
S: 64429734fa6bf0314b88a85954e0da10d19c1be7 192.168.2.16:6379
   slots: (0 slots) slave
   replicates 106d710f372d2962ea0fa084a4e60db4f1ad8abe
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096     //how many slots to move
What is the receiving node ID? c92fbd8d3c41f3bccf4313019181c049e914b70f   //ID of the newly added master
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all      //which nodes to take slots from (all = every master)
Ready to move 4096 slots.
…
Do you want to proceed with the proposed reshard plan (yes/no)? yes   //confirm the reshard plan

# View cluster information from the management host

[root@manager ~]# redis-trib.rb info 192.168.2.11:6379
192.168.2.11:6379 (b7924a95...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.17:6379 (c92fbd8d...) -> 0 keys | 4096 slots | 0 slaves.   //slots assigned, but no slave yet
192.168.2.12:6379 (106d710f...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.13:6379 (d777a092...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.

Step 3: Add slave redis8 (192.168.2.18) to the cluster

① On the management host, add a slave-role node to the cluster. If no master ID is given, the new node becomes a replica of the master that has the fewest replicas.

  • Syntax: redis-trib.rb add-node --slave new-slave-ip:port existing-cluster-node:port
[root@manager ~]# redis-trib.rb add-node --slave 192.168.2.18:6379 192.168.2.11:6379
>>> Adding node 192.168.2.18:6379 to cluster 192.168.2.11:6379
>>> Performing Cluster Check (using node 192.168.2.11:6379)
M: b7924a95154fa41ecc2ae2e94bc34fe68ef1f518 192.168.2.11:6379
   slots:1365-5460 (4096 slots) master
   1 additional replica(s)
M: c92fbd8d3c41f3bccf4313019181c049e914b70f 192.168.2.17:6379
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   0 additional replica(s)
M: 106d710f372d2962ea0fa084a4e60db4f1ad8abe 192.168.2.12:6379
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
S: 4cf754de67a6e5cb145e6f15bf6305ba341c1ed9 192.168.2.14:6379
   slots: (0 slots) slave
   replicates d777a09222ca342ea45b4f2e07c8ffcab895cfd9
M: d777a09222ca342ea45b4f2e07c8ffcab895cfd9 192.168.2.13:6379
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
S: 5cb27996eb18adefe5edc5de5f057e8cbe47cb87 192.168.2.15:6379
   slots: (0 slots) slave
   replicates b7924a95154fa41ecc2ae2e94bc34fe68ef1f518
S: 64429734fa6bf0314b88a85954e0da10d19c1be7 192.168.2.16:6379
   slots: (0 slots) slave
   replicates 106d710f372d2962ea0fa084a4e60db4f1ad8abe
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Automatically selected master 192.168.2.17:6379
>>> Send CLUSTER MEET to node 192.168.2.18:6379 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 192.168.2.17:6379.   //becomes a replica (slave) of redis7
[OK] New node added correctly.

# View cluster information from the management host

[root@manager ~]# redis-trib.rb info 192.168.2.11:6379
192.168.2.11:6379 (b7924a95...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.17:6379 (c92fbd8d...) -> 0 keys | 4096 slots | 1 slaves.  //now has one slave
192.168.2.12:6379 (106d710f...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.13:6379 (d777a092...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.

# Check the cluster from the management host

[root@manager ~]# redis-trib.rb check 192.168.2.11:6379
>>> Performing Cluster Check (using node 192.168.2.11:6379)
M: b7924a95154fa41ecc2ae2e94bc34fe68ef1f518 192.168.2.11:6379
   slots:1365-5460 (4096 slots) master
   1 additional replica(s)
M: c92fbd8d3c41f3bccf4313019181c049e914b70f 192.168.2.17:6379
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   1 additional replica(s)
S: c4b464da4281b41761d6dc3d68f243625aff47d5 192.168.2.18:6379
   slots: (0 slots) slave
   replicates c92fbd8d3c41f3bccf4313019181c049e914b70f
M: 106d710f372d2962ea0fa084a4e60db4f1ad8abe 192.168.2.12:6379
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
S: 4cf754de67a6e5cb145e6f15bf6305ba341c1ed9 192.168.2.14:6379
   slots: (0 slots) slave
   replicates d777a09222ca342ea45b4f2e07c8ffcab895cfd9
M: d777a09222ca342ea45b4f2e07c8ffcab895cfd9 192.168.2.13:6379
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
S: 5cb27996eb18adefe5edc5de5f057e8cbe47cb87 192.168.2.15:6379
   slots: (0 slots) slave
   replicates b7924a95154fa41ecc2ae2e94bc34fe68ef1f518
S: 64429734fa6bf0314b88a85954e0da10d19c1be7 192.168.2.16:6379
   slots: (0 slots) slave
   replicates 106d710f372d2962ea0fa084a4e60db4f1ad8abe
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

4. Removing Servers

Step 1: Remove a slave server

① On the management host, remove slave redis8 (192.168.2.18) directly

  • Syntax: redis-trib.rb del-node node-ip:port node-ID
[root@manager ~]# redis-trib.rb del-node 192.168.2.18:6379 c4b464da4281b41761d6dc3d68f243625aff47d5
>>> Removing node c4b464da4281b41761d6dc3d68f243625aff47d5 from cluster 192.168.2.18:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

# Check the cluster from the management host (the redis8 node has been removed)

[root@manager ~]# redis-trib.rb check 192.168.2.11:6379
>>> Performing Cluster Check (using node 192.168.2.11:6379)
M: b7924a95154fa41ecc2ae2e94bc34fe68ef1f518 192.168.2.11:6379
   slots:1365-5460 (4096 slots) master
   1 additional replica(s)
S: 64429734fa6bf0314b88a85954e0da10d19c1be7 192.168.2.16:6379
   slots: (0 slots) slave
   replicates 106d710f372d2962ea0fa084a4e60db4f1ad8abe
M: d777a09222ca342ea45b4f2e07c8ffcab895cfd9 192.168.2.13:6379
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
M: 106d710f372d2962ea0fa084a4e60db4f1ad8abe 192.168.2.12:6379
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
M: c92fbd8d3c41f3bccf4313019181c049e914b70f 192.168.2.17:6379
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   0 additional replica(s)
S: 4cf754de67a6e5cb145e6f15bf6305ba341c1ed9 192.168.2.14:6379
   slots: (0 slots) slave
   replicates d777a09222ca342ea45b4f2e07c8ffcab895cfd9
S: 5cb27996eb18adefe5edc5de5f057e8cbe47cb87 192.168.2.15:6379
   slots: (0 slots) slave
   replicates b7924a95154fa41ecc2ae2e94bc34fe68ef1f518
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

# View cluster information from the management host

[root@manager ~]# redis-trib.rb info 192.168.2.11:6379
192.168.2.11:6379 (b7924a95...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.13:6379 (d777a092...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.12:6379 (106d710f...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.17:6379 (c92fbd8d...) -> 0 keys | 4096 slots | 0 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.

Step 2: Remove a master server

① On the management host, move the master's hash slots away first

[root@manager ~]# redis-trib.rb reshard 192.168.2.11:6379
>>> Performing Cluster Check (using node 192.168.2.11:6379)
M: b7924a95154fa41ecc2ae2e94bc34fe68ef1f518 192.168.2.11:6379
   slots:1365-5460 (4096 slots) master
   1 additional replica(s)
S: 64429734fa6bf0314b88a85954e0da10d19c1be7 192.168.2.16:6379
   slots: (0 slots) slave
   replicates 106d710f372d2962ea0fa084a4e60db4f1ad8abe
M: d777a09222ca342ea45b4f2e07c8ffcab895cfd9 192.168.2.13:6379
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
M: 106d710f372d2962ea0fa084a4e60db4f1ad8abe 192.168.2.12:6379
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
M: c92fbd8d3c41f3bccf4313019181c049e914b70f 192.168.2.17:6379
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   0 additional replica(s)
S: 4cf754de67a6e5cb145e6f15bf6305ba341c1ed9 192.168.2.14:6379
   slots: (0 slots) slave
   replicates d777a09222ca342ea45b4f2e07c8ffcab895cfd9
S: 5cb27996eb18adefe5edc5de5f057e8cbe47cb87 192.168.2.15:6379
   slots: (0 slots) slave
   replicates b7924a95154fa41ecc2ae2e94bc34fe68ef1f518
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096     //all 4096 slots held by the departing master
What is the receiving node ID? b7924a95154fa41ecc2ae2e94bc34fe68ef1f518  //ID of the node receiving the slots
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:c92fbd8d3c41f3bccf4313019181c049e914b70f    //ID of the node giving up its slots (the master to remove)
Source node #2:done   //type done after entering all source node IDs
…
Do you want to proceed with the proposed reshard plan (yes/no)? yes

# View cluster information

[root@manager ~]# redis-trib.rb info 192.168.2.11:6379
192.168.2.11:6379 (b7924a95...) -> 0 keys | 8192 slots | 1 slaves.
192.168.2.13:6379 (d777a092...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.12:6379 (106d710f...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.17:6379 (c92fbd8d...) -> 0 keys | 0 slots | 0 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.

② Remove the master node

  • Syntax: redis-trib.rb del-node node-ip:port node-ID
[root@manager ~]# redis-trib.rb del-node 192.168.2.11:6379 c92fbd8d3c41f3bccf4313019181c049e914b70f
>>> Removing node c92fbd8d3c41f3bccf4313019181c049e914b70f from cluster 192.168.2.11:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

# View cluster information

[root@manager ~]# redis-trib.rb info 192.168.2.11:6379
192.168.2.11:6379 (b7924a95...) -> 0 keys | 8192 slots | 1 slaves.
192.168.2.13:6379 (d777a092...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.12:6379 (106d710f...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.

③ Rebalance the hash slots across all remaining nodes

  • Syntax: redis-trib.rb rebalance cluster-node:port
[root@manager ~]# redis-trib.rb rebalance 192.168.2.11:6379
>>> Performing Cluster Check (using node 192.168.2.11:6379)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Rebalancing across 3 nodes. Total weight = 3
Moving 1366 slots from 192.168.2.11:6379 to 192.168.2.13:6379
Moving 1365 slots from 192.168.2.11:6379 to 192.168.2.12:6379

# View cluster information

[root@manager ~]# redis-trib.rb info 192.168.2.11:6379
192.168.2.11:6379 (b7924a95...) -> 0 keys | 5461 slots | 1 slaves.
192.168.2.13:6379 (d777a092...) -> 0 keys | 5462 slots | 1 slaves.
192.168.2.12:6379 (106d710f...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
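The post-rebalance counts are just 16384 split three ways: integer division gives two masters 5461 slots, and the remainder pushes the third to 5462.

```shell
echo $(( 16384 / 3 ))            # 5461 -- two of the masters get this many slots
echo $(( 16384 - 2 * 5461 ))     # 5462 -- the leftover slots go to the third master
```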

Extension: a custom script that deploys a Redis server and enables cluster mode.

#!/bin/bash
# Description: deploy a Redis server from source and enable its cluster configuration

# Check the build environment
rpm -q gcc || yum -y install gcc

# Unpack the tarball (note: prepare the package in advance)
[ -e "redis-4.0.8" ] && echo "directory exists" || tar -zxvf redis-4.0.8.tar.gz

# Compile and install
cd redis-4.0.8/
make install
# Initialize the server
cd utils/
yes '' | ./install_server.sh   # feed Enter to every prompt, accepting the defaults
sleep 5

# Stop the redis service
/etc/init.d/redis_6379 stop

# Enable the cluster configuration
# NOTE: the sed line numbers below match the default redis-4.0.8 config file; verify them before reusing
sed -i '89s/yes/no/' /etc/redis/6379.conf   # protected-mode no
sed -i '70s/^/# /' /etc/redis/6379.conf   # comment out: bind 127.0.0.1
sed -i '815s/# //' /etc/redis/6379.conf   # cluster-enabled yes
sed -i '823s/# //' /etc/redis/6379.conf   # cluster-config-file nodes-6379.conf
sed -i '829s/# //' /etc/redis/6379.conf   # cluster-node-timeout 15000
sed -i '829s/15000/5000/' /etc/redis/6379.conf  # cluster-node-timeout 5000

# Clear the data directory
rm -rf /var/lib/redis/6379/*

# Start the redis service
service redis_6379 start

Summary:

This chapter contains the study notes for [Stage 4] NoSQL - DAY 2. It gives a first look at the Redis cluster overview and at deploying a Redis cluster (configuring the manager host, creating the cluster, accessing the cluster, adding nodes, removing nodes).


Tip: two heads are better than one. If you don't understand this chapter or need the related notes or videos, don't be shy about it: message Xiao'an or ask someone else, and spend the time until you truly understand.


Original article: https://blog.csdn.net/AnJern/article/details/142568008
