Note: this is an original article, carefully written, focused on practical content, and kept concise and clear.
If you are not yet familiar with Redis basics, you can first read this article: https://blog.csdn.net/m0_63756214/article/details/154868788?spm=1001.2014.3001.5501
Redis Cluster is the official, native distributed clustering solution for Redis. Its core purpose is to remove the performance ceiling and availability risks of a single Redis node. Building on Redis's in-memory storage, it shards the data and distributes it across multiple instances that communicate and coordinate with one another. This architecture lets the cluster's total memory capacity grow with the number of nodes, and it spreads the load so overall read/write performance improves substantially, taking full advantage of distributed deployment.

A Redis Cluster contains 16384 hash slots (numbered 0-16383). Each Redis instance is responsible for a subset of the slots, and all cluster state is kept up to date through data exchanged between the nodes. A single hash slot typically holds many keys and values.
Storing data
Redis Cluster distributes data by sharding: the cluster has 16384 built-in hash slots. When a key-value pair is written to the cluster, Redis first runs the key through the CRC16 algorithm, then takes the result modulo 16384. Every key therefore maps to exactly one hash slot numbered 0-16383 (just as any number modulo 10 leaves a remainder between 0 and 9), and Redis stores the key on whichever node is responsible for that slot.
Querying data
When a client connected to any of the nodes asks for a key, the same calculation runs again, and the request is redirected internally to the node that actually holds the key.
Example:
To store the key-value pair (name1: zhangsan) in the cluster, Redis first computes CRC16 over the key; say the result is 678698. It then takes the remainder modulo 16384 (the cluster formula is CRC16(key) % 16384), which gives slot 6954. The key is stored on whichever Redis node is responsible for that slot.
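You can check the arithmetic yourself: every node answers the built-in CLUSTER KEYSLOT command, which implements exactly CRC16(key) % 16384 (the key names below are arbitrary examples). As a bonus, only the part of a key inside {} (a "hash tag") is hashed, which lets you force related keys into the same slot:
#ask any node which slot a key maps to
src/redis-cli CLUSTER KEYSLOT name1
#hash tags: only the substring inside {} is hashed, so these two share a slot
src/redis-cli CLUSTER KEYSLOT "{user1000}.following"
src/redis-cli CLUSTER KEYSLOT "{user1000}.followers"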

In the figure above, each Redis node stores part of the data; if one of them went down, the whole cluster would stop serving. To keep data highly available, Redis Cluster therefore adds a master-replica layer: each master has one or more replicas. The master serves reads and writes while its replicas pull copies of its data; when a master dies, one of its replicas is elected to take over as master, so the cluster as a whole survives, as shown in the figure below.

Failover mechanism
All masters in the cluster take part in the election: if more than half of the masters cannot reach a node for longer than cluster-node-timeout, that node is declared failed and failover is triggered automatically. #the failed master's replica is promoted to master
If any master fails together with all of its replicas, the cluster becomes unavailable because some hash slots can no longer be served; a master that dies with no replica leaves all of its slots with no owner at all, so the cluster's slot set is no longer complete. Redis Cluster requires all 16384 slots to have a live owner; otherwise it rejects every read and write to avoid serving incomplete data.
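You can observe this rule directly: cluster_state in CLUSTER INFO flips from ok to fail as soon as any slot loses its owner. (Setting cluster-require-full-coverage no in redis.conf relaxes the rule so the surviving slots keep serving, at the price of partial data access.)
#cluster_state is "ok" only while every one of the 16384 slots has a live owner
src/redis-cli CLUSTER INFO | grep -E 'cluster_state|cluster_slots'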
Resources being limited, this article demonstrates a three-master, three-replica setup, followed by an experiment that adds a node. If you want to build a bigger cluster, the principle is exactly the same.
Use whatever operating system you like; I have mixed CentOS and Ubuntu with no problems at all.
| Hostname | IP address | Service | Role |
|---|---|---|---|
| redis-master1 | 192.168.136.10 | redis 6.2.7 | cluster master1 |
| redis-slave1 | 192.168.136.20 | redis 6.2.7 | replica |
| redis-master2 | 192.168.136.30 | redis 6.2.7 | cluster master2 |
| redis-slave2 | 192.168.136.40 | redis 6.2.7 | replica |
| redis-master3 | 192.168.136.134 | redis 6.2.7 | cluster master3 |
| redis-slave3 | 192.168.136.135 | redis 6.2.7 | replica |
| redis-master4 | 192.168.136.138 | redis 6.2.7 | added later, cluster master4 |
| redis-slave4 | 192.168.136.139 | redis 6.2.7 | added later, replica of master4 |
Note: before configuring Redis, disable the firewall on every machine (or open the required ports) and synchronize the clocks.
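For CentOS, a minimal sketch looks like this (Ubuntu users would use ufw instead); note that besides the client port 6379, Redis Cluster needs the cluster bus port, which is always the client port + 10000:
systemctl disable --now firewalld #simplest option: turn the firewall off entirely
#or open only what the cluster needs: 6379 (clients) and 16379 (cluster bus)
firewall-cmd --permanent --add-port=6379/tcp --add-port=16379/tcp && firewall-cmd --reload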
Redis official site: https://redis.io/downloads/
There are six machines in total; configure every one of them exactly as follows.
[root@redis-master1 ~]# wget https://download.redis.io/releases/redis-6.2.7.tar.gz #download directly from this link; the version used here is 6.2.7
[root@redis-master1 ~]# ls
redis-6.2.7.tar.gz
[root@redis-master1 ~]# mkdir -p /data && tar xzf redis-6.2.7.tar.gz -C /data/ #extract the source to /data
[root@redis-master1 ~]# mv /data/redis-6.2.7/ /data/redis
[root@redis-master1 ~]# cd /data/redis
[root@redis-master1 redis]# yum -y install gcc make #on Ubuntu: apt -y install gcc make
[root@redis-master1 redis]# make #build redis from source
[root@redis-master1 redis]# mkdir data #create the data directory
[root@redis-master1 redis]# vim redis.conf #edit the config file; some settings are already present, others need uncommenting. Do not copy mine blindly
bind 0.0.0.0 #accept connections from any host (you can also restrict to a subnet)
daemonize yes #run redis as a background daemon
dir /data/redis/data #data directory
appendonly yes #enable AOF persistence
appendfilename "appendonly.aof" #AOF file name
appendfsync everysec #buffer writes and fsync to disk once per second
cluster-enabled yes #uncomment: enable cluster mode
cluster-config-file nodes-6379.conf #uncomment: cluster state file, maintained automatically by redis; never edit it by hand
cluster-node-timeout 5000 #in milliseconds. If a master is unreachable for longer than this, it is considered failed and a replica may be promoted
#during failover every replica asks to be promoted, but a replica that has been disconnected from its master for too long holds stale data and must not win. This factor caps the acceptable disconnection time: cluster-node-timeout × cluster-replica-validity-factor (here 5000 ms × 10 = 50 s)
cluster-replica-validity-factor 10
cluster-migration-barrier 1 #minimum number of replicas a master must keep before one can migrate to a replica-less master
cluster-require-full-coverage yes #serve requests only while all 16384 slots are covered
[root@redis-master1 redis]# nohup src/redis-server ./redis.conf & #start redis
[root@redis-master1 redis]# src/redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> quit
All six machines need this identical configuration. To save effort, copy the file with scp redis.conf <target-IP>:/data/redis/ instead of editing it six times; every other step still has to be repeated on each machine, and every instance must start and answer PING with PONG.
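For example, a small loop run from redis-master1 pushes the file to the other five nodes in one go (IP list as in the table above):
#copy the same redis.conf to every other node
for ip in 192.168.136.20 192.168.136.30 192.168.136.40 192.168.136.134 192.168.136.135; do
  scp /data/redis/redis.conf root@$ip:/data/redis/
done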
With all six instances running, they know nothing about each other yet; now we link them into a cluster and set up the master/replica relationships.
#run this on any one machine in the cluster
#the first three addresses you list become the masters, the last three become the replicas
#the 1 means every master gets exactly one replica
[root@redis-master1 redis]# src/redis-cli --cluster create --cluster-replicas 1 192.168.136.10:6379 192.168.136.30:6379 192.168.136.134:6379 192.168.136.20:6379 192.168.136.40:6379 192.168.136.135:6379
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.136.40:6379 to 192.168.136.10:6379
Adding replica 192.168.136.135:6379 to 192.168.136.30:6379
Adding replica 192.168.136.20:6379 to 192.168.136.134:6379
M: 34db43cab9a8c845fa42f168b66e350447485acb 192.168.136.10:6379
slots:[0-5460] (5461 slots) master
M: 1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 192.168.136.30:6379
slots:[5461-10922] (5462 slots) master
M: 674e0ff751aabfd3bfd8860672da5c7c38a4837e 192.168.136.134:6379
slots:[10923-16383] (5461 slots) master
S: a2f2e5e557cae3987fc3d63893b30122c7298773 192.168.136.20:6379
replicates 674e0ff751aabfd3bfd8860672da5c7c38a4837e
S: a4b4e54a47dda7ff25de6cae53d7533ae2a080d6 192.168.136.40:6379
replicates 34db43cab9a8c845fa42f168b66e350447485acb
S: a2f6e420ea43cbc3bb3f5da412c9710326595f73 192.168.136.135:6379
replicates 1892005e984fac2d8fcffaa0e8ec5cbc21f079e0
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 192.168.136.10:6379)
M: 34db43cab9a8c845fa42f168b66e350447485acb 192.168.136.10:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 192.168.136.30:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: a2f6e420ea43cbc3bb3f5da412c9710326595f73 192.168.136.135:6379
slots: (0 slots) slave
replicates 1892005e984fac2d8fcffaa0e8ec5cbc21f079e0
S: a2f2e5e557cae3987fc3d63893b30122c7298773 192.168.136.20:6379
slots: (0 slots) slave
replicates 674e0ff751aabfd3bfd8860672da5c7c38a4837e
S: a4b4e54a47dda7ff25de6cae53d7533ae2a080d6 192.168.136.40:6379
slots: (0 slots) slave
replicates 34db43cab9a8c845fa42f168b66e350447485acb
M: 674e0ff751aabfd3bfd8860672da5c7c38a4837e 192.168.136.134:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

[root@redis-master1 redis]# src/redis-cli -h 192.168.136.10 -c #-c connects in cluster mode
192.168.136.10:6379> ping
PONG
192.168.136.10:6379> cluster nodes #clearly shows the cluster's master/replica layout
34db43cab9a8c845fa42f168b66e350447485acb 192.168.136.10:6379@16379 myself,master - 0 1763450667000 1 connected 0-5460
1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 192.168.136.30:6379@16379 master - 0 1763450667000 2 connected 5461-10922
a2f6e420ea43cbc3bb3f5da412c9710326595f73 192.168.136.135:6379@16379 slave 1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 0 1763450666549 2 connected
a2f2e5e557cae3987fc3d63893b30122c7298773 192.168.136.20:6379@16379 slave 674e0ff751aabfd3bfd8860672da5c7c38a4837e 0 1763450666549 3 connected
a4b4e54a47dda7ff25de6cae53d7533ae2a080d6 192.168.136.40:6379@16379 slave 34db43cab9a8c845fa42f168b66e350447485acb 0 1763450667057 1 connected
674e0ff751aabfd3bfd8860672da5c7c38a4837e 192.168.136.134:6379@16379 master - 0 1763450667158 3 connected 10923-16383
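In addition to cluster nodes, the redis-cli --cluster check subcommand validates slot coverage and replica counts from any single node; it is worth running after every topology change:
#verify slot coverage and replica assignment from any node
src/redis-cli --cluster check 192.168.136.10:6379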

#test against any master node
[root@redis-master1 redis]# src/redis-cli -h 192.168.136.10 -c
192.168.136.10:6379> set name lisa
-> Redirected to slot [5798] located at 192.168.136.30:6379 #the key was automatically stored on the .30 master
OK
192.168.136.20:6379> get name
-> Redirected to slot [5798] located at 192.168.136.30:6379 #the read is also served from the .30 master
"lisa"
192.168.136.30:6379> quit
#query from any replica node
[root@redis-slave1 redis]# src/redis-cli -h 192.168.136.20 -c
192.168.136.20:6379> ping
PONG
192.168.136.20:6379> get name #the same value is reachable from a replica
-> Redirected to slot [5798] located at 192.168.136.30:6379
"lisa"
192.168.136.30:6379> quit
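Before scaling, a quick way to convince yourself that keys really spread across all three masters is to write a few throwaway keys (names are arbitrary) and watch where each one is redirected:
#each SET is redirected to whichever master owns the key's slot
for i in 1 2 3 4 5; do src/redis-cli -c -h 192.168.136.10 set key$i value$i; done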
To grow the cluster, the order is:
add the new master ----> assign slots to the new master ----> add its replica
First prepare two more machines configured exactly like the others, with the firewall disabled (or ports opened) and clocks synchronized.
#both must answer PING with PONG
root@redis-master4:/data/redis# src/redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> quit
root@redis-slave4:/data/redis# src/redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> quit
#add master4
root@redis-master4:/data/redis# src/redis-cli --cluster add-node 192.168.136.138:6379 192.168.136.10:6379
>>> Adding node 192.168.136.138:6379 to cluster 192.168.136.10:6379
>>> Performing Cluster Check (using node 192.168.136.10:6379)
......
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.136.138:6379 to make it join the cluster.
[OK] New node added correctly.
#check the cluster state from any machine in the cluster
[root@redis-master1 redis]# src/redis-cli -h 192.168.136.10 -c
192.168.136.10:6379> cluster nodes
34db43cab9a8c845fa42f168b66e350447485acb 192.168.136.10:6379@16379 myself,master - 0 1763455477000 1 connected 0-5460
1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 192.168.136.30:6379@16379 master - 0 1763455476976 2 connected 5461-10922
a2f6e420ea43cbc3bb3f5da412c9710326595f73 192.168.136.135:6379@16379 slave 1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 0 1763455478000 2 connected
a2f2e5e557cae3987fc3d63893b30122c7298773 192.168.136.20:6379@16379 slave 674e0ff751aabfd3bfd8860672da5c7c38a4837e 0 1763455477482 3 connected
f4e6693a4d6c048f7e485e12235f959ad90699a5 192.168.136.138:6379@16379 master - 0 1763455478591 0 connected #master4 has joined the cluster, but owns no slots yet
a4b4e54a47dda7ff25de6cae53d7533ae2a080d6 192.168.136.40:6379@16379 slave 34db43cab9a8c845fa42f168b66e350447485acb 0 1763455478996 1 connected
674e0ff751aabfd3bfd8860672da5c7c38a4837e 192.168.136.134:6379@16379 master - 0 1763455477989 3 connected 10923-16383
#assign slots to master4
[root@redis-master1 redis]# src/redis-cli --cluster reshard 192.168.136.138:6379
>>> Performing Cluster Check (using node 192.168.136.138:6379)
......
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4000 #enter 4000 to give master4 4000 slots
What is the receiving node ID? f4e6693a4d6c048f7e485e12235f959ad90699a5 #the ID of master4, the node receiving the slots
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: all #enter all to draw the slots evenly from every existing master
......
Do you want to proceed with the proposed reshard plan (yes/no)? yes #type yes to execute the plan
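The same migration can also be scripted without the dialog; this non-interactive form (used again below when shrinking the cluster) should be equivalent:
#non-interactive equivalent of the dialog above: 4000 slots from all masters to master4
src/redis-cli --cluster reshard 192.168.136.138:6379 --cluster-from all --cluster-to f4e6693a4d6c048f7e485e12235f959ad90699a5 --cluster-slots 4000 --cluster-yes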
#check the cluster state from any machine in the cluster
[root@redis-master1 redis]# src/redis-cli -h 192.168.136.10 -c
192.168.136.10:6379> cluster nodes
34db43cab9a8c845fa42f168b66e350447485acb 192.168.136.10:6379@16379 myself,master - 0 1763455736000 1 connected 1333-5460
1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 192.168.136.30:6379@16379 master - 0 1763455739000 2 connected 6795-10922
a2f6e420ea43cbc3bb3f5da412c9710326595f73 192.168.136.135:6379@16379 slave 1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 0 1763455738000 2 connected
a2f2e5e557cae3987fc3d63893b30122c7298773 192.168.136.20:6379@16379 slave 674e0ff751aabfd3bfd8860672da5c7c38a4837e 0 1763455740098 3 connected
f4e6693a4d6c048f7e485e12235f959ad90699a5 192.168.136.138:6379@16379 master - 0 1763455738582 7 connected 0-1332 5461-6794 10923-12255 #master4 now owns slots, so the new master is in place; it still has no replica though
a4b4e54a47dda7ff25de6cae53d7533ae2a080d6 192.168.136.40:6379@16379 slave 34db43cab9a8c845fa42f168b66e350447485acb 0 1763455739088 1 connected
674e0ff751aabfd3bfd8860672da5c7c38a4837e 192.168.136.134:6379@16379 master - 0 1763455739088 3 connected 12256-16383
#--cluster-master-id takes master4's ID
root@redis-master4:/data/redis# src/redis-cli --cluster add-node 192.168.136.139:6379 192.168.136.138:6379 --cluster-slave --cluster-master-id f4e6693a4d6c048f7e485e12235f959ad90699a5
>>> Adding node 192.168.136.139:6379 to cluster 192.168.136.138:6379
>>> Performing Cluster Check (using node 192.168.136.138:6379)
......
>>> Configure node as replica of 192.168.136.138:6379.
[OK] New node added correctly.
#check the cluster state from any machine in the cluster
[root@redis-master1 redis]# src/redis-cli -h 192.168.136.10 -c
192.168.136.10:6379> cluster nodes
34db43cab9a8c845fa42f168b66e350447485acb 192.168.136.10:6379@16379 myself,master - 0 1763457024000 1 connected 1333-5460
1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 192.168.136.30:6379@16379 master - 0 1763457029000 2 connected 6795-10922
a2f6e420ea43cbc3bb3f5da412c9710326595f73 192.168.136.135:6379@16379 slave 1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 0 1763457029000 2 connected
a2f2e5e557cae3987fc3d63893b30122c7298773 192.168.136.20:6379@16379 slave 674e0ff751aabfd3bfd8860672da5c7c38a4837e 0 1763457029296 3 connected
f4e6693a4d6c048f7e485e12235f959ad90699a5 192.168.136.138:6379@16379 master - 0 1763457029296 7 connected 0-1332 5461-6794 10923-12255
a4b4e54a47dda7ff25de6cae53d7533ae2a080d6 192.168.136.40:6379@16379 slave 34db43cab9a8c845fa42f168b66e350447485acb 0 1763457028282 1 connected
0154f7abf40bc5982ada59ef5eadb894252a382c 192.168.136.139:6379@16379 slave f4e6693a4d6c048f7e485e12235f959ad90699a5 0 1763457028587 7 connected
674e0ff751aabfd3bfd8860672da5c7c38a4837e 192.168.136.134:6379@16379 master - 0 1763457027270 3 connected 12256-16383
192.168.136.10:6379>

root@redis-master4:/data/redis# src/redis-cli -h 192.168.136.138 -c
192.168.136.138:6379> ping
PONG
192.168.136.138:6379> get name #the test key written back when the cluster was three-master/three-replica is visible here too
"lisa"
192.168.136.138:6379> set addr shanghai
-> Redirected to slot [12790] located at 192.168.136.134:6379
OK
#query the data from any machine
[root@redis-master2 redis]# src/redis-cli -c #note: -c (cluster mode) is required; without it you get MOVED errors instead of the data
127.0.0.1:6379> get name
-> Redirected to slot [5798] located at 192.168.136.138:6379
"lisa"
192.168.136.138:6379> get addr #可以查到数据
-> Redirected to slot [12790] located at 192.168.136.134:6379
"shanghai"
192.168.136.134:6379>
With that, the new nodes have been added to the cluster successfully!
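As an aside, instead of hand-picking slot counts in reshard, redis-cli can balance slots across masters automatically; a sketch (the --cluster-use-empty-masters flag lets slot-less masters receive a share too):
#optional alternative to manual resharding: even out slots across all masters
src/redis-cli --cluster rebalance 192.168.136.10:6379 --cluster-use-empty-masters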
To shrink the cluster, the order is the reverse:
remove the replica ----> move the master's slots away ----> remove the master
Before removing a Redis node, make sure you are not logged into that node, otherwise the removal will not complete properly.
#del-node takes ip:port followed by the node ID
[root@redis-master1 redis]# src/redis-cli --cluster del-node 192.168.136.139:6379 0154f7abf40bc5982ada59ef5eadb894252a382c
[root@redis-master1 redis]# src/redis-cli -h 192.168.136.10 -c
192.168.136.10:6379> cluster nodes #the .139 node is gone
34db43cab9a8c845fa42f168b66e350447485acb 192.168.136.10:6379@16379 myself,master - 0 1763458155000 1 connected 1333-5460
1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 192.168.136.30:6379@16379 master - 0 1763458155000 2 connected 6795-10922
a2f6e420ea43cbc3bb3f5da412c9710326595f73 192.168.136.135:6379@16379 slave 1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 0 1763458154000 2 connected
a2f2e5e557cae3987fc3d63893b30122c7298773 192.168.136.20:6379@16379 slave 674e0ff751aabfd3bfd8860672da5c7c38a4837e 0 1763458155347 3 connected
f4e6693a4d6c048f7e485e12235f959ad90699a5 192.168.136.138:6379@16379 master - 0 1763458154337 7 connected 0-1332 5461-6794 10923-12255
a4b4e54a47dda7ff25de6cae53d7533ae2a080d6 192.168.136.40:6379@16379 slave 34db43cab9a8c845fa42f168b66e350447485acb 0 1763458154842 1 connected
674e0ff751aabfd3bfd8860672da5c7c38a4837e 192.168.136.134:6379@16379 master - 0 1763458156000 3 connected 12256-16383
The replica was removed successfully.
#ip:port: the node being removed (used here as the reshard entry point)
#--cluster-from: ID of the node giving up its slots
#--cluster-to: ID of the master receiving them; to spread the 4000 slots over several masters, run the command once per receiving master's ID
#--cluster-slots: number of slots to move
#master4 was given 4000 slots earlier, so all 4000 must now be moved away
#I give master1 and master2 1333 slots each and master3 1334 (1333+1333+1334 = 4000); every last slot has to be moved off
[root@redis-master1 redis]# src/redis-cli --cluster reshard 192.168.136.138:6379 --cluster-from f4e6693a4d6c048f7e485e12235f959ad90699a5 --cluster-to 34db43cab9a8c845fa42f168b66e350447485acb --cluster-slots 1333 --cluster-yes
[root@redis-master1 redis]# src/redis-cli --cluster reshard 192.168.136.138:6379 --cluster-from f4e6693a4d6c048f7e485e12235f959ad90699a5 --cluster-to 1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 --cluster-slots 1333 --cluster-yes
[root@redis-master1 redis]# src/redis-cli --cluster reshard 192.168.136.138:6379 --cluster-from f4e6693a4d6c048f7e485e12235f959ad90699a5 --cluster-to 674e0ff751aabfd3bfd8860672da5c7c38a4837e --cluster-slots 1334 --cluster-yes
[root@redis-master1 redis]# src/redis-cli -h 192.168.136.10 -c
192.168.136.10:6379> cluster nodes
34db43cab9a8c845fa42f168b66e350447485acb 192.168.136.10:6379@16379 myself,master - 0 1763459052000 8 connected 0-5460
1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 192.168.136.30:6379@16379 master - 0 1763459053000 9 connected 5461-6793 6795-10922
a2f6e420ea43cbc3bb3f5da412c9710326595f73 192.168.136.135:6379@16379 slave 1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 0 1763459054362 9 connected
a2f2e5e557cae3987fc3d63893b30122c7298773 192.168.136.20:6379@16379 slave 674e0ff751aabfd3bfd8860672da5c7c38a4837e 0 1763459053349 10 connected
f4e6693a4d6c048f7e485e12235f959ad90699a5 192.168.136.138:6379@16379 master - 0 1763459052843 7 connected #master4 no longer owns any slots
a4b4e54a47dda7ff25de6cae53d7533ae2a080d6 192.168.136.40:6379@16379 slave 34db43cab9a8c845fa42f168b66e350447485acb 0 1763459052335 8 connected
674e0ff751aabfd3bfd8860672da5c7c38a4837e 192.168.136.134:6379@16379 master - 0 1763459053854 10 connected 6794 10923-16383
192.168.136.10:6379> quit
[root@redis-master1 redis]# src/redis-cli --cluster del-node 192.168.136.138:6379 f4e6693a4d6c048f7e485e12235f959ad90699a5 #remove master4
>>> Removing node f4e6693a4d6c048f7e485e12235f959ad90699a5 from cluster 192.168.136.138:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
[root@redis-master1 redis]# src/redis-cli -h 192.168.136.10 -c
192.168.136.10:6379> cluster nodes #as you can see, master4 is gone
34db43cab9a8c845fa42f168b66e350447485acb 192.168.136.10:6379@16379 myself,master - 0 1763459138000 8 connected 0-5460
1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 192.168.136.30:6379@16379 master - 0 1763459138000 9 connected 5461-6793 6795-10922
a2f6e420ea43cbc3bb3f5da412c9710326595f73 192.168.136.135:6379@16379 slave 1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 0 1763459137000 9 connected
a2f2e5e557cae3987fc3d63893b30122c7298773 192.168.136.20:6379@16379 slave 674e0ff751aabfd3bfd8860672da5c7c38a4837e 0 1763459138468 10 connected
a4b4e54a47dda7ff25de6cae53d7533ae2a080d6 192.168.136.40:6379@16379 slave 34db43cab9a8c845fa42f168b66e350447485acb 0 1763459137459 8 connected
674e0ff751aabfd3bfd8860672da5c7c38a4837e 192.168.136.134:6379@16379 master - 0 1763459138570 10 connected 6794 10923-16383
192.168.136.10:6379>
The master was removed successfully!
The cluster is back to three masters and three replicas.

Now I stop master1 and watch whether the .40 machine takes over as master.
[root@redis-master1 redis]# ps aux | grep redis
root 10117 0.3 0.3 162520 10424 pts/0 Sl 14:49 0:41 src/redis-sentinel *:26379 [sentinel]
root 10132 0.2 0.3 165080 10256 ? Ssl 14:51 0:27 src/redis-server 0.0.0.0:6379 [cluster]
root 12703 0.0 0.0 112824 988 pts/0 R+ 17:49 0:00 grep --color=auto redis
[root@redis-master1 redis]# kill 10132
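An equivalent way to stop the instance for this test is through redis-cli; nosave skips the final RDB save:
#alternative to kill: shut the master down via redis-cli
src/redis-cli -h 192.168.136.10 shutdown nosave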
[root@redis-slave2 redis]# src/redis-cli -c
127.0.0.1:6379> get name
-> Redirected to slot [5798] located at 192.168.136.138:6379
"lisa"
192.168.136.138:6379> get addr
-> Redirected to slot [12790] located at 192.168.136.134:6379
"shanghai"
192.168.136.134:6379> cluster nodes
34db43cab9a8c845fa42f168b66e350447485acb 192.168.136.10:6379@16379 master,fail - 1763459407061 1763459404000 8 disconnected
a2f6e420ea43cbc3bb3f5da412c9710326595f73 192.168.136.135:6379@16379 slave 1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 0 1763459448585 9 connected
674e0ff751aabfd3bfd8860672da5c7c38a4837e 192.168.136.134:6379@16379 myself,master - 0 1763459448000 10 connected 6794 10923-16383
a2f2e5e557cae3987fc3d63893b30122c7298773 192.168.136.20:6379@16379 slave 674e0ff751aabfd3bfd8860672da5c7c38a4837e 0 1763459451625 10 connected
1892005e984fac2d8fcffaa0e8ec5cbc21f079e0 192.168.136.30:6379@16379 master - 0 1763459449600 9 connected 5461-6793 6795-10922
a4b4e54a47dda7ff25de6cae53d7533ae2a080d6 192.168.136.40:6379@16379 master - 0 1763459450612 11 connected 0-5460 #the .40 node has taken over from .10 as master; failover succeeded
192.168.136.134:6379>
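One last point worth knowing: if you restart the old master now, it does not take its slots back; it rejoins the cluster as a replica of the node that replaced it (192.168.136.40 here):
#restart the failed node; it comes back as a replica of .40
[root@redis-master1 redis]# nohup src/redis-server ./redis.conf &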
Note:
If there are any errors or omissions in this article, corrections and advice are welcome.
This article is 100% original; when reposting, please credit the original author and respect the work.
Likes, follows, and comments are much appreciated! Your support is my biggest motivation to keep writing; see you in the comments~