Redis High Availability: Cluster
2021-03-09 23:57
Redis offers three high-availability options: master-replica replication, Sentinel, and Cluster. Sentinel and Cluster are both built on top of redis master-replica replication; plain master-replica replication by itself cannot fail over automatically.
Sentinel mode deploys a sentinel group alongside the redis master and its replicas. Sentinels are a special kind of redis process that form a quorum-based monitoring cluster of at least three nodes. They watch the state of the master and replicas; when the configured number of sentinels (usually a majority) see the master as failed, they agree on a failover and promote a replica. One sentinel group can monitor several redis master-replica sets. The detailed configuration steps are not covered here.
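As a rough sketch only (the master address, name, and timeouts below are placeholders, not a configuration from this article), a minimal sentinel setup that monitors one master with a quorum of 2 looks like this; the same file is run on at least three machines with redis-sentinel:
# sentinel.conf (minimal sketch, placeholder addresses)
port 26379
sentinel monitor mymaster 192.168.1.1 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1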
Cluster mode, also called redis cluster, is a multi-node distributed cache whose architecture resembles a distributed database: the CRC16 of each redis key is mapped into one of 16384 hash slots, so every node stores only part of the data and all nodes together form the complete cluster. Clients can read and write against any node, but a single command can only touch keys that live in the same slot, so cross-slot multi-key operations are not supported.
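A key's slot can be checked directly with the CLUSTER KEYSLOT command (the node address and key name below are only examples):
# slot = CRC16(key) mod 16384
redis-cli -h 192.168.1.1 -p 7001 cluster keyslot user:1000
Multi-key commands work only when all keys hash to the same slot; hash tags such as {user:1000}.followers and {user:1000}.following force related keys into one slot because only the part inside the braces is hashed.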
The strength of redis cluster is fast scaling out and in: nodes can be added and removed online, as sketched below. Unlike sentinel mode, every server in a redis cluster can act as a master and serve reads and writes, so the resources of all servers are put to use. For high availability, each master keeps one or more replicas on other servers, and a majority vote drives automatic failover when a master fails, so the cluster tolerates the simultaneous failure of a minority of its nodes.
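For reference, online scaling is driven by redis-cli --cluster subcommands; the addresses and node IDs below are placeholders for a cluster like the one built later in this article:
# Add an empty master, then move some slots onto it interactively
redis-cli --cluster add-node 192.168.1.3:7007 192.168.1.1:7001
redis-cli --cluster reshard 192.168.1.1:7001
# Add a node as a replica of an existing master
redis-cli --cluster add-node 192.168.1.3:7008 192.168.1.1:7001 --cluster-slave --cluster-master-id <master-node-id>
# Remove a node (reshard its slots away first if it is a master)
redis-cli --cluster del-node 192.168.1.1:7001 <node-id-to-remove>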
The following walks through a simple redis cluster setup, using two machines as an example.
Create the redis directories on each of the two machines:
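A plausible set of commands, matching the layout shown below and the port assignment used later (7001-7003 on 192.168.1.1, 7004-7006 on 192.168.1.2):
# On 192.168.1.1
mkdir -p /redis-cluster/700{1,2,3}/{bin,data,etc,log}
# On 192.168.1.2
mkdir -p /redis-cluster/700{4,5,6}/{bin,data,etc,log}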
Copy the compiled redis binaries into the 7001 directory (the build itself is omitted here; after running make, the binaries can be taken from the src directory of the redis source tree). The directory layout I usually use looks like this:
redis-cluster/
└── 7001
├── bin
├── data
├── etc
└── log
Edit the configuration file:
vi /redis-cluster/7001/etc/redis.conf
bind 0.0.0.0
port 7001
pidfile /var/run/redis_7001.pid
loglevel notice
logfile "/redis-cluster/7001/log/redis_7001.log"
dir /redis-cluster/7001/data
cluster-config-file /redis-cluster/nodes-7001.conf
daemonize yes
supervised no
appendonly yes
cluster-enabled yes
cluster-node-timeout 5000
save 900 1
save 300 10
save 60 10000
dbfilename dump.rdb
appendfilename "appendonly.aof"
appendfsync everysec
Repeat the same steps to copy the binaries and configuration for the other five nodes, then start all six redis instances; the commands can be put in a script:
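A sketch of such a script for 192.168.1.1, assuming the paths used above (on 192.168.1.2 substitute ports 7004-7006):
#!/bin/bash
# Clone the 7001 binaries and config for the other local ports,
# rewriting every occurrence of the port number in the config
for port in 7002 7003; do
    cp /redis-cluster/7001/bin/* /redis-cluster/$port/bin/
    sed "s/7001/$port/g" /redis-cluster/7001/etc/redis.conf > /redis-cluster/$port/etc/redis.conf
done
# Start one redis-server per node (daemonize yes lets each instance background itself)
for port in 7001 7002 7003; do
    /redis-cluster/$port/bin/redis-server /redis-cluster/$port/etc/redis.conf
done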
Run the following command to create the cluster:
redis-cli --cluster create 192.168.1.1:7001 192.168.1.1:7002 192.168.1.1:7003 192.168.1.2:7004 192.168.1.2:7005 192.168.1.2:7006 --cluster-replicas 1
Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.1.2:7006 to 192.168.1.1:7001
Adding replica 192.168.1.1:7003 to 192.168.1.2:7004
Adding replica 192.168.1.2:7005 to 192.168.1.1:7002
M: 5fca048d05b26211e5ccb8b932b6327d31773822 192.168.1.1:7001
slots:[0-5460] (5461 slots) master
M: 4f98b53609307f11674b60bd84caecce3ff8d18e 192.168.1.1:7002
slots:[10923-16383] (5461 slots) master
S: fddf4c97eeb6f59d0de9e45cb0899b309e90d537 192.168.1.1:7003
replicates 84e20fd036d2b1c37d0071a632b08138490f0173
M: 84e20fd036d2b1c37d0071a632b08138490f0173 192.168.1.2:7004
slots:[5461-10922] (5462 slots) master
S: 9769e3b34a0bdee505bba06a8dc924928917671b 192.168.1.2:7005
replicates 4f98b53609307f11674b60bd84caecce3ff8d18e
S: 9c6d002711965fbcbc945fa6ba99ab18b4c04810 192.168.1.2:7006
replicates 5fca048d05b26211e5ccb8b932b6327d31773822
Can I set the above configuration? (type 'yes' to accept): yes
Nodes configuration updated
Assign a different config epoch to each node
Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
Performing Cluster Check (using node 192.168.1.1:7001)
M: 5fca048d05b26211e5ccb8b932b6327d31773822 192.168.1.1:7001
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 84e20fd036d2b1c37d0071a632b08138490f0173 192.168.1.2:7004
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: fddf4c97eeb6f59d0de9e45cb0899b309e90d537 192.168.1.1:7003
slots: (0 slots) slave
replicates 84e20fd036d2b1c37d0071a632b08138490f0173
S: 9769e3b34a0bdee505bba06a8dc924928917671b 192.168.1.2:7005
slots: (0 slots) slave
replicates 4f98b53609307f11674b60bd84caecce3ff8d18e
M: 4f98b53609307f11674b60bd84caecce3ff8d18e 192.168.1.1:7002
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 9c6d002711965fbcbc945fa6ba99ab18b4c04810 192.168.1.2:7006
slots: (0 slots) slave
replicates 5fca048d05b26211e5ccb8b932b6327d31773822
[OK] All nodes agree about slots configuration.
Check for open slots...
Check slots coverage...
[OK] All 16384 slots covered.
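Once creation succeeds, a quick smoke test is to write through one node and read through another in cluster mode (-c makes redis-cli follow MOVED redirections); the key here is just an example:
redis-cli -c -h 192.168.1.1 -p 7001 set hello world
redis-cli -c -h 192.168.1.2 -p 7004 get hello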
Cluster check commands:
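A few commonly used checks, taking the 7001 node from the example above as the entry point:
# Overall health: slot coverage, replica distribution, open slots
redis-cli --cluster check 192.168.1.1:7001
# Short per-master summary of keys, slots and replicas
redis-cli --cluster info 192.168.1.1:7001
# Cluster state and node table as seen by a single node
redis-cli -h 192.168.1.1 -p 7001 cluster info
redis-cli -h 192.168.1.1 -p 7001 cluster nodes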