This article walks through Redis cluster configuration and management. The material is practical and worth keeping as a reference, so it is shared here in full; read on for the details.
Redis has supported clustering since version 3.0, and after several rounds of updates and optimization the cluster functionality in recent releases is quite mature. This article briefly describes how to build and configure a Redis cluster. The Redis version used is 5.0.4, and the operating system is NeoKylin (essentially the same kernel as CentOS).
1. Redis Cluster Fundamentals
A Redis cluster is a group of Redis nodes that share data among themselves. The nodes form a decentralized network in which every node has equal status; each node stores its own share of the data along with the cluster state. Nodes talk to each other over a gossip protocol, which keeps node state information synchronized across the cluster.
Cluster data is managed by partitioning: each node holds a subset of the data. Keys are distributed using hash slots, which differs from classic consistent hashing. A Redis cluster has 16384 hash slots, and each key is assigned to a slot by taking the CRC16 checksum of the key modulo 16384.
To keep the cluster usable when some nodes fail or become unable to communicate with the rest of the cluster, a master/replica model is used. Reads and writes for a key are served by the master that owns the key's hash slot; if that master goes down, one of its replica (slave) nodes is promoted to take over as master.
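Failover can also be exercised by hand. As a rough illustration, once the test cluster built in the following sections is running, a replica can be asked to take over from its master with the CLUSTER FAILOVER command (a sketch; port 6384 is used because it ends up as the replica of 6379 in the cluster created below):

# Run against a replica (6384 replicates 6379 in the cluster built below):
./server31/redis-cli -p 6384 cluster failover
# A coordinated manual failover is performed; afterwards "cluster nodes"
# should show 6384 acting as master and 6379 demoted to replica.
./server31/redis-cli -p 6384 cluster nodes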
2. Environment Setup
Here a cluster of three masters and three replicas is set up on a single PC.
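If Redis 5.0.4 is not installed yet, it can be built from source roughly as follows (a sketch; the download mirror and paths are only an example):

wget http://download.redis.io/releases/redis-5.0.4.tar.gz
tar xzf redis-5.0.4.tar.gz
cd redis-5.0.4
make
# redis-server and redis-cli are produced under src/ and can be copied
# into each node directory created below.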
Create a directory named rediscluster under /opt/ to hold the node directories.
Inside it, create six directories named server10, server11, server20, server21, server30 and server31, one per Redis node; the nodes will use ports 6379, 6380, 6381, 6382, 6383 and 6384 respectively. Taking server10 as an example, its configuration is as follows:
port 6379                              # port this node listens on
daemonize yes                          # run in the background
pidfile /var/run/redis_6379.pid
cluster-enabled yes                    # run the instance in cluster mode
cluster-node-timeout 15000             # node timeout in milliseconds
cluster-config-file nodes-6379.conf    # cluster state file maintained by Redis itself
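The configuration for the other nodes differs only in the port-derived values. For example, server11's redis.conf (port 6380) would look like this sketch:

port 6380
daemonize yes
pidfile /var/run/redis_6380.pid
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file nodes-6380.conf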
Configure the remaining nodes in the same way, changing only the port and the port-derived file names. Once all six nodes are configured, start them:
[root@localhost rediscluster]# ./server10/redis-server ./server10/redis.conf &
[root@localhost rediscluster]# ./server11/redis-server ./server11/redis.conf &
[root@localhost rediscluster]# ./server20/redis-server ./server20/redis.conf &
[root@localhost rediscluster]# ./server21/redis-server ./server21/redis.conf &
[root@localhost rediscluster]# ./server30/redis-server ./server30/redis.conf &
[root@localhost rediscluster]# ./server31/redis-server ./server31/redis.conf &
Check that all six processes started:
[root@localhost rediscluster]# ps -ef|grep redis
root 11842 1 0 15:03 ? 00:00:12 ./server10/redis-server 127.0.0.1:6379 [cluster]
root 11950 1 0 15:03 ? 00:00:13 ./server11/redis-server 127.0.0.1:6380 [cluster]
root 12074 1 0 15:04 ? 00:00:13 ./server20/redis-server 127.0.0.1:6381 [cluster]
root 12181 1 0 15:04 ? 00:00:12 ./server21/redis-server 127.0.0.1:6382 [cluster]
root 12297 1 0 15:04 ? 00:00:12 ./server30/redis-server 127.0.0.1:6383 [cluster]
root 12404 1 0 15:04 ? 00:00:12 ./server31/redis-server 127.0.0.1:6384 [cluster]
3. Cluster Creation
Creating the cluster is very simple, a single command:
redis-cli --cluster create 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 --cluster-replicas 1
Here --cluster-replicas 1 means each master gets one replica.
[root@localhost rediscluster]# ./server10/redis-cli --cluster create 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:6383 to 127.0.0.1:6379
Adding replica 127.0.0.1:6384 to 127.0.0.1:6380
Adding replica 127.0.0.1:6382 to 127.0.0.1:6381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
M: 63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
M: d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
S: 0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382
   replicates 63e20c75984e493892265ddd2a441c81bcdc575c
S: fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383
   replicates d9a79ed6204e558b2fcee78ea05218b4de006acd
S: ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384
   replicates efa84a74525749b8ea20585074dda81b852e9c29
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   additional replica(s)
M: d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   additional replica(s)
S: 0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 63e20c75984e493892265ddd2a441c81bcdc575c
S: ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384
   slots: (0 slots) slave
   replicates efa84a74525749b8ea20585074dda81b852e9c29
M: 63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   additional replica(s)
S: fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383
   slots: (0 slots) slave
   replicates d9a79ed6204e558b2fcee78ea05218b4de006acd
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
The cluster is now created, with masters and replicas assigned as follows:
Adding replica 127.0.0.1:6383 to 127.0.0.1:6379
Adding replica 127.0.0.1:6384 to 127.0.0.1:6380
Adding replica 127.0.0.1:6382 to 127.0.0.1:6381
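At this point the overall cluster health can be checked from any node with CLUSTER INFO (a sketch; the values in the comments are what a healthy 3-master, 3-replica cluster is expected to report):

./server10/redis-cli -p 6379 cluster info
# Key fields to look for in the output:
#   cluster_state:ok
#   cluster_slots_assigned:16384
#   cluster_known_nodes:6
#   cluster_size:3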
4. Cluster Testing
Connect through the client on port 6379 and run a test; the request is redirected to 6381:
[root@localhost rediscluster]# ./server10/redis-cli -h 127.0.0.1 -c -p 6379
127.0.0.1:6379> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:6381
OK
127.0.0.1:6381> get foo
"bar"
Connect to 6381 and check:
[root@localhost rediscluster]# ./server10/redis-cli -h 127.0.0.1 -c -p 6381
127.0.0.1:6381> get foo
"bar"
The result is the same, which shows the cluster is configured correctly.
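The redirection above can be explained with CLUSTER KEYSLOT, which reports the hash slot a key maps to (CRC16 of the key modulo 16384). For the key foo this is slot 12182, which lies in the 10923-16383 range owned by the 6381 master, hence the redirect:

[root@localhost rediscluster]# ./server10/redis-cli -c -p 6379
127.0.0.1:6379> cluster keyslot foo
(integer) 12182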
5. Adding Nodes to the Cluster
Create two more directories under rediscluster, server40 and server41, and configure two additional Redis nodes on ports 6385 and 6386. 6385 will be the new master and 6386 its replica. Then start the two new nodes.
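A sketch of the start commands, assuming redis-server and redis.conf were copied into server40 and server41 just like the earlier node directories:

[root@localhost server40]# ./redis-server ./redis.conf &
[root@localhost server41]# ./redis-server ./redis.conf &

The process list should now show all eight instances: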
[root@localhost server41]# ps -ef|grep redis
root 11842 1 0 15:03 ? 00:00:18 ./server10/redis-server 127.0.0.1:6379 [cluster]
root 11950 1 0 15:03 ? 00:00:19 ./server11/redis-server 127.0.0.1:6380 [cluster]
root 12074 1 0 15:04 ? 00:00:18 ./server20/redis-server 127.0.0.1:6381 [cluster]
root 12181 1 0 15:04 ? 00:00:18 ./server21/redis-server 127.0.0.1:6382 [cluster]
root 12297 1 0 15:04 ? 00:00:17 ./server30/redis-server 127.0.0.1:6383 [cluster]
root 12404 1 0 15:04 ? 00:00:18 ./server31/redis-server 127.0.0.1:6384 [cluster]
root 30563 1 0 18:01 ? 00:00:00 ./redis-server 127.0.0.1:6385 [cluster]
root 30582 1 0 18:02 ? 00:00:00 ./redis-server 127.0.0.1:6386 [cluster]
Add the master node:
[root@localhost server41]# ./redis-cli --cluster add-node 127.0.0.1:6385 127.0.0.1:6379
>>> Adding node 127.0.0.1:6385 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   additional replica(s)
M: d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   additional replica(s)
S: 0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 63e20c75984e493892265ddd2a441c81bcdc575c
S: ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384
   slots: (0 slots) slave
   replicates efa84a74525749b8ea20585074dda81b852e9c29
M: 63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   additional replica(s)
S: fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383
   slots: (0 slots) slave
   replicates d9a79ed6204e558b2fcee78ea05218b4de006acd
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6385 to make it join the cluster.
[OK] New node added correctly.
View the node list:
[root@localhost server41]# ./redis-cli
127.0.0.1:6379> cluster nodes
22e8a8e97d6f7cc7d627e577a986384d4d181a4f 127.0.0.1:6385@16385 master - 0 1555064037664 0 connected
efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379@16379 myself,master - 0 1555064036000 1 connected 0-5460
d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381@16381 master - 0 1555064038666 3 connected 10923-16383
0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382@16382 slave 63e20c75984e493892265ddd2a441c81bcdc575c 0 1555064035000 4 connected
ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384@16384 slave efa84a74525749b8ea20585074dda81b852e9c29 0 1555064037000 6 connected
63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380@16380 master - 0 1555064037000 2 connected 5461-10922
fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383@16383 slave d9a79ed6204e558b2fcee78ea05218b4de006acd 0 1555064037000 5 connected
Add the replica node:
[root@localhost server41]# ./redis-cli --cluster add-node 127.0.0.1:6386 127.0.0.1:6379 --cluster-slave --cluster-master-id 22e8a8e97d6f7cc7d627e577a986384d4d181a4f
>>> Adding node 127.0.0.1:6386 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   additional replica(s)
M: 22e8a8e97d6f7cc7d627e577a986384d4d181a4f 127.0.0.1:6385
   slots: (0 slots) master
M: d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   additional replica(s)
S: 0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 63e20c75984e493892265ddd2a441c81bcdc575c
S: ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384
   slots: (0 slots) slave
   replicates efa84a74525749b8ea20585074dda81b852e9c29
M: 63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   additional replica(s)
S: fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383
   slots: (0 slots) slave
   replicates d9a79ed6204e558b2fcee78ea05218b4de006acd
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6386 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 127.0.0.1:6385.
[OK] New node added correctly.
Once both nodes have joined, assign slots (and therefore data) to the new master:
[root@localhost server41]# ./redis-cli --cluster reshard 127.0.0.1:6385
How many slots do you want to move (from 1 to 16384)? 1000
What is the receiving node ID? 22e8a8e97d6f7cc7d627e577a986384d4d181a4f
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all
That completes the expansion. The slot distribution after resharding can be checked with the cluster nodes command:
127.0.0.1:6379> cluster nodes
22e8a8e97d6f7cc7d627e577a986384d4d181a4f 127.0.0.1:6385@16385 master - 0 1555064706000 7 connected 0-332 5461-5794 10923-11255
efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379@16379 myself,master - 0 1555064707000 1 connected 333-5460
d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381@16381 master - 0 1555064705000 3 connected 11256-16383
7c24e205301b38caa1ff3cd8b270a1ceb7249a2e 127.0.0.1:6386@16386 slave 22e8a8e97d6f7cc7d627e577a986384d4d181a4f 0 1555064705000 7 connected
0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382@16382 slave 63e20c75984e493892265ddd2a441c81bcdc575c 0 1555064707000 4 connected
ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384@16384 slave efa84a74525749b8ea20585074dda81b852e9c29 0 1555064707236 6 connected
63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380@16380 master - 0 1555064706000 2 connected 5795-10922
fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383@16383 slave d9a79ed6204e558b2fcee78ea05218b4de006acd 0 1555064708238 5 connected
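In addition to cluster nodes, redis-cli --cluster check verifies that all 16384 slots are covered and that every node agrees on the slot layout (a sketch, run against any node's address):

[root@localhost server41]# ./redis-cli --cluster check 127.0.0.1:6379

It should finish with the same "[OK] All 16384 slots covered." message seen during cluster creation.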
6. Removing Nodes from the Cluster
When shrinking the cluster, remove the replica node first:
[root@localhost server41]# ./redis-cli --cluster del-node 127.0.0.1:6386 7c24e205301b38caa1ff3cd8b270a1ceb7249a2e
>>> Removing node 7c24e205301b38caa1ff3cd8b270a1ceb7249a2e from cluster 127.0.0.1:6386
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
Then move the master's hash slots off the node; all of its slots must be relocated before it can be removed:
[root@localhost server41]# ./redis-cli --cluster reshard 127.0.0.1:6385
How many slots do you want to move (from 1 to 16384)? 1000
What is the receiving node ID? efa84a74525749b8ea20585074dda81b852e9c29    // the node receiving the slots
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: 22e8a8e97d6f7cc7d627e577a986384d4d181a4f    // the master being removed
Source node #2: done
Finally, remove the now-empty master node itself with del-node, using the same syntax as for the replica and the 6385 node's ID:
[root@localhost server41]# ./redis-cli --cluster del-node 127.0.0.1:6385 22e8a8e97d6f7cc7d627e577a986384d4d181a4f
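Once both nodes are gone, the removal can be verified from any remaining node (a sketch; after 6385 and 6386 have been removed, the first command should print nothing and the known-node count should be back to 6):

[root@localhost server41]# ./redis-cli cluster nodes | grep -E '6385|6386'
# (no output expected once both nodes have been removed)
[root@localhost server41]# ./redis-cli cluster info | grep cluster_known_nodes
# expected: cluster_known_nodes:6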
That covers Redis cluster configuration and management. Some of these points are likely to come up in everyday work, and hopefully this article has helped you pick up something new.