Redis Cluster Management: Adding and Removing Cluster Nodes

2023-12-13 09:32

This article walks through adding and removing nodes in a Redis Cluster. Hopefully it serves as a useful reference for developers tackling the same task.

Let's test adding a new node to the cluster. There are two cases: joining as a master, or joining as a replica of an existing node. We'll try each in turn:

  1. Create a new node on port 7006 and add it as a new master: 
    Create a 7006 directory, copy a config file over, change the port, and start the Redis instance on port 7006;
[root@spg 7006]# ps -ef | grep redis
root      3063  2974  0 20:04 pts/0    00:00:12 redis-server *:7001 [cluster]
root      3081  2974  0 20:05 pts/0    00:00:11 redis-server *:7002 [cluster]
root      3093  2974  0 20:05 pts/0    00:00:11 redis-server *:7003 [cluster]
root      3109  2974  0 20:06 pts/0    00:00:11 redis-server *:7004 [cluster]
root      3123  2974  0 20:06 pts/0    00:00:11 redis-server *:7005 [cluster]
root      3487  2974  0 20:30 pts/0    00:00:06 redis-server *:7000 [cluster]
root      3981  2974  0 21:09 pts/0    00:00:00 redis-server *:7006 [cluster]
root      3993  2974  0 21:09 pts/0    00:00:00 grep --color=auto redis

Add the 7006 node to the cluster:

redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000

add-node is the join command; 127.0.0.1:7006 is the new node, and 127.0.0.1:7000 is any existing node of the target cluster, used only to identify which cluster to join; in principle any member works.

[root@spg 7006]# redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000
>>> Adding node 127.0.0.1:7006 to cluster 127.0.0.1:7000
Connecting to node 127.0.0.1:7000: OK
Connecting to node 127.0.0.1:7003: OK
Connecting to node 127.0.0.1:7002: OK
Connecting to node 127.0.0.1:7004: OK
Connecting to node 127.0.0.1:7001: OK
Connecting to node 127.0.0.1:7005: OK
>>> Performing Cluster Check (using node 127.0.0.1:7000)
S: be26c521481afcd6e739e2bfef69e9dcfb63d0a6 127.0.0.1:7000
   slots: (0 slots) slave
   replicates 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c
M: 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c 127.0.0.1:7003
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 947cc4a9e890672cfad4806a5921e9f8bdf05c05 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 05ac96f9cdee679f98e8f7ce8e97cf1cbea608ca 127.0.0.1:7004
   slots: (0 slots) slave
   replicates ce06b13387702c3ee63e0118dd10c5f81a1285b5
M: ce06b13387702c3ee63e0118dd10c5f81a1285b5 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: b65f33d97416795226964aa22f3b4a8ac7366a99 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 947cc4a9e890672cfad4806a5921e9f8bdf05c05
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connecting to node 127.0.0.1:7006: OK
>>> Send CLUSTER MEET to node 127.0.0.1:7006 to make it join the cluster.
[OK] New node added correctly.

The output shows that node 7006 joined the cluster successfully. We can now check the cluster state:

[root@spg 7006]# redis-trib.rb check 127.0.0.1:7000
Connecting to node 127.0.0.1:7000: OK
Connecting to node 127.0.0.1:7006: OK
Connecting to node 127.0.0.1:7003: OK
Connecting to node 127.0.0.1:7002: OK
Connecting to node 127.0.0.1:7004: OK
Connecting to node 127.0.0.1:7001: OK
Connecting to node 127.0.0.1:7005: OK
>>> Performing Cluster Check (using node 127.0.0.1:7000)
S: be26c521481afcd6e739e2bfef69e9dcfb63d0a6 127.0.0.1:7000
   slots: (0 slots) slave
   replicates 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c
M: fe595e7a38c659a6eb6949bb31fd7474881d6422 127.0.0.1:7006
   slots: (0 slots) master
   0 additional replica(s)
M: 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c 127.0.0.1:7003
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 947cc4a9e890672cfad4806a5921e9f8bdf05c05 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 05ac96f9cdee679f98e8f7ce8e97cf1cbea608ca 127.0.0.1:7004
   slots: (0 slots) slave
   replicates ce06b13387702c3ee63e0118dd10c5f81a1285b5
M: ce06b13387702c3ee63e0118dd10c5f81a1285b5 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: b65f33d97416795226964aa22f3b4a8ac7366a99 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 947cc4a9e890672cfad4806a5921e9f8bdf05c05
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

It is clear that node 7006 has joined the cluster as a master. 
PS: you can also connect with a client and inspect the cluster's nodes, as follows:

[root@spg 7006]# redis-cli -c -p 7006
127.0.0.1:7006> cluster nodes
fe595e7a38c659a6eb6949bb31fd7474881d6422 127.0.0.1:7006 myself,master - 0 0 0 connected
be26c521481afcd6e739e2bfef69e9dcfb63d0a6 127.0.0.1:7000 slave 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c 0 1456667011409 7 connected
947cc4a9e890672cfad4806a5921e9f8bdf05c05 127.0.0.1:7002 master - 0 1456667008376 3 connected 10923-16383
ce06b13387702c3ee63e0118dd10c5f81a1285b5 127.0.0.1:7001 master - 0 1456667007371 2 connected 5461-10922
b65f33d97416795226964aa22f3b4a8ac7366a99 127.0.0.1:7005 slave 947cc4a9e890672cfad4806a5921e9f8bdf05c05 0 1456667013968 3 connected
05ac96f9cdee679f98e8f7ce8e97cf1cbea608ca 127.0.0.1:7004 slave ce06b13387702c3ee63e0118dd10c5f81a1285b5 0 1456667013457 2 connected
1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c 127.0.0.1:7003 master - 0 1456667012429 7 connected 0-5460
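Each line of that CLUSTER NODES output follows a fixed field layout: node id, address, flags, master id, ping/pong timestamps, config epoch, link state, then any slot ranges. As an illustration, a minimal parser sketch in Python (the helper name is ours, not part of any Redis client; it assumes the pre-4.0 address format shown above, without an @cluster-bus-port suffix):

```python
def parse_cluster_node(line: str) -> dict:
    """Parse one line of CLUSTER NODES output into a dict."""
    fields = line.split()
    return {
        "id": fields[0],
        "addr": fields[1],
        "flags": fields[2].split(","),
        # '-' means the node is a master with no master of its own
        "master_id": None if fields[3] == "-" else fields[3],
        "link_state": fields[7],
        "slots": fields[8:],  # slot ranges; empty for replicas
    }

# One of the master lines from the output above:
line = ("947cc4a9e890672cfad4806a5921e9f8bdf05c05 127.0.0.1:7002 "
        "master - 0 1456667008376 3 connected 10923-16383")
print(parse_cluster_node(line)["slots"])  # ['10923-16383']
```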

However, notice one line in the output above:

M: fe595e7a38c659a6eb6949bb31fd7474881d6422 127.0.0.1:7006
   slots: (0 slots) master
   0 additional replica(s)

0 slots? In other words, although 7006 is now a master, it has not been assigned any hash slots, so it does not yet serve any data.
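Which master serves a given key is determined by its hash slot: HASH_SLOT = CRC16(key) mod 16384, using the CRC16/XMODEM variant named in the Redis Cluster specification. A minimal Python sketch, ignoring {hash tag} handling for brevity:

```python
def crc16(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0x0000), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash slot a key maps to (hash-tag handling omitted for brevity)."""
    return crc16(key.encode()) % 16384

# Check value from the Redis Cluster spec: CRC16("123456789") == 0x31C3
print(hex(crc16(b"123456789")))  # 0x31c3
print(key_slot("foo"))           # 12182
```

A master with 0 slots therefore never appears as the target of any key's slot, which is why 7006 handles no reads or writes yet.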
Evidently Redis Cluster does not migrate slots for us when a new node joins; we have to reshard the cluster manually, again with redis-trib.rb:

redis-trib.rb reshard 127.0.0.1:7000

This command migrates hash slots. The trailing 127.0.0.1:7000 just identifies the cluster; any port in [7000-7006] works. Running it produces:

[root@spg 7006]# redis-trib.rb reshard 127.0.0.1:7000
Connecting to node 127.0.0.1:7000: OK
Connecting to node 127.0.0.1:7006: OK
Connecting to node 127.0.0.1:7003: OK
Connecting to node 127.0.0.1:7002: OK
Connecting to node 127.0.0.1:7004: OK
Connecting to node 127.0.0.1:7001: OK
Connecting to node 127.0.0.1:7005: OK
>>> Performing Cluster Check (using node 127.0.0.1:7000)
S: be26c521481afcd6e739e2bfef69e9dcfb63d0a6 127.0.0.1:7000
   slots: (0 slots) slave
   replicates 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c
M: fe595e7a38c659a6eb6949bb31fd7474881d6422 127.0.0.1:7006
   slots: (0 slots) master
   0 additional replica(s)
M: 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c 127.0.0.1:7003
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 947cc4a9e890672cfad4806a5921e9f8bdf05c05 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 05ac96f9cdee679f98e8f7ce8e97cf1cbea608ca 127.0.0.1:7004
   slots: (0 slots) slave
   replicates ce06b13387702c3ee63e0118dd10c5f81a1285b5
M: ce06b13387702c3ee63e0118dd10c5f81a1285b5 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: b65f33d97416795226964aa22f3b4a8ac7366a99 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 947cc4a9e890672cfad4806a5921e9f8bdf05c05
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)?

It asks how many slots to move to 7006. A quick calculation: 16384 / 4 = 4096, so for a balanced distribution we should move 4096 slots to 7006.
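That figure comes straight from dividing the 16384 slots evenly over the new number of masters; a quick sketch (the function name is ours, for illustration):

```python
TOTAL_SLOTS = 16384

def balanced_slots(n_masters: int) -> list:
    """Even split of the 16384 hash slots, spreading any remainder."""
    base, extra = divmod(TOTAL_SLOTS, n_masters)
    return [base + 1 if i < extra else base for i in range(n_masters)]

print(balanced_slots(4))  # [4096, 4096, 4096, 4096]
print(balanced_slots(3))  # [5462, 5461, 5461] -- the original 3-master layout
```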

OK, enter 4096:

How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID?

Now it asks for the receiving node ID; from the output above, 7006's ID is fe595e7a38c659a6eb6949bb31fd7474881d6422.

What is the receiving node ID? fe595e7a38c659a6eb6949bb31fd7474881d6422
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:

Next, redis-trib asks for the source nodes of the reshard, i.e. which nodes the 4096 hash slots should be taken from before being moved to 7006.

If we don't want to take a specific number of slots from particular nodes, we can answer all: every master in the cluster then becomes a source, and redis-trib takes a portion of slots from each until it has 4096, then moves them to 7006:

Source node #1:all

The migration is about to start, and you are asked to confirm:

Do you want to proceed with the proposed reshard plan (yes/no)?

Type yes and press Enter, and redis-trib begins the reshard in earnest, moving the chosen hash slots from the source nodes over to 7006 one by one.

After the migration finishes, let's check again:

[root@spg 7006]# redis-trib.rb check 127.0.0.1:7000
Connecting to node 127.0.0.1:7000: OK
Connecting to node 127.0.0.1:7006: OK
Connecting to node 127.0.0.1:7003: OK
Connecting to node 127.0.0.1:7002: OK
Connecting to node 127.0.0.1:7004: OK
Connecting to node 127.0.0.1:7001: OK
Connecting to node 127.0.0.1:7005: OK
>>> Performing Cluster Check (using node 127.0.0.1:7000)
S: be26c521481afcd6e739e2bfef69e9dcfb63d0a6 127.0.0.1:7000
   slots: (0 slots) slave
   replicates 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c
M: fe595e7a38c659a6eb6949bb31fd7474881d6422 127.0.0.1:7006
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   0 additional replica(s)
M: 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c 127.0.0.1:7003
   slots:1365-5460 (4096 slots) master
   1 additional replica(s)
M: 947cc4a9e890672cfad4806a5921e9f8bdf05c05 127.0.0.1:7002
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
S: 05ac96f9cdee679f98e8f7ce8e97cf1cbea608ca 127.0.0.1:7004
   slots: (0 slots) slave
   replicates ce06b13387702c3ee63e0118dd10c5f81a1285b5
M: ce06b13387702c3ee63e0118dd10c5f81a1285b5 127.0.0.1:7001
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
S: b65f33d97416795226964aa22f3b4a8ac7366a99 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 947cc4a9e890672cfad4806a5921e9f8bdf05c05
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Node 7006 now looks like this:

M: fe595e7a38c659a6eb6949bb31fd7474881d6422 127.0.0.1:7006
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   0 additional replica(s)

That completes adding a new master to the cluster.
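The slot ranges redis-trib reported for 7006 (0-1364,5461-6826,10923-12287) can be sanity-checked by simply counting them; a small sketch (the helper name is ours):

```python
def count_slots(ranges: str) -> int:
    """Count hash slots in a range list like '0-1364,5461-6826,10923-12287'."""
    total = 0
    for part in ranges.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            total += int(hi) - int(lo) + 1
        else:
            total += 1  # a lone slot with no dash
    return total

print(count_slots("0-1364,5461-6826,10923-12287"))  # 4096 -- 7006's share
```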

2. Adding a node to the cluster as a replica. 
We create another node, 7007, following the same steps (omitted here). Once it is up, here is how to add it to the cluster as a replica:

[root@spg 7007]# redis-trib.rb add-node --slave 127.0.0.1:7007 127.0.0.1:7000

Adding --slave to add-node makes the node join as a replica, but chosen at random: the command line is exactly the same as when adding a new master, so we have not said which master should receive the replica. In that case, redis-trib makes 7007 a replica of a random master among those with the fewest replicas.

So, can you guess whose replica it will become? It should be 7006, since 7006 has no replica yet. Let's run it.
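The selection rule boils down to "a master with the fewest attached replicas"; roughly sketched (a simplification of what redis-trib actually does, with hypothetical names):

```python
def pick_master(replica_counts: dict) -> str:
    """Choose a master with the fewest attached replicas, as redis-trib prefers."""
    return min(replica_counts, key=replica_counts.get)

# Replica counts per master before adding 7007, from the check output above:
print(pick_master({"7001": 1, "7002": 1, "7003": 1, "7006": 0}))  # 7006
```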

>>> Adding node 127.0.0.1:7007 to cluster 127.0.0.1:7000
Connecting to node 127.0.0.1:7000: OK
Connecting to node 127.0.0.1:7003: OK
Connecting to node 127.0.0.1:7005: OK
Connecting to node 127.0.0.1:7004: OK
Connecting to node 127.0.0.1:7002: OK
Connecting to node 127.0.0.1:7001: OK
Connecting to node 127.0.0.1:7006: OK
>>> Performing Cluster Check (using node 127.0.0.1:7000)
S: be26c521481afcd6e739e2bfef69e9dcfb63d0a6 127.0.0.1:7000
   slots: (0 slots) slave
   replicates 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c
M: 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c 127.0.0.1:7003
   slots:1365-5460 (4096 slots) master
   1 additional replica(s)
S: b65f33d97416795226964aa22f3b4a8ac7366a99 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 947cc4a9e890672cfad4806a5921e9f8bdf05c05
S: 05ac96f9cdee679f98e8f7ce8e97cf1cbea608ca 127.0.0.1:7004
   slots: (0 slots) slave
   replicates ce06b13387702c3ee63e0118dd10c5f81a1285b5
M: 947cc4a9e890672cfad4806a5921e9f8bdf05c05 127.0.0.1:7002
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
M: ce06b13387702c3ee63e0118dd10c5f81a1285b5 127.0.0.1:7001
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
M: fe595e7a38c659a6eb6949bb31fd7474881d6422 127.0.0.1:7006
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Automatically selected master 127.0.0.1:7006
Connecting to node 127.0.0.1:7007: OK
>>> Send CLUSTER MEET to node 127.0.0.1:7007 to make it join the cluster.
Waiting for the cluster to join.
3417:M 29 Feb 21:03:48.490 # IP address for this node updated to 127.0.0.1
>>> Configure node as replica of 127.0.0.1:7006.
3417:S 29 Feb 21:03:49.423 # Cluster state changed: ok
[OK] New node added correctly.

The output shows that 7006 was automatically selected as the master and 7007 joined the cluster successfully; you can verify this by checking the cluster state.

In the procedure above, the cluster picked a master for the new replica automatically. 
But can we choose the master ourselves? Of course. Let's create one more node, 7008.

[root@spg 7008]# redis-trib.rb add-node --slave --master-id fe595e7a38c659a6eb6949bb31fd7474881d6422 127.0.0.1:7008 127.0.0.1:7003

--master-id specifies the node ID of the intended master; here it is the ID of the 7006 master.

>>> Performing Cluster Check (using node 127.0.0.1:7003)
M: 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c 127.0.0.1:7003
   slots:1365-5460 (4096 slots) master
   1 additional replica(s)
S: e896e0dfb21716540e091129fbf18cc7d473faa9 127.0.0.1:7007
   slots: (0 slots) slave
   replicates fe595e7a38c659a6eb6949bb31fd7474881d6422
S: b65f33d97416795226964aa22f3b4a8ac7366a99 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 947cc4a9e890672cfad4806a5921e9f8bdf05c05
S: 05ac96f9cdee679f98e8f7ce8e97cf1cbea608ca 127.0.0.1:7004
   slots: (0 slots) slave
   replicates ce06b13387702c3ee63e0118dd10c5f81a1285b5
M: fe595e7a38c659a6eb6949bb31fd7474881d6422 127.0.0.1:7006
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   1 additional replica(s)
M: 947cc4a9e890672cfad4806a5921e9f8bdf05c05 127.0.0.1:7002
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
M: ce06b13387702c3ee63e0118dd10c5f81a1285b5 127.0.0.1:7001
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
S: be26c521481afcd6e739e2bfef69e9dcfb63d0a6 127.0.0.1:7000
   slots: (0 slots) slave
   replicates 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connecting to node 127.0.0.1:7008: OK
>>> Send CLUSTER MEET to node 127.0.0.1:7008 to make it join the cluster.
Waiting for the cluster to join.
3571:M 29 Feb 21:15:08.600 # IP address for this node updated to 127.0.0.1
>>> Configure node as replica of 127.0.0.1:7006.
3571:S 29 Feb 21:15:09.497 # Cluster state changed: ok
[OK] New node added correctly.

As we can see, 7008 joined the cluster as a replica of 7006. You can again check the cluster state to see every node, as follows:

[root@spg 7008]# redis-trib.rb check 127.0.0.1:7000
Connecting to node 127.0.0.1:7000: OK
Connecting to node 127.0.0.1:7008: OK
Connecting to node 127.0.0.1:7003: OK
Connecting to node 127.0.0.1:7004: OK
Connecting to node 127.0.0.1:7002: OK
Connecting to node 127.0.0.1:7001: OK
Connecting to node 127.0.0.1:7007: OK
Connecting to node 127.0.0.1:7005: OK
Connecting to node 127.0.0.1:7006: OK
>>> Performing Cluster Check (using node 127.0.0.1:7000)
S: be26c521481afcd6e739e2bfef69e9dcfb63d0a6 127.0.0.1:7000
   slots: (0 slots) slave
   replicates 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c
S: 357c88af6960a11c130e0180038f8d095179b8e7 127.0.0.1:7008
   slots: (0 slots) slave
   replicates fe595e7a38c659a6eb6949bb31fd7474881d6422
M: 1da8a7f4c3cd5d7537e90e0ca5f4fb416f41a40c 127.0.0.1:7003
   slots:1365-5460 (4096 slots) master
   1 additional replica(s)
S: 05ac96f9cdee679f98e8f7ce8e97cf1cbea608ca 127.0.0.1:7004
   slots: (0 slots) slave
   replicates ce06b13387702c3ee63e0118dd10c5f81a1285b5
M: 947cc4a9e890672cfad4806a5921e9f8bdf05c05 127.0.0.1:7002
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
M: ce06b13387702c3ee63e0118dd10c5f81a1285b5 127.0.0.1:7001
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
S: e896e0dfb21716540e091129fbf18cc7d473faa9 127.0.0.1:7007
   slots: (0 slots) slave
   replicates fe595e7a38c659a6eb6949bb31fd7474881d6422
S: b65f33d97416795226964aa22f3b4a8ac7366a99 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 947cc4a9e890672cfad4806a5921e9f8bdf05c05
M: fe595e7a38c659a6eb6949bb31fd7474881d6422 127.0.0.1:7006
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   2 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
  3. Removing a node from the cluster 
    What can be added can also be removed: Redis Cluster supports removing nodes too, again via redis-trib.rb:
redis-trib.rb del-node 127.0.0.1:7000 <node-id>

Unlike adding a node, removal requires the node's node-id. Let's try removing the 7006 master:

[root@spg 7008]# redis-trib.rb del-node 127.0.0.1:7000 fe595e7a38c659a6eb6949bb31fd7474881d6422
>>> Removing node fe595e7a38c659a6eb6949bb31fd7474881d6422 from cluster 127.0.0.1:7000
Connecting to node 127.0.0.1:7000: OK
Connecting to node 127.0.0.1:7006: OK
Connecting to node 127.0.0.1:7004: OK
Connecting to node 127.0.0.1:7001: OK
Connecting to node 127.0.0.1:7003: OK
Connecting to node 127.0.0.1:7007: OK
Connecting to node 127.0.0.1:7008: OK
Connecting to node 127.0.0.1:7005: OK
Connecting to node 127.0.0.1:7002: OK
[ERR] Node 127.0.0.1:7006 is not empty! Reshard data away and try again.

It fails, telling us that 7006 still holds data and cannot be removed until that data is moved away. In other words, we need to reshard again, just as we did after adding the new node:

redis-trib.rb reshard 127.0.0.1:7000
How many slots do you want to move (from 1 to 16384)?

It asks how many slots to move; 7006 holds 4096 slots, so enter 4096.

How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID?

It asks which node ID should receive the slots; let's use 7001's:

What is the receiving node ID? ce06b13387702c3ee63e0118dd10c5f81a1285b5
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:

Here is the key step: it asks which source nodes to drain into 7001. Since we are deleting 7006, we give 7006's ID:

Source node #1:fe595e7a38c659a6eb6949bb31fd7474881d6422
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes

OK, the migration is done. Now retry the removal:

[root@spg 7007]# redis-trib.rb del-node 127.0.0.1:7003 fe595e7a38c659a6eb6949bb31fd7474881d6422
>>> Removing node fe595e7a38c659a6eb6949bb31fd7474881d6422 from cluster 127.0.0.1:7003
Connecting to node 127.0.0.1:7003: OK
Connecting to node 127.0.0.1:7005: OK
Connecting to node 127.0.0.1:7001: OK
Connecting to node 127.0.0.1:7002: OK
Connecting to node 127.0.0.1:7006: OK
Connecting to node 127.0.0.1:7004: OK
Connecting to node 127.0.0.1:7008: OK
Connecting to node 127.0.0.1:7000: OK
>>> Sending CLUSTER FORGET messages to the cluster...
>>> 127.0.0.1:7007 as replica of 127.0.0.1:7003
>>> 127.0.0.1:7008 as replica of 127.0.0.1:7003
3571:S 29 Feb 21:59:10.922 # Connection with master lost.
3571:S 29 Feb 21:59:10.922 * Caching the disconnected master state.
3571:S 29 Feb 21:59:10.922 * Discarding previously cached master state.
>>> SHUTDOWN the node.

Removal succeeded, and redis-trib even took care of the two orphaned replicas, reassigning 7007 and 7008 to 7003. 
If we now try to reach 7006, the connection fails:

[root@spg 7007]# redis-trib.rb check 127.0.0.1:7006
Connecting to node 127.0.0.1:7006: [ERR] Sorry, can't connect to node 127.0.0.1:7006

4. Removing a replica node

Removing a replica is much simpler, since no data migration is needed. Let's remove 7008:

[root@spg 7007]# redis-trib.rb del-node 127.0.0.1:7003 357c88af6960a11c130e0180038f8d095179b8e7
>>> Removing node 357c88af6960a11c130e0180038f8d095179b8e7 from cluster 127.0.0.1:7003
Connecting to node 127.0.0.1:7003: OK
Connecting to node 127.0.0.1:7005: OK
Connecting to node 127.0.0.1:7001: OK
Connecting to node 127.0.0.1:7002: OK
Connecting to node 127.0.0.1:7004: OK
Connecting to node 127.0.0.1:7008: OK
Connecting to node 127.0.0.1:7000: OK
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
3571:S 29 Feb 22:03:33.763 # User requested shutdown...
3571:S 29 Feb 22:03:33.763 * Calling fsync() on the AOF file.
3571:S 29 Feb 22:03:33.763 * Saving the final RDB snapshot before exiting.
3571:S 29 Feb 22:03:33.769 * DB saved on disk
3571:S 29 Feb 22:03:33.769 # Redis is now ready to exit, bye bye...
[3]-  Done                  redis-server redis.conf  (wd: /home/shipg/soft/redis/redis-claster/7008)
(wd now: /home/shipg/soft/redis/redis-claster/7007)

That completes our walkthrough of adding and removing nodes in a Redis cluster. Redis Cluster's stable releases are getting more and more solid, and it will surely become mainstream before long. Many cluster commands and maintenance topics remain uncovered; space is limited, so we'll continue another time.

That concludes this article on adding and removing Redis Cluster nodes; we hope it proves helpful.


