How to Fix a Redis Cluster on k8s That Cannot Connect to Its Master
-
Problem Description
I had previously deployed a single node in k8s and created six Redis pods on it to build a Redis cluster, which ran normally.
Recently I added a slave node and scheduled several of the Redis pods onto it, after which the cluster would no longer come up. The logs showed the following errors:
127.0.0.1:6379> get test
(error) CLUSTERDOWN The cluster is down
127.0.0.1:6379> cluster info
cluster_state:fail
cluster_slots_assigned:16384
cluster_slots_ok:0
cluster_slots_pfail:16384
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:14
cluster_my_epoch:14
cluster_stats_messages_ping_sent:4
cluster_stats_messages_sent:4
cluster_stats_messages_received:0
total_cluster_links_buffer_limit_exceeded:0
$ kubectl logs redis-app-5
... ...
1:S 19 Nov 2024 01:58:13.251 * Connecting to MASTER 172.16.43.44:6379
1:S 19 Nov 2024 01:58:13.251 * MASTER <-> REPLICA sync started
1:S 19 Nov 2024 01:58:13.251 * Cluster state changed: ok
1:S 19 Nov 2024 01:58:20.754 # Cluster state changed: fail
1:S 19 Nov 2024 01:59:14.979 # Timeout connecting to the MASTER...
1:S 19 Nov 2024 01:59:14.979 * Reconnecting to MASTER 172.16.43.44:6379 after failure
1:S 19 Nov 2024 01:59:14.979 * MASTER <-> REPLICA sync started
1:S 19 Nov 2024 02:00:15.422 # Timeout connecting to the MASTER...
1:S 19 Nov 2024 02:00:15.422 * Reconnecting to MASTER 172.16.43.44:6379 after failure
1:S 19 Nov 2024 02:00:15.422 * MASTER <-> REPLICA sync started
1:S 19 Nov 2024 02:01:16.357 # Timeout connecting to the MASTER...
1:S 19 Nov 2024 02:01:16.357 * Reconnecting to MASTER 172.16.43.44:6379 after failure
1:S 19 Nov 2024 02:01:16.357 * MASTER <-> REPLICA sync started
-
Problem Analysis
The Redis pods have been restarted and their IP addresses have most likely changed, but the cluster is still configured with the pre-restart addresses, which is why it fails to come up.
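To confirm this, dump the cluster's node table from any pod and compare the addresses it lists with the pods' current IPs (shown in the solution below). A minimal check, using redis-app-0 simply as one of the pods from the setup above:
# the cluster's view of its members; after a pod restart these addresses may be stale
$ kubectl exec -it redis-app-0 -- redis-cli cluster nodes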
-
My solution:
- First, check the information of each Redis pod:
$ kubectl describe pod redis-app | grep IP
cni.projectcalico.org/podIP: 172.16.178.201/32
cni.projectcalico.org/podIPs: 172.16.178.201/32
IP: 172.16.178.201
IPs:
  IP: 172.16.178.201
cni.projectcalico.org/podIP: 172.16.178.202/32
cni.projectcalico.org/podIPs: 172.16.178.202/32
IP: 172.16.178.202
IPs:
  IP: 172.16.178.202
cni.projectcalico.org/podIP: 172.16.43.1/32
cni.projectcalico.org/podIPs: 172.16.43.1/32
IP: 172.16.43.1
IPs:
  IP: 172.16.43.1
cni.projectcalico.org/podIP: 172.16.178.203/32
cni.projectcalico.org/podIPs: 172.16.178.203/32
IP: 172.16.178.203
IPs:
  IP: 172.16.178.203
cni.projectcalico.org/podIP: 172.16.43.63/32
cni.projectcalico.org/podIPs: 172.16.43.63/32
IP: 172.16.43.63
IPs:
  IP: 172.16.43.63
cni.projectcalico.org/podIP: 172.16.178.204/32
cni.projectcalico.org/podIPs: 172.16.178.204/32
IP: 172.16.178.204
IPs:
  IP: 172.16.178.204
$ kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
redis-app-0   1/1     Running   0          2m34s   172.16.178.201   kevin-s1   <none>           <none>
redis-app-1   1/1     Running   0          2m32s   172.16.178.202   kevin-s1   <none>           <none>
redis-app-2   1/1     Running   0          2m30s   172.16.43.1      kevin-pc   <none>           <none>
redis-app-3   1/1     Running   0          2m26s   172.16.178.203   kevin-s1   <none>           <none>
redis-app-4   1/1     Running   0          2m24s   172.16.43.63     kevin-pc   <none>           <none>
redis-app-5   1/1     Running   0          2m19s   172.16.178.204   kevin-s1   <none>           <none>
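If all you need is the name-to-IP mapping, a jsonpath query gives a more compact view. This is just a convenience sketch; add a label selector (e.g. -l app=redis-app, assuming your manifest defines such a label) if other pods share the namespace:
# print one "name<TAB>podIP" line per pod
$ kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'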
- Then use inem0o/redis-trib to recreate the cluster from the pods' latest IPs:
$ sudo docker run --rm -ti inem0o/redis-trib create --replicas 1 172.16.178.201:6379 172.16.178.202:6379 172.16.43.1:6379 172.16.43.63:6379 172.16.178.204:6379 172.16.178.203:6379
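inem0o/redis-trib packages the old Ruby redis-trib.rb tool. If the Redis image used by the pods is version 5 or newer, the same thing can be done with redis-cli itself; an alternative, run from inside any of the pods, would look roughly like this:
# assumes redis-cli >= 5.0 inside the pod; equivalent to the redis-trib call above
$ kubectl exec -it redis-app-0 -- redis-cli --cluster create 172.16.178.201:6379 172.16.178.202:6379 172.16.43.1:6379 172.16.43.63:6379 172.16.178.204:6379 172.16.178.203:6379 --cluster-replicas 1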
- While creating the cluster you may run into the following error:
$ sudo docker run --rm -ti inem0o/redis-trib create --replicas 1 172.16.178.201:6379 172.16.178.202:6379 172.16.43.1:6379 172.16.43.63:6379 172.16.178.204:6379 172.16.178.203:6379
>>> Creating cluster
[ERR] Node 172.16.178.201:6379 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
This happens because the Redis pod has not been reset. Log into the affected pod and reset its cluster state with redis-cli:
$ kubectl exec -it redis-app-0 -- redis-cli -h 172.16.178.201 -p 6379
172.16.178.201:6379> CLUSTER RESET
OK
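If several pods report the "is not empty" error, it is quicker to reset them all in one loop. A minimal sketch, assuming the six pods are named redis-app-0 through redis-app-5 as above; FLUSHALL removes any leftover keys first (CLUSTER RESET refuses to run on a master that still holds data) and may return a harmless READONLY error on replicas:
# assumes pods redis-app-0..5; wipes any remaining data before resetting each node
$ for i in 0 1 2 3 4 5; do kubectl exec redis-app-$i -- redis-cli flushall; kubectl exec redis-app-$i -- redis-cli cluster reset; done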
Once all affected pods have been reset, run the command above again and the cluster is rebuilt successfully:
$ sudo docker run --rm -ti inem0o/redis-trib create --replicas 1 172.16.178.201:6379 172.16.178.202:6379 172.16.43.1:6379 172.16.43.63:6379 172.16.178.204:6379 172.16.178.203:6379
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.16.178.201:6379
172.16.178.202:6379
172.16.43.1:6379
Adding replica 172.16.43.63:6379 to 172.16.178.201:6379
Adding replica 172.16.178.204:6379 to 172.16.178.202:6379
Adding replica 172.16.178.203:6379 to 172.16.43.1:6379
M: 57d9f345d23e7bf7dd2f331e14d9d7143aa9617f 172.16.178.201:6379
slots:0-5460 (5461 slots) master
M: f5d617c0ed655dd6afa32c5d4ec6260713668639 172.16.178.202:6379
slots:5461-10922 (5462 slots) master
M: 808de7e00f10fe17a5582cd76a533159a25006d8 172.16.43.1:6379
slots:10923-16383 (5461 slots) master
S: 44ac042b99b9b73051b05d1be3d98cf475f67f0a 172.16.43.63:6379
replicates 57d9f345d23e7bf7dd2f331e14d9d7143aa9617f
S: 8db8f89b7b28d0ce098de275340e3c4679fd342d 172.16.178.204:6379
replicates f5d617c0ed655dd6afa32c5d4ec6260713668639
S: 2f5860e62f03ea17d398bbe447a6f1d428ae8698 172.16.178.203:6379
replicates 808de7e00f10fe17a5582cd76a533159a25006d8
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.
>>> Performing Cluster Check (using node 172.16.178.201:6379)
M: 57d9f345d23e7bf7dd2f331e14d9d7143aa9617f 172.16.178.201:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 44ac042b99b9b73051b05d1be3d98cf475f67f0a 172.16.43.63:6379@16379
slots: (0 slots) slave
replicates 57d9f345d23e7bf7dd2f331e14d9d7143aa9617f
M: f5d617c0ed655dd6afa32c5d4ec6260713668639 172.16.178.202:6379@16379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 8db8f89b7b28d0ce098de275340e3c4679fd342d 172.16.178.204:6379@16379
slots: (0 slots) slave
replicates f5d617c0ed655dd6afa32c5d4ec6260713668639
S: 2f5860e62f03ea17d398bbe447a6f1d428ae8698 172.16.178.203:6379@16379
slots: (0 slots) slave
replicates 808de7e00f10fe17a5582cd76a533159a25006d8
M: 808de7e00f10fe17a5582cd76a533159a25006d8 172.16.43.1:6379@16379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Check the cluster status:
$ kubectl exec -it redis-app-3 -- redis-cli
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:14
cluster_my_epoch:3
cluster_stats_messages_ping_sent:39
cluster_stats_messages_pong_sent:40
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:80
cluster_stats_messages_ping_received:40
cluster_stats_messages_pong_received:36
cluster_stats_messages_received:76
total_cluster_links_buffer_limit_exceeded:0
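As a final smoke test, write and read a key through the cluster; the -c option makes redis-cli follow MOVED redirections between masters, and "test" is just an arbitrary key name:
# should return OK, then "hello"
$ kubectl exec redis-app-3 -- redis-cli -c set test hello
$ kubectl exec redis-app-3 -- redis-cli -c get test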
With that, the problem is resolved.
Original article: https://blog.csdn.net/u012502069/article/details/143887997