Deployment overview:
1. Controller: Redis is deployed with a StatefulSet. StatefulSets exist to run stateful services: the Pods they manage get fixed, stable names and an ordered startup/shutdown sequence.
2. Service discovery: two Services are used, one to expose the cluster externally (a NodePort Service) and one for the StatefulSet itself (a headless Service).
The headless Service that backs the StatefulSet differs from a normal Service in that it has no ClusterIP; resolving its name returns the Endpoints of every Pod behind it.
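Concretely, the headless Service gives each StatefulSet Pod a stable DNS name of the form `<pod>.<service>.<namespace>.svc.cluster.local`. A small sketch of the names this deployment ends up with (pod, service, and namespace names match the manifests later in this post):

```shell
# Print the stable DNS names the headless Service gives each StatefulSet pod.
# Pattern: <pod>.<service>.<namespace>.svc.cluster.local
for i in 0 1 2 3 4 5; do
  echo "redis-app-$i.redis-service.my-ns-redis.svc.cluster.local"
done
```

These names stay valid even when a Pod is recreated, which is exactly the stable identity the Redis cluster nodes rely on.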
3. Storage: Redis data lives on NFS. NFS is the backing store; PVs and PVCs are created against it so the Redis data persists. Configuration lives in a ConfigMap that holds redis.conf.
4. Redis image: pulled via the official Redis site, which now links to Docker Hub; the latest version can also be pulled from Docker Hub directly.
Deployment steps:
Step 1: Download the redis image
Redis website: https://redis.io/download/
At the time of writing the site lists 7.0.5 as the latest release, but following its link to Docker Hub, the newest image there is 7.0.4.
This deployment uses the 7.0.4 image from Docker Hub; on the server: docker pull redis:7.0.4
Step 2: Create the shared storage directories
With NFS set up, create the shared storage directories pv1 through pv6 locally:
[root@k8s-master1 ~]# cd /data/k8s/redis/
[root@k8s-master1 redis]# ll
total 0
drwxr-xr-x 3 root root 61 Sep 22 09:13 pv1
drwxr-xr-x 3 root root 61 Sep 22 09:13 pv2
drwxr-xr-x 3 root root 45 Sep 21 18:10 pv3
drwxr-xr-x 3 root root 61 Sep 22 09:40 pv4
drwxr-xr-x 3 root root 45 Sep 21 18:10 pv5
drwxr-xr-x 3 root root 61 Sep 22 09:13 pv6
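The six directories above can be created in one shot; a minimal sketch (the real deployment uses /data/k8s/redis on the NFS server; the snippet defaults to a local demo path so it is safe to run anywhere without root):

```shell
# Sketch: create the six per-pod export directories.
# Real path on the NFS server: /data/k8s/redis; a local demo path is used
# here so the snippet can run without root.
base="${REDIS_PV_BASE:-./redis-pv-demo}"
mkdir -p "$base"/pv1 "$base"/pv2 "$base"/pv3 "$base"/pv4 "$base"/pv5 "$base"/pv6
ls "$base"
```

Remember that each directory must also be exported by the NFS server so the nodes can mount it.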
Step 3: Create the PVs
Create six PVs to hold the Redis data.
[root@k8s-master1 redis]# cat redis-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv1               # PV name (PVs are cluster-scoped, so no namespace)
  labels:
    type: sata                  # marks the storage type
spec:
  capacity:                     # storage capacity
    storage: 2Gi
  accessModes:
    - ReadWriteMany             # can be mounted read-write by many nodes
  persistentVolumeReclaimPolicy: Retain   # data is kept after the PVC is deleted
  storageClassName: "redis"     # StorageClass name; with static NFS PVs no StorageClass
                                # object needs to be created, the name is only used to
                                # match PVCs to PVs (this NFS setup has no dynamic provisioning)
  nfs:                          # NFS backend
    path: "/data/k8s/redis/pv1" # exported directory; must really exist on the NFS server
    server: 192.168.198.144     # NFS server address
    readOnly: false
---
# redis-pv2 through redis-pv6 are identical except for metadata.name
# (redis-pv2 ... redis-pv6) and nfs.path (/data/k8s/redis/pv2 ... /data/k8s/redis/pv6).
# Create the PVs from the yaml file
[root@k8s-master1 redis]# kubectl create -f redis-pv.yaml
Step 4: Create the ConfigMap
Turn the Redis configuration file into a ConfigMap, which is a more convenient way for the Pods to read it. The configuration file redis.conf is as follows:
# Redis configuration, created directly on the host
[root@k8s-master1 redis]# cat redis.conf
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
# Create the ConfigMap
[root@k8s-master1 redis]# kubectl create configmap redis-conf --from-file=redis.conf -n my-ns-redis
# Inspect the ConfigMap
[root@k8s-master1 redis]# kubectl get cm -n my-ns-redis
NAME         DATA   AGE
redis-conf   1      17h
[root@k8s-master1 redis]# kubectl describe cm -n my-ns-redis redis-conf
Name:         redis-conf
Namespace:    my-ns-redis
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

Events:  <none>
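As an alternative to the imperative kubectl create configmap command, the same ConfigMap can be kept as a manifest next to the other yaml files; a sketch of the equivalent declarative form (same data as above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-conf
  namespace: my-ns-redis
data:
  redis.conf: |
    appendonly yes
    cluster-enabled yes
    cluster-config-file /var/lib/redis/nodes.conf
    cluster-node-timeout 5000
    dir /var/lib/redis
    port 6379
```

Keeping it as a manifest makes the configuration versionable together with the PV, Service, and StatefulSet files.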
Step 5: Create the headless service
The headless Service is what gives the StatefulSet its stable network identities, so it has to be created first.
[root@k8s-master1 redis]# cat headless-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: my-ns-redis
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    port: 6379
  clusterIP: None
  selector:
    app: redis
[root@k8s-master1 redis]# kubectl create -f headless-service.yaml
Step 6: Create the Redis cluster nodes with a StatefulSet
Use a StatefulSet to create six Redis Pods, which will form a cluster of 3 masters and 3 replicas.
[root@k8s-master1 redis]# cat redis-sts.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-app                   # StatefulSet name
  namespace: my-ns-redis            # namespace to deploy into
spec:
  serviceName: "redis-service"      # must exactly match the headless Service created above
  replicas: 6                       # number of replicas
  selector:                         # label selector
    matchLabels:
      app: redis
      appCluster: redis-cluster
  template:                         # pod template
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      containers:
      - name: redis                 # container name
        image: "redis:7.0.4"        # image to use
        command: ["redis-server"]
        args:
        - "/etc/redis/redis.conf"
        - "--protected-mode"
        - "no"
        ports:
        - name: redis
          containerPort: 6379
          protocol: "TCP"
        - name: cluster
          containerPort: 16379      # cluster bus port
          protocol: "TCP"
        volumeMounts:               # volume mounts
        - name: "redis-conf"        # name of mount 1
          mountPath: "/etc/redis"   # path inside the redis container
        - name: "redis-data"        # name of mount 2
          mountPath: "/var/lib/redis"
      volumes:
      - name: "redis-conf"          # must match the volumeMount name above
        configMap:                  # backing storage type
          name: "redis-conf"        # the ConfigMap created earlier; names must match
          items:                    # optional
          - key: "redis.conf"
            path: "redis.conf"      # mountPath "/etc/redis" + path "redis.conf" = /etc/redis/redis.conf
  volumeClaimTemplates:             # PVC template; no PVCs are created by hand
  - metadata:
      name: redis-data              # must match the name of mount 2 above
    spec:
      accessModes: [ "ReadWriteMany" ]  # must match the PVs, or PV/PVC binding can fail
      storageClassName: "redis"         # must match the PVs
      resources:
        requests:
          storage: 1Gi              # requested size; must not exceed the PV size
# Create from the yaml file
[root@k8s-master1 redis]# kubectl create -f redis-sts.yaml
# Check
[root@k8s-master1 redis]# kubectl get all -n my-ns-redis
NAME              READY   STATUS    RESTARTS   AGE
pod/redis-app-0   1/1     Running   0          64m
pod/redis-app-1   1/1     Running   1          16h
pod/redis-app-2   1/1     Running   1          16h
pod/redis-app-3   1/1     Running   1          16h
pod/redis-app-4   1/1     Running   1          16h
pod/redis-app-5   1/1     Running   1          16h

NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/redis-service   ClusterIP   None         <none>        6379/TCP   17h

NAME                         READY   AGE
statefulset.apps/redis-app   6/6     16h
# Both PVs and PVCs are bound
[root@k8s-master1 redis]# kubectl get pv,pvc -n my-ns-redis
NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS   REASON   AGE
persistentvolume/redis-pv1   2Gi        RWX            Retain           Bound    my-ns-redis/redis-data-redis-app-3   redis                   17h
persistentvolume/redis-pv2   2Gi        RWX            Retain           Bound    my-ns-redis/redis-data-redis-app-5   redis                   17h
persistentvolume/redis-pv3   2Gi        RWX            Retain           Bound    my-ns-redis/redis-data-redis-app-2   redis                   17h
persistentvolume/redis-pv4   2Gi        RWX            Retain           Bound    my-ns-redis/redis-data-redis-app-0   redis                   17h
persistentvolume/redis-pv5   2Gi        RWX            Retain           Bound    my-ns-redis/redis-data-redis-app-1   redis                   17h
persistentvolume/redis-pv6   2Gi        RWX            Retain           Bound    my-ns-redis/redis-data-redis-app-4   redis                   17h

NAME                                           STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/redis-data-redis-app-0   Bound    redis-pv4   2Gi        RWX            redis          16h
persistentvolumeclaim/redis-data-redis-app-1   Bound    redis-pv5   2Gi        RWX            redis          16h
persistentvolumeclaim/redis-data-redis-app-2   Bound    redis-pv3   2Gi        RWX            redis          16h
persistentvolumeclaim/redis-data-redis-app-3   Bound    redis-pv1   2Gi        RWX            redis          16h
persistentvolumeclaim/redis-data-redis-app-4   Bound    redis-pv6   2Gi        RWX            redis          16h
persistentvolumeclaim/redis-data-redis-app-5   Bound    redis-pv2   2Gi        RWX            redis          16h
Step 7: Initialize the cluster
# Exec into one of the containers
[root@k8s-master1 ~]# kubectl exec -it -n my-ns-redis redis-app-0 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@redis-app-0:/data#
# Check the cluster state: cluster_state:fail, and cluster_known_nodes:1 (the node only knows itself)
root@redis-app-0:/data# redis-cli -c
127.0.0.1:6379> CLUSTER INFO
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
total_cluster_links_buffer_limit_exceeded:0
127.0.0.1:6379> exit
# Initialize the cluster; the IP:port pairs are the pod addresses
root@redis-app-0:/data# redis-cli --cluster create 10.244.1.159:6379 10.244.2.136:6379 10.244.1.158:6379 10.244.2.139:6379 10.244.1.160:6379 10.244.1.157:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.244.1.160:6379 to 10.244.1.159:6379
Adding replica 10.244.1.157:6379 to 10.244.2.136:6379
Adding replica 10.244.2.139:6379 to 10.244.1.158:6379
M: d97f5acc6a803cc5ae1a0fd9a405e4cbc49cb72b 10.244.1.159:6379
   slots:[0-5460] (5461 slots) master
M: d52754f0c6d7774430a4bb2e3abc05111421e854 10.244.2.136:6379
   slots:[5461-10922] (5462 slots) master
M: fc2e51c0afc9f8b4440e652c366033ce277f9809 10.244.1.158:6379
   slots:[10923-16383] (5461 slots) master
S: e645ed14a194b8e8d7b11d6e65035f14451010b6 10.244.2.139:6379
   replicates fc2e51c0afc9f8b4440e652c366033ce277f9809
S: 8d0bdcdf5af3d8b1ce751b7a68e5261ac514e0bc 10.244.1.160:6379
   replicates d97f5acc6a803cc5ae1a0fd9a405e4cbc49cb72b
S: de6607e6972983f38a9b66f7bfbac1e9eb112c63 10.244.1.157:6379
   replicates d52754f0c6d7774430a4bb2e3abc05111421e854
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 10.244.1.159:6379)
M: d97f5acc6a803cc5ae1a0fd9a405e4cbc49cb72b 10.244.1.159:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: e645ed14a194b8e8d7b11d6e65035f14451010b6 10.244.2.139:6379
   slots: (0 slots) slave
   replicates fc2e51c0afc9f8b4440e652c366033ce277f9809
S: 8d0bdcdf5af3d8b1ce751b7a68e5261ac514e0bc 10.244.1.160:6379
   slots: (0 slots) slave
   replicates d97f5acc6a803cc5ae1a0fd9a405e4cbc49cb72b
M: d52754f0c6d7774430a4bb2e3abc05111421e854 10.244.2.136:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: de6607e6972983f38a9b66f7bfbac1e9eb112c63 10.244.1.157:6379
   slots: (0 slots) slave
   replicates d52754f0c6d7774430a4bb2e3abc05111421e854
M: fc2e51c0afc9f8b4440e652c366033ce277f9809 10.244.1.158:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# Cluster initialization succeeded
# Connect to the cluster again
root@redis-app-0:/data# redis-cli -c
# Check the cluster state: cluster_state:ok, and cluster_known_nodes:6
127.0.0.1:6379> CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:94
cluster_stats_messages_pong_sent:94
cluster_stats_messages_sent:188
cluster_stats_messages_ping_received:89
cluster_stats_messages_pong_received:94
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:188
total_cluster_links_buffer_limit_exceeded:0
# The master serving this key's slot is on 10.244.1.158
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 10.244.1.158:6379
(nil)
# The Redis cluster deployment is complete
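One caveat with the create command above: the pod IPs (10.244.x.x) change whenever a pod is rescheduled, so they have to be looked up fresh each time (for example with kubectl get pod -o wide -n my-ns-redis). A small sketch that instead assembles the command from the StatefulSet's stable DNS names; note that hostname support in redis-cli --cluster varies by Redis version and these names only resolve inside the cluster, so treat this as illustrative:

```shell
# Sketch: assemble the cluster-create command from the StatefulSet's
# stable DNS names instead of raw pod IPs (illustrative only).
nodes=""
for i in 0 1 2 3 4 5; do
  nodes="$nodes redis-app-$i.redis-service.my-ns-redis.svc.cluster.local:6379"
done
echo "redis-cli --cluster create$nodes --cluster-replicas 1"
```

The printed command would then be run inside one of the redis pods, just like the IP-based version above.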
Step 8: Failover test
Delete one Pod, redis-app-0. Kubernetes automatically recreates a Pod with the exact same name, redis-app-0; the name never changes. Then check the cluster again: cluster_state:ok and cluster_known_nodes:6, so the rebuilt Pod rejoined and the cluster stayed healthy.
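The failover check can be scripted as well; a sketch (the kubectl calls need a live cluster, so they are shown commented out):

```shell
# Failover check sketch (run against a live cluster; commented out here):
# kubectl delete pod -n my-ns-redis redis-app-0        # kill one node
# kubectl get pod -n my-ns-redis -w                    # watch redis-app-0 come back under the same name
# kubectl exec -n my-ns-redis redis-app-0 -- redis-cli -c cluster info | grep -E 'cluster_state|cluster_known_nodes'
echo "expect: cluster_state:ok and cluster_known_nodes:6"
```

If the recreated Pod mounts the same PVC and nodes.conf, it rejoins the cluster with its old identity.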