Knowledge points
- Difficulty: beginner
- Expanding disks with a Heketi topology file
- Expanding disks with the Heketi CLI
Lab server configuration (the architecture is a 1:1 replica of a small production environment; the specifications differ slightly)

Hostname | IP | CPU (cores) | Memory (GB) | System Disk (GB) | Data Disk (GB) | Purpose |
---|---|---|---|---|---|---|
ks-master-0 | 192.168.9.91 | 2 | 4 | 50 | 100 | KubeSphere/k8s-master |
ks-master-1 | 192.168.9.92 | 2 | 4 | 50 | 100 | KubeSphere/k8s-master |
ks-master-2 | 192.168.9.93 | 2 | 4 | 50 | 100 | KubeSphere/k8s-master |
ks-worker-0 | 192.168.9.95 | 2 | 4 | 50 | 100 | k8s-worker/CI |
ks-worker-1 | 192.168.9.96 | 2 | 4 | 50 | 100 | k8s-worker |
ks-worker-2 | 192.168.9.97 | 2 | 4 | 50 | 100 | k8s-worker |
storage-0 | 192.168.9.81 | 2 | 4 | 50 | 100+50+50 | ElasticSearch/GlusterFS/Ceph/Longhorn/NFS |
storage-1 | 192.168.9.82 | 2 | 4 | 50 | 100+50+50 | ElasticSearch/GlusterFS/Ceph/Longhorn |
storage-2 | 192.168.9.83 | 2 | 4 | 50 | 100+50+50 | ElasticSearch/GlusterFS/Ceph/Longhorn |
registry | 192.168.9.80 | 2 | 4 | 50 | 200 | Sonatype Nexus 3 |
Total | 10 hosts | 20 | 40 | 500 | 1100+ | |
Software versions used in this lab
- Operating system: openEuler 22.03 LTS SP2 x86_64
- KubeSphere: 3.3.2
- Kubernetes: v1.24.12
- Containerd: 1.6.4
- KubeKey: v3.0.8
- GlusterFS: 10.0-8
- Heketi: v10.4.0
Introduction
In previous hands-on lessons we learned how to install and deploy GlusterFS and Heketi on openEuler 22.03 LTS SP2, and how to connect Kubernetes to GlusterFS as the cluster's backend storage using the in-tree storage driver.
Today we simulate a scenario every production environment eventually runs into: after the services have been live for a while, the GlusterFS data disks fill up. How do we expand them?
There are two ways to expand the data volumes of a Heketi-managed GlusterFS cluster:
- Adjust the existing topology configuration file and reload it
- Expand directly with the Heketi CLI (simpler; recommended)
Prerequisites for the simulation:
On top of the existing 100 GB GlusterFS data disk, two extra 50 GB disks have been attached to each storage node; they will be used to demonstrate the two expansion approaches.
To make the effect realistic, 95 GB of the existing 100 GB is consumed in advance.
The steps in this article are OS-independent; all operations apply equally to Heketi + GlusterFS storage clusters deployed on other operating systems.
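Before starting, it can be useful to confirm that the newly attached disks are actually visible on every storage node. The check below is a minimal sketch; the device names /dev/sdc and /dev/sdd are the ones this lab assumes.

```bash
# Run on each storage node: the new disks should show up as unpartitioned, unmounted disks
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```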
Simulating an Out-of-Space Failure
Create a new PVC
- Create the PVC resource file
vi pvc-test-95g.yaml
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-data-95g
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: glusterfs
  resources:
    requests:
      storage: 95Gi
```
- Apply the manifest

```bash
kubectl apply -f pvc-test-95g.yaml
# The command itself does not report an error, but the PVC will remain in the Pending state
```
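To confirm the symptom, you can check the PVC status and its events. This is a minimal check; the PVC name is the one created above.

```bash
kubectl get pvc test-data-95g
kubectl describe pvc test-data-95g   # the Events section records the provisioning failure
```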
Check the error messages
- Check the Heketi service log
```bash
# Run the command (Heketi has no dedicated log file; it logs directly to /var/log/messages)
tail -f /var/log/messages

# The output looks like this (only one complete segment is shown; the same errors keep repeating afterwards)
[root@ks-storage-0 heketi]# tail -f /var/log/messages
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #0
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #0
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #0
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #1
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #0
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #1
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #2
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #3
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] ERROR 2023/08/16 15:29:32 heketi/apps/glusterfs/volume_entry_allocate.go:37:glusterfs.(*VolumeEntry).allocBricksInCluster: Minimum brick size limit reached. Out of space.
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] ERROR 2023/08/16 15:29:32 heketi/apps/glusterfs/operations_manage.go:220:glusterfs.AsyncHttpOperation: Create Volume Build Failed: No space
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [negroni] 2023-08-16T15:29:32+08:00 | 500 | #011 4.508081ms | 192.168.9.81:18080 | POST /volumes
```
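If Heketi runs as a systemd service on your storage nodes, the same messages can usually also be followed through the journal. This is an assumption about the service setup; the unit name heketi may differ in your environment.

```bash
# Follow the Heketi journal (assumes a systemd unit named "heketi")
journalctl -u heketi -f
```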
Through the simulation above, we have learned how to tell that the data volumes are full when GlusterFS is used as the backend storage of a Kubernetes cluster:
- The newly created PVC stays in the Pending state
- The Heketi log contains the keyword Create Volume Build Failed: No space
When the disk space of the GlusterFS storage cluster is fully allocated and new volumes can no longer be created, we, as operators, need to add new disks to expand the storage cluster.
Expanding the GlusterFS Data Volumes with Heketi
Note: to show the expansion process in full, this article records the complete output of every command. That makes the article fairly long, so feel free to skim the output sections.
Check the current topology information
```bash
# Run the command
heketi-cli topology info

# Normal output
[root@ks-storage-0 heketi]# heketi-cli topology info

Cluster Id: 9ad37206ce6575b5133179ba7c6e0935

    File:  true
    Block: true

    Volumes:

        Name: vol_75c90b8463d73a7fd9187a8ca22ff91f
        Size: 95
        Id: 75c90b8463d73a7fd9187a8ca22ff91f
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Mount: 192.168.9.81:vol_75c90b8463d73a7fd9187a8ca22ff91f
        Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
        Snapshot Factor: 1.00

        Bricks:
            Id: 37006636e1fe713a395755e8d34f6f20
            Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
            Size (GiB): 95
            Node: 5e99fe0cd727b8066f200bad5524c544
            Device: 8fd529a668d5c19dfc37450b755230cd

            Id: 3dca27f98e1c20aa092c159226ddbe4d
            Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
            Size (GiB): 95
            Node: 7bb26eb30c1c61456b5ae8d805c01cf1
            Device: 51ad0981f8fed73002f5a7f2dd0d65c5

            Id: 7ac64e137d803cccd4b9fcaaed4be8ad
            Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
            Size (GiB): 95
            Node: 0108350a9d13578febbfd0502f8077ff
            Device: 9af38756fe916fced666fcd3de786c19

    Nodes:

        Node Id: 0108350a9d13578febbfd0502f8077ff
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.81
        Storage Hostnames: 192.168.9.81
        Devices:
            Id:9af38756fe916fced666fcd3de786c19   State:online   Size (GiB):99   Used (GiB):95   Free (GiB):4
                Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb
                Bricks:
                    Id:7ac64e137d803cccd4b9fcaaed4be8ad   Size (GiB):95   Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick

        Node Id: 5e99fe0cd727b8066f200bad5524c544
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.82
        Storage Hostnames: 192.168.9.82
        Devices:
            Id:8fd529a668d5c19dfc37450b755230cd   State:online   Size (GiB):99   Used (GiB):95   Free (GiB):4
                Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/sdb
                Bricks:
                    Id:37006636e1fe713a395755e8d34f6f20   Size (GiB):95   Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick

        Node Id: 7bb26eb30c1c61456b5ae8d805c01cf1
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.83
        Storage Hostnames: 192.168.9.83
        Devices:
            Id:51ad0981f8fed73002f5a7f2dd0d65c5   State:online   Size (GiB):99   Used (GiB):95   Free (GiB):4
                Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb
                Bricks:
                    Id:3dca27f98e1c20aa092c159226ddbe4d   Size (GiB):95   Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
```
Check the current node information
- List the nodes
```bash
# Run the command
heketi-cli node list

# Normal output
[root@ks-storage-0 heketi]# heketi-cli node list
Id:0108350a9d13578febbfd0502f8077ff     Cluster:9ad37206ce6575b5133179ba7c6e0935
Id:5e99fe0cd727b8066f200bad5524c544     Cluster:9ad37206ce6575b5133179ba7c6e0935
Id:7bb26eb30c1c61456b5ae8d805c01cf1     Cluster:9ad37206ce6575b5133179ba7c6e0935
```
- View node details
Using the storage-0 node as an example, view the node details.
```bash
# Run the command
heketi-cli node info xxxxxx

# Normal output
[root@ks-storage-0 heketi]# heketi-cli node info 0108350a9d13578febbfd0502f8077ff
Node Id: 0108350a9d13578febbfd0502f8077ff
State: online
Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
Zone: 1
Management Hostname: 192.168.9.81
Storage Hostname: 192.168.9.81
Devices:
Id:9af38756fe916fced666fcd3de786c19   Name:/dev/sdb   State:online   Size (GiB):99   Used (GiB):95   Free (GiB):4   Bricks:1
```
Check the current VG information
Using the storage-0 node as an example, view the VGs already allocated by Heketi (system VG information has been removed from the output).
```bash
# Quick view
[root@ks-storage-0 heketi]# vgs
  VG                                  #PV #LV #SN Attr   VSize  VFree
  vg_9af38756fe916fced666fcd3de786c19   1   2   0 wz--n- 99.87g <3.92g

# Detailed view
[root@ks-storage-0 heketi]# vgdisplay vg_9af38756fe916fced666fcd3de786c19
  --- Volume group ---
  VG Name               vg_9af38756fe916fced666fcd3de786c19
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  187
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               99.87 GiB
  PE Size               4.00 MiB
  Total PE              25567
  Alloc PE / Size       24564 / 95.95 GiB
  Free  PE / Size       1003 / <3.92 GiB
  VG UUID               jrxfIv-Fnjq-IYF8-aubc-t2y0-zwUp-YxjkDC
```
Check the current LV information
Using the storage-0 node as an example, view the LVs already allocated by Heketi (system LV information has been removed from the output).
```bash
# Quick view
[root@ks-storage-0 heketi]# lvs
  LV                                     VG                                  Attr       LSize  Pool                                Origin Data%  Meta%  Move Log Cpy%Sync Convert
  brick_7ac64e137d803cccd4b9fcaaed4be8ad vg_9af38756fe916fced666fcd3de786c19 Vwi-aotz-- 95.00g tp_3c68ad0d0752d41ede13afdc3db9637b        0.05
  tp_3c68ad0d0752d41ede13afdc3db9637b    vg_9af38756fe916fced666fcd3de786c19 twi-aotz-- 95.00g                                            0.05   3.31

# Detailed view
[root@ks-storage-0 heketi]# lvdisplay
  --- Logical volume ---
  LV Name                tp_3c68ad0d0752d41ede13afdc3db9637b
  VG Name                vg_9af38756fe916fced666fcd3de786c19
  LV UUID                Aho32F-tBTa-VTTp-VfwY-qRbm-WUxu-puj4kv
  LV Write Access        read/write (activated read only)
  LV Creation host, time ks-storage-0, 2023-08-16 15:21:06 +0800
  LV Pool metadata       tp_3c68ad0d0752d41ede13afdc3db9637b_tmeta
  LV Pool data           tp_3c68ad0d0752d41ede13afdc3db9637b_tdata
  LV Status              available
  # open                 0
  LV Size                95.00 GiB
  Allocated pool data    0.05%
  Allocated metadata     3.31%
  Current LE             24320
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:5

  --- Logical volume ---
  LV Path                /dev/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad
  LV Name                brick_7ac64e137d803cccd4b9fcaaed4be8ad
  VG Name                vg_9af38756fe916fced666fcd3de786c19
  LV UUID                VGTOMk-d07E-XWhw-Omzz-Pc1t-WwEH-Wh0EuY
  LV Write Access        read/write
  LV Creation host, time ks-storage-0, 2023-08-16 15:21:10 +0800
  LV Pool name           tp_3c68ad0d0752d41ede13afdc3db9637b
  LV Status              available
  # open                 1
  LV Size                95.00 GiB
  Mapped size            0.05%
  Current LE             24320
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
```
Note: Heketi creates LVs on top of LVM thin pools, which is why two LVs appear in the output.
The LV whose name starts with brick_ is the one that is actually used for data.
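To make the relationship between the brick LV and its thin pool explicit, you can ask lvs for the pool column. This is a small sketch; the VG name is the one shown on this node.

```bash
# Show which thin pool each LV in the Heketi-managed VG belongs to
lvs -o lv_name,pool_lv,lv_size vg_9af38756fe916fced666fcd3de786c19
```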
Expansion Option 1: Adjust the Topology Configuration File
Preliminaries
- Disk to add: /dev/sdc
- Capacity: 50 GB
Check the current topology configuration file
- cat /etc/heketi/topology.json
{ "clusters": [ { "nodes": [ { "node": { "hostnames": { "manage": [ "192.168.9.81" ], "storage": [ "192.168.9.81" ] }, "zone": 1 }, "devices": [ "/dev/sdb" ] }, { "node": { "hostnames": { "manage": [ "192.168.9.82" ], "storage": [ "192.168.9.82" ] }, "zone": 1 }, "devices": [ "/dev/sdb" ] }, { "node": { "hostnames": { "manage": [ "192.168.9.83" ], "storage": [ "192.168.9.83" ] }, "zone": 1 }, "devices": [ "/dev/sdb" ] } ] } ]}
Modify the topology file
Edit the existing topology.json with vi /etc/heketi/topology.json.
Add /dev/sdc to the devices list of every node, and mind the comma that now has to follow "/dev/sdb".
The modified topology.json file looks like this:
{ "clusters": [ { "nodes": [ { "node": { "hostnames": { "manage": [ "192.168.9.81" ], "storage": [ "192.168.9.81" ] }, "zone": 1 }, "devices": [ "/dev/sdb", "/dev/sdc" ] }, { "node": { "hostnames": { "manage": [ "192.168.9.82" ], "storage": [ "192.168.9.82" ] }, "zone": 1 }, "devices": [ "/dev/sdb", "/dev/sdc" ] }, { "node": { "hostnames": { "manage": [ "192.168.9.83" ], "storage": [ "192.168.9.83" ] }, "zone": 1 }, "devices": [ "/dev/sdb", "/dev/sdc" ] } ] } ]}
Reload the topology

```bash
# Run the command
heketi-cli topology load --json=/etc/heketi/topology.json

# Normal output
[root@ks-storage-0 heketi]# heketi-cli topology load --json=/etc/heketi/topology.json
    Found node 192.168.9.81 on cluster 9ad37206ce6575b5133179ba7c6e0935
        Found device /dev/sdb
        Adding device /dev/sdc ... OK
    Found node 192.168.9.82 on cluster 9ad37206ce6575b5133179ba7c6e0935
        Found device /dev/sdb
        Adding device /dev/sdc ... OK
    Found node 192.168.9.83 on cluster 9ad37206ce6575b5133179ba7c6e0935
        Found device /dev/sdb
        Adding device /dev/sdc ... OK
```
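If heketi-cli on your node is not already configured with the server address and credentials, they usually have to be supplied explicitly, either as flags or via environment variables. The values below are placeholders based on this lab's Heketi endpoint; substitute your own admin key.

```bash
export HEKETI_CLI_SERVER=http://192.168.9.81:18080   # assumed Heketi service address
heketi-cli --user admin --secret "<ADMIN_KEY>" topology load --json=/etc/heketi/topology.json
```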
Check the updated topology information

```bash
# Run the command
heketi-cli topology info

# Normal output
[root@ks-storage-0 heketi]# heketi-cli topology info

Cluster Id: 9ad37206ce6575b5133179ba7c6e0935

    File:  true
    Block: true

    Volumes:

        Name: vol_75c90b8463d73a7fd9187a8ca22ff91f
        Size: 95
        Id: 75c90b8463d73a7fd9187a8ca22ff91f
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Mount: 192.168.9.81:vol_75c90b8463d73a7fd9187a8ca22ff91f
        Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
        Snapshot Factor: 1.00

        Bricks:
            Id: 37006636e1fe713a395755e8d34f6f20
            Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
            Size (GiB): 95
            Node: 5e99fe0cd727b8066f200bad5524c544
            Device: 8fd529a668d5c19dfc37450b755230cd

            Id: 3dca27f98e1c20aa092c159226ddbe4d
            Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
            Size (GiB): 95
            Node: 7bb26eb30c1c61456b5ae8d805c01cf1
            Device: 51ad0981f8fed73002f5a7f2dd0d65c5

            Id: 7ac64e137d803cccd4b9fcaaed4be8ad
            Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
            Size (GiB): 95
            Node: 0108350a9d13578febbfd0502f8077ff
            Device: 9af38756fe916fced666fcd3de786c19

    Nodes:

        Node Id: 0108350a9d13578febbfd0502f8077ff
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.81
        Storage Hostnames: 192.168.9.81
        Devices:
            Id:9af38756fe916fced666fcd3de786c19   State:online   Size (GiB):99   Used (GiB):95   Free (GiB):4
                Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb
                Bricks:
                    Id:7ac64e137d803cccd4b9fcaaed4be8ad   Size (GiB):95   Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
            Id:ab5f766ddc779449db2bf45bb165fbff   State:online   Size (GiB):49   Used (GiB):0   Free (GiB):49
                Known Paths: /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/sdc
                Bricks:

        Node Id: 5e99fe0cd727b8066f200bad5524c544
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.82
        Storage Hostnames: 192.168.9.82
        Devices:
            Id:8fd529a668d5c19dfc37450b755230cd   State:online   Size (GiB):99   Used (GiB):95   Free (GiB):4
                Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/sdb
                Bricks:
                    Id:37006636e1fe713a395755e8d34f6f20   Size (GiB):95   Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
            Id:b648c995486b0e785f78a8b674d8b590   State:online   Size (GiB):49   Used (GiB):0   Free (GiB):49
                Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/sdc
                Bricks:

        Node Id: 7bb26eb30c1c61456b5ae8d805c01cf1
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.83
        Storage Hostnames: 192.168.9.83
        Devices:
            Id:51ad0981f8fed73002f5a7f2dd0d65c5   State:online   Size (GiB):99   Used (GiB):95   Free (GiB):4
                Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb
                Bricks:
                    Id:3dca27f98e1c20aa092c159226ddbe4d   Size (GiB):95   Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
            Id:9b39c4e288d4a1783d204d2033444c00   State:online   Size (GiB):49   Used (GiB):0   Free (GiB):49
                Known Paths: /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/sdc
                Bricks:
```
Check the updated node information
Using the storage-0 node as an example, view the updated node details (pay particular attention to the Devices section).
```bash
# Run the command
heketi-cli node info xxxxxx

# Normal output
[root@ks-storage-0 heketi]# heketi-cli node info 0108350a9d13578febbfd0502f8077ff
Node Id: 0108350a9d13578febbfd0502f8077ff
State: online
Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
Zone: 1
Management Hostname: 192.168.9.81
Storage Hostname: 192.168.9.81
Devices:
Id:9af38756fe916fced666fcd3de786c19   Name:/dev/sdb   State:online   Size (GiB):99   Used (GiB):95   Free (GiB):4    Bricks:1
Id:ab5f766ddc779449db2bf45bb165fbff   Name:/dev/sdc   State:online   Size (GiB):49   Used (GiB):0    Free (GiB):49   Bricks:0
```
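A single disk can also be inspected on its own with the device sub-command. The ID below is the newly added /dev/sdc on storage-0, taken from the output above.

```bash
# Show details for one Heketi-managed device
heketi-cli device info ab5f766ddc779449db2bf45bb165fbff
```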
Check the updated VG information
Using the storage-0 node as an example, view the updated VG information (system VG information has been removed from the output).
```bash
[root@ks-storage-0 heketi]# vgs
  VG                                  #PV #LV #SN Attr   VSize  VFree
  vg_9af38756fe916fced666fcd3de786c19   1   2   0 wz--n- 99.87g <3.92g
  vg_ab5f766ddc779449db2bf45bb165fbff   1   0   0 wz--n- 49.87g 49.87g
```
Create a test PVC
Run the following commands on the k8s-master-0 node.
- Create the PVC resource file
vi pvc-test-45g.yaml
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-data-45g
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: glusterfs
  resources:
    requests:
      storage: 45Gi
```
- Apply the manifest

```bash
kubectl apply -f pvc-test-45g.yaml
```
- Check the result

```bash
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
test-data-45g   Bound    pvc-19343e73-6b14-40ca-b65b-356d38d16bb0   45Gi       RWO            glusterfs      17s   Filesystem
test-data-95g   Bound    pvc-2461f639-1634-4085-af2f-b526a3800217   95Gi       RWO            glusterfs      42h   Filesystem
```
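If you want to see which GlusterFS volume backs the new claim, the bound PV records it in its source section. A quick, optional check using the PV name shown above:

```bash
# The Source section of the PV shows the GlusterFS endpoints and volume path
kubectl describe pv pvc-19343e73-6b14-40ca-b65b-356d38d16bb0
```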
Check the newly created volume
- List the volumes

```bash
[root@ks-storage-0 heketi]# heketi-cli volume list
Id:75c90b8463d73a7fd9187a8ca22ff91f    Cluster:9ad37206ce6575b5133179ba7c6e0935    Name:vol_75c90b8463d73a7fd9187a8ca22ff91f
Id:ebd76f343b04f89ed4166c8f1ece0361    Cluster:9ad37206ce6575b5133179ba7c6e0935    Name:vol_ebd76f343b04f89ed4166c8f1ece0361
```
- View the details of the new volume

```bash
[root@ks-storage-0 heketi]# heketi-cli volume info ebd76f343b04f89ed4166c8f1ece0361
Name: vol_ebd76f343b04f89ed4166c8f1ece0361
Size: 45
Volume Id: ebd76f343b04f89ed4166c8f1ece0361
Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
Mount: 192.168.9.81:vol_ebd76f343b04f89ed4166c8f1ece0361
Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3
Snapshot Factor: 1.00
```
- View the newly created LV information
Using the storage-0 node as an example, view the newly allocated LVs (system LV information has been removed from the output).

```bash
[root@ks-storage-0 heketi]# lvs
  LV                                     VG                                  Attr       LSize  Pool                                Origin Data%  Meta%  Move Log Cpy%Sync Convert
  brick_7ac64e137d803cccd4b9fcaaed4be8ad vg_9af38756fe916fced666fcd3de786c19 Vwi-aotz-- 95.00g tp_3c68ad0d0752d41ede13afdc3db9637b        0.05
  tp_3c68ad0d0752d41ede13afdc3db9637b    vg_9af38756fe916fced666fcd3de786c19 twi-aotz-- 95.00g                                            0.05   3.31
  brick_27e193590ccdb5fba287fb66d5473074 vg_ab5f766ddc779449db2bf45bb165fbff Vwi-aotz-- 45.00g tp_7bdcf1e2c3aab06cb25906f017ae1b08        0.06
  tp_7bdcf1e2c3aab06cb25906f017ae1b08    vg_ab5f766ddc779449db2bf45bb165fbff twi-aotz-- 45.00g                                            0.06   6.94
```
At this point we have walked through the full process of expanding disks via the Heketi topology file and verifying the result.
Expansion Option 2: Expand Directly with the Heketi CLI
Preliminaries
- Disk to add: /dev/sdd
- Capacity: 50 GB
Check the node information
- List the nodes and note their IDs

```bash
[root@ks-storage-0 heketi]# heketi-cli node list
Id:0108350a9d13578febbfd0502f8077ff     Cluster:9ad37206ce6575b5133179ba7c6e0935
Id:5e99fe0cd727b8066f200bad5524c544     Cluster:9ad37206ce6575b5133179ba7c6e0935
Id:7bb26eb30c1c61456b5ae8d805c01cf1     Cluster:9ad37206ce6575b5133179ba7c6e0935
```
- View the node details and the existing devices (using storage-0 as an example).

```bash
[root@ks-storage-0 heketi]# heketi-cli node info 0108350a9d13578febbfd0502f8077ff
Node Id: 0108350a9d13578febbfd0502f8077ff
State: online
Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
Zone: 1
Management Hostname: 192.168.9.81
Storage Hostname: 192.168.9.81
Devices:
Id:9af38756fe916fced666fcd3de786c19   Name:/dev/sdb   State:online   Size (GiB):99   Used (GiB):95   Free (GiB):4   Bricks:1
Id:ab5f766ddc779449db2bf45bb165fbff   Name:/dev/sdc   State:online   Size (GiB):49   Used (GiB):45   Free (GiB):4   Bricks:1
```
Add the new device
The newly attached disk shows up in the system as /dev/sdd; the device-add command has to be run once for every node.

```bash
# Command to run
heketi-cli device add --name /dev/sdd --node xxxxxx

# Actual output
[root@ks-storage-0 heketi]# heketi-cli device add --name /dev/sdd --node 0108350a9d13578febbfd0502f8077ff
Device added successfully
[root@ks-storage-0 heketi]# heketi-cli device add --name /dev/sdd --node 5e99fe0cd727b8066f200bad5524c544
Device added successfully
[root@ks-storage-0 heketi]# heketi-cli device add --name /dev/sdd --node 7bb26eb30c1c61456b5ae8d805c01cf1
Device added successfully
```
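With more nodes, the same step can be scripted by looping over the IDs returned by heketi-cli node list. A minimal sketch, assuming every node received a disk with the same device name:

```bash
# Add /dev/sdd to every node registered in Heketi
for node_id in $(heketi-cli node list | awk '{print $1}' | cut -d: -f2); do
  heketi-cli device add --name /dev/sdd --node "$node_id"
done
```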
Check the updated node information
Using the storage-0 node as an example, view the updated node details (pay particular attention to the Devices section).

```bash
[root@ks-storage-0 heketi]# heketi-cli node info 0108350a9d13578febbfd0502f8077ff
Node Id: 0108350a9d13578febbfd0502f8077ff
State: online
Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
Zone: 1
Management Hostname: 192.168.9.81
Storage Hostname: 192.168.9.81
Devices:
Id:9af38756fe916fced666fcd3de786c19   Name:/dev/sdb   State:online   Size (GiB):99   Used (GiB):95   Free (GiB):4    Bricks:1
Id:ab5f766ddc779449db2bf45bb165fbff   Name:/dev/sdc   State:online   Size (GiB):49   Used (GiB):45   Free (GiB):4    Bricks:1
Id:c189451c573814e05ebd83d46ab9a0af   Name:/dev/sdd   State:online   Size (GiB):49   Used (GiB):0    Free (GiB):49   Bricks:0
```
Check the updated topology information

```bash
[root@ks-storage-0 heketi]# heketi-cli topology info

Cluster Id: 9ad37206ce6575b5133179ba7c6e0935

    File:  true
    Block: true

    Volumes:

        Name: vol_75c90b8463d73a7fd9187a8ca22ff91f
        Size: 95
        Id: 75c90b8463d73a7fd9187a8ca22ff91f
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Mount: 192.168.9.81:vol_75c90b8463d73a7fd9187a8ca22ff91f
        Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
        Snapshot Factor: 1.00

        Bricks:
            Id: 37006636e1fe713a395755e8d34f6f20
            Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
            Size (GiB): 95
            Node: 5e99fe0cd727b8066f200bad5524c544
            Device: 8fd529a668d5c19dfc37450b755230cd

            Id: 3dca27f98e1c20aa092c159226ddbe4d
            Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
            Size (GiB): 95
            Node: 7bb26eb30c1c61456b5ae8d805c01cf1
            Device: 51ad0981f8fed73002f5a7f2dd0d65c5

            Id: 7ac64e137d803cccd4b9fcaaed4be8ad
            Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
            Size (GiB): 95
            Node: 0108350a9d13578febbfd0502f8077ff
            Device: 9af38756fe916fced666fcd3de786c19

        Name: vol_ebd76f343b04f89ed4166c8f1ece0361
        Size: 45
        Id: ebd76f343b04f89ed4166c8f1ece0361
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Mount: 192.168.9.81:vol_ebd76f343b04f89ed4166c8f1ece0361
        Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
        Snapshot Factor: 1.00

        Bricks:
            Id: 27e193590ccdb5fba287fb66d5473074
            Path: /var/lib/heketi/mounts/vg_ab5f766ddc779449db2bf45bb165fbff/brick_27e193590ccdb5fba287fb66d5473074/brick
            Size (GiB): 45
            Node: 0108350a9d13578febbfd0502f8077ff
            Device: ab5f766ddc779449db2bf45bb165fbff

            Id: 4fab639b551e573c61141508d75bf605
            Path: /var/lib/heketi/mounts/vg_9b39c4e288d4a1783d204d2033444c00/brick_4fab639b551e573c61141508d75bf605/brick
            Size (GiB): 45
            Node: 7bb26eb30c1c61456b5ae8d805c01cf1
            Device: 9b39c4e288d4a1783d204d2033444c00

            Id: 8eba3fb2253452999a1ec60f647dcf03
            Path: /var/lib/heketi/mounts/vg_b648c995486b0e785f78a8b674d8b590/brick_8eba3fb2253452999a1ec60f647dcf03/brick
            Size (GiB): 45
            Node: 5e99fe0cd727b8066f200bad5524c544
            Device: b648c995486b0e785f78a8b674d8b590

    Nodes:

        Node Id: 0108350a9d13578febbfd0502f8077ff
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.81
        Storage Hostnames: 192.168.9.81
        Devices:
            Id:9af38756fe916fced666fcd3de786c19   State:online   Size (GiB):99   Used (GiB):95   Free (GiB):4
                Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb
                Bricks:
                    Id:7ac64e137d803cccd4b9fcaaed4be8ad   Size (GiB):95   Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
            Id:ab5f766ddc779449db2bf45bb165fbff   State:online   Size (GiB):49   Used (GiB):45   Free (GiB):4
                Known Paths: /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/sdc
                Bricks:
                    Id:27e193590ccdb5fba287fb66d5473074   Size (GiB):45   Path: /var/lib/heketi/mounts/vg_ab5f766ddc779449db2bf45bb165fbff/brick_27e193590ccdb5fba287fb66d5473074/brick
            Id:c189451c573814e05ebd83d46ab9a0af   State:online   Size (GiB):49   Used (GiB):0   Free (GiB):49
                Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi3 /dev/disk/by-path/pci-0000:01:04.0-scsi-0:0:0:3 /dev/sdd
                Bricks:

        Node Id: 5e99fe0cd727b8066f200bad5524c544
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.82
        Storage Hostnames: 192.168.9.82
        Devices:
            Id:5cd245e9826c0bfa46bef0c0d41ed0ed   State:online   Size (GiB):49   Used (GiB):0   Free (GiB):49
                Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi3 /dev/disk/by-path/pci-0000:01:04.0-scsi-0:0:0:3 /dev/sdd
                Bricks:
            Id:8fd529a668d5c19dfc37450b755230cd   State:online   Size (GiB):99   Used (GiB):95   Free (GiB):4
                Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/sdb
                Bricks:
                    Id:37006636e1fe713a395755e8d34f6f20   Size (GiB):95   Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
            Id:b648c995486b0e785f78a8b674d8b590   State:online   Size (GiB):49   Used (GiB):45   Free (GiB):4
                Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/sdc
                Bricks:
                    Id:8eba3fb2253452999a1ec60f647dcf03   Size (GiB):45   Path: /var/lib/heketi/mounts/vg_b648c995486b0e785f78a8b674d8b590/brick_8eba3fb2253452999a1ec60f647dcf03/brick

        Node Id: 7bb26eb30c1c61456b5ae8d805c01cf1
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.83
        Storage Hostnames: 192.168.9.83
        Devices:
            Id:51ad0981f8fed73002f5a7f2dd0d65c5   State:online   Size (GiB):99   Used (GiB):95   Free (GiB):4
                Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb
                Bricks:
                    Id:3dca27f98e1c20aa092c159226ddbe4d   Size (GiB):95   Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
            Id:6656246eafefffaea49399444989eab1   State:online   Size (GiB):49   Used (GiB):0   Free (GiB):49
                Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi3 /dev/disk/by-path/pci-0000:01:04.0-scsi-0:0:0:3 /dev/sdd
                Bricks:
            Id:9b39c4e288d4a1783d204d2033444c00   State:online   Size (GiB):49   Used (GiB):45   Free (GiB):4
                Known Paths: /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/sdc
                Bricks:
                    Id:4fab639b551e573c61141508d75bf605   Size (GiB):45   Path: /var/lib/heketi/mounts/vg_9b39c4e288d4a1783d204d2033444c00/brick_4fab639b551e573c61141508d75bf605/brick
```
Note: focus on the Devices sections of each node.
Check the updated VG information
Using the storage-0 node as an example, view the updated VG information.

```bash
[root@ks-storage-0 heketi]# vgs
  VG                                  #PV #LV #SN Attr   VSize   VFree
  openeuler                             1   2   0 wz--n- <19.00g      0
  vg_9af38756fe916fced666fcd3de786c19   1   2   0 wz--n-  99.87g  <3.92g
  vg_ab5f766ddc779449db2bf45bb165fbff   1   2   0 wz--n-  49.87g  <4.42g
  vg_c189451c573814e05ebd83d46ab9a0af   1   0   0 wz--n-  49.87g  49.87g
```
To keep the article from getting even longer, the PVC creation and verification steps are omitted here; you can verify the result yourself by repeating the earlier steps, for example with the manifest shown below.
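As a quick self-check, a small PVC like the following should now be provisioned onto the newly added space. The name and size are illustrative; the StorageClass is the one used throughout this article.

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-data-40g        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: glusterfs
  resources:
    requests:
      storage: 40Gi          # illustrative size that fits on the new disks
```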
At this point we have walked through the full process of expanding disks with the Heketi CLI and verifying the result.
Common Issues
Issue 1
- Error message

```bash
[root@ks-master-0 k8s-yaml]# kubectl apply -f pvc-test-10g.yaml
The PersistentVolumeClaim "test-data-10G" is invalid: metadata.name: Invalid value: "test-data-10G": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
```
- Solution
The metadata.name defined in the PVC manifest contains an uppercase letter (test-data-10G); changing it to lowercase (test-data-10g) resolves the error.
Issue 2
- Error message

```bash
The PersistentVolumeClaim "test-data-10g" is invalid: spec.resources.requests.storage: Forbidden: field can not be less than previous value
```
- Solution
This was an operator mistake: a PVC named test-data-10g already existed, and the original manifest was re-applied with a smaller storage value, which triggered the error above. Kubernetes only allows a PVC's storage request to grow; it can never be reduced.
Issue 3
- Error message

```bash
[root@ks-storage-0 heketi]# heketi-cli topology load --json=/etc/heketi/topology.json
    Found node 192.168.9.81 on cluster 9ad37206ce6575b5133179ba7c6e0935
        Found device /dev/sdb
        Adding device /dev/sdc ... Unable to add device: Initializing device /dev/sdc failed (already initialized or contains data?): No device found for /dev/sdc.
    Found node 192.168.9.82 on cluster 9ad37206ce6575b5133179ba7c6e0935
        Found device /dev/sdb
        Adding device /dev/sdc ... Unable to add device: Initializing device /dev/sdc failed (already initialized or contains data?): No device found for /dev/sdc.
    Found node 192.168.9.83 on cluster 9ad37206ce6575b5133179ba7c6e0935
        Found device /dev/sdb
        Adding device /dev/sdc ... Unable to add device: Initializing device /dev/sdc failed (already initialized or contains data?): No device found for /dev/sdc.
```
- Solution
Another operator mistake: the reload command was run before the /dev/sdc disks had actually been attached to the nodes. Attach the disks first, then reload the topology.
Summary
This article described in detail two ways for operators to add new physical disks to an existing Heketi-managed GlusterFS storage cluster when its data disks are fully allocated and new volumes can no longer be created:
- Expansion by adjusting the topology configuration file
- Expansion directly with the Heketi CLI
The article is based on a real production case, and every operation has been verified in practice. Still, data is priceless and expansion carries risk, so proceed with caution.