NFS (Network File System)
Storage comes up constantly when working with Kubernetes. Its most important job is to persist the data inside containers, so that the data survives container restarts and deletions. A network file system is what makes this practical across machines: Kubernetes can use NFS to keep mounted data consistent across nodes, so that even after a Pod is rescheduled to another node (during failover, for example), it can still read the data it stored.
1. Install NFS
Install NFS inside the Kubernetes cluster.
Run on all nodes:
yum install -y nfs-utils
Run on the NFS master node:
# Expose the directory /nfs/data/; `*` means any host may access it
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
mkdir -p /nfs/data
systemctl enable rpcbind --now
systemctl enable nfs-server --now
# Apply the configuration
exportfs -r
# Check and verify
[root@k8s-master ~]# exportfs
/nfs/data
[root@k8s-master ~]#
Run on the NFS client nodes:
# List the directories that 172.31.0.2 exports
showmount -e 172.31.0.2   # replace with your own master node IP
mkdir -p /nfs/data
# Mount the remote directory onto the local directory
mount -t nfs 172.31.0.2:/nfs/data /nfs/data
2. Verify
# Write a test file
echo "hello nfs server" > /nfs/data/test.txt
After these steps, the NFS file system is installed and working: 172.31.0.2 acts as the server and exports /nfs/data, and every client node mounts its local /nfs/data onto the server's /nfs/data. Within a Kubernetes cluster, any one server can be chosen as the NFS server and the rest act as clients.
PV & PVC
PV: a Persistent Volume stores the data an application needs to persist at a designated location.
PVC: a Persistent Volume Claim declares the specification of the persistent volume an application needs.
Data in a Pod needs to be persisted — so where should it be saved? The Pod requests storage through a PVC, and Kubernetes binds that claim to a suitable PV.
1. Create PVs Statically
Create the directories that will hold the data.
# Run on the NFS master node
[root@k8s-master data]# pwd
/nfs/data
[root@k8s-master data]# mkdir -p /nfs/data/01
[root@k8s-master data]# mkdir -p /nfs/data/02
[root@k8s-master data]# mkdir -p /nfs/data/03
[root@k8s-master data]# ls
01  02  03
[root@k8s-master data]#
Create the PVs, pv.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/01
    server: 172.31.0.2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 172.31.0.2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 172.31.0.2
[root@k8s-master ~]# kubectl apply -f pv.yml
persistentvolume/pv01-10m created
persistentvolume/pv02-1gi created
persistentvolume/pv03-3gi created
[root@k8s-master ~]# kubectl get persistentvolume
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01-10m   10M        RWX            Retain           Available           nfs                     45s
pv02-1gi   1Gi        RWX            Retain           Available           nfs                     45s
pv03-3gi   3Gi        RWX            Retain           Available           nfs                     45s
[root@k8s-master ~]#
The three directories back three PVs of 10M, 1Gi, and 3Gi respectively; together they form a PV pool.
Create the PVC, pvc.yml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs
[root@k8s-master ~]# kubectl apply -f pvc.yml
persistentvolumeclaim/nginx-pvc created
[root@k8s-master ~]# kubectl get pvc,pv
NAME                              STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nginx-pvc   Bound    pv02-1gi   1Gi        RWX            nfs            14m

NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/pv01-10m   10M        RWX            Retain           Available                       nfs                     17m
persistentvolume/pv02-1gi   1Gi        RWX            Retain           Bound       default/nginx-pvc   nfs                     17m
persistentvolume/pv03-3gi   3Gi        RWX            Retain           Available                       nfs                     17m
[root@k8s-master ~]#
Notice that the 200Mi PVC binds the best-fitting PV in the pool, the 1Gi pv02-1gi, whose status becomes Bound: pv01-10m is too small to satisfy the request, and pv03-3gi would waste more space. Later, when creating a Pod or Deployment, referencing this PVC is all it takes to persist data into the PV, and NFS keeps that data in sync across nodes.
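As a sketch of that last step, a Pod references the PVC through a volume of type persistentVolumeClaim; the nginx image and mount path below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pv-demo
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html   # files written here land in the bound PV
  volumes:
    - name: html
      persistentVolumeClaim:
        claimName: nginx-pvc                 # the PVC created above
```

Because the access mode is ReadWriteMany, several Pods on different nodes can mount the same claim simultaneously.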
2. Create PVs Dynamically
Configure a default storage class for dynamic provisioning, sc.yml:
# Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive a PV's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.31.0.2  ## your own NFS server address
            - name: NFS_PATH
              value: /nfs/data   ## the directory the NFS server exports
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.31.0.2
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Remember to change the IP to your own. The image has also been switched to an Aliyun mirror, in case the original registry is unreachable.
[root@k8s-master ~]# kubectl apply -f sc.yml
storageclass.storage.k8s.io/nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@k8s-master ~]#
Confirm the configuration took effect:
[root@k8s-master ~]# kubectl get sc
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  6s
[root@k8s-master ~]#
Test dynamic provisioning, pvc.yml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
[root@k8s-master ~]# kubectl apply -f pvc.yml
persistentvolumeclaim/nginx-pvc created
[root@k8s-master ~]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-7b01bc33-826d-41d0-a990-8c1e7c997e6f   200Mi      RWX            nfs-storage    9s
[root@k8s-master ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-7b01bc33-826d-41d0-a990-8c1e7c997e6f   200Mi      RWX            Delete           Bound    default/nginx-pvc   nfs-storage             16s
[root@k8s-master ~]#
The PVC asks for 200Mi, so a PV of exactly 200Mi is created on demand and bound; its status is Bound, and the test succeeds. Note that this PVC omits storageClassName, so the default storage class nfs-storage handles it. As before, when creating a Pod or Deployment, referencing the PVC is all it takes to persist data into the PV, and NFS keeps that data in sync across nodes.
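For example, a Deployment can mount the dynamically provisioned claim the same way a Pod does; the names below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: nginx-pvc   # the dynamically provisioned PVC
```

Since the claim's access mode is ReadWriteMany, both replicas can share the volume even when scheduled to different nodes.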
Summary
Putting it together: NFS, PV, and PVC provide data storage for a Kubernetes cluster, so no matter which node an application is rescheduled to, it can still read the data it wrote before.