
Kubernetes Series: PersistentVolumes

Official documentation: https://kubernetes.io/docs/concepts/storage/persistent-volumes/

1. Understanding PV & PVC

When defining a volume in a Pod, we only need to reference a PVC. The PVC must bind to a matching PV: the claim requests storage according to its spec, and the PV is carved out of actual storage. PV and PVC are the storage-resource abstractions that Kubernetes provides.

  • Persistent Volume (PV): an abstraction over creating and consuming storage, so that storage is managed as a cluster resource. PVs can be provisioned statically or dynamically; dynamic provisioning creates PVs automatically.
  • PersistentVolumeClaim (PVC): lets users request storage without caring about the underlying volume implementation details.

The relationship between containers, PV, and PVC: the PV is the provider, the PVC is the consumer, and consumption happens through binding.

2. Persistent Volumes: Static Binding

From the diagram above, we can identify three parts:

  • Volume definition in the Pod spec (references the PVC)
  • Volume claim template (the PVC)
  • Container application (consumes the PV)

2.1 Configure the Volume and the Volume Claim Template

[root@k8s-master-128 volume]# cat pvc-pod.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        # Mount the volume named wwwroot at nginx's html directory
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      # Define a volume named wwwroot, backed by a PVC
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          claimName: my-pvc

---
# Define the PVC; it matches a PV by requested capacity
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Must match the claimName above
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

Create:

[root@k8s-master-128 volume]# kubectl create -f pvc-pod.yaml
deployment.apps/nginx-deployment created
persistentvolumeclaim/my-pvc created
[root@k8s-master-128 volume]# kubectl get pod|grep deploy
nginx-deployment-5cd778bdb4-77rk8 0/1 Pending 0 26h
nginx-deployment-5cd778bdb4-wrcjd 0/1 Pending 0 26h
[root@k8s-master-128 volume]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc Pending 49s

Because no PV has been created yet, both the new PVC and the Deployment's Pods are stuck in Pending.

2.2 Define the PV

We use NFS as the backing storage.
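For the PV below to bind and mount, the NFS export must already exist on the NFS server (172.16.194.130 in this walkthrough). A minimal /etc/exports sketch; the subnet and export options are assumptions for illustration, adjust them to your environment:

```shell
# /etc/exports on the NFS server (172.16.194.130)
# rw,sync,no_root_squash are common choices for Kubernetes NFS volumes
/opt/container_data 172.16.194.0/24(rw,sync,no_root_squash)
```

After editing the file, re-export with `exportfs -r` and verify with `exportfs -v`.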

[root@k8s-master-128 volume]# vim pv-pod.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /opt/container_data
    server: 172.16.194.130

Create:

[root@k8s-master-128 volume]# kubectl create -f pv-pod.yaml
persistentvolume/my-pv created
[root@k8s-master-128 volume]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Bound default/my-pvc 8s
[root@k8s-master-128 volume]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc Bound my-pv 5Gi RWX 6m11s
[root@k8s-master-128 volume]# kubectl get pod|grep deploy
nginx-deployment-5cd778bdb4-77rk8 1/1 Running 0 26h
nginx-deployment-5cd778bdb4-wrcjd 1/1 Running 0 26h

Now the PVC has automatically matched the PV we just created, based on the requested capacity, and the Deployment's Pods are running normally.
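The matching rule can be sketched as follows. This is a simplified illustration of how a PV is picked for a PVC, not the actual Kubernetes controller code; real binding also considers storageClassName, label selectors, and volume mode:

```python
def parse_gi(size: str) -> int:
    """Parse a size like '5Gi' into whole GiB (illustration only)."""
    assert size.endswith("Gi")
    return int(size[:-2])

def find_pv(pvs, pvc):
    """Pick the smallest Available PV that satisfies the claim."""
    candidates = [
        pv for pv in pvs
        if pv["status"] == "Available"
        and parse_gi(pv["capacity"]) >= parse_gi(pvc["request"])
        and set(pvc["accessModes"]) <= set(pv["accessModes"])
    ]
    # Prefer the smallest PV that fits, to avoid wasting capacity
    return min(candidates, key=lambda pv: parse_gi(pv["capacity"]), default=None)

pvs = [{"name": "my-pv", "capacity": "5Gi",
        "accessModes": ["ReadWriteMany"], "status": "Available"}]
pvc = {"request": "5Gi", "accessModes": ["ReadWriteMany"]}
print(find_pv(pvs, pvc)["name"])  # my-pv
```

With a request of 10Gi the same claim would stay Pending, since no PV is large enough.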

2.3 Test Access

First, create a Service for the Pods above and expose it with a NodePort:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort

Check the exposed port with kubectl get svc:

[root@k8s-master-128 volume]# kubectl get svc|grep nginx-service
nginx-service NodePort 10.0.0.190 <none> 80:48528/TCP 79s

Open http://NodeIP:Port in a browser.

The content served above comes entirely from the NFS shared directory, where we can define it:

[root@k8s-node-130 container_data]# pwd
/opt/container_data
[root@k8s-node-130 container_data]# cat index.html
hello PVC
[root@k8s-node-130 container_data]#

3. Dynamic PV Provisioning

As workloads grow, a cluster sees a large volume of PVC requests. Creating matching PVs by hand does not scale, so we need storage to be provisioned and mounted automatically.

We use a StorageClass to connect to the storage backend; it is what links PVCs to automatically created PVs.

Storage plugins that support dynamic provisioning in Kubernetes: https://kubernetes.io/docs/concepts/storage/storage-classes/
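The provisioning loop works roughly like this: a provisioner watches for Pending PVCs that reference its StorageClass, creates a matching PV named after the claim's UID, and pre-binds it to the claim. A toy sketch of that decision, not real controller code:

```python
def provision(pvc, storage_classes, provisioner_name="fuseim.pri/ifs"):
    """Return a new PV dict for the claim if its class belongs to us, else None."""
    sc = storage_classes.get(pvc["storageClassName"])
    if sc is None or sc["provisioner"] != provisioner_name:
        return None  # some other provisioner is responsible for this class
    return {
        "name": f"pvc-{pvc['uid']}",     # dynamic PVs are named after the PVC UID
        "capacity": pvc["request"],      # sized to exactly what was requested
        "accessModes": pvc["accessModes"],
        "claimRef": pvc["name"],         # pre-bound to this claim
    }

classes = {"managed-nfs-storage": {"provisioner": "fuseim.pri/ifs"}}
pvc = {"name": "my-pvc", "uid": "1234", "storageClassName": "managed-nfs-storage",
       "request": "1Gi", "accessModes": ["ReadWriteMany"]}
print(provision(pvc, classes)["name"])  # pvc-1234
```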

Deployment test:
Because NFS does not support dynamic provisioning natively, we need an external storage plugin.
For the NFS dynamic provisioner deployment, see: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client/deploy

3.1 Define a StorageClass

[root@k8s-master-128 ~]# mkdir storage && cd storage
[root@k8s-master-128 storage]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

3.2 Deploy RBAC Authorization

Because the provisioner creates PVs through kube-apiserver, it must be granted the corresponding permissions:

[root@k8s-master-128 storage]# cat rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

3.3 Deploy the Service That Auto-Creates PVs

Automatic PV creation is handled by nfs-client-provisioner:

[root@k8s-master-128 storage]# cat deployment.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 172.16.194.130
            - name: NFS_PATH
              value: /opt/container_data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.194.130
            path: /opt/container_data

Only the NFS server address and shared path need to be changed for your environment.

Create:

[root@k8s-master-128 storage]# kubectl create -f .
[root@k8s-master-128 storage]# kubectl get deploy|grep nfs
nfs-client-provisioner 1/1 1 1 2m37s
[root@k8s-master-128 storage]# kubectl get pod|grep nfs
nfs-client-provisioner-7f6869cc64-7tgt4 1/1 Running 0 44h

Check the created StorageClass:

[root@k8s-master-128 storage]# kubectl get sc
NAME PROVISIONER AGE
managed-nfs-storage fuseim.pri/ifs 3m48s
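Optionally, this StorageClass can be made the cluster default so that PVCs without an explicit storageClassName still get dynamic provisioning. The annotation below is the standard Kubernetes mechanism for this; it is not part of the original setup, so treat it as an optional variation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    # Marks this class as the default for PVCs that omit storageClassName
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
```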

3.4 Test Automatic PV Creation

Reuse the static Pod example from above, but this time have the html directory mount a dynamically provisioned volume:

[root@k8s-master-128 storage]# cat pvc-pod.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        # Mount the volume named wwwroot at nginx's html directory
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      # Define a volume named wwwroot, backed by a PVC
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          claimName: my-pvc

---
# Define the PVC; the StorageClass provisions a matching PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Must match the claimName above
  name: my-pvc
spec:
  storageClassName: "managed-nfs-storage" # Added line: requests automatic PV creation (must match the StorageClass name)
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Create:

[root@k8s-master-128 storage]# kubectl create -f pvc-pod.yaml

Check the created Pods and resources:

[root@k8s-master-128 storage]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nfs-client-provisioner 1/1 1 1 62m
nginx 1/1 1 1 17d
nginx-deployment 2/2 2 2 25s
[root@k8s-master-128 storage]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-7f6869cc64-7tgt4 1/1 Running 0 45h
nginx-7db9fccd9b-gr8nv 1/1 Running 0 45h
nginx-deployment-5cd778bdb4-lzcfx 1/1 Running 0 44h
nginx-deployment-5cd778bdb4-pvrrm 1/1 Running 0 44h

Check the manually created PVC and the automatically created PV:

[root@k8s-master-128 storage]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-83032e77-7ec7-11e9-9b62-000c29f4daa9 1Gi RWX Delete Bound default/my-pvc managed-nfs-storage 44h
[root@k8s-master-128 storage]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc Bound pvc-83032e77-7ec7-11e9-9b62-000c29f4daa9 1Gi RWX managed-nfs-storage 45s

Exec into one of the containers and create a test file:

[root@k8s-master-128 storage]# kubectl exec -it nginx-deployment-5cd778bdb4-pvrrm /bin/bash
root@nginx-deployment-5cd778bdb4-pvrrm:/# cd /usr/share/nginx/html/
root@nginx-deployment-5cd778bdb4-pvrrm:/usr/share/nginx/html# echo "hello this is PVC" >index.html

Access the container in the Pod:

[root@k8s-master-128 storage]# kubectl get pod -o wide|grep nginx-deployment
nginx-deployment-5cd778bdb4-lzcfx 1/1 Running 0 44h 172.17.81.2 172.16.194.130 <none> <none>
nginx-deployment-5cd778bdb4-pvrrm 1/1 Running 0 44h 172.17.41.3 172.16.194.129 <none> <none>

# Access from a Node
[root@k8s-node-129 ~]# curl 172.17.81.2
hello this is PVC

Check the file on the NFS share:

[root@k8s-node-130 ~]# cd /opt/container_data/default-my-pvc-pvc-83032e77-7ec7-11e9-9b62-000c29f4daa9/
[root@k8s-node-130 default-my-pvc-pvc-83032e77-7ec7-11e9-9b62-000c29f4daa9]# cat index.html
hello this is PVC

At this point, dynamic provisioning is fully configured, and the test deployment confirms that storage is allocated automatically.
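Note the directory name on the NFS share: nfs-client-provisioner creates a subdirectory named ${namespace}-${pvcName}-${pvName} per volume, while the PV itself is named pvc-<PVC UID>. A small sketch of that naming convention (an illustration, not the provisioner's actual Go code):

```python
def pv_name(pvc_uid: str) -> str:
    """Dynamically provisioned PVs are named after the PVC's UID."""
    return f"pvc-{pvc_uid}"

def nfs_subdir(namespace: str, pvc_name: str, pv: str) -> str:
    """nfs-client-provisioner creates ${namespace}-${pvcName}-${pvName}."""
    return f"{namespace}-{pvc_name}-{pv}"

pv = pv_name("83032e77-7ec7-11e9-9b62-000c29f4daa9")
print(nfs_subdir("default", "my-pvc", pv))
# default-my-pvc-pvc-83032e77-7ec7-11e9-9b62-000c29f4daa9
```

This matches the /opt/container_data/default-my-pvc-pvc-… path shown above.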

Delete the resources:

[root@k8s-master-128 storage]# kubectl delete -f pvc-pod.yaml
deployment.apps "nginx-deployment" deleted
persistentvolumeclaim "my-pvc" deleted
[root@k8s-master-128 storage]# kubectl get pv
No resources found.
[root@k8s-master-128 storage]# kubectl get pvc
No resources found.

If you delete the resources like this, the PV is gone too, and so are the files stored behind it.

If you want to keep the provisioned data when deleting the Pod and PVC, configure the StorageClass as follows:

[root@k8s-master-128 storage]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "true" # set to true to archive the volume's data instead of deleting it
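With archiveOnDelete set to "true", the provisioner does not wipe the backing directory when the PVC is deleted; it renames the directory with an archived- prefix instead, so the data survives on the NFS share. A simplified sketch of the two delete paths, for illustration only (not the provisioner's actual implementation):

```python
import os
import shutil
import tempfile

def on_delete(volume_path: str, archive_on_delete: bool):
    """Mimic nfs-client-provisioner's delete handling for a volume directory."""
    if archive_on_delete:
        # Keep the data: rename the directory with an "archived-" prefix
        parent, name = os.path.split(volume_path)
        archived = os.path.join(parent, f"archived-{name}")
        os.rename(volume_path, archived)
        return archived
    # Otherwise remove the directory and its contents entirely
    shutil.rmtree(volume_path)
    return None

# Demo on a throwaway directory
root = tempfile.mkdtemp()
vol = os.path.join(root, "default-my-pvc-pvc-1234")
os.mkdir(vol)
print(on_delete(vol, archive_on_delete=True))
# prints the archived path, e.g. .../archived-default-my-pvc-pvc-1234
```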

That's it.

