
Storage in k8s

Table of Contents

I. ConfigMap

1. ConfigMap features

2. ConfigMap use cases

3. Ways to create a ConfigMap

(1) Create from literal values

(2) Create from a file

(3) Create from a directory

(4) Create from a YAML file

(5) How to use a ConfigMap

Populating environment variables with a ConfigMap

Using a ConfigMap through a volume

Filling a pod's configuration file with a ConfigMap

Changing configuration by hot-updating the cm

II. Secrets configuration management

1. Secrets overview

2. Creating Secrets

(1) Create from files

(2) Write a YAML file

3. How to use Secrets

(1) Mount a Secret into a volume

(2) Map Secret keys to a specific path

(3) Expose Secrets as environment variables

(4) Store docker registry credentials

III. Volumes configuration management

1. Volume types supported by Kubernetes

2. emptyDir volumes

3. hostPath volumes

4. NFS volumes

(1) Deploy an NFS share host and install nfs-utils on all k8s nodes

(2) Deploy an NFS volume

5. PersistentVolume persistent volumes

(1) Static persistent volumes (PV) and persistent volume claims (PVC)

PersistentVolume (PV)

PersistentVolumeClaim (PVC)

(2) Volume access modes

(3) Volume reclaim policies

(4) Volume states

Example: static PV

IV. StorageClass

1. StorageClass overview

2. StorageClass attributes

3. The NFS Client Provisioner

4. Deploying the NFS Client Provisioner

(1) Create a ServiceAccount and grant permissions

(2) Deploy the application

(3) Create the storage class

(4) Create a PVC

(5) Create a test pod

(6) Set the default storage class

V. The StatefulSet controller

1. Features

2. Components of a StatefulSet

3. How to build one

4. Testing

5. Scaling a StatefulSet


I. ConfigMap

1. ConfigMap features

  • A ConfigMap stores configuration data as key-value pairs.

  • The ConfigMap resource provides a way to inject configuration data into Pods.

  • It decouples images from configuration files, making images portable and reusable.

  • etcd limits object size, so a ConfigMap cannot exceed 1 MiB.

2. ConfigMap use cases

  • Populating the values of environment variables

  • Setting command-line arguments inside a container

  • Populating configuration files in a volume

3. Ways to create a ConfigMap

(1) Create from literal values

[root@k8s-master ~]# kubectl create cm zx-config --from-literal fname=zx --from-literal lname=zhou
configmap/zx-config created
[root@k8s-master ~]# kubectl describe cm zx-config 
Name:         zx-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
fname:
----
zx
lname:
----
zhou

BinaryData
====

Events:  <none>
[root@k8s-master ~]# kubectl get configmaps 
NAME               DATA   AGE
kube-root-ca.crt   1      23h
zx-config          2      35s
[root@k8s-master ~]# 

(2) Create from a file

[root@k8s-master ~]# cat /etc/resolv.conf 
# Generated by NetworkManager
search zx.org
nameserver 114.114.114.114
[root@k8s-master ~]# kubectl create cm zx2-config --from-file /etc/resolv.conf 
configmap/zx2-config created
[root@k8s-master ~]# kubectl describe cm zx2-config 
Name:         zx2-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
resolv.conf:
----
# Generated by NetworkManager
search zx.org
nameserver 114.114.114.114


BinaryData
====

Events:  <none>
[root@k8s-master ~]# 

(3) Create from a directory

[root@k8s-master ~]# mkdir zxconfig
[root@k8s-master ~]# cp /etc/fstab /etc/rc.d/rc.local zxconfig/
[root@k8s-master ~]# kubectl create cm zx3-config --from-file zxconfig/
configmap/zx3-config created
[root@k8s-master ~]# kubectl describe cm zx3-config 
Name:         zx3-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
fstab:
----
...(remaining describe output truncated in the original capture)

(4) Create from a YAML file

[root@k8s-master ~]# kubectl create cm zx4-config --from-literal db_host=172.25.254.200 --from-literal db_port=3306 --dry-run=client -o yaml > zx-config.yml
[root@k8s-master ~]# vim zx-config.yml 
[root@k8s-master ~]# kubectl apply -f zx-config.yml 
configmap/zx4-config created
[root@k8s-master ~]# kubectl describe cm zx4-config 
Name:         zx4-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
db_host:
----
172.25.254.200
db_port:
----
3306

BinaryData
====

Events:  <none>
[root@k8s-master ~]# cat zx-config.yml 
apiVersion: v1
data:
  db_host: 172.25.254.200
  db_port: "3306"
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: zx4-config

(5) How to use a ConfigMap

  • Passed directly to the pod as environment variables

  • Supplied as command-line arguments for the pod

  • Mounted into the pod as a volume

Populating environment variables with a ConfigMap
# Map cm entries to variables with chosen names
[root@k8s-master ~]# vim testpod1.yml
[root@k8s-master ~]# kubectl apply -f testpod1.yml 
pod/testpod created

[root@k8s-master ~]# kubectl logs pods/testpod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPP1_PORT_80_TCP=tcp://10.99.26.240:80
HOSTNAME=testpod
SHLVL=1
HOME=/
MYAPP1_SERVICE_HOST=10.99.26.240
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
key1=172.25.254.200
key2=3306
MYAPP1_SERVICE_PORT=80
MYAPP1_PORT=tcp://10.99.26.240:80
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
MYAPP1_PORT_80_TCP_ADDR=10.99.26.240
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
MYAPP1_PORT_80_TCP_PORT=80
MYAPP1_PORT_80_TCP_PROTO=tcp
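
The first version of testpod1.yml is not reproduced in the capture above; a minimal sketch that would produce the key1/key2 variables seen in the log, assuming env/valueFrom/configMapKeyRef against the zx4-config cm:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: reg.zx.org/library/busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - env
    env:
    - name: key1
      valueFrom:
        configMapKeyRef:       # pull cm key db_host into variable key1
          name: zx4-config
          key: db_host
    - name: key2
      valueFrom:
        configMapKeyRef:       # pull cm key db_port into variable key2
          name: zx4-config
          key: db_port
  restartPolicy: Never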

# Map the cm values directly as variables (keeping the key names)
[root@k8s-master ~]# vim testpod1.yml 
[root@k8s-master ~]# cat testpod1.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: reg.zx.org/library/busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - env
    envFrom:
    - configMapRef:
        name: zx4-config
  restartPolicy: Never

[root@k8s-master ~]# kubectl delete -f testpod1.yml 
pod "testpod" deleted
[root@k8s-master ~]# kubectl apply -f testpod1.yml 
pod/testpod created
[root@k8s-master ~]# kubectl logs pods/testpod
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
MYAPP1_PORT_80_TCP=tcp://10.99.26.240:80
HOSTNAME=testpod
SHLVL=1
HOME=/
db_port=3306
MYAPP1_SERVICE_HOST=10.99.26.240
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYAPP1_PORT=tcp://10.99.26.240:80
MYAPP1_SERVICE_PORT=80
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
MYAPP1_PORT_80_TCP_ADDR=10.99.26.240
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
MYAPP1_PORT_80_TCP_PORT=80
MYAPP1_PORT_80_TCP_PROTO=tcp
db_host=172.25.254.200


# Use the variables on the pod's command line
[root@k8s-master ~]# vim testpod1.yml 
[root@k8s-master ~]# kubectl apply -f testpod1.yml 
pod/testpod created
[root@k8s-master ~]# kubectl logs pods/testpod
172.25.254.200 3306
[root@k8s-master ~]# cat testpod1.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: reg.zx.org/library/busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - echo ${db_host} ${db_port}
    envFrom:
    - configMapRef:
        name: zx4-config
  restartPolicy: Never
Using a ConfigMap through a volume
[root@k8s-master ~]# cp testpod1.yml testpod2.yml
[root@k8s-master ~]# vim testpod2.yml 
[root@k8s-master ~]# kubectl apply -f testpod2.yml 
pod/testpod created
[root@k8s-master ~]# kubectl logs testpod 
172.25.254.200

[root@k8s-master ~]# cat testpod2.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: reg.zx.org/library/busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - cat /config/db_host
    volumeMounts:
      - name: config-volume
        mountPath: /config
  volumes:
  - name: config-volume
    configMap:
      name: zx4-config
  restartPolicy: Never

Filling a pod's configuration file with a ConfigMap
[root@k8s-master ~]# vim nginx.conf
[root@k8s-master ~]# cat nginx.conf 
server {
  listen 8000;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;
}
[root@k8s-master ~]# kubectl create cm nginx-conf --from-file nginx.conf 
configmap/nginx-conf created
[root@k8s-master ~]# kubectl describe cm nginx-conf 
Name:         nginx-conf
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
nginx.conf:
----
server {
  listen 8000;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;
}


BinaryData
====

Events:  <none>
[root@k8s-master ~]# kubectl create deployment nginx --image reg.zx.org/library/nginx:latest --replicas 1 --dry-run=client -o yaml > nginx.yml
[root@k8s-master ~]# vim nginx.yml 
[root@k8s-master ~]# kubectl apply -f nginx.yml 

[root@k8s-master ~]# kubectl get deployments.apps 
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           63s

[root@k8s-master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE               NOMINATED NODE   READINESS GATES
nginx-688685cfd4-8cmbw   1/1     Running   0          105s   10.244.1.96   k8s-node1.zx.org   <none>           <none>

[root@k8s-master ~]# cat nginx.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: reg.zx.org/library/nginx:latest
        name: nginx
        volumeMounts:
        - name: config-volume
          mountPath: /etc/nginx/conf.d

      volumes:
        - name: config-volume
          configMap:
            name: nginx-conf

[root@k8s-master ~]# curl 10.244.1.96:8000
Changing configuration by hot-updating the cm
[root@k8s-master ~]# kubectl edit cm nginx-conf     # change the port to 8080
configmap/nginx-conf edited

[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl exec pods/nginx-688685cfd4-8cmbw -- cat /etc/nginx/conf.d/nginx.conf
server {
  listen 8080;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;
}

The modified configuration does not take effect in the running pod by itself; after the pod is deleted, the controller rebuilds it and the change then takes effect.
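
For a pod managed by a Deployment, an alternative to deleting it by hand is to let the controller recreate every pod in one step (a hedged equivalent, assuming the deployment is named nginx as above):

[root@k8s-master ~]# kubectl rollout restart deployment nginx
deployment.apps/nginx restarted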

[root@k8s-master ~]# kubectl delete pods/nginx-688685cfd4-8cmbw 
pod "nginx-688685cfd4-8cmbw" deleted

[root@k8s-master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE               NOMINATED NODE   READINESS GATES
nginx-688685cfd4-t4pvm   1/1     Running   0          18s   10.244.2.103   k8s-node2.zx.org   <none>           <none>
[root@k8s-master ~]# curl 10.244.2.103:8080

II. Secrets configuration management

1. Secrets overview

- The Secret object type holds sensitive information such as passwords, OAuth tokens, and ssh keys.

- Putting sensitive information in a Secret is safer and more flexible than putting it in a Pod definition or a container image.

- A Pod can use a Secret in two ways:

  •  As files in a volume mounted into one or more of the pod's containers.
  •  When the kubelet pulls images for the pod.

- Secret types:

  • Service Account: Kubernetes automatically creates Secrets containing API access credentials and automatically modifies pods to use this type of Secret.
  • Opaque: stores data base64-encoded; it can be decoded with base64 --decode, so the protection is weak.
  • kubernetes.io/dockerconfigjson: stores docker registry credentials.

2. Creating Secrets

(1) Create from files

[root@k8s-master ~]# mkdir secrets
[root@k8s-master ~]# cd secrets/
[root@k8s-master secrets]# echo -n zx > username.txt
[root@k8s-master secrets]# echo -n zhou > password.txt
[root@k8s-master secrets]# kubectl create secret generic userlist --from-file username.txt --from-file password.txt 
secret/userlist created
[root@k8s-master secrets]# kubectl get secrets userlist -o yaml
apiVersion: v1
data:
  password.txt: emhvdQ==
  username.txt: eng=
kind: Secret
metadata:
  creationTimestamp: "2024-09-17T03:11:20Z"
  name: userlist
  namespace: default
  resourceVersion: "37150"
  uid: aa11f642-fdbd-4ff1-84c5-31785dcb4f41
type: Opaque
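
Since Opaque data is only base64-encoded, anyone permitted to read the Secret can recover the plaintext, for example (the backslash escapes the dot in the key name):

[root@k8s-master secrets]# kubectl get secret userlist -o jsonpath='{.data.username\.txt}' | base64 -d
zx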

(2) Write a YAML file

[root@k8s-master secrets]# echo -n zx | base64
eng=
[root@k8s-master secrets]# echo -n zhou | base64
emhvdQ==
[root@k8s-master secrets]# kubectl create secret generic userlist --dry-run=client -o yaml > userlist.yml
[root@k8s-master secrets]# vim userlist.yml 
[root@k8s-master secrets]# kubectl apply -f userlist.yml 
Warning: resource secrets/userlist is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
secret/userlist configured
[root@k8s-master secrets]# kubectl describe secrets userlist
Name:         userlist
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:      4 bytes
password.txt:  4 bytes
username:      2 bytes
username.txt:  2 bytes

[root@k8s-master secrets]# cat userlist.yml 
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: null
  name: userlist
type: Opaque
data:
  username: eng=
  password: emhvdQ==
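
As a hedged alternative to encoding values yourself, the Secret API also accepts a stringData field with plaintext values, which the API server encodes into data on write:

apiVersion: v1
kind: Secret
metadata:
  name: userlist
type: Opaque
stringData:          # plaintext here; stored base64-encoded under .data
  username: zx
  password: zhou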

3. How to use Secrets

(1) Mount a Secret into a volume

[root@k8s-master secrets]# kubectl run nginx --image reg.zx.org/library/nginx:latest --dry-run=client -o yaml > pod1.yml
[root@k8s-master secrets]# vim pod1.yml 
[root@k8s-master secrets]# kubectl apply -f pod1.yml 
pod/nginx created
[root@k8s-master secrets]# kubectl exec  pods/nginx -it -- /bin/bash
root@nginx:/# cat /secret/
cat: /secret/: Is a directory
root@nginx:/# cd /secret/
root@nginx:/secret# ls
password  password.txt  username  username.txt
root@nginx:/secret# cat password
zhouroot@nginx:/secret# cat username
zxroot@nginx:/secret# 
root@nginx:/secret# exit
exit

[root@k8s-master secrets]# cat pod1.yml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: reg.zx.org/library/nginx:latest
    name: nginx
    volumeMounts:
    - name: secrets
      mountPath: /secret
      readOnly: true

  volumes:
  - name: secrets
    secret:
      secretName: userlist

(2) Map Secret keys to a specific path

[root@k8s-master secrets]# kubectl delete -f pod1.yml 
pod "nginx" deleted
[root@k8s-master secrets]# 
[root@k8s-master secrets]# cp pod1.yml pod2.yml
[root@k8s-master secrets]# vim pod2.yml 
[root@k8s-master secrets]# kubectl apply -f pod2.yml 
pod/nginx created
[root@k8s-master secrets]# kubectl exec pods/nginx -it -- /bin/bash
root@nginx:/# cd secret/
root@nginx:/secret# ls
my-users
root@nginx:/secret# cd my-users
root@nginx:/secret/my-users# ls
username
root@nginx:/secret/my-users# cat username 
zxroot@nginx:/secret/my-users# 
root@nginx:/secret/my-users# exit
exit
[root@k8s-master secrets]# cat pod2.yml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: reg.zx.org/library/nginx:latest
    name: nginx
    volumeMounts:
    - name: secrets
      mountPath: /secret
      readOnly: true

  volumes:
  - name: secrets
    secret:
      secretName: userlist
      items:
      - key: username
        path: my-users/username

(3) Expose Secrets as environment variables

[root@k8s-master secrets]# cp pod1.yml pod3.yml
[root@k8s-master secrets]# vim pod3.yml 
[root@k8s-master secrets]# kubectl apply -f pod3.yml 
pod/busybox created

[root@k8s-master secrets]# kubectl logs pods/busybox
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPP1_PORT_80_TCP=tcp://10.99.26.240:80
HOSTNAME=busybox
SHLVL=1
HOME=/
MYAPP1_SERVICE_HOST=10.99.26.240
USERNAME=zx
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYAPP1_SERVICE_PORT=80
MYAPP1_PORT=tcp://10.99.26.240:80
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
MYAPP1_PORT_80_TCP_ADDR=10.99.26.240
KUBERNETES_SERVICE_PORT_HTTPS=443
PASS=zhou
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
MYAPP1_PORT_80_TCP_PORT=80
MYAPP1_PORT_80_TCP_PROTO=tcp

[root@k8s-master secrets]# cat pod3.yml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: reg.zx.org/library/busyboxplus:latest
    name: busybox
    command:
    - /bin/sh
    - -c
    - env
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: userlist
          key: username
    - name: PASS
      valueFrom:
        secretKeyRef:
          name: userlist
          key: password
  restartPolicy: Never

(4) Store docker registry credentials

[root@k8s-master secrets]# 
[root@k8s-master secrets]# docker login reg.zx.org
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores

Login Succeeded
[root@k8s-master secrets]# docker load -i game2048.tar 
011b303988d2: Loading layer   5.05MB/5.05MB
36e9226e74f8: Loading layer  51.46MB/51.46MB
192e9fad2abc: Loading layer  3.584kB/3.584kB
6d7504772167: Loading layer  4.608kB/4.608kB
88fca8ae768a: Loading layer  629.8kB/629.8kB
Loaded image: timinglee/game2048:latest
[root@k8s-master secrets]# docker tag timinglee/game2048:latest reg.zx.org/zx/game2048:latest
[root@k8s-master secrets]# docker push reg.zx.org/zx/game2048:latest
The push refers to repository [reg.zx.org/zx/game2048]
88fca8ae768a: Pushed 
6d7504772167: Pushed 
192e9fad2abc: Pushed 
36e9226e74f8: Pushed 
011b303988d2: Pushed 
latest: digest: sha256:8a34fb9cb168c420604b6e5d32ca6d412cb0d533a826b313b190535c03fe9390 size: 1364

# after logging out, pulling fails
[root@k8s-master secrets]# docker logout reg.zx.org
Removing login credentials for reg.zx.org
[root@k8s-master secrets]# docker pull reg.zx.org/zx/game2048:latest
Error response from daemon: unauthorized: unauthorized to access repository: zx/game2048, action: pull: unauthorized to access repository: zx/game2048, action: pull
# create a secret for docker registry authentication
[root@k8s-master secrets]# kubectl create secret docker-registry docker-auth --docker-server reg.zx.org --docker-username admin --docker-password 123 --docker-email zx@zx.org
secret/docker-auth created

[root@k8s-master secrets]# vim pod4.yml 
[root@k8s-master secrets]# kubectl apply -f pod4.yml 
pod/game2048 created

[root@k8s-master secrets]# kubectl delete -f pod3.yml 
pod "busybox" deleted
[root@k8s-master secrets]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
game2048                 1/1     Running   0          24s
nginx-688685cfd4-t4pvm   1/1     Running   0          39m

[root@k8s-master secrets]# cat pod4.yml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: game2048
  name: game2048
spec:
  containers:
  - image: reg.zx.org/zx/game2048:latest
    name: game2048
  imagePullSecrets:        # without the registry credential the image cannot be pulled
  - name: docker-auth

III. Volumes configuration management

  • Files in a container are stored on disk only temporarily, which causes problems for some applications running in containers.

  • When a container crashes, the kubelet restarts it, but the files in the container are lost because it is rebuilt in a clean state.

  • When multiple containers run in one Pod, they often need to share files.

  • A Kubernetes volume has an explicit lifetime, the same as the Pod that uses it.

  • A volume outlives any individual container running in the Pod, so data survives container restarts.

  • When a Pod ceases to exist, the volume ceases to exist too.

  • Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.

  • Volumes cannot be mounted inside other volumes or hard-linked to other volumes. Each container in the Pod must specify its own mount point for each volume.

1. Volume types supported by Kubernetes

Official docs: Volumes | Kubernetes

The volume types supported by k8s include:

  • awsElasticBlockStore 、azureDisk、azureFile、cephfs、cinder、configMap、csi

  • downwardAPI、emptyDir、fc (fibre channel)、flexVolume、flocker

  • gcePersistentDisk、gitRepo (deprecated)、glusterfs、hostPath、iscsi、local、

  • nfs、persistentVolumeClaim、projected、portworxVolume、quobyte、rbd

  • scaleIO、secret、storageos、vsphereVolume

2. emptyDir volumes

Function:

When a Pod is assigned to a node, an emptyDir volume is created for it first, and the volume exists as long as the Pod runs on that node. The volume starts out empty. The containers in the Pod may mount the emptyDir volume at the same or at different paths, but they can all read and write the same files in it. When the Pod is removed from the node for any reason, the data in the emptyDir volume is deleted permanently.

Use cases for emptyDir:

  • Scratch space, e.g. for a disk-based merge sort.

  • Checkpoints for long-running computations, so a task can resume from its pre-crash state.

  • Holding files that a content-manager container fetches while a web-server container serves them.

[root@k8s-master ~]# mkdir volumes
[root@k8s-master ~]# cd volumes/
[root@k8s-master volumes]# vim pod1.yml
[root@k8s-master volumes]# kubectl apply -f pod1.yml
pod/vol1 created
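
pod1.yml itself is not shown in the capture; a sketch consistent with the describe output that follows (two containers sharing one in-memory emptyDir capped at 100Mi):

apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: reg.zx.org/library/busyboxplus:latest
    name: vm1
    command:
    - /bin/sh
    - -c
    - sleep 30000000
    volumeMounts:
    - mountPath: /cache                    # busybox writes the page here
      name: cache-vol
  - image: reg.zx.org/library/nginx:latest
    name: vm2
    volumeMounts:
    - mountPath: /usr/share/nginx/html     # nginx serves the same volume
      name: cache-vol
  volumes:
  - name: cache-vol
    emptyDir:
      medium: Memory                       # tmpfs-backed
      sizeLimit: 100Mi                     # explains the dd failure below
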
[root@k8s-master volumes]# kubectl describe pods vol1
Name:             vol1
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-node1.zx.org/172.25.254.10
Start Time:       Tue, 17 Sep 2024 11:55:11 +0800
Labels:           <none>
Annotations:      <none>
Status:           Running
IP:               10.244.1.102
IPs:
  IP:  10.244.1.102
Containers:
  vm1:
    Container ID:  docker://7c42d2f2959de768f864b6ab1ed4ae095fc9945023550be9540406981ce9dbcb
    Image:         reg.zx.org/library/busyboxplus:latest
    Image ID:      docker-pullable://reg.zx.org/library/busyboxplus@sha256:9d1c242c1fd588a1b8ec4461d33a9ba08071f0cc5bb2d50d4ca49e430014ab06
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      sleep 30000000
    State:          Running
      Started:      Tue, 17 Sep 2024 11:55:13 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /cache from cache-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5kmp9 (ro)
  vm2:
    Container ID:   docker://49385fe0330033611426759d23cb5c9254aef4d1f17e850bcb2cfaa0d846407f
    Image:          reg.zx.org/library/nginx:latest
    Image ID:       docker-pullable://reg.zx.org/library/nginx@sha256:127262f8c4c716652d0e7863bba3b8c45bc9214a57d13786c854272102f7c945
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 17 Sep 2024 11:55:14 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from cache-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5kmp9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  cache-vol:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  100Mi
  kube-api-access-5kmp9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  5s    default-scheduler  Successfully assigned default/vol1 to k8s-node1.zx.org
  Normal  Pulling    3s    kubelet            Pulling image "reg.zx.org/library/busyboxplus:latest"
  Normal  Pulled     3s    kubelet            Successfully pulled image "reg.zx.org/library/busyboxplus:latest" in 377ms (377ms including waiting). Image size: 12855024 bytes.
  Normal  Created    3s    kubelet            Created container vm1
  Normal  Started    2s    kubelet            Started container vm1
  Normal  Pulling    2s    kubelet            Pulling image "reg.zx.org/library/nginx:latest"
  Normal  Pulled     2s    kubelet            Successfully pulled image "reg.zx.org/library/nginx:latest" in 283ms (283ms including waiting). Image size: 187694648 bytes.
  Normal  Created    2s    kubelet            Created container vm2
  Normal  Started    1s    kubelet            Started container vm2
[root@k8s-master volumes]# 
[root@k8s-master volumes]# kubectl exec -it pods/vol1 -c vm1 -- /bin/sh
/ # cd /cache/
/cache # curl localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
/cache # 
/cache # echo zx > index.html
/cache # curl localhost
zx
/cache # 
/cache # dd if=/dev/zero of=bigfile bs=1M count=101
dd: writing 'bigfile': No space left on device
101+0 records in
99+1 records out
/cache # 

3. hostPath volumes

Function:

A hostPath volume mounts a file or directory from the host node's filesystem into your Pod; its contents are not deleted when the pod is removed.

Some uses of hostPath:

  • Running a container that needs access to Docker engine internals: mount /var/lib/docker.

  • Running cAdvisor (monitoring) in a container: mount /sys via hostPath.

  • Letting a Pod specify whether a given hostPath should exist before the Pod runs, whether it should be created, and in what form.

Security caveats of hostPath:

  • Pods with identical configuration (for example, created from a podTemplate) may behave differently on different nodes because the files on each node differ.

  • When Kubernetes adds resource-aware scheduling as planned, that scheduling will not be able to account for the resources a hostPath uses.

  • Files or directories created on the underlying host are writable only by root. You must either run the process as root in a privileged container or change the permissions on the host so the container can write to the hostPath volume.

[root@k8s-master volumes]# vim pod2.yml
[root@k8s-master volumes]# cat pod2.yml 
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: reg.zx.org/library/nginx:latest
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    hostPath:
      path: /data
      type: DirectoryOrCreate

[root@k8s-master volumes]# kubectl apply -f pod2.yml
pod/vol1 created

[root@k8s-master volumes]# kubectl get  pods  -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE               NOMINATED NODE   READINESS GATES
nginx-688685cfd4-t4pvm   1/1     Running   0          60m   10.244.2.103   k8s-node2.zx.org   <none>           <none>
[root@k8s-master volumes]# curl 10.244.2.104
[root@k8s-node2 ~]# echo zx > /data/index.html
[root@k8s-master volumes]# curl 10.244.2.104
zx

# the hostPath is not cleaned up when the pod is deleted
[root@k8s-master volumes]# kubectl delete -f pod2.yml
pod "vol1" deleted
[root@k8s-node2 ~]# ls /data/
index.html

4. NFS volumes

An NFS volume mounts a directory from an existing NFS server into a Pod. This is very useful for sharing data between Pods or for persisting data to external storage.

For example, when several containers need to access the same data set, or when data produced in a container must survive outside it, an NFS volume offers a convenient solution.

(1) Deploy an NFS share host and install nfs-utils on all k8s nodes

# set up the NFS host
[root@docker-node1 ~]# dnf install nfs-utils -y
[root@docker-node1 ~]# systemctl enable --now nfs-server.service
[root@docker-node1 ~]# vim /etc/exports
[root@docker-node1 ~]# cat /etc/exports
/nfsdata   *(rw,sync,no_root_squash)

[root@docker-node1 ~]# mkdir /nfsdata
[root@docker-node1 ~]# exportfs -rv
exporting *:/nfsdata
[root@docker-node1 ~]# 
[root@docker-node1 ~]# showmount  -e
Export list for docker-node1.zx.org:
/nfsdata *

[root@docker-node1 ~]# ll /data/
total 8
drwxr-xr-x 2            10000 10000    6 Sep 10 18:10 ca_download
drwxr-xr-x 2 root             root    42 Sep 10 18:08 certs
drwxr-xr-x 2            10000 10000    6 Sep 10 18:10 chart_storage
drwx------ 3 systemd-coredump input   18 Sep 10 18:11 database
drwxr-xr-x 2            10000 10000    6 Sep 10 18:10 job_logs
drwxr-xr-x 2 systemd-coredump input   22 Sep 17 12:15 redis
drwxr-xr-x 3            10000 10000   20 Sep 10 18:28 registry
drwxr-xr-x 6 root             root    58 Sep 16 08:00 secret
-rw-r--r-- 1 root             root  2126 Sep 10 18:03 zx.org.crt
-rw------- 1 root             root  3272 Sep 10 18:03 zx.org.key
[root@docker-node1 ~]# 

# install nfs-utils on all k8s nodes
[root@k8s-master & node1 & node2  ~]# dnf install nfs-utils -y

[root@k8s-node2 ~]# showmount  -e 172.25.254.100    # on all three hosts
Export list for 172.25.254.100:
/nfsdata *

(2) Deploy an NFS volume

[root@k8s-master volumes]# vim pod3.yml
[root@k8s-master volumes]# kubectl apply -f pod3.yml
pod/vol1 created
[root@k8s-master volumes]# cat pod3.yml 
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: reg.zx.org/library/nginx:latest
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    nfs:
      server: 172.25.254.100
      path: /nfsdata

[root@k8s-master volumes]# kubectl get pods  -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP             NODE               NOMINATED NODE   READINESS GATES
vol1   1/1     Running   0          7s    10.244.2.106   k8s-node2.zx.org   <none>           <none>

[root@k8s-master volumes]# curl 10.244.2.106
[root@docker-node1 ~]# echo zx > /nfsdata/index.html
[root@k8s-master volumes]# curl 10.244.2.106

5. PersistentVolume persistent volumes

(1) Static persistent volumes (PV) and persistent volume claims (PVC)

PersistentVolume (PV)
  • A PV is a piece of networked storage in the cluster provided by an administrator.

  • A PV is also a cluster resource: a kind of volume plugin,

  • but its lifecycle is independent of any Pod that uses it.

  • The PV API object captures the implementation details of NFS, iSCSI, or other cloud storage systems.

  • PVs can be provisioned in two ways: statically or dynamically.

    • Static PVs: the cluster administrator creates a number of PVs that carry the details of the real backing storage; they exist in the Kubernetes API and are available for consumption.

    • Dynamic PVs: when none of the administrator's static PVs match a user's PVC, the cluster may try to provision a volume specifically for that PVC. This provisioning is based on StorageClass.

PersistentVolumeClaim (PVC)
  • A PVC is a user's request for storage.

  • It is analogous to a Pod: Pods consume node resources, while PVCs consume PV resources.

  • Pods can request specific resources such as CPU and memory; PVCs can request a specific size and access mode for a persistent volume.

  • PVC-to-PV binding is a one-to-one mapping. If no matching PV is found, the PVC remains unbound (Pending) indefinitely.

(2) Volume access modes

  • ReadWriteOnce -- the volume can be mounted read-write by a single node

  • ReadOnlyMany -- the volume can be mounted read-only by many nodes

  • ReadWriteMany -- the volume can be mounted read-write by many nodes

  • On the command line, the access modes are abbreviated:

    • RWO - ReadWriteOnce

    • ROX - ReadOnlyMany

    • RWX - ReadWriteMany

(3) Volume reclaim policies

  • Retain: keep the volume; manual reclamation is required

  • Recycle: scrub the volume, automatically deleting its data (deprecated in current versions)

  • Delete: delete the associated storage asset, such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume

Only NFS and hostPath support Recycle.

AWS EBS, GCE PD, Azure Disk, and OpenStack Cinder volumes support Delete.

(4) Volume states

  • Available: the volume is a free resource, not yet bound to any claim

  • Bound: the volume is bound to a claim

  • Released: the bound claim has been deleted, but the storage resource has not yet been reclaimed by the cluster

  • Failed: the volume's automatic reclamation failed

Example: static PV
[root@docker-node1 ~]# mkdir  /nfsdata/pv{1..3}
[root@docker-node1 nfsdata]# ls
pv1  pv2  pv3

[root@k8s-node1 ~]# showmount  -e 172.25.254.100    # the export is visible from all three nodes

# Write the yml that creates the PVs; a PV is a cluster-scoped resource and does not belong to any namespace
[root@k8s-master pvc]# vim pv.yml
[root@k8s-master pvc]# kubectl apply -f pv.yml 
[root@k8s-master pvc]# cat pv.yml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 172.25.254.100

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 15Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv2
    server: 172.25.254.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 25Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv3
    server: 172.25.254.100

[root@k8s-master pvc]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv1    5Gi        RWO            Retain           Available           nfs            <unset>                          2m23s
pv2    15Gi       RWX            Retain           Available           nfs            <unset>                          2m23s
pv3    25Gi       RWX            Retain           Available           nfs            <unset>                          65s


# Create the PVCs; a PVC is a request to use a PV and must live in the same namespace as the pod that uses it
[root@k8s-master pvc]# vim pvc.yml
[root@k8s-master pvc]# kubectl apply -f pvc.yml 
persistentvolumeclaim/pvc1 created
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created
[root@k8s-master pvc]# cat pvc.yml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
spec:
  storageClassName: nfs
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 15Gi

# PVCs cannot be used from another namespace
[root@k8s-master pvc]# kubectl -n kube-system  get pvc
No resources found in kube-system namespace.


[root@k8s-master pvc]# vim pod.yml
[root@k8s-master pvc]# kubectl apply -f pod.yml
[root@k8s-master pvc]# cat pod.yml 
apiVersion: v1
kind: Pod
metadata:
  name: zx
spec:
  containers:
  - image: reg.zx.org/library/nginx:latest
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: vol1
  volumes:
  - name: vol1
    persistentVolumeClaim:
      claimName: pvc1

IV. StorageClass

Official repo: GitHub - kubernetes-sigs/nfs-subdir-external-provisioner: Dynamic sub-dir volume provisioner on a remote NFS server.

1. StorageClass overview

  • StorageClass provides a way to describe classes of storage; different classes may map to different quality-of-service levels, backup policies, or other policies.

  • Each StorageClass contains the provisioner, parameters, and reclaimPolicy fields, which are used when the StorageClass needs to dynamically provision a PersistentVolume.

2. StorageClass attributes

Attribute reference: Storage Classes | Kubernetes

Provisioner: determines which volume plugin is used to provision PVs; this field is required. It may name an internal provisioner or an external one. External provisioner code lives at kubernetes-incubator/external-storage and includes NFS, Ceph, and others.

Reclaim Policy: the reclaimPolicy field sets the reclaim policy of the PersistentVolumes this class creates, either Delete or Retain; if unspecified, the default is Delete.

3. The NFS Client Provisioner

Source: GitHub - kubernetes-sigs/nfs-subdir-external-provisioner: Dynamic sub-dir volume provisioner on a remote NFS server.

  • The NFS Client Provisioner is an automatic provisioner that uses NFS as its backing storage and creates PVs (and binds them to PVCs) automatically. It does not provide NFS storage itself; an existing NFS server is required.

  • PVs are provisioned on the NFS server under directories named ${namespace}-${pvcName}-${pvName}.

  • When a PV is reclaimed, its directory on the NFS server is renamed to archived-${namespace}-${pvcName}-${pvName}.

4. Deploying the NFS Client Provisioner

(1) Create a ServiceAccount and grant permissions

[root@k8s-master ~]# mkdir storageclass
[root@k8s-master ~]# cd storageclass/
[root@k8s-master storageclass]# vim rbac.yml
[root@k8s-master storageclass]# kubectl apply -f rbac.yml
namespace/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@k8s-master storageclass]# 
[root@k8s-master storageclass]# kubectl -n nfs-client-provisioner get sa
NAME                     SECRETS   AGE
default                  0         9s
nfs-client-provisioner   0         9s

[root@k8s-master storageclass]# cat rbac.yml 
apiVersion: v1
kind: Namespace
metadata:
  name: nfs-client-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-client-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-client-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

(2) Deploy the application

[root@k8s-master ~]# docker load -i nfs-subdir-external-provisioner-4.0.2.tar 

[root@k8s-master ~]# docker tag registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 reg.zx.org/sig-storage/nfs-subdir-external-provisioner:v4.0.2
[root@k8s-master ~]# docker push reg.zx.org/sig-storage/nfs-subdir-external-provisioner:v4.0.2
[root@k8s-master storageclass]# vim deployment.yml
[root@k8s-master storageclass]# kubectl apply -f deployment.yml 
deployment.apps/nfs-client-provisioner created
[root@k8s-master storageclass]# kubectl -n nfs-client-provisioner get deployments.apps nfs-client-provisioner
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           17s
[root@k8s-master storageclass]# cat deployment.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: reg.zx.org/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.25.254.100
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.25.254.100
            path: /nfsdata

(3) Create the storage class

[root@k8s-master storageclass]# vim class.yaml
[root@k8s-master storageclass]# kubectl apply -f class.yaml 
storageclass.storage.k8s.io/nfs-client created
[root@k8s-master storageclass]# kubectl get storageclasses.storage.k8s.io
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  9s
[root@k8s-master storageclass]# cat class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
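
With archiveOnDelete set to "false", the backing directory is removed when the PV is deleted. A hedged variant that keeps the data instead, renaming the directory to the archived-* form described earlier:

parameters:
  archiveOnDelete: "true"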

(4) Create a PVC

[root@k8s-master storageclass]# vim pvc.yml
[root@k8s-master storageclass]# kubectl apply -f pvc.yml
persistentvolumeclaim/test-claim created
[root@k8s-master storageclass]# kubectl get pvc
NAME         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc1         Bound     pv1                                        5Gi        RWO            nfs            <unset>                 35m
pvc2         Bound     pv2                                        15Gi       RWX            nfs            <unset>                 35m
pvc3         Pending                                                                        nfs            <unset>                 35m
test-claim   Bound     pvc-3aeba466-5930-404a-b0d6-de108ac56473   1G         RWX            nfs-client     <unset>                 7s
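
(pvc1 and pvc2 from the earlier static example are Bound; pvc3 remains Pending because it requests ReadOnlyMany and none of the static PVs advertises that access mode.)
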
[root@k8s-master storageclass]# cat pvc.yml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G

(5) Create a test pod

[root@k8s-master storageclass]# vim pod.yml
[root@k8s-master storageclass]# kubectl apply -f pod.yml 
pod/test-pod created
[root@k8s-master storageclass]# cat pod.yml 
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: reg.zx.org/library/busyboxplus:latest
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
[root@k8s-master storageclass]# kubectl get pods
NAME       READY   STATUS      RESTARTS   AGE
test-pod   0/1     Completed   0          4m7s
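
Per the ${namespace}-${pvcName}-${pvName} naming rule described earlier, the SUCCESS file should land in the per-PV subdirectory on the NFS server; a hedged check (directory name taken from the PVC's bound volume above):

[root@docker-node1 ~]# ls /nfsdata/default-test-claim-pvc-3aeba466-5930-404a-b0d6-de108ac56473/
SUCCESS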

(6) Set the default storage class

  • When no default storage class is set, a PVC must name the storage class it uses.

  • Once a default class is set, a PVC can be created without specifying storageClassName.

[root@k8s-master pvc]# kubectl edit sc nfs-client
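
What kubectl edit changes here is the storageclass.kubernetes.io/is-default-class annotation; the same change as a one-liner:

[root@k8s-master pvc]# kubectl patch storageclass nfs-client -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
storageclass.storage.k8s.io/nfs-client patched

After this, a PVC that omits storageClassName is served by nfs-client.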

V. The StatefulSet controller

1. Features

  • StatefulSet is designed to solve the problem of managing stateful services.

  • StatefulSet abstracts application state into two cases:

    • Topology state: instances must start in a particular order, and a newly created Pod must have the same network identity as the Pod it replaces.

    • Storage state: the application's instances are each bound to their own storage data.

  • StatefulSet numbers all of its Pods as $(statefulset name)-$(ordinal), starting from 0.

  • When a Pod is deleted and rebuilt, the rebuilt Pod keeps the same network identity: its topology state is pinned down by "name + ordinal", and each Pod gets a fixed, unique access point via its own DNS record.

2. Components of a StatefulSet

  • Headless Service: defines the pods' network identity and generates resolvable DNS records.

  • volumeClaimTemplates: creates PVCs automatically, with the specified name and size, supplied by a storage class.

  • StatefulSet: the controller that manages the pods.

3. How to build one

# create the headless service
[root@k8s-master pvc]# vim headless.yml
[root@k8s-master pvc]# kubectl apply -f headless.yml 
service/nginx-svc created
[root@k8s-master pvc]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   3d4h
myapp1       ClusterIP   10.99.26.240   <none>        80/TCP    2d6h
nginx-svc    ClusterIP   None           <none>        80/TCP    116s

# create the statefulset
[root@k8s-master pvc]# vim statefulset.yml
[root@k8s-master pvc]# kubectl apply -f statefulset.yml 
statefulset.apps/web created
[root@k8s-master pvc]# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          26s
web-1   1/1     Running   0          21s
web-2   1/1     Running   0          16s

[root@k8s-master pvc]# cat headless.yml 
apiVersion: v1
kind: Service
metadata:
 name: nginx-svc
 labels:
  app: nginx
spec:
 ports:
 - port: 80
   name: web
 clusterIP: None
 selector:
  app: nginx
[root@k8s-master pvc]# cat statefulset.yml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
 name: web
spec:
 serviceName: "nginx-svc"
 replicas: 3
 selector:
  matchLabels:
   app: nginx
 template:
  metadata:
   labels:
    app: nginx
  spec:
   containers:
   - name: nginx
     image: reg.zx.org/library/nginx:latest
     volumeMounts:
       - name: www
         mountPath: /usr/share/nginx/html
 volumeClaimTemplates:
  - metadata:
     name: www
    spec:
     storageClassName: nfs-client
     accessModes:
     - ReadWriteOnce
     resources:
      requests:
       storage: 1Gi
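
volumeClaimTemplates gives every replica its own PVC named <template-name>-<pod-name>, so the manifest above should yield PVCs www-web-0, www-web-1, and www-web-2, each bound to a dynamically provisioned 1Gi volume from nfs-client (a hedged expectation, not captured in the original transcript; kubectl get pvc would list them). Because these PVCs survive pod deletion, each rebuilt pod reattaches to its old data, which is what the test below demonstrates.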

4. Testing

# create an index.html file for each pod
[root@docker-node1 nfsdata]# echo web-0 > default-www-web-0-pvc-31d62985-0b9a-43fc-8d5f-9451c215f79f/index.html
[root@docker-node1 nfsdata]# echo web-1 > default-www-web-1-pvc-fde53553-f3a5-4a4a-b880-b94b38fca816/index.html
[root@docker-node1 nfsdata]# echo web-2 > default-www-web-2-pvc-7866fc97-e410-4224-be13-266ca7cf8c10/index.html

# create a test pod and access web-0 through web-2
[root@k8s-master pvc]# kubectl run -it testpod --image reg.zx.org/library/busyboxplus:latest
If you don't see a command prompt, try pressing enter.
/ # curl 10.244.2.109
web-0

[root@k8s-master pvc]# kubectl run -it testpod --image reg.zx.org/library/busyboxplus:latest
/ # curl  web-0.nginx-svc
web-0
/ # curl  web-1.nginx-svc
web-1
/ # curl  web-2.nginx-svc
web-2

# delete and recreate the statefulset
[root@k8s-master statefulset]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@k8s-master statefulset]# kubectl apply  -f statefulset.yml
statefulset.apps/web created
Or change the number of pods:
[root@k8s-master pvc]# kubectl scale statefulset web --replicas 0
statefulset.apps/web scaled
[root@k8s-master pvc]# kubectl get pods -o wide
No resources found in default namespace.
[root@k8s-master pvc]# kubectl scale statefulset web --replicas 3
statefulset.apps/web scaled


# access works exactly as before
[root@k8s-master pvc]# kubectl attach testpod -c testpod -i -t
If you don't see a command prompt, try pressing enter.
/ # curl  web-0.nginx-svc
web-0
/ # curl  web-1.nginx-svc
web-1
/ # curl  web-2.nginx-svc
web-2

5. Scaling a StatefulSet

Before scaling a StatefulSet, first confirm that the application can safely be scaled.

Change the replica count with a command:

kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>

Or change the replica count by editing the resource:

kubectl edit statefulsets.apps <stateful-set-name>
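
A third option is to patch the replica count directly (equivalent to the scale command above):

kubectl patch statefulsets.apps <stateful-set-name> -p '{"spec":{"replicas":<new-replicas>}}'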


Original article: https://blog.csdn.net/weixin_68256171/article/details/142311678
