
Using k8s-csi-s3 with MinIO as Your k8s Storage

May 11, 2023 · 1216 words · about 3 minutes to read

Introduction

I've wanted to use S3 as storage for k3s for a while. Today I came across the project below, and it looks pretty solid:

https://github.com/yandex-cloud/k8s-csi-s3

Setup

Installation is actually quite simple; all the manifests live at

https://github.com/yandex-cloud/k8s-csi-s3/tree/master/deploy/kubernetes

I tweaked them a bit myself. Note that all of the resources below are created in the k8s-csi-s3 namespace.
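
The namespace itself is not created by any of the manifests below, so create it first:

kubectl create namespace k8s-csi-s3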

First, you need to create a Secret:

secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
  # Namespace depends on the configuration in the storageclass.yaml
  namespace: k8s-csi-s3
stringData:
  accessKeyID: xxx
  secretAccessKey: xxx
  # For AWS set it to "https://s3.<region>.amazonaws.com", for example https://s3.eu-central-1.amazonaws.com
  endpoint: https://xxx.xxx.cn
  # For AWS set it to AWS region
  #region: ""
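
If you'd rather not keep credentials in a file, the same Secret can be created straight from the CLI; this is equivalent to the manifest above (same key names, placeholder values):

kubectl create secret generic csi-s3-secret \
  --namespace k8s-csi-s3 \
  --from-literal=accessKeyID=xxx \
  --from-literal=secretAccessKey=xxx \
  --from-literal=endpoint=https://xxx.xxx.cn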

Next, deploy the driver:

provisioner.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provisioner-sa
  namespace: k8s-csi-s3
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: csi-provisioner-sa
    namespace: k8s-csi-s3
roleRef:
  kind: ClusterRole
  name: external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
  name: csi-provisioner-s3
  namespace: k8s-csi-s3
  labels:
    app: csi-provisioner-s3
spec:
  selector:
    app: csi-provisioner-s3
  ports:
    - name: csi-s3-dummy
      port: 65535
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-provisioner-s3
  namespace: k8s-csi-s3
spec:
  serviceName: "csi-provisioner-s3"
  replicas: 1
  selector:
    matchLabels:
      app: csi-provisioner-s3
  template:
    metadata:
      labels:
        app: csi-provisioner-s3
    spec:
      serviceAccount: csi-provisioner-sa
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
        - key: CriticalAddonsOnly
          operator: Exists
      containers:
        - name: csi-provisioner
          image: quay.bboysoul.cn/k8scsi/csi-provisioner:v2.1.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=4"
          env:
            - name: ADDRESS
              value: /var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/kubelet/plugins/ru.yandex.s3.csi
        - name: csi-s3
          image: git.bboysoul.cn/bboysoul/csi-s3:0.35.4
          imagePullPolicy: IfNotPresent
          args:
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--nodeid=$(NODE_ID)"
            - "--v=4"
          env:
            - name: CSI_ENDPOINT
              value: unix:///var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/kubelet/plugins/ru.yandex.s3.csi
      volumes:
        - name: socket-dir
          emptyDir: {}

I didn't change anything here except the image names; some of these images can't be pulled from inside China, if you know what I mean.

attacher.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-attacher-sa
  namespace: k8s-csi-s3
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-attacher-runner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-role
subjects:
  - kind: ServiceAccount
    name: csi-attacher-sa
    namespace: k8s-csi-s3
roleRef:
  kind: ClusterRole
  name: external-attacher-runner
  apiGroup: rbac.authorization.k8s.io
---
# needed for StatefulSet
kind: Service
apiVersion: v1
metadata:
  name: csi-attacher-s3
  namespace: k8s-csi-s3
  labels:
    app: csi-attacher-s3
spec:
  selector:
    app: csi-attacher-s3
  ports:
    - name: csi-s3-dummy
      port: 65535
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-attacher-s3
  namespace: k8s-csi-s3
spec:
  serviceName: "csi-attacher-s3"
  replicas: 1
  selector:
    matchLabels:
      app: csi-attacher-s3
  template:
    metadata:
      labels:
        app: csi-attacher-s3
    spec:
      serviceAccount: csi-attacher-sa
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
        - key: CriticalAddonsOnly
          operator: Exists
      containers:
        - name: csi-attacher
          image: quay.bboysoul.cn/k8scsi/csi-attacher:v3.0.1
          args:
            - "--v=4"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/kubelet/plugins/ru.yandex.s3.csi
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/ru.yandex.s3.csi
            type: DirectoryOrCreate

Same story here: only the image names were changed.

csi-s3.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-s3
  namespace: k8s-csi-s3
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-s3
# rules as in the upstream deploy/kubernetes/csi-s3.yaml
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "update"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-s3
subjects:
  - kind: ServiceAccount
    name: csi-s3
    namespace: k8s-csi-s3
roleRef:
  kind: ClusterRole
  name: csi-s3
  apiGroup: rbac.authorization.k8s.io
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-s3
  namespace: k8s-csi-s3
spec:
  selector:
    matchLabels:
      app: csi-s3
  template:
    metadata:
      labels:
        app: csi-s3
    spec:
      tolerations:
        - key: CriticalAddonsOnly
          operator: Exists
        - operator: Exists
          effect: NoExecute
          tolerationSeconds: 300
      serviceAccount: csi-s3
      hostNetwork: true
      containers:
        - name: driver-registrar
          image: git.bboysoul.cn/bboysoul/csi-node-driver-registrar:v1.2.0
          args:
            - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
            - "--v=4"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration/
        - name: csi-s3
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: git.bboysoul.cn/bboysoul/csi-s3:0.35.4
          imagePullPolicy: IfNotPresent
          args:
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--nodeid=$(NODE_ID)"
            - "--v=4"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: stage-dir
              mountPath: /var/lib/kubelet/plugins/kubernetes.io/csi
              mountPropagation: "Bidirectional"
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: fuse-device
              mountPath: /dev/fuse
            - name: systemd-control
              mountPath: /run/systemd
      volumes:
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: DirectoryOrCreate
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins/ru.yandex.s3.csi
            type: DirectoryOrCreate
        - name: stage-dir
          hostPath:
            path: /var/lib/kubelet/plugins/kubernetes.io/csi
            type: DirectoryOrCreate
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory
        - name: fuse-device
          hostPath:
            path: /dev/fuse
        - name: systemd-control
          hostPath:
            path: /run/systemd
            type: DirectoryOrCreate

Here, too, only the image names were changed.
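
With all four manifests ready, apply them and make sure the pods come up before moving on (filenames assume the ones used above):

kubectl apply -f secret.yaml -f provisioner.yaml -f attacher.yaml -f csi-s3.yaml

# the provisioner and attacher StatefulSet pods should be Running,
# plus one csi-s3 DaemonSet pod per node
kubectl get pods -n k8s-csi-s3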

Next, create the StorageClass:

storageclass.yaml

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
provisioner: ru.yandex.s3.csi
parameters:
  mounter: geesefs
  # you can set mount options here, for example limit memory cache size (recommended)
  options: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666"
  # to use an existing bucket, specify it here:
  bucket: k8s
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: k8s-csi-s3
  csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: k8s-csi-s3
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: k8s-csi-s3
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: k8s-csi-s3

If you want every PV to get its own bucket, don't set the bucket parameter. But if, like me, you want all PVs created inside a single bucket, set the bucket parameter. Either way, remember to adjust the secret name and the namespace the secret lives in to match your setup.
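
For reference, the per-PV-bucket variant of the parameters block is just the same block with the bucket line removed:

parameters:
  mounter: geesefs
  options: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666"
  # no "bucket" key: the provisioner creates a separate bucket for each
  # dynamically provisioned PV (the four secret-name/namespace pairs
  # stay exactly as above)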

Once the StorageClass is in place, try creating a PVC to test whether PV provisioning succeeds:

# Dynamically provisioned PVC:
# a bucket or path inside a bucket is created automatically
# for the PV, and removed again when the PV is deleted
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-s3
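
A quick way to check that dynamic provisioning actually works (assuming the PVC above was saved as pvc.yaml):

kubectl apply -f pvc.yaml

# the PVC should reach Bound once the driver has created the bucket/prefix
kubectl get pvc csi-s3-pvc -n default
kubectl get pv

If it sticks in Pending, kubectl describe pvc csi-s3-pvc and the provisioner pod's logs in the k8s-csi-s3 namespace are the first places to look.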

Final thoughts

My MinIO sits behind an nginx proxy, and for backend storage reliability really matters, so after going back and forth on it, I'll stick with NFS after all.

Feel free to follow my blog: www.bboy.app

Have Fun