Ceph environment preparation
Create a storage pool
ceph osd pool create test-k8s
Enable the rbd application type on the pool
ceph osd pool application get test-k8s
ceph osd pool application enable test-k8s rbd
ceph osd pool application get test-k8s
Create a block device
rbd create -s 20G test-k8s/xiao
rbd ls -p test-k8s
rbd info test-k8s/xiao
K8s node environment preparation
Install ceph-common on all K8s nodes
apt -y install ceph-common
Copy the authentication keyring
scp /etc/ceph/ceph.client.admin.keyring 10.0.0.231:/etc/ceph
scp /etc/ceph/ceph.client.admin.keyring 10.0.0.232:/etc/ceph
scp /etc/ceph/ceph.client.admin.keyring 10.0.0.233:/etc/ceph
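To confirm a worker node can reach the cluster with the copied keyring before any Pod is scheduled, a quick check like the following can be run on the node (a sketch; the mon address and pool name are the ones used throughout this walkthrough):
rbd ls test-k8s --id admin --keyring /etc/ceph/ceph.client.admin.keyring -m 10.0.0.141:6789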
RBD volume via keyring: hands-on example
Write the resource manifest
cat 01-deploy-xiao-volumes-rbd.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiao-rbd
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      volumes:
      - name: data
        # Use the rbd volume type
        rbd:
          # Ceph mon addresses
          monitors:
          - 10.0.0.141:6789
          - 10.0.0.142:6789
          - 10.0.0.143:6789
          # Ceph storage pool
          pool: test-k8s
          # Name of the RBD image
          image: xiao
          # Filesystem type, e.g. "ext4", "xfs", "ntfs"; defaults to ext4 if omitted.
          fsType: xfs
          # Ceph user for connecting to the cluster; defaults to admin if omitted.
          user: admin
          # Path to the keyring used for authentication; defaults to /etc/ceph/keyring if omitted.
          keyring: /etc/ceph/ceph.client.admin.keyring
      containers:
      - name: c1
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
Check the mount point inside the container
kubectl get pods -o wide
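To see the RBD-backed filesystem inside the container itself, a quick check such as the following works (a sketch; the deployment name comes from the manifest above):
kubectl exec deploy/deploy-xiao-rbd -- df -h /usr/share/nginx/html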
Observe the worker node
rbd showmapped 2>/dev/null
K8s to Ceph: RBD volume via secretRef (recommended approach)
Look up the key of Ceph's admin user
grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}' | more
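Rather than pasting this key into a manifest by hand, the secret can also be created straight from the keyring; a minimal sketch, assuming the secret name ceph-secret used by the manifest below:
kubectl create secret generic ceph-secret --type=kubernetes.io/rbd \
  --from-literal=key="$(awk '/key/ {print $NF}' /etc/ceph/ceph.client.admin.keyring)"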
RBD volume via secretRef: hands-on example
Write the resource manifest
cat 02-deploy-xiao-volumes-rbd-secretRef.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
stringData:
  key: AQA6fU5nhO+dOBAAx26GykcFxPPJo9kSOTvghQ==
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiao-secretref
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      volumes:
      - name: data
        # Use the rbd volume type
        rbd:
          # Ceph mon addresses
          monitors:
          - 10.0.0.141:6789
          - 10.0.0.142:6789
          - 10.0.0.143:6789
          # Ceph storage pool
          pool: test-k8s
          # Name of the RBD image
          image: xiao
          # Filesystem type, e.g. "ext4", "xfs", "ntfs"; defaults to ext4 if omitted.
          fsType: xfs
          # Ceph user for connecting to the cluster; defaults to admin if omitted.
          user: admin
          # Secret holding the authentication key
          secretRef:
            # Name of the auth secret
            name: ceph-secret
      containers:
      - name: c1
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
Check the mount point inside the container
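For example (a sketch; the deployment name comes from the manifest above), the mount can be inspected from inside the Pod and the active watcher can be seen on the Ceph side:
kubectl exec deploy/deploy-xiao-secretref -- df -h /usr/share/nginx/html
rbd status test-k8s/xiao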
K8s to Ceph: PV and PVC example
Write the resource manifest
cat 03-deploy-xiao-pv-pvc-rbd-secretRef.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-rbd-01
spec:
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  rbd:
    image: xiao
    monitors:
    - 10.0.0.141:6789
    - 10.0.0.142:6789
    - 10.0.0.143:6789
    pool: test-k8s
    fsType: xfs
    secretRef:
      name: ceph-secret
    user: admin
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-rbd
spec:
  # "" means do not use the default StorageClass
  storageClassName: ""
  # Bind the PVC to the specified PV
  volumeName: pv-rbd-01
  accessModes:
  - ReadWriteMany
  resources:
    limits:
      storage: 2Gi
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
stringData:
  key: AQA6fU5nhO+dOBAAx26GykcFxPPJo9kSOTvghQ==
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiao-secretref
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      terminationGracePeriodSeconds: 3
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-pvc-rbd
      containers:
      - name: c1
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
Test and verify
kubectl apply -f 03-deploy-xiao-pv-pvc-rbd-secretRef.yaml
kubectl get pods -o wide
curl <POD_IP>
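It is also worth confirming that the PV and PVC bound to each other (object names come from the manifest above):
kubectl get pv pv-rbd-01
kubectl get pvc test-pvc-rbd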
K8s using an RBD dynamic StorageClass backed by the Ceph cluster
Download cloud-computing-stack.git
Modify the configuration files
csi-rbd-secret.yaml
csi-config-map.yaml
storageclass.yaml
# The cluster ID must be wrapped in double quotes (see the sketch below)
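For reference, the relevant part of storageclass.yaml usually ends up looking like the sketch below; apart from the pool name and the csi-rbd-sc name used later in this walkthrough, the values are placeholders or assumptions (the clusterID is the Ceph cluster fsid and must match csi-config-map.yaml):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  # fsid of the Ceph cluster, in double quotes
  clusterID: "<ceph-cluster-fsid>"
  # Pool that will hold the dynamically provisioned images
  pool: test-k8s
  imageFeatures: layering
  # Secret name/namespace are assumptions based on csi-rbd-secret.yaml
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete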
Install the RBD StorageClass
kubectl apply -f deploy/rbd/kubernetes/
Check the pod status
kubectl get pods -o wide
Check whether RBD images are created dynamically for the PVs
rbd ls -l -p test-k8s
Test and verify
cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc01
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: csi-rbd-sc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc02
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: csi-rbd-sc
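Apply the claims and check that a PV is provisioned for each one automatically (file name from the step above):
kubectl apply -f pvc.yaml
kubectl get pv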
Check the StorageClass and PVCs
kubectl get sc csi-rbd-sc
kubectl get pvc rbd-pvc01 rbd-pvc02
K8s using CephFS with a key via secretRef
Write the resource manifest
cat 04-deploy-volumes-cephFS-secretRef.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
stringData:
  key: AQA6fU5nhO+dOBAAx26GykcFxPPJo9kSOTvghQ==
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiao-secretref
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      terminationGracePeriodSeconds: 3
      volumes:
      - name: data
        cephfs:
          monitors:
          - 10.0.0.141:6789
          - 10.0.0.142:6789
          - 10.0.0.143:6789
          readOnly: true
          user: admin
          secretRef:
            name: ceph-secret
      containers:
      - name: c1
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
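After applying the manifest, the CephFS mount can be verified from inside the Pod (a sketch; the deployment name comes from the manifest above, and since no path is set the volume mounts the root of the CephFS, read-only):
kubectl apply -f 04-deploy-volumes-cephFS-secretRef.yaml
kubectl exec deploy/deploy-xiao-secretref -- df -h /usr/share/nginx/html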
With a StorageClass, PVs are created automatically.