1. Install ceph-common on the K8s nodes
CephFS is Ceph's file system built on top of RADOS (the reliable, scalable distributed object store). It splits file data into objects and distributes them across multiple storage nodes in the cluster, which is what gives it high availability and scalability.
First, the ceph-common tools need to be installed on every K8s node:
yum -y install epel-release ceph-common
2. Static Provisioning
Reference: Using CephFS as backend storage for K8s (static and dynamic provisioning)
With static provisioning, the CephFS file system has to be created in advance and then handed over to K8s.
2.1 Create the FS and an authorized user in Ceph
Create the storage pools:
# Data pool
[root@k8s-master31 ~]# ceph osd pool create cephfs_data_pool 16
# Metadata pool
[root@k8s-master31 ~]# ceph osd pool create ceph_metadata_pool 8
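The trailing arguments (16 and 8) are the placement-group counts, which are small values suited to a lab cluster. As a rough sanity check, a widely used rule of thumb targets about 100 PGs per OSD divided by the pool's replica count, rounded up to a power of two. A small sketch of that heuristic (illustrative only, not the output of any official Ceph sizing tool):

```python
# Rough PG-count heuristic: target ~100 PGs per OSD, divide by the
# pool's replica count, then round up to the nearest power of two.
# This is a common rule of thumb, not an official sizing formula.
def suggest_pg_num(osd_count: int, replica_count: int,
                   target_pgs_per_osd: int = 100) -> int:
    raw = osd_count * target_pgs_per_osd / replica_count
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# A small 3-OSD cluster with 3x replication:
print(suggest_pg_num(3, 3))   # -> 128
print(suggest_pg_num(10, 3))  # -> 512
```

For production pools, the `pg_autoscaler` module (enabled by default in recent Ceph releases) can manage pg_num automatically.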
Create the FS:
[root@k8s-master31 ~]# ceph fs new k8s-cephfs ceph_metadata_pool cephfs_data_pool
[root@k8s-master31 ~]# ceph orch apply mds cephfs --placement="3 k8s-master31 k8s-node34 k8s-node35"
Scheduled mds.cephfs update...
Note: ceph fs new takes the metadata pool first and the data pool second.
--placement="3 k8s-master31 k8s-node34 k8s-node35" specifies where the MDS service is deployed:
3 means you want three MDS instances.
k8s-master31 k8s-node34 k8s-node35 are the chosen nodes; the three MDS instances will be placed on k8s-master31, k8s-node34, and k8s-node35 respectively.
Create the user fs-user and grant it access to the pool cephfs_data_pool.
View the admin user's key:
[root@k8s-master31 ~]# ceph auth get-key client.admin
2.2 Create the Secret in K8s
[root@k8s-master31 cephfs]# ceph auth get-key client.admin
AQAeg9FmYRqZGxAACbp761MR8Uf+D3VQTu0nwQ==
[root@k8s-master31 cephfs]# export ADMIN_USER_SECRET='AQAeg9FmYRqZGxAACbp761MR8Uf+D3VQTu0nwQ=='
[root@k8s-master31 cephfs]# kubectl create secret generic ceph-admin-default-secret --type="kubernetes.io/rbd" \
--from-literal=key=$ADMIN_USER_SECRET \
--namespace=default
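Kubernetes stores Secret values base64-encoded, so the key captured above ends up under data.key in encoded form when you inspect the Secret with -o yaml. A quick sketch of that encoding (illustrative; the key value is the one shown above):

```python
import base64

# Kubernetes stores Secret values base64-encoded; this reproduces what
# `kubectl create secret generic --from-literal=key=...` puts in data.key.
admin_key = "AQAeg9FmYRqZGxAACbp761MR8Uf+D3VQTu0nwQ=="
encoded = base64.b64encode(admin_key.encode()).decode()
print(encoded)

# Decoding the stored value returns the original key:
assert base64.b64decode(encoded).decode() == admin_key
```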
2.3 Use CephFS storage directly in a pod
vi cephfs-test-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test-pod
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: data-volume
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: data-volume
    cephfs:
      monitors:
      - 10.0.0.31:6789
      - 10.0.0.34:6789
      - 10.0.0.35:6789
      path: /
      user: admin
      secretRef:
        name: ceph-admin-default-secret
kubectl apply -f cephfs-test-pod.yml
Check the pod:
kubectl get pods
You can exec into the pod and check the mounted filesystem:
kubectl exec -it cephfs-test-pod -- /bin/bash
df -hl
PS: creating files in the mount fails here.
2.4 Create a PV backed by CephFS
vi cephfs-test-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-test-pv
spec:
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 2Gi
  persistentVolumeReclaimPolicy: Retain
  cephfs:
    monitors:
    - 10.0.0.31:6789
    - 10.0.0.34:6789
    - 10.0.0.35:6789
    path: /
    user: admin
    secretRef:
      name: ceph-admin-default-secret
[root@k8s-master31 cephfs]# kubectl apply -f cephfs-test-pv.yml
Warning: spec.cephfs: deprecated in v1.28, non-functional in v1.31+
persistentvolume/cephfs-test-pv created
Deprecated in v1.28: starting with Kubernetes 1.28, the in-tree spec.cephfs volume type is deprecated, which means it will be removed or replaced in a future release.
Non-functional in v1.31+: from Kubernetes 1.31 onward this configuration no longer works, so new deployments should use the CephFS CSI driver instead.
Create a PVC to bind the PV:
vi cephfs-test-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
kubectl apply -f cephfs-test-pvc.yml
Check the PVC and PV:
kubectl get pvc
kubectl get pv
Test mounting the PVC in a pod:
vi cephfs-test-pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test-pod1
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: data-volume
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: cephfs-test-pvc
      readOnly: false
kubectl apply -f cephfs-test-pod1.yml
Check the pod:
kubectl get pods
Exec into the pod and create a file:
[root@k8s-master31 cephfs]# kubectl exec -it cephfs-test-pod1 -- bash
root@cephfs-test-pod1:/usr/share/nginx/html# df -hl /usr/share/nginx/html
Filesystem Size Used Avail Use% Mounted on
ceph-fuse 973G 0 973G 0% /usr/share/nginx/html
root@cephfs-test-pod1:/# cd /usr/share/nginx/html/
root@cephfs-test-pod1:/usr/share/nginx/html# echo www > rm.txt
root@cephfs-test-pod1:/usr/share/nginx/html# cat rm.txt
www
root@cephfs-test-pod1:/usr/share/nginx/html# ls -l rm.txt
-rw-r--r-- 1 root root 4 Sep 3 01:39 rm.txt
3. Dynamic Provisioning
Reference: Kubernetes CephFS volumes with the CSI driver
First, create a namespace for the storage provisioner:
kubectl create ns ceph-csi-cephfs
Log in to the Ceph cluster and fetch its configuration:
[root@k8s-master31 ~]# ceph config generate-minimal-conf
# minimal ceph.conf for e3df76f6-66a9-11ef-b055-000c29e945c2
[global]
    fsid = e3df76f6-66a9-11ef-b055-000c29e945c2
    mon_host = [v2:10.0.0.31:3300/0,v1:10.0.0.31:6789/0] [v2:10.0.0.34:3300/0,v1:10.0.0.34:6789/0] [v2:10.0.0.35:3300/0,v1:10.0.0.35:6789/0]
[root@k8s-master31 ~]# ceph auth get-key client.admin
AQAeg9FmYRqZGxAACbp761MR8Uf+D3VQTu0nwQ==
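The monitors list used later in values.yaml can be read straight off the mon_host line: the v1 entries are the legacy 6789 endpoints that the CSI config uses. A small sketch (not part of any Ceph tooling) that extracts them:

```python
import re

# Extract the v1 (port 6789) monitor addresses from a minimal ceph.conf
# mon_host line; these are the values used for `monitors:` in values.yaml.
mon_host = ("[v2:10.0.0.31:3300/0,v1:10.0.0.31:6789/0] "
            "[v2:10.0.0.34:3300/0,v1:10.0.0.34:6789/0] "
            "[v2:10.0.0.35:3300/0,v1:10.0.0.35:6789/0]")
monitors = re.findall(r"v1:([\d.]+:\d+)/", mon_host)
print(monitors)  # ['10.0.0.31:6789', '10.0.0.34:6789', '10.0.0.35:6789']
```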
Create a subvolume group:
[root@k8s-master31 ~]# ceph fs subvolumegroup create cephfs csi
[root@k8s-master31 ~]# ceph fs subvolumegroup ls cephfs
[
    {"name": "csi"}
]
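Since subvolumegroup ls prints JSON, the result can also be checked programmatically; a minimal sketch using the output shown above:

```python
import json

# `ceph fs subvolumegroup ls cephfs` prints a JSON list of group names;
# parse it and confirm the "csi" group used by the CSI driver exists.
output = '[{"name": "csi"}]'
groups = [g["name"] for g in json.loads(output)]
print(groups)  # ['csi']
assert "csi" in groups
```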
Add the Helm repository:
helm repo add ceph-csi https://ceph.github.io/csi-charts
helm show values ceph-csi/ceph-csi-cephfs > defaultValues.yaml
Edit the values file:
vi values.yaml
---
csiConfig:
- clusterID: e3df76f6-66a9-11ef-b055-000c29e945c2  # cluster ID
  monitors:
  - 10.0.0.31:6789
  - 10.0.0.34:6789
  - 10.0.0.35:6789
  cephFS:
    subvolumeGroup: "csi"

secret:
  name: csi-cephfs-secret
  adminID: admin  # user
  adminKey: AQAeg9FmYRqZGxAACbp761MR8Uf+D3VQTu0nwQ==  # key
  create: true

storageClass:
  create: true
  name: csi-cephfs-sc
  clusterID: e3df76f6-66a9-11ef-b055-000c29e945c2  # cluster ID
  # (required) name of the CephFS file system to create volumes in
  fsName: cephfs
  reclaimPolicy: Delete
  allowVolumeExpansion: true
  volumeNamePrefix: "poc-k8s-"
  provisionerSecret: csi-cephfs-secret
  controllerExpandSecret: csi-cephfs-secret
  nodeStageSecret: csi-cephfs-secret
Install with Helm:
helm upgrade --install ceph-csi-cephfs ceph-csi/ceph-csi-cephfs --values ./values.yaml
Deploy a PVC to test:
vi pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs-sc
Create the PVC and check it:
kubectl apply -f pvc.yaml
[root@k8s-master31 ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-cephfs-pvc Bound pvc-ac8b431f-e807-4e7f-91ec-70dc55270e05 1Gi RWX csi-cephfs-sc 7m37s
Deploy two pods mounting the same PVC to test
Create the two pods:
cat <<'EOF'> pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test-pod1
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: data-volume
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: csi-cephfs-pvc
      readOnly: false
---
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test-pod2
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: data-volume
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: csi-cephfs-pvc
      readOnly: false
EOF
kubectl apply -f pod.yaml
Create files from pod2:
[root@k8s-master31 ~]# kubectl exec -it cephfs-test-pod2 -- bash
root@cephfs-test-pod2:/# cd /usr/share/nginx/html/
root@cephfs-test-pod2:/usr/share/nginx/html# ls
root@cephfs-test-pod2:/usr/share/nginx/html# touch 1{1..5}.txt
root@cephfs-test-pod2:/usr/share/nginx/html# ls
11.txt 12.txt 13.txt 14.txt 15.txt
Check from pod1:
[root@k8s-master31 ~]# kubectl exec -it cephfs-test-pod1 -- bash
root@cephfs-test-pod1:/# cd /usr/share/nginx/html/
root@cephfs-test-pod1:/usr/share/nginx/html# ls
11.txt 12.txt 13.txt 14.txt 15.txt
root@cephfs-test-pod1:/usr/share/nginx/html#
Test using volumeClaimTemplates to create the PV and PVC dynamically
vi mysql.yml
# headless service
apiVersion: v1
kind: Service
metadata:
  name: mysql-hl
  labels:
    app: mysql-hl
spec:
  clusterIP: None
  ports:
  - name: mysql-port
    port: 3306
  selector:
    app: mysql
---
# NodePort service
apiVersion: v1
kind: Service
metadata:
  name: mysql-np
  labels:
    app: mysql-np
spec:
  ports:
  - name: master-port
    port: 3306
    nodePort: 31306
    targetPort: 3306
  selector:
    app: mysql
  type: NodePort
  externalTrafficPolicy: Cluster
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "mysql-hl"
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0.20
        ports:
        - containerPort: 3306
          name: master-port
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        - name: TZ
          value: "Asia/Shanghai"
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: csi-cephfs-sc
      resources:
        requests:
          storage: 2Gi
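volumeClaimTemplates makes the StatefulSet controller create one PVC per replica, named <template-name>-<statefulset-name>-<ordinal>. A small sketch of the names to expect:

```python
# StatefulSet PVCs created from volumeClaimTemplates are named
# <template-name>-<statefulset-name>-<ordinal>, one per replica.
def expected_pvc_names(template: str, statefulset: str, replicas: int) -> list[str]:
    return [f"{template}-{statefulset}-{i}" for i in range(replicas)]

# With the manifest above (template "mysql-data", StatefulSet "mysql", 1 replica):
print(expected_pvc_names("mysql-data", "mysql", 1))  # ['mysql-data-mysql-0']
```

This matches the mysql-data-mysql-0 PVC that shows up in the kubectl get pvc output below.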
kubectl apply -f mysql.yml
Check:
[root@k8s-master31 ~]# kubectl get pod mysql-0 -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-0 1/1 Running 0 3m32s 172.16.192.11 k8s-node35 <none> <none>
[root@k8s-master31 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS    REASON   AGE
pvc-ac8b431f-e807-4e7f-91ec-70dc55270e05   1Gi        RWX            Delete           Bound    default/csi-cephfs-pvc       csi-cephfs-sc            28m
pvc-cf0ea7e3-e320-4482-844e-c866190d1a2a   2Gi        RWO            Delete           Bound    default/mysql-data-mysql-0   csi-cephfs-sc            4m29s
[root@k8s-master31 ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-data-mysql-0 Bound pvc-cf0ea7e3-e320-4482-844e-c866190d1a2a 2Gi RWO csi-cephfs-sc 4m33s
Log in to MySQL:
[root@k8s-master31 ~]# kubectl exec -it mysql-0 -- bash
root@mysql-0:/# mysql -u root -proot
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 8.0.20 MySQL Community Server - GPL

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.01 sec)
# Check how much disk each database uses
mysql> SELECT table_schema AS "Database",
    ->        ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS "Size (MB)"
    ->   FROM information_schema.tables
    ->   GROUP BY table_schema;
+--------------------+-----------+
| Database | Size (MB) |
+--------------------+-----------+
| information_schema | 0.00 |
| mysql | 7.69 |
| performance_schema | 0.00 |
| sys | 0.02 |
+--------------------+-----------+
4 rows in set (0.27 sec)