Using Ceph to Provide File Storage

In the previous post I covered how to build a complete Ceph cluster with ceph-deploy; this time I'll walk through how to use Ceph's file storage (CephFS).

Check the Ceph cluster status

[[email protected] ~]# ceph -s
cluster:
id: faa2e2c4-98bc-47c4-a5b4-a478721b7ea2
health: HEALTH_OK

services:
mon: 3 daemons, quorum ceph--1,ceph--2,ceph--3 (age 18h)
mgr: ceph--1(active, since 17h), standbys: ceph--2, ceph--3
mds: 3 up:standby
osd: 3 osds: 3 up (since 17h), 3 in (since 17h)
rgw: 3 daemons active (ceph--1, ceph--2, ceph--3)

task status:

data:
pools: 4 pools, 128 pgs
objects: 219 objects, 1.2 KiB
usage: 3.0 GiB used, 147 GiB / 150 GiB avail
pgs: 128 active+clean

[[email protected] ~]# ceph mds stat
3 up:standby

To use Ceph's file storage, the ceph-mds component must be running. A Ceph file system also requires at least two RADOS pools, one for data and one for metadata. Use a higher replication level for the metadata pool, because any data loss in that pool can make the entire file system inaccessible. Consider lower-latency storage (such as SSDs) for the metadata pool as well, since its latency directly affects the file system operation latency observed by clients.

Create the data and metadata pools

[[email protected] ~]# ceph osd pool create cephfs_data 32
pool 'cephfs_data' created
[[email protected] ~]# ceph osd pool create cephfs_metadata 32
pool 'cephfs_metadata' created
[[email protected] ~]# ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
5 cephfs_data
6 cephfs_metadata
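
As mentioned above, the metadata pool benefits from a higher replication level than the data pool. A minimal sketch of how that could be tuned, assuming both are plain replicated pools and that 3 replicas with min_size 2 are acceptable for this small cluster (the values are illustrative):

# raise the replica count of the metadata pool (values are illustrative)
ceph osd pool set cephfs_metadata size 3
ceph osd pool set cephfs_metadata min_size 2
# confirm the setting
ceph osd pool get cephfs_metadata size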

Create the Ceph file system

[[email protected] ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 6 and data pool 5
[[email protected] ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[[email protected] ~]# ceph mds stat
cephfs:1 {0=ceph--2=up:active} 2 up:standby
[[email protected] ~]# ceph -s
cluster:
id: faa2e2c4-98bc-47c4-a5b4-a478721b7ea2
health: HEALTH_OK

services:
mon: 3 daemons, quorum ceph--1,ceph--2,ceph--3 (age 18h)
mgr: ceph--1(active, since 17h), standbys: ceph--2, ceph--3
mds: cephfs:1 {0=ceph--2=up:active} 2 up:standby
osd: 3 osds: 3 up (since 18h), 3 in (since 18h)
rgw: 3 daemons active (ceph--1, ceph--2, ceph--3)

task status:
scrub status:
mds.ceph--2: idle

data:
pools: 6 pools, 192 pgs
objects: 241 objects, 3.4 KiB
usage: 3.0 GiB used, 147 GiB / 150 GiB avail
pgs: 192 active+clean
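
The mounts below authenticate as client.admin for simplicity. In practice you would usually create a dedicated cephx user limited to this file system. A sketch, using a hypothetical client name client.fsuser with read/write access to the root of cephfs:

# create a restricted cephx user for this file system (client name is hypothetical)
ceph fs authorize cephfs client.fsuser / rw
# the command prints a keyring; store it on the client, e.g. as /etc/ceph/ceph.client.fsuser.keyring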

Mounting the file system generally falls into two categories: kernel-level mounts and user-space (FUSE) mounts. The Linux kernel already ships the CephFS kernel client natively.

Kernel-level mount

[[email protected] cephfs]# lsmod |grep ceph   # kernel modules for the CephFS kernel client shipped with CentOS
ceph 363016 1
libceph 306750 1 ceph
libcrc32c 12644 1 libceph
dns_resolver 13140 1 libceph
[[email protected] ~]# which mount.ceph   # locate the mount helper
/sbin/mount.ceph
[[email protected] ~]# rpm -qf /sbin/mount.ceph
ceph-common-14.2.16-0.el7.x86_64
[[email protected] ~]# mkdir -p /mnt/cephfs/   # create the mount point
[[email protected] ~]# mount -t ceph 10.140.11.8:6789:/ /mnt/cephfs/ -o name=admin
[[email protected] ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.9G 0 7.9G 0% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 7.9G 17M 7.9G 1% /run
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/vda1 94G 2.7G 87G 3% /
tmpfs 7.9G 52K 7.9G 1% /var/lib/ceph/osd/ceph-0
tmpfs 1.6G 0 1.6G 0% /run/user/1000
10.140.11.8:6789:/ 47G 0 47G 0% /mnt/cephfs
[[email protected] ~]# ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 150 GiB 147 GiB 14 MiB 3.0 GiB 2.01
TOTAL 150 GiB 147 GiB 14 MiB 3.0 GiB 2.01

POOLS:
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.rgw.root 1 32 1.2 KiB 4 768 KiB 0 46 GiB
default.rgw.control 2 32 0 B 8 0 B 0 46 GiB
default.rgw.meta 3 32 0 B 0 0 B 0 46 GiB
default.rgw.log 4 32 0 B 207 0 B 0 46 GiB
cephfs_data 5 32 0 B 0 0 B 0 46 GiB
cephfs_metadata 6 32 2.9 KiB 22 1.5 MiB 0 46 GiB
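
The mount above targets a single monitor, so it cannot be established if that monitor happens to be down. For availability you would normally list all monitors; a sketch using the same monitor addresses as the ceph-fuse example later in this post:

# kernel mount against all three monitors
mount -t ceph 10.140.11.8:6789,10.140.11.6:6789,10.140.11.24:6789:/ /mnt/cephfs/ -o name=admin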

Verify the kernel-level mount

[[email protected] ~]# cd /mnt/cephfs/
[[email protected] cephfs]# ls
[[email protected] cephfs]# ls -l
total 0
[[email protected] cephfs]# touch lijiawang
[[email protected] cephfs]# echo "aaaaaaaaaaaaaaaaaaaaa" > lijiawang
[[email protected] cephfs]# cat lijiawang
aaaaaaaaaaaaaaaaaaaaa
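
To bring the kernel mount back automatically after a reboot, an /etc/fstab entry can be used. A sketch, assuming the admin secret has been saved to /etc/ceph/admin.secret on the client (that path is an assumption, it is not created anywhere above):

# /etc/fstab entry for the kernel mount (secretfile path is an assumption)
10.140.11.8:6789,10.140.11.6:6789,10.140.11.24:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0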

User-space (FUSE) mount

[[email protected] ~]# yum -y install ceph-fuse
[[email protected] ~]# mkdir /mnt/ceph-fuse   # create the mount point
[[email protected] ~]# ceph-fuse -n client.admin -m 10.140.11.8:6789,10.140.11.6:6789,10.140.11.24:6789 /mnt/ceph-fuse/
2021-01-06 09:03:28.223 7fcdd94d1f80 -1 init, newargv = 0x5588725df4c0 newargc=9
ceph-fuse[23357]: starting ceph client
ceph-fuse[23357]: starting fuse
[[email protected] ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.9G 0 7.9G 0% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 7.9G 17M 7.9G 1% /run
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/vda1 94G 2.8G 87G 4% /
tmpfs 7.9G 52K 7.9G 1% /var/lib/ceph/osd/ceph-0
tmpfs 1.6G 0 1.6G 0% /run/user/1000
10.140.11.8:6789:/ 47G 0 47G 0% /mnt/cephfs
ceph-fuse 47G 0 47G 0% /mnt/ceph-fuse

Verify the user-space mount

[[email protected] ~]# cd /mnt/ceph-fuse
[[email protected] ceph-fuse]# ls -l
total 1
-rw-r--r-- 1 root root 22 Jan 6 08:56 lijiawang
[[email protected] ceph-fuse]# cat lijiawang
aaaaaaaaaaaaaaaaaaaaa
[[email protected] ceph-fuse]# df -T|grep ceph-fuse
ceph-fuse fuse.ceph-fuse 48746496 0 48746496 0% /mnt/ceph-fuse
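
Both clients see the same data: the lijiawang file written through the kernel mount shows up under /mnt/ceph-fuse as well. When the mounts are no longer needed they can be detached, and a ceph-fuse mount can also be made persistent via /etc/fstab. A sketch, assuming the admin keyring and ceph.conf are present under /etc/ceph on the client:

# detach the two mounts
umount /mnt/cephfs
fusermount -u /mnt/ceph-fuse      # or: umount /mnt/ceph-fuse
# optional /etc/fstab entry for a persistent ceph-fuse mount
none  /mnt/ceph-fuse  fuse.ceph  ceph.id=admin,_netdev,defaults  0  0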
