Ceph Installation
This post installs Ceph with ceph-deploy. Note that ceph-deploy can only install Nautilus and earlier releases; Octopus and later no longer support installation via ceph-deploy and are meant to be installed with cephadm instead. I will write a separate post on installing and configuring a Ceph cluster with cephadm.
Prepare the machines
The three hosts below are virtual machines I created with KVM; each one has a single data disk, vdb.

| Host | Roles |
| --- | --- |
| ceph--1 | ceph-deploy, ceph-mon, ceph-mgr, ceph-rgw, ceph-mds, ceph-osd |
| ceph--2 | ceph-mon, ceph-mgr, ceph-rgw, ceph-mds, ceph-osd |
| ceph--3 | ceph-mon, ceph-mgr, ceph-rgw, ceph-mds, ceph-osd |
```
# Configure /etc/hosts name resolution on all machines
# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
10.140.11.8  ceph--1
10.140.11.6  ceph--2
10.140.11.24 ceph--3

# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1  478K  0 rom
vda    253:0    0  100G  0 disk
└─vda1 253:1    0  100G  0 part /
vdb    253:16   0   50G  0 disk
```
Configure passwordless SSH from the deploy node to the Ceph hosts
```
[root@ceph--1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:g981QUE32gwwVNoVB2aSzyTDEUmg2CQdSTgi05eYUWM root@ceph--1
[root@ceph--1 ~]# ssh-copy-id ceph--1
[root@ceph--1 ~]# ssh-copy-id ceph--2
[root@ceph--1 ~]# ssh-copy-id ceph--3
```
Configure a domestic (Aliyun) Ceph repository on all nodes
```
# cat /etc/yum.repos.d/ceph.repo
[noarch]
name=noarch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0

[x86_64]
name=x86_64
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
```
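Before installing anything, it can help to rebuild the yum cache and confirm the mirror actually serves a Nautilus build. This check is my own suggestion and was not part of the original walkthrough:

```
# yum clean all && yum makecache
# yum list ceph --showduplicates | tail -n 3
```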
Install and configure the ceph-deploy node on ceph--1
```
[root@ceph--1 ~]# mkdir ceph-deploy
[root@ceph--1 ~]# yum -y install python-setuptools ceph-deploy
[root@ceph--1 ~]# cd ceph-deploy
[root@ceph--1 ceph-deploy]# ceph-deploy new ceph--1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy new ceph--1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username          : None
[ceph_deploy.cli][INFO  ]  func              : <function new at 0x7f4162516de8>
[ceph_deploy.cli][INFO  ]  verbose           : False
[ceph_deploy.cli][INFO  ]  overwrite_conf    : False
[ceph_deploy.cli][INFO  ]  quiet             : False
[ceph_deploy.cli][INFO  ]  cd_conf           : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4161c98e18>
[ceph_deploy.cli][INFO  ]  cluster           : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey       : True
[ceph_deploy.cli][INFO  ]  mon               : ['ceph--1']
[ceph_deploy.cli][INFO  ]  public_network    : None
[ceph_deploy.cli][INFO  ]  ceph_conf         : None
[ceph_deploy.cli][INFO  ]  cluster_network   : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.cli][INFO  ]  fsid              : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph--1][DEBUG ] connected to host: ceph--1
[ceph--1][DEBUG ] detect platform information from remote host
[ceph--1][DEBUG ] detect machine type
[ceph--1][DEBUG ] find the location of an executable
[ceph--1][INFO  ] Running command: /usr/sbin/ip link show
[ceph--1][INFO  ] Running command: /usr/sbin/ip addr show
[ceph--1][DEBUG ] IP addresses found: [u'10.140.11.8']
[ceph_deploy.new][DEBUG ] Resolving host ceph--1
[ceph_deploy.new][DEBUG ] Monitor ceph--1 at 10.140.11.8
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph--1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.140.11.8']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
```
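It is common to declare the cluster's public network in the generated ceph.conf before bootstrapping the monitors, since ceph-deploy needs it when additional monitors are added later. This step is not in the original transcript, and the /24 subnet below is only an assumption based on the host IPs used in this article:

```
[root@ceph--1 ceph-deploy]# cat >> ceph.conf <<'EOF'
public_network = 10.140.11.0/24
EOF
```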
Install the base Ceph packages on all nodes
```
# yum -y install ceph ceph-mon ceph-mgr ceph-radosgw ceph-mds ceph-mgr-dashboard
```
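As a quick sanity check (my own addition), you can confirm that every node ended up with the expected Nautilus release:

```
# ceph --version
```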
Create the mon and mgr daemons with ceph-deploy
The exact commands were truncated in the original transcript; the typical ceph-deploy sequence that produces the cluster states shown below is to bootstrap the initial monitor, push the admin keyring to the nodes, add the two remaining monitors, and then create the mgr daemons:
```
[root@ceph--1 ceph-deploy]# ceph-deploy mon create-initial
[root@ceph--1 ceph-deploy]# ceph-deploy admin ceph--1 ceph--2 ceph--3
[root@ceph--1 ceph-deploy]# ceph-deploy mon add ceph--2
[root@ceph--1 ceph-deploy]# ceph-deploy mon add ceph--3
[root@ceph--1 ceph-deploy]# ceph -s
  cluster:
    id:     faa2e2c4-98bc-47c4-a5b4-a478721b7ea2
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph--1,ceph--2,ceph--3 (age 49s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[root@ceph--1 ceph-deploy]# ceph-deploy mgr create ceph--1 ceph--2 ceph--3
[root@ceph--1 ceph-deploy]# ceph -s
  cluster:
    id:     faa2e2c4-98bc-47c4-a5b4-a478721b7ea2
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum ceph--1,ceph--2,ceph--3 (age 2m)
    mgr: ceph--1(active, since 14m), standbys: ceph--2, ceph--3
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```
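If you want more detail than `ceph -s` gives about the monitor quorum, the quorum status can be dumped as well (a suggested check, not in the original post):

```
[root@ceph--1 ceph-deploy]# ceph quorum_status --format json-pretty
```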
Create the OSDs
```
[root@ceph--1 ceph-deploy]# ceph-deploy osd create ceph--1 --data /dev/vdb
[root@ceph--1 ceph-deploy]# ceph-deploy osd create ceph--2 --data /dev/vdb
[root@ceph--1 ceph-deploy]# ceph-deploy osd create ceph--3 --data /dev/vdb
[root@ceph--1 ceph-deploy]# ceph -s
  cluster:
    id:     faa2e2c4-98bc-47c4-a5b4-a478721b7ea2
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph--1,ceph--2,ceph--3 (age 8m)
    mgr: ceph--1(active, since 20m), standbys: ceph--2, ceph--3
    osd: 3 osds: 3 up (since 32s), 3 in (since 32s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 147 GiB / 150 GiB avail
    pgs:
```
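To see how the three new OSDs are placed in the CRUSH hierarchy per host, `ceph osd tree` can be run afterwards (suggested check, output omitted here):

```
[root@ceph--1 ceph-deploy]# ceph osd tree
```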
Configure Ceph object storage (RGW) with ceph-deploy
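The command for this step is missing from the original post, but the later `ceph -s` output shows three active rgw daemons, so the step was presumably the standard ceph-deploy RGW creation. The curl test and the default civetweb port 7480 are my own additions:

```
[root@ceph--1 ceph-deploy]# ceph-deploy rgw create ceph--1 ceph--2 ceph--3
# RGW listens on port 7480 by default; a quick test:
[root@ceph--1 ceph-deploy]# curl http://ceph--1:7480
```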
Configure Ceph file storage (MDS) with ceph-deploy
```
[root@ceph--1 ceph-deploy]# ceph-deploy mds create ceph--1 ceph--2 ceph--3
[root@ceph--1 ceph-deploy]# ceph -s
  cluster:
    id:     faa2e2c4-98bc-47c4-a5b4-a478721b7ea2
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph--1,ceph--2,ceph--3 (age 17m)
    mgr: ceph--1(active, since 29m), standbys: ceph--2, ceph--3
    mds:  3 up:standby
    osd: 3 osds: 3 up (since 9m), 3 in (since 9m)
    rgw: 3 daemons active (ceph--1, ceph--2, ceph--3)

  task status:

  data:
    pools:   4 pools, 128 pgs
    objects: 187 objects, 1.2 KiB
    usage:   3.0 GiB used, 147 GiB / 150 GiB avail
    pgs:     128 active+clean

  io:
    client:   83 KiB/s rd, 0 B/s wr, 82 op/s rd, 54 op/s wr
```
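Note that the MDS daemons stay in standby until a filesystem exists. To actually use CephFS you still need a data pool, a metadata pool, and the filesystem itself; the pool names and PG counts below are assumptions for illustration, not part of the original article:

```
[root@ceph--1 ceph-deploy]# ceph osd pool create cephfs_data 32
[root@ceph--1 ceph-deploy]# ceph osd pool create cephfs_metadata 16
[root@ceph--1 ceph-deploy]# ceph fs new cephfs cephfs_metadata cephfs_data
[root@ceph--1 ceph-deploy]# ceph fs status
```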
Enable and configure the Ceph dashboard
```
[root@ceph--1 ceph-deploy]# ceph mgr module enable dashboard
[root@ceph--1 ceph-deploy]# ceph dashboard create-self-signed-cert
Self-signed certificate created
[root@ceph--1 ceph-deploy]# ceph config set mgr mgr/dashboard/server_addr 10.140.11.8
[root@ceph--1 ceph-deploy]# ceph config set mgr mgr/dashboard/server_port 8080
[root@ceph--1 ceph-deploy]# ceph config set mgr mgr/dashboard/ssl_server_port 8443
[root@ceph--1 ceph-deploy]# ceph dashboard set-login-credentials admin admin
******************************************************************
***          WARNING: this command is deprecated.              ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************
Username and password updated
[root@ceph--1 ceph-deploy]# systemctl restart ceph-mgr@ceph--1.service
[root@ceph--1 ceph-deploy]# ceph mgr services
{
    "dashboard": "https://ceph--1:8443/"
}
```
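Since the output above warns that set-login-credentials is deprecated, the role-based equivalent on Nautilus would look like the following; the username, password, and administrator role here are just an example:

```
[root@ceph--1 ceph-deploy]# ceph dashboard ac-user-create admin admin administrator
```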
Open https://10.140.11.8:8443/#/dashboard in Chrome and log in with admin/admin.