Manually deploying a single-node Ceph 0.87 (Giant) cluster on Ubuntu 12.04

Introduction

Deploy Ceph by creating its services by hand, without using ceph-deploy.

Reference: http://docs.ceph.com/docs/giant/install/manual-deployment/

The Ceph packages were downloaded and installed ahead of time, except for ceph-deploy, since everything here is done manually.

Environment

Hostname  IP address       OS            Software
ceph01    192.168.100.101  Ubuntu 12.04  Ceph Giant

Steps

Set the hostname

root@ceph01:~# vi /etc/hostname
ceph01

Edit the host mappings

root@ceph01:~# vi /etc/hosts
127.0.0.1 localhost
127.0.1.1 ceph01
192.168.100.101 ceph01

Set up passwordless SSH login

root@ceph01:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
58:47:8d:56:db:90:aa:7c:be:8a:22:a6:d6:e4:c3:8d root@ceph01
The key's randomart image is:
+--[ RSA 2048]----+
| .+o. |
| .o o+ |
| ..... . |
| o .. |
| ..S. |
| . o . |
| = o o |
| .oE.. . . |
|oo .... .... |
+-----------------+
root@ceph01:~# ssh-copy-id ceph01
Warning: Permanently added 'ceph01' (ECDSA) to the list of known hosts.
root@ceph01's password:
Now try logging into the machine, with "ssh 'ceph01'", and check in:
~/.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

Edit the Ceph configuration file

root@ceph01:~# vi /etc/ceph/ceph.conf
[global]
fsid = 53eaacda-3558-4881-8e35-67f3741072dd
mon initial members = ceph01
mon host = 192.168.100.101

Create a keyring for the cluster and generate a monitor secret key:

root@ceph01:~# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

Create an administrator keyring, generate a client.admin user, and add the user to the keyring:

root@ceph01:~# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'

Add the client.admin key to ceph.mon.keyring:

root@ceph01:~# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

Generate the monitor map using the hostname, host IP address, and FSID, and save it as /tmp/monmap:

root@ceph01:~# monmaptool --create --add ceph01 192.168.100.101 --fsid 53eaacda-3558-4881-8e35-67f3741072dd /tmp/monmap
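As a sanity check, the generated map can be printed back with `monmaptool --print`. A hedged sketch, guarded so it is a no-op on machines without the Ceph tools:

```shell
# Print the monitor map we just generated; skip if the tools are absent.
if command -v monmaptool >/dev/null 2>&1; then
    monmaptool --print /tmp/monmap
else
    echo "monmaptool not installed"
fi
```

The printed map should show the fsid and one monitor entry for ceph01 at 192.168.100.101:6789.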

Create the default data directory (or directories) on the monitor host. (Note: the upstream docs use /var/lib/ceph/mon/{cluster-name}-{hostname}, e.g. ceph-ceph01; this guide uses ceph01 throughout.)

root@ceph01:~# mkdir /var/lib/ceph/mon/ceph01

Populate the monitor daemon with the monitor map and keyring:

root@ceph01:~# ceph-mon --mkfs -i ceph01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

Consider the settings for the Ceph configuration file. Common settings include the following:

[global]
fsid = {cluster-id}
mon initial members = {hostname}[, {hostname}]
mon host = {ip-address}[, {ip-address}]
public network = {network}[, {network}]
cluster network = {network}[, {network}]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = {n}
filestore xattr use omap = true
osd pool default size = {n} # Write an object n times.
osd pool default min size = {n} # Allow writing n copies in a degraded state.
osd pool default pg num = {n}
osd pool default pgp num = {n}
osd crush chooseleaf type = {n}
For example:
[global]
fsid = 53eaacda-3558-4881-8e35-67f3741072dd
mon initial members = ceph01
mon host = 192.168.100.101
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 1
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
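The pg num values above can be derived from the rule of thumb in the Ceph docs: total PGs ≈ (OSDs × 100) / pool size, rounded up to the nearest power of two. A quick sketch for this one-OSD, size-1 setup:

```shell
# (OSDs * 100) / size, rounded up to the nearest power of two
osds=1
size=1
target=$(( osds * 100 / size ))
pg=1
while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
done
echo "$pg"   # prints 128
```

By that rule 128 would be the conventional choice here; 333 (taken from the upstream example) also works for a test cluster.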

Create the done file

Mark the monitor as created and ready to be started:

root@ceph01:~# touch /var/lib/ceph/mon/ceph01/done

Start the monitor

For Ubuntu, use Upstart:

root@ceph01:~# start ceph-mon id=ceph01

In this case, to allow the daemon to start on every reboot, two empty files are needed: the done file created above and the following upstart marker:

root@ceph01:~# touch /var/lib/ceph/mon/ceph01/upstart

For Debian/CentOS/RHEL, use sysvinit:

# /etc/init.d/ceph start mon.ceph01

Verify the monitor

Verify that Ceph created the default pool:

root@ceph01:~# ceph osd lspools
0 rbd,

Verify that the monitor is running:

root@ceph01:~# ceph -s
cluster 53eaacda-3558-4881-8e35-67f3741072dd
health HEALTH_ERR 64 pgs stuck inactive; 64 pgs stuck unclean; no osds
monmap e1: 1 mons at {ceph01=192.168.100.101:6789/0}, election epoch 2, quorum 0 ceph01
osdmap e1: 0 osds: 0 up, 0 in
pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating

Add OSDs

Short form

Prepare the OSD:
root@ceph01:~# ceph-disk prepare --cluster ceph --cluster-uuid 53eaacda-3558-4881-8e35-67f3741072dd --fs-type ext4 /dev/sdb
Activate the OSD:
root@ceph01:~# ceph-disk activate /dev/sdb1

Long form

Generate a UUID:
root@ceph01:~# uuidgen
Create the OSD

If no UUID is given, it will be set automatically when the OSD starts. The following command outputs the OSD number, which will be used in subsequent steps:

root@ceph01:~# ceph osd create 72fb9a60-38a1-48b3-b1fe-6d3f7c26e9eb
1
Create the default directory for the new OSD:
root@ceph01:~# mkdir /var/lib/ceph/osd/ceph-1/
If the OSD will be on a drive other than the OS drive, format it and mount it to the directory:
root@ceph01:~# mkfs -t ext4 /dev/sdc
root@ceph01:~# mount -o user_xattr /dev/sdc /var/lib/ceph/osd/ceph-1/
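The mount above will not survive a reboot; an /etc/fstab entry along these lines (using the same device and mount point as this guide) is the usual fix:

```
/dev/sdc  /var/lib/ceph/osd/ceph-1  ext4  defaults,user_xattr  0  2
```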
Initialize the OSD data directory:
root@ceph01:~# ceph-osd -i 1 --mkfs --mkkey --osd-uuid 72fb9a60-38a1-48b3-b1fe-6d3f7c26e9eb

The directory must be empty before you run ceph-osd with the --mkkey option. In addition, if you use a cluster name other than ceph, the ceph-osd tool needs the --cluster option.

Register the OSD authentication key

If your cluster name differs from ceph, use your own cluster name:

root@ceph01:~# ceph auth add osd.1 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-1/keyring
Add your Ceph node to the CRUSH map:
root@ceph01:~# ceph osd crush add-bucket ceph01 host
Place the Ceph node under the root default:
root@ceph01:~# ceph osd crush move ceph01 root=default
Add the OSD to the CRUSH map so that it can begin receiving data.

Alternatively, you can decompile the CRUSH map, add the OSD to the device list, add the host as a bucket (if it is not already in the CRUSH map), add the device as an item in the host, assign it a weight, recompile the map, and set it.

root@ceph01:~# ceph osd crush add osd.1 1.0 host=ceph01
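The decompile-edit-recompile route mentioned above can be sketched as follows (a hedged sketch that needs a running cluster, so it is guarded to be a no-op elsewhere):

```shell
# Manual CRUSH-map editing, as an alternative to "ceph osd crush add"
if command -v crushtool >/dev/null 2>&1; then
    ceph osd getcrushmap -o /tmp/crush.bin         # export the compiled map
    crushtool -d /tmp/crush.bin -o /tmp/crush.txt  # decompile to editable text
    # ... edit /tmp/crush.txt: add the device, the host bucket, and a weight ...
    crushtool -c /tmp/crush.txt -o /tmp/crush.new  # recompile
    ceph osd setcrushmap -i /tmp/crush.new         # install the new map
else
    echo "crushtool not installed"
fi
```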
Start the OSD

After you add an OSD to Ceph, the OSD is in your configuration, but it is not yet running: the new OSD is down and in. You must start it before it can begin receiving data.

1. For Ubuntu, use Upstart:

root@ceph01:~# start ceph-osd id=1
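As with the monitor, an empty upstart marker file in the OSD data directory is typically needed so the daemon starts on boot. A guarded sketch (the path assumes OSD id 1 and the default cluster name, as in this guide):

```shell
# Mark the OSD data dir so Upstart auto-starts the daemon on boot
osd_dir=/var/lib/ceph/osd/ceph-1
if [ -d "$osd_dir" ]; then
    touch "$osd_dir/upstart"
    echo "marker created"
else
    echo "osd dir not found, skipping"
fi
```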

2. For Debian/CentOS/RHEL, use sysvinit:

# /etc/init.d/ceph start osd.1
Verify
root@ceph01:~# ceph -w
root@ceph01:~# ceph osd tree
# id weight type name up/down reweight
-1 0.03722 root default
-2 0.03722 host ceph01
0 0.01813 osd.0 up 1
1 0.01909 osd.1 up 1