Integrating OpenStack Kilo with Ceph

Introduction

This article describes how to configure Cinder in OpenStack Kilo to use Ceph RBD as its storage backend.

Environment

Cluster     Host         IP             Version
OpenStack   controller   66.66.66.71    openstack-kilo
OpenStack   compute      66.66.66.72    openstack-kilo
Ceph        ceph01       66.66.66.235   ceph-0.87

First, check the status of the Ceph cluster:

[root@ceph01 ceph]# ceph -s
    cluster 3ec3f0f5-74d7-41ce-a497-1425705cd717
     health HEALTH_OK
     monmap e1: 1 mons at {ceph01=66.66.66.235:6789/0}, election epoch 1, quorum 0 ceph01
     osdmap e42: 3 osds: 3 up, 3 in
      pgmap v273: 320 pgs, 3 pools, 16 bytes data, 3 objects
            27094 MB used, 243 GB / 269 GB avail
                 320 active+clean

Operations on the ceph01 node

Create a storage pool

[root@ceph01 ~]# ceph osd pool create ceph-cinder 128
pool 'ceph-cinder' created
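
If you want to confirm the pool was created with the expected placement-group count, a quick optional check (nothing here is required for the rest of the setup):

[root@ceph01 ~]# ceph osd lspools
[root@ceph01 ~]# ceph osd pool get ceph-cinder pg_num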

Create an authentication user

[root@ceph01 ~]# ceph auth get-or-create client.ceph-cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=ceph-cinder'
[client.ceph-cinder]
key = AQCIVY5beGJDNxAAphG+hdDC1vG4yVC5Ew7Y+w==
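
To double-check the capabilities that were just granted, the user's key and caps can be printed again; this is purely a verification step:

[root@ceph01 ~]# ceph auth get client.ceph-cinder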

Generate the keyring and send it to the compute node

[root@ceph01 ~]# ceph auth get-or-create client.ceph-cinder | ssh compute tee /etc/ceph/ceph.client.ceph-cinder.keyring

Copy the Ceph configuration file to the compute node

[root@ceph01 ~]# scp /etc/ceph/ceph.conf compute:/etc/ceph/ceph.conf
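
On the compute node, the upstream Ceph-with-OpenStack guide also recommends making the keyring readable by the service that uses it. Assuming cinder-volume on that host runs as the cinder user (the default for the Kilo RDO packages), something along these lines should work:

[root@compute ~]# chown cinder:cinder /etc/ceph/ceph.client.ceph-cinder.keyring
[root@compute ~]# chmod 0640 /etc/ceph/ceph.client.ceph-cinder.keyring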

Operations on the compute node

Generate a UUID and write it into a libvirt secret file

[root@compute ~]# uuidgen
207a92a6-acaf-47c2-9556-e560a79ba472
[root@compute ~]# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>207a92a6-acaf-47c2-9556-e560a79ba472</uuid>
  <usage type='ceph'>
    <name>client.ceph-cinder secret</name>
  </usage>
</secret>
EOF

Define the secret

[root@compute ~]# virsh secret-define --file secret.xml
Secret 207a92a6-acaf-47c2-9556-e560a79ba472 created

Set the secret value to the Ceph key

[root@compute ~]# virsh secret-set-value --secret 207a92a6-acaf-47c2-9556-e560a79ba472 --base64 AQCIVY5beGJDNxAAphG+hdDC1vG4yVC5Ew7Y+w==
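
To confirm libvirt has stored the secret, you can list the defined secrets and read back the value; the UUID and base64 key should match what was set above:

[root@compute ~]# virsh secret-list
[root@compute ~]# virsh secret-get-value 207a92a6-acaf-47c2-9556-e560a79ba472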

Edit the Cinder configuration file

[root@compute ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = ceph-cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = ceph-cinder
rbd_secret_uuid = 207a92a6-acaf-47c2-9556-e560a79ba472
# the uuid here is the one we generated earlier with uuidgen

Also comment out all of the settings under the [lvm] section, as in the sketch below.
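
A minimal sketch of what the disabled block might look like, assuming the stock Kilo install-guide LVM settings (your existing [lvm] options may differ):

[lvm]
#volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
#volume_group = cinder-volumes
#iscsi_protocol = iscsi
#iscsi_helper = lioadm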

Restart the Cinder volume service

[root@compute ~]# systemctl restart openstack-cinder-volume
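
If the restart returns silently, it is still worth confirming that the backend actually came up by checking the service status and watching the volume log (the same log consulted in the error section below):

[root@compute ~]# systemctl status openstack-cinder-volume
[root@compute ~]# tail -f /var/log/cinder/volume.log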

Operations on the controller node

Check whether the services are up

[root@compute ~]# ssh controller
[root@controller ~]# source admin-openrc
[root@controller ~]# cinder service-list
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |       Host      | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |    controller   | nova | enabled |   up  | 2018-09-04T10:37:36.000000 |        -        |
|  cinder-volume   |   compute@ceph  | nova | enabled |   up  | 2018-09-04T10:37:37.000000 |        -        |
|  cinder-volume   |  controller@lvm | nova | enabled |  down | 2018-09-03T09:04:57.000000 |        -        |
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+

The LVM backend is in the down state, while the Ceph backend is up.

Create a Cinder volume

[root@controller ~]# cinder create --display-name vol-1 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2018-09-04T10:43:50.386006      |
| display_description |                 None                 |
|     display_name    |                vol-1                 |
|      encrypted      |                False                 |
|          id         | 418344cf-3955-479b-8d5e-94633abae1f8 |
|       metadata      |                  {}                  |
|     multiattach     |                false                 |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

Check whether the volume was created successfully

[root@controller ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 418344cf-3955-479b-8d5e-94633abae1f8 | available |    vol-1     |  1   |      -      |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

The vol-1 volume is now in the available state.

Check whether the volume was stored in the Ceph pool

Run the following on ceph01:

[root@ceph01 ceph]# rbd ls ceph-cinder
volume-418344cf-3955-479b-8d5e-94633abae1f8
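
For more detail on the backing RBD image (size, order, block-name prefix, and so on), it can also be inspected directly; the image name below is the one listed above:

[root@ceph01 ceph]# rbd info ceph-cinder/volume-418344cf-3955-479b-8d5e-94633abae1f8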

The volume is indeed stored in the Ceph pool, and its ID matches the Cinder volume ID.

Errors

Volume creation failed. Check the /var/log/cinder/volume.log log:

oslo_messaging.rpc.dispatcher PermissionError: error creating image
cinder.volume.manager PermissionError: error creating image

The log shows a permission problem.
It later turned out that the UUID secret had not been generated, so I performed the import into virsh; after that, the volume was created successfully.
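
If you run into the same PermissionError, one way to narrow it down is to test each piece by hand: first that the keyring copied to the compute node can actually reach the pool, then that libvirt knows about the secret. A rough check, assuming the paths and names used in this article:

[root@compute ~]# rbd ls ceph-cinder --id ceph-cinder --keyring /etc/ceph/ceph.client.ceph-cinder.keyring
[root@compute ~]# virsh secret-list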