Connecting OpenStack Kilo Cinder to Multiple Ceph Storage Clusters

Introduction

This article explains how to configure Cinder in OpenStack Kilo to use multiple Ceph clusters as storage backends.

Environment

Host        Address         Version
controller  192.168.11.71   OpenStack Kilo
compute     192.168.11.72   OpenStack Kilo
ceph01      192.168.11.235  Ceph Giant
ceph02      192.168.11.232  Ceph Giant

Cinder configuration

Perform the following steps on the compute node.

[root@compute ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
glance_host = controller
enabled_backends = ceph01,ceph02 # define the two backends here
[ceph01]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph01.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = df76a280-68fb-4bfb-bfab-976f0c71efa2 # you must import this secret into virsh yourself; same below
volume_backend_name = ceph01 # name the backend so it can be referenced later; same below
[ceph02]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = ceph-cinder
rbd_ceph_conf = /etc/ceph/ceph02.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = ceph-cinder
rbd_secret_uuid = 207a92a6-acaf-47c2-9556-e560a79ba472
volume_backend_name = ceph02
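
Each backend points at its own cluster configuration file (/etc/ceph/ceph01.conf and /etc/ceph/ceph02.conf), so both clusters' conf files and the matching cinder keyrings must already exist on the compute node. A rough sketch of staging them, assuming default file locations on the Ceph nodes and the keyring names shown here (adjust to your deployment):

[root@compute ~]# scp ceph01:/etc/ceph/ceph.conf /etc/ceph/ceph01.conf
[root@compute ~]# scp ceph01:/etc/ceph/ceph.client.cinder.keyring /etc/ceph/ceph01.client.cinder.keyring
[root@compute ~]# scp ceph02:/etc/ceph/ceph.conf /etc/ceph/ceph02.conf
[root@compute ~]# scp ceph02:/etc/ceph/ceph.client.ceph-cinder.keyring /etc/ceph/ceph02.client.ceph-cinder.keyring
[root@compute ~]# chown cinder:cinder /etc/ceph/ceph0*.client.*.keyring

Each copied conf file should also carry a keyring = line in its [client] section pointing at the matching keyring file, so the RBD driver can authenticate against the right cluster.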

For the full procedure for importing the Ceph keys into virsh, see my other article on setting up Cinder storage.
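
In short, the rbd_secret_uuid values above are libvirt secrets on the compute node that hold each cluster's cinder key. A minimal sketch for the first cluster (the file name is arbitrary; repeat with the second UUID and the client.ceph-cinder key for ceph02):

[root@compute ~]# cat > ceph01-secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>df76a280-68fb-4bfb-bfab-976f0c71efa2</uuid>
  <usage type='ceph'>
    <name>ceph01 client.cinder secret</name>
  </usage>
</secret>
EOF
[root@compute ~]# virsh secret-define --file ceph01-secret.xml
[root@compute ~]# virsh secret-set-value --secret df76a280-68fb-4bfb-bfab-976f0c71efa2 --base64 $(ssh ceph01 ceph auth get-key client.cinder)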

Restart the Cinder service

[root@compute ~]# systemctl restart openstack-cinder-volume

Check that both backends started successfully

Check on the controller node.

[root@controller ~]# source admin-openrc
[root@controller ~]# cinder service-list
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host            | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller      | nova | enabled | up    | 2018-09-05T04:28:07.000000 | -               |
| cinder-volume    | compute@ceph01  | nova | enabled | up    | 2018-09-05T04:28:05.000000 | -               |
| cinder-volume    | compute@ceph02  | nova | enabled | up    | 2018-09-05T04:28:05.000000 | -               |
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+

Both compute@ceph01 and compute@ceph02 are listed with State up, which means the two Ceph backends are enabled and running.
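
If either backend shows down instead, the cinder-volume log on the compute node is the first place to look (assuming the default log location):

[root@compute ~]# tail -f /var/log/cinder/volume.log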

Set up volume types

If you create a volume now, the scheduler picks a suitable backend for it. To choose a particular storage pool at creation time, set up volume types.
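
For example, a volume created without --volume-type (the volume name here is just an illustration) lands on whichever backend the scheduler selects:

[root@controller ~]# cinder create --display-name scheduler-test 1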

Create the volume types

[root@controller ~]# cinder type-create ceph01
[root@controller ~]# cinder type-create ceph02

List the volume types

[root@controller ~]# cinder type-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 8c50de7d-d6ba-4866-ba42-93d14859860b | ceph01 |
| abdaa7a5-f95b-4f24-ab21-6d5bc74344a7 | ceph02 |
+--------------------------------------+--------+

At this point you still cannot pick a storage pool by type; each type also has to be linked to a backend through volume_backend_name.

[root@controller ~]# cinder type-key ceph01 set volume_backend_name=ceph01
[root@controller ~]# cinder type-key ceph02 set volume_backend_name=ceph02

The volume_backend_name values here are the ones we set in cinder.conf at the beginning.

Check that the keys were set successfully

[root@controller ~]# cinder extra-specs-list
+--------------------------------------+--------+-------------------------------------+
| ID                                   | Name   | extra_specs                         |
+--------------------------------------+--------+-------------------------------------+
| 8c50de7d-d6ba-4866-ba42-93d14859860b | ceph01 | {u'volume_backend_name': u'ceph01'} |
| abdaa7a5-f95b-4f24-ab21-6d5bc74344a7 | ceph02 | {u'volume_backend_name': u'ceph02'} |
+--------------------------------------+--------+-------------------------------------+

Volumes can now be created in a specific storage pool by choosing the corresponding volume type.

Create a volume with the ceph01 type

[root@controller ~]# cinder create --display-name ceph01 --volume-type ceph01 1
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2018-09-05T04:40:44.892618           |
| display_description | None                                 |
| display_name        | ceph01                               |
| encrypted           | False                                |
| id                  | 29b71020-8f0f-46d5-a2e3-5e89953a15ee |
| metadata            | {}                                   |
| multiattach         | false                                |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| volume_type         | ceph01                               |
+---------------------+--------------------------------------+

Check whether the volume was created successfully

[root@controller ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 29b71020-8f0f-46d5-a2e3-5e89953a15ee | available | ceph01       | 1    | ceph01      | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

The status is available, so the volume was created successfully.

Now check whether the pool on ceph01 contains the data

[root@ceph01 my-cluster]# rbd ls volumes
volume-29b71020-8f0f-46d5-a2e3-5e89953a15ee

The image name matches the ID of the volume we just created, so the volume type routed it to the right pool.

Next, create a ceph02-type volume

[root@controller ~]# cinder create --display-name ceph02 --volume-type ceph02 1
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2018-09-05T04:44:26.657639           |
| display_description | None                                 |
| display_name        | ceph02                               |
| encrypted           | False                                |
| id                  | e170af88-5cb3-4b47-b550-f254bf544b50 |
| metadata            | {}                                   |
| multiattach         | false                                |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| volume_type         | ceph02                               |
+---------------------+--------------------------------------+

Check whether the volume was created successfully

[root@controller ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 29b71020-8f0f-46d5-a2e3-5e89953a15ee | available | ceph01       | 1    | ceph01      | false    |             |
| e170af88-5cb3-4b47-b550-f254bf544b50 | available | ceph02       | 1    | ceph02      | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

The ceph02 volume's status is also available, so it was created successfully.

Now check whether the pool on ceph02 contains the data

[root@ceph02 ~]# rbd ls ceph-cinder
volume-e170af88-5cb3-4b47-b550-f254bf544b50

The image name in the pool matches the volume ID, so the type configuration works for this backend as well.