I am trying to configure OpenStack with a Ceph erasure-coded pool backend.
I am using the --data-pool approach, i.e. setting rbd default data pool in ceph.conf,
similar to this blog: https://themeanti.me/technology/2018/08/23/ceph_erasure_openstack.html, but I still run into the following error when creating an image:
HttpException: 500: Server Error for url: https://10.24.11.100:9292/v2/images/601d113d-8f61-49e8-bde1-bb0fa7eedd75/file, Internal Server Error
(os-venv) root@mon1:~# tail -f /var/log/kolla/glance/glance-api.log
2020-12-26 19:50:40.499 777 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.6/site-packages/glance_store/capabilities.py", line 176, in op_checker
2020-12-26 19:50:40.499 777 ERROR glance.common.wsgi return store_op_fun(store, *args, **kwargs)
2020-12-26 19:50:40.499 777 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.6/site-packages/glance_store/_drivers/rbd.py", line 519, in add
2020-12-26 19:50:40.499 777 ERROR glance.common.wsgi image_size, order)
2020-12-26 19:50:40.499 777 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.6/site-packages/glance_store/_drivers/rbd.py", line 397, in _create_image
2020-12-26 19:50:40.499 777 ERROR glance.common.wsgi features=int(features))
2020-12-26 19:50:40.499 777 ERROR glance.common.wsgi File "rbd.pyx", line 1266, in rbd.RBD.create
2020-12-26 19:50:40.499 777 ERROR glance.common.wsgi rbd.PermissionError: [errno 1] RBD permission error (error creating image)
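To illustrate the data-pool mechanism I am aiming for (image metadata in the replicated pool, data objects in the erasure-coded pool), here is a rough sketch of how it could be exercised by hand with the rbd CLI; the image name test-ec is just an example, and I am using the glance client and pool names from my setup:

# create an image whose metadata lives in "images" and whose data objects go to "images_data"
rbd --id glance create images/test-ec --size 1G --data-pool images_data
# "data_pool: images_data" should then appear in the image info
rbd --id glance info images/test-ec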
My pool details:
(os-venv) root@mon1:~# ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 43 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
pool 2 'images' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode off last_change 72 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 3 'volumes' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 76 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 4 'vms' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 79 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 5 'images_data' erasure profile images_data-profile size 6 min_size 5 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode off last_change 82 flags hashpspool,ec_overwrites,selfmanaged_snaps stripe_width 16384 application rbd
pool 6 'volumes_data' erasure profile volumes_data-profile size 6 min_size 5 crush_rule 2 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode off last_change 85 flags hashpspool,ec_overwrites,selfmanaged_snaps stripe_width 16384 application rbd
pool 7 'vms_data' erasure profile vms_data-profile size 6 min_size 5 crush_rule 3 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode off last_change 88 flags hashpspool,ec_overwrites,selfmanaged_snaps stripe_width 16384 application rbd
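If the erasure-code settings matter here, the profiles referenced above can be dumped like this (profile names taken from the listing):

ceph osd erasure-code-profile get images_data-profile
ceph osd erasure-code-profile get volumes_data-profile
ceph osd erasure-code-profile get vms_data-profile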
My ceph.conf configuration:
[global]
cluster network = 10.24.15.0/24
fsid = 0ce82009-a4b8-4c0f-b2ac-badac219f7ea
mon host = [v2:10.24.14.11:3300,v1:10.24.14.11:6789],[v2:10.24.14.12:3300,v1:10.24.14.12:6789],[v2:10.24.14.13:3300,v1:10.24.14.13:6789]
mon initial members = mon01,mon02,mon03
osd pool default crush rule = -1
public network = 10.24.14.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
[osd]
osd memory target = 4294967296
[client.glance]
rbd default data pool = images_data
[client.cinder]
rbd default data pool = volumes_data
[client.nova]
rbd default data pool = vms_data
OpenStack connects to Ceph using the keyrings I created for the replicated pools (glance, cinder, nova).
Any ideas how to fix this?
If any configuration is missing or unclear, please comment below and I will add it.
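For reference, the caps granted to those clients can be inspected like this (a sketch using the client names from my ceph.conf; I can paste the actual output if it helps):

ceph auth get client.glance
ceph auth get client.cinder
ceph auth get client.nova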