Ceph Operations Series: Block Storage

weiwei2021 · 2020-12-01 12:08

1 Abstract

Based on CentOS 8.1 / 7.6, connecting to Ceph 14.2.15 (Nautilus): this post walks through configuring and using Ceph block storage (RBD) from a client host.

2 Environment Information

(1) Operating system information

[root@cephclient ~]# cat /etc/centos-release
CentOS Linux release 8.1.1911 (Core)
[root@cephclient ~]# uname -a
Linux cephclient.novalocal 4.18.0-147.el8.x86_64 #1 SMP Wed Dec 4 21:51:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@cephclient ~]#

3 Ceph Block Storage Operations

(1) Ceph block storage

3.1.1 Ceph client configuration

3.1.1.1 Check whether the kernel supports rbd

[root@cephclient ~]# modprobe rbd
[root@cephclient ~]# echo $?
0
[root@cephclient ~]#
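
If modprobe exits with 0, the rbd kernel module is available. As an optional extra check (my addition, not in the original article), you can confirm the module is loaded and inspect its metadata:

[root@cephclient ~]# lsmod | grep rbd       # should show rbd together with its libceph dependency
[root@cephclient ~]# modinfo rbd            # module description, file path and parameters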

3.1.1.2 Install the Ceph client

3.1.1.2.1 Configure the yum repository
[root@cephclient yum.repos.d]# vim ceph14centos8.repo
[root@cephclient yum.repos.d]# cat ceph14centos8.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el8/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el8/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el8/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
3.1.1.2.2 Download the packages locally
yum -y install --downloadonly --downloaddir=/root/software/cephcentos8/ ceph
3.1.1.2.3 Install
yum -y install  ceph
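
After installation, you can optionally confirm the client packages and their version (expected to be a 14.2.x Nautilus build from the repository configured above; this check is my addition):

[root@cephclient ~]# ceph --version
[root@cephclient ~]# rpm -qa | grep -i ceph
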
3.1.1.2.4 Create a Ceph block client user and authentication key (done on the server side)

For the server-side cluster setup, see the Ceph cluster deployment article.

Log in to the ceph-deploy node, switch to the cephadmin user, and change to the cephcluster directory.

[cephadmin@ceph001 ~]$ cd cephcluster/
[cephadmin@ceph001 cephcluster]$ pwd
/home/cephadmin/cephcluster
[cephadmin@ceph001 cephcluster]$


Generate the key and save it to ceph.client.rbd.keyring:

[cephadmin@ceph001 cephcluster]$ ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd' | tee ./ceph.client.rbd.keyring
[client.rbd]
        key = AQBXoMVfJqKiJxAAIOCDFiEJey0GcHu1RP61PA==
[cephadmin@ceph001 cephcluster]$ ll
total 144
-rw------- 1 cephadmin cephadmin    113 Nov 30 17:17 ceph.bootstrap-mds.keyring
-rw------- 1 cephadmin cephadmin    113 Nov 30 17:17 ceph.bootstrap-mgr.keyring
-rw------- 1 cephadmin cephadmin    113 Nov 30 17:17 ceph.bootstrap-osd.keyring
-rw------- 1 cephadmin cephadmin    113 Nov 30 17:17 ceph.bootstrap-rgw.keyring
-rw------- 1 cephadmin cephadmin    151 Nov 30 17:17 ceph.client.admin.keyring
-rw-rw-r-- 1 cephadmin cephadmin     61 Dec  1 09:45 ceph.client.rbd.keyring
-rw-rw-r-- 1 cephadmin cephadmin    313 Nov 30 17:09 ceph.conf
-rw-rw-r-- 1 cephadmin cephadmin    247 Nov 30 17:00 ceph.conf.bak.orig
-rw-rw-r-- 1 cephadmin cephadmin 108766 Nov 30 17:46 ceph-deploy-ceph.log
-rw------- 1 cephadmin cephadmin     73 Nov 30 16:50 ceph.mon.keyring
[cephadmin@ceph001 cephcluster]$
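
To double-check the capabilities granted to the new user, you can print them back from the cluster (optional, not shown in the original article):

[cephadmin@ceph001 cephcluster]$ ceph auth get client.rbd    # shows the key plus the mon/osd caps set above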

3.1.1.2.5 Ceph client configuration

Copy the ceph.client.rbd.keyring and ceph.conf files from the server to the client:

[cephadmin@ceph001 cephcluster]$ scp ceph.client.rbd.keyring ceph.conf root@172.31.185.211:/etc/ceph/
The authenticity of host '172.31.185.211 (172.31.185.211)' can't be established.
ECDSA key fingerprint is SHA256:ES6ytBX1siYV4WMG2CF3/21VKaDd5y27lbWQggeqRWM.
ECDSA key fingerprint is MD5:08:8e:ce:cd:2c:b4:24:69:44:c9:e4:42:a7:bb:ee:3a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.31.185.211' (ECDSA) to the list of known hosts.
root@172.31.185.211's password:
ceph.client.rbd.keyring                                                                                      100%   61    28.6KB/s   00:00
ceph.conf                                                     

Configure the client's hosts file:

[root@cephclient etc]# cp /etc/hosts /etc/hosts.bak.orig
[root@cephclient etc]# vim /etc/hosts
[root@cephclient etc]#

172.31.185.127 ceph001
172.31.185.198 ceph002
172.31.185.203 ceph003
Verify that the client configuration works:
[root@cephclient etc]# ceph -s --name client.rbd
  cluster:
    id:     69002794-cf45-49fa-8849-faadae48544f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 16h)
    mgr: ceph002(active, since 15h), standbys: ceph003, ceph001
    osd: 3 osds: 3 up (since 16h), 3 in (since 16h)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 147 GiB / 150 GiB avail
    pgs:

[root@cephclient etc]#


3.1.2 Create a block device and map it on the client

3.1.2.1 Create the block device

Log in to a Ceph node and first check whether an rbd pool has already been created:

[cephadmin@ceph001 ~]$ ceph -s
  cluster:
    id:     69002794-cf45-49fa-8849-faadae48544f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 17h)
    mgr: ceph002(active, since 17h), standbys: ceph003, ceph001
    osd: 3 osds: 3 up (since 17h), 3 in (since 17h)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 147 GiB / 150 GiB avail
    pgs:

[cephadmin@ceph001 ~]$ ceph osd lspools
[cephadmin@ceph001 ~]$


The check shows no pools exist yet, so create one:

# 64 is the pg_num; how to size it for a production environment deserves careful study
[cephadmin@ceph001 ~]$ ceph osd pool create rbd 64
pool 'rbd' created
[cephadmin@ceph001 ~]$

Choosing a pg_num value is mandatory because it cannot be calculated automatically. Commonly used values (total PGs across the cluster):
With fewer than 5 OSDs, set pg_num to 128.
With 5 to 10 OSDs, set pg_num to 512.
With 10 to 50 OSDs, set pg_num to 4096.
With more than 50 OSDs, you need to understand the trade-offs and calculate pg_num yourself; a rough rule-of-thumb sketch follows this list.
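
As a rough sketch (my own illustration, not from the original article): the commonly cited rule of thumb is total PGs ≈ (OSD count × 100) / replica size, rounded up to the nearest power of two, then divided among your pools; on Nautilus the pg_autoscaler mgr module can also manage pg_num automatically.

# rule-of-thumb sketch for this 3-OSD cluster, assuming the default replica size of 3
osds=3; size=3
raw=$(( osds * 100 / size ))                      # = 100
pg=1; while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "$pg"                                        # -> 128 total PGs, split across pools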

Create the block device. This can be done either on a Ceph cluster node or on the client (since the client was just granted access).

Run the following on the client to create a 2 GiB image:

[root@cephclient ~]# rbd create rbd1 --size 2048 --name client.rbd
# check that it was created
[root@cephclient ~]# rbd ls --name client.rbd
rbd1
[root@cephclient ~]#
# the image is also visible on the server side
[cephadmin@ceph001 ~]$  rbd ls
rbd1
[cephadmin@ceph001 ~]$

List the image from a specific pool:

[root@cephclient ~]# rbd ls -p rbd --name client.rbd    # -p specifies the pool name
rbd1
[root@cephclient ~]#

View the block device's details:

[root@cephclient ~]# rbd --image rbd1 info --name client.rbd
rbd image 'rbd1':
        size 2 GiB in 512 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 11256f6966f5
        block_name_prefix: rbd_data.11256f6966f5
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Tue Dec  1 11:30:53 2020
        access_timestamp: Tue Dec  1 11:30:53 2020
        modify_timestamp: Tue Dec  1 11:30:53 2020
[root@cephclient ~]#
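
For reference: order 22 means each backing RADOS object is 2^22 bytes = 4 MiB, so the 2 GiB image is split into 2 GiB / 4 MiB = 512 objects, matching the "size 2 GiB in 512 objects" line above.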

3.1.2.2 Map the image on the client

Run on the client:

[root@cephclient ~]# rbd map --image rbd1 --name client.rbd
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd1 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
[root@cephclient ~]#

The map fails because the kernel rbd module does not support some of the features enabled on the image.

There are several ways to fix this; here the unsupported features are disabled dynamically (a permanent alternative is sketched after the command below).

[root@cephclient ~]# rbd feature disable rbd1 exclusive-lock object-map deep-flatten fast-diff -n client.rbd
[root@cephclient ~]#
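
As an alternative sketch (not what this article does): the feature mismatch can also be avoided for images created later by setting the rbd default features option in the client's /etc/ceph/ceph.conf before running rbd create; feature bit 1 is layering only, which kernel clients support. This does not change images that already exist.

[client]
rbd default features = 1    # layering only for newly created images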

Map it again:

[root@cephclient ~]# rbd map --image rbd1 --name client.rbd
/dev/rbd0
[root@cephclient ~]#

Check the mapping:

[root@cephclient dev]# ll /dev/rbd*
brw-rw---- 1 root disk 252, 0 Dec  1 11:44 /dev/rbd0

/dev/rbd:
total 0
drwxr-xr-x 2 root root 60 Dec  1 11:44 rbd
[root@cephclient dev]# rbd showmapped --name client.rbd
id pool namespace image snap device
0  rbd            rbd1  -    /dev/rbd0
[root@cephclient dev]#


[root@cephclient dev]# fdisk -l  /dev/rbd0
Disk /dev/rbd0: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
[root@cephclient dev]#

3.1.3 Create a filesystem and mount it

3.1.3.1 Create the filesystem

[root@cephclient dev]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=3072, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@cephclient dev]#

3.1.3.2 Mount

[root@cephclient dev]# mkdir /mnt/ceph-disk1
[root@cephclient dev]# mount /dev/rbd0 /mnt/ceph-disk1

Check that the mount succeeded:
[root@cephclient dev]# df -h /mnt/ceph-disk1
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0       2.0G   47M  2.0G   3% /mnt/ceph-disk1
[root@cephclient dev]#

Run a write test:

[root@cephclient dev]# dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0430997 s, 2.4 GB/s
[root@cephclient dev]#

Confirm the data was written:

[root@cephclient dev]# ll -h  /mnt/ceph-disk1/file1
-rw-r--r-- 1 root root 100M Dec  1 11:54 /mnt/ceph-disk1/file1
[root@cephclient dev]#

3.1.3.3 Configure an automatic mount service

Write an rbd-mount script and save it as /usr/local/bin/rbd-mount:

[root@cephclient dev]# vim /usr/local/bin/rbd-mount
#!/bin/bash

# Pool name where block device image is stored
export poolname=rbd

# Disk image name
export rbdimage=rbd1

# Mounted Directory
export mountpoint=/mnt/ceph-disk1

# Image mount/unmount and pool are passed from the systemd service as arguments
# Are we are mounting or unmounting
if [ "$1" == "m" ]; then
   modprobe rbd
   rbd feature disable $rbdimage object-map fast-diff deep-flatten
   rbd map $rbdimage --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
   mkdir -p $mountpoint
   mount /dev/rbd/$poolname/$rbdimage $mountpoint
fi
if [ "$1" == "u" ]; then
   umount $mountpoint
   rbd unmap /dev/rbd/$poolname/$rbdimage
fi


Make the script executable; you can optionally test it by hand, as sketched after the command below.

[root@cephclient dev]# chmod u+x /usr/local/bin/rbd-mount
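
Optionally (my own addition, not in the original article), the script can be exercised manually before wiring it into systemd; note that the feature-disable line inside it may print an error because those features were already disabled above, but the map and mount still proceed.

[root@cephclient dev]# /usr/local/bin/rbd-mount u    # unmount and unmap
[root@cephclient dev]# /usr/local/bin/rbd-mount m    # map and mount again
[root@cephclient dev]# df -h /mnt/ceph-disk1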

Configure the service: add a new unit, rbd-mount.service, under /etc/systemd/system:

[root@cephclient dev]# cat /etc/systemd/system/rbd-mount.service
[Unit]
Description=RADOS block device mapping for $rbdimage in pool $poolname
Conflicts=shutdown.target
Wants=network-online.target
After=NetworkManager-wait-online.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/rbd-mount m
ExecStop=/usr/local/bin/rbd-mount u
[Install]
WantedBy=multi-user.target
[root@cephclient dev]#

Enable it at boot:

[root@cephclient dev]# systemctl daemon-reload
[root@cephclient dev]# systemctl enable rbd-mount.service
Created symlink /etc/systemd/system/multi-user.target.wants/rbd-mount.service → /etc/systemd/system/rbd-mount.service.
[root@cephclient dev]# reboot -f

After the reboot, check that the mount came up:

[root@cephclient ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G  8.5M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda2        19G  3.1G   16G  16% /
/dev/vda1      1014M  164M  851M  17% /boot
tmpfs           379M     0  379M   0% /run/user/0
/dev/rbd0       2.0G  147M  1.9G   8% /mnt/ceph-disk1
[root@cephclient ~]#
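
You can also optionally confirm that the unit ran cleanly and the image is mapped (my addition):

[root@cephclient ~]# systemctl status rbd-mount.service
[root@cephclient ~]# rbd showmapped --name client.rbd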

