Ceph migration from filestore to bluestore (LVM)


There is no special migration path for upgrading an OSD to bluestore; the process is simply to destroy the OSD, rebuild it as bluestore, and let Ceph recover the data onto the newly created OSD.

  • Check cluster health: ceph -s
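Because each rebuilt OSD has to be backfilled before the next one is destroyed, it is worth waiting until the cluster reports HEALTH_OK again between OSDs. A minimal sketch (the 30-second poll interval is an arbitrary choice):

until ceph health | grep -q HEALTH_OK; do   # block until the cluster is healthy again
    sleep 30
done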

Now we need to stop the OSD daemons, unmount their data partitions, and then wipe the disks by going through the following steps:

systemctl stop 'ceph-osd@*'

umount /dev/sdX1 (the OSD data partition)

ceph-volume lvm zap /dev/sdX
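For a single OSD this boils down to the following (osd.2 and /dev/sdc are placeholders; the same pattern appears in the worked example at the bottom):

systemctl stop ceph-osd@2                                      # stop only this OSD
umount $(grep -w "ceph-2" /proc/mounts | awk '{print $1}')     # unmount its data partition
ceph-volume lvm zap /dev/sdc                                   # wipe the whole data disk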

Now we can also edit the partition table on the flash device to remove the old filestore journal partitions:

fdisk /dev/sdX

  • Create a Linux partition for each OSD you intend to create, then run partprobe on the device

fdisk /dev/sdX (create a primary partition from 1049 kB to 100% of the device)

partprobe /dev/sdX
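The same partitioning can be done non-interactively with parted, as in the worked example at the bottom (the device name is a placeholder):

parted -s /dev/sdX mklabel msdos                               # write a fresh msdos partition table
parted -a optimal /dev/sdX mkpart primary 1049kB 100%          # one primary partition spanning the disk
partprobe /dev/sdX                                             # make the kernel re-read the table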

Go back to one of your monitors. First, confirm which OSDs are going to be removed, then remove them using the following purge command:

ceph osd tree

ceph osd purge osd.X --yes-i-really-mean-it (this should be launched from a monitor/admin node)

Check the Ceph cluster with ceph -s. You should now see that the OSD has been removed.
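To double-check that the purge really removed everything (osd.4 is used as a hypothetical id):

ceph osd tree | grep -w osd.4        # should print nothing once the OSD is purged
ceph auth get osd.4                  # should fail, since purge also removes the OSD's key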

Now issue the ceph-volume command to create the bluestore OSD:

ceph-volume lvm create --bluestore --data /dev/sdX [--block.db /dev/sdY1] [--dmcrypt] (the optional --block.db takes an SSD partition for the RocksDB metadata, --dmcrypt enables encryption)
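For example (device names are assumptions for illustration), an HDD-backed OSD with its DB on an SSD partition would be created with:

ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdk1

ceph-volume then creates the PV/VG/LV on the data device, prepares the bluestore OSD and activates it.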

The commands below should not be necessary, since ceph-volume lvm create already activates the OSD and enables its systemd unit:

systemctl enable --runtime ceph-osd@X

systemctl start --runtime ceph-osd@X
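To verify that ceph-volume did bring the OSD up on its own (replace X with the new OSD id):

systemctl is-enabled ceph-osd@X      # should report enabled or enabled-runtime
systemctl is-active ceph-osd@X       # should report active
ceph osd tree                        # the recreated OSD should rejoin the tree as "up"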

Worked example on a test cluster (hosts sd-ceph-1/2/3). Current ceph osd tree:

ID CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF 
-1       0.26097 root default                               
-3       0.08699     host sd-ceph-1                         
 0   hdd 0.01900         osd.0          up  1.00000 1.00000  
 1   hdd 0.01900         osd.1          up  1.00000 1.00000 
 2   hdd 0.01900         osd.2          up  1.00000 1.00000 
15   ssd 0.00999         osd.15         up  0.19310 1.00000 
16   ssd 0.00999         osd.16         up  0.50014 1.00000 
17   ssd 0.00999         osd.17         up  0.31978 1.00000 
-5       0.08699     host sd-ceph-2                         
 3   hdd 0.01900         osd.3          up  1.00000 1.00000 
 4   hdd 0.01900         osd.4          up  1.00000 1.00000 
 5   hdd 0.01900         osd.5          up  1.00000 1.00000 
11   ssd 0.00999         osd.11         up  0.90004 1.00000 
12   ssd 0.00999         osd.12         up  0.75008 1.00000 
14   ssd 0.00999         osd.14         up  0.65010 1.00000 
-7       0.08699     host sd-ceph-3                         
 6   hdd 0.01900         osd.6          up  1.00000 1.00000 
 7   hdd 0.01900         osd.7          up  1.00000 1.00000 
 8   hdd 0.01900         osd.8          up  1.00000 1.00000 
 9   ssd 0.00999         osd.9          up  0.60507 1.00000 
10   ssd 0.00999         osd.10         up  0.41284 1.00000 
13   ssd 0.00999         osd.13         up  0.80006 1.00000
root@sd-ceph-3:~# mount |grep osd
/dev/vdd1 on /var/lib/ceph/osd/ceph-8 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/vdg1 on /var/lib/ceph/osd/ceph-13 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/vdf1 on /var/lib/ceph/osd/ceph-9 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/vde1 on /var/lib/ceph/osd/ceph-10 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/vdc1 on /var/lib/ceph/osd/ceph-7 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/vdb1 on /var/lib/ceph/osd/ceph-6 type xfs (rw,noatime,attr2,inode64,noquota)

Migrating a single OSD (here osd.4, data disk /dev/vdc):

systemctl stop ceph-osd@4 && umount $(grep -w "ceph-4" /proc/mounts | awk '{print $1}')
ceph-volume lvm zap /dev/vdc
parted  -s /dev/vdc mklabel msdos
parted  -a optimal /dev/vdc mkpart primary 1049kB 100% && partprobe /dev/vdc
ceph osd purge osd.4 --yes-i-really-mean-it (this should be launched from a monitor/admin node)
(e.g. root@sd-ceph-2:~# ssh sd-ceph-1 'ceph osd purge osd.5 --yes-i-really-mean-it')
ceph-volume lvm create --bluestore --data /dev/vdc1
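
Putting the worked example together, a per-OSD helper along the following lines can be used. This is only a sketch: the OSD id, data disk and admin host are assumptions, the purge has to run where a client.admin keyring is available, and the final loop keeps the script from destroying the next OSD before recovery has finished.

#!/bin/bash
# Sketch: rebuild one filestore OSD as bluestore, then wait for recovery.
set -e
OSD_ID=4              # assumed OSD id
DEV=/dev/vdc          # assumed data disk of that OSD
ADMIN=sd-ceph-1       # assumed host holding the client.admin keyring

systemctl stop ceph-osd@${OSD_ID}
umount $(grep -w "ceph-${OSD_ID}" /proc/mounts | awk '{print $1}')
ceph-volume lvm zap ${DEV}
parted -s ${DEV} mklabel msdos
parted -a optimal ${DEV} mkpart primary 1049kB 100% && partprobe ${DEV}
ssh ${ADMIN} "ceph osd purge osd.${OSD_ID} --yes-i-really-mean-it"
ceph-volume lvm create --bluestore --data ${DEV}1

# Let Ceph backfill the recreated OSD before touching the next disk.
until ssh ${ADMIN} ceph health | grep -q HEALTH_OK; do
    sleep 60
done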