The other day I needed to migrate a bunch of OpenStack VMs (~1 TB of volumes) from an OpenStack ‘Stein’ platform to a brand new ‘Caracal’ one. The former used NFS-based storage. The latter uses Ceph, obviously. Glance storage is still NFS mounted.

So let’s try to migrate this the ‘OpenStack way’. It turns out OpenStack (Cinder) needs to copy the data three times:

  1. Download from glance to some temporary storage
  2. Convert the image to ‘raw’ format, even if the uploaded image is already ‘raw’.
  3. Copy the data to a Ceph RBD volume

I spent a whole day migrating two volumes totalling 250 GB. There were other minor issues as well, but most of the time was lost to this ridiculous 3-step import.

One lesson you should take away: go all-in on Ceph if you want to run OpenStack these days. If you have both Glance and Cinder in ‘Ceph RBD’ mode, you can even configure OpenStack to use internal Ceph clones (copy-on-write, I assume; I haven’t explored that yet).

But that is not the path we chose. Surely there must be a faster way to import volumes into Ceph RBD? Indeed there is: only one step is needed!

Set the stage

  • Old cluster Cinder: NFS-mounted storage with QCOW2 images
  • New cluster Cinder: Ceph RBD

The new cluster is set up using OpenStack-Helm (Kubernetes). The same may apply to a Kolla-Ansible managed cluster; I am not sure whether Ceph is containerized in those setups.

Prerequisites

Mount the ‘old’ cinder storage

Log into a host of your new cluster and mount the old Cinder storage via NFS. Obviously, your NFS server needs to allow this (/etc/exports).

Let’s assume you mounted your ‘old’ cinder storage on ‘/mnt/cinder_stein’.
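A minimal sketch of the mount, assuming a hypothetical NFS server name and export path (adjust both to your environment); mounting read-only protects the source volumes:

```shell
# Server name and export path are placeholders for this example.
mkdir -p /mnt/cinder_stein
mount -t nfs -o ro old-nfs-server:/srv/cinder /mnt/cinder_stein
# You should now see the volume-<uuid> files of the old cluster:
ls /mnt/cinder_stein
```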

Make sure you can access Ceph RBD volumes on your host

Install Ceph client tools (rbd)

apt install ceph-common

Set up your Ceph config file and a client keyring with sufficient permissions, so that the rbd command can reach the new cluster.
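A sketch of what that configuration could look like; fsid, monitor addresses and the key are placeholders you must take from your actual cluster (in containerized setups, extract them from the ceph-mon pod or container). The pool name cinder.volumes matches the one used in the copy command below:

```shell
# Placeholder Ceph client configuration; fill in values from your cluster.
mkdir -p /etc/ceph
cat > /etc/ceph/ceph.conf <<'EOF'
[global]
fsid = <your-cluster-fsid>
mon_host = <mon-ip-1>,<mon-ip-2>,<mon-ip-3>
EOF
cat > /etc/ceph/ceph.client.admin.keyring <<'EOF'
[client.admin]
    key = <key-from-your-cluster>
EOF
# Verify connectivity by listing the Cinder pool:
rbd ls cinder.volumes
```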

Migration

Find the volume ID on your old cluster

Use the OpenStack CLI against the old cluster to find the UUID and size of the volume you want to migrate.
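A hedged example with the standard client; the project name is illustrative, and the UUID shown is the source volume from the copy command later in this post:

```shell
# On the old (Stein) cluster: list volumes and note the ID column.
openstack volume list --project myproject
# Inspect a single volume; 'size' (in GB) is needed to create the target.
openstack volume show 28f9c74f-ce89-4c2e-8758-ccc54e4593ad -c id -c size -c status
```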

Create a volume on the new cluster with the exact same size

Use the OpenStack CLI on the new cluster to create an empty volume of exactly the same size; its UUID determines the name of the backing RBD image.
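A sketch, assuming a 100 GB source volume and an illustrative volume name:

```shell
# On the new (Caracal) cluster: size (GB) must match the source volume.
openstack volume create --size 100 migrated-volume
# The 'id' field of the output is the RBD image name in the Ceph pool,
# i.e. the target becomes rbd:cinder.volumes/<id>
```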

Copy the image between file and the RBD volume

Make sure both your source and destination VMs (if any) are powered off. The volumes must not be in use while copying the data.

qemu-img has native support for Ceph RBD, so you can convert and copy the image in one step (-m 16 uses 16 coroutines, -W allows out-of-order writes, -p shows progress, and -n skips creating the target, since the RBD volume already exists):

qemu-img convert -m 16 -W -p -n -f qcow2 -O raw /mnt/cinder_stein/volume-28f9c74f-ce89-4c2e-8758-ccc54e4593ad.c2520883-9abb-497b-a008-8e8e0622f7f5 rbd:cinder.volumes/dc5d63e5-5eb7-44bb-b238-3c89442d430b
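After the copy you may want to sanity-check the result. A sketch using the same pool and UUIDs as above; note that qemu-img compare reads both images in full, so it takes a while on large volumes:

```shell
# Check that the RBD image exists and has the expected size:
rbd info cinder.volumes/dc5d63e5-5eb7-44bb-b238-3c89442d430b
# Optionally compare source and destination contents:
qemu-img compare -f qcow2 -F raw \
    /mnt/cinder_stein/volume-28f9c74f-ce89-4c2e-8758-ccc54e4593ad.c2520883-9abb-497b-a008-8e8e0622f7f5 \
    rbd:cinder.volumes/dc5d63e5-5eb7-44bb-b238-3c89442d430b
```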

Create your VM

Launch a new instance, attaching the migrated volume along with all your other settings. If you set up the networking the same way as on the old cluster, you can even assign the same fixed IP.

You can do this through Horizon or the OpenStack CLI.
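An illustrative CLI example, booting directly from the migrated volume; the flavor, network UUID and fixed IP are placeholders:

```shell
# Boot a server from the migrated volume, reusing the old fixed IP.
openstack server create \
    --flavor m1.medium \
    --volume dc5d63e5-5eb7-44bb-b238-3c89442d430b \
    --nic net-id=<network-uuid>,v4-fixed-ip=192.0.2.10 \
    migrated-vm
```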