From c08f20634159d977df8f551d07617a8c6b2fea64 Mon Sep 17 00:00:00 2001
From: Thanos Makatos
Date: Wed, 6 Apr 2022 12:00:27 +0100
Subject: document live migration for SPDK (#659)

Signed-off-by: Thanos Makatos
Reviewed-by: John Levon
---
 docs/spdk.md | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/docs/spdk.md b/docs/spdk.md
index b60e930..4ff1ba3 100644
--- a/docs/spdk.md
+++ b/docs/spdk.md
@@ -6,6 +6,9 @@ experimental support for a virtual NVMe controller called nvmf/vfio-user. The
 controller can be used with the same QEMU command line as the one used for
 GPIO.
 
+Build QEMU
+----------
+
 Use Oracle's QEMU d377d483f9 from https://github.com/oracle/qemu:
 
     git clone https://github.com/oracle/qemu qemu-orcl
@@ -14,6 +17,9 @@ Use Oracle's QEMU d377d483f9 from https://github.com/oracle/qemu:
     ./configure --enable-multiprocess
     make
 
+Build SPDK
+----------
+
 Use SPDK 72a5fa139:
 
     git clone https://github.com/spdk/spdk
@@ -22,6 +28,7 @@ Use SPDK 72a5fa139:
     ./configure --with-vfio-user
     make
 
+
 Start SPDK:
 
     LD_LIBRARY_PATH=build/lib:dpdk/build/lib build/bin/nvmf_tgt &
@@ -35,6 +42,9 @@ Create an NVMe controller with a 512MB RAM-based namespace:
     scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode0 Malloc0 && \
     scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode0 -t VFIOUSER -a /var/run -s 0
 
+Start the Guest
+---------------
+
 Start the guest with e.g. 4 GB of RAM:
 
     qemu-orcl/build/qemu-system-x86_64 ... \
@@ -42,6 +52,50 @@ Start the guest with e.g. 4 GB of RAM:
 
     -device vfio-user-pci,socket=/var/run/cntrl
 
+Live Migration
+--------------
+
+[SPDK v22.01](https://github.com/spdk/spdk/releases/tag/v22.01) has
+[experimental support for live migration](https://spdk.io/release/2022/01/27/22.01_release/).
+[This CR](https://review.spdk.io/gerrit/c/spdk/spdk/+/11745/11) contains
+additional fixes that make live migration more reliable. Check it out and build
+SPDK as explained in [Build SPDK](), both on the source and on the destination
+hosts.
+
+Then build QEMU as explained in [Build QEMU]() using the following version:
+
+    https://github.com/oracle/qemu/tree/vfio-user-dbfix
+
+Start the guest at the source host as explained in
+[Start the Guest](), appending the `x-enable-migration=on` argument to the
+`vfio-user-pci` option.
+
+Then, at the destination host, start the nvmf/vfio-user target and QEMU,
+passing the `-incoming` option to QEMU:
+
+    -incoming tcp:0:4444
+
+QEMU will block at the destination waiting for the guest to be migrated.
+
+Bear in mind that if the guest's disk doesn't reside on shared storage, you'll
+get I/O errors soon after migration. The easiest way around this is to put the
+guest's disk on an NFS mount shared between the source and destination
+hosts.
+
+Migrate the guest by issuing the `migrate` command in the QEMU
+monitor (press CTRL-A followed by C to enter the monitor):
+
+    migrate -d tcp::4444
+
+Migration should complete almost instantaneously; no message indicates that
+migration has finished on either the source or the destination host. Simply
+hitting ENTER at the destination is enough to confirm that migration finished.
+
+Finally, type `q` in the source QEMU monitor to exit the source QEMU.
+
+For more information on live migration, see
+https://www.linux-kvm.org/page/Migration.
+
 libvirt
 -------
 
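End to end, the live-migration workflow this patch documents can be summarized as the following shell sketch. It only restates commands that already appear in the patch (socket path `/var/run/cntrl`, port 4444), assumes `x-enable-migration=on` is appended as a comma-separated property of the `vfio-user-pci` device, and leaves the elided QEMU arguments (`...`) as in the original, so it is an illustrative transcript rather than a runnable script:

```shell
# Destination host: start the nvmf/vfio-user target, then QEMU
# waiting for the incoming migration on port 4444.
LD_LIBRARY_PATH=build/lib:dpdk/build/lib build/bin/nvmf_tgt &
qemu-orcl/build/qemu-system-x86_64 ... \
    -device vfio-user-pci,socket=/var/run/cntrl,x-enable-migration=on \
    -incoming tcp:0:4444

# Source host: start the guest with migration enabled on the
# vfio-user-pci device.
qemu-orcl/build/qemu-system-x86_64 ... \
    -device vfio-user-pci,socket=/var/run/cntrl,x-enable-migration=on

# In the source QEMU monitor (CTRL-A, then C):
#     migrate -d tcp::4444
# Then type `q` to exit the source QEMU once migration has finished.
```

Both hosts are assumed to have SPDK and QEMU already built as described earlier in the patch, and the guest's disk on storage shared between them.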