| author | Stefan Hajnoczi <stefanha@redhat.com> | 2023-05-16 15:02:29 -0400 |
|---|---|---|
| committer | Kevin Wolf <kwolf@redhat.com> | 2023-05-30 17:32:02 +0200 |
| commit | f6eac904f6825d47adc6102c8d7b59b8ba5b778e (patch) | |
| tree | 5b1faf62077883c92fd8619063bd3738680c20e0 /hw/block/xen-block.c | |
| parent | ab61335025b1274bd7042219203524045b23e0d3 (diff) | |
xen-block: implement BlockDevOps->drained_begin()
Detach event channels during drained sections to stop I/O submission
from the ring. xen-block is no longer reliant on aio_disable_external()
after this patch. This will allow us to remove the
aio_disable_external() API once all other code that relies on it is
converted.
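As a caller-side illustration (not part of this patch): after this change, any drained section around the xen-block BlockBackend pauses ring processing. A minimal sketch using the existing blk_drain() API; the function name example_pause_xen_block_io() is purely hypothetical.

```c
#include "qemu/osdep.h"
#include "sysemu/block-backend.h"

/*
 * Hypothetical caller (not from this patch): draining the BlockBackend now
 * invokes the BlockDevOps .drained_begin()/.drained_end() callbacks added
 * below, which detach and re-attach the Xen event channel.
 */
static void example_pause_xen_block_io(BlockBackend *blk)
{
    /*
     * Enters a drained section, waits for in-flight requests, then exits:
     * .drained_begin() fires on entry and .drained_end() on exit.
     */
    blk_drain(blk);
}
```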
Extend xen_device_set_event_channel_context() to allow ctx=NULL. The
event channel still exists but the event loop does not monitor the file
descriptor. Event channel processing can resume by calling
xen_device_set_event_channel_context() with a non-NULL ctx.
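A rough sketch of how the ctx=NULL case can be handled inside hw/xen/xen-bus.c (that file is outside this diffstat). The field names channel->ctx and channel->fd and the handler name xen_device_event are assumptions, and aio_set_fd_handler()'s parameter list differs between QEMU versions, so treat this as illustrative rather than the actual implementation:

```c
/*
 * Sketch only: field and handler names are assumed, and the
 * aio_set_fd_handler() parameter list shown here is the current
 * 7-argument form, which differs in older QEMU versions.
 */
static void event_channel_set_context(XenEventChannel *channel, AioContext *ctx)
{
    if (channel->ctx) {
        /* Stop monitoring the event channel fd; the channel itself remains */
        aio_set_fd_handler(channel->ctx, channel->fd,
                           NULL, NULL, NULL, NULL, NULL);
    }

    channel->ctx = ctx;

    if (ctx) {
        /* Resume event channel processing in the new AioContext */
        aio_set_fd_handler(ctx, channel->fd,
                           xen_device_event, NULL, NULL, NULL, channel);
    }
}
```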
Factor out xen_device_set_event_channel_context() calls in
hw/block/dataplane/xen-block.c into attach/detach helper functions.
Incidentally, these don't require the AioContext lock because
aio_set_fd_handler() is thread-safe.
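The attach/detach helpers themselves live in hw/block/dataplane/xen-block.c, which is outside this diffstat. A minimal sketch of their shape, assuming the XenBlockDataPlane struct keeps xendev, event_channel, and ctx pointers (field names inferred from the commit message, not copied from the tree):

```c
/*
 * Sketch of the factored-out helpers; the XenBlockDataPlane field names
 * (xendev, event_channel, ctx) are assumptions.
 */
void xen_block_dataplane_attach(XenBlockDataPlane *dataplane)
{
    if (!dataplane || !dataplane->event_channel) {
        return;
    }

    /* Resume monitoring the event channel in the dataplane's AioContext */
    xen_device_set_event_channel_context(dataplane->xendev,
                                         dataplane->event_channel,
                                         dataplane->ctx, &error_abort);
}

void xen_block_dataplane_detach(XenBlockDataPlane *dataplane)
{
    if (!dataplane || !dataplane->event_channel) {
        return;
    }

    /* ctx=NULL keeps the event channel but stops monitoring its fd */
    xen_device_set_event_channel_context(dataplane->xendev,
                                         dataplane->event_channel,
                                         NULL, &error_abort);
}
```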
It's safer to register BlockDevOps after the dataplane instance has been
created. The BlockDevOps .drained_begin/end() callbacks depend on the
dataplane instance, so move the blk_set_dev_ops() call after
xen_block_dataplane_create().
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230516190238.8401-12-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Diffstat (limited to 'hw/block/xen-block.c')
-rw-r--r-- | hw/block/xen-block.c | 24 |
1 file changed, 21 insertions, 3 deletions
```diff
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index f5a7445..f099914 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -189,8 +189,26 @@ static void xen_block_resize_cb(void *opaque)
     xen_device_backend_printf(xendev, "state", "%u", state);
 }
 
+/* Suspend request handling */
+static void xen_block_drained_begin(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_detach(blockdev->dataplane);
+}
+
+/* Resume request handling */
+static void xen_block_drained_end(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_attach(blockdev->dataplane);
+}
+
 static const BlockDevOps xen_block_dev_ops = {
-    .resize_cb = xen_block_resize_cb,
+    .resize_cb     = xen_block_resize_cb,
+    .drained_begin = xen_block_drained_begin,
+    .drained_end   = xen_block_drained_end,
 };
 
 static void xen_block_realize(XenDevice *xendev, Error **errp)
@@ -242,8 +260,6 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
         return;
     }
 
-    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
-
     if (conf->discard_granularity == -1) {
         conf->discard_granularity = conf->physical_block_size;
     }
@@ -277,6 +293,8 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
     blockdev->dataplane =
         xen_block_dataplane_create(xendev, blk, conf->logical_block_size,
                                    blockdev->props.iothread);
+
+    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
 }
 
 static void xen_block_frontend_changed(XenDevice *xendev,
```