author     Kevin Wolf <kwolf@redhat.com>    2018-09-18 17:09:16 +0200
committer  Kevin Wolf <kwolf@redhat.com>    2018-09-25 15:50:15 +0200
commit     cfe29d8294e06420e15d4938421ae006c8ac49e7 (patch)
tree       98c0ebb7866f1c3066de2ddfe1a01c8f04375bbf /util/aio-wait.c
parent     5599c162c3bec2bc8f0123e4d5802a70d9984b3b (diff)
block: Use a single global AioWait
When draining a block node, we recurse to its parent and, for subtree
drains, also to its children. A single AIO_WAIT_WHILE() is then used to
wait for bdrv_drain_poll() to become true, which depends on all of the
nodes we recursed to. However, if the respective child or parent becomes
quiescent and calls bdrv_wakeup(), only the AioWait of the child/parent
is checked, while AIO_WAIT_WHILE() depends on the AioWait of the
original node.
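To make the mismatch concrete, here is a minimal standalone model of the pre-patch behaviour. All names (FakeAioWait, Node, fake_kick, fake_bdrv_wakeup) are illustrative stand-ins, not QEMU's real types or functions; only the control flow mirrors the description above: the drain loop registers on the AioWait embedded in the node it was called on, so a kick aimed at a different node's AioWait checks the wrong waiter count and is lost.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for AioWait and a block node; not QEMU's types. */
typedef struct {
    int num_waiters;    /* incremented by a thread inside AIO_WAIT_WHILE() */
    int kicks;          /* dummy BHs that would be scheduled by aio_wait_kick() */
} FakeAioWait;

typedef struct Node {
    FakeAioWait wait;   /* pre-patch: every node embeds its own AioWait */
    struct Node *parent;
    bool quiescent;
} Node;

/* Pre-patch aio_wait_kick(): only considers waiters on *this* AioWait. */
static void fake_kick(FakeAioWait *wait)
{
    if (wait->num_waiters) {
        wait->kicks++;  /* stands in for scheduling the dummy BH */
    }
}

/* The child becomes quiescent and signals *its own* AioWait. */
static void fake_bdrv_wakeup(Node *node)
{
    node->quiescent = true;
    fake_kick(&node->wait);  /* the drain loop is registered elsewhere */
}

int main(void)
{
    Node parent = { .parent = NULL };
    Node child  = { .parent = &parent };

    /* Draining 'parent' registers on parent.wait, like AIO_WAIT_WHILE(). */
    parent.wait.num_waiters++;

    fake_bdrv_wakeup(&child);

    /* Nobody is registered on child.wait, so the kick does nothing, and
     * the waiter on parent.wait is never woken: 0 kicks on both. */
    printf("kicks on parent.wait: %d, kicks on child.wait: %d\n",
           parent.wait.kicks, child.wait.kicks);
    return 0;
}
```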
Fix this by using a single AioWait for all callers of AIO_WAIT_WHILE().
This may mean that the draining thread gets a few more unnecessary
wakeups because an unrelated operation got completed, but we already
wake it up when something _could_ have changed rather than only if it
has certainly changed.
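The post-patch control flow can be sketched in the same toy style (again, the fake_* names and the simulated poll are illustrative, not QEMU's implementation): every waiter registers on one global counter, every completion kicks that counter, and a waiter woken for an unrelated reason simply re-evaluates its own condition and polls again.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model of the post-patch scheme; names are illustrative only. */
static int global_num_waiters;   /* plays the role of global_aio_wait.num_waiters */
static bool drain_target_done;
static int completions;          /* how many operations have finished so far */

/* Post-patch aio_wait_kick(): no argument, always aimed at the global wait. */
static void fake_kick(void)
{
    if (global_num_waiters) {
        /* In QEMU, a dummy BH is scheduled here so aio_poll() returns. */
    }
}

/* Stand-in for aio_poll(): one operation completes per call and kicks. */
static void fake_poll(void)
{
    completions++;
    if (completions == 1) {
        /* An unrelated operation finishes: the waiter wakes up for nothing. */
    } else {
        /* The operation being drained finishes. */
        drain_target_done = true;
    }
    fake_kick();
}

int main(void)
{
    int iterations = 0;

    /* The shape of AIO_WAIT_WHILE(ctx, !drain_target_done) after the patch. */
    global_num_waiters++;
    while (!drain_target_done) {
        fake_poll();
        iterations++;
    }
    global_num_waiters--;

    /* Two iterations: one spurious wakeup plus the real completion. */
    printf("poll iterations: %d\n", iterations);
    return 0;
}
```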
Apart from that, drain is a slow path anyway. In theory it would be
possible to use wakeups more selectively and still correctly, but the
gains are likely not worth the additional complexity. In fact, this
patch is a nice simplification for some places in the code.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Diffstat (limited to 'util/aio-wait.c')
-rw-r--r--  util/aio-wait.c | 11
1 file changed, 6 insertions, 5 deletions
```diff
diff --git a/util/aio-wait.c b/util/aio-wait.c
index b8a8f86..b487749 100644
--- a/util/aio-wait.c
+++ b/util/aio-wait.c
@@ -26,21 +26,22 @@
 #include "qemu/main-loop.h"
 #include "block/aio-wait.h"
 
+AioWait global_aio_wait;
+
 static void dummy_bh_cb(void *opaque)
 {
     /* The point is to make AIO_WAIT_WHILE()'s aio_poll() return */
 }
 
-void aio_wait_kick(AioWait *wait)
+void aio_wait_kick(void)
 {
     /* The barrier (or an atomic op) is in the caller. */
-    if (atomic_read(&wait->num_waiters)) {
+    if (atomic_read(&global_aio_wait.num_waiters)) {
         aio_bh_schedule_oneshot(qemu_get_aio_context(), dummy_bh_cb, NULL);
     }
 }
 
 typedef struct {
-    AioWait wait;
     bool done;
     QEMUBHFunc *cb;
     void *opaque;
@@ -54,7 +55,7 @@ static void aio_wait_bh(void *opaque)
     data->cb(data->opaque);
 
     data->done = true;
-    aio_wait_kick(&data->wait);
+    aio_wait_kick();
 }
 
 void aio_wait_bh_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
@@ -67,5 +68,5 @@ void aio_wait_bh_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
     assert(qemu_get_current_aio_context() == qemu_get_aio_context());
 
     aio_bh_schedule_oneshot(ctx, aio_wait_bh, &data);
-    AIO_WAIT_WHILE(&data.wait, ctx, !data.done);
+    AIO_WAIT_WHILE(ctx, !data.done);
 }
```
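Callers of AIO_WAIT_WHILE() and aio_wait_kick() elsewhere in the tree (outside this diffstat) are converted along the same lines: the embedded AioWait field and the wait argument simply go away. A hypothetical caller, purely for illustration (HypotheticalJob and its functions are not real QEMU code), would change like this:

```diff
 typedef struct HypotheticalJob {
-    AioWait wait;                 /* no longer needed: waiters share global_aio_wait */
     bool completed;
 } HypotheticalJob;
 
 static void hypothetical_job_complete(HypotheticalJob *job)
 {
     job->completed = true;
-    aio_wait_kick(&job->wait);
+    aio_wait_kick();              /* kicks the single global AioWait */
 }
 
 static void hypothetical_job_wait(HypotheticalJob *job, AioContext *ctx)
 {
-    AIO_WAIT_WHILE(&job->wait, ctx, !job->completed);
+    AIO_WAIT_WHILE(ctx, !job->completed);
 }
```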