commit 9c707525cbb1dd1e56876e45c70c0c08f2876d41
tree 1670192108968d8efaafd8246ea0710678e4a81b
parent ae5a40e8581185654a667fbbf7e4adbc2a2a3e45
author Kevin Wolf <kwolf@redhat.com> 2024-03-14 17:58:24 +0100
committer Kevin Wolf <kwolf@redhat.com> 2024-03-18 12:38:02 +0100
nbd/server: Fix race in draining the export
When draining an NBD export, nbd_drained_begin() first sets
client->quiescing so that nbd_client_receive_next_request() won't start
any new request coroutines. Then nbd_drained_poll() tries to make sure
that we wait for any existing request coroutines by checking that
client->nb_requests has become 0.
However, there is a small window between creating a new request
coroutine and increasing client->nb_requests. If a coroutine is in this
state, it won't be waited for and drain returns too early.
In the context of switching to a different AioContext, this means that
blk_aio_attached() will see client->recv_coroutine != NULL and fail its
assertion.
Fix this by increasing client->nb_requests immediately when starting the
coroutine. Doing this after the checks of whether we should create a new
coroutine is okay because client->lock is held.
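
The pattern behind the race and the fix can be sketched with plain
pthreads instead of QEMU's coroutine machinery. This is a minimal,
hypothetical model, not the actual nbd/server.c code: client_t,
spawn_request(), request_worker(), and drain() stand in for NBDClient,
nbd_client_receive_next_request(), nbd_trip(), and the drained
begin/poll callbacks. It only illustrates the point the message makes:
the counter must be incremented under client->lock, in the spawner,
before the worker can run.

    /* Sketch only; all names are hypothetical, not QEMU's. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    typedef struct {
        pthread_mutex_t lock;
        bool quiescing;
        int nb_requests;
    } client_t;

    static void *request_worker(void *opaque)
    {
        client_t *client = opaque;

        /*
         * Buggy variant: incrementing nb_requests HERE would leave a
         * window after spawn_request() returns and before this line
         * runs in which drain() sees nb_requests == 0 too early.
         */
        usleep(1000);  /* ... handle one request ... */

        pthread_mutex_lock(&client->lock);
        client->nb_requests--;  /* request done, drain may finish */
        pthread_mutex_unlock(&client->lock);
        return NULL;
    }

    /* Fixed spawner: count the request before the worker exists. */
    static void spawn_request(client_t *client, pthread_t *thread)
    {
        pthread_mutex_lock(&client->lock);
        if (client->quiescing) {
            pthread_mutex_unlock(&client->lock);
            return;
        }
        /*
         * The fix: increment under client->lock, immediately when
         * starting the worker, so drain() can never miss it.
         */
        client->nb_requests++;
        pthread_mutex_unlock(&client->lock);

        pthread_create(thread, NULL, request_worker, client);
    }

    /* Drain: forbid new requests, then wait for in-flight ones. */
    static void drain(client_t *client)
    {
        pthread_mutex_lock(&client->lock);
        client->quiescing = true;
        while (client->nb_requests > 0) {
            pthread_mutex_unlock(&client->lock);
            usleep(100);  /* poll, as nbd_drained_poll() does */
            pthread_mutex_lock(&client->lock);
        }
        pthread_mutex_unlock(&client->lock);
    }

    int main(void)
    {
        client_t client = { PTHREAD_MUTEX_INITIALIZER, false, 0 };
        pthread_t thread;

        spawn_request(&client, &thread);
        drain(&client);  /* cannot return before the worker is counted */
        pthread_join(&thread, NULL);
        printf("drained with nb_requests = %d\n", client.nb_requests);
        return 0;
    }

With the increment in the worker, drain() can win the race against a
worker that has been spawned but not yet scheduled; with the increment
in the spawner, under the same lock as the quiescing check, no such
window exists.
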
Cc: qemu-stable@nongnu.org
Fixes: fd6afc501a01 ("nbd/server: Use drained block ops to quiesce the server")
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20240314165825.40261-2-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>