From 98e3ab35054b946f7c2aba5408822532b0920b53 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 10 May 2022 17:10:19 +0200
Subject: coroutine: Rename qemu_coroutine_inc/dec_pool_size()

It's true that these functions currently affect the batch size in which
coroutines are reused (i.e. moved from the global release pool to the
allocation pool of a specific thread), but this is a bug and will be
fixed in a separate patch.

In fact, the comment in the header file already just promises that it
influences the pool size, so reflect this in the name of the functions.
As a nice side effect, the shorter function name makes some line
wrapping unnecessary.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20220510151020.105528-2-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 include/qemu/coroutine.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

(limited to 'include')

diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
index 284571b..031cf23 100644
--- a/include/qemu/coroutine.h
+++ b/include/qemu/coroutine.h
@@ -334,12 +334,12 @@ void coroutine_fn yield_until_fd_readable(int fd);
 /**
  * Increase coroutine pool size
  */
-void qemu_coroutine_increase_pool_batch_size(unsigned int additional_pool_size);
+void qemu_coroutine_inc_pool_size(unsigned int additional_pool_size);
 
 /**
- * Devcrease coroutine pool size
+ * Decrease coroutine pool size
  */
-void qemu_coroutine_decrease_pool_batch_size(unsigned int additional_pool_size);
+void qemu_coroutine_dec_pool_size(unsigned int additional_pool_size);
 
 #include "qemu/lockable.h"
-- cgit v1.1

From a5fced40212ed73c715ca298a2929dd4d99c9999 Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Wed, 11 May 2022 19:49:23 -0500
Subject: qemu-nbd: Pass max connections to blockdev layer

The next patch wants to adjust whether the NBD server code advertises
MULTI_CONN based on whether it is known if the server limits to
exactly one client. For a server started by QMP, this information is
obtained through nbd_server_start (which can support more than one
export); but for qemu-nbd (which supports exactly one export), it is
controlled only by the command-line option -e/--shared.

Since we already have a hook function used by qemu-nbd, it's easiest
to just alter its signature to fit our needs.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20220512004924.417153-2-eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 include/block/nbd.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'include')

diff --git a/include/block/nbd.h b/include/block/nbd.h
index a98eb66..c5a29ce 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -344,7 +344,7 @@ void nbd_client_new(QIOChannelSocket *sioc,
 void nbd_client_get(NBDClient *client);
 void nbd_client_put(NBDClient *client);
 
-void nbd_server_is_qemu_nbd(bool value);
+void nbd_server_is_qemu_nbd(int max_connections);
 bool nbd_server_is_running(void);
 void nbd_server_start(SocketAddress *addr, const char *tls_creds,
                       const char *tls_authz, uint32_t max_connections,
-- cgit v1.1
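As an illustration of how the renamed functions from the first patch above
are meant to be used, here is a minimal sketch of a device backend that
reserves coroutine pool capacity for its expected request parallelism. The
device and its queue-depth numbers are hypothetical, not part of the patch;
the pattern mirrors in-tree callers such as virtio-blk, which size the pool
by queue count times queue depth at realize time.

    /* Hypothetical device; the numbers are chosen only for illustration. */
    #include "qemu/osdep.h"
    #include "qemu/coroutine.h"

    #define MY_DEV_NUM_QUEUES  4
    #define MY_DEV_QUEUE_SIZE  128

    static void my_dev_realize(void)
    {
        /* Reserve one pooled coroutine per potentially in-flight request
         * so request handling does not fall back to allocating fresh
         * coroutines under load. */
        qemu_coroutine_inc_pool_size(MY_DEV_NUM_QUEUES * MY_DEV_QUEUE_SIZE);
    }

    static void my_dev_unrealize(void)
    {
        /* Give the reservation back when the device goes away. */
        qemu_coroutine_dec_pool_size(MY_DEV_NUM_QUEUES * MY_DEV_QUEUE_SIZE);
    }

Note that, as the commit message says, until the follow-up fix these calls
really adjust the reuse batch size rather than a true pool size; the rename
only aligns the name with the documented intent.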
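To see what the second patch's signature change enables, a sketch of the
qemu-nbd call site might look like the following. The variable and function
names here are illustrative (the real qemu-nbd code differs); only
nbd_server_is_qemu_nbd() comes from the header, and the assumption that 0
means "unlimited" follows qemu-nbd's documented -e/--shared semantics.

    #include "qemu/osdep.h"
    #include "block/nbd.h"

    /* qemu-nbd's -e/--shared value: 1 client by default, 0 assumed to
     * mean unlimited. */
    static int shared = 1;

    static void my_setup_server(void)
    {
        /* Instead of the old bool flag, hand the blockdev layer the
         * actual client limit so it can later reason about whether a
         * second client can ever connect. */
        nbd_server_is_qemu_nbd(shared);
    }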
From 58a6fdcc9efb2a7c1ef4893dca4aa5e8020ca3dc Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Wed, 11 May 2022 19:49:24 -0500
Subject: nbd/server: Allow MULTI_CONN for shared writable exports

According to the NBD spec, a server that advertises
NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will
not see any cache inconsistencies: when properly separated by a single
flush, actions performed by one client will be visible to another
client, regardless of which client did the flush.

We always satisfy these conditions in qemu - even when we support
multiple clients, ALL clients go through a single point of reference
into the block layer, with no local caching. The effect of one client
is instantly visible to the next client. Even if our backend were a
network device, we argue that any multi-path caching effects that
would cause inconsistencies in back-to-back actions not seeing the
effect of previous actions would be a bug in that backend, and not the
fault of caching in qemu. As such, it is safe to unconditionally
advertise CAN_MULTI_CONN for any qemu NBD server situation that
supports parallel clients.

Note, however, that we don't want to advertise CAN_MULTI_CONN when we
know that a second client cannot connect (for historical reasons,
qemu-nbd defaults to a single connection while nbd-server-add and QMP
commands default to unlimited connections; but we already have
existing means to let either style of NBD server creation alter those
defaults). This is visible by no longer advertising MULTI_CONN for
'qemu-nbd -r' without -e, as in the iotest nbd-qemu-allocation.

The harder part of this patch is setting up an iotest to demonstrate
behavior of multiple NBD clients to a single server. It might be
possible with parallel qemu-io processes, but I found it easier to do
in python with the help of libnbd, and help from Nir and Vladimir in
writing the test.

Signed-off-by: Eric Blake <eblake@redhat.com>
Suggested-by: Nir Soffer
Suggested-by: Vladimir Sementsov-Ogievskiy
Message-Id: <20220512004924.417153-3-eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 include/block/nbd.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

(limited to 'include')

diff --git a/include/block/nbd.h b/include/block/nbd.h
index c5a29ce..c74b7a9 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016-2020 Red Hat, Inc.
+ * Copyright (C) 2016-2022 Red Hat, Inc.
  * Copyright (C) 2005 Anthony Liguori
  *
  * Network Block Device
@@ -346,6 +346,7 @@ void nbd_client_put(NBDClient *client);
 
 void nbd_server_is_qemu_nbd(int max_connections);
 bool nbd_server_is_running(void);
+int nbd_server_max_connections(void);
 void nbd_server_start(SocketAddress *addr, const char *tls_creds,
                       const char *tls_authz, uint32_t max_connections,
                       Error **errp);
-- cgit v1.1
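The advertisement decision described in the last patch's commit message can
be pictured with the sketch below. This is not the patch's actual server
code: only nbd_server_max_connections() and the NBD_FLAG_CAN_MULTI_CONN
constant are taken from QEMU's headers, the helper name is hypothetical,
and 0 is assumed to mean "unlimited".

    #include "qemu/osdep.h"
    #include "block/nbd.h"

    /* Sketch: compute transmission flags for a writable export. Only
     * when exactly one client may ever connect is MULTI_CONN pointless;
     * otherwise qemu's single point of reference into the block layer
     * makes it safe to advertise, per the reasoning above. */
    static uint16_t my_export_flags(uint16_t base_flags)
    {
        int max = nbd_server_max_connections();  /* 0 assumed unlimited */

        if (max != 1) {
            base_flags |= NBD_FLAG_CAN_MULTI_CONN;
        }
        return base_flags;
    }

This also matches the behavior change called out above: 'qemu-nbd -r'
without -e limits the server to one client, so the flag is no longer set
in the nbd-qemu-allocation iotest.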