path: root/block/block-backend.c
Age | Commit message | Author | Files | Lines
2023-11-21block: Fix deadlocks in bdrv_graph_wrunlock()Kevin Wolf1-2/+6
bdrv_graph_wrunlock() calls aio_poll(), which may run callbacks that have a nested event loop. Nested event loops can depend on other iothreads making progress, so in order to allow them to do so, the AioContext lock of another thread must not be held while calling aio_poll(). This introduces a @bs parameter to bdrv_graph_wrunlock() whose AioContext is temporarily dropped (which matches bdrv_graph_wrlock()), and a bdrv_graph_wrunlock_ctx() that can be used if the BlockDriverState doesn't necessarily exist any more when unlocking. This also requires a change to bdrv_schedule_unref(), which was relying on the incorrectly taken lock. It needs to take the lock itself now. While this is a separate bug, it can't be fixed in a separate patch because otherwise the intermediate state would either deadlock or try to release a lock that we don't even hold. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231115172012.112727-3-kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> [kwolf: Fixed up bdrv_schedule_unref()] Signed-off-by: Kevin Wolf <kwolf@redhat.com>
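The resulting locking pattern, sketched for illustration (the caller body is hypothetical; the function names are the ones introduced or referenced by this commit):

    /* Caller is assumed to hold bs's AioContext lock. */
    bdrv_graph_wrlock(bs);          /* temporarily drops bs's AioContext lock */
    /* ... modify the block graph ... */
    bdrv_graph_wrunlock(bs);        /* drops it again around the internal aio_poll() */

    /* If bs may already be gone at unlock time, pass its AioContext instead: */
    bdrv_graph_wrunlock_ctx(ctx);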
2023-11-21block: Fix bdrv_graph_wrlock() call in blk_remove_bs()Kevin Wolf1-1/+3
While not all callers of blk_remove_bs() are correct in this respect, the assumption in the function is that callers hold the AioContext lock of the BlockBackend (this is required by the drain calls in it). In order to avoid deadlock in the nested event loop, bdrv_graph_wrlock() must then be called with the root BlockDriverState as its parameter instead of NULL, so that this AioContext lock is temporarily dropped. Fixes: https://issues.redhat.com/browse/RHEL-1761 Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231115172012.112727-2-kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-11-07block: Mark bdrv_skip_filters() and callers GRAPH_RDLOCKKevin Wolf1-0/+1
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_skip_filters() need to hold a reader lock for the graph because it calls bdrv_filter_child(), which accesses bs->file/backing. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-9-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
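For readers new to these annotations, the general shape is roughly the following (a sketch; the exact declarations live in the block headers and may differ in detail, and some_caller() is a made-up name):

    /* Declaration side: the annotation documents the locking requirement. */
    BlockDriverState * GRAPH_RDLOCK bdrv_skip_filters(BlockDriverState *bs);

    /* Caller side: hold the reader lock for the enclosing scope. */
    void some_caller(BlockDriverState *bs)
    {
        GRAPH_RDLOCK_GUARD_MAINLOOP();
        BlockDriverState *filtered = bdrv_skip_filters(bs);
        /* ... use filtered ... */
    }

GRAPH_RDLOCK_GUARD() is the coroutine-context counterpart of GRAPH_RDLOCK_GUARD_MAINLOOP().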
2023-11-07block: Mark bdrv_root_attach_child() GRAPH_WRLOCKKevin Wolf1-0/+2
Instead of taking the writer lock internally, require callers to already hold it when calling bdrv_root_attach_child(). These callers will typically already hold the graph lock once the locking work is completed, which means that they can't call functions that take it internally. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-5-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-11-07block: Mark bdrv_probe_blocksizes() and callers GRAPH_RDLOCKKevin Wolf1-0/+2
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_probe_blocksizes() need to hold a reader lock for the graph because it calls bdrv_filter_bs(), which accesses bs->file/backing. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-2-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12block: Mark bdrv_op_is_blocked() and callers GRAPH_RDLOCKKevin Wolf1-0/+1
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_op_is_blocked() need to hold a reader lock for the graph because it calls bdrv_get_device_or_node_name(), which accesses the parents list of a node. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20230929145157.45443-18-kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12qcow2: Mark qcow2_signal_corruption() and callers GRAPH_RDLOCKKevin Wolf1-0/+2
This adds GRAPH_RDLOCK annotations to declare that callers of qcow2_signal_corruption() need to hold a reader lock for the graph because it calls bdrv_get_node_name(), which accesses the parents list of a node. For some places, we know that they will hold the lock, but we don't have the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock() with a FIXME comment. These places will be removed once everything is properly annotated. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20230929145157.45443-15-kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12block: Mark bdrv_first_blk() and bdrv_is_root_node() GRAPH_RDLOCKKevin Wolf1-1/+5
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_first_blk() and bdrv_is_root_node() need to hold a reader lock for the graph. These functions are the only functions in block-backend.c that access the parent list of a node. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20230929145157.45443-5-kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-09-21Merge tag 'for-upstream' of https://repo.or.cz/qemu/kevin into stagingStefan Hajnoczi1-32/+32
Block layer patches

- Graph locking part 4 (node management)
- qemu-img map: report compressed data blocks
- block-backend: process I/O in the current AioContext

# gpg: Signature made Wed 20 Sep 2023 12:31:49 EDT
# gpg: using RSA key DC3DEB159A9AF95D3D7456FE7F09B272C88F2FD6
# gpg: issuer "kwolf@redhat.com"
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

* tag 'for-upstream' of https://repo.or.cz/qemu/kevin: (28 commits)
  block: mark aio_poll as non-coroutine
  block-backend: process zoned requests in the current AioContext
  block-backend: process I/O in the current AioContext
  test-bdrv-drain: avoid race with BH in IOThread drain test
  block: remove AIOCBInfo->get_aio_context()
  qemu-img: map: report compressed data blocks
  block: add BDRV_BLOCK_COMPRESSED flag for bdrv_block_status()
  block: Mark bdrv_add/del_child() and caller GRAPH_WRLOCK
  block: Mark bdrv_unref_child() GRAPH_WRLOCK
  block: Mark bdrv_root_unref_child() GRAPH_WRLOCK
  block: Take graph rdlock in bdrv_change_aio_context()
  block: Take graph rdlock in bdrv_drop_intermediate()
  block: Mark bdrv_parent_cb_change_media() GRAPH_RDLOCK
  block: Mark bdrv_child_perm() GRAPH_RDLOCK
  block: Mark bdrv_get_cumulative_perm() and callers GRAPH_RDLOCK
  block: Mark bdrv_parent_perms_conflict() and callers GRAPH_RDLOCK
  block: Mark bdrv_attach_child() GRAPH_WRLOCK
  block: Call transaction callbacks with lock held
  block: Mark bdrv_attach_child_common() GRAPH_WRLOCK
  block: Mark bdrv_replace_child_tran() GRAPH_WRLOCK
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-09-20block-backend: process zoned requests in the current AioContextStefan Hajnoczi1-6/+6
Process zoned requests in the current thread's AioContext instead of in the BlockBackend's AioContext. There is no need to use the BlockBackend's AioContext thanks to CoMutex bs->wps->colock, which protects zone metadata. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230912231037.826804-5-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-09-20block-backend: process I/O in the current AioContextStefan Hajnoczi1-3/+3
Switch blk_aio_*() APIs over to multi-queue by using qemu_get_current_aio_context() instead of blk_get_aio_context(). This change will allow devices to process I/O in multiple IOThreads in the future. I audited existing blk_aio_*() callers: - migration/block.c: blk_mig_lock() protects the data accessed by the completion callback. - The remaining emulated devices and exports run with qemu_get_aio_context() == blk_get_aio_context(). Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230912231037.826804-4-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
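The core of the change, reduced to a sketch (where exactly the AioContext is picked up inside block-backend.c is simplified here):

    /* Before: completions ran in the BlockBackend's home AioContext. */
    AioContext *ctx = blk_get_aio_context(blk);

    /* After: completions run in whatever AioContext submitted the request. */
    AioContext *ctx = qemu_get_current_aio_context();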
2023-09-20block: remove AIOCBInfo->get_aio_context()Stefan Hajnoczi1-17/+0
The synchronous bdrv_aio_cancel() function needs the acb's AioContext so it can call aio_poll() to wait for cancellation. It turns out that all users run under the BQL in the main AioContext, so this callback is not needed. Remove the callback, mark bdrv_aio_cancel() GLOBAL_STATE_CODE just like its blk_aio_cancel() caller, and poll the main loop AioContext. The purpose of this cleanup is to identify bdrv_aio_cancel() as an API that does not work with the multi-queue block layer. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230912231037.826804-2-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Klaus Jensen <k.jensen@samsung.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-09-20block: Mark bdrv_root_unref_child() GRAPH_WRLOCKKevin Wolf1-0/+3
Instead of taking the writer lock internally, require callers to already hold it when calling bdrv_root_unref_child(). These callers will typically already hold the graph lock once the locking work is completed, which means that they can't call functions that take it internally. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230911094620.45040-20-kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-09-20block: Mark bdrv_parent_perms_conflict() and callers GRAPH_RDLOCKKevin Wolf1-6/+20
The function reads the parents list, so it needs to hold the graph lock. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230911094620.45040-14-kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-08-29block/throttle-groups: Use ThrottleDirection instead of bool is_writezhenwei pi1-2/+2
The 'bool is_write' style is obsolete in the throttle framework; adapt block throttle groups to the new style:

- use ThrottleDirection instead of 'bool is_write'. Ex:
    schedule_next_request(ThrottleGroupMember *tgm, bool is_write)
    -> schedule_next_request(ThrottleGroupMember *tgm, ThrottleDirection direction)
- use THROTTLE_MAX instead of a hard-coded size. Ex:
    ThrottleGroupMember *tokens[2] -> ThrottleGroupMember *tokens[THROTTLE_MAX]
- use ThrottleDirection instead of hard-coded bounds when iterating. Ex:
    for (i = 0; i < 2; i++) -> for (dir = THROTTLE_READ; dir < THROTTLE_MAX; dir++)

A simple python script was used to test the new style:

    #!/usr/bin/python3
    # Randomly adjust read/write bytes/iops throttling limits on a running guest.
    import subprocess
    import random
    import time

    commands = ['virsh blkdeviotune jammy vda --write-bytes-sec ',
                'virsh blkdeviotune jammy vda --write-iops-sec ',
                'virsh blkdeviotune jammy vda --read-bytes-sec ',
                'virsh blkdeviotune jammy vda --read-iops-sec ']

    for loop in range(1, 1000):
        time.sleep(random.randrange(3, 5))
        # pick one of the four commands and a random limit value
        command = commands[random.randrange(0, len(commands))] + str(random.randrange(0, 1000000))
        subprocess.run(command, shell=True, check=True)

This works fine.

Signed-off-by: zhenwei pi <pizhenwei@bytedance.com> Message-Id: <20230728022006.1098509-10-pizhenwei@bytedance.com> Reviewed-by: Hanna Czenczek <hreitz@redhat.com> Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
2023-06-01block: add blk_io_plug_call() APIStefan Hajnoczi1-22/+0
Introduce a new API for thread-local blk_io_plug() that does not traverse the block graph. The goal is to make blk_io_plug() multi-queue friendly. Instead of having block drivers track whether or not we're in a plugged section, provide an API that allows them to defer a function call until we're unplugged: blk_io_plug_call(fn, opaque). If blk_io_plug_call() is called multiple times with the same fn/opaque pair, then fn() is only called once at the end of the function - resulting in batching. This patch introduces the API and changes blk_io_plug()/blk_io_unplug(). blk_io_plug()/blk_io_unplug() no longer require a BlockBackend argument because the plug state is now thread-local. Later patches convert block drivers to blk_io_plug_call() and then we can finally remove .bdrv_co_io_plug() once all block drivers have been converted. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Acked-by: Kevin Wolf <kwolf@redhat.com> Message-id: 20230530180959.1108766-2-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
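A sketch of how a block driver might use the new API (MyQueue, MyRequest and the my_* helpers are hypothetical; only blk_io_plug_call() is the API described above):

    /* Runs once when the thread-local plug section ends. */
    static void my_unplug_fn(void *opaque)
    {
        MyQueue *q = opaque;
        my_queue_submit_all(q);        /* hypothetical: flush everything queued so far */
    }

    static void my_submit(MyQueue *q, MyRequest *req)
    {
        my_queue_add(q, req);          /* hypothetical: just queue the request */
        /* May be called many times with the same fn/opaque pair;
         * fn() is invoked only once, giving batched submission. */
        blk_io_plug_call(my_unplug_fn, q);
    }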
2023-05-30block: add blk_in_drain() APIStefan Hajnoczi1-0/+7
The BlockBackend quiesce_counter is greater than zero during drained sections. Add an API to check whether the BlockBackend is in a drained section. The next patch will use this API. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230516190238.8401-10-stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
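Usage is a simple predicate check, e.g. (sketch):

    if (blk_in_drain(blk)) {
        /* the BlockBackend is in a drained section; don't start new requests */
    }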
2023-05-30block-backend: split blk_do_set_aio_context()Stefan Hajnoczi1-38/+23
blk_set_aio_context() is not fully transactional because blk_do_set_aio_context() updates blk->ctx outside the transaction. Most of the time this goes unnoticed but a BlockDevOps.drained_end() callback that invokes blk_get_aio_context() fails the assert(ctx == blk->ctx) check. This happens because blk->ctx is only assigned after BlockDevOps.drained_end() is called and we're in an intermediate state where BlockDriverState nodes already have the new context and the BlockBackend still has the old context. Making blk_set_aio_context() fully transactional solves this assertion failure because the BlockBackend's context is updated as part of the transaction (before BlockDevOps.drained_end() is called). Split blk_do_set_aio_context() in order to solve this assertion failure. This helper function actually serves two different purposes: 1. It drives blk_set_aio_context(). 2. It responds to BdrvChildClass->change_aio_ctx(). Get rid of the helper function. Do #1 inside blk_set_aio_context() and do #2 inside blk_root_set_aio_ctx_commit(). This simplifies the code. The only drawback of the fully transactional approach is that blk_set_aio_context() must contend with blk_root_set_aio_ctx_commit() being invoked as part of the AioContext change propagation. This can be solved by temporarily setting blk->allow_aio_context_change to true. Future patches call blk_get_aio_context() from BlockDevOps->drained_end(), so this patch will become necessary. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230516190238.8401-2-stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-05-30block-backend: Fix blk_new_open() for iothreadsKevin Wolf1-6/+21
This fixes blk_new_open() to not assume that bs is in the main context. In particular, the BlockBackend must be created with the right AioContext because it will refuse to move to a different context afterwards. (blk->allow_aio_context_change is false.) Use this opportunity to use blk_insert_bs() instead of duplicating the bdrv_root_attach_child() call. This is consistent with what blk_new_with_bs() does. Add comments to document the locking rules. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230525124713.401149-5-kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-05-30block: Take main AioContext lock when calling bdrv_open()Kevin Wolf1-0/+2
The function documentation already says that all callers must hold the main AioContext lock, but not all of them do. This can cause assertion failures when functions called by bdrv_open() try to drop the lock. Fix a few more callers to take the lock before calling bdrv_open(). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230525124713.401149-4-kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-05-30block-coroutine-wrapper: Take AioContext lock in no_co_wrappersKevin Wolf1-1/+6
All of the functions that currently take a BlockDriverState, BdrvChild or BlockBackend as their first parameter expect the associated AioContext to be locked when they are called. In the case of no_co_wrappers, they are called from bottom halves directly in the main loop, so no other caller can be expected to take the lock for them. This can result in assertion failures because a lock that isn't taken is released in nested event loops. Looking at the first parameter is already done by co_wrappers to decide where the coroutine should run, so doing the same in no_co_wrappers is only consistent. Take the lock in the generated bottom halves to fix the problem. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230525124713.401149-2-kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-05-15block: introduce zone append write for zoned devicesSam Li1-0/+61
A zone append command is a write operation that specifies the first logical block of a zone as the write position. When writing to a zoned block device using zone append, the byte offset of the call may point at any position within the zone to which the data is being appended. Upon completion the device will respond with the position where the data has been written in the zone. Signed-off-by: Sam Li <faithilikerun@gmail.com> Reviewed-by: Dmitry Fomichev <dmitry.fomichev@wdc.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 20230508051510.177850-3-faithilikerun@gmail.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
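A hedged sketch of a zone append issued from coroutine context (the prototype shown is assumed from this series; check the headers for the authoritative one):

    int64_t offset = zone_start_offset;   /* in: start of the target zone, in bytes */
    int ret = blk_co_zone_append(blk, &offset, &qiov, 0);
    if (ret == 0) {
        /* offset now holds where the device actually placed the data */
    }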
2023-05-15block/block-backend: add block layer APIs resembling Linux ZonedBlockDevice ioctlsSam Li1-0/+137
Add zoned device option to host_device BlockDriver. It will be presented only for zoned host block devices. By adding zone management operations to the host_block_device BlockDriver, users can use the new block layer APIs including Report Zone and four zone management operations (open, close, finish, reset, reset_all). Qemu-io uses the new APIs to perform zoned storage commands of the device: zone_report(zrp), zone_open(zo), zone_close(zc), zone_reset(zrs), zone_finish(zf). For example, to test zone_report, use the following command: $ ./build/qemu-io --image-opts -n driver=host_device,filename=/dev/nullb0 -c "zrp offset nr_zones" Signed-off-by: Sam Li <faithilikerun@gmail.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Dmitry Fomichev <dmitry.fomichev@wdc.com> Acked-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 20230508045533.175575-4-faithilikerun@gmail.com Message-id: 20230324090605.28361-4-faithilikerun@gmail.com [Adjust commit message prefix as suggested by Philippe Mathieu-Daudé <philmd@linaro.org> and remove spurious ret = -errno in raw_co_zone_mgmt(). --Stefan] Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-05-10block: Consistently call bdrv_activate() outside coroutineKevin Wolf1-1/+9
Migration code can call bdrv_activate() in coroutine context, whereas other callers call it outside of coroutines. As it calls other code that is not supposed to run in coroutines, standardise on running outside of coroutines. This adds a no_co_wrapper to switch to the main loop before calling bdrv_activate(). Cc: qemu-stable@nongnu.org Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20230504115750.54437-3-kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-04-25block: don't acquire AioContext lock in bdrv_drain_all()Stefan Hajnoczi1-7/+1
There is no need for the AioContext lock in bdrv_drain_all() because nothing in AIO_WAIT_WHILE() needs the lock and the condition is atomic. AIO_WAIT_WHILE_UNLOCKED() has no use for the AioContext parameter other than performing a check that is nowadays already done by the GLOBAL_STATE_CODE()/IO_CODE() macros. Set the ctx argument to NULL here to help us keep track of all converted callers. Eventually all callers will have been converted and then the argument can be dropped entirely. Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20230309190855.414275-2-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
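Roughly the shape of the conversion (a sketch modeled on the drain code in block-backend.c, not the literal diff):

    /* Before: required the caller to hold the AioContext lock. */
    AIO_WAIT_WHILE(blk_get_aio_context(blk), qatomic_read(&blk->in_flight) > 0);

    /* After: the condition is atomic and nothing in the macro needs the lock;
     * ctx is set to NULL to mark the caller as converted. */
    AIO_WAIT_WHILE_UNLOCKED(NULL, qatomic_read(&blk->in_flight) > 0);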
2023-04-25block: protect BlockBackend->queued_requests with a lockStefan Hajnoczi1-2/+16
The CoQueue API offers thread-safety via the lock argument that qemu_co_queue_wait() and qemu_co_enter_next() take. BlockBackend currently does not make use of the lock argument. This means that multiple threads submitting I/O requests can corrupt the CoQueue's QSIMPLEQ. Add a QemuMutex and pass it to CoQueue APIs so that the queue is protected. While we're at it, also assert that the queue is empty when the BlockBackend is deleted. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Hanna Czenczek <hreitz@redhat.com> Message-Id: <20230307210427.269214-4-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
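The protected-queue pattern, roughly (field names follow the commit message; the wait condition is hypothetical, and this runs in coroutine context):

    qemu_mutex_lock(&blk->queued_requests_lock);
    while (blk_should_queue_request(blk)) {    /* hypothetical condition */
        /* releases the mutex while waiting and re-acquires it on wakeup */
        qemu_co_queue_wait(&blk->queued_requests, &blk->queued_requests_lock);
    }
    qemu_mutex_unlock(&blk->queued_requests_lock);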
2023-04-25block: make BlockBackend->disable_request_queuing atomicStefan Hajnoczi1-3/+4
This field is accessed by multiple threads without a lock. Use explicit qatomic_read()/qatomic_set() calls. There is no need for acquire/release because blk_set_disable_request_queuing() doesn't provide any guarantees (it helps that it's used at BlockBackend creation time and not when there is I/O in flight). Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Hanna Czenczek <hreitz@redhat.com> Message-Id: <20230307210427.269214-3-stefanha@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-04-25block: make BlockBackend->quiesce_counter atomicStefan Hajnoczi1-7/+7
The main loop thread increments/decrements BlockBackend->quiesce_counter when drained sections begin/end. The counter is read in the I/O code path. Therefore this field is used to communicate between threads without a lock. Acquire/release are not necessary because the BlockBackend->in_flight counter already uses sequentially consistent accesses and running I/O requests hold that counter when blk_wait_while_drained() is called. qatomic_read() can be used. Use qatomic_fetch_inc()/qatomic_fetch_dec() for modifications even though sequentially consistent atomic accesses are not strictly required here. They are, however, nicer to read than multiple calls to qatomic_read() and qatomic_set(). Since beginning and ending drain is not a hot path the extra cost doesn't matter. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20230307210427.269214-2-stefanha@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
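In code terms the pattern looks roughly like this (sketch; blk_wait_while_drained() is the helper referenced above):

    /* main loop: drained section begins/ends */
    qatomic_fetch_inc(&blk->quiesce_counter);
    /* ... drained section ... */
    qatomic_fetch_dec(&blk->quiesce_counter);

    /* I/O path, possibly in another thread */
    if (qatomic_read(&blk->quiesce_counter)) {
        blk_wait_while_drained(blk);
    }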
2023-04-20block-backend: remove qatomic_mb_read()Paolo Bonzini1-2/+2
There is already a barrier in AIO_WAIT_WHILE_INTERNAL(), thus the qatomic_mb_read() is not adding anything. Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-04-11block, block-backend: write some hot coroutine wrappers by handPaolo Bonzini1-0/+27
The introduction of the graph lock is causing blk_get_geometry, a hot function used in the I/O path, to create a coroutine. However, the only part that really needs to run in coroutine context is the call to bdrv_co_refresh_total_sectors, which in turn only happens in the rare case of host CD-ROM devices. So, write by hand the three wrappers on the path from blk_co_get_geometry to bdrv_co_refresh_total_sectors, so that the coroutine wrapper is only created if bdrv_nb_sectors actually calls bdrv_refresh_total_sectors. Reported-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20230407153303.391121-9-pbonzini@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-04-11block-backend: ignore inserted state in blk_co_nb_sectorsPaolo Bonzini1-15/+8
All callers of blk_co_nb_sectors (and blk_nb_sectors) are able to handle a non-inserted CD-ROM as a zero-length file, they do not need to raise an error. Not using blk_co_is_available() aligns the function with blk_co_get_geometry(), which becomes a simple wrapper for blk_co_nb_sectors(). It will also make it possible to skip the creation of a coroutine in the (common) case where bs->bl.has_variable_length is false. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20230407153303.391121-8-pbonzini@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-04-11block-backend: inline bdrv_co_get_geometryPaolo Bonzini1-2/+6
bdrv_co_get_geometry is only used in blk_co_get_geometry. Inline it in there, to reduce the number of wrappers for bs->total_sectors. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20230407153303.391121-7-pbonzini@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-03-27block/export: Fix graph locking in blk_get_geometry() callKevin Wolf1-2/+5
blk_get_geometry() eventually calls bdrv_nb_sectors(), which is a co_wrapper_mixed_bdrv_rdlock. This means that when it is called from coroutine context, it already assumes the graph to be locked. However, virtio_blk_sect_range_ok() in block/export/virtio-blk-handler.c (used by vhost-user-blk and VDUSE exports) runs in a coroutine, but doesn't take the graph lock - blk_*() functions are generally expected to do that internally. This causes an assertion failure when accessing an export for the first time if it runs in an iothread. This is an example of the crash:

$ ./storage-daemon/qemu-storage-daemon --object iothread,id=th0 --blockdev file,filename=/home/kwolf/images/hd.img,node-name=disk --export vhost-user-blk,addr.type=unix,addr.path=/tmp/vhost.sock,node-name=disk,id=exp0,iothread=th0
qemu-storage-daemon: ../block/graph-lock.c:268: void assert_bdrv_graph_readable(void): Assertion `qemu_in_main_thread() || reader_count()' failed.

(gdb) bt
#0  0x00007ffff6eafe5c in __pthread_kill_implementation () from /lib64/libc.so.6
#1  0x00007ffff6e5fa76 in raise () from /lib64/libc.so.6
#2  0x00007ffff6e497fc in abort () from /lib64/libc.so.6
#3  0x00007ffff6e4971b in __assert_fail_base.cold () from /lib64/libc.so.6
#4  0x00007ffff6e58656 in __assert_fail () from /lib64/libc.so.6
#5  0x00005555556337a3 in assert_bdrv_graph_readable () at ../block/graph-lock.c:268
#6  0x00005555555fd5a2 in bdrv_co_nb_sectors (bs=0x5555564c5ef0) at ../block.c:5847
#7  0x00005555555ee949 in bdrv_nb_sectors (bs=0x5555564c5ef0) at block/block-gen.c:256
#8  0x00005555555fd6b9 in bdrv_get_geometry (bs=0x5555564c5ef0, nb_sectors_ptr=0x7fffef7fedd0) at ../block.c:5884
#9  0x000055555562ad6d in blk_get_geometry (blk=0x5555564cb200, nb_sectors_ptr=0x7fffef7fedd0) at ../block/block-backend.c:1624
#10 0x00005555555ddb74 in virtio_blk_sect_range_ok (blk=0x5555564cb200, block_size=512, sector=0, size=512) at ../block/export/virtio-blk-handler.c:44
#11 0x00005555555dd80d in virtio_blk_process_req (handler=0x5555564cbb98, in_iov=0x7fffe8003830, out_iov=0x7fffe8003860, in_num=1, out_num=0) at ../block/export/virtio-blk-handler.c:189
#12 0x00005555555dd546 in vu_blk_virtio_process_req (opaque=0x7fffe8003800) at ../block/export/vhost-user-blk-server.c:66
#13 0x00005555557bf4a1 in coroutine_trampoline (i0=-402635264, i1=32767) at ../util/coroutine-ucontext.c:177
#14 0x00007ffff6e75c20 in ?? () from /lib64/libc.so.6
#15 0x00007fffefffa870 in ?? ()
#16 0x0000000000000000 in ?? ()

Fix this by creating a new blk_co_get_geometry() that takes the lock, and changing blk_get_geometry() to be a co_wrapper_mixed around it. To make the resulting code cleaner, virtio-blk-handler.c can directly call the coroutine version now (though that wouldn't be necessary for fixing the bug, taking the lock in blk_co_get_geometry() is what fixes it). Fixes: 8ab8140a04cf771d63e9754d6ba6c1e676bfe507 Reported-by: Lukáš Doktor <ldoktor@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230327113959.60071-1-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-23block: Mark bdrv_co_eject/lock_medium() and callers GRAPH_RDLOCKKevin Wolf1-0/+2
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_co_eject() and bdrv_co_lock_medium() need to hold a reader lock for the graph. Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230203152202.49054-20-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-23block: Mark bdrv_co_is_inserted() and callers GRAPH_RDLOCKEmanuele Giuseppe Esposito1-11/+14
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_co_is_inserted() need to hold a reader lock for the graph. blk_is_inserted() is done as a co_wrapper_mixed_bdrv_rdlock (unlike most other blk_* functions) because it is called a lot from other blk_co_*() functions that already hold the lock. These calls go through blk_is_available(), which becomes a co_wrapper_mixed_bdrv_rdlock, too, for the same reason. Functions that run in a coroutine and can call blk_co_is_available() directly are changed to do so, which results in better TSA coverage. Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230203152202.49054-19-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
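In header terms, the declaration style described above looks roughly like this (a sketch; see the real block-backend headers for the authoritative prototypes):

    /* Coroutine implementation, annotated with the lock requirement... */
    bool coroutine_fn GRAPH_RDLOCK blk_co_is_inserted(BlockBackend *blk);

    /* ...and a generated wrapper that takes the reader lock itself and can be
     * called from both coroutine and non-coroutine context. */
    bool co_wrapper_mixed_bdrv_rdlock blk_is_inserted(BlockBackend *blk);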
2023-02-23block: Mark bdrv_co_io_(un)plug() and callers GRAPH_RDLOCKKevin Wolf1-0/+2
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_co_io_plug() and bdrv_co_io_unplug() need to hold a reader lock for the graph. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230203152202.49054-18-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-23block: Mark bdrv_co_copy_range() GRAPH_RDLOCKEmanuele Giuseppe Esposito1-0/+2
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_co_copy_range() need to hold a reader lock for the graph. Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230203152202.49054-15-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-23block: Mark public read/write functions GRAPH_RDLOCKKevin Wolf1-0/+2
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_co_pread*/pwrite*() need to hold a reader lock for the graph. For some places, we know that they will hold the lock, but we don't have the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock() with a FIXME comment. These places will be removed once everything is properly annotated. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230203152202.49054-12-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-23block: Mark bdrv_co_pdiscard() and callers GRAPH_RDLOCKEmanuele Giuseppe Esposito1-0/+1
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_co_pdiscard() need to hold a reader lock for the graph. For some places, we know that they will hold the lock, but we don't have the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock() with a FIXME comment. These places will be removed once everything is properly annotated. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230203152202.49054-9-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-23block: Mark bdrv_co_flush() and callers GRAPH_RDLOCKEmanuele Giuseppe Esposito1-1/+2
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_co_flush() need to hold a reader lock for the graph. For some places, we know that they will hold the lock, but we don't have the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock() with a FIXME comment. These places will be removed once everything is properly annotated. Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230203152202.49054-8-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-23block: Mark bdrv_co_ioctl() and callers GRAPH_RDLOCKKevin Wolf1-0/+1
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_co_ioctl() need to hold a reader lock for the graph. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230203152202.49054-6-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-23block: Mark bdrv_co_block_status() and callers GRAPH_RDLOCKKevin Wolf1-0/+2
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_co_block_status() need to hold a reader lock for the graph. For some places, we know that they will hold the lock, but we don't have the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock() with a FIXME comment. These places will be removed once everything is properly annotated. Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230203152202.49054-5-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-23block: Mark bdrv_co_truncate() and callers GRAPH_RDLOCKKevin Wolf1-0/+1
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_co_truncate() need to hold a reader lock for the graph. For some places, we know that they will hold the lock, but we don't have the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock() with a FIXME comment. These places will be removed once everything is properly annotated. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230203152202.49054-4-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-01block: Convert bdrv_lock_medium() to co_wrapperEmanuele Giuseppe Esposito1-2/+2
bdrv_lock_medium() is categorized as an I/O function, and it currently doesn't run in a coroutine. We should let it take a graph rdlock since it traverses the block nodes graph, which however is only possible in a coroutine. The only caller of this function is blk_lock_medium(). Therefore make blk_lock_medium() a co_wrapper, so that it always creates a new coroutine, and then make bdrv_lock_medium() a coroutine_fn where the lock can be taken. Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230113204212.359076-13-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
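The conversion pattern in header terms (a sketch; exact prototypes per the real headers):

    /* The implementation becomes a coroutine_fn that can take the graph rdlock... */
    void coroutine_fn blk_co_lock_medium(BlockBackend *blk, bool locked);

    /* ...and the old name is generated by scripts/block-coroutine-wrapper.py,
     * always entering a new coroutine. */
    void co_wrapper blk_lock_medium(BlockBackend *blk, bool locked);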
2023-02-01block: Convert bdrv_eject() to co_wrapperEmanuele Giuseppe Esposito1-2/+2
bdrv_eject() is categorized as an I/O function, and it currently doesn't run in a coroutine. We should let it take a graph rdlock since it traverses the block nodes graph, which however is only possible in a coroutine. The only caller of this function is blk_eject(). Therefore make blk_eject() a co_wrapper, so that it always creates a new coroutine, and then make bdrv_eject() coroutine_fn where the lock can be taken. Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230113204212.359076-12-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-01block: use bdrv_co_refresh_total_sectors when possibleEmanuele Giuseppe Esposito1-3/+3
In some places we are sure we are always running in a coroutine, so there is no point in calling the generated co_wrapper; call the _co_ function directly instead. Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230113204212.359076-9-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-01block-backend: use bdrv_getlength instead of blk_getlengthEmanuele Giuseppe Esposito1-1/+1
The only difference is that blk_ checks if the block is available, but this check is already performed above in blk_check_byte_request(). This is in preparation for the graph rdlock, which will be taken by both the callers of blk_check_byte_request() and blk_getlength(). Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230113204212.359076-8-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-01block: Convert bdrv_refresh_total_sectors() to co_wrapper_mixedEmanuele Giuseppe Esposito1-4/+6
BlockDriver->bdrv_getlength is categorized as an I/O callback, and it currently doesn't run in a coroutine. We should let it take a graph rdlock since the callback traverses the block nodes graph, which however is only possible in a coroutine. Therefore turn it into a co_wrapper to move the actual function into a coroutine where the lock can be taken. Because this function now creates a new coroutine and polls, we need to take the AioContext lock where it is missing, simply because co_wrapper internally calls AIO_WAIT_WHILE, which expects to release the AioContext lock. This is especially messy when a co_wrapper creates a coroutine and polls in bdrv_open_driver, because this function has so many callers in so many contexts that it can easily lead to deadlocks. Therefore the new rule for bdrv_open_driver is that the caller must always hold the AioContext lock of the given bs (except if it is a coroutine), because the function calls bdrv_refresh_total_sectors() which is now a co_wrapper. Once the rwlock work is finalized and the lock is placed everywhere it needs to be, we will poll using AIO_WAIT_WHILE_UNLOCKED and remove the AioContext lock. Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230113204212.359076-7-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-01block: Convert bdrv_is_inserted() to co_wrapperEmanuele Giuseppe Esposito1-2/+2
bdrv_is_inserted() is categorized as an I/O function, and it currently doesn't run in a coroutine. We should let it take a graph rdlock since it traverses the block nodes graph, which however is only possible in a coroutine. Therefore turn it into a co_wrapper to move the actual function into a coroutine where the lock can be taken. At the same time, also add blk_is_inserted as a co_wrapper_mixed, since it is called in both coroutine and non-coroutine contexts. Because this function now creates a new coroutine and polls, we need to take the AioContext lock where it is missing, simply because c_w_mixed_bdrv_rdlock internally calls AIO_WAIT_WHILE, which expects to release the AioContext lock. Once the rwlock work is finalized and the lock is placed everywhere it needs to be, we will poll using AIO_WAIT_WHILE_UNLOCKED and remove the AioContext lock. Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230113204212.359076-5-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-02-01block: Convert bdrv_io_unplug() to co_wrapperEmanuele Giuseppe Esposito1-2/+2
BlockDriver->bdrv_io_unplug is categorized as IO callback, and it currently doesn't run in a coroutine. We should let it take a graph rdlock since the callback traverses the block nodes graph, which however is only possible in a coroutine. The only caller of this function is blk_io_unplug(), therefore make blk_io_unplug() a co_wrapper, so that we're always running in a coroutine where the lock can be taken. Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230113204212.359076-4-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>