path: root/block/graph-lock.c
2023-10-12  block: Add assertion for bdrv_graph_wrlock()  (Kevin Wolf, 1 file changed, -1/+2)
bdrv_graph_wrlock() can't run in a coroutine (because it polls) and requires holding the BQL. We already have GLOBAL_STATE_CODE() to assert the latter. Assert the former as well and add a no_coroutine_fn marker.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-23-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
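For illustration, a minimal sketch of the pattern this commit describes, assuming a simplified function body; it is not the exact QEMU source:

    /* Hedged sketch: the graph write lock polls, so it must not run in a
     * coroutine, and graph changes are global state, so it needs the BQL. */
    void no_coroutine_fn bdrv_graph_wrlock(BlockDriverState *bs)
    {
        GLOBAL_STATE_CODE();            /* asserts: main loop code under the BQL */
        assert(!qemu_in_coroutine());   /* asserts: polling from a coroutine would hang */

        /* ... block new readers and poll until current readers are done ... */
    }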
2023-09-20  block: Introduce bdrv_schedule_unref()  (Kevin Wolf, 1 file changed, -7/+19)
bdrv_unref() is called by a lot of places that need to hold the graph lock (it naturally happens in the context of operations that change the graph). However, bdrv_unref() takes the graph writer lock internally, so it can't actually be called while already holding a graph lock without causing a deadlock.

bdrv_unref() also can't just become GRAPH_WRLOCK because it drains the node before closing it, and draining requires that the graph is unlocked.

The solution is to defer deleting the node until we don't hold the lock any more and draining is possible again.

Note that keeping images open for longer than necessary can create problems, too: You can't open an image again before it is really closed (if image locking didn't prevent it, it would cause corruption). Reopening an image immediately happens at least during bdrv_open() and bdrv_co_create().

In order to solve this problem, make sure to run the deferred unref in bdrv_graph_wrunlock(), i.e. the first possible place where we can drain again. This is also why bdrv_schedule_unref() is marked GRAPH_WRLOCK.

The output of iotest 051 is updated because the additional polling changes the order of HMP output, resulting in a new "(qemu)" prompt in the test output that was previously on a separate line and filtered out.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230911094620.45040-6-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
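A rough sketch of the deferral described above; the queue name and the exact mechanics are illustrative assumptions, not the actual QEMU implementation (the real bdrv_schedule_unref() is additionally marked GRAPH_WRLOCK):

    /* Hypothetical global list of nodes whose unref has been deferred. */
    static GSList *deferred_unrefs;

    void bdrv_schedule_unref(BlockDriverState *bs)
    {
        if (!bs) {
            return;
        }
        /* Only remember the node: dropping the reference could close and
         * drain it, which is forbidden while the graph is write-locked. */
        deferred_unrefs = g_slist_prepend(deferred_unrefs, bs);
    }

    void bdrv_graph_wrunlock(void)
    {
        /* ... release the writer lock ... */

        /* The lock is gone, so draining is possible again and the deferred
         * references can finally be dropped. */
        while (deferred_unrefs) {
            BlockDriverState *bs = deferred_unrefs->data;
            deferred_unrefs = g_slist_delete_link(deferred_unrefs, deferred_unrefs);
            bdrv_unref(bs);
        }
    }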
2023-09-08  block: spelling fixes  (Michael Tokarev, 1 file changed, -1/+1)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>
2023-06-28  Revert "graph-lock: Disable locking for now"  (Kevin Wolf, 1 file changed, -26/+0)
Now that bdrv_graph_wrlock() temporarily drops the AioContext lock that its caller holds, it can poll without causing deadlocks. We can now re-enable graph locking.

This reverts commit ad128dff0bf4b6f971d05eb4335a627883a19c1d.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230605085711.21261-12-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-06-28  graph-lock: Unlock the AioContext while polling  (Kevin Wolf, 1 file changed, -1/+22)
If the caller keeps the AioContext lock for a block node in an iothread, polling in bdrv_graph_wrlock() deadlocks if the condition isn't fulfilled immediately.

Now that all callers make sure to actually have the AioContext locked when they call bdrv_replace_child_noperm() like they should, we can change bdrv_graph_wrlock() to take a BlockDriverState whose AioContext lock the caller holds (NULL if it doesn't) and unlock it temporarily while polling.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230605085711.21261-11-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
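A simplified sketch of the behaviour this commit describes, assuming the caller holds the AioContext lock of the passed node (not the exact QEMU code):

    void bdrv_graph_wrlock(BlockDriverState *bs)
    {
        AioContext *ctx = NULL;

        GLOBAL_STATE_CODE();

        /* If the caller holds the AioContext lock of @bs, drop it while we
         * poll so the iothread can make progress and fulfil the condition
         * we are waiting for. */
        if (bs) {
            ctx = bdrv_get_aio_context(bs);
            aio_context_release(ctx);
        }

        /* ... block new readers and poll until in-flight readers are done ... */

        if (ctx) {
            aio_context_acquire(ctx);   /* restore the caller's lock state */
        }
    }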
2023-05-19  graph-lock: Disable locking for now  (Kevin Wolf, 1 file changed, -0/+24)
In QEMU 8.0, we've been seeing deadlocks in bdrv_graph_wrlock(). They come from callers that hold an AioContext lock, which is not allowed during polling. In theory, we could temporarily release the lock, but callers are inconsistent about whether they hold a lock, and if they do, some are also confused about which one they hold. While all of this is fixable, it's not trivial, and the best course of action for 8.0.1 is probably just disabling the graph locking code temporarily.

We don't currently rely on graph locking yet. It is supposed to replace the AioContext lock eventually to enable multiqueue support, but as long as we still have the AioContext lock, it is sufficient without the graph lock. Once the AioContext lock goes away, the deadlock doesn't exist any more either and this commit can be reverted. (Of course, it can also be reverted while the AioContext lock still exists if the callers have been fixed.)

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230517152834.277483-2-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
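In practice, "disabling" a lock like this can be as small as turning the lock functions into no-ops; a hedged illustration of the idea, not the literal patch:

    void coroutine_fn bdrv_graph_co_rdlock(void)
    {
        /* Locking temporarily disabled for 8.0.x: the AioContext lock still
         * serializes graph access, so doing nothing here is safe for now. */
    }

    void coroutine_fn bdrv_graph_co_rdunlock(void)
    {
        /* See bdrv_graph_co_rdlock(). */
    }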
2023-05-19  graph-lock: Honour read locks even in the main thread  (Kevin Wolf, 1 file changed, -10/+0)
There are some conditions under which we don't actually need to do anything for taking a reader lock: Writing the graph is only possible from the main context while holding the BQL. So if a reader is running in the main context under the BQL and knows that it won't be interrupted until the next writer runs, we don't actually need to do anything.

This is the case if the reader code neither has a nested event loop (this is forbidden anyway while you hold the lock) nor is a coroutine (because a writer could run when the coroutine has yielded).

These conditions are exactly what bdrv_graph_rdlock_main_loop() asserts. They are not fulfilled in bdrv_graph_co_rdlock(), which always runs in a coroutine.

This deletes the shortcuts in bdrv_graph_co_rdlock() that skip taking the reader lock in the main thread.

Reported-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230510203601.418015-9-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
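A sketch of the distinction drawn above, with simplified bodies that are not the exact QEMU source: the main-loop variant only asserts that no writer can interleave, while the coroutine variant really takes the reader lock.

    void bdrv_graph_rdlock_main_loop(void)
    {
        GLOBAL_STATE_CODE();             /* main loop, under the BQL */
        assert(!qemu_in_coroutine());    /* a yielding coroutine could let a writer run */
        /* Nothing else needed: a writer also requires the BQL and the main
         * loop, so it cannot run before we return to the event loop. */
    }

    void coroutine_fn bdrv_graph_co_rdlock(void)
    {
        /* Always register as a reader, even in the main thread; a writer
         * waits until the combined reader count drops back to zero. */
        /* ... increment this AioContext's reader count, or wait in the
         * CoQueue while a writer holds the lock ... */
    }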
2023-05-10  block: compile out assert_bdrv_graph_readable() by default  (Stefan Hajnoczi, 1 file changed, -0/+3)
reader_count() is a performance bottleneck because the global aio_context_list_lock mutex causes thread contention. Put this debugging assertion behind a new ./configure --enable-debug-graph-lock option and disable it by default.

The --enable-debug-graph-lock option is also enabled by the more general --enable-debug option.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230501173443.153062-1-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
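The usual way to make such an assertion opt-in is a config macro guard; a sketch assuming the configure option maps to a CONFIG_DEBUG_GRAPH_LOCK define (the exact macro name is an assumption):

    void assert_bdrv_graph_readable(void)
    {
        /* reader_count() walks every AioContext under aio_context_list_lock,
         * which is too contended to pay for in production builds. */
    #ifdef CONFIG_DEBUG_GRAPH_LOCK
        assert(qemu_in_main_thread() || reader_count());
    #endif
    }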
2023-04-25  block: convert bdrv_graph_wrlock() to AIO_WAIT_WHILE_UNLOCKED()  (Stefan Hajnoczi, 1 file changed, -1/+1)
The following conversion is safe and does not change behavior:

         GLOBAL_STATE_CODE();
         ...
     -   AIO_WAIT_WHILE(qemu_get_aio_context(), ...);
     +   AIO_WAIT_WHILE_UNLOCKED(NULL, ...);

Since we're in GLOBAL_STATE_CODE(), qemu_get_aio_context() is our home thread's AioContext. Thus AIO_WAIT_WHILE() does not unlock the AioContext:

    if (ctx_ && in_aio_context_home_thread(ctx_)) {  \
        while ((cond)) {                             \
            aio_poll(ctx_, true);                    \
            waited_ = true;                          \
        }                                            \

And that means AIO_WAIT_WHILE_UNLOCKED(NULL, ...) can be substituted.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230309190855.414275-4-stefanha@redhat.com>
Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2022-12-15  graph-lock: TSA annotations for lock/unlock functions  (Kevin Wolf, 1 file changed, -0/+3)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20221207131838.239125-15-kwolf@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2022-12-15  block: assert that graph read and writes are performed correctly  (Emanuele Giuseppe Esposito, 1 file changed, -0/+11)
Remove the old assert_bdrv_graph_writable, and replace it with the new version using the graph-lock API. See the function documentation for more information.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20221207131838.239125-14-kwolf@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
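A simplified sketch of what such graph-lock based assertions can look like; the exact checks in the tree may differ:

    void assert_bdrv_graph_readable(void)
    {
        /* Reading the graph is fine from the main loop (a writer would need
         * it too) or whenever some reader lock is currently held. */
        assert(qemu_in_main_thread() || reader_count());
    }

    void assert_bdrv_graph_writable(void)
    {
        /* Writing requires the main loop and no reader in flight. */
        assert(qemu_in_main_thread());
        assert(reader_count() == 0);
    }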
2022-12-15  graph-lock: Introduce a lock to protect block graph operations  (Paolo Bonzini, 1 file changed, -0/+261)
Block layer graph operations are always run under BQL in the main loop. This is proved by the assertion qemu_in_main_thread() and its wrapper macro GLOBAL_STATE_CODE. However, there are also concurrent coroutines running in other iothreads that always try to traverse the graph. Currently this is protected (among various other things) by the AioContext lock, but once this is removed, we need to make sure that reads do not happen while modifying the graph.

We distinguish between the writer (main loop, under BQL) that modifies the graph, and readers (all other coroutines running in various AioContexts) that go through the graph edges, reading ->parents and ->children.

The writer (main loop) has "exclusive" access, so it first waits for any current read to finish, and then prevents incoming ones from entering while it has the exclusive access.

The readers (coroutines in multiple AioContexts) are free to access the graph as long as the writer is not modifying the graph. In case it is, they go in a CoQueue and sleep until the writer is done.

If a coroutine changes AioContext, the counters in the original and new AioContext are left intact, since the writer does not care where the reader is, but only whether there is one. As a result, some AioContexts might have a negative reader count, to balance the positive count of the AioContext that took the lock. This also means that when an AioContext is deleted it may have a nonzero reader count. In that case we transfer the count to a global shared counter so that the writer is always aware of all readers.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20221207131838.239125-3-kwolf@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
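A heavily simplified sketch of the writer side described above, assuming illustrative names (has_writer, reader_queue) and omitting memory-ordering details; it is not the exact QEMU implementation:

    void bdrv_graph_wrlock(void)
    {
        GLOBAL_STATE_CODE();            /* writers only run in the main loop, under BQL */

        qatomic_set(&has_writer, 1);    /* stop new readers from entering */

        /* Poll until every in-flight reader is gone; reader_count() sums the
         * per-AioContext counts plus the counts inherited from deleted ones. */
        AIO_WAIT_WHILE(qemu_get_aio_context(), reader_count() >= 1);
    }

    void bdrv_graph_wrunlock(void)
    {
        GLOBAL_STATE_CODE();
        QEMU_LOCK_GUARD(&aio_context_list_lock);

        qatomic_set(&has_writer, 0);    /* let new readers in again */

        /* Wake every reader coroutine that went to sleep in the CoQueue
         * while the writer held the lock. */
        qemu_co_enter_all(&reader_queue, &aio_context_list_lock);
    }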