path: root/blockjob.c
2019-08-16  Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging  (Peter Maydell; 1 file, -0/+3)
Block layer patches:

- file-posix: Fix O_DIRECT alignment detection
- Fixes for concurrent block jobs
- block-backend: Queue requests while drained (fix IDE vs. job crashes)
- qemu-img convert: Deprecate using -n and -o together
- iotests: Migration tests with filter nodes
- iotests: More media change tests

# gpg: Signature made Fri 16 Aug 2019 10:29:18 BST
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

* remotes/kevin/tags/for-upstream:
  file-posix: Handle undetectable alignment
  qemu-img convert: Deprecate using -n and -o together
  block-backend: Queue requests while drained
  mirror: Keep mirror_top_bs drained after dropping permissions
  block: Remove blk_pread_unthrottled()
  iotests: Add test for concurrent stream/commit
  tests: Test mid-drain bdrv_replace_child_noperm()
  tests: Test polling in bdrv_drop_intermediate()
  block: Reduce (un)drains when replacing a child
  block: Keep subtree drained in drop_intermediate
  block: Simplify bdrv_filter_default_perms()
  iotests: Test migration with all kinds of filter nodes
  iotests: Move migration helpers to iotests.py
  iotests/118: Add -blockdev based tests
  iotests/118: Create test classes dynamically
  iotests/118: Test media change for scsi-cd

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  Include qemu/main-loop.h less  (Markus Armbruster; 1 file, -0/+1)
In my "build everything" tree, changing qemu/main-loop.h triggers a recompile of some 5600 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). It includes block/aio.h, which in turn includes qemu/event_notifier.h, qemu/notify.h, qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h, qemu/thread.h, qemu/timer.h, and a few more.

Include qemu/main-loop.h only where it's needed. Touching it now recompiles only some 1700 objects. For block/aio.h and qemu/event_notifier.h, these numbers drop from 5600 to 2800. For the others, they shrink only slightly.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-21-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2019-08-16  block-backend: Queue requests while drained  (Kevin Wolf; 1 file, -0/+3)
This fixes devices like IDE that can still start new requests from I/O handlers in the CPU thread while the block backend is drained.

The basic assumption is that in a drain section, no new requests should be allowed through a BlockBackend (blk_drained_begin/end don't exist, we get drain sections only on the node level). However, there are two special cases where requests should not be queued:

1. Block jobs: We already make sure that block jobs are paused in a drain section, so they won't start new requests. However, if the drain_begin is called on the job's BlockBackend first, it can happen that we deadlock because the job stays busy until it reaches a pause point - which it can't if its requests aren't processed any more.

   The proper solution here would be to make all requests through the job's filter node instead of using a BlockBackend. For now, just disabling request queuing on the job BlockBackend is simpler.

2. In test cases where making requests through bdrv_* would be cumbersome because we'd need a BdrvChild. As we already got the functionality to disable request queuing from 1., use it in tests, too, for convenience.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
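A minimal sketch of the queuing idea described in this commit, assuming the BlockBackend fields named in the series (quiesce_counter, a queued_requests CoQueue, a disable_request_queuing flag) and the internal blk_inc/dec_in_flight helpers of block-backend.c; an illustration, not the literal patch:

    /* Called at the start of a request: if the BlockBackend is currently
     * drained, park the requesting coroutine until the drain section ends. */
    static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
    {
        if (blk->quiesce_counter && !blk->disable_request_queuing) {
            blk_dec_in_flight(blk);                        /* don't block the drain itself */
            qemu_co_queue_wait(&blk->queued_requests, NULL);
            blk_inc_in_flight(blk);
        }
    }

Block jobs and the affected tests then simply set disable_request_queuing on their BlockBackend, as explained above.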
2019-07-19  block: Do not poll in bdrv_do_drained_end()  (Max Reitz; 1 file, -1/+1)
We should never poll anywhere in bdrv_do_drained_end() (including its recursive callees like bdrv_drain_invoke()), because it does not cope well with graph changes. In fact, it has been written based on the postulation that no graph changes will happen in it.

Instead, the callers that want to poll must poll, i.e. all currently globally available wrappers: bdrv_drained_end(), bdrv_subtree_drained_end(), bdrv_unapply_subtree_drain(), and bdrv_drain_all_end(). Graph changes there do not matter. They can poll simply by passing a pointer to a drained_end_counter and wait until it reaches 0.

This patch also adds a non-polling global wrapper for bdrv_do_drained_end() that takes a drained_end_counter pointer. We need such a variant because now no function called anywhere from bdrv_do_drained_end() must poll. This includes BdrvChildRole.drained_end(), which already must not poll according to its interface documentation, but bdrv_child_cb_drained_end() just violates that by invoking bdrv_drained_end() (which does poll). Therefore, BdrvChildRole.drained_end() must take a *drained_end_counter parameter, which bdrv_child_cb_drained_end() can pass on to the new bdrv_drained_end_no_poll() function.

Note that we now have a pattern of all drained_end-related functions either polling or receiving a *drained_end_counter to let the caller poll based on that.

A problem with a single poll loop is that when the drained section in bdrv_set_aio_context_ignore() ends, some nodes in the subgraph may be in the old contexts, while others are in the new context already. To let the collective poll in bdrv_drained_end() work correctly, we must not hold a lock to the old context, so that the old context can make progress in case it is different from the current context. (In the process, remove the comment saying that the current context is always the old context, because it is wrong.)

In all other places, all nodes in a subtree must be in the same context, so we can just poll that. The exception of course is bdrv_drain_all_end(), but that always runs in the main context, so we can just poll NULL (like bdrv_drain_all_begin() does).

Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
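A sketch of the polling pattern described above (parameter lists approximate): the non-polling variant only passes the counter through, while the polling wrapper waits for it to drop back to zero:

    /* Non-polling variant: asynchronous .drained_end() callbacks increment
     * *drained_end_counter and decrement it again once they have settled. */
    void bdrv_drained_end_no_poll(BlockDriverState *bs, int *drained_end_counter)
    {
        bdrv_do_drained_end(bs, false, NULL, false, drained_end_counter);
    }

    /* Polling wrapper: only safe where graph changes do not matter. */
    void bdrv_drained_end(BlockDriverState *bs)
    {
        int drained_end_counter = 0;

        bdrv_do_drained_end(bs, false, NULL, false, &drained_end_counter);
        BDRV_POLL_WHILE(bs, atomic_read(&drained_end_counter) > 0);
    }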
2019-06-18  block: drop bs->job  (Vladimir Sementsov-Ogievskiy; 1 file, -8/+0)
Drop remaining users of bs->job:

1. Assertions, which are actually duplicated by assert(!bs->refcnt).

2. A trace point, which doesn't seem reason enough to change stream_start to return a BlockJob pointer.

3. Restricting creation of two jobs based on the same bs, which is a bad idea, because:

   3.1 Some jobs create filters to be their main node, so this check doesn't actually prevent creating a second job on the same real node (which will create another filter node) (but I hope it is restricted by other mechanisms).

   3.2 Even without bs->job we have two systems of permissions: op-blockers and BLK_PERM.

   3.3 We may want to run several jobs on one node one day.

And finally, drop the bs->job pointer itself. Hurrah!

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-06-18  blockdev: blockdev_mark_auto_del: drop usage of bs->job  (Vladimir Sementsov-Ogievskiy; 1 file, -0/+14)
We are going to remove the bs->job pointer. Drop its usage in blockdev_mark_auto_del: instead of looking at bs->job, check all jobs for references to bs.

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-06-12  Include qemu-common.h exactly where needed  (Markus Armbruster; 1 file, -1/+0)
No header includes qemu-common.h after this commit, as prescribed by qemu-common.h's file comment.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190523143508.25387-5-armbru@redhat.com>
[Rebased with conflicts resolved automatically, except for include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h target/unicore32/cpu.h target/xtensa/cpu.h; bsd-user/main.c and net/tap-bsd.c fixed up]
2019-06-04  block: Adjust AioContexts when attaching nodes  (Kevin Wolf; 1 file, -2/+8)
So far, we only made sure that updating the AioContext of a node affected the whole subtree. However, if a node is newly attached to a new parent, we also need to make sure that both the subtree of the node and the parent are in the same AioContext. This tries to move the new child node to the parent AioContext and returns an error if this isn't possible. BlockBackends now actually apply their AioContext to their root node. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-06-04  block: Add BlockBackend.ctx  (Kevin Wolf; 1 file, -1/+1)
This adds a new parameter to blk_new() which requires its callers to declare from which AioContext this BlockBackend is going to be used (or the locks of which AioContext need to be taken anyway). The given context is only stored and kept up to date when changing AioContexts. Actually applying the stored AioContext to the root node is saved for another commit. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-05-28  block: Make bdrv_root_attach_child() unref child_bs on failure  (Alberto Garcia; 1 file, -1/+1)
A consequence of the previous patch is that bdrv_attach_child() transfers the reference to child_bs from the caller to parent_bs, which will drop it on bdrv_close() or when someone calls bdrv_unref_child(). But this only happens when bdrv_attach_child() succeeds. If it fails then the caller is responsible for dropping the reference to child_bs. This patch makes bdrv_attach_child() take the reference also when there is an error, freeing the caller for having to do it. A similar situation happens with bdrv_root_attach_child(), so the changes on this patch affect both functions. Signed-off-by: Alberto Garcia <berto@igalia.com> Message-id: 20dfb3d9ccec559cdd1a9690146abad5d204a186.1557754872.git.berto@igalia.com [mreitz: Removed now superfluous BdrvChild * variable in bdrv_open_child()] Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-05-20  blockjob: Remove AioContext notifiers  (Kevin Wolf; 1 file, -41/+2)
The notifiers made sure that the job is quiesced and that the job->aio_context field is updated. The first part is unnecessary today since bdrv_set_aio_context_ignore() drains the block node, and this means drainig the block job, too. The second part can be done in the .set_aio_ctx callback of the block job BdrvChildRole. The notifiers were problematic because they poll the AioContext while the graph is in an inconsistent state with some nodes already in the new context, but others still in the old context. So removing the notifiers not only simplifies the code, but actually makes the code safer. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-05-20  blockjob: Propagate AioContext change to all job nodes  (Kevin Wolf; 1 file, -0/+34)
Block jobs require that all of the nodes the job is using are in the same AioContext. Therefore all BdrvChild objects of the job propagate .(can_)set_aio_context to all other job nodes, so that the switch is checked and performed consistently even if both nodes are in different subtrees. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-03-19  blockjob: fix user pause in block_job_error_action  (Vladimir Sementsov-Ogievskiy; 1 file, -3/+5)
A job (especially mirror) may call block_job_error_action several times before actually pausing if it has several in-flight requests. block_job_error_action will call job_pause more than once in this case, with the result that a following block-job-resume QMP command can't actually resume the job.

Fix it by not increasing the pause level in block_job_error_action if user_paused is already set.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
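The fix amounts to guarding the pause with user_paused, roughly like this (simplified excerpt from block_job_error_action()):

    if (action == BLOCK_ERROR_ACTION_STOP) {
        if (!job->job.user_paused) {
            job_pause(&job->job);
            /* Make the pause user visible; it will be resumed from QMP. */
            job->job.user_paused = true;
        }
        block_job_iostatus_set_err(job, error);
    }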
2018-09-25  block: Use a single global AioWait  (Kevin Wolf; 1 file, -12/+1)
When draining a block node, we recurse to its parent and for subtree drains also to its children. A single AIO_WAIT_WHILE() is then used to wait for bdrv_drain_poll() to become true, which depends on all of the nodes we recursed to. However, if the respective child or parent becomes quiescent and calls bdrv_wakeup(), only the AioWait of the child/parent is checked, while AIO_WAIT_WHILE() depends on the AioWait of the original node.

Fix this by using a single AioWait for all callers of AIO_WAIT_WHILE(). This may mean that the draining thread gets a few more unnecessary wakeups because an unrelated operation got completed, but we already wake it up when something _could_ have changed rather than only if it has certainly changed. Apart from that, drain is a slow path anyway. In theory it would be possible to use wakeups more selectively and still correctly, but the gains are likely not worth the additional complexity.

In fact, this patch is a nice simplification for some places in the code.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-09-25  blockjob: Lie better in child_job_drained_poll()  (Kevin Wolf; 1 file, -1/+1)
Block jobs claim in .drained_poll() that they are in a quiescent state as soon as job->deferred_to_main_loop is true. This is obviously wrong, they still have a completion BH to run. We only get away with this because commit 91af091f923 added an unconditional aio_poll(false) to the drain functions, but this is bypassing the regular drain mechanisms.

However, just removing this and telling that the job is still active doesn't work either: The completion callbacks themselves call drain functions (directly, or indirectly with bdrv_reopen), so they would deadlock then.

As a better lie, tell that the job is active as long as the BH is pending, but falsely call it quiescent from the point in the BH when the completion callback is called. At this point, nested drain calls won't deadlock because they ignore the job, and outer drains will wait for the job to really reach a quiescent state because the callback is already running.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
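A hedged sketch of the resulting poll condition (not the literal patch; it omits the per-driver drained_poll hook), assuming the BH marks the job as no longer busy just before invoking the completion callback, as the message above describes:

    static bool child_job_drained_poll(BdrvChild *c)
    {
        BlockJob *bjob = c->opaque;
        Job *job = &bjob->job;

        /* A paused or completed job has no pending requests to wait for. */
        if (!job->busy || job_is_completed(job)) {
            return false;
        }

        /* Otherwise assume it still has work in flight. */
        return true;
    }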
2018-09-25  blockjob: Wake up BDS when job becomes idle  (Kevin Wolf; 1 file, -0/+18)
In the context of draining a BDS, the .drained_poll callback of block jobs is called. If this returns true (i.e. there is still some activity pending), the drain operation may call aio_poll() with blocking=true to wait for completion. As soon as the pending activity is completed and the job finally arrives in a quiescent state (i.e. its coroutine either yields with busy=false or terminates), the block job must notify the aio_poll() loop to wake up, otherwise we get a deadlock if both are running in different threads. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-08-28  qapi: Drop qapi_event_send_FOO()'s Error ** argument  (Peter Xu; 1 file, -8/+5)
The generated qapi_event_send_FOO() take an Error ** argument. They can't actually fail, because all they do with the argument is passing it to functions that can't fail: the QObject output visitor, and the @qmp_emit callback, which is either monitor_qapi_event_queue() or event_test_emit(). Drop the argument, and pass &error_abort to the QObject output visitor and @qmp_emit instead. Suggested-by: Eric Blake <eblake@redhat.com> Suggested-by: Markus Armbruster <armbru@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20180815133747.25032-4-peterx@redhat.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> [Commit message rewritten, update to qapi-code-gen.txt corrected] Signed-off-by: Markus Armbruster <armbru@redhat.com>
2018-06-18  block: Really pause block jobs on drain  (Kevin Wolf; 1 file, -0/+23)
We already requested that block jobs be paused in .bdrv_drained_begin, but no guarantee was made that the job was actually inactive at the point where bdrv_drained_begin() returned. This introduces a new callback BdrvChildRole.bdrv_drained_poll() and uses it to make bdrv_drain_poll() consider block jobs using the node to be drained. For the test case to work as expected, we have to switch from block_job_sleep_ns() to qemu_co_sleep_ns() so that the test job is even considered active and must be waited for when draining the node. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-05-23  blockjob: Remove BlockJob.driver  (Kevin Wolf; 1 file, -7/+10)
BlockJob.driver is redundant with Job.driver and only used in very few places any more. Remove it. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-05-23  job: Move progress fields to Job  (Kevin Wolf; 1 file, -18/+8)
BlockJob has fields .offset and .len, which are actually misnomers today because they are no longer tied to block device sizes, but just progress counters. As such they make a lot of sense in generic Jobs. This patch moves the fields to Job and renames them to .progress_current and .progress_total to describe their function better. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
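For illustration, a hypothetical helper showing how a generic progress fraction can now be computed purely at the Job level with the renamed fields (job_progress_fraction() is invented for this sketch, not a QEMU API):

    /* Fraction of the job's work done so far, guarded against a zero total. */
    static double job_progress_fraction(const Job *job)
    {
        if (job->progress_total == 0) {
            return 0.0;
        }
        return (double)job->progress_current / (double)job->progress_total;
    }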
2018-05-23  job: Add job_transition_to_ready()  (Kevin Wolf; 1 file, -15/+18)
The transition to the READY state was still performed in the BlockJob layer, in the same function that sent the BLOCK_JOB_READY QMP event. This patch brings the state transition to the Job layer and implements the QMP event using a notifier called from the Job layer, like we already do for other events related to state transitions. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Add job_is_ready()  (Kevin Wolf; 1 file, -2/+1)
Instead of having a 'bool ready' in BlockJob, add a function that derives its value from the job status. At the same time, this fixes the behaviour to match what the QAPI documentation promises for query-block-job: 'true if the job may be completed'. When the ready flag was introduced in commit ef6dbf1e46e, the flag never had to be reset to match the description because after being ready, the jobs would immediately complete and disappear. Job transactions and manual job finalisation were introduced only later. With these changes, jobs may stay around even after having completed (and they are not ready to be completed a second time), however their patches forgot to reset the ready flag. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
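A simplified sketch of deriving the value from the job status (the real function enumerates every JobStatus value explicitly rather than falling through to a default):

    bool job_is_ready(Job *job)
    {
        switch (job->status) {
        case JOB_STATUS_READY:
            /* Matches the QAPI promise: true if the job may be completed. */
            return true;
        default:
            return false;
        }
    }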
2018-05-23  job: Add job_dismiss()  (Kevin Wolf; 1 file, -13/+0)
This moves block_job_dismiss() to the Job layer. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Add job_yield()  (Kevin Wolf; 1 file, -16/+0)
This moves block_job_yield() to the Job layer. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Move completion and cancellation to Job  (Kevin Wolf; 1 file, -76/+0)
This moves the top-level job completion and cancellation functions from BlockJob to Job. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-05-23  job: Move transactions to Job  (Kevin Wolf; 1 file, -233/+5)
This moves the logic that implements job transactions from BlockJob to Job. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Switch transactions to JobTxn  (Kevin Wolf; 1 file, -28/+32)
This doesn't actually move any transaction code to Job yet, but it renames the type for transactions from BlockJobTxn to JobTxn and makes them contain Jobs rather than BlockJobs.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Move job_finish_sync() to Job  (Kevin Wolf; 1 file, -46/+9)
block_job_finish_sync() doesn't contain anything block job specific any more, so it can be moved to Job. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Move .complete callback to Job  (Kevin Wolf; 1 file, -18/+5)
This moves the .complete callback that tells a READY job to complete from BlockJobDriver to JobDriver. The wrapper function job_complete() doesn't require anything block job specific any more and can be moved to Job. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Add job_drain()  (Kevin Wolf; 1 file, -10/+10)
block_job_drain() contains a blk_drain() call which cannot be moved to Job, so add a new JobDriver callback JobDriver.drain which has a common implementation for all BlockJobs. In addition to this we keep the existing BlockJobDriver.drain callback that is called by the common drain implementation for all block jobs. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Convert block_job_cancel_async() to Job  (Kevin Wolf; 1 file, -12/+13)
block_job_cancel_async() did two things that were still block job specific:

* Setting job->force. This field makes sense on the Job level, so we can just move it. While at it, rename it to job->force_cancel to make its purpose more obvious.

* Resetting the I/O status. This can't be moved because generic Jobs don't have an I/O status. What the function really implements is a user resume, except without entering the coroutine. Consequently, it makes sense to call the .user_resume driver callback here, which already resets the I/O status.

The old block_job_cancel_async() has two separate if statements that check job->iostatus != BLOCK_DEVICE_IO_STATUS_OK and job->user_paused. However, the former condition always implies the latter (as is asserted in block_job_iostatus_reset()), so replacing the explicit call of block_job_iostatus_reset() on the former condition with the .user_resume callback on the latter condition is equivalent and doesn't need to access any BlockJob specific state.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
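A simplified sketch of the converted function as described above (locking omitted; field and callback names as introduced by this series):

    static void job_cancel_async(Job *job, bool force)
    {
        if (job->user_paused) {
            /* A user resume, but without entering the coroutine; for block
             * jobs .user_resume also resets the I/O status. */
            if (job->driver->user_resume) {
                job->driver->user_resume(job);
            }
            job->user_paused = false;
            assert(job->pause_count > 0);
            job->pause_count--;
        }
        job->cancelled = true;
        job->force_cancel |= force;
    }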
2018-05-23  job: Move single job finalisation to Job  (Kevin Wolf; 1 file, -117/+25)
This moves the finalisation of a single job from BlockJob to Job. Because some parts of this code depend on job transactions, and job transactions call this code, we introduce some temporary calls from Job functions to BlockJob ones. This will be fixed once transactions move to Job, too.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Add job_event_*()  (Kevin Wolf; 1 file, -14/+27)
Go through the Job layer in order to send QMP events. For the moment, these functions only call a notifier in the BlockJob layer that sends the existing commands. This uses notifiers rather than JobDriver callbacks because internal users of jobs won't receive QMP events, but might still be interested in getting notified for the events. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  blockjob: Split block_job_event_pending()  (Kevin Wolf; 1 file, -9/+18)
block_job_event_pending() doesn't only send a QMP event, but it also transitions to the PENDING state. Split the function so that we get one part only sending the event (like other block_job_event_* functions) and another part that does the state transition. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Move BlockJobCreateFlags to Job  (Kevin Wolf; 1 file, -20/+7)
This renames the BlockJobCreateFlags constants, moves a few JOB_INTERNAL checks to job_create() and the auto_{finalize,dismiss} fields from BlockJob to Job. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Replace BlockJob.completed with job_is_completed()  (Kevin Wolf; 1 file, -9/+7)
Since we introduced an explicit status to block job, BlockJob.completed is redundant because it can be derived from the status. Remove the field from BlockJob and add a function to derive it from the status at the Job level. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Move pause/resume functions to Job  (Kevin Wolf; 1 file, -65/+16)
While we already moved the state related to job pausing to Job, the functions to do so were still BlockJob-only. This commit moves them over to Job.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Add job_sleep_ns()  (Kevin Wolf; 1 file, -27/+0)
There is nothing block layer specific about block_job_sleep_ns(), so move the function to Job. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Move coroutine and related code to Job  (Kevin Wolf; 1 file, -180/+39)
This commit moves some core functions for dealing with the job coroutine from BlockJob to Job. This includes primarily entering the coroutine (both for the first and reentering) and yielding explicitly and at pause points. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Move defer_to_main_loop to Job  (Kevin Wolf; 1 file, -52/+5)
Move the defer_to_main_loop functionality from BlockJob to Job. The code can be simplified because we can use job->aio_context in job_defer_to_main_loop_bh() now, instead of having to access the BlockDriverState. Probably taking the data->aio_context lock in addition was already unnecessary in the old code because we didn't actually make use of anything protected by the old AioContext except getting the new AioContext, in case it changed between scheduling the BH and running it. But it's certainly unnecessary now that the BDS isn't accessed at all any more. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
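The pattern after this commit, roughly sketched (the struct name here is illustrative; JobDeferToMainLoopFn, job->aio_context and job->deferred_to_main_loop are from this series):

    typedef struct {
        Job *job;
        JobDeferToMainLoopFn *fn;
        void *opaque;
    } JobDeferData;                     /* illustrative name */

    static void job_defer_to_main_loop_bh(void *opaque)
    {
        JobDeferData *data = opaque;

        /* Running in the main loop now; lock the job's own AioContext and
         * run the completion callback there. */
        aio_context_acquire(data->job->aio_context);
        data->fn(data->job, data->opaque);
        aio_context_release(data->job->aio_context);
        g_free(data);
    }

    void job_defer_to_main_loop(Job *job, JobDeferToMainLoopFn *fn, void *opaque)
    {
        JobDeferData *data = g_new(JobDeferData, 1);

        data->job = job;
        data->fn = fn;
        data->opaque = opaque;
        job->deferred_to_main_loop = true;

        aio_bh_schedule_oneshot(qemu_get_aio_context(),
                                job_defer_to_main_loop_bh, data);
    }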
2018-05-23  job: Add Job.aio_context  (Kevin Wolf; 1 file, -1/+4)
When block jobs need an AioContext, they just take it from their main block node. Generic jobs don't have a main block node, so we need to assign them an AioContext explicitly. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Move cancelled to Job  (Kevin Wolf; 1 file, -15/+13)
We cannot yet move the whole logic around job cancelling to Job because it depends on quite a few other things that are still only in BlockJob, but we can move the cancelled field at least. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Add reference counting  (Kevin Wolf; 1 file, -27/+21)
This moves reference counting from BlockJob to Job. In order to keep calling the BlockJob cleanup code when the job is deleted via job_unref(), introduce a new JobDriver.free callback. Every block job must use block_job_free() for this callback, this is asserted in block_job_create(). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
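The ref-counting pattern described above, in sketch form (the real job_unref() also detaches the job from lists and asserts its state; simplified here):

    void job_ref(Job *job)
    {
        ++job->refcnt;
    }

    void job_unref(Job *job)
    {
        if (--job->refcnt == 0) {
            /* Every block job sets this callback to block_job_free(). */
            if (job->driver->free) {
                job->driver->free(job);
            }
            g_free(job->id);
            g_free(job);
        }
    }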
2018-05-23  job: Move state transitions to Job  (Kevin Wolf; 1 file, -79/+23)
This moves BlockJob.status and the closely related functions (block_)job_state_transition() and (block_)job_apply_verb to Job. The two QAPI enums are renamed to JobStatus and JobVerb. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
2018-05-23  job: Maintain a list of all jobs  (Kevin Wolf; 1 file, -22/+24)
This moves the job list from BlockJob to Job. Now we can check for duplicate IDs in job_create(). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
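A sketch of the global list and the lookup that makes the duplicate-ID check possible (simplified; locking and error reporting omitted). job_create() can then reject an ID for which this lookup returns non-NULL:

    static QLIST_HEAD(, Job) jobs = QLIST_HEAD_INITIALIZER(jobs);

    Job *job_get(const char *id)
    {
        Job *job;

        QLIST_FOREACH(job, &jobs, job_list) {
            if (job->id && !strcmp(id, job->id)) {
                return job;
            }
        }
        return NULL;
    }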
2018-05-23  job: Add job_delete()  (Kevin Wolf; 1 file, -2/+1)
This moves freeing the Job object and its fields from block_job_unref() to job_delete(). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Add JobDriver.job_type  (Kevin Wolf; 1 file, -9/+7)
This moves the job_type field from BlockJobDriver to JobDriver. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Rename BlockJobType into JobType  (Kevin Wolf; 1 file, -3/+3)
QAPI types aren't externally visible, so we can rename them without causing problems. Before we add a job type to Job, rename the enum so it can be used for more than just block jobs. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Create Job, JobDriver and job_create()  (Kevin Wolf; 1 file, -24/+22)
This is the first step towards creating an infrastructure for generic background jobs that aren't tied to a block device. For now, Job only stores its ID and JobDriver, the rest stays in BlockJob. The following patches will move over more parts of BlockJob to Job if they are meaningful outside the context of a block job. BlockJob.driver is now redundant, but this patch leaves it around to avoid unnecessary churn. The next patches will get rid of almost all of its uses anyway so that it can be removed later with much less churn. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-16  blockjob: do not cancel timer in resume  (Stefan Hajnoczi; 1 file, -7/+15)
Currently the timer is cancelled and the block job is entered by block_job_resume(). This behavior causes drain to run extra blockjob iterations when the job was sleeping due to the ratelimit. This patch leaves the job asleep when block_job_resume() is called. Jobs can still be forcibly woken up using block_job_enter(), which is used to cancel jobs. After this patch drain no longer runs extra blockjob iterations. This is the expected behavior that qemu-iotests 185 used to rely on. We temporarily changed the 185 test output to make it pass for the QEMU 2.12 release but now it's time to address this issue. Cc: QingFeng Hao <haoqf@linux.vnet.ibm.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: QingFeng Hao <haoqf@linux.vnet.ibm.com> Message-id: 20180508135436.30140-3-stefanha@redhat.com Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Jeff Cody <jcody@redhat.com>