path: root/block/stream.c
2017-05-26  stream: fix crash in stream_start() when block_job_create() fails  (Alberto Garcia, 1 file, +1/-1)
The code that tries to reopen a BlockDriverState in stream_start() when the creation of a new block job fails crashes because it attempts to dereference a pointer that is known to be NULL. This is a regression introduced in a170a91fd3eab6155da39e740381867e, likely because the code was copied from stream_complete().
Cc: qemu-stable@nongnu.org
Reported-by: Kashyap Chamarthy <kchamart@redhat.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Tested-by: Kashyap Chamarthy <kchamart@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2017-02-28  block: Add Error parameter to bdrv_set_backing_hd()  (Kevin Wolf, 1 file, +8/-1)
Not all callers of bdrv_set_backing_hd() know for sure that attaching the backing file will be allowed by the permission system. Return the error from the function rather than aborting.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>

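For illustration, a minimal sketch of how a caller handles the new error path (variable names are illustrative; it is assumed here that the function still returns void and reports failure only through the Error out-parameter):

    Error *local_err = NULL;

    bdrv_set_backing_hd(bs, backing_bs, &local_err);
    if (local_err) {
        /* Attaching the backing file was refused (e.g. by the
         * permission system); propagate instead of aborting. */
        error_propagate(errp, local_err);
        return;
    }
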
2017-02-28  stream: Use real permissions in streaming block job  (Kevin Wolf, 1 file, +27/-12)
The correct permissions are relatively obvious here (and explained in code comments). For intermediate streaming, we need to reopen the top node read-write before creating the job now because the permissions system catches attempts to get the BLK_PERM_WRITE_UNCHANGED permission on a read-only node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Acked-by: Fam Zheng <famz@redhat.com>

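A rough sketch of the read-write reopen that intermediate streaming now needs before the job is created (flag handling is simplified and the error path is illustrative):

    int orig_flags = bdrv_get_flags(bs);

    if (!(orig_flags & BDRV_O_RDWR)) {
        /* The permission system refuses BLK_PERM_WRITE_UNCHANGED on a
         * read-only node, so reopen the top node read-write first. */
        if (bdrv_reopen(bs, orig_flags | BDRV_O_RDWR, errp) != 0) {
            return;
        }
    }
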
2017-02-28  blockjob: Add permissions to block_job_add_bdrv()  (Kevin Wolf, 1 file, +3/-1)
Block jobs don't actually do I/O through the reference they create with block_job_add_bdrv(), but they might want to use the permission system to express what the block job does to intermediate nodes. This adds permission parameters to block_job_add_bdrv() to provide the means to request them.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Acked-by: Fam Zheng <famz@redhat.com>

2017-02-28  blockjob: Add permissions to block_job_create()  (Kevin Wolf, 1 file, +3/-2)
This function creates a BlockBackend internally, so block jobs need to tell it what they want to do with the BB.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Acked-by: Fam Zheng <famz@redhat.com>

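As a hedged sketch of what such a call looks like (the exact parameter order and the permission bits chosen here are assumptions for illustration, not the precise diff):

    StreamBlockJob *s;

    /* Declare up front what the job will do through the BlockBackend
     * that block_job_create() builds internally. */
    s = block_job_create(job_id, &stream_job_driver, bs,
                         BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE,
                         BLK_PERM_ALL,
                         speed, cb, opaque, errp);
    if (!s) {
        return;
    }
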
2016-11-14  blockjob: add block_job_start  (John Snow, 1 file, +2/-3)
Instead of automatically starting jobs at creation time via backup_start et al, we'd like to return a job object pointer that can be started manually at a later point in time. For now, add the block_job_start mechanism and start the jobs automatically as we have been doing, with conversions coming job-by-job in later patches. Of note: cancellation of unstarted jobs will perform all the normal cleanup as if the job had started, particularly abort and clean. The only difference is that we will not emit any events, because the job never actually started.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 1478587839-9834-5-git-send-email-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>

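Conceptually (a sketch of the two-step pattern, not an exact call site), creation and kick-off are now separate:

    StreamBlockJob *s;

    s = block_job_create(job_id, &stream_job_driver, bs, speed,
                         cb, opaque, errp);
    if (!s) {
        return;
    }

    /* Nothing runs yet; the caller starts the coroutine explicitly. */
    block_job_start(&s->common);
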
2016-11-14  blockjob: add .start field  (John Snow, 1 file, +2/-1)
Add an explicit .start field to specify the entry point. We already own the coroutine itself and manage its lifetime, so let's take control of its creation, too. This will allow us to delay creation of the actual coroutine until we know we'll actually start a BlockJob in block_job_start. This avoids the sticky question of how to "un-create" a Coroutine that hasn't been started yet.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1478587839-9834-4-git-send-email-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>

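A sketch of what the streaming job's driver definition looks like with the new entry point (field list abridged; the .set_speed callback is assumed from the code of this era):

    static const BlockJobDriver stream_job_driver = {
        .instance_size = sizeof(StreamBlockJob),
        .job_type      = BLOCK_JOB_TYPE_STREAM,
        .set_speed     = stream_set_speed,
        .start         = stream_run,   /* coroutine entry point */
    };
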
2016-11-01  blockjobs: split interface into public/private, Part 1  (John Snow, 1 file, +1/-1)
To make it a little more obvious which functions are intended to be public interface and which are intended to be for use only by jobs themselves, split the interface into "public" and "private" files. Convert blockjobs (e.g. block/backup) to using the private interface. Leave blockdev and others on the public interface. There are remaining uses of private state by qemu-img, and several cases in blockdev.c and block/io.c where we grab job->blk for the purposes of acquiring an AIOContext. These will be corrected in future patches.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 1477584421-1399-7-git-send-email-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>

2016-11-01  blockjob: centralize QMP event emissions  (John Snow, 1 file, +3/-4)
There's no reason to leave this to blockdev; we can do it in blockjobs directly and get rid of an extra callback for most users. All non-internal jobs, even those created outside of QMP, will now consistently emit events.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 1477584421-1399-5-git-send-email-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>

2016-11-01  blockjobs: Allow creating internal jobs  (John Snow, 1 file, +1/-1)
Add the ability to create jobs without an ID.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 1477584421-1399-3-git-send-email-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>

2016-10-31  block: Support streaming to an intermediate layer  (Alberto Garcia, 1 file, +24/-0)
This makes sure that the image we are streaming into is open in read-write mode during the operation. Operation blockers are also set in all intermediate nodes, since they will be removed from the chain afterwards. Finally, this also unblocks the stream operation in backing files.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2016-07-13  Improve block job rate limiting for small bandwidth values  (Sascha Silbe, 1 file, +4/-8)
ratelimit_calculate_delay() previously reset the accounting every time slice, no matter how much data had been processed before. This had (at least) two consequences:

1. The minimum speed is rather large, e.g. 5 MiB/s for commit and stream. Not sure if there are real-world use cases where this would be a problem. Mirroring and backup over a slow link (e.g. DSL) would come to mind, though.

2. Tests for block job operations (e.g. cancel) were rather racy.

All block jobs currently use a time slice of 100ms. That's a reasonable value to get smooth output during regular operation. However this also meant that the state of block jobs changed every 100ms, no matter how low the configured limit was. On busy hosts, qemu often transferred additional chunks until the test case had a chance to cancel the job.

Fix the block job rate limit code to delay for more than one time slice to address the above issues. To make it easier to handle oversized chunks we switch the semantics from returning a delay _before_ the current request to a delay _after_ the current request. If necessary, this delay consists of multiple time slice units.

Since the mirror job sends multiple chunks in one go even if the rate limit was exceeded in between, we need to keep track of the start of the current time slice so we can correctly re-compute the delay for the updated amount of data.

The minimum bandwidth now is 1 data unit per time slice. The block jobs are currently passing the amount of data transferred in sectors and using 100ms time slices, so this translates to 5120 bytes/second. With chunk sizes usually being O(512KiB), tests have plenty of time (O(100s)) to operate on block jobs. The chance of a race condition now is fairly remote, except possibly on insanely loaded systems.

Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-id: 1467127721-9564-2-git-send-email-silbe@linux.vnet.ibm.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>

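For reference, a simplified sketch of how the stream job consumes the limiter after this change (sector-based accounting as used at the time; the loop structure is abridged):

    if (s->common.speed) {
        /* Delay to apply *after* the current chunk; may span several
         * 100ms time slices when the chunk was oversized. */
        delay_ns = ratelimit_calculate_delay(&s->limit, n);
        if (delay_ns > 0) {
            block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, delay_ns);
        }
    }
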
2016-07-13  coroutine: move entry argument to qemu_coroutine_create  (Paolo Bonzini, 1 file, +2/-2)
In practice the entry argument is always known at creation time, and it is confusing that sometimes qemu_coroutine_enter is used with a non-NULL argument to re-enter a coroutine (this happens in block/sheepdog.c and tests/test-coroutine.c). So pass the opaque value at creation time, for consistency with e.g. aio_bh_new. Mostly done with the following semantic patch:

    @ entry1 @
    expression entry, arg, co;
    @@
    - co = qemu_coroutine_create(entry);
    + co = qemu_coroutine_create(entry, arg);
      ...
    - qemu_coroutine_enter(co, arg);
    + qemu_coroutine_enter(co);

    @ entry2 @
    expression entry, arg;
    identifier co;
    @@
    - Coroutine *co = qemu_coroutine_create(entry);
    + Coroutine *co = qemu_coroutine_create(entry, arg);
      ...
    - qemu_coroutine_enter(co, arg);
    + qemu_coroutine_enter(co);

    @ entry3 @
    expression entry, arg;
    @@
    - qemu_coroutine_enter(qemu_coroutine_create(entry), arg);
    + qemu_coroutine_enter(qemu_coroutine_create(entry, arg));

    @ reentry @
    expression co;
    @@
    - qemu_coroutine_enter(co, NULL);
    + qemu_coroutine_enter(co);

except for the aforementioned few places where the semantic patch stumbled (as expected) and for test_co_queue, which would otherwise produce an uninitialized variable warning.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

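In block/stream.c this boils down to the pattern below (a sketch; the opaque value is the StreamBlockJob):

    Coroutine *co = qemu_coroutine_create(stream_run, s);
    qemu_coroutine_enter(co);
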
2016-07-13  stream: Add 'job-id' parameter to 'block-stream'  (Alberto Garcia, 1 file, +6/-6)
This patch adds a new optional 'job-id' parameter to 'block-stream', allowing the user to specify the ID of the block job to be created. The HMP 'block_stream' command remains unchanged.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2016-07-13  blockjob: Add 'job_id' parameter to block_job_create()  (Alberto Garcia, 1 file, +1/-1)
When a new job is created, the job ID is taken from the device name of the BDS. This patch adds a new 'job_id' parameter to let the caller provide one instead. This patch also verifies that the ID is always unique and well-formed. This causes problems in a couple of places where no ID is being set, because the BDS does not have a device name. In the case of test_block_job_start() (from test-blockjob-txn.c) we can simply use this new 'job_id' parameter to set the missing ID. In the case of img_commit() (from qemu-img.c) we still don't have the API to make commit_active_start() set the job ID, so we solve it by setting a default value. We'll get rid of this as soon as we extend the API.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2016-05-25  stream: Use BlockBackend for I/O  (Kevin Wolf, 1 file, +9/-6)
This changes the streaming block job to use the job's BlockBackend for performing the COR reads. job->bs isn't used by the streaming code any more afterwards.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>

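A simplified sketch of the copy-on-read path after the conversion (sector-based arithmetic as in the code of this period; buf, blk, sector_num and n come from the surrounding loop, and error handling is omitted):

    struct iovec iov = {
        .iov_base = buf,
        .iov_len  = n * BDRV_SECTOR_SIZE,
    };
    QEMUIOVector qiov;
    int ret;

    qemu_iovec_init_external(&qiov, &iov, 1);

    /* Read through the job's BlockBackend; the COR flag makes the read
     * populate the top image, which is what streaming is about. */
    ret = blk_co_preadv(blk, sector_num * BDRV_SECTOR_SIZE,
                        n * BDRV_SECTOR_SIZE, &qiov, BDRV_REQ_COPY_ON_READ);
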
2016-05-19  blockjob: Don't touch BDS iostatus  (Kevin Wolf, 1 file, +0/-7)
Block jobs don't actually make use of the iostatus for their BDSes, but they manage a separate block job iostatus. Still, they require that it is enabled for the source BDS and they enable it automatically for the target and set the error handling mode - which ends up never being used by the job. This patch removes all of the BDS iostatus handling from the block job, which removes another few bs->blk accesses.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>

2016-05-19  blockjob: Don't set iostatus of target  (Kevin Wolf, 1 file, +1/-2)
When block job errors were introduced, we assigned the iostatus of the target BDS "just in case". The field has never been accessible for the user because the target isn't listed in query-block. Before we can allow the user to have a second BlockBackend on the target, we need to clean this up. If anything, we would want to set the iostatus for the internal BB of the job (which we can always do later), but certainly not for a separate BB which the job doesn't even use. As a nice side effect, this gets rid of another bs->blk use.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>

2016-03-29  Merge remote-tracking branch 'remotes/cody/tags/block-pull-request' into staging  (Peter Maydell, 1 file, +6/-5)
# gpg: Signature made Tue 29 Mar 2016 01:48:09 BST using RSA key ID C0DE3057
# gpg: Good signature from "Jeffrey Cody <jcody@redhat.com>"
# gpg:                 aka "Jeffrey Cody <jeff@codyprime.org>"
# gpg:                 aka "Jeffrey Cody <codyprime@gmail.com>"

* remotes/cody/tags/block-pull-request:
  qemu-iotests: add no-op streaming test
  qemu-iotests: fix test_stream_partial()
  block: never cancel a streaming job without running stream_complete()

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2016-03-28  block: never cancel a streaming job without running stream_complete()  (Alberto Garcia, 1 file, +6/-5)
We need to call stream_complete() in order to do all the necessary clean-ups, even if there's an early failure. At the moment it's only useful to make sure that s->backing_file_str is not leaked, but it will become more important if we introduce support for streaming to any intermediate node.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 2abedf2debc65c250560237f31a8e6756883c8fc.1458566441.git.berto@igalia.com
Signed-off-by: Jeff Cody <jcody@redhat.com>

2016-03-22  include/qemu/osdep.h: Don't include qapi/error.h  (Markus Armbruster, 1 file, +1/-0)
Commit 57cb38b included qapi/error.h into qemu/osdep.h to get the Error typedef. Since then, we've moved to include qemu/osdep.h everywhere. Its file comment explains: "To avoid getting into possible circular include dependencies, this file should not include any other QEMU headers, with the exceptions of config-host.h, compiler.h, os-posix.h and os-win32.h, all of which are doing a similar job to this file and are under similar constraints." qapi/error.h doesn't do a similar job, and it doesn't adhere to similar constraints: it includes qapi-types.h. That's in excess of 100KiB of crap most .c files don't actually need.

Add the typedef to qemu/typedefs.h, and include that instead of qapi/error.h. Include qapi/error.h in .c files that need it and don't get it now. Include qapi-types.h in qom/object.h for uint16List.

Update scripts/clean-includes accordingly. Update it further to match reality: replace config.h by config-target.h, add sysemu/os-posix.h, sysemu/os-win32.h. Update the list of includes in the qemu/osdep.h comment quoted above similarly.

This reduces the number of objects depending on qapi/error.h from "all of them" to less than a third. Unfortunately, the number depending on qapi-types.h shrinks only a little. More work is needed for that one.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
[Fix compilation without the spice devel packages. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2016-01-20  block: Clean up includes  (Peter Maydell, 1 file, +1/-0)
Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2015-10-23  block: Move I/O status and error actions into BB  (Max Reitz, 1 file, +2/-1)
These options are only relevant for the user of a whole BDS tree (like a guest device or a block job) and should thus be moved into the BlockBackend.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2015-10-16  block: Manage backing file references in bdrv_set_backing_hd()  (Kevin Wolf, 1 file, +1/-29)
This simplifies the code somewhat, especially when dropping whole backing file subchains. The exception is the mirroring code, which does adventurous things with bdrv_swap(); in order to keep it working, I had to duplicate most of bdrv_set_backing_hd() locally. We'll get rid of this ugliness again shortly.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

2015-10-16  block: Convert bs->backing_hd to BdrvChild  (Kevin Wolf, 1 file, +4/-4)
This is the final step in converting all of the BlockDriverState pointers that block drivers use to BdrvChild. After this patch, bs->children contains the full list of child nodes that are referenced by a given BDS, and these children are only referenced through BdrvChild, so that updating the pointer in there is enough for changing edges in the graph.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

2015-06-22  Include qapi/qmp/qerror.h exactly where needed  (Markus Armbruster, 1 file, +1/-0)
In particular, don't include it into headers.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>

2015-06-22  qerror: Clean up QERR_ macros to expand into a single string  (Markus Armbruster, 1 file, +2/-2)
These macros expand into error class enumeration constant, comma, string. Unclean. Has been that way since commit 13f59ae. The error class is always ERROR_CLASS_GENERIC_ERROR since the previous commit. Clean up as follows:

* Prepend every use of a QERR_ macro by ERROR_CLASS_GENERIC_ERROR, and delete it from the QERR_ macro. No change after preprocessing.

* Rewrite error_set(ERROR_CLASS_GENERIC_ERROR, ...) into error_setg(...). Again, no change after preprocessing.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>

2014-11-03  block: let stream blockjob run in BDS AioContext  (Stefan Hajnoczi, 1 file, +36/-14)
The stream block job must run in the BlockDriverState AioContext so that it works with dataplane. The basics of acquiring the AioContext are easy in blockdev.c. The tricky part is the completion code which drops part of the backing file chain. This must be done in the main loop where bdrv_unref() and bdrv_close() are safe to call. Use block_job_defer_to_main_loop() to achieve that.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-9-git-send-email-stefanha@redhat.com

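Schematically (a sketch of the pattern rather than the exact diff; the StreamCompleteData type and its fields are illustrative):

    /* Runs in the main loop, where dropping part of the backing chain
     * (bdrv_unref()/bdrv_close()) is safe. */
    static void stream_complete(BlockJob *job, void *opaque)
    {
        StreamCompleteData *data = opaque;

        close_unused_images(job->bs, data->base, data->backing_file_str);
        block_job_completed(job, data->ret);
        g_free(data);
    }

    /* At the end of stream_run(), still in the BDS AioContext: */
    block_job_defer_to_main_loop(&s->common, stream_complete, data);
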
2014-10-20  block: Rename BlockDriverCompletionFunc to BlockCompletionFunc  (Markus Armbruster, 1 file, +1/-1)
I'll use it with block backends shortly, and the name is going to fit badly there. It's a block layer thing anyway, not just a block driver thing.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2014-07-18  block: Add Error argument to bdrv_refresh_limits()  (Kevin Wolf, 1 file, +1/-1)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

2014-07-01  block: add backing-file option to block-stream  (Jeff Cody, 1 file, +5/-6)
On some image chains, QEMU may not always be able to resolve the filenames properly, when updating the backing file of an image after a block job. For instance, certain relative pathnames may fail, or drives may have been specified originally by file descriptor (e.g. /dev/fd/???), or a relative protocol pathname may have been used. In these instances, QEMU may lack the information to be able to make the correct choice, but the user or management layer most likely does have that knowledge.

With this extension to the block-stream api, the user is able to change the backing file of the active layer as part of the block-stream operation. This allows the change to be 'safe', in the sense that if the attempt to write the active image metadata fails, then the block-stream operation returns failure, without disrupting the guest.

If a backing file string is not specified in the command, the backing file string to use is determined in the same manner as it was previously.

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

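At completion time the user-supplied string, when present, is what gets written into the image header; roughly (a sketch, with field and variable names assumed for illustration):

    /* Prefer the backing file string passed to block-stream; otherwise
     * fall back to the filename QEMU resolved on its own. */
    const char *base_id = s->backing_file_str ? s->backing_file_str
                                              : base->filename;

    ret = bdrv_change_backing_file(bs, base_id, base_fmt);
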
2014-06-23  qapi: adjust existing defines  (Wenchao Xia, 1 file, +2/-2)
In order to let event defines use existing types later, instead of redefining new ones, some old type defines for spice and vnc are changed, and BlockErrorAction is moved from block.h to the qapi schema. Note that BlockErrorAction is not merged with BlockdevOnError. At this point, VncInfo is not made a child of VncBasicInfo, because VncBasicInfo has mandatory fields where VncInfo makes them optional.
Signed-off-by: Wenchao Xia <wenchaoqemu@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>

2014-05-28  block: Use bdrv_set_backing_hd everywhere  (Fam Zheng, 1 file, +2/-2)
We need to handle the coming backing_blocker properly, so don't open-code the assignment; instead, call bdrv_set_backing_hd to change backing_hd.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

2014-01-24  block: Update BlockLimits when they might have changed  (Kevin Wolf, 1 file, +2/-0)
When reopening with different flags, or when backing files disappear from the chain, the limits may change. Make sure they get updated in these cases.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Benoît Canet <benoit@irqsave.net>

2013-11-28  block/stream: Don't stream unbacked devices  (Max Reitz, 1 file, +5/-0)
If a block device is unbacked, a streaming blockjob should immediately finish instead of beginning to try to stream, then noticing the backing file does not contain even the first sector (since it does not exist) and then finishing normally.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

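The early-out is essentially this (a sketch, using the pre-BdrvChild field name of the era):

    /* Nothing to stream if the device has no backing file at all. */
    if (!bs->backing_hd) {
        block_job_completed(&s->common, 0);
        return;
    }
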
2013-10-11  qapi: make use of new BlockJobType  (Fam Zheng, 1 file, +1/-1)
Switch the string to enum type BlockJobType in BlockJobDriver.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2013-10-11  blockjob: rename BlockJobType to BlockJobDriver  (Fam Zheng, 1 file, +2/-2)
We will use BlockJobType as the enum type name of block jobs in QAPI, so rename the current BlockJobType to BlockJobDriver, which will eventually become a set of operations, similar to block drivers.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2013-09-25  block: Fix compiler warning (-Werror=uninitialized)  (Stefan Weil, 1 file, +3/-2)
The patch fixes a warning from gcc (Debian 4.6.3-14+rpi1) 4.6.3:

    block/stream.c:141:22: error: ‘copy’ may be used uninitialized in this function [-Werror=uninitialized]

This is not a real bug - a better compiler would not complain. Now 'copy' always has a defined value, so the check for ret >= 0 can be removed.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2013-09-06  block: expect errors from bdrv_co_is_allocated  (Paolo Bonzini, 1 file, +1/-1)
Some bdrv_is_allocated callers do not expect errors, but the fallback in qcow2.c might make other callers trip on assertion failures or infinite loops. Fix the callers to always look for errors.
Cc: qemu-stable@nongnu.org
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

2013-09-06  block: remove bdrv_is_allocated_above/bdrv_co_is_allocated_above distinction  (Paolo Bonzini, 1 file, +2/-2)
Now that bdrv_is_allocated detects coroutine context, the two can use the same code.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

2013-09-06  block: make bdrv_co_is_allocated static  (Paolo Bonzini, 1 file, +2/-2)
bdrv_is_allocated can detect coroutine context and go through a fast path, similar to other block layer functions.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

2013-09-06  block: make bdrv_delete() static  (Fam Zheng, 1 file, +1/-1)
Manage the BlockDriverState lifecycle with a refcount, so bdrv_delete() is no longer public and is called by bdrv_unref() when the refcount drops to 0. This change is functionally identical because there are effectively no multiple references to a BDS yet: nothing calls bdrv_ref() so far, only bdrv_new() sets bs->refcnt to 1, so every bdrv_unref() still actually deletes the BDS.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

2013-08-22  aio / timers: convert block_job_sleep_ns and co_sleep_ns to new API  (Alex Bligh, 1 file, +1/-1)
Convert block_job_sleep_ns and co_sleep_ns to use the new timer API.
Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

2013-08-19  block: stop relying on io_flush() in bdrv_drain_all()  (Stefan Hajnoczi, 1 file, +5/-1)
If a block driver has no file descriptors to monitor but there are still active requests, it can return 1 from .io_flush(). This is used to spin during synchronous I/O. Stop relying on .io_flush() and instead check QLIST_EMPTY(&bs->tracked_requests) to decide whether there are active requests. This is the first step in removing .io_flush() so that event loops no longer need to have the concept of synchronous I/O. Eventually we may be able to kill synchronous I/O completely by running everything in a coroutine, but that is future work.

Note this patch moves bs->throttled_reqs initialization to bdrv_new() so that bdrv_requests_pending(bs) can safely access it. In practice bs is g_malloc0() so the memory is already zeroed but it's safer to initialize the queue properly. We also need to fix up block/stream.c:close_unused_images() to prevent traversing a dangling pointer while it rearranges the backing file chain. This is necessary since the new bdrv_drain_all() traverses the backing file chain.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

2013-06-28  block: Make BlockJobTypes const  (Kevin Wolf, 1 file, +1/-1)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2012-12-19  block: move include files to include/block/  (Paolo Bonzini, 1 file, +2/-2)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2012-12-11  aio: Get rid of qemu_aio_flush()  (Kevin Wolf, 1 file, +1/-1)
There are no remaining users, and new users should probably be using bdrv_drain_all() in the first place.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2012-10-24  block: rename block_job_complete to block_job_completed  (Paolo Bonzini, 1 file, +2/-2)
The imperative will be used for the QMP command.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2012-09-28  stream: add on-error argument  (Paolo Bonzini, 1 file, +27/-1)
This patch adds support for error management to streaming.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

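The shape of the per-chunk error handling this introduces, hedged as a sketch in today's terms (the helper's exact signature differed back then, and the enum names shown postdate this commit):

    if (ret < 0) {
        BlockErrorAction action =
            block_job_error_action(&s->common, s->on_error, true, -ret);
        if (action == BLOCK_ERROR_ACTION_REPORT) {
            break;   /* fail the job with this error */
        }
        /* IGNORE keeps going; STOP pauses the job so the user can fix
         * the underlying problem and resume it. */
    }
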
2012-09-28  block: move job APIs to separate files  (Paolo Bonzini, 1 file, +1/-0)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>