path: root/block
Age | Commit message | Author | Files | Lines
2018-10-19 | vpc: Fail open on bad header checksum | Markus Armbruster | 1 | -3/+5
vpc_open() merely prints a warning when it finds a bad header checksum. Turn that into a hard error. Cc: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20181017082702.5581-39-armbru@redhat.com> [Error message capitalized for local consistency] Reviewed-by: Kevin Wolf <kwolf@redhat.com>
2018-10-19 | block: Use warn_report() & friends to report warnings | Markus Armbruster | 3 | -4/+4
Calling error_report() in a function that takes an Error ** argument is suspicious. Convert a few that are actually warnings to warn_report(). While there, split warnings consisting of multiple sentences to conform to conventions spelled out in warn_report()'s contract, and improve a rather useless warning in sheepdog.c. Cc: Kevin Wolf <kwolf@redhat.com> Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Peter Lieven <pl@kamp.de> Cc: Liu Yuan <namei.unix@gmail.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20181017082702.5581-4-armbru@redhat.com> Drop changes to "without an explicit read-only=on" warnings, because there's a series removing them pending. Also drop a cc: to a former Sheepdog maintainer. Reviewed-by: Kevin Wolf <kwolf@redhat.com>
2018-10-19 | error: Fix use of error_prepend() with &error_fatal, &error_abort | Markus Armbruster | 2 | -4/+4
From include/qapi/error.h: * Pass an existing error to the caller with the message modified: * error_propagate(errp, err); * error_prepend(errp, "Could not frobnicate '%s': ", name); Fei Li pointed out that doing error_propagate() first doesn't work well when @errp is &error_fatal or &error_abort: the error_prepend() is never reached. Since I doubt fixing the documentation will stop people from getting it wrong, introduce error_propagate_prepend(), in the hope that it lures people away from using its constituents in the wrong order. Update the instructions in error.h accordingly. Convert existing error_prepend() next to error_propagate to error_propagate_prepend(). If any of these get reached with &error_fatal or &error_abort, the error messages improve. I didn't check whether that's the case anywhere. Cc: Fei Li <fli@suse.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20181017082702.5581-2-armbru@redhat.com>
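As a minimal sketch of the ordering problem and the new helper (assuming the error.h API quoted above; only error_propagate_prepend() is new in this commit):

    /* Problematic with errp == &error_fatal or &error_abort:
     * error_propagate() exits or aborts before error_prepend() runs. */
    error_propagate(errp, err);
    error_prepend(errp, "Could not frobnicate '%s': ", name);

    /* New combined helper: prepends to the message first, then propagates. */
    error_propagate_prepend(errp, err, "Could not frobnicate '%s': ", name);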
2018-10-12 | nvme: correct locking around completion | Paolo Bonzini | 1 | -2/+0
nvme_poll_queues is already protected by q->lock, and AIO callbacks are invoked outside the AioContext lock. So remove the acquire/release pair in nvme_handle_event. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20180814062739.19640-1-pbonzini@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com>
2018-10-01 | block-backend: Set werror/rerror defaults in blk_new() | Kevin Wolf | 1 | -0/+3
Currently, the default values for werror and rerror have to be set explicitly with blk_set_on_error() by the callers of blk_new(). The only caller actually doing this is blockdev_init(), which is called for BlockBackends created using -drive. In particular, anonymous BlockBackends created with -device ...,drive=<node-name> didn't get the correct default set and instead defaulted to the integer value 0 (= BLOCKDEV_ON_ERROR_REPORT). This is the intended default for rerror anyway, but the default for werror should be BLOCKDEV_ON_ERROR_ENOSPC. Set the defaults in blk_new() instead so that they apply no matter what way the BlockBackend was created. Cc: qemu-stable@nongnu.org Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com>
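A rough sketch of what setting the defaults in blk_new() amounts to (field and constant names assumed from context; an illustration, not the literal patch):

    BlockBackend *blk_new(uint64_t perm, uint64_t shared_perm)
    {
        BlockBackend *blk = g_new0(BlockBackend, 1);

        /* Defaults that previously had to be set via blk_set_on_error() */
        blk->on_read_error = BLOCKDEV_ON_ERROR_REPORT;
        blk->on_write_error = BLOCKDEV_ON_ERROR_ENOSPC;

        /* ... remaining initialization unchanged ... */
        return blk;
    }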
2018-10-01 | qcow2: Explicit number replaced by a constant | Leonid Bloch | 1 | -2/+2
Signed-off-by: Leonid Bloch <lbloch@janustech.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-10-01 | qcow2: Set the default cache-clean-interval to 10 minutes | Leonid Bloch | 2 | -2/+4
The default cache-clean-interval is set to 10 minutes, in order to lower the overhead of the qcow2 caches (before, the default was 0, i.e. disabled). For non-Linux platforms the default is kept at 0, because cache-clean-interval is not supported there yet. Signed-off-by: Leonid Bloch <lbloch@janustech.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-10-01 | qcow2: Resize the cache upon image resizing | Leonid Bloch | 1 | -0/+11
The caches are now recalculated upon image resizing. This is done because the new default behavior of assigning the L2 cache relative to the image size implies that the cache should be adapted accordingly after an image resize. Signed-off-by: Leonid Bloch <lbloch@janustech.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-10-01 | qcow2: Increase the default upper limit on the L2 cache size | Leonid Bloch | 1 | -1/+5
The upper limit on the L2 cache size is increased from 1 MB to 32 MB on Linux platforms, and to 8 MB on other platforms (the difference is due to the ability to set intervals for cache cleaning on Linux platforms only). This is done in order to allow default full coverage with the L2 cache for images of up to 256 GB in size (previously 8 GB). Note that only the amount needed to cover the full image is actually allocated. The value changed here is just the upper limit on the L2 cache size, beyond which it will not grow even if the size of the image would require it to. Signed-off-by: Leonid Bloch <lbloch@janustech.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-10-01 | qcow2: Assign the L2 cache relatively to the image size | Leonid Bloch | 2 | -15/+10
Sufficient L2 cache can noticeably improve the performance when using large images with frequent I/O. Previously, unless 'cache-size' was specified and was large enough, the L2 cache was set to a certain size without taking the virtual image size into account. Now, the L2 cache assignment is aware of the virtual size of the image, and will cover the entire image, unless the cache size needed for that is larger than a certain maximum. This maximum is set to 1 MB by default (enough to cover an 8 GB image with the default cluster size) but can be increased or decreased using the 'l2-cache-size' option. This option was previously documented as the *maximum* L2 cache size, and this patch makes it behave as such, instead of as a constant size. Also, the existing option 'cache-size' can limit the sum of both L2 and refcount caches, as previously. Signed-off-by: Leonid Bloch <lbloch@janustech.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
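For reference, the cache size needed for full coverage follows from one 8-byte L2 entry per cluster; a small self-contained calculation of that arithmetic (not the actual qcow2 code):

    #include <inttypes.h>
    #include <stdio.h>

    /* Bytes of L2 cache needed to map the whole image:
     * one 8-byte L2 table entry per cluster. */
    static uint64_t l2_cache_for_full_coverage(uint64_t virtual_size,
                                               uint64_t cluster_size)
    {
        return virtual_size / (cluster_size / 8);
    }

    int main(void)
    {
        /* 8 GiB image with default 64 KiB clusters -> 1 MiB of L2 cache */
        printf("%" PRIu64 "\n",
               l2_cache_for_full_coverage(8ULL << 30, 64 << 10));
        return 0;
    }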
2018-10-01 | qcow2: Avoid duplication in setting the refcount cache size | Leonid Bloch | 1 | -3/+2
The refcount cache size does not need to be set to its minimum value in read_cache_sizes(), as it is set to at least its minimum value in qcow2_update_options_prepare(). Signed-off-by: Leonid Bloch <lbloch@janustech.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-10-01 | qcow2: Make sizes more humanly readable | Leonid Bloch | 2 | -5/+6
Signed-off-by: Leonid Bloch <lbloch@janustech.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-10-01 | file-posix: Forbid trying to change unsupported options during reopen | Alberto Garcia | 1 | -2/+7
The file-posix code is used for the "file", "host_device" and "host_cdrom" drivers, and it allows reopening images. However the only option that is actually processed is "x-check-cache-dropped", and changes in all other options (e.g. "filename") are silently ignored: (qemu) qemu-io virtio0 "reopen -o file.filename=no-such-file" While we could allow changing some of the other options, let's keep things as they are for now but return an error if the user tries to change any of them. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-10-01 | file-posix: x-check-cache-dropped should default to false on reopen | Alberto Garcia | 1 | -1/+1
The default value of x-check-cache-dropped is false. There's no reason to use the previous value as a default in raw_reopen_prepare() because bdrv_reopen_queue_child() already takes care of putting the old options in the BDRVReopenState.options QDict. If x-check-cache-dropped was previously set but is now missing from the reopen QDict then it should be reset to false. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-10-01 | file-posix: Include filename in locking error message | Fam Zheng | 1 | -4/+6
Image locking errors happening at device initialization time don't say which file cannot be locked. For instance, -device scsi-disk,drive=drive-1: Failed to get shared "write" lock Is another process using the image? could refer to either the overlay image or its backing image. Hoist the error_append_hint() call to the caller of raw_check_lock_bytes(), where the file name is known, and include it in the error hint. Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
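A sketch of the resulting call site (illustrative; the surrounding arguments are assumed, but error_append_hint() and bs->filename are the real interfaces involved):

    /* In the caller of raw_check_lock_bytes(), where the file name is known: */
    ret = raw_check_lock_bytes(fd, perm, shared_perm, errp);  /* arguments assumed */
    if (ret < 0) {
        error_append_hint(errp, "Is another process using the image [%s]?\n",
                          bs->filename);
        return ret;
    }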
2018-09-28 | Merge remote-tracking branch 'remotes/famz/tags/staging-pull-request' into staging | Peter Maydell | 1 | -0/+21
Block and testing patches
- Paolo's AIO fixes.
- VMDK streamOptimized corner case fix
- VM testing improvement on -cpu
# gpg: Signature made Wed 26 Sep 2018 03:54:08 BST
# gpg: using RSA key CA35624C6A9171C6
# gpg: Good signature from "Fam Zheng <famz@redhat.com>"
# Primary key fingerprint: 5003 7CB7 9706 0F76 F021 AD56 CA35 624C 6A91 71C6
* remotes/famz/tags/staging-pull-request:
  vmdk: align end of file to a sector boundary
  tests/vm: Use -cpu max rather than -cpu host
  aio-posix: do skip system call if ctx->notifier polling succeeds
  aio-posix: compute timeout before polling
  aio-posix: fix concurrent access to poll_disable_cnt
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-09-26 | vmdk: align end of file to a sector boundary | yuchenlin | 1 | -0/+21
There is a rare case in which the size of the last compressed cluster is larger than the cluster size, which causes the file not to be aligned at a sector boundary. There are three reasons to align it. First, if the vmdk file is not aligned at a sector boundary, there may be many undefined behaviors; for example, vbox shows VMDK: Compressed image is corrupted 'syno-vm-disk1.vmdk' (VERR_ZIP_CORRUPTED) when we try to import an ova with an unaligned vmdk. Second, all the other cluster_sectors are aligned to sectors, so the last one should be as well. Third, it eases reading with sector-based I/Os. Signed-off-by: yuchenlin <yuchenlin@synology.com> Message-Id: <20180913082952.3675-1-yuchenlin@synology.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com>
2018-09-25 | Merge remote-tracking branch 'remotes/huth-gitlab/tags/pull-request-2018-09-25' into staging | Peter Maydell | 1 | -0/+0
- Deprecate the usage of a network backend via "name" instead of "id"
- Deprecate the "enforce-config-section" machine parameter
- Re-enable the wdt_ib700, endianness and vmxnet3 qtests
- Some trivial fixes and doc update patches that crossed my way
# gpg: Signature made Tue 25 Sep 2018 16:58:42 BST
# gpg: using RSA key 2ED9D774FE702DB5
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>"
# gpg: aka "Thomas Huth <thuth@redhat.com>"
# gpg: aka "Thomas Huth <huth@tuxfamily.org>"
# gpg: aka "Thomas Huth <th.huth@posteo.de>"
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* remotes/huth-gitlab/tags/pull-request-2018-09-25:
  Revert "check: Move VMXNET3 test to common"
  Revert "check: Move endianess test to common"
  Revert "check: Move wdt_ib700 test to common"
  tests/migration: Speed up the test on ppc64
  hw/qdev-core: Fix description of instance_init
  qdev: fix a typo in comment
  docs: Fix some typos (most found by codespell)
  trivial: Make bios files and source files non-executable
  memfd: fix possible usage of the uninitialized file descriptor
  hw/core/machine: Officially deprecate the enforce-config-section parameter
  net/slirp: Deprecate the [hub_id name] parameter tuple
  net: Deprecate the "name" parameter of -net
  Makefile: Add missing dependency for qemu-deprecated.texi
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-09-25 | trivial: Make bios files and source files non-executable | Thomas Huth | 1 | -0/+0
These files can not be executed on the host, so they should not be marked as executable. Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2018-09-25 | block: Use a single global AioWait | Kevin Wolf | 2 | -12/+6
When draining a block node, we recurse to its parent and for subtree drains also to its children. A single AIO_WAIT_WHILE() is then used to wait for bdrv_drain_poll() to become true, which depends on all of the nodes we recursed to. However, if the respective child or parent becomes quiescent and calls bdrv_wakeup(), only the AioWait of the child/parent is checked, while AIO_WAIT_WHILE() depends on the AioWait of the original node. Fix this by using a single AioWait for all callers of AIO_WAIT_WHILE(). This may mean that the draining thread gets a few more unnecessary wakeups because an unrelated operation got completed, but we already wake it up when something _could_ have changed rather than only if it has certainly changed. Apart from that, drain is a slow path anyway. In theory it would be possible to use wakeups more selectively and still correctly, but the gains are likely not worth the additional complexity. In fact, this patch is a nice simplification for some places in the code. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-09-25 | block: Remove aio_poll() in bdrv_drain_poll variants | Kevin Wolf | 1 | -8/+0
bdrv_drain_poll_top_level() was buggy because it didn't release the AioContext lock of the node to be drained before calling aio_poll(). This way, callbacks called by aio_poll() would possibly take the lock a second time and run into a deadlock with a nested AIO_WAIT_WHILE() call. However, it turns out that the aio_poll() call isn't actually needed any more. It was introduced in commit 91af091f923, which is effectively reverted by this patch. The cases it was supposed to fix are now covered by bdrv_drain_poll(), which waits for block jobs to reach a quiescent state. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-09-25 | block-backend: Decrease in_flight only after callback | Kevin Wolf | 1 | -1/+1
Request callbacks can do pretty much anything, including operations that will yield from the coroutine (such as draining the backend). In that case, a decreased in_flight would be visible to other code and could lead to a drain completing while the callback hasn't actually completed yet. Note that reordering these operations forbids calling drain directly inside an AIO callback. As Paolo explains, indirectly calling it is okay: - Calling it through a coroutine is okay, because then bdrv_drained_begin() goes through bdrv_co_yield_to_drain() and you have in_flight=2 when bdrv_co_yield_to_drain() yields, then soon in_flight=1 when the aio_co_wake() in the AIO callback completes, then in_flight=0 after the bottom half starts. - Calling it through a bottom half would be okay too, as long as the AIO callback remembers to do inc_in_flight/dec_in_flight just like bdrv_co_yield_to_drain() and bdrv_co_drain_bh_cb() do A few more important cases that come to mind: - A coroutine that yields because of I/O is okay, with a sequence similar to bdrv_co_yield_to_drain(). - A coroutine that yields with no I/O pending will correctly decrease in_flight to zero before yielding. - Calling more AIO from the callback won't overflow the counter just because of mutual recursion, because AIO functions always yield at least once before invoking the callback. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
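Sketched, the reordered completion path looks roughly like this (names follow block-backend.c, but treat this as an illustration rather than the literal diff):

    static void blk_aio_complete(BlkAioEmAIOCB *acb)
    {
        if (acb->has_returned) {
            acb->common.cb(acb->common.opaque, acb->rwco.ret);
            /* Decrement only after the callback has run, so a drain started
             * from inside the callback still sees this request in flight. */
            blk_dec_in_flight(acb->rwco.blk);
            qemu_aio_unref(acb);
        }
    }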
2018-09-25 | block-backend: Fix potential double blk_delete() | Kevin Wolf | 1 | -1/+8
blk_unref() first decreases the refcount of the BlockBackend and calls blk_delete() if the refcount reaches zero. Requests can still be in flight at this point; they are only drained during blk_delete(), and at that point arbitrary callbacks can run. If any callback takes a temporary BlockBackend reference, it will first increase the refcount to 1 and then decrease it to 0 again, triggering another blk_delete(). This will cause a use-after-free crash in the outer blk_delete(). Fix it by draining the BlockBackend before decreasing the refcount to 0. Assert in blk_ref() that it never takes the first refcount (which would mean that the BlockBackend is already being deleted). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
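Roughly, the fixed blk_unref() drains while it still holds the last reference, so temporary references taken by callbacks go 2 -> 1 instead of 1 -> 0 (an illustrative sketch, not the verbatim patch):

    void blk_unref(BlockBackend *blk)
    {
        if (blk) {
            assert(blk->refcnt > 0);
            if (blk->refcnt > 1) {
                blk->refcnt--;
            } else {
                /* Drain before dropping to 0: callbacks that briefly
                 * ref/unref the BlockBackend never trigger blk_delete(). */
                blk_drain(blk);
                assert(blk->refcnt == 1);
                blk->refcnt = 0;
                blk_delete(blk);
            }
        }
    }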
2018-09-25 | block-backend: Add .drained_poll callback | Kevin Wolf | 1 | -0/+9
A bdrv_drain operation must ensure that all parents are quiesced, this includes BlockBackends. Otherwise, callbacks called by requests that are completed on the BDS layer, but not quite yet on the BlockBackend layer could still create new requests. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
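A sketch of such a parent callback for the BlockBackend (field names assumed from context; the idea is simply to keep the drain polling while the BlockBackend itself still has requests in flight):

    static bool blk_root_drained_poll(BdrvChild *child)
    {
        BlockBackend *blk = child->opaque;
        assert(blk->quiesce_counter);
        /* A non-zero return keeps the drain waiting. */
        return !!blk->in_flight;
    }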
2018-09-25 | block: Add missing locking in bdrv_co_drain_bh_cb() | Kevin Wolf | 1 | -0/+15
bdrv_do_drained_begin/end() assume that they are called with the AioContext lock of bs held. If we call drain functions from a coroutine with the AioContext lock held, we yield and schedule a BH to move out of coroutine context. This means that the lock for the home context of the coroutine is released and must be re-acquired in the bottom half. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-09-25 | block/linux-aio: acquire AioContext before qemu_laio_process_completions | Sergio Lopez | 1 | -1/+1
In qemu_laio_process_completions_and_submit, the AioContext is acquired before the ioq_submit iteration and after qemu_laio_process_completions, but the latter is not thread safe either. This change avoids a number of random crashes when the Main Thread and an IO Thread collide processing completions for the same AioContext. This is an example of such crash:

- The IO Thread is trying to acquire the AioContext at aio_co_enter, which evidences that it didn't lock it before:

Thread 3 (Thread 0x7fdfd8bd8700 (LWP 36743)):
#0  0x00007fdfe0dd542d in __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x00007fdfe0dd0de6 in _L_lock_870 () at /lib64/libpthread.so.0
#2  0x00007fdfe0dd0cdf in __GI___pthread_mutex_lock (mutex=mutex@entry=0x5631fde0e6c0) at ../nptl/pthread_mutex_lock.c:114
#3  0x00005631fc0603a7 in qemu_mutex_lock_impl (mutex=0x5631fde0e6c0, file=0x5631fc23520f "util/async.c", line=511) at util/qemu-thread-posix.c:66
#4  0x00005631fc05b558 in aio_co_enter (ctx=0x5631fde0e660, co=0x7fdfcc0c2b40) at util/async.c:493
#5  0x00005631fc05b5ac in aio_co_wake (co=<optimized out>) at util/async.c:478
#6  0x00005631fbfc51ad in qemu_laio_process_completion (laiocb=<optimized out>) at block/linux-aio.c:104
#7  0x00005631fbfc523c in qemu_laio_process_completions (s=s@entry=0x7fdfc0297670) at block/linux-aio.c:222
#8  0x00005631fbfc5499 in qemu_laio_process_completions_and_submit (s=0x7fdfc0297670) at block/linux-aio.c:237
#9  0x00005631fc05d978 in aio_dispatch_handlers (ctx=ctx@entry=0x5631fde0e660) at util/aio-posix.c:406
#10 0x00005631fc05e3ea in aio_poll (ctx=0x5631fde0e660, blocking=blocking@entry=true) at util/aio-posix.c:693
#11 0x00005631fbd7ad96 in iothread_run (opaque=0x5631fde0e1c0) at iothread.c:64
#12 0x00007fdfe0dcee25 in start_thread (arg=0x7fdfd8bd8700) at pthread_create.c:308
#13 0x00007fdfe0afc34d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

- The Main Thread is also processing completions from the same AioContext, and crashes due to failed assertion at util/iov.c:78:

Thread 1 (Thread 0x7fdfeb5eac80 (LWP 36740)):
#0  0x00007fdfe0a391f7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007fdfe0a3a8e8 in __GI_abort () at abort.c:90
#2  0x00007fdfe0a32266 in __assert_fail_base (fmt=0x7fdfe0b84e68 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x5631fc238ccb "offset == 0", file=file@entry=0x5631fc23698e "util/iov.c", line=line@entry=78, function=function@entry=0x5631fc236adc <__PRETTY_FUNCTION__.15220> "iov_memset") at assert.c:92
#3  0x00007fdfe0a32312 in __GI___assert_fail (assertion=assertion@entry=0x5631fc238ccb "offset == 0", file=file@entry=0x5631fc23698e "util/iov.c", line=line@entry=78, function=function@entry=0x5631fc236adc <__PRETTY_FUNCTION__.15220> "iov_memset") at assert.c:101
#4  0x00005631fc065287 in iov_memset (iov=<optimized out>, iov_cnt=<optimized out>, offset=<optimized out>, offset@entry=65536, fillc=fillc@entry=0, bytes=15515191315812405248) at util/iov.c:78
#5  0x00005631fc065a63 in qemu_iovec_memset (qiov=<optimized out>, offset=offset@entry=65536, fillc=fillc@entry=0, bytes=<optimized out>) at util/iov.c:410
#6  0x00005631fbfc5178 in qemu_laio_process_completion (laiocb=0x7fdd920df630) at block/linux-aio.c:88
#7  0x00005631fbfc523c in qemu_laio_process_completions (s=s@entry=0x7fdfc0297670) at block/linux-aio.c:222
#8  0x00005631fbfc5499 in qemu_laio_process_completions_and_submit (s=0x7fdfc0297670) at block/linux-aio.c:237
#9  0x00005631fbfc54ed in qemu_laio_poll_cb (opaque=<optimized out>) at block/linux-aio.c:272
#10 0x00005631fc05d85e in run_poll_handlers_once (ctx=ctx@entry=0x5631fde0e660) at util/aio-posix.c:497
#11 0x00005631fc05e2ca in aio_poll (blocking=false, ctx=0x5631fde0e660) at util/aio-posix.c:574
#12 0x00005631fc05e2ca in aio_poll (ctx=0x5631fde0e660, blocking=blocking@entry=false) at util/aio-posix.c:604
#13 0x00005631fbfcb8a3 in bdrv_do_drained_begin (ignore_parent=<optimized out>, recursive=<optimized out>, bs=<optimized out>) at block/io.c:273
#14 0x00005631fbfcb8a3 in bdrv_do_drained_begin (bs=0x5631fe8b6200, recursive=<optimized out>, parent=0x0, ignore_bds_parents=<optimized out>, poll=<optimized out>) at block/io.c:390
#15 0x00005631fbfbcd2e in blk_drain (blk=0x5631fe83ac80) at block/block-backend.c:1590
#16 0x00005631fbfbe138 in blk_remove_bs (blk=blk@entry=0x5631fe83ac80) at block/block-backend.c:774
#17 0x00005631fbfbe3d6 in blk_unref (blk=0x5631fe83ac80) at block/block-backend.c:401
#18 0x00005631fbfbe3d6 in blk_unref (blk=0x5631fe83ac80) at block/block-backend.c:449
#19 0x00005631fbfc9a69 in commit_complete (job=0x5631fe8b94b0, opaque=0x7fdfcc1bb080) at block/commit.c:92
#20 0x00005631fbf7d662 in job_defer_to_main_loop_bh (opaque=0x7fdfcc1b4560) at job.c:973
#21 0x00005631fc05ad41 in aio_bh_poll (bh=0x7fdfcc01ad90) at util/async.c:90
#22 0x00005631fc05ad41 in aio_bh_poll (ctx=ctx@entry=0x5631fddffdb0) at util/async.c:118
#23 0x00005631fc05e210 in aio_dispatch (ctx=0x5631fddffdb0) at util/aio-posix.c:436
#24 0x00005631fc05ac1e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:261
#25 0x00007fdfeaae44c9 in g_main_context_dispatch (context=0x5631fde00140) at gmain.c:3201
#26 0x00007fdfeaae44c9 in g_main_context_dispatch (context=context@entry=0x5631fde00140) at gmain.c:3854
#27 0x00005631fc05d503 in main_loop_wait () at util/main-loop.c:215
#28 0x00005631fc05d503 in main_loop_wait (timeout=<optimized out>) at util/main-loop.c:238
#29 0x00005631fc05d503 in main_loop_wait (nonblocking=nonblocking@entry=0) at util/main-loop.c:497
#30 0x00005631fbd81412 in main_loop () at vl.c:1866
#31 0x00005631fbc18ff3 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4647

- A closer examination shows that s->io_q.in_flight appears to have gone backwards:

(gdb) frame 7
#7  0x00005631fbfc523c in qemu_laio_process_completions (s=s@entry=0x7fdfc0297670) at block/linux-aio.c:222
222         qemu_laio_process_completion(laiocb);
(gdb) p s
$2 = (LinuxAioState *) 0x7fdfc0297670
(gdb) p *s
$3 = {aio_context = 0x5631fde0e660, ctx = 0x7fdfeb43b000, e = {rfd = 33, wfd = 33}, io_q = {plugged = 0, in_queue = 0, in_flight = 4294967280, blocked = false, pending = {sqh_first = 0x0, sqh_last = 0x7fdfc0297698}}, completion_bh = 0x7fdfc0280ef0, event_idx = 21, event_max = 241}
(gdb) p/x s->io_q.in_flight
$4 = 0xfffffff0

Signed-off-by: Sergio Lopez <slp@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
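The fix itself is small: take the AioContext lock before processing completions rather than only around the submission loop. A sketch (field names match the state dump above; the exact code may differ slightly):

    static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
    {
        aio_context_acquire(s->aio_context);  /* now also covers completions */
        qemu_laio_process_completions(s);

        if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
            ioq_submit(s);
        }

        aio_context_release(s->aio_context);
    }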
2018-09-25 | block/stream: refactor stream to use job callbacks | John Snow | 1 | -8/+15
Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20180906130225.5118-8-jsnow@redhat.com Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-09-25 | block/mirror: conservative mirror_exit refactor | John Snow | 1 | -11/+33
For purposes of minimum code movement, refactor the mirror_exit callback to use the post-finalization callbacks in a trivial way. Signed-off-by: John Snow <jsnow@redhat.com> Message-id: 20180906130225.5118-7-jsnow@redhat.com Reviewed-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> [mreitz: Added comment for the mirror_exit() function] Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-09-25 | block/mirror: don't install backing chain on abort | John Snow | 1 | -1/+1
In cases where we abort the block/mirror job, there's no point in installing the new backing chain before we finish aborting. Signed-off-by: John Snow <jsnow@redhat.com> Message-id: 20180906130225.5118-6-jsnow@redhat.com Reviewed-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-09-25 | block/commit: refactor commit to use job callbacks | John Snow | 1 | -41/+51
Use the component callbacks: prepare, abort, and clean. NB: prepare is only called when the job has not yet failed, and abort can be called after prepare. The possible sequences are: complete -> prepare -> abort -> clean, and complete -> abort -> clean. During the refactor, a potential problem with bdrv_drop_intermediate was identified; the patched behavior is no worse than the pre-patch behavior, so a FIXME is left for now, to be fixed in a future patch. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20180906130225.5118-5-jsnow@redhat.com Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
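A hedged sketch of how the commit job might wire these component callbacks into its driver structure (field layout assumed, not copied from the patch):

    static const BlockJobDriver commit_job_driver = {
        .job_driver = {
            .instance_size = sizeof(CommitBlockJob),
            .job_type      = JOB_TYPE_COMMIT,
            .run           = commit_run,
            .prepare       = commit_prepare,  /* only if the job hasn't failed */
            .abort         = commit_abort,    /* may run after .prepare */
            .clean         = commit_clean,    /* always runs last */
        },
    };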
2018-09-25 | block/stream: add block job creation flags | John Snow | 1 | -2/+3
Add support for taking and passing forward job creation flags. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-id: 20180906130225.5118-4-jsnow@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-09-25 | block/mirror: add block job creation flags | John Snow | 1 | -2/+3
Add support for taking and passing forward job creation flags. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-id: 20180906130225.5118-3-jsnow@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-09-25 | block/commit: add block job creation flags | John Snow | 1 | -2/+3
Add support for taking and passing forward job creation flags. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-id: 20180906130225.5118-2-jsnow@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-09-24 | curl: Make sslverify=off disable host as well as peer verification. | Richard W.M. Jones | 1 | -0/+2
The sslverify setting is supposed to turn off all TLS certificate checks in libcurl. However because of the way we use it, it only turns off peer certificate authenticity checks (CURLOPT_SSL_VERIFYPEER). This patch makes it also turn off the check that the server name in the certificate is the same as the server you're connecting to (CURLOPT_SSL_VERIFYHOST). We can use Google's server at 8.8.8.8 which happens to have a bad TLS certificate to demonstrate this: $ ./qemu-img create -q -f qcow2 -b 'json: { "file.sslverify": "off", "file.driver": "https", "file.url": "https://8.8.8.8/foo" }' /var/tmp/file.qcow2 qemu-img: /var/tmp/file.qcow2: CURL: Error opening file: SSL: no alternative certificate subject name matches target host name '8.8.8.8' Could not open backing image to determine size. With this patch applied, qemu-img connects to the server regardless of the bad certificate: $ ./qemu-img create -q -f qcow2 -b 'json: { "file.sslverify": "off", "file.driver": "https", "file.url": "https://8.8.8.8/foo" }' /var/tmp/file.qcow2 qemu-img: /var/tmp/file.qcow2: CURL: Error opening file: The requested URL returned error: 404 Not Found (The 404 error is expected because 8.8.8.8 is not actually serving a file called "/foo".) Of course the default (without sslverify=off) remains to always check the certificate: $ ./qemu-img create -q -f qcow2 -b 'json: { "file.driver": "https", "file.url": "https://8.8.8.8/foo" }' /var/tmp/file.qcow2 qemu-img: /var/tmp/file.qcow2: CURL: Error opening file: SSL: no alternative certificate subject name matches target host name '8.8.8.8' Could not open backing image to determine size. Further information about the two settings is available here: https://curl.haxx.se/libcurl/c/CURLOPT_SSL_VERIFYPEER.html https://curl.haxx.se/libcurl/c/CURLOPT_SSL_VERIFYHOST.html Signed-off-by: Richard W.M. Jones <rjones@redhat.com> Message-id: 20180914095622.19698-1-rjones@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
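In libcurl terms, the change boils down to also clearing CURLOPT_SSL_VERIFYHOST whenever sslverify is off; a simplified sketch of the option setup (not the exact driver code):

    if (!s->sslverify) {
        /* Already disabled: peer certificate authenticity check */
        curl_easy_setopt(state->curl, CURLOPT_SSL_VERIFYPEER, 0L);
        /* New: also accept certificates whose subject name does not
         * match the host we are connecting to */
        curl_easy_setopt(state->curl, CURLOPT_SSL_VERIFYHOST, 0L);
    }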
2018-09-24 | block/rbd: Attempt to parse legacy filenames | Jeff Cody | 1 | -2/+52
When we converted rbd to get rid of the older key/value-centric encoding format, we broke compatibility with image files with backing file strings encoded in the old format. This leaves a bit of an ugly conundrum, and a hacky solution. If the initial attempt to parse the "proper" options fails, it assumes that we may have an older key/value encoded filename. Fall back to attempting to parse the filename, and extract the required options from it. If that fails, pass along the original error message. We do not support mixed modern usage alongside legacy keyvalue pair usage. A deprecation warning has been added, although care should be taken when actually deprecating since the impact is not limited to commandline or qapi usage, but also opening existing images. Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Jeff Cody <jcody@redhat.com> Message-id: 15b332e5432ad069441f7275a46080f465d789a0.1536704901.git.jcody@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2018-09-24 | block/rbd: pull out qemu_rbd_convert_options | Jeff Cody | 1 | -12/+24
Code movement to pull the conversion from Qdict to BlockdevOptionsRbd into a helper function. Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Signed-off-by: Jeff Cody <jcody@redhat.com> Message-id: 5b49a980f2cde6610ab1df41bb0277d00b5db893.1536704901.git.jcody@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2018-09-24 | Merge remote-tracking branch 'remotes/xanclic/tags/pull-block-2018-08-31-v2' into staging | Peter Maydell | 5 | -121/+76
Block patches:
- (Block) job exit refactoring, part 1 (removing job_defer_to_main_loop())
- test-bdrv-drain leak fix
# gpg: Signature made Fri 31 Aug 2018 15:30:33 BST
# gpg: using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* remotes/xanclic/tags/pull-block-2018-08-31-v2:
  jobs: remove job_defer_to_main_loop
  jobs: remove ret argument to job_completed; privatize it
  block/backup: make function variables consistently named
  jobs: utilize job_exit shim
  block/mirror: utilize job_exit shim
  block/commit: utilize job_exit shim
  jobs: add exit shim
  jobs: canonize Error object
  jobs: change start callback to run callback
  tests: fix bdrv-drain leak
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-31 | block/backup: make function variables consistently named | John Snow | 1 | -31/+31
Rename opaque_job to job to be consistent with other job implementations. Rename 'job', the BackupBlockJob object, to 's' to also be consistent. Suggested-by: Eric Blake <eblake@redhat.com> Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20180830015734.19765-8-jsnow@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-08-31 | jobs: utilize job_exit shim | John Snow | 3 | -42/+10
Utilize the job_exit shim by not calling job_defer_to_main_loop, and where applicable, converting the deferred callback into the job_exit callback. This converts backup, stream, create, and the unit tests all at once. Most of these jobs do not see any changes to the order in which they clean up their resources, except the test-blockjob-txn test, which now puts down its bs before job_completed is called. This is safe for the same reason the reordering in the mirror job is safe, because job_completed no longer runs under two locks, making the unref safe even if it causes a flush. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20180830015734.19765-7-jsnow@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-08-31 | block/mirror: utilize job_exit shim | John Snow | 1 | -18/+11
Change the manual deferment to mirror_exit into the implicit callback to job_exit and the mirror_exit callback. This does change the order of some bdrv_unref calls and job_completed, but thanks to the new context in which we call .exit, it is now safe to defer the possible flushing of any nodes to the job_finalize_single cleanup stage. Signed-off-by: John Snow <jsnow@redhat.com> Message-id: 20180830015734.19765-6-jsnow@redhat.com Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-08-31 | block/commit: utilize job_exit shim | John Snow | 1 | -17/+5
Change the manual deferment to commit_complete into the implicit callback to job_exit, renaming commit_complete to commit_exit. This conversion does change the timing of when job_completed is called to after the bdrv_replace_node and bdrv_unref calls, which could have implications for bjob->blk which will now be put down after this cleanup. Kevin highlights that we did not take any permissions for that backend at job creation time, so it is safe to reorder these operations. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20180830015734.19765-5-jsnow@redhat.com Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-08-31 | jobs: canonize Error object | John Snow | 5 | -7/+6
Jobs presently use both an Error object in the case of the create job, and char strings in the case of generic errors elsewhere. Unify the two paths as just j->err, and remove the extra argument from job_completed. The integer error code for job_completed is kept for now, to be removed shortly in a separate patch. Signed-off-by: John Snow <jsnow@redhat.com> Message-id: 20180830015734.19765-3-jsnow@redhat.com [mreitz: Dropped a superfluous g_strdup()] Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-08-31 | jobs: change start callback to run callback | John Snow | 5 | -16/+23
Presently we codify the entry point for a job as the "start" callback, but a more apt name would be "run" to clarify the idea that when this function returns we consider the job to have "finished," except for any cleanup which occurs in separate callbacks later. As part of this clarification, change the signature to include an error object and a return code. The error ptr is not yet used, and the return code while captured, will be overwritten by actions in the job_completed function. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20180830015734.19765-2-jsnow@redhat.com Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
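The signature change described above, sketched (modifiers approximate; per the text, the Error pointer is unused for now and the return code is still overridden in job_completed):

    /* Old entry point in the job driver */
    void coroutine_fn (*start)(Job *job);

    /* New entry point: can return an error code and report an Error */
    int coroutine_fn (*run)(Job *job, Error **errp);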
2018-08-28 | qapi: Drop qapi_event_send_FOO()'s Error ** argument | Peter Xu | 4 | -10/+7
The generated qapi_event_send_FOO() take an Error ** argument. They can't actually fail, because all they do with the argument is passing it to functions that can't fail: the QObject output visitor, and the @qmp_emit callback, which is either monitor_qapi_event_queue() or event_test_emit(). Drop the argument, and pass &error_abort to the QObject output visitor and @qmp_emit instead. Suggested-by: Eric Blake <eblake@redhat.com> Suggested-by: Markus Armbruster <armbru@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20180815133747.25032-4-peterx@redhat.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> [Commit message rewritten, update to qapi-code-gen.txt corrected] Signed-off-by: Markus Armbruster <armbru@redhat.com>
2018-08-15 | Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging | Peter Maydell | 11 | -46/+44
Block layer patches:
- Remove deprecated -drive options for geometry/serial/addr
- luks: Allow shared writers if the parents allow them (share-rw=on)
- qemu-img: Fix error when trying to convert to encrypted target image
- mirror: Fail gracefully for source == target
- I/O throttling: Fix behaviour during drain (always ignore the limits)
- bdrv_reopen() related fixes for bs->options/explicit_options content
- Documentation improvements
# gpg: Signature made Wed 15 Aug 2018 12:11:43 BST
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (21 commits)
  qapi: block: Remove mentions of error types which were removed
  block: Simplify append_open_options()
  block: Update bs->options if bdrv_reopen() succeeds
  block: Simplify bdrv_reopen_abort()
  block: Remove children options from bs->{options,explicit_options}
  qdict: Make qdict_extract_subqdict() accept dst = NULL
  block: drop empty .bdrv_close handlers
  block: make .bdrv_close optional
  qemu-img: fix regression copying secrets during convert
  mirror: Fail gracefully for source == target
  qapi/block: Document restrictions for node names
  block: Remove dead deprecation warning code
  block: Remove deprecated -drive option serial
  block: Remove deprecated -drive option addr
  block: Remove deprecated -drive geometry options
  luks: Allow share-rw=on
  throttle-groups: Don't allow timers without throttled requests
  qemu-iotests: Update 093 to improve the draining test
  throttle-groups: Skip the round-robin if a member is being drained
  qemu-iotests: Test removing a throttle group member with a pending timer
  ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-15 | block: drop empty .bdrv_close handlers | Vladimir Sementsov-Ogievskiy | 6 | -32/+0
The .bdrv_close handler is optional after the previous commit, so there is no need to keep the empty functions any more. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-08-15 | block: make .bdrv_close optional | Vladimir Sementsov-Ogievskiy | 1 | -1/+3
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-08-15 | mirror: Fail gracefully for source == target | Kevin Wolf | 1 | -0/+5
blockdev-mirror with the same node for source and target segfaults today: A node is in its own backing chain, so mirror_start_job() decides that this is an active commit. When adding the intermediate nodes with block_job_add_bdrv(), it starts the iteration through the subchain with the backing file of source, though, so it never reaches target and instead runs into NULL at the base. While we could fix that by starting with source itself, there is no point in allowing mirroring a node into itself and I wouldn't be surprised if this caused more problems later. So just check for this scenario and error out. Cc: qemu-stable@nongnu.org Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
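The guard amounts to an early check before any job setup; sketched (exact placement and error text assumed):

    /* In mirror_start(): mirroring a node into itself makes no sense */
    if (bs == target) {
        error_setg(errp, "Can't mirror node into itself");
        return;
    }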
2018-08-15 | block: Remove deprecated -drive option serial | Kevin Wolf | 1 | -1/+0
This reinstates commit b0083267444a5e0f28391f6c2831a539f878d424, which was temporarily reverted for the 3.0 release so that libvirt gets some extra time to update their command lines. The -drive option serial was deprecated in QEMU 2.10. It's time to remove it. Tests need to be updated to set the serial number with -global instead of using the -drive option. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com>
2018-08-15 | luks: Allow share-rw=on | Fam Zheng | 1 | -1/+3
Format drivers such as qcow2 don't allow sharing the same image between two QEMU instances, in order to prevent image corruption caused by their metadata cache. The LUKS driver doesn't modify metadata except when creating the image, so it is safe to relax the permission. This makes the share-rw=on property work on virtual devices. Suggested-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>