path: root/migration/block.c
Commit history, newest first. Each entry: date, subject, [author; files changed, lines removed/added].

2023-09-21  block-migration: Ensure we don't crash during migration cleanup  [Fabiano Rosas; 1 file changed, -2/+9]
We can fail the blk_insert_bs() at init_blk_migration(), leaving the
BlkMigDevState without a dirty_bitmap and BlockDriverState. Account
for the possibly missing elements when doing cleanup.

Fix the following crashes:

  Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
  0x0000555555ec83ef in bdrv_release_dirty_bitmap (bitmap=0x0) at ../block/dirty-bitmap.c:359
  359         BlockDriverState *bs = bitmap->bs;
  #0  0x0000555555ec83ef in bdrv_release_dirty_bitmap (bitmap=0x0) at ../block/dirty-bitmap.c:359
  #1  0x0000555555bba331 in unset_dirty_tracking () at ../migration/block.c:371
  #2  0x0000555555bbad98 in block_migration_cleanup_bmds () at ../migration/block.c:681

  Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
  0x0000555555e971ff in bdrv_op_unblock (bs=0x0, op=BLOCK_OP_TYPE_BACKUP_SOURCE, reason=0x0) at ../block.c:7073
  7073        QLIST_FOREACH_SAFE(blocker, &bs->op_blockers[op], list, next) {
  #0  0x0000555555e971ff in bdrv_op_unblock (bs=0x0, op=BLOCK_OP_TYPE_BACKUP_SOURCE, reason=0x0) at ../block.c:7073
  #1  0x0000555555e9734a in bdrv_op_unblock_all (bs=0x0, reason=0x0) at ../block.c:7095
  #2  0x0000555555bbae13 in block_migration_cleanup_bmds () at ../migration/block.c:690

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Message-id: 20230731203338.27581-1-farosas@suse.de
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit f187609f27b261702a17f79d20bf252ee0d4f9cd)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

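The fix amounts to a defensive-cleanup pattern: only release what was
actually set up. Below is a minimal, self-contained sketch of that idea in
plain C; the struct and its fields are simplified stand-ins for
BlkMigDevState and its bitmap, not the actual QEMU code.

    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified stand-in for a dirty bitmap object. */
    struct bitmap { int id; };

    /* Simplified stand-in for BlkMigDevState: the pointer may still be
     * NULL if setup failed part-way through. */
    struct dev_state {
        struct bitmap *dirty_bitmap;
    };

    /* Dereferences its argument, so passing NULL would crash -- this
     * mirrors the bdrv_release_dirty_bitmap(bitmap=0x0) backtrace above. */
    static void release_bitmap(struct bitmap *b)
    {
        printf("releasing bitmap %d\n", b->id);
        free(b);
    }

    static void cleanup(struct dev_state *d)
    {
        if (d->dirty_bitmap) {          /* the guard this kind of fix adds */
            release_bitmap(d->dirty_bitmap);
            d->dirty_bitmap = NULL;
        }
    }

    int main(void)
    {
        struct dev_state d = { .dirty_bitmap = NULL };  /* setup "failed" */
        cleanup(&d);                    /* safe: the guard skips the release */
        return 0;
    }
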
2023-05-18  migration: Move rate_limit_max and rate_limit_used to migration_stats  [Juan Quintela; 1 file changed, -2/+3]
This way we can make them atomic and use these functions from any
place. I also moved all functions that use rate_limit to
migration-stats. The functions got renamed; they are not qemu_file
anymore:

  qemu_file_rate_limit       -> migration_rate_exceeded
  qemu_file_set_rate_limit   -> migration_rate_set
  qemu_file_get_rate_limit   -> migration_rate_get
  qemu_file_reset_rate_limit -> migration_rate_reset
  qemu_file_acct_rate_limit  -> migration_rate_account

Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230515195709.63843-6-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-05-15  qemu-file: Remove total from qemu_file_total_transferred_*()  [Juan Quintela; 1 file changed, -2/+2]
The function name is already quite long.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20230508130909.65420-7-quintela@redhat.com>

2023-05-05  qemu-file: Make total_transferred an uint64_t  [Juan Quintela; 1 file changed, -3/+2]
Change all the functions that use it. It was already passed as
uint64_t.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230504113841.23130-8-quintela@redhat.com>

2023-05-05  migration: qemu_file_total_transferred() function is monotonic  [Juan Quintela; 1 file changed, -7/+1]
So delta_bytes can only be greater than or equal to zero, never
negative.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230504113841.23130-3-quintela@redhat.com>

2023-04-24  migration: Move migrate_use_block_incremental() to option.c  [Juan Quintela; 1 file changed, -1/+1]
To be consistent with every other parameter, rename to
migrate_block_incremental().

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

2023-04-24  migration: Move migrate_use_block() to options.c  [Juan Quintela; 1 file changed, -1/+1]
Once we are there, we rename the function to migrate_block() to be
consistent with all other capabilities.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

2023-04-24  migration: Create options.c  [Juan Quintela; 1 file changed, -0/+1]
We move all the capabilities helpers there from migration.c.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
Following David's advice:
- I looked through the history; capabilities are newer than 2012, so
  we can remove that bit of the header.
- This part is posterior to Anthony. The original author is Orit. Once
  there, I put myself. Peter Xu also did quite a bit of work here.
  Does anyone else want/need to be there? I didn't search too hard
  because nobody asked before to be added. What do you think?

2023-04-11  migration/block: replace uses of blk_nb_sectors that do not check result  [Paolo Bonzini; 1 file changed, -3/+2]
Uses of blk_nb_sectors must check whether the result is negative.
Otherwise, underflow can happen. Fortunately, alloc_aio_bitmap() and
bmds_aio_inflight() both have an alternative way to retrieve the
number of sectors in the file.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20230407153303.391121-6-pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

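The pattern being enforced is worth spelling out: a size query that can
return a negative errno must be checked before the value is used in
unsigned arithmetic, or it silently underflows to a huge number. A
self-contained sketch follows; get_nb_sectors() is a hypothetical stand-in
for blk_nb_sectors(), not the QEMU API.

    #include <stdio.h>
    #include <inttypes.h>
    #include <errno.h>

    /* Hypothetical stand-in: returns a sector count, or a negative errno
     * on failure. */
    static int64_t get_nb_sectors(int fail)
    {
        return fail ? -EIO : 2048;
    }

    int main(void)
    {
        int64_t nb = get_nb_sectors(1);
        if (nb < 0) {                       /* the check the commit adds */
            fprintf(stderr, "size query failed: %" PRId64 "\n", nb);
            return 1;
        }
        /* Without the check, a negative errno converted to uint64_t would
         * become an enormous sector count here. */
        uint64_t bytes = (uint64_t)nb * 512;
        printf("%" PRIu64 " bytes\n", bytes);
        return 0;
    }
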
2023-02-15  migration: Rename res_{postcopy,precopy}_only  [Juan Quintela; 1 file changed, -4/+3]
Once res_compatible is removed, they don't make sense anymore. We
remove the _only prefix, and to make things clearer we rename them to
must_precopy and can_postcopy.

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-15  migration: Remove unused res_compatible  [Juan Quintela; 1 file changed, -1/+0]
Nothing assigns to it after the previous commit.

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-15  migration/block: Convert remaining DPRINTF() debug macro to trace events  [Philippe Mathieu-Daudé; 1 file changed, -11/+1]
Finish the conversion from commit fe80c0241d ("migration: using
trace_ to replace DPRINTF").

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration: Remove unused threshold_size parameter  [Juan Quintela; 1 file changed, -1/+1]
Until the previous commit, save_live_pending() was used for ram. Now,
with the split into state_pending_estimate() and
state_pending_exact(), it is not needed anymore, so remove it.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2023-02-06  migration: Split save_live_pending() into state_pending_*  [Juan Quintela; 1 file changed, -6/+7]
We split the function in two:

- state_pending_estimate: we estimate the remaining state size without
  stopping the machine.
- state_pending_exact: we calculate the exact amount of remaining
  state.

The only "device" that implements different functions for _estimate()
and _exact() is ram.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

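To illustrate the shape of that split, here is a generic sketch of a cheap
estimate callback paired with an expensive exact callback behind one
interface. The struct and function names are invented for the example; this
is not the real SaveVMHandlers definition.

    #include <stdio.h>
    #include <inttypes.h>

    /* Hypothetical handler pair mirroring the split: the estimate is cheap
     * and may be stale; the exact variant does the expensive sync first. */
    struct pending_ops {
        uint64_t (*estimate)(void);
        uint64_t (*exact)(void);
    };

    static uint64_t cached_dirty = 4096;    /* stand-in for a cached counter */

    static uint64_t my_estimate(void)
    {
        return cached_dirty;                /* no sync, just report the cache */
    }

    static uint64_t my_exact(void)
    {
        cached_dirty += 512;                /* pretend a bitmap sync found more */
        return cached_dirty;
    }

    int main(void)
    {
        struct pending_ops ops = { my_estimate, my_exact };
        printf("estimate: %" PRIu64 " bytes\n", ops.estimate());
        printf("exact:    %" PRIu64 " bytes\n", ops.exact());
        return 0;
    }
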
2023-02-06  migration: No save_live_pending() method uses the QEMUFile parameter  [Juan Quintela; 1 file changed, -1/+1]
So remove it everywhere.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2023-01-20  include/block: Untangle inclusion loops  [Markus Armbruster; 1 file changed, -0/+1]
We have two inclusion loops:

  block/block.h -> block/block-global-state.h -> block/block-common.h
    -> block/blockjob.h -> block/block.h
  block/block.h -> block/block-io.h -> block/block-common.h
    -> block/blockjob.h -> block/block.h

I believe these go back to Emanuele's reorganization of the block API,
merged a few months ago in commit d7e2fe4aac8.

Fortunately, breaking them is merely a matter of deleting unnecessary
includes from headers, and adding them back in places where they are
now missing.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221221133551.3967339-2-armbru@redhat.com>

2022-11-21  migration: Block migration comment or code is wrong  [Juan Quintela; 1 file changed, -2/+2]
And it appears that what is wrong is the code. During the bulk stage
we need to make sure that some block is dirty, but there are no games
with max_size at all.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

2022-08-02  migration: Define BLK_MIG_BLOCK_SIZE as unsigned long long  [Peter Maydell; 1 file changed, -1/+1]
When we use BLK_MIG_BLOCK_SIZE in expressions like
block_mig_state.submitted * BLK_MIG_BLOCK_SIZE, this multiplication is
done as 32 bits, because both operands are 32 bits. Coverity complains
about possible overflows because we then accumulate that into a 64 bit
variable.

Define BLK_MIG_BLOCK_SIZE as unsigned long long using the ULL suffix.
The only two current uses of it with this problem are both in
block_save_pending(), so we could just cast to uint64_t there, but
using the ULL suffix is simpler and ensures that we don't accidentally
introduce new variants of the same issue in future.

Resolves: Coverity CID 1487136, 1487175
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220721115207.729615-3-peter.maydell@linaro.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

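The overflow class is easy to reproduce outside QEMU. In the sketch below
(constants are illustrative stand-ins, not the real BLK_MIG_BLOCK_SIZE
value), the 32-bit product wraps before it is widened to 64 bits, while the
ULL-suffixed constant forces the whole multiplication into 64 bits.

    #include <stdio.h>
    #include <inttypes.h>

    /* Illustrative values only; the real macro lives in migration/block.c. */
    #define BLK_SIZE_32BIT (1U << 20)    /* product computed in 32 bits       */
    #define BLK_SIZE_ULL   (1ULL << 20)  /* ULL suffix forces 64-bit multiply */

    int main(void)
    {
        unsigned submitted = 8192;       /* enough in-flight blocks to wrap */

        uint64_t wrapped = submitted * BLK_SIZE_32BIT; /* wraps, then widens */
        uint64_t correct = submitted * BLK_SIZE_ULL;   /* full 64-bit result */

        printf("32-bit product: %" PRIu64 "\n", wrapped);
        printf("64-bit product: %" PRIu64 "\n", correct);
        return 0;
    }
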
2022-07-12  block: Change blk_{pread,pwrite}() param order  [Alberto Faria; 1 file changed, -3/+3]
Swap 'buf' and 'bytes' around for consistency with
blk_co_{pread,pwrite}(), and in preparation to implement these
functions using generated_co_wrapper.

Callers were updated using this Coccinelle script:

  @@ expression blk, offset, buf, bytes, flags; @@
  - blk_pread(blk, offset, buf, bytes, flags)
  + blk_pread(blk, offset, bytes, buf, flags)

  @@ expression blk, offset, buf, bytes, flags; @@
  - blk_pwrite(blk, offset, buf, bytes, flags)
  + blk_pwrite(blk, offset, bytes, buf, flags)

It had no effect on hw/block/nand.c, presumably due to the #if, so
that file was updated manually.

Overly-long lines were then fixed by hand.

Signed-off-by: Alberto Faria <afaria@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220705161527.1054072-4-afaria@redhat.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>

2022-07-12  block: Add a 'flags' param to blk_pread()  [Alberto Faria; 1 file changed, -2/+2]
For consistency with other I/O functions, and in preparation to
implement it using generated_co_wrapper.

Callers were updated using this Coccinelle script:

  @@ expression blk, offset, buf, bytes; @@
  - blk_pread(blk, offset, buf, bytes)
  + blk_pread(blk, offset, buf, bytes, 0)

It had no effect on hw/block/nand.c, presumably due to the #if, so
that file was updated manually.

Overly-long lines were then fixed by hand.

Signed-off-by: Alberto Faria <afaria@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220705161527.1054072-3-afaria@redhat.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>

2022-06-22  migration: rename qemu_ftell to qemu_file_total_transferred  [Daniel P. Berrangé; 1 file changed, -5/+5]
The name 'ftell' gives the misleading impression that the QEMUFile
objects are seekable. This is not the case, as in general we just have
an opaque stream. The users of this method are only interested in the
total bytes processed. This switches to a new name that reflects the
intended usage.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  dgilbert: Wrapped long line

2022-03-04  block: rename bdrv_invalidate_cache_all, blk_invalidate_cache and test_sync_op_invalidate_cache  [Emanuele Giuseppe Esposito; 1 file changed, -1/+1]
Following the bdrv_activate renaming, also change the names of the
respective callers:

  bdrv_invalidate_cache_all     -> bdrv_activate_all
  blk_invalidate_cache          -> blk_activate
  test_sync_op_invalidate_cache -> test_sync_op_activate

No functional change intended.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220209105452.1694545-5-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2020-10-26  migration: using trace_ to replace DPRINTF  [Bihong Yu; 1 file changed, -18/+18]
Signed-off-by: Bihong Yu <yubihong@huawei.com>
Message-Id: <1603179176-5360-1-git-send-email-yubihong@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2020-10-26  migration: Don't use '#' flag of printf format  [Bihong Yu; 1 file changed, -1/+1]
Signed-off-by: Bihong Yu <yubihong@huawei.com>
Reviewed-by: Chuan Zheng <zhengchuan@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1603163448-27122-3-git-send-email-yubihong@huawei.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2020-10-26  migration: Do not use C99 // comments  [Bihong Yu; 1 file changed, -1/+1]
Signed-off-by: Bihong Yu <yubihong@huawei.com>
Reviewed-by: Chuan Zheng <zhengchuan@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1603163448-27122-2-git-send-email-yubihong@huawei.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2020-02-28  migration/block: rename BLOCK_SIZE macro  [Stefan Hajnoczi; 1 file changed, -19/+20]
Both <linux/fs.h> and <sys/mount.h> define BLOCK_SIZE macros. Avoid
using that name in migration/block.c.

I noticed this when including <liburing.h> (Linux io_uring) from
"block/aio.h" and compilation failed. Although patches adding that
include haven't been sent yet, it makes sense to rename the macro now
in case someone else stumbles on it in the meantime.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2019-10-17  block/dirty-bitmap: add bs link  [Vladimir Sementsov-Ogievskiy; 1 file changed, -2/+2]
Add bs field to BdrvDirtyBitmap structure. Drop BlockDriverState
parameter from bitmap APIs where possible.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20190916141911.5255-3-vsementsov@virtuozzo.com
[Rebased on top of block-copy. --js]
Signed-off-by: John Snow <jsnow@redhat.com>

2019-09-16  block: Remove unused masks  [Nir Soffer; 1 file changed, -1/+1]
Replace the confusing usage:

  ~BDRV_SECTOR_MASK

with the clearer:

  (BDRV_SECTOR_SIZE - 1)

Remove BDRV_SECTOR_MASK and the unused BDRV_BLOCK_OFFSET_MASK, which
was its last user.

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Message-id: 20190827185913.27427-3-nsoffer@redhat.com
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>

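The replacement expression is just the usual power-of-two remainder mask. A
small self-contained example follows; the 512-byte sector size mirrors
QEMU's BDRV_SECTOR_SIZE but is defined locally so the snippet compiles on
its own.

    #include <stdio.h>
    #include <inttypes.h>

    #define BDRV_SECTOR_SIZE 512ULL      /* local copy for illustration */

    int main(void)
    {
        int64_t offset = 1000003;        /* arbitrary byte offset */

        /* Sub-sector remainder, written the way the commit prefers: */
        int64_t rem = offset & (BDRV_SECTOR_SIZE - 1);

        printf("offset %" PRId64 " is %" PRId64 " bytes into its sector\n",
               offset, rem);
        return 0;
    }
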
2019-09-12  migration: register_savevm_live doesn't need dev  [Dr. David Alan Gilbert; 1 file changed, -1/+1]
Commit 78dd48df3 removed the last caller of register_savevm_live for
an instantiable device (rather than a single system wide device); so
trim out the parameter.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190822115433.12070-1-dgilbert@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2019-08-16  block/dirty-bitmap: add bdrv_dirty_bitmap_get  [John Snow; 1 file changed, -3/+2]
Add a public interface for get. While we're at it, rename
"bdrv_get_dirty_bitmap_locked" to "bdrv_dirty_bitmap_get_locked".

(There are more functions to rename to the bdrv_dirty_bitmap_VERB
form, but they will wait until the conclusion of this series.)

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-11-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>

2019-08-16  Include qemu/main-loop.h less  [Markus Armbruster; 1 file changed, -0/+1]
In my "build everything" tree, changing qemu/main-loop.h triggers a
recompile of some 5600 out of 6600 objects (not counting tests and
objects that don't depend on qemu/osdep.h). It includes block/aio.h,
which in turn includes qemu/event_notifier.h, qemu/notify.h,
qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h,
qemu/thread.h, qemu/timer.h, and a few more.

Include qemu/main-loop.h only where it's needed. Touching it now
recompiles only some 1700 objects. For block/aio.h and
qemu/event_notifier.h, these numbers drop from 5600 to 2800. For the
others, they shrink only slightly.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-21-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>

2019-06-04  block: Add BlockBackend.ctx  [Kevin Wolf; 1 file changed, -1/+2]
This adds a new parameter to blk_new() which requires its callers to
declare from which AioContext this BlockBackend is going to be used
(or the locks of which AioContext need to be taken anyway).

The given context is only stored and kept up to date when changing
AioContexts. Actually applying the stored AioContext to the root node
is saved for another commit.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2019-02-22  migration/block: use qemu_iovec_init_buf  [Vladimir Sementsov-Ogievskiy; 1 file changed, -7/+3]
Use new qemu_iovec_init_buf() instead of
qemu_iovec_init_external( ... , 1), which simplifies the code.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190218140926.333779-14-vsementsov@virtuozzo.com
Message-Id: <20190218140926.333779-14-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

2019-01-11  qemu/queue.h: leave head structs anonymous unless necessary  [Paolo Bonzini; 1 file changed, -2/+2]
Most list head structs need not be given a name. In most cases the
name is given just in case one is going to use QTAILQ_LAST,
QTAILQ_PREV or reverse iteration, but this does not apply to lists of
other kinds, and even for QTAILQ in practice this is only rarely
needed. In addition, we will soon reimplement those macros completely
so that they do not need a name for the head struct. So clean up
everything, not giving a name except in the rare case where it is
necessary.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2018-03-23  migration/block: compare only read blocks against the rate limiter  [Peter Lieven; 1 file changed, -2/+1]
Only read_done blocks are queued to be flushed to the migration
stream; submitted blocks are still in flight.

Signed-off-by: Peter Lieven <pl@kamp.de>
Message-Id: <1520507908-16743-6-git-send-email-pl@kamp.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2018-03-23  migration/block: limit the number of parallel I/O requests  [Peter Lieven; 1 file changed, -0/+2]
The current implementation submits up to 512 I/O requests in parallel,
which is much too high, especially for a background task. This patch
adds a maximum limit of 16 I/O requests that can be submitted in
parallel, to avoid monopolizing the I/O device.

Signed-off-by: Peter Lieven <pl@kamp.de>
Message-Id: <1520507908-16743-5-git-send-email-pl@kamp.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

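A toy illustration of such a cap, with made-up names and a purely
synchronous "drain" step; the real code waits for AIO completions rather
than completing requests inline.

    #include <stdio.h>

    #define MAX_PARALLEL 16              /* the kind of limit the patch adds */

    static int inflight;

    static void submit_request(int i)
    {
        inflight++;
        printf("submitted request %d (inflight=%d)\n", i, inflight);
    }

    static void wait_for_one_completion(void)
    {
        inflight--;                      /* stand-in for an AIO completion */
    }

    int main(void)
    {
        for (int i = 0; i < 64; i++) {
            while (inflight >= MAX_PARALLEL) {
                wait_for_one_completion();
            }
            submit_request(i);
        }
        return 0;
    }
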
2018-03-13  migration: introduce postcopy-only pending  [Vladimir Sementsov-Ogievskiy; 1 file changed, -3/+4]
There are savevm states (dirty-bitmap) which can migrate only in the
postcopy stage. The corresponding pending is introduced here.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 20180313180320.339796-6-vsementsov@virtuozzo.com

2018-03-09  migration/block: rename MAX_INFLIGHT_IO to MAX_IO_BUFFERS  [Peter Lieven; 1 file changed, -4/+3]
This actually limits (as the original commit message suggests) the
number of I/O buffers that can be allocated, not the number of
parallel (in-flight) I/O requests.

Signed-off-by: Peter Lieven <pl@kamp.de>
Message-Id: <1520507908-16743-4-git-send-email-pl@kamp.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2018-03-09  migration/block: reset dirty bitmap before read in bulk phase  [Peter Lieven; 1 file changed, -3/+2]
Reset the dirty bitmap before reading to make sure we don't miss any
new data.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Message-Id: <1520507908-16743-3-git-send-email-pl@kamp.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2018-01-22  Replace all occurances of __FUNCTION__ with __func__  [Alistair Francis; 1 file changed, -2/+2]
Replace all occurrences of __FUNCTION__, except for the check in
checkpatch, with the non-GCC-specific __func__. One line in hcd-musb.c
was manually tweaked to pass checkpatch.

Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
[THH: Removed hunks related to pxa2xx_mmci.c (fixed already)]
Signed-off-by: Thomas Huth <thuth@redhat.com>

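For reference, __func__ is standard C99 and behaves the same way as the
GCC-specific spelling; a minimal example:

    #include <stdio.h>

    static void report(void)
    {
        /* __func__ is standard C99; __FUNCTION__ is a GCC-specific alias. */
        printf("in %s\n", __func__);
    }

    int main(void)
    {
        report();
        return 0;
    }
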
2017-12-18  Remove empty statements  [Ladi Prosek; 1 file changed, -1/+1]
Thanks to Laszlo Ersek for spotting the double semicolon in
target/i386/kvm.c. I have trivially grepped the tree for ';;' in C
files.

Suggested-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2017-11-17  block: Make bdrv_next() keep strong references  [Max Reitz; 1 file changed, -0/+1]
On one hand, it is a good idea for bdrv_next() to return a strong
reference because ideally nearly every pointer should be refcounted.
This fixes intermittent failure of iotest 194.

On the other, it is absolutely necessary for bdrv_next() itself to
keep a strong reference to both the BB (in its first phase) and the
BDS (at least in the second phase) because when called the next time,
it will dereference those objects to get a link to the next one.
Therefore, it needs these objects to stay around until then. Just
storing the pointer to the next in the iterator is not really viable
because that pointer might become invalid as well.

Both arguments taken together means we should probably just invoke
bdrv_ref() and blk_ref() in bdrv_next(). This means we have to assert
that bdrv_next() is always called from the main loop, but that was
probably necessary already before this patch and judging from the
callers, it also looks to actually be the case.

Keeping these strong references means however that callers need to
give them up if they decide to abort the iteration early. They can do
so through the new bdrv_next_cleanup() function.

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20171110172545.32609-1-mreitz@redhat.com
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>

2017-10-06  dirty-bitmap: Change bdrv_[re]set_dirty_bitmap() to use bytes  [Eric Blake; 1 file changed, -2/+5]
Some of the callers were already scaling bytes to sectors; others can
be easily converted to pass byte offsets, all in our shift towards a
consistent byte interface everywhere. Making the change will also make
it easier to write the hold-out callers to use byte rather than
sectors for their iterations; it also makes it easier for a future
dirty-bitmap patch to offload scaling over to the internal hbitmap.
Although all callers happen to pass sector-aligned values, make the
internal scaling robust to any sub-sector requests.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2017-10-06  dirty-bitmap: Change bdrv_get_dirty_locked() to take bytes  [Eric Blake; 1 file changed, -1/+2]
Half the callers were already scaling bytes to sectors; the other half
can eventually be simplified to use byte iteration. Both callers were
already using the result as a bool, so make that explicit. Making the
change also makes it easier for a future dirty-bitmap patch to offload
scaling over to the internal hbitmap.

Remember, asking whether a byte is dirty is effectively asking whether
the entire granularity containing the byte is dirty, since we only
track dirtiness by granularity.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2017-10-06  dirty-bitmap: Change bdrv_get_dirty_count() to report bytes  [Eric Blake; 1 file changed, -1/+1]
Thanks to recent cleanups, all callers were scaling a return value of
sectors into bytes; do the scaling internally instead.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2017-09-27  migration: disable auto-converge during bulk block migration  [Peter Lieven; 1 file changed, -0/+5]
auto-converge and block migration currently do not play well together.
During block migration the auto-converge logic detects that ram
migration makes no progress and thus throttles down the vm until it
nearly stalls completely. Avoid this by disabling the throttling logic
during the bulk phase of the block migration.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Message-Id: <1506421996-12513-1-git-send-email-pl@kamp.de>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2017-07-10  migration: Rename cleanup() to save_cleanup()  [Juan Quintela; 1 file changed, -1/+1]
We need a cleanup for loads, so we rename here to be consistent.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
--
Rename htab_cleanup to htab_save_cleanup as per Dave's suggestion.
Message-Id: <20170628095228.4661-3-quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2017-07-10  migration: Rename save_live_setup() to save_setup()  [Juan Quintela; 1 file changed, -1/+1]
We are going to use it now for more than saving live regions. While
there, rename qemu_savevm_state_begin() to qemu_savevm_state_setup().

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170628095228.4661-2-quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2017-07-10  block: Make bdrv_is_allocated() byte-based  [Eric Blake; 1 file changed, -6/+10]
We are gradually moving away from sector-based interfaces, towards
byte-based. In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible that
byte-based values will let us be more precise about allocation at the
end of an unaligned file that can do byte-based access.

Changing the signature of the function to use int64_t *pnum ensures
that the compiler enforces that all callers are updated. For now, the
io.c layer still assert()s that all callers are sector-aligned on
input and that *pnum is sector-aligned on return to the caller, but
that can be relaxed when a later patch implements byte-based block
status. Therefore, this code adds usages like
DIV_ROUND_UP(,BDRV_SECTOR_SIZE) to callers that still want aligned
values, where the call might reasonably give non-aligned results in
the future; on the other hand, no rounding is needed for callers that
should just continue to work with byte alignment.

For the most part this patch is just the addition of scaling at the
callers followed by inverse scaling at bdrv_is_allocated(). But some
code, particularly bdrv_commit(), gets a lot simpler because it no
longer has to mess with sectors; also, it is now possible to pass NULL
if the caller does not care how much of the image is allocated beyond
the initial offset. Leave comments where we can further simplify once
a later patch eliminates the need for sector-aligned requests through
bdrv_is_allocated().

For ease of review, bdrv_is_allocated_above() will be tackled
separately.

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

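The DIV_ROUND_UP(,BDRV_SECTOR_SIZE) idiom mentioned above rounds a byte
count up to whole sectors. A self-contained example; both macros are
written out locally here so the snippet compiles on its own, while QEMU
provides its own definitions.

    #include <stdio.h>
    #include <inttypes.h>

    /* Local copies for illustration only. */
    #define BDRV_SECTOR_SIZE 512ULL
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        int64_t bytes = 1300;            /* an unaligned byte count */

        /* Callers that still need sector granularity round up ... */
        int64_t sectors = DIV_ROUND_UP(bytes, BDRV_SECTOR_SIZE);
        /* ... while byte-based callers simply keep the byte value. */

        printf("%" PRId64 " bytes -> %" PRId64 " sector(s)\n",
               bytes, sectors);
        return 0;
    }
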
2017-06-16  block: protect modification of dirty bitmaps with a mutex  [Paolo Bonzini; 1 file changed, -4/+6]
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20170605123908.18777-17-pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>