path: root/migration
Age | Commit message | Author | Files | Lines
2023-01-20include/block: Untangle inclusion loopsMarkus Armbruster3-0/+3
We have two inclusion loops:

    block/block.h -> block/block-global-state.h -> block/block-common.h
        -> block/blockjob.h -> block/block.h
    block/block.h -> block/block-io.h -> block/block-common.h
        -> block/blockjob.h -> block/block.h

I believe these go back to Emanuele's reorganization of the block API, merged a few months ago in commit d7e2fe4aac8. Fortunately, breaking them is merely a matter of deleting unnecessary includes from headers, and adding them back in places where they are now missing. Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20221221133551.3967339-2-armbru@redhat.com>
2022-12-15Merge tag 'next-8.0-pull-request' of https://gitlab.com/juan.quintela/qemu ↵Peter Maydell8-448/+419
into staging

Migration patches for 8.0

Hi

These are the patches that I had to drop from the last PULL request because they weren't fixes:
- AVX2 is dropped, Intel posted a fix, I have to redo it
- Fix for out of order channels is out; Daniel nacked it and I need to redo it

# gpg: Signature made Thu 15 Dec 2022 09:38:29 GMT
# gpg: using RSA key 1899FF8EDEBF58CCEE034B82F487EF185872D723
# gpg: Good signature from "Juan Quintela <quintela@redhat.com>" [full]
# gpg: aka "Juan Quintela <quintela@trasno.org>" [full]
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723

* tag 'next-8.0-pull-request' of https://gitlab.com/juan.quintela/qemu:
  migration: Drop rs->f
  migration: Remove old preempt code around state maintainance
  migration: Send requested page directly in rp-return thread
  migration: Move last_sent_block into PageSearchStatus
  migration: Make PageSearchStatus part of RAMState
  migration: Add pss_init()
  migration: Introduce pss_channel
  migration: Teach PSS about host page
  migration: Use atomic ops properly for page accountings
  migration: Yield bitmap_mutex properly when sending/sleeping
  migration: Remove RAMState.f references in compression code
  migration: Trivial cleanup save_page_header() on same block check
  migration: Cleanup xbzrle zero page cache update logic
  migration: Add postcopy_preempt_active()
  migration: Take bitmap mutex when completing ram migration
  migration: Export ram_release_page()
  migration: Export ram_transferred_ram()
  multifd: Create page_count fields into both MultiFD{Recv,Send}Params
  multifd: Create page_size fields into both MultiFD{Recv,Send}Params

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-12-15Merge tag 'pull-misc-2022-12-14' of https://repo.or.cz/qemu/armbru into stagingPeter Maydell2-13/+3
Miscellaneous patches for 2022-12-14

# gpg: Signature made Wed 14 Dec 2022 15:23:02 GMT
# gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg: issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

* tag 'pull-misc-2022-12-14' of https://repo.or.cz/qemu/armbru:
  ppc4xx_sdram: Simplify sdram_ddr_size() to return
  block/vmdk: Simplify vmdk_co_create() to return directly
  cleanup: Tweak and re-run return_directly.cocci
  io: Tidy up fat-fingered parameter name
  qapi: Use returned bool to check for failure (again)
  sockets: Use ERRP_GUARD() where obviously appropriate
  qemu-config: Use ERRP_GUARD() where obviously appropriate
  qemu-config: Make config_parse_qdict() return bool
  monitor: Use ERRP_GUARD() in monitor_init()
  monitor: Simplify monitor_fd_param()'s error handling
  error: Move ERRP_GUARD() to the beginning of the function
  error: Drop a few superfluous ERRP_GUARD()
  error: Drop some obviously superfluous error_propagate()
  Drop more useless casts from void * to pointer

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-12-15migration: Drop rs->fPeter Xu1-8/+4
Now with rs->pss we can already cache channels in pss->pss_channel. That pss_channel contains more information than rs->f because it's per-channel. So rs->f can be replaced by rs->pss[RAM_CHANNEL_PRECOPY].pss_channel, while rs->f itself is a bit vague now. Note that vanilla postcopy still sends pages via pss[RAM_CHANNEL_PRECOPY]; that's slightly confusing, but it reflects reality. Then, after the replacement, we can safely drop rs->f. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
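A minimal sketch of the replacement (the call site is illustrative, not taken from the patch):

    /* before: the ambiguous per-RAMState file handle */
    qemu_put_be64(rs->f, offset);
    /* after: name the precopy channel explicitly via its PSS */
    qemu_put_be64(rs->pss[RAM_CHANNEL_PRECOPY].pss_channel, offset);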
2022-12-15migration: Remove old preempt code around state maintainancePeter Xu3-297/+3
With the new code to send pages in the rp-return thread, there's little point in keeping lots of the old code that maintained the preempt state in the migration thread, because the new way should always be faster. Since we'll always send pages in the rp-return thread anyway, we don't need that logic to maintain preempt state anymore: we now serialize things using the mutex directly instead of using those fields. It's unfortunate to have had that code for only a short period, but it was still a necessary intermediate step before we noticed the next bottleneck on the migration thread. The best we can do now is drop the unnecessary code, as the new code is stable, to reduce the burden. It's actually a good thing because the new "sending page in rp-return thread" model is (IMHO) even cleaner and performs better. Remove the old code that was responsible for maintaining preempt states, and at the same time remove the x-postcopy-preempt-break-huge parameter, because with concurrent sender threads we don't really need to break-huge anymore. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-12-15migration: Send requested page directly in rp-return threadPeter Xu2-16/+131
With all the facilities ready, send the requested page directly in the rp-return thread rather than queuing it in the request queue, if and only if postcopy preempt is enabled. This is possible because we use a separate channel for sending urgent pages. The only shared data is the bitmap, and it's protected by the bitmap_mutex. Note that since we're moving the ownership of the urgent channel from the migration thread to the rp thread, the rp thread is now responsible for managing the qemufile, e.g. properly closing it when a migration pause happens. For this, let migration_release_from_dst_file() cover shutdown of the urgent channel too, renaming it to migration_release_dst_files() to better show what it does. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-12-15migration: Move last_sent_block into PageSearchStatusPeter Xu1-30/+41
Since we use PageSearchStatus to represent a channel, it makes perfect sense to keep last_sent_block (aka, leverage RAM_SAVE_FLAG_CONTINUE) to be per-channel rather than global because each channel can be sending different pages on ramblocks. Hence move it from RAMState into PageSearchStatus. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-12-15migration: Make PageSearchStatus part of RAMStatePeter Xu1-51/+61
We used to allocate the PSS structure on the stack for precopy when sending pages. Make it static, so as to describe per-channel ram migration status. Here we declare RAM_CHANNEL_MAX instances, preparing for postcopy to use it, even though this patch does not yet start using the 2nd instance. This should not have any functional change per se, but it already starts to export PSS information via the RAMState, so that e.g. one PSS channel can start to reference the other PSS channel. Always protect PSS access using the same RAMState.bitmap_mutex. We already do so, so no code change is needed, just some comment updates. Maybe we should consider renaming bitmap_mutex some day, as it's going to be a more commonly used and bigger mutex for ram states, but leave that for later. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
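A hedged sketch of the layout change (the surrounding fields are illustrative, not the full struct):

    struct RAMState {
        /* one PSS per channel, all protected by bitmap_mutex */
        PageSearchStatus pss[RAM_CHANNEL_MAX];
        QemuMutex bitmap_mutex;
        /* ... other fields unchanged ... */
    };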
2022-12-15migration: Add pss_init()Peter Xu1-3/+9
Helper to init PSS structures. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-12-15migration: Introduce pss_channelPeter Xu1-32/+38
Introduce pss_channel for PageSearchStatus, defined as "the migration channel to be used to transfer this host page". We used to have rs->f, which is a mirror of MigrationState.to_dst_file. After the initial version of postcopy preempt, rs->f can be dynamically changed depending on which channel we want to use. But that later work still doesn't grant full concurrency of sending pages in e.g. different threads, because rs->f can be either the PRECOPY channel or the POSTCOPY channel. This needs to be per-thread too. PageSearchStatus is actually a good piece of struct which we can leverage if we want to have multiple threads sending pages. Sending a single guest page may not make sense, so we make the granule the "host page", and in the PSS structure we allow specifying a QEMUFile* to migrate a specific host page. Then we open the possibility of specifying different channels in different threads with different PSS structures. The PSS prefix can be slightly misleading here, because e.g. for the upcoming usage of the postcopy channel/thread it's not "searching" (or scanning) at all but sending the explicit page that was requested. However, since PSS has existed for some years, keep it as-is until someone complains. This patch mostly (simply) replaces rs->f with pss->pss_channel only. No functional change intended for this patch yet. But it does prepare to finally drop rs->f, and make ram_save_guest_page() thread safe. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-12-15migration: Teach PSS about host pagePeter Xu1-19/+76
Migration code has a lot to do with host pages. Teaching the PSS core about the idea of a host page helps a lot and makes the code clean. Meanwhile, this prepares for the future changes that can leverage the new PSS helpers that this patch introduces to send host pages in another thread. Three more fields are introduced for this: (1) host_page_sending: set to true when QEMU is sending a host page, false otherwise. (2) host_page_{start|end}: these point to the start/end of the host page we're sending, and are only valid when host_page_sending==true. For example, when we look up the next dirty page on the ramblock, with host_page_sending==true, we'll not look for anything beyond the current host page boundary. This can be slightly more efficient than the current code, because currently we set pss->page to the next dirty bit (which can be over the current host page boundary) and reset it to the host page boundary if we find it goes beyond that. With the above, we can easily make migration_bitmap_find_dirty() self-contained by updating pss->page properly. The rs* parameter is removed because it's not even used in the old code. When sending a host page, we should use the pss helpers like this (see the sketch below): - pss_host_page_prepare(pss): called before sending a host page - pss_within_range(pss): whether we're still within the current host page - pss_host_page_finish(pss): called after sending a host page Then we can use ram_save_target_page() to save one small page. Currently ram_save_host_page() is still the only user. If there's another function to send host pages (e.g. in the return path thread) in the future, it should follow the same style. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
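A sketch of the intended calling pattern (the helper names come from the commit message; the loop body is illustrative):

    pss_host_page_prepare(pss);        /* before sending a host page */
    do {
        ram_save_target_page(rs, pss); /* send one small target page */
        pss->page = migration_bitmap_find_dirty(pss->block, pss->page);
    } while (pss_within_range(pss));   /* stop at the host page boundary */
    pss_host_page_finish(pss);         /* after the host page is done */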
2022-12-15migration: Use atomic ops properly for page accountingsPeter Xu4-23/+51
To prepare for thread-safety on page accountings, at least the counters below need to be accessed atomically: ram_counters.transferred, ram_counters.duplicate, ram_counters.normal, and ram_counters.postcopy_bytes. There are a lot of other counters, but they won't be accessed outside the migration thread, so they're still safe to access without atomic ops. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
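One plausible shape of such an access, using QEMU's qatomic helpers (the exact mechanism used by the patch may differ):

    /* writer side, e.g. migration or rp-return thread */
    qatomic_add(&ram_counters.transferred, bytes);
    /* reader side, e.g. when filling in query-migrate statistics */
    uint64_t transferred = qatomic_read(&ram_counters.transferred);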
2022-12-15migration: Yield bitmap_mutex properly when sending/sleepingPeter Xu1-11/+35
Don't take the bitmap mutex when sending pages, or when being throttled by migration_rate_limit() (which is a bit tricky to call here in ram code, but still seems helpful). It prepares for the possibility of concurrently sending pages in >1 threads using the function ram_save_host_page(): all threads may need the bitmap_mutex to operate on bitmaps, so neither sendmsg() nor any kind of qemu_sem_wait() blocking in one thread should block the others from progressing. Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
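A sketch of the yield pattern (illustrative; the real call sites differ):

    qemu_mutex_lock(&rs->bitmap_mutex);
    /* ... find the next dirty page under the lock ... */
    qemu_mutex_unlock(&rs->bitmap_mutex); /* drop it before blocking */
    ram_save_target_page(rs, pss);        /* may block in sendmsg() */
    migration_rate_limit();               /* may sleep when throttled */
    qemu_mutex_lock(&rs->bitmap_mutex);   /* retake before the next scan */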
2022-12-15migration: Remove RAMState.f references in compression codePeter Xu1-5/+7
Remove references to RAMState.f in compress_page_with_multi_thread() and flush_compressed_data(). The compression code by default isn't compatible with having >1 channels (or it won't currently know which channel to flush the compressed data to), so to keep it simple we always flush to the default to_dst_file port until someone wants to add >1 ports support; rs->f right now can really change (after postcopy preempt is introduced). There should be no functional change at all after the patch is applied, since as long as rs->f is referenced in the compression code, it must be to_dst_file. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-12-15migration: Trivial cleanup save_page_header() on same block checkPeter Xu1-2/+3
The 2nd check on RAM_SAVE_FLAG_CONTINUE is a bit redundant. Use a boolean to be clearer. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-12-15migration: Cleanup xbzrle zero page cache update logicPeter Xu1-5/+1
The major change is to replace "!save_page_use_compression()" with "xbzrle_enabled" to make it clear. Reasonings: (1) When compression is enabled, "!save_page_use_compression()" is exactly the same as checking "xbzrle_enabled". (2) When compression is disabled, "!save_page_use_compression()" always returns true. We used to try calling the xbzrle code, but after this change we won't, and we shouldn't need to. While at it, drop the xbzrle_enabled check in xbzrle_cache_zero_page() because with this change it's not needed anymore. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-12-15migration: Add postcopy_preempt_active()Peter Xu1-2/+7
Add a helper to show that postcopy preempt is enabled and, at the same time, active. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
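A hedged sketch of the helper (the actual body may differ slightly):

    static bool postcopy_preempt_active(void)
    {
        return migrate_postcopy_preempt() && migration_in_postcopy();
    }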
2022-12-15migration: Take bitmap mutex when completing ram migrationPeter Xu1-0/+2
Any call to ram_find_and_save_block() needs to take the bitmap mutex. We used to not take it for most of ram_save_complete() because we thought we're the only one left using the bitmap, but it's not true after the preempt full patchset applied, since the return path can be taking it too. Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-12-15migration: Export ram_release_page()Juan Quintela2-1/+2
Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Leonardo Bras <leobras@redhat.com>
2022-12-15migration: Export ram_transferred_ram()Juan Quintela2-1/+3
Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: David Edmondson <david.edmondson@oracle.com> Reviewed-by: Leonardo Bras <leobras@redhat.com>
2022-12-15multifd: Create page_count fields into both MultiFD{Recv,Send}ParamsJuan Quintela2-3/+8
We were recalculating it left and right. We plan to change those values in the next patches. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Leonardo Bras <leobras@redhat.com>
2022-12-15multifd: Create page_size fields into both MultiFD{Recv,Send}ParamsJuan Quintela4-25/+23
We were calling qemu_target_page_size() left and right. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Leonardo Bras <leobras@redhat.com>
2022-12-14qapi migration: Elide redundant has_FOO in generated CMarkus Armbruster3-22/+10
The has_FOO for pointer-valued FOO are redundant, except for arrays. They are also a nuisance to work with. Recent commit "qapi: Start to elide redundant has_FOO in generated C" provided the means to elide them step by step. This is the step for qapi/migration.json. Said commit explains the transformation in more detail. The invariant violations mentioned there do not occur here. Cc: Juan Quintela <quintela@redhat.com> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Message-Id: <20221104160712.3005652-17-armbru@redhat.com>
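An illustrative before/after for a pointer-valued member (the member and the use() call are examples, not necessarily ones touched by this patch):

    /* before: two fields had to agree */
    if (params->has_tls_creds) {
        use(params->tls_creds);
    }
    /* after: the pointer itself carries the information */
    if (params->tls_creds) {
        use(params->tls_creds);
    }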
2022-12-14cleanup: Tweak and re-run return_directly.cocciMarkus Armbruster2-13/+3
Tweak the semantic patch to drop redundant parenthesis around the return expression. Coccinelle drops a comment in hw/rdma/vmw/pvrdma_cmd.c; restored manually. Coccinelle messes up vmdk_co_create(), not sure why. Change dropped, will be done manually in the next commit. Line breaks in target/avr/cpu.h and hw/rdma/vmw/pvrdma_cmd.c tidied up manually. Whitespace in tools/virtiofsd/fuse_lowlevel.c tidied up manually. checkpatch.pl complains "return of an errno should typically be -ve" two times for hw/9pfs/9p-synth.c. Preexisting, the patch merely makes it visible to checkpatch.pl. Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20221122134917.1217307-2-armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-11-21migration: Block migration comment or code is wrongJuan Quintela1-2/+2
And it appears that what is wrong is the code. During bulk stage we need to make sure that some block is dirty, but no games with max_size at all. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-11-21migration: Disable multifd explicitly with compressionPeter Xu2-6/+12
Multifd thread model does not work for compression, so explicitly disable it. Note that previously, even though we could enable both of them, nothing would go wrong, because the compression code has higher priority so the multifd feature would just be ignored. Now we'll fail even earlier, at config time, so the user will be better aware of the consequence. Note that there can be a slight chance of breaking existing users, but let's assume they're not the majority and not serious users, or they should have found that multifd is not working already. With that, we can safely drop the check in ram_save_target_page() for using multifd, because when multifd=on then compression=off, so the removed check on save_page_use_compression() would always return false too. Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
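A hedged sketch of the config-time check (the placement and wording follow the usual migrate_caps_check() pattern; details may vary):

    if (cap_list[MIGRATION_CAPABILITY_MULTIFD] &&
        cap_list[MIGRATION_CAPABILITY_COMPRESS]) {
        error_setg(errp, "Multifd is not compatible with compress");
        return false;
    }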
2022-11-21migration: Disallow postcopy preempt to be used with compressPeter Xu1-0/+11
The preempt mode requires the capability to assign a channel for each of the pages, while the compression logic will currently assign pages to different compress threads/local-channels, so potentially they're incompatible. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-11-21migration: Fix race on qemu_file_shutdown()Peter Xu1-3/+24
In qemu_file_shutdown(), there's a possible race with the current order of operations. There are two major things to do: (1) do the real shutdown() (e.g. the shutdown() syscall on a socket), and (2) update the qemufile's last_error. We must do (2) before (1), otherwise there can be a race condition like:

    page receiver                     other thread
    -------------                     ------------
    qemu_get_buffer()
                                      do shutdown()
    returns 0 (buffer all zero)
    (meanwhile we didn't check this retcode)
    try to detect IO error
      last_error==NULL, IO okay
    install ALL-ZERO page
                                      set last_error
    --> guest crash!

To fix this, we could also check the retval of qemu_get_buffer(), but not all APIs can be properly checked and ultimately we still need to go back to qemu_file_get_error(). E.g. qemu_get_byte() doesn't return an error. Maybe some day a rework of the qemufile API is really needed, but for now keep using qemu_file_get_error() and fix it by not allowing that race condition to happen. Here shutdown() is indeed special because the last_error was emulated. For real -EIO errors it'll always be set when e.g. a sendmsg() error triggers, so we won't miss those; only shutdown() is a bit tricky here. Cc: Daniel P. Berrange <berrange@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
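A sketch of the fixed ordering inside qemu_file_shutdown() (illustrative):

    /* (2) first make sure readers will observe an error ... */
    qemu_file_set_error(f, -EIO);
    /* (1) ... then do the real shutdown, so a reader that gets a
     * zero-length buffer finds last_error already set. */
    qio_channel_shutdown(f->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);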
2022-11-21migration: Fix possible infinite loop of ram save processPeter Xu1-4/+12
When starting the ram saving procedure (especially at the completion phase), always set last_seen_block to non-NULL to make sure we can always correctly detect the case where "we've migrated all the dirty pages". Then we guarantee that both last_seen_block and pss.block are always valid before the loop starts. See the comment in the code for some details. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-11-21migration/multifd/zero-copy: Create helper function for flushingLeonardo Bras1-11/+19
Move flushing code from multifd_send_sync_main() to a new helper, and call it in multifd_send_sync_main(). Signed-off-by: Leonardo Bras <leobras@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-11-21migration/channel-block: fix return value for qio_channel_block_{readv,writev}Fiona Ebner1-2/+4
in the error case. The documentation in include/io/channel.h states that -1 or QIO_CHANNEL_ERR_BLOCK should be returned upon error. Simply passing along the return value from the bdrv-functions has the potential to confuse the call sides. Non-blocking mode is not implemented currently, so -1 it is. Signed-off-by: Fiona Ebner <f.ebner@proxmox.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
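A sketch of the clamping described above (illustrative):

    ret = bdrv_readv_vmstate(bs, &qiov, offset);
    if (ret < 0) {
        return -1; /* don't leak the raw -errno to the channel layer */
    }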
2022-10-27reset: allow registering handlers that aren't called by snapshot loadingJason A. Donenfeld1-1/+1
Snapshot loading only expects to call deterministic handlers, not non-deterministic ones. So introduce a way of registering handlers that won't be called when resetting for snapshots. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Message-id: 20221025004327.568476-2-Jason@zx2c4.com [PMM: updated json doc comment with Markus' text; fixed checkpatch style nit] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-07migration: add missing coroutine_fn annotationsMarc-André Lureau1-1/+2
Callers of coroutine_fn must be coroutine_fn themselves, or the call must be within "if (qemu_in_coroutine())". Apply coroutine_fn to functions where this holds. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Alberto Faria <afaria@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20220922084924.201610-26-pbonzini@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2022-10-04Use g_new() & friends where that makes obvious senseMarkus Armbruster1-2/+2
g_new(T, n) is neater than g_malloc(sizeof(T) * n). It's also safer, for two reasons. One, it catches multiplication overflowing size_t. Two, it returns T * rather than void *, which lets the compiler catch more type errors. This commit only touches allocations with size arguments of the form sizeof(T). Patch created mechanically with:

    $ spatch --in-place --sp-file scripts/coccinelle/use-g_new-etc.cocci \
             --macro-file scripts/cocci-macro-file.h FILES...

The previous iteration was commit a95942b50c. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20220923084254.4173111-1-armbru@redhat.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
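The transformation in miniature (Foo is a placeholder type):

    Foo *a = g_malloc(sizeof(Foo) * n); /* before: unchecked multiply, void * */
    Foo *b = g_new(Foo, n);             /* after: overflow-checked, typed */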
2022-08-02migration: Define BLK_MIG_BLOCK_SIZE as unsigned long longPeter Maydell1-1/+1
When we use BLK_MIG_BLOCK_SIZE in expressions like block_mig_state.submitted * BLK_MIG_BLOCK_SIZE, this multiplication is done as 32 bits, because both operands are 32 bits. Coverity complains about possible overflows because we then accumulate that into a 64 bit variable. Define BLK_MIG_BLOCK_SIZE as unsigned long long using the ULL suffix. The only two current uses of it with this problem are both in block_save_pending(), so we could just cast to uint64_t there, but using the ULL suffix is simpler and ensures that we don't accidentally introduce new variants of the same issue in future. Resolves: Coverity CID 1487136, 1487175 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20220721115207.729615-3-peter.maydell@linaro.org> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
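Why the suffix matters (the size is illustrative):

    #define BLK_MIG_BLOCK_SIZE (1ULL << 20) /* was (1 << 20): int math */
    /* With a plain int macro, submitted * BLK_MIG_BLOCK_SIZE could wrap
     * in 32 bits before the widening assignment; with ULL the
     * multiplication itself is done in 64 bits. */
    uint64_t pending = block_mig_state.submitted * BLK_MIG_BLOCK_SIZE;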
2022-08-02migration: Assert that migrate_multifd_compression() returns an in-range valuePeter Maydell1-0/+1
Coverity complains that when we use the return value from migrate_multifd_compression() as an array index: multifd_recv_state->ops = multifd_ops[migrate_multifd_compression()]; that this might overrun the array (which is declared to have size MULTIFD_COMPRESSION__MAX). This is because the function return type is MultiFDCompression, which is an autogenerated enum. The code generator includes the "one greater than the maximum possible value" MULTIFD_COMPRESSION__MAX in the enum, even though this is not actually a valid value for the enum, and this makes Coverity think that migrate_multifd_compression() could return that __MAX value and index off the end of the array. Suppress the Coverity error by asserting that the value we're going to return is within range. Resolves: Coverity CID 1487239, 1487254 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20220721115207.729615-2-peter.maydell@linaro.org> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
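A hedged sketch of the assertion (one plausible placement):

    MultiFDCompression c = migrate_multifd_compression();
    assert(c < MULTIFD_COMPRESSION__MAX); /* __MAX is not a valid value */
    multifd_recv_state->ops = multifd_ops[c];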
2022-08-02Revert "migration: Simplify unqueue_page()"Thomas Huth2-12/+28
This reverts commit cfd66f30fb0f735df06ff4220e5000290a43dad3. The simplification of unqueue_page() introduced a bug that sometimes breaks migration on s390x hosts. The problem is not fully understood yet, but since we are already in the freeze for QEMU 7.1 and we need something working there, let's revert this patch for the upcoming release. The optimization can be redone later again in a proper way if necessary. Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2099934 Signed-off-by: Thomas Huth <thuth@redhat.com> Message-Id: <20220802061949.331576-1-thuth@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-08-02migration: add remaining params->has_* = true in migration_instance_init()Leonardo Bras1-0/+4
Some of params->has_* = true are missing in migration_instance_init, this causes migrate_params_check() to skip some tests, allowing some unsupported scenarios. Fix this by adding all missing params->has_* = true in migration_instance_init(). Fixes: 69ef1f36b0 ("migration: define 'tls-creds' and 'tls-hostname' migration parameters") Fixes: 1d58872a91 ("migration: do not wait for free thread") Fixes: d2f1d29b95 ("migration: add support for a "tls-authz" migration parameter") Signed-off-by: Leonardo Bras <leobras@redhat.com> Message-Id: <20220726010235.342927-1-leobras@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-20migration: Avoid false-positive on non-supported scenarios for zero-copy-sendLeonardo Bras1-1/+14
Migration with zero-copy-send currently has its limitations, as it can't be used with TLS nor any kind of compression. In such scenarios, it should output errors during parameter / capability setting. But currently there are some ways of setting these not-supported scenarios without the error message being printed: 1) For the 'compression' capability, it works by enabling it together with zero-copy-send. This happens because the validity test for zero-copy uses the helper function migrate_use_compression(), which checks for compression presence in s->enabled_capabilities[MIGRATION_CAPABILITY_COMPRESS]. The point here is: the validity test happens before the capability gets enabled. If all of them get enabled together, this test will not return an error. In order to fix that, replace migrate_use_compression() by directly testing the cap_list parameter in migrate_caps_check(). 2) For features enabled by parameters such as TLS & 'multifd_compression', there was also a possibility of setting non-supported scenarios: setting zero-copy-send first, then setting the unsupported parameter. In order to fix that, also add a check for parameters conflicting with zero-copy-send in migrate_params_check(). 3) XBZRLE is also a compression capability, so it makes sense to also add it to the list of capabilities which are not supported with zero-copy-send. Fixes: 1abaec9a1b2c ("migration: Change zero_copy_send from migration parameter to migration capability") Signed-off-by: Leonardo Bras <leobras@redhat.com> Message-Id: <20220719122345.253713-1-leobras@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-20multifd: Document the locking of MultiFD{Send/Recv}ParamsJuan Quintela1-25/+41
Reorder the structures so we can know if the fields are: - Read only - Their own locking (i.e. sems) - Protected by 'mutex' - Only for the multifd channel Signed-off-by: Juan Quintela <quintela@redhat.com> Message-Id: <20220531104318.7494-2-quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> dgilbert: Typo fixes from Chen Zhang
2022-07-20migration/multifd: Report to user when zerocopy not workingLeonardo Bras3-0/+9
Some errors, like the lack of Scatter-Gather support by the network interface (NETIF_F_SG), may cause sendmsg(...,MSG_ZEROCOPY) to fail on using zero-copy, which causes it to fall back to the default copying mechanism. After each full dirty-bitmap scan there should be a zero-copy flush happening, which checks for errors in each of the previous calls to sendmsg(...,MSG_ZEROCOPY). If all of them failed to use zero-copy, then increment the dirty_sync_missed_zero_copy migration stat to let the user know about it. Signed-off-by: Leonardo Bras <leobras@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Acked-by: Peter Xu <peterx@redhat.com> Message-Id: <20220711211112.18951-4-leobras@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-20Add dirty-sync-missed-zero-copy migration statLeonardo Bras1-0/+2
Signed-off-by: Leonardo Bras <leobras@redhat.com> Acked-by: Markus Armbruster <armbru@redhat.com> Acked-by: Peter Xu <peterx@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Message-Id: <20220711211112.18951-3-leobras@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-20migration: remove unreachable code after reading dataDaniel P. Berrangé1-3/+1
The code calls qio_channel_read() in a loop when it reports QIO_CHANNEL_ERR_BLOCK. This error is reported when errno==EAGAIN. As such, the later block of code will always hit the 'errno != EAGAIN' condition, making the final 'else' unreachable. Fixes: Coverity CID 1490203 Signed-off-by: Daniel P. Berrangé <berrange@redhat.com> Message-Id: <20220627135318.156121-1-berrange@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
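The shape of the dead branch (illustrative, not the exact code):

    ssize_t len = qio_channel_read(ioc, buf, buflen, NULL);
    while (len == QIO_CHANNEL_ERR_BLOCK) { /* i.e. errno == EAGAIN */
        /* ... wait for the channel, then retry ... */
        len = qio_channel_read(ioc, buf, buflen, NULL);
    }
    if (len < 0 && errno != EAGAIN) {
        /* reachable error path */
    } else if (len < 0) {
        /* unreachable: EAGAIN never escapes the retry loop above */
    }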
2022-07-20migration: Respect postcopy request order in preemption modePeter Xu1-13/+52
With preemption mode on, when we see a postcopy request for exactly the page that we preempted before (so we've already partially sent the page via the PRECOPY channel, and it got preempted by another postcopy request), currently we drop the request, so that after all the other postcopy requests are serviced we go back to the precopy stream and handle it there. We dropped the request because we can't send it via the postcopy channel: the precopy channel already contains part of the data, and we can only send a huge page via one channel as a whole; we can't split a huge page into two channels. That's a real corner case and it works, but it changes the order in which we handle postcopy requests, since we're postponing this (unlucky) postcopy request until after the other queued postcopy requests. The problem is that when the guest is very busy, the postcopy queue can always be non-empty, which means this dropped request may never be handled until the end of the postcopy migration. So there's a chance that one dest QEMU vcpu thread waits on a page fault for an extremely long time just because it's unluckily accessing the specific page that was preempted before. The worst case wait can be as long as the whole postcopy migration procedure. It's extremely unlikely to happen, but when it happens it's not good. The root cause of this problem is that we treat the pss->postcopy_requested variable as having two meanings bound together: 1. whether this page request is urgent, and 2. which channel we should use for this page request. With the old code, when we set postcopy_requested it means either both (1) and (2) are true, or both are false; (1) and (2) can never have different values. However, it doesn't need to be like that. It's perfectly legal for a request to have (1) very high urgency, but (2) want to use the precopy channel, just like the corner case discussed above. To differentiate the two meanings better, introduce a new field called postcopy_target_channel, showing which channel we should use for this page request, covering the old meaning (2) only. Then we leave the postcopy_requested variable to stand only for meaning (1), the urgency of this page request. With this change, we can easily boost the priority of a preempted precopy page as long as we know that page is also requested as a postcopy page. So with the new approach, in get_queued_page(), instead of dropping that request we send it right away via the precopy channel, so we get back the ordering of page faults just as they were requested on dest. Reported-by: Manish Mishra <manish.mishra@nutanix.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Manish Mishra <manish.mishra@nutanix.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220707185520.27583-1-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-20migration: Enable TLS for preempt channelPeter Xu2-8/+50
This patch is based on the async preempt channel creation. It continues wiring up the new channel with a TLS handshake to the destination when enabled. Note that only the src QEMU needs such an operation; the dest QEMU does not need any change for TLS support because all channels are established synchronously there, so all the TLS magic is already properly handled by migration_tls_channel_process_incoming(). Reviewed-by: Daniel P. Berrange <berrange@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220707185518.27529-1-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-20migration: Export tls-[creds|hostname|authz] params to cmdline tooPeter Xu1-3/+3
It's useful for specifying tls credentials all in the cmdline (along with the -object tls-creds-*), especially for debugging purposes. The trick here is we must remember not to free these fields again in the finalize() function of the migration object, otherwise it'll cause a double-free. The thing is, when destroying an object, we'll first destroy the properties bound to the object, then the object itself. To be explicit, when destroying the object in object_finalize() we have this sequence of operations: object_property_del_all(obj); object_deinit(obj, ti); So after this change the two fields are properly released even before reaching the finalize() function, in object_property_del_all(), hence we don't need to free them anymore in finalize() or it would be a double-free. This also fixes a trivial memory leak for tls-authz, as we forgot to free it before this patch. Reviewed-by: Daniel P. Berrange <berrange@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220707185515.27475-1-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-20migration: Add helpers to detect TLS capabilityPeter Xu5-10/+17
Add migrate_channel_requires_tls() to detect whether the specific channel requires TLS, leveraging the recently introduced migrate_use_tls(). No functional change intended. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220707185513.27421-1-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-20migration: Add property x-postcopy-preempt-break-hugePeter Xu3-0/+16
Add a property field that can conditionally disable the "break sending huge page" behavior in postcopy preemption. By default it's enabled. It should only be used for debugging purposes, and we should never remove the "x-" prefix. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Manish Mishra <manish.mishra@nutanix.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220707185511.27366-1-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-20migration: Create the postcopy preempt channel asynchronouslyPeter Xu4-12/+68
This patch allows the postcopy preempt channel to be created asynchronously. The benefit is that when the connection is slow, we won't take the BQL (and potentially block all things like QMP) for a long time without releasing. A function postcopy_preempt_wait_channel() is introduced, allowing the migration thread to be able to wait on the channel creation. The channel is always created by the main thread, in which we'll kick a new semaphore to tell the migration thread that the channel has created. We'll need to wait for the new channel in two places: (1) when there's a new postcopy migration that is starting, or (2) when there's a postcopy migration to resume. For the start of migration, we don't need to wait for this channel until when we want to start postcopy, aka, postcopy_start(). We'll fail the migration if we found that the channel creation failed (which should probably not happen at all in 99% of the cases, because the main channel is using the same network topology). For a postcopy recovery, we'll need to wait in postcopy_pause(). In that case if the channel creation failed, we can't fail the migration or we'll crash the VM, instead we keep in PAUSED state, waiting for yet another recovery. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Manish Mishra <manish.mishra@nutanix.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220707185509.27311-1-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-20migration: Postcopy recover with preempt enabledPeter Xu7-8/+119
To allow postcopy recovery, the ram fast load (preempt-only) dest QEMU thread needs similar handling on fault tolerance. When ram_load_postcopy() fails, instead of stopping the thread it halts with a semaphore, preparing to be kicked again when recovery is detected. A mutex is introduced to make sure there's no concurrent operation upon the socket. To make it simple, the fast ram load thread will take the mutex during its whole procedure, and only release it if it's paused. The fast-path socket will be properly released by the main loading thread safely when there's network failures during postcopy with that mutex held. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220707185506.27257-1-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>