path: root/migration
Age  Commit message  (Author, Files, Lines)

2023-02-11  migration: Make ram_save_target_page() a pointer  (Juan Quintela, 1 file, -4/+15)
We are going to create a new function for multifd later in the series.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

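As a rough illustration of the pattern, a minimal stand-alone sketch follows; the struct and signature are stand-ins, not QEMU's actual types (only the name ram_save_target_page_legacy matches the series):

    #include <stdio.h>

    typedef struct RAMState RAMState;
    typedef int (*ram_save_target_page_fn)(RAMState *rs, int page);

    struct RAMState {
        /* Method pointer so a multifd variant can be installed later. */
        ram_save_target_page_fn ram_save_target_page;
    };

    static int ram_save_target_page_legacy(RAMState *rs, int page)
    {
        (void)rs;
        printf("saving page %d via the legacy path\n", page);
        return 1;
    }

    int main(void)
    {
        RAMState rs = { .ram_save_target_page = ram_save_target_page_legacy };
        /* Callers go through the pointer; installing a multifd variant
         * would not touch any call site. */
        return rs.ram_save_target_page(&rs, 42) == 1 ? 0 : 1;
    }
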
2023-02-11  migration: Calculate ram size once  (Juan Quintela, 1 file, -2/+5)
We are recalculating the ram size continuously, when we know that it doesn't change during migration. Create a field in RAMState to track it.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>

2023-02-11  migration: Split ram_bytes_total_common() in two functions  (Juan Quintela, 1 file, -11/+14)
It is just a big if in the middle of the function, and we need two functions anyway.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
Reindent to make Philippe happy (and CODING_STYLE)

2023-02-11  migration: Make find_dirty_block() return a single value  (Juan Quintela, 1 file, -15/+22)
We used to return two bools; just return a single int with the following meaning:

    old return | again | new return
    -----------+-------+------------------
    false      | false | PAGE_ALL_CLEAN
    false      | true  | PAGE_TRY_AGAIN
    true       | true  | PAGE_DIRTY_FOUND   /* we don't care about "again" at all */

Signed-off-by: Juan Quintela <quintela@redhat.com>

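A stand-alone sketch of the same pattern: a three-valued enum replaces the (found, again) bool pair, so the caller's loop keys off a single value (stub code, not the actual patch):

    #include <stdio.h>

    typedef enum {
        PAGE_ALL_CLEAN,    /* was: found=false, again=false */
        PAGE_TRY_AGAIN,    /* was: found=false, again=true  */
        PAGE_DIRTY_FOUND,  /* was: found=true  ("again" irrelevant) */
    } FindDirtyResult;

    /* Stub standing in for the real bitmap scanner. */
    static FindDirtyResult find_dirty_block_stub(int *calls)
    {
        switch ((*calls)++) {
        case 0:  return PAGE_TRY_AGAIN;
        case 1:  return PAGE_DIRTY_FOUND;
        default: return PAGE_ALL_CLEAN;
        }
    }

    int main(void)
    {
        int calls = 0;
        FindDirtyResult res;

        /* One value drives the loop instead of two coupled bools. */
        while ((res = find_dirty_block_stub(&calls)) != PAGE_ALL_CLEAN) {
            if (res == PAGE_DIRTY_FOUND) {
                printf("dirty page found\n");
            } /* PAGE_TRY_AGAIN: just retry */
        }
        return 0;
    }
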
2023-02-11  migration: Simplify ram_find_and_save_block()  (Juan Quintela, 1 file, -11/+9)
We will later need find_dirty_block() to return errors, so simplify the loop.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-11  multifd: Remove some redundant code  (Li Zhang, 1 file, -11/+4)
Clean up some unnecessary code.

Signed-off-by: Li Zhang <lizhang@suse.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-11  multifd: Clean up the function multifd_channel_connect()  (Li Zhang, 1 file, -22/+21)
Clean up multifd_channel_connect().

Signed-off-by: Li Zhang <lizhang@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-11  migration: Remove spurious files  (Juan Quintela, 1 file, -1274/+0)
I introduced spurious files on my tree during a rebase:

    commit ebfc57871506b3fe36cc41f69ee3ad31a34afd63
    Author: Zhenzhong Duan <zhenzhong.duan@intel.com>
    Date:   Mon Oct 17 15:53:51 2022 +0800

        multifd: Fix flush of zero copy page send request

        Make the IO channel flush call after the inflight request has been
        drained in the multifd thread, or else we may fail to flush the
        inflight request.

        Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
        Reviewed-by: Juan Quintela <quintela@redhat.com>
        Signed-off-by: Juan Quintela <quintela@redhat.com>

To make things worse, it appears like Zhenzhong is the one to blame.

    for (int i = 0; i < 1000000; i++) {
        printf("I will not do rebases when I am tired\n");
    }

Sorry, Juan.

Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-08  Drop duplicate #include  (Markus Armbruster, 1 file, -2/+0)
Tracked down with the help of scripts/clean-includes.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230202133830.2152150-21-armbru@redhat.com>

2023-02-06  migration: save/delete migration thread info  (Jiang Jiacheng, 2 files, -0/+10)
To support querying migration thread information, save and delete thread (live_migration and multifdsend) information at thread creation and finish.

Signed-off-by: Jiang Jiacheng <jiangjiacheng@huawei.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration: Introduce interface query-migrationthreads  (Jiang Jiacheng, 3 files, -0/+80)
Introduce the interface query-migrationthreads. It is used to query information about migration threads and returns each migration thread's name and id. Introduce threadinfo.c to manage migration threads.

Signed-off-by: Jiang Jiacheng <jiangjiacheng@huawei.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  multifd: Fix flush of zero copy page send request  (Zhenzhong Duan, 2 files, -4/+1278)
Make the IO channel flush call after the inflight request has been drained in the multifd thread, or else we may fail to flush the inflight request.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  multifd: Fix a race on reading MultiFDPages_t.block  (Zhenzhong Duan, 1 file, -2/+5)
In multifd_queue_page(), MultiFDPages_t.block is checked twice. Between the two checks, MultiFDPages_t.block may be reset to NULL by the multifd thread. This makes the second check always true, and a redundant page is then submitted to the multifd thread again.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

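The race class is the classic double-read: checking a shared pointer twice lets another thread NULL it in between. A toy sketch of the broken and fixed shapes (illustrative types, not QEMU's):

    #include <stddef.h>

    typedef struct { void *volatile block; } PagesStub;

    /* Broken shape: p->block is read twice; another thread may reset
     * it to NULL between the reads, so the second test can mislead. */
    static int queue_page_racy(PagesStub *p, void *blk)
    {
        if (p->block == NULL) {
            p->block = blk;
        }
        return p->block != blk;   /* re-reads the shared field */
    }

    /* Fixed shape: read the shared field once into a local and only
     * reason about that snapshot afterwards. */
    static int queue_page_fixed(PagesStub *p, void *blk)
    {
        void *cur = p->block;
        if (cur == NULL) {
            p->block = cur = blk;
        }
        return cur != blk;
    }
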
2023-02-06  migration: check magic value for deciding the mapping of channels  (manish.mishra, 7 files, -31/+101)
Current logic assumes that channel connections on the destination side are always established in the same order as on the source, and that the first one will always be the main channel, followed by the multifd or post-copy preemption channel. This may not always be true: even if a channel has a connection established on the source side, it can be in a pending state on the destination side while a newer connection is established first, causing out-of-order mapping of channels on the destination side.

Currently, all channels except post-copy preempt send a magic number; this patch uses that magic number to decide the type of channel. This logic is applicable only for precopy (multifd) live migration because, as mentioned, the post-copy preempt channel does not send any magic number. TLS live migrations already do the TLS handshake before creating other channels, so this issue is not possible with TLS; hence this logic is skipped for TLS live migrations.

This patch uses a peeking read to check the magic number of channels, so that current data/control stream management remains unaffected.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Suggested-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: manish.mishra <manish.mishra@nutanix.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  io: Add support for MSG_PEEK for socket channel  (manish.mishra, 2 files, -0/+2)
MSG_PEEK peeks at the channel: the data is treated as unread, and the next read will still return it. This support is currently added only for the socket class. An extra 'flags' parameter is added to the io_readv calls to pass extra read flags like MSG_PEEK.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Suggested-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: manish.mishra <manish.mishra@nutanix.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

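A minimal POSIX-level sketch of the peek technique (plain recv(2) here; QEMU routes the flag through its QIOChannel io_readv interface, and the big-endian assumption is illustrative):

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Peek the 4-byte channel magic without consuming it; the normal
     * stream handling that follows still sees every byte. Returns the
     * magic in host order, or -1 on error/short peek. */
    static int64_t peek_channel_magic(int sockfd)
    {
        uint32_t magic;
        ssize_t n = recv(sockfd, &magic, sizeof(magic),
                         MSG_PEEK | MSG_WAITALL);
        if (n != (ssize_t)sizeof(magic)) {
            return -1;
        }
        return (int64_t)ntohl(magic);
    }
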
2023-02-06  migration/dirtyrate: Show sample pages only in page-sampling mode  (Zhenzhong Duan, 1 file, -4/+6)
The value of "Sample Pages" is confusing in modes other than page-sampling. See below:

    (qemu) calc_dirty_rate -b 10 520
    (qemu) info dirty_rate
    Status: measuring
    Start Time: 11646834 (ms)
    Sample Pages: 520 (per GB)
    Period: 10 (sec)
    Mode: dirty-bitmap
    Dirty rate: (not ready)
    (qemu) info dirty_rate
    Status: measured
    Start Time: 11646834 (ms)
    Sample Pages: 0 (per GB)
    Period: 10 (sec)
    Mode: dirty-bitmap
    Dirty rate: 2 (MB/s)

While it's totally useless in dirty-ring and dirty-bitmap modes, fix it so it's shown only in page-sampling mode.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration: Perform vmsd structure check during tests  (Dr. David Alan Gilbert, 1 file, -0/+42)
Perform a check on vmsd structures during test runs in the hope of catching any missing terminators and other simple screwups.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration: Add canary to VMSTATE_END_OF_LIST  (Dr. David Alan Gilbert, 2 files, -0/+3)
We fairly regularly forget VMSTATE_END_OF_LIST markers at the end of descriptions; given that the current check is only for ->name being NULL, sometimes we get unlucky, the code apparently works, and no one spots the error. Explicitly add a flag, VMS_END, that should be set, and assert that it is set during the traversal.

Note: this can't go in until we update the copy of vmstate.h in slirp.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

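A stand-alone sketch of the canary idea: the terminator carries an explicit flag, and traversal asserts it rather than trusting that a NULL name will eventually appear (names mimic QEMU's, but this is stub code):

    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    enum { VMS_END = 1 << 0 };

    typedef struct {
        const char *name;
        int flags;
    } FieldStub;

    #define FIELD_STUB(n)    { .name = (n), .flags = 0 }
    #define END_OF_LIST_STUB { .name = NULL, .flags = VMS_END }

    static void traverse(const FieldStub *f)
    {
        while (f->name) {
            printf("field: %s\n", f->name);
            f++;
        }
        /* The canary: a NULL name without VMS_END means the real
         * terminator was forgotten and we ran off the table. */
        assert(f->flags & VMS_END);
    }

    int main(void)
    {
        const FieldStub fields[] = {
            FIELD_STUB("irq"), FIELD_STUB("timer"), END_OF_LIST_STUB,
        };
        traverse(fields);
        return 0;
    }
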
2023-02-06  migration/rdma: fix return value for qio_channel_rdma_{readv,writev}  (Fiona Ebner, 1 file, -5/+10)
Upon errors, only -1 and QIO_CHANNEL_ERR_BLOCK should be returned, as the documentation in include/io/channel.h states. Other values have the potential to confuse the call sites. error_setg is used rather than error_setg_errno, because there are certain code paths where -1 (as a non-errno) is propagated up (e.g. starting from qemu_rdma_block_for_wrid or qemu_rdma_post_recv_control) all the way to qio_channel_rdma_{readv,writev}.

Similar to a216ec85b7 ("migration/channel-block: fix return value for qio_channel_block_{readv,writev}").

Suggested-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration: Show downtime during postcopy phase  (Peter Xu, 1 file, -2/+12)
The downtime should be displayed during the postcopy phase because the switchover phase is done. OTOH it's weird to show "expected downtime", which is confusing once the switchover has already happened.

This is a slight ABI change on QMP, but I assume it shouldn't affect anyone.

Reviewed-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration/ram: Factor out check for advised postcopy  (David Hildenbrand, 2 files, -7/+8)
Let's factor out this check, to be used in virtio-mem context next. While at it, fix a spelling error in a related comment.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration/savevm: Allow immutable device state to be migrated early (i.e., before RAM)  (David Hildenbrand, 1 file, -0/+14)
For virtio-mem, we want to have the plugged/unplugged state of memory blocks available before migrating any actual RAM content, and to perform sanity checks before touching anything on the destination. This information is immutable on the migration source while migration is active.

We want to use this information for proper preallocation support with migration: currently, we don't preallocate memory on the migration target, and especially with hugetlb, we can easily run out of hugetlb pages during RAM migration and will crash (SIGBUS) instead of catching this gracefully via preallocation.

Migrating device state via a VMSD before we start iterating is currently impossible: the only approach that would be possible is avoiding a VMSD and migrating state manually during save_setup(), to be restored during load_state().

Let's allow for migrating device state via a VMSD early, during the setup phase in qemu_savevm_state_setup(). To keep it simple, we indicate applicable VMSDs using an "early_setup" flag. Note that only very selected devices (i.e., ones seriously messing with RAM setup) are supposed to make use of such early state migration.

While at it, also use a bool for the "unmigratable" member.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration/savevm: Prepare vmdesc json writer in qemu_savevm_state_setup()  (David Hildenbrand, 3 files, -6/+18)
... and store it in the migration state. This is a preparation for storing selected VMSDs already in qemu_savevm_state_setup().

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration/savevm: Move more savevm handling into vmstate_save()  (David Hildenbrand, 1 file, -42/+37)
Let's move more code into vmstate_save(), reducing code duplication and preparing for reuse of vmstate_save() in qemu_savevm_state_setup(). We have to move vmstate_save() to make the compiler happy.

We'll now also trace from qemu_save_device_state(), triggering the same tracepoints that were previously only triggered from qemu_savevm_state_complete_precopy_non_iterable(). Note that qemu_save_device_state() ignores iterable device state, such as RAM, and consequently doesn't trigger some other tracepoints (e.g., trace_savevm_state_setup()).

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration/ram: Optimize ram_write_tracking_start() for RamDiscardManager  (David Hildenbrand, 1 file, -2/+34)
ram_block_populate_read() already optimizes for RamDiscardManager. However, ram_write_tracking_start() will still try protecting discarded memory ranges. Let's optimize, because discarded ranges don't map any pages and:

(1) For anonymous memory, trying to protect using uffd-wp without a mapped page is ignored by the kernel and consequently a NOP.
(2) For shared/file-backed memory, we will fill present page tables in the range with PTE markers. However, we will even allocate page tables just to fill them with unnecessary PTE markers and effectively waste memory.

So let's exclude these ranges, just like ram_block_populate_read() already does.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration/ram: Rely on used_length for uffd_change_protection()  (David Hildenbrand, 1 file, -1/+1)
ram_mig_ram_block_resized() will abort migration (including background snapshots) when resizing a RAMBlock. ram_block_populate_read() will only populate RAM up to used_length, so at least for anonymous memory, protecting everything between used_length and max_length won't actually protect anything and is just a NOP. So let's only protect everything up to used_length.

Note: it still makes sense to register uffd-wp for max_length, such that RAM_UF_WRITEPROTECT is independent of a changing used_length.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration/ram: Don't explicitly unprotect when unregistering uffd-wp  (David Hildenbrand, 1 file, -9/+0)
When unregistering uffd-wp, older kernels before commit f369b07c86143 ("mm/uffd: reset write protection when unregister with wp-mode") won't clear the uffd-wp PTE bit. When re-registering uffd-wp, the previous uffd-wp PTE bits would trigger again. With the above commit, the kernel will clear the uffd-wp PTE bits when unregistering itself.

Consequently, we'll now clear the uffd-wp PTE bits twice, whereby we don't care about clearing them at all: a new background snapshot will re-register uffd-wp and re-protect all memory either way. So let's skip the manual clearing of uffd-wp. If ever relevant, we could clear conditionally in uffd_unregister_memory() -- we just need a way to figure out more recent kernels.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration/ram: Fix error handling in ram_write_tracking_start()  (David Hildenbrand, 1 file, -2/+3)
If something goes wrong during uffd_change_protection(), we would fail to unregister uffd-wp and would not release our reference. Fix it by performing the uffd_change_protection(true) last.

Note that a uffd_change_protection(false) on the recovery path without a prior uffd_change_protection(true) is fine.

Fixes: 278e2f551a09 ("migration: support UFFD write fault processing in ram_save_iterate()")
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2023-02-06  migration/ram: Fix populate_read_range()  (David Hildenbrand, 1 file, -1/+3)
Unfortunately, commit f7b9dcfbcf44 broke populate_read_range(): the loop end condition is very wrong, resulting in that function not populating the full range. Let's fix that.

Fixes: f7b9dcfbcf44 ("migration/ram: Factor out populating pages readable in ram_block_populate_pages()")
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

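The bug class here is a wrong loop bound. A sketch of the corrected shape, touching one byte per host page across the whole range (illustrative, not the actual patch; assumes a page-aligned range):

    #include <stddef.h>

    static void populate_read_range_sketch(volatile const char *base,
                                           size_t offset, size_t size,
                                           size_t page_size)
    {
        size_t end = offset + size;

        /* Cover every page in [offset, offset + size): the end
         * condition must compare against the range end, or the tail
         * of the range stays unpopulated. */
        for (; offset < end; offset += page_size) {
            (void)base[offset];   /* the read faults the page in */
        }
    }
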
2023-02-06  util/userfaultfd: Add uffd_open()  (Peter Xu, 1 file, -6/+5)
Add a helper to create the uffd handle.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

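userfaultfd has no glibc wrapper, so such a helper funnels the raw syscall through one place. A sketch of what it can look like on Linux (flag handling is illustrative, not the merged code):

    #include <unistd.h>
    #include <sys/syscall.h>

    static int uffd_open_sketch(int flags)
    {
    #if defined(__linux__) && defined(__NR_userfaultfd)
        return (int)syscall(__NR_userfaultfd, flags);
    #else
        (void)flags;
        return -1;   /* userfaultfd unavailable on this platform */
    #endif
    }
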
2023-02-06  migration: simplify migration_iteration_run()  (Juan Quintela, 1 file, -12/+12)
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2023-02-06  migration: Remove unused threshold_size parameter  (Juan Quintela, 7 files, -23/+15)
Until the previous commit, save_live_pending() was used for ram. Now, with the split into state_pending_estimate() and state_pending_exact(), it is not needed anymore, so remove it.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2023-02-06  migration: Split save_live_pending() into state_pending_*  (Juan Quintela, 7 files, -43/+101)
We split the function in two:

- state_pending_estimate: we estimate the remaining state size without stopping the machine.
- state_pending_exact: we calculate the exact amount of remaining state.

The only "device" that implements different functions for _estimate() and _exact() is ram.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

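A sketch of what the split callback pair can look like (the signatures here are illustrative, not the exact ones merged):

    #include <stdint.h>

    typedef struct SaveVMHandlersStub {
        /* Cheap estimate: no dirty-bitmap sync, safe to poll while
         * the guest keeps running. */
        void (*state_pending_estimate)(void *opaque,
                                       uint64_t *must_precopy,
                                       uint64_t *can_postcopy);
        /* Exact count: may sync with the guest (expensive), used
         * when the estimate gets close to the threshold. */
        void (*state_pending_exact)(void *opaque,
                                    uint64_t *must_precopy,
                                    uint64_t *can_postcopy);
    } SaveVMHandlersStub;

For every device but ram, both members can simply point at the same function.
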
2023-02-06  migration: No save_live_pending() method uses the QEMUFile parameter  (Juan Quintela, 6 files, -7/+7)
So remove it everywhere.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

2023-02-06  migration: Fix migration crash when target psize larger than host  (Peter Xu, 1 file, -2/+19)
Commit d9e474ea56 overlooked the case where the target psize is even larger than the host psize. One example is Alpha, which has an 8K page size: migration starts to crash the source QEMU when running an Alpha migration on x86.

Fix it by detecting that case and setting host start/end to just cover the single page to be migrated. This also slightly optimizes the common case where the host psize equals the guest psize, since we don't even need to do the roundups, but that's trivial.

Cc: qemu-stable@nongnu.org
Reported-by: Thomas Huth <thuth@redhat.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1456
Fixes: d9e474ea56 ("migration: Teach PSS about host page")
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

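A sketch of the clamping described above (assumes power-of-two page sizes; illustrative code, not the patch itself):

    #include <stdint.h>

    static void host_range_for_page(uint64_t page, uint64_t target_psize,
                                    uint64_t host_psize,
                                    uint64_t *start, uint64_t *end)
    {
        uint64_t addr = page * target_psize;

        if (target_psize >= host_psize) {
            /* Target page already spans whole host pages (e.g. an 8K
             * Alpha guest on a 4K x86 host): no rounding needed, just
             * cover the single page being migrated. */
            *start = addr;
            *end = addr + target_psize;
        } else {
            /* Host page is the bigger unit: round the range out to
             * host-page boundaries as before. */
            *start = addr & ~(host_psize - 1);
            *end = (addr + target_psize + host_psize - 1)
                   & ~(host_psize - 1);
        }
    }
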
2023-02-04  migration: Move the QMP command from monitor/ to migration/  (Markus Armbruster, 1 file, -0/+30)
This moves the command from MAINTAINERS sections "Human Monitor (HMP)" and "QMP" to "Migration".

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20230124121946.1139465-19-armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>

2023-02-04  migration: Move HMP commands from monitor/ to migration/  (Markus Armbruster, 2 files, -0/+808)
This moves these commands from MAINTAINERS sections "Human Monitor (HMP)" and "QMP" to "Migration".

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20230124121946.1139465-18-armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>

2023-01-20  include/block: Untangle inclusion loops  (Markus Armbruster, 3 files, -0/+3)
We have two inclusion loops:

    block/block.h -> block/block-global-state.h -> block/block-common.h
                  -> block/blockjob.h -> block/block.h
    block/block.h -> block/block-io.h -> block/block-common.h
                  -> block/blockjob.h -> block/block.h

I believe these go back to Emanuele's reorganization of the block API, merged a few months ago in commit d7e2fe4aac8. Fortunately, breaking them is merely a matter of deleting unnecessary includes from headers, and adding them back in places where they are now missing.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221221133551.3967339-2-armbru@redhat.com>

2022-12-15  Merge tag 'next-8.0-pull-request' of https://gitlab.com/juan.quintela/qemu into staging  (Peter Maydell, 8 files, -448/+419)
Migration patches for 8.0

Hi

These are the patches that I had to drop from the last PULL request because they weren't fixes:
- AVX2 is dropped; Intel posted a fix, I have to redo it
- The fix for out-of-order channels is out; Daniel nacked it and I need to redo it

# gpg: Signature made Thu 15 Dec 2022 09:38:29 GMT
# gpg: using RSA key 1899FF8EDEBF58CCEE034B82F487EF185872D723
# gpg: Good signature from "Juan Quintela <quintela@redhat.com>" [full]
# gpg:                 aka "Juan Quintela <quintela@trasno.org>" [full]
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723

* tag 'next-8.0-pull-request' of https://gitlab.com/juan.quintela/qemu:
  migration: Drop rs->f
  migration: Remove old preempt code around state maintenance
  migration: Send requested page directly in rp-return thread
  migration: Move last_sent_block into PageSearchStatus
  migration: Make PageSearchStatus part of RAMState
  migration: Add pss_init()
  migration: Introduce pss_channel
  migration: Teach PSS about host page
  migration: Use atomic ops properly for page accountings
  migration: Yield bitmap_mutex properly when sending/sleeping
  migration: Remove RAMState.f references in compression code
  migration: Trivial cleanup save_page_header() on same block check
  migration: Cleanup xbzrle zero page cache update logic
  migration: Add postcopy_preempt_active()
  migration: Take bitmap mutex when completing ram migration
  migration: Export ram_release_page()
  migration: Export ram_transferred_ram()
  multifd: Create page_count fields into both MultiFD{Recv,Send}Params
  multifd: Create page_size fields into both MultiFD{Recv,Send}Params

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-12-15  Merge tag 'pull-misc-2022-12-14' of https://repo.or.cz/qemu/armbru into staging  (Peter Maydell, 2 files, -13/+3)
Miscellaneous patches for 2022-12-14

# gpg: Signature made Wed 14 Dec 2022 15:23:02 GMT
# gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg: issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

* tag 'pull-misc-2022-12-14' of https://repo.or.cz/qemu/armbru:
  ppc4xx_sdram: Simplify sdram_ddr_size() to return
  block/vmdk: Simplify vmdk_co_create() to return directly
  cleanup: Tweak and re-run return_directly.cocci
  io: Tidy up fat-fingered parameter name
  qapi: Use returned bool to check for failure (again)
  sockets: Use ERRP_GUARD() where obviously appropriate
  qemu-config: Use ERRP_GUARD() where obviously appropriate
  qemu-config: Make config_parse_qdict() return bool
  monitor: Use ERRP_GUARD() in monitor_init()
  monitor: Simplify monitor_fd_param()'s error handling
  error: Move ERRP_GUARD() to the beginning of the function
  error: Drop a few superfluous ERRP_GUARD()
  error: Drop some obviously superfluous error_propagate()
  Drop more useless casts from void * to pointer

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-12-15  migration: Drop rs->f  (Peter Xu, 1 file, -8/+4)
Now with rs->pss we can already cache channels in pss->pss_channel. That pss_channel contains more information than rs->f because it's per-channel. So rs->f can be replaced by rs->pss[RAM_CHANNEL_PRECOPY].pss_channel, while rs->f itself is a bit vague now.

Note that vanilla postcopy still sends pages via pss[RAM_CHANNEL_PRECOPY]; that's slightly confusing, but it reflects the reality.

Then, after the replacement we can safely drop rs->f.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2022-12-15  migration: Remove old preempt code around state maintenance  (Peter Xu, 3 files, -297/+3)
With the new code to send pages in the rp-return thread, there's little point in keeping lots of the old code that maintained the preempt state in the migration thread, because the new way should always be faster.

Then, if we'll always send pages in the rp-return thread anyway, we don't need the logic to maintain the preempt state anymore, because we now serialize things using the mutex directly instead of using those fields.

It's very unfortunate to have had that code for only a short period, but it was still one intermediate step before we noticed the next bottleneck on the migration thread. Now the best we can do is drop the unnecessary code, as long as the new code is stable, to reduce the burden. It's actually a good thing because the new "sending page in rp-return thread" model is (IMHO) even cleaner and has better performance.

Remove the old code that was responsible for maintaining preempt states, and at the same time remove the x-postcopy-preempt-break-huge parameter, because with concurrent sender threads we don't really need to break-huge anymore.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2022-12-15  migration: Send requested page directly in rp-return thread  (Peter Xu, 2 files, -16/+131)
With all the facilities ready, send the requested page directly in the rp-return thread rather than queuing it in the request queue, if and only if postcopy preempt is enabled. It can achieve this because it uses a separate channel for sending urgent pages. The only shared data is the bitmap, and it's protected by the bitmap_mutex.

Note that since we're moving the ownership of the urgent channel from the migration thread to the rp thread, the rp thread is also responsible for managing the qemufile, e.g. properly closing it when a migration pause happens. For this, let migration_release_from_dst_file cover shutdown of the urgent channel too, renaming it to migration_release_dst_files() to better show what it does.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2022-12-15  migration: Move last_sent_block into PageSearchStatus  (Peter Xu, 1 file, -30/+41)
Since we use PageSearchStatus to represent a channel, it makes perfect sense to keep last_sent_block (aka, leverage RAM_SAVE_FLAG_CONTINUE) per-channel rather than global, because each channel can be sending different pages on ramblocks. Hence move it from RAMState into PageSearchStatus.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2022-12-15  migration: Make PageSearchStatus part of RAMState  (Peter Xu, 1 file, -51/+61)
We used to allocate the PSS structure on the stack for precopy when sending pages. Make it static, so as to describe per-channel ram migration status.

Here we declare RAM_CHANNEL_MAX instances, preparing for postcopy to use them, even though this patch does not yet start using the 2nd instance.

This should not have any functional change per se, but it already starts to export PSS information via the RAMState, so that e.g. one PSS channel can start to reference the other PSS channel.

Always protect PSS access using the same RAMState.bitmap_mutex. We already do so, so no code change is needed, just some comment updates. Maybe we should consider renaming bitmap_mutex some day, as it's going to be a more commonly used and bigger mutex for ram states, but let's leave that for later.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2022-12-15  migration: Add pss_init()  (Peter Xu, 1 file, -3/+9)
Helper to init PSS structures.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2022-12-15  migration: Introduce pss_channel  (Peter Xu, 1 file, -32/+38)
Introduce pss_channel for PageSearchStatus, defined as "the migration channel to be used to transfer this host page".

We used to have rs->f, which is a mirror of MigrationState.to_dst_file. After the initial version of postcopy preempt, rs->f can be dynamically changed depending on which channel we want to use. But that later work still doesn't grant full concurrency of sending pages in e.g. different threads, because rs->f can either be the PRECOPY channel or the POSTCOPY channel. This needs to be per-thread too.

PageSearchStatus is actually a good piece of struct which we can leverage if we want to have multiple threads sending pages. Sending a single guest page may not make sense, so we make the granule the "host page", and in the PSS structure we allow specifying a QEMUFile* to migrate a specific host page. Then we open the possibility to specify different channels in different threads with different PSS structures.

The PSS prefix can be slightly misleading here, because e.g. for the upcoming usage of the postcopy channel/thread it's not "searching" (or scanning) at all but sending the explicit page that was requested. However, since PSS has existed for some years, keep it as-is until someone complains.

This patch mostly (simply) replaces rs->f with pss->pss_channel. No functional change is intended for this patch yet. But it does prepare to finally drop rs->f, and make ram_save_guest_page() thread safe.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2022-12-15  migration: Teach PSS about host page  (Peter Xu, 1 file, -19/+76)
Migration code has a lot to do with host pages. Teaching the PSS core about the idea of the host page helps a lot and makes the code clean. Meanwhile, this prepares for the future changes that can leverage the new PSS helpers that this patch introduces to send a host page in another thread.

Three more fields are introduced for this:

(1) host_page_sending: this is set to true when QEMU is sending a host page, false otherwise.
(2) host_page_{start|end}: these point to the start/end of the host page we're sending, and they're only valid when host_page_sending==true.

For example, when we look up the next dirty page on the ramblock, with host_page_sending==true, we'll not try to look for anything beyond the current host page boundary. This can be slightly more efficient than the current code, because currently we'll set pss->page to the next dirty bit (which can be over the current host page boundary) and reset it to the host page boundary if we find it goes beyond that.

With the above, we can easily make migration_bitmap_find_dirty() self-contained by updating pss->page properly. The rs* parameter is removed because it's not even used in the old code.

When sending a host page, we should use the pss helpers like this:

- pss_host_page_prepare(pss): called before sending a host page
- pss_within_range(pss): are we still working on the current host page?
- pss_host_page_finish(pss): called after sending a host page

Then we can use ram_save_target_page() to save one small page. Currently ram_save_host_page() is still the only user. If there'll be another function to send host pages (e.g. in the return path thread) in the future, it should follow the same style.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

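A compact stand-alone sketch of the three helpers over a toy PSS (field names mirror the commit message; the logic is illustrative, not the patch):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t page;              /* current target-page index */
        bool host_page_sending;     /* inside a host page right now? */
        uint64_t host_page_start;   /* valid while host_page_sending */
        uint64_t host_page_end;
    } PssStub;

    static void pss_host_page_prepare(PssStub *pss, uint64_t pages_per_host)
    {
        pss->host_page_start = pss->page - (pss->page % pages_per_host);
        pss->host_page_end = pss->host_page_start + pages_per_host;
        pss->host_page_sending = true;
    }

    static bool pss_within_range(const PssStub *pss)
    {
        return pss->page >= pss->host_page_start &&
               pss->page < pss->host_page_end;
    }

    static void pss_host_page_finish(PssStub *pss)
    {
        pss->host_page_sending = false;
        pss->host_page_start = pss->host_page_end = 0;
    }
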
2022-12-15  migration: Use atomic ops properly for page accountings  (Peter Xu, 4 files, -23/+51)
To prepare for thread-safety on page accountings, at least the counters below need to be accessed only atomically:

- ram_counters.transferred
- ram_counters.duplicate
- ram_counters.normal
- ram_counters.postcopy_bytes

There are a lot of other counters, but they won't be accessed outside the migration thread, so they're still safe to access without atomic ops.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

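A sketch with C11 atomics (QEMU itself uses its qatomic_* wrappers; the struct here is a stand-in): once more than one thread accounts pages, plain increments on these counters are data races.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        _Atomic uint64_t transferred;
        _Atomic uint64_t duplicate;       /* zero pages */
        _Atomic uint64_t normal;          /* full pages */
        _Atomic uint64_t postcopy_bytes;
    } RamCountersStub;

    static void account_page(RamCountersStub *c, uint64_t bytes, bool is_zero)
    {
        /* Atomic read-modify-write; safe from any sender thread. */
        if (is_zero) {
            atomic_fetch_add(&c->duplicate, 1);
        } else {
            atomic_fetch_add(&c->normal, 1);
        }
        atomic_fetch_add(&c->transferred, bytes);
    }
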
2022-12-15  migration: Yield bitmap_mutex properly when sending/sleeping  (Peter Xu, 1 file, -11/+35)
Don't take the bitmap mutex when sending pages, or when being throttled by migration_rate_limit() (which is a bit tricky to call here in ram code, but still seems helpful).

It prepares for the possibility of concurrently sending pages in more than one thread using the function ram_save_host_page(): all threads may need the bitmap_mutex to operate on bitmaps, so either sendmsg() or any kind of qemu_sem_wait() blocking in one thread would otherwise keep the others from progressing.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

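A sketch of the unlock-around-blocking-work pattern this describes, with pthreads standing in for QEMU's primitives (all names illustrative):

    #include <pthread.h>

    static pthread_mutex_t bitmap_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void save_one_page_sketch(void (*send_page)(int), int page)
    {
        pthread_mutex_lock(&bitmap_mutex);
        /* ...scan bitmaps, claim a dirty page, clear its bit... */
        pthread_mutex_unlock(&bitmap_mutex);

        /* Blocking I/O runs with the mutex dropped, so other sender
         * threads can keep scanning and claiming pages meanwhile. */
        send_page(page);

        pthread_mutex_lock(&bitmap_mutex);
        /* ...update accounting, pick the next page... */
        pthread_mutex_unlock(&bitmap_mutex);
    }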