path: root/migration/migration.h
Age | Commit message | Author | Files/Lines
2023-07-25 | migration: spelling fixes | Michael Tokarev | 1 file, -2/+2
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> Reviewed-by: Fabiano Rosas <farosas@suse.de>
2023-07-08 | migration: unexport migrate_fd_error() | Laszlo Ersek | 1 file, -1/+0
The only migrate_fd_error() call sites are in "migration/migration.c", which is also where we define migrate_fd_error(). Make the function static, and remove its declaration from "migration/migration.h". Cc: Juan Quintela <quintela@redhat.com> (maintainer:Migration) Cc: Leonardo Bras <leobras@redhat.com> (reviewer:Migration) Cc: Peter Xu <peterx@redhat.com> (reviewer:Migration) Cc: qemu-trivial@nongnu.org Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2018404 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Michael Tokarev <mjt@tls.msk.ru> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-06-30 | vfio/migration: Reset bytes_transferred properly | Avihai Horon | 1 file, -0/+1
Currently, VFIO bytes_transferred is not reset properly:
1. bytes_transferred is not reset after a VM snapshot (so a migration following a snapshot will report an incorrect value).
2. bytes_transferred is a single counter for all VFIO devices, however upon migration failure it is reset multiple times, once by each VFIO device.
Fix it by introducing a new function vfio_reset_bytes_transferred() and calling it during migration and snapshot start. Remove the existing bytes_transferred reset in the VFIO migration state notifier, which is not needed anymore.
Fixes: 3710586caa5d ("qapi: Add VFIO devices migration stats in Migration stats")
Signed-off-by: Avihai Horon <avihaih@nvidia.com> Reviewed-by: Cédric Le Goater <clg@redhat.com> Reviewed-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Cédric Le Goater <clg@redhat.com>
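A minimal sketch of the idea, with simplified surroundings (only the shared counter and vfio_reset_bytes_transferred() come from the commit text; everything else is illustrative):

    #include <stdint.h>

    /* Single counter shared by all VFIO devices. */
    static uint64_t bytes_transferred;

    /* Reset exactly once when an outgoing migration or snapshot starts,
     * instead of once per device in the migration state notifier. */
    void vfio_reset_bytes_transferred(void)
    {
        bytes_transferred = 0;
    }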
2023-06-30 | migration: Implement switchover ack logic | Avihai Horon | 1 file, -0/+14
Implement switchover ack logic. This prevents the source from stopping the VM and completing the migration until an ACK is received from the destination that it's OK to do so. To achieve this, a new SaveVMHandlers handler switchover_ack_needed() and a new return path message MIG_RP_MSG_SWITCHOVER_ACK are added. The switchover_ack_needed() handler is called during migration setup in the destination to check if switchover ack is used by the migrated device. When switchover is approved by all migrated devices in the destination that support this capability, the MIG_RP_MSG_SWITCHOVER_ACK return path message is sent to the source to notify it that it's OK to do switchover. Signed-off-by: Avihai Horon <avihaih@nvidia.com> Reviewed-by: Peter Xu <peterx@redhat.com> Tested-by: YangHang Liu <yanghliu@redhat.com> Acked-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Cédric Le Goater <clg@redhat.com>
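The destination-side bookkeeping described above could look roughly like this; a sketch with simplified stand-in types, where only switchover_ack_needed() and MIG_RP_MSG_SWITCHOVER_ACK are names taken from the commit text:

    /* Each device whose switchover_ack_needed() handler returns true bumps a
     * pending counter during setup; when every such device later approves,
     * the counter hits zero and MIG_RP_MSG_SWITCHOVER_ACK is sent back to
     * the source. Types and helpers here are simplified stand-ins. */
    typedef struct MigIncoming {
        int switchover_ack_pending;   /* devices that still have to approve */
    } MigIncoming;

    static void send_switchover_ack(void)
    {
        /* stand-in for sending the return-path message to the source */
    }

    void switchover_ack_setup(MigIncoming *mis, int devices_needing_ack)
    {
        mis->switchover_ack_pending = devices_needing_ack;
    }

    void device_approved_switchover(MigIncoming *mis)
    {
        if (--mis->switchover_ack_pending == 0) {
            send_switchover_ack();   /* source may now stop the VM and switch over */
        }
    }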
2023-06-02 | migration: switch from .vm_was_running to .vm_old_state | Vladimir Sementsov-Ogievskiy | 1 file, -3/+6
No logic change here, only refactoring. That's a preparation for the next commit, where we finally restore the stopped VM state on migration failure or cancellation. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-Id: <20230517123752.21615-5-vsementsov@yandex-team.ru> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-05-18 | migration: split migration_incoming_co | Vladimir Sementsov-Ogievskiy | 1 file, -1/+8
Originally, migration_incoming_co was introduced by 25d0c16f625feb3b6 "migration: Switch to COLO process after finishing loadvm" to be able to enter, from COLO code, one specific yield point added by that same commit. Later, in 923709896b1b0 "migration: poll the cm event for destination qemu", we reused this variable to wake the migration incoming coroutine from RDMA code. That was a doubtful idea. Entering coroutines is a very fragile thing: you should be absolutely sure which yield point you are going to enter. I don't know how safe it is to enter during qemu_loadvm_state(), which I think is what RDMA wants to do; but RDMA certainly shouldn't enter the special COLO-related yield point. Likewise, COLO code doesn't want to enter during qemu_loadvm_state(); it wants to enter its own specific yield point. Also, when 8e48ac95865ac97d "COLO: Add block replication into colo process" added the bdrv_invalidate_cache_all() call (now it's called activate_all()), it became possible to enter the migration incoming coroutine during that call, which is wrong too. So, let's make these things separate and disjoint: loadvm_co for RDMA, non-NULL during qemu_loadvm_state(), and colo_incoming_co for COLO, non-NULL only around its specific yield.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-Id: <20230515130640.46035-3-vsementsov@yandex-team.ru> Signed-off-by: Juan Quintela <quintela@redhat.com>
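The separation can be pictured with a small sketch; loadvm_co and colo_incoming_co are the names from the commit, while the coroutine type and wake helper are simplified stand-ins:

    typedef struct Coroutine Coroutine;

    static void wake(Coroutine *co)
    {
        (void)co;   /* stand-in for the real coroutine wake-up call */
    }

    typedef struct MigIncoming {
        Coroutine *loadvm_co;        /* non-NULL only while qemu_loadvm_state() runs */
        Coroutine *colo_incoming_co; /* non-NULL only around the COLO-specific yield */
    } MigIncoming;

    /* RDMA side: may only wake the loadvm yield point, and only if it exists. */
    void rdma_kick_incoming(MigIncoming *mis)
    {
        if (mis->loadvm_co) {
            wake(mis->loadvm_co);
        }
    }

    /* COLO side: may only wake its own dedicated yield point. */
    void colo_kick_incoming(MigIncoming *mis)
    {
        if (mis->colo_incoming_co) {
            wake(mis->colo_incoming_co);
        }
    }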
2023-05-10 | migration: drop colo_incoming_thread from MigrationIncomingState | Vladimir Sementsov-Ogievskiy | 1 file, -2/+0
have_colo_incoming_thread variable is unused. colo_incoming_thread can be local. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Zhang Chen <chen.zhang@intel.com> Message-Id: <20230428194928.1426370-6-vsementsov@yandex-team.ru> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-04-27 | multifd: Only flush once each full round of memory | Juan Quintela | 1 file, -2/+1
We need to add a new flag that means to flush at that point. Notice that we still flush at the end of the setup and complete stages. Signed-off-by: Juan Quintela <quintela@redhat.com> Acked-by: Peter Xu <peterx@redhat.com>
---
Add missing qemu_fflush(); now it passes all tests, always. In the previous version, the check that changes the default value to false got lost in some rebase. Get it back.
2023-04-27 | multifd: Create property multifd-flush-after-each-section | Juan Quintela | 1 file, -0/+12
We used to flush all channels at the end of each RAM section sent. That is not needed, so prepare to only flush after a full iteration through all the RAM. The default value of the property is false, but we return "true" in migrate_multifd_flush_after_each_section() until we implement the code in the following patches. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Acked-by: Peter Xu <peterx@redhat.com>
---
Rename each-iteration to after-each-section. Rename multifd-sync-after-each-section to multifd-flush-after-each-section. Move to machine-8.0 (peter)
2023-04-27 | migration: Move migrate_use_tls() to options.c | Juan Quintela | 1 file, -2/+0
Once there, rename it to migrate_tls() and make it return bool for consistency. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> --- Fix typos found by fabiano
2023-04-24 | migration: Move migrate_postcopy() to options.c | Juan Quintela | 1 file, -2/+0
Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Fabiano Rosas <farosas@suse.de>
2023-04-24 | migration: Create migrate_max_cpu_throttle() | Juan Quintela | 1 file, -2/+0
Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Fabiano Rosas <farosas@suse.de>
2023-04-24 | migration: Move migrate_use_block_incremental() to option.c | Juan Quintela | 1 file, -1/+0
To be consistent with every other parameter, rename to migrate_block_incremental(). Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-24 | migration: Move parameters functions to option.c | Juan Quintela | 1 file, -11/+0
Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-24 | migration: Move migrate_use_return() to options.c | Juan Quintela | 1 file, -1/+0
Once it is there, rename the function to migrate_return_path() to be consistent with all other capabilities. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-24 | migration: Move migrate_use_block() to options.c | Juan Quintela | 1 file, -1/+0
Once it is there, rename the function to migrate_block() to be consistent with all other capabilities. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-24 | migration: Move migrate_use_xbzrle() to options.c | Juan Quintela | 1 file, -1/+0
Once it is there, rename the function to migrate_xbzrle() to be consistent with all other capabilities. We also change the return type to bool for consistency. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-24 | migration: Move migrate_use_zero_copy_send() to options.c | Juan Quintela | 1 file, -5/+0
Once it is there, rename the function to migrate_zero_copy_send() to be consistent with all other capabilities. We can remove the CONFIG_LINUX guard; we already check in migrate_caps_check() that this capability can't be set up. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-24 | migration: Move migrate_use_multifd() to options.c | Juan Quintela | 1 file, -1/+0
Once it is there, rename the function to migrate_multifd() to be consistent with all other capabilities. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-24 | migration: Move migrate_use_events() to options.c | Juan Quintela | 1 file, -1/+0
Once it is there, rename the function to migrate_events() to be consistent with all other capabilities. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-24 | migration: Move migrate_use_compression() to options.c | Juan Quintela | 1 file, -1/+0
Once it is there, rename the function to migrate_compress() to be consistent with all other capabilities. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-24 | migration: Move migrate_colo_enabled() to options.c | Juan Quintela | 1 file, -1/+0
Once it is there, rename the function to migrate_colo() to be consistent with all other capabilities. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-24 | migration: Create options.c | Juan Quintela | 1 file, -12/+0
We move all the capability helpers from migration.c there. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
Following David's advice:
- Looked through the history: capabilities are newer than 2012, so we can remove that bit of the header.
- This part is posterior to Anthony. The original author is Orit. Once there, I put myself. Peter Xu also did quite a bit of work here. Anyone else who wants/needs to be there? I didn't search too hard, because nobody asked before to be added. What do you think?
2023-04-24 | migration: rename enabled_capabilities to capabilities | Juan Quintela | 1 file, -1/+1
It is clear from the context what that means, and such a long name, together with the extra-long names of the capabilities, makes it very difficult to stay inside the 80-column limit. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-12 | migration: Recover behavior of preempt channel creation for pre-7.2 | Peter Xu | 1 file, -0/+7
In the 8.0 devel window we reworked preempt channel creation in commit 5655aab079, so that there would be no race condition when the migration channel and the preempt channel get established in the wrong order. However, no one noticed that the change is also not compatible with older QEMUs, mainly the 7.1/7.2 versions where preempt mode started to be supported. Leverage the same pre-7.2 flag introduced in the previous patch to recover the behavior, hopefully before 8.0 releases, so we don't break migration when we migrate from 8.0 to older qemu binaries. Fixes: 5655aab079 ("migration: Postpone postcopy preempt channel to be after main") Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-04-12 | migration: Fix potential race on postcopy_qemufile_src | Peter Xu | 1 file, -1/+33
The postcopy_qemufile_src object should be owned by one thread, either the main thread (e.g. at the beginning, or at the end of migration), or the return path thread (during a preempt-enabled postcopy migration). If that's not the case, access to the object might be racy.

postcopy_preempt_shutdown_file() can be potentially racy, because it's called in the end phase of migration on the main thread, while the return path thread hasn't yet been recycled; the recycle happens in await_return_path_close_on_source(), which is after this point. It means that, logically, it's possible the main thread and the return path thread are both operating on the same qemufile, and I don't think qemufile is thread safe at all.

postcopy_preempt_shutdown_file() used to be needed because that's where we send EOS to the dest so that the dest can safely shut down the preempt thread. To avoid the possible race, remove this only place where a race can happen, and instead figure out another way to safely close the preempt thread on the dest.

The core idea during postcopy on deciding "when to stop" is that the dest will send a postcopy SHUT message to the src, telling the src that all data is there. Hence, to shut down the dest preempt thread, it may be better to do it directly on the dest node. This patch proposes such a way: change postcopy_prio_thread_created into PreemptThreadStatus, so that we kick the preempt thread on the dest qemu with the sequence:

  mis->preempt_thread_status = PREEMPT_THREAD_QUIT;
  qemu_file_shutdown(mis->postcopy_qemufile_dst);

Here shutdown() is probably the easiest way so far to kick the preempt thread out of a blocked qemu_get_be64(). The thread then reads preempt_thread_status to make sure it's not a network failure but a willingness to quit the thread. We could have avoided that extra status and just relied on the migration status, but the problem is that postcopy_ram_incoming_cleanup() is called early enough that we're still in POSTCOPY_ACTIVE no matter what, so just keep it simple and introduce the status.

One flag, x-preempt-pre-7-2, is added to keep the old pre-7.2 behaviors of postcopy preempt.
Fixes: 9358982744 ("migration: Send requested page directly in rp-return thread")
Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
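A sketch of the dest-side quit protocol described above; the enum and the two-step kick follow the commit text, while a plain socket stands in for the QEMUFile and qemu_file_shutdown():

    #include <stdbool.h>
    #include <sys/socket.h>

    typedef enum {
        PREEMPT_THREAD_NONE = 0,
        PREEMPT_THREAD_CREATED,
        PREEMPT_THREAD_QUIT,
    } PreemptThreadStatus;

    typedef struct MigIncoming {
        PreemptThreadStatus preempt_thread_status;
        int postcopy_preempt_fd;   /* stand-in for postcopy_qemufile_dst */
    } MigIncoming;

    /* Main loading thread, once the postcopy SHUT handshake says all data is in: */
    void kick_preempt_thread(MigIncoming *mis)
    {
        mis->preempt_thread_status = PREEMPT_THREAD_QUIT;
        shutdown(mis->postcopy_preempt_fd, SHUT_RDWR);   /* unblocks the preempt thread's read */
    }

    /* Preempt thread, when its blocking read fails: was this a requested quit
     * or a real network failure? */
    bool preempt_thread_should_quit(const MigIncoming *mis)
    {
        return mis->preempt_thread_status == PREEMPT_THREAD_QUIT;
    }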
2023-02-11 | migration: Postpone postcopy preempt channel to be after main | Peter Xu | 1 file, -0/+6
Postcopy with preempt-mode enabled needs two channels to communicate. The order of channel establishment is not guaranteed. It can happen that the dest QEMU got the preempt channel connection request before the main channel is established, then the migration may make no progress even during precopy due to the wrong order. To fix it, create the preempt channel only if we know the main channel is established. For a general postcopy migration, we delay it until postcopy_start(), that's where we already went through some part of precopy on the main channel. To make sure dest QEMU has already established the channel, we wait until we got the first PONG received. That's something we do at the start of precopy when postcopy enabled so it's guaranteed to happen sooner or later. For a postcopy recovery, we delay it to qemu_savevm_state_resume_prepare() where we'll have round trips of data on bitmap synchronizations, which means the main channel must have been established. Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-02-11 | migration: Add a semaphore to count PONGs | Peter Xu | 1 file, -0/+6
This is mostly useless, but useful for us to know whether the main channel is correctly established without changing the migration protocol. Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
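The pattern is simply a counting semaphore posted once per PONG; illustrated below with a POSIX semaphore standing in for QEMU's own primitives (names are illustrative):

    #include <semaphore.h>

    static sem_t rp_pong_acks;

    void migration_init_pong_counter(void)
    {
        sem_init(&rp_pong_acks, 0, 0);
    }

    /* return-path thread: called for every PONG received from the destination */
    void on_pong_received(void)
    {
        sem_post(&rp_pong_acks);
    }

    /* migration thread: block until at least one PONG proves the main
     * channel is correctly established */
    void wait_for_main_channel(void)
    {
        sem_wait(&rp_pong_acks);
    }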
2023-02-11 | migration: Rework multi-channel checks on URI | Peter Xu | 1 file, -3/+0
The whole idea of multi-channel checks was not properly done, IMHO. Currently we check multi-channel in a lot of places, but actually that's not needed, because we only need to check it right after we get the URI, and that should be it.

If the URI check succeeds, we should never need to check it again, because we must have it. If the check fails, we should fail immediately in either qmp_migrate or qmp_migrate_incoming, instead of failing later after the connection is established. Neither should we fail any set-capabilities call like we used to do here:
5ad15e8614 ("migration: allow enabling mutilfd for specific protocol only", 2021-10-19)
because logically the URI will only be set later, after the capability is set, so it doesn't make a lot of sense to check the URI type when setting the capability: we're checking the cap against an old URI passed in, which may not even be the URI we're going to use later.

This patch mostly reverts all such previous checks, dropping the variable migrate_allow_multi_channels and its helpers. Instead, add a common helper to check the URI for multi-channels for both qmp_migrate and qmp_migrate_incoming, and that should do all the proper checks. The failure will only trigger with the "migrate" or "migrate_incoming" command, or when the user specified "-incoming xxx" where "xxx" is not "defer".
Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-02-06 | migration/savevm: Prepare vmdesc json writer in qemu_savevm_state_setup() | David Hildenbrand | 1 file, -0/+4
... and store it in the migration state. This is a preparation for storing selected vmds's already in qemu_savevm_state_setup(). Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-12-15 | migration: Remove old preempt code around state maintenance | Peter Xu | 1 file, -7/+0
With the new code to send pages in the rp-return thread, there's little point in keeping lots of the old code that maintains the preempt state in the migration thread, because the new way should always be faster. If we'll always send pages in the rp-return thread anyway, we don't need the logic to maintain preempt state anymore, because now we serialize things using the mutex directly instead of using those fields.

It's unfortunate to have carried that code for only a short period, but it was still one intermediate step before we noticed the next bottleneck on the migration thread. The best we can do now is drop the unnecessary code, as the new code is stable, to reduce the burden. It's actually a good thing, because the new "sending page in rp-return thread" model is (IMHO) even cleaner and has better performance.

Remove the old code that was responsible for maintaining preempt states, and in the meantime also remove the x-postcopy-preempt-break-huge parameter, because with concurrent sender threads we don't really need to break huge pages anymore.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2022-07-20 | migration: Add property x-postcopy-preempt-break-huge | Peter Xu | 1 file, -0/+7
Add a property field that can conditionally disable the "break sending huge page" behavior in postcopy preemption. By default it's enabled. It should only be used for debugging purposes, and we should never remove the "x-" prefix. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Manish Mishra <manish.mishra@nutanix.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220707185511.27366-1-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-20 | migration: Create the postcopy preempt channel asynchronously | Peter Xu | 1 file, -0/+7
This patch allows the postcopy preempt channel to be created asynchronously. The benefit is that when the connection is slow, we won't take the BQL (and potentially block all things like QMP) for a long time without releasing it.

A function postcopy_preempt_wait_channel() is introduced, allowing the migration thread to wait on the channel creation. The channel is always created by the main thread, in which we kick a new semaphore to tell the migration thread that the channel has been created.

We'll need to wait for the new channel in two places: (1) when there's a new postcopy migration starting, or (2) when there's a postcopy migration to resume. For the start of migration, we don't need to wait for this channel until we want to start postcopy, aka postcopy_start(). We fail the migration if we find that the channel creation failed (which should probably not happen at all in 99% of the cases, because the main channel uses the same network topology). For a postcopy recovery, we wait in postcopy_pause(); in that case, if the channel creation failed, we can't fail the migration or we'd crash the VM, so instead we stay in the PAUSED state, waiting for yet another recovery.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Manish Mishra <manish.mishra@nutanix.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220707185509.27311-1-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
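The asynchronous-create pattern, sketched with pthreads and a POSIX semaphore standing in for QEMU's channel and semaphore APIs; only postcopy_preempt_wait_channel() is a name from the commit, the rest is illustrative:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdbool.h>

    static sem_t preempt_channel_ready;
    static bool  preempt_channel_ok;

    /* Runs off the main thread so channel creation never holds up the BQL. */
    static void *create_preempt_channel(void *arg)
    {
        (void)arg;
        /* ... connect the extra postcopy socket here (may be slow) ... */
        preempt_channel_ok = true;           /* or false on connection error */
        sem_post(&preempt_channel_ready);    /* wake whoever is waiting below */
        return NULL;
    }

    void postcopy_preempt_setup(void)
    {
        pthread_t t;
        sem_init(&preempt_channel_ready, 0, 0);
        pthread_create(&t, NULL, create_preempt_channel, NULL);
        pthread_detach(t);
    }

    /* Migration thread, at postcopy_start() or in postcopy_pause(): */
    bool postcopy_preempt_wait_channel(void)
    {
        sem_wait(&preempt_channel_ready);
        return preempt_channel_ok;   /* caller fails the migration or stays PAUSED */
    }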
2022-07-20 | migration: Postcopy recover with preempt enabled | Peter Xu | 1 file, -0/+19
To allow postcopy recovery, the ram fast load (preempt-only) dest QEMU thread needs similar handling on fault tolerance. When ram_load_postcopy() fails, instead of stopping the thread it halts with a semaphore, preparing to be kicked again when recovery is detected. A mutex is introduced to make sure there's no concurrent operation upon the socket. To make it simple, the fast ram load thread will take the mutex during its whole procedure, and only release it if it's paused. The fast-path socket will be properly released by the main loading thread safely when there's network failures during postcopy with that mutex held. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220707185506.27257-1-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-20 | migration: Postcopy preemption enablement | Peter Xu | 1 file, -1/+1
This patch enables the postcopy-preempt feature. It contains two major changes to the migration logic:
(1) Postcopy requests are now sent via a different socket from the precopy background migration stream, so as to be isolated from very high page request delays.
(2) For huge-page-enabled hosts: when there are postcopy requests, they can now intercept a partial sending of huge host pages on the src QEMU.

After this patch, we live migrate a VM with two channels for postcopy: (1) the PRECOPY channel, which is the default channel that transfers background pages; and (2) the POSTCOPY channel, which only transfers requested pages. There's no strict rule on which channel to use; e.g., if a requested page is already being transferred on the precopy channel, then we keep using the same precopy channel to transfer the page even if it's explicitly requested. In 99% of the cases we prioritize the channels so that we send requested pages via the postcopy channel as long as possible.

On the source QEMU, when we find a postcopy request, we interrupt the PRECOPY channel sending process and quickly switch to the POSTCOPY channel. After we have serviced all the high-priority postcopy pages, we switch back to the PRECOPY channel so that we continue to send the interrupted huge page. There's no new thread introduced on the src QEMU. On the destination QEMU, one new thread is introduced to receive page data from the postcopy-specific socket (done in the preparation patch).

This patch has a side effect: previously, after sending postcopy pages, we would assume the guest will access the following pages and keep sending from there. Now that's changed: instead of going on with a postcopy-requested page, we go back and continue sending the precopy huge page (which can be intercepted by a postcopy request, so the huge page can be sent partially before). Whether that's a problem is debatable, because "assuming the guest will continue to access the next page" may not really suit huge pages, especially large ones (e.g. 1GB pages), so that locality hint is mostly meaningless when huge pages are used.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220707185504.27203-1-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-20 | migration: Postcopy preemption preparation on channel creation | Peter Xu | 1 file, -0/+8
Create a new socket for postcopy, to be prepared to send postcopy-requested pages via this specific channel, so as not to get blocked by precopy pages. A new thread is also created on the dest qemu to receive data from this new channel, based on the ram_load_postcopy() routine. The ram_load_postcopy(POSTCOPY) branch and the thread have not started to function yet; that'll be done in follow-up patches. Clean up the new sockets on both src/dst QEMUs, and meanwhile look after the new thread too to make sure it'll be recycled properly. Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220707185502.27149-1-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> dgilbert: With Peter's fix to quieten compiler warning on start_migration
2022-07-20 | migration: Add postcopy-preempt capability | Peter Xu | 1 file, -0/+1
Firstly, postcopy already preempts precopy due to the fact that we do unqueue_page() first before looking into the dirty bits. However, that's not enough: e.g., when host huge pages are enabled, a postcopy request needs to wait until the whole huge page currently being sent is finished. That can introduce quite some delay; the bigger the huge page, the larger the delay it'll bring.

This patch adds a new capability to allow postcopy requests to preempt an existing precopy page during the sending of a huge page, so that postcopy requests can be serviced even faster. Meanwhile, to send it even faster, bypass the precopy stream by providing a standalone postcopy socket for sending requested pages.

Since the new behavior will not be compatible with the old behavior, this will not be the default; it's enabled only when the new capability is set on both src/dst QEMUs. This patch only adds the capability itself; the logic will be added in follow-up patches.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220707185342.26794-2-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-05-16 | migration: Add migrate_use_tls() helper | Leonardo Bras | 1 file, -0/+1
A lot of places check parameters.tls_creds in order to evaluate if TLS is in use, and sometimes call migrate_get_current() just for that test. Add new helper function migrate_use_tls() in order to simplify testing for TLS usage. Signed-off-by: Leonardo Bras <leobras@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Message-Id: <20220513062836.965425-6-leobras@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-05-16 | migration: Add zero-copy-send parameter for QMP/HMP for Linux | Leonardo Bras | 1 file, -0/+5
Add a property that allows zero-copy migration of memory pages on the sending side, and also include a helper function migrate_use_zero_copy_send() to check if it's enabled. No code is introduced to actually do the migration, but it allows future implementations to enable/disable this feature. On non-Linux builds this parameter is compiled out. Signed-off-by: Leonardo Bras <leobras@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Acked-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20220513062836.965425-5-leobras@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-04-21 | migration: Allow migrate-recover to run multiple times | Peter Xu | 1 file, -1/+0
Previously migration didn't have an easy way to clean up the listening transport, and migrate recovery was only allowed to execute once. That was done with a trick flag, postcopy_recover_triggered. Now the facility is already there: drop postcopy_recover_triggered and instead allow a new migrate-recover to release the previous listener transport. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220331150857.74406-8-peterx@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-04-21 | migration: Move migrate_allow_multifd and helpers into migration.c | Peter Xu | 1 file, -0/+3
This variable, along with its helpers, is used to detect whether multiple channels will be supported for migration. In follow-up patches, there'll be other capabilities that require multi-channels. Hence move it outside the multifd-specific code and make it public. Meanwhile rename it from "multifd" to "multi_channels" to show its real meaning. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220331150857.74406-5-peterx@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-03-02 | migration: Add migration_incoming_transport_cleanup() | Peter Xu | 1 file, -0/+1
Add a helper to clean up the transport listener. When doing it, we should also null-ify the cleanup hook and its data; then it's even safe to call it multiple times. Move the socket_address_list cleanup along with it, because that's a mirror of the listener channels and exists only for the purpose of query-migrate. Hence when someone wants to clean up the listener transport, they should always want to clean up the socket list too. No functional change intended. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220301083925.33483-15-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
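An idempotent cleanup helper along the lines described above might look like this (simplified types; only the function name comes from the commit):

    #include <stddef.h>

    typedef struct MigIncoming {
        void (*transport_cleanup)(void *data);
        void *transport_data;
    } MigIncoming;

    void migration_incoming_transport_cleanup(MigIncoming *mis)
    {
        if (mis->transport_cleanup) {
            mis->transport_cleanup(mis->transport_data);
            mis->transport_data = NULL;      /* safe to call again */
            mis->transport_cleanup = NULL;
        }
    }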
2022-03-02 | migration: Move static var in ram_block_from_stream() into global | Peter Xu | 1 file, -1/+2
A static variable is very unfriendly to threading of ram_block_from_stream(). Move it into MigrationIncomingState, and make the incoming state pointer be passed to ram_block_from_stream() at both call sites. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220301083925.33483-8-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
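The refactoring pattern, as a sketch with simplified names (the real function also reads the block name from the migration stream; that part is omitted here):

    typedef struct RAMBlock RAMBlock;

    /* Before: ram_block_from_stream() kept a function-local
     *     static RAMBlock *block;
     * as a "same block as last packet" cache, which breaks as soon as more
     * than one thread loads pages. After: the cache lives in the incoming
     * state that every caller already holds. */
    typedef struct MigIncoming {
        RAMBlock *last_recv_block;
    } MigIncoming;

    void remember_last_block(MigIncoming *mis, RAMBlock *rb)
    {
        mis->last_recv_block = rb;       /* replaces the old static variable */
    }

    RAMBlock *last_block(const MigIncoming *mis)
    {
        return mis->last_recv_block;
    }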
2022-03-02 | migration: Add postcopy_thread_create() | Peter Xu | 1 file, -3/+5
Postcopy creates threads. A common pattern is that we init a sem and use it to sync with the thread. Namely, we have fault_thread_sem and listen_thread_sem, and they're only used for this. Make it a shared infrastructure so it's easier to create yet another thread. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220301083925.33483-7-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
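The shared create-and-sync helper can be sketched with pthreads and a POSIX semaphore standing in for QEMU's thread and semaphore wrappers (names and structure are illustrative):

    #include <pthread.h>
    #include <semaphore.h>

    typedef struct {
        sem_t     ready;     /* replaces the per-thread fault/listen sems */
        pthread_t thread;
    } PostcopyThread;

    /* Example thread body: signal the creator once setup is done. */
    void *fault_thread(void *opaque)
    {
        PostcopyThread *pt = opaque;
        /* ... per-thread setup ... */
        sem_post(&pt->ready);          /* tell the creator we are up */
        /* ... main loop ... */
        return NULL;
    }

    /* Shared helper: create the thread and block until it reports ready. */
    void postcopy_thread_create(PostcopyThread *pt, void *(*fn)(void *))
    {
        sem_init(&pt->ready, 0, 0);
        pthread_create(&pt->thread, NULL, fn, pt);
        sem_wait(&pt->ready);
    }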
2022-03-02 | migration: Introduce postcopy channels on dest node | Peter Xu | 1 file, -1/+35
Postcopy handles huge pages in a special way: currently we can only have one "channel" to transfer a page. That's because when we install pages using UFFDIO_COPY, we need to have the whole huge page ready; it also means we need a temp huge page when trying to receive the whole content of the page.

Currently all the maintenance around this tmp page is global: first we allocate a temp huge page, then we maintain its status mostly within ram_load_postcopy(). To enable multiple channels for postcopy, the first thing we need to do is prepare N temp huge pages as caching, one for each channel, and maintain the tmp huge page status per channel too.

To give some examples, here are some local variables maintained in ram_load_postcopy() that are responsible for maintaining temp huge page status:
- all_zero: keeps whether this huge page contains all zeros
- target_pages: counts how many target pages have been copied
- host_page: keeps the host ptr for the page to install
Move all these fields together with the temp huge pages to form a new structure called PostcopyTmpPage. Then for each (future) postcopy channel we need one structure to keep the state around. For vanilla postcopy there's obviously only one channel; it contains both precopy and postcopy pages.

This patch teaches the dest migration node to start to realize the possible number of postcopy channels by introducing the "postcopy_channels" variable. Its value is calculated when setting up postcopy on the dest node (during the POSTCOPY_LISTEN phase). Vanilla postcopy will have channels=1, but when the postcopy-preempt capability is enabled (in the future), we will boost it to 2, because even during partial sending of a precopy huge page we still want to preempt it and start sending the postcopy-requested page right away (so we start to keep two temp huge pages; more if we want to enable multifd). In this patch there's a TODO marked for that; so far the number of channels is always set to 1.

We need to send one "host huge page" on one channel only, and we cannot split them, because otherwise the data of the same huge page could be located on more than one channel, and we'd need more complicated logic to manage that. One temp host huge page for each channel will be enough for us for now. Postcopy will still always use the index=0 huge page even after this patch; however, it prepares for the later patches where it can start to use multiple channels (which needs src intervention, because only the src knows which channel we should use).
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220301083925.33483-5-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> dgilbert: Fixed up long line
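A sketch of the per-channel temp-page state; the field names mirror the ones listed in the commit text, while the allocation details are simplified stand-ins:

    #include <stdbool.h>
    #include <stdlib.h>

    typedef struct PostcopyTmpPage {
        bool   all_zero;       /* does this huge page contain only zeros so far?   */
        size_t target_pages;   /* how many target pages have been copied into it   */
        void  *tmp_huge_page;  /* staging buffer for the whole huge page           */
        void  *host_addr;      /* where the page will be installed via UFFDIO_COPY */
    } PostcopyTmpPage;

    typedef struct MigIncoming {
        unsigned int     postcopy_channels;   /* 1 for vanilla postcopy, 2 with preempt */
        PostcopyTmpPage *postcopy_tmp_pages;  /* one entry per channel */
    } MigIncoming;

    int postcopy_setup_tmp_pages(MigIncoming *mis, unsigned int channels,
                                 size_t huge_page_size)
    {
        mis->postcopy_channels = channels;
        mis->postcopy_tmp_pages = calloc(channels, sizeof(*mis->postcopy_tmp_pages));
        if (!mis->postcopy_tmp_pages) {
            return -1;
        }
        for (unsigned int i = 0; i < channels; i++) {
            mis->postcopy_tmp_pages[i].all_zero = true;
            mis->postcopy_tmp_pages[i].tmp_huge_page = malloc(huge_page_size);
        }
        return 0;
    }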
2021-11-03 | migration: provide an error message to migration_cancel() | Laurent Vivier | 1 file, -1/+1
This avoids calling migrate_get_current() in the caller function, since migration_cancel() already needs the pointer to the current migration state. Signed-off-by: Laurent Vivier <lvivier@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2021-07-26 | migration: Make from_dst_file accesses thread-safe | Peter Xu | 1 file, -3/+5
Accessing from_dst_file is potentially racy in the current code base, as below:

  if (s->from_dst_file)
      do_something(s->from_dst_file);

because from_dst_file can be reset right after the check by another thread (rp_thread). One example is migrate_fd_cancel(). Use the same qemu_file_lock to protect it too, just like to_dst_file. When it's safe to access without the lock, comment it. There's one special reference in migration_thread() that can be replaced by the newly introduced rp_thread_created flag.
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Lukas Straub <lukasstraub2@web.de> Message-Id: <20210722175841.938739-3-peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> with Peter's fixup
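The locking pattern, sketched with a pthread mutex standing in for the existing qemu_file_lock: check and use the pointer under the same lock, rather than testing a pointer another thread may clear (types and the callback are simplified stand-ins):

    #include <pthread.h>

    typedef struct QEMUFile QEMUFile;

    typedef struct MigState {
        pthread_mutex_t qemu_file_lock;
        QEMUFile       *from_dst_file;   /* may be reset to NULL by the rp_thread */
    } MigState;

    void shutdown_return_path(MigState *s, void (*do_shutdown)(QEMUFile *))
    {
        pthread_mutex_lock(&s->qemu_file_lock);
        if (s->from_dst_file) {          /* check and use under the same lock */
            do_shutdown(s->from_dst_file);
        }
        pthread_mutex_unlock(&s->qemu_file_lock);
    }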
2021-07-26 | migration: Fix missing join() of rp_thread | Peter Xu | 1 file, -0/+7
It's possible that the migration thread skips the join() of the rp_thread in the race below and crashes on the src right when finishing migration:

  migration_thread                    rp_thread
  ----------------                    ---------
  migration_completion()
                                      (before rp_thread quits)
                                      from_dst_file=NULL
                                      [thread got scheduled out]
  s->rp_state.from_dst_file==NULL
    (skip join() of rp_thread)
  migrate_fd_cleanup()
    qemu_fclose(s->to_dst_file)
    yank_unregister_instance()
      assert(yank_find_entry())  <------- crash

It could mostly happen with postcopy, but that shouldn't be required; e.g., I think it could also trigger with MIGRATION_CAPABILITY_RETURN_PATH set. It's suspected that the above race could be the root cause of a recent (but rare) migration-test break reported by either Dave or PMM:
https://lore.kernel.org/qemu-devel/YPamXAHwan%2FPPXLf@work-vm/

The issue is: from_dst_file is reset in the rp_thread, so if the thread resets it to NULL fast enough, the migration thread will assume there's no rp_thread at all. This could potentially cause a more severe issue (e.g. crash) after the yank code. Fix it by using a boolean to track "whether we've created rp_thread".
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20210722175841.938739-2-peterx@redhat.com> Reviewed-by: Lukas Straub <lukasstraub2@web.de> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
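The fix boils down to tracking thread creation explicitly; a sketch with pthread stand-ins, where rp_thread_created is the field named in the commit and the rest is illustrative:

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct RpState {
        pthread_t thread;
        bool      rp_thread_created;   /* set by the creator, cleared after join */
    } RpState;

    void open_return_path(RpState *rp, void *(*fn)(void *), void *arg)
    {
        pthread_create(&rp->thread, NULL, fn, arg);
        rp->rp_thread_created = true;
    }

    void await_return_path_close(RpState *rp)
    {
        if (rp->rp_thread_created) {   /* not: if (from_dst_file) */
            pthread_join(rp->thread, NULL);
            rp->rp_thread_created = false;
        }
    }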
2021-06-08 | migration: Add cleanup hook for inwards migration | Dr. David Alan Gilbert | 1 file, -0/+4
Add a cleanup hook for incoming migration that gets called at the end as a way for a transport to allow cleanup. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Message-Id: <20210421112834.107651-4-dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2021-05-14 | Merge remote-tracking branch 'remotes/thuth-gitlab/tags/pull-request-2021-05-14' into staging | Peter Maydell | 1 file, -0/+2
* Replace YAML anchors by extends in the gitlab-CI yaml files
* Many small qtest fixes (e.g. to fix issues discovered by Coverity)
* Poison more config switches in common code
* Fix the failing Travis-CI and Cirrus-CI tasks

# gpg: Signature made Fri 14 May 2021 12:17:39 BST
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg:                 aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg:                 aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg:                 aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5

* remotes/thuth-gitlab/tags/pull-request-2021-05-14:
  cirrus.yml: Fix the MSYS2 task
  pc-bios/s390-ccw: Fix inline assembly for older versions of Clang
  tests/qtest/migration-test: Use g_autofree to avoid leaks on error paths
  configure: Poison all current target-specific #defines
  migration: Move populate_vfio_info() into a separate file
  include/sysemu: Poison all accelerator CONFIG switches in common code
  tests: Avoid side effects inside g_assert() arguments
  tests/qtest/rtc-test: Remove pointless NULL check
  tests/qtest/tpm-util.c: Free memory with correct free function
  tests/migration-test: Fix "true" vs true
  tests/qtest/npcm7xx_pwm-test.c: Avoid g_assert_true() for non-test assertions
  tests/qtest/ahci-test.c: Calculate iso_size with 64-bit arithmetic
  util/compatfd.c: Replaced a malloc call with g_malloc.
  libqtest: refuse QTEST_QEMU_BINARY=qemu-kvm
  docs/devel/qgraph: add troubleshooting information
  libqos/qgraph: fix "UNAVAILBLE" typo
  gitlab-ci: Replace YAML anchors by extends (native_test_job)
  gitlab-ci: Replace YAML anchors by extends (native_build_job)
  gitlab-ci: Replace YAML anchors by extends (container_job)
  tests/docker/dockerfiles: Add ccache to containers where it was missing

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>