author     Laurent Vivier <lvivier@redhat.com>            2020-06-17 13:31:54 +0200
committer  Dr. David Alan Gilbert <dgilbert@redhat.com>   2020-06-17 17:48:39 +0100
commit     7e89a1401a9674c9882948f05f4d17ea7be1c4eb (patch)
tree       1090373a6623a9362ba18b80f0c8ff349ff39339 /migration
parent     6bcd361a52e73889d2123033ce48450289a1933e (diff)
migration: fix multifd_send_pages() next channel
multifd_send_pages() loops over the available channels; the next channel to use between two calls to multifd_send_pages() is stored in a local static variable, next_channel.

This works well, except when the number of channels decreases between two calls to multifd_send_pages(). In that case, the loop can try to access the data of a channel that no longer exists.

The problem can be triggered by starting a migration with a given number of channels, then cancelling the migration and restarting it with a lower number. This generally ends with an error like:

  qemu-system-ppc64: .../util/qemu-thread-posix.c:77: qemu_mutex_lock_impl: Assertion `mutex->initialized' failed.

This patch fixes the error by capping next_channel with the current number of channels before using it.

Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20200617113154.593233-1-lvivier@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
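To illustrate the pattern being fixed, here is a standalone sketch (not QEMU code; demo_pick_channel, demo_num_channels and MAX_CHANNELS are hypothetical names): a static round-robin index survives across runs, so it must be clamped against the current channel count before it is used to index the channel array.

```c
#include <stdio.h>

/* Hypothetical channel table; in QEMU this corresponds to
 * multifd_send_state->params[] sized by migrate_multifd_channels(). */
#define MAX_CHANNELS 8
static int demo_num_channels = MAX_CHANNELS;

static int demo_pick_channel(void)
{
    /* Survives across calls, and across a cancel/restart with fewer
     * channels -- just like next_channel in multifd_send_pages(). */
    static int next_channel;

    /*
     * Without this cap, next_channel could still hold an index from a
     * previous run that used more channels, and the caller would index
     * past the end of the now-smaller channel array.
     */
    next_channel %= demo_num_channels;

    int picked = next_channel;
    next_channel = (next_channel + 1) % demo_num_channels;
    return picked;
}

int main(void)
{
    /* First "migration" with 8 channels walks the index up. */
    for (int i = 0; i < 7; i++) {
        demo_pick_channel();
    }

    /* "Cancel" and restart with only 2 channels: without the modulo cap,
     * the stale index 7 would be used against a 2-entry array. */
    demo_num_channels = 2;
    printf("picked channel %d of %d\n", demo_pick_channel(), demo_num_channels);
    return 0;
}
```

In QEMU the out-of-range index shows up as the assertion above, because the code ends up locking a mutex of a channel slot that was never (re)initialized for the new, smaller channel set.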
Diffstat (limited to 'migration')
-rw-r--r--  migration/multifd.c | 6 ++++++
1 file changed, 6 insertions(+), 0 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index 5a3e4d0..d044120 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -415,6 +415,12 @@ static int multifd_send_pages(QEMUFile *f)
     }
     qemu_sem_wait(&multifd_send_state->channels_ready);
+    /*
+     * next_channel can remain from a previous migration that was
+     * using more channels, so ensure it doesn't overflow if the
+     * limit is lower now.
+     */
+    next_channel %= migrate_multifd_channels();
     for (i = next_channel;; i = (i + 1) % migrate_multifd_channels()) {
         p = &multifd_send_state->params[i];