author    | Peter Xu <peterx@redhat.com>        | 2022-01-19 16:09:19 +0800
committer | Juan Quintela <quintela@redhat.com> | 2022-01-28 15:38:23 +0100
commit    | cfd66f30fb0f735df06ff4220e5000290a43dad3
tree      | 1c60c70a0404aa963b75c2c73e8783b51d9ae54f /migration/trace-events
parent    | a1fe28df7547120bc3ac8bc4c3d1565d4cf7905e
migration: Simplify unqueue_page()
This patch simplifies unqueue_page() on both sides: the function itself and its caller.
Firstly, right after unqueue_page() returns true we will definitely send a huge page
(see the ram_save_huge_page() call - it never exits before it finishes sending that
huge page), so unqueue_page() does not need to step through the request in small-page
increments when huge pages are enabled on the ramblock. In other words, only the first
4K page of a request is ever useful; on the 2nd+ unqueue we would only find that the
whole huge page has already been sent. Operating on whole huge pages instead removes
many redundant unqueue_page() iterations.
Meanwhile, drop the dirty check. Calling test_bit() on every unqueue to skip clean
pages is not helpful, because ram_save_host_page() already does that, and in a faster
way (see commit ba1b7c812c ("migration/ram: Optimize ram_save_host_page()",
2021-05-13)). So that check is not necessary either.
Drop the two tracepoints along the way - based on the analysis above it is very likely
that nobody is really using them.
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
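To make the argument above concrete, here is a minimal, self-contained C sketch of the behavioural change. It is an illustration only, not the actual migration/ram.c code: PageRequest, unqueue_small() and unqueue_block_sized() are hypothetical stand-ins. It contrasts consuming a queued hugepage request in 4K steps with consuming it in ramblock page-size steps.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SMALL_PAGE_SIZE 4096ULL   /* stand-in for the 4K target page size */

/* Hypothetical stand-in for one queued postcopy page request. */
typedef struct {
    uint64_t offset;   /* current offset into the ramblock */
    uint64_t len;      /* bytes still queued for this request */
} PageRequest;

/*
 * Old behaviour: the request is consumed in 4K steps, so a 2MB hugepage
 * request is unqueued 512 times even though the first unqueue already
 * causes the whole hugepage to be sent.
 */
static bool unqueue_small(PageRequest *req, uint64_t *offset)
{
    if (req->len == 0) {
        return false;
    }
    *offset = req->offset;
    req->offset += SMALL_PAGE_SIZE;
    req->len -= SMALL_PAGE_SIZE;
    return true;
}

/*
 * New behaviour: the request is consumed in ramblock page-size steps,
 * so a hugepage request is unqueued exactly once.
 */
static bool unqueue_block_sized(PageRequest *req, uint64_t *offset,
                                uint64_t block_page_size)
{
    if (req->len == 0) {
        return false;
    }
    *offset = req->offset;
    req->offset += block_page_size;
    req->len -= block_page_size;
    return true;
}

int main(void)
{
    const uint64_t huge = 2 * 1024 * 1024;          /* 2MB hugepage ramblock */
    PageRequest a = { 0, huge }, b = { 0, huge };
    uint64_t off;
    int small = 0, big = 0;

    while (unqueue_small(&a, &off)) {
        small++;
    }
    while (unqueue_block_sized(&b, &off, huge)) {
        big++;
    }
    printf("small-page unqueues: %d, hugepage unqueues: %d\n", small, big);
    return 0;
}

Compiled and run as-is, this prints 512 small-page unqueues versus 1 hugepage unqueue for the same 2MB request, which is the redundancy the patch removes.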
Diffstat (limited to 'migration/trace-events')
-rw-r--r-- | migration/trace-events | 3
1 file changed, 1 insertion, 2 deletions
diff --git a/migration/trace-events b/migration/trace-events
index 171a83a..48aa7b1 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -86,8 +86,6 @@ put_qlist_end(const char *field_name, const char *vmsd_name) "%s(%s)"
 qemu_file_fclose(void) ""
 
 # ram.c
-get_queued_page(const char *block_name, uint64_t tmp_offset, unsigned long page_abs) "%s/0x%" PRIx64 " page_abs=0x%lx"
-get_queued_page_not_dirty(const char *block_name, uint64_t tmp_offset, unsigned long page_abs) "%s/0x%" PRIx64 " page_abs=0x%lx"
 migration_bitmap_sync_start(void) ""
 migration_bitmap_sync_end(uint64_t dirty_pages) "dirty_pages %" PRIu64
 migration_bitmap_clear_dirty(char *str, uint64_t start, uint64_t size, unsigned long page) "rb %s start 0x%"PRIx64" size 0x%"PRIx64" page 0x%lx"
@@ -113,6 +111,7 @@ ram_save_iterate_big_wait(uint64_t milliconds, int iterations) "big wait: %" PRI
 ram_load_complete(int ret, uint64_t seq_iter) "exit_code %d seq iteration %" PRIu64
 ram_write_tracking_ramblock_start(const char *block_id, size_t page_size, void *addr, size_t length) "%s: page_size: %zu addr: %p length: %zu"
 ram_write_tracking_ramblock_stop(const char *block_id, size_t page_size, void *addr, size_t length) "%s: page_size: %zu addr: %p length: %zu"
+unqueue_page(char *block, uint64_t offset, bool dirty) "ramblock '%s' offset 0x%"PRIx64" dirty %d"
 
 # multifd.c
 multifd_new_send_channel_async(uint8_t id) "channel %u"
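For context: each line in migration/trace-events is turned by QEMU's tracetool into a trace_<name>() helper, so the entry added above is what makes a trace_unqueue_page() call available to migration/ram.c. The self-contained sketch below hand-writes roughly what such a helper reduces to when the simple "log" trace backend is enabled, using the exact format string from the diff. It is an illustration of the mechanism under that assumption, not the code tracetool actually generates, and the "pc.ram" argument is only an example value.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hand-written approximation of the generated trace_unqueue_page() helper:
 * format the arguments with the string declared in trace-events and write
 * the record to stderr, which is roughly what the "log" backend does.
 */
static void trace_unqueue_page(char *block, uint64_t offset, bool dirty)
{
    fprintf(stderr,
            "unqueue_page " "ramblock '%s' offset 0x%"PRIx64" dirty %d" "\n",
            block, offset, dirty);
}

int main(void)
{
    /* Example values only: a ramblock name, a hugepage-aligned offset,
     * and the dirty-bit state of that page. */
    trace_unqueue_page((char *)"pc.ram", 0x200000, true);
    return 0;
}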