author     Jitendra Kolhe <jitendra.kolhe@hpe.com>   2017-02-24 09:01:43 +0530
committer  Paolo Bonzini <pbonzini@redhat.com>       2017-03-14 13:26:36 +0100
commit     1e356fc14beaa3ece6c0e961bd479af58be3198b
tree       aa40ec3ef455a7e166f61df797a28ee8cd9c7934 /backends/hostmem.c
parent     c0d9f7d0bcedeaa65d5c984fbe0d351e1402eab5
mem-prealloc: reduce large guest start-up and migration time.
Using "-mem-prealloc" option for a large guest leads to higher guest
start-up and migration time. This is because with "-mem-prealloc" option
qemu tries to map every guest page (create address translations), and
make sure the pages are available during runtime. virsh/libvirt by
default, seems to use "-mem-prealloc" option in case the guest is
configured to use huge pages. The patch tries to map all guest pages
simultaneously by spawning multiple threads. Currently limiting the
change to QEMU library functions on POSIX compliant host only, as we are
not sure if the problem exists on win32. Below are some stats with
"-mem-prealloc" option for guest configured to use huge pages.
------------------------------------------------------------------------
Idle Guest | Start-up time | Migration time
------------------------------------------------------------------------
Guest stats with 2M HugePage usage - single threaded (existing code)
------------------------------------------------------------------------
64 Core - 4TB | 54m11.796s | 75m43.843s
64 Core - 1TB | 8m56.576s | 14m29.049s
64 Core - 256GB | 2m11.245s | 3m26.598s
------------------------------------------------------------------------
Guest stats with 2M HugePage usage - map guest pages using 8 threads
------------------------------------------------------------------------
64 Core - 4TB | 5m1.027s | 34m10.565s
64 Core - 1TB | 1m10.366s | 8m28.188s
64 Core - 256GB | 0m19.040s | 2m10.148s
-----------------------------------------------------------------------
Guest stats with 2M HugePage usage - map guest pages using 16 threads
-----------------------------------------------------------------------
64 Core - 4TB | 1m58.970s | 31m43.400s
64 Core - 1TB | 0m39.885s | 7m55.289s
64 Core - 256GB | 0m11.960s | 2m0.135s
-----------------------------------------------------------------------
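For illustration, here is a minimal, self-contained sketch of the parallel
preallocation idea: the region is split into per-thread chunks and each
thread writes one byte per page so the kernel faults the pages in, with the
thread count capped at min(online CPUs, 16, smp_cpus) as in the v3 note
further down. All names here (PageRange, touch_pages, prealloc_parallel)
are hypothetical; this is only an approximation of the approach, not QEMU's
actual os_mem_prealloc() implementation.

#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

#define MAX_MEM_PREALLOC_THREADS 16

typedef struct {
    char *addr;          /* start of this thread's chunk */
    size_t numpages;     /* number of pages in the chunk */
    size_t pagesize;     /* page size used for the mapping */
} PageRange;

/* Fault in every page of the chunk by writing its first byte back. */
static void *touch_pages(void *arg)
{
    PageRange *r = arg;

    for (size_t i = 0; i < r->numpages; i++) {
        volatile char *p = r->addr + i * r->pagesize;
        *p = *p;   /* forces the page to be mapped without changing it */
    }
    return NULL;
}

/* Touch all pages of [area, area + numpages * pagesize) using up to
 * min(online CPUs, 16, smp_cpus) threads. Returns 0 on success. */
static int prealloc_parallel(char *area, size_t numpages, size_t pagesize,
                             int smp_cpus)
{
    long online = sysconf(_SC_NPROCESSORS_ONLN);
    int nthreads = MAX_MEM_PREALLOC_THREADS;

    if (online > 0 && online < nthreads) {
        nthreads = (int)online;
    }
    if (smp_cpus > 0 && smp_cpus < nthreads) {
        nthreads = smp_cpus;
    }

    pthread_t tid[MAX_MEM_PREALLOC_THREADS];
    PageRange range[MAX_MEM_PREALLOC_THREADS];
    size_t per_thread = numpages / nthreads;

    for (int i = 0; i < nthreads; i++) {
        range[i].addr = area + (size_t)i * per_thread * pagesize;
        range[i].numpages = (i == nthreads - 1)
                            ? numpages - (size_t)i * per_thread
                            : per_thread;
        range[i].pagesize = pagesize;
        if (pthread_create(&tid[i], NULL, touch_pages, &range[i])) {
            return -1;
        }
    }
    for (int i = 0; i < nthreads; i++) {
        pthread_join(tid[i], NULL);
    }
    return 0;
}

int main(void)
{
    size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);
    size_t numpages = 1024;
    char *area = calloc(numpages, pagesize);

    if (!area || prealloc_parallel(area, numpages, pagesize, 8)) {
        return 1;
    }
    free(area);
    return 0;
}

Splitting the work by page count rather than byte count keeps each thread's
share proportional to the number of page faults it triggers, which is what
dominates the start-up times in the table above.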
Changed in v2:
- modify the number of memset threads spawned to min(smp_cpus, 16).
- removed 64GB memory restriction for spawning memset threads.
Changed in v3:
- limit the number of threads spawned to
  min(sysconf(_SC_NPROCESSORS_ONLN), 16, smp_cpus).
- implement a memset-thread-specific siglongjmp in the SIGBUS signal
  handler.
Changed in v4:
- remove sigsetjmp/siglongjmp and SIGBUS unblock/block for the main thread,
  as the main thread no longer touches any pages.
- simplify the code by returning the memset_thread_failed status from
  touch_all_pages (see the sketch after this changelog).
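As a rough sketch of the error-handling scheme described in the v3/v4 notes
above (hypothetical names, written against plain pthreads rather than
QEMU's threading helpers): each memset thread installs its own sigsetjmp
target, the SIGBUS handler longjmps back into the faulting thread, and
touch_all_pages() only hands back the accumulated failure flag, so the main
thread never needs its own sigsetjmp.

#include <pthread.h>
#include <setjmp.h>
#include <signal.h>
#include <stdbool.h>
#include <stddef.h>

static __thread sigjmp_buf memset_env;   /* one jump buffer per thread */
static bool memset_thread_failed;        /* set by any failing thread  */

/* Runs in the thread that touched the bad page, so memset_env refers to
 * that thread's own jump buffer. */
static void sigbus_handler(int sig)
{
    (void)sig;
    siglongjmp(memset_env, 1);
}

struct MemsetArgs {
    char *addr;
    size_t numpages;
    size_t pagesize;
};

static void *do_touch_pages(void *arg)
{
    struct MemsetArgs *a = arg;

    if (sigsetjmp(memset_env, 1)) {
        /* We got here via siglongjmp() from the SIGBUS handler. */
        memset_thread_failed = true;
    } else {
        for (size_t i = 0; i < a->numpages; i++) {
            volatile char *p = a->addr + i * a->pagesize;
            *p = *p;
        }
    }
    return NULL;
}

/* Spawn the memset threads, wait for them, and just report the status;
 * the main thread itself never touches guest pages. */
static bool touch_all_pages(struct MemsetArgs *args, int nthreads)
{
    struct sigaction act = { .sa_handler = sigbus_handler };
    struct sigaction oldact;
    pthread_t tid[nthreads];

    sigemptyset(&act.sa_mask);
    sigaction(SIGBUS, &act, &oldact);

    for (int i = 0; i < nthreads; i++) {
        pthread_create(&tid[i], NULL, do_touch_pages, &args[i]);
    }
    for (int i = 0; i < nthreads; i++) {
        pthread_join(tid[i], NULL);
    }

    sigaction(SIGBUS, &oldact, NULL);
    return memset_thread_failed;
}

int main(void)
{
    size_t pagesize = 4096;
    size_t numpages = 256;
    static char buf[256 * 4096];
    struct MemsetArgs args[2] = {
        { buf, numpages / 2, pagesize },
        { buf + (numpages / 2) * pagesize, numpages / 2, pagesize },
    };

    return touch_all_pages(args, 2) ? 1 : 0;
}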
Signed-off-by: Jitendra Kolhe <jitendra.kolhe@hpe.com>
Message-Id: <1487907103-32350-1-git-send-email-jitendra.kolhe@hpe.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'backends/hostmem.c')
-rw-r--r--   backends/hostmem.c   4
1 file changed, 2 insertions, 2 deletions
diff --git a/backends/hostmem.c b/backends/hostmem.c
index 7f5de70..162c218 100644
--- a/backends/hostmem.c
+++ b/backends/hostmem.c
@@ -224,7 +224,7 @@ static void host_memory_backend_set_prealloc(Object *obj, bool value,
         void *ptr = memory_region_get_ram_ptr(&backend->mr);
         uint64_t sz = memory_region_size(&backend->mr);
 
-        os_mem_prealloc(fd, ptr, sz, &local_err);
+        os_mem_prealloc(fd, ptr, sz, smp_cpus, &local_err);
         if (local_err) {
             error_propagate(errp, local_err);
             return;
@@ -328,7 +328,7 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
          */
         if (backend->prealloc) {
             os_mem_prealloc(memory_region_get_fd(&backend->mr), ptr, sz,
-                            &local_err);
+                            smp_cpus, &local_err);
             if (local_err) {
                 goto out;
             }
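Note that the diffstat above is limited to backends/hostmem.c; the callers
now pass smp_cpus, which implies the os_mem_prealloc() declaration on the
POSIX side gains an extra parameter. A hedged guess at the post-patch
prototype, inferred from the call sites rather than shown in this diff
(Error comes from QEMU's error API):

/* Assumed post-patch prototype: smp_cpus lets os_mem_prealloc() cap the
 * number of page-touching threads it spawns. */
void os_mem_prealloc(int fd, char *area, size_t memory, int smp_cpus,
                     Error **errp);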