author     Jan Kiszka <jan.kiszka@siemens.com>      2015-06-18 18:47:23 +0200
committer  Paolo Bonzini <pbonzini@redhat.com>      2015-07-01 15:45:51 +0200
commit     4b8523ee896750c37b4fa224a40d34703cbdf4c6
tree       1083d1e6c59b33c68808094d4b6844ddf935c9f0 /target-ppc
parent     4840f10eff37eebc609fcc933ab985dc66df95c6
kvm: First step to push iothread lock out of inner run loop
This opens the path to getting rid of the iothread lock on vmexits in KVM
mode. On x86, the in-kernel irqchip has to be used because otherwise we
would need to synchronize accesses to the APIC and other per-CPU state
that can change concurrently.

Regarding the pre/post-run callbacks, s390x and ARM should be fine without
specific locking, as their callbacks are empty. MIPS and POWER require
locking for the pre-run callback.

The handle_exit callback is non-empty on x86, POWER and s390. Some POWER
cases could do without the locking, but it is left in place for now.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1434646046-27150-7-git-send-email-pbonzini@redhat.com>
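For context, the direction of the series can be pictured with the sketch below. This is a condensed, illustrative rendering of the vCPU run loop, not the verbatim kvm-all.c change: qemu_mutex_lock_iothread()/qemu_mutex_unlock_iothread(), kvm_vcpu_ioctl() and the kvm_arch_* hooks are the real QEMU interfaces, but the helper name and control flow are simplified here.

```c
/*
 * Simplified sketch (not the verbatim kvm-all.c code): the iothread lock
 * is dropped around the inner run loop, so KVM_RUN and the cheap parts of
 * the exit path execute without it.  Arch hooks that still touch state
 * protected by the lock (e.g. env->irq_input_state on POWER, see the diff
 * below) take and release it themselves.
 */
static int kvm_cpu_exec_sketch(CPUState *cpu)
{
    struct kvm_run *run = cpu->kvm_run;
    int ret;

    qemu_mutex_unlock_iothread();              /* leave the lock outside the loop */

    do {
        kvm_arch_pre_run(cpu, run);            /* POWER/MIPS: lock taken inside */
        ret = kvm_vcpu_ioctl(cpu, KVM_RUN, 0); /* enter the guest */
        kvm_arch_post_run(cpu, run);

        if (ret < 0) {
            break;                             /* ioctl error or pending signal */
        }
        ret = kvm_arch_handle_exit(cpu, run);  /* x86/POWER/s390: lock taken inside */
    } while (ret == 0);

    qemu_mutex_lock_iothread();                /* re-acquire before returning */
    return ret;
}
```

The target-ppc hunks in this commit, shown below, are the POWER side of that contract.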
Diffstat (limited to 'target-ppc')
-rw-r--r--  target-ppc/kvm.c | 7
1 file changed, 7 insertions, 0 deletions
```diff
diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index afb4696..ddf469f 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -1242,6 +1242,8 @@ void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
     int r;
     unsigned irq;
 
+    qemu_mutex_lock_iothread();
+
     /* PowerPC QEMU tracks the various core input pins (interrupt, critical
      * interrupt, reset, etc) in PPC-specific env->irq_input_state. */
     if (!cap_interrupt_level &&
@@ -1269,6 +1271,8 @@ void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
     /* We don't know if there are more interrupts pending after this. However,
      * the guest will return to userspace in the course of handling this one
      * anyways, so we will get a chance to deliver the rest. */
+
+    qemu_mutex_unlock_iothread();
 }
 
 MemTxAttrs kvm_arch_post_run(CPUState *cs, struct kvm_run *run)
@@ -1570,6 +1574,8 @@ int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
     CPUPPCState *env = &cpu->env;
     int ret;
 
+    qemu_mutex_lock_iothread();
+
     switch (run->exit_reason) {
     case KVM_EXIT_DCR:
         if (run->dcr.is_write) {
@@ -1620,6 +1626,7 @@ int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
         break;
     }
 
+    qemu_mutex_unlock_iothread();
     return ret;
 }
```
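For contrast with the POWER hooks above: on architectures where a callback is empty (the commit message cites s390x and ARM for pre/post-run), no bracketing is needed at all. A minimal, illustrative no-op hook, not copied from any particular target:

```c
/* Illustrative only: an empty hook touches nothing protected by the
 * iothread lock, so it needs no qemu_mutex_lock_iothread()/unlock pair. */
void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
{
}
```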