path: root/cpus.c
Age  Commit message  Author  Files  Lines
2017-10-24  tcg: enable multiple TCG contexts in softmmu (Emilio G. Cota) [1 file changed, +2/-0]

This enables parallel TCG code generation. However, we do not take advantage of it yet since tb_lock is still held during tb_gen_code. In user-mode we use a single TCG context; see the documentation added to tcg_region_init for the rationale.

Note that targets do not need any conversion: targets initialize a TCGContext (e.g. defining TCG globals), and after this initialization has finished, the context is cloned by the vCPU threads, each of them keeping a separate copy. TCG threads claim one entry in tcg_ctxs[] by atomically increasing n_tcg_ctxs. Do not be too annoyed by the subsequent atomic_read's of that variable and tcg_ctxs; they are there just to play nice with analysis tools such as thread sanitizer.

Note that we do not allocate an array of contexts (we allocate an array of pointers instead) because when tcg_context_init is called, we do not know yet how many contexts we'll use since the bool behind qemu_tcg_mttcg_enabled() isn't set yet.

Previous patches folded some TCG globals into TCGContext. The non-const globals remaining are only set at init time, i.e. before the TCG threads are spawned. Here is a list of these set-at-init-time globals under tcg/:

Only written by tcg_context_init:
- indirect_reg_alloc_order
- tcg_op_defs

Only written by tcg_target_init (called from tcg_context_init):
- tcg_target_available_regs
- tcg_target_call_clobber_regs
- arm: arm_arch, use_idiv_instructions
- i386: have_cmov, have_bmi1, have_bmi2, have_lzcnt, have_movbe, have_popcnt
- mips: use_movnz_instructions, use_mips32_instructions, use_mips32r2_instructions, got_sigill (tcg_target_detect_isa)
- ppc: have_isa_2_06, have_isa_3_00, tb_ret_addr
- s390: tb_ret_addr, s390_facilities
- sparc: qemu_ld_trampoline, qemu_st_trampoline (build_trampolines), use_vis3_instructions

Only written by tcg_prologue_init:
- 'struct jit_code_entry one_entry'
- aarch64: tb_ret_addr
- arm: tb_ret_addr
- i386: tb_ret_addr, guest_base_flags
- ia64: tb_ret_addr
- mips: tb_ret_addr, bswap32_addr, bswap32u_addr, bswap64_addr

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
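As a sketch of the slot-claiming scheme described above (using the tcg_ctxs[]/n_tcg_ctxs/tcg_init_ctx names from the commit message; the real tcg_register_thread also wires up per-thread region state, so treat this as illustrative):

    /* Sketch: a vCPU thread clones the init context and claims a slot. */
    void tcg_register_thread(void)
    {
        TCGContext *s = g_memdup(&tcg_init_ctx, sizeof(*s));
        unsigned int n;

        /* atomic_fetch_inc hands each thread a unique index */
        n = atomic_fetch_inc(&n_tcg_ctxs);
        atomic_set(&tcg_ctxs[n], s);

        tcg_ctx = s; /* thread-local pointer used during code generation */
    }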
2017-10-24  tcg: introduce regions to split code_gen_buffer (Emilio G. Cota) [1 file changed, +12/-0]

This is groundwork for supporting multiple TCG contexts.

The naive solution here is to split code_gen_buffer statically among the TCG threads; this however results in poor utilization if translation needs are different across TCG threads.

What we do here is to add an extra layer of indirection, assigning regions that act just like pages do in virtual memory allocation. (BTW if you are wondering about the chosen naming, I did not want to use blocks or pages because those are already heavily used in QEMU).

We use a global lock to serialize allocations as well as statistics reporting (we now export the size of the used code_gen_buffer with tcg_code_size()). Note that for the allocator we could just use a counter and atomic_inc; however, that would complicate the gathering of tcg_code_size()-like stats. So given that the region operations are not a fast path, a lock seems the most reasonable choice.

The effectiveness of this approach is clear after seeing some numbers. I used the bootup+shutdown of debian-arm with '-tb-size 80' as a benchmark. Note that I'm evaluating this after enabling per-thread TCG (which is done by a subsequent commit).

* -smp 1, 1 region (entire buffer):
    qemu: flush code_size=83885014 nb_tbs=154739 avg_tb_size=357
    qemu: flush code_size=83884902 nb_tbs=153136 avg_tb_size=363
    qemu: flush code_size=83885014 nb_tbs=152777 avg_tb_size=364
    qemu: flush code_size=83884950 nb_tbs=150057 avg_tb_size=373
    qemu: flush code_size=83884998 nb_tbs=150234 avg_tb_size=373
    qemu: flush code_size=83885014 nb_tbs=154009 avg_tb_size=360
    qemu: flush code_size=83885014 nb_tbs=151007 avg_tb_size=370
    qemu: flush code_size=83885014 nb_tbs=151816 avg_tb_size=367

That is, 8 flushes.

* -smp 8, 32 regions (80/32 MB per region) [i.e. this patch]:
    qemu: flush code_size=76328008 nb_tbs=141040 avg_tb_size=356
    qemu: flush code_size=75366534 nb_tbs=138000 avg_tb_size=361
    qemu: flush code_size=76864546 nb_tbs=140653 avg_tb_size=361
    qemu: flush code_size=76309084 nb_tbs=135945 avg_tb_size=375
    qemu: flush code_size=74581856 nb_tbs=132909 avg_tb_size=375
    qemu: flush code_size=73927256 nb_tbs=135616 avg_tb_size=360
    qemu: flush code_size=78629426 nb_tbs=142896 avg_tb_size=365
    qemu: flush code_size=76667052 nb_tbs=138508 avg_tb_size=368

Again, 8 flushes. Note how buffer utilization is not 100%, but it is close. Smaller region sizes would yield higher utilization, but we want region allocation to be rare (it acquires a lock), so we do not want to go too small.

* -smp 8, static partitioning of 8 regions (10 MB per region):
    qemu: flush code_size=21936504 nb_tbs=40570 avg_tb_size=354
    qemu: flush code_size=11472174 nb_tbs=20633 avg_tb_size=370
    qemu: flush code_size=11603976 nb_tbs=21059 avg_tb_size=365
    qemu: flush code_size=23254872 nb_tbs=41243 avg_tb_size=377
    qemu: flush code_size=28289496 nb_tbs=52057 avg_tb_size=358
    qemu: flush code_size=43605160 nb_tbs=78896 avg_tb_size=367
    qemu: flush code_size=45166552 nb_tbs=82158 avg_tb_size=364
    qemu: flush code_size=63289640 nb_tbs=116494 avg_tb_size=358
    qemu: flush code_size=51389960 nb_tbs=93937 avg_tb_size=362
    qemu: flush code_size=59665928 nb_tbs=107063 avg_tb_size=372
    qemu: flush code_size=38380824 nb_tbs=68597 avg_tb_size=374
    qemu: flush code_size=44884568 nb_tbs=79901 avg_tb_size=376
    qemu: flush code_size=50782632 nb_tbs=90681 avg_tb_size=374
    qemu: flush code_size=39848888 nb_tbs=71433 avg_tb_size=372
    qemu: flush code_size=64708840 nb_tbs=119052 avg_tb_size=359
    qemu: flush code_size=49830008 nb_tbs=90992 avg_tb_size=362
    qemu: flush code_size=68372408 nb_tbs=123442 avg_tb_size=368
    qemu: flush code_size=33555560 nb_tbs=59514 avg_tb_size=378
    qemu: flush code_size=44748344 nb_tbs=80974 avg_tb_size=367
    qemu: flush code_size=37104248 nb_tbs=67609 avg_tb_size=364

That is, 20 flushes. Note how a static partitioning approach uses the code buffer poorly, leading to many unnecessary flushes.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
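A minimal sketch of the lock-serialized allocator argued for above; the field and helper names (region.lock, region.current, region.n, tcg_region_assign) are illustrative rather than the exact ones in tcg.c:

    /* Sketch: hand out the next region under a global lock. Returns true
     * when the buffer is exhausted, i.e. the caller must flush all TBs. */
    static bool tcg_region_alloc(TCGContext *s)
    {
        bool exhausted;

        qemu_mutex_lock(&region.lock);
        exhausted = (region.current == region.n);
        if (!exhausted) {
            /* point s->code_gen_buffer at the next free slice */
            tcg_region_assign(s, region.current++);
        }
        qemu_mutex_unlock(&region.lock);
        return exhausted;
    }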
2017-09-22  memory: Get rid of address_space_init_shareable (Alexey Kardashevskiy) [1 file changed, +3/-2]

Since FlatViews are shared now and ASes are not, this gets rid of address_space_init_shareable(). This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-17-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-07-13  Convert error_report() to warn_report() (Alistair Francis) [1 file changed, +1/-1]

Convert all uses of error_report("warning:"... to use warn_report() instead. This helps standardise on a single method of printing warnings to the user.

All of the warnings were changed using these two commands:

    find ./* -type f -exec sed -i \
      's|error_report(".*warning[,:] |warn_report("|Ig' {} +

Indentation fixed up manually afterwards.

The test-qdev-global-props test case was manually updated to ensure that this patch passes make check (as the test cases are case sensitive).

Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Suggested-by: Thomas Huth <thuth@redhat.com>
Cc: Jeff Cody <jcody@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Lieven <pl@kamp.de>
Cc: Josh Durgin <jdurgin@redhat.com>
Cc: "Richard W.M. Jones" <rjones@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Greg Kurz <groug@kaod.org>
Cc: Rob Herring <robh@kernel.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Peter Chubb <peter.chubb@nicta.com.au>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Marcel Apfelbaum <marcel@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Alexander Graf <agraf@suse.de>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Greg Kurz <groug@kaod.org>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed by: Peter Chubb <peter.chubb@data61.csiro.au>
Acked-by: Max Reitz <mreitz@redhat.com>
Acked-by: Marcel Apfelbaum <marcel@redhat.com>
Message-Id: <e1cfa2cd47087c248dd24caca9c33d9af0c499b0.1499866456.git.alistair.francis@xilinx.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
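For illustration, the shape of the conversion; warn_report() adds the "warning: " prefix itself (the message text below is made up, not a real QEMU warning):

    /* before */
    error_report("warning: no host CPU model available, using defaults");

    /* after */
    warn_report("no host CPU model available, using defaults");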
2017-06-07  cpus: reset throttle_thread_scheduled after sleep (Felipe Franciosi) [1 file changed, +1/-1]

Currently, the throttle_thread_scheduled flag is reset back to 0 before sleeping (as part of the throttling logic). Given that throttle_timer (well, any timer) may tick with a slight delay, it so happens that under heavy throttling (i.e. close to or at CPU_THROTTLE_PCT_MAX) the tick may schedule a further cpu_throttle_thread() work item after the flag reset, but before the previous sleep completed. This results in the vCPU thread sleeping continuously for potentially several seconds in a row. The chances of that happening can be drastically minimised by resetting the flag after the sleep.

Signed-off-by: Felipe Franciosi <felipe@nutanix.com>
Signed-off-by: Malcolm Crossley <malcolm@nutanix.com>
Message-Id: <1495229390-18909-1-git-send-email-felipe@nutanix.com>
Acked-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
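A condensed sketch of cpu_throttle_thread() with the reordering applied (the early return when throttling is disabled is elided):

    static void cpu_throttle_thread(CPUState *cpu, run_on_cpu_data opaque)
    {
        double pct = (double)cpu_throttle_get_percentage() / 100;
        long sleeptime_ns = (long)((pct / (1 - pct)) * CPU_THROTTLE_TIMESLICE_NS);

        qemu_mutex_unlock_iothread();
        g_usleep(sleeptime_ns / 1000); /* ns -> us */
        /* Reset the flag only after sleeping: a timer tick that fires
         * during the sleep can no longer queue a back-to-back work item
         * that would put the vCPU straight back to sleep. */
        atomic_set(&cpu->throttle_thread_scheduled, 0);
        qemu_mutex_lock_iothread();
    }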
2017-06-06  migration: Mark CPU states dirty before incoming migration/loadvm (David Gibson) [1 file changed, +9/-0]

As a rule, CPU internal state should never be updated when !cpu->kvm_vcpu_dirty (or the HAX equivalent). If that is done, then subsequent calls to cpu_synchronize_state() - usually safe and idempotent - will clobber state.

However, we routinely do this during a loadvm or incoming migration. Usually this is called shortly after a reset, which will clear all the cpu dirty flags with cpu_synchronize_all_post_reset(). Nothing is expected to set the dirty flags again before the cpu state is loaded from the incoming stream. This means that it isn't safe to call cpu_synchronize_state() from a post_load handler, which is non-obvious and potentially inconvenient.

We could cpu_synchronize_all_state() before the loadvm, but that would be overkill since a) we expect the state to already be synchronized from the reset and b) we expect to completely rewrite the state with a call to cpu_synchronize_all_post_init() at the end of qemu_loadvm_state().

To clear this up, this patch introduces cpu_synchronize_pre_loadvm() and associated helpers, which simply mark the cpu state as dirty without actually changing anything. i.e. it says we want to discard any existing KVM (or HAX) state and replace it with what we're going to load.

Cc: Juan Quintela <quintela@redhat.com>
Cc: Dave Gilbert <dgilbert@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Juan Quintela <quintela@redhat.com>
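The KVM side of the new hook boils down to a one-line state change run on the vCPU; a sketch matching the description above:

    /* Mark the vCPU dirty so a later cpu_synchronize_state() is a no-op
     * and stale in-kernel register state cannot clobber the loaded one. */
    static void do_kvm_cpu_synchronize_pre_loadvm(CPUState *cpu,
                                                  run_on_cpu_data arg)
    {
        cpu->kvm_vcpu_dirty = true;
    }

    void kvm_cpu_synchronize_pre_loadvm(CPUState *cpu)
    {
        run_on_cpu(cpu, do_kvm_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
    }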
2017-05-15  Merge remote-tracking branch 'ehabkost/tags/x86-and-machine-pull-request' into staging (Stefan Hajnoczi) [1 file changed, +10/-0]

x86 and machine queue, 2017-05-11

Highlights:
* New "-numa cpu" option
* NUMA distance configuration
* migration/i386 vmstatification

# gpg: Signature made Thu 11 May 2017 08:16:07 PM BST
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# gpg: Note: This key has expired!
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6

* ehabkost/tags/x86-and-machine-pull-request: (29 commits)
  migration/i386: Remove support for pre-0.12 formats
  vmstatification: i386 FPReg
  migration/i386: Remove old non-softfloat 64bit FP support
  tests: check -numa node,cpu=props_list usecase
  numa: add '-numa cpu,...' option for property based node mapping
  numa: remove node_cpu bitmaps as they are no longer used
  numa: use possible_cpus for not mapped CPUs check
  machine: call machine init from wrapper
  numa: remove no longer need numa_post_machine_init()
  tests: numa: add case for QMP command query-cpus
  QMP: include CpuInstanceProperties into query_cpus output
  virt-arm: get numa node mapping from possible_cpus instead of numa_get_node_for_cpu()
  spapr: get numa node mapping from possible_cpus instead of numa_get_node_for_cpu()
  pc: get numa node mapping from possible_cpus instead of numa_get_node_for_cpu()
  numa: do default mapping based on possible_cpus instead of node_cpu bitmaps
  numa: mirror cpu to node mapping in MachineState::possible_cpus
  numa: add check that board supports cpu_index to node mapping
  virt-arm: add node-id property to CPU
  pc: add node-id property to CPU
  spapr: add node-id property to sPAPR core
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2017-05-11  QMP: include CpuInstanceProperties into query_cpus output (Igor Mammedov) [1 file changed, +10/-0]

If the board supports CpuInstanceProperties, report them for each CPU thread listed. The main motivation is to provide introspection of these properties via the QMP interface, for use in test cases that verify NUMA node to CPU mapping. This covers not only boards that support cpu hotplug and report this info in query-hotpluggable-cpus (pc/spapr), but also boards that don't support hotpluggable-cpus yet do support NUMA mapping (virt-arm).

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1494415802-227633-12-git-send-email-imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-05-11  cpus: Fix CPU unplug for MTTCG (Bharata B Rao) [1 file changed, +6/-0]
Ensure that the unplugged CPU thread is destroyed and the waiting thread is notified about it. This is needed for CPU unplug to work correctly in MTTCG mode. Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-04-10  cpus: call cpu_update_icount on read (Alex Bennée) [1 file changed, +6/-4]

This ensures each time the vCPU thread reads the icount we update the master timers_state.qemu_icount field. This way, as long as updates are in BQL-protected sections (which they should be), the main-loop can never come to update the log and find time has gone backwards.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
2017-04-10  cpu-exec: update icount after each TB_EXIT (Alex Bennée) [1 file changed, +5/-13]

There is no particular reason we shouldn't update the global system icount time as we exit each TranslationBlock run. This ensures the main-loop doesn't have to wait until we exit to the outer loop for executed instructions to be credited to timers_state. The prepare_icount_for_run function is slightly tweaked to match the logic we run in cpu_loop_exec_tb. Based on Paolo's original suggestion.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
2017-04-10  cpus: introduce cpu_update_icount helper (Alex Bennée) [1 file changed, +21/-2]

By holding off updates to timers_state.qemu_icount we can run into trouble when the non-vCPU thread needs to know the time. This helper ensures we atomically update timers_state.qemu_icount based on what has been currently executed.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
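A condensed sketch of the helper (the real code also has a fallback path for hosts without 64-bit atomics):

    /* Fold the instructions executed so far into the global counter so
     * that non-vCPU threads always read an up-to-date time. */
    void cpu_update_icount(CPUState *cpu)
    {
        int64_t executed = cpu_get_icount_executed(cpu);

        cpu->icount_budget -= executed;
        atomic_set__nocheck(&timers_state.qemu_icount,
                            timers_state.qemu_icount + executed);
    }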
2017-04-10  cpus: don't credit executed instructions before they have run (Alex Bennée) [1 file changed, +19/-6]
Outside of the vCPU thread icount time will only be tracked against timers_state.qemu_icount. We no longer credit cycles until they have completed the run. Inside the vCPU thread we adjust for passage of time by looking at how many have run so far. This is only valid inside the vCPU thread while it is running. Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
2017-04-10  cpus: move icount preparation out of tcg_exec_cpu (Alex Bennée) [1 file changed, +45/-22]
As icount is only supported for single-threaded execution due to the requirement for determinism let's remove it from the common tcg_exec_cpu path. Also remove the additional fiddling which shouldn't be required as the icount counters should all be rectified as you enter the loop. Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
2017-04-10  cpus: check cpu->running in cpu_get_icount_raw() (Alex Bennée) [1 file changed, +1/-1]
The lifetime of current_cpu is now the lifetime of the vCPU thread. However get_icount_raw() can apply a fudge factor if called while code is running to take into account the current executed instruction count. To ensure this is always the case we also check cpu->running. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Richard Henderson <rth@twiddle.net>
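Roughly, the resulting check (the error path for reads from a context that cannot do I/O is elided; treat this as a sketch, not the exact code):

    int64_t cpu_get_icount_raw(void)
    {
        CPUState *cpu = current_cpu;
        int64_t icount = timers_state.qemu_icount;

        /* current_cpu being set no longer implies the vCPU is executing,
         * so only apply the in-flight fudge factor while cpu->running. */
        if (cpu && cpu->running) {
            icount += cpu_get_icount_executed(cpu);
        }
        return icount;
    }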
2017-04-10  cpus: remove icount handling from qemu_tcg_cpu_thread_fn (Alex Bennée) [1 file changed, +2/-2]
We should never be running in multi-threaded mode with icount enabled. There is no point calling handle_icount_deadline here so remove it and assert !use_icount. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net>
2017-04-10  cpus: fix wrong define name (Nikunj A Dadhania) [1 file changed, +1/-1]

While the configure script generates the TARGET_SUPPORTS_MTTCG define, one of the defines in cpus.c was checking the wrong name: TARGET_SUPPORT_MTTCG.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
2017-03-28  tcg: Add a new line after incompatibility warning (Pranith Kumar) [1 file changed, +1/-1]
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-20  hax: fix breakage in locking (Vincent Palatin) [1 file changed, +2/-1]

Use qemu_mutex_lock_iothread consistently in qemu_hax_cpu_thread_fn(), as done in the other *_thread_fn functions, instead of directly grabbing the BQL. This way we ensure that iothread_locked is properly set.

On v2.9.0-rc0, QEMU was dying in an assertion in the mutex code when running with '--enable-hax' either on OSX or Windows. This bug was triggered since the code modification for multithreading added new usages of qemu_mutex_iothread_locked. This fixes the breakage on both platforms; I can now run again a full Chromium OS image with HAX kernel acceleration.

Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <20170320101549.150076-1-vpalatin@chromium.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
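The essence of the fix, as a sketch: take the BQL through the wrapper so the iothread_locked bookkeeping stays consistent (the loop body is elided):

    static void *qemu_hax_cpu_thread_fn(void *arg)
    {
        CPUState *cpu = arg;

        qemu_mutex_lock_iothread(); /* not qemu_mutex_lock(&qemu_global_mutex) */
        qemu_thread_get_self(cpu->thread);
        /* ... vCPU initialization and execution loop ... */
        return NULL;
    }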
2017-03-14  icount: process QEMU_CLOCK_VIRTUAL timers in vCPU thread (Paolo Bonzini) [1 file changed, +25/-3]

icount has become much slower after tcg_cpu_exec has stopped using the BQL. There is also a latent bug that is masked by the slowness.

The slowness happens because every occurrence of a QEMU_CLOCK_VIRTUAL timer now has to wake up the I/O thread and wait for it. The rendez-vous is mediated by the BQL QemuMutex:

- handle_icount_deadline wakes up the I/O thread with BQL taken
- the I/O thread wakes up and waits on the BQL
- the VCPU thread releases the BQL a little later
- the I/O thread raises an interrupt, which calls qemu_cpu_kick
- the VCPU thread notices the interrupt, takes the BQL to process it and waits on it

All this back and forth is extremely expensive, causing a 6 to 8-fold slowdown when icount is turned on.

One may think that the issue is that the VCPU thread is too dependent on the BQL, but then the latent bug comes in. I first tried removing the BQL completely from the x86 cpu_exec, only to see everything break. The only way to fix it (and make everything slow again) was to add a dummy BQL lock/unlock pair. This is because in -icount mode you really have to process the events before the CPU restarts executing the next instruction.

Therefore, this series moves the processing of QEMU_CLOCK_VIRTUAL timers straight into the vCPU thread when running in icount mode. The required changes include:

- make the timer notification callback wake up TCG's single vCPU thread when run from another thread. By using async_run_on_cpu, the callback can override all_cpu_threads_idle() when the CPU is halted.
- move handle_icount_deadline after qemu_tcg_wait_io_event, so that the timer notification callback is invoked after the dummy work item wakes up the vCPU thread
- make handle_icount_deadline run the timers instead of just waking the I/O thread.
- stop processing the timers in the main loop

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
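After the series, the deadline handler runs expired timers directly in the vCPU thread; approximately (a sketch of the shape, not the verbatim code):

    static void handle_icount_deadline(void)
    {
        assert(qemu_in_vcpu_thread());
        if (use_icount) {
            int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);

            if (deadline == 0) {
                /* run the timers here instead of waking the I/O thread */
                qemu_clock_run_timers(QEMU_CLOCK_VIRTUAL);
            }
        }
    }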
2017-03-14  cpus: define QEMUTimerListNotifyCB for QEMU system emulation (Paolo Bonzini) [1 file changed, +5/-0]
There is no change for now, because the callback just invokes qemu_notify_event. Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-09  cpus.c: add additional error_report when !TARGET_SUPPORT_MTTCG (Alex Bennée) [1 file changed, +4/-0]

While we may fail the memory ordering check later, that can be confusing. So in cases where TARGET_SUPPORT_MTTCG has yet to be defined we should say so specifically.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
2017-03-09  vl/cpus: be smarter with icount and MTTCG (Alex Bennée) [1 file changed, +3/-4]

The sense of the test was inverted. Make it simple: if icount is enabled then we disable MTTCG by default. If the user tries to force MTTCG upon us then we tell them "no".

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2017-03-03  KVM: move SIG_IPI handling to kvm-all.c (Paolo Bonzini) [1 file changed, +1/-61]
This lets us remove a bunch of CONFIG_LINUX defines. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-03  KVM: do not use sigtimedwait to catch SIGBUS (Paolo Bonzini) [1 file changed, +13/-18]
Call kvm_on_sigbus_vcpu asynchronously from the VCPU thread. Information for the SIGBUS can be stored in thread-local variables and processed later in kvm_cpu_exec. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-03  cpus: reorganize signal handling code (Paolo Bonzini) [1 file changed, +32/-31]
Move the KVM "eat signals" code under CONFIG_LINUX, in preparation for moving it to kvm-all.c; reraise non-MCE SIGBUS immediately, without passing it to KVM. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-03  cpus: remove ugly cast on sigbus_handler (Paolo Bonzini) [1 file changed, +3/-9]
The cast is there because sigbus_handler is invoked via sigfd_handler. But it feels just wrong to use struct qemu_signalfd_siginfo in the prototype of a function that is passed to sigaction. Instead, do a simple-minded conversion of qemu_signalfd_siginfo to siginfo_t. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-02-24  tcg: handle EXCP_ATOMIC exception for system emulation (Pranith Kumar) [1 file changed, +9/-0]

The patch enables handling atomic code in the guest. This should be preferably done in cpu_handle_exception(), but the current assumptions regarding when we can execute atomic sections cause a deadlock.

The current mechanism discards the flags which were set in atomic execution. We ensure they are properly saved by calling the cc->cpu_exec_enter/leave() functions around the loop.

As we are running cpu_exec_step_atomic() from the outermost loop we need to avoid an abort() when single stepping over atomic code, since the debug exception longjmp will point to the sigsetjmp in cpu_exec(). We do this by setting a new jmp_env so that it jumps back here on an exception.

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
[AJB: tweak title, merge with new patches, add mmap_lock]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
CC: Paolo Bonzini <pbonzini@redhat.com>
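In the system-emulation vCPU loop the new exception is handled roughly like this fragment:

    /* inside the vCPU thread's execution loop */
    r = tcg_cpu_exec(cpu);
    if (r == EXCP_ATOMIC) {
        qemu_mutex_unlock_iothread();
        cpu_exec_step_atomic(cpu); /* stop the world, run one insn serially */
        qemu_mutex_lock_iothread();
    }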
2017-02-24  tcg: enable thread-per-vCPU (Alex Bennée) [1 file changed, +103/-31]

There are a couple of changes that occur at the same time here:

- introduce a single vCPU qemu_tcg_cpu_thread_fn. One of these is spawned per vCPU with its own Thread and Condition variables. qemu_tcg_rr_cpu_thread_fn is the new name for the old single threaded function.
- the TLS current_cpu variable is now live for the lifetime of MTTCG vCPU threads. This is for future work where async jobs need to know the vCPU context they are operating in.

This allows the user to switch on multi-thread behaviour and spawn a thread per vCPU. For a simple test kvm-unit-test like:

    ./arm/run ./arm/locking-test.flat -smp 4 -accel tcg,thread=multi

this will now use 4 vCPU threads and have an expected FAIL (instead of the unexpected PASS) as the default mode of the test has no protection when incrementing a shared variable.

We enable the parallel_cpus flag to ensure we generate correct barrier and atomic code if supported by the front and backends. This doesn't automatically enable MTTCG until default_mttcg_enabled() is updated to check the configuration is supported.

Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[AJB: Some fixes, conditionally, commit rewording]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2017-02-24  tcg: remove global exit_request (Alex Bennée) [1 file changed, +11/-8]

There are now only two uses of the global exit_request left.

The first ensures we exit the run_loop when we first start to process pending work and in the kick handler. This is just as easily done by setting the first_cpu->exit_request flag.

The second use is in the round robin kick routine. The global exit_request ensured every vCPU would set its local exit_request and cause a full exit of the loop. Now that the iothread lock isn't being held while running, we can just rely on the kick handler to push us out as intended.

We lightly re-factor the main vCPU thread to ensure cpu->exit_request causes us to exit the main loop and process any IO requests that might come along. As a cpu->exit_request may legitimately get squashed while processing the EXCP_INTERRUPT exception, we also check cpu->queued_work_first to ensure queued work is expedited as soon as possible.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2017-02-24  tcg: drop global lock during TCG code execution (Jan Kiszka) [1 file changed, +5/-23]

This finally allows TCG to benefit from the iothread introduction: Drop the global mutex while running pure TCG CPU code. Reacquire the lock when entering MMIO or PIO emulation, or when leaving the TCG loop.

We have to revert a few optimizations for the current TCG threading model, namely kicking the TCG thread in qemu_mutex_lock_iothread and not kicking it in qemu_cpu_kick. We also need to disable RAM block reordering until we have a more efficient locking mechanism at hand.

Still, a Linux x86 UP guest and my Musicpal ARM model boot fine here. These numbers demonstrate where we gain something:

    20338 jan 20 0 331m 75m 6904 R 99 0.9 0:50.95 qemu-system-arm
    20337 jan 20 0 331m 75m 6904 S 20 0.9 0:26.50 qemu-system-arm

The guest CPU was fully loaded, but the iothread could still run mostly independently on a second core. Without the patch we don't get beyond

    32206 jan 20 0 330m 73m 7036 R 82 0.9 1:06.00 qemu-system-arm
    32204 jan 20 0 330m 73m 7036 S 21 0.9 0:17.03 qemu-system-arm

We don't benefit significantly, though, when the guest is not fully loading a host CPU.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <1439220437-23957-10-git-send-email-fred.konrad@greensocs.com>
[FK: Rebase, fix qemu_devices_reset deadlock, rm address_space_* mutex]
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
[EGC: fixed iothread lock for cpu-exec IRQ handling]
Signed-off-by: Emilio G. Cota <cota@braap.org>
[AJB: -smp single-threaded fix, clean commit msg, BQL fixes]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
[PM: target-arm changes]
Acked-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-24  tcg: rename tcg_current_cpu to tcg_current_rr_cpu (Alex Bennée) [1 file changed, +22/-19]

...and make the definition local to cpus.c. In preparation for MTTCG the concept of a global tcg_current_cpu will no longer make sense. However we still need to keep track of it in the single-threaded case to be able to exit quickly when required. qemu_cpu_kick_no_halt() moves and becomes qemu_cpu_kick_rr_cpu() to emphasise its use-case. qemu_cpu_kick now kicks the relevant cpu as well as calling qemu_cpu_kick_rr_cpu(), which will become a no-op in MTTCG. For the time being the setting of the global exit_request remains.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
2017-02-24  tcg: add kick timer for single-threaded vCPU emulation (Alex Bennée) [1 file changed, +61/-0]
Currently we rely on the side effect of the main loop grabbing the iothread_mutex to give any long running basic block chains a kick to ensure the next vCPU is scheduled. As this code is being re-factored and rationalised we now do it explicitly here. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
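A sketch of the mechanism: a periodic QEMU_CLOCK_VIRTUAL timer that re-arms itself and kicks the round-robin TCG thread (the kick-helper name follows the rename commit above; the period value is an assumption):

    #define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10)

    static void kick_tcg_thread(void *opaque)
    {
        QEMUTimer *self = opaque;

        /* re-arm first, then kick whichever vCPU currently holds the
         * single round-robin TCG thread */
        timer_mod(self, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + TCG_KICK_PERIOD);
        qemu_cpu_kick_rr_cpu();
    }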
2017-02-24  tcg: add options for enabling MTTCG (KONRAD Frederic) [1 file changed, +73/-0]

We know there will be cases where MTTCG won't work until additional work is done in the front/back ends to support it. It will however be useful to be able to turn it on. As a result, MTTCG will default to off unless the combination is supported. However the user can turn it on for the sake of testing.

Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
[AJB: move to -accel tcg,thread=multi|single, defaults]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
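A condensed sketch of the resulting option handling (the real qemu_tcg_configure also warns about known-unsafe combinations before honouring the override):

    void qemu_tcg_configure(QemuOpts *opts, Error **errp)
    {
        const char *t = qemu_opt_get(opts, "thread");

        if (!t) {
            mttcg_enabled = default_mttcg_enabled(); /* off unless supported */
        } else if (strcmp(t, "multi") == 0) {
            mttcg_enabled = true;  /* user override, e.g. for testing */
        } else if (strcmp(t, "single") == 0) {
            mttcg_enabled = false;
        } else {
            error_setg(errp, "Invalid 'thread' setting %s", t);
        }
    }

Selected on the command line with -accel tcg,thread=multi (or thread=single).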
2017-02-16  move vm_start to cpus.c (Claudio Imbrenda) [1 file changed, +42/-0]

This patch:
* moves vm_start to cpus.c
* exports qemu_vmstop_requested, since it's needed by vm_start
* extracts vm_prepare_start from vm_start; it does what vm_start did, except restarting the cpus
* vm_start now calls vm_prepare_start and then restarts the cpus

Signed-off-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Message-Id: <1487092068-16562-2-git-send-email-imbrenda@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-01-19  Plumb the HAXM-based hardware acceleration support (Vincent Palatin) [1 file changed, +77/-1]

Intel HAX is a kernel-based hardware acceleration module for Windows (similar to KVM on Linux). Based on the "target/i386: Add Intel HAX to android emulator" patch from David Chou <david.j.chou@intel.com>.

Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <7b9cae28a0c379ab459c7a8545c9a39762bd394f.1484045952.git.vpalatin@chromium.org>
[Drop hax_populate_ram stub. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-01-19  kvm: move cpu synchronization code (Vincent Palatin) [1 file changed, +1/-0]
Move the generic cpu_synchronize_ functions to the common hw_accel.h header, in order to prepare for the addition of a second hardware accelerator. Signed-off-by: Stefan Weil <sw@weilnetz.de> Signed-off-by: Vincent Palatin <vpalatin@chromium.org> Message-Id: <f5c3cffe8d520011df1c2e5437bb814989b48332.1484045952.git.vpalatin@chromium.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-10-31  *_run_on_cpu: introduce run_on_cpu_data type (Paolo Bonzini) [1 file changed, +4/-3]
This changes the *_run_on_cpu APIs (and helpers) to pass data in a run_on_cpu_data type instead of a plain void *. This is because we sometimes want to pass a target address (target_ulong) and this fails on 32 bit hosts emulating 64 bit guests. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20161027151030.20863-24-alex.bennee@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
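The new type is, approximately, a small union plus constructor macros, so a target address travels without being squeezed through a host pointer (a sketch of the shape of the definition):

    typedef union {
        int           host_int;
        unsigned long host_ulong;
        void         *host_ptr;
        vaddr         target_ptr;
    } run_on_cpu_data;

    #define RUN_ON_CPU_HOST_PTR(p)   ((run_on_cpu_data){.host_ptr = (p)})
    #define RUN_ON_CPU_TARGET_PTR(v) ((run_on_cpu_data){.target_ptr = (v)})
    #define RUN_ON_CPU_NULL          RUN_ON_CPU_HOST_PTR(NULL)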
2016-10-31  cpus: re-factor out handle_icount_deadline (Alex Bennée) [1 file changed, +13/-6]
In preparation for adding a MTTCG thread we re-factor out a bit of what will be common code to handle the QEMU_CLOCK_VIRTUAL expiration. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-Id: <20161027151030.20863-18-alex.bennee@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-10-31  tcg: cpus rm tcg_exec_all() (Alex Bennée) [1 file changed, +43/-44]
In preparation for multi-threaded TCG we remove tcg_exec_all and move all the CPU cycling into the main thread function. When MTTCG is enabled we shall use a separate thread function which only handles one vCPU. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-Id: <20161027151030.20863-13-alex.bennee@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-10-31  tcg: move tcg_exec_all and helpers above thread fn (Alex Bennée) [1 file changed, +99/-101]

This is a pure mechanical change in preparation for up-coming re-factoring. Instead of a forward declaration for tcg_exec_all, it and the associated helper functions are moved in front of the call from qemu_tcg_cpu_thread_fn.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-12-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-10-31  cpus: make all_vcpus_paused() return bool (Alex Bennée) [1 file changed, +3/-3]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-Id: <20161027151030.20863-2-alex.bennee@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-10-26  tcg: Add EXCP_ATOMIC (Richard Henderson) [1 file changed, +2/-0]
When we cannot emulate an atomic operation within a parallel context, this exception allows us to stop the world and try again in a serial context. Reviewed-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-29  qemu: use bdrv_flush_all for vm_stop et al (John Snow) [1 file changed, +2/-2]
Reimplement bdrv_flush_all for vm_stop. In contrast to blk_flush_all, bdrv_flush_all does not have device model restrictions. This allows us to flush and halt unconditionally without error. This allows us to do things like migrate when we have a device with an open tray, but has a node that may need to be flushed, or nodes that aren't currently attached to any device and need to be flushed. Specifically, this allows us to migrate when we have a CDROM with an open tray. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Acked-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-27  replay: allow replay stopping and restarting (Pavel Dovgalyuk) [1 file changed, +1/-0]

This patch fixes a bug with stopping and restarting replay through the monitor.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20160926080815.6992.71818.stgit@PASHA-ISP>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus-common: move exclusive work infrastructure from linux-user (Paolo Bonzini) [1 file changed, +2/-0]
This will serve as the base for async_safe_run_on_cpu. Because start_exclusive uses CPU_FOREACH, merge exclusive_lock with qemu_cpu_list_lock: together with a call to exclusive_idle (via cpu_exec_start/end) in cpu_list_add, this protects exclusive work against concurrent CPU addition and removal. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus-common: move CPU work item management to common code (Sergey Fedorov) [1 file changed, +1/-81]
Make CPU work core functions common between system and user-mode emulation. User-mode does not use run_on_cpu, so do not implement it. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <1470158864-17651-10-git-send-email-alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus: Rename flush_queued_work() (Sergey Fedorov) [1 file changed, +2/-2]
To avoid possible confusion, rename flush_queued_work() to process_queued_cpu_work(). Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <1470158864-17651-6-git-send-email-alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus: Move common code out of {async_, }run_on_cpu() (Sergey Fedorov) [1 file changed, +18/-24]
Move the code common between run_on_cpu() and async_run_on_cpu() into a new function queue_work_on_cpu(). Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <1470158864-17651-4-git-send-email-alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus: pass CPUState to run_on_cpu helpers (Alex Bennée) [1 file changed, +7/-8]
CPUState is a fairly common pointer to pass to these helpers. This means if you need other arguments for the async_run_on_cpu case you end up having to do a g_malloc to stuff additional data into the routine. For the current users this isn't a massive deal but for MTTCG this gets cumbersome when the only other parameter is often an address. This adds the typedef run_on_cpu_func for helper functions which has an explicit CPUState * passed as the first parameter. All the users of run_on_cpu and async_run_on_cpu have had their helpers updated to use CPUState where available. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> [Sergey Fedorov: - eliminate more CPUState in user data; - remove unnecessary user data passing; - fix target-s390x/kvm.c and target-s390x/misc_helper.c] Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> (ppc parts) Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> (s390 parts) Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <1470158864-17651-3-git-send-email-alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
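After this change a helper receives the CPU directly; a hypothetical example (do_reset_one is made up for illustration, not a real helper in the tree):

    typedef void (*run_on_cpu_func)(CPUState *cpu, void *data);

    /* hypothetical helper: no need to smuggle the CPUState through *data */
    static void do_reset_one(CPUState *cpu, void *data)
    {
        cpu_reset(cpu);
    }

    /* caller */
    run_on_cpu(cpu, do_reset_one, NULL);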