path: root/accel/tcg
Age | Commit message | Author | Files | Lines
2018-06-15 | translate-all: iterate over TBs in a page with PAGE_FOR_EACH_TB | Emilio G. Cota | 1 file, -33/+29
This commit does several things, but to avoid churn I merged them all into the same commit. To wit:

- Use uintptr_t instead of TranslationBlock * for the list of TBs in a page. Just like we did in (c37e6d7e "tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB"), the rationale is the same: these are tagged pointers, not pointers. So use a more appropriate type.
- Only check the least significant bit of the tagged pointers. Masking with 3/~3 is unnecessary and confusing.
- Introduce the TB_FOR_EACH_TAGGED macro, and use it to define PAGE_FOR_EACH_TB, which improves readability. Note that TB_FOR_EACH_TAGGED will gain another user in a subsequent patch.
- Update tb_page_remove to use PAGE_FOR_EACH_TB. In case there is a bug and we attempt to remove a TB that is not in the list, instead of segfaulting (since the list is NULL-terminated) we will reach g_assert_not_reached().

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
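For illustration, a minimal sketch of what such a tagged-pointer iteration macro can look like (this rendering, including the field name page_next, is an assumption, not the committed code):

    /* tb->field[n] is a uintptr_t: a TranslationBlock pointer with the
     * index of the next link field tagged into its least significant bit. */
    #define TB_FOR_EACH_TAGGED(head, tb, n, field)                          \
        for (n = (head) & 1,                                                \
             tb = (TranslationBlock *)((head) & ~(uintptr_t)1);             \
             tb != NULL;                                                    \
             tb = (TranslationBlock *)tb->field[n],   /* raw tagged next */ \
                 n = (uintptr_t)tb & 1,               /* extract the tag */ \
                 tb = (TranslationBlock *)((uintptr_t)tb & ~(uintptr_t)1))

    #define PAGE_FOR_EACH_TB(pagedesc, tb, n) \
        TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)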
2018-06-15 | tcg: move tb_ctx.tb_phys_invalidate_count to tcg_ctx | Emilio G. Cota | 1 file, -2/+3
Thereby making it per-TCGContext. Once we remove tb_lock, this will avoid an atomic increment every time a TB is invalidated. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-06-15 | tcg: track TBs with per-region BST's | Emilio G. Cota | 2 files, -87/+16
This paves the way for enabling scalable parallel generation of TCG code.

Instead of tracking TBs with a single binary search tree (BST), use a BST for each TCG region, protecting it with a lock. This is as scalable as it gets, since each TCG thread operates on a separate region.

The core of this change is the introduction of struct tcg_region_tree, which contains a pointer to a GTree and an associated lock to serialize accesses to it. We then allocate an array of tcg_region_tree's, adding the appropriate padding to avoid false sharing based on qemu_dcache_linesize.

Given a tc_ptr, we first find the corresponding region_tree. This is done by special-casing the first and last regions first, since they might be of size != region.size; otherwise we just divide the offset by region.stride. I was worried about this division (several dozen cycles of latency), but profiling shows that this is not a fast path. Note that region.stride is not required to be a power of two; it is only required to be a multiple of the host's page size.

Note that with this design we can also provide consistent snapshots about all region trees at once; for instance, tcg_tb_foreach acquires/releases all region_tree locks before/after iterating over them. For this reason we now drop tb_lock in dump_exec_info().

As an alternative I considered implementing a concurrent BST, but this can be tricky to get right, offers no consistent snapshots of the BST, and performance and scalability-wise I don't think it could ever beat having separate GTrees, given that our workload is insert-mostly (all concurrent BST designs I've seen focus, understandably, on making lookups fast, which comes at the expense of convoluted, non-wait-free insertions/removals).

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
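A rough sketch of the layout described above; every name here (the tcg_region_tree fields, region.start_aligned, region.stride, region_trees) is an illustrative assumption:

    /* One BST (GTree) of TBs per TCG region, with a lock serializing
     * accesses; instances are padded out to qemu_dcache_linesize so
     * trees used by different threads do not share a cache line. */
    struct tcg_region_tree {
        QemuMutex lock;
        GTree *tree;
        /* padding up to a multiple of qemu_dcache_linesize follows */
    };

    /* Map a host code pointer to its region's tree.  One way to realize
     * the first/last special-casing is to clamp the computed index. */
    static struct tcg_region_tree *tc_ptr_to_region_tree(const void *p)
    {
        ptrdiff_t offset = (uintptr_t)p - region.start_aligned;
        size_t idx;

        if (offset <= 0) {
            idx = 0;                                   /* first region */
        } else {
            idx = MIN((size_t)offset / region.stride,
                      region.n - 1);                   /* clamp to last */
        }
        return region_trees[idx];
    }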
2018-06-15 | qht: return existing entry when qht_insert fails | Emilio G. Cota | 1 file, -1/+1
The meaning of "existing" is now changed to "matches in hash and ht->cmp result". This is saner than just checking the pointer value. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-06-15 | qht: require a default comparison function | Emilio G. Cota | 2 files, -3/+17
qht_lookup now uses the default cmp function. qht_lookup_custom is defined to retain the old behaviour, that is, a cmp function is explicitly provided.

qht_insert will gain use of the default cmp in the next patch.

Note that we move qht_lookup_custom's @func to be the last argument, which makes the new qht_lookup as simple as possible. Instead of this (i.e. keeping @func 2nd):

    0000000000010750 <qht_lookup>:
       10750:  89 d1                  mov    %edx,%ecx
       10752:  48 89 f2               mov    %rsi,%rdx
       10755:  48 8b 77 08            mov    0x8(%rdi),%rsi
       10759:  e9 22 ff ff ff         jmpq   10680 <qht_lookup_custom>
       1075e:  66 90                  xchg   %ax,%ax

We get:

    0000000000010740 <qht_lookup>:
       10740:  48 8b 4f 08            mov    0x8(%rdi),%rcx
       10744:  e9 37 ff ff ff         jmpq   10680 <qht_lookup_custom>
       10749:  0f 1f 80 00 00 00 00   nopl   0x0(%rax)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
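At the C level, the wrapper producing that tail call might simply be (a sketch, assuming ht->cmp holds the default comparison function):

    /* With @func moved last, qht_lookup only has to load ht->cmp into
     * the final argument register and tail-call qht_lookup_custom. */
    void *qht_lookup(const struct qht *ht, const void *userp, uint32_t hash)
    {
        return qht_lookup_custom(ht, userp, hash, ht->cmp);
    }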
2018-06-15 | exec.c: Handle IOMMUs in address_space_translate_for_iotlb() | Peter Maydell | 1 file, -1/+2
Currently we don't support board configurations that put an IOMMU in the path of the CPU's memory transactions, and instead just assert() if the memory region found in address_space_translate_for_iotlb() is an IOMMUMemoryRegion. Remove this limitation by having the function handle IOMMUs. This is mostly straightforward, but we must make sure we have a notifier registered for every IOMMU that a transaction has passed through, so that we can flush the TLB appropriately when any of the IOMMUs change their mappings. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20180604152941.20374-5-peter.maydell@linaro.org
2018-06-15 | cputlb: Pass cpu_transaction_failed() the correct physaddr | Peter Maydell | 1 file, -13/+31
The API for cpu_transaction_failed() says that it takes the physical address for the failed transaction. However we were actually passing it the offset within the target MemoryRegion. We don't currently have any target CPU implementations of this hook that require the physical address; fix this bug so we don't get confused if we ever do add one. Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180611125633.32755-3-peter.maydell@linaro.org
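A sketch of the shape of the fix, assuming illustrative names (mr_offset, section); it recovers the full physical address from the section-relative offset before reporting the failure:

    /* mr_offset is relative to the MemoryRegion; add back the region's
     * position in the address space to get the true physical address. */
    hwaddr physaddr = mr_offset
                    + section->offset_within_address_space
                    - section->offset_within_region;

    cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
                           mmu_idx, iotlbentry->attrs, response, retaddr);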
2018-06-15 | cpu-defs.h: Document CPUIOTLBEntry 'addr' field | Peter Maydell | 1 file, -0/+12
The 'addr' field in the CPUIOTLBEntry struct has a rather non-obvious use; add a comment documenting it (reverse-engineered from what the code that sets it is doing). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180611125633.32755-2-peter.maydell@linaro.org
2018-06-01 | Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging | Peter Maydell | 1 file, -1/+0
* Linux header upgrade (Peter)
* firmware.json definition (Laszlo)
* IPMI migration fix (Corey)
* QOM improvements (Alexey, Philippe, me)
* Memory API cleanups (Jay, me, Tristan, Peter)
* WHPX fixes and improvements (Lucian)
* Chardev fixes (Marc-André)
* IOMMU documentation improvements (Peter)
* Coverity fixes (Peter, Philippe)
* Include cleanup (Philippe)
* -clock deprecation (Thomas)
* Disable -sandbox unless CONFIG_SECCOMP (Yi Min Zhao)
* Configurability improvements (me)

# gpg: Signature made Fri 01 Jun 2018 17:42:13 BST
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (56 commits)
  hw: make virtio devices configurable via default-configs/
  hw: allow compiling out SCSI
  memory: Make operations using MemoryRegionIoeventfd struct pass by pointer.
  char: Remove unwanted crlf conversion
  qdev: Remove DeviceClass::init() and ::exit()
  qdev: Simplify the SysBusDeviceClass::init path
  hw/i2c: Use DeviceClass::realize instead of I2CSlaveClass::init
  hw/i2c/smbus: Use DeviceClass::realize instead of SMBusDeviceClass::init
  target/i386/kvm.c: Remove compatibility shim for KVM_HINTS_REALTIME
  Update Linux headers to 4.17-rc6
  target/i386/kvm.c: Handle renaming of KVM_HINTS_DEDICATED
  scripts/update-linux-headers: Handle kernel license no longer being one file
  scripts/update-linux-headers: Handle __aligned_u64
  virtio-gpu-3d: Define VIRTIO_GPU_CAPSET_VIRGL2 elsewhere
  gdbstub: Prevent fd leakage
  docs/interop: add "firmware.json"
  ipmi: Use proper struct reference for KCS vmstate
  vmstate: Add a VSTRUCT type
  tcg: remove softfloat from --disable-tcg builds
  qemu-options: Mark the non-functional -clock option as deprecated
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-05-31 | accel: Do not include "exec/address-spaces.h" if it is not necessary | Philippe Mathieu-Daudé | 1 file, -1/+0
Code change produced with:

    $ git grep '#include "exec/address-spaces.h"' accel | \
      cut -d: -f-1 | \
      xargs egrep -L "(get_system_|address_space_)" | \
      xargs sed -i.bak '/#include "exec\/address-spaces.h"/d'

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180528232719.4721-3-f4bug@amsat.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-05-31 | Make address_space_translate{, _cached}() take a MemTxAttrs argument | Peter Maydell | 1 file, -1/+1
As part of plumbing MemTxAttrs down to the IOMMU translate method, add MemTxAttrs as an argument to address_space_translate() and address_space_translate_cached(). Callers either have an attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180521140402.23318-4-peter.maydell@linaro.org
2018-05-31 | Make tb_invalidate_phys_addr() take a MemTxAttrs argument | Peter Maydell | 1 file, -1/+1
As part of plumbing MemTxAttrs down to the IOMMU translate method, add MemTxAttrs as an argument to tb_invalidate_phys_addr(). Its callers either have an attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20180521140402.23318-3-peter.maydell@linaro.org
2018-05-20 | Remove unnecessary variables for function return value | Laurent Vivier | 1 file, -4/+1
Re-run Coccinelle script scripts/coccinelle/return_directly.cocci Signed-off-by: Laurent Vivier <lvivier@redhat.com> ppc part Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2018-05-15 | tcg: Optionally log FPU state in TCG -d cpu logging | Peter Maydell | 1 file, -3/+6
Usually the logging of the CPU state produced by -d cpu is sufficient to diagnose problems, but sometimes you want to see the state of the floating point registers as well. We don't want to enable that by default as it adds a lot of extra data to the log; instead, allow it to be optionally enabled via -d fpu. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180510130024.31678-1-peter.maydell@linaro.org
2018-05-11 | Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20180510' into staging | Peter Maydell | 2 files, -38/+82
target-arm queue:
* hw/arm/iotkit.c: fix minor memory leak
* softfloat: fix wrong-exception-flags bug for multiply-add corner case
* arm: isolate and clean up DTB generation
* implement Arm v8.1-Atomics extension
* Fix some bugs and missing instructions in the v8.2-FP16 extension

# gpg: Signature made Thu 10 May 2018 18:44:34 BST
# gpg: using RSA key 3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20180510: (21 commits)
  target/arm: Clear SVE high bits for FMOV
  target/arm: Fix float16 to/from int16
  target/arm: Implement vector shifted FCVT for fp16
  target/arm: Implement vector shifted SCVF/UCVF for fp16
  target/arm: Enable ARM_FEATURE_V8_ATOMICS for user-only
  target/arm: Implement CAS and CASP
  target/arm: Fill in disas_ldst_atomic
  target/arm: Introduce ARM_FEATURE_V8_ATOMICS and initial decode
  target/riscv: Use new atomic min/max expanders
  tcg: Use GEN_ATOMIC_HELPER_FN for opposite endian atomic add
  tcg: Introduce atomic helpers for integer min/max
  target/xtensa: Use new min/max expanders
  target/arm: Use new min/max expanders
  tcg: Introduce helpers for integer min/max
  atomic.h: Work around gcc spurious "unused value" warning
  make sure that we aren't overwriting mc->get_hotplug_handler by accident
  arm/boot: split load_dtb() from arm_load_kernel()
  platform-bus-device: use device plug callback instead of machine_done notifier
  pc: simplify MachineClass::get_hotplug_handler handling
  softfloat: Handle default NaN mode after pickNaNMulAdd, not before
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

# Conflicts:
#	target/riscv/translate.c
2018-05-10 | tcg: Use GEN_ATOMIC_HELPER_FN for opposite endian atomic add | Richard Henderson | 1 file, -42/+7
Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180508151437.4232-6-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-05-10 | tcg: Introduce atomic helpers for integer min/max | Richard Henderson | 2 files, -0/+79
Given that this atomic operation will be used by both risc-v and aarch64, let's not duplicate code across the two targets. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180508151437.4232-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
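As a sketch of what such a helper must do on a host without a native atomic min/max instruction, here is a generic fetch-and-umin built from a compare-and-swap loop (the helper name is made up; atomic_read/atomic_cmpxchg are QEMU's macros from qemu/atomic.h):

    static uint32_t atomic_fetch_umin_sketch(uint32_t *p, uint32_t val)
    {
        uint32_t old = atomic_read(p);

        while (val < old) {
            uint32_t seen = atomic_cmpxchg(p, old, val);
            if (seen == old) {
                break;          /* we installed the new minimum */
            }
            old = seen;         /* lost a race; retry against fresh value */
        }
        return old;             /* fetch-style: return the prior value */
    }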
2018-05-09 | translator: merge max_insns into DisasContextBase | Emilio G. Cota | 1 file, -11/+10
While at it, use int for both num_insns and max_insns to make sure we have same-type comparisons. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Michael Clark <mjc@sifive.com> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-04-11 | icount: fix cpu_restore_state_from_tb for non-tb-exit cases | Pavel Dovgalyuk | 4 files, -20/+20
In icount mode, instructions that access io memory spaces in the middle of the translation block invoke TB recompilation. After recompilation, such instructions become last in the TB and are allowed to access io memory spaces.

When the code includes an instruction like i386 'xchg eax, 0xffffd080', which accesses the APIC, QEMU goes into an infinite loop of recompilation. This instruction includes two memory accesses - one read and one write. After the first access, the APIC calls cpu_report_tpr_access, which restores the CPU state to get the current eip. But cpu_restore_state_from_tb resets the cpu->can_do_io flag, which makes the second memory access invalid. Therefore the second memory access causes a recompilation of the block. Then these operations repeat again and again.

This patch moves resetting the cpu->can_do_io flag from cpu_restore_state_from_tb to the cpu_loop_exit* functions. It also adds a parameter for cpu_restore_state which controls restoring icount. There is no need to restore icount when we only query the CPU state without breaking the TB. Restoring it in such cases leads to an incorrect flow of the virtual time.

In most cases the new parameter is true (icount should be recalculated). But there are two cases in i386 and openrisc when the CPU state is only queried without the need to break the TB. This patch fixes both of these cases.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Message-Id: <20180409091320.12504.35329.stgit@pasha-VirtualBox>
[rth: Make can_do_io setting unconditional; move from cpu_exec; make cpu_loop_exit_{noexc,restore} call cpu_loop_exit.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
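A sketch of how the new parameter reads at call sites (the exact argument name in the patch may differ):

    /* breaking out of the TB: icount must be recalculated correctly */
    cpu_restore_state(cpu, retaddr, true);

    /* query-only: just recover pc/eip, leave the virtual clock alone */
    cpu_restore_state(cpu, retaddr, false);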
2018-04-06 | tcg: Fix out-of-line generic vector compares | Richard Henderson | 1 file, -1/+1
A mistake in the type passed to sizeof: it happens to work when the out-of-line fallback itself is using host vectors, but fails when using only the base types. Tested-by: Emilio G. Cota <cota@braap.org> Reported-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-03-26 | tcg: Really fix cpu_io_recompile | Richard Henderson | 1 file, -27/+10
We have confused the number of instructions that have been executed in the TB with the number of instructions needed to repeat the I/O instruction. We have used cpu_restore_state_from_tb, which means that the guest pc is pointing to the I/O instruction. The only time the answer to the latter question is not 1 is when MIPS or SH4 need to re-execute the branch for the delay slot as well. We must rely on cpu->cflags_next_tb to generate the next TB, as otherwise we have a race condition with other guest cpus within the TB cache. Fixes: 0790f86861079b1932679d0f011e431aaf4ee9e2 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20180319031545.29359-1-richard.henderson@linaro.org> Tested-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-03-12 | tcg: fix cpu_io_recompile | Pavel Dovgalyuk | 1 file, -3/+15
cpu_io_recompile() was broken by commit 9b990ee5a3cc6aa38f81266fb0c6ef37a36c45b9. Instead of regenerating the block starting from the PC of the original block, it just set the instruction counter for TCG. In most cases this went unnoticed, but in icount mode it raised an exception due to incorrect usage of the CF_LAST_IO flag. This patch recovers recompilation of the original block and also configures translation for executing the single IO instruction which caused the recompilation.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20180227095338.1060.27385.stgit@pasha-VirtualBox>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
2018-03-12 | cpu-exec: fix exception_index handling | Pavel Dovgalyuk | 1 file, -1/+4
Function cpu_handle_interrupt calls cc->cpu_exec_interrupt to process pending hardware interrupts. Under the hood cpu_exec_interrupt uses cpu->exception_index to pass information to the internal function which is usually common for exception and interrupt processing. But this value is not reset after return and may be processed again by cpu_handle_exception. This does not happen due to overwriting the exception_index at the end of cpu_handle_interrupt. But this branch may also overwrite the valid exception_index in some cases. Therefore this patch:

1. resets exception_index just after the call to cpu_exec_interrupt
2. prevents overwriting the meaningful value of exception_index

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20180227095140.1060.61357.stgit@pasha-VirtualBox>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
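A condensed sketch of the two changes in cpu_handle_interrupt (the control flow is illustrative, not the exact diff):

    if (cc->cpu_exec_interrupt(cpu, interrupt_request)) {
        /* 1. reset right after the call, so a stale value cannot be
         *    picked up again by cpu_handle_exception */
        cpu->exception_index = -1;
    }
    /* ... later, when leaving the loop on an exit request ... */
    if (atomic_read(&cpu->exit_request)) {
        atomic_set(&cpu->exit_request, 0);
        /* 2. do not clobber a meaningful exception_index */
        if (cpu->exception_index == -1) {
            cpu->exception_index = EXCP_INTERRUPT;
        }
        return true;
    }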
2018-02-08 | tcg: Add generic vector helpers with a scalar operand | Richard Henderson | 2 files, -0/+199
Use dup to convert a non-constant scalar to a third vector. Add addition, multiplication, and logical operations with an immediate. Add addition, subtraction, multiplication, and logical operations with a non-constant scalar. Allow for the front-end to build operations in which the scalar operand comes first. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-02-08 | tcg: Add generic helpers for saturating arithmetic | Richard Henderson | 2 files, -0/+288
No vector ops as yet. SSE only has direct support for 8- and 16-bit saturation; handling 32- and 64-bit saturation is much more expensive. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
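For reference, a sketch of the scalar form of one such operation, 32-bit signed saturating addition, which helpers of this kind apply elementwise (assumes <stdint.h>; the function name is made up):

    static int32_t sadd32_sketch(int32_t a, int32_t b)
    {
        int64_t r = (int64_t)a + b;    /* widen: the sum cannot wrap */

        if (r > INT32_MAX) {
            r = INT32_MAX;             /* clamp positive overflow */
        } else if (r < INT32_MIN) {
            r = INT32_MIN;             /* clamp negative overflow */
        }
        return r;
    }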
2018-02-08 | tcg: Add generic vector ops for multiplication | Richard Henderson | 2 files, -0/+49
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-02-08 | tcg: Add generic vector ops for comparisons | Richard Henderson | 2 files, -0/+66
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-02-08 | tcg: Add generic vector ops for constant shifts | Richard Henderson | 2 files, -0/+159
Opcodes are added for scalar and vector shifts, but considering the varied semantics of these do not expose them to the front ends. Do go ahead and provide them in case they are needed for backend expansion. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-02-08 | tcg: Add generic vector expanders | Richard Henderson | 3 files, -1/+355
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-02-05 | Drop remaining bits of ia64 host support | Peter Maydell | 1 file, -33/+0
We dropped support for ia64 host CPUs in the 2.11 release (removing the TCG backend for it, and advertising the support as being completely removed in the changelog). However there are a few bits and pieces of code still floating about. Remove those, too. We can drop the check in configure for "ia64 or hppa host?" entirely, because we don't support hppa hosts either any more. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <1516897189-11035-1-git-send-email-peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-01-25 | accel/tcg: add size parameter in tlb_fill() | Laurent Vivier | 3 files, -12/+17
The MC68040 MMU provides the size of the access that triggers the page fault. This size is set in the Special Status Word which is written in the stack frame of the access fault exception. So we need the size in m68k_cpu_unassigned_access() and m68k_cpu_handle_mmu_fault(). To be able to do that, this patch modifies the prototype of handle_mmu_fault handler, tlb_fill() and probe_write(). do_unassigned_access() already includes a size parameter. This patch also updates handle_mmu_fault handlers and tlb_fill() of all targets (only parameter, no code change). Signed-off-by: Laurent Vivier <laurent@vivier.eu> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20180118193846.24953-2-laurent@vivier.eu>
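One possible spelling of the reworked prototype, as described above:

    /* tlb_fill now receives the size of the access that faulted, so
     * targets like m68k can record it in their fault status word. */
    void tlb_fill(CPUState *cs, target_ulong addr, int size,
                  MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);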
2018-01-23 | page_unprotect(): handle calls to pages that are PAGE_WRITE | Peter Maydell | 2 files, -20/+43
If multiple guest threads in user-mode emulation write to a page which QEMU has marked read-only because of cached TCG translations, the threads can race in page_unprotect:

* threads A & B both try to do a write to a page with code in it at the same time (ie which we've made non-writeable, so SEGV)
* they race into the signal handler with this faulting address
* thread A happens to get to page_unprotect() first and takes the mmap lock, so thread B sits waiting for it to be done
* A then finds the page, marks it PAGE_WRITE and mprotect()s it writable
* A can then continue OK (returns from signal handler to retry the memory access)
* ...but when B gets the mmap lock it finds that the page is already PAGE_WRITE, and so it exits page_unprotect() via the "not due to protected translation" code path, and wrongly delivers the signal to the guest rather than just retrying the access

In particular, this meant that trying to run 'javac' in user-mode emulation would fail with a spurious guest SIGSEGV.

Handle this by making page_unprotect() assume that a call for a page which is already PAGE_WRITE is due to a race of this sort and return a "fault handled" indication.

Since this would cause an infinite loop if we ever called page_unprotect() for some other kind of fault than "write failed due to bad access permissions", tighten the condition in handle_cpu_signal() to check the signal number and si_code, and add a comment so that if somebody does ever find themselves debugging an infinite loop of faults they have some clue about why.

(The trick for identifying the correct setting for current_tb_invalidated for thread B (needed to handle the precise-SMC case) is due to Richard Henderson. Paolo Bonzini suggested just relying on si_code rather than trying anything more complicated.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1511879725-9576-3-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
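A sketch of the check this adds near the top of page_unprotect() (surrounding code elided):

    mmap_lock();
    p = page_find(address >> TARGET_PAGE_BITS);
    if (p != NULL && (p->flags & PAGE_WRITE)) {
        /* Another thread already unprotected this page while we were
         * waiting for the mmap lock: report the fault as handled so
         * the guest access is simply retried. */
        mmap_unlock();
        return 1;
    }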
2018-01-23 | linux-user: Propagate siginfo_t through to handle_cpu_signal() | Peter Maydell | 1 file, -25/+14
Currently all the architecture/OS specific cpu_signal_handler() functions call handle_cpu_signal() without passing it the siginfo_t. We're going to want that so we can look at the si_code to determine whether this is a SEGV_ACCERR access violation or some other kind of fault, so change the functions to pass through the pointer to the siginfo_t rather than just the si_addr value. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <1511879725-9576-2-git-send-email-peter.maydell@linaro.org> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2017-12-29 | tcg: add cs_base and flags to -d exec output | Paolo Bonzini | 2 files, -4/+7
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20171217055023.29225-1-pbonzini@redhat.com> [rth: Also change the Chain logging in helper_lookup_tb_ptr.] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-12-21 | cpu-exec: fix missed CPU kick during interrupt injection | David Hildenbrand | 1 file, -9/+3
The conditional memory barrier not only looks strange but actually is wrong.

On s390x, I can reproduce cases where interrupts delivered via cpu_interrupt() occasionally fail to kick the CPU out of emulation. cpu_interrupt() is especially used for inter-CPU communication via SIGP (esp. external calls and emergency interrupts).

With this patch, I was not able to reproduce the problem (esp. no stalls or hangs in the guest). My setup is s390x MTTCG with 16 VCPUs on an 8-CPU host, running make -j16.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171129191319.11483-1-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-12-18 | misc: remove duplicated includes | Philippe Mathieu-Daudé | 1 file, -1/+0
exec: housekeeping (funny since 02d0e095031) applied using ./scripts/clean-includes Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2017-12-18 | accel/tcg/cpu-exec-common.c: Remove unnecessary include of memory-internal.h | Peter Maydell | 1 file, -1/+0
The cpu-exec-common.c file includes memory-internal.h, but it doesn't actually use anything from that header. Remove the unnecessary include. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2017-12-18 | translate-all: fix 'consisits' typo in comment | Emilio G. Cota | 1 file, -1/+1
Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2017-11-21 | accel/tcg: Handle atomic accesses to notdirty memory correctly | Peter Maydell | 3 files, -13/+38
To do a write to memory that is marked as notdirty, we need to invalidate any TBs we have cached for that memory, and update the cpu physical memory dirty flags for VGA and migration. The slowpath code in notdirty_mem_write() does all this correctly, but the new atomic handling code in atomic_mmu_lookup() doesn't do anything at all, it just clears the dirty bit in the TLB. The effect of this bug is that if the first write to a notdirty page for which we have cached TBs is by a guest atomic access, we fail to invalidate the TBs and subsequently will execute incorrect code. This can be seen by trying to run 'javac' on AArch64. Use the new notdirty_call_before() and notdirty_call_after() functions to correctly handle the update to notdirty memory in the atomic codepath. Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1511201308-23580-3-git-send-email-peter.maydell@linaro.org
2017-11-20 | Revert "cpu-exec: don't overwrite exception_index" | Peter Maydell | 1 file, -3/+1
This reverts commit e01cecabf3e04d22340d7e8b3616ef051c42c891, which breaks booting of aarch64 Linux images. Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-11-16 | Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging | Peter Maydell | 1 file, -42/+57
Miscellaneous bugfixes

# gpg: Signature made Wed 15 Nov 2017 15:27:25 GMT
# gpg: using RSA key 0xBFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream:
  fix scripts/update-linux-headers.sh here document
  exec: Do not resolve subpage in mru_section
  util/stats64: Fix min/max comparisons
  cpu-exec: avoid cpu_exec_nocache infinite loop with record/replay
  cpu-exec: don't overwrite exception_index
  vhost-user-scsi: add missing virtqueue_size param
  target-i386: adds PV_TLB_FLUSH CPUID feature bit
  thread-posix: fix qemu_rec_mutex_trylock macro
  Makefile: simpler/faster "make help"
  ioapic/tracing: Remove last DPRINTFs
  Enable 8-byte wide MMIO for 16550 serial devices

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-11-15 | tcg: Record code_gen_buffer address for user-only memory helpers | Richard Henderson | 3 files, -18/+73
When we handle a signal from a fault within a user-only memory helper, we cannot cpu_restore_state with the PC found within the signal frame. Use a TLS variable, helper_retaddr, to record the unwind start point to find the faulting guest insn. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reported-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
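The pattern described, sketched for a hypothetical user-only helper; helper_retaddr is the TLS variable named above, while example_atomic_helper and do_guest_memory_op are made-up illustrations:

    extern __thread uintptr_t helper_retaddr;

    static uint32_t example_atomic_helper(CPUArchState *env,
                                          target_ulong addr, uintptr_t ra)
    {
        uint32_t ret;

        helper_retaddr = ra;    /* signal handler unwinds from here */
        ret = do_guest_memory_op(env, addr);    /* may fault */
        helper_retaddr = 0;     /* back to normal unwinding */
        return ret;
    }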
2017-11-14 | cpu-exec: avoid cpu_exec_nocache infinite loop with record/replay | Pavel Dovgalyuk | 1 file, -41/+54
This patch ensures that icount_decr.u32.high is clear before calling cpu_exec_nocache when an exception is pending, because the exception is caused by the first instruction in the block and it cannot be executed without resetting the flag.

There are two parts to the fix. First, clear icount_decr.u32.high in cpu_handle_interrupt (just before processing the "dependent" request, stored in cpu->interrupt_request or cpu->exit_request) rather than cpu_loop_exec_tb; this ensures that cpu_handle_exception is always reached with zero icount_decr.u32.high unless another interrupt has happened in the meanwhile. Second, try to cause the exception at the beginning of cpu_handle_exception, and exit immediately if the TB cannot execute. With this change, interrupts are processed and cpu_exec_nocache can make progress.

Signed-off-by: Maria Klimushenkova <maria.klimushenkova@ispras.ru>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20171114081818.27640.33165.stgit@pasha-VirtualBox>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-11-14 | cpu-exec: don't overwrite exception_index | Pavel Dovgalyuk | 1 file, -1/+3
This patch adds a condition before overwriting the exception_index field. It is needed when exception_index is already set to some meaningful value.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20171114081812.27640.26372.stgit@pasha-VirtualBox>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-11-13 | accel/tcg/translate-all: expand cpu_restore_state addr check | Alex Bennée | 1 file, -23/+29
We are still seeing signals during translation time when we walk over a page protection boundary. This expands the check to ensure the host PC is inside the code generation buffer. The original suggestion was to check versus tcg_ctx.code_gen_ptr but as we now segment the translation buffer we have to settle for just a general check for being inside. I've also fixed up the declaration to make it clear it can deal with invalid addresses. A later patch will fix up the call sites. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reported-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Laurent Vivier <laurent@vivier.eu> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20171108153245.20740-2-alex.bennee@linaro.org Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Cc: Richard Henderson <rth@twiddle.net> Tested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-11-03 | cpu-exec: Exit exclusive region on longjmp from step_atomic | Peter Maydell | 1 file, -3/+12
Commit ac03ee5331612e44be narrowed the scope of the exclusive region so it only covers when we're executing the TB, not when we're generating it. However it missed that there is more than one execution path out of cpu_tb_exec -- if the atomic insn causes an exception then the code will longjmp out, skipping the code to end the exclusive region. This causes QEMU to hang the next time the CPU calls start_exclusive(), waiting for itself to exit the region. Move the "end the region" code out to the end of the function so that it is run for both normal exit and also for exit-via-longjmp. We have to use a volatile bool flag to decide whether we need to end the region, because we can longjump out of the codegen as well as the execution. (For some reason this only reproduces for me with a clang optimized build, not a gcc debug build.) Reviewed-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Fixes: ac03ee5331612e44beb393df2b578c951d27dc0d Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <1509640536-32160-1-git-send-email-peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
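A condensed sketch of the control flow after the fix (not the exact function body):

    static void step_atomic_sketch(CPUState *cpu)
    {
        volatile bool in_exclusive_region = false;  /* survives longjmp */

        if (sigsetjmp(cpu->jmp_env, 0) == 0) {
            start_exclusive();
            in_exclusive_region = true;
            /* translate and run the atomic insn; an exception here
             * siglongjmps back to the sigsetjmp above */
        }
        if (in_exclusive_region) {
            /* reached on both the normal and the longjmp exit path */
            end_exclusive();
        }
    }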
2017-10-24 | translate-all: exit from tb_phys_invalidate if qht_remove fails | Emilio G. Cota | 1 file, -1/+3
Two or more threads might race while invalidating the same TB. We currently do not check for this at all despite taking tb_lock, which means we would wrongly invalidate the same TB more than once. This bug has actually been hit by users: I recently saw a report on IRC, although I have yet to see the corresponding test case. Fix this by using qht_remove as the synchronization point; if it fails, that means the TB has already been invalidated, and therefore there is nothing left to do in tb_phys_invalidate. Note that this solution works now that we still have tb_lock, and will continue working once we remove tb_lock. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <1508445114-4717-1-git-send-email-cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
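The synchronization point, sketched (assuming the TB hash table and hash value from the surrounding code):

    /* qht_remove succeeds for exactly one thread; everyone else sees
     * the TB as already invalidated and backs out. */
    if (!qht_remove(&tb_ctx.htable, tb, h)) {
        return;
    }
    /* we won the race: continue unlinking the TB from its page lists */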
2017-10-24 | tcg: enable multiple TCG contexts in softmmu | Emilio G. Cota | 1 file, -1/+1
This enables parallel TCG code generation. However, we do not take advantage of it yet since tb_lock is still held during tb_gen_code.

In user-mode we use a single TCG context; see the documentation added to tcg_region_init for the rationale.

Note that targets do not need any conversion: targets initialize a TCGContext (e.g. defining TCG globals), and after this initialization has finished, the context is cloned by the vCPU threads, each of them keeping a separate copy.

TCG threads claim one entry in tcg_ctxs[] by atomically increasing n_tcg_ctxs. Do not be too annoyed by the subsequent atomic_read's of that variable and tcg_ctxs; they are there just to play nice with analysis tools such as thread sanitizer.

Note that we do not allocate an array of contexts (we allocate an array of pointers instead) because when tcg_context_init is called, we do not know yet how many contexts we'll use since the bool behind qemu_tcg_mttcg_enabled() isn't set yet.

Previous patches folded some TCG globals into TCGContext. The non-const globals remaining are only set at init time, i.e. before the TCG threads are spawned. Here is a list of these set-at-init-time globals under tcg/:

Only written by tcg_context_init:
- indirect_reg_alloc_order
- tcg_op_defs

Only written by tcg_target_init (called from tcg_context_init):
- tcg_target_available_regs
- tcg_target_call_clobber_regs
- arm: arm_arch, use_idiv_instructions
- i386: have_cmov, have_bmi1, have_bmi2, have_lzcnt, have_movbe, have_popcnt
- mips: use_movnz_instructions, use_mips32_instructions, use_mips32r2_instructions, got_sigill (tcg_target_detect_isa)
- ppc: have_isa_2_06, have_isa_3_00, tb_ret_addr
- s390: tb_ret_addr, s390_facilities
- sparc: qemu_ld_trampoline, qemu_st_trampoline (build_trampolines), use_vis3_instructions

Only written by tcg_prologue_init:
- 'struct jit_code_entry one_entry'
- aarch64: tb_ret_addr
- arm: tb_ret_addr
- i386: tb_ret_addr, guest_base_flags
- ia64: tb_ret_addr
- mips: tb_ret_addr, bswap32_addr, bswap32u_addr, bswap64_addr

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24 | tcg: introduce regions to split code_gen_buffer | Emilio G. Cota | 1 file, -43/+20
This is groundwork for supporting multiple TCG contexts.

The naive solution here is to split code_gen_buffer statically among the TCG threads; this however results in poor utilization if translation needs are different across TCG threads.

What we do here is to add an extra layer of indirection, assigning regions that act just like pages do in virtual memory allocation. (BTW if you are wondering about the chosen naming, I did not want to use blocks or pages because those are already heavily used in QEMU).

We use a global lock to serialize allocations as well as statistics reporting (we now export the size of the used code_gen_buffer with tcg_code_size()). Note that for the allocator we could just use a counter and atomic_inc; however, that would complicate the gathering of tcg_code_size()-like stats. So given that the region operations are not a fast path, a lock seems the most reasonable choice.

The effectiveness of this approach is clear after seeing some numbers. I used the bootup+shutdown of debian-arm with '-tb-size 80' as a benchmark. Note that I'm evaluating this after enabling per-thread TCG (which is done by a subsequent commit).

* -smp 1, 1 region (entire buffer):
    qemu: flush code_size=83885014 nb_tbs=154739 avg_tb_size=357
    qemu: flush code_size=83884902 nb_tbs=153136 avg_tb_size=363
    qemu: flush code_size=83885014 nb_tbs=152777 avg_tb_size=364
    qemu: flush code_size=83884950 nb_tbs=150057 avg_tb_size=373
    qemu: flush code_size=83884998 nb_tbs=150234 avg_tb_size=373
    qemu: flush code_size=83885014 nb_tbs=154009 avg_tb_size=360
    qemu: flush code_size=83885014 nb_tbs=151007 avg_tb_size=370
    qemu: flush code_size=83885014 nb_tbs=151816 avg_tb_size=367

That is, 8 flushes.

* -smp 8, 32 regions (80/32 MB per region) [i.e. this patch]:
    qemu: flush code_size=76328008 nb_tbs=141040 avg_tb_size=356
    qemu: flush code_size=75366534 nb_tbs=138000 avg_tb_size=361
    qemu: flush code_size=76864546 nb_tbs=140653 avg_tb_size=361
    qemu: flush code_size=76309084 nb_tbs=135945 avg_tb_size=375
    qemu: flush code_size=74581856 nb_tbs=132909 avg_tb_size=375
    qemu: flush code_size=73927256 nb_tbs=135616 avg_tb_size=360
    qemu: flush code_size=78629426 nb_tbs=142896 avg_tb_size=365
    qemu: flush code_size=76667052 nb_tbs=138508 avg_tb_size=368

Again, 8 flushes. Note how buffer utilization is not 100%, but it is close. Smaller region sizes would yield higher utilization, but we want region allocation to be rare (it acquires a lock), so we do not want to go too small.

* -smp 8, static partitioning of 8 regions (10 MB per region):
    qemu: flush code_size=21936504 nb_tbs=40570 avg_tb_size=354
    qemu: flush code_size=11472174 nb_tbs=20633 avg_tb_size=370
    qemu: flush code_size=11603976 nb_tbs=21059 avg_tb_size=365
    qemu: flush code_size=23254872 nb_tbs=41243 avg_tb_size=377
    qemu: flush code_size=28289496 nb_tbs=52057 avg_tb_size=358
    qemu: flush code_size=43605160 nb_tbs=78896 avg_tb_size=367
    qemu: flush code_size=45166552 nb_tbs=82158 avg_tb_size=364
    qemu: flush code_size=63289640 nb_tbs=116494 avg_tb_size=358
    qemu: flush code_size=51389960 nb_tbs=93937 avg_tb_size=362
    qemu: flush code_size=59665928 nb_tbs=107063 avg_tb_size=372
    qemu: flush code_size=38380824 nb_tbs=68597 avg_tb_size=374
    qemu: flush code_size=44884568 nb_tbs=79901 avg_tb_size=376
    qemu: flush code_size=50782632 nb_tbs=90681 avg_tb_size=374
    qemu: flush code_size=39848888 nb_tbs=71433 avg_tb_size=372
    qemu: flush code_size=64708840 nb_tbs=119052 avg_tb_size=359
    qemu: flush code_size=49830008 nb_tbs=90992 avg_tb_size=362
    qemu: flush code_size=68372408 nb_tbs=123442 avg_tb_size=368
    qemu: flush code_size=33555560 nb_tbs=59514 avg_tb_size=378
    qemu: flush code_size=44748344 nb_tbs=80974 avg_tb_size=367
    qemu: flush code_size=37104248 nb_tbs=67609 avg_tb_size=364

That is, 20 flushes. Note how a static partitioning approach uses the code buffer poorly, leading to many unnecessary flushes.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24 | translate-all: use qemu_protect_rwx/none helpers | Emilio G. Cota | 1 file, -48/+13
The helpers require the address and size to be page-aligned, so do that before calling them. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>