path: root/accel
2023-09-08  arm/kvm: Enable support for KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE  (Shameer Kolothum; 1 file changed, -0/+1)
Now that the kernel has Eager Page Split support for ARM, enable it in QEMU. This adds an -eager-split-size sub-option to -accel to set the eager page split chunk size, and enables KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE. The chunk size specifies how many pages to break at a time, using a single allocation. The bigger the chunk size, the more pages need to be allocated ahead of time. Reviewed-by: Gavin Shan <gshan@redhat.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Message-id: 20230905091246.1931-1-shameerali.kolothum.thodi@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
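For illustration only (the sub-option name follows the commit text; the machine options and the size value shown are assumptions, not taken from the patch), the chunk size would be set on the -accel line:

    qemu-system-aarch64 -M virt -cpu host -accel kvm,eager-split-size=16K ...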
2023-09-07  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging  (Stefan Hajnoczi; 1 file changed, -1/+3)
* only build util/async-teardown.c when system build is requested
* target/i386: fix BQL handling of the legacy FERR interrupts
* target/i386: fix memory operand size for CVTPS2PD
* target/i386: Add support for AMX-COMPLEX in CPUID enumeration
* compile plugins on Darwin
* configure and meson cleanups
* drop mkvenv support for Python 3.7 and Debian10
* add wrap file for libblkio
* tweak KVM stubs

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmT5t6UUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroMmjwf+MpvVuq+nn+3PqGUXgnzJx5ccA5ne
# O9Xy8+1GdlQPzBw/tPovxXDSKn3HQtBfxObn2CCE1tu/4uHWpBA1Vksn++NHdUf2
# P0yoHxGskJu5iYYTtIcNw5cH2i+AizdiXuEjhfNjqD5Y234cFoHnUApt9e3zBvVO
# cwGD7WpPuSb4g38hHkV6nKcx72o7b4ejDToqUVZJ2N+RkddSqB03fSdrOru0hR7x
# V+lay0DYdFszNDFm05LJzfDbcrHuSryGA91wtty7Fzj6QhR/HBHQCUZJxMB5PI7F
# Zy4Zdpu60zxtSxUqeKgIi7UhNFgMcax2Hf9QEqdc/B4ARoBbboh4q4u8kQ==
# =dH7/
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 07 Sep 2023 07:44:37 EDT
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (51 commits)
  docs/system/replay: do not show removed command line option
  subprojects: add wrap file for libblkio
  sysemu/kvm: Restrict kvm_pc_setup_irq_routing() to x86 targets
  sysemu/kvm: Restrict kvm_has_pit_state2() to x86 targets
  sysemu/kvm: Restrict kvm_get_apic_state() to x86 targets
  sysemu/kvm: Restrict kvm_arch_get_supported_cpuid/msr() to x86 targets
  target/i386: Restrict declarations specific to CONFIG_KVM
  target/i386: Allow elision of kvm_hv_vpindex_settable()
  target/i386: Allow elision of kvm_enable_x2apic()
  target/i386: Remove unused KVM stubs
  target/i386/cpu-sysemu: Inline kvm_apic_in_kernel()
  target/i386/helper: Restrict KVM declarations to system emulation
  hw/i386/fw_cfg: Include missing 'cpu.h' header
  hw/i386/pc: Include missing 'cpu.h' header
  hw/i386/pc: Include missing 'sysemu/tcg.h' header
  Revert "mkvenv: work around broken pip installations on Debian 10"
  mkvenv: assume presence of importlib.metadata
  Python: Drop support for Python 3.7
  configure: remove dead code
  meson: list leftover CONFIG_* symbols
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-09-07  configure, meson: move --enable-plugins to meson  (Paolo Bonzini; 1 file changed, -1/+3)
While the option still needs to be parsed in the configure script (it's needed by tests/tcg, and also to decide about recursing into contrib/plugins), passing it to Meson can be done with -D instead of using config-host.mak. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-08-31  accel/tcg: spelling fixes  (Michael Tokarev; 1 file changed, -1/+1)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> Message-ID: <20230823065335.1919380-18-mjt@tls.msk.ru> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-ID: <20230823065335.1919380-19-mjt@tls.msk.ru> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-08-31  accel: Remove HAX accelerator  (Philippe Mathieu-Daudé; 3 files changed, -28/+0)
HAX is deprecated since commits 73741fda6c ("MAINTAINERS: Abort HAXM maintenance") and 90c167a1da ("docs/about/deprecated: Mark HAXM in QEMU as deprecated"), released in v8.0.0. Per the latest HAXM release (v7.8 [*]), the latest QEMU supported is v7.2:

  Note: Up to this release, HAXM supports QEMU from 2.9.0 to 7.2.0.

The next commit (https://github.com/intel/haxm/commit/da1b8ec072) added:

  HAXM v7.8.0 is our last release and we will not accept pull requests or respond to issues after this.

It became very hard to build and test HAXM. Its previous maintainers made it clear they won't help. It doesn't seem a good use of QEMU maintainers' time to spend it on a dead project. Save our time by removing this orphan zombie code.

[*] https://github.com/intel/haxm/releases/tag/v7.8.0

Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: Markus Armbruster <armbru@redhat.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20230831082016.60885-1-philmd@linaro.org>
2023-08-29  softmmu: Use async_run_on_cpu in tcg_commit  (Richard Henderson; 1 file changed, -30/+0)
After system startup, run the update to memory_dispatch and the tlb_flush on the cpu. This eliminates a race, wherein a running cpu sees the memory_dispatch change but has not yet seen the tlb_flush. Since the update now happens on the cpu, we need not use qatomic_rcu_read to protect the read of memory_dispatch. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1826 Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1834 Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1846 Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24  accel/tcg: Update run_on_cpu_data static assert  (Anton Johansson; 1 file changed, -2/+3)
As we are now using vaddr for representing guest addresses, update the static assert to check that vaddr fits in the run_on_cpu_data union. Signed-off-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230807155706.9580-10-anjo@rev.ng> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24  accel/tcg: Widen address arg in tlb_compare_set()  (Anton Johansson; 1 file changed, -1/+1)
Signed-off-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230807155706.9580-9-anjo@rev.ng> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24  include/exec: Replace target_ulong with abi_ptr in cpu_[st|ld]*()  (Anton Johansson; 2 files changed, -13/+13)
Changes the address type of the guest memory read/write functions from target_ulong to abi_ptr. (abi_ptr is currently typedef'd to target_ulong but that will change in a following commit.) This will reduce the coupling between accel/ and target/. Note: Function pointers that point to cpu_[st|ld]*() in target/riscv and target/rx are also updated in this commit. Signed-off-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230807155706.9580-6-anjo@rev.ng> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24  accel/hvf: Widen pc/saved_insn for hvf_sw_breakpoint  (Anton Johansson; 2 files changed, -3/+3)
Widens the pc and saved_insn fields of hvf_sw_breakpoint from target_ulong to vaddr. Other hvf_* functions accessing hvf_sw_breakpoint are also widened to match. Signed-off-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230807155706.9580-3-anjo@rev.ng> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24  accel/kvm: Widen pc/saved_insn for kvm_sw_breakpoint  (Anton Johansson; 1 file changed, -2/+1)
Widens the pc and saved_insn fields of kvm_sw_breakpoint from target_ulong to vaddr. The pc argument of kvm_find_sw_breakpoint is also widened to match. Signed-off-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230807155706.9580-2-anjo@rev.ng> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-22  accel/kvm: Make kvm_dirty_ring_reaper_init() void  (Akihiko Odaki; 1 file changed, -7/+2)
The returned value was always zero and had no meaning. Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Message-id: 20230727073134.134102-7-akihiko.odaki@daynix.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-08-22  accel/kvm: Free as when an error occurred  (Akihiko Odaki; 1 file changed, -0/+1)
An error may occur after s->as is allocated, for example if the KVM_CREATE_VM ioctl call fails. Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Message-id: 20230727073134.134102-6-akihiko.odaki@daynix.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> [PMM: tweaked commit message] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-22  accel/kvm: Use negative KVM type for error propagation  (Akihiko Odaki; 1 file changed, -0/+5)
On MIPS, kvm_arch_get_default_type() returns a negative value when an error occurs, so handle that case. Also, let other machines return negative values when errors occur, and declare a negative return value as the correct way to propagate an error that happened while determining the KVM type. Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Message-id: 20230727073134.134102-5-akihiko.odaki@daynix.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-08-22  kvm: Introduce kvm_arch_get_default_type hook  (Akihiko Odaki; 1 file changed, -1/+3)
kvm_arch_get_default_type() returns the default KVM type. This hook is particularly useful for deriving a KVM type that is valid for the "none" machine model, which is used by libvirt to probe the availability of KVM. For MIPS, the existing mips_kvm_type() is reused. This function ensures the availability of VZ, which is mandatory for using KVM on current QEMU. Cc: qemu-stable@nongnu.org Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Message-id: 20230727073134.134102-2-akihiko.odaki@daynix.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> [PMM: added doc comment for new function] Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
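A minimal sketch of how this hook and the negative-type convention from the commit above fit together in kvm_init(); only kvm_arch_get_default_type() itself comes from this series, the surrounding variable names are illustrative:

    /* Sketch only; error handling and variable names are simplified. */
    int kvm_arch_get_default_type(MachineState *ms);

    type = mc->kvm_type ? mc->kvm_type(ms, type_str)
                        : kvm_arch_get_default_type(ms);
    if (type < 0) {
        /* a negative type now means "could not determine a KVM type" */
        goto err;
    }
    ret = kvm_ioctl(s, KVM_CREATE_VM, type);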
2023-08-10  accel/tcg: Avoid reading too much in load_atom_{2,4}  (Richard Henderson; 1 file changed, -2/+8)
When load_atom_extract_al16_or_al8 is inexpensive, we want to use it early, in order to avoid the overhead of required_atomicity. However, we must not read past the end of the page. If there are more than 8 bytes remaining, then both the "aligned 16" and "aligned 8" paths align down so that the read has at least 16 bytes remaining on the page. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
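A hedged illustration of the alignment argument (the variable names are made up for the example): with more than 8 bytes left before the page boundary, aligning the host address down to 16 keeps a 16-byte read entirely on the page, because the page boundary itself is 16-byte aligned.

    /* Illustration only: pv is a host address within the page and
     * page_end is the address of the first byte after the page. */
    uintptr_t remaining = page_end - pv;
    if (remaining > 8) {
        uintptr_t pv16 = pv & ~(uintptr_t)15;
        /* Since page_end is 16-aligned and pv <= page_end - 9,
         * pv16 + 16 <= page_end: a 16-byte load from pv16 stays on the page. */
    }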
2023-08-06  accel/tcg: Call save_iotlb_data from io_readx as well  (Mikhail Tyutin; 1 file changed, -15/+21)
Apply save_iotlb_data() to io_readx() as well as to io_writex(). This fixes a SEGFAULT when plugins call qemu_plugin_hwaddr_phys_addr() for addresses inside an MMIO region. Signed-off-by: Dmitriy Solovev <d.solovev@yadro.com> Signed-off-by: Mikhail Tyutin <m.tyutin@yadro.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230804110903.19968-1-m.tyutin@yadro.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-05  accel/tcg: Do not issue misaligned i/o  (Richard Henderson; 1 file changed, -46/+72)
In the single-page case we were issuing misaligned i/o to the memory subsystem, which does not handle it properly. Split such accesses via do_{ld,st}_mmio_*. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1800 Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-05  accel/tcg: Issue wider aligned i/o in do_{ld,st}_mmio_*  (Richard Henderson; 1 file changed, -7/+69)
If the address and size are aligned, send larger chunks to the memory subsystem. This will be required to make more use of these helpers. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-05  accel/tcg: Adjust parameters and locking with do_{ld,st}_mmio_*  (Richard Henderson; 1 file changed, -33/+34)
Replace MMULookupPageData* with CPUTLBEntryFull, addr, size. Move QEMU_IOTHREAD_LOCK_GUARD to the caller. This simplifies the usage from do_ld16_beN and do_st16_leN, where we weren't locking the entire operation, and required hoop jumping for passing addr and size. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-31  accel/tcg: Clear tcg_ctx->gen_tb on buffer overflow  (Richard Henderson; 1 file changed, -0/+1)
On overflow of code_gen_buffer, we unlock the guest pages we had been translating, but failed to clear gen_tb. On restart, if we cannot allocate a TB, we exit to the main loop to perform the flush of all TBs as soon as possible. With garbage in gen_tb, we hit an assert: ../src/accel/tcg/tb-maint.c:348:page_unlock__debug: \ assertion failed: (page_is_locked(pd)) Fixes: deba78709ae8 ("accel/tcg: Always lock pages before translation") Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-31  kvm: Fix crash due to access uninitialized kvm_state  (Gavin Shan; 1 file changed, -1/+1)
QEMU runs into a core dump on arm64; the call chain extracted from the core dump is shown below. It's caused by accessing the uninitialized @kvm_state in kvm_flush_coalesced_mmio_buffer() due to commit 176d073029 ("hw/arm/virt: Use machine_memory_devices_init()"), where the machine's memory region is added earlier than before.

  main
  qemu_init
  configure_accelerators
  qemu_opts_foreach
  do_configure_accelerator
  accel_init_machine
  kvm_init
  virt_kvm_type
  virt_set_memmap
  machine_memory_devices_init
  memory_region_add_subregion
  memory_region_add_subregion_common
  memory_region_update_container_subregions
  memory_region_transaction_begin
  qemu_flush_coalesced_mmio_buffer
  kvm_flush_coalesced_mmio_buffer

Fix it by bailing out early in kvm_flush_coalesced_mmio_buffer() when @kvm_state is still uninitialized. With this applied, no crash is observed on arm64. Fixes: 176d073029 ("hw/arm/virt: Use machine_memory_devices_init()") Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20230731125946.2038742-1-gshan@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
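The shape of the fix is a simple early return; this is a sketch of the idea rather than the exact hunk:

    void kvm_flush_coalesced_mmio_buffer(void)
    {
        KVMState *s = kvm_state;

        /* Sketch of the fix: bail out while KVM is not yet initialized. */
        if (!s) {
            return;
        }
        /* ... drain the coalesced MMIO ring as before ... */
    }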
2023-07-24  accel/tcg: Fix type of 'last' for pageflags_{find,next}  (Luca Bonissi; 1 file changed, -2/+2)
These should match 'start' as target_ulong, not target_long. On 32bit targets, the parameter was sign-extended to uint64_t, so only the first mmap within the upper 2GB memory can succeed. Signed-off-by: Luca Bonissi <qemu@bonslack.org> Message-Id: <327460e2-0ebd-9edb-426b-1df80d16c32a@bonslack.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
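A small, self-contained illustration of the bug class (not the QEMU code itself): routing a 32-bit address above 2GB through a signed type sign-extends it when widened to 64 bits.

    uint32_t last32    = 0x80000000u;          /* an address in the upper 2GB */
    int32_t  as_signed = (int32_t)last32;      /* target_long-style signed view */
    uint64_t widened   = (uint64_t)as_signed;  /* 0xffffffff80000000: bogus */
    uint64_t wanted    = (uint64_t)last32;     /* 0x0000000080000000: correct */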
2023-07-24  accel/tcg: Zero-pad vaddr in tlb_debug output  (Anton Johansson; 1 file changed, -10/+10)
In replacing target_ulong with vaddr and TARGET_FMT_lx with VADDR_PRIx, the zero-padding of TARGET_FMT_lx got lost. Readd 16-wide zero-padding for logging consistency. Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Anton Johansson <anjo@rev.ng> Message-Id: <20230713120746.26897-1-anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-23  accel/tcg: Take mmap_lock in load_atomic*_or_exit  (Richard Henderson; 1 file changed, -14/+18)
For user-only, the probe for page writability may race with another thread's mprotect. Take the mmap_lock around the operation. This is still faster than the start/end_exclusive fallback. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-23  accel/tcg: Fix sense of read-only probes in ldst_atomicity  (Richard Henderson; 1 file changed, -2/+2)
In the initial commit, cdfac37be0d, the sense of the test is incorrect, as the -1/0 return was confusing. In bef6f008b981, we mechanically invert all callers while changing to false/true return, preserving the incorrectness of the test. Now that the return sense is sane, it's easy to see that if !write, then the page is not modifiable (i.e. most likely read-only, with PROT_NONE handled via SIGSEGV). Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-17  accel/tcg: Zero-pad PC in TCG CPU exec trace lines  (Peter Maydell; 2 files changed, -3/+3)
In commit f0a08b0913befbd we changed the type of the PC from target_ulong to vaddr. In doing so we inadvertently dropped the zero-padding on the PC in trace lines (the second item inside the [] in these lines). They used to look like this on AArch64, for instance:

  Trace 0: 0x7f2260000100 [00000000/0000000040000000/00000061/ff200000]

and now they look like this:

  Trace 0: 0x7f4f50000100 [00000000/40000000/00000061/ff200000]

and if the PC happens to be somewhere low like 0x5000 then the field is shown as /5000/. This is because TARGET_FMT_lx is a "%08x" or "%016x" specifier, depending on TARGET_LONG_SIZE, whereas VADDR_PRIx is just PRIx64 with no width specifier. Restore the zero-padding by adding an 016 width specifier to this tracing and a couple of others that were similarly recently changed to use VADDR_PRIx without a width specifier. We can't unfortunately restore the "32-bit guests are padded to 8 hex digits and 64-bit guests to 16 hex digits" behaviour so easily. Fixes: f0a08b0913befbd ("accel/tcg/cpu-exec.c: Widen pc to vaddr") Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Anton Johansson <anjo@rev.ng> Message-id: 20230711165434.4123674-1-peter.maydell@linaro.org
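A sketch of the formatting change (the log call and its arguments are placeholders): VADDR_PRIx alone carries no width, so an explicit 016 restores the padding.

    qemu_log("[%" VADDR_PRIx "]", pc);     /* before: width depends on the value */
    qemu_log("[%016" VADDR_PRIx "]", pc);  /* after: always 16 zero-padded digits */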
2023-07-15  tcg: Use HAVE_CMPXCHG128 instead of CONFIG_CMPXCHG128  (Richard Henderson; 4 files changed, -4/+4)
We adjust CONFIG_ATOMIC128 and CONFIG_CMPXCHG128 with CONFIG_ATOMIC128_OPT in atomic128.h. It is difficult to tell when those changes have been applied with the ifdef we must use with CONFIG_CMPXCHG128. So instead use HAVE_CMPXCHG128, which triggers -Werror-undef when the proper header has not been included. Improves tcg_gen_atomic_cmpxchg_i128 for s390x host, which requires CONFIG_ATOMIC128_OPT. Without this we fall back to EXCP_ATOMIC to single-step 128-bit atomics, which is slow enough to cause some tests to time out. Reported-by: Thomas Huth <thuth@redhat.com> Tested-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-15  accel/tcg: Always lock pages before translation  (Richard Henderson; 5 files changed, -134/+237)
We had done this for user-mode by invoking page_protect within the translator loop. Extend this to handle system mode as well. Move page locking out of tb_link_page. Reported-by: Liren Wei <lrwei@bupt.edu.cn> Reported-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Richard W.M. Jones <rjones@redhat.com>
2023-07-15  accel/tcg: Return bool from page_check_range  (Richard Henderson; 2 files changed, -13/+13)
Replace the 0/-1 result with true/false. Invert the sense of the test of all callers. Document the function. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230707204054.8792-25-richard.henderson@linaro.org>
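A sketch of the resulting calling convention as described above (the caller shown is hypothetical, and the prototype is an assumption about the post-change signature):

    /* Returns true if the whole range carries the requested flags. */
    bool page_check_range(target_ulong start, target_ulong len, int flags);

    if (!page_check_range(addr, size, PAGE_READ | PAGE_WRITE)) {
        /* the range does not have the required permissions */
    }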
2023-07-15  accel/tcg: Accept more page flags in page_check_range  (Richard Henderson; 1 file changed, -2/+2)
Only PAGE_WRITE needs special attention, all others can be handled as we do for PAGE_READ. Adjust the mask. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20230707204054.8792-24-richard.henderson@linaro.org>
2023-07-15  accel/tcg: Introduce page_find_range_empty  (Richard Henderson; 1 file changed, -0/+41)
Use the interval tree to locate an unused range in the VM. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230707204054.8792-17-richard.henderson@linaro.org>
2023-07-15  accel/tcg: Introduce page_check_range_empty  (Richard Henderson; 1 file changed, -0/+7)
Examine the interval tree to validate that a region has no existing mappings. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230707204054.8792-10-richard.henderson@linaro.org>
2023-07-15  accel/tcg: Split out cpu_exec_longjmp_cleanup  (Richard Henderson; 1 file changed, -24/+19)
Share the setjmp cleanup between cpu_exec_step_atomic and cpu_exec_setjmp. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-03  plugins: force slow path when plugins instrument memory ops  (Alex Bennée; 2 files changed, -9/+40)
The lack of SVE memory instrumentation has been an omission in plugin handling since it was introduced. Fortunately we can utilise the probe_* functions to force all memory accesses to follow the slow path. We do this by checking the access type and the presence of plugin memory callbacks and, if set, returning the TLB_MMIO flag. We have to jump through a few hoops in user mode to re-use the flag, but it has the desired effect:

  ./qemu-system-aarch64 -display none -serial mon:stdio \
    -M virt -cpu max -semihosting-config enable=on \
    -kernel ./tests/tcg/aarch64-softmmu/memory-sve \
    -plugin ./contrib/plugins/libexeclog.so,ifilter=st1w,afilter=0x40001808 -d plugin

gives (disas doesn't currently understand st1w):

  0, 0x40001808, 0xe54342a0, ".byte 0xa0, 0x42, 0x43, 0xe5", store, 0x40213010, RAM, store, 0x40213014, RAM, store, 0x40213018, RAM

And for user-mode:

  ./qemu-aarch64 \
    -plugin contrib/plugins/libexeclog.so,afilter=0x4007c0 \
    -d plugin \
    ./tests/tcg/aarch64-linux-user/sha512-sve

gives:

  1..10
  ok 1 - do_test(&tests[i])
  0, 0x4007c0, 0xa4004b80, ".byte 0x80, 0x4b, 0x00, 0xa4", load, 0x5500800370, load, 0x5500800371, load, 0x5500800372, load, 0x5500800373, load, 0x5500800374, load, 0x5500800375, load, 0x5500800376, load, 0x5500800377, load, 0x5500800378, load, 0x5500800379, load, 0x550080037a, load, 0x550080037b, load, 0x550080037c, load, 0x550080037d, load, 0x550080037e, load, 0x550080037f, load, 0x5500800380, load, 0x5500800381, load, 0x5500800382, load, 0x5500800383, load, 0x5500800384, load, 0x5500800385, load, 0x5500800386, load, 0x5500800387, load, 0x5500800388, load, 0x5500800389, load, 0x550080038a, load, 0x550080038b, load, 0x550080038c, load, 0x550080038d, load, 0x550080038e, load, 0x550080038f, load, 0x5500800390, load, 0x5500800391, load, 0x5500800392, load, 0x5500800393, load, 0x5500800394, load, 0x5500800395, load, 0x5500800396, load, 0x5500800397, load, 0x5500800398, load, 0x5500800399, load, 0x550080039a, load, 0x550080039b, load, 0x550080039c, load, 0x550080039d, load, 0x550080039e, load, 0x550080039f, load, 0x55008003a0, load, 0x55008003a1, load, 0x55008003a2, load, 0x55008003a3, load, 0x55008003a4, load, 0x55008003a5, load, 0x55008003a6, load, 0x55008003a7, load, 0x55008003a8, load, 0x55008003a9, load, 0x55008003aa, load, 0x55008003ab, load, 0x55008003ac, load, 0x55008003ad, load, 0x55008003ae, load, 0x55008003af

(4007c0 is the ld1b in the sha512-sve)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Cc: Robert Henry <robhenry@microsoft.com> Cc: Aaron Lindsay <aaron@os.amperecomputing.com> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20230630180423.558337-20-alex.bennee@linaro.org>
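A reduced sketch of the mechanism described above; the helper used for the plugin-callback check is an assumed name, not necessarily the real one:

    /* In the TLB/probe path (sketch only): report TLB_MMIO whenever plugin
     * memory callbacks are active, so the access is routed down the slow
     * path where the instrumentation hooks run. */
    if (plugin_mem_cbs_enabled(cpu)) {   /* hypothetical helper name */
        flags |= TLB_MMIO;
    }
    return flags;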
2023-07-01  accel/tcg: Assert one page in tb_invalidate_phys_page_range__locked  (Mark Cave-Ayland; 1 file changed, -0/+3)
Ensure that both the start and last addresses are within the same guest page. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20230629082522.606219-3-mark.cave-ayland@ilande.co.uk> [rth: Use tcg_debug_assert, simplify the expression] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-01  accel/tcg: Fix start page passed to tb_invalidate_phys_page_range__locked  (Mark Cave-Ayland; 1 file changed, -4/+6)
Due to a copy-paste error in tb_invalidate_phys_range, the wrong start address was passed to tb_invalidate_phys_page_range__locked. The correct behaviour is to use the start of each page in turn. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Fixes: e506ad6a05 ("accel/tcg: Pass last not end to tb_invalidate_phys_range") Message-Id: <20230629082522.606219-2-mark.cave-ayland@ilande.co.uk> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-28  exec/memory: Add symbol for the min value of memory listener priority  (Isaku Yamahata; 1 file changed, -0/+1)
Add MEMORY_LISTENER_PRIORITY_MIN as a symbolic value for the minimum memory listener priority, instead of the hard-coded magic value 0. Add explicit initialization. No functional change intended. Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <29f88477fe82eb774bcfcae7f65ea21995f865f2.1687279702.git.isaku.yamahata@intel.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-06-28  exec/memory: Add symbol for memory listener priority for device backend  (Isaku Yamahata; 1 file changed, -1/+1)
Add MEMORY_LISTENER_PRIORITY_DEV_BACKEND as a symbolic value for the memory listener priority, replacing the hard-coded value 10 for the device backend. No functional change intended. Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <8314d91688030d7004e96958f12e2c83fb889245.1687279702.git.isaku.yamahata@intel.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-06-28  exec/memory: Add symbolic value for memory listener priority for accel  (Isaku Yamahata; 2 files changed, -2/+2)
Add MEMORY_LISTENER_PRIORITY_ACCEL as a symbolic value for the memory listener priority, replacing the hard-coded value 10 for accel. No functional change intended. Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <feebe423becc6e2aa375f59f6abce9a85bc15abb.1687279702.git.isaku.yamahata@intel.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
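Taken together, these three priority commits let call sites use names instead of magic numbers; a minimal, illustrative sketch (the listener shown here is made up):

    static MemoryListener example_listener = {
        .name = "example-accel",
        /* previously: .priority = 10, */
        .priority = MEMORY_LISTENER_PRIORITY_ACCEL,
    };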
2023-06-28  accel/kvm: Declare kvm_direct_msi_allowed in stubs  (Philippe Mathieu-Daudé; 1 file changed, -0/+1)
Avoid the following link failure when the next commit calls kvm_direct_msi_enabled() from arm_gicv3_its_common.c:

  Undefined symbols for architecture arm64:
    "_kvm_direct_msi_allowed", referenced from:
      _its_class_name in hw_intc_arm_gicv3_its_common.c.o
  ld: symbol(s) not found for architecture arm64

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230405160454.97436-3-philmd@linaro.org>
2023-06-28  accel: Rename HVF 'struct hvf_vcpu_state' -> AccelCPUState  (Philippe Mathieu-Daudé; 1 file changed, -9/+10)
We want all accelerators to share the same opaque pointer in CPUState. Rename the 'hvf_vcpu_state' structure as 'AccelCPUState'. Use the generic 'accel' field of CPUState instead of 'hvf'. Replace g_malloc0() by g_new0() for readability. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20230624174121.11508-17-philmd@linaro.org>
2023-06-28  accel: Remove unused hThread variable on TCG/WHPX  (Philippe Mathieu-Daudé; 2 files changed, -7/+0)
On Windows hosts, cpu->hThread is assigned but never accessed: remove it. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230624174121.11508-4-philmd@linaro.org>
2023-06-26  accel/tcg: Move TLB_WATCHPOINT to TLB_SLOW_FLAGS_MASK  (Richard Henderson; 1 file changed, -4/+14)
This frees up one bit of the primary tlb flags without impacting the TLB_NOTDIRTY logic. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26  accel/tcg: Store some tlb flags in CPUTLBEntryFull  (Richard Henderson; 1 file changed, -39/+57)
We have run out of bits we can use within the CPUTLBEntry comparators, as TLB_FLAGS_MASK cannot overlap alignment. Store slow_flags[] in CPUTLBEntryFull, and merge with the flags from the comparator. A new TLB_FORCE_SLOW bit is set within the comparator as an indication that the slow path must be used. Move TLB_BSWAP to TLB_SLOW_FLAGS_MASK. Since we are out of bits, we cannot create a new bit without moving an old one. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
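A condensed sketch of the resulting flag handling (names follow the commit description; the real code is more involved):

    /* The comparator still carries the fast-path flags. */
    int flags = tlb_addr & TLB_FLAGS_MASK;

    /* TLB_FORCE_SLOW signals that extra flags live out-of-line in
     * CPUTLBEntryFull; merge them in on the slow path. */
    if (flags & TLB_FORCE_SLOW) {
        flags |= full->slow_flags[access_type];
    }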
2023-06-26  accel/tcg: Remove check_tcg_memory_orders_compatible  (Richard Henderson; 1 file changed, -27/+8)
We now issue host memory barriers to match the guest memory order. Continue to disable MTTCG only if the guest has not been ported. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26  tcg: Add host memory barriers to cpu_ldst.h interfaces  (Richard Henderson; 3 files changed, -0/+54)
Bring the helpers into line with the rest of tcg in respecting guest memory ordering. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26  accel/tcg: remove CONFIG_PROFILER  (Fei Wu; 3 files changed, -74/+0)
TBStats will be introduced to replace CONFIG_PROFILER entirely; remove all CONFIG_PROFILER-related code first. Signed-off-by: Vanderson M. do Rosario <vandersonmr2@gmail.com> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Fei Wu <fei2.wu@intel.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230607122411.3394702-2-fei2.wu@intel.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26  accel/tcg: Replace target_ulong with vaddr in translator_*()  (Anton Johansson; 1 file changed, -5/+5)
Use vaddr for guest virtual address in translator_use_goto_tb() and translator_loop(). Signed-off-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230621135633.1649-11-anjo@rev.ng> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26  accel/tcg: Replace target_ulong with vaddr in *_mmu_lookup()  (Anton Johansson; 2 files changed, -6/+6)
Update atomic_mmu_lookup() and cpu_mmu_lookup() to take the guest virtual address as a vaddr instead of a target_ulong. Signed-off-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230621135633.1649-10-anjo@rev.ng> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>