2023-05-16  accel/tcg: Merge do_gen_mem_cb into caller  (Richard Henderson, 1 file, -22/+17)

As do_gen_mem_cb is called once, merge it into gen_empty_mem_cb.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  accel/tcg: Merge gen_mem_wrapped with plugin_gen_empty_mem_callback  (Richard Henderson, 1 file, -24/+6)

As gen_mem_wrapped is only used in plugin_gen_empty_mem_callback, we can avoid the curiosity of union mem_gen_fn by inlining it.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg: Widen tcg_gen_code pc_start argument to uint64_t  (Richard Henderson, 2 files, -2/+2)

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg: Widen helper_atomic_* addresses to uint64_t  (Richard Henderson, 3 files, -41/+57)

Always pass the target address as uint64_t.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg: Widen helper_{ld,st}_i128 addresses to uint64_t  (Richard Henderson, 4 files, -10/+30)

Always pass the target address as uint64_t.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  accel/tcg: Widen tcg-ldst.h addresses to uint64_t  (Richard Henderson, 4 files, -53/+87)

Always pass the target address as uint64_t. Adjust tcg_out_{ld,st}_helper_args to match.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg: Widen gen_insn_data to uint64_t  (Richard Henderson, 5 files, -72/+45)

We already pass uint64_t to restore_state_to_opc; this changes all of the other uses from insn_start through the encoding to decoding.

Reviewed-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg: Split out memory ops to tcg-op-ldst.c  (Richard Henderson, 3 files, -974/+1007)

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/sparc64: Use atom_and_align_for_opc  (Richard Henderson, 1 file, -9/+12)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/s390x: Use atom_and_align_for_opc  (Richard Henderson, 1 file, -4/+7)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/riscv: Use atom_and_align_for_opc  (Richard Henderson, 1 file, -5/+8)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/ppc: Use atom_and_align_for_opc  (Richard Henderson, 1 file, -1/+18)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/mips: Use atom_and_align_for_opc  (Richard Henderson, 1 file, -6/+9)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/loongarch64: Use atom_and_align_for_opc  (Richard Henderson, 1 file, -1/+5)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/arm: Use atom_and_align_for_opc  (Richard Henderson, 1 file, -17/+22)

No change to the ultimate load/store routines yet, so some atomicity conditions are not yet honored, but this plumbs the alignment change through the relevant functions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/aarch64: Use atom_and_align_for_opc  (Richard Henderson, 1 file, -18/+18)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/i386: Use atom_and_align_for_opc  (Richard Henderson, 1 file, -12/+15)

No change to the ultimate load/store routines yet, so some atomicity conditions are not yet honored, but this plumbs the alignment change through the relevant functions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg: Introduce atom_and_align_for_opc  (Richard Henderson, 1 file, -0/+95)

Examine MemOp for atomicity and alignment, adjusting alignment as required to implement atomicity on the host.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
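
[Editor's note] As a rough sketch of the adjustment described above — a minimal illustration with invented names and parameters; QEMU's actual helper is considerably more involved:

    #include <stdbool.h>

    /* Sizes and alignments are log2 values, as in MemOp. */
    typedef struct {
        unsigned atom;   /* log2 bytes that must be accessed atomically */
        unsigned align;  /* log2 bytes of alignment to enforce */
    } AtomAlign;

    static AtomAlign atom_and_align_sketch(unsigned size, unsigned align,
                                           bool host_atomic_only_if_aligned)
    {
        AtomAlign aa = { .atom = size, .align = align };

        /* If the host can only guarantee atomicity for naturally aligned
         * accesses, raise the alignment requirement to the atomicity
         * unit; the slow path then handles anything less aligned. */
        if (host_atomic_only_if_aligned && aa.align < aa.atom) {
            aa.align = aa.atom;
        }
        return aa;
    }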

2023-05-16  tcg: Support TCG_TYPE_I128 in tcg_out_{ld,st}_helper_{args,ret}  (Richard Henderson, 1 file, -33/+163)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg: Merge tcg_out_helper_load_regs into caller  (Richard Henderson, 1 file, -48/+41)

Now that tcg_out_helper_load_regs is not recursive, we can merge it into its only caller, tcg_out_helper_load_slots.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg: Introduce tcg_out_movext3  (Richard Henderson, 1 file, -30/+108)

With x86_64 as host, we do not have any temporaries with which to resolve cycles, but we do have xchg. As a side bonus, the set of graphs that can be made with three nodes, all of them conflicting, is small: two. We can solve the cycle with a single temp.

This is required for x86_64 to handle stores of i128: one address register and two data registers.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
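
[Editor's note] A minimal illustration of the cycle-breaking, with generic C assignments standing in for register moves (names invented): a three-node cycle a <- b <- c <- a needs exactly one scratch, and on x86_64, which has no free scratch here, xchg can serve the same purpose.

    #include <stdint.h>

    static void move3_cycle(uint64_t *a, uint64_t *b, uint64_t *c)
    {
        uint64_t tmp = *a;  /* save one endpoint: the single temp */
        *a = *b;
        *b = *c;
        *c = tmp;
    }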

2023-05-16  tcg: Add INDEX_op_qemu_{ld,st}_i128  (Richard Henderson, 15 files, -11/+108)

Add opcodes for backend support for 128-bit memory operations.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg: Introduce tcg_target_has_memory_bswap  (Richard Henderson, 22 files, -26/+63)

Replace the unparameterized TCG_TARGET_HAS_MEMORY_BSWAP macro with a function taking a memop argument.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
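
[Editor's note] The shape of the change, sketched with stand-in names; the policy shown is hypothetical rather than any particular backend's:

    #include <stdbool.h>

    /* Stand-in for QEMU's MemOp size encoding. */
    typedef enum { MO_8, MO_16, MO_32, MO_64, MO_128 } MemOpSize;

    /* Before: an unconditional macro.
     *   #define TCG_TARGET_HAS_MEMORY_BSWAP  1
     * After: a predicate that can decline per operation, e.g. a backend
     * that byte-swaps 1-8 byte accesses but not 16-byte ones. */
    static bool tcg_target_has_memory_bswap_sketch(MemOpSize size)
    {
        return size <= MO_64;
    }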

2023-05-16  tcg/riscv: Support softmmu unaligned accesses  (Richard Henderson, 1 file, -20/+28)

The system is required to emulate unaligned accesses, even if the hardware does not support them. The resulting trap may or may not be more efficient than the qemu slow path.

There are Linux kernel patches in flight to allow userspace to query hardware support; we can re-evaluate whether to enable this by default after that. In the meantime, softmmu now matches user-only, where we already assumed that unaligned accesses are supported.

Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/loongarch64: Support softmmu unaligned accesses  (Richard Henderson, 1 file, -7/+12)

Test the final byte of an unaligned access. Use BSTRINS.D to clear the range of bits, rather than AND.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
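
[Editor's note] A sketch of the address arithmetic, under assumed parameter names (this is not the emitted LoongArch code itself): comparing against the final byte lets an unaligned access that stays within the page pass, and clearing the low bits of that address is a bit-range insert of zeros, which BSTRINS.D does in one instruction where a mask-and-AND pair was needed before.

    #include <stdint.h>

    static uint64_t tlb_compare_value(uint64_t addr, unsigned size_bytes,
                                      unsigned low_bits)
    {
        uint64_t last = addr + size_bytes - 1;           /* final byte touched */
        return last & ~((UINT64_C(1) << low_bits) - 1);  /* zero bits [low_bits-1:0] */
    }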

2023-05-16  tcg/loongarch64: Check the host supports unaligned accesses  (Richard Henderson, 1 file, -0/+9)

This should be true of all loongarch64 hosts running Linux.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  accel/tcg: Remove helper_unaligned_{ld,st}  (Richard Henderson, 2 files, -16/+0)

These functions are now unused.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/sparc64: Use standard slow path for softmmu  (Richard Henderson, 4 files, -429/+179)

Drop the target-specific trampolines for the standard slow path. This lets us use tcg_out_helper_{ld,st}_args, and handles the new atomicity bits within MemOp.

At the same time, use the full load/store helpers for user-only mode. Drop inline unaligned access support for user-only mode, as it does not handle atomicity.

Use TCG_REG_T[1-3] in the tlb lookup, instead of TCG_REG_O[0-2]. This allows the constraints to be simplified.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/sparc64: Split out tcg_out_movi_s32  (Richard Henderson, 1 file, -2/+8)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/sparc64: Rename tcg_out_movi_imm32 to tcg_out_movi_u32  (Richard Henderson, 1 file, -6/+6)

Emphasize that the constant is unsigned.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/sparc64: Remove tcg_out_movi_s13 case from tcg_out_movi_imm32  (Richard Henderson, 1 file, -15/+10)

Shuffle the order in tcg_out_movi_int to check s13 first, and drop this check from tcg_out_movi_imm32. This might make the sequence for in_prologue larger, but that is not worth worrying about.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
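
[Editor's note] For reference, a sketch of the simm13 range test that is now checked first in the general routine (the helper name is invented; SPARC's simm13 is a signed 13-bit immediate):

    #include <stdint.h>
    #include <stdbool.h>

    /* A value in -4096..4095 fits a single instruction with a signed
     * 13-bit immediate; checking this first in tcg_out_movi_int lets
     * the 32-bit path drop its own copy of the test. */
    static bool fits_simm13(int64_t val)
    {
        return val >= -4096 && val <= 4095;
    }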

2023-05-16  tcg/sparc64: Rename tcg_out_movi_imm13 to tcg_out_movi_s13  (Richard Henderson, 1 file, -10/+11)

Emphasize that the constant is signed.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/sparc64: Allocate %g2 as a third temporary  (Richard Henderson, 1 file, -8/+7)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/s390x: Use full load/store helpers in user-only mode  (Richard Henderson, 1 file, -29/+0)

Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/mips: Use full load/store helpers in user-only mode  (Richard Henderson, 1 file, -55/+2)

Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/arm: Use full load/store helpers in user-only mode  (Richard Henderson, 1 file, -45/+0)

Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/arm: Adjust constraints on qemu_ld/st  (Richard Henderson, 3 files, -26/+18)

Always reserve r3 for tlb softmmu lookup. Fix a bug in user-only ALL_QLDST_REGS, in that r14 is clobbered by the BLNE that leads to the misaligned trap. Remove r0+r1 from user-only ALL_QLDST_REGS; I believe these had been reserved for bswap, which we no longer perform during qemu_st.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/riscv: Use full load/store helpers in user-only mode  (Richard Henderson, 1 file, -29/+0)

Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/loongarch64: Use full load/store helpers in user-only mode  (Richard Henderson, 1 file, -30/+0)

Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/ppc: Use full load/store helpers in user-only mode  (Richard Henderson, 1 file, -44/+0)

Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/aarch64: Use full load/store helpers in user-only mode  (Richard Henderson, 1 file, -35/+0)

Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/i386: Use full load/store helpers in user-only mode  (Richard Henderson, 1 file, -48/+4)

Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/aarch64: Detect have_lse, have_lse2 for darwin  (Richard Henderson, 1 file, -0/+28)

These features are present for Apple M1.

Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
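
[Editor's note] A plausible sketch of how such detection looks on macOS; the hw.optional.arm.FEAT_* sysctl keys are Apple's documented names, but treating this as the patch's exact code is an assumption.

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/sysctl.h>

    /* Query one boolean hw.optional flag; absent or failed means false. */
    static bool sysctl_flag(const char *name)
    {
        int val = 0;
        size_t len = sizeof(val);
        return sysctlbyname(name, &val, &len, NULL, 0) == 0 && val != 0;
    }

    /* e.g.:
     *   have_lse  = sysctl_flag("hw.optional.arm.FEAT_LSE");
     *   have_lse2 = sysctl_flag("hw.optional.arm.FEAT_LSE2");
     */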

2023-05-16  tcg/aarch64: Detect have_lse, have_lse2 for linux  (Richard Henderson, 2 files, -0/+15)

Notice when the host has additional atomic instructions. The new variables will also be used in generated code.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
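
[Editor's note] A sketch of the usual Linux detection; HWCAP_ATOMICS (FEAT_LSE) and HWCAP_USCAT (unaligned single-copy atomicity, implied by FEAT_LSE2) are real aarch64 hwcap bits, though this may not match the patch line for line.

    #include <stdbool.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>

    bool have_lse, have_lse2;

    static void detect_lse(void)
    {
        unsigned long hwcap = getauxval(AT_HWCAP);
        have_lse  = hwcap & HWCAP_ATOMICS;
        have_lse2 = hwcap & HWCAP_USCAT;
    }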

2023-05-16  tcg/i386: Add have_atomic16  (Richard Henderson, 3 files, -0/+46)

Notice when Intel or AMD have guaranteed that vmovdqa is atomic. The new variable will also be used in generated code.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  meson: Detect atomic128 support with optimization  (Richard Henderson, 2 files, -23/+60)

There is an edge case prior to gcc 13 in which optimization is required for the compiler to generate 16-byte atomic sequences. Detect this.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
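
[Editor's note] A sketch of the kind of C probe involved (the actual meson test is not reproduced here): on affected gcc versions, whether this links into inline 16-byte atomic sequences can depend on the optimization level, hence the need to run the probe with optimization enabled as well.

    #include <stdatomic.h>

    int main(void)
    {
        _Atomic __int128 x = 0;
        __int128 y = atomic_load(&x);
        atomic_store(&x, y + 1);
        return atomic_compare_exchange_strong(&x, &y, y) ? 0 : 1;
    }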

2023-05-16  tcg: Add 128-bit guest memory primitives  (Richard Henderson, 6 files, -172/+673)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  tcg/tci: Use helper_{ld,st}*_mmu for user-only  (Richard Henderson, 1 file, -89/+0)

We can now fold these two pieces of code.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16  accel/tcg: Implement helper_{ld,st}*_mmu for user-only  (Richard Henderson, 3 files, -106/+257)

TCG backends may need to defer to a helper to implement the atomicity required by a given operation. Mirror the interface used in system mode.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
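
[Editor's note] For orientation, a hedged sketch of the mirrored interface: helper_ldub_mmu and helper_stb_mmu are real names from tcg-ldst.h, but the typedefs below are stand-ins so the prototypes are self-contained, and the parameter types are an approximation of the real header.

    #include <stdint.h>

    typedef struct CPUArchState CPUArchState;  /* opaque here */
    typedef uint32_t MemOpIdx;                 /* MemOp plus mmu index */
    typedef uint64_t tcg_target_ulong;         /* host register width */

    tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
                                     MemOpIdx oi, uintptr_t retaddr);
    void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
                        MemOpIdx oi, uintptr_t retaddr);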

2023-05-16  tcg: Unify helper_{be,le}_{ld,st}*  (Richard Henderson, 14 files, -511/+146)

With the current structure of cputlb.c, there is no difference between the little-endian and big-endian entry points, aside from the assert. Unify the pairs of functions.

Hoist the qemu_{ld,st}_helpers arrays to tcg.c.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
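
[Editor's note] An illustrative pattern for the unification, with generic names rather than QEMU's: once the byte-swap decision is read from the memory operation itself, one helper replaces each _be/_le pair.

    #include <stdint.h>
    #include <stdbool.h>

    /* One 8-byte load entry point; endianness comes from the operation
     * flags, not from which of two entry points was called. */
    static uint64_t load8_memop(const uint64_t *p, bool bswap)
    {
        uint64_t raw = *p;
        return bswap ? __builtin_bswap64(raw) : raw;
    }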