path: root/tcg/s390x
Age | Commit message | Author | Files | Lines
2023-09-15 | tcg: pass vece to tcg_target_const_match() | Jiajie Chen | 1 | -1/+1
Pass vece to tcg_target_const_match() to allow correct interpretation of const args of vector ops. Signed-off-by: Jiajie Chen <c@jia.je> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20230908022302.180442-4-c@jia.je> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 | tcg/s390x: Implement negsetcond_* | Richard Henderson | 2 | -28/+54
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 | tcg: Introduce negsetcond opcodes | Richard Henderson | 1 | -0/+2
Introduce a new opcode for negative setcond. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 | tcg: Unify TCG_TARGET_HAS_extr[lh]_i64_i32 | Richard Henderson | 1 | -2/+1
Replace the separate defines with TCG_TARGET_HAS_extr_i64_i32, so that the two parts of backend-specific type changing cannot be out of sync. Reported-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: <20230822175127.1173698-1-richard.henderson@linaro.org>
2023-07-23 | tcg/{i386, s390x}: Add earlyclobber to the op_add2's first output | Ilya Leoshkevich | 2 | -5/+7
i386 and s390x implementations of op_add2 require an earlyclobber, which is currently missing. This breaks VCKSM in s390x guests. E.g., on x86_64 the following op:

    add2_i32 tmp2,tmp3,tmp2,tmp3,tmp3,tmp2 dead: 0 2 3 4 5 pref=none,0xffff

is translated to:

    addl %ebx, %r12d
    adcl %r12d, %ebx

Introduce a new C_N1_O1_I4 constraint, and make sure that earlyclobber of aliased outputs is honored. Cc: qemu-stable@nongnu.org Fixes: 82790a870992 ("tcg: Add markup for output requires new register") Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230719221310.1968845-7-iii@linux.ibm.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
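A minimal C sketch of the hazard described above (illustrative only, not QEMU code; the function and parameter names are invented): when the first (low) output of add2 is aliased onto a register that still feeds the high half, writing the low result first corrupts that input, which is exactly what the addl/adcl pair shows and what the earlyclobber constraint rules out.

    #include <stdint.h>

    /* Sketch of add2 computed half by half.  If *lo_out and *bh share a
     * register, writing the low half first destroys the input still needed
     * for the high half; an earlyclobber on the first output forbids that. */
    static void add2_i32_sketch(uint32_t *lo_out, uint32_t *hi_out,
                                uint32_t al, uint32_t ah,
                                uint32_t bl, const uint32_t *bh)
    {
        uint64_t lo = (uint64_t)al + bl;
        *lo_out = (uint32_t)lo;                     /* like the addl above */
        *hi_out = ah + *bh + (uint32_t)(lo >> 32);  /* like the adcl: *bh may already be clobbered */
    }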
2023-06-05 | tcg: Split out tcg-target-reg-bits.h | Richard Henderson | 2 | -5/+17
Often, the only thing we need to know about the TCG host is the register size. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05 | tcg: Add tlb_fast_offset to TCGContext | Richard Henderson | 1 | -3/+4
Disconnect the layout of ArchCPU from TCG compilation. Pass the relative offset of 'env' and 'neg.tlb.f' as a parameter. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05 | tcg: Widen CPUTLBEntry comparators to 64-bits | Richard Henderson | 1 | -0/+1
This makes CPUTLBEntry agnostic to the address size of the guest. When 32-bit addresses are in effect, we can simply read the low 32 bits of the 64-bit field. Similarly when we need to update the field for setting TLB_NOTDIRTY. For TCG backends that could in theory be big-endian, but in practice are not (arm, loongarch, riscv), use QEMU_BUILD_BUG_ON to document and ensure this is not accidentally missed. For s390x, which is always big-endian, use HOST_BIG_ENDIAN anyway, to document the reason for the adjustment. For sparc64 and ppc64, always perform a 64-bit load, and rely on the following 32-bit comparison to ignore the high bits. Rearrange mips and ppc if ladders for clarity. Reviewed-by: Anton Johansson <anjo@rev.ng> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
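A minimal sketch of the offset adjustment mentioned above (illustrative C, not the QEMU source): on a little-endian host the low 32 bits of a 64-bit comparator sit at byte offset 0, while on a big-endian host such as s390x they sit 4 bytes in.

    #include <stdint.h>
    #include <string.h>

    /* Read the low 32 bits of a 64-bit field, adjusting the byte offset
     * for host endianness (HOST_BIG_ENDIAN in QEMU terms). */
    static uint32_t read_low32(const uint64_t *field, int host_big_endian)
    {
        uint32_t lo;
        memcpy(&lo, (const char *)field + (host_big_endian ? 4 : 0), sizeof(lo));
        return lo;
    }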
2023-06-05 | tcg/s390x: Remove TARGET_LONG_BITS, TCG_TYPE_TL | Richard Henderson | 1 | -4/+5
All uses replaced with TCGContext.addr_type. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-30 | tcg: Remove TCG_TARGET_TLB_DISPLACEMENT_BITS | Richard Henderson | 1 | -1/+0
The last use was removed by e77c89fb086a. Fixes: e77c89fb086a ("cputlb: Remove static tlb sizing") Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-30 | tcg/s390x: Support 128-bit load/store | Richard Henderson | 3 | -4/+107
Use LPQ/STPQ when 16-byte atomicity is required. Note that these instructions require 16-byte alignment. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 | tcg: Add page_bits and page_mask to TCGContext | Richard Henderson | 1 | -2/+2
Disconnect guest page size from TCG compilation. While this could be done via exec/target_page.h, we want to cache the value across multiple memory access operations, so we might as well initialize this early. The changes within tcg/ are entirely mechanical:

    sed -i s/TARGET_PAGE_BITS/s->page_bits/g
    sed -i s/TARGET_PAGE_MASK/s->page_mask/g

Reviewed-by: Anton Johansson <anjo@rev.ng> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 | tcg: Split INDEX_op_qemu_{ld,st}* for guest address size | Richard Henderson | 1 | -8/+16
For 32-bit hosts, we cannot simply rely on TCGContext.addr_bits, as we need one or two host registers to represent the guest address. Create the new opcodes and update all users. Since we have not yet eliminated TARGET_LONG_BITS, only one of the two opcodes will ever be used, so we can get away with treating them the same in the backends. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 | tcg/s390x: Use atom_and_align_for_opc | Richard Henderson | 1 | -4/+7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 | tcg: Add INDEX_op_qemu_{ld,st}_i128 | Richard Henderson | 1 | -0/+2
Add opcodes for backend support for 128-bit memory operations. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 | tcg: Introduce tcg_target_has_memory_bswap | Richard Henderson | 2 | -2/+5
Replace the unparameterized TCG_TARGET_HAS_MEMORY_BSWAP macro with a function with a memop argument. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 | tcg/s390x: Use full load/store helpers in user-only mode | Richard Henderson | 1 | -29/+0
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 | tcg: Unify helper_{be,le}_{ld,st}* | Richard Henderson | 1 | -29/+2
With the current structure of cputlb.c, there is no difference between the little-endian and big-endian entry points, aside from the assert. Unify the pairs of functions. Hoist the qemu_{ld,st}_helpers arrays to tcg.c. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11 | tcg/s390x: Simplify constraints on qemu_ld/st | Richard Henderson | 3 | -27/+12
Adjust the softmmu tlb to use R0+R1, not any of the normally available registers. Since we handle overlap between inputs and helper arguments, we can allow any allocatable reg. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11 | tcg/s390x: Use ALGFR in constructing softmmu host address | Richard Henderson | 1 | -3/+5
Rather than zero-extend the guest address into a register, use an add instruction which zero-extends the second input. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11 | tcg/s390x: Convert tcg_out_qemu_{ld,st}_slow_path | Richard Henderson | 1 | -25/+10
Use tcg_out_ld_helper_args, tcg_out_ld_helper_ret, and tcg_out_st_helper_args. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11 | tcg/s390x: Introduce prepare_host_addr | Richard Henderson | 1 | -150/+113
Merge tcg_out_tlb_load, add_qemu_ldst_label, tcg_out_test_alignment, tcg_prepare_user_ldst, and some code that lived in both tcg_out_qemu_ld and tcg_out_qemu_st into one function that returns HostAddress and TCGLabelQemuLdst structures. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-05 | tcg/s390x: Introduce HostAddress | Richard Henderson | 1 | -49/+60
Collect the 3 potential parts of the host address into a struct. Reorg tcg_out_qemu_{ld,st}_direct to use it. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
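Roughly, the three parts are a base register, an optional index register, and a displacement, matching the s390x addressing modes; the struct below is a hedged sketch of that grouping, not the actual HostAddress definition.

    /* Illustrative sketch of a HostAddress-style grouping (field names
     * and types are assumptions, not the backend's definition). */
    typedef struct HostAddressSketch {
        int     base;    /* base register, or a sentinel for "none" */
        int     index;   /* optional index register */
        int64_t disp;    /* signed displacement added at access time */
    } HostAddressSketch;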
2023-05-05 | tcg/s390x: Pass TCGType to tcg_out_qemu_{ld,st} | Richard Henderson | 1 | -8/+14
We need to set this in TCGLabelQemuLdst, so plumb this all the way through from tcg_out_op. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 | tcg: Introduce tcg_out_xchg | Richard Henderson | 1 | -0/+5
We will want a backend interface for register swapping. This is only properly defined for x86; all others get a stub version that always indicates failure. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 | tcg: Introduce tcg_out_movext | Richard Henderson | 1 | -16/+3
This is common code in most qemu_{ld,st} slow paths, extending the input value for the store helper data argument or extending the return value from the load helper. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 | tcg: Split out tcg_out_extrl_i64_i32 | Richard Henderson | 1 | -0/+6
We will need a backend interface for type truncation. For those backends that did not enable TCG_TARGET_HAS_extrl_i64_i32, use tcg_out_mov. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 | tcg: Split out tcg_out_extu_i32_i64 | Richard Henderson | 1 | -4/+6
We will need a backend interface for type extension with zero. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 | tcg: Split out tcg_out_exts_i32_i64 | Richard Henderson | 1 | -3/+6
We will need a backend interface for type extension with sign. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 | tcg: Split out tcg_out_ext32u | Richard Henderson | 1 | -10/+10
We will need a backend interface for performing 32-bit zero-extend. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 | tcg: Split out tcg_out_ext32s | Richard Henderson | 1 | -5/+5
We will need a backend interface for performing 32-bit sign-extend. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 | tcg: Split out tcg_out_ext16u | Richard Henderson | 1 | -11/+6
We will need a backend interface for performing 16-bit zero-extend. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 | tcg: Split out tcg_out_ext16s | Richard Henderson | 1 | -8/+4
We will need a backend interface for performing 16-bit sign-extend. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 | tcg: Split out tcg_out_ext8u | Richard Henderson | 1 | -9/+5
We will need a backend interface for performing 8-bit zero-extend. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 | tcg: Split out tcg_out_ext8s | Richard Henderson | 1 | -7/+3
We will need a backend interface for performing 8-bit sign-extend. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 | tcg: Replace tcg_abort with g_assert_not_reached | Richard Henderson | 1 | -4/+4
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-04 | tcg: Add TCG_TARGET_CALL_{RET,ARG}_I128 | Richard Henderson | 1 | -0/+2
Fill in the parameters for the host ABI for Int128 for those backends which require no extra modification. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-04 | tcg: Introduce tcg_target_call_oarg_reg | Richard Henderson | 1 | -3/+6
Replace the flat array tcg_target_call_oarg_regs[] with a function call including the TCGCallReturnKind. Extend the set of registers for ARM to r0-r3 to match the ABI: https://github.com/ARM-software/abi-aa/blob/main/aapcs32/aapcs32.rst#result-return Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-04 | tcg: Introduce tcg_out_addi_ptr | Richard Henderson | 1 | -0/+7
Implement the function for arm, i386, and s390x, which will use it. Add stubs for all other backends. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-01-17 | tcg: Remove TCG_TARGET_HAS_direct_jump | Richard Henderson | 2 | -1/+3
We now have the option to generate direct or indirect goto_tb depending on the dynamic displacement, thus the define is no longer necessary or completely accurate. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-01-17 | tcg: Move tb_target_set_jmp_target declaration to tcg.h | Richard Henderson | 1 | -4/+0
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-01-17 | tcg: Change tb_target_set_jmp_target arguments | Richard Henderson | 2 | -8/+12
Replace 'tc_ptr' and 'addr' with 'tb' and 'n'. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-01-17 | tcg: Split out tcg_out_goto_tb | Richard Henderson | 1 | -15/+16
The INDEX_op_goto_tb opcode needs no register allocation. Split out a dedicated helper function for it. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-01-17 | tcg: Introduce set_jmp_insn_offset | Richard Henderson | 1 | -1/+1
Similar to the existing set_jmp_reset_offset. Move any assert for TCG_TARGET_HAS_direct_jump into the new function (which now cannot be build-time). Will be unused if TCG_TARGET_HAS_direct_jump is constant 0, but we can't test for constant in the preprocessor, so just mark it G_GNUC_UNUSED. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-01-17 | tcg: Split out tcg_out_exit_tb | Richard Henderson | 1 | -11/+12
The INDEX_op_exit_tb opcode needs no register allocation. Split out a dedicated helper function for it. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-01-06 | tcg/s390x: Avoid the constant pool in tcg_out_movi | Richard Henderson | 1 | -6/+17
Load constants in no more than two insns, which turns out to be faster than using the constant pool. Suggested-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
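As a rough illustration of the two-insn limit (the helper below is hypothetical and ignores the sign-extending single-insn forms): any 64-bit value can be materialized by emitting one instruction for the low 32 bits and, when the high half is non-zero, a second that inserts the high 32 bits, e.g. LLILF followed by IIHF.

    #include <stdint.h>

    /* Hypothetical sketch: split a constant into the 32-bit halves a
     * one- or two-insn sequence would encode, returning the insn count. */
    static int movi_plan(uint64_t val, uint32_t *lo, uint32_t *hi)
    {
        *lo = (uint32_t)val;
        *hi = (uint32_t)(val >> 32);
        return *hi ? 2 : 1;   /* high half zero: a single "load low" suffices */
    }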
2023-01-06 | tcg/s390x: Cleanup tcg_out_movi | Richard Henderson | 1 | -36/+16
Merge maybe_out_small_movi, as it no longer has additional users. Use is_const_p{16,32}. Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-01-06 | tcg/s390x: Tighten constraints for 64-bit compare | Richard Henderson | 2 | -13/+17
Give 64-bit comparison second operand a signed 33-bit immediate. This is the smallest superset of uint32_t and int32_t, as used by CLGFI and CGFI respectively. The rest of the 33-bit space can be loaded into TCG_TMP0. Drop use of the constant pool. Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
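To make the 33-bit claim concrete, here is a hedged sketch of the range check (the helper name is illustrative): a signed 33-bit immediate spans [-2^32, 2^32 - 1], which contains every int32_t (usable by CGFI) and every uint32_t (usable by CLGFI).

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative check: true iff v fits a signed 33-bit immediate. */
    static bool is_simm33(int64_t v)
    {
        return v >= -(INT64_C(1) << 32) && v <= (INT64_C(1) << 32) - 1;
    }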
2023-01-06 | tcg/s390x: Implement ctpop operation | Richard Henderson | 2 | -2/+38
There is an older form that produces per-byte results, and a newer form that produces per-register results. Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
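A sketch of how the two forms relate (illustrative C, not the backend code): given the per-byte counts produced by the older instruction, the per-register result is simply their sum, which a multiply-and-shift folds into one step.

    #include <stdint.h>

    /* Sum eight per-byte population counts (each <= 8, so bytes cannot
     * carry into each other) into one per-register count. */
    static unsigned ctpop64_from_byte_counts(uint64_t per_byte)
    {
        return (unsigned)((per_byte * UINT64_C(0x0101010101010101)) >> 56);
    }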
2023-01-06 | tcg/s390x: Use tgen_movcond_int in tgen_clz | Richard Henderson | 2 | -9/+12
Reuse code from movcond to conditionally copy a2 to dest, based on the condition codes produced by FLOGR. Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>