path: root/tcg/mips
Age | Commit message | Author | Files | Lines
2023-08-24 | tcg: Introduce negsetcond opcodes | Richard Henderson | 1 | -0/+2
    Introduce a new opcode for negative setcond.
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
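    A minimal sketch of the intended semantics (illustration only, not the
    backend code): where setcond produces 0 or 1, negsetcond produces 0 or -1,
    i.e. all bits set when the condition holds. Shown here with an
    unsigned-less-than condition.

        #include <stdint.h>

        uint64_t do_setcond(uint64_t a, uint64_t b)    { return a < b ? 1 : 0; }
        uint64_t do_negsetcond(uint64_t a, uint64_t b) { return -(uint64_t)(a < b); }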
2023-08-24 | tcg: Unify TCG_TARGET_HAS_extr[lh]_i64_i32 | Richard Henderson | 1 | -2/+1
    Replace the separate defines with TCG_TARGET_HAS_extr_i64_i32, so that the
    two parts of backend-specific type changing cannot be out of sync.
    Reported-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: <20230822175127.1173698-1-richard.henderson@linaro.org>

2023-06-05 | tcg: Split out tcg-target-reg-bits.h | Richard Henderson | 2 | -8/+18
    Often, the only thing we need to know about the TCG host is the register size.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-06-05 | tcg: Add tlb_fast_offset to TCGContext | Richard Henderson | 1 | -3/+4
    Disconnect the layout of ArchCPU from TCG compilation. Pass the relative
    offset of 'env' and 'neg.tlb.f' as a parameter.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-06-05 | tcg: Widen CPUTLBEntry comparators to 64-bits | Richard Henderson | 1 | -5/+8
    This makes CPUTLBEntry agnostic to the address size of the guest. When
    32-bit addresses are in effect, we can simply read the low 32 bits of the
    64-bit field. Similarly when we need to update the field for setting
    TLB_NOTDIRTY.

    For TCG backends that could in theory be big-endian, but in practice are
    not (arm, loongarch, riscv), use QEMU_BUILD_BUG_ON to document and ensure
    this is not accidentally missed. For s390x, which is always big-endian,
    use HOST_BIG_ENDIAN anyway, to document the reason for the adjustment.
    For sparc64 and ppc64, always perform a 64-bit load, and rely on the
    following 32-bit comparison to ignore the high bits.

    Rearrange mips and ppc if ladders for clarity.
    Reviewed-by: Anton Johansson <anjo@rev.ng>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
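    A hedged sketch of the low-half read (variable names here are assumptions,
    not the committed code): with a 64-bit comparator field, the offset of the
    32-bit comparator for a 32-bit guest address depends only on host
    endianness.

        /* Offset of the comparator to load for a 32-bit guest address. */
        int cmp_off = offsetof(CPUTLBEntry, addr_read);
        if (guest_addr_bits == 32) {
            cmp_off += HOST_BIG_ENDIAN ? 4 : 0;   /* low half of the uint64_t */
        }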
2023-05-30 | tcg: Remove TCG_TARGET_TLB_DISPLACEMENT_BITS | Richard Henderson | 1 | -1/+0
    The last use was removed by e77c89fb086a.
    Fixes: e77c89fb086a ("cputlb: Remove static tlb sizing")
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-25 | tcg/mips: Replace MIPS_BE with HOST_BIG_ENDIAN | Richard Henderson | 1 | -26/+20
    Since e03b56863d2b, which replaced HOST_WORDS_BIGENDIAN with
    HOST_BIG_ENDIAN, there is no need to define a second symbol whose value
    is 0 or 1.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-25 | tcg/mips: Use qemu_build_not_reached for LO/HI_OFF | Richard Henderson | 1 | -5/+3
    The new(ish) macro produces a compile-time error instead of a link-time error.
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-25 | tcg/mips: Try three insns with shift and add in tcg_out_movi | Richard Henderson | 1 | -0/+44
    These sequences are inexpensive to test. Maxing out at three insns results
    in the same space as a load plus the constant pool entry.
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
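    A hedged sketch of one pattern such a search can test (an assumed helper,
    not the committed code): a base constant that loads in at most two insns,
    followed by a single shift, for three insns total.

        #include <stdbool.h>
        #include <stdint.h>

        /* Can VAL be built as a sign-extended 32-bit base shifted left? */
        static bool movi_try_shift(int64_t val, int *shift, int32_t *base)
        {
            for (int s = 1; s < 64; s++) {
                int64_t b = val >> s;
                if (b == (int32_t)b && (int64_t)((uint64_t)b << s) == val) {
                    *shift = s;
                    *base = (int32_t)b;
                    return true;
                }
            }
            return false;
        }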
2023-05-25 | tcg/mips: Try tb-relative addresses in tcg_out_movi | Richard Henderson | 1 | -0/+13
    These addresses are often loaded by the qemu_ld/st slow path, for loading
    the retaddr value.
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-25 | tcg/mips: Aggressively use the constant pool for n64 calls | Richard Henderson | 1 | -3/+13
    Repeated calls to a single helper are common -- especially the ones for
    softmmu memory access. Prefer the constant pool to longer sequences to
    increase sharing.
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-25 | tcg/mips: Use the constant pool for 64-bit constants | Richard Henderson | 2 | -17/+49
    During normal processing, the constant pool is accessible via TCG_REG_TB.
    During the prologue, it is accessible via TCG_REG_T9.
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-25 | tcg/mips: Split out tcg_out_movi_two | Richard Henderson | 1 | -11/+24
    Emit all 32-bit signed constants, which can be loaded in two insns.
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
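    A minimal sketch of the two-insn split (backend helper names assumed for
    illustration): LUI materializes bits 31..16, sign-extending on mips64,
    and ORI fills in bits 15..0.

        tcg_out_opc_imm(s, OPC_LUI, ret, TCG_REG_ZERO, arg >> 16);
        tcg_out_opc_imm(s, OPC_ORI, ret, ret, arg & 0xffff);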
2023-05-25 | tcg/mips: Split out tcg_out_movi_one | Richard Henderson | 1 | -6/+20
    Emit all constants that can be loaded in exactly one insn.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
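    Roughly, the single-insn cases on MIPS are ADDIU for sign-extended 16-bit
    values, ORI for zero-extended 16-bit values, and LUI for 32-bit values
    whose low 16 bits are zero. A sketch of that classification (an
    assumption, not the exact committed test):

        #include <stdbool.h>
        #include <stdint.h>

        static bool movi_one_insn_ok(int64_t arg)
        {
            return arg == (int16_t)arg                       /* addiu */
                || arg == (uint16_t)arg                      /* ori   */
                || (arg == (int32_t)arg && !(arg & 0xffff)); /* lui   */
        }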
2023-05-25 | tcg/mips: Create and use TCG_REG_TB | Richard Henderson | 1 | -10/+59
    This vastly reduces the size of code generated for 64-bit addresses.
    The code for exit_tb, for instance, where we load a (tagged) pointer to
    the current TB, goes from

        0x400aa9725c:  li      v0,64
        0x400aa97260:  dsll    v0,v0,0x10
        0x400aa97264:  ori     v0,v0,0xaa9
        0x400aa97268:  dsll    v0,v0,0x10
        0x400aa9726c:  j       0x400aa9703c
        0x400aa97270:  ori     v0,v0,0x7083

    to

        0x400aa97240:  j       0x400aa97040
        0x400aa97244:  daddiu  v0,s6,-189

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-25 | tcg/mips: Unify TCG_GUEST_BASE_REG tests | Richard Henderson | 1 | -1/+1
    In tcg_out_qemu_ld/st, we already check for guest_base matching int16_t.
    Mirror that when setting up TCG_GUEST_BASE_REG in the prologue.
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
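    A hedged sketch of the mirrored prologue test (the exact form and helper
    names are assumptions): only reserve and load TCG_GUEST_BASE_REG when
    guest_base does not fit in a 16-bit immediate.

        if (guest_base != (int16_t)guest_base) {
            tcg_out_movi(s, TCG_TYPE_PTR, TCG_GUEST_BASE_REG, guest_base);
            tcg_regset_set_reg(s->reserved_regs, TCG_GUEST_BASE_REG);
        }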
2023-05-25 | tcg/mips: Move TCG_GUEST_BASE_REG to S7 | Richard Henderson | 1 | -2/+2
    No functional change; just moving the saved reserved regs to the end.
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-25 | tcg/mips: Move TCG_AREG0 to S8 | Richard Henderson | 2 | -3/+3
    No functional change; just moving the saved reserved regs to the end.
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16 | tcg: Add page_bits and page_mask to TCGContext | Richard Henderson | 1 | -3/+3
    Disconnect guest page size from TCG compilation. While this could be done
    via exec/target_page.h, we want to cache the value across multiple memory
    access operations, so we might as well initialize this early.

    The changes within tcg/ are entirely mechanical:

        sed -i s/TARGET_PAGE_BITS/s->page_bits/g
        sed -i s/TARGET_PAGE_MASK/s->page_mask/g

    Reviewed-by: Anton Johansson <anjo@rev.ng>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16 | tcg/mips: Remove TARGET_LONG_BITS, TCG_TYPE_TL | Richard Henderson | 1 | -19/+23
    All uses replaced with TCGContext.addr_type.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16 | tcg: Split INDEX_op_qemu_{ld,st}* for guest address size | Richard Henderson | 1 | -24/+42
    For 32-bit hosts, we cannot simply rely on TCGContext.addr_bits, as we
    need one or two host registers to represent the guest address.

    Create the new opcodes and update all users. Since we have not yet
    eliminated TARGET_LONG_BITS, only one of the two opcodes will ever be
    used, so we can get away with treating them the same in the backends.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16 | tcg/mips: Use atom_and_align_for_opc | Richard Henderson | 1 | -6/+9
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16 | tcg: Add INDEX_op_qemu_{ld,st}_i128 | Richard Henderson | 1 | -0/+2
    Add opcodes for backend support for 128-bit memory operations.
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16 | tcg: Introduce tcg_target_has_memory_bswap | Richard Henderson | 2 | -2/+5
    Replace the unparameterized TCG_TARGET_HAS_MEMORY_BSWAP macro with a
    function with a memop argument.
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
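    For a backend with no byte-swapping memory operations, the new hook can
    simply reject every memop and leave the swap to generic code. A sketch
    (treat the exact signature as an assumption):

        bool tcg_target_has_memory_bswap(MemOp memop)
        {
            return false;
        }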
2023-05-16 | tcg/mips: Use full load/store helpers in user-only mode | Richard Henderson | 1 | -55/+2
    Instead of using helper_unaligned_{ld,st}, use the full load/store helpers.
    This will allow the fast path to increase alignment to implement atomicity
    while not immediately raising an alignment exception.
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-16 | tcg: Unify helper_{be,le}_{ld,st}* | Richard Henderson | 1 | -31/+0
    With the current structure of cputlb.c, there is no difference between
    the little-endian and big-endian entry points, aside from the assert.
    Unify the pairs of functions.

    Hoist the qemu_{ld,st}_helpers arrays to tcg.c.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11 | tcg/mips: Simplify constraints on qemu_ld/st | Richard Henderson | 3 | -32/+13
    The softmmu tlb uses TCG_REG_TMP[0-3], not any of the normally available
    registers. Now that we handle overlap between inputs and helper arguments,
    and have eliminated use of A0, we can allow any allocatable reg.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-11 | tcg/mips: Reorg tlb load within prepare_host_addr | Richard Henderson | 1 | -20/+18
    Compare the address vs the tlb entry with sign-extended values. This
    simplifies the page+alignment mask constant, and the generation of the
    last byte address for the misaligned test.

    Move the tlb addend load up, and the zero-extension down. This frees up a
    register, which allows us to use TMP3 as the returned base address
    register instead of A0, which we were using as a 5th temporary.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11 | tcg/mips: Remove MO_BSWAP handling | Richard Henderson | 2 | -240/+48
    While performing the load in the delay slot of the call to the common
    bswap helper function is cute, it is not worth the added complexity.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-11 | tcg/mips: Convert tcg_out_qemu_{ld,st}_slow_path | Richard Henderson | 1 | -132/+22
    Use tcg_out_ld_helper_args, tcg_out_ld_helper_ret, and
    tcg_out_st_helper_args. This allows our local tcg_out_arg_* infrastructure
    to be removed.

    We are no longer filling the call or return branch delay slots, nor are we
    tail-calling for the store, but this seems a small price to pay.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-11 | tcg/mips: Introduce prepare_host_addr | Richard Henderson | 1 | -232/+172
    Merge tcg_out_tlb_load, add_qemu_ldst_label, tcg_out_test_alignment, and
    some code that lived in both tcg_out_qemu_ld and tcg_out_qemu_st into one
    function that returns HostAddress and TCGLabelQemuLdst structures.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-05 | tcg/mips: Rationalize args to tcg_out_qemu_{ld,st} | Richard Henderson | 1 | -91/+95
    Interpret the variable argument placement in the caller. There are
    several places where we already convert back from bool to type. Clean
    things up by using type throughout.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-02 | tcg/mips: Conditionalize tcg_out_exts_i32_i64 | Richard Henderson | 1 | -1/+3
    Since TCG_TYPE_I32 values are kept sign-extended in registers, we need
    not extend if the register matches. This is already relied upon by
    comparisons.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
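    A sketch of the idea (function and helper signatures are assumptions for
    illustration): the extension becomes a no-op unless the value actually
    moves to a different register.

        static void tcg_out_exts_i32_i64(TCGContext *s, TCGReg rd, TCGReg rs)
        {
            if (rd != rs) {
                tcg_out_ext32s(s, rd, rs);
            }
        }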
2023-04-23 | tcg: Introduce tcg_out_xchg | Richard Henderson | 1 | -0/+5
    We will want a backend interface for register swapping. This is only
    properly defined for x86; all others get a stub version that always
    indicates failure.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-04-23 | tcg: Split out tcg_out_extrl_i64_i32 | Richard Henderson | 1 | -3/+6
    We will need a backend interface for type truncation. For those backends
    that did not enable TCG_TARGET_HAS_extrl_i64_i32, use tcg_out_mov.
    Use it in tcg_reg_alloc_op in the meantime.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-04-23 | tcg: Split out tcg_out_extu_i32_i64 | Richard Henderson | 1 | -3/+6
    We will need a backend interface for type extension with zero.
    Use it in tcg_reg_alloc_op in the meantime.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-04-23 | tcg: Split out tcg_out_exts_i32_i64 | Richard Henderson | 1 | -1/+6
    We will need a backend interface for type extension with sign.
    Use it in tcg_reg_alloc_op in the meantime.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-04-23 | tcg: Split out tcg_out_ext32u | Richard Henderson | 1 | -1/+2
    We will need a backend interface for performing 32-bit zero-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-04-23 | tcg: Split out tcg_out_ext32s | Richard Henderson | 1 | -3/+9
    We will need a backend interface for performing 32-bit sign-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-04-23 | tcg: Split out tcg_out_ext16u | Richard Henderson | 1 | -0/+5
    We will need a backend interface for performing 16-bit zero-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-04-23 | tcg: Split out tcg_out_ext16s | Richard Henderson | 1 | -3/+8
    We will need a backend interface for performing 16-bit sign-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-04-23 | tcg: Split out tcg_out_ext8u | Richard Henderson | 1 | -1/+8
    We will need a backend interface for performing 8-bit zero-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-04-23 | tcg: Split out tcg_out_ext8s | Richard Henderson | 1 | -4/+8
    We will need a backend interface for performing 8-bit sign-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-04-23 | tcg: Replace tcg_abort with g_assert_not_reached | Richard Henderson | 1 | -7/+7
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-04-10 | tcg/mips: Fix TCG_TARGET_CALL_RET_I128 for o32 abi | Richard Henderson | 1 | -1/+2
    The return is by reference, not in 4 integer registers. This error
    resulted in

        qemu-system-i386: tcg/mips/tcg-target.c.inc:140: \
          tcg_target_call_oarg_reg: Assertion `slot >= 0 && slot <= 1' failed.

    Fixes: 5427a9a7604 ("tcg: Add TCG_TARGET_CALL_{RET,ARG}_I128")
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
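    A sketch of the distinction the fix draws (the exact conditional and
    macro values are assumptions here): the o32 ABI returns Int128 by
    reference, while the 64-bit ABIs can use the normal return registers.

        #if _MIPS_SIM == _ABIO32
        # define TCG_TARGET_CALL_RET_I128   TCG_CALL_RET_BY_REF
        #else
        # define TCG_TARGET_CALL_RET_I128   TCG_CALL_RET_NORMAL
        #endif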
2023-02-04 | tcg: Add TCG_TARGET_CALL_{RET,ARG}_I128 | Richard Henderson | 1 | -0/+2
    Fill in the parameters for the host ABI for Int128 for those backends
    which require no extra modification.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-02-04 | tcg: Introduce tcg_target_call_oarg_reg | Richard Henderson | 1 | -4/+6
    Replace the flat array tcg_target_call_oarg_regs[] with a function call
    including the TCGCallReturnKind.

    Extend the set of registers for ARM to r0-r3 to match the ABI:
    https://github.com/ARM-software/abi-aa/blob/main/aapcs32/aapcs32.rst#result-return
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
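    A sketch of what the per-backend hook can look like for MIPS, consistent
    with the assertion quoted in the o32 fix above (the body is an assumption,
    not the committed code): output values land in V0/V1.

        static TCGReg tcg_target_call_oarg_reg(TCGCallReturnKind kind, int slot)
        {
            tcg_debug_assert(kind == TCG_CALL_RET_NORMAL);
            tcg_debug_assert(slot >= 0 && slot <= 1);
            return TCG_REG_V0 + slot;
        }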
2023-02-04 | tcg: Introduce tcg_out_addi_ptr | Richard Henderson | 1 | -0/+7
    Implement the function for arm, i386, and s390x, which will use it.
    Add stubs for all other backends.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-01-17 | tcg: Remove TCG_TARGET_HAS_direct_jump | Richard Henderson | 2 | -2/+0
    We now have the option to generate direct or indirect goto_tb depending
    on the dynamic displacement, thus the define is no longer necessary or
    completely accurate.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-01-17 | tcg: Always define tb_target_set_jmp_target | Richard Henderson | 1 | -0/+6
    Install empty versions for !TCG_TARGET_HAS_direct_jump hosts.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
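    For a host that only ever uses indirect jumps, the "empty version" can
    look like the following sketch (the signature is taken from current QEMU
    and should be treated as an assumption):

        void tb_target_set_jmp_target(const TranslationBlock *tb, int n,
                                      uintptr_t jmp_rx, uintptr_t jmp_rw)
        {
            /* Always indirect, nothing to do. */
        }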