path: root/tcg/aarch64
Age | Commit message | Author | Files | Lines
2020-01-15 | tcg: Search includes in the parent source directory | Philippe Mathieu-Daudé | 1 | -2/+2
All the *.inc.c files included by tcg/$TARGET/tcg-target.inc.c are in tcg/, their parent directory. To simplify the preprocessor search path, include the relative parent path: '..'. Patch created mechanically by running:

    $ for x in tcg-pool.inc.c tcg-ldst.inc.c; do \
        sed -i "s,#include \"$x\",#include \"../$x\"," \
          $(git grep -l "#include \"$x\""); \
      done

Acked-by: David Gibson <david@gibson.dropbear.id.au> (ppc parts) Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Stefan Weil <sw@weilnetz.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20200101112303.20724-3-philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-11-11 | tcg/aarch64/tcg-target.opc.h: Add copyright/license | Peter Maydell | 1 | -3/+12
Add the copyright/license boilerplate for target/aarch64/tcg-target.opc.h. This file has only had two commits: 14e4c1e2355473ccb29 and 79525dfd08262d8, both by the same Linaro engineer. The license is GPL-2-or-later, since that's what the rest of tcg/aarch64 uses. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20191025155848.17362-2-peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03 | tcg: TCGMemOp is now accelerator independent MemOp | Tony Nguyen | 1 | -13/+13
Preparation for collapsing the two byte swaps, adjust_endianness and handle_bswap, along the I/O path. Target-dependent attributes are conditionalized upon NEED_CPU_H. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Acked-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <81d9cd7d7f5aaadfa772d6c48ecee834e9cf7882.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-07-14 | tcg/aarch64: Fix output of extract2 opcodes | Richard Henderson | 1 | -1/+1
This patch fixes two problems: (1) The inputs to the EXTR insn were reversed, (2) The input constraints use rZ, which means that we need to use the REG0 macro in order to supply XZR for a constant 0 input. Fixes: 464c2969d5d Reported-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-10 | tcg/aarch64: Use LDP to load tlb mask+table | Richard Henderson | 1 | -7/+8
This changes the code generation for the tlb from e.g.

    ldur x0, [x19, #0xffffffffffffffe0]
    ldur x1, [x19, #0xffffffffffffffe8]
    and x0, x0, x20, lsr #8
    add x1, x1, x0
    ldr x0, [x1]
    ldr x1, [x1, #0x18]

to

    ldp x0, x1, [x19, #-0x20]
    and x0, x0, x20, lsr #8
    add x1, x1, x0
    ldr x0, [x1]
    ldr x1, [x1, #0x18]

Acked-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-10 | cpu: Move the softmmu tlb to CPUNegativeOffsetState | Richard Henderson | 1 | -23/+6
We have for some time had code within the tcg backends to handle large positive offsets from env. This move makes sure that need not happen. Indeed, we are able to assert at build time that simple offsets suffice for all hosts. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-10 | tcg: Create struct CPUTLB | Richard Henderson | 1 | -7/+3
Move all softmmu tlb data into this structure. Arrange the members so that we are able to place mask+table together and at a smaller absolute offset from ENV. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
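To see why the previous two entries pair up, here is a rough C sketch with invented names (not QEMU's real CPUTLB layout): keeping the index mask and the table pointer in adjacent fields lets a backend fetch both with a single paired load, which is exactly what the "Use LDP to load tlb mask+table" commit above exploits on aarch64.

    #include <stdint.h>
    #include <stddef.h>

    /* Simplified, hypothetical layout -- not the actual QEMU structures. */
    typedef struct FastTlbSketch {
        uint64_t mask;    /* index mask for this TLB */
        void    *table;   /* base of the entry array */
    } FastTlbSketch;

    /* Both fields are consumed together on every lookup, so on aarch64 a
     * backend can load them with one LDP instead of two LDURs. */
    static inline void *tlb_entry_sketch(const FastTlbSketch *f, uint64_t addr)
    {
        size_t offset = (size_t)(addr >> 12) & f->mask;  /* illustrative indexing only */
        return (char *)f->table + offset;
    }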
2019-05-22 | tcg/aarch64: Allow immediates for vector ORR and BIC | Richard Henderson | 1 | -7/+83
This allows immediates to be used for ORR and BIC, as well as the trivial inversions, ORC and AND. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-22 | tcg/aarch64: Build vector immediates with two insns | Richard Henderson | 1 | -0/+47
Use MOVI+ORR or MVNI+BIC in order to build some vector constants, as opposed to dropping them to the constant pool. This includes all 16-bit constants and a similar set of 32-bit constants. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
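A minimal C sketch of the kind of test this implies, with an invented helper name and deliberately ignoring the other MOVI encodings: a replicated 32-bit element constant can be built in two instructions when at most two of its bytes are non-zero (MOVI then ORR) or at most two of its bytes differ from 0xff (MVNI then BIC).

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch only: AdvSIMD MOVI/ORR (vector, immediate) place an 8-bit value
     * in any byte of a 32-bit element; MVNI/BIC take the inverted forms. */
    static bool buildable_in_two_insns(uint32_t v)
    {
        int nonzero = 0, not_ones = 0;
        for (int i = 0; i < 4; i++) {
            uint8_t b = (uint8_t)(v >> (i * 8));
            nonzero  += (b != 0x00);
            not_ones += (b != 0xff);
        }
        return nonzero <= 2 || not_ones <= 2;
    }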
2019-05-22 | tcg/aarch64: Use MVNI in tcg_out_dupi_vec | Richard Henderson | 1 | -0/+11
The complement of a subset of immediates can be computed with a single instruction. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-22 | tcg/aarch64: Split up is_fimm | Richard Henderson | 1 | -84/+119
There are several sub-classes of vector immediate, and only MOVI can use them all. This will enable usage of MVNI and ORRI, which use progressively fewer sub-classes. This patch adds no new functionality, merely splits the function and moves part of the logic into tcg_out_dupi_vec. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-22 | tcg/aarch64: Support vector bitwise select value | Richard Henderson | 2 | -2/+24
The instruction set has 3 insns that perform the same operation, only varying in which operand must overlap the destination. We can represent the operation without overlap and choose based on the operands seen. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-22 | tcg: Add support for vector compare select | Richard Henderson | 1 | -0/+1
Perform a per-element conditional move. This combination operation is easier to implement on some host vector units than plain cmp+bitsel. Omit the usual gvec interface, as this is intended to be used by target-specific gvec expansion call-backs. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
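As an illustration of the per-element semantics, here is a scalar C stand-in (not the actual TCG expansion; the comparison condition is chosen arbitrarily):

    #include <stdint.h>

    /* Per element, pick one of two sources based on a comparison, in one
     * logical operation instead of a separate cmp followed by bitsel. */
    static void cmpsel_i32_sketch(int32_t *d, const int32_t *a, const int32_t *b,
                                  const int32_t *t, const int32_t *f, int n)
    {
        for (int i = 0; i < n; i++) {
            d[i] = (a[i] < b[i]) ? t[i] : f[i];
        }
    }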
2019-05-22 | tcg: Add support for vector bitwise select | Richard Henderson | 1 | -0/+1
This operation performs d = (b & a) | (c & ~a), and is present on a majority of host vector units. Include gvec expanders. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
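A scalar C illustration of that formula (not QEMU code): the bits of 'a' select between 'b' (where set) and 'c' (where clear).

    #include <stdint.h>

    /* d = (b & a) | (c & ~a), applied per element in the vector case. */
    static uint64_t bitsel_sketch(uint64_t a, uint64_t b, uint64_t c)
    {
        return (b & a) | (c & ~a);
    }

On AArch64 this corresponds to the BSL/BIT/BIF family referred to in the aarch64 entry above, which perform the same operation and differ only in which operand overlaps the destination.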
2019-05-13 | tcg/aarch64: Do not advertise minmax for MO_64 | Richard Henderson | 1 | -4/+4
The min/max instructions are not available for 64-bit elements. Fixes: 93f332a50371 Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-13 | tcg/aarch64: Support vector absolute value | Richard Henderson | 2 | -1/+7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-13 | tcg: Add support for vector absolute value | Richard Henderson | 1 | -0/+1
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-13 | tcg/aarch64: Support vector variable shift opcodes | Richard Henderson | 3 | -1/+45
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-13 | tcg: Add INDEX_op_dupm_vec | Richard Henderson | 1 | -0/+4
Allow the backend to expand dup from memory directly, instead of forcing the value into a temp first. This is especially important if integer/vector register moves do not exist. Note that officially tcg_out_dupm_vec is allowed to fail. If it did, we could fix this up relatively easily:

  VECE == 32/64:
    Load the value into a vector register, then dup. Both of these must work.

  VECE == 8/16:
    If the value happens to be at an offset such that an aligned load would place the desired value in the least significant end of the register, go ahead and load w/garbage in high bits.
    Load the value w/INDEX_op_ld{8,16}_i32. Attempt a move directly to vector reg, which may fail. Store the value into the backing store for OTS. Load the value into the vector reg w/TCG_TYPE_I32, which must work. Duplicate from the vector reg into itself, which must work.

All of which is well and good, except that all supported hosts can support dupm for all vece, so all of the failure paths would be dead code and untestable. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
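For intuition, a scalar C stand-in for what "dup from memory" means (an invented helper, not the backend code): load one element and replicate it across the destination vector, which the LD1R instruction in the next entry does in a single step.

    #include <stddef.h>
    #include <string.h>

    /* Replicate the 1 << vece byte element at 'mem' across the dst buffer. */
    static void dupm_sketch(void *dst, size_t dst_bytes, const void *mem, unsigned vece)
    {
        size_t esize = (size_t)1 << vece;   /* vece: 0=8-bit ... 3=64-bit elements */
        for (size_t i = 0; i + esize <= dst_bytes; i += esize) {
            memcpy((char *)dst + i, mem, esize);
        }
    }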
2019-05-13 | tcg/aarch64: Implement tcg_out_dupm_vec | Richard Henderson | 1 | -2/+35
The LD1R instruction does all the work. Note that the only useful addressing mode is a base register with no offset. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-13 | tcg: Add tcg_out_dupm_vec to the backend interface | Richard Henderson | 1 | -0/+6
Currently stubbed out in all backends that support vectors. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-13 | tcg: Manually expand INDEX_op_dup_vec | Richard Henderson | 1 | -5/+4
This case is similar to INDEX_op_mov_* in that we need to do different things depending on the current location of the source. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> --- v3: Added some commentary to the tcg_reg_alloc_* functions.
2019-05-13 | tcg: Promote tcg_out_{dup,dupi}_vec to backend interface | Richard Henderson | 1 | -2/+10
The i386 backend already has these functions, and the aarch64 backend could easily split out one. Nothing is done with these functions yet, but this will aid register allocation of INDEX_op_dup_vec in a later patch. Adjust the aarch64 tcg_out_dupi_vec signature to match the new interface. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-13 | tcg: Return bool success from tcg_out_mov | Richard Henderson | 1 | -2/+3
This patch merely changes the interface, aborting on all failures, of which there are currently none. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-04-24 | tcg: Restart TB generation after out-of-line ldst overflow | Richard Henderson | 1 | -6/+10
This is part c of relocation overflow handling. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-04-24 | tcg/aarch64: Support INDEX_op_extract2_{i32,i64} | Richard Henderson | 2 | -2/+13
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-04-24 | tcg: Add INDEX_op_extract2_{i32,i64} | Richard Henderson | 1 | -0/+2
This will let backends implement the double-word shift operation. Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-01-28 | cputlb: Remove static tlb sizing | Richard Henderson | 1 | -1/+0
Now that all tcg backends support TCG_TARGET_IMPLEMENTS_DYN_TLB, remove the define and the old code. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-01-28 | tcg/aarch64: enable dynamic TLB sizing | Richard Henderson | 2 | -42/+60
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Tested-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-01-28 | tcg: introduce dynamic TLB sizing | Emilio G. Cota | 1 | -0/+1
Disabled in all TCG backends for now. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <20190116170114.26802-3-cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-01-28 | tcg/aarch64: Implement vector minmax arithmetic | Richard Henderson | 2 | -1/+25
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-01-28 | tcg/aarch64: Implement vector saturating arithmetic | Richard Henderson | 2 | -1/+25
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-01-28 | tcg: Add opcodes for vector minmax arithmetic | Richard Henderson | 1 | -0/+1
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-01-28 | tcg: Add opcodes for vector saturated arithmetic | Richard Henderson | 1 | -0/+1
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-12-17 | tcg: Add TCG_TARGET_HAS_MEMORY_BSWAP | Richard Henderson | 1 | -0/+1
For now, defined universally as true, since we previously required backends to implement swapped memory operations. Future patches may now remove that support where it is onerous. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-12-17 | tcg/aarch64: Return false on failure from patch_reloc | Richard Henderson | 1 | -16/+21
This does require an extra two checks within the slow paths to replace the assert that we're moving. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-12-17 | tcg: Return success from patch_reloc | Richard Henderson | 1 | -1/+2
This will move the assert for success from within (subroutines of) patch_reloc into the callers. It will also let new code do something different when a relocation is out of range. For the moment, all backends are trivially converted to return true. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-12-17 | tcg/aarch64: Fold away "noaddr" branch routines | Richard Henderson | 1 | -19/+2
There is one use apiece for these. There is no longer a need for preserving branch offset operands, as we no longer re-translate. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-12-17 | tcg/aarch64: Remove reloc_pc26_atomic | Richard Henderson | 1 | -12/+0
It is unused since b68686bd4bfeb70040b4099df993dfa0b4f37b03. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-07-19 | tcg/aarch64: limit mul_vec size | Alex Bennée | 1 | -1/+2
In AdvSIMD we can only do 32x32 integer multiplies, although SVE is capable of larger 64-bit multiplies. As a result we can end up generating invalid opcodes. Fix this by only reporting that we can emit mul vector ops if the size is small enough. Fixes a crash on sve-all-short-v8.3+sve@vq3/insn_mul_z_zi___INC.risu.bin when running on AArch64 hardware. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20180719154248.29669-1-alex.bennee@linaro.org> [rth: Removed the tcg_debug_assert -- there are plenty of other cases that we do not diagnose within the insn encoding helpers.] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
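A hypothetical sketch of the shape of such a check, with invented names (the element-size constants mirror TCG's MO_* values): advertise a vector multiply only for element sizes that AdvSIMD can actually encode.

    /* Sketch only -- not the actual QEMU query function. */
    enum { MO_8, MO_16, MO_32, MO_64 };

    static int backend_supports_mul_vec(unsigned vece)
    {
        return vece <= MO_32;   /* AdvSIMD MUL has no 64-bit element form */
    }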
2018-06-15 | tcg: Reduce max TB opcode count | Richard Henderson | 1 | -1/+1
Also, assert that we don't overflow either of two different offsets into the TB. Both unwind and goto_tb record a uint16_t for later use. This fixes an arm-softmmu test case utilizing NEON in which there is a TB generated that runs to 7800 opcodes and compiles to 96k on an x86_64 host. This overflows the 16-bit offset in which we record the goto_tb reset offset. Because of that overflow, we install a jump destination that goes to neverland. Boom. With this reduced op count, the same TB compiles to about 48k for aarch64, ppc64le, and x86_64 hosts, and neither assertion fires. Cc: qemu-stable@nongnu.org Reported-by: "Jason A. Donenfeld" <Jason@zx2c4.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
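The overflow itself is easy to see in isolation; a tiny, self-contained C example (not QEMU code) showing how a ~96k code offset wraps when stored in a uint16_t:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t real_offset = 96 * 1024;           /* ~96k of host code in one TB */
        uint16_t stored = (uint16_t)real_offset;    /* silently wraps to 32768 */
        printf("real=%u stored=%u\n", real_offset, (unsigned)stored);
        return 0;
    }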
2018-02-08 | tcg/aarch64: Add vector operations | Richard Henderson | 3 | -47/+569
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-09-17 | tcg/aarch64: Fully convert tcg_target_op_def | Richard Henderson | 1 | -131/+151
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-09-17 | tcg: Remove tcg_regset_set32 | Richard Henderson | 1 | -16/+17
It's not even clear what the interface REG and VAL32 were supposed to mean. All uses had REG = 0 and VAL32 was the bitset assigned to the destination. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-09-17 | tcg: Remove tcg_regset_clear | Richard Henderson | 1 | -1/+1
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-09-07 | tcg/aarch64: Use constant pool for movi | Richard Henderson | 2 | -30/+33
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-07 | tcg: Rearrange ldst label tracking | Richard Henderson | 2 | -1/+6
Dispense with TCGBackendData, as it has never been used for more than holding a single pointer. Use a define in the cpu/tcg-target.h to signal requirement for TCGLabelQemuLdst, so that we can drop the no-op tcg-be-null.h stubs. Rename tcg-be-ldst.h to tcg-ldst.inc.c. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-07 | tcg: Move USE_DIRECT_JUMP discriminator to tcg/cpu/tcg-target.h | Richard Henderson | 2 | -9/+9
Replace the USE_DIRECT_JUMP ifdef with a TCG_TARGET_HAS_direct_jump boolean test. Replace the tb_set_jmp_target1 ifdef with an unconditional function tb_target_set_jmp_target. While we're touching all backends, add a parameter for tb->tc_ptr; we're going to need it shortly for some backends. Move tb_set_jmp_target and tb_add_jump from exec-all.h to cpu-exec.c. This opens the possibility for TCG_TARGET_HAS_direct_jump to be a runtime decision -- based on host cpu capabilities, the size of code_gen_buffer, or a future debugging switch. Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-05 | tcg: Add tcg target default memory ordering | Pranith Kumar | 1 | -0/+2
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20170829063313.10237-3-bobby.prani@gmail.com> [rth: Dropped ia64 hunk] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-07-09 | tcg/aarch64: Enable indirect jump path using LDR (literal) | Pranith Kumar | 1 | -14/+28
This patch enables the indirect jump path using an LDR (literal) instruction. It will be interesting to test and see which of the two paths performs better. CC: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20170630143614.31059-3-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>