path: root/target/arm
2021-07-09  meson: Introduce target-specific Kconfig  (Philippe Mathieu-Daudé; 1 file, +6/-0)

Add a target-specific Kconfig. We need the definitions in Kconfig so the minikconf tool can verify they exist. However, CONFIG_FOO is only enabled for target foo via the meson.build rules.

Two architectures, ARM and MIPS, are special: because their translators have been split, you can potentially build a plain 32-bit version alongside a 64-bit version that includes the 32-bit subset.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210131111316.232778-6-f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707131744.26027-2-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-02  target/arm: Implement MVE shifts by register  (Peter Maydell; 5 files, +57/-4)

Implement the MVE shifts by register, which perform shifts on a single general-purpose register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-19-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE shifts by immediate  (Peter Maydell; 5 files, +105/-10)

Implement the MVE shifts by immediate, which perform shifts on a single general-purpose register. These patterns overlap with the long-shift-by-immediate patterns, so we have to rearrange the grouping a little here.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-18-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE long shifts by register  (Peter Maydell; 5 files, +182/-3)

Implement the MVE long shifts by register, which perform shifts on a pair of general-purpose registers treated as a 64-bit quantity, with the shift count in another general-purpose register, which might be either positive or negative.

Like the long-shifts-by-immediate, these encodings sit in the space that was previously the UNPREDICTABLE MOVS/ORRS with Rm==13,15.

Because LSLL_rr and ASRL_rr overlap with both MOV_rxri/ORR_rrri and also with CSEL (as one of the previously-UNPREDICTABLE Rm==13 cases), we have to move the CSEL pattern into the same decodetree group.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-17-peter.maydell@linaro.org
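As a rough illustration of the semantics (a minimal sketch under the commit's description, not QEMU's actual helper; the name is hypothetical, and the caller is assumed to have combined RdaHi:RdaLo into one uint64_t):

    /* 64-bit logical shift where a negative count shifts right instead */
    static uint64_t do_lshift64(uint64_t n, int8_t shift)
    {
        if (shift >= 64 || shift <= -64) {
            return 0;   /* everything shifted out */
        }
        return shift >= 0 ? n << shift : n >> -shift;
    }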
2021-07-02  target/arm: Implement MVE long shifts by immediate  (Peter Maydell; 5 files, +132/-0)

The MVE extension to v8.1M includes some new shift instructions which sit entirely within the non-coprocessor part of the encoding space and which operate only on general-purpose registers. They take up the space which was previously UNPREDICTABLE MOVS and ORRS encodings with Rm == 13 or 15.

Implement the long shifts by immediate, which perform shifts on a pair of general-purpose registers treated as a 64-bit quantity, with an immediate shift count between 1 and 32.

Awkwardly, because the MOVS and ORRS trans functions do not UNDEF for the Rm==13,15 case, we need to explicitly emit code to UNDEF for the cases where v8.1M now requires that. (Trying to change MOVS and ORRS is too difficult, because the functions that generate the code are shared between a dozen different kinds of arithmetic or logical instruction for all A32, T16 and T32 encodings, and for some insns and some encodings Rm==13,15 are valid.)

We make the helper functions we need for UQSHLL and SQSHLL take a 32-bit value which the helper casts to int8_t, because we'll need these helpers also for the shift-by-register insns, where the shift count might be < 0 or > 32.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-16-peter.maydell@linaro.org
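The helper shape that the last paragraph describes would look roughly like this (a hedged sketch; do_uqrshl64() is a hypothetical saturating-shift primitive standing in for the real one):

    /* shift count arrives as 32 bits; the cast to int8_t lets the same
     * helper later serve register shifts with counts < 0 or > 32 */
    uint64_t HELPER(mve_uqshll)(CPUARMState *env, uint64_t n, uint32_t shift)
    {
        return do_uqrshl64(n, (int8_t)shift, false, &env->QF);
    }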
2021-07-02  target/arm: Implement MVE VADDLV  (Peter Maydell; 4 files, +90/-1)

Implement the MVE VADDLV insn; this is similar to VADDV, except that it accumulates 32-bit elements into a 64-bit accumulator stored in a pair of general-purpose registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-15-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE VSHLC  (Peter Maydell; 4 files, +72/-0)

Implement the MVE VSHLC insn, which performs a shift left of the entire vector, with carry-in bits provided from a general-purpose register and carry-out bits written back to that register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-14-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE saturating narrowing shifts  (Peter Maydell; 4 files, +174/-0)

Implement the MVE saturating shift-right-and-narrow insns VQSHRN, VQSHRUN, VQRSHRN and VQRSHRUN.

do_srshr() is borrowed from sve_helper.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-13-peter.maydell@linaro.org
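For reference, the borrowed signed rounding-shift-right helper in sve_helper.c looks roughly like this (the do_urshr() borrowed by the next patch is the unsigned analogue; likely() is QEMU's branch-hint macro):

    static inline int64_t do_srshr(int64_t x, unsigned sh)
    {
        if (likely(sh < 64)) {
            return (x >> sh) + ((x >> (sh - 1)) & 1);
        } else {
            /* Rounding the sign bit always produces 0. */
            return 0;
        }
    }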
2021-07-02  target/arm: Implement MVE VSHRN, VRSHRN  (Peter Maydell; 4 files, +76/-0)

Implement the MVE shift-right-and-narrow insns VSHRN and VRSHRN.

do_urshr() is borrowed from sve_helper.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-12-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE VSRI, VSLI  (Peter Maydell; 4 files, +62/-0)

Implement the MVE VSRI and VSLI insns, which perform a shift-and-insert operation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-11-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE VSHLL  (Peter Maydell; 4 files, +105/-4)

Implement the MVE VSHLL (vector shift left long) insn. This has two encodings: the T1 encoding is the usual shift-by-immediate format, and the T2 encoding is a special case where the shift count is always equal to the element size.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-10-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE vector shift right by immediate insns  (Peter Maydell; 6 files, +72/-18)

Implement the MVE vector shift right by immediate insns VSHRI and VRSHRI. As with Neon, we implement these by using helper functions which perform left shifts but allow negative shift counts to indicate right shifts.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-9-peter.maydell@linaro.org
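The negative-count convention means one helper serves both directions. A hedged sketch of the idea (hypothetical name, shown for 32-bit lanes):

    /* "left shift" where a negative count encodes a right shift */
    static inline uint32_t do_ushl32(uint32_t n, int8_t shift)
    {
        if (shift >= 0) {
            return shift < 32 ? n << shift : 0;
        }
        return -shift < 32 ? n >> -shift : 0;
    }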
2021-07-02  target/arm: Implement MVE vector shift left by immediate insns  (Peter Maydell; 4 files, +147/-0)

Implement the MVE shift-vector-left-by-immediate insns VSHL, VQSHL and VQSHLU.

The size-and-immediate encoding here is the same as Neon, and we handle it the same way neon-dp.decode does.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-8-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE logical immediate insns  (Peter Maydell; 4 files, +95/-0)

Implement the MVE logical-immediate insns (VMOV, VMVN, VORR and VBIC). These have essentially the same encoding as their Neon equivalents, and we implement the decode in the same way.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-7-peter.maydell@linaro.org
2021-07-02  target/arm: Use dup_const() instead of bitfield_replicate()  (Peter Maydell; 1 file, +1/-1)

Use dup_const() instead of bitfield_replicate() in disas_simd_mod_imm().

(We can't replace the other use of bitfield_replicate() in this file, in logic_imm_decode_wmask(), because that location needs to handle 2 and 4 bit elements, which dup_const() cannot.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-6-peter.maydell@linaro.org
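For context, tcg's dup_const() expands to roughly the following, which is why it only handles the power-of-two element sizes MO_8 through MO_64:

    uint64_t dup_const(unsigned vece, uint64_t c)
    {
        switch (vece) {
        case MO_8:  return 0x0101010101010101ull * (uint8_t)c;
        case MO_16: return 0x0001000100010001ull * (uint16_t)c;
        case MO_32: return 0x0000000100000001ull * (uint32_t)c;
        case MO_64: return c;
        default:    g_assert_not_reached();
        }
    }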
2021-07-02  target/arm: Use asimd_imm_const for A64 decode  (Peter Maydell; 3 files, +24/-82)

The A64 AdvSIMD modified-immediate grouping uses almost the same constant encoding that A32 Neon does; reuse asimd_imm_const() (to which we add the AArch64-specific case for cmode 15 op 1) instead of reimplementing it all.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-5-peter.maydell@linaro.org
2021-07-02  target/arm: Make asimd_imm_const() public  (Peter Maydell; 3 files, +73/-63)

The function asimd_imm_const() in translate-neon.c is an implementation of the pseudocode AdvSIMDExpandImm(), which we will also want for MVE. Move the implementation to translate.c, with a prototype in translate.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-4-peter.maydell@linaro.org
2021-07-02  target/arm: Fix bugs in MVE VRMLALDAVH, VRMLSLDAVH  (Peter Maydell; 1 file, +21/-17)

The initial implementation of the MVE VRMLALDAVH and VRMLSLDAVH insns had some bugs:

 * the 32x32 multiply of elements was being done as 32x32->32, not 32x32->64
 * we were incorrectly maintaining the accumulator in its full 72-bit form across all 4 beats of the insn; in the pseudocode it is squashed back into the 64 bits of the RdaHi:RdaLo registers after each beat

In particular, fixing the second of these allows us to recast the implementation to avoid 128-bit arithmetic entirely. Since the element size here is always 4, we can also drop the parameterization of ESIZE to make the code a little more readable.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-3-peter.maydell@linaro.org
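The first bug listed above is the classic C truncation pitfall; in essence:

    int32_t a = n[e], b = m[e];
    int64_t wrong = a * b;            /* 32x32->32, then widened */
    int64_t right = (int64_t)a * b;   /* 32x32->64, what the insn needs */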
2021-07-02  target/arm: Fix MVE widening/narrowing VLDR/VSTR offset calculation  (Peter Maydell; 1 file, +9/-8)

In do_ldst(), the calculation of the offset needs to be based on the size of the memory access, not the size of the elements in the vector. This meant we were getting it wrong for the widening and narrowing variants of the various VLDR and VSTR insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-2-peter.maydell@linaro.org
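Concretely (a hedged sketch; the msize/esize names follow the commit's description): for a widening load such as VLDRB.U16, the vector element size is 2 bytes but the memory access is 1 byte, so the offset must scale with the latter:

    /* scale the immediate offset by the memory access size */
    static inline uint32_t ldst_offset(uint32_t imm, unsigned msize_log2)
    {
        return imm << msize_log2;   /* previously scaled by the vector
                                       element size, which is wrong for
                                       widening/narrowing forms */
    }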
2021-07-02  target/arm: Check NaN mode before silencing NaN  (Joe Komlodi; 2 files, +27/-9)

If the CPU is running in default NaN mode (FPCR.DN == 1) and we execute FRSQRTE, FRECPE, or FRECPX with a signaling NaN, parts_silence_nan_frac() will assert due to fpst->default_nan_mode being set.

To avoid this, we check to see what NaN mode we're running in before we call floatxx_silence_nan().

Signed-off-by: Joe Komlodi <joe.komlodi@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1624662174-175828-2-git-send-email-joe.komlodi@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
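The shape of the check, roughly (shown for float16; the real patch applies the same pattern in each affected helper):

    float16 ret = f16;
    if (float16_is_signaling_nan(f16, fpst)) {
        float_raise(float_flag_invalid, fpst);
        if (!fpst->default_nan_mode) {
            /* only silence the NaN when we'll actually propagate it */
            ret = float16_silence_nan(f16, fpst);
        }
    }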
2021-06-29  target/arm: Improve REVSH  (Richard Henderson; 1 file, +1/-3)

The new bswap flags can implement the semantics exactly.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29  target/arm: Improve vector REV  (Richard Henderson; 1 file, +2/-4)

We can eliminate the requirement for a zero-extended output, because the following store will ignore any garbage high bits.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29  target/arm: Improve REV32  (Richard Henderson; 1 file, +4/-13)

For the sf version, we are performing two 32-bit bswaps in either half of the register. This is equivalent to performing one 64-bit bswap followed by a rotate.

For the non-sf version, we can remove TCG_BSWAP_IZ and the preceding zero-extension.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
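Based on that description, the sf case reduces to something like this sketch of the translator code:

    /* bswap both 32-bit halves via one 64-bit bswap, then rotate the
     * halves back into place */
    tcg_gen_bswap64_i64(tcg_rd, tcg_rn);
    tcg_gen_rotri_i64(tcg_rd, tcg_rd, 32);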
2021-06-29  tcg: Add flags argument to tcg_gen_bswap16_*, tcg_gen_bswap32_i64  (Richard Henderson; 2 files, +8/-6)

Implement the new semantics in the fallback expansion. Change all callers to supply the flags that keep the semantics unchanged locally.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
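After this change, callers state the semantics they relied on explicitly; for example, a 16-bit bswap whose input is known zero-extended and whose output should be zero-extended into the full register:

    tcg_gen_bswap16_i32(ret, arg, TCG_BSWAP_IZ | TCG_BSWAP_OZ);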
2021-06-24  target/arm: Implement MTE3  (Peter Collingbourne; 2 files, +52/-32)

MTE3 introduces an asymmetric tag checking mode, in which loads are checked synchronously and stores are checked asynchronously. Add support for it.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210616195614.11785-1-pcc@google.com
[PMM: Add line to emulation.rst]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
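A hedged sketch of the dispatch (the enum values are hypothetical; MMUAccessType and MMU_DATA_STORE are QEMU's):

    static bool report_async(int mte_mode, MMUAccessType access_type)
    {
        /* asymmetric mode: stores report asynchronously,
         * loads synchronously */
        if (mte_mode == MTE_ASYMMETRIC) {
            return access_type == MMU_DATA_STORE;
        }
        return mte_mode == MTE_ASYNC;
    }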
2021-06-24  target/arm: Make VMOV scalar <-> gpreg beatwise for MVE  (Peter Maydell; 3 files, +75/-8)

In a CPU with MVE, the VMOV (vector lane to general-purpose register) and VMOV (general-purpose register to vector lane) insns are not predicated, but they are subject to beatwise execution if they are not in an IT block.

Since our implementation always executes all 4 beats in one tick, this means only that we need to handle PSR.ECI:

 * we must do the usual check for bad ECI state
 * we must advance ECI state if the insn succeeds
 * if ECI says we should not be executing the beat corresponding to the lane of the vector register being accessed then we should skip performing the move

Note that if PSR.ECI is non-zero then we cannot be in an IT block.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-45-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VADDV  (Peter Maydell; 4 files, +76/-0)

Implement the MVE VADDV insn, which performs an addition across vector lanes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-44-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VHCADD  (Peter Maydell; 4 files, +19/-3)

Implement the MVE VHCADD insn, which is similar to VCADD but performs a halving step. This one overlaps with VADC.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-43-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VCADD  (Peter Maydell; 4 files, +51/-2)

Implement the MVE VCADD insn, which performs a complex add with rotate. Note that the size=0b11 encoding is VSBC.

The architecture grants some leeway for the "destination and Vm source overlap" case for the size MO_32 case, but we choose not to make use of it, instead always calculating all 16 bytes worth of results before setting the destination register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-42-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VADC, VSBC  (Peter Maydell; 4 files, +99/-0)

Implement the MVE VADC and VSBC insns. These perform an add-with-carry or subtract-with-carry of the 32-bit elements in each lane of the input vectors, where the carry-out of each add is the carry-in of the next. The initial carry input is either 1 or is from FPSCR.C; the carry out at the end is written back to FPSCR.C.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-41-peter.maydell@linaro.org
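A minimal sketch of the lane-serial carry chain described above (not the real helper, which also applies the predicate mask):

    static uint32_t do_vadc(uint32_t *d, const uint32_t *n,
                            const uint32_t *m, uint32_t carry_in)
    {
        for (unsigned e = 0; e < 4; e++) {
            uint64_t r = (uint64_t)n[e] + m[e] + carry_in;
            d[e] = (uint32_t)r;
            carry_in = r >> 32;   /* carry-out feeds the next lane */
        }
        return carry_in;          /* written back to FPSCR.C */
    }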
2021-06-24  target/arm: Implement MVE VRHADD  (Peter Maydell; 4 files, +19/-0)

Implement the MVE VRHADD insn, which performs a rounded halving addition.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-40-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VQDMULL (vector)  (Peter Maydell; 4 files, +70/-0)

Implement the vector form of the MVE VQDMULL insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-39-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VQDMLSDH and VQRDMLSDH  (Peter Maydell; 4 files, +69/-0)

Implement the MVE VQDMLSDH and VQRDMLSDH insns, which are like VQDMLADH and VQRDMLADH except that products are subtracted rather than added.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-38-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VQDMLADH and VQRDMLADH  (Peter Maydell; 4 files, +114/-0)

Implement the MVE VQDMLADH and VQRDMLADH insns. These multiply elements, and then add pairs of products, double, possibly round, saturate and return the high half of the result.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-37-peter.maydell@linaro.org
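A hedged sketch of that pipeline for one 16-bit output element (hypothetical name; the real helper is macro-generated per element size):

    static int16_t do_vqdmladh_h(int16_t a, int16_t b,
                                 int16_t c, int16_t d,
                                 bool round, bool *sat)
    {
        /* two products, summed, doubled, optionally rounded */
        int64_t r = ((int64_t)a * b + (int64_t)c * d) * 2;
        if (round) {
            r += 1 << 15;
        }
        r >>= 16;                    /* keep the high half */
        if (r != (int16_t)r) {       /* saturate to element width */
            *sat = true;
            r = (r < 0) ? INT16_MIN : INT16_MAX;
        }
        return r;
    }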
2021-06-24  target/arm: Implement MVE VRSHL  (Peter Maydell; 4 files, +17/-0)

Implement the MVE VRSHL insn (vector form).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-36-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VSHL insn  (Peter Maydell; 4 files, +19/-0)

Implement the MVE VSHL insn (vector form).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-35-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VQRSHL  (Peter Maydell; 4 files, +19/-0)

Implement the MVE VQRSHL (vector) insn. Again, the code to perform the actual shifts is borrowed from neon_helper.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-34-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VQSHL (vector)  (Peter Maydell; 4 files, +56/-0)

Implement the MVE VQSHL insn (encoding T4, which is the vector-shift-by-vector version).

The DO_SQSHL_OP and DO_UQSHL_OP macros here are derived from the neon_helper.c code for qshl_u{8,16,32} and qshl_s{8,16,32}.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-33-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VQADD, VQSUB (vector)  (Peter Maydell; 4 files, +39/-0)

Implement the vector forms of the MVE VQADD and VQSUB insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-32-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VQDMULH, VQRDMULH (vector)  (Peter Maydell; 4 files, +40/-0)

Implement the vector forms of the MVE VQDMULH and VQRDMULH insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-31-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VQDMULL scalar  (Peter Maydell; 4 files, +119/-4)

Implement the MVE VQDMULL scalar insn. This multiplies the top or bottom half of each element by the scalar, doubles and saturates to a double-width result.

Note that this encoding overlaps with VQADD and VQSUB; it uses what in VQADD and VQSUB would be the 'size=0b11' encoding.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-30-peter.maydell@linaro.org
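The doubling saturating multiply has exactly one overflow case per width; a sketch for the 32-to-64-bit form (hypothetical name):

    static int64_t do_qdmull32(int32_t n, int32_t m, bool *sat)
    {
        if (n == INT32_MIN && m == INT32_MIN) {
            /* 2 * 2^62 does not fit in int64_t: saturate */
            *sat = true;
            return INT64_MAX;
        }
        return 2 * (int64_t)n * m;
    }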
2021-06-24  target/arm: Implement MVE VQDMULH and VQRDMULH (scalar)  (Peter Maydell; 4 files, +38/-0)

Implement the MVE VQDMULH and VQRDMULH scalar insns, which multiply elements by the scalar, double, possibly round, take the high half and saturate.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-29-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VQADD and VQSUB  (Peter Maydell; 4 files, +87/-0)

Implement the MVE VQADD and VQSUB insns, which perform saturating addition or subtraction of a scalar to or from each element.

Note that individual bytes of each result element are used or discarded according to the predicate mask, but FPSCR.QC is only set if the predicate mask for the lowest byte of the element is set.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-28-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VPST  (Peter Maydell; 2 files, +63/-0)

Implement the MVE VPST insn, which sets the predicate mask fields in the VPR to the immediate value encoded in the insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-27-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VBRSR  (Peter Maydell; 4 files, +49/-0)

Implement the MVE VBRSR insn, which reverses a specified number of bits in each element, setting the rest to zero.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-26-peter.maydell@linaro.org
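A hedged sketch of the per-lane operation for 32-bit elements (hypothetical name; revbit32() is QEMU's bit-reversal helper from qemu/host-utils.h):

    /* reverse the low m bits of n, zeroing the rest */
    static uint32_t do_vbrsr32(uint32_t n, uint32_t m)
    {
        m &= 0xff;
        if (m == 0) {
            return 0;
        }
        if (m < 32) {
            return revbit32(n) >> (32 - m);
        }
        return revbit32(n);
    }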
2021-06-24  target/arm: Implement MVE VHADD, VHSUB (scalar)  (Peter Maydell; 4 files, +32/-0)

Implement the scalar variants of the MVE VHADD and VHSUB insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-25-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VSUB, VMUL (scalar)  (Peter Maydell; 4 files, +14/-0)

Implement the scalar forms of the MVE VSUB and VMUL insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-24-peter.maydell@linaro.org
2021-06-24  target/arm: Implement MVE VADD (scalar)  (Peter Maydell; 4 files, +78/-0)

Implement the scalar form of the MVE VADD insn. This takes the scalar operand from a general-purpose register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-23-peter.maydell@linaro.org
2021-06-21  target/arm: Implement MVE VRMLALDAVH, VRMLSLDAVH  (Peter Maydell; 4 files, +76/-0)

Implement the MVE VRMLALDAVH and VRMLSLDAVH insns, which accumulate the results of a rounded multiply of pairs of elements into a 72-bit accumulator, returning the top 64 bits in a pair of general-purpose registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-22-peter.maydell@linaro.org
2021-06-21  target/arm: Implement MVE VMLSLDAV  (Peter Maydell; 4 files, +23/-0)

Implement the MVE insn VMLSLDAV, which multiplies source elements, alternately adding and subtracting them, and accumulates into a 64-bit result in a pair of general-purpose registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-21-peter.maydell@linaro.org