path: root/target/arm
Age | Commit message | Author | Files | Lines (-/+)
2021-09-14 | target/arm: Restrict cpu_exec_interrupt() handler to sysemu | Philippe Mathieu-Daudé | 3 files | -7/+9

    Restrict cpu_exec_interrupt() and its callees to sysemu.

    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Reviewed-by: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20210911165434.531552-8-f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 | accel/tcg: Add DisasContextBase argument to translator_ld* | Ilya Leoshkevich | 3 files | -11/+12

    Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
    [rth: Split out of a larger patch.]
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-13 | target/arm: Merge disas_a64_insn into aarch64_tr_translate_insn | Richard Henderson | 1 file | -115/+109

    It is confusing to have different exits from translation for various
    conditions in separate functions. Merge disas_a64_insn into its only
    caller.

    Standardize on the "s" name for the DisasContext, as the code from
    disas_a64_insn had more instances.

    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210821195958.41312-3-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-13 | target/arm: Take an exception if PSTATE.IL is set | Peter Maydell | 7 files | -0/+49

    In v8A, the PSTATE.IL bit is set for various kinds of illegal
    exception return or mode-change attempts. We already set PSTATE.IL
    (or its AArch32 equivalent CPSR.IL) in all those cases, but we
    weren't implementing the part of the behaviour where attempting to
    execute an instruction with PSTATE.IL takes an immediate exception
    with an appropriate syndrome value.

    Add a new TB flags bit tracking PSTATE.IL/CPSR.IL, and generate code
    to take an exception instead of whatever the instruction would have
    been.

    PSTATE.IL and CPSR.IL change only on exception entry, attempted
    exception exit, and various AArch32 mode changes via cpsr_write().
    These places generally already rebuild the hflags, so the only place
    we need an extra rebuild_hflags call is in the illegal-return
    codepath of the AArch64 exception_return helper.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210821195958.41312-2-richard.henderson@linaro.org
    Message-Id: <20210817162118.24319-1-peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    [rth: Added missing returns; set IL bit in syndrome]
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
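    As a plain-C sketch of the syndrome value mentioned in rth's note
    (assuming only the architectural ESR_ELx layout; QEMU's actual
    helper may differ in detail):

        #include <stdint.h>

        /* EC 0x0e is "Illegal Execution state" in the ESR_ELx encoding;
         * the EC field lives at bits [31:26] and the IL bit (bit 25)
         * is set, per the note above. */
        #define EC_ILLEGALSTATE 0x0e
        #define ESR_EC_SHIFT    26
        #define ESR_IL          (1u << 25)

        static inline uint32_t syn_illegalstate(void)
        {
            return (EC_ILLEGALSTATE << ESR_EC_SHIFT) | ESR_IL;
        }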
2021-09-13 | hw/arm/virt: add ITS support in virt GIC | Shashi Mallela | 1 file | -2/+2

    Include creation of the ITS as part of the virt platform GIC
    initialization. The emulated ITS model now co-exists with the KVM
    ITS, and is enabled in the absence of KVM in-kernel IRQ support on
    a platform.

    Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20210910143951.92242-9-shashi.mallela@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-13 | hw/arm/virt: KVM: Probe for KVM_CAP_ARM_VM_IPA_SIZE when creating scratch VM | Marc Zyngier | 1 file | -1/+6

    Although we probe for the IPA limits imposed by KVM (and the
    hardware) when computing the memory map, we still use the old style
    '0' when creating a scratch VM in kvm_arm_create_scratch_host_vcpu().

    On systems that are severely IPA challenged (such as the Apple M1),
    this results in a failure as KVM cannot use the default 40bit that
    '0' represents.

    Instead, probe for the extension and use the reported IPA limit if
    available.

    Cc: Andrew Jones <drjones@redhat.com>
    Cc: Eric Auger <eric.auger@redhat.com>
    Cc: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Reviewed-by: Andrew Jones <drjones@redhat.com>
    Message-id: 20210822144441.1290891-2-maz@kernel.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
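    A minimal sketch of the probe-and-create sequence against the raw
    KVM ioctl interface (illustrative; kvm_fd is an open handle on
    /dev/kvm, and the QEMU-side plumbing is omitted):

        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        /* Ask KVM for the host's maximum IPA size; 0 means the
         * capability is absent and only the legacy default (40 bits
         * on arm64) is available. */
        static int create_scratch_vm(int kvm_fd)
        {
            int ipa_bits = ioctl(kvm_fd, KVM_CHECK_EXTENSION,
                                 KVM_CAP_ARM_VM_IPA_SIZE);
            unsigned long vm_type = 0;

            if (ipa_bits > 0) {
                /* Encode the probed limit into the VM type instead
                 * of passing the old-style '0'. */
                vm_type = KVM_VM_TYPE_ARM_IPA_SIZE(ipa_bits);
            }
            return ioctl(kvm_fd, KVM_CREATE_VM, vm_type);
        }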
2021-09-01 | target-arm: Add support for Fujitsu A64FX | Shuuichirou Ishii | 1 file | -0/+48

    Add a definition for the Fujitsu A64FX processor.

    The A64FX processor does not implement the AArch32 Execution state,
    so there are no associated AArch32 Identification registers. For
    SVE, the A64FX processor supports only the 128-, 256- and 512-bit
    vector lengths.

    The Identification register values are defined based on the FX700,
    and have been tested and confirmed.

    Signed-off-by: Shuuichirou Ishii <ishii.shuuichir@fujitsu.com>
    Reviewed-by: Andrew Jones <drjones@redhat.com>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 | target/arm: Enable MVE in Cortex-M55 | Peter Maydell | 1 file | -5/+2

    We now have a complete MVE emulation, so we can enable it in our
    Cortex-M55 model by setting the ID registers to match those of a
    Cortex-M55 with full MVE support.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 | target/arm: Implement MVE VRINT insns | Peter Maydell | 4 files | -0/+93

    Implement the MVE VRINT insns, which round floating point inputs
    to integer values, leaving them in floating point format.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 | target/arm: Implement MVE VCVT between single and half precision | Peter Maydell | 4 files | -0/+108

    Implement the MVE VCVT instruction which converts between single
    and half precision floating point.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 | target/arm: Implement MVE VCVT with specified rounding mode | Peter Maydell | 4 files | -0/+105

    Implement the MVE VCVT which converts from floating-point to
    integer using a rounding mode specified by the instruction. We
    implement this similarly to the Neon equivalents, by passing the
    required rounding mode as an extra integer parameter to the helper
    functions.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
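    The shape of such a helper, as a sketch against QEMU's softfloat
    API (an assumed shape, not the exact helper the commit adds):
    temporarily override the rounding mode carried in float_status,
    convert, then restore it.

        #include "qemu/osdep.h"
        #include "fpu/softfloat.h"

        /* Sketch: float32 -> int32 with an explicit rounding mode,
         * leaving the ambient mode in *fpst unchanged afterwards. */
        static uint32_t do_vcvt_rm(float32 f, FloatRoundMode rmode,
                                   float_status *fpst)
        {
            FloatRoundMode prev = get_float_rounding_mode(fpst);
            uint32_t r;

            set_float_rounding_mode(rmode, fpst);
            r = float32_to_int32(f, fpst);
            set_float_rounding_mode(prev, fpst);
            return r;
        }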
2021-09-01 | target/arm: Implement MVE VCVT between fp and integer | Peter Maydell | 2 files | -0/+39

    Implement the MVE "VCVT (between floating-point and integer)" insn.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 | target/arm: Implement MVE VCVT between floating and fixed point | Peter Maydell | 4 files | -0/+82

    Implement the MVE VCVT insns which convert between floating and
    fixed point. As with the Neon equivalents, these use essentially
    the same constant encoding as right-shift-by-immediate.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 | target/arm: Implement MVE fp scalar comparisons | Peter Maydell | 4 files | -24/+131

    Implement the MVE fp scalar comparisons VCMP and VPT.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 | target/arm: Implement MVE fp vector comparisons | Peter Maydell | 4 files | -6/+137

    Implement the MVE fp vector comparisons VCMP and VPT.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 | target/arm: Implement MVE FP max/min across vector | Peter Maydell | 4 files | -6/+102

    Implement the MVE VMAXNMV, VMINNMV, VMAXNMAV, VMINNMAV insns. These
    calculate the maximum or minimum of floating point elements across
    a vector, starting with a value in a general purpose register and
    returning the result there.

    The pseudocode silences a possible SNaN in the accumulating result
    on every iteration (by calling FPConvertNaN), but we do it only on
    the input ra, because if none of the inputs to float*_maxnum or
    float*_minnum are SNaNs then the result can't be an SNaN.

    Note that we can't use the float*_maxnuma() etc functions we
    defined earlier for VMAXNMA and VMINNMA, because we mustn't take
    the absolute value of the starting general-purpose register value,
    which could be negative.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
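    A sketch of that reasoning against QEMU's softfloat API (an
    illustrative shape, not the commit's helper): only the initial
    general-purpose value can inject an SNaN, so it is silenced once
    up front.

        #include "qemu/osdep.h"
        #include "fpu/softfloat.h"

        /* Sketch: running maxnum across n elements, starting from ra. */
        static float32 do_maxnmv(float32 ra, const float32 *elts, int n,
                                 float_status *fpst)
        {
            if (float32_is_signaling_nan(ra, fpst)) {
                ra = float32_silence_nan(ra, fpst);
            }
            for (int i = 0; i < n; i++) {
                /* maxnum never returns an SNaN if neither input is one. */
                ra = float32_maxnum(ra, elts[i], fpst);
            }
            return ra;
        }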
2021-09-01 | target/arm: Implement MVE fp-with-scalar VFMA, VFMAS | Peter Maydell | 4 files | -3/+56

    Implement the MVE fp-with-scalar VFMA and VFMAS insns.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 | target/arm: Implement MVE scalar fp insns | Peter Maydell | 4 files | -6/+85

    Implement the MVE scalar floating point insns VADD, VSUB and VMUL.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 | target/arm: Implement MVE VMAXNMA and VMINNMA | Peter Maydell | 4 files | -0/+42

    Implement the MVE VMAXNMA and VMINNMA insns; these are 2-operand,
    but the destination register must be the same as one of the source
    registers.

    We defer the decode of the size in bit 28 to the individual insn
    patterns rather than doing it in the format, because otherwise we
    would have a single insn pattern that overlapped with two groups
    (eg VMAXNMA with the VMULH_S and VMULH_U groups). Having two insn
    patterns per insn seems clearer than a complex multilevel nesting
    of overlapping and non-overlapping groups.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 | target/arm: Implement MVE VCMUL and VCMLA | Peter Maydell | 4 files | -8/+139

    Implement the MVE VCMUL and VCMLA insns.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 | target/arm: Implement MVE VFMA and VFMS | Peter Maydell | 4 files | -0/+48

    Implement the MVE VFMA and VFMS insns.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 | target/arm: Implement MVE VCADD | Peter Maydell | 4 files | -1/+57

    Implement the MVE VCADD insn. Note that here the size bit is the
    opposite sense to the other 2-operand fp insns.

    We don't check for the sz == 1 && Qd == Qm UNPREDICTABLE case,
    because that would mean we can't use the DO_2OP_FP macro in
    translate-mve.c.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 | target/arm: Implement MVE VSUB, VMUL, VABD, VMAXNM, VMINNM | Peter Maydell | 4 files | -0/+42

    Implement more of the simple 2-operand floating-point MVE insns.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 | target/arm: Implement MVE VADD (floating-point) | Peter Maydell | 6 files | -6/+76

    Implement the MVE VADD (floating-point) insn. Handling of this is
    similar to the 2-operand integer insns, except that we must take
    care to only update the floating point exception status if the
    least significant bit of the predicate mask for each element is
    active.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
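    As a sketch of that rule (a simplified shape, not the commit's
    helper: real MVE merges results into the destination per predicated
    byte, which is elided here):

        #include "qemu/osdep.h"
        #include "fpu/softfloat.h"

        /* Sketch: 4 x float32 lanes; each lane owns 4 bits of the
         * predicate mask, and only the lane's least significant mask
         * bit decides whether the lane's fp exception flags are folded
         * back into *fpst. */
        static void do_vadd_f32(float32 *d, const float32 *a,
                                const float32 *b, uint16_t mask,
                                float_status *fpst)
        {
            for (int e = 0; e < 4; e++, mask >>= 4) {
                float_status scratch = *fpst;
                float32 r = float32_add(a[e], b[e], &scratch);

                if (mask & 1) {
                    set_float_exception_flags(
                        get_float_exception_flags(&scratch) |
                        get_float_exception_flags(fpst), fpst);
                }
                if (mask & 0xf) {
                    d[e] = r;  /* simplified: real code merges per byte */
                }
            }
        }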
2021-08-26 | target/arm: Do hflags rebuild in cpsr_write() | Peter Maydell | 2 files | -2/+13

    Currently we rely on all the callsites of cpsr_write() to rebuild
    the cached hflags if they change one of the CPSR bits which we use
    as a TB flag and cache in hflags. This is a bit awkward when we
    want to change the set of CPSR bits that we cache, because it means
    we need to re-audit all the cpsr_write() callsites to see which
    flags they are writing and whether they now need to rebuild the
    hflags.

    Switch instead to making cpsr_write() call arm_rebuild_hflags()
    itself if one of the bits being changed is a cached bit.

    We don't do the rebuild for the CPSRWriteRaw write type, because
    that kind of write is generally doing something special anyway.
    For the CPSRWriteRaw callsites in the KVM code and inbound
    migration we definitely don't want to recalculate the hflags; the
    callsites in boot.c and arm-powerctl.c have to do a rebuild-hflags
    call themselves anyway because of other CPU state changes they
    make.

    This allows us to drop explicit arm_rebuild_hflags() calls in a
    couple of places where the only reason we needed to call it was the
    CPSR write.

    This fixes a bug where we were incorrectly failing to rebuild
    hflags in the code path for a gdbstub write to CPSR, which meant
    that you could make QEMU assert by breaking into a running guest,
    altering the CPSR to change the value of, for example, CPSR.E, and
    then continuing.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210817201843.3829-1-peter.maydell@linaro.org
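    The shape of the change, as a sketch (the exact set of cached bits
    is an assumption here; CPSR_E and CPSR_T are two bits QEMU caches
    in hflags):

        /* Sketch: tail of cpsr_write() after the field updates. Raw
         * writes (KVM sync, inbound migration) skip the rebuild. */
        void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
                        CPSRWriteType write_type)
        {
            /* ... existing CPSR field updates ... */

            if (write_type != CPSRWriteRaw &&
                (mask & (CPSR_E | CPSR_T /* | other cached bits */))) {
                arm_rebuild_hflags(env);
            }
        }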
2021-08-26 | target/arm: Implement HSTR.TJDBX | Peter Maydell | 6 files | -0/+55

    In v7A, the HSTR register has a TJDBX bit which traps NS EL0/EL1
    access to the JOSCR and JMCR trivial Jazelle registers, and also
    BXJ. Implement these traps. In v8A this HSTR bit doesn't exist, so
    don't trap for v8A CPUs.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210816180305.20137-3-peter.maydell@linaro.org
2021-08-26 | target/arm: Implement HSTR.TTEE | Peter Maydell | 2 files | -2/+18

    In v7, the HSTR register has a TTEE bit which allows EL0/EL1
    accesses to the Thumb2EE TEECR and TEEHBR registers to be trapped
    to the hypervisor. Implement these traps.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210816180305.20137-2-peter.maydell@linaro.org
2021-08-26 | target/arm: Avoid assertion trying to use KVM and multiple ASes | Peter Maydell | 1 file | -0/+23

    KVM cannot support multiple address spaces per CPU; if you try to
    create more than one then cpu_address_space_init() will assert.

    In the Arm CPU realize function, detect the configurations which
    would cause us to need more than one AS, and cleanly fail the
    realize rather than blundering on into the assertion.

    This turns this:

        $ qemu-system-aarch64 -enable-kvm -display none -cpu max -machine raspi3b
        qemu-system-aarch64: ../../softmmu/physmem.c:747: cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed.
        Aborted

    into:

        $ qemu-system-aarch64 -enable-kvm -display none -machine raspi3b
        qemu-system-aarch64: Cannot enable KVM when guest CPU has EL3 enabled

    and this:

        $ qemu-system-aarch64 -enable-kvm -display none -machine mps3-an524
        qemu-system-aarch64: ../../softmmu/physmem.c:747: cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed.
        Aborted

    into:

        $ qemu-system-aarch64 -enable-kvm -display none -machine mps3-an524
        qemu-system-aarch64: Cannot enable KVM when using an M-profile guest CPU

    Fixes: https://gitlab.com/qemu-project/qemu/-/issues/528
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Message-id: 20210816135842.25302-3-peter.maydell@linaro.org
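    A sketch of the realize-time check (an assumed shape; the error
    strings are the ones quoted in the transcripts above):

        /* Sketch, inside arm_cpu_realizefn(): refuse configurations
         * that would need more than one address space per CPU when
         * KVM is in use. */
        if (kvm_enabled()) {
            if (cpu->has_el3) {
                error_setg(errp,
                           "Cannot enable KVM when guest CPU has EL3 enabled");
                return;
            }
            if (arm_feature(env, ARM_FEATURE_M)) {
                error_setg(errp,
                           "Cannot enable KVM when using an M-profile guest CPU");
                return;
            }
        }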
2021-08-26 | target/arm/cpu64: Validate sve vector lengths are supported | Andrew Jones | 1 file | -56/+45

    Future CPU types may specify which vector lengths are supported.
    We can apply nearly the same logic to validate those lengths as we
    do for KVM's supported vector lengths. We merge the code where we
    can, but unfortunately can't completely merge it because KVM
    requires all vector lengths, power-of-two or not, smaller than the
    maximum enabled length to also be enabled. The architecture only
    requires all the power-of-two lengths, though, so TCG will only
    enforce that.

    Signed-off-by: Andrew Jones <drjones@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210823160647.34028-5-drjones@redhat.com
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
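    A plain-C illustration of the TCG-side rule (names are ours, not
    the commit's): with vq counting the vector length in 128-bit
    units, every power-of-two vq below the maximum enabled vq must
    also be enabled.

        #include <stdbool.h>
        #include <stdint.h>

        /* Bit n of enabled_vqs represents vq = n + 1. */
        static bool tcg_vls_valid(uint32_t enabled_vqs)
        {
            if (enabled_vqs == 0) {
                return false;
            }
            /* Highest enabled vq = position of the top set bit + 1. */
            uint32_t max_vq = 32 - __builtin_clz(enabled_vqs);

            for (uint32_t vq = 1; vq < max_vq; vq <<= 1) {
                if (!(enabled_vqs & (1u << (vq - 1)))) {
                    return false;  /* required power-of-two length missing */
                }
            }
            return true;
        }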
2021-08-26 | target/arm/cpu64: Replace kvm_supported with sve_vq_supported | Andrew Jones | 1 file | -8/+11

    Now that we have an ARMCPU member sve_vq_supported we no longer
    need the local kvm_supported bitmap for KVM's supported vector
    lengths.

    Signed-off-by: Andrew Jones <drjones@redhat.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210823160647.34028-4-drjones@redhat.com
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-26 | target/arm/kvm64: Ensure sve vls map is completely clear | Andrew Jones | 1 file | -1/+1

    bitmap_clear() only clears the given range. While the given range
    should be sufficient in this case we might as well be 100% sure all
    bits are zeroed by using bitmap_zero().

    Signed-off-by: Andrew Jones <drjones@redhat.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210823160647.34028-3-drjones@redhat.com
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-26 | target/arm/cpu: Introduce sve_vq_supported bitmap | Andrew Jones | 2 files | -0/+6

    Allow CPUs that support SVE to specify which SVE vector lengths
    they support by setting them in this bitmap. Currently only the
    'max' and 'host' CPU types support SVE, and 'host' requires KVM,
    which obtains its supported bitmap from the host. So, we only need
    to initialize the bitmap for 'max' with TCG. And, since 'max'
    should support all SVE vector lengths we simply fill the bitmap.
    Future CPU types may have less trivial maps though.

    Signed-off-by: Andrew Jones <drjones@redhat.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210823160647.34028-2-drjones@redhat.com
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-25 | target/arm: kvm: use RCU_READ_LOCK_GUARD() in kvm_arch_fixup_msi_route() | Hamza Mahfooz | 1 file | -9/+8

    As per commit 5626f8c6d468 ("rcu: Add automatically released
    rcu_read_lock variants"), RCU_READ_LOCK_GUARD() should be used
    instead of rcu_read_{un}lock().

    Signed-off-by: Hamza Mahfooz <someguy@effective-light.com>
    Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
    Message-id: 20210727235201.11491-1-someguy@effective-light.com
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
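    A before/after sketch of the conversion; do_thing() is a
    hypothetical stand-in for the RCU-guarded work:

        #include "qemu/osdep.h"
        #include "qemu/rcu.h"

        extern bool do_thing(void);  /* hypothetical guarded work */

        /* Before: every exit path must unlock by hand. */
        static int lookup_before(void)
        {
            rcu_read_lock();
            if (!do_thing()) {
                rcu_read_unlock();
                return -1;
            }
            rcu_read_unlock();
            return 0;
        }

        /* After: the guard drops the read lock automatically at the
         * end of the enclosing scope, including on early return. */
        static int lookup_after(void)
        {
            RCU_READ_LOCK_GUARD();
            return do_thing() ? 0 : -1;
        }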
2021-08-25 | target/arm: Implement M-profile trapping on division by zero | Peter Maydell | 5 files | -6/+26

    Unlike A-profile, for M-profile the UDIV and SDIV insns can be
    configured to raise an exception on division by zero, using the CCR
    DIV_0_TRP bit.

    Implement support for setting this bit by making the helper
    functions raise the appropriate exception.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210730151636.17254-3-peter.maydell@linaro.org
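    A plain-C model of the helper semantics (illustrative; the two
    extern hooks are hypothetical stand-ins for QEMU's CCR.DIV_0_TRP
    check and exception raise):

        #include <stdint.h>

        extern int  division_by_zero_traps(void);     /* CCR.DIV_0_TRP set? */
        extern void raise_divbyzero_exception(void);  /* does not return */

        static int32_t sdiv_model(int32_t num, int32_t den)
        {
            if (den == 0) {
                if (division_by_zero_traps()) {
                    raise_divbyzero_exception();
                }
                return 0;               /* untrapped quotient is 0 */
            }
            if (num == INT32_MIN && den == -1) {
                return INT32_MIN;       /* the only overflow case wraps */
            }
            return num / den;
        }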
2021-08-25 | target/arm: Re-indent sdiv and udiv helpers | Peter Maydell | 1 file | -6/+9

    We're about to make a code change to the sdiv and udiv helper
    functions, so first fix their indentation and coding style.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210730151636.17254-2-peter.maydell@linaro.org
2021-08-25 | target/arm: Implement MVE interleaving loads/stores | Peter Maydell | 4 files | -0/+495

    Implement the MVE interleaving load/store functions VLD2, VLD4,
    VST2 and VST4.

    VLD2 loads 16 bytes of data from memory and writes to 2 consecutive
    Qregs; VLD4 loads 16 bytes of data from memory and writes to 4
    consecutive Qregs.

    The 'pattern' field in the encoding determines the offset into
    memory which is accessed and also which elements in the Qregs are
    written to. (The intention is that a sequence of four consecutive
    VLD4 with different pattern values performs a complete
    de-interleaving load of 64 bytes into all elements of the 4 Qregs.)
    VST2 and VST4 do the same, but for stores.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
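    A plain-C illustration of the net effect of a full four-beat VLD4
    sequence (the per-pattern element selection is elided; this shows
    only the end result for 32-bit elements):

        #include <stdint.h>
        #include <string.h>

        /* De-interleave 64 bytes of memory, laid out as 4 interleaved
         * structures of 4 x uint32_t, into four 4-element Qregs:
         * q[0] collects member 0 of each structure, q[1] member 1, ... */
        static void vld4_net_effect(uint32_t q[4][4], const uint8_t *mem)
        {
            uint32_t tmp[16];
            memcpy(tmp, mem, 64);
            for (int i = 0; i < 4; i++) {       /* element within Qreg */
                for (int r = 0; r < 4; r++) {   /* destination Qreg */
                    q[r][i] = tmp[i * 4 + r];
                }
            }
        }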
2021-08-25 | target/arm: Implement MVE scatter-gather immediate forms | Peter Maydell | 4 files | -36/+150

    Implement the MVE VLDR/VSTR insns which do scatter-gather using
    base addresses from Qm plus or minus an immediate offset (possibly
    with writeback). Note that writeback is not predicated but it does
    have to honour ECI state, so we have to add an eci_mask check to
    the VSTR_SG macros (the VLDR_SG macros already needed this to be
    able to distinguish "skip beat" from "set predicated element to 0").

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 | target/arm: Implement MVE scatter-gather insns | Peter Maydell | 4 files | -0/+270

    Implement the MVE gather-loads and scatter-stores which form the
    address by adding a base value from a scalar register to an offset
    in each element of a vector.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 | target/arm: Implement MVE VCTP | Peter Maydell | 6 files | -1/+58

    Implement the MVE VCTP insn, which sets the VPR.P0 predicate bits
    so that any element at index Rn or greater is predicated. As with
    VPNOT, this insn itself is predicable and subject to beatwise
    execution.

    The calculation of the mask is the same as is used to determine
    ltpmask in mve_element_mask(), but we precalculate masklen in
    generated code to avoid having to have 4 helpers specialized by
    size.

    We put the decode line in with the low-overhead-loop insns in
    t32.decode because it's logically part of that collection of insn
    patterns, even though it is an MVE only insn.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
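    An illustrative plain-C computation of the resulting predicate
    (names are ours): VPR.P0 has one bit per byte of the 16-byte
    vector, so an element of esize bytes at index i stays active iff
    i < Rn, i.e. the mask is just the low Rn*esize bits. masklen =
    Rn * esize is what gets precomputed at translate time, so one
    helper serves all element sizes.

        #include <stdint.h>

        static uint16_t vctp_p0(uint32_t rn, uint32_t esize_bytes)
        {
            uint32_t masklen = rn * esize_bytes;   /* P0 bits to set */

            return masklen >= 16 ? 0xffff
                                 : (uint16_t)((1u << masklen) - 1);
        }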
2021-08-25 | target/arm: Implement MVE VPNOT | Peter Maydell | 4 files | -0/+38

    Implement the MVE VPNOT insn, which inverts the bits in VPR.P0
    (subject to both predication and to beatwise execution).

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 | target/arm: Implement MVE VMOV to/from 2 general-purpose registers | Peter Maydell | 4 files | -1/+91

    Implement the MVE VMOV forms that move data between 2
    general-purpose registers and 2 32-bit lanes in a vector register.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 | target/arm: Implement MVE VMAXA, VMINA | Peter Maydell | 4 files | -0/+40

    Implement the MVE VMAXA and VMINA insns, which take the absolute
    value of the signed elements in the input vector and then
    accumulate the unsigned max or min into the destination vector.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 | target/arm: Implement MVE VQABS, VQNEG | Peter Maydell | 4 files | -0/+50

    Implement the MVE 1-operand saturating operations VQABS and VQNEG.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 | target/arm: Implement MVE saturating doubling multiply accumulates | Peter Maydell | 4 files | -0/+120

    Implement the MVE saturating doubling multiply accumulate insns
    VQDMLAH, VQRDMLAH, VQDMLASH and VQRDMLASH. These perform a
    multiply, double, add the accumulator shifted by the element size,
    possibly round, saturate to twice the element size, then take the
    high half of the result. The *MLAH insns do vector * scalar +
    vector, and the *MLASH insns do vector * vector + scalar.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
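    A plain-C model of one 16-bit element step, following the sequence
    described above (illustrative, not the commit's helper):

        #include <stdbool.h>
        #include <stdint.h>

        /* multiply, double, add (acc << 16), optionally round,
         * saturate to 32 bits, then return the high half. */
        static int16_t vqdmlah16(int16_t a, int16_t b, int16_t acc,
                                 bool round, bool *sat)
        {
            int64_t r = 2 * (int64_t)a * b + ((int64_t)acc << 16);

            if (round) {
                r += 1 << 15;
            }
            if (r > INT32_MAX) {
                r = INT32_MAX;
                *sat = true;
            } else if (r < INT32_MIN) {
                r = INT32_MIN;
                *sat = true;
            }
            return (int16_t)(r >> 16);
        }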
2021-08-25 | target/arm: Implement MVE VMLA | Peter Maydell | 4 files | -0/+11

    Implement the MVE VMLA insn, which multiplies a vector by a scalar
    and accumulates into another vector.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 | target/arm: Implement MVE VMLADAV and VMLSDAV | Peter Maydell | 4 files | -5/+150

    Implement the MVE VMLADAV and VMLSDAV insns. Like the VMLALDAV and
    VMLSLDAV insns already implemented, these accumulate multiplied
    vector elements; but they accumulate a 32-bit result rather than a
    64-bit one.

    Note that these encodings overlap with what would be RdaHi=0b111
    for VMLALDAV, VMLSLDAV, VRMLALDAVH and VRMLSLDAVH.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 | target/arm: Rename MVEGenDualAccOpFn to MVEGenLongDualAccOpFn | Peter Maydell | 1 file | -8/+8

    The MVEGenDualAccOpFn is a bit misnamed, since it is used for the
    "long dual accumulate" operations that use a 64-bit accumulator.
    Rename it to MVEGenLongDualAccOpFn so we can use the former name
    for the 32-bit accumulator insns.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 | target/arm: Implement MVE narrowing moves | Peter Maydell | 4 files | -0/+132

    Implement the MVE narrowing move insns VMOVN, VQMOVN and VQMOVUN.
    These take a double-width input, narrow it (possibly saturating)
    and store the result to either the top or bottom half of the
    output element.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 | target/arm: Implement MVE VABAV | Peter Maydell | 4 files | -0/+82

    Implement the MVE VABAV insn, which computes absolute differences
    between elements of two vectors and accumulates the result into a
    general purpose register.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 | target/arm: Implement MVE integer min/max across vector | Peter Maydell | 4 files | -2/+150

    Implement the MVE integer min/max across vector insns VMAXV, VMINV,
    VMAXAV and VMINAV, which find the maximum or minimum of the vector
    elements together with a value in a general purpose register, and
    store the result back into that general purpose register.

    These insns overlap with VRMLALDAVH (they use what would be
    RdaHi=0b110).

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>