path: root/target/arm
2021-10-15  target/arm: Drop checks for singlestep_enabled  (Richard Henderson, 2 files, -38/+8)
    GDB single-stepping is now handled generically.
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-13  target/arm: Use cpu_*_mmu instead of helper_*_mmu  (Richard Henderson, 2 files, -47/+11)
    The helper_*_mmu functions were the only thing available when this code was written. This could have been adjusted when we added cpu_*_mmuidx_ra, but now we can most easily use the newest set of interfaces.
    Cc: qemu-arm@nongnu.org
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-13  accel/tcg: Move cpu_atomic decls to exec/cpu_ldst.h  (Richard Henderson, 1 file, -1/+0)
    The previous placement in tcg/tcg.h was not logical.
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-13  target/arm: Use MO_128 for 16 byte atomics  (Richard Henderson, 1 file, -4/+4)
    Cc: qemu-arm@nongnu.org
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05  tcg: Rename TCGMemOpIdx to MemOpIdx  (Richard Henderson, 2 files, -9/+9)
    We're about to move this out of tcg.h, so rename it as we did when moving MemOp.
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05  tcg: Expand MO_SIZE to 3 bits  (Richard Henderson, 1 file, -1/+1)
    We have lacked expressive support for memory sizes larger than 64 bits for a while. Fixing that requires adjustment to several points where we used this for array indexing, and two places that develop -Wswitch warnings after the change.
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-30  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20210930' into staging  (Peter Maydell, 5 files, -291/+307)
    target-arm queue:
     * allwinner-h3: Switch to SMC as PSCI conduit
     * arm: tcg: Adhere to SMCCC 1.3 section 5.2
     * xlnx-zcu102, xlnx-versal-virt: Support BBRAM and eFUSE devices
     * gdbstub related code cleanups
     * Don't put FPEXC and FPSID in org.gnu.gdb.arm.vfp XML
     * Use _init vs _new convention in bus creation function names
     * sabrelite: Connect SPI flash CS line to GPIO3_19

    # gpg: Signature made Thu 30 Sep 2021 16:11:20 BST
    # gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
    # gpg:                issuer "peter.maydell@linaro.org"
    # gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
    # gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
    # gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
    # Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

    * remotes/pmaydell/tags/pull-target-arm-20210930: (22 commits)
      hw/arm: sabrelite: Connect SPI flash CS line to GPIO3_19
      ide: Rename ide_bus_new() to ide_bus_init()
      qbus: Rename qbus_create() to qbus_new()
      qbus: Rename qbus_create_inplace() to qbus_init()
      pci: Rename pci_root_bus_new_inplace() to pci_root_bus_init()
      ipack: Rename ipack_bus_new_inplace() to ipack_bus_init()
      scsi: Replace scsi_bus_new() with scsi_bus_init(), scsi_bus_init_named()
      target/arm: Don't put FPEXC and FPSID in org.gnu.gdb.arm.vfp XML
      target/arm: Move gdbstub related code out of helper.c
      target/arm: Fix coding style issues in gdbstub code in helper.c
      configs: Don't include 32-bit-only GDB XML in aarch64 linux configs
      docs/system/arm: xlnx-versal-virt: BBRAM and eFUSE Usage
      hw/arm: xlnx-zcu102: Add Xilinx eFUSE device
      hw/arm: xlnx-zcu102: Add Xilinx BBRAM device
      hw/arm: xlnx-versal-virt: Add Xilinx eFUSE device
      hw/arm: xlnx-versal-virt: Add Xilinx BBRAM device
      hw/nvram: Introduce Xilinx battery-backed ram
      hw/nvram: Introduce Xilinx ZynqMP eFuse device
      hw/nvram: Introduce Xilinx Versal eFuse device
      hw/nvram: Introduce Xilinx eFuse QOM
      ...
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-30  memory: Name all the memory listeners  (Peter Xu, 1 file, -0/+1)
    Provide a name field for all the memory listeners. It can be used to identify which memory listener is which.
    Signed-off-by: Peter Xu <peterx@redhat.com>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Message-Id: <20210817013553.30584-2-peterx@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-30  target/arm: Don't put FPEXC and FPSID in org.gnu.gdb.arm.vfp XML  (Peter Maydell, 1 file, -16/+40)
    Currently we send VFP XML which includes D0..D15 or D0..D31, plus FPSID, FPSCR and FPEXC. The upstream GDB tolerates this, but its definition of this XML feature does not include FPSID or FPEXC. In particular, for M-profile cores there are no FPSID or FPEXC registers, so advertising those is wrong.
    Move FPSID and FPEXC into their own bit of XML which we only send for A and R profile cores. This brings our definition of the XML org.gnu.gdb.arm.vfp feature into line with GDB's own (at least for non-Neon cores...) and means we don't claim to have FPSID and FPEXC on M-profile.
    (It seems unlikely to me that any gdbstub users really care about being able to look at FPEXC and FPSID; but we've supplied them to gdb for a decade and it's not hard to keep doing so.)
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210921162901.17508-5-peter.maydell@linaro.org
2021-09-30  target/arm: Move gdbstub related code out of helper.c  (Peter Maydell, 4 files, -271/+277)
    Currently helper.c includes some code which is part of the arm target's gdbstub support. This code has a better home: in gdbstub.c and gdbstub64.c. Move it there.
    Because aarch64_fpu_gdb_get_reg() and aarch64_fpu_gdb_set_reg() move into gdbstub64.c, this means that they're now compiled only for TARGET_AARCH64 rather than always. That is the only case when they would ever be used, but it does mean that the ifdef in arm_cpu_register_gdb_regs_for_features() needs to be adjusted to match.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210921162901.17508-4-peter.maydell@linaro.org
2021-09-30  target/arm: Fix coding style issues in gdbstub code in helper.c  (Peter Maydell, 1 file, -7/+16)
    We're going to move this code to a different file; fix the coding style first so checkpatch doesn't complain. This includes deleting the spurious 'break' statements after returns in the vfp_gdb_get_reg() function.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210921162901.17508-3-peter.maydell@linaro.org
2021-09-30  arm: tcg: Adhere to SMCCC 1.3 section 5.2  (Alexander Graf, 1 file, -29/+6)
    The SMCCC 1.3 spec section 5.2 says:

      The Unknown SMC Function Identifier is a sign-extended value of (-1)
      that is returned in the R0, W0 or X0 registers. An implementation must
      return this error code when it receives:
        * An SMC or HVC call with an unknown Function Identifier
        * An SMC or HVC call for a removed Function Identifier
        * An SMC64/HVC64 call from AArch32 state

    To comply with these statements, let's always return -1 when we encounter an unknown HVC or SMC call.
    Signed-off-by: Alexander Graf <agraf@csgraf.de>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
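    As a concrete illustration of the rule above, here is a minimal, self-contained C sketch; it is not the QEMU change itself, and the dispatcher, callback types and the is_known/handle names are invented for the example:

        /* Sketch of SMCCC 1.3 §5.2: any unknown or removed Function Identifier,
         * or an SMC64/HVC64 call made from AArch32, returns sign-extended -1
         * in R0/W0/X0. Callback types and names are illustrative only. */
        #include <stdbool.h>
        #include <stdint.h>

        #define SMCCC_RET_NOT_SUPPORTED ((uint64_t)-1)   /* sign-extended -1 */

        typedef bool (*smccc_is_known_fn)(uint32_t func_id);
        typedef uint64_t (*smccc_handler_fn)(uint32_t func_id);

        uint64_t smccc_dispatch(uint32_t func_id, bool caller_is_aarch32,
                                smccc_is_known_fn is_known, smccc_handler_fn handle)
        {
            /* Bit 30 of the Function Identifier selects the SMC64/HVC64 convention. */
            bool is_64bit_call = (func_id & (1u << 30)) != 0;

            if ((caller_is_aarch32 && is_64bit_call) || !is_known(func_id)) {
                /* Unknown, removed, or 64-bit call from AArch32 state. */
                return SMCCC_RET_NOT_SUPPORTED;
            }
            return handle(func_id);
        }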
2021-09-21  hw/core: Make do_unaligned_access noreturn  (Richard Henderson, 1 file, -1/+1)
    While we may have had some thought of allowing system-mode to return from this hook, we have no guests that require this.
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-21  include/exec: Move cpu_signal_handler declaration  (Richard Henderson, 1 file, -7/+0)
    There is nothing target specific about this. The implementation is host specific, but the declaration is 100% common.
    Reviewed-By: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-21  target/arm: Optimize MVE 1op-immediate insns  (Peter Maydell, 1 file, -5/+21)
    Optimize the MVE 1op-immediate insns (VORR, VBIC, VMOV) to use TCG vector ops when possible.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210913095440.13462-13-peter.maydell@linaro.org
2021-09-21  target/arm: Optimize MVE VSLI and VSRI  (Peter Maydell, 1 file, -2/+2)
    Optimize the MVE shift-and-insert insns by using TCG vector ops when possible.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210913095440.13462-12-peter.maydell@linaro.org
2021-09-21  target/arm: Optimize MVE VSHLL and VMOVL  (Peter Maydell, 1 file, -8/+59)
    Optimize the MVE VSHLL insns by using TCG vector ops when possible. This includes the VMOVL insn, which we handle in mve.decode as "VSHLL with zero shift count".
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210913095440.13462-11-peter.maydell@linaro.org
2021-09-21  target/arm: Optimize MVE VSHL, VSHR immediate forms  (Peter Maydell, 1 file, -20/+63)
    Optimize the MVE VSHL and VSHR immediate forms by using TCG vector ops when possible.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210913095440.13462-10-peter.maydell@linaro.org
2021-09-21  target/arm: Optimize MVE VMVN  (Peter Maydell, 1 file, -1/+1)
    Optimize the MVE VMVN insn by using TCG vector ops when possible.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210913095440.13462-9-peter.maydell@linaro.org
2021-09-21  target/arm: Optimize MVE VDUP  (Peter Maydell, 1 file, -4/+8)
    Optimize the MVE VDUP insns by using TCG vector ops when possible.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210913095440.13462-8-peter.maydell@linaro.org
2021-09-21  target/arm: Optimize MVE VNEG, VABS  (Peter Maydell, 1 file, -10/+22)
    Optimize the MVE VNEG and VABS insns by using TCG vector ops when possible.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210913095440.13462-7-peter.maydell@linaro.org
2021-09-21  target/arm: Optimize MVE arithmetic ops  (Peter Maydell, 1 file, -9/+11)
    Optimize MVE arithmetic ops when we have a TCG vector operation we can use.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210913095440.13462-6-peter.maydell@linaro.org
2021-09-21  target/arm: Optimize MVE logic ops  (Peter Maydell, 1 file, -15/+36)
    When not predicating, implement the MVE bitwise logical insns directly using TCG vector operations.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210913095440.13462-5-peter.maydell@linaro.org
2021-09-21  target/arm: Add TB flag for "MVE insns not predicated"  (Peter Maydell, 7 files, -9/+92)
    Our current codegen for MVE always calls out to helper functions, because some byte lanes might be predicated. The common case is that in fact there is no predication active and all lanes should be updated together, so we can produce better code by detecting that and using the TCG generic vector infrastructure.
    Add a TB flag that is set when we can guarantee that there is no active MVE predication, and a bool in the DisasContext. Subsequent patches will use this flag to generate improved code for some instructions.
    In most cases when the predication state changes we simply end the TB after that instruction. For the code called from vfp_access_check() that handles lazy state preservation and creating a new FP context, we can usually avoid having to try to end the TB because luckily the new value of the flag following the register changes in those sequences doesn't depend on any runtime decisions. We do have to end the TB if the guest has enabled lazy FP state preservation but not automatic state preservation, but this is an odd corner case that is not going to be common in real-world code.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210913095440.13462-4-peter.maydell@linaro.org
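    The fast-path/slow-path split described above can be illustrated with a self-contained C analogue (not the actual translate-mve.c code; the register struct, mask layout and function names are invented, and a whole-vector loop stands in for a TCG generic vector op):

        #include <stdbool.h>
        #include <stdint.h>

        typedef struct { uint8_t b[16]; } QRegBytes;   /* one 128-bit MVE Q register */

        /* Slow path: honour the per-byte-lane predicate, as the helpers must. */
        static void vand_predicated(QRegBytes *d, const QRegBytes *n,
                                    const QRegBytes *m, uint16_t lane_mask)
        {
            for (int i = 0; i < 16; i++) {
                if (lane_mask & (1u << i)) {
                    d->b[i] = n->b[i] & m->b[i];
                }
            }
        }

        /* Fast path: all lanes active, so the whole vector is processed at once
         * (this is what the generic vector infrastructure buys in the real code). */
        static void vand_all_lanes(QRegBytes *d, const QRegBytes *n, const QRegBytes *m)
        {
            for (int i = 0; i < 16; i++) {
                d->b[i] = n->b[i] & m->b[i];
            }
        }

        static void emit_vand(QRegBytes *d, const QRegBytes *n, const QRegBytes *m,
                              bool mve_no_pred, uint16_t lane_mask)
        {
            if (mve_no_pred) {              /* the new "no active predication" flag */
                vand_all_lanes(d, n, m);
            } else {
                vand_predicated(d, n, m, lane_mask);
            }
        }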
2021-09-21  target/arm: Enforce that FPDSCR.LTPSIZE is 4 on inbound migration  (Peter Maydell, 1 file, -0/+13)
    Architecturally, for an M-profile CPU with the LOB feature the LTPSIZE field in FPDSCR is always constant 4. QEMU's implementation enforces this everywhere, except that we don't check that it is true in incoming migration data.
    We're going to add code in gen_update_fp_context() which relies on the "always 4" property. Since this is TCG-only, we don't actually need to be robust to bogus incoming migration data, and the effect of it being wrong would be wrong code generation rather than a QEMU crash; but if it did ever happen somehow it would be very difficult to track down the cause. Add a check so that we fail the inbound migration if the FPDSCR.LTPSIZE value is incorrect.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210913095440.13462-3-peter.maydell@linaro.org
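    The shape of such a check is a vmstate-style post_load validation: return 0 to accept the incoming state, negative to fail the migration. A self-contained sketch follows (not the actual machine.c code; the bit position of LTPSIZE and the helper names are assumptions for the example):

        #include <stdint.h>

        /* Assumed here: LTPSIZE occupies bits [18:16] of FPDSCR; the real
         * field definition lives in QEMU's headers. */
        static uint32_t extract_bits(uint32_t value, int start, int length)
        {
            return (value >> start) & ((1u << length) - 1);
        }

        /* Modelled on a post_load hook: 0 accepts the state, -1 rejects it. */
        static int fpdscr_post_load_check(uint32_t fpdscr, int has_lob_feature)
        {
            if (has_lob_feature && extract_bits(fpdscr, 16, 3) != 4) {
                return -1;   /* bogus incoming data: fail the inbound migration */
            }
            return 0;
        }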
2021-09-21  target/arm: Avoid goto_tb if we're trying to exit to the main loop  (Peter Maydell, 1 file, -1/+33)
    Currently gen_jmp_tb() assumes that if it is called then the jump it is handling is the only reason that we might be trying to end the TB, so it will use goto_tb if it can. This is usually the case: mostly "we did something that means we must end the TB" happens on a non-branch instruction. However, there are cases where we decide early in handling an instruction that we need to end the TB and return to the main loop, and then the insn is a complex one that involves gen_jmp_tb(). For instance, for M-profile FP instructions, in gen_preserve_fp_state() which is called from vfp_access_check() we want to force an exit to the main loop if lazy state preservation is active and we are in icount mode.
    Make gen_jmp_tb() look at the current value of is_jmp, and only use goto_tb if the previous is_jmp was DISAS_NEXT or DISAS_TOO_MANY.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210913095440.13462-2-peter.maydell@linaro.org
2021-09-21  hvf: arm: Add rudimentary PMC support  (Alexander Graf, 1 file, -0/+179)
    We can expose cycle counters on the PMU easily. To be as compatible as possible, let's do so, but make sure we don't expose any other architectural counters that we cannot model yet. This allows OSs to work that require PMU support.
    Signed-off-by: Alexander Graf <agraf@csgraf.de>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20210916155404.86958-10-agraf@csgraf.de
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-21  arm: Add Hypervisor.framework build target  (Alexander Graf, 2 files, -0/+5)
    Now that we have all logic in place that we need to handle Hypervisor.framework on Apple Silicon systems, let's add CONFIG_HVF for aarch64 as well so that we can build it.
    Signed-off-by: Alexander Graf <agraf@csgraf.de>
    Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
    Tested-by: Roman Bolshakov <r.bolshakov@yadro.com> (x86 only)
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Sergio Lopez <slp@redhat.com>
    Message-id: 20210916155404.86958-9-agraf@csgraf.de
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-21  hvf: arm: Implement PSCI handling  (Alexander Graf, 3 files, -7/+139)
    We need to handle PSCI calls. Most of the TCG code works for us, but we can simplify it to only handle aa64 mode and we need to handle SUSPEND differently. This patch takes the TCG code as template and duplicates it in HVF.
    To tell the guest that we support PSCI 0.2 now, update the check in arm_cpu_initfn() as well.
    Signed-off-by: Alexander Graf <agraf@csgraf.de>
    Reviewed-by: Sergio Lopez <slp@redhat.com>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20210916155404.86958-8-agraf@csgraf.de
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-21  hvf: arm: Implement -cpu host  (Peter Maydell, 5 files, -6/+124)
    Now that we have working system register sync, we push more target CPU properties into the virtual machine. That might be useful in some situations, but is not the typical case that users want. So let's add a -cpu host option that allows them to explicitly pass all CPU capabilities of their host CPU into the guest.
    Signed-off-by: Alexander Graf <agraf@csgraf.de>
    Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
    Reviewed-by: Sergio Lopez <slp@redhat.com>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20210916155404.86958-7-agraf@csgraf.de
    [PMM: drop unnecessary #include line from .h file]
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-21  arm/hvf: Add a WFI handler  (Peter Collingbourne, 1 file, -0/+79)
    Sleep on WFI until the VTIMER is due but allow ourselves to be woken up on IPI. In this implementation IPI is blocked on the CPU thread at startup and pselect() is used to atomically unblock the signal and begin sleeping. The signal is sent unconditionally so there's no need to worry about races between actually sleeping and the "we think we're sleeping" state. It may lead to an extra wakeup but that's better than missing it entirely.
    Signed-off-by: Peter Collingbourne <pcc@google.com>
    Signed-off-by: Alexander Graf <agraf@csgraf.de>
    Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
    Reviewed-by: Sergio Lopez <slp@redhat.com>
    Message-id: 20210916155404.86958-6-agraf@csgraf.de
    [agraf: Remove unused 'set' variable, always advance PC on WFX trap, support vm stop / continue operations and cntv offsets]
    Signed-off-by: Alexander Graf <agraf@csgraf.de>
    Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
    Reviewed-by: Sergio Lopez <slp@redhat.com>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
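    The block-then-pselect() pattern described above can be shown with plain POSIX calls. This is a self-contained sketch of the technique, not the hvf code; the choice of SIGUSR1 as the IPI signal and the function name are assumptions, and a handler for the signal is assumed to be installed elsewhere:

        #include <pthread.h>
        #include <signal.h>
        #include <stddef.h>
        #include <sys/select.h>
        #include <time.h>

        /* Sleep until the timer deadline, or earlier if the IPI signal arrives.
         * The signal stays blocked on this thread except inside pselect(), which
         * unblocks it atomically with starting the wait, so an IPI sent just
         * before the call still interrupts it and cannot be lost in a window
         * between "decide to sleep" and "sleep". */
        static void wait_for_timer_or_ipi(const struct timespec *until_vtimer_due)
        {
            sigset_t block_ipi, wait_mask;

            /* Done once at CPU-thread startup in the description above. */
            sigemptyset(&block_ipi);
            sigaddset(&block_ipi, SIGUSR1);
            pthread_sigmask(SIG_BLOCK, &block_ipi, &wait_mask);

            /* wait_mask is the previous mask with SIGUSR1 guaranteed unblocked. */
            sigdelset(&wait_mask, SIGUSR1);
            pselect(0, NULL, NULL, NULL, until_vtimer_due, &wait_mask);
            /* On return the original (blocking) mask is restored automatically. */
        }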
2021-09-20  hvf: Add Apple Silicon support  (Alexander Graf, 2 files, -0/+804)
    With Apple Silicon available to the masses, it's a good time to add support for driving its virtualization extensions from QEMU.
    This patch adds all necessary architecture specific code to get basic VMs working, including save/restore.
    Known limitations:
     - WFI handling is missing (follows in later patch)
     - No watchpoint/breakpoint support
    Signed-off-by: Alexander Graf <agraf@csgraf.de>
    Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
    Reviewed-by: Sergio Lopez <slp@redhat.com>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20210916155404.86958-5-agraf@csgraf.de
    [PMM: added missing #include]
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-20  arm: Move PMC register definitions to internals.h  (Alexander Graf, 2 files, -44/+44)
    We will need PMC register definitions in accel specific code later. Move all constant definitions to common arm headers so we can reuse them.
    Signed-off-by: Alexander Graf <agraf@csgraf.de>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20210916155404.86958-2-agraf@csgraf.de
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-20  target/arm: Consolidate ifdef blocks in reset  (Peter Maydell, 1 file, -12/+10)
    Move an ifndef CONFIG_USER_ONLY code block up in arm_cpu_reset() so it can be merged with another earlier one.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210914120725.24992-4-peter.maydell@linaro.org
2021-09-20  target/arm: Always clear exclusive monitor on reset  (Peter Maydell, 1 file, -3/+3)
    There's no particular reason why the exclusive monitor should only be cleared on reset in system emulation mode. It doesn't hurt if it isn't cleared in user mode, but we might as well reduce the amount of code we have that's inside an ifdef.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210914120725.24992-3-peter.maydell@linaro.org
2021-09-20  target/arm: Don't skip M-profile reset entirely in user mode  (Peter Maydell, 1 file, -0/+19)
    Currently all of the M-profile specific code in arm_cpu_reset() is inside a !defined(CONFIG_USER_ONLY) ifdef block. This is unintentional: it happened because originally the only M-profile-specific handling was the setup of the initial SP and PC from the vector table, which is system-emulation only. But then we added a lot of other M-profile setup to the same "if (ARM_FEATURE_M)" code block without noticing that it was all inside a not-user-mode ifdef.
    This has generally been harmless, but with the addition of v8.1M low-overhead-loop support we ran into a problem: the reset of FPSCR.LTPSIZE to 4 was only being done for system emulation mode, so if a user-mode guest tried to execute the LE instruction it would incorrectly take a UsageFault.
    Adjust the ifdefs so only the really system-emulation specific parts are covered. Because this means we now run some reset code that sets up initial values in the FPCCR and similar FPU related registers, explicitly set up the registers controlling FPU context handling in user-emulation mode so that the FPU works by design and not by chance.
    Resolves: https://gitlab.com/qemu-project/qemu/-/issues/613
    Cc: qemu-stable@nongnu.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210914120725.24992-2-peter.maydell@linaro.org
2021-09-14  target/arm: Restrict cpu_exec_interrupt() handler to sysemu  (Philippe Mathieu-Daudé, 3 files, -7/+9)
    Restrict cpu_exec_interrupt() and its callees to sysemu.
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Reviewed-by: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20210911165434.531552-8-f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14  accel/tcg: Add DisasContextBase argument to translator_ld*  (Ilya Leoshkevich, 3 files, -11/+12)
    Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
    [rth: Split out of a larger patch.]
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-13  target/arm: Merge disas_a64_insn into aarch64_tr_translate_insn  (Richard Henderson, 1 file, -115/+109)
    It is confusing to have different exits from translation for various conditions in separate functions. Merge disas_a64_insn into its only caller. Standardize on the "s" name for the DisasContext, as the code from disas_a64_insn had more instances.
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210821195958.41312-3-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-13  target/arm: Take an exception if PSTATE.IL is set  (Peter Maydell, 7 files, -0/+49)
    In v8A, the PSTATE.IL bit is set for various kinds of illegal exception return or mode-change attempts. We already set PSTATE.IL (or its AArch32 equivalent CPSR.IL) in all those cases, but we weren't implementing the part of the behaviour where attempting to execute an instruction with PSTATE.IL takes an immediate exception with an appropriate syndrome value.
    Add a new TB flags bit tracking PSTATE.IL/CPSR.IL, and generate code to take an exception instead of whatever the instruction would have been.
    PSTATE.IL and CPSR.IL change only on exception entry, attempted exception exit, and various AArch32 mode changes via cpsr_write(). These places generally already rebuild the hflags, so the only place we need an extra rebuild_hflags call is in the illegal-return codepath of the AArch64 exception_return helper.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20210821195958.41312-2-richard.henderson@linaro.org
    Message-Id: <20210817162118.24319-1-peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    [rth: Added missing returns; set IL bit in syndrome]
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
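    The decode-time check this introduces has a simple shape: if the cached IL state is set, emit an exception with the "Illegal Execution state" syndrome instead of translating the instruction. A self-contained sketch, not the actual translate.c code (the function names and the raise_exception callback are invented for the example):

        #include <stdbool.h>
        #include <stdint.h>

        #define PSTATE_IL (1u << 20)        /* IL is bit 20 of PSTATE/CPSR */

        /* ESR-style syndrome: EC 0x0e is "Illegal Execution state"; bit 25 is
         * the syndrome IL (32-bit instruction length) bit. */
        static uint32_t syn_illegalstate(void)
        {
            return (0x0eu << 26) | (1u << 25);
        }

        static bool gen_check_illegal_state(uint32_t cached_pstate,
                                            void (*raise_exception)(uint32_t syndrome))
        {
            if (cached_pstate & PSTATE_IL) {
                raise_exception(syn_illegalstate());
                return true;                /* the instruction is not executed */
            }
            return false;                   /* proceed with normal translation */
        }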
2021-09-13  hw/arm/virt: add ITS support in virt GIC  (Shashi Mallela, 1 file, -2/+2)
    Included creation of ITS as part of virt platform GIC initialization. This emulated ITS model now co-exists with the KVM ITS and is enabled in the absence of KVM irq kernel support on a platform.
    Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20210910143951.92242-9-shashi.mallela@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-13  hw/arm/virt: KVM: Probe for KVM_CAP_ARM_VM_IPA_SIZE when creating scratch VM  (Marc Zyngier, 1 file, -1/+6)
    Although we probe for the IPA limits imposed by KVM (and the hardware) when computing the memory map, we still use the old style '0' when creating a scratch VM in kvm_arm_create_scratch_host_vcpu(). On systems that are severely IPA challenged (such as the Apple M1), this results in a failure as KVM cannot use the default 40-bit IPA that '0' represents.
    Instead, probe for the extension and use the reported IPA limit if available.
    Cc: Andrew Jones <drjones@redhat.com>
    Cc: Eric Auger <eric.auger@redhat.com>
    Cc: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Reviewed-by: Andrew Jones <drjones@redhat.com>
    Message-id: 20210822144441.1290891-2-maz@kernel.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
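    Outside of QEMU's wrappers, the probe-then-create sequence looks roughly like the following self-contained sketch against the KVM ioctl interface (illustrative only, not kvm_arm_create_scratch_host_vcpu(); error handling is minimal):

        #include <fcntl.h>
        #include <linux/kvm.h>
        #include <stdio.h>
        #include <sys/ioctl.h>

        /* Ask KVM for the maximum supported IPA size; if the capability is
         * reported, create the VM with that limit encoded in the machine type
         * instead of the legacy type 0 (which implies a 40-bit IPA). */
        int create_vm_with_probed_ipa(void)
        {
            int kvm_fd = open("/dev/kvm", O_RDWR);
            if (kvm_fd < 0) {
                return -1;
            }

            int max_ipa = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_VM_IPA_SIZE);
            unsigned long vm_type = max_ipa > 0 ? KVM_VM_TYPE_ARM_IPA_SIZE(max_ipa) : 0;

            int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, vm_type);
            if (vm_fd < 0) {
                fprintf(stderr, "KVM_CREATE_VM failed\n");
            }
            return vm_fd;
        }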
2021-09-01  target-arm: Add support for Fujitsu A64FX  (Shuuichirou Ishii, 1 file, -0/+48)
    Add a definition for the Fujitsu A64FX processor. The A64FX processor does not implement the AArch32 Execution state, so there are no associated AArch32 Identification registers. For SVE, the A64FX processor supports only 128-, 256- and 512-bit vector lengths.
    The Identification register values are defined based on the FX700, and have been tested and confirmed.
    Signed-off-by: Shuuichirou Ishii <ishii.shuuichir@fujitsu.com>
    Reviewed-by: Andrew Jones <drjones@redhat.com>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01  target/arm: Enable MVE in Cortex-M55  (Peter Maydell, 1 file, -5/+2)
    We now have a complete MVE emulation, so we can enable it in our Cortex-M55 model by setting the ID registers to match those of a Cortex-M55 with full MVE support.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01  target/arm: Implement MVE VRINT insns  (Peter Maydell, 4 files, -0/+93)
    Implement the MVE VRINT insns, which round floating point inputs to integer values, leaving them in floating point format.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01  target/arm: Implement MVE VCVT between single and half precision  (Peter Maydell, 4 files, -0/+108)
    Implement the MVE VCVT instruction which converts between single and half precision floating point.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01  target/arm: Implement MVE VCVT with specified rounding mode  (Peter Maydell, 4 files, -0/+105)
    Implement the MVE VCVT which converts from floating-point to integer using a rounding mode specified by the instruction. We implement this similarly to the Neon equivalents, by passing the required rounding mode as an extra integer parameter to the helper functions.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
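    The "rounding mode as an extra helper parameter" idea can be illustrated with a self-contained C analogue using the standard fenv interface rather than QEMU's softfloat float_status; the function name and the save/set/restore discipline are illustrative, not the actual helper:

        #include <fenv.h>
        #include <math.h>
        #include <stdint.h>

        /* Convert with the caller-specified rounding mode (e.g. FE_TONEAREST,
         * FE_TOWARDZERO), then restore the previous mode so other operations
         * are unaffected. */
        static int32_t float_to_int_with_rmode(float value, int rounding_mode)
        {
            int saved = fegetround();

            fesetround(rounding_mode);
            int32_t result = (int32_t)lrintf(value);   /* rounds per current mode */
            fesetround(saved);

            return result;
        }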
2021-09-01  target/arm: Implement MVE VCVT between fp and integer  (Peter Maydell, 2 files, -0/+39)
    Implement the MVE "VCVT (between floating-point and integer)" insn.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01  target/arm: Implement MVE VCVT between floating and fixed point  (Peter Maydell, 4 files, -0/+82)
    Implement the MVE VCVT insns which convert between floating and fixed point. As with the Neon equivalents, these use essentially the same constant encoding as right-shift-by-immediate.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01  target/arm: Implement MVE fp scalar comparisons  (Peter Maydell, 4 files, -24/+131)
    Implement the MVE fp scalar comparisons VCMP and VPT.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>