path: root/target/arm/tcg
2024-04-02  target/arm: take HSTR traps of cp15 accesses to EL2, not EL1  (Peter Maydell; 1 file, -1/+1)

The HSTR_EL2 register allows the hypervisor to trap AArch32 EL1 and EL0 accesses to cp15 registers. We incorrectly implemented this so that when we detect the need for an HSTR trap at code generation time, the access traps to EL1. (The check in access_check_cp_reg(), which we do at runtime to catch traps from EL0, correctly routes them to EL2.) Use the correct target EL when generating the code to take the trap.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2226
Fixes: 049edada5e93df ("target/arm: Make HSTR_EL2 traps take priority over UNDEF-at-EL1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240325133116.2075362-1-peter.maydell@linaro.org
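A hedged sketch of the fix's shape (the condition name is invented; gen_exception_insn_el() is the translator's existing helper for raising an exception at a chosen EL):

    /* In the translate-time cp15 access check: */
    if (hstr_trap_needed) {                  /* illustrative condition */
        /* Before, the trap went to the default target, EL1; route it
         * explicitly to EL2 instead. */
        gen_exception_insn_el(s, 0, EXCP_UDEF, syndrome, 2);
        return;
    }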
2024-03-08  target/arm: Move v7m-related code from cpu32.c into a separate file  (Thomas Huth; 3 files, -261/+293)

Move the code to a separate file so that we do not have to compile it anymore if CONFIG_ARM_V7M is not set.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-id: 20240308141051.536599-2-thuth@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-03-07  target/arm: Fix 32-bit SMOPA  (Richard Henderson; 1 file, -33/+46)

While the 8-bit input elements are sequential in the input vector, the 32-bit output elements are not sequential in the output matrix. Do not attempt to compute two 32-bit outputs at the same time.

Cc: qemu-stable@nongnu.org
Fixes: 23a5e3859f5 ("target/arm: Implement SME integer outer product")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2083
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240305163931.242795-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
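A standalone model of the indexing property (illustrative C, not QEMU's helper): each 32-bit accumulator is computed and stored on its own, because adjacent output elements are not adjacent in storage the way the 8-bit inputs are.

    #include <stdint.h>

    /* Toy SMOPA-style outer product: za[r][c] accumulates four
     * int8 x int8 products.  Fusing two 32-bit outputs into one wider
     * store is exactly what the fixed code wrongly did. */
    void smopa_model(int32_t za[4][4], const int8_t zn[16], const int8_t zm[16])
    {
        for (int r = 0; r < 4; r++) {
            for (int c = 0; c < 4; c++) {
                int32_t sum = za[r][c];
                for (int k = 0; k < 4; k++) {
                    sum += zn[r * 4 + k] * zm[c * 4 + k];
                }
                za[r][c] = sum;    /* one 32-bit output per step */
            }
        }
    }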
2024-03-07  target/arm: Enable FEAT_ECV for 'max' CPU  (Peter Maydell; 1 file, -0/+1)

Enable all FEAT_ECV features on the 'max' CPU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-9-peter.maydell@linaro.org
2024-03-05  target/arm: Do memory type alignment check when translation disabled  (Richard Henderson; 1 file, -2/+32)

If translation is disabled, the default memory type is Device, which requires alignment checking. This check is best done early, via the MemOp given to the TCG memory operation.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reported-by: Idan Horowitz <idan.horowitz@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301204110.656742-6-richard.henderson@linaro.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1204
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
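The shape of the check as a hedged sketch (regime_translation_enabled() and MO_ALIGN are existing QEMU names; the placement here is simplified):

    /* MMU off => memory type is Device => the access must be aligned.
     * Folding MO_ALIGN into the MemOp makes the TCG load/store perform
     * the check itself, instead of a separate later test. */
    if (!regime_translation_enabled(env, mmu_idx)) {
        memop |= MO_ALIGN;
    }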
2024-03-05  target/arm: Support 32-byte alignment in pow2_align  (Richard Henderson; 1 file, -7/+1)

Now that we have removed TARGET_PAGE_BITS_MIN-6 from TLB_FLAGS_MASK, we can test for 32-byte alignment.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301204110.656742-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-02-15  target/arm: Allow access to SPSR_hyp from hyp mode  (Peter Maydell; 2 files, -19/+43)

Architecturally, the AArch32 MSR/MRS to/from banked register instructions are UNPREDICTABLE for attempts to access a banked register that the guest could access in a more direct way (e.g. using this insn to access r8_fiq when already in FIQ mode). QEMU has chosen to UNDEF on all of these.

However, for the case of accessing SPSR_hyp from hyp mode, it turns out that real hardware permits this, with the same effect as if the guest had directly written to SPSR. Further, there is some guest code out there that assumes it can do this, because it happens to work on hardware: an example Cortex-R52 startup code fragment uses this, and it got copied into various other places, including Zephyr. Zephyr was fixed to not use this:

    https://github.com/zephyrproject-rtos/zephyr/issues/47330

but other examples are still out there, like the selftest binary for the MPS3-AN536.

For convenience of being able to run guest code, permit this UNPREDICTABLE access instead of UNDEFing it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240206132931.38376-5-peter.maydell@linaro.org
2024-02-15  target/arm: Add Cortex-R52 IMPDEF sysregs  (Peter Maydell; 1 file, -0/+108)

Add the Cortex-R52 IMPDEF sysregs, by defining them here and also by enabling the AUXCR feature which defines the ACTLR and HACTLR registers. As is our usual practice, we make these simple reads-as-zero stubs for now.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240206132931.38376-4-peter.maydell@linaro.org
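A minimal sketch of such a stub in QEMU's cpreg style (the register name and encoding below are invented for illustration; ARM_CP_CONST with a zero resetvalue is the existing idiom for a reads-as-zero, writes-ignored register):

    static const ARMCPRegInfo example_impdef_reginfo[] = {
        { .name = "IMP_EXAMPLE", .cp = 15,
          .opc1 = 0, .crn = 15, .crm = 0, .opc2 = 0,
          .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
    };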
2024-02-15  target/arm: The Cortex-R52 has a read-only CBAR  (Peter Maydell; 1 file, -0/+1)

The Cortex-R52 implements the Configuration Base Address Register (CBAR) as a read-only register. Add ARM_FEATURE_CBAR_RO to this CPU type, so that our implementation provides the register and the associated qdev property.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240206132931.38376-3-peter.maydell@linaro.org
2024-02-15  target/arm: Fix SVE/SME gross MTE suppression checks  (Richard Henderson; 2 files, -10/+10)

The TBI and TCMA bits are located within mtedesc, not desc.

Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-02-15  target/arm: Handle mte in do_ldrq, do_ldro  (Richard Henderson; 1 file, -2/+13)

These functions "use the standard load helpers", but fail to clean_data_tbi or populate mtedesc.

Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-02-15  target/arm: Split out make_svemte_desc  (Richard Henderson; 3 files, -33/+31)

Share the code that creates mtedesc and embeds it within simd_desc.

Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-02-15  target/arm: Adjust and validate mtedesc sizem1  (Richard Henderson; 1 file, -3/+4)

When we added SVE_MTEDESC_SHIFT, we effectively limited the maximum size of MTEDESC. Adjust SIZEM1 to consume the remaining bits (32 - 10 - 5 - 12 == 5). Assert that the data to be stored fits within the field (expecting 8 * 4 - 1 == 31, exact fit).

Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
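The bit budget written out as a self-contained check (the macro names are illustrative stand-ins for the QEMU definitions, and the interpretation of the middle terms is an assumption from the arithmetic above):

    #define DESC_BITS          32  /* total bits in the descriptor word */
    #define SVE_MTEDESC_SHIFT  10  /* bits below the embedded MTEDESC */
    #define SIMD_DESC_BITS      5  /* illustrative: simd_desc() bookkeeping */
    #define OTHER_FIELD_BITS   12  /* MTEDESC fields packed below SIZEM1 */
    #define SIZEM1_BITS \
        (DESC_BITS - SVE_MTEDESC_SHIFT - SIMD_DESC_BITS - OTHER_FIELD_BITS)

    _Static_assert(SIZEM1_BITS == 5, "five bits remain for SIZEM1");
    _Static_assert(8 * 4 - 1 == (1 << SIZEM1_BITS) - 1, "31 is an exact fit");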
2024-02-15  target/arm: Fix nregs computation in do_{ld,st}_zpa  (Richard Henderson; 1 file, -8/+8)

The field is encoded as [0-3], which is convenient for indexing our array of function pointers, but the true value is [1-4]. Adjust before calling do_mem_zpa.

Add an assert, and move the comment re passing ZT to the helper back next to the relevant code.

Cc: qemu-stable@nongnu.org
Fixes: 206adacfb8d ("target/arm: Add mte helpers for sve scalar + int loads")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
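The bias adjustment in miniature (field extraction details are illustrative):

    /* The encoding stores nregs - 1, so 2 bits cover 1..4 registers. */
    unsigned nregs = (encoded_field & 3) + 1;   /* encoded 0..3 -> actual 1..4 */
    assert(nregs >= 1 && nregs <= 4);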
2024-02-03  target/arm: Split out arm_env_mmu_index  (Richard Henderson; 4 files, -16/+16)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  include/qemu: Add TCGCPUOps typedef to typedefs.h  (Richard Henderson; 1 file, -1/+1)

QEMU coding style recommends using structure typedefs.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  target: Use vaddr in gen_intermediate_code  (Anton Johansson; 1 file, -1/+1)

Make the gen_intermediate_code() signature target-agnostic so the function can be called from accel/tcg/translate-all.c without target specifics.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20240119144024.14289-9-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-26  target/arm: Fix A64 scalar SQSHRN and SQRSHRN  (Peter Maydell; 1 file, -1/+1)

In commit 1b7bc9b5c8bf374dd we changed handle_vec_simd_sqshrn() so that instead of starting with a 0 value and depositing in each new element from the narrowing operation, it instead started with the raw result of the narrowing operation of the first element.

This is fine in the vector case, because the deposit operations for the second and subsequent elements will always overwrite any higher bits that might have been in the first element's result value in tcg_rd. However, in the scalar case we only go through this loop once. The effect is that for a signed narrowing operation, if the result is negative then we will now return a value where the bits above the first element are incorrectly 1 (because the narrowfn returns a sign-extended result, not one that is truncated to the element size).

Fix this by using an extract operation to get exactly the correct bits of the output of the narrowfn for the first element, instead of a plain move.

Cc: qemu-stable@nongnu.org
Fixes: 1b7bc9b5c8bf374dd3 ("target/arm: Avoid tcg_const_ptr in handle_vec_simd_sqshrn")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2089
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240123153416.877308-1-peter.maydell@linaro.org
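The shape of the fix, hedged (variable names are illustrative; tcg_gen_extract_i64() is the TCG op for taking an exact bit-field):

    /* Before: a plain move kept the narrowfn's sign-extended high bits. */
    /* tcg_gen_mov_i64(tcg_rd, tcg_narrowed); */

    /* After: keep exactly the element's low 'esize' bits, zeroing the rest. */
    tcg_gen_extract_i64(tcg_rd, tcg_narrowed, 0, esize);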
2024-01-26  target/arm: Expose arm_cpu_mp_affinity() in 'multiprocessing.h' header  (Philippe Mathieu-Daudé; 1 file, -0/+1)

Declare the arm_cpu_mp_affinity() prototype in the new "target/arm/multiprocessing.h" header so units in hw/arm/ can use it without having to include the huge target-specific "cpu.h". The list of files to include the new header was generated using:

    $ git grep -lw arm_cpu_mp_affinity

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240118200643.29037-11-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-01-26  target/arm: Create arm_cpu_mp_affinity  (Richard Henderson; 1 file, -1/+1)

Add a wrapper to return the MP affinity bits from the CPU.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240118200643.29037-10-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
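The wrapper itself is deliberately trivial; a sketch consistent with the description above (mp_affinity is the existing ARMCPU field):

    uint64_t arm_cpu_mp_affinity(ARMCPU *cpu)
    {
        return cpu->mp_affinity;
    }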
2024-01-26  target/arm: Fix VNCR fault detection logic  (Peter Maydell; 1 file, -1/+1)

In arm_deliver_fault() we check for whether the fault is caused by a data abort due to an access to a FEAT_NV2 sysreg in the memory pointed to by the VNCR. Unfortunately part of the condition checks the wrong argument to the function, meaning that it would spuriously trigger, resulting in some instruction aborts being taken to the wrong EL and reported incorrectly. Use the right variable in the condition.

Fixes: 674e5345275d425 ("target/arm: Report VNCR_EL2 based faults correctly")
Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-id: 20240116165605.2523055-1-peter.maydell@linaro.org
2024-01-09  target/arm: Add FEAT_NV2 to max, neoverse-n2, neoverse-v1 CPUs  (Peter Maydell; 1 file, -1/+1)

Enable FEAT_NV2 on the 'max' CPU, and stop filtering it out for the Neoverse N2 and Neoverse V1 CPUs.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Report VNCR_EL2 based faults correctly  (Peter Maydell; 2 files, -2/+29)

If FEAT_NV2 redirects a system register access to a memory offset from VNCR_EL2, that access might fault. In this case we need to report the correct syndrome information:
 * Data Abort, from same-EL
 * no ISS information
 * the VNCR bit (bit 13) is set
and the exception must be taken to EL2.

Save an appropriate syndrome template when generating code; we can then use that to:
 * select the right target EL
 * reconstitute a correct final syndrome for the data abort
 * report the right syndrome if we take a FEAT_RME granule protection fault on the VNCR-based write

Note that because VNCR is bit 13, we must start keeping bit 13 in template syndromes, by adjusting ARM_INSN_START_WORD2_SHIFT.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Implement FEAT_NV2 redirection of sysregs to RAM  (Peter Maydell; 3 files, -0/+68)

FEAT_NV2 requires that when HCR_EL2.{NV,NV2} == 0b11 then accesses by EL1 to certain system registers are redirected to RAM. The full list of affected registers is in the table in rule R_CSRPQ in the Arm ARM. The registers may be normally accessible at EL1 (like ACTLR_EL1), or normally UNDEF at EL1 (like HCR_EL2). Some registers redirect to RAM only when HCR_EL2.NV1 is 0, and some only when HCR_EL2.NV1 is 1; others trap in both cases.

Add the infrastructure for identifying which registers should be redirected and turning them into memory accesses.

This code does not set the correct syndrome or arrange for the exception to be taken to the correct target EL if the access via VNCR_EL2 faults; we will do that in the next commit.

Subsequent commits will mark up the relevant regdefs to set their nv2_redirect_offset, and if relevant one of the two flags which indicates that the redirect happens only for a particular value of HCR_EL2.NV1.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
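A conceptual model of the redirection (not QEMU's actual helper; it ignores the architecture's treatment of the top bits of VNCR_EL2 and all permission checking):

    #include <stdint.h>

    /* VNCR_EL2 supplies a page-aligned base address; each redirected
     * register has a fixed sub-page offset (the regdef's
     * nv2_redirect_offset).  The sysreg access becomes an ordinary
     * memory access at that address. */
    uint64_t nv2_redirect_addr(uint64_t vncr_el2, uint32_t nv2_redirect_offset)
    {
        return (vncr_el2 & ~(uint64_t)0xfff) + nv2_redirect_offset;
    }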
2024-01-09  target/arm: Handle FEAT_NV2 redirection of SPSR_EL2, ELR_EL2, ESR_EL2, FAR_EL2  (Peter Maydell; 3 files, -1/+42)

Under FEAT_NV2, when HCR_EL2.{NV,NV2} == 0b11 at EL1, accesses to the registers SPSR_EL2, ELR_EL2, ESR_EL2, FAR_EL2 and TFSR_EL2 (which would UNDEF without FEAT_NV or FEAT_NV2) should instead access the equivalent EL1 registers SPSR_EL1, ELR_EL1, ESR_EL1, FAR_EL1 and TFSR_EL1. Because there are only five registers involved and the encoding for the EL1 register is identical to that of the EL2 register except that opc1 is 0, we handle this by finding the EL1 register in the hash table and using it instead.

Note that traps that apply to direct accesses to the EL1 register, such as active fine-grained traps or other trap bits, do not trigger when it is accessed via the EL2 encoding in this way. However, some traps that are defined by the EL2 register may apply. We therefore call the EL2 register's accessfn first. The only one of the five which has such traps is TFSR_EL2: make sure its accessfn correctly handles both FEAT_NV (where we trap to EL2 without checking ATA bits) and FEAT_NV2 (where we check ATA bits and then redirect to TFSR_EL1).

(We don't need the NV1 tbflag bit until the next patch, but we introduce it here to avoid putting the NV, NV1, NV2 bits in an odd order.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Add FEAT_NV to max, neoverse-n2, neoverse-v1 CPUs  (Peter Maydell; 1 file, -0/+1)

Enable FEAT_NV on the 'max' CPU, and stop filtering it out for the Neoverse N2 and Neoverse V1 CPUs. We continue to downgrade FEAT_NV2 support to FEAT_NV for the latter two CPU types.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Treat LDTR* and STTR* as LDR/STR when NV, NV1 is 1, 1  (Peter Maydell; 1 file, -2/+4)

FEAT_NV requires (per I_JKLJK) that when HCR_EL2.{NV,NV1} is {1,1} the unprivileged-access instructions LDTR, STTR etc behave as normal loads and stores. Implement the check that handles this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Make NV reads of CurrentEL return EL2  (Peter Maydell; 1 file, -2/+7)

FEAT_NV requires that when HCR_EL2.NV is set, reads of the CurrentEL register from EL1 always report EL2 rather than the real EL. Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
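What the read must compute, as a hedged model (arm_current_el(), arm_hcr_el2_eff() and HCR_NV are existing QEMU names; QEMU in fact special-cases CurrentEL at translate time rather than using a readfn like this):

    /* CurrentEL reports the exception level in bits [3:2]. */
    uint64_t current_el_value(CPUARMState *env)
    {
        if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_NV)) {
            return 2 << 2;    /* lie to EL1: claim to be at EL2 */
        }
        return arm_current_el(env) << 2;
    }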
2024-01-09  target/arm: Trap sysreg accesses for FEAT_NV  (Peter Maydell; 3 files, -10/+42)

For FEAT_NV, accesses to system registers and instructions from EL1 which would normally UNDEF there but which work in EL2 need to instead be trapped to EL2. Detect this both for "we know this will UNDEF at translate time" and "we found this UNDEFs at runtime", and make the affected registers trap to EL2 instead.

The Arm ARM defines the set of registers that should trap in terms of their names; for our implementation this would be both awkward and inefficient as a test, so we instead trap based on the opc1 field of the sysreg. The regularity of the architectural choice of encodings for sysregs means that in practice this captures exactly the correct set of registers.

Regardless of how we try to define the registers this trapping applies to, there's going to be a certain possibility of breakage if new architectural features introduce new registers that don't follow the current rules (FEAT_MEC is one example already visible in the released sysreg XML, though not yet in the Arm ARM). This approach seems to me to be straightforward and likely to require a minimum of manual overrides.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
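The encoding regularity the commit relies on, as an illustrative predicate (the function is invented for illustration; the opc1 values follow the architecture's allocation of op1 == 4 to EL2 registers and op1 == 5 to the _EL12/_EL02 aliases):

    static bool nv_undef_traps_to_el2(int opc1)
    {
        /* An EL1 access that would UNDEF, but names an EL2 register or
         * alias, traps to EL2 instead under FEAT_NV. */
        return opc1 == 4 || opc1 == 5;
    }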
2024-01-09  target/arm: Move FPU/SVE/SME access checks up above ARM_CP_SPECIAL_MASK check  (Peter Maydell; 1 file, -7/+8)

In handle_sys() we don't do the check for whether the register is marked as needing an FPU/SVE/SME access check until after we've handled the special cases covered by ARM_CP_SPECIAL_MASK. This is conceptually the wrong way around, because if for example we happen to implement an FPU-access-checked register as ARM_CP_NOP, we should do the access check first.

Move the access checks up so they are with all the other access checks, not sandwiched between the special-case read/write handling and the normal-case read/write handling. This doesn't change behaviour at the moment, because we happen not to define any cpregs with both ARM_CPU_{FPU,SVE,SME} and one of the cases dealt with by ARM_CP_SPECIAL_MASK.

Moving this code also means we have the correct place to put the FEAT_NV/FEAT_NV2 access handling, which should come after the access checks and before we try to do any read/write action.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Always honour HCR_EL2.TSC when HCR_EL2.NV is set  (Peter Maydell; 1 file, -3/+13)

The HCR_EL2.TSC trap for trapping EL1 execution of SMC instructions has a behaviour change for FEAT_NV when EL3 is not implemented:
 * in older architecture versions TSC was required to have no effect (i.e. the SMC insn UNDEFs)
 * with FEAT_NV, when HCR_EL2.NV == 1 the trap must apply (i.e. SMC traps to EL2, as it already does in all cases when EL3 is implemented)
 * in newer architecture versions, the behaviour either without FEAT_NV or with FEAT_NV and HCR_EL2.NV == 0 is relaxed to an IMPDEF choice between UNDEF and trap-to-EL2 (i.e. it is permitted to always honour HCR_EL2.TSC), for AArch64 only

Add the condition to honour the trap bit when HCR_EL2.NV == 1. We leave the HCR_EL2.NV == 0 case with the existing (UNDEF) behaviour, as our IMPDEF choice (both because it avoids a behaviour change for older CPU models and because we'd have to distinguish AArch32 from AArch64 if we opted to trap to EL2).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Enable trapping of ERET for FEAT_NV  (Peter Maydell; 3 files, -6/+15)

When FEAT_NV is turned on via the HCR_EL2.NV bit, ERET instructions are trapped, with the same syndrome information as for the existing FEAT_FGT fine-grained trap (in the pseudocode this is handled in AArch64.CheckForEretTrap()).

Rename the DisasContext and tbflag bits to reflect that they are no longer exclusively for FGT traps, and set the tbflag bit when FEAT_NV is enabled as well as when the FGT is enabled.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Set CTR_EL0.{IDC,DIC} for the 'max' CPU  (Peter Maydell; 1 file, -0/+10)

The CTR_EL0 register has some bits which allow the implementation to tell the guest that it does not need to do cache maintenance for data-to-instruction coherence and instruction-to-data coherence. QEMU doesn't emulate caches and so our cache maintenance insns are all NOPs.

We already have some models of specific CPUs where we set these bits (e.g. the Neoverse V1), but the 'max' CPU still uses the settings it inherits from Cortex-A57. Set the bits for 'max' as well, so the guest doesn't need to do unnecessary work.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
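In terms of bits, the change amounts to setting the two coherence hints in the 'max' CPU's CTR_EL0 reset value (a sketch; IDC and DIC are architecturally bits 28 and 29, but the |= form here is illustrative):

    cpu->ctr |= (1ULL << 28) | (1ULL << 29);   /* CTR_EL0.IDC | CTR_EL0.DIC */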
2024-01-08  system/cpus: rename qemu_mutex_lock_iothread() to bql_lock()  (Stefan Hajnoczi; 4 files, -20/+20)

The Big QEMU Lock (BQL) has many names and they are confusing. The actual QemuMutex variable is called qemu_global_mutex but it's commonly referred to as the BQL in discussions and some code comments. The locking APIs, however, are called qemu_mutex_lock_iothread() and qemu_mutex_unlock_iothread().

The "iothread" name is historic and comes from when the main thread was split into KVM vcpu threads and the "iothread" (now called the main loop thread). I have contributed to the confusion myself by introducing a separate --object iothread, a separate concept unrelated to the BQL.

The "iothread" name is no longer appropriate for the BQL. Rename the locking APIs to:
 - void bql_lock(void)
 - void bql_unlock(void)
 - bool bql_locked(void)

There are more APIs with "iothread" in their names. Subsequent patches will rename them. There are also comments and documentation that will be updated in later patches.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Fabiano Rosas <farosas@suse.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Acked-by: Hyman Huang <yong.huang@smartx.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240102153529.486531-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-12-19  target/arm/tcg: Include missing 'exec/exec-all.h' header  (Philippe Mathieu-Daudé; 1 file, -0/+1)

translate_insn() ends up calling probe_access_full(), itself declared in "exec/exec-all.h":

    TranslatorOps::translate_insn
      -> aarch64_tr_translate_insn()
        -> is_guarded_page()
          -> probe_access_full()

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231130142519.28417-4-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-12-19  target/arm: Restrict TCG specific helpers  (Philippe Mathieu-Daudé; 1 file, -0/+55)

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231130142519.28417-2-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-11-20  target/arm: Fix SME FMOPA (16-bit), BFMOPA  (Richard Henderson; 1 file, -6/+4)

Perform the loop increment unconditionally, not nested within the predication.

Cc: qemu-stable@nongnu.org
Fixes: 3916841ac75 ("target/arm: Implement FMOPA, FMOPS (widening)")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1985
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231117193135.1180657-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
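A toy model of the control-flow bug (all names illustrative; this is not the SME helper itself):

    size_t i = 0;
    while (i < n) {
        if (pred_enabled(i)) {
            acc[i] += widen(a[i]) * widen(b[i]);
        }
        i++;    /* the fix: advance unconditionally, not inside the if () */
    }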
2023-11-20  target/arm: enable FEAT_RNG on Neoverse-N2  (Marcin Juszkiewicz; 1 file, -1/+1)

I noticed that the Neoverse-V1 has FEAT_RNG enabled, so let's enable it on the Neoverse-N2 as well.

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231114103443.1652308-1-marcin.juszkiewicz@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-11-15  target/arm/tcg: spelling fixes: alse, addreses  (Michael Tokarev; 2 files, -2/+2)

Fixes: 179e9a3baccc ("target/arm: Define new TB flag for ATA0")
Fixes: 5d7b37b5f675 ("target/arm: Implement the CPY* instructions")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-11-13  target/arm/tcg: enable PMU feature for Cortex-A8 and A9  (Nikita Ostrenkov; 1 file, -0/+2)

According to the technical reference manual, the Cortex-A9 has a Performance Monitoring Unit (PMU):

    https://developer.arm.com/documentation/100511/0401/performance-monitoring-unit/about-the-performance-monitoring-unit

The Cortex-A8 does also. We already define the PMU registers when emulating the Cortex-A8 and Cortex-A9, because we put them in v7_cp_reginfo[] rather than guarding them behind ARM_FEATURE_PMU. So the only thing that setting the feature bit changes is that the registers actually do something.

Enable ARM_FEATURE_PMU for Cortex-A8 and Cortex-A9, to avoid this anomaly. (The A8 and A9 PMU predates the standardisation of ID_DFR0.PerfMon, so the field there is 0, but the PMU is still present.)

Signed-off-by: Nikita Ostrenkov <n.ostrenkov@gmail.com>
Message-id: 20231112165658.2335-1-n.ostrenkov@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tweaked commit message; also enable PMU for A8]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
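The functional change is one line per CPU initfn (set_feature() and ARM_FEATURE_PMU are existing QEMU names; the placement is sketched):

    /* In cortex_a8_initfn() and cortex_a9_initfn(): */
    set_feature(&cpu->env, ARM_FEATURE_PMU);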
2023-11-13  target/arm: Correct MTE tag checking for reverse-copy MOPS  (Peter Maydell; 1 file, -2/+10)

When we are doing a FEAT_MOPS copy that must be performed backwards, we call mte_mops_probe_rev(), passing it the address of the last byte in the region we are probing. However, allocation_tag_mem_probe() wants the address of the first byte to get the tag memory for. Because we passed it (ptr, size) we could incorrectly trip the allocation_tag_mem_probe() check for "does this access run across to the following page", and if that following page happened not to be valid then we would assert.

We know we will always be only dealing with a single page because the code that calls mte_mops_probe_rev() ensures that. We could make mte_mops_probe_rev() pass 'ptr - (size - 1)' to allocation_tag_mem_probe(), but then we would have to adjust the returned 'mem' pointer to get back to the tag RAM for the last byte of the region. It's simpler to just pass in a size of 1 byte, because we know that allocation_tag_mem_probe() in pure-probe single-page mode doesn't care about the size.

Fixes: 69c51dc3723b ("target/arm: Implement MTE tag-checking functions for FEAT_MOPS copies")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231110162546.2192512-1-peter.maydell@linaro.org
2023-11-13  target/arm: HVC at EL3 should go to EL3, not EL2  (Peter Maydell; 1 file, -1/+3)

AArch64 permits code at EL3 to use the HVC instruction; however the exception we take should go to EL3, not down to EL2 (see the pseudocode AArch64.CallHypervisor()). Fix the target EL.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Message-id: 20231109151917.1925107-1-peter.maydell@linaro.org
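The selection in miniature (a hedged sketch following the AArch64.CallHypervisor() pseudocode; the variable names are illustrative):

    /* HVC normally raises an exception at EL2, but when executed at EL3
     * the exception must be taken to EL3 itself. */
    int target_el = (arm_current_el(env) == 3) ? 3 : 2;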
2023-11-06  target/arm: Fix A64 LDRA immediate decode  (Peter Maydell; 2 files, -1/+6)

In commit be23a049 in the conversion to decodetree we broke the decoding of the immediate value in the LDRA instruction. This should be a 10-bit signed value that is scaled by 8, but in the conversion we incorrectly ended up scaling it only by 2. Fix the scaling factor.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1970
Fixes: be23a049 ("target/arm: Convert load (pointer auth) insns to decodetree")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231106113445.1163063-1-peter.maydell@linaro.org
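A standalone model of the operand decode (not the decodetree syntax itself; it just shows the scaling that regressed):

    #include <stdint.h>

    /* LDRA's offset is a 10-bit signed immediate scaled by 8. */
    int64_t ldra_offset(uint32_t imm10)    /* raw 10-bit field */
    {
        int32_t simm = (int32_t)(imm10 << 22) >> 22;   /* sign-extend */
        return (int64_t)simm * 8;    /* the bug scaled by 2 instead */
    }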
2023-11-02  target/arm: Fix SVE STR increment  (Richard Henderson; 1 file, -2/+3)

The previous change missed updating one of the increments and one of the MemOps. Add a test case for all vector lengths.

Cc: qemu-stable@nongnu.org
Fixes: e6dd5e782be ("target/arm: Use tcg_gen_qemu_{ld, st}_i128 in gen_sve_{ld, st}r")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231031143215.29764-1-richard.henderson@linaro.org
[PMM: fixed checkpatch nit]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-11-02  target/arm: Make FEAT_MOPS SET* insns handle Xs == XZR correctly  (Peter Maydell; 1 file, -3/+12)

Most of the registers used by the FEAT_MOPS instructions cannot use 31 as a register field value; this is CONSTRAINED UNPREDICTABLE to NOP or UNDEF (we UNDEF). However, it is permitted for the "source value" register for the memset insns SET* to be 31, which (as usual for most data-processing insns) means it should be the zero register XZR.

We forgot to handle this case, with the effect that trying to set memory to zero with a "SET* Xd, Xn, XZR" sets the memory to the value that happens to be in the low byte of SP. Handle XZR when getting the SET* data value from the register file.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231030174000.3792225-4-peter.maydell@linaro.org
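The shape of the fix, hedged ('rn' is illustrative; cpu_reg() is the A64 translator's register accessor, which already returns zero for register 31):

    /* Read the SET* source value: register 31 here means XZR, not SP. */
    TCGv_i64 data = cpu_reg(s, rn);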
2023-10-27  target/arm: Fix syndrome for FGT traps on ERET  (Peter Maydell; 1 file, -2/+2)

In commit 442c9d682c94fc2 when we converted the ERET, ERETAA, ERETAB instructions to decodetree, the conversion accidentally lost the correct setting of the syndrome register when taking a trap because of the FEAT_FGT HFGITR_EL1.ERET bit. Instead of reporting a correct full syndrome value with the EC and IL bits, we only reported the low two bits of the syndrome, because the call to syn_erettrap() got dropped.

Fix the syndrome values for these traps by reinstating the syn_erettrap() calls.

Fixes: 442c9d682c94fc2 ("target/arm: Convert ERET, ERETAA, ERETAB to decodetree")
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024172438.2990945-1-peter.maydell@linaro.org
2023-10-27  target/arm: Move feature test functions to their own header  (Peter Maydell; 7 files, -1/+7)

The feature test functions isar_feature_*() now take up nearly a thousand lines in target/arm/cpu.h. This header file is included by a lot of source files, most of which don't need these functions. Move the feature test functions to their own header file.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024163510.2972081-2-peter.maydell@linaro.org
2023-10-27  target/arm: Implement Neoverse N2 CPU model  (Peter Maydell; 1 file, -0/+103)

Implement a model of the Neoverse N2 CPU. This is an Armv9.0-A processor very similar to the Cortex-A710. The differences are:
 * no FEAT_EVT
 * FEAT_DGH (data gathering hint)
 * FEAT_NV (not yet implemented in QEMU)
 * Statistical Profiling Extension (not implemented in QEMU)
 * 48 bit physical address range, not 40
 * CTR_EL0.DIC = 1 (no explicit icache cleaning needed)
 * PMCR_EL0.N = 6 (always 6 PMU counters, not 20)

Because it has 48-bit physical address support, we can use this CPU in the sbsa-ref board as well as the virt board.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230915185453.1871167-3-peter.maydell@linaro.org
2023-10-27  target/arm: Correct minor errors in Cortex-A710 definition  (Peter Maydell; 1 file, -2/+9)

Correct a couple of minor errors in the Cortex-A710 definition:
 * ID_AA64DFR0_EL1.DebugVer is 9 (indicating Armv8.4 debug architecture)
 * ID_AA64ISAR1_EL1.APA is 5 (indicating more PAuth support)
 * there is an IMPDEF CPUCFR_EL1, like that on the Neoverse-N1

Fixes: e3d45c0a89576 ("target/arm: Implement cortex-a710")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230915185453.1871167-2-peter.maydell@linaro.org
2023-10-22  target/arm: Use tcg_gen_ext_i64  (Richard Henderson; 1 file, -35/+2)

The ext_and_shift_reg helper does this plus a shift. The non-zero check for the shift count duplicates the one done within tcg_gen_shli_i64.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
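Usage in brief (tcg_gen_ext_i64() is the TCG helper named by the subject; the operands are illustrative):

    /* One call covers sign- or zero-extension, selected by the MemOp: */
    tcg_gen_ext_i64(dst, src, MO_SW);   /* sign-extend low 16 bits */
    tcg_gen_ext_i64(dst, src, MO_UB);   /* zero-extend low 8 bits  */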