path: root/target/arm
Age  Commit message  Author  Files/lines changed
2024-12-13  target/arm: Add section labels for "Data Processing (register)"  (Richard Henderson)  1 file, -2/+17
At the same time, use ### to separate 3rd-level sections. We already use ### for 4.1.92 Data Processing (immediate), but not for the following two third-level sections: 4.1.93 Branches, and 4.1.94 Loads and stores. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20241211163036.2297116-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-12-12  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging  (Stefan Hajnoczi)  2 files, -2/+2
* rust: better integration with clippy, rustfmt and rustdoc * rust: interior mutability types * rust: add a bit operations module * rust: first part of QOM rework * kvm: remove unnecessary #ifdef * clock: small cleanups, improve handling of Clock lifetimes # -----BEGIN PGP SIGNATURE----- # # iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmdZqFkUHHBib256aW5p # QHJlZGhhdC5jb20ACgkQv/vSX3jHroOzRwf/SYUD+CJCn2x7kUH/JG893jwN1WbJ # meGZ0PQDUpOZJFWg6T4g0MuW4O+Wevy2pF4SfGojgqaYxKBbTQVkeliDEMyNUxpr # vSKXego0K3pkX3cRDXNVTaXFbsHsMt/3pfzMQM6ocF9qbL+Emvx7Og6WdAcyJ4hc # lA17EHlnrWKUSnqN/Ow/pZXsa4ijCklXFFh4barfbdGVhMQc2QekUU45GsP2AvGT # NkXTQC05HqxBaAIDeSxbprDSzNihyT71dAooVoxqKboprPu5uoUSJwgaD8rADPr4 # EOfsz61V4mji+DWDcIzTtYoAdY41vVXI9lvCKOcCFkimA29xO0W6P7mG2w== # =JSh5 # -----END PGP SIGNATURE----- # gpg: Signature made Wed 11 Dec 2024 09:57:29 EST # gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83 # gpg: issuer "pbonzini@redhat.com" # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full] # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full] # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83 * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (49 commits) rust: qom: change the parent type to an associated type rust: qom: split ObjectType from ObjectImpl trait rust: qom: move bridge for TypeInfo functions out of pl011 rust: qdev: move bridge for realize and reset functions out of pl011 rust: qdev: move device_class_init! body to generic function, ClassInitImpl implementation to macro rust: qom: move ClassInitImpl to the instance side rust: qom: convert type_info! macro to an associated const rust: qom: rename Class trait to ClassInitImpl rust: qom: add default definitions for ObjectImpl rust: add a bit operation module rust: add bindings for interrupt sources rust: define prelude rust: cell: add BQL-enforcing RefCell variant rust: cell: add BQL-enforcing Cell variant bql: check that the BQL is not dropped within marked sections qom/object: Remove type_register() script/codeconverter/qom_type_info: Deprecate MakeTypeRegisterStatic and MakeTypeRegisterNotStatic ui: Replace type_register() with type_register_static() target/xtensa: Replace type_register() with type_register_static() target/sparc: Replace type_register() with type_register_static() ... Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-12-11  target/arm: Set default NaN pattern explicitly  (Peter Maydell)  1 file, -0/+2
Set the default NaN pattern explicitly for the arm target. This includes setting it for the old linux-user nwfpe emulation. For nwfpe, our default doesn't match the real kernel, but we avoid making a behaviour change in this commit. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20241202131347.498124-41-peter.maydell@linaro.org
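As an illustration of the change described above, a minimal hedged sketch using the softfloat setter this series introduces (the function name and the pattern encoding are assumptions based on the commit description, not quoted code): the Arm default NaN is a positive quiet NaN, i.e. sign 0 and only the most significant fraction bit set (0x7FC00000 as a float32).

    /* Illustrative only: choose the Arm default NaN (sign 0, fraction MSB 1)
     * for one of the target's float_status words. */
    set_float_default_nan_pattern(0b01000000, &env->vfp.fp_status);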
2024-12-11  target/arm: Copy entire float_status in is_ebf  (Richard Henderson)  1 file, -13/+7
Now that float_status has a bunch of fp parameters, it is easier to copy an existing structure than create one from scratch. Begin by copying the structure that corresponds to the FPSR and make only the adjustments required for BFloat16 semantics. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20241203203949.483774-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-12-11  target/arm: Set Float3NaNPropRule explicitly  (Peter Maydell)  1 file, -0/+5
Set the Float3NaNPropRule explicitly for Arm, and remove the ifdef from pickNaNMulAdd(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20241202131347.498124-18-peter.maydell@linaro.org
2024-12-11  target/arm: Set FloatInfZeroNaNRule explicitly  (Peter Maydell)  1 file, -0/+3
Set the FloatInfZeroNaNRule explicitly for the Arm target, so we can remove the ifdef from pickNaNMulAdd(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20241202131347.498124-6-peter.maydell@linaro.org
2024-12-10  arm: Replace type_register() with type_register_static()  (Zhao Liu)  2 files, -2/+2
Replace type_register() with type_register_static() because type_register() will be deprecated. Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Link: https://lore.kernel.org/r/20241029085934.2799066-2-zhao1.liu@intel.com
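For illustration, the shape of such a conversion in a QOM type-registration function (the FooDevice names are hypothetical). type_register_static() requires the TypeInfo to have static storage duration, which these callers already satisfy:

    static const TypeInfo foo_device_info = {
        .name          = TYPE_FOO_DEVICE,
        .parent        = TYPE_SYS_BUS_DEVICE,
        .instance_size = sizeof(FooDeviceState),
        .class_init    = foo_device_class_init,
    };

    static void foo_register_types(void)
    {
        /* was: type_register(&foo_device_info); */
        type_register_static(&foo_device_info);
    }

    type_init(foo_register_types)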
2024-11-26  target/arm/tcg/: fix typo in FEAT name  (Pierrick Bouvier)  1 file, -1/+1
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20241122225049.1617774-5-pierrick.bouvier@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-11-26  target/arm/tcg/cpu32.c: swap ATCM and BTCM register names  (Michael Tokarev)  1 file, -2/+2
According to the Cortex-R5 r1p2 manual, the register with opcode2=0 is BTCM and the one with opcode2=1 is ATCM, exactly the opposite of how QEMU labels them. Just swap the labels to avoid confusion; both registers are implemented as always-zero. Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20241121171602.3273252-1-mjt@tls.msk.ru Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-11-19  target/arm/hvf: Add trace.h header  (Peter Maydell)  2 files, -1/+2
The documentation for trace events says that every subdirectory which has trace events should have a trace.h header, whose only content is an include of the trace/trace-<subdir>.h file. When we added the trace events in target/arm/hvf/ we forgot to create this file and instead hvf.c directly includes trace/trace-target_arm_hvf.h. Create the standard trace.h file to bring this into line with the convention. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20241108162909.4080314-3-peter.maydell@linaro.org
2024-11-19  arm/ptw: Honour WXN/UWXN and SIF in short-format descriptors  (Pavel Skripkin)  1 file, -31/+24
Currently the handling of page protection in the short-format descriptor is open-coded. This means that we forgot to update it to handle some newer architectural features, including: * handling of SCTLR.{UWXN,WXN} * handling of SCR.SIF Make the short-format descriptor code call the same get_S1prot() that we already use for the LPAE descriptor format. This makes the code simpler and means it now correctly honours the WXN/UWXN and SIF bits. Signed-off-by: Pavel Skripkin <paskripkin@gmail.com> Message-id: 20241118152537.45277-1-paskripkin@gmail.com [PMM: fixed a couple of checkpatch nits, tweaked commit message] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-11-19  arm/ptw: Make get_S1prot accept decoded AP  (Pavel Skripkin)  1 file, -8/+9
AP in the Armv7 short-descriptor format has 3 bits and also a domain, which makes it incompatible with the other Arm schemes. To make it possible to share get_S1prot between Armv8, Armv7 long format, Armv7 short format and Armv6, it's easier to make the caller decode the AP bits. Signed-off-by: Pavel Skripkin <paskripkin@gmail.com> Message-id: 20241118152526.45185-1-paskripkin@gmail.com [PMM: fixed checkpatch nit] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-11-16  target/arm: Drop user-only special case in sve_stN_r  (Richard Henderson)  1 file, -4/+0
This path is reachable with plugins enabled, and provoked with run-plugin-catch-syscalls-with-libinline.so. Cc: qemu-stable@nongnu.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20241112141232.321354-1-richard.henderson@linaro.org>
2024-11-05  target/arm: Enable FEAT_CMOW for -cpu max  (Gustavo Romero)  4 files, -0/+12
FEAT_CMOW introduces support for controlling cache maintenance instructions executed in EL0/1 and is mandatory from Armv8.8. On real hardware, the main use for this feature is to prevent processes from invalidating or flushing cache lines for addresses they only have read permission, which can impact the performance of other processes. QEMU implements all cache instructions as NOPs, and, according to rule [1], which states that generating any Permission fault when a cache instruction is implemented as a NOP is implementation-defined, no Permission fault is generated for any cache instruction when it lacks read and write permissions. QEMU does not model any cache topology, so the PoU and PoC are before any cache, and rules [2] apply. These rules state that generating any MMU fault for cache instructions in this topology is also implementation-defined. Therefore, for FEAT_CMOW, we do not generate any MMU faults either, instead, we only advertise it in the feature register. [1] Rule R_HGLYG of section D8.14.3, Arm ARM K.a. [2] Rules R_MZTNR and R_DNZYL of section D8.14.3, Arm ARM K.a. Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20241104142606.941638-1-gustavo.romero@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
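Since the feature only needs to be advertised, the behavioural change reduces to setting the relevant ID register field for '-cpu max'. A hedged sketch, assuming the usual FIELD_DP64() pattern and that the CMOW field is defined for ID_AA64MMFR1:

    /* Advertise FEAT_CMOW; the cache maintenance instructions stay NOPs. */
    uint64_t t = cpu->isar.id_aa64mmfr1;
    t = FIELD_DP64(t, ID_AA64MMFR1, CMOW, 1);
    cpu->isar.id_aa64mmfr1 = t;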
2024-11-05  target/arm: Fix SVE SDOT/UDOT/USDOT (4-way, indexed)  (Peter Maydell)  1 file, -1/+8
Our implementation of the indexed version of SVE SDOT/UDOT/USDOT got the calculation of the inner loop terminator wrong. Although we correctly account for the element size when we calculate the terminator for the first iteration: intptr_t segend = MIN(16 / sizeof(TYPED), opr_sz_n); we don't do that when we move it forward after the first inner loop completes. The intention is that we process the vector in 128-bit segments, which for a 64-bit element size should mean (1, 2), (3, 4), (5, 6), etc. This bug meant that we would iterate (1, 2), (3, 4, 5, 6), (7, 8, 9, 10) etc and apply the wrong indexed element to some of the operations, and also index off the end of the vector. You don't see this bug if the vector length is small enough that we don't need to iterate the outer loop, i.e. if it is only 128 bits, or if it is the 64-bit special case from AA32/AA64 AdvSIMD. If the vector length is 256 bits then we calculate the right results for the elements in the vector but do index off the end of the vector. Vector lengths greater than 256 bits see wrong answers. The instructions that produce 32-bit results behave correctly. Fix the recalculation of 'segend' for subsequent iterations, and restore a version of the comment that was lost in the refactor of commit 7020ffd656a5 that explains why we only need to clamp segend to opr_sz_n for the first iteration, not the later ones. Cc: qemu-stable@nongnu.org Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2595 Fixes: 7020ffd656a5 ("target/arm: Macroize helper_gvec_{s,u}dot_idx_{b,h}") Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20241101185544.2130972-1-peter.maydell@linaro.org
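A minimal standalone illustration of the terminator bug (simplified from the real helper, which is generated from a macro): with 64-bit elements a 128-bit segment holds two elements, so advancing the inner-loop terminator by a fixed four elements produces the wrong grouping and can run past the end of the vector.

    #include <stdint.h>
    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    /* Print the element ranges covered by each "128-bit segment" of a vector
     * of 64-bit elements, advancing the terminator either the buggy way
     * (always +4 elements) or the fixed way (+16 / sizeof(element)). */
    static void show_segments(intptr_t opr_sz_n, int fixed)
    {
        const intptr_t per_seg = 16 / sizeof(uint64_t);   /* 2 elements */
        intptr_t i = 0;
        intptr_t segend = MIN(per_seg, opr_sz_n);         /* clamp needed only here */

        do {
            printf("  elements %ld..%ld\n", (long)i, (long)(segend - 1));
            i = segend;
            segend = fixed ? segend + per_seg : segend + 4;
        } while (i < opr_sz_n);
    }

    int main(void)
    {
        puts("256-bit vector, buggy advance (second segment runs off the end):");
        show_segments(4, 0);
        puts("256-bit vector, fixed advance:");
        show_segments(4, 1);
        return 0;
    }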
2024-11-05  target/arm: Add new MMU indexes for AArch32 Secure PL1&0  (Peter Maydell)  6 files, -20/+86
Our current usage of MMU indexes when EL3 is AArch32 is confused. Architecturally, when EL3 is AArch32, all Secure code runs under the Secure PL1&0 translation regime: * code at EL3, which might be Mon, or SVC, or any of the other privileged modes (PL1) * code at EL0 (Secure PL0) This is different from when EL3 is AArch64, in which case EL3 is its own translation regime, and EL1 and EL0 (whether AArch32 or AArch64) have their own regime. We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't do anything special about Secure PL0, which meant it used the same ARMMMUIdx_EL10_0 that NonSecure PL0 does. This resulted in a bug where arm_sctlr() incorrectly picked the NonSecure SCTLR as the controlling register when in Secure PL0, which meant we were spuriously generating alignment faults because we were looking at the wrong SCTLR control bits. The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug that we wouldn't honour the PAN bit for Secure PL1, because there's no equivalent _PAN mmu index for it. Fix this by adding two new MMU indexes: * ARMMMUIdx_E30_0 is for Secure PL0 * ARMMMUIdx_E30_3_PAN is for Secure PL1 when PAN is enabled The existing ARMMMUIdx_E3 is used to mean "Secure PL1 without PAN" (and would be named ARMMMUIdx_E30_3 in an AArch32-centric scheme). These extra two indexes bring us up to the maximum of 16 that the core code can currently support. This commit: * adds the new MMU index handling to the various places where we deal in MMU index values * adds assertions that we aren't AArch32 EL3 in a couple of places that currently use the E10 indexes, to document why they don't also need to handle the E30 indexes * documents in a comment why regime_has_2_ranges() doesn't need updating Notes for backporting: this commit depends on the preceding revert of 4c2c04746932; that revert and this commit should probably be backported to everywhere that we originally backported 4c2c04746932. Cc: qemu-stable@nongnu.org Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326 Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2588 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20241101142845.1712482-3-peter.maydell@linaro.org
2024-11-05  Revert "target/arm: Fix usage of MMU indexes when EL3 is AArch32"  (Peter Maydell)  8 files, -81/+34
This reverts commit 4c2c0474693229c1f533239bb983495c5427784d. This commit tried to fix a problem with our usage of MMU indexes when EL3 is AArch32, using what it described as a "more complicated approach" where we share the same MMU index values for Secure PL1&0 and NonSecure PL1&0. In theory this should work, but the change didn't account for (at least) two things: (1) The design change means we need to flush the TLBs at any point where the CPU state flips from one to the other. We already flush the TLB when SCR.NS is changed, but we don't flush the TLB when we take an exception from NS PL1&0 into Mon or when we return from Mon to NS PL1&0, and the commit didn't add any code to do that. (2) The ATS12NS* address translate instructions allow Mon code (which is Secure) to do a stage 1+2 page table walk for NS. I thought this was OK because do_ats_write() does a page table walk which doesn't use the TLBs, so because it can pass both the MMU index and also an ARMSecuritySpace argument we can tell the table walk that we want NS stage1+2, not S. But that means that all the code within the ptw that needs to find e.g. the regime EL cannot do so only with an mmu_idx -- all these functions like regime_sctlr(), regime_el(), etc would need to pass both an mmu_idx and the security_space, so they can tell whether this is a translation regime controlled by EL1 or EL3 (and so whether to look at SCTLR.S or SCTLR.NS, etc). In particular, because regime_el() wasn't updated to look at the ARMSecuritySpace it would return 1 even when the CPU was in Monitor mode (and the controlling EL is 3). This meant that page table walks in Monitor mode would look at the wrong SCTLR, TCR, etc and would generally fault when they should not. Rather than trying to make the complicated changes needed to rescue the design of 4c2c04746932, we revert it in order to instead take the route that that commit describes as "the most straightforward" fix, where we add new MMU indexes EL30_0, EL30_3, EL30_3_PAN to correspond to "Secure PL1&0 at PL0", "Secure PL1&0 at PL1", and "Secure PL1&0 at PL1 with PAN". This revert will re-expose the "spurious alignment faults in Secure PL0" issue #2326; we'll fix it again in the next commit. Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Thomas Huth <thuth@redhat.com> Message-id: 20241101142845.1712482-2-peter.maydell@linaro.org Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-11-05  target/arm: Explicitly set 2-NaN propagation rule  (Peter Maydell)  1 file, -8/+17
Set the 2-NaN propagation rule explicitly in the float_status words we use. We wrap this plus the pre-existing setting of the tininess-before-rounding flag in a new function arm_set_default_fp_behaviours() to avoid repetition, since we have a lot of float_status words at this point. The situation with FPA11 emulation in linux-user is a little odd, and arguably "correct" behaviour there would be to exactly match a real Linux kernel's FPA11 emulation. However FPA11 emulation is essentially dead at this point and so it seems better to continue with QEMU's current behaviour and leave a comment describing the situation. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20241025141254.2141506-4-peter.maydell@linaro.org
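A hedged sketch of the shape of the helper described above; the 2-NaN rule setter and its enum value follow the new softfloat API introduced by this series, so treat the exact names as assumptions rather than quoted code:

    /* Apply the FP behaviours that all Arm float_status words share. */
    static void arm_set_default_fp_behaviours(float_status *s)
    {
        set_float_detect_tininess(float_tininess_before_rounding, s);
        set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
    }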
2024-10-29  target/arm: kvm: require KVM_CAP_DEVICE_CTRL  (Paolo Bonzini)  2 files, -21/+12
The device control API was added in 2013, assume that it is present. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-id: 20241024113126.44343-1-pbonzini@redhat.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-10-29  target/arm: Fix arithmetic underflow in SETM instruction  (Ido Plat)  1 file, -1/+1
Pass the stage size to the step function callback; otherwise do_setm would hang when the size is larger than the page size, because the stage size would underflow. This fix brings do_setm more in line with do_setp. Cc: qemu-stable@nongnu.org Fixes: 0e92818887dee ("target/arm: Implement the SET* instructions") Signed-off-by: Ido Plat <ido.plat1@ibm.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20241025024909.799989-1-ido.plat1@ibm.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
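The failure mode is ordinary unsigned wrap-around. A tiny self-contained illustration (not the QEMU code) of why a step that reports more progress than the current stage has left turns a bounded loop into an effectively infinite one:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t stage = 64;   /* bytes left in the current stage */
        uint64_t step = 100;   /* progress reported by a helper that was given
                                  the wrong (larger) size limit */

        stage -= step;         /* unsigned underflow: wraps to a huge value */
        printf("stage is now %" PRIu64 " bytes\n", stage);
        /* a loop conditioned on 'stage != 0' would now run ~2^64 more times */
        return 0;
    }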
2024-10-29  target/arm: Don't assert in regime_is_user() for E10 mmuidx values  (Peter Maydell)  1 file, -4/+1
In regime_is_user() we assert if we're passed an ARMMMUIdx_E10_* mmuidx value. This used to make sense because we only used this function in ptw.c and would never use it on this kind of stage 1+2 mmuidx, only for an individual stage 1 or stage 2 mmuidx. However, when we implemented FEAT_E0PD we added a callsite in aa64_va_parameters(), which means this can now be called for stage 1+2 mmuidx values if the guest sets the TCR_ELx.{E0PD0,E0PD1} bits to enable use of the feature. This will then result in an assertion failure later, for instance on a TLBI operation:
#6 0x00007ffff6d0e70f in g_assertion_message_expr (domain=0x0, file=0x55555676eeba "../../target/arm/internals.h", line=978, func=0x555556771d48 <__func__.5> "regime_is_user", expr=<optimised out>) at ../../../glib/gtestutils.c:3279
#7 0x0000555555f286d2 in regime_is_user (env=0x555557f2fe00, mmu_idx=ARMMMUIdx_E10_0) at ../../target/arm/internals.h:978
#8 0x0000555555f3e31c in aa64_va_parameters (env=0x555557f2fe00, va=18446744073709551615, mmu_idx=ARMMMUIdx_E10_0, data=true, el1_is_aa32=false) at ../../target/arm/helper.c:12048
#9 0x0000555555f3163b in tlbi_aa64_get_range (env=0x555557f2fe00, mmuidx=ARMMMUIdx_E10_0, value=106721347371041) at ../../target/arm/helper.c:5214
#10 0x0000555555f317e8 in do_rvae_write (env=0x555557f2fe00, value=106721347371041, idxmap=21, synced=true) at ../../target/arm/helper.c:5260
#11 0x0000555555f31925 in tlbi_aa64_rvae1is_write (env=0x555557f2fe00, ri=0x555557fbeae0, value=106721347371041) at ../../target/arm/helper.c:5302
#12 0x0000555556036f8f in helper_set_cp_reg64 (env=0x555557f2fe00, rip=0x555557fbeae0, value=106721347371041) at ../../target/arm/tcg/op_helper.c:965
Since we do know whether these mmuidx values are for usermode or not, we can easily make regime_is_user() handle them: ARMMMUIdx_E10_0 is user, and the other two are not. Cc: qemu-stable@nongnu.org Fixes: e4c93e44ab103f ("target/arm: Implement FEAT_E0PD") Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Tested-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20241017172331.822587-1-peter.maydell@linaro.org
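The shape of the fix, heavily abridged (the real case list in internals.h is longer): treat ARMMMUIdx_E10_0 as user, and let the other E10 indexes fall through to the non-user default instead of asserting.

    static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
    {
        switch (mmu_idx) {
        case ARMMMUIdx_E10_0:       /* stage 1+2, EL0: user */
        case ARMMMUIdx_E20_0:
        case ARMMMUIdx_Stage1_E0:
            return true;
        default:                    /* E10_1 and E10_1_PAN land here: not user */
            return false;
        }
    }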
2024-10-29  target/arm: Store FPSR cumulative exception bits in env->vfp.fpsr  (Peter Maydell)  1 file, -40/+16
Currently we store the FPSR cumulative exception bits in the float_status fields, and use env->vfp.fpsr only for the NZCV bits. (The QC bit is stored in env->vfp.qc[].) This works for TCG, but if QEMU was built without CONFIG_TCG (i.e. with KVM support only) then we use the stub versions of vfp_get_fpsr_from_host() and vfp_set_fpsr_to_host() which do nothing, throwing away the cumulative exception bit state. The effect is that if the FPSR state is round-tripped from KVM to QEMU then we lose the cumulative exception bits. In particular, this will happen if the VM is migrated. There is no user-visible bug when using KVM with a QEMU binary that was built with CONFIG_TCG. Fix this by always storing the cumulative exception bits in env->vfp.fpsr. If we are using TCG then we may also keep pending cumulative exception information in the float_status fields, so we continue to fold that in on reads. This change will also be helpful for implementing FEAT_AFP later, because that includes a feature where in some situations we want to cause input denormals to be flushed to zero without affecting the existing state of the FPSR.IDC bit, so we need a place to store IDC which is distinct from the various float_status fields. (Note for stable backports: the bug goes back to 4a15527c9fee but this code was refactored in commits ea8618382aba..a8ab8706d4cc461, so fixing it in branches without those refactorings will mean either backporting the refactor or else implementing a conceptually similar fix for the old code.) Cc: qemu-stable@nongnu.org Fixes: 4a15527c9fee ("target/arm/vfp_helper: Restrict the SoftFloat use to TCG") Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20241011162401.3672735-1-peter.maydell@linaro.org
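A hedged sketch of the read path this describes, omitting the FPSR.QC handling: the architectural cumulative bits always live in env->vfp.fpsr, and under TCG any still-pending softfloat flags are folded in on read (the from-host helper being a no-op stub in KVM-only builds).

    uint32_t vfp_get_fpsr(CPUARMState *env)
    {
        /* Cumulative exception bits are now always kept here... */
        uint32_t fpsr = env->vfp.fpsr;

        /* ...but TCG may still have pending state in the float_status fields. */
        fpsr |= vfp_get_fpsr_from_host(env);
        return fpsr;
    }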
2024-10-29  arm/kvm: add support for MTE  (Cornelia Huck)  4 files, -3/+90
Extend the 'mte' property for the virt machine to cover KVM as well. For KVM, we don't allocate tag memory, but instead enable the capability. If MTE has been enabled, we need to disable migration, as we do not yet have a way to migrate the tags as well. Therefore, MTE will stay off with KVM unless requested explicitly. [gankulkarni: This patch is rework of commit b320e21c48 which broke TCG since it made the TCG -cpu max report the presence of MTE to the guest even if the board hadn't enabled MTE by wiring up the tag RAM. This meant that if the guest then tried to use MTE QEMU would segfault accessing the non-existent tag RAM.] Signed-off-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Gustavo Romero <gustavo.romero@linaro.org> Signed-off-by: Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> Message-id: 20241008114302.4855-1-gankulkarni@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-10-13  target/arm: Fix alignment fault priority in get_phys_addr_lpae  (Richard Henderson)  1 file, -21/+30
Now that we have the MemOp for the access, we can order the alignment fault caused by memory type before the permission fault for the page. For subsequent page hits, permission and stage 2 checks are known to pass, and so the TLB_CHECK_ALIGNED fault raised in generic code is not mis-ordered. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13  target/arm: Implement TCGCPUOps.tlb_fill_align  (Richard Henderson)  4 files, -34/+23
Fill in the tlb_fill_align hook. Handle alignment not due to memory type, since that's no longer handled by generic code. Pass memop to get_phys_addr. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13  target/arm: Move device detection earlier in get_phys_addr_lpae  (Richard Henderson)  1 file, -24/+25
Determine cache attributes, and thence Device vs Normal memory, earlier in the function. We have an existing regime_is_stage2 if block into which this can be slotted. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13  target/arm: Pass MemOp to get_phys_addr_lpae  (Richard Henderson)  1 file, -2/+4
Pass the value through from get_phys_addr_nogpc. Reviewed-by: Helge Deller <deller@gmx.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13  target/arm: Pass MemOp through get_phys_addr_twostage  (Richard Henderson)  1 file, -4/+6
Pass memop through get_phys_addr_twostage with its recursion with get_phys_addr_nogpc. Reviewed-by: Helge Deller <deller@gmx.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13  target/arm: Pass MemOp to get_phys_addr_nogpc  (Richard Henderson)  1 file, -6/+8
Zero is the safe do-nothing value for callers to use. Pass the value through from get_phys_addr_gpc and get_phys_addr_with_space_nogpc. Reviewed-by: Helge Deller <deller@gmx.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13  target/arm: Pass MemOp to get_phys_addr_gpc  (Richard Henderson)  1 file, -5/+6
Zero is the safe do-nothing value for callers to use. Pass the value through from get_phys_addr. Reviewed-by: Helge Deller <deller@gmx.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13  target/arm: Pass MemOp to get_phys_addr_with_space_nogpc  (Richard Henderson)  3 files, -6/+8
Zero is the safe do-nothing value for callers to use. Reviewed-by: Helge Deller <deller@gmx.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13  target/arm: Pass MemOp to get_phys_addr  (Richard Henderson)  4 files, -7/+8
Zero is the safe do-nothing value for callers to use. Reviewed-by: Helge Deller <deller@gmx.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13  include/exec/memop: Rename get_alignment_bits  (Richard Henderson)  1 file, -2/+2
Rename to use "memop_" prefix, like other functions that operate on MemOp. Reviewed-by: Helge Deller <deller@gmx.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-04  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging  (Peter Maydell)  1 file, -2/+2
* pc: Add a description for the i8042 property * kvm: support for nested FRED * tests/unit: fix warning when compiling test-nested-aio-poll with LTO * kvm: refactoring of VM creation * target/i386: expose IBPB-BRTYPE and SBPB CPUID bits to the guest * hw/char: clean up serial * remove virtfs-proxy-helper * target/i386/kvm: Report which action failed in kvm_arch_put/get_registers * qom: improvements to object_resolve_path*() # -----BEGIN PGP SIGNATURE----- # # iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmb++MsUHHBib256aW5p # QHJlZGhhdC5jb20ACgkQv/vSX3jHroPVnwf/cdvfxvDm22tEdlh8vHlV17HtVdcC # Hw334M/3PDvbTmGzPBg26lzo4nFS6SLrZ8ETCeqvuJrtKzqVk9bI8ssZW5KA4ijM # nkxguRPHO8E6U33ZSucc+Hn56+bAx4I2X80dLKXJ87OsbMffIeJ6aHGSEI1+fKVh # pK7q53+Y3lQWuRBGhDIyKNuzqU4g+irpQwXOhux63bV3ADadmsqzExP6Gmtl8OKM # DylPu1oK7EPZumlSiJa7Gy1xBqL4Rc4wGPNYx2RVRjp+i7W2/Y1uehm3wSBw+SXC # a6b7SvLoYfWYS14/qCF4cBL3sJH/0f/4g8ZAhDDxi2i5kBr0/5oioDyE/A== # =/zo4 # -----END PGP SIGNATURE----- # gpg: Signature made Thu 03 Oct 2024 21:04:27 BST # gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83 # gpg: issuer "pbonzini@redhat.com" # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full] # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full] # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83 * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (23 commits) qom: update object_resolve_path*() documentation qom: set *ambiguous on all paths qom: rename object_resolve_path_type() "ambiguousp" target/i386/kvm: Report which action failed in kvm_arch_put/get_registers kvm: Allow kvm_arch_get/put_registers to accept Error** accel/kvm: refactor dirty ring setup minikconf: print error entirely on stderr 9p: remove 'proxy' filesystem backend driver hw/char: Extract serial-mm hw/char/serial.h: Extract serial-isa.h hw: Remove unused inclusion of hw/char/serial.h target/i386: Expose IBPB-BRTYPE and SBPB CPUID bits to the guest kvm: refactor core virtual machine creation into its own function kvm/i386: replace identity_base variable with a constant kvm/i386: refactor kvm_arch_init and split it into smaller functions kvm: replace fprintf with error_report()/printf() in kvm_init() kvm/i386: fix return values of is_host_cpu_intel() kvm/i386: make kvm_filter_msr() and related definitions private to kvm module hw/i386/pc: Add a description for the i8042 property tests/unit: remove block layer code from test-nested-aio-poll ... Signed-off-by: Peter Maydell <peter.maydell@linaro.org> # Conflicts: # hw/arm/Kconfig # hw/arm/pxa2xx.c
2024-10-03  kvm: Allow kvm_arch_get/put_registers to accept Error**  (Julia Suvorova)  1 file, -2/+2
This is necessary to provide discernible error messages to the caller. Signed-off-by: Julia Suvorova <jusual@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Link: https://lore.kernel.org/r/20240927104743.218468-2-jusual@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-01  target/arm: Avoid target_ulong for physical address lookups  (Ard Biesheuvel)  2 files, -10/+10
target_ulong is typedef'ed as a 32-bit integer when building the qemu-system-arm target, and this is smaller than the size of an intermediate physical address when LPAE is being used. Given that Linux may place leaf level user page tables in high memory when built for LPAE, the kernel will crash with an external abort as soon as it enters user space when running with more than ~3 GiB of system RAM. So replace target_ulong with vaddr in places where it may carry an address value that is not representable in 32 bits. Fixes: f3639a64f602ea ("target/arm: Use softmmu tlbs for page table walking") Cc: qemu-stable@nongnu.org Reported-by: Arnd Bergmann <arnd@arndb.de> Tested-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Message-id: 20240927071051.1444768-1-ardb+git@google.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
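A small standalone illustration of the underlying problem (not QEMU code): when target_ulong is 32 bits, assigning an LPAE intermediate physical address above 4 GiB to it silently drops the high bits.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t ipa = 0x12345678ULL + (4ULL << 30);  /* leaf page table above 4 GiB */
        uint32_t truncated = ipa;                     /* 32-bit target_ulong behaviour */

        printf("full IPA 0x%" PRIx64 " becomes 0x%" PRIx32 "\n", ipa, truncated);
        return 0;
    }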
2024-09-28  Merge tag 'pull-request-2024-09-25' of https://gitlab.com/thuth/qemu into staging  (Peter Maydell)  1 file, -1/+0
staging * Convert more Avocado tests to the new functional test framework * Clean up assert() statements, use g_assert_not_reached() when possible * Improve output of the gitlab CI jobs # -----BEGIN PGP SIGNATURE----- # # iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmbz7xgRHHRodXRoQHJl # ZGhhdC5jb20ACgkQLtnXdP5wLbWm6A//eVn+tzyyKCX/xdXlf7XyVpezvRpTFPOS # HyO0WMkCf2kGmu6qYKx/fDZg86opdQzPLH2gPkuVrGOMZ0Z2630DjH0jNih8lL9Q # J1oRX5YlU92chlzNmq59WB/j9CKd91ILtOoaPBuZkDob57yGEYVzCPqetVvF7L2+ # +rbnccrNPumGJFt035fxUGiGfgsmp28MHQzDwQdyr38uGjyNlqvqidfC8Vj1qzqP # B7HvhGB/vkF0eHaanMt2el/ZuLKf+qeCi//F/CiXGMYnuKXyShA/Db6xvMElw1jB # aQdwphP71IO+cxjJLaNjDHKGFstArsM/E21qlaSTBi+FTmPiwVULpVTiBmWsjhOh # /klpdgRHf0hL2MciYKyOWgjlTocx3rEKjCTe2U5tpta9fp9CrlgMQotjDZIbohGI # ULNahrW3Zmg4EmXDApfhYMXsQsSgWas9QSkmxzJzDp0VC7tf2Oq7RxeySrlw9MCx # OG2qQY+rNcJ3NnpATjfAJpT1kg/IahDOCNHfLEaj1u13XVQIthVADvHwy5WxbwRP # mwp3V9e9sUoznkM2eV646lzmkMim/WdYBF0YpT7eBs80+GoXZ0thx9IqWmwzX/ox # rndBczVN+RY6PydJP40yljdvS7ArRT73wHqL6yKHfDpvFc4/p5mxTWwLQ3yJbXbE # T3I+wtgfBU8= # =FH7b # -----END PGP SIGNATURE----- # gpg: Signature made Wed 25 Sep 2024 12:08:08 BST # gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5 # gpg: issuer "thuth@redhat.com" # gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full] # gpg: aka "Thomas Huth <thuth@redhat.com>" [full] # gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full] # gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown] # Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5 * tag 'pull-request-2024-09-25' of https://gitlab.com/thuth/qemu: (44 commits) .gitlab-ci.d: Make separate collapsible log sections for build and test .gitlab-ci.d: Split build and test in cross build job templates scripts/checkpatch.pl: emit error when using assert(false) tests/qtest: remove return after g_assert_not_reached() qom: remove return after g_assert_not_reached() qobject: remove return after g_assert_not_reached() migration: remove return after g_assert_not_reached() hw/ppc: remove return after g_assert_not_reached() hw/pci: remove return after g_assert_not_reached() hw/net: remove return after g_assert_not_reached() hw/hyperv: remove return after g_assert_not_reached() include/qemu: remove return after g_assert_not_reached() tcg/loongarch64: remove break after g_assert_not_reached() fpu: remove break after g_assert_not_reached() target/riscv: remove break after g_assert_not_reached() target/arm: remove break after g_assert_not_reached() hw/tpm: remove break after g_assert_not_reached() hw/scsi: remove break after g_assert_not_reached() hw/net: remove break after g_assert_not_reached() hw/acpi: remove break after g_assert_not_reached() ... Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-24  target/arm: remove break after g_assert_not_reached()  (Pierrick Bouvier)  1 file, -1/+0
This patch is part of a series that moves towards a consistent use of g_assert_not_reached() rather than an ad hoc mix of different assertion mechanisms. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org> Message-ID: <20240919044641.386068-22-pierrick.bouvier@linaro.org> Signed-off-by: Thomas Huth <thuth@redhat.com>
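The pattern being cleaned up, shown with hypothetical names: g_assert_not_reached() never returns, so a break (or return) after it is dead code.

    switch (op) {
    case OP_ADD:
        do_add();
        break;
    default:
        g_assert_not_reached();
        /* was followed by a redundant 'break;' -- now removed */
    }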
2024-09-20  license: Update deprecated SPDX tag LGPL-2.0+ to LGPL-2.0-or-later  (Philippe Mathieu-Daudé)  1 file, -1/+1
The 'LGPL-2.0+' license identifier has been deprecated since license list version 2.0rc2 [1] and replaced by the 'LGPL-2.0-or-later' [2] tag. [1] https://spdx.org/licenses/LGPL-2.0+.html [2] https://spdx.org/licenses/LGPL-2.0-or-later.html Mechanical patch running: $ sed -i -e s/LGPL-2.0+/LGPL-2.0-or-later/ \ $(git grep -l 'SPDX-License-Identifier: LGPL-2.0+$') Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-09-20  mark <zlib.h> with for-crc32 in a consistent manner  (Michael Tokarev)  2 files, -2/+2
In many cases, <zlib.h> is only included for the crc32 function, and in some of them there is a comment saying so, but phrased in different ways. In one place (hw/net/rtl8139.c), another #include had been added between the comment and the <zlib.h> include. Put all such comments on the same line as the #include, make them consistent, and also add a few missing comments, including in hw/nvram/mac_nvram.c, which uses adler32 instead. There are no code changes. Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
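The resulting convention is simply a short comment on the include line itself, e.g.:

    #include <zlib.h> /* for crc32 */
    #include <zlib.h> /* for adler32 */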
2024-09-19  target/arm: Correct ID_AA64ISAR1_EL1 value for neoverse-v1  (Peter Maydell)  1 file, -1/+1
The Neoverse-V1 TRM is a bit confused about the layout of the ID_AA64ISAR1_EL1 register, and so its table 3-6 has the wrong value for this ID register. Trust instead section 3.2.74's list of which fields are set. This means that we stop incorrectly reporting FEAT_XS as present, and now report the presence of FEAT_BF16. Cc: qemu-stable@nongnu.org Reported-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240917161337.3012188-1-peter.maydell@linaro.org
2024-09-19  target/arm: Convert scalar [US]QSHRN, [US]QRSHRN, SQSHRUN to decodetree  (Richard Henderson)  2 files, -127/+63
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240912024114.1097832-30-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19  target/arm: Convert vector [US]QSHRN, [US]QRSHRN, SQSHRUN to decodetree  (Richard Henderson)  2 files, -14/+186
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240912024114.1097832-29-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19  target/arm: Convert SQSHL, UQSHL, SQSHLU (immediate) to decodetree  (Richard Henderson)  2 files, -131/+128
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240912024114.1097832-28-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19  target/arm: Widen NeonGenNarrowEnvFn return to 64 bits  (Richard Henderson)  5 files, -78/+93
While these functions really do return a 32-bit value, widening the return type means that we need to do less marshalling between TCG types. Remove the NeonGenNarrowEnvFn typedef; add NeonGenOne64OpEnvFn. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20240912024114.1097832-27-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
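For illustration, the before/after shape of the typedef change (signatures paraphrased from the commit description rather than quoted from translate.h):

    /* before: the narrowed result came back as a 32-bit TCG value */
    typedef void NeonGenNarrowEnvFn(TCGv_i32, TCGv_ptr, TCGv_i64);

    /* after: same operation, result held in a 64-bit TCG value, so callers
     * no longer need extra extend/truncate steps */
    typedef void NeonGenOne64OpEnvFn(TCGv_i64, TCGv_ptr, TCGv_i64);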
2024-09-19  target/arm: Convert VQSHL, VQSHLU to gvec  (Richard Henderson)  6 files, -110/+94
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240912024114.1097832-26-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19  target/arm: Convert handle_scalar_simd_shli to decodetree  (Richard Henderson)  2 files, -35/+13
This includes SHL and SLI. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240912024114.1097832-25-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19  target/arm: Convert handle_scalar_simd_shri to decodetree  (Richard Henderson)  2 files, -70/+86
This includes SSHR, USHR, SSRA, USRA, SRSHR, URSHR, SRSRA, URSRA, SRI. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240912024114.1097832-24-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19  target/arm: Convert SHRN, RSHRN to decodetree  (Richard Henderson)  2 files, -48/+55
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240912024114.1097832-23-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19  target/arm: Split out subroutines of handle_shri_with_rndacc  (Richard Henderson)  1 file, -56/+82
There isn't a lot of commonality among the different paths of handle_shri_with_rndacc. Split them out into separate functions, which will be usable during the decodetree conversion. Simplify 64-bit rounding operations so that they do not require double-word arithmetic. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240912024114.1097832-22-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
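As an example of the "no double-word arithmetic" simplification, a rounding (round-half-up) unsigned right shift of a 64-bit value can be written so the rounding increment never overflows. A self-contained sketch of the idea, not the exact QEMU helper:

    #include <stdint.h>

    /* Rounding right shift by 0..64 without 128-bit intermediates. */
    static inline uint64_t rshr_round64(uint64_t x, unsigned sh)
    {
        if (sh == 0) {
            return x;
        } else if (sh < 64) {
            /* Add back the last bit shifted out; (x >> sh) <= 2^63 - 1 when
             * sh >= 1, so the +1 cannot overflow. */
            return (x >> sh) + ((x >> (sh - 1)) & 1);
        } else {
            /* Shifting out the whole value: only bit 63 can round up to 1. */
            return sh == 64 ? x >> 63 : 0;
        }
    }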