path: root/target/arm/ptw.c
2023-07-17  target/arm/ptw.c: Account for FEAT_RME when applying {N}SW, SA bits  (Peter Maydell; 1 file changed, +8/-5)
In get_phys_addr_twostage() the code that applies the effects of VSTCR.{SA,SW} and VTCR.{NSA,NSW} only updates result->f.attrs.secure. Now that we also have f.attrs.space for FEAT_RME, we need to keep the two in sync. These bits only have an effect for Secure space translations, not for Root, so use the input in_space field to determine whether to apply them rather than the input is_secure. This doesn't actually make a difference because Root translations are never two-stage, but it's a little clearer. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230710152130.3928330-4-peter.maydell@linaro.org
2023-07-17  target/arm: Fix S1_ptw_translate() debug path  (Peter Maydell; 1 file changed, +32/-5)
In commit fe4a5472ccd6 we rearranged the logic in S1_ptw_translate() so that the debug-access "call get_phys_addr_*" codepath is used both when S1 is doing ptw reads from stage 2 and when it is doing ptw reads from physical memory. However, we didn't update the calculation of s2ptw->in_space and s2ptw->in_secure to account for the "ptw reads from physical memory" case. This meant that debug accesses when in Secure state broke. Create a new function S2_security_space() which returns the correct security space to use for the ptw load, and use it to determine the correct .in_secure and .in_space fields for the stage 2 lookup for the ptw load. Reported-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230710152130.3928330-3-peter.maydell@linaro.org Fixes: fe4a5472ccd6 ("target/arm: Use get_phys_addr_with_struct in S1_ptw_translate") Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-07-17  target/arm/ptw.c: Add comments to S1Translate struct fields  (Peter Maydell; 1 file changed, +40/-0)
Add comments to the in_* fields in the S1Translate struct that explain what they're doing. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230710152130.3928330-2-peter.maydell@linaro.org
2023-07-03  plugins: force slow path when plugins instrument memory ops  (Alex Bennée; 1 file changed, +7/-7)
The lack of SVE memory instrumentation has been an omission in plugin handling since it was introduced. Fortunately we can utilise the probe_* functions to force all memory accesses to follow the slow path. We do this by checking the access type and the presence of plugin memory callbacks, and if they are set we return the TLB_MMIO flag. We have to jump through a few hoops in user mode to re-use the flag, but it has the desired effect:

  ./qemu-system-aarch64 -display none -serial mon:stdio \
    -M virt -cpu max -semihosting-config enable=on \
    -kernel ./tests/tcg/aarch64-softmmu/memory-sve \
    -plugin ./contrib/plugins/libexeclog.so,ifilter=st1w,afilter=0x40001808 -d plugin

gives (disas doesn't currently understand st1w):

  0, 0x40001808, 0xe54342a0, ".byte 0xa0, 0x42, 0x43, 0xe5", store, 0x40213010, RAM, store, 0x40213014, RAM, store, 0x40213018, RAM

And for user-mode:

  ./qemu-aarch64 \
    -plugin contrib/plugins/libexeclog.so,afilter=0x4007c0 \
    -d plugin \
    ./tests/tcg/aarch64-linux-user/sha512-sve

gives:

  1..10
  ok 1 - do_test(&tests[i])
  0, 0x4007c0, 0xa4004b80, ".byte 0x80, 0x4b, 0x00, 0xa4", load, 0x5500800370, load, 0x5500800371, load, 0x5500800372, load, 0x5500800373, load, 0x5500800374, load, 0x5500800375, load, 0x5500800376, load, 0x5500800377, load, 0x5500800378, load, 0x5500800379, load, 0x550080037a, load, 0x550080037b, load, 0x550080037c, load, 0x550080037d, load, 0x550080037e, load, 0x550080037f, load, 0x5500800380, load, 0x5500800381, load, 0x5500800382, load, 0x5500800383, load, 0x5500800384, load, 0x5500800385, load, 0x5500800386, load, 0x5500800387, load, 0x5500800388, load, 0x5500800389, load, 0x550080038a, load, 0x550080038b, load, 0x550080038c, load, 0x550080038d, load, 0x550080038e, load, 0x550080038f, load, 0x5500800390, load, 0x5500800391, load, 0x5500800392, load, 0x5500800393, load, 0x5500800394, load, 0x5500800395, load, 0x5500800396, load, 0x5500800397, load, 0x5500800398, load, 0x5500800399, load, 0x550080039a, load, 0x550080039b, load, 0x550080039c, load, 0x550080039d, load, 0x550080039e, load, 0x550080039f, load, 0x55008003a0, load, 0x55008003a1, load, 0x55008003a2, load, 0x55008003a3, load, 0x55008003a4, load, 0x55008003a5, load, 0x55008003a6, load, 0x55008003a7, load, 0x55008003a8, load, 0x55008003a9, load, 0x55008003aa, load, 0x55008003ab, load, 0x55008003ac, load, 0x55008003ad, load, 0x55008003ae, load, 0x55008003af

(4007c0 is the ld1b in the sha512-sve) Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Cc: Robert Henry <robhenry@microsoft.com> Cc: Aaron Lindsay <aaron@os.amperecomputing.com> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20230630180423.558337-20-alex.bennee@linaro.org>
2023-07-03  target/arm: make arm_casq_ptw CONFIG_TCG only  (Alex Bennée; 1 file changed, +2/-2)
The ptw code is accessed by non-TCG code (specifically arm_pamax and arm_cpu_get_phys_page_attrs_debug) but most of it is really only for TCG emulation. Seeing as we already assert for a non-TARGET_AARCH64 build, let's extend that test rather than further messing with the ifdef ladder. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20230630180423.558337-19-alex.bennee@linaro.org>
2023-06-23  target/arm: Implement the granule protection check  (Richard Henderson; 1 file changed, +232/-17)
Place the check at the end of get_phys_addr_with_struct, so that we check all physical results. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230620124418.805717-20-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23  target/arm: Use get_phys_addr_with_struct for stage2  (Richard Henderson; 1 file changed, +1/-10)
This fixes a bug in which we failed to initialize the result attributes properly after the memset. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230620124418.805717-17-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23  target/arm: Move s1_is_el0 into S1Translate  (Richard Henderson; 1 file changed, +12/-15)
Instead of passing this to get_phys_addr_lpae, stash it in the S1Translate structure. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230620124418.805717-16-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23  target/arm: Use get_phys_addr_with_struct in S1_ptw_translate  (Richard Henderson; 1 file changed, +18/-28)
Do not provide a fast-path for physical addresses, as those will need to be validated for GPC. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230620124418.805717-15-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23  target/arm: Handle no-execute for Realm and Root regimes  (Richard Henderson; 1 file changed, +46/-6)
While Root and Realm may read and write data from other spaces, neither may execute from other pa spaces. This happens for Stage1 EL3, EL2, EL2&0, and Stage2 EL1&0. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230620124418.805717-14-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23  target/arm: Handle Block and Page bits for security space  (Richard Henderson; 1 file changed, +73/-16)
With Realm security state, bit 55 of a block or page descriptor during the stage2 walk becomes the NS bit; during the stage1 walk the NS bit (bit 5) is RES0. With Root security state, bit 11 of the block or page descriptor during the stage1 walk becomes the NSE bit. Rather than collecting an NS bit and applying it later, compute the output pa space from the input pa space and unconditionally assign it. This means that we no longer need to adjust the output space earlier for the NSTable bit. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230620124418.805717-13-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23  target/arm: NSTable is RES0 for the RME EL3 regime  (Richard Henderson; 1 file changed, +14/-14)
Test in_space instead of in_secure so that we don't switch out of Root space. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230620124418.805717-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23  target/arm: Pipe ARMSecuritySpace through ptw.c  (Richard Henderson; 1 file changed, +71/-15)
Add input and output space members to S1Translate. Set and adjust them in S1_ptw_translate, and the various points at which we drop secure state. Initialize the space in get_phys_addr; for now leave get_phys_addr_with_secure considering only secure vs non-secure spaces. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230620124418.805717-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23  target/arm: Remove __attribute__((nonnull)) from ptw.c  (Richard Henderson; 1 file changed, +2/-4)
This was added in 7e98e21c098 as part of a reorg in which one of the arguments had been legally NULL, and this caught actual instances. Now that the reorg is complete, this serves little purpose. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230620124418.805717-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23  target/arm: Introduce ARMMMUIdx_Phys_{Realm,Root}  (Richard Henderson; 1 file changed, +8/-2)
With FEAT_RME, there are four physical address spaces. For now, just define the symbols, and mention them in the same spots as the other Phys indexes in ptw.c. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230620124418.805717-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23  target/arm: Adjust the order of Phys and Stage2 ARMMMUIdx  (Richard Henderson; 1 file changed, +5/-7)
It will be helpful to have ARMMMUIdx_Phys_* in the same relative order as the ARMSecuritySpace enumerators. This requires an adjustment to the nstable check. While there, check for being in secure state rather than relying on clearing the low bit making no change to non-secure state. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230620124418.805717-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-07  target/arm: Only include tcg/oversized-guest.h if CONFIG_TCG  (Richard Henderson; 1 file changed, +3/-2)
Fixes the build for --disable-tcg. This header is only needed for cross-hosting. Without CONFIG_TCG, we know this is an AArch64 host, CONFIG_ATOMIC64 will be set, and the TCG_OVERSIZED_GUEST block will never be compiled. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05  tcg: Split out tcg/oversized-guest.h  (Richard Henderson; 1 file changed, +1/-0)
Move a use of TARGET_LONG_BITS out of tcg/tcg.h. Include the new file only where required. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05  target/arm: Fix test of TCG_OVERSIZED_GUEST  (Richard Henderson; 1 file changed, +6/-1)
The symbol is always defined, even if only to 0, but we wanted to test for TCG_OVERSIZED_GUEST == 0. With this fixed, the #error is reached while building arm-softmmu, because TCG_OVERSIZED_GUEST is not true (nor supposed to be true) for an arm32 guest on a 32-bit host. That's OK, because this feature doesn't apply to arm32; add an #ifdef for TARGET_AARCH64. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
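As a minimal stand-alone illustration of the difference (not the QEMU sources): a macro that is always defined, even when defined to 0, always satisfies #ifdef, and only #if actually tests its value.

    #include <stdio.h>

    #define TCG_OVERSIZED_GUEST 0   /* always defined, here to 0 */

    int main(void)
    {
    #ifdef TCG_OVERSIZED_GUEST      /* taken even though the value is 0 */
        puts("#ifdef: taken");
    #endif
    #if TCG_OVERSIZED_GUEST         /* only taken when the value is non-zero */
        puts("#if: taken");
    #else
        puts("#if: not taken");
    #endif
        return 0;
    }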
2023-05-12  target/arm: Correct AArch64.S2MinTxSZ 32-bit EL1 input size check  (Peter Maydell; 1 file changed, +2/-12)
In check_s2_mmu_setup() we have a check that is attempting to implement the part of AArch64.S2MinTxSZ that is specific to when EL1 is AArch32:

  if !s1aarch64 then
      // EL1 is AArch32
      min_txsz = Min(min_txsz, 24);

Unfortunately we got this wrong in two ways:

(1) The minimum txsz corresponds to a maximum inputsize, but we got the sense of the comparison wrong and were faulting for all inputsizes less than 40 bits.

(2) We try to implement this as an extra check that happens after we've done the same txsz checks we would do for an AArch64 EL1, but in fact the pseudocode is *loosening* the requirements, so that txsz values that would fault for an AArch64 EL1 do not fault for AArch32 EL1, because it does Min(old_min, 24), not Max(old_min, 24). You can see this also in the text of the Arm ARM in table D8-8, which shows that where the implemented PA size is less than 40 bits an AArch32 EL1 is still OK with a configured stage2 T0SZ for a 40 bit IPA, whereas if EL1 is AArch64 then the T0SZ must be big enough to constrain the IPA to the implemented PA size.

Because of part (2), we can't do this as a separate check, but have to integrate it into aa64_va_parameters(). Add a new argument to that function to indicate that EL1 is 32-bit. All the existing callsites except the one in get_phys_addr_lpae() can pass 'false', because they are either doing a lookup for a stage 1 regime or else they don't care about the tsz/tsz_oob fields. Cc: qemu-stable@nongnu.org Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1627 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230509092059.3176487-1-peter.maydell@linaro.org
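A self-contained sketch of the loosened bound (hypothetical helper names, not the actual aa64_va_parameters() code): the AArch32-EL1 case lowers the minimum txsz with Min(), i.e. it permits a larger input size instead of adding a further restriction.

    #include <stdbool.h>
    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    /* Hypothetical stand-in for the AArch64.S2MinTxSZ calculation. */
    static int s2_min_txsz(int pamax, bool el1_is_aarch32)
    {
        /* txsz must be at least 64 - PAMax so the IPA fits the PA size */
        int min_txsz = 64 - pamax;
        if (el1_is_aarch32) {
            /* AArch32 EL1 may use a 40-bit IPA (txsz == 24) regardless */
            min_txsz = MIN(min_txsz, 24);
        }
        return min_txsz;
    }

    int main(void)
    {
        /* 36-bit PA: AArch64 EL1 needs txsz >= 28, AArch32 EL1 only >= 24 */
        printf("aarch64: %d, aarch32: %d\n",
               s2_min_txsz(36, false), s2_min_txsz(36, true));
        return 0;
    }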
2023-05-12  target/arm: Fix handling of SW and NSW bits for stage 2 walks  (Peter Maydell; 1 file changed, +51/-25)
We currently don't correctly handle the VSTCR_EL2.SW and VTCR_EL2.NSW configuration bits. These allow configuration of whether the stage 2 page table walks for Secure IPA and NonSecure IPA should do their descriptor reads from Secure or NonSecure physical addresses. (This is separate from how the translation table base address and other parameters are set: an NS IPA always uses VTTBR_EL2 and VTCR_EL2 for its base address and walk parameters, regardless of the NSW bit, and similarly for Secure.)

Provide a new function ptw_idx_for_stage_2() which returns the MMU index to use for descriptor reads, and use it to set up the .in_ptw_idx wherever we call get_phys_addr_lpae() (a simplified sketch of the selection logic follows this entry). For a stage 2 walk:
 * .in_ptw_idx should be ptw_idx_for_stage_2() of the .in_mmu_idx
 * .in_secure should be true if .in_mmu_idx is Stage2_S

This allows us to correct S1_ptw_translate() so that it consistently always sets its (out_secure, out_phys) to the result it gets from the S2 walk (either by calling get_phys_addr_lpae() or by TLB lookup). This makes better conceptual sense because the S2 walk should return us an (address space, address) tuple, not an address that we then randomly assign to S or NS.

Our previous handling of SW and NSW was broken, so guest code trying to use these bits to put the s2 page tables in the "other" address space wouldn't work correctly. Cc: qemu-stable@nongnu.org Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1600 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230504135425.2748672-3-peter.maydell@linaro.org
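A simplified, self-contained sketch of that selection logic (stand-in types and field names, not the real ptw.c helper, which reads VTCR_EL2/VSTCR_EL2 from the CPU state), on the assumption that a set SW or NSW bit means "do the descriptor reads in the Non-secure PA space":

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-ins: the real code uses ARMMMUIdx values and CPUARMState. */
    typedef enum { S2_IDX_NS, S2_IDX_S } S2Idx;
    typedef enum { PHYS_IDX_NS, PHYS_IDX_S } PhysIdx;

    typedef struct {
        bool secure;        /* CPU is in Secure state */
        bool vstcr_sw;      /* VSTCR_EL2.SW */
        bool vtcr_nsw;      /* VTCR_EL2.NSW */
    } Env;

    /* Which PA space holds the stage 2 descriptors for this walk? */
    static PhysIdx ptw_idx_for_stage_2(const Env *env, S2Idx s2idx)
    {
        if (!env->secure) {
            return PHYS_IDX_NS;             /* NS state: walks are always NS */
        }
        if (s2idx == S2_IDX_S) {
            /* Secure IPA: SW set moves the descriptor reads to NS PA space */
            return env->vstcr_sw ? PHYS_IDX_NS : PHYS_IDX_S;
        }
        /* NonSecure IPA: NSW set moves the descriptor reads to NS PA space */
        return env->vtcr_nsw ? PHYS_IDX_NS : PHYS_IDX_S;
    }

    int main(void)
    {
        Env env = { .secure = true, .vstcr_sw = false, .vtcr_nsw = true };
        printf("S IPA walk: %s, NS IPA walk: %s\n",
               ptw_idx_for_stage_2(&env, S2_IDX_S) == PHYS_IDX_S ? "Secure" : "NonSecure",
               ptw_idx_for_stage_2(&env, S2_IDX_NS) == PHYS_IDX_S ? "Secure" : "NonSecure");
        return 0;
    }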
2023-05-12  target/arm: Don't allow stage 2 page table walks to downgrade to NS  (Peter Maydell; 1 file changed, +3/-2)
Bit 63 in a Table descriptor is only the NSTable bit for stage 1 translations; in stage 2 it is RES0. We were incorrectly looking at it all the time. This causes problems if:
 * the stage 2 table descriptor was incorrectly setting the RES0 bit
 * we are doing a stage 2 translation in Secure address space for a NonSecure stage 1 regime -- in this case we would incorrectly do an immediate downgrade to NonSecure

A bug elsewhere in the code currently prevents us from getting to the second situation, but when we fix that it will be possible. Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20230504135425.2748672-2-peter.maydell@linaro.org
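A sketch of the kind of guard described above (simplified, with hypothetical helper names; not the literal ptw.c hunk): only a stage 1 walk may treat descriptor bit 63 as NSTable, while a stage 2 walk must ignore it.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Sketch: decide whether this Table descriptor drops the rest of the
     * walk to NonSecure.  Bit 63 is NSTable only for stage 1; for stage 2
     * it is RES0 and must not be consumed.
     */
    static bool table_desc_drops_to_ns(uint64_t descriptor, bool is_stage2,
                                       bool currently_secure)
    {
        if (is_stage2) {
            return false;                   /* bit 63 is RES0 here */
        }
        bool nstable = (descriptor >> 63) & 1;
        return currently_secure && nstable;
    }

    int main(void)
    {
        uint64_t desc = 1ULL << 63;         /* descriptor with bit 63 set */
        printf("stage1: %d, stage2: %d\n",
               table_desc_drops_to_ns(desc, false, true),
               table_desc_drops_to_ns(desc, true, true));
        return 0;
    }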
2023-04-20  target/arm: Implement FEAT_PAN3  (Peter Maydell; 1 file changed, +13/-1)
FEAT_PAN3 adds an EPAN bit to SCTLR_EL1 and SCTLR_EL2, which allows the PAN bit to make memory non-privileged-read/write if it is user-executable as well as if it is user-read/write. Implement this feature and enable it in the AArch64 'max' CPU. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230331145045.2584941-4-peter.maydell@linaro.org
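A condensed sketch of that rule (hypothetical helper, not the QEMU permission-check code): with EPAN set, a privileged data access also faults under PAN when the page is only user-executable.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical page permission summary for the sketch. */
    typedef struct {
        bool user_read;
        bool user_write;
        bool user_exec;
    } PagePerms;

    /* Does PAN forbid this privileged data access? */
    static bool pan_blocks_access(PagePerms p, bool pan, bool epan)
    {
        if (!pan) {
            return false;
        }
        if (p.user_read || p.user_write) {
            return true;                /* classic PAN: user-accessible data */
        }
        /* FEAT_PAN3: user-executable alone is enough once EPAN is set */
        return epan && p.user_exec;
    }

    int main(void)
    {
        PagePerms exec_only = { .user_exec = true };
        printf("PAN only: %d, PAN+EPAN: %d\n",
               pan_blocks_access(exec_only, true, false),
               pan_blocks_access(exec_only, true, true));
        return 0;
    }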
2023-04-10  target/arm: Copy guarded bit in combine_cacheattrs  (Richard Henderson; 1 file changed, +1/-0)
The guarded bit comes from the stage1 walk. Fixes: Coverity CID 1507929 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20230407185149.3253946-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-04-10  target/arm: PTE bit GP only applies to stage1  (Richard Henderson; 1 file changed, +5/-5)
Only perform the extract of GP during the stage1 walk. Reported-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20230407185149.3253946-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-03-06  target/arm: Rewrite check_s2_mmu_setup  (Richard Henderson; 1 file changed, +97/-76)
Integrate neighboring code from get_phys_addr_lpae which computed starting level, as it is easier to validate when doing both at the same time. Mirror the checks at the start of AArch{64,32}.S2Walk, especially S2InvalidSL and S2InconsistentSL. This reverts 49ba115bb74, which was incorrect -- there is nothing in the ARM pseudocode that depends on TxSZ, i.e. outputsize; the pseudocode is consistent in referencing PAMax. Fixes: 49ba115bb74 ("target/arm: Pass outputsize down to check_s2_mmu_setup") Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230227225832.816605-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-28  accel/tcg: Add 'size' param to probe_access_full  (Richard Henderson; 1 file changed, +1/-1)
Change to match the recent change to probe_access_flags. All existing callers updated to supply 0, so no change in behaviour. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-28  accel/tcg: Add 'size' param to probe_access_flags()  (Daniel Henrique Barboza; 1 file changed, +1/-1)
probe_access_flags() as it is today uses probe_access_full(), which in turn uses probe_access_internal() with size = 0. probe_access_internal() then uses the size to call the tlb_fill() callback for the given CPU. This size param ('fault_size' as probe_access_internal() calls it) is ignored by most existing .tlb_fill callback implementations, e.g. arm_cpu_tlb_fill(), ppc_cpu_tlb_fill(), x86_cpu_tlb_fill() and mips_cpu_tlb_fill() to name a few. But RISC-V riscv_cpu_tlb_fill() actually uses it. The 'size' parameter is used to check for PMP (Physical Memory Protection) access. This is necessary because PMP does not make any guarantees about all the bytes of the same page having the same permissions, i.e. the same page can have different PMP properties, so we're forced to make sub-page range checks. To allow RISC-V emulation to do a probe_access_flags() that covers PMP, we need to either add a 'size' param to the existing probe_access_flags() or create a new interface (e.g. probe_access_range_flags). There are quite a few probe_* APIs already, so let's add a 'size' param to probe_access_flags() and re-use this API. This is done by open coding what probe_access_full() does inside probe_access_flags() and passing the 'size' param to probe_access_internal(). Existing probe_access_flags() callers use size = 0 to not change their current API usage. 'size' is asserted to enforce single page access like probe_access() already does. No behavioral changes intended. Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Message-Id: <20230223234427.521114-2-dbarboza@ventanamicro.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-27  target/arm: Don't access TCG code when debugging with KVM  (Fabiano Rosas; 1 file changed, +4/-0)
When TCG is disabled this part of the code should not be reachable, so wrap it with an ifdef for now. Signed-off-by: Fabiano Rosas <farosas@suse.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-03  target/arm: Fix physical address resolution for Stage2  (Richard Henderson; 1 file changed, +1/-1)
Conversion to probe_access_full missed applying the page offset. Cc: qemu-stable@nongnu.org Reported-by: Sid Manning <sidneym@quicinc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20230126233134.103193-1-richard.henderson@linaro.org Fixes: f3639a64f602 ("target/arm: Use softmmu tlbs for page table walking") Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-23  target/arm: Fix in_debug path in S1_ptw_translate  (Richard Henderson; 1 file changed, +2/-2)
During the conversion, the test against get_phys_addr_lpae got inverted, meaning that successful translations went to the 'failed' label. Cc: qemu-stable@nongnu.org Fixes: f3639a64f60 ("target/arm: Use softmmu tlbs for page table walking") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1417 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230114054605.2977022-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-05  target/arm: Add PMSAv8r functionality  (Tobias Röhmel; 1 file changed, +104/-22)
Add PMSAv8r translation. Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20221206102504.165775-7-tobias.roehmel@rwth-aachen.de Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-05  target/arm: Make stage_2_format for cache attributes optional  (Tobias Röhmel; 1 file changed, +8/-2)
The v8R PMSAv8 has a two-stage MPU translation process, but, unlike VMSAv8, the stage 2 attributes are in the same format as the stage 1 attributes (8-bit MAIR format). Rather than converting the MAIR format to the format used for VMSA stage 2 (bits [5:2] of a VMSA stage 2 descriptor) and then converting back to do the attribute combination, allow combined_attrs_nofwb() to accept s2 attributes that are already in the MAIR format. We move the assert() to combined_attrs_fwb(), because that function really does require a VMSA stage 2 attribute format. (We will never get there for v8R, because PMSAv8 does not implement FEAT_S2FWB.) Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20221206102504.165775-4-tobias.roehmel@rwth-aachen.de Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-05  target/arm: Set lg_page_size to 0 if either S1 or S2 asks for it  (Peter Maydell; 1 file changed, +13/-3)
In get_phys_addr_twostage() we set the lg_page_size of the result to the maximum of the stage 1 and stage 2 page sizes. This works for the case where we do want to create a TLB entry, because we know the common TLB code only creates entries of the TARGET_PAGE_SIZE and asking for a size larger than that only means that invalidations invalidate the whole larger area. However, if lg_page_size is smaller than TARGET_PAGE_SIZE this effectively means "don't create a TLB entry"; in this case if either S1 or S2 said "this covers less than a page and can't go in a TLB" then the final result also should be marked that way. Set the resulting page size to 0 if either stage asked for a less-than-a-page entry, and expand the comment to explain what's going on. This has no effect for VMSA because currently the VMSA lookup always returns results that cover at least TARGET_PAGE_SIZE; however when we add v8R support it will reuse this code path, and for v8R the S1 and S2 results can be smaller than TARGET_PAGE_SIZE. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221212142708.610090-1-peter.maydell@linaro.org
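A small sketch of the combination rule with stand-in names (assuming 4K target pages; a value below TARGET_PAGE_BITS means "do not create a TLB entry for this"):

    #include <stdio.h>

    #define TARGET_PAGE_BITS 12     /* assumption: 4K target pages */

    /* Merge the stage 1 and stage 2 lg_page_size values for the result. */
    static int combine_lg_page_size(int s1_lg, int s2_lg)
    {
        if (s1_lg < TARGET_PAGE_BITS || s2_lg < TARGET_PAGE_BITS) {
            return 0;       /* either stage said "can't go in a TLB" */
        }
        return s1_lg > s2_lg ? s1_lg : s2_lg;   /* invalidate the larger area */
    }

    int main(void)
    {
        printf("%d %d %d\n",
               combine_lg_page_size(12, 21),    /* 4K s1, 2M s2  -> 21 */
               combine_lg_page_size(0, 21),     /* sub-page s1   -> 0  */
               combine_lg_page_size(16, 12));   /* 64K s1, 4K s2 -> 16 */
        return 0;
    }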
2022-11-22  target/arm: Use signed quantity to represent VMSAv8-64 translation level  (Ard Biesheuvel; 1 file changed, +2/-2)
The LPA2 extension implements 52-bit virtual addressing for 4k and 16k translation granules, and for the former, this means an additional level of translation is needed. This means we start counting at -1 instead of 0 when doing a walk, and so 'level' is now a signed quantity, and should be typed as such. So turn it from uint32_t into int32_t. This avoids a level of -1 getting misinterpreted as being >= 3, and terminating a page table walk prematurely with a bogus output address. Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Philippe Mathieu-Daudé <f4bug@amsat.org> Cc: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
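A tiny stand-alone demonstration of the failure mode (not QEMU code): with an unsigned type, a starting level of -1 wraps around and looks like "level >= 3".

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t ulevel = -1;       /* wraps to 0xffffffff */
        int32_t  slevel = -1;       /* stays -1, as an LPA2 walk needs */

        printf("unsigned: level >= 3 is %s\n", ulevel >= 3 ? "true" : "false");
        printf("signed:   level >= 3 is %s\n", slevel >= 3 ? "true" : "false");
        return 0;
    }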
2022-11-22  target/arm: Don't do two-stage lookup if stage 2 is disabled  (Peter Maydell; 1 file changed, +4/-3)
In get_phys_addr_with_struct(), we call get_phys_addr_twostage() if the CPU supports EL2. However, we don't check here that stage 2 is actually enabled. Instead we only check that inside get_phys_addr_twostage() to skip stage 2 translation. This means that even if stage 2 is disabled we still tell the stage 1 lookup to do its page table walks via stage 2. This works by luck for normal CPU accesses, but it breaks for debug accesses, which are used by the disassembler and also by semihosting file reads and writes, because the debug case takes a different code path inside S1_ptw_translate(). This means that setups that use semihosting for file loads are broken (a regression since 7.1, introduced in recent ptw refactoring), and that sometimes disassembly in debug logs reports "unable to read memory" rather than showing the guest insns. Fix the bug by hoisting the "is stage 2 enabled?" check up to get_phys_addr_with_struct(), so that we handle S2 disabled the same way we do the "no EL2" case, with a simple single stage lookup. Reported-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20221121212404.1450382-1-peter.maydell@linaro.org
2022-11-21  target/arm: Limit LPA2 effective output address when TCR.DS == 0  (Ard Biesheuvel; 1 file changed, +8/-0)
With LPA2, the effective output address size is at most 48 bits when TCR.DS == 0. This case is currently unhandled in the page table walker, where we happily assume LVA/64k granule when outputsize > 48 and param.ds == 0, resulting in the wrong conversion being used from a page table descriptor to a physical address:

  if (outputsize > 48) {
      if (param.ds) {
          descaddr |= extract64(descriptor, 8, 2) << 50;
      } else {
          descaddr |= extract64(descriptor, 12, 4) << 48;
      }
  }

So cap the outputsize to 48 when TCR.DS is cleared, as per the architecture. Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Philippe Mathieu-Daudé <f4bug@amsat.org> Cc: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221116170316.259695-1-ardb@kernel.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
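A minimal sketch of the cap, assuming the 64k-granule/LVA case (where 52-bit output is not governed by TCR.DS) is handled separately and omitted here:

    #include <stdbool.h>
    #include <stdio.h>

    /* Sketch: clamp before the descriptor-to-PA conversion quoted above. */
    static int effective_output_size(int outputsize, bool tcr_ds)
    {
        if (!tcr_ds && outputsize > 48) {
            outputsize = 48;    /* without TCR.DS, LPA2 output is <= 48 bits */
        }
        return outputsize;
    }

    int main(void)
    {
        printf("%d %d\n",
               effective_output_size(52, true),     /* 52 */
               effective_output_size(52, false));   /* 48 */
        return 0;
    }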
2022-11-04  target/arm: Two fixes for secure ptw  (Richard Henderson; 1 file changed, +8/-7)
Reversed the sense of non-secure in get_phys_addr_lpae, and failed to initialize attrs.secure for ARMMMUIdx_Phys_S. Fixes: 48da29e4 ("target/arm: Add ptw_idx to S1Translate") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1293 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-11-04  target/arm: Fix Privileged Access Never (PAN) for aarch32  (Timofey Kutergin; 1 file changed, +30/-5)
When we implemented the PAN support we theoretically wanted to support it for both AArch32 and AArch64, but in practice several bugs made it essentially unusable with an AArch32 guest. Fix all those problems:
 - use CPSR.PAN to check for PAN state in aarch32 mode
 - throw a permission fault during address translation when PAN is enabled and the kernel tries to access a user-accessible page
 - ignore the SCTLR_XP bit for armv7 and armv8 (it conflicts with SCTLR_SPAN).

Signed-off-by: Timofey Kutergin <tkutergin@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20221027112619.2205229-1-tkutergin@gmail.com [PMM: tweak commit message] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Use the max page size in a 2-stage ptw  (Richard Henderson; 1 file changed, +10/-1)
We had only been reporting the stage2 page size. This causes problems if stage1 is using a larger page size (16k, 2M, etc), but stage2 is using a smaller page size, because cputlb does not set large_page_{addr,mask} properly. Fix by using the max of the two page sizes. Reported-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221024051851.3074715-15-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Implement FEAT_HAFDBS, dirty bit portion  (Richard Henderson; 1 file changed, +16/-0)
Perform the atomic update for hardware management of the dirty bit. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221024051851.3074715-14-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Implement FEAT_HAFDBS, access flag portion  (Richard Henderson; 1 file changed, +155/-22)
Perform the atomic update for hardware management of the access flag. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221024051851.3074715-13-richard.henderson@linaro.org [PMM: Fix accidental PROT_WRITE to PAGE_WRITE; add missing main-loop.h include] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
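A schematic of the update pattern in plain C11 atomics on an ordinary host buffer (the real code goes through arm_casq_ptw() and the softmmu TLB, and re-evaluates the walk if the descriptor changed underneath it):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DESC_AF (1ULL << 10)    /* Access Flag bit in a block/page descriptor */

    /*
     * Sketch of the hardware-managed access flag update: atomically set AF
     * in the in-memory descriptor.  Returns the descriptor value to keep
     * using; if another agent wrote first, their value is returned instead.
     */
    static uint64_t set_access_flag(_Atomic uint64_t *descp, uint64_t old_desc)
    {
        uint64_t new_desc = old_desc | DESC_AF;
        if (atomic_compare_exchange_strong(descp, &old_desc, new_desc)) {
            return new_desc;        /* our update landed */
        }
        return old_desc;            /* updated by atomic_compare_exchange */
    }

    int main(void)
    {
        _Atomic uint64_t descriptor = 0x3;   /* valid page descriptor, AF clear */
        uint64_t seen = atomic_load(&descriptor);
        uint64_t now = set_access_flag(&descriptor, seen);
        printf("AF now %s\n", (now & DESC_AF) ? "set" : "clear");
        return 0;
    }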
2022-10-27  target/arm: Tidy merging of attributes from descriptor and table  (Richard Henderson; 1 file changed, +16/-18)
Replace some gotos with some nested if statements. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20221024051851.3074715-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Consider GP an attribute in get_phys_addr_lpae  (Richard Henderson; 1 file changed, +2/-4)
Both GP and DBM are in the upper attribute block. Extend the computation of attrs to include them, then simplify the setting of guarded. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20221024051851.3074715-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Don't shift attrs in get_phys_addr_lpae  (Richard Henderson; 1 file changed, +15/-16)
Leave the upper and lower attributes in the place they originate from in the descriptor. Shifting them around is confusing, since one cannot read the bit numbers out of the manual. Also, new attributes have been added which would alter the shifts. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20221024051851.3074715-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Fix fault reporting in get_phys_addr_lpae  (Richard Henderson; 1 file changed, +13/-18)
Always overriding fi->type was incorrect, as we would not properly propagate the fault type from S1_ptw_translate, or arm_ldq_ptw. Simplify things by providing a new label for a translation fault. For other faults, store into fi directly. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20221024051851.3074715-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Remove loop from get_phys_addr_lpae  (Richard Henderson; 1 file changed, +92/-92)
The unconditional loop was used both to iterate over levels and to control parsing of attributes. Use an explicit goto in both cases. While this appears less clean for iterating over levels, we will need to jump back into the middle of this loop for atomic updates, which is even uglier. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221024051851.3074715-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Move S1_ptw_translate outside arm_ld[lq]_ptw  (Richard Henderson; 1 file changed, +22/-19)
Separate S1 translation from the actual lookup. Will enable lpae hardware updates. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221024051851.3074715-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Add ptw_idx to S1Translate  (Richard Henderson; 1 file changed, +54/-17)
Hoist the computation of the mmu_idx for the ptw up to get_phys_addr_with_struct and get_phys_addr_twostage. This removes the duplicate check for stage2 disabled from the middle of the walk, performing it only once. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Tested-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20221024051851.3074715-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Introduce regime_is_stage2  (Richard Henderson; 1 file changed, +6/-8)
Reduce the amount of typing required for this check. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221024051851.3074715-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
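The helper amounts to a one-line predicate over the MMU index; a self-contained sketch with stand-in enumerators:

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for the ARMMMUIdx enumerators used by ptw.c. */
    typedef enum {
        ARMMMUIdx_E10_0,
        ARMMMUIdx_Stage2,
        ARMMMUIdx_Stage2_S,
    } ARMMMUIdx;

    /* Sketch of regime_is_stage2(): true for either stage 2 MMU index. */
    static bool regime_is_stage2(ARMMMUIdx mmu_idx)
    {
        return mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S;
    }

    int main(void)
    {
        printf("%d %d\n", regime_is_stage2(ARMMMUIdx_Stage2_S),
                          regime_is_stage2(ARMMMUIdx_E10_0));
        return 0;
    }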