path: root/target/arm
2019-02-05  target/arm: Enable TBI for user-only  (Richard Henderson, 1 file, -0/+6)
This has been enabled in the linux kernel since v3.11 (commit d50240a5f6cea, 2013-09-03, "arm64: mm: permit use of tagged pointers at EL0"). Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190204132126.3255-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
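As a rough illustration of what is being enabled (a minimal sketch of the architectural top-byte-ignore rule, not QEMU's actual helper): with TBI on, bits [63:56] of a virtual address carry a tag and are ignored for translation, so a "clean" address is simply the pointer with bit 55 replicated into the top byte.

    #include <inttypes.h>
    #include <stdio.h>

    /* Illustration only: with TBI enabled, bits [63:56] of a virtual
     * address carry a tag and are ignored for translation. */
    static uint64_t tbi_clean(uint64_t addr)
    {
        /* Replicate bit 55 into the top byte. */
        return (uint64_t)(((int64_t)addr << 8) >> 8);
    }

    int main(void)
    {
        uint64_t tagged = 0x5a00007fffee1000ULL;  /* user pointer with tag 0x5a */
        printf("tagged %#018" PRIx64 "\nclean  %#018" PRIx64 "\n",
               tagged, tbi_clean(tagged));
        return 0;
    }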
2019-02-05  target/arm: Compute TB_FLAGS for TBI for user-only  (Peter Maydell, 2 files, -42/+24)
Enables, but does not turn on, TBI for CONFIG_USER_ONLY. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190204132126.3255-4-richard.henderson@linaro.org [PMM: adjusted #ifdeffery to placate clang, which otherwise complains about static functions that are unused in the CONFIG_USER_ONLY build] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Clean TBI for data operations in the translator  (Richard Henderson, 1 file, -101/+116)
This will allow TBI to be used in user-only mode, as well as avoid ping-ponging the softmmu TLB when TBI is in use. It will also enable other armv8 extensions. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190204132126.3255-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Add TBFLAG_A64_TBID, split out gen_top_byte_ignore  (Richard Henderson, 4 files, -36/+39)
Split out gen_top_byte_ignore in preparation for handling these data accesses; the new tbflags field is not yet honored. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190204132126.3255-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Enable BTI for -cpu max  (Richard Henderson, 1 file, -0/+4)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190128223118.5255-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Set btype for indirect branches  (Richard Henderson, 1 file, -1/+36)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190128223118.5255-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Reset btype for direct branches  (Richard Henderson, 1 file, -0/+6)
This is all of the non-exception cases of DISAS_NORETURN. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20190128223118.5255-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Default handling of BTYPE during translation  (Richard Henderson, 3 files, -2/+152)
The branch target exception for guarded pages has high priority, and only 8 instructions are valid for that case. Perform this check before doing any other decode. Clear BTYPE after all insns that neither set BTYPE nor exit via exception (DISAS_NORETURN). Not yet handled are insns that exit via DISAS_NORETURN for some other reason, like direct branches. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190128223118.5255-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
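The ordering described above can be summarized with a small sketch (all names and fields here are invented for illustration and are not QEMU's translator API):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool guarded_page;   /* GP bit of the page containing the insn       */
        unsigned btype;      /* PSTATE.BTYPE on entry to this insn           */
        bool is_landing_pad; /* insn is one of the few BTI-compatible insns  */
        bool sets_btype;     /* insn will itself write BTYPE                 */
        bool noreturn;       /* insn ends the block via DISAS_NORETURN       */
    } InsnCtx;

    static void translate_one(InsnCtx *ctx)
    {
        /* The Branch Target exception has the highest priority, so check
         * it before doing any other decode. */
        if (ctx->btype != 0 && ctx->guarded_page && !ctx->is_landing_pad) {
            printf("raise Branch Target exception\n");
            return;
        }

        printf("decode and translate the insn\n");

        /* Clear BTYPE after every insn that neither sets it nor exits
         * via DISAS_NORETURN. */
        if (!ctx->sets_btype && !ctx->noreturn) {
            ctx->btype = 0;
        }
    }

    int main(void)
    {
        InsnCtx bad  = { .guarded_page = true, .btype = 1 };
        InsnCtx good = { .guarded_page = true, .btype = 1, .is_landing_pad = true };
        translate_one(&bad);    /* faults: guarded page, BTYPE != 0, no landing pad */
        translate_one(&good);   /* decodes normally, BTYPE is then cleared */
        return 0;
    }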
2019-02-05  target/arm: Cache the GP bit for a page in MemTxAttrs  (Richard Henderson, 1 file, -0/+6)
Caching the bit means that we will not have to re-walk the page tables to look up the bit during translation. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20190128223118.5255-6-richard.henderson@linaro.org [PMM: no need to OR in guarded bit status] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Add BT and BTYPE to tb->flags  (Richard Henderson, 4 files, -7/+23)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190128223118.5255-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Add PSTATE.BTYPE  (Richard Henderson, 2 files, -2/+9)
Place this in its own field within ENV, as that will make it easier to reset from within TCG generated code. With the change to pstate_read/write, exception entry and return are automatically handled. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190128223118.5255-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Introduce isar_feature_aa64_bti  (Richard Henderson, 1 file, -0/+10)
Also create field definitions for id_aa64pfr1 from ARMv8.5. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190128223118.5255-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-01  target/arm: fix decoding of B{,L}RA{A,B}  (Remi Denis-Courmont, 1 file, -1/+1)
A flawed test led to the instructions always being treated as unallocated encodings. Fixes: https://bugs.launchpad.net/bugs/1813460 Signed-off-by: Remi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-01  target/arm: fix AArch64 virtual address space size  (Remi Denis-Courmont, 1 file, -1/+1)
Since QEMU does not yet support the ARMv8.2-LVA (Large Virtual Address) extension, the VA address space is 48 bits plus a sign bit. User mode can only handle the positive half of the address space, so that makes a limit of 48 bits. (With LVA, it would be 53 and 52 bits respectively.) The incorrectly large address space conflicts with PAuth instructions, which use bits 48-54 and 56-63 for the pointer authentication code. This also conflicts with (as yet unsupported by QEMU) data tagging and with the ARMv8.5-MTE extension. Signed-off-by: Remi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
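As an illustration of the conflict (assuming the 48-bit VA configuration described above, with TBI off; this is not QEMU's code), the pointer authentication code occupies bits [54:48] and [63:56], so a user-space address must fit entirely within bits [47:0]:

    #include <inttypes.h>
    #include <stdio.h>

    /* Bit 55 still selects the upper/lower address range, so it is not
     * part of the PAC mask. */
    #define VA_BITS 48
    static const uint64_t PAC_BITS = (0x7fULL << VA_BITS) | (0xffULL << 56);
    static const uint64_t VA_MASK  = (1ULL << VA_BITS) - 1;

    int main(void)
    {
        printf("PAC bits: %#018" PRIx64 "\n", PAC_BITS);
        printf("VA bits : %#018" PRIx64 "\n", VA_MASK);
        return 0;
    }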
2019-02-01  target/arm: Always enable pac keys for user-only  (Richard Henderson, 2 files, -60/+3)
Drop the pac properties. This approach cannot work as written because the properties are applied before arm_cpu_reset, which zeros SCTLR_EL1 (amongst everything else). We can re-introduce the properties if they turn out to be useful. But since linux 5.0 enables all of the keys, they may not be. Fixes: 1ae9cfbd470 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-01  arm: Clarify the logic of set_pc()  (Julia Suvorova, 3 files, -19/+25)
Until now the purpose of set_pc was unclear: should callers use it to apply a raw value to the PC, or does it also perform additional fixups, for example setting the Thumb bit on Arm CPUs? Define set_pc to mean “Configure the PC, as was done in the ELF file”, and implement the synchronize_with_tb hook so that the PC is preserved into cpu_tb_exec. Signed-off-by: Julia Suvorova <jusual@mail.ru> Acked-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 20190129121817.7109-1-jusual@mail.ru Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-01  target/arm: Enable API, APK bits in SCR, HCR  (Richard Henderson, 1 file, -0/+6)
These bits become writable with the ARMv8.3-PAuth extension. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190129143511.12311-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-01  target/arm: Add a timer to predict PMU counter overflow  (Aaron Lindsay OS, 3 files, -2/+92)
Make PMU overflow interrupts more accurate by using a timer to predict when a counter will overflow, rather than waiting for an event that would otherwise give us an opportunity to check. Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190124162401.5111-3-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
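The prediction amounts to converting the headroom left in a 32-bit counter into a deadline. A minimal sketch (invented names, a known event rate assumed; not QEMU's timer API):

    #include <inttypes.h>
    #include <stdio.h>

    /* Sketch only: compute when a 32-bit PMU counter will wrap, so a
     * timer can be armed for that instant instead of waiting for the
     * next opportunistic check. */
    static int64_t overflow_deadline_ns(uint32_t counter,
                                        uint64_t events_per_sec,
                                        int64_t now_ns)
    {
        uint64_t remaining = 0x100000000ULL - counter;  /* counts until wrap */
        return now_ns + (int64_t)(remaining * 1000000000ULL / events_per_sec);
    }

    int main(void)
    {
        /* e.g. a cycle counter at 1 GHz, 1000 counts from overflow */
        printf("deadline: %" PRId64 " ns\n",
               overflow_deadline_ns(0xfffffc18u, 1000000000ULL, 0));
        return 0;
    }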
2019-02-01  target/arm: Send interrupts on PMU counter overflow  (Aaron Lindsay OS, 1 file, -10/+51)
Whenever we notice that a counter overflow has occurred, send an interrupt. This is made more reliable with the addition of a timer in a follow-on commit. Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190124162401.5111-2-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-01  target/arm/translate-a64: Fix mishandling of size in FCMLA decode  (Peter Maydell, 1 file, -1/+1)
In disas_simd_indexed(), for the case of "complex fp", each indexable element is a complex pair, so the total size is twice that indicated in the 'size' field in the encoding. We were trying to do this "double the size" operation with a left shift by 1, but this is incorrect because the 'size' field is a MO_8/MO_16/MO_32/MO_64 value, and doubling the size should be done by a simple increment. This meant we were mishandling FCMLA (by element) of values where the real and imaginary parts are 32-bit floats, and would incorrectly UNDEF this encoding. (No other insns take this code path, and for 16-bit floats it happens that 1 << 1 and 1 + 1 are both the same). Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190129140411.682-3-peter.maydell@linaro.org
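The arithmetic at issue, reduced to a standalone check (this mirrors the reasoning in the message above rather than the decoder itself): 'size' is log2 of the element width, so doubling the width means size + 1, and size << 1 only coincides with that for 16-bit elements.

    #include <stdio.h>

    int main(void)
    {
        /* size is MO_8=0, MO_16=1, MO_32=2, MO_64=3 (log2 of the width) */
        for (int size = 1; size <= 2; size++) {
            printf("size=%d: size<<1=%d (%d bytes), size+1=%d (%d bytes)\n",
                   size, size << 1, 1 << (size << 1),
                   size + 1, 1 << (size + 1));
        }
        return 0;
    }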
2019-02-01  target/arm/translate-a64: Fix FCMLA decoding error  (Peter Maydell, 1 file, -1/+1)
The FCMLA (by element) instruction exists in the "vector x indexed element" encoding group, but not in the "scalar x indexed element" group. Correctly UNDEF the unallocated encodings. Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190129140411.682-2-peter.maydell@linaro.org
2019-02-01  target/arm/translate-a64: Don't underdecode SDOT and UDOT  (Peter Maydell, 1 file, -1/+1)
In the AdvSIMD scalar x indexed element and vector x indexed element encoding group, the SDOT and UDOT instructions are vector only, and their opcode is unallocated in the scalar group. Correctly UNDEF this unallocated encoding. Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190125182626.9221-8-peter.maydell@linaro.org
2019-02-01  target/arm/translate-a64: Don't underdecode FP insns  (Peter Maydell, 1 file, -1/+21)
In the encoding groups
* floating-point data-processing (1 source)
* floating-point data-processing (2 source)
* floating-point data-processing (3 source)
* floating-point immediate
* floating-point compare
* floating-point conditional compare
* floating-point conditional select
bit 31 is M and bit 29 is S (and bit 30 is 0, already checked at this point in the decode). None of these groups allocate any encoding for M=1 or S=1. We checked this in disas_fp_compare(), disas_fp_ccomp() and disas_fp_csel(), but missed it in disas_fp_1src(), disas_fp_2src(), disas_fp_3src() and disas_fp_imm(). We also missed that in the fp immediate encoding the imm5 field must be all zeroes. Correctly UNDEF the unallocated encodings here. Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190125182626.9221-7-peter.maydell@linaro.org
2019-02-01  target/arm/translate-a64: Don't underdecode add/sub extended register  (Peter Maydell, 1 file, -1/+2)
In the "add/subtract (extended register)" encoding group, the "opt" field in bits [23:22] must be zero. Correctly UNDEF the unallocated encodings where this field is not zero. Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190125182626.9221-6-peter.maydell@linaro.org
2019-02-01  target/arm/translate-a64: Don't underdecode SIMD ld/st single  (Peter Maydell, 1 file, -1/+10)
In the AdvSIMD load/store single structure encodings, the non-post-indexed case should have zeroes in [20:16] (which is the Rm field for the post-indexed case). Bit 31 must also be zero (a check we got right in ldst_multiple but not here). Correctly UNDEF these unallocated encodings. Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190125182626.9221-5-peter.maydell@linaro.org
2019-02-01  target/arm/translate-a64: Don't underdecode SIMD ld/st multiple  (Peter Maydell, 1 file, -1/+6)
In the AdvSIMD load/store multiple structures encodings, the non-post-indexed case should have zeroes in [20:16] (which is the Rm field for the post-indexed case). Correctly UNDEF the currently unallocated encodings which have non-zeroes in those bits. Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190125182626.9221-4-peter.maydell@linaro.org
2019-02-01  target/arm/translate-a64: Don't underdecode PRFM  (Peter Maydell, 1 file, -1/+1)
The PRFM prefetch insn in the load/store with imm9 encodings requires idx field 0b00; we were underdecoding this by only checking !is_unpriv (which is equivalent to idx != 2). Correctly UNDEF the unallocated encodings where idx == 0b01 and 0b11 as well as 0b10. Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190125182626.9221-3-peter.maydell@linaro.org
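The before/after predicates, shown as a standalone comparison (illustrative only; the real decoder operates on the idx field of the imm9 encodings):

    #include <stdbool.h>
    #include <stdio.h>

    /* PRFM in the imm9 encodings is only allocated when idx == 0b00.
     * The old predicate, !is_unpriv, is equivalent to idx != 2 and so
     * wrongly accepted idx == 1 and idx == 3. */
    int main(void)
    {
        for (unsigned idx = 0; idx < 4; idx++) {
            bool old_ok = (idx != 2);   /* what !is_unpriv amounted to */
            bool new_ok = (idx == 0);   /* what the architecture requires */
            printf("idx=%u  old=%s  new=%s\n", idx,
                   old_ok ? "accept" : "UNDEF", new_ok ? "accept" : "UNDEF");
        }
        return 0;
    }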
2019-02-01  target/arm/translate-a64: Don't underdecode system instructions  (Peter Maydell, 1 file, -1/+5)
The "system instructions" and "system register move" subcategories of "branches, exception generating and system instructions" for A64 only apply if bits [23:22] are zero; other values are currently unallocated. Correctly UNDEF these unallocated encodings. Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190125182626.9221-2-peter.maydell@linaro.org
2019-01-29  target/arm: Don't clear supported PMU events when initializing PMCEID1  (Aaron Lindsay OS, 3 files, -19/+22)
A bug was introduced during a respin of: commit 57a4a11b2b281bb548b419ca81bfafb214e4c77a target/arm: Add array for supported PMU events, generate PMCEID[01]_EL0 This patch introduced two calls to get_pmceid() during CPU initialization - one each for PMCEID0 and PMCEID1. In addition to building the register values, get_pmceid() clears an internal array mapping event numbers to their implementations (supported_event_map) before rebuilding it. This is an optimization since much of the logic is shared. However, since it was called twice, the contents of supported_event_map reflect only the events in PMCEID1 (the second call to get_pmceid()). Fix this bug by moving the initialization of PMCEID0 and PMCEID1 back into a single function call, and name it more appropriately since it is doing more than simply generating the contents of the PMCEID[01] registers. Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190123195814.29253-1-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-29  target/arm: v8m: Ensure IDAU is respected if SAU is disabled  (Thomas Roth, 1 file, -9/+10)
The current behavior of v8m_security_lookup in helper.c only checks whether the IDAU specifies a higher security if the SAU is enabled. If SAU.ALLNS is set to 1, this will lead to addresses being treated as non-secure, even though the IDAU indicates that they must be secure. This patch changes the behavior to also check the IDAU if the SAU is currently disabled. (This brings the behaviour here into line with the v8M Arm ARM SecurityCheck() pseudocode.) Signed-off-by: Thomas Roth <code@stacksmashing.net> Message-id: CAGGekkuc+-tvp5RJP7CM+Jy_hJF7eiRHZ96132sb=hPPCappKg@mail.gmail.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> [PMM: added pseudocode ref to the commit message, fixed comment style] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
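A condensed sketch of the corrected combination (names invented; see the Arm ARM SecurityCheck() pseudocode for the authoritative rules): the IDAU attribution is consulted whether or not the SAU is enabled, and the more secure result wins.

    #include <stdbool.h>
    #include <stdio.h>

    /* Sketch only: combine SAU and IDAU attributions for one address. */
    static bool region_is_secure(bool sau_enabled, bool sau_allns,
                                 bool sau_hit_secure, bool idau_secure)
    {
        /* SAU disabled: SAU.ALLNS picks the default attribution. */
        bool sau_secure = sau_enabled ? sau_hit_secure : !sau_allns;

        /* Fix: apply the IDAU result unconditionally, not only when
         * the SAU is enabled; the more secure attribution wins. */
        return sau_secure || idau_secure;
    }

    int main(void)
    {
        /* SAU disabled with ALLNS=1, but IDAU says Secure: must be Secure. */
        printf("%s\n", region_is_secure(false, true, false, true)
                       ? "Secure" : "Non-secure");
        return 0;
    }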
2019-01-29  target/arm: Fix validation of 32-bit address spaces for aa32  (Richard Henderson, 1 file, -7/+14)
When tsz == 0, aarch32 selects the address space via exclusion, and there are no "top_bits" remaining that require validation. Fixes: ba97be9f4a4 Reported-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190125184913.5970-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Implement PMSWINC  (Aaron Lindsay, 1 file, -2/+37)
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181211151945.29137-14-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: PMU: Set PMCR.N to 4  (Aaron Lindsay, 1 file, -5/+5)
This both advertises that we support four counters and enables them because the pmu_num_counters() reads this value from PMCR. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20181211151945.29137-13-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: PMU: Add instruction and cycle events  (Aaron Lindsay, 1 file, -46/+44)
The instruction event is only enabled when icount is used; cycles are always supported. Always defining get_cycle_count (but altering its behavior depending on CONFIG_USER_ONLY) allows us to remove some CONFIG_USER_ONLY #defines throughout the rest of the code. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20181211151945.29137-12-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Finish implementation of PM[X]EVCNTR and PM[X]EVTYPER  (Aaron Lindsay, 2 files, -17/+282)
Add arrays to hold the registers, the definitions themselves, access functions, and logic to reset counters when PMCR.P is set. Update filtering code to support counters other than PMCCNTR. Support migration with raw read/write functions. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181211151945.29137-11-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Add array for supported PMU events, generate PMCEID[01]_EL0  (Aaron Lindsay, 4 files, -11/+79)
This commit doesn't add any supported events, but provides the framework for adding them. We store the pm_event structs in a simple array, and provide the mapping from the event numbers to array indexes in the supported_event_map array. Because the value of PMCEID[01] depends upon which events are supported at runtime, generate it dynamically. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20181211151945.29137-10-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Make PMCEID[01]_EL0 64 bit registers, add PMCEID[23]  (Aaron Lindsay, 2 files, -4/+19)
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20181211151945.29137-9-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Define FIELDs for ID_DFR0  (Aaron Lindsay, 1 file, -0/+9)
This is immediately necessary for the PMUv3 implementation to check ID_DFR0.PerfMon to enable/disable specific features, but defines the full complement of fields for possible future use elsewhere. Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20181211151945.29137-8-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Implement PMOVSSET  (Aaron Lindsay, 1 file, -0/+28)
Add an array for PMOVSSET so we only define it for v7ve+ platforms. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181211151945.29137-7-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Allow AArch32 access for PMCCFILTR  (Aaron Lindsay, 1 file, -1/+26)
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181211151945.29137-6-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Filter cycle counter based on PMCCFILTR_EL0  (Aaron Lindsay, 3 files, -8/+101)
Rename arm_ccnt_enabled to pmu_counter_enabled, and add logic to only return 'true' if the specified counter is enabled and neither prohibited nor filtered. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Signed-off-by: Aaron Lindsay <aclindsa@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181211151945.29137-5-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Swap PMU values before/after migrations  (Aaron Lindsay, 2 files, -2/+28)
Because of the PMU's design, many register accesses have side effects which are inter-related, meaning that the normal method of saving CP registers can result in inconsistent state. These side-effects are largely handled in pmu_op_start/finish functions which can be called before and after the state is saved/restored. By doing this and adding raw read/write functions for the affected registers, we avoid migration-related inconsistencies. Signed-off-by: Aaron Lindsay <aclindsa@gmail.com> Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20181211151945.29137-4-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Reorganize PMCCNTR accesses  (Aaron Lindsay, 2 files, -53/+98)
pmccntr_read and pmccntr_write contained duplicate code that was already being handled by pmccntr_sync. Consolidate the duplicated code into two functions: pmccntr_op_start and pmccntr_op_finish. Add a companion to c15_ccnt in CPUARMState so that we can simultaneously save both the architectural register value and the last underlying cycle count - this ensures time isn't lost and will also allow us to access the 'old' architectural register value in order to detect overflows in later patches. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Signed-off-by: Aaron Lindsay <aclindsa@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20181211151945.29137-3-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
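A minimal sketch of the start/finish idea (invented names and a stand-in cycle source; not QEMU's code): keep both the architectural counter value and the underlying cycle count it was last synced against, so accesses with side effects never lose time.

    #include <inttypes.h>
    #include <stdio.h>

    /* Stand-in for the emulator's underlying cycle source. */
    static uint64_t fake_cycles;
    static uint64_t cycles_now(void) { return fake_cycles; }

    typedef struct {
        uint64_t counter;     /* architectural PMCCNTR value             */
        uint64_t last_cycles; /* cycle count at the last synchronization */
    } CcntState;

    /* Fold elapsed cycles into the architectural value before an access
     * with side effects (read, write, migration save). */
    static void pmccntr_op_start(CcntState *s)
    {
        uint64_t now = cycles_now();
        s->counter += now - s->last_cycles;
        s->last_cycles = now;
    }

    /* Re-baseline afterwards so counting resumes from "now". */
    static void pmccntr_op_finish(CcntState *s)
    {
        s->last_cycles = cycles_now();
    }

    int main(void)
    {
        CcntState s = { 0, 0 };
        fake_cycles = 1000;        /* time passes */
        pmccntr_op_start(&s);      /* e.g. before a register read */
        printf("PMCCNTR = %" PRIu64 "\n", s.counter);
        pmccntr_op_finish(&s);
        return 0;
    }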
2019-01-21  target/arm: Tidy TBI handling in gen_a64_set_pc  (Richard Henderson, 1 file, -43/+23)
We can perform this with fewer operations. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190108223129.5570-32-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Enable PAuth for user-only  (Richard Henderson, 2 files, -0/+63)
Add 4 attributes that control the EL1 enable bits, as we may not always want to turn on pointer authentication with -cpu max. However, by default they are enabled. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20190108223129.5570-31-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Enable PAuth for -cpu max  (Richard Henderson, 1 file, -0/+4)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190108223129.5570-30-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Add PAuth system registers  (Richard Henderson, 1 file, -0/+70)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190108223129.5570-29-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Implement pauth_computepac  (Richard Henderson, 1 file, -1/+241)
This is the main crypto routine, an implementation of QARMA. This matches, as much as possible, ARM pseudocode. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20190108223129.5570-28-richard.henderson@linaro.org [PMM: fixed minor checkpatch nits] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Implement pauth_addpac  (Richard Henderson, 1 file, -1/+41)
This is not really functional yet, because the crypto is not yet implemented. This, however, follows the AddPAC pseudo function. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190108223129.5570-27-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Implement pauth_auth  (Richard Henderson, 1 file, -1/+20)
This is not really functional yet, because the crypto is not yet implemented. This, however, follows the Auth pseudo function. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190108223129.5570-26-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>