path: root/target/arm
Age  Commit message  (Author; files changed, lines -/+)
2020-04-03  target/arm: Remove obsolete TODO note from get_phys_addr_lpae()  (Peter Maydell; 1 file, -6/+1)
An old comment in get_phys_addr_lpae() claims that the code does not support the different format TCR for VTCR_EL2. This used to be true but it is not true now (in particular the aa64_va_parameters() and aa32_va_parameters() functions correctly handle the different register format by checking whether the mmu_idx is Stage2). Remove the out of date parts of the comment. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200331143407.3186-1-peter.maydell@linaro.org
2020-04-03  target/arm: PSTATE.PAN should not clear exec bits  (Peter Maydell; 1 file, -2/+4)
Our implementation of the PSTATE.PAN bit incorrectly cleared all access permission bits for privileged access to memory which is user-accessible. It should only affect the privileged read and write permissions; execute permission is dealt with via XN/PXN instead. Fixes: 81636b70c226dc27d7ebc8d Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200330170651.20901-1-peter.maydell@linaro.org
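A minimal sketch of the distinction (not the actual patch; the surrounding condition and variable names are illustrative):

    /* PAN only removes privileged read/write access to user-accessible
     * pages; execute permission stays as computed from XN/PXN. */
    if (pan_applies) {                      /* hypothetical condition */
        prot &= ~(PAGE_READ | PAGE_WRITE);  /* keep PAGE_EXEC */
    }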
2020-04-03  target/arm: don't expose "ieee_half" via gdbstub  (Alex Bennée; 1 file, -1/+6)
While support for parsing ieee_half in the XML description was added to gdb in 2019 (a6d0f249), there is no easy way for the gdbstub to know whether the gdb at the other end will understand it. Disable it for now and allow older gdbs to successfully connect to the default SVE-enabled -cpu max QEMU. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200402143913.24005-1-alex.bennee@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-30  target/arm: fix incorrect current EL bug in aarch32 exception emulation  (Changbin Du; 1 file, -1/+4)
arm_current_el() should be invoked after mode switching; otherwise we get a wrong current EL value, since the current EL is also determined by the current mode. Fixes: 4a2696c0d4 ("target/arm: Set PAN bit as required on exception entry") Signed-off-by: Changbin Du <changbin.du@gmail.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200328140232.17278-1-changbin.du@gmail.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
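A sketch of the ordering fix (switch_mode() stands in for whatever updates CPSR.M; this is not the literal patch):

    switch_mode(env, new_mode);        /* change the mode first ...           */
    new_el = arm_current_el(env);      /* ... so the EL we read matches it    */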
2020-03-23  target/arm: Move computation of index in handle_simd_dupe  (Richard Henderson; 1 file, -1/+2)
Coverity reports a BAD_SHIFT with ctz32(imm5), with imm5 == 0. This is an invalid encoding, but we diagnose that just below by rejecting size > 3. Avoid the warning by sinking the computation of index below the check. Reported-by: Coverity (CID 1421965) Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200320160622.8040-4-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
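A sketch of the reordering (variable names follow the commit message; not the verbatim patch):

    int size = ctz32(imm5);          /* would be 32 if imm5 == 0 */
    if (size > 3) {
        /* invalid encoding, rejected before index is ever computed */
        unallocated_encoding(s);
        return;
    }
    int index = imm5 >> (size + 1);  /* shift amount is now known to be sane */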
2020-03-23  target/arm: Assert immh != 0 in disas_simd_shift_imm  (Richard Henderson; 1 file, -0/+3)
Coverity raised a shed-load of errors cascading from inferring that clz32(immh) might yield 32, because immh might be 0. While immh cannot be 0 per the encoding, it is not obvious even to a human how we've checked that: via the filtering provided by data_proc_simd[]. Reported-by: Coverity (CID 1421923, and more) Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200320160622.8040-3-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
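A sketch of making that invariant explicit (whether the real code uses assert() or tcg_debug_assert() is an assumption):

    assert(immh != 0);                 /* guaranteed by data_proc_simd[] filtering */
    int size = 32 - clz32(immh) - 1;   /* clz32() can no longer return 32 here     */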
2020-03-23  target/arm: Rearrange disabled check for watchpoints  (Richard Henderson; 1 file, -5/+6)
Coverity rightly notes that ctz32(bas) on 0 will return 32, which makes the len calculation a BAD_SHIFT. A value of 0 in DBGWCR<n>_EL1.BAS is reserved. Simply move the existing check we have for this case. Reported-by: Coverity (CID 1421964) Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200320160622.8040-2-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
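A sketch of the rearranged check (register and field names per the commit message; the length calculation itself is omitted):

    int bas = extract64(wcr, 5, 8);   /* DBGWCR<n>_EL1.BAS */
    if (bas == 0) {
        /* Reserved value: no watchpoint, and ctz32(0) == 32 never reaches
         * the len calculation below. */
        return;
    }
    int basstart = ctz32(bas);        /* safe: bas has at least one bit set */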
2020-03-19  Merge remote-tracking branch 'remotes/ehabkost/tags/x86-and-machine-pull-request' into staging  (Peter Maydell; 2 files, -5/+5)
x86 and machine queue for 5.0 soft freeze

Bug fixes:
* memory encryption: Disable mem merge (Dr. David Alan Gilbert)

Features:
* New EPYC CPU definitions (Babu Moger)
* Denverton-v2 CPU model (Tao Xu)
* New 'note' field on versioned CPU models (Tao Xu)

Cleanups:
* x86 CPU topology cleanups (Babu Moger)
* cpu: Use DeviceClass reset instead of a special CPUClass reset (Peter Maydell)

# gpg: Signature made Wed 18 Mar 2020 01:16:43 GMT
# gpg: using RSA key 5A322FD5ABC4D3DBACCFD1AA2807936F984DC5A6
# gpg: issuer "ehabkost@redhat.com"
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>" [full]
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6

* remotes/ehabkost/tags/x86-and-machine-pull-request:
  hw/i386: Rename apicid_from_topo_ids to x86_apicid_from_topo_ids
  hw/i386: Update structures to save the number of nodes per package
  hw/i386: Remove unnecessary initialization in x86_cpu_new
  machine: Add SMP Sockets in CpuTopology
  hw/i386: Consolidate topology functions
  hw/i386: Introduce X86CPUTopoInfo to contain topology info
  cpu: Use DeviceClass reset instead of a special CPUClass reset
  machine/memory encryption: Disable mem merge
  hw/i386: Rename X86CPUTopoInfo structure to X86CPUTopoIDs
  i386: Add 2nd Generation AMD EPYC processors
  i386: Add missing cpu feature bits in EPYC model
  target/i386: Add new property note to versioned CPU models
  target/i386: Add Denverton-v2 (no MPX) CPU model

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-18  Merge remote-tracking branch 'remotes/stsquad/tags/pull-testing-and-gdbstub-170320-1' into staging  (Peter Maydell; 5 files, -64/+335)
Testing and gdbstub updates:

- docker updates for VirGL
- re-factor gdbstub for static GDBState
- re-factor gdbstub for dynamic arrays
- add SVE support to arm gdbstub
- add some guest debug tests to check-tcg
- add aarch64 userspace register tests
- remove packet size limit to gdbstub
- simplify gdbstub monitor code
- report vContSupported in gdbstub to use proper single-step

# gpg: Signature made Tue 17 Mar 2020 17:47:46 GMT
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44

* remotes/stsquad/tags/pull-testing-and-gdbstub-170320-1: (28 commits)
  gdbstub: Fix single-step issue by confirming 'vContSupported+' feature to gdb
  gdbstub: do not split gdb_monitor_write payload
  gdbstub: change GDBState.last_packet to GByteArray
  tests/tcg/aarch64: add test-sve-ioctl guest-debug test
  tests/tcg/aarch64: add SVE iotcl test
  tests/tcg/aarch64: add a gdbstub testcase for SVE registers
  tests/guest-debug: add a simple test runner
  configure: allow user to specify what gdb to use
  tests/tcg/aarch64: userspace system register test
  target/arm: don't bother with id_aa64pfr0_read for USER_ONLY
  target/arm: generate xml description of our SVE registers
  target/arm: default SVE length to 64 bytes for linux-user
  target/arm: explicitly encode regnum in our XML
  target/arm: prepare for multiple dynamic XMLs
  gdbstub: extend GByteArray to read register helpers
  target/i386: use gdb_get_reg helpers
  target/m68k: use gdb_get_reg helpers
  target/arm: use gdb_get_reg helpers
  gdbstub: add helper for 128 bit registers
  gdbstub: move mem_buf to GDBState and use GByteArray
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-18  Merge remote-tracking branch 'remotes/armbru/tags/pull-error-2020-03-17' into staging  (Peter Maydell; 1 file, -6/+2)
Error reporting patches for 2020-03-17

# gpg: Signature made Tue 17 Mar 2020 16:30:49 GMT
# gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg: issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

* remotes/armbru/tags/pull-error-2020-03-17:
  hw/sd/ssi-sd: fix error handling in ssi_sd_realize
  xen-block: Use one Error * variable instead of two
  hw/misc/ivshmem: Use one Error * variable instead of two
  Use &error_abort instead of separate assert()

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-17  cpu: Use DeviceClass reset instead of a special CPUClass reset  (Peter Maydell; 2 files, -5/+5)
The CPUClass has a 'reset' method. This is a legacy from when TYPE_CPU used not to inherit from TYPE_DEVICE. We don't need it any more, as we can simply use the TYPE_DEVICE reset. The 'cpu_reset()' function is kept as the API which most places use to reset a CPU; it is now a wrapper which calls device_cold_reset() and then the tracepoint function.

This change should not cause CPU objects to be reset more often than they are at the moment, because:
* nobody is directly calling device_cold_reset() or qdev_reset_all() on CPU objects
* no CPU object is on a qbus, so they will not be reset either by somebody calling qbus_reset_all()/bus_cold_reset(), or by the main "reset sysbus and everything in the qbus tree" reset that most devices are reset by

Note that this does not change the need for each machine or whatever to use qemu_register_reset() to arrange to call cpu_reset() -- that is necessary because CPU objects are not on any qbus, so they don't get reset when the qbus tree rooted at the sysbus bus is reset, and this isn't being changed here.

All the changes to the files under target/ were made using the included Coccinelle script, except:

(1) the deletion of the now-inaccurate and not terribly useful "CPUClass::reset" comments was done with a perl one-liner afterwards:

    perl -n -i -e '/ CPUClass::reset/ or print' target/*/*.c

(2) this bit of the s390 change was done by hand, because the Coccinelle script is not sophisticated enough to handle the parent_reset call being inside another function:

    | @@ -96,8 +96,9 @@ static void s390_cpu_reset(CPUState *s, cpu_reset_type type)
    |     S390CPU *cpu = S390_CPU(s);
    |     S390CPUClass *scc = S390_CPU_GET_CLASS(cpu);
    |     CPUS390XState *env = &cpu->env;
    |+    DeviceState *dev = DEVICE(s);
    |
    |-    scc->parent_reset(s);
    |+    scc->parent_reset(dev);
    |     cpu->env.sigp_order = 0;
    |     s390_cpu_set_state(S390_CPU_STATE_STOPPED, cpu);

Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20200303100511.5498-1-peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
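A sketch of the wrapper shape the message describes (the tracepoint name is an assumption):

    void cpu_reset(CPUState *cpu)
    {
        device_cold_reset(DEVICE(cpu));  /* the TYPE_DEVICE reset path */
        trace_cpu_reset(cpu);            /* hypothetical tracepoint call */
    }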
2020-03-17  target/arm: don't bother with id_aa64pfr0_read for USER_ONLY  (Alex Bennée; 1 file, -5/+15)
For system emulation we need to check the state of the GIC before we report the value. However, this isn't relevant when exporting the value to linux-user, and indeed it breaks the exported value as set by modify_arm_cp_regs. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20200316172155.971-20-alex.bennee@linaro.org>
2020-03-17  target/arm: generate xml description of our SVE registers  (Alex Bennée; 3 files, -5/+261)
We also expose the helpers to read/write the registers. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20200316172155.971-19-alex.bennee@linaro.org>
2020-03-17  target/arm: default SVE length to 64 bytes for linux-user  (Alex Bennée; 1 file, -3/+4)
The Linux kernel chooses the default of 64 bytes for SVE registers on the basis that it is the largest size on known hardware that won't grow the signal frame. We still honour the sve-max-vq property, and userspace can expand the number of lanes by calling PR_SVE_SET_VL. This should not make any difference to SVE-enabled software, as SVE is of course vector length agnostic. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20200316172155.971-18-alex.bennee@linaro.org>
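The arithmetic behind the default, as a sketch (the exact field written at reset is an assumption): 64 bytes = 512 bits = 4 x 128-bit quadwords, i.e. VQ = 4, and the ZCR_ELx.LEN encoding stores VQ - 1:

    /* default to 64-byte (VQ = 4) vectors, still capped by sve-max-vq */
    env->vfp.zcr_el[1] = MIN(cpu->sve_max_vq - 1, 3);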
2020-03-17  target/arm: explicitly encode regnum in our XML  (Alex Bennée; 3 files, -8/+13)
This is described as optional but I'm not convinced of the numbering when multiple target fragments are sent. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20200316172155.971-17-alex.bennee@linaro.org>
2020-03-17  target/arm: prepare for multiple dynamic XMLs  (Alex Bennée; 3 files, -24/+30)
We will want to generate similar dynamic XML for gdbstub support of SVE registers (the upstream doesn't use XML). To that end lightly rename a few things to make the distinction. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20200316172155.971-16-alex.bennee@linaro.org>
2020-03-17  gdbstub: extend GByteArray to read register helpers  (Alex Bennée; 4 files, -15/+12)
Instead of passing a pointer to memory, we now pass a GByteArray to all the read register helpers. They can then safely append their data in the normal way. We don't bother with this abstraction for write registers, as we have already ensured the buffer being copied from is the correct size. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Damien Hedde <damien.hedde@greensocs.com> Message-Id: <20200316172155.971-15-alex.bennee@linaro.org>
2020-03-17  target/arm: use gdb_get_reg helpers  (Alex Bennée; 1 file, -11/+7)
This is cleaner than poking memory directly and will make later clean-ups easier. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20200316172155.971-12-alex.bennee@linaro.org>
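A sketch of the pattern (the register chosen and the helper's surroundings are illustrative, but appending through gdb_get_reg64() and a GByteArray is the interface added by the previous patch):

    static int vfp_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg)
    {
        /* Append the 64-bit D register via the helper instead of poking
         * a raw byte buffer directly. */
        return gdb_get_reg64(buf, *aa32_vfp_dreg(env, reg));
    }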
2020-03-17  Use &error_abort instead of separate assert()  (Markus Armbruster; 1 file, -6/+2)
Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20200313170517.22480-2-armbru@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Alexander Bulekov <alxndr@bu.edu> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> [Unused Error *variable deleted]
2020-03-16  qom/object: Use common get/set uint helpers  (Felipe Franciosi; 1 file, -19/+3)
Several objects implemented their own uint property getters and setters, despite them being straightforward (without any checks/validations on the values themselves) and identical across objects. This makes use of an enhanced API for object_property_add_uintXX_ptr() which offers default setters. Some of these setters used to update the value even if the type visit failed (eg. because the value being set overflowed over the given type). The new setter introduces a check for these errors, not updating the value if an error occurred. The error is propagated. Signed-off-by: Felipe Franciosi <felipe@nutanix.com> Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-12  target/arm: kvm: Inject events at the last stage of sync  (Beata Michalska; 2 files, -10/+20)
KVM_SET_VCPU_EVENTS might actually lead to vcpu registers being modified. As such this should be the last step of sync to avoid potential overwriting of whatever changes KVM might have done. Signed-off-by: Beata Michalska <beata.michalska@linaro.org> Reviewed-by: Andrew Jones <drjones@redhat.com> Message-id: 20200312003401.29017-2-beata.michalska@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-12  target/arm/kvm: Let kvm_arm_vgic_probe() return a bitmap  (Eric Auger; 2 files, -6/+11)
Convert kvm_arm_vgic_probe() so that it returns a bitmap of supported in-kernel VGIC emulation versions instead of the maximum version; at the moment the values can be v2 and v3. This makes it possible to expose the case where the host GICv3 also supports GICv2 emulation. This will be useful for choosing the default version in KVM-accelerated mode. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200311131618.7187-5-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-12  target/arm: Disable clean_data_tbi for system mode  (Richard Henderson; 1 file, -0/+11)
We must include the tag in the FAR_ELx register when raising an addressing exception. Which means that we should not clear out the tag during translation. We cannot at present comply with this for user mode, so we retain the clean_data_tbi function for the moment, though it no longer does what it says on the tin for system mode. This function is to be replaced with MTE, so don't worry about the slight misnaming. Buglink: https://bugs.launchpad.net/qemu/+bug/1867072 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200308012946.16303-3-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
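A sketch of the resulting split (close to, but not necessarily, the real function; gen_top_byte_ignore() is assumed to be the existing tag-stripping helper):

    static TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr)
    {
        TCGv_i64 clean = new_tmp_a64(s);
    #ifdef CONFIG_USER_ONLY
        /* user-only still strips the tag, pending a real MTE implementation */
        gen_top_byte_ignore(s, clean, addr, s->tbid);
    #else
        /* system mode: keep the tag so FAR_ELx can report it on a fault */
        tcg_gen_mov_i64(clean, addr);
    #endif
        return clean;
    }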
2020-03-12  target/arm: Check addresses for disabled regimes  (Richard Henderson; 1 file, -1/+34)
We fail to validate the upper bits of a virtual address on a translation disabled regime, as per AArch64.TranslateAddressS1Off. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200308012946.16303-2-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-12  target/arm: Fix some comment typos  (Peter Maydell; 2 files, -2/+2)
Fix a couple of comment typos. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200303174950.3298-5-peter.maydell@linaro.org
2020-03-12  target/arm: Recalculate hflags correctly after writes to CONTROL  (Peter Maydell; 3 files, -4/+16)
A write to the CONTROL register can change our current EL (by writing to the nPRIV bit). That means that we can't assume that s->current_el is still valid in trans_MSR_v7m() when we try to rebuild the hflags. Add a new helper rebuild_hflags_m32_newel() which, like the existing rebuild_hflags_a32_newel(), recalculates the current EL from scratch, and use it in trans_MSR_v7m(). This fixes an assertion about an hflags mismatch when the guest changes privilege by writing to CONTROL. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200303174950.3298-4-peter.maydell@linaro.org
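A sketch of the new helper, mirroring the existing rebuild_hflags_a32_newel() (the exact body is an assumption):

    void HELPER(rebuild_hflags_m32_newel)(CPUARMState *env)
    {
        int el = arm_current_el(env);   /* recomputed; s->current_el may be stale */
        int fp_el = fp_exception_el(env, el);
        ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, el);

        env->hflags = rebuild_hflags_m32(env, fp_el, mmu_idx);
    }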
2020-03-12  target/arm: Update hflags in trans_CPS_v7m()  (Peter Maydell; 1 file, -1/+4)
For M-profile CPUs, the FAULTMASK value affects the CPU's MMU index (it changes the NegPri bit). We update the hflags after calls to the v7m_msr helper in trans_MSR_v7m() but forgot to do so in trans_CPS_v7m(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200303174950.3298-3-peter.maydell@linaro.org
2020-03-05  target/arm: Clean address for DC ZVA  (Richard Henderson; 1 file, -1/+1)
This data access was forgotten when we added support for cleaning addresses of TBI information. Fixes: 3a471103ac1823ba Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200302175829.2183-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Use DEF_HELPER_FLAGS for helper_dc_zva  (Richard Henderson; 1 file, -1/+1)
The function does not write registers, and only reads them by implication via the exception path. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200302175829.2183-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Move helper_dc_zva to helper-a64.c  (Richard Henderson; 4 files, -94/+92)
This is an aarch64-only function. Move it out of the shared file. This patch is code movement only. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200302175829.2183-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Apply TBI to ESR_ELx in helper_exception_return  (Richard Henderson; 1 file, -1/+22)
We missed this case within AArch64.ExceptionReturn. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200302175829.2183-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Introduce core_to_aa64_mmu_idx  (Richard Henderson; 2 files, -1/+7)
If by context we know that we're in AArch64 mode, we need not test for M-profile when reconstructing the full ARMMMUIdx. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200302175829.2183-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Optimize cpu_mmu_index  (Richard Henderson; 2 files, -15/+13)
We now cache the core mmu_idx in env->hflags. Rather than recompute from scratch, extract the field. All of the uses of cpu_mmu_index within target/arm are within helpers, and env->hflags is always stable within a translation block from whence helpers are called. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200302175829.2183-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
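A sketch of the optimization (the exact hflags field name is an assumption):

    int cpu_mmu_index(CPUARMState *env, bool ifetch)
    {
        /* the core mmu_idx was cached in env->hflags; just extract it */
        return FIELD_EX32(env->hflags, TBFLAG_ANY, MMUIDX);
    }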
2020-03-05  target/arm: Replicate TBI/TBID bits for single range regimes  (Richard Henderson; 1 file, -2/+4)
Replicate the single TBI bit from TCR_EL2 and TCR_EL3 so that we can unconditionally use pointer bit 55 to index into our composite TBI1:TBI0 field. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200302175829.2183-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Honor the HCR_EL2.TTLB bit  (Richard Henderson; 1 file, -30/+55)
This bit traps EL1 access to tlb maintenance insns. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200229012811.24129-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
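A sketch of the access-check pattern used for this bit, and reused by the TPU/TPCP/TACR/TSW/TVM/TRVM patches that follow (close to, but not necessarily, the literal code):

    static CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
                                      bool isread)
    {
        /* trap EL1 TLB maintenance accesses to EL2 when HCR_EL2.TTLB is set */
        if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TTLB)) {
            return CP_ACCESS_TRAP_EL2;
        }
        return CP_ACCESS_OK;
    }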
2020-03-05  target/arm: Honor the HCR_EL2.TPU bit  (Richard Henderson; 1 file, -20/+31)
This bit traps EL1 access to cache maintenance insns that operate to the point of unification. There are no longer any references to plain aa64_cacheop_access, so remove it. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200229012811.24129-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Honor the HCR_EL2.TPCP bit  (Richard Henderson; 1 file, -8/+31)
This bit traps EL1 access to cache maintenance insns that operate to the point of coherency or persistence. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200229012811.24129-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Honor the HCR_EL2.TACR bit  (Richard Henderson; 1 file, -4/+14)
This bit traps EL1 access to the auxiliary control registers. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200229012811.24129-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Honor the HCR_EL2.TSW bit  (Richard Henderson; 1 file, -6/+16)
These bits trap EL1 access to set/way cache maintenance insns. Buglink: https://bugs.launchpad.net/bugs/1863685 Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200229012811.24129-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Honor the HCR_EL2.{TVM,TRVM} bits  (Richard Henderson; 1 file, -27/+55)
These bits trap EL1 access to various virtual memory controls. Buglink: https://bugs.launchpad.net/bugs/1855072 Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200229012811.24129-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Improve masking in arm_hcr_el2_eff  (Richard Henderson; 1 file, -4/+27)
Update the {TGE,E2H} == '11' masking to ARMv8.6. If EL2 is configured for aarch32, disable all of the bits that are RES0 in aarch32 mode. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200229012811.24129-6-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Remove EL2 and EL3 setup from user-only  (Richard Henderson; 1 file, -6/+0)
We have disabled EL2 and EL3 for user-only, which means that these registers "don't exist" and should not be set. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200229012811.24129-5-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Disable has_el2 and has_el3 for user-only  (Richard Henderson; 1 file, -2/+4)
In arm_cpu_reset, we configure many system registers so that user-only behaves as it should with a minimum of ifdefs. However, we do not set all of the system registers as required for a cpu with EL2 and EL3. Disabling EL2 and EL3 means that we will not look at those registers, which means that we don't have to worry about configuring them. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200229012811.24129-4-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Add HCR_EL2 bit definitions from ARMv8.6  (Richard Henderson; 1 file, -0/+7)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200229012811.24129-3-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Improve masking of HCR/HCR2 RES0 bits  (Richard Henderson; 1 file, -13/+25)
Don't merely start with v8.0, handle v7VE as well. Ensure that writes from aarch32 mode do not change bits in the other half of the register. Protect reads of aa64 id registers with ARM_FEATURE_AARCH64. Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200229012811.24129-2-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Implement (trivially) ARMv8.2-TTCNP  (Peter Maydell; 3 files, -0/+7)
The ARMv8.2-TTCNP extension allows an implementation to optimize by sharing TLB entries between multiple cores, provided that software declares that it's ready to deal with this by setting a CnP bit in the TTBRn_ELx. It is mandatory from ARMv8.2 onward. For QEMU's TLB implementation, sharing TLB entries between different cores would not really benefit us and would be a lot of work to implement. So we implement this extension in the "trivial" manner: we allow the guest to set and read back the CnP bit, but don't change our behaviour (this is an architecturally valid implementation choice). The only code path which looks at the TTBRn_ELx values for the long-descriptor format where the CnP bit is defined is already doing enough masking to not get confused when the CnP bit at the bottom of the register is set, so we can simply add a comment noting why we're relying on that mask. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200225193822.18874-1-peter.maydell@linaro.org
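A sketch of what the "trivial" implementation amounts to (field and macro names are assumptions): advertise the feature in ID_AA64MMFR2 and let the CnP bit be stored and read back without changing translation behaviour:

    uint64_t t = cpu->isar.id_aa64mmfr2;
    t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1);   /* TTCNP supported */
    cpu->isar.id_aa64mmfr2 = t;
    /* TTBRn_ELx bit 0 (CnP) is simply kept in the register value; the
     * existing masking of the walk base address already ignores it. */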
2020-02-28  target/arm: Implement ARMv8.3-CCIDX  (Peter Maydell; 2 files, -1/+35)
The ARMv8.3-CCIDX extension makes the CCSIDR_EL1 system ID registers have a format that uses the full 64-bit width of the register, and adds a new CCSIDR2 register so AArch32 can get at the high 32 bits. QEMU doesn't implement caches, so we just treat these ID registers as opaque values that are set to the correct constant values for each CPU. The only thing we need to do is allow 64-bit values in our ccsidr[] array and provide the CCSIDR2 accessors. We don't set the CCIDX field in our 'max' CPU because the CCSIDR constant values we use are the same as the ones used by the Cortex-A57 and they are in the old 32-bit format. This means that the extra regdef added here is unused currently, but it means that whenever in the future we add a CPU that does need the new 64-bit format it will just work when we set the ccsidr values and the ID registers for it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200224182626.29252-1-peter.maydell@linaro.org
2020-02-28  target/arm: Implement v8.4-RCPC  (Peter Maydell; 3 files, -1/+96)
The v8.4-RCPC extension implements some new instructions: * LDAPUR, LDAPURB, LDAPURH, LDAPRSB, LDAPRSH, LDAPRSW * STLUR, STLURB, STLURH These are all in a new subgroup of encodings that sits below the top-level "Loads and Stores" group in the Arm ARM. The STLUR* instructions have standard store-release semantics; the LDAPUR* have Load-AcquirePC semantics, but (as with LDAPR*) we choose to implement them as the slightly stronger Load-Acquire. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200224172846.13053-4-peter.maydell@linaro.org
2020-02-28  target/arm: Implement v8.3-RCPC  (Peter Maydell; 3 files, -0/+30)
The v8.3-RCPC extension implements three new load instructions which provide slightly weaker consistency guarantees than the existing load-acquire operations. For QEMU we choose to simply implement them with a full LDAQ barrier. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200224172846.13053-3-peter.maydell@linaro.org
2020-02-28  target/arm: Fix wrong use of FIELD_EX32 on ID_AA64DFR0  (Peter Maydell; 1 file, -2/+2)
We missed an instance of using FIELD_EX32 on a 64-bit ID register, in isar_feature_aa64_pmu_8_4(). Fix it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200224172846.13053-2-peter.maydell@linaro.org
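The fix is mechanical; a sketch of the corrected predicate (the exact version checks are an assumption):

    static inline bool isar_feature_aa64_pmu_8_4(const ARMISARegisters *id)
    {
        /* ID_AA64DFR0 is a 64-bit ID register, so FIELD_EX64, not FIELD_EX32 */
        return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 5 &&
               FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
    }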