path: root/target
2019-09-30  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190927' into staging  (Peter Maydell, 3 files, -94/+69)
target-arm queue:
 * Fix the CBAR register implementation for Cortex-A53, Cortex-A57, Cortex-A72
 * Fix direct booting of Linux kernels on emulated CPUs which have an AArch32
   EL3 (incorrect NSACR settings meant they could not access the FPU)
 * semihosting cleanup: do more work at translate time and less work at runtime

# gpg: Signature made Fri 27 Sep 2019 15:32:43 BST
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20190927:
  hw/arm/boot: Use the IEC binary prefix definitions
  hw/arm/boot.c: Set NSACR.{CP11,CP10} for NS kernel boots
  tests/tcg: add linux-user semihosting smoke test for ARM
  target/arm: remove run-time semihosting checks for linux-user
  target/arm: remove run time semihosting checks
  target/arm: handle A-profile semihosting at translate time
  target/arm: handle M-profile semihosting at translate time
  tests/tcg: clean-up some comments after the de-tangling
  target/arm: fix CBAR register for AArch64 CPUs

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

# Conflicts:
#	tests/tcg/arm/Makefile.target
2019-09-27  target/arm: remove run time semihosting checks  (Alex Bennée, 1 file, -74/+22)
Now that we do all our checking and use a common EXCP_SEMIHOST for semihosting operations, we can make the helper code a lot simpler. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190913151845.12582-5-alex.bennee@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-27  target/arm: handle A-profile semihosting at translate time  (Alex Bennée, 1 file, -4/+15)
As for the other semihosting calls, we can resolve this at translate time. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20190913151845.12582-4-alex.bennee@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
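To illustrate the translate-time decision (a standalone sketch, not the QEMU translator code; the exception constants and helper name are made up, while 0x123456/0xAB are the conventional ARM semihosting immediates):

    /* Sketch: decide at translate time which exception an SVC should raise,
     * so no immediate needs to be re-checked at run time. */
    #include <stdbool.h>
    #include <stdint.h>

    enum { EXCP_SWI_SKETCH = 1, EXCP_SEMIHOST_SKETCH = 2 };  /* made-up values */

    static int svc_exception_for(bool semihosting_enabled, bool thumb, uint32_t imm)
    {
        uint32_t semi_imm = thumb ? 0xAB : 0x123456;  /* semihosting SVC immediates */

        if (semihosting_enabled && imm == semi_imm) {
            return EXCP_SEMIHOST_SKETCH;   /* route straight to the semihosting handler */
        }
        return EXCP_SWI_SKETCH;            /* otherwise a normal supervisor call */
    }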
2019-09-27  target/arm: handle M-profile semihosting at translate time  (Alex Bennée, 2 files, -13/+16)
We do this for other semihosting calls, so we might as well do it for M-profile as well. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190913151845.12582-3-alex.bennee@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-27  target/arm: fix CBAR register for AArch64 CPUs  (Luc Michel, 1 file, -3/+16)
For AArch64 CPUs with a CBAR register, we have two views of it:
 - in AArch64 state, the CBAR_EL1 register (S3_1_C15_C3_0) returns the full
   64-bit CBAR value
 - in AArch32 state, the CBAR register (cp15, opc1=1, CRn=15, CRm=3, opc2=0)
   returns a 32-bit view such that:
   CBAR = CBAR_EL1[31:18] 0..0 CBAR_EL1[43:32]
This commit fixes the current implementation, where:
 - CBAR_EL1 was returning the 32-bit view instead of the full 64-bit value,
 - CBAR was returning a truncated 32-bit version of the full 64-bit one,
   instead of the 32-bit view,
 - CBAR was declared as cp15, opc1=4, CRn=15, CRm=0, opc2=0, which is the
   CBAR register found in the ARMv7 Cortex-Ax CPUs, but not in ARMv8 CPUs.
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20190912110103.1417887-1-luc.michel@greensocs.com
[PMM: Added a comment about the two different kinds of CBAR]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
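As a standalone illustration of the layout described above (a simplified sketch, not QEMU's register handler):

    #include <stdint.h>

    /* CBAR (AArch32 view) = CBAR_EL1[31:18] : zeroes : CBAR_EL1[43:32] */
    static uint32_t cbar_aarch32_view(uint64_t cbar_el1)
    {
        uint32_t low  = cbar_el1 & 0xFFFC0000u;       /* CBAR_EL1[31:18], kept in place */
        uint32_t high = (cbar_el1 >> 32) & 0xFFFu;    /* CBAR_EL1[43:32], into bits [11:0] */
        return low | high;                            /* bits [17:12] read as zero */
    }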
2019-09-26  target/i386: Fix broken build with WHPX enabled  (Philippe Mathieu-Daudé, 1 file, -0/+1)
The WHPX build is broken since commit 12e9493df92, which removed the "hw/boards.h" include where MachineState is declared:

  $ ./configure --enable-hax --enable-whpx
  $ make x86_64-softmmu/all
  [...]
    CC      x86_64-softmmu/target/i386/whpx-all.o
  target/i386/whpx-all.c: In function 'whpx_accel_init':
  target/i386/whpx-all.c:1378:25: error: dereferencing pointer to incomplete type 'MachineState' {aka 'struct MachineState'}
       whpx->mem_quota = ms->ram_size;
                           ^~
  make[1]: *** [rules.mak:69: target/i386/whpx-all.o] Error 1
    CC      x86_64-softmmu/trace/generated-helpers.o
  make[1]: Target 'all' not remade because of errors.
  make: *** [Makefile:471: x86_64-softmmu/all] Error 2

Restore this header, partially reverting commit 12e9493df92.

Fixes: 12e9493df92
Reported-by: Ilias Maratos <i.maratos@gmail.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190920113329.16787-2-philmd@redhat.com>
2019-09-26  target/alpha: Tidy helper_fp_exc_raise_s  (Richard Henderson, 1 file, -11/+4)
Remove a redundant masking of ignore. Once that's gone, it is obvious that the system-mode inner test is redundant with the outer test. Move the fpcr_exc_enable masking up and tidy. No functional change. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190921043256.4575-8-richard.henderson@linaro.org>
2019-09-26  target/alpha: Mask IOV exception with INV for user-only  (Richard Henderson, 1 file, -0/+7)
The kernel masks the integer overflow exception with the software invalid exception mask. Include IOV in the set of exception bits masked by fpcr_exc_enable. Fixes the new float_convs test. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20190921043256.4575-7-richard.henderson@linaro.org>
2019-09-26  target/alpha: Write to fpcr_flush_to_zero once  (Richard Henderson, 1 file, -5/+4)
Tidy the computation of the value; no functional change. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20190921043256.4575-6-richard.henderson@linaro.org>
2019-09-26  target/alpha: Handle SWCR_MAP_DMZ earlier  (Richard Henderson, 1 file, -4/+1)
Since we're converting the swcr to fpcr format for exceptions, it's trivial to add FPCR_DNZ to the set of fpcr bits overridden. No functional change. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20190921043256.4575-5-richard.henderson@linaro.org>
2019-09-26  target/alpha: Fix SWCR_TRAP_ENABLE_MASK  (Richard Henderson, 1 file, -9/+14)
The CONFIG_USER_ONLY adjustment blindly mashed the swcr exception enable bits into the fpcr exception disable bits. However, fpcr_exc_enable has already converted the exception disable bits into the exception status bits in order to make it easier to mask status bits at runtime. Instead, merge the swcr enable bits with the fpcr before we convert to status bits. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190921043256.4575-4-richard.henderson@linaro.org>
2019-09-26  target/alpha: Fix SWCR_MAP_UMZ  (Richard Henderson, 1 file, -1/+1)
We were setting the wrong bit. The fp_status.flush_to_zero setting is overwritten by either the constant 1 or the value of fpcr_flush_to_zero depending on bits within an fp insn. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20190921043256.4575-3-richard.henderson@linaro.org>
2019-09-26  target/alpha: Use array for FPCR_DYN conversion  (Richard Henderson, 1 file, -16/+8)
This is a bit more straightforward than using a switch statement. No functional change. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190921043256.4575-2-richard.henderson@linaro.org>
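A minimal sketch of the table-based conversion (standalone; the DYN encoding noted in the comments is my reading of the Alpha architecture and the rounding-mode enum stands in for softfloat's, so treat both as assumptions):

    #include <stdint.h>

    enum round_mode {                 /* stand-ins for softfloat's rounding modes */
        ROUND_TO_ZERO, ROUND_DOWN, ROUND_NEAREST_EVEN, ROUND_UP
    };

    /* Assumed FPCR_DYN encoding: 0 chopped, 1 minus infinity, 2 normal, 3 plus infinity. */
    static const enum round_mode fpcr_dyn_to_round[4] = {
        ROUND_TO_ZERO, ROUND_DOWN, ROUND_NEAREST_EVEN, ROUND_UP
    };

    static enum round_mode convert_dyn(uint32_t dyn_field)
    {
        return fpcr_dyn_to_round[dyn_field & 3];   /* replaces a four-case switch */
    }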
2019-09-23  Merge remote-tracking branch 'remotes/davidhildenbrand/tags/s390x-tcg-2019-09-23' into staging  (Peter Maydell, 5 files, -225/+544)
Fix a bunch of BUGs in the mem-helpers (including the MVC instruction), especially to make them behave correctly on faults.

# gpg: Signature made Mon 23 Sep 2019 09:01:21 BST
# gpg: using RSA key 1BD9CAAD735C4C3A460DFCCA4DDE10F700FF835A
# gpg: issuer "david@redhat.com"
# gpg: Good signature from "David Hildenbrand <david@redhat.com>" [unknown]
# gpg: aka "David Hildenbrand <davidhildenbrand@gmail.com>" [full]
# Primary key fingerprint: 1BD9 CAAD 735C 4C3A 460D FCCA 4DDE 10F7 00FF 835A

* remotes/davidhildenbrand/tags/s390x-tcg-2019-09-23: (30 commits)
  tests/tcg: target/s390x: Test MVC
  tests/tcg: target/s390x: Test MVO
  s390x/tcg: MVO: Fault-safe handling
  s390x/tcg: MVST: Fault-safe handling
  s390x/tcg: MVZ: Fault-safe handling
  s390x/tcg: MVN: Fault-safe handling
  s390x/tcg: MVCIN: Fault-safe handling
  s390x/tcg: NC: Fault-safe handling
  s390x/tcg: XC: Fault-safe handling
  s390x/tcg: OC: Fault-safe handling
  s390x/tcg: MVCLU: Fault-safe handling
  s390x/tcg: MVC: Fault-safe handling on destructive overlaps
  s390x/tcg: MVCS/MVCP: Use access_memmove()
  s390x/tcg: Fault-safe memmove
  s390x/tcg: Fault-safe memset
  s390x/tcg: Always use MMU_USER_IDX for CONFIG_USER_ONLY
  s390x/tcg: MVST: Fix storing back the addresses to registers
  s390x/tcg: MVST: Check for specification exceptions
  s390x/tcg: MVCS/MVCP: Properly wrap the length
  s390x/tcg: MVCOS: Lengths are 32 bit in 24/31-bit mode
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-23  s390x/tcg: MVO: Fault-safe handling  (David Hildenbrand, 1 file, -12/+15)
Each operand can have a maximum length of 16. Make sure to prepare all reads/writes before writing. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVST: Fault-safe handling  (David Hildenbrand, 1 file, -7/+17)
Access at most single pages and document why. Using the access helpers might over-indicate watchpoints within the same page, but I guess we can live with that. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVZ: Fault-safe handling  (David Hildenbrand, 1 file, -4/+13)
We can process a maximum of 256 bytes, crossing two pages. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
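Several handlers in this series rely on the same observation: an operand of at most 256 bytes touches at most two pages, so both parts can be computed and probed up front. A standalone sketch of that split (assuming 4 KiB pages; not the actual QEMU helper):

    #include <stdint.h>

    #define PAGE_SIZE 4096u   /* assumed page size for the sketch */

    /* Split an operand of up to 256 bytes into the part on the first page
     * and the (possibly empty) remainder on the following page. */
    static void split_operand(uint64_t addr, unsigned len,
                              unsigned *len_first, unsigned *len_second)
    {
        unsigned room = PAGE_SIZE - (addr & (PAGE_SIZE - 1));  /* bytes left on first page */

        *len_first  = len < room ? len : room;
        *len_second = len - *len_first;    /* 0 if the operand fits in one page */
    }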
2019-09-23  s390x/tcg: MVN: Fault-safe handling  (David Hildenbrand, 1 file, -4/+13)
We can process a maximum of 256 bytes, crossing two pages. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVCIN: Fault-safe handling  (David Hildenbrand, 1 file, -3/+12)
We can process a maximum of 256 bytes, crossing two pages. Calculate the accessed range upfront - src is accessed right-to-left. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: NC: Fault-safe handling  (David Hildenbrand, 1 file, -4/+13)
We can process a maximum of 256 bytes, crossing two pages. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: XC: Fault-safe handling  (David Hildenbrand, 1 file, -6/+12)
We can process a maximum of 256 bytes, crossing two pages. While at it, increment the length once. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: OC: Fault-safe handling  (David Hildenbrand, 1 file, -4/+13)
We can process a maximum of 256 bytes, crossing two pages. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVCLU: Fault-safe handling  (David Hildenbrand, 1 file, -3/+5)
The last remaining bit is padding with two bytes. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVC: Fault-safe handling on destructive overlaps  (David Hildenbrand, 1 file, -2/+3)
The last remaining bit for MVC is handling destructive overlaps in a fault-safe way. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVCS/MVCP: Use access_memmove()  (David Hildenbrand, 1 file, -14/+12)
As we are moving between address spaces, we can use access_memmove() without checking for destructive overlaps (especially of real storage locations): "Each storage operand is processed left to right. The storage-operand-consistency rules are the same as for MOVE (MVC), except that when the operands overlap in real storage, the use of the common real-storage locations is not necessarily recognized." Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: Fault-safe memmove  (David Hildenbrand, 1 file, -99/+139)
Replace the fast_memmove() variants by access_memmove() variants that first try to probe access to all affected pages (the maximum is two pages). Introduce access_get_byte()/access_set_byte(). We might be able to speed up memmove in special cases even further (do single-byte access, use memmove() for remaining bytes in page); however, we'll skip that for now. In MVCOS, simply always call access_memmove_as() and drop the TODO about LAP. LAP is already handled in the MMU. Get rid of adj_len_to_page(), which is now unused. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
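The probe-before-write idea, as a self-contained sketch (the probe/get/set callbacks and the 4 KiB page size are stand-ins for QEMU's MMU access machinery, not its real API):

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u   /* assumed page size for the sketch */

    typedef struct {          /* hypothetical guest-memory callbacks */
        bool    (*probe)(uint64_t addr, unsigned len, bool is_write); /* false on fault */
        uint8_t (*get_byte)(uint64_t addr);
        void    (*set_byte)(uint64_t addr, uint8_t val);
    } guest_mem_ops;

    /* Probe every page a range touches (at most two for len <= PAGE_SIZE). */
    static bool probe_range(const guest_mem_ops *ops, uint64_t addr, unsigned len,
                            bool is_write)
    {
        unsigned first = PAGE_SIZE - (addr & (PAGE_SIZE - 1));

        if (first > len) {
            first = len;
        }
        return ops->probe(addr, first, is_write) &&
               (len == first || ops->probe(addr + first, len - first, is_write));
    }

    /* Simple forward byte-by-byte copy that cannot fault after the first write. */
    static bool access_memmove_sketch(const guest_mem_ops *ops,
                                      uint64_t dest, uint64_t src, unsigned len)
    {
        if (!probe_range(ops, src, len, false) || !probe_range(ops, dest, len, true)) {
            return false;     /* fault recognized before any byte was written */
        }
        for (unsigned i = 0; i < len; i++) {
            ops->set_byte(dest + i, ops->get_byte(src + i));
        }
        return true;
    }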
2019-09-23  s390x/tcg: Fault-safe memset  (David Hildenbrand, 1 file, -20/+103)
Replace fast_memset() by access_memset(), which first tries to probe access to all affected pages (the maximum is two). We'll use the same mechanism for other types of accesses soon. Only in very rare cases (especially TLB_NOTDIRTY) will we have to fall back to ld/st helpers. Try to speed up that case as suggested by Richard. We'll rework most involved handlers soon to do all accesses via new fault-safe helpers, especially MVC. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: Always use MMU_USER_IDX for CONFIG_USER_ONLY  (David Hildenbrand, 2 files, -0/+8)
Although we basically ignore the index all the time for CONFIG_USER_ONLY, let's simply skip all the checks and always return MMU_USER_IDX in cpu_mmu_index() and get_mem_index(). Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
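The shape of the change, as a simplified sketch (the constants and the PSW-derived fallback are stand-ins, not QEMU's definitions):

    enum { MMU_USER_IDX_SKETCH = 0, MMU_KERNEL_IDX_SKETCH = 1 };  /* made-up values */

    static int get_mem_index_sketch(int psw_says_user)
    {
    #ifdef CONFIG_USER_ONLY
        (void)psw_says_user;
        return MMU_USER_IDX_SKETCH;     /* linux-user: skip all PSW-based checks */
    #else
        /* system emulation: keep deriving the index from the PSW */
        return psw_says_user ? MMU_USER_IDX_SKETCH : MMU_KERNEL_IDX_SKETCH;
    #endif
    }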
2019-09-23  s390x/tcg: MVST: Fix storing back the addresses to registers  (David Hildenbrand, 4 files, -19/+19)
24- and 31-bit address space handling is wrong when it comes to storing the addresses back to the registers. While at it, read gpr 0 implicitly. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVST: Check for specification exceptions  (David Hildenbrand, 1 file, -0/+3)
Bit positions 32-55 of general register 0 must be zero. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
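For illustration only (the mask is derived here from IBM's bit numbering, where bit 0 is the most significant bit of the 64-bit register, and the helper name is made up):

    #include <stdbool.h>
    #include <stdint.h>

    /* IBM bit 0 is the MSB, so bits 32-55 are value bits 31..8 of r0. */
    static bool mvst_r0_is_valid(uint64_t r0)
    {
        return (r0 & 0x00000000ffffff00ULL) == 0;  /* else: specification exception */
    }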
2019-09-23  s390x/tcg: MVCS/MVCP: Properly wrap the length  (David Hildenbrand, 1 file, -0/+6)
... and don't perform any move in case the length is zero. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVCOS: Lengths are 32 bit in 24/31-bit mode  (David Hildenbrand, 1 file, -3/+11)
Triggered by a review comment from Richard: MVCOS also has a 32-bit length in 24/31-bit addressing mode. Add a new helper. Rename wrap_length() to wrap_length31(). Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
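A sketch of what the two length-wrapping helpers plausibly look like, inferred from the commit message above (standalone stand-ins, not the actual QEMU functions):

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t wrap_length31_sketch(bool amode64, uint64_t len)
    {
        return amode64 ? len : (len & 0x7fffffffULL);   /* 31-bit length otherwise */
    }

    static uint64_t wrap_length32_sketch(bool amode64, uint64_t len)
    {
        return amode64 ? len : (len & 0xffffffffULL);   /* MVCOS: 32-bit length */
    }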
2019-09-23  s390x/tcg: MVCS/MVCP: Check for special operation exceptions  (David Hildenbrand, 1 file, -0/+12)
Let's perform the documented checks. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVCLU/MVCLE: Process max 4k bytes at a time  (David Hildenbrand, 1 file, -23/+31)
Let's stay within single pages, and indicate cc=3 in case there is work remaining. Keep Unicode padding simple. While reworking, properly wrap the addresses. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVPG: Properly wrap the addresses  (David Hildenbrand, 1 file, -2/+9)
We have to mask off any unused bits. While at it, document what exactly is missing. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVPG: Check for specification exceptions  (David Hildenbrand, 1 file, -0/+7)
Perform the checks documented in the PoP. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVC: Use is_destructive_overlap()  (David Hildenbrand, 1 file, -1/+1)
Let's use the new helper, which also detects destructive overlaps when wrapping. We'll make the remaining code (e.g., fast_memmove()) aware of wrapping later. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
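For the non-wrapping case, the test amounts to the following standalone sketch (the wrapped case, which the real helper also covers, is omitted here):

    #include <stdbool.h>
    #include <stdint.h>

    /* A left-to-right, byte-by-byte copy is destructive when the destination
     * starts inside the source operand: later reads would see bytes that
     * have already been overwritten. */
    static bool is_destructive_overlap_sketch(uint64_t dest, uint64_t src, uint64_t len)
    {
        return dest > src && dest < src + len;
    }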
2019-09-23  s390x/tcg: MVC: Increment the length once  (David Hildenbrand, 1 file, -8/+12)
Let's increment the length once. While at it, clean up the comment. The memset() example is given as a programming note in the PoP, so drop the description. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVCL: Process max 4k bytes at a time  (David Hildenbrand, 1 file, -6/+38)
Process max 4k bytes at a time, writing back registers between the accesses. The instruction is interruptible. "For operands longer than 2K bytes, access exceptions are not recognized for locations more than 2K bytes beyond the current location being processed." Note that on z/Architecture, 2k vs. 4k access cannot be differentiated as long as pages are not crossed. This seems to be a leftover from ESA/390. Simply stay within single pages. MVCL handling is quite different from MVCLE/MVCLU handling, so split up the handlers. Defer interrupt handling, as that will require more thought; add a TODO for that. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVCL: Detect destructive overlaps  (David Hildenbrand, 1 file, -1/+18)
We'll have to zero out unused bit positions, so make sure to write the addresses back. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/tcg: MVCL: Zero out unused bits of address  (David Hildenbrand, 1 file, -2/+21)
We have to zero out unused bits in 24- and 31-bit addressing mode. Provide a new helper. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
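A standalone sketch of the wrapping rule (the masks follow the z/Architecture addressing modes; an illustration, not the actual helper):

    #include <stdint.h>

    typedef enum { MODE_24BIT, MODE_31BIT, MODE_64BIT } addr_mode;

    static uint64_t wrap_address_sketch(addr_mode mode, uint64_t addr)
    {
        switch (mode) {
        case MODE_24BIT:
            return addr & 0x0000000000ffffffULL;   /* keep bits 40-63 */
        case MODE_31BIT:
            return addr & 0x000000007fffffffULL;   /* keep bits 33-63 */
        default:
            return addr;                           /* 64-bit mode: no wrapping */
        }
    }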
2019-09-23  s390x/tcg: Reset exception_index to -1 instead of 0  (David Hildenbrand, 1 file, -3/+3)
We use the marker "-1" for "no exception". s390_cpu_do_interrupt() might get confused by that. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23  s390x/cpumodel: Add the z15 name to the description of gen15a  (Christian Borntraeger, 1 file, -1/+1)
We now know that gen15a is called z15. Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2019-09-23  s390x/kvm: Officially require at least kernel 3.15  (Thomas Huth, 1 file, -0/+7)
Since QEMU v2.10, the KVM acceleration does not work on older kernels anymore, since the code accidentally requires the KVM_CAP_DEVICE_CTRL capability now - it should have been optional instead. Instead of fixing the bug, we asked in the ChangeLog of QEMU 2.11 - 3.0 that people should speak up if they still need support for QEMU running with KVM on older kernels, but it seems like nobody really complained. Thus let's make this official now and turn it into a proper error message, telling users to use at least kernel 3.15. Signed-off-by: Thomas Huth <thuth@redhat.com> Message-Id: <20190913091443.27565-1-thuth@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2019-09-20  Merge remote-tracking branch 'remotes/vivier2/tags/trivial-branch-pull-request' into staging  (Peter Maydell, 1 file, -4/+4)
Trivial patches 20190919

# gpg: Signature made Thu 19 Sep 2019 14:50:55 BST
# gpg: using RSA key CD2F75DDC8E3A4DC2E4F5173F30C38BD3F2FBE3C
# gpg: issuer "laurent@vivier.eu"
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>" [full]
# gpg: aka "Laurent Vivier <laurent@vivier.eu>" [full]
# gpg: aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>" [full]
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F 5173 F30C 38BD 3F2F BE3C

* remotes/vivier2/tags/trivial-branch-pull-request:
  configure: Add xkbcommon configure options
  kvm: Fix typo in header of kvm_device_access()
  Fix cacheline detection on FreeBSD/powerpc.
  build: Don't ignore qapi-visit-core.c
  target/m68k/fpu_helper.c: rename the access arguments
  Replace '-machine accel=xyz' with '-accel xyz'
  cutils: Move size_to_str() from "qemu-common.h" to "qemu/cutils.h"
  vfio: fix a typo

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-19  target/m68k/fpu_helper.c: rename the access arguments  (KONRAD Frederic, 1 file, -4/+4)
The "access" arguments clash with a macro under Windows with MinGW:

    CC      m68k-softmmu/target/m68k/fpu_helper.o
  target/m68k/fpu_helper.c: In function 'fmovem_predec':
  target/m68k/fpu_helper.c:405:56: error: macro "access" passed 4 arguments, but takes just 2
       size = access(env, addr, &env->fregs[i], ra);

So this renames them to access_fn.

Tested with:
  ./configure --target-list=m68k-softmmu
  make -j8

Signed-off-by: KONRAD Frederic <frederic.konrad@adacore.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1568296920-29939-1-git-send-email-frederic.konrad@adacore.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2019-09-17  gdbstub: riscv: fix the fflags registers  (KONRAD Frederic, 1 file, -2/+4)
While debugging an application with GDB the following might happen:

  (gdb) return
  Make xxx return now? (y or n) y
  Could not fetch register "fflags"; remote failure reply 'E14'

This is because riscv_gdb_get_fpu calls riscv_csrrw_debug with a wrong csr number (8). It should use the csr_register_map in order to reach the riscv_cpu_get_fflags callback.

Signed-off-by: KONRAD Frederic <frederic.konrad@adacore.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17  target/riscv: Use TB_FLAGS_MSTATUS_FS for floating point  (Alistair Francis, 1 file, -1/+1)
Use the TB_FLAGS_MSTATUS_FS macro when enabling floating point in the tb flags. Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmer@sifive.com> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17  target/riscv: Fix mstatus dirty mask  (Alistair Francis, 1 file, -1/+1)
This is meant to mask off the hypervisor bits, but a typo caused it to mask MPP instead. Fixes: 1f0419cb04 ("target/riscv: Allow setting mstatus virtulisation bits") Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17  target/riscv: Use both register name and ABI name  (Atish Patra, 1 file, -8/+11)
Use both the generic register name and ABI name for the general purpose registers and floating point registers. Signed-off-by: Atish Patra <atish.patra@wdc.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>