path: root/target
Age | Commit message | Author | Files | Lines
2022-04-13target/i386: Remove unused XMMReg, YMMReg types and CPUState fieldsPeter Maydell1-18/+0
In commit b7711471f5 in 2014 we refactored the handling of the x86 vector registers so that instead of separate structs XMMReg, YMMReg and ZMMReg for representing the 16-byte, 32-byte and 64-byte width vector registers and multiple fields in the CPU state, we have a single type (XMMReg, later renamed to ZMMReg) and a single struct field (xmm_regs). However, in 2017 in commit c97d6d2cdf97ed some of the old struct types and CPU state fields got added back, when we merged in the hvf support (which had developed in a separate fork that had presumably not had the refactoring of b7711471f5), as part of code handling xsave. Commit f585195ec07 then almost immediately dropped that xsave code again in favour of sharing the xsave handling with KVM, but forgot to remove the now unused CPU state fields and struct types. Delete the unused types and CPUState fields. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20220412110047.1497190-1-peter.maydell@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-04-13target/i386: do not access beyond the low 128 bits of SSE registersPaolo Bonzini1-28/+47
The i386 target consolidates all vector registers so that instead of XMMReg, YMMReg and ZMMReg structs there is a single ZMMReg that can fit all of SSE, AVX and AVX512. When TCG copies data from and to the SSE registers, it uses the full 64-byte width. This is not a correctness issue because TCG never lets guest code see beyond the first 128 bits of the ZMM registers, however it causes uninitialized stack memory to make it to the CPU's migration stream. Fix it by only copying the low 16 bytes of the ZMMReg union into the destination register. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
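As a rough illustration of the idea (the helper name below is hypothetical, not the actual QEMU code, though ZMM_Q() is the usual lane accessor), the copy routines only need to touch the two low 64-bit lanes of each ZMMReg:

    /* Sketch: copy only the SSE-visible low 128 bits of a ZMMReg. */
    static void copy_xmm_lo128(ZMMReg *dst, const ZMMReg *src)
    {
        /* Lanes 2..7 are never written, so uninitialized high bytes
         * of the union cannot reach the migration stream. */
        dst->ZMM_Q(0) = src->ZMM_Q(0);
        dst->ZMM_Q(1) = src->ZMM_Q(1);
    }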
2022-04-06hw: hyperv: Initial commit for Synthetic Debugging deviceJon Doron1-0/+6
Signed-off-by: Jon Doron <arilou@gmail.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Message-Id: <20220216102500.692781-5-arilou@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-04-06hyperv: Add support to process syndbg commandsJon Doron5-8/+135
SynDbg commands can come from two different flows: 1. Hypercalls, in this mode the data being sent is fully encapsulated network packets. 2. SynDbg specific MSRs, in this mode only the data that needs to be transfered is passed. Signed-off-by: Jon Doron <arilou@gmail.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Message-Id: <20220216102500.692781-4-arilou@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-04-06hyperv: Add definitions for syndbgJon Doron1-0/+37
Add all required definitions for hyperv synthetic debugger interface. Signed-off-by: Jon Doron <arilou@gmail.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Message-Id: <20220216102500.692781-3-arilou@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-04-06whpx: Added support for breakpoints and steppingIvan Shcherbakov4-14/+788
Below is the updated version of the patch adding debugging support to WHPX. It incorporates feedback from Alex Bennée and Peter Maydell regarding not changing the emulation logic depending on the gdb connection status. Instead of checking for an active gdb connection to determine whether QEMU should intercept the INT1 exceptions, it now checks whether any breakpoints have been set, or whether gdb has explicitly requested one or more CPUs to do single-stepping. Having none of these conditions present now has the same effect as not using gdb at all. Message-Id: <0e7f01d82e9e$00e9c360$02bd4a20$@sysprogs.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-04-06Remove qemu-common.h include from most unitsMarc-André Lureau34-34/+0
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Message-Id: <20220323155743.1585078-33-marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-04-06Move CPU softfloat unions to cpu-float.hMarc-André Lureau15-2/+15
The types are no longer used in bswap.h since commit f930224fffe ("bswap.h: Remove unused float-access functions"), so there isn't much sense in keeping them there and having a dependency on fpu/. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Message-Id: <20220323155743.1585078-29-marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-04-06include: move target page bits declaration to page-vary.hMarc-André Lureau1-1/+1
Since the implementation unit is page-vary.c. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20220323155743.1585078-24-marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-04-06Replace qemu_real_host_page variables with inlined functionsMarc-André Lureau4-14/+14
Replace the global variables with inlined helper functions. getpagesize() is very likely annotated with a "const" function attribute (at least with glibc), and thus optimization should apply even better. This avoids the need for a constructor initialization too. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Message-Id: <20220323155743.1585078-12-marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
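A minimal sketch of the pattern this describes (the exact helper names in QEMU may differ slightly):

    #include <stdint.h>
    #include <unistd.h>

    /* Sketch: inline accessors instead of constructor-initialized globals. */
    static inline uintptr_t qemu_real_host_page_size(void)
    {
        return getpagesize();
    }

    static inline intptr_t qemu_real_host_page_mask(void)
    {
        return -(intptr_t)qemu_real_host_page_size();
    }

Because getpagesize() is effectively constant, the compiler can hoist or fold these calls just as well as it could a global variable read, without needing an init-time constructor.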
2022-04-06Replace TARGET_WORDS_BIGENDIANMarc-André Lureau11-22/+22
Convert the TARGET_WORDS_BIGENDIAN macro, similarly to what was done with HOST_BIG_ENDIAN. The new TARGET_BIG_ENDIAN macro is either 0 or 1, and thus should always be defined to prevent misuse. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Suggested-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20220323155743.1585078-8-marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
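The 0/1 convention means the macro works both in preprocessor conditionals and in ordinary C code; a small illustrative example (the value is normally generated per target by the build system, hard-coded here only for the demo):

    #include <stdio.h>

    #define TARGET_BIG_ENDIAN 0   /* in QEMU this comes from the build system */

    int main(void)
    {
    #if TARGET_BIG_ENDIAN
        puts("compile-time: big-endian target");
    #endif
        if (TARGET_BIG_ENDIAN) {
            puts("runtime-style check; the dead branch is eliminated");
        }
        return 0;
    }

Since the macro is always defined, a misspelling shows up as an undefined-identifier warning (e.g. with -Wundef) instead of silently evaluating to 0.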
2022-04-06Replace config-time define HOST_WORDS_BIGENDIANMarc-André Lureau32-79/+79
Replace a config-time define with a compile-time conditional define (compatible with clang and gcc) that must be declared prior to its usage. This avoids having a global configure-time define, but also prevents bad usage if the config header wasn't included before. This can help to make some code independent of QEMU too. gcc supports __BYTE_ORDER__ from about 4.6 and clang from 3.2. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> [ For the s390x parts I'm involved in ] Acked-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20220323155743.1585078-7-marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
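A sketch of the kind of compile-time definition this refers to (the exact header it lives in is an assumption here):

    /* Defined from a compiler builtin rather than at configure time;
     * using HOST_BIG_ENDIAN before this definition is a visible error
     * rather than a silent 0. */
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    #define HOST_BIG_ENDIAN 1
    #else
    #define HOST_BIG_ENDIAN 0
    #endif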
2022-04-06Replace qemu_gettimeofday() with g_get_real_time()Marc-André Lureau2-25/+20
GLib g_get_real_time() is an alternative to gettimeofday() which allows us to simplify our code. For semihosting, a few bits are lost on POSIX hosts, but this shouldn't be a big concern. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Laurent Vivier <laurent@vivier.eu> Message-Id: <20220307070401.171986-5-marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
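For reference, a sketch of the equivalence being relied on: g_get_real_time() returns microseconds since the Unix epoch as a gint64, so the old struct timeval fields can be recovered where still needed (the helper name is illustrative):

    #include <glib.h>
    #include <sys/time.h>

    /* Sketch: derive the values gettimeofday() would have produced. */
    static void get_time_of_day(struct timeval *tv)
    {
        gint64 now = g_get_real_time();   /* microseconds since 1970-01-01 */

        tv->tv_sec  = now / G_USEC_PER_SEC;
        tv->tv_usec = now % G_USEC_PER_SEC;
    }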
2022-04-06qapi, target/i386/sev: Add cpu0-id to query-sev-capabilitiesDov Murik1-1/+41
Add a new field 'cpu0-id' to the response of query-sev-capabilities QMP command. The value of the field is the base64-encoded unique ID of CPU0 (socket 0), which can be used to retrieve the signed CEK of the CPU from AMD's Key Distribution Service (KDS). Signed-off-by: Dov Murik <dovmurik@linux.ibm.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Message-Id: <20220228093014.882288-1-dovmurik@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-04-02  Merge tag 'pull-request-2022-04-01' of https://gitlab.com/thuth/qemu into staging  Peter Maydell  1-4/+4
* Fix some compilation issues
* Fix overflow calculation in s390x emulation
* Update location of lockdown.yml in MAINTAINERS file
# gpg: Signature made Fri 01 Apr 2022 12:27:38 BST
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* tag 'pull-request-2022-04-01' of https://gitlab.com/thuth/qemu:
  trace: fix compilation with lttng-ust >= 2.13
  9p: move P9_XATTR_SIZE_MAX from 9p.h to 9p.c
  meson.build: Fix dependency of page-vary-common.c to config-poison.h
  target/s390x: Fix determination of overflow condition code after subtraction
  target/s390x: Fix determination of overflow condition code after addition
  misc: Fixes MAINTAINERS's path .github/workflows/lockdown.yml
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-04-01  Merge tag 'pull-target-arm-20220401' of https://git.linaro.org/people/pmaydell/qemu-arm into staging  Peter Maydell  3-5/+22
target-arm queue:
* target/arm: Fix some bugs in secure EL2 handling
* target/arm: Fix assert when !HAVE_CMPXCHG128
* MAINTAINERS: change Fred Konrad's email address
# gpg: Signature made Fri 01 Apr 2022 15:59:59 BST
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* tag 'pull-target-arm-20220401' of https://git.linaro.org/people/pmaydell/qemu-arm:
  target/arm: Don't use DISAS_NORETURN in STXP !HAVE_CMPXCHG128 codegen
  MAINTAINERS: change Fred Konrad's email address
  target/arm: Determine final stage 2 output PA space based on original IPA
  target/arm: Take VSTCR.SW, VTCR.NSW into account in final stage 2 walk
  target/arm: Check VSTCR.SW when assigning the stage 2 output PA space
  target/arm: Fix MTE access checks for disabled SEL2
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-04-01  Merge tag 'pull-riscv-to-apply-20220401' of github.com:alistair23/qemu into staging  Peter Maydell  2-6/+13
Sixth RISC-V PR for QEMU 7.0
This is a last-minute RISC-V PR for 7.0. It includes a fix to avoid leaking "no translation" TLB entries, which incorrectly cached uncachable bare-metal entries and would break Linux boot while single-stepping. As the fix is pretty straightforward (flush the cache more often), it's being pulled in for 7.0. At the same time I have included a RISC-V vector extension fixup patch.
# gpg: Signature made Fri 01 Apr 2022 00:33:58 BST
# gpg: using RSA key F6C4AC46D4934868D3B8CE8F21E10D29DF977054
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [full]
# Primary key fingerprint: F6C4 AC46 D493 4868 D3B8 CE8F 21E1 0D29 DF97 7054
* tag 'pull-riscv-to-apply-20220401' of github.com:alistair23/qemu:
  target/riscv: rvv: Add missing early exit condition for whole register load/store
  target/riscv: Avoid leaking "no translation" TLB entries
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-04-01target/arm: Don't use DISAS_NORETURN in STXP !HAVE_CMPXCHG128 codegenPeter Maydell1-1/+6
In gen_store_exclusive(), if the host does not have a cmpxchg128 primitive then we generate bad code for STXP for storing two 64-bit values. We generate a call to the exit_atomic helper, which never returns, and set is_jmp to DISAS_NORETURN. However, this is forgetting that we have already emitted a brcond that jumps over this call for the case where we don't hold the exclusive. The effect is that we don't generate any code to end the TB for the exclusive-not-held execution path, which falls into the "exit with TB_EXIT_REQUESTED" code that gen_tb_end() emits. This then causes an assert at runtime when cpu_loop_exec_tb() sees an EXIT_REQUESTED TB return that wasn't for an interrupt or icount. In particular, you can hit this case when using the clang sanitizers and trying to run the xlnx-versal-virt acceptance test in 'make check-acceptance'. This bug was masked until commit 848126d11e93ff ("meson: move int128 checks from configure") because we used to set CONFIG_CMPXCHG128=1 and avoid the buggy codepath, but after that we do not. Fix the bug by not setting is_jmp. The code after the exit_atomic call up to the fail_label is dead, but TCG is smart enough to eliminate it. We do need to set 'tmp' to some valid value, though (in the same way the exit_atomic-using code in tcg/tcg-op.c does). Resolves: https://gitlab.com/qemu-project/qemu/-/issues/953 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220331150858.96348-1-peter.maydell@linaro.org
2022-04-01target/arm: Determine final stage 2 output PA space based on original IPAIdan Horowitz1-3/+5
As per the AArch64.S2Walk() pseudo-code in the ARMv8 ARM, the final decision as to the output address's PA space based on the SA/SW/NSA/NSW bits needs to take the input IPA's PA space into account, and not the PA space of the result of the stage 2 walk itself. Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220327093427.1548629-4-idan.horowitz@gmail.com [PMM: fixed commit message typo] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-04-01target/arm: Take VSTCR.SW, VTCR.NSW into account in final stage 2 walkIdan Horowitz1-0/+10
As per the AArch64.SS2InitialTTWState() pseudo-code in the ARMv8 ARM, the initial PA space used for stage 2 table walks is assigned based on the SW and NSW bits of the VSTCR and VTCR registers. This was already implemented for the recursive stage 2 page table walks in S1_ptw_translate(), but was missing for the final stage 2 walk. Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220327093427.1548629-3-idan.horowitz@gmail.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-04-01target/arm: Check VSTCR.SW when assigning the stage 2 output PA spaceIdan Horowitz1-1/+1
As per the AArch64.SS2OutputPASpace() pseudo-code in the ARMv8 ARM, when the PA space of the IPA is non-secure, the output PA space is secure if and only if all of the bits VTCR.<NSW, NSA>, VSTCR.<SW, SA> are not set. Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220327093427.1548629-2-idan.horowitz@gmail.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-04-01target/arm: Fix MTE access checks for disabled SEL2Idan Horowitz2-2/+2
While not mentioned anywhere in the actual specification text, the HCR_EL2.ATA bit is treated as '1' when EL2 is disabled at the current security state. This can be observed in the pseudo-code implementation of AArch64.AllocationTagAccessIsEnabled(). Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220328173107.311267-1-idan.horowitz@gmail.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-04-01target/s390x: Fix determination of overflow condition code after subtractionBruno Haible1-2/+2
Reported by Paul Eggert in
https://lists.gnu.org/archive/html/bug-gnulib/2021-09/msg00050.html

This program currently prints different results when run with TCG instead of running on real s390x hardware:

#include <stdio.h>

int overflow_32 (int x, int y)
{
  int sum;
  return __builtin_sub_overflow (x, y, &sum);
}

int overflow_64 (long long x, long long y)
{
  long sum;
  return __builtin_sub_overflow (x, y, &sum);
}

int a1 = 0;
int b1 = -2147483648;
long long a2 = 0L;
long long b2 = -9223372036854775808L;

int main ()
{
  {
    int a = a1;
    int b = b1;
    printf ("a = 0x%x, b = 0x%x\n", a, b);
    printf ("no_overflow = %d\n", ! overflow_32 (a, b));
  }
  {
    long long a = a2;
    long long b = b2;
    printf ("a = 0x%llx, b = 0x%llx\n", a, b);
    printf ("no_overflow = %d\n", ! overflow_64 (a, b));
  }
}

Signed-off-by: Bruno Haible <bruno@clisp.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/618
Message-Id: <20220323162621.139313-3-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2022-04-01target/s390x: Fix determination of overflow condition code after additionBruno Haible1-2/+2
This program currently prints different results when run with TCG instead of running on real s390x hardware:

#include <stdio.h>

int overflow_32 (int x, int y)
{
  int sum;
  return ! __builtin_add_overflow (x, y, &sum);
}

int overflow_64 (long long x, long long y)
{
  long sum;
  return ! __builtin_add_overflow (x, y, &sum);
}

int a1 = -2147483648;
int b1 = -2147483648;
long long a2 = -9223372036854775808L;
long long b2 = -9223372036854775808L;

int main ()
{
  {
    int a = a1;
    int b = b1;
    printf ("a = 0x%x, b = 0x%x\n", a, b);
    printf ("no_overflow = %d\n", overflow_32 (a, b));
  }
  {
    long long a = a2;
    long long b = b2;
    printf ("a = 0x%llx, b = 0x%llx\n", a, b);
    printf ("no_overflow = %d\n", overflow_64 (a, b));
  }
}

Signed-off-by: Bruno Haible <bruno@clisp.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/616
Message-Id: <20220323162621.139313-2-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2022-04-01  target/riscv: rvv: Add missing early exit condition for whole register load/store  Yueh-Ting (eop) Chen  1-0/+5
According to v-spec (section 7.9): The instructions operate with an effective vector length, evl=NFIELDS*VLEN/EEW, regardless of current settings in vtype and vl. The usual property that no elements are written if vstart ≥ vl does not apply to these instructions. Instead, no elements are written if vstart ≥ evl. Signed-off-by: eop Chen <eop.chen@sifive.com> Reviewed-by: Frank Chang <frank.chang@sifive.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <164762720573.18409.3931931227997483525-0@git.sr.ht> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2022-04-01target/riscv: Avoid leaking "no translation" TLB entriesPalmer Dabbelt1-6/+8
The ISA doesn't allow bare mappings to be cached, as the caches are translations and bare mappings are not translated. We cache these translations in QEMU in order to utilize the TLB code, but that leaks out to the guest. Suggested-by: phantom@zju.edu.cn # no name in the From field Fixes: 1e0d985fa9 ("target/riscv: Only flush TLB if SATP.ASID changes") Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20220330165913.8836-1-palmer@rivosinc.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2022-03-31target/sh4: Remove old README.sh4 fileThomas Huth1-150/+0
This file didn't have any non-trivial update since it was initially added in 2006, and looking at the content, it seems incredibly outdated, saying e.g. "The sh4 target is not ready at all yet for integration in qemu" or "A sh4 user-mode has also somewhat started but will be worked on afterwards"... Sounds like nobody is interested in this README file anymore, so let's simply remove it now. Signed-off-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp> Message-Id: <20220329151955.472306-1-thuth@redhat.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2022-03-29target/mips: Fix address space range declaration on n32WANG Xuerui1-1/+1
This bug is probably lurking there for so long, I cannot even git-blame my way to the commit first introducing it. Anyway, because n32 is also TARGET_MIPS64, the address space range cannot be determined by looking at TARGET_MIPS64 alone. Fix this by only declaring 48-bit address spaces for n64, or the n32 user emulation will happily hand out memory ranges beyond the 31-bit limit and crash. Confirmed to make the minimal reproducing example in the linked issue behave. Closes: https://gitlab.com/qemu-project/qemu/-/issues/939 Cc: Philippe Mathieu-Daudé <f4bug@amsat.org> Cc: Aurelien Jarno <aurelien@aurel32.net> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com> Signed-off-by: WANG Xuerui <xen0n@gentoo.org> Tested-by: Andreas K. Huettel <dilfridge@gentoo.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20220328035942.3299661-1-xen0n@gentoo.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
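The shape of the fix, as a hedged sketch (the macro names approximate QEMU's per-target configuration and are not copied verbatim from the patch):

    /* n32 is built as a TARGET_MIPS64 target but keeps a 32-bit user
     * address space, so the wide declaration must be limited to n64. */
    #if defined(TARGET_MIPS64) && !defined(TARGET_ABI_MIPSN32)
    #define TARGET_VIRT_ADDR_SPACE_BITS 48
    #else
    #define TARGET_VIRT_ADDR_SPACE_BITS 32
    #endif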
2022-03-26target/ppc: fix helper_xvmadd* argument orderMatheus Ferst1-10/+10
When the xsmadd* insns were moved to decodetree, the helper arguments were reordered to better match the PowerISA description. The same macro is used to declare xvmadd* helpers, but the translation macro of these insns was not changed accordingly. Reported-by: Víctor Colombo <victor.colombo@eldorado.org.br> Fixes: e4318ab2e423 ("target/ppc: move xs[n]madd[am][ds]p/xs[n]msub[am][ds]p to decodetree") Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com> Tested-by: Víctor Colombo <victor.colombo@eldorado.org.br> Message-Id: <20220325111851.718966-1-matheus.ferst@eldorado.org.br> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-03-25target/arm: Fix sve_ld1_z and sve_st1_z vs MMIORichard Henderson1-2/+8
Both of these functions missed handling the TLB_MMIO flag during the conversion to handle MTE. Fixes: 10a85e2c8ab6 ("target/arm: Reuse sve_probe_page for gather loads") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/925 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220324010932.190428-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-25Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into stagingPeter Maydell5-15/+35
Bugfixes.
# gpg: Signature made Thu 24 Mar 2022 17:44:49 GMT
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  build: disable fcf-protection on -march=486 -m16
  target/i386: properly reset TSC on reset
  target/i386: tcg: high bits SSE cmp operation must be ignored
  configure: remove dead int128 test
  KVM: x86: workaround invalid CPUID[0xD,9] info on some AMD processors
  i386: Set MCG_STATUS_RIPV bit for mce SRAR error
  target/i386/kvm: Free xsave_buf when destroying vCPU
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-24target/i386: properly reset TSC on resetPaolo Bonzini2-1/+14
Some versions of Windows hang on reboot if their TSC value is greater than 2^54. The calibration of the Hyper-V reference time overflows and fails; as a result the processors' clock sources are out of sync. The issue is that the TSC _should_ be reset to 0 on CPU reset and QEMU tries to do that. However, KVM special cases writing 0 to the TSC and thinks that QEMU is trying to hot-plug a CPU, which is correct the first time through but not later. Thwart this valiant effort and reset the TSC to 1 instead, but only if the CPU has been run once. For this to work, env->tsc has to be moved to the part of CPUArchState that is not zeroed at the beginning of x86_cpu_reset. Reported-by: Vadim Rozenfeld <vrozenfe@redhat.com> Supersedes: <20220324082346.72180-1-pbonzini@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
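A minimal sketch of the workaround described above (the helper and its has_run parameter are illustrative; the real logic lives in the i386 reset path):

    /* Sketch: avoid KVM's "writing 0 to the TSC means CPU hot-plug"
     * special case on every reset after the first. */
    static void x86_reset_tsc(CPUX86State *env, bool has_run)
    {
        /* 0 only on the very first reset; 1 afterwards, which KVM treats
         * as an ordinary TSC write and keeps in sync across vCPUs. */
        env->tsc = has_run ? 1 : 0;
    }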
2022-03-24target/i386: tcg: high bits SSE cmp operation must be ignoredPaolo Bonzini1-4/+2
High bits in the immediate operand of SSE comparisons are ignored, they do not result in an undefined opcode exception. This is mentioned explicitly in the Intel documentation. Reported-by: sonicadvance1@gmail.com Closes: https://gitlab.com/qemu-project/qemu/-/issues/184 Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
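A sketch of the behavioural change (illustrative, not the literal translator code): the immediate selects one of eight predicates, so only its low three bits matter and the rest are simply masked off rather than faulting:

    /* Sketch: ignore high immediate bits instead of raising #UD. */
    static int sse_cmp_predicate(int imm8)
    {
        return imm8 & 7;   /* CMPEQ..CMPORD for legacy SSE comparisons */
    }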
2022-03-23KVM: x86: workaround invalid CPUID[0xD,9] info on some AMD processorsPaolo Bonzini3-9/+16
Some AMD processors expose the PKRU extended save state even if they do not have the related PKU feature in CPUID. Worse, when they do they report a size of 64, whereas the expected size of the PKRU extended save state is 8, therefore the esa->size == eax assertion does not hold. The state is already ignored by KVM_GET_SUPPORTED_CPUID because it was not enabled in the host XCR0. However, QEMU kvm_cpu_xsave_init() runs before QEMU invokes arch_prctl() to enable dynamically-enabled save states such as XTILEDATA, and KVM_GET_SUPPORTED_CPUID hides save states that have yet to be enabled. Therefore, kvm_cpu_xsave_init() needs to consult the host CPUID instead of KVM_GET_SUPPORTED_CPUID, and dies with an assertion failure. When setting up the ExtSaveArea array to match the host, ignore features that KVM does not report as supported. This will cause QEMU to skip the incorrect CPUID leaf instead of tripping the assertion. Closes: https://gitlab.com/qemu-project/qemu/-/issues/916 Reported-by: Daniel P. Berrangé <berrange@redhat.com> Analyzed-by: Yang Zhong <yang.zhong@intel.com> Reported-by: Peter Krempa <pkrempa@redhat.com> Tested-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-23i386: Set MCG_STATUS_RIPV bit for mce SRAR errorluofei1-1/+1
In the physical machine environment, when an SRAR error occurs, the IA32_MCG_STATUS RIPV bit is set, but qemu does not set this bit. When qemu injects an SRAR error into a virtual machine, the virtual machine kernel just calls do_machine_check() to kill the current task, but does not call memory_failure() to isolate the faulty page, which will cause the faulty page to be allocated and used repeatedly. If it is used by the virtual machine kernel, it will cause the virtual machine to crash. Signed-off-by: luofei <luofei@unicloud.com> Message-Id: <20220120084634.131450-1-luofei@unicloud.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-23target/i386/kvm: Free xsave_buf when destroying vCPUPhilippe Mathieu-Daudé1-0/+2
Fix vCPU hot-unplug related leak reported by Valgrind:

==132362== 4,096 bytes in 1 blocks are definitely lost in loss record 8,440 of 8,549
==132362==    at 0x4C3B15F: memalign (vg_replace_malloc.c:1265)
==132362==    by 0x4C3B288: posix_memalign (vg_replace_malloc.c:1429)
==132362==    by 0xB41195: qemu_try_memalign (memalign.c:53)
==132362==    by 0xB41204: qemu_memalign (memalign.c:73)
==132362==    by 0x7131CB: kvm_init_xsave (kvm.c:1601)
==132362==    by 0x7148ED: kvm_arch_init_vcpu (kvm.c:2031)
==132362==    by 0x91D224: kvm_init_vcpu (kvm-all.c:516)
==132362==    by 0x9242C9: kvm_vcpu_thread_fn (kvm-accel-ops.c:40)
==132362==    by 0xB2EB26: qemu_thread_start (qemu-thread-posix.c:556)
==132362==    by 0x7EB2159: start_thread (in /usr/lib64/libpthread-2.28.so)
==132362==    by 0x9D45DD2: clone (in /usr/lib64/libc-2.28.so)

Reported-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Mark Kanda <mark.kanda@oracle.com>
Message-Id: <20220322120522.26200-1-philippe.mathieu.daude@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-23target/i386: force maximum rounding precision for fildl[l]Alex Bennée1-0/+13
The instruction description says "It is loaded without rounding errors." which implies we should have the widest rounding mode possible. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/888 Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20220315121251.2280317-4-alex.bennee@linaro.org>
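A hedged sketch of the approach, using QEMU's softfloat precision helpers from memory (treat the exact function and enum names as assumptions):

    /* Sketch: perform the integer load at the widest (80-bit) rounding
     * precision so the int->floatx80 conversion cannot round, then restore
     * the caller's precision. */
    static floatx80 fildl_widen(int64_t val, CPUX86State *env)
    {
        FloatX80RoundPrec old = get_floatx80_rounding_precision(&env->fp_status);
        floatx80 ret;

        set_floatx80_rounding_precision(floatx80_precision_x, &env->fp_status);
        ret = int64_to_floatx80(val, &env->fp_status);
        set_floatx80_rounding_precision(old, &env->fp_status);
        return ret;
    }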
2022-03-22m68k/nios2-semi: fix gettimeofday() result checkMarc-André Lureau2-2/+2
gettimeofday() returns 0 for success. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Laurent Vivier <laurent@vivier.eu> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2022-03-21Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into stagingPeter Maydell1-4/+13
Bugfixes.
# gpg: Signature made Mon 21 Mar 2022 14:57:57 GMT
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  hw/i386/amd_iommu: Fix maybe-uninitialized error with GCC 12
  target/i386: kvm: do not access uninitialized variable on older kernels
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-21Merge tag 'pull-misc-2022-03-21' of git://repo.or.cz/qemu/armbru into stagingPeter Maydell6-9/+9
Miscellaneous patches for 2022-03-21
# gpg: Signature made Mon 21 Mar 2022 14:48:16 GMT
# gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg: issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* tag 'pull-misc-2022-03-21' of git://repo.or.cz/qemu/armbru:
  Use g_new() & friends where that makes obvious sense
  9pfs: Use g_new() & friends where that makes obvious sense
  scripts/coccinelle: New use-g_new-etc.cocci
  block-qdict: Fix -Werror=maybe-uninitialized build failure
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-21Use g_new() & friends where that makes obvious senseMarkus Armbruster6-9/+9
g_new(T, n) is neater than g_malloc(sizeof(T) * n). It's also safer, for two reasons. One, it catches multiplication overflowing size_t. Two, it returns T * rather than void *, which lets the compiler catch more type errors. This commit only touches allocations with size arguments of the form sizeof(T). Patch created mechanically with:

    $ spatch --in-place --sp-file scripts/coccinelle/use-g_new-etc.cocci \
             --macro-file scripts/cocci-macro-file.h FILES...

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20220315144156.1595462-4-armbru@redhat.com>
Reviewed-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
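A small illustrative example of the before/after shape (the Point type and helper are made up for the demo; g_new() is the standard GLib macro):

    #include <glib.h>

    typedef struct { int x, y; } Point;

    static Point *alloc_points(gsize n)
    {
        /* was: g_malloc(sizeof(Point) * n) — the multiplication can overflow
         * size_t, and the void * result defeats type checking. */
        return g_new(Point, n);   /* overflow-checked, returns Point * */
    }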
2022-03-20target/ppc: Replicate Double->Single-Precision resultLucas Coutinho1-4/+44
Power ISA v3.1 formalizes the previously undefined result in words 1 and 3 to be a copy of the result in words 0 and 2. This affects: xvcvsxdsp, xvcvuxdsp, xvcvdpsp. And the previously undefined result in word 1 to be a copy of the result in word 0. This affects: xscvdpsp. Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br> Message-Id: <20220316200427.3410437-1-lucas.coutinho@eldorado.org.br> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-03-20target/ppc: Replicate double->int32 result for some vector insnsRichard Henderson1-6/+39
Power ISA v3.1 formalizes the previously undefined result in words 1 and 3 to be a copy of the result in words 0 and 2. This affects: xscvdpsxws, xscvdpuxws, xvcvdpsxws, xvcvdpuxws. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/852 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> [ clg: checkpatch fixes ] Message-Id: <20220315053934.377519-1-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-03-20target/i386: kvm: do not access uninitialized variable on older kernelsPaolo Bonzini1-4/+13
KVM support for AMX includes a new system attribute, KVM_X86_XCOMP_GUEST_SUPP. Commit 19db68ca68 ("x86: Grant AMX permission for guest", 2022-03-15) however did not fully consider the behavior on older kernels. First, it warns too aggressively. Second, it invokes the KVM_GET_DEVICE_ATTR ioctl unconditionally and then uses the "bitmask" variable, which remains uninitialized if the ioctl fails. Third, kvm_ioctl returns -errno rather than -1 on errors. While at it, explain why the ioctl is needed and KVM_GET_SUPPORTED_CPUID is not enough. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
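A sketch of the defensive pattern being described, following the names in the commit message (the surrounding QEMU plumbing is assumed, and the exact attribute query in the real code may differ):

    /* Sketch: never read the output buffer of an ioctl that failed, and
     * remember that kvm_ioctl() returns -errno on failure, not -1. */
    static uint64_t query_xcomp_guest_supp(KVMState *s)
    {
        uint64_t bitmask = 0;
        struct kvm_device_attr attr = {
            .group = 0,
            .attr = KVM_X86_XCOMP_GUEST_SUPP,
            .addr = (unsigned long)&bitmask,
        };

        if (kvm_ioctl(s, KVM_GET_DEVICE_ATTR, &attr) < 0) {
            return 0;   /* older kernel: attribute not supported, stay quiet */
        }
        return bitmask;
    }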
2022-03-18target/arm: Make rvbar settable after realizeEdgar E. Iglesias3-9/+16
Make the rvbar property settable after realize. This is done in preparation to model the ZynqMP's runtime configurable rvbar. Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 20220316164645.2303510-3-edgar.iglesias@gmail.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-18target/arm: Log fault address for M-profile faultsPeter Maydell1-0/+6
For M-profile, the fault address is not always exposed to the guest in a fault register (for instance the BFAR bus fault address register is only updated for bus faults on data accesses, not instruction accesses). Currently we log the address only if we're putting it into a particular guest-visible register. Since we always have it, log it generically, to make logs of i-side faults a bit clearer. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20220315204306.2797684-3-peter.maydell@linaro.org
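A minimal sketch of the kind of generic logging this adds (illustrative only; the real change sits in the v7M exception-entry code, and fault_addr stands in for whatever address the caller has):

    /* Sketch: always log the faulting address under CPU_LOG_INT, independent
     * of whether a guest-visible fault address register gets updated. */
    static void log_m_fault_addr(uint32_t fault_addr)
    {
        qemu_log_mask(CPU_LOG_INT, "...at fault address 0x%08" PRIx32 "\n",
                      fault_addr);
    }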
2022-03-18target/arm: Log M-profile vector table accessesPeter Maydell2-0/+10
Currently the CPU_LOG_INT logging misses some useful information about loads from the vector table. Add logging where we load vector table entries. This is particularly helpful for cases where the user has accidentally not put a vector table in their image at all, which can result in confusing guest crashes at startup. Here's an example of the new logging for a case where the vector table contains garbage:

  Loaded reset SP 0x0 PC 0x0 from vector table
  Loaded reset SP 0xd008f8df PC 0xf000bf00 from vector table
  Taking exception 3 [Prefetch Abort] on CPU 0
  ...with CFSR.IACCVIOL
  ...BusFault with BFSR.STKERR
  ...taking pending nonsecure exception 3
  ...loading from element 3 of non-secure vector table at 0xc
  ...loaded new PC 0x20000558
  ----------------
  IN:
  0x20000558:  08000079  stmdaeq  r0, {r0, r3, r4, r5, r6}

(The double reset logging is the result of our long-standing "CPUs all get reset twice" weirdness; it looks a bit ugly but it'll go away if we ever fix that :-))

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20220315204306.2797684-2-peter.maydell@linaro.org
2022-03-18target/arm: Fix handling of LPAE block descriptorsPeter Maydell1-2/+8
LPAE descriptors come in three forms: * table descriptors, giving the address of the next level page table * page descriptors, which occur only at level 3 and describe the mapping of one page (which might be 4K, 16K or 64K) * block descriptors, which occur at higher page table levels, and describe the mapping of huge pages QEMU's page-table-walk code treats block and page entries identically, simply ORing in a number of bits from the input virtual address that depends on the level of the page table that we stopped at; we depend on the previous masking of descaddr with descaddrmask to have already cleared out the low bits of the descriptor word. This is not quite right: the address field in a block descriptor is smaller, and so there are bits which are valid address bits in a page descriptor or a table descriptor but which are not supposed to be part of the address in a block descriptor, and descaddrmask does not clear them. We previously mostly got away with this because those descriptor bits are RES0; however with FEAT_BBM (part of Armv8.4) block descriptor bit 16 is defined to be the nT bit. No emulated QEMU CPU has FEAT_BBM yet, but if the host CPU has it then we might see it when using KVM or hvf. Explicitly zero out all the descaddr bits we're about to OR vaddr bits into. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/790 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220304165628.2345765-1-peter.maydell@linaro.org
2022-03-18target/arm: Fix pauth_check_trap vs SEL2Richard Henderson1-1/+1
When arm_is_el2_enabled was introduced, we missed updating pauth_check_trap. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/788 Fixes: e6ef0169264b ("target/arm: use arm_is_el2_enabled() where applicable") Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20220315021205.342768-1-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-18target/arm: Fix sve2 ldnt1 and stnt1Richard Henderson2-5/+51
For both ldnt1 and stnt1, the meaning of the Rn and Rm are different from ld1 and st1: the vector and integer registers are reversed, and the integer register 31 refers to XZR instead of SP. Secondly, the 64-bit version of ldnt1 was being interpreted as 32-bit unpacked unscaled offset instead of 64-bit unscaled offset, which discarded the upper 32 bits of the address coming from the vector argument. Thirdly, validate that the memory element size is in range for the vector element size for ldnt1. For ld1, we do this via independent decode patterns, but for ldnt1 we need to do it manually. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/826 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20220308031655.240710-1-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>