With the VTLB, different TLB entries may have different page sizes, and
the page size is set in the PS field of each TLB entry. With the STLB,
however, all TLB entries share the same page size, which comes from the
register CSR_STLBPS; the PS field of the TLB entry is not used.
Here the PS field is used for all TLB entries, including STLB entries,
which is convenient for TLB maintenance operations.
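A hypothetical sketch of that convention (the field position and names below are illustrative, not the actual QEMU code): every entry carries log2 of its page size in PS, so lookup and maintenance code read it uniformly, and STLB fills copy CSR_STLBPS into the entry.
```c
#define TLB_PS_SHIFT 24   /* assumed position of the PS field */
#define TLB_PS_MASK  0x3f

/* Return log2(page size) for a TLB entry, regardless of VTLB or STLB. */
static inline unsigned tlb_entry_page_shift(unsigned long entry_misc)
{
    return (entry_misc >> TLB_PS_SHIFT) & TLB_PS_MASK;
}
```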
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
|
|
Function loongarch_cpu_post_init() is implemented and used in the same
file, target/loongarch/cpu.c, so it can be defined as a static function.
This patch moves the implementation of loongarch_cpu_post_init() before
its first reference. This is only code movement; there is no functional
change.
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Move the KVM-specific function definitions to the corresponding
directory. Also remove the #include of "cpu.h" that sits outside the
QEMU_KVM_LOONGARCH_H include guard.
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
|
|
This patch replaces uses of the generic TRANS macro with TRANS64 for
instructions that are only valid when 64-bit support is available.
This improves correctness and avoids potential assertion failures or
undefined behavior during translation on 32-bit-only configurations.
Signed-off-by: WANG Rui <wangrui@loongson.cn>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
|
|
For all 32-bit systems and 64-bit Windows systems, "long" is 4 bytes long.
Due to using "long" for a linear address, svm_canonicalization would
set all high bits to 1 when (assuming 48-bit linear address) the segment
base is bigger than 0x7FFF.
This fixes booting guests under TCG when the guest IDT and GDT bases are
above 0x7FFF; previously those bases would be canonicalized incorrectly.
When an interrupt arrived, it would trigger a #PF exception; handling the
#PF would fault again, escalating to a #DF exception; a third fault would
escalate to a triple fault, eventually causing a shutdown VM-Exit to the
hypervisor right after guest boot.
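A minimal sketch of the underlying issue, assuming a 48-bit linear address width (function and variable names are illustrative, not the QEMU code):
```c
#include <stdint.h>

/* Canonicalize a 48-bit linear address by sign-extending from bit 47.
 * Doing this arithmetic on 'long' breaks on ILP32 and on 64-bit Windows
 * (LLP64), where 'long' is only 32 bits; use a fixed 64-bit type instead. */
static uint64_t canonicalize48(uint64_t addr)
{
    const int shift = 64 - 48;
    return (uint64_t)(((int64_t)(addr << shift)) >> shift);
}
```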
Cc: qemu-stable@nongnu.org
Signed-off-by: Zero Tang <zero.tangptr@gmail.com>
|
|
Currently, QEMU saves/loads CPU exception info (such as exception_nr and
has_error_code), while the exception error_code is ignored. This causes
the destination hypervisor to reinject a vCPU exception with error_code 0,
potentially causing a guest kernel panic.
For instance, if the source VM stopped with a user-mode write #PF
(error_code 6), the destination hypervisor will reinject a #PF with
error_code 0 when the vCPU resumes, and the guest kernel panics with:
BUG: unable to handle page fault for address: 00007f80319cb010
#PF: supervisor read access in user mode
#PF: error_code(0x0000) - not-present page
RIP: 0033:0x40115d
To fix this, save and load the exception error_code as well.
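A hedged sketch of the idea (the exact placement of this field in QEMU's x86 vmstate may differ; treat the subsection below as illustrative):
```c
/* Carry the error code in the migration stream next to the existing
 * exception fields, so the destination can reinject it faithfully. */
static const VMStateDescription vmstate_exception_error_code = {
    .name = "cpu/exception_error_code",
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (const VMStateField[]) {
        VMSTATE_INT32(env.error_code, X86CPU),
        VMSTATE_END_OF_LIST()
    }
};
```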
Signed-off-by: Xin Wang <wangxinxin.wang@huawei.com>
Link: https://lore.kernel.org/r/20250819145834.3998-1-wangxinxin.wang@huawei.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
This reverts commit 00268e00027459abede448662f8794d78eb4b0a4.
(The only conflict is in the !is_tdx_vm() part of the condition,
which is safe to keep).
mark_unavailable_features() actively blocks usage of the feature, so it
is a functional change, not merely the emission of a warning.
The commit was intended only to warn if PDCM was enabled while the
performance counters were not, so revert it.
Reported-by: Christian A. Ehrhardt <christian.ehrhardt@canonical.com>
Analyzed-by: Daniel P. Berrangé <berrange@redhat.com>
Analyzed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-ID: <20250819150235.785559-1-pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
According to the specification, [X]VLDI should trigger an invalid instruction
exception only when Bit[12] is 1 and Bit[11:8] > 12. This patch fixes an issue
where an exception was incorrectly raised even when Bit[12] was 0.
Test case:
```
.global main
main:
vldi $vr0, 3328
ret
```
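A small sketch of the intended trap condition, derived from the description above (the helper name is made up):
```c
#include <stdbool.h>
#include <stdint.h>

/* [X]VLDI: the immediate is invalid only if bit 12 is set
 * and bits [11:8] are greater than 12. */
static bool vldi_imm_invalid(uint32_t imm13)
{
    return (imm13 & (1u << 12)) && (((imm13 >> 8) & 0xfu) > 12);
}
```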
Reported-by: Zhou Qiankang <wszqkzqk@qq.com>
Signed-off-by: WANG Rui <wangrui@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-ID: <20250804132212.4816-1-wangrui@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
|
|
CPUID[0x1]
Currently, the addressable ID encoding for CPUID[0x1].EBX[bits 16-23]
(Maximum number of addressable IDs for logical processors in this
physical package) is covered by vendor_cpuid_only_v2 compat property.
The original intent was to avoid breaking migration, but this compat
property makes it awkward to backport commit f985a1195ba2 ("i386/cpu: Fix
number of addressable IDs field for CPUID.01H.EBX[23:16]").
However, NetBSD boot has been broken since commit 88dd4ca06c83
("i386/cpu: Use APIC ID info to encode cache topo in CPUID[4]"), because
NetBSD calculates SMT information as `lp_max` / `core_max` on legacy Intel
CPUs that don't support the 0xb leaf, where `lp_max` comes from
CPUID[0x1].EBX bits [23:16] and `core_max` comes from CPUID[0x4].0x0
bits [31:26].
Commit 88dd4ca0 changed the encoding rule for `core_max` but didn't
update `lp_max`, so NetBSD gets the wrong SMT information, which leads to
module loading failures.
Luckily, commit f985a1195ba2 ("i386/cpu: Fix number of addressable
IDs field for CPUID.01H.EBX[23:16]") updated the encoding rule for
`lp_max` and accidentally fixed the NetBSD issue too. This also shows
that using CPUID[0x1] and CPUID[0x4].0x0 to calculate HT/SMT information
is common practice for detecting CPU topology on legacy Intel CPUs.
Therefore, it's necessary to backport commit f985a1195ba2 to earlier
stable QEMU releases to help address similar issues there as well. The
compat property is then no longer needed, since all stable QEMU versions
will follow the same encoding.
So, in CPUID[0x1], move the addressable ID encoding out of the compat
property.
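As an illustration of that practice, a stand-alone user-space sketch of the NetBSD-style calculation described above, using the compiler's cpuid intrinsics (purely for demonstration):
```c
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    unsigned int lp_max, core_max;

    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    lp_max = (ebx >> 16) & 0xff;           /* CPUID.01H:EBX[23:16] */

    __get_cpuid_count(4, 0, &eax, &ebx, &ecx, &edx);
    core_max = ((eax >> 26) & 0x3f) + 1;   /* CPUID.04H.00H:EAX[31:26] + 1 */

    printf("smt (threads per core) = %u\n", lp_max / core_max);
    return 0;
}
```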
Reported-by: Michael Tokarev <mjt@tls.msk.ru>
Inspired-by: Chuang Xu <xuchuangxclwt@bytedance.com>
Fixes: commit f985a1195ba2 ("i386/cpu: Fix number of addressable IDs field for CPUID.01H.EBX[23:16]")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3061
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Tested-by: Michael Tokarev <mjt@tls.msk.ru>
Message-ID: <20250804053548.1808629-1-zhao1.liu@intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
|
|
Merge tag 'pull-target-arm-20250801' of https://gitlab.com/pm215/qemu into staging
target-arm queue:
* Add missing 64-bit PMCCNTR in AArch32 mode
* Reinstate bogus AArch32 DBGDTRTX register for migration compat
* fix big-endian handling of AArch64 FPU registers in gdbstub
* fix handling of setting SVE registers from gdbstub
* hw/intc/arm_gicv3_kvm: fix writing of enable/active/pending state to KVM
* hw/display/framebuffer: Add cast to force 64x64 multiply
* tests/tcg: Fix run for tests with specific plugin
* tag 'pull-target-arm-20250801' of https://gitlab.com/pm215/qemu:
tests/tcg: Fix run for tests with specific plugin
target/arm: Fix handling of setting SVE registers from gdb
target/arm: Fix big-endian handling of NEON gdb remote debugging
target/arm: Reinstate bogus AArch32 DBGDTRTX register for migration compat
hw/display/framebuffer: Add cast to force 64x64 multiply
hw/intc/arm_gicv3_kvm: Write all 1's to clear enable/active
hw/intc/arm_gicv3_kvm: Remove writes to ICPENDR registers
target/arm: add support for 64-bit PMCCNTR in AArch32 mode
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
The code to handle setting SVE registers via the gdbstub is broken:
* it sets each pair of elements in the zregs[].d[] array in the
wrong order for the most common (little endian) case: the least
significant 64-bit value comes first
* it makes no attempt to handle target_endian()
* it does a simple copy out of the (target endian) gdbstub buffer
into the (host endian) zregs data structure, which is wrong on
big endian hosts
Fix all these problems:
* use ldq_p() to read from the gdbstub buffer
* check target_big_endian() to see if we need to handle the
128-bit values the opposite way around
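A hedged sketch of the fixed store path, assuming QEMU's ldq_p() and the zregs[].d[] layout named above (the helper and its parameters are illustrative, not the literal gdbstub code):
```c
/* Store one 128-bit Z-register element from the target-endian gdb buffer. */
static void set_zreg_quad(CPUARMState *env, int reg, int quad,
                          const uint8_t *buf, bool target_be)
{
    uint64_t lo = ldq_p(buf);        /* target-endian 64-bit load */
    uint64_t hi = ldq_p(buf + 8);

    if (target_be) {
        /* big-endian targets send the most-significant half first */
        uint64_t tmp = lo;
        lo = hi;
        hi = tmp;
    }
    env->vfp.zregs[reg].d[2 * quad] = lo;      /* least-significant half first */
    env->vfp.zregs[reg].d[2 * quad + 1] = hi;
}
```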
Cc: qemu-stable@nongnu.org
Signed-off-by: Vacha Bhavsar <vacha.bhavsar@oss.qualcomm.com>
Message-id: 20250722173736.2332529-3-vacha.bhavsar@oss.qualcomm.com
[PMM: adjusted commit message, fixed spacing]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
In the code for allowing the gdbstub to set the value of an AArch64
FP/SIMD register, we weren't accounting for target_big_endian()
being true. This meant that for aarch64_be-linux-user we would
set the two halves of the FP register the wrong way around.
The much more common case of a little-endian guest is not affected;
nor are big-endian hosts.
Correct the handling of this case.
Cc: qemu-stable@nongnu.org
Signed-off-by: Vacha Bhavsar <vacha.bhavsar@oss.qualcomm.com>
Message-id: 20250722173736.2332529-2-vacha.bhavsar@oss.qualcomm.com
[PMM: added comment, expanded commit message, fixed missing space]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
In commit 655659a74a we fixed some bugs in the encoding of the
Debug Communications Channel registers, including that we were
incorrectly exposing an AArch32 register at p14, 3, c0, c5, 0.
Unfortunately, removing a register breaks forward migration
compatibility for TCG, because we will fail the migration if the
source QEMU passes us a cpreg which the destination QEMU does not
have. We don't have a mechanism for saying "it's OK to ignore this
sysreg in the inbound data", so for the 10.1 release reinstate the
incorrect AArch32 register.
(We probably have had other cases in the past of breaking migration
compatibility like this, but we didn't notice because we didn't test
and in any case not that many people care about TCG migration
compatibility. KVM migration compat is not affected because for KVM
we treat the kernel as the source of truth for what system registers
are present.)
Fixes: 655659a74a36b ("target/arm: Correct encoding of Debug Communications Channel registers")
Reported-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250731134338.250203-1-peter.maydell@linaro.org
|
|
In the PMUv3, a new AArch32 64-bit (MCRR/MRRC) accessor for the
PMCCNTR was added. In QEMU we forgot to implement this, so we only
provided the 32-bit accessor. Since we have a 64-bit PMCCNTR
sysreg for AArch64, adding the 64-bit AArch32 version is easy.
We add the PMCCNTR to the v8_cp_reginfo because PMUv3 was added
in the ARMv8 architecture. This is consistent with how we
handle the existing PMCCNTR support, where we always implement
it for all v7 CPUs. This is arguably something we should
clean up so it is gated on ARM_FEATURE_PMU and/or an ID
register check for the relevant PMU version, but we should
do that as its own tidyup rather than being inconsistent between
this PMCCNTR accessor and the others.
Since the register name is the same as the 32-bit PMCCNTR, we set
ARM_CP_NO_GDB on the 32-bit one to avoid generating an invalid GDB XML.
See https://developer.arm.com/documentation/ddi0601/2024-06/AArch32-Registers/PMCCNTR--Performance-Monitors-Cycle-Count-Register?lang=en
Note for potential backporting:
* this code in cpregs-pmu.c will be in helper.c on stable
branches that don't have commit ae2086426d37
Cc: qemu-stable@nongnu.org
Signed-off-by: Alex Richardson <alexrichardson@google.com>
Message-id: 20250725170136.145116-1-alexrichardson@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
On LoongArch64 systems, the high 32 bits of a 64-bit virtual address
should be 0x00000[0-7]yyy or 0xffff8yyy; bits 47 to 63 must be all 0 or
all 1.
Function get_physical_address() only checks bits 48 to 63, so there is a
problem with the following test case. On a physical machine, a bus error
is reported and the program exits abnormally. However, under QEMU TCG
system emulation the program runs normally, because the virtual addresses
0xffff000000000000ULL + addr and addr are treated the same during TLB
entry checking. This patch fixes the issue.
```
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *addr, *addr1;
    int val;

    addr = malloc(100);
    *(int *)addr = 1;
    addr1 = (void *)(0xffff000000000000ULL + (unsigned long)addr);
    val = *(int *)addr1;
    printf("val %d\n", val);
    return 0;
}
```
Cc: qemu-stable@nongnu.org
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Acked-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-ID: <20250714015446.746163-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
|
|
RISC-V AIA Spec states:
"For a machine-level environment, extension Smaia encompasses all added
CSRs and all modifications to interrupt response behavior that the AIA
specifies for a hart, over all privilege levels. For a supervisor-level
environment, extension Ssaia is essentially the same as Smaia except
excluding the machine-level CSRs and behavior not directly visible to
supervisor level."
Since midelegh is an AIA machine-mode CSR, add Smaia extension check in
aia_smode32 predicate.
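A hedged sketch of the predicate change (simplified; the real QEMU predicate performs additional checks before falling through to the existing RV32 S-mode test):
```c
static RISCVException aia_smode32(CPURISCVState *env, int csrno)
{
    if (!riscv_cpu_cfg(env)->ext_smaia) {
        return RISCV_EXCP_ILLEGAL_INST;   /* midelegh & co. need Smaia */
    }
    return smode32(env, csrno);           /* existing RV32 S-mode check */
}
```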
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Jay Chang <jay.chang@sifive.com>
Reviewed-by: Nutty Liu <liujingqi@lanxincomputing.com>
Message-ID: <20250701030021.99218-3-jay.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
|
|
RISC-V Privileged Spec states:
"In harts with S-mode, the medeleg and mideleg registers must exist, and
setting a bit in medeleg or mideleg will delegate the corresponding trap,
when occurring in S-mode or U-mode, to the S-mode trap handler. In
harts without S-mode, the medeleg and mideleg registers should not
exist."
Add smode predicate to ensure these CSRs are only accessible when S-mode
is supported.
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Jay Chang <jay.chang@sifive.com>
Reviewed-by: Nutty Liu <liujingqi@lanxincomputing.com>
Message-ID: <20250701030021.99218-2-jay.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
|
|
When supervisor CSRs are accessed from VU-mode, a virtual instruction
exception should be raised instead of an illegal instruction.
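An illustrative sketch of the intended check (not the exact QEMU code; the function name is made up):
```c
static RISCVException check_csr_access_priv(CPURISCVState *env, int csr_priv)
{
    /* Supervisor CSR touched from VU-mode: virtual instruction exception. */
    if (env->virt_enabled && env->priv == PRV_U && csr_priv == PRV_S) {
        return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
    }
    if (env->priv < csr_priv) {
        return RISCV_EXCP_ILLEGAL_INST;
    }
    return RISCV_EXCP_NONE;
}
```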
Fixes: c1fbcecb3a (target/riscv: Fix csr number based privilege checking)
Signed-off-by: Xu Lu <luxu.kernel@bytedance.com>
Reviewed-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Nutty Liu <liujingqi@lanxincomputing.com>
Message-ID: <20250708060720.7030-1-luxu.kernel@bytedance.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
|
|
This reverts commit 28c12c1f2f50d7f7f1ebfc587c4777ecd50aac5b.
As reported in [1] this commit is breaking Linux vector code, and
although a simpler reproducer was provided, the fix itself isn't trivial
due to the amount and the nature of the changes. And we really do not
want to keep Linux broken while we work on it.
The revert will fix Linux and will give us time to do a proper fix.
[1] https://mail.gnu.org/archive/html/qemu-devel/2025-07/msg02525.html
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Tested-by: Eric Biggers <ebiggers@kernel.org>
Message-ID: <20250710100525.372985-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
|
|
GETPC() should always be called from the top-level helper, i.e. the
first helper that is called by the translation code. We stopped doing
that in commit 3157a553ec, and then introduced problems when
unwinding the exceptions being thrown by helper_mret(), as reported by
[1].
Call GETPC() at the top level helper and pass the value along.
[1] https://gitlab.com/qemu-project/qemu/-/issues/3020
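A minimal sketch of the pattern, with illustrative names (not the literal QEMU code): capture the return address in the outermost helper and pass it down.
```c
/* The inner function receives the raw return address captured above it. */
static target_ulong do_return_from_trap(CPURISCVState *env, uintptr_t ra)
{
    if (env->priv != PRV_M) {
        /* Exception unwinding uses 'ra' from the top-level helper. */
        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, ra);
    }
    /* ... perform the actual privilege/state restore here ... */
    return env->mepc;
}

target_ulong helper_mret(CPURISCVState *env)
{
    return do_return_from_trap(env, GETPC());   /* GETPC() only at top level */
}
```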
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: 3157a553ec ("target/riscv: Add Smrnmi mnret instruction")
Closes: https://gitlab.com/qemu-project/qemu/-/issues/3020
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Nutty Liu <liujingqi@lanxincomputing.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20250714133739.1248296-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
|
|
pmp_is_in_range() matches addresses within the closed interval
[start, end]. To achieve this, pmpaddrX is decremented when the end
address is updated.
In TOR mode, a rule is ignored if its start address is greater than or
equal to its end address.
However, if pmpaddrX is set to 0, this decrement causes the calculated
end address to wrap around to UINT_MAX. In this scenario, the address
guard for this PMP entry becomes ineffective.
This patch addresses the issue by moving the guard check earlier,
preventing the problematic wraparound when pmpaddrX is zero.
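An illustrative sketch of the ordering (names are made up): decide whether the TOR rule is ignored before subtracting 1 from the end address, so pmpaddrX == 0 can no longer wrap into a huge "valid" range.
```c
#include <stdbool.h>
#include <stdint.h>

static void pmp_tor_range(uint64_t prev_pmpaddr, uint64_t pmpaddr,
                          uint64_t *sa, uint64_t *ea, bool *ignore)
{
    uint64_t start = prev_pmpaddr << 2;
    uint64_t end   = pmpaddr << 2;

    *ignore = (start >= end);   /* guard checked before the decrement */
    *sa = start;
    *ea = end - 1;              /* only wraps for an already-ignored rule */
}
```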
Signed-off-by: Vac Chen <vacantron@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20250706065554.42953-1-vacantron@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
|
|
Misc HW patches
- Fix MIPS MVPControl.EVP update
- Fix qxl_unpack_chunks() chunk size calculation
- Fix Cadence GEM register mask initialization
- Fix AddressSpaceDispatch use after free
- Fix building npcm7xx/npcm8xx bootroms
- Include missing headers
* tag 'hw-misc-20250729' of https://github.com/philmd/qemu:
hw/display/sm501: fix missing error-report.h
roms/Makefile: fix npcmNxx_bootrom build rules
system/physmem: fix use-after-free with dispatch
hw/xen/passthrough: add missing error-report include
hw/net/cadence_gem: fix register mask initialization
migration: rename target.c to vfio.c
hw/vfio/vfio-migration: Remove unnecessary 'qemu/typedefs.h' include
hw/display/qxl-render: fix qxl_unpack_chunks() chunk size calculation
target/mips: Only update MVPControl.EVP bit if executed by master VPE
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
According to the 'MIPS MT Application-Specific Extension' manual:
If the VPE executing the instruction is not a Master VPE,
with the MVP bit of the VPEConf0 register set, the EVP bit
is unchanged by the instruction.
Modify the DVPE/EVPE opcodes to only update the MVPControl.EVP bit
if executed on a master VPE.
Cc: qemu-stable@nongnu.org
Reported-by: Hansni Bu
Buglink: https://bugs.launchpad.net/qemu/+bug/1926277
Fixes: f249412c749 ("mips: Add MT halting and waking of VPEs")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-ID: <20210427133343.159718-1-f4bug@amsat.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
|
|
* rust: small cleanups + script to update packages
* target/i386: AVX bugfix
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
target/i386: fix width of third operand of VINSERTx128
scripts: add script to help distros use global Rust packages
rust/pl011: merge device_class.rs into device.rs
rust: devices are not staticlibs
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Table A-5 of the Intel manual incorrectly lists the third operand of
VINSERTx128 as Wqq, but it is actually a 128-bit value. This is
visible when W is a memory operand close to the end of the page.
Fixes the recently-added poly1305_kunit test in linux-next.
(No testcase yet, but I plan to modify test-avx2 to use memory
close to the end of the page. This would work because the test
vectors correctly have the memory operand as xmm2/m128).
Reported-by: Eric Biggers <ebiggers@kernel.org>
Tested-by: Eric Biggers <ebiggers@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: qemu-stable@nongnu.org
Fixes: 79068477686 ("target/i386: reimplement 0x0f 0x3a, add AVX", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Linux zeroes LORC_EL1 on boot at EL2, without further interaction with FEAT_LOR afterwards.
Stub out LORC_EL1 accesses as FEAT_LOR is a mandatory extension on Armv8.1+.
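A sketch of what such a stub typically looks like in QEMU's ARMCPRegInfo style; treat the entry below as illustrative (RAZ/WI via ARM_CP_CONST, encoding op0=3, op1=0, CRn=10, CRm=4, op2=3 per the Arm ARM):
```c
{ .name = "LORC_EL1", .state = ARM_CP_STATE_AA64,
  .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 3,
  .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
```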
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
In our implementation of the SVE2p1 contiguous load to 128-bit
element insns such as LD1D (scalar plus scalar, single register), we
got the order of the arguments to the DO_LD1_2() macro wrong. Here
the first argument is the element size and the second is the memory
size, and the element size is always the same size or larger than
the memory size.
For the 128-bit versions, we want to load either 32-bit or 64-bit
values from memory and extend them to the 128-bit vector element, but
were trying to load 128 bit values and then stuff them into 32-bit or
64-bit vector elements. Correct the macro ordering.
Fixes: fc5f060bcb7b ("target/arm: Implement {LD1, ST1}{W, D} (128-bit element) for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250723165458.3509150-7-peter.maydell@linaro.org
|
|
Our implementation of the helper functions for the LD1Q and ST1Q
insns reused the existing DO_LD1_ZPZ_D and DO_ST1_ZPZ_D macros. This
passes the wrong esize (8, not 16) to sve_ldl_z().
Create new macros DO_LD1_ZPZ_Q and DO_ST1_ZPZ_Q which pass the
correct esize, and use them for the LD1Q and ST1Q helpers.
Fixes: d2aa9a804ee ("target/arm: Implement LD1Q, ST1Q for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250723165458.3509150-6-peter.maydell@linaro.org
|
|
Unlike the "LD1D (scalar + vector)" etc instructions, LD1Q is
vector + scalar. This means that:
* the vector and the scalar register are in opposite fields
in the encoding
* 31 in the scalar register field is XZR, not XSP
The same applies for ST1Q.
This means we can't reuse the trans_LD1_zprz() and trans_ST1_zprz()
functions for LD1Q and ST1Q. Split them out to use their own
trans functions.
Note that the change made here to sve.decode requires the decodetree
bugfix "decodetree: Infer argument set before inferring format" to
avoid a spurious compile-time error about "dtype".
Fixes: d2aa9a804ee678f ("target/arm: Implement LD1Q, ST1Q for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250723165458.3509150-5-peter.maydell@linaro.org
|
|
Instead of trying to pack mtedesc into the upper 17 bits of a 32-bit
gvec descriptor, pass the gvec descriptor in the lower 32 bits and
the mte descriptor in the upper 32 bits of a 64-bit operand.
This fixes two bugs:
(1) gen_sve_ldr() and gen_sve_str() call gen_mte_checkN() with a
length value which is the SVE vector length and can be up to 256
bytes. We don't assert there that it fits in the descriptor, so
we would just fail to do the MTE checks on the right length of memory
if the VL is more than 32 bytes
(2) the new-in-SVE2p1 insns LD3Q, LD4Q, ST3Q, ST4Q also involve
transfers of more than 32 bytes of memory. In this case we would
assert at translate time.
(Note for potential backporting: this commit depends on the previous
"target/arm: Expand the descriptor for SME/SVE memory ops to i64".)
Fixes: 7b1613a1020d2942 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20250723165458.3509150-3-peter.maydell@linaro.org
[PMM: expand commit message to clarify that we are fixing bugs here]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
We have run out of room attempting to pack both the gvec
descriptor and the mte descriptor into 32 bits.
Here, change nothing except the parameter type, which
affects all declarations, the function typedefs, and the
type used with tcg expansion.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20250723165458.3509150-2-peter.maydell@linaro.org
|
|
Commit a2260983c655 ("hvf: arm: Add support for GICv3") added GICv3 support
by implementing emulation for a few system registers. ICC_RPR_EL1 was
defined but never plugged into the sysreg handlers (for no good reason).
Fix it.
Fixes: a2260983c655 ("hvf: arm: Add support for GICv3")
Signed-off-by: Zenghui Yu <zenghui.yu@linux.dev>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20250714160139.10404-3-zenghui.yu@linux.dev
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Quoting Peter Maydell:
" hvf_sysreg_read_cp() and hvf_sysreg_write_cp() do not check the .access
field of the ARMCPRegInfo to ensure that they forbid writes to registers
that are marked with a .access field that says they're read-only (and
ditto reads to write-only registers). "
Before we add more registers in GIC sysreg handlers, let's get it correct
by adding the .access checks to hvf_sysreg_read_cp() and
hvf_sysreg_write_cp(). With that, a sysreg access with invalid permission
will result in an UNDEFINED exception.
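A simplified sketch of the permission test being added (ARMCPRegInfo's .access field is real; the helper and surrounding control flow are illustrative):
```c
/* Returns false when the register's .access bits forbid this direction;
 * the caller then injects an UNDEFINED exception into the guest. */
static bool hvf_access_permitted(const ARMCPRegInfo *ri, bool isread)
{
    if (isread) {
        return ri->access & PL1_R;
    }
    return ri->access & PL1_W;
}
```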
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Zenghui Yu <zenghui.yu@linux.dev>
Message-id: 20250714160139.10404-2-zenghui.yu@linux.dev
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
For the LD1Q instruction (gather load of quadwords) we use the
LD1_zprz pattern with MO_128 elements. At this element size there is
no signed vs unsigned distinction, and we only set the 'u' bit in the
arg_LD1_zprz struct because we share the code and decode struct with
smaller element sizes.
However, we set u=0 in the decode pattern line but then accidentally
asserted that it was 1 in the trans function. Since our usual convention
is that the "default" is unsigned and we only mark operations as signed
when they really do need to extend, change the decode pattern line to
set u=1 to match the assert.
Fixes: d2aa9a804ee6 ("target/arm: Implement LD1Q, ST1Q for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-11-peter.maydell@linaro.org
|
|
The FMAXNMQV and FMINNMQV insns use the default NaN as their identity
value for inactive source vector elements. We open-coded this in
sve_helper.c, hoping to avoid a function call. However, this fails
to account for FPCR.AH=1 changing the default NaN value to set the
sign bit. Use a call to floatN_default_nan() to obtain this value.
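A one-function sketch of the fix (float64_default_nan() is the softfloat helper referred to above; the wrapper is illustrative):
```c
/* Identity element for inactive FMAXNMQV/FMINNMQV lanes: the default NaN,
 * whose sign follows FPCR.AH through the float_status flags. */
static float64 reduce_identity_nan(float_status *status)
{
    return float64_default_nan(status);
}
```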
Fixes: 1de7ecfc12d05 ("target/arm: Implement FADDQV, F{MIN, MAX}{NM}QV for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-10-peter.maydell@linaro.org
|
|
In the part of the SVE DO_REDUCE macro used by the SVE2p1 FMAXQV,
FMINQV, etc insns, we incorrectly applied the H() macro twice when
calculating an offset to add to the vn pointer. This has no effect
on little-endian hosts but on big-endian hosts the two invocations
will cancel each other out and we will access the wrong part of the
array.
The "s * 16" part of the expression is already aligned, so we only
need to use the H macro on the "e". Correct the macro usage.
Fixes: 1de7ecfc12d05 ("target/arm: Implement FADDQV, F{MIN, MAX}{NM}QV for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-9-peter.maydell@linaro.org
|
|
When we implemented the FMAXQV and FMINQV insns we accidentally
inverted the sense of the FPCR.AH test, so we gave the AH=1 behaviour
when FPCR.AH was zero, and vice-versa. (The difference is limited to
handling of negative zero and NaN inputs.)
Fixes: 1de7ecfc12d05 ("target/arm: Implement FADDQV, F{MIN, MAX}{NM}QV for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20250718173032.2498900-8-peter.maydell@linaro.org
|
|
FEAT_SVE_B16B16 adds bfloat16 versions of the FMLA and FMLS insns in
the SVE floating-point multiply-add (indexed) insn group. Implement
these.
Fixes: 7b1613a1020d2942 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-7-peter.maydell@linaro.org
|
|
FEAT_SVE_B16B16 adds bfloat16 versions of the FMLA and FMLS insns in
the "SVE floating-point multiply-accumulate writing addend" group,
encoded as sz=0b00.
Fixes: 7b1613a1020d2942 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-6-peter.maydell@linaro.org
|
|
FEAT_SVE_B16B16 adds a bfloat16 version of the FMUL insn in the
floating-point multiply (indexed) instruction group. The encoding
is slightly bespoke; in our implementation we use MO_8 to indicate
bfloat16, as with the other B16B16 insns.
Fixes: 7b1613a1020d2942 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-5-peter.maydell@linaro.org
|
|
FEAT_SVE_B16B16 adds bfloat16 versions of the SVE floating point
(predicated) instructions, which are encoded via sz=0b00. Add the
BFMAX and BFMIN insns. These have separate behaviour for AH=1 and
AH=0; we have already implemented the AH=1 helper for the SME2
versions of these insns.
Fixes: 7b1613a1020d2942 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-4-peter.maydell@linaro.org
|
|
FEAT_SVE_B16B16 adds bfloat16 versions of the SVE floating point
(predicated) instructions, which are encoded via sz=0b00.
Add BFADD, BFSUB, BFMUL, BFMAXNM, BFMINNM; these are all the insns
in this group which do not change behaviour for AH=1.
We will deal with BFMAX/BFMIN (which do have different AH=1
behaviour) in a following commit.
Fixes: 7b1613a1020d2942 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-3-peter.maydell@linaro.org
|
|
FEAT_SVE_B16B16 adds bfloat16 versions of the SVE floating point
(unpredicated) instructions, which are encoded via sz==0b00.
Fixes: 7b1613a1020d2942 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-2-peter.maydell@linaro.org
|
|
If you try to build aarch64-linux-user with clang and --enable-debug then
the build fails:
ld: libqemu-aarch64-linux-user.a.p/target_arm_cpu64.c.o: in function `cpu_arm_set_sve':
../../target/arm/cpu64.c:321:(.text+0x1254): undefined reference to `kvm_arm_sve_supported'
This is a regression introduced in commit f86d4220, which switched
the kvm-stub.c file away from being built for all arm targets to only
being built for system emulation binaries. It doesn't affect gcc,
presumably because even at -O0 gcc folds away the always-false
kvm_enabled() condition but clang does not.
We would prefer not to build kvm-stub.c once for usermode and once
for system-emulation binaries, and we can't build it just once for
both because it includes cpu.h. So instead provide always-false
versions of the five functions that are valid to call without KVM
support in kvm_arm.h.
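A minimal sketch of the approach, following the usual QEMU pattern of static inline stubs when CONFIG_KVM is not defined (only one of the five functions is shown):
```c
#ifndef CONFIG_KVM
/* Always-false stub so kvm_enabled()-guarded call sites still link. */
static inline bool kvm_arm_sve_supported(void)
{
    return false;
}
/* ... similar always-false/no-op stubs for the other four functions ... */
#endif
```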
Fixes: f86d42205c2eba ("target/arm/meson: accelerator files are not needed in user mode")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3033
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-id: 20250714135152.1896214-1-peter.maydell@linaro.org
|
|
We don't implement the Debug Communications Channel (DCC), but
we do attempt to provide dummy versions of its system registers
so that software that tries to access them doesn't fall over.
However, we got the tx/rx register definitions wrong. These
should be:
AArch32:
DBGDTRTX p14 0 c0 c5 0 (on writes)
DBGDTRRX p14 0 c0 c5 0 (on reads)
AArch64:
DBGDTRTX_EL0 2 3 0 5 0 (on writes)
DBGDTRRX_EL0 2 3 0 5 0 (on reads)
DBGDTR_EL0 2 3 0 4 0 (reads and writes)
where DBGDTRTX and DBGDTRRX are effectively different names for the
same 32-bit register, which has tx behaviour on writes and rx
behaviour on reads. The AArch64-only DBGDTR_EL0 is a 64-bit wide
register whose top and bottom halves map to the DBGDTRRX and DBGDTRTX
registers.
Currently we have just one cpreg struct, which:
* calls itself DBGDTR_EL0
* uses the DBGDTRTX_EL0/DBGDTRRX_EL0 encoding
* is marked as ARM_CP_STATE_BOTH but has the wrong opc1
value for AArch32
* is implemented as RAZ/WI
Correct the encoding so:
* we name the DBGDTRTX/DBGDTRRX register correctly
* we split it into AA64 and AA32 versions so we can get the
AA32 encoding right
* we implement DBGDTR_EL0 at its correct encoding
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2986
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250708141049.778361-1-peter.maydell@linaro.org
|
|
We don't synchronize vcpu registers from the hardware accelerator (e.g., by
cpu_synchronize_state()) in the Dabort handler, so env->pc points to the
instruction which has nothing to do with the Dabort at all.
It doesn't seem to make much sense to log the PC in every Dabort handler,
so let's just remove it from this trace event.
Signed-off-by: Zenghui Yu <zenghui.yu@linux.dev>
Reviewed-by: Mads Ynddal <mads@ynddal.dk>
Message-id: 20250713154719.4248-1-zenghui.yu@linux.dev
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Commit 40da501d8989 ("i386/tdx: handle TDG.VP.VMCALL<GetQuote>") added
redundant qemu_mutex_init(&tdx->lock) in tdx_guest_init by mistake.
Fix it by removing the redundant one.
Fixes: 40da501d8989 ("i386/tdx: handle TDG.VP.VMCALL<GetQuote>")
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Link: https://lore.kernel.org/r/20250717103707.688929-1-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The implementation of host_cpu_max_instance_init() was merged into
host_cpu_instance_init() by commit 29f1ba338baf ("target/i386: merge
host_cpu_instance_init() and host_cpu_max_instance_init()"), while the
declaration of it remains in host-cpu.h.
Clean it up.
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250716063117.602050-1-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Take tdx_guest->lock when injecting the event notification interrupt into
the guest.
Fixes CID 1612364.
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Cc: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
In x86_cpu_post_initfn(), the initialization of x86_ext_save_areas[]
marks the unsupported xsave areas based on host support.
This step must be done before accel_cpu_instance_init(), otherwise,
KVM's assertion on host xsave support would fail:
qemu-system-x86_64: ../target/i386/kvm/kvm-cpu.c:149:
kvm_cpu_xsave_init: Assertion `esa->size == eax' failed.
(on AMD EPYC 7302 16-Core Processor)
Move x86_ext_save_areas[] initialization to .instance_init and place it
before accel_cpu_instance_init().
Fixes: commit 5f158abef44c ("target/i386: move accel_cpu_instance_init to .instance_init")
Reported-by: Paolo Abeni <pabeni@redhat.com>
Tested-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250717023933.2502109-1-zhao1.liu@intel.com
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|