Merge tag 'pull-request-2025-07-21' of https://gitlab.com/thuth/qemu into staging
* Remove unused 32-bit arm Linux headers
* Fix some small issues in the functional tests and docs
# -----BEGIN PGP SIGNATURE-----
#
# iQJFBAABCgAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmh99uIRHHRodXRoQHJl
# ZGhhdC5jb20ACgkQLtnXdP5wLbUxsQ//XlRxmO5iChFc68yFF/zy7iVgLa5mQDws
# MeFQm5agBSRp7kK0zwb08FxE9nOzwh9OljdUUWfg858OWiHeFLiMyn85c/RM7SBn
# qovku4TfmP7TyII/czU7KbejvJvA6xrV7Adm1ltiwmV/fAueJ/RTknzY7Omy0hgV
# crRJP+xU1MWAg892QkRPrwOS1HfAsrJJs5XFkNJS9SzYhR1SSUwCGKl2qtADCUdP
# Vik88CiwMWhHiyutbsqQX1AOo+UHcNq1r+IcabqZqLed2au4sChxTq9S9xEKTLQ5
# 3QEFG2vy/QkgIpWeOkpYBQt7kQyyo0XUMECL16CY8tLaDq6/sgooSGU4WK7TxgAU
# 66GiV/VA0nJi2QkOdx9mH1BGcEjR9UMvjnvNdOUYZ2nwfg77vjHjDdI6+DFITf/W
# 3EPCKGaZBijYqsLxK2kAzM21lj+6XGXcuYnUGVWw5xte+2J4pr7LRuj1ZgWgQo8i
# qU9pS9HAz37IQumAz5ibi8/MeeyRqQkGjqyXvCL5v3PM4Ct6atgbpN0pW17GnUZ7
# oJQMxpnsie1l2VmdwbMZX8MOmnTA37AX2fMfhsjctGRtiF+Jd4XW7gAtDdsdhBtz
# I56DxTt+2oe/P35xpggH7s75Xj9k78QpWcG4HQcCxEsXNbFk0kWf6+UMCGyP1H5F
# 6kcO8zKrqBo=
# =HPaT
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 21 Jul 2025 04:14:26 EDT
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* tag 'pull-request-2025-07-21' of https://gitlab.com/thuth/qemu:
docs/devel: fix over-quoting of QEMU_TEST_KEEP_SCRATCH
functional: always enable all python warnings
functional: ensure sockets and files are closed
functional: ensure log handlers are closed
linux-headers: Remove the 32-bit arm headers
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCAAdFiEEIV1G9IJGaJ7HfzVi7wSWWzmNYhEFAmh91p0ACgkQ7wSWWzmN
# YhG+2wgAqw3G2TGRPT29ObyYDcd2Z54jdnNpX5gEND/UnqENprXfdD3PR58bnxe3
# uJGPRkMXgkIDit61lshsb8DF8x9ZEIlm/Ax5FM0ksBczWDYHiyEuXoyt2Uai1kWY
# fLBkVfjFqCu1AGniboCZiC4ZawZXIqkx/+DI3J/XRqa+bSCQ18I15dsLD/yxU/pp
# Hwxp07/d+UayANdxs0mZ5Lr7a1ktTgytCt0O2jQNHlMzfOvdBbVbF/WGclMWfNgI
# 68HWPY7P8k8jRTRFx3H/0LyYQrPyseTpa3zHC+zW9jNskkPkhCwlAY4UDC8x8LII
# OjsDc/0nre626rNCiJlifD3UJ7t86A==
# =xj23
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 21 Jul 2025 01:56:45 EDT
# gpg: using RSA key 215D46F48246689EC77F3562EF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>" [full]
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* tag 'net-pull-request' of https://github.com/jasowang/qemu:
net/vhost-user: Remove unused "err" from chr_closed_bh() (CID 1612365)
net/passt: Initialize "error" variable in net_passt_send() (CID 1612368)
net/passt: Check return value of g_remove() in net_passt_cleanup() (CID 1612369)
net/passt: Remove dead code in passt_vhost_user_start error path (CID 1612371)
net/vhost-user: Remove unused "err" from net_vhost_user_event() (CID 1612372)
net/passt: Remove unused "err" from passt_vhost_user_event() (CID 1612375)
hw/net/npcm_gmac.c: Drop 'buf' local variable
hw/net/npcm_gmac.c: Correct test for when to reallocate packet buffer
hw/net/npcm_gmac.c: Unify length and prev_buf_size variables
hw/net/npcm_gmac.c: Send the right data for second packet in a row
tap: fix net_init_tap() return code
net/tap: drop too small packets
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Some CA files may contain multiple intermediaries and roots of trust.
These may not fit into the hard-coded limit of 16.
Extend the validation code to allocate enough space to load all of the
certificates present in the CA file and ensure they are cleaned up.
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Henry Kleynhans <hkleynhans@fb.com>
[DB: drop MAX_CERTS constant & whitespace tweaks]
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
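As a hedged sketch of the general approach (not necessarily the exact call the
patch uses), gnutls can size the certificate array itself instead of relying on
a fixed certs[16]:
    #include <gnutls/gnutls.h>
    #include <gnutls/x509.h>

    /* Illustrative sketch: import every certificate present in the CA data
     * instead of filling a fixed-size certs[16] array, then free them all. */
    static int load_all_ca_certs(const gnutls_datum_t *cacert_pem)
    {
        gnutls_x509_crt_t *certs = NULL;
        unsigned int ncerts = 0;

        if (gnutls_x509_crt_list_import2(&certs, &ncerts, cacert_pem,
                                         GNUTLS_X509_FMT_PEM, 0) < 0) {
            return -1;
        }
        /* ... validate each certs[i] here ... */
        for (unsigned int i = 0; i < ncerts; i++) {
            gnutls_x509_crt_deinit(certs[i]);
        }
        gnutls_free(certs);
        return (int)ncerts;
    }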
|
|
Commit a2260983c655 ("hvf: arm: Add support for GICv3") added GICv3 support
by implementing emulation for a few system registers. ICC_RPR_EL1 was
defined but not plugged into the sysreg handlers (for no good reason).
Fix it.
Fixes: a2260983c655 ("hvf: arm: Add support for GICv3")
Signed-off-by: Zenghui Yu <zenghui.yu@linux.dev>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20250714160139.10404-3-zenghui.yu@linux.dev
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Quoting Peter Maydell:
" hvf_sysreg_read_cp() and hvf_sysreg_write_cp() do not check the .access
field of the ARMCPRegInfo to ensure that they forbid writes to registers
that are marked with a .access field that says they're read-only (and
ditto reads to write-only registers). "
Before we add more registers in GIC sysreg handlers, let's get it correct
by adding the .access checks to hvf_sysreg_read_cp() and
hvf_sysreg_write_cp(). With that, a sysreg access with invalid permission
will result in an UNDEFINED exception.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Zenghui Yu <zenghui.yu@linux.dev>
Message-id: 20250714160139.10404-2-zenghui.yu@linux.dev
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
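A minimal sketch of the kind of permission check described, assuming QEMU's
ARMCPRegInfo .access bit masks; the helper name and the choice of PL1 bits are
illustrative:
    /* Illustrative helper (name invented): reject accesses that the
     * register's .access mask forbids; the caller injects UNDEFINED. */
    static bool cp_access_permitted(const ARMCPRegInfo *ri, bool isread)
    {
        if (isread) {
            return (ri->access & PL1_R) != 0;   /* reads need an R permission */
        }
        return (ri->access & PL1_W) != 0;       /* writes need a W permission */
    }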
|
|
For the LD1Q instruction (gather load of quadwords) we use the
LD1_zprz pattern with MO_128 elements. At this element size there is
no signed vs unsigned distinction, and we only set the 'u' bit in the
arg_LD1_zprz struct because we share the code and decode struct with
smaller element sizes.
However, we set u=0 in the decode pattern line but then accidentally
asserted that it was 1 in the trans function. Since our usual convention
is that the "default" is unsigned and we only mark operations as signed
when they really do need to extend, change the decode pattern line to
set u=1 to match the assert.
Fixes: d2aa9a804ee6 ("target/arm: Implement LD1Q, ST1Q for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-11-peter.maydell@linaro.org
|
|
The FMAXNMQV and FMINNMQV insns use the default NaN as their identity
value for inactive source vector elements. We open-coded this in
sve_helper.c, hoping to avoid a function call. However, this fails
to account for FPCR.AH=1 changing the default NaN value to set the
sign bit. Use a call to floatN_default_nan() to obtain this value.
Fixes: 1de7ecfc12d05 ("target/arm: Implement FADDQV, F{MIN, MAX}{NM}QV for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-10-peter.maydell@linaro.org
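A minimal sketch of the idea, using the softfloat helper the commit names; the
wrapper function is illustrative:
    #include "fpu/softfloat.h"

    /* Illustrative: take the identity value for inactive elements from the
     * helper, which honours FPCR.AH=1 setting the default NaN's sign bit,
     * rather than from a hard-coded quiet-NaN bit pattern. */
    static float64 reduce_identity_f64(float_status *status)
    {
        return float64_default_nan(status);
    }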
|
|
In the part of the SVE DO_REDUCE macro used by the SVE2p1 FMAXQV,
FMINQV, etc insns, we incorrectly applied the H() macro twice when
calculating an offset to add to the vn pointer. This has no effect
on little-endian hosts but on big-endian hosts the two invocations
will cancel each other out and we will access the wrong part of the
array.
The "s * 16" part of the expression is already aligned, so we only
need to use the H macro on the "e". Correct the macro usage.
Fixes: 1de7ecfc12d05 ("target/arm: Implement FADDQV, F{MIN, MAX}{NM}QV for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-9-peter.maydell@linaro.org
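An illustrative sketch of why the double application is harmless on
little-endian hosts but wrong on big-endian ones (the macro body here is only
an example of the XOR-style adjustment):
    /* Illustrative big-endian variant of an H() index-adjustment macro
     * (little-endian hosts define it as a no-op).  Because it is an XOR,
     * applying it twice cancels out:
     *     H4(H4(e)) == e
     * so the accidental double application still worked on little-endian
     * hosts but produced the unadjusted, wrong offset on big-endian ones. */
    #define H4(e)  ((e) ^ 1)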
|
|
When we implemented the FMAXQV and FMINQV insns we accidentally
inverted the sense of the FPCR.AH test, so we gave the AH=1 behaviour
when FPCR.AH was zero, and vice-versa. (The difference is limited to
handling of negative zero and NaN inputs.)
Fixes: 1de7ecfc12d05 ("target/arm: Implement FADDQV, F{MIN, MAX}{NM}QV for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20250718173032.2498900-8-peter.maydell@linaro.org
|
|
FEAT_SVE_B16B16 adds bfloat16 versions of the FMLA and FMLS insns in
the SVE floating-point multiply-add (indexed) insn group. Implement
these.
Fixes: 7b1613a1020d2942 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-7-peter.maydell@linaro.org
|
|
FEAT_SVE_B16B16 adds bfloat16 versions of the FMLA and FMLS insns in
the "SVE floating-point multiply-accumulate writing addend" group,
encoded as sz=0b00.
Fixes: 7b1613a1020d2942 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-6-peter.maydell@linaro.org
|
|
FEAT_SVE_B16B16 adds a bfloat16 version of the FMUL insn in the
floating-point multiply (indexed) instruction group. The encoding
is slightly bespoke; in our implementation we use MO_8 to indicate
bfloat16, as with the other B16B16 insns.
Fixes: 7b1613a1020d2942 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-5-peter.maydell@linaro.org
|
|
FEAT_SVE_B16B16 adds bfloat16 versions of the SVE floating point
(predicated) instructions, which are encoded via sz=0b00. Add the
BFMAX and BFMIN insns. These have separate behaviour for AH=1 and
AH=0; we have already implemented the AH=1 helper for the SME2
versions of these insns.
Fixes: 7b1613a1020d2942 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-4-peter.maydell@linaro.org
|
|
FEAT_SVE_B16B16 adds bfloat16 versions of the SVE floating point
(predicated) instructions, which are encoded via sz=0b00.
Add BFADD, BFSUB, BFMUL, BFMAXNM, BFMINNM; these are all the insns
in this group which do not change behaviour for AH=1.
We will deal with BFMAX/BFMIN (which do have different AH=1
behaviour) in a following commit.
Fixes: 7b1613a1020d2942 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-3-peter.maydell@linaro.org
|
|
FEAT_SVE_B16B16 adds bfloat16 versions of the SVE floating point
(unpredicated) instructions, which are encoded via sz==0b00.
Fixes: 7b1613a1020d2942 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-2-peter.maydell@linaro.org
|
|
Commit ad8e0e8a0088 removed the "======" underlining the file title,
which broke documentation rendering. Add it back.
Fixes: ad8e0e8a0088 ("docs: add support for gb200-bmc")
Cc: Ed Tanous <etanous@nvidia.com>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Ed Tanous <etanous@nvidia.com>
Message-id: 20250715061904.97540-1-clg@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Coverity Scan noted an unusual pattern in the
MAX78000 aes device, with duplicated calls to
set_decrypt. This commit adds a comment noting
why the implementation is correct.
Signed-off-by: Jackson Donaldson <jcksn@duck.com>
Message-id: 20250716002622.84685-1-jcksn@duck.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
In commit b0438861efe ("host-utils: Avoid using __builtin_subcll on
buggy versions of Apple Clang") we added a workaround for a bug in
Apple Clang 14 where its __builtin_subcll() implementation was wrong.
This bug was only present in Apple Clang 14, not in upstream clang,
and is not present in Apple Clang versions 15 and newer.
Since commit 4e035201 we have required at least Apple Clang 15, so we
no longer build with the buggy versions. We can therefore drop the
workaround. This is effectively a revert of b0438861efe.
This should not be backported to stable branches, which may still
need to support Apple Clang 14.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3030
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20250714145033.1908788-1-peter.maydell@linaro.org
|
|
If you try to build aarch64-linux-user with clang and --enable-debug then it
fails to compile:
ld: libqemu-aarch64-linux-user.a.p/target_arm_cpu64.c.o: in function `cpu_arm_set_sve':
../../target/arm/cpu64.c:321:(.text+0x1254): undefined reference to `kvm_arm_sve_supported'
This is a regression introduced in commit f86d4220, which switched
the kvm-stub.c file away from being built for all arm targets to only
being built for system emulation binaries. It doesn't affect gcc,
presumably because even at -O0 gcc folds away the always-false
kvm_enabled() condition but clang does not.
We would prefer not to build kvm-stub.c once for usermode and once
for system-emulation binaries, and we can't build it just once for
both because it includes cpu.h. So instead provide always-false
versions of the five functions that are valid to call without KVM
support in kvm_arm.h.
Fixes: f86d42205c2eba ("target/arm/meson: accelerator files are not needed in user mode")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3033
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-id: 20250714135152.1896214-1-peter.maydell@linaro.org
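A minimal sketch of the approach, using one of the functions named in the link
error; the guard and exact placement in kvm_arm.h are illustrative:
    /* Illustrative: in kvm_arm.h, provide an always-false inline when KVM
     * support is not built, so user-mode binaries link even at -O0 where
     * clang does not fold away the kvm_enabled() condition. */
    #ifndef CONFIG_KVM
    static inline bool kvm_arm_sve_supported(void)
    {
        return false;
    }
    #endif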
|
|
Coverity points out that the ivshmem-pci code has some error handling
cases where it incorrectly tries to use an invalid file descriptor.
These generally happen because ivshmem_recv_msg() calls
qemu_chr_fe_get_msgfd(), which might return -1, but the code in
process_msg() generally assumes that the file descriptor was provided
when it was supposed to be. In particular:
* the error case in process_msg() only needs to close the fd
if one was provided
* process_msg_shmem() should fail if no fd was provided
Coverity: CID 1508726
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 20250711145012.1521936-1-peter.maydell@linaro.org
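A minimal sketch of the first point (names are illustrative, not the actual
ivshmem code); the second point amounts to an explicit fd < 0 check in
process_msg_shmem() before the fd is used:
    #include <unistd.h>

    /* Illustrative error-path cleanup: only close a descriptor that was
     * actually provided (qemu_chr_fe_get_msgfd() returns -1 otherwise). */
    static void cleanup_msg_fd(int fd)
    {
        if (fd >= 0) {
            close(fd);
        }
    }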
|
|
We don't implement the Debug Communications Channel (DCC), but
we do attempt to provide dummy versions of its system registers
so that software that tries to access them doesn't fall over.
However, we got the tx/rx register definitions wrong. These
should be:
AArch32:
  DBGDTRTX  p14 0 c0 c5 0 (on writes)
  DBGDTRRX  p14 0 c0 c5 0 (on reads)
AArch64:
  DBGDTRTX_EL0  2 3 0 5 0 (on writes)
  DBGDTRRX_EL0  2 3 0 5 0 (on reads)
  DBGDTR_EL0    2 3 0 4 0 (reads and writes)
where DBGDTRTX and DBGDTRRX are effectively different names for the
same 32-bit register, which has tx behaviour on writes and rx
behaviour on reads. The AArch64-only DBGDTR_EL0 is a 64-bit wide
register whose top and bottom halves map to the DBGDTRRX and DBGDTRTX
registers.
Currently we have just one cpreg struct, which:
* calls itself DBGDTR_EL0
* uses the DBGDTRTX_EL0/DBGDTRRX_EL0 encoding
* is marked as ARM_CP_STATE_BOTH but has the wrong opc1
value for AArch32
* is implemented as RAZ/WI
Correct the encoding so:
* we name the DBGDTRTX/DBGDTRRX register correctly
* we split it into AA64 and AA32 versions so we can get the
AA32 encoding right
* we implement DBGDTR_EL0 at its correct encoding
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2986
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250708141049.778361-1-peter.maydell@linaro.org
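A hedged sketch of what the corrected definitions could look like in
ARMCPRegInfo terms, using the encodings listed above; the access/type flags and
RAZ/WI handling are illustrative:
    static const ARMCPRegInfo dbgdtr_dummy_reginfo[] = {
        /* AArch32 DBGDTRTX/DBGDTRRX: one 32-bit register, tx on writes,
         * rx on reads */
        { .name = "DBGDTRTX_DBGDTRRX", .state = ARM_CP_STATE_AA32,
          .cp = 14, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 0,
          .access = PL0_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
        /* AArch64 DBGDTRTX_EL0/DBGDTRRX_EL0 at the equivalent encoding */
        { .name = "DBGDTRTX_DBGDTRRX_EL0", .state = ARM_CP_STATE_AA64,
          .opc0 = 2, .opc1 = 3, .crn = 0, .crm = 5, .opc2 = 0,
          .access = PL0_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
        /* AArch64-only 64-bit DBGDTR_EL0 */
        { .name = "DBGDTR_EL0", .state = ARM_CP_STATE_AA64,
          .opc0 = 2, .opc1 = 3, .crn = 0, .crm = 4, .opc2 = 0,
          .access = PL0_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
    };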
|
|
We don't synchronize vcpu registers from the hardware accelerator (e.g., by
cpu_synchronize_state()) in the Dabort handler, so env->pc points to the
instruction which has nothing to do with the Dabort at all.
And it doesn't seem to make much sense to log PC in every Dabort handler,
let's just remove it from this trace event.
Signed-off-by: Zenghui Yu <zenghui.yu@linux.dev>
Reviewed-by: Mads Ynddal <mads@ynddal.dk>
Message-id: 20250713154719.4248-1-zenghui.yu@linux.dev
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
When pushing a context, the lower-level context becomes valid if it
had V=1, and so on. Iterate lower level contexts and send them
pending interrupts if they become enabled.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-51-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
This is needed by the next patch which will re-send on all lower
rings when pushing a context.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-50-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
Implement the phys (aka hard) VP push. PowerVM uses this operation.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-49-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
Implement set LGS for the POOL ring.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-48-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
xive2 must take into account redistribution of group interrupts if
the VP directed priority exceeds the group interrupt priority after
this operation. The xive1 code is not group aware so implement this
for xive2.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-47-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
When pushing a context, any presented group interrupt should be
redistributed before processing pending interrupts to present
highest priority.
This can occur when pushing the POOL ring when the valid PHYS
ring has a group interrupt presented, because they share signal
registers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-46-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
Implement pool context push TIMA op.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-45-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
Certain TIMA operations should only be performed when a ring is valid,
others when the ring is invalid, and they are considered undefined if
used incorrectly. Add checks for this condition.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-44-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
After pulling the pool context, if a pool irq had been presented and
was cleared in the process, there could be a pending irq in phys that
should be presented. Process the phys irq ring after pulling pool ring
to catch this case and avoid losing irqs.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-43-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
When the pool context is pulled, the shared pool/phys signal is
reset, which loses the qemu irq if a phys interrupt was presented.
Only reset the signal if a pool irq was presented.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-42-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
In preparation to implement POOL context push, add support for POOL
NVP context save/restore.
The NVP p bit is defined in the spec as follows:
If TRUE, the CPPR of a Pool VP in the NVP is updated during store of
the context with the CPPR of the Hard context it was running under.
It's not clear whether non-pool VPs always or never get CPPR updated.
Before this patch, OS contexts always save CPPR, so we will assume that
is the behaviour.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-41-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
Add some assertions to try to ensure presented group interrupts do
not get lost without being redistributed, if they become precluded
by CPPR or preempted by a higher priority interrupt.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-40-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
When CPPR priority is decreased, pending interrupts do not need to be
re-checked if one is already presented because by definition that will
be the highest priority.
This prevents a presented group interrupt from being lost.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-39-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
OS-push operation must re-present pending interrupts. Use the
newly created xive2_tctx_process_pending() function instead of
duplicating the logic.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-38-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
The second part of the set CPPR operation is to process (or re-present)
any pending interrupts after CPPR is adjusted.
Split this presentation processing out into a standalone function that
can be used in other places.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-37-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
Have xive_tctx_notify() also set the new PIPR value and rename it to
xive_tctx_pipr_set(). This can replace the last xive_tctx_pipr_update()
caller, because that caller does not need IPB updated (it already sets
IPB itself).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-36-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
The relationship between an interrupt signaled in the TIMA and the QEMU
irq line to the processor should be 1:1, so they should be raised and
lowered together, and "just in case" lowering should be avoided (it
could mask other problems).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-35-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
The tctx "signaling" registers (PIPR, CPPR, NSR) raise an interrupt on
the target CPU thread. The POOL and PHYS rings both raise hypervisor
interrupts, so they both share one set of signaling registers in the
PHYS ring. The PHYS NSR register contains a field that indicates which
ring has presented the interrupt being signaled to the CPU.
This sharing results in all the "alt_regs" throughout the code. alt_regs
is not very descriptive, and worse is that the name is used for
conversions in both directions, i.e., to find the presenting ring from
the signaling ring, and the signaling ring from the presenting ring.
Instead of alt_regs, use the names sig_regs and sig_ring, and regs and
ring for the presenting ring being worked on. Add a helper function to
get the sig_regs, and add some asserts to ensure the POOL regs are
never used to signal interrupts.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-34-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
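A minimal sketch of the helper described above; the function name follows the
commit text, the ring offsets are QEMU's existing TIMA constants, and the body
is illustrative:
    /* The POOL ring has no signaling registers of its own; hypervisor
     * interrupts are signaled through the PHYS ring's CPPR/PIPR/NSR. */
    static uint8_t *xive_tctx_signal_regs(XiveTCTX *tctx, uint8_t ring)
    {
        if (ring == TM_QW2_HV_POOL) {
            return &tctx->regs[TM_QW3_HV_PHYS];
        }
        return &tctx->regs[ring];
    }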
|
|
Further split xive_tctx_pipr_update() by extracting a new function that
re-computes the PIPR from IPB. This is generally only used with XIVE1,
because group interrupts require more logic.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-33-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
xive_tctx_pipr_present() as implemented with xive_tctx_pipr_update()
causes a VP-directed (group==0) interrupt to be presented in PIPR and NSR
despite being a lower priority than the currently presented group
interrupt.
This must not happen. The IPB bit should record the low priority VP
interrupt, but PIPR and NSR must not present the lower priority
interrupt.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-32-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
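A sketch of the rule stated above (in XIVE a lower priority value is more
favored); the helper and register names follow QEMU's XIVE code but should be
read as illustrative:
    /* Illustrative: record the VP (group==0) interrupt in IPB, but only
     * update what is presented (PIPR, and hence NSR) when it is more
     * favored (numerically lower) than the currently presented priority. */
    static void present_vp_irq(uint8_t *regs, uint8_t priority)
    {
        regs[TM_IPB] |= xive_priority_to_ipb(priority);
        if (priority < regs[TM_PIPR]) {
            regs[TM_PIPR] = priority;
            /* ... update NSR and raise the CPU irq line ... */
        }
    }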
|
|
xive_tctx_pipr_update() is used for multiple things. In an effort
to make things simpler and less overloaded, split out the function
that is used to present a new interrupt to the tctx.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-31-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
A group interrupt that gets preempted by a higher priority interrupt
delivery must be redistributed otherwise it would get lost.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-30-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
Have the match_nvt method only perform a TCTX match and not present
the interrupt; the caller now presents it. This has no functional change, but
allows for more complicated presentation logic after matching.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-29-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
When disabling (pulling) an xive interrupt context, we need
to redistribute any active group interrupts to other threads
that can handle the interrupt if possible. This support had
already been added for the OS context but had not yet been
added to the pool or physical context.
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-28-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
Add support for redistributing a presented group interrupt if it
is precluded as a result of changing the CPPR value. Without this,
group interrupts can be lost.
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-27-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
Booting AIX in a PowerVM partition requires the use of the "Acknowledge
O/S Interrupt to even O/S reporting line" special operation provided by
the IBM XIVE interrupt controller. This operation is invoked by writing
a byte (data is irrelevant) to offset 0xC10 of the Thread Interrupt
Management Area (TIMA). It can be used by software to notify the XIVE
logic that the interrupt was received.
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-26-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
Change pregs to pool_regs, for clarity.
[npiggin: split from larger patch]
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-25-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|
|
Add more tracing around notification, redistribution, and escalation.
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-24-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
|