|
The commit 42139bb9b7dc ("lib: sbi: Replace sbi_hart_pmp_xyz() and
sbi_hart_map/unmap_addr()") changed the logic to call
sbi_hart_protection_configure(), which returns SBI_EINVAL when no
protection mechanism exists. On systems without protection this is a
valid configuration, so do not hang when the system does not have any
HART protection.
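A minimal sketch of the intended handling, assuming the caller
previously hanged on any error (the exact call site is not from the
patch):
  rc = sbi_hart_protection_configure(scratch);
  if (rc && rc != SBI_EINVAL)  /* SBI_EINVAL: no HART protection present */
      sbi_hart_hang();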
Fixes: 42139bb9b7dc ("lib: sbi: Replace sbi_hart_pmp_xyz() and sbi_hart_map/unmap_addr()")
Signed-off-by: Michal Simek <michal.simek@amd.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/bb8641e5f82654e3989537cea85f165f67a7044e.1767801896.git.michal.simek@amd.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
The "RISC-V C API" [1] defines architecture extension test macros
says naming rule for the test macros is __riscv_<ext_name>, where
<ext_name> is all lower-case.
Three extensions dealing with atomics implementation are:
"zaamo" consists of AMO instructions,
"zalrsc" - LR/SC,
"a" extension means both "zaamo" and "zalrsc"
Built-in test macros are __riscv_a, __riscv_zaamo and __riscv_zalrsc.
Alternative to the __riscv_a macro name, __riscv_atomic, is deprecated.
Use correct test macro __riscv_zaamo for the AMO variant of atomics.
It used to be __riscv_atomic that is both deprecated and incorrect
because it tests for the "a" extension; i.e. both "zaamo" and "zalrsc"
If ISA enables only zaamo but not zalrsc, code as it was would not compile.
Older toolchains may have neither __riscv_zaamo nor __riscv_zalrsc, so
query __riscv_atomic - it should be treated as both __riscv_zaamo and
__riscv_zalrsc, in all present cases __riscv_zaamo is more favorable
so take is as alternative for __riscv_zaamo
[1] https://github.com/riscv-non-isa/riscv-c-api-doc
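A minimal preprocessor sketch of the resulting check (macro names per
the C API document [1]; the surrounding code is assumed):
  #if defined(__riscv_zaamo) || defined(__riscv_atomic)
  /* AMO instructions (amoadd, amoswap, ...) are available */
  #else
  #error "Zaamo (or A) extension is required for AMO-based atomics"
  #endif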
Signed-off-by: Vladimir Kondratiev <vladimir.kondratiev@mobileye.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251228073321.1533844-1-vladimir.kondratiev@mobileye.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Currently, OpenSBI returns SBI_ERR_ALREADY_STARTED when attempting to
start a HW counter that is already started and SBI_ERR_ALREADY_STOPPED
when attempting to stop a HW counter that is already stopped. However,
this is not yet implemented for FW counters.
Add the necessary checks to return the same error codes when attempting
the same actions on FW counters.
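A minimal sketch of the added FW-counter check, with
fw_counter_started() as a hypothetical helper over the per-hart PMU
state:
  if (fw_counter_started(phs, cidx))
      return SBI_EALREADY_STARTED;  /* mirror the HW counter behaviour */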
Signed-off-by: James Raphael Tiovalen <jamestiotio@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251213104146.422972-1-jamestiotio@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
There are two locations where memory regions are moved in bulk, but
this is implemented as a region-by-region move or even swap. Use a
more efficient approach. Note that the last entry, dom->regions[count],
always exists and is empty, so copying it replaces clear_region().
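A sketch of the bulk move using sbi_memmove() (indices illustrative):
  /* shift regions [idx+1 .. count] down by one in a single call;
     entry [count] is always empty, so copying it also clears the tail */
  sbi_memmove(&dom->regions[idx], &dom->regions[idx + 1],
              (count - idx) * sizeof(dom->regions[0]));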
Signed-off-by: Vladimir Kondratiev <vladimir.kondratiev@mobileye.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20251208125617.2557594-1-vladimir.kondratiev@mobileye.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
An expected trap must always clear MPRV, but currently it doesn't.
This is a security issue: if firmware was doing loads/stores with
MPRV=1 and an expected trap occurs, OpenSBI continues to run with
MPRV=1. The security impact is a DoS where OpenSBI just keeps
trapping.
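A minimal sketch of the fix, assuming the expected-trap path patches
the saved mstatus in the trap context:
  /* never resume firmware with MPRV still set after an expected trap */
  regs->mstatus &= ~MSTATUS_MPRV;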
Signed-off-by: Deepak Gupta <debug@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251124220339.3695940-1-debug@rivosinc.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
By default, OpenSBI itself is covered by two memregions for its RX/RW
sections. This is required on platforms with Smepmp to enforce proper
permissions in M-mode. Note: M-mode-only regions can't have RWX
permissions with Smepmp. Platforms with traditional PMP won't be able
to benefit from the split, as both regions are effectively RWX in
M-mode, but it is usually harmless to do so. Now we provide these
platforms with an option to disable this logic. It saves one PMP
entry, which makes a real difference for platforms short of PMP
entries.
Note: A platform requesting a single OpenSBI memregion must be using
the traditional (old) PMP. We expect the platform code to do the
right thing.
Signed-off-by: Bo Gan <ganboing@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251218104243.562667-5-ganboing@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
The helper function is renamed to sbi_domain_memregion_is_subset()
and made public in the header file.
Also add a convenience helper, sbi_domain_for_each_memregion_idx().
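A hypothetical usage sketch of the new iteration helper (the macro
arguments and the outer_reg variable are assumed):
  int i;
  sbi_domain_for_each_memregion_idx(dom, i) {
      const struct sbi_domain_memregion *reg = &dom->regions[i];
      if (sbi_domain_memregion_is_subset(reg, outer_reg))
          sbi_printf("region %d is covered by the outer region\n", i);
  }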
Signed-off-by: Bo Gan <ganboing@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251218104243.562667-4-ganboing@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Factor out the logic in sbi_hart_oldpmp_configure() into a new
function sbi_domain_get_oldpmp_flags(), analogous to
sbi_domain_get_smepmp_flags(). Platform-specific hart-protection
implementations can now leverage it.
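A sketch of how a platform hart-protection implementation might use
it, with pmp_set() from OpenSBI's riscv_asm.h (the surrounding flow is
assumed):
  unsigned long pmp_flags = sbi_domain_get_oldpmp_flags(reg);

  /* program one PMP entry from a domain memregion */
  pmp_set(pmp_idx, pmp_flags, reg->base, reg->order);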
Signed-off-by: Bo Gan <ganboing@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251218104243.562667-3-ganboing@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
sbi_hart_pmp_fence() can now be utilized by other hart-protection
implementations.
Signed-off-by: Bo Gan <ganboing@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251218104243.562667-2-ganboing@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
A clarification has been added to the RISC-V privileged specification
regarding synchronization requirements when xenvcfg.ADUE changes.
(Refer to the following commit in the RISC-V Privileged ISA spec:
https://github.com/riscv/riscv-isa-manual/commit/4e540263db8ae3a27d132a1752cc0fad222facd8)
As per these requirements, the SBI FWFT ADUE implementation must
flush TLBs upon changes in ADUE state on a hart.
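A minimal sketch of the resulting FWFT handler behaviour (ENVCFG_ADUE
per OpenSBI's encoding header; the RV32 menvcfgh half and the
surrounding code are assumed):
  if (enable)
      csr_set(CSR_MENVCFG, ENVCFG_ADUE);
  else
      csr_clear(CSR_MENVCFG, ENVCFG_ADUE);
  __sbi_sfence_vma_all();  /* TLB flush required when xenvcfg.ADUE changes */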
Signed-off-by: Andrew Waterman <andrew@sifive.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251127112121.334023-3-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
The __sbi_sfence_vma_all() function can be shared by different parts
of OpenSBI, so rename __tlb_flush_all() to __sbi_sfence_vma_all() and
make it a global function.
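A minimal sketch of the renamed global helper (the exact body is
assumed):
  void __sbi_sfence_vma_all(void)
  {
      __asm__ __volatile__("sfence.vma" : : : "memory");
  }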
Signed-off-by: Andrew Waterman <andrew@sifive.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251127112121.334023-2-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
The PMP programming is a significant part of sbi_hart.c, so factor it
out into separate sources sbi_hart_pmp.c and sbi_hart_pmp.h for better
maintainability.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251209135235.423391-6-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
The sbi_hart_pmp_xyz() and sbi_hart_map/unmap_addr() functions can
now be replaced by various sbi_hart_protection_xyz() functions.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251209135235.423391-5-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Implement PMP and ePMP based hart protection abstraction so
that usage of sbi_hart_pmp_xyz() functions can be replaced
with sbi_hart_protection_xyz() functions.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251209135235.423391-4-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Currently, PMP and ePMP are the only hart protection mechanisms
available in OpenSBI but new protection mechanisms (such as Smmpt)
will be added in the near future.
To allow multiple hart protection mechanisms, introduce hart
protection abstraction and related APIs.
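A hypothetical shape for such an abstraction (names and fields are
illustrative, not taken from the patch):
  struct sbi_hart_protection {
      const char *name;
      int (*configure)(struct sbi_scratch *scratch);
      void (*unconfigure)(struct sbi_scratch *scratch);
  };

  int sbi_hart_protection_register(const struct sbi_hart_protection *prot);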
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251209135235.423391-3-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Currently, unconfiguring the PMP is implemented directly inside
switch_to_next_domain_context(), whereas the rest of the PMP
programming is done via functions implemented in sbi_hart.c.
Introduce a separate sbi_hart_pmp_unconfigure() function so that
all PMP programming is in one place.
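A minimal sketch of the new function, assuming it simply disables
every PMP entry:
  void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
  {
      unsigned int i, pmp_count = sbi_hart_pmp_count(scratch);

      for (i = 0; i < pmp_count; i++)
          pmp_disable(i);
  }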
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251209135235.423391-2-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Currently, platforms do not provide complete memory region information
to OpenSBI. Generally, memory regions are only created for the few MMIO
devices that have M-mode drivers. As a result, most MMIO devices fall
inside the default S-mode RWX memory region, which does _not_ have the
MMIO flag set.
In fact, OpenSBI relies on certain S-mode MMIO devices being inside
non-MMIO memory regions. Both fdt_domain_based_fixup_one() and
mpxy_rpmi_sysmis_xfer() call sbi_domain_check_addr() with the MMIO flag
cleared, and that function currently requires an exact flag match. Those
access checks will thus erroneously fail if the platform creates memory
regions with the correct flags for these devices (or for a larger MMIO
region containing these devices).
We should not ignore the MMIO flag entirely, because
sbi_domain_check_addr() is also used to check the permissions of S-mode
shared memory buffers, and S-mode should not be using MMIO device
addresses as memory buffers. But when checking if S-mode is allowed to
do MMIO accesses, we need to recognize that MMIO devices appear in
memory regions both with and without the MMIO flag set.
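A minimal sketch of the asymmetric check described above (flag name
per OpenSBI; the surrounding function body is assumed):
  if (mmio_access) {
      /* MMIO devices may appear in memory regions with or without the
         MMIO flag, so do not require the flag to match */
  } else if (reg->flags & SBI_DOMAIN_MEMREGION_MMIO) {
      /* S-mode shared memory buffers must not be MMIO addresses */
      return false;
  }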
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251121193808.1528050-2-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
The QoS Identifiers extension (Ssqosid) introduces the srmcfg register,
which configures a hart with two identifiers: a Resource Control ID
(RCID) and a Monitoring Counter ID (MCID). These identifiers accompany
each request issued by the hart to shared resource controllers.
If extension Smstateen is implemented together with Ssqosid, then
Ssqosid also requires the SRMCFG bit in mstateen0 to be implemented. If
mstateen0.SRMCFG is 0, attempts to access srmcfg in privilege modes less
privileged than M-mode raise an illegal-instruction exception. If
mstateen0.SRMCFG is 1 or if extension Smstateen is not implemented,
attempts to access srmcfg when V=1 raise a virtual-instruction exception.
This extension can be found in the RISC-V Instruction Set Manual:
https://github.com/riscv/riscv-isa-manual
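A minimal sketch of the context switch noted in the v2/v3 changes
below (CSR and extension names from the commit; the surrounding code
is assumed):
  if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSQOSID))
      prev->srmcfg = csr_swap(CSR_SRMCFG, next->srmcfg);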
Changes in v5:
- Remove SBI_HART_EXT_SSQOSID dependency SBI_HART_PRIV_VER_1_12
Changes in v4:
- Remove extraneous parentheses around SMSTATEEN0_SRMCFG
Changes in v3:
- Check SBI_HART_EXT_SSQOSID when swapping SRMCFG
Changes in v2:
- Remove trap-n-detect
- Context switch CSR_SRMCFG
Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
Reviewed-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20251114115722.1831-1-cp0613@linux.alibaba.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Calculate the number of used memory regions using a helper function
when needed.
Signed-off-by: Vladimir Kondratiev <vladimir.kondratiev@mobileye.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251111104327.1170919-3-vladimir.kondratiev@mobileye.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
In sanitize_domain(), the code that checks for the case when one
memory region is covered by another was never executed. Quote:
  /* Sort the memory regions */
  for (i = 0; i < (count - 1); i++) {
  <snip>
  }
  /* Remove covered regions */
  while (i < (count - 1)) {
Here the "while" loop is never executed because the condition
"i < (count - 1)" is always false right after the "for" loop above.
In addition, when clearing a region, "root_memregs_count" should be
adjusted as well; otherwise the code that appends a memory region in
root_add_memregion() will use the wrong position:
  /* Append the memregion to root memregions */
  nreg = &root.regions[root_memregs_count];
An empty entry would be created in the middle of the regions array,
new regions would be added after this empty entry, and the sanitizing
code would stop when it reaches the empty entry.
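One possible shape of the fix, with is_region_covered() and
remove_region() as hypothetical helpers (the actual patch may differ;
note the index is reset after sorting and the count is adjusted when a
region is dropped):
  /* Remove covered regions: rescan from the start after sorting */
  i = 0;
  while (i < (count - 1)) {
      if (is_region_covered(&regions[i], &regions[i + 1])) {
          remove_region(regions, &count, i);  /* also decrements count */
      } else {
          i++;
      }
  }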
Fixes: 3b03cdd60ce5 ("lib: sbi: Add regions merging when sanitizing domain region")
Signed-off-by: Vladimir Kondratiev <vladimir.kondratiev@mobileye.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251111104327.1170919-2-vladimir.kondratiev@mobileye.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Before this patch, sbi_pmu_ctr_start() ignored the flags received in
sbi_pmu_ctr_cfg_match(), including the inhibit ones. To prevent this,
save the flags together with event_data and use them both in
sbi_pmu_ctr_start().
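A minimal sketch of the idea, assuming a new per-counter flags array
alongside the existing event bookkeeping:
  /* in sbi_pmu_ctr_cfg_match(): remember the configure-time flags */
  phs->active_flags[ctr_idx] = flags;    /* hypothetical new field */

  /* in sbi_pmu_ctr_start(): honour the saved inhibit flags */
  start_flags |= phs->active_flags[ctr_idx];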
Fixes: 1db95da2997b ("lib: sbi: sbi_pmu: fixed hw counters start for hart")
Signed-off-by: Shifrin Dmitry <dmitry.shifrin@syntacore.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251110113140.80561-1-dmitry.shifrin@syntacore.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
When Smepmp is enabled, clearing firmware PMP entries during a domain
context switch can temporarily revoke access to OpenSBI's own code and
data, leading to faults.
Keep firmware PMP entries enabled across switches so firmware regions
remain accessible and executable.
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-9-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Add a fw_smepmp_ids bitmap to track PMP entries that protect firmware
regions. This allows us to preserve these critical entries across
domain transitions and to detect inconsistent firmware entry
allocation.
Also add an sbi_hart_smepmp_is_fw_region() helper function to query
whether a given Smepmp entry protects firmware regions.
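A minimal sketch of the query helper, assuming the bitmap lives in the
per-hart feature state (the accessor is hypothetical):
  bool sbi_hart_smepmp_is_fw_region(struct sbi_scratch *scratch,
                                    unsigned int entry)
  {
      struct hart_features *hf = get_hart_features(scratch);  /* assumed */

      return hf->fw_smepmp_ids[entry / BITS_PER_LONG] &
             (1UL << (entry % BITS_PER_LONG));
  }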
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-8-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
During domain context switches, all PMP entries are reconfigured,
which can clear firmware access permissions, causing M-mode access
faults under Smepmp.
Sort domain regions to place firmware regions first, ensuring
consistent firmware PMP entries so they won't be revoked during
domain context switches.
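A hypothetical comparator for such a sort (SBI_DOMAIN_MEMREGION_FW is
from the related patch in this series; the comparator itself is
illustrative):
  static int memregion_cmp(const struct sbi_domain_memregion *a,
                           const struct sbi_domain_memregion *b)
  {
      bool a_fw = a->flags & SBI_DOMAIN_MEMREGION_FW;
      bool b_fw = b->flags & SBI_DOMAIN_MEMREGION_FW;

      if (a_fw != b_fw)
          return a_fw ? -1 : 1;   /* firmware regions sort first */
      return 0;
  }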
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-7-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Add a new memregion flag, SBI_DOMAIN_MEMREGION_FW, and use it to mark
the OpenSBI code and data regions.
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-6-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Previously, when memory regions exceeded the available PMP entries,
some regions were silently ignored. If the last entry, which covers
the full 64-bit address space, is not added to a domain, the
next-stage S-mode software won't have permission to access and fetch
instructions from its memory. So return early with an error message
to catch such situations.
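A minimal sketch of the early exit (message wording and error code
assumed):
  if (nregions > pmp_count) {
      sbi_printf("%s: %u PMP entries are not enough for %u memory regions\n",
                 __func__, pmp_count, nregions);
      return SBI_ENOSPC;
  }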
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-5-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
The reg->flags field is encoded with 6 bits to specify RWX permissions
for M-mode and S-/U-mode. However, only 16 of the possible encodings
are valid with Smepmp. Add a warning message when an unsupported
permission encoding is detected.
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-4-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
According to the RISC-V Privileged Specification, Smepmp regions that
grant no access in any privilege mode are valid. Allow such regions to
be specified.
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-3-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Move sbi_hart_get_smepmp_flags() from sbi_hart.c to sbi_domain.c and
rename it to sbi_domain_get_smepmp_flags() to better reflect its
purpose of converting domain memory region flags to PMP configuration.
Also remove the unused parameters (scratch and dom).
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-2-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
The last core that performs the system suspend is responsible for
restoring the system after wakeup. Add a system_resume callback for
restoring the system from suspend.
Suggested-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-11-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
A platform may contain multiple IPI devices. In certain use cases,
such as power management, it may be necessary to send an IPI through a
specific device to wake up a CPU. For example, if an IMSIC is powered
down and reset, the core cannot receive IPIs from it, so the wake-up must
instead be triggered through the CLINT.
Suggested-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-8-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Use the ISA string "xsfcease" to detect support for the custom
instruction "CEASE".
Reviewed-by: Cyan Yang <cyan.yang@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-6-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Use the ISA string "xsfcflushdlone" to detect support for the SiFive
L1D cache flush custom instruction.
Reviewed-by: Cyan Yang <cyan.yang@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-5-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Previously, in the sbi_pmu_ctr_cfg_match() function, ctr_idx was used
immediately after the pmu_ctr_find_fw() or pmu_ctr_find_hw() calls. In
the first case the array index was (ctr_idx - num_hw_ctrs), in the
second it was ctr_idx. However, pmu_ctr_find_fw() and pmu_ctr_find_hw()
can return a negative value, in which case writing to arrays with such
indexes would corrupt the sbi_pmu_hart_state structure.
To avoid this situation, add a direct check of the ctr_idx value.
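A minimal sketch of the added check (placement assumed):
  ctr_idx = pmu_ctr_find_hw(phs, cbase, cmask, flags, event_idx, event_data);
  if (ctr_idx < 0)
      return SBI_ENOTSUPP;  /* never index arrays with a negative value */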
Signed-off-by: Alexander Chuprunov <alexander.chuprunov@syntacore.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250918090706.2217603-4-alexander.chuprunov@syntacore.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Delete the spaces before the brace in pmu_ctr_start_fw() for correct
alignment.
Signed-off-by: Alexander Chuprunov <alexander.chuprunov@syntacore.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250918090706.2217603-3-alexander.chuprunov@syntacore.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Generally, hardware performance counters can only be started, stopped,
or configured from machine mode using the mcountinhibit and mhpmeventX
CSRs. In OpenSBI, only sbi_pmu_ctr_cfg_match() managed mhpmeventX. But
in the generic Linux driver, when perf starts, Linux calls both
sbi_pmu_ctr_cfg_match() and sbi_pmu_ctr_start(), while after a hart
suspend only the sbi_pmu_ctr_start() command is called through the SBI
interface. This does not work properly when the suspend state resets
the HPM registers. In order to keep counter integrity, modify
sbi_pmu_ctr_start(): first save hw_counters_data, then after a hart
suspend restore this value if the event is currently active.
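A minimal sketch of the save/restore idea (field and variable names
assumed):
  /* at configure time: remember what was programmed into mhpmeventX */
  phs->hw_counters_data[ctr_idx] = mhpmevent_val;

  /* in sbi_pmu_ctr_start(): re-program the CSR if the event is still
     active, since a deep suspend state may have reset it */
  if (event_active)
      csr_write_num(CSR_MHPMEVENT3 + ctr_idx - 3,
                    phs->hw_counters_data[ctr_idx]);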
Signed-off-by: Alexander Chuprunov <alexander.chuprunov@syntacore.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250918090706.2217603-2-alexander.chuprunov@syntacore.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Some of the platforms use platform specific CSR access functions for
configuring implementation specific CSRs (such as PMA registers).
Extend the common csr_read_num() and csr_write_num() to allow custom
CSRs so that platform specific CSR access functions are not needed.
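A hypothetical usage sketch (the CSR number below is an illustrative
custom-CSR encoding, not a real register):
  #define CSR_VENDOR_PMACFG0  0x7c0   /* illustrative custom M-mode CSR */

  csr_write_num(CSR_VENDOR_PMACFG0, pma_cfg);
  pma_cfg = csr_read_num(CSR_VENDOR_PMACFG0);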
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Link: https://lore.kernel.org/r/20250930153216.89853-1-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Add error handling code to sbi_domain_context_enter() to prevent the
target domain from being the same as the current domain.
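A minimal sketch of the added check (the exact error code is assumed):
  if (dom == sbi_domain_thishart_ptr())
      return SBI_EINVAL;   /* already running in the target domain */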
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250903044619.394019-4-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
When entering sbi_domain_context_enter for the first time, the hart
context may not be initialized. Add initialization code.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250903044619.394019-3-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Add error handling to switch_to_next_domain_context to ensure
legal input. When switching contexts, ensure that the target to
be switched is different from the current one.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250903044619.394019-2-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Simplify the code and remove preprocessor checks by treating menvcfg and
menvcfgh together as one 64-bit value.
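A sketch of the idea, assuming the xlen split is confined to one
helper so callers can pass a plain 64-bit value (the actual patch may
structure this differently):
  static void menvcfg_csr_write(uint64_t val)
  {
      csr_write(CSR_MENVCFG, (unsigned long)val);
  #if __riscv_xlen == 32
      csr_write(CSR_MENVCFGH, (unsigned long)(val >> 32));
  #endif
  }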
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250908055646.2391370-3-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
The only purpose of this function is to program the initial values of
mideleg and medeleg. However, both of these CSRs are now saved/restored
across non-retentive suspend, so the values from this function are
always overwritten by the restored values.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250908055646.2391370-2-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
OpenSBI updates mideleg when registering or unregistering the PMU SSE
event. The updated CSR value must be saved across non-retentive suspend,
or PMU SSE events will not be delivered after the hart is resumed.
Fixes: b31a0a24279d ("lib: sbi: pmu: Add SSE register/unregister() callbacks")
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250908055646.2391370-1-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
The platform-specific IPI init is not needed anymore because, with
IPI device ratings, multiple IPI devices can be registered in any
order as part of the platform-specific early init.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Nick Hu <nick.hu@sifive.com>
Link: https://lore.kernel.org/r/20250904052410.546818-3-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
A platform can have multiple IPI devices (such as ACLINT MSWI,
AIA IMSIC, etc). Currently, OpenSBI relies on the platform calling
the sbi_ipi_set_device() function in the correct order and prefers
the first available IPI device, which is fragile.
Instead of the above, introduce IPI device ratings and prefer the
highest-rated IPI device. This further allows extending
sbi_ipi_raw_clear() to clear all available IPI devices.
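A hypothetical registration sketch with a rating field (the field
name and value are assumed):
  static struct sbi_ipi_device aclint_mswi_ipi = {
      .name = "aclint-mswi",
      .rating = 100,               /* assumed: higher rating is preferred */
      .ipi_send = mswi_ipi_send,
      .ipi_clear = mswi_ipi_clear,
  };

  sbi_ipi_set_device(&aclint_mswi_ipi);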
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Tested-by: Nick Hu <nick.hu@sifive.com>
Link: https://lore.kernel.org/r/20250904052410.546818-2-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Now that the allocator cannot run out of nodes in the middle of an
allocation, the code can be simplified greatly. First it moves bytes
from the beginning and/or end of the node to new nodes in the free
list as necessary. These new nodes are inserted into the free list
in address order. Then it moves the original node to the used list.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250617032306.1494528-4-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
Currently the heap has a fixed housekeeping factor of 16, which means
1/16 of the heap is reserved for list nodes. But this is not enough when
there are many small allocations; in the worst case, 1/3 of the heap is
needed for list nodes (32 byte heap_node for each 64 byte allocation).
This has caused allocation failures on some platforms.
Let's avoid trying to guess the best ratio. Instead, allocate more nodes
as needed. To avoid recursion, the nodes are permanent allocations. So
to minimize fragmentation, allocate them in small batches from the end
of the last free space node. Bootstrap the free space list by embedding
one node in the heap control struct.
Some error paths are avoided because the nodes are allocated up front.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250617032306.1494528-3-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
sbi_dbtr_read_trig() returned the saved state of tdata{1-3} when it
should have returned the updated state read from the CSRs.
Update sbi_dbtr_read_trig() to return the updated state read from the
CSRs.
Signed-off-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Jesse Taube <jesse@rivosinc.com>
Link: https://lore.kernel.org/r/20250811152947.851208-1-jesse@rivosinc.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
The Linux kernel needs icount to implement hardware breakpoints.
Signed-off-by: Jesse Taube <jesse@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250724183120.1822667-1-jesse@rivosinc.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|
|
We do not need to iterate over all values in the loop; we can break
out of the loop as soon as we find a valid counter that is not started
yet.
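A minimal sketch of the early break (loop shape and helper names
assumed):
  for (i = 0; i < total_ctrs; i++) {
      if (ctr_is_valid(i) && !ctr_is_started(i)) {
          ctr_idx = i;
          break;   /* no need to scan the remaining counters */
      }
  }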
Signed-off-by: Manuel Hernández Méndez <maherme.dev@gmail.com>
Link: https://lore.kernel.org/r/20250721160712.8766-1-maherme.dev@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
|