|
MM cannot use dynamic PCDs, so move GetAcpiS3EnableFlag into the
DxeSmm code. This makes PiSmmCpuEntryCommon a common function for
both SMM and MM.
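(Illustrative sketch, not part of the original patch — the helper
shape assumed here keeps the dynamic-PCD access on the DxeSmm side:)

  BOOLEAN
  GetAcpiS3EnableFlag (
    VOID
    )
  {
    //
    // Dynamic PCD access is only legal in the DxeSmm build;
    // standalone MM receives the value another way (e.g. a HOB).
    //
    return PcdGetBool (PcdAcpiS3Enable);
  }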
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
MM cannot use gBS services, so move the SMM profile data allocation
into its own function. This makes InitSmmProfileInternal() a common
function for both SMM and MM.
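(Illustrative sketch, not from the patch itself — the point is that
the gBS call is isolated in a DxeSmm-only function; the name and
memory type below are assumptions:)

  EFI_STATUS
  AllocateSmmProfileData (
    IN  UINTN                 TotalSize,
    OUT EFI_PHYSICAL_ADDRESS  *Base
    )
  {
    //
    // gBS is only available in the DxeSmm flavor; the MM flavor
    // provides its own allocation path.
    //
    return gBS->AllocatePages (
                  AllocateAnyPages,
                  EfiReservedMemoryType,
                  EFI_SIZE_TO_PAGES (TotalSize),
                  Base
                  );
  }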
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
MM cannot use gRT services, so use the SMM Variable protocol to set
SmmProfileBase instead of gRT->SetVariable, for both SMM and MM.
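(Illustrative sketch only; the variable name and vendor GUID below
are placeholders, not taken from the patch:)

  EFI_SMM_VARIABLE_PROTOCOL  *SmmVariable;
  EFI_STATUS                 Status;

  Status = gMmst->MmLocateProtocol (
                    &gEfiSmmVariableProtocolGuid,
                    NULL,
                    (VOID **)&SmmVariable
                    );
  if (!EFI_ERROR (Status)) {
    //
    // Same signature as gRT->SetVariable, but callable from SMM/MM.
    //
    Status = SmmVariable->SmmSetVariable (
                            L"SmmProfileBase",    // placeholder name
                            &gEfiCallerIdGuid,    // placeholder GUID
                            EFI_VARIABLE_BOOTSERVICE_ACCESS,
                            sizeof (EFI_PHYSICAL_ADDRESS),
                            &mSmmProfileBase
                            );
  }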
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
MM cannot use the SMM Access Protocol, so get the SMRAM info from
the gEfiSmmSmramMemoryGuid HOB instead of via the SMM Access
Protocol, for both SMM and MM.
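(Illustrative sketch of the HOB-based lookup; error handling
trimmed:)

  EFI_HOB_GUID_TYPE               *GuidHob;
  EFI_SMRAM_HOB_DESCRIPTOR_BLOCK  *DescriptorBlock;

  GuidHob = GetFirstGuidHob (&gEfiSmmSmramMemoryGuid);
  ASSERT (GuidHob != NULL);
  DescriptorBlock = GET_GUID_HOB_DATA (GuidHob);
  //
  // DescriptorBlock->NumberOfSmmReservedRegions entries in
  // DescriptorBlock->Descriptor[] describe the SMRAM ranges that
  // were previously reported through the SMM Access Protocol.
  //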
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
Centralize the SMM non-MMRAM memory management code into
NonMmramMapDxeSmm.c. The file SmmCpuMemoryManagement.c is then
targeted for use by both SMM and MM in subsequent patches.
No functional impact.
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
Move common code into PiSmmCpuCommon.c to facilitate common usage
in both SMM and MM. PiSmmCpuCommon.c will be utilized for both
modes in subsequent patches.
No functional impact.
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
Rename the file PiSmmCpuDxeSmm.h to PiSmmCpuCommon.h to facilitate
common usage in both SMM and MM. The renamed PiSmmCpuCommon.h will
be utilized for both modes in subsequent patches.
No functional impact.
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
This patch updates gSmst to gMmst for common SMM and MM usage.
No functional impact.
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
This library provides an interface to request that non-MMRAM pages
be mapped/unblocked from inside the MM environment.
For MM modules that need to access areas outside of MMRAM, the
agents responsible for setting up these regions must use this API
to enable access to those memory areas from within MM. During the
IPL, when RestrictedMemoryAccess is enabled, this unblocked memory
is specifically used, via BuildResourceHob(), to create a resource
HOB, which allocates storage for the SMM-accessible DRAM (non-MMIO)
range.
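(A typical call shape, matching the MmUnblockMemoryLib interface in
MdePkg; the buffer and size names here are illustrative:)

  EFI_STATUS            Status;
  EFI_PHYSICAL_ADDRESS  CommBuffer;

  //
  // CommBuffer points to page-aligned memory outside MMRAM that MM
  // code must be able to access after ReadyToLock.
  //
  Status = MmUnblockMemoryRequest (
             CommBuffer,
             EFI_SIZE_TO_PAGES (BufferSize)
             );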
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
This HOB indicates to the x86 standalone MM whether S3 is enabled.
The value shall match PcdAcpiS3Enable.
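(Illustrative HOB payload shape; the type and field names are
assumptions, not the committed header:)

  typedef struct {
    BOOLEAN  AcpiS3Enable;   // shall mirror PcdAcpiS3Enable
  } MM_ACPI_S3_ENABLE;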
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Co-Authored-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
MM CPU Sync Config controls how the BSP synchronizes with the APs
in the x86 SMM environment.
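(Sketch of the kind of setting this configuration carries, mirroring
the existing SMM_CPU_SYNC_MODE values; the names are illustrative:)

  typedef enum {
    MmCpuSyncModeTradition,   // BSP waits for all APs to enter the SMI
    MmCpuSyncModeRelaxedAp,   // APs are not required to enter the SMI
    MmCpuSyncModeMax
  } MM_CPU_SYNC_MODE;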
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Co-authored-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Co-authored-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
Add the Unblock Region HOB, which defines the GUIDed HOB describing
a memory region to be unblocked in the MM environment.
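(Illustrative payload for such a GUIDed HOB; the field names are
assumptions, not the committed definition:)

  typedef struct {
    EFI_GUID              IdentifierGuid;   // who requested the unblock
    EFI_PHYSICAL_ADDRESS  PhysicalStart;    // page-aligned base address
    UINT64                NumberOfPages;    // length of the region
  } MM_UNBLOCK_REGION;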
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
Co-authored-by: Jiaxin Wu <jiaxin.wu@intel.com>
Co-authored-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
|
|
Commit 2f499c36db51980ad43fc6b578c7678a1720bd9c commented out the
RandomTestCase tests in CpuPageTableLibTestHost, but it left the
test suite registered without any tests. This causes a failure for
tools that check that tests are registered with test suites.
This patch comments out the test suite in addition to the tests
that were added to it.
Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
|
|
In this commit, we rename the IsAddressValid function to
IsSmmProfilePFAddressAbove4GValid and remove unneeded code logic
in it.
Currently, IsAddressValid is only used in the function
RestorePageTableAbove4G, where it identifies whether an SMM profile
PF address above 4G is inside mProtectionMemRange or not. So we can
remove the code logic related to the PcdCpuSmmProfileEnable FALSE
condition. The function name is also changed to be more detailed
and specific.
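(Condensed sketch of the remaining logic after the cleanup; the real
function also reports the NX attribute of the matched range:)

  BOOLEAN
  IsSmmProfilePFAddressAbove4GValid (
    IN  EFI_PHYSICAL_ADDRESS  Address,
    OUT BOOLEAN               *Nx
    )
  {
    UINTN  Index;

    for (Index = 0; Index < mProtectionMemRangeCount; Index++) {
      if ((Address >= mProtectionMemRange[Index].Range.Base) &&
          (Address < mProtectionMemRange[Index].Range.Top))
      {
        *Nx = mProtectionMemRange[Index].Nx;
        return mProtectionMemRange[Index].Present;
      }
    }

    *Nx = TRUE;
    return FALSE;
  }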
Signed-off-by: Dun Tan <dun.tan@intel.com>
|
|
Remove the unneeded call of SmmProfileMapPFAddress() in
SmiPFHandler if the SMM profile is not started.
Previously, before the SMM profile was started at ReadyToLock, the
SMM page table only covered [0, 4G], and an access to the range
above 4G would cause a PF. SmmProfileMapPFAddress was needed there
to map the PF address before the SMM profile was started.
Now we always create a full-mapping SMM page table in
SmmInitPageTable(). When the SMM profile is enabled, the SMM page
table covers [0, MaxSupportedPhysicalAddress] even before the
profile is started at ReadyToLock, so the case where an access to
the range above 4G causes a PF won't happen anymore.
Therefore we can remove the call of SmmProfileMapPFAddress before
the SMM profile is started.
Signed-off-by: Dun Tan <dun.tan@intel.com>
|
|
Rename SmiDefaultPFHandler to SmmProfileMapPFAddress and move the
implementation to SmmProfileArch.c, since it will only be used when
the SMM profile is enabled.
Signed-off-by: Dun Tan <dun.tan@intel.com>
|
|
In this commit, we remove the duplicate CpuDeadLoop in SmiPFHandler
for the case where mCpuSmmRestrictedMemoryAccess is TRUE.
With the last commit, we always call CpuDeadLoop if the SMM profile
is disabled. Then the CpuDeadLoop call for the condition
(mCpuSmmRestrictedMemoryAccess &&
IsSmmCommBufferForbiddenAddress (PFAddress)) is not needed anymore.
We also modify the IA32 code to align it with X64.
Signed-off-by: Dun Tan <dun.tan@intel.com>
|
|
Always call CpuDeadLoop() in SmiPFHandler if the SMM profile is
disabled.
Previously, when PcdCpuSmmRestrictedMemoryAccess was FALSE, the SMM
page table only covered [0, 4G]. When code accessed the range above
4G, SmiPFHandler would map the accessed not-present range as
present. Now that we always create a full-mapping page table, the
dynamic page table creation logic is only needed when the SMM
profile is enabled. So we use CpuDeadLoop() in SmiPFHandler to
cover all PF exceptions when the SMM profile is disabled.
Considering that [0, 4G] is always mapped in the SMM page table, we
also modify the IA32 SmiPFHandler code to align it with the X64
code.
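(Shape of the resulting handler logic; the flag name is
illustrative:)

  // Inside SmiPFHandler:
  if (!mSmmProfileEnabled) {
    //
    // Full mapping is in place, so any #PF is a real error now.
    //
    DumpCpuContext (InterruptType, SystemContext);
    CpuDeadLoop ();
  }
  //
  // Otherwise the SMM profile logs the access and maps the address.
  //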
Signed-off-by: Dun Tan <dun.tan@intel.com>
|
|
In this commit, we only set some special bits in the paging entry
content when the SMM profile is enabled.
Previously, we set the Pml4Entry sub-entries number and set the
IA32_PG_PMNT bit for the first 4 PdptEntry entries. This was to
make sure that the paging structures covering [0, 4G] would not be
reclaimed during dynamic page table creation.
In the last commit, we began always creating a full-mapping SMM
page table regardless of PcdCpuSmmRestrictedMemoryAccess. With that
change, we only need to dynamically create SMM page table entries
in the SMM PF handler when PcdCpuSmmProfileEnable is TRUE.
So the sub-entries number and the IA32_PG_PMNT bit in the paging
entry only need to be set when PcdCpuSmmProfileEnable is TRUE.
Signed-off-by: Dun Tan <dun.tan@intel.com>
|
|
In this commit, we always create a full-mapping SMM page table in
SmmInitPageTable, regardless of the value of
PcdCpuSmmRestrictedMemoryAccess.
Previously, when PcdCpuSmmRestrictedMemoryAccess was FALSE, only
[0, 4G] was mapped in the SMM page table in SmmInitPageTable. If
the range above 4G was accessed in SMM, SmiPFHandler would create
new paging entries for the accessed range. To simplify the code
logic, we now also create a full-mapping SMM page table in
SmmInitPageTable when PcdCpuSmmRestrictedMemoryAccess is FALSE.
Then we no longer need to dynamically create paging entries for the
range above 4G, except when the SMM profile is enabled.
The comparison of the SMM page table before and after the change
under the different configurations is listed here:
1. PcdCpuSmmRestrictedMemoryAccess is TRUE:
   No change.
2. PcdCpuSmmRestrictedMemoryAccess is FALSE and
   PcdCpuSmmProfileEnable is TRUE:
   Before: the SMM page table at ReadyToLock covers
     1. SMRAM range
     2. SMM profile range
     3. MMIO range below 4G
   After: the SMM page table at ReadyToLock covers
     1. SMRAM range
     2. SMM profile range
     3. MMIO range below 4G and above 4G
3. PcdCpuSmmRestrictedMemoryAccess is FALSE and
   PcdCpuSmmProfileEnable is FALSE:
   Before: the SMM page table at ReadyToLock covers [0, 4G].
   After: the SMM page table at ReadyToLock covers
   [0, MaxSupportedPhysicalAddress].
Signed-off-by: Dun Tan <dun.tan@intel.com>
|
|
This reverts commit bef0d333dc ("UefiCpuPkg/PiSmmCpuDxeSmm:
Fix system hang when SmmProfile enable").
Commit bef0d333dc modified the code logic in InitPaging() to fix a
code assert issue. The root cause of that issue is that we tried to
set only the NX attribute for the not-present MMIO range above 4G
when the SMM profile feature is enabled, which is not allowed by
CpuPageTableLib.
But after we always create a full-mapping initial SMM page table in
the next commit, this code assert issue won't happen anymore, since
the MMIO range above 4G will also be present in the SMM page table
before InitPaging().
Meanwhile, another issue was introduced by commit bef0d333dc:
In the entry point of the PiSmmCpuDxe driver, we set some pages in
the stack range as not-present in the SMM page table if
PcdCpuSmmStackGuard or PcdControlFlowEnforcementPropertyMask is
TRUE. But in commit bef0d333dc, all SMRAM ranges are set to present
in InitPaging() if the SMM profile is enabled. Then the stack guard
and shadow stack features no longer work.
So let's revert the commit "UefiCpuPkg/PiSmmCpuDxeSmm: Fix system
hang when SmmProfile enable".
Signed-off-by: Dun Tan <dun.tan@intel.com>
|
|
This patch avoids using a global variable in InitSmmS3Cr3. No
functional impact.
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
|
|
SmmS3Cr3 is only used by the S3Resume PEIM to switch the CPU from
32-bit to 64-bit mode. It should be the CR3 for the non-SMM
environment and is initialized by the InitSmmS3Cr3 function; there
is no need to set it to the SMM CR3.
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
|
|
This patch cleans up PcdCpuFeaturesInitOnS3Resume, since it has
been unused after commit 077760fe.
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
|
|
Iterate through the page table to find the appropriate page table
entry for page creation if one of the following cases is met:
1) StartBit > EndBit: The PageSize of the current entry is bigger
than the platform-specified PageSize granularity.
2) The IA32_PG_P bit is not 0 and the IA32_PG_PS bit is 0: the
current entry is present and is a non-leaf entry.
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
|
|
If 2MB-page mode is selected, the PDE entry might already exist,
and it is incorrect to assert that it does not exist. See the case
analysis below (a similar case applies if the address exceeds 4G):
Assume the default page table has covered the following 6M range:
[0000000000001000, 0000000000601000)
Then, with the PageTableMap API, the following page table entries
will be created if 1G-page or 2M-page mode is selected:
[0000000000001000, 0000000000002000) --> 4K
[0000000000002000, 0000000000003000) --> 4K
...
[00000000001FF000, 0000000000200000) --> 4K
[0000000000200000, 0000000000400000) --> 2M
[0000000000400000, 0000000000600000) --> 2M
[0000000000600000, 0000000000601000) --> 4K
The above covers the 2M-aligned address (0000000000600000) in the
page table. If a Page Fault happens on an access to
0000000000602000, the following page entry needs to be created:
[0000000000602000, 0000000000603000) --> 4K
But the PDE entry already exists in the page table, with the PS bit
set to 0.
So this patch removes the assert check. The page table entry
created will use the platform-specified PageSize granularity.
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
|
|
Before commits 701b5797 & 4ceefd6d, 2MB pages were created by
default to cover [0, 4G] if SmmProfile was enabled, and they were
walked and changed into 4KB pages during the page table update
(InitPaging). If so, there was no problem with asserting that the
PDE entry exists in RestorePageTableBelow4G.
But after the above commits, the PageTableMap API is used to
create/update the page table, 1G pages are the default page table
mode, and only a limited address range is covered. The ranges not
covered are marked as non-present at the 1G-page level. If so, the
2M-page entry might not exist, and it is incorrect to assert that
the PDE entry exists in RestorePageTableBelow4G.
The correct behavior is to check whether the PDE entry exists; if
not, a PDE should be allocated and assigned to the PDPTE.
Note:
RestorePageTableBelow4G() does not use 1G page size entries for the
creation of new pages, maintaining consistency with the behavior of
the original code.
The purpose of this patch is to ensure that a Page Directory Entry
(PDE) exists prior to its usage.
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
|
|
There is a bug in the existing code: single-step mode is always
enabled once a Page Fault (#PF) occurs, but it is only disabled
when the SMM Profile feature has actually started (see
DebugExceptionHandler). If the SMM Profile feature has not been
started, this results in single-step mode remaining enabled after a
Page Fault occurs.
This patch enables the single-step debugging mode, by setting the
Trap Flag, only after the SmmProfile feature starts.
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
|
|
Change debug print levels to modern DEBUG_ format.
Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
|
|
This commit fixes an SMM code assert issue when SMM Profile is
enabled.
When SMM Profile is enabled, the function InitProtectedMemRange()
retrieves MMIO ranges from GCD and stores the MMIO ranges in
mProtectionMemRange. At ReadyToLock, the function InitPaging()
modifies the page table based on mProtectionMemRange. If the MMIO
ranges in mProtectionMemRange are not 4K aligned, the code asserts
when modifying the page table.
In this commit, we skip MMIO ranges whose BaseAddress and Length
are not 4K aligned when creating mProtectionMemRange. This only
causes each access to a skipped MMIO range to be logged. In the
current failure case on QEMU and QSP SimicsOpenBoard, the skipped
MMIO range is [0xFED00000, 0xFED00400] for the HPET. Considering
that the probability of the HPET MMIO range being accessed in SMM
is very small, the solution in this commit is acceptable and
simple.
Signed-off-by: Dun Tan <dun.tan@intel.com>
|
|
Structure assignment may be expanded by the compiler into a call to
memcpy; preventing that would require adding -mno-memcpy to the
compilation flags. Here, we reduce the dependency and use CopyMem
for the data conversion instead, so no memcpy is emitted.
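(The pattern in question, sketched; the context names are
illustrative:)

  //
  // A structure assignment like this may be lowered by the compiler
  // to a memcpy() call:
  //
  //   *DestinationContext = *SourceContext;
  //
  // Using the BaseMemoryLib service keeps the copy explicit and
  // avoids the compiler intrinsic:
  //
  CopyMem (DestinationContext, SourceContext, sizeof (*DestinationContext));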
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Chao Li <lichao@loongson.cn>
Signed-off-by: Dongyan Qian <qiandongyan@loongson.cn>
Co-authored-by: Chao Li <lichao@loongson.cn>
|
|
Given that the second parameter can be universally set to TRUE across
all use cases, its removal simplifies the function interface and the
associated code paths.
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
|
|
Analysis of the current usage patterns revealed that this parameter
should consistently be set to TRUE.
Specifically, the parameter was found to be FALSE in the following
scenarios:
1. During the initial volatile register setup for the first AP
wake-up in both the PEI and DXE phases. In these instances, the
volatile registers are pre-initialized in MpInitLibInitialize(),
and manually setting them to zero does not require altering the DR
state.
2. When switching the BSP, the new BSP does not synchronize the DR
state. This behavior is now adjusted to ensure the DR state is
synchronized, aligning with a more logical and expected behavior
when transitioning BSP roles.
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
|
|
In the previous commit, we removed the ApInitReconfig status. Now
there are only two statuses: ApInitConfig and ApInitDone.
Only the very first AP wake-up needs to set the ApInitConfig
status. Therefore, if this is not the first wake-up, set the
ApInitDone status.
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
|
|
The ApInitReconfig status is used to indicate that, when an AP
wakes up, it needs to restore volatile registers from the BSP and
use Init-SIPI-SIPI. Since we now handle the volatile registers
properly, we can use the WakeUpByInitSipiSipi flag to replace
ApInitReconfig. Avoiding ApInitReconfig simplifies the code.
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
|
|
When stack guard is enabled, APs need separate GDTs.
In the current code, APs lose their separate GDTs when an AP gets
disabled and later re-enabled. This is because, when re-enabling an
AP, the AP restores volatile registers from the BSP.
This patch updates the AP management to ensure that each AP saves
and restores its own set of volatile registers to solve this issue.
Key changes include:
- APs now maintain their own volatile register space, eliminating
the dependency on the BSP's register state.
- Special handling is implemented for the first AP wake-up during
the PEI and DXE phases, where the volatile registers are
synchronized from the BSP.
- When switching the BSP, the manual handling of the global
variable CpuMpData->CpuData[Index].VolatileRegisters is removed.
The manual handling in the previous code existed because the old
BSP may not save volatile registers after the AP procedure, and the
new BSP's VolatileRegisters buffer may be used by other APs. Now,
since APs always save/restore volatile registers from their own
buffer, the special handling is no longer needed.
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
|
|
The BSP should save, and sync to the APs, the initial timer count
instead of the current timer count.
Also, the BSP can check the initial timer count to know whether the
local APIC timer is enabled, and should only sync the setting when
it is enabled.
|
|
This update ensures the consistency of local APIC timer settings
across all processors when a BSP switch occurs.
The local APIC timer is utilized in two distinct scenarios:
1. As a delay mechanism within the timer library.
2. To generate periodic timer interrupts during the DXE phase.
For scenario 1, APs can simply inherit the initial settings from
the BSP. Even if the local APIC timer setting is changed by the BSP
later, the APs can still use the old setting. Therefore, the code
to save the local APIC timer settings can be moved to
MpInitLibInitialize().
For scenario 2, because a normal AP doesn't enable the timer
interrupt, we only need to care about the SwitchBsp case. It is
crucial that the periodic timer interrupts remain operational after
the BSP is switched. To achieve this, the local APIC timer settings
on the old BSP are now preserved and synced to the new BSP.
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
|
|
CPU_AP_DATA contains an AP's information, such as CpuHealthy and
VolatileRegisters. Exchange the whole CPU_AP_DATA buffer instead of
individual fields to make the code simpler.
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
|
|
This reverts commit ae59b8ba4166384cbfa32a921aac289bcff2aef9.
Commit ae59b8ba41 modified GenSmmPageTable() to map SMRAM at 4K
page granularity. It was an urgent fix for an SMM hang issue,
avoiding page splits in the paging structures that cover the SMRAM
range when an SMI happens. But the SMM hang issue was finally
root-caused and fixed by commit 839bd17973.
Meanwhile, an SMM page table creation issue was introduced by
commit ae59b8ba41:
In the function GenSmmPageTable(), the paging level for the range
outside SMRAM depends on the input parameter PagingMode, while the
paging level for the SMRAM range depends on m5LevelPagingNeeded. If
the two paging levels are different, the SMM page table is created
incorrectly.
So let's revert the commit "UefiCpuPkg/PiSmmCpuDxeSmm:Map SMRAM
in 4K page granularity".
Signed-off-by: Dun Tan <dun.tan@intel.com>
|
|
This patch consumes PcdCpuSmmApSyncTimeout2 to enhance the
flexibility of the timeout configuration.
In some cases, certain processors may not be able to enter SMI, and
prolonged waiting could lead to a kernel soft/hard lockup. We have
now defined two timeouts. The first timeout can be set to a smaller
value to reduce the waiting period. Processors that are unable to
enter SMI will be woken up through an SMI IPI to enter SMI,
followed by a second waiting period. The second timeout can be set
to a larger value to prevent delays in the case where processors
enter SMI late due to long instruction execution.
This patch adjusts the location of PcdCpuSmmApSyncTimeout2 to avoid
a conflict.
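(Shape of the two-phase wait; WaitForAllAPs here is a stand-in for
the driver's actual sync primitive:)

  //
  // Phase 1: short wait for APs to enter SMI on their own.
  //
  if (!WaitForAllAPs (PcdGet64 (PcdCpuSmmApSyncTimeout))) {
    //
    // Phase 2: wake the stragglers with an SMI IPI, then allow a
    // longer window for slow instruction retirement.
    //
    SendSmiIpiAllExcludingSelf ();
    WaitForAllAPs (PcdGet64 (PcdCpuSmmApSyncTimeout2));
  }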
Signed-off-by: Yanbo Huang <yanbo.huang@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
|
|
This reverts commit cb3134612d11102fe066c94c8fa7edb20d62c1a8.
Syncing this commit to the Intel server platform hits a conflict
because our code base is old. We don't want to cherry-pick the
dependent patches, to avoid potential issues, so we need to revert
this commit first, then fix the conflict and reapply the change.
Sorry for the inconvenience.
Signed-off-by: Yanbo Huang <yanbo.huang@intel.com>
|
|
MMIO ranges within the mProtectionMemRange array may exceed 4G and
should be configured as 'Present & NX'. However, the initial
attribute for these MMIO addresses in the page table is
'non-present'. Other attributes cannot be set or updated for a
non-present range if the present bit mask is zero, as this results
in an error during InitPaging's page table update process.
This patch resolves the error to make sure the MMIO page table can
be configured correctly.
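(Illustrative use of CpuPageTableLib: when flipping a non-present
MMIO range to 'Present & NX', the map mask must include the Present
bit:)

  IA32_MAP_ATTRIBUTE  MapAttribute;
  IA32_MAP_ATTRIBUTE  MapMask;

  MapAttribute.Uint64       = 0;
  MapAttribute.Bits.Present = 1;
  MapAttribute.Bits.Nx      = 1;

  MapMask.Uint64       = 0;
  MapMask.Bits.Present = 1;    // the Present bit must be in the mask,
  MapMask.Bits.Nx      = 1;    // or updating a non-present range fails

  // Pass both MapAttribute and MapMask to CpuPageTableLib's
  // PageTableMap() for the MMIO range.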
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
|
|
CONFIDENTIAL_COMPUTING_GUEST_ATTR is no longer a simple SEV level;
since the previous commit it also includes a feature mask.
Fix AmdMemEncryptionAttrCheck to check the level and feature
correctly, and add DebugVirtualization support.
Since the actual feature flag is not set yet, this should cause no
behavioural change.
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Alexey Kardashevskiy <aik@amd.com>
---
Changes:
v5:
* "rb" from Tom
|
|
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
|
|
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
|
|
On the LoongArch platform, the a0 register can be used as both a
function parameter and a return value. Because the
EFI_SYSTEM_CONTEXT parameter was overwritten with an invalid
context address, the incorrect parameter address caused a memory
access exception when calling GetExceptionType.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4796
Cc: Chao Li <lichao@loongson.cn>
Signed-off-by: Dongyan Qian <qiandongyan@loongson.cn>
|