path: root/UefiCpuPkg
Age | Commit message | Author | Files | Lines
9 days | UefiCpuPkg: Add dump interrupt type on LoongArch64 | Chao Li | 3 files, -0/+61
If the exception type is INT, we need to know which interrupts could not be handled, so add a method to dump them. Cc: Ray Ni <ray.ni@intel.com> Cc: Jiaxin Wu <jiaxin.wu@intel.com> Cc: Zhiguang Liu <zhiguang.liu@intel.com> Cc: Dun Tan <dun.tan@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Jiaxin Wu <jiaxin.wu@intel.com> Signed-off-by: Chao Li <lichao@loongson.cn>
9 days | UefiCpuPkg: Adjust the exception handler logic on LoongArch64 | Chao Li | 3 files, -16/+8
There is a problem with the LoongArch64 exception handler: it returns an unhandled value when we get the exception type; the correct value should be right-shifted by 16 bits, so fix it. Cc: Ray Ni <ray.ni@intel.com> Cc: Jiaxin Wu <jiaxin.wu@intel.com> Cc: Zhiguang Liu <zhiguang.liu@intel.com> Cc: Dun Tan <dun.tan@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Jiaxin Wu <jiaxin.wu@intel.com> Signed-off-by: Chao Li <lichao@loongson.cn>
12 days | UefiCpuPkg/PiSmmCpuDxeSmm: Set SmmProfile Variable only for DXE SMM | Zhao, Yanxin | 1 file, -6/+9
Some platforms plan to move the Standalone MM CPU driver into the FSP. However, there is no variable service support in FSP. Therefore, the SetVariable logic for the Standalone MM CPU will be removed. With this change, users can dump the SmmProfile data from the Memory Allocation HOB: gMmProfileDataHobGuid. This change does not impact the DXE SMM, which will still retrieve the SmmProfile data from the variable service. Signed-off-by: Yanxin Zhao <yanxin.zhao@intel.com>
2024-12-17 | UefiCpuPkg: x86 CpuDxe: Allocate AP Exception Stack Below 4GB | Oliver Smith-Denny | 1 file, -3/+25
When setting up the APs' exception stacks, the x86 CpuDxe allocates a buffer at any address, copies over the existing GDT and IDT, adds the appropriate new entries for this AP, and then installs them. This can cause an issue if the allocated buffer is above 4GB, because the next time the AP is started it goes through an INIT-SIPI-SIPI, stepping through real mode -> protected mode -> long mode, and while in protected mode it needs a 32-bit code segment descriptor or else it will fault when trying to execute. If the GDT lives above 4GB, it cannot be accessed by the protected mode code and a triple fault is seen. This patch updates CpuDxe's MP management code to explicitly allocate the exception stacks for all APs below 4GB to avoid this problem, just as it does with the BSP's GDT that is first copied to the APs. Signed-off-by: Oliver Smith-Denny <osde@microsoft.com>
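A minimal sketch of the allocation pattern described above, assuming a hypothetical GdtBufferSize; the actual CpuDxe code differs in detail:

```c
//
// Sketch: place the AP GDT/exception-stack buffer below 4GB so the GDT stays
// reachable from 32-bit protected mode during the INIT-SIPI-SIPI wakeup.
// GdtBufferSize is a hypothetical placeholder, not the driver's real variable.
//
EFI_PHYSICAL_ADDRESS  Buffer;
EFI_STATUS            Status;

Buffer = BASE_4GB - 1;                          // highest acceptable address
Status = gBS->AllocatePages (
                AllocateMaxAddress,
                EfiBootServicesData,
                EFI_SIZE_TO_PAGES (GdtBufferSize),
                &Buffer
                );
ASSERT_EFI_ERROR (Status);
```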
2024-12-12 | UefiCpuPkg: Remove macro MAX_LOONGARCH_EXCEPTION | Chao Li | 3 files, -21/+0
Since UEFI 2.11 has been released and the macro MAX_LOONGARCH_EXCEPTION has been added to MdePkg, delete it from the LoongArch folder header file. Cc: Ray Ni <ray.ni@intel.com> Cc: Jiaxin Wu <jiaxin.wu@intel.com> Cc: Zhiguang Liu <zhiguang.liu@intel.com> Cc: Dun Tan <dun.tan@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Jiaxin Wu <jiaxin.wu@intel.com> Signed-off-by: Chao Li <lichao@loongson.cn>
2024-12-02 | UefiCpuPkg/CpuMmuLib: Adjust default memory attributes on LoongArch | Chao Li | 1 file, -0/+1
When updating memory attributes, if only the access attributes are changed, the memory cache attribute is NULL and CACHE_CC is applied by default. Signed-off-by: Chao Li <lichao@loongson.cn>
2024-11-25 | UefiCpuPkg/PiSmmCpuDxeSmm: Check resource HOB range before mapping | Dun Tan | 1 file, -0/+10
This commit checks that a resource HOB range does not exceed the maximum supported physical address. The function BuildMemoryMapFromResDescHobs builds memory regions from resource HOBs; these memory maps are then used when creating or modifying the SMM page table. If a resource HOB range exceeds the maximum supported physical address, subsequent calls to PageTableMap() will fail. Signed-off-by: Dun Tan <dun.tan@intel.com>
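A hedged sketch of the kind of guard described above; ResourceHob uses the standard EFI_HOB_RESOURCE_DESCRIPTOR fields, while MaxPhysAddrBits is a placeholder for however the driver obtains the supported address width:

```c
//
// Reject any resource HOB that reaches beyond the maximum supported physical
// address, since a later PageTableMap() call could not map it.
//
UINT64  MaxSupportedAddress;

MaxSupportedAddress = LShiftU64 (1, MaxPhysAddrBits);   // MaxPhysAddrBits: placeholder
if (ResourceHob->PhysicalStart + ResourceHob->ResourceLength > MaxSupportedAddress) {
  DEBUG ((DEBUG_ERROR, "Resource HOB range exceeds max supported physical address\n"));
  ASSERT (FALSE);
}
```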
2024-11-15 | UefiCpuPkg: Fix unchecked returns and potential integer overflows | kenlautner | 19 files, -73/+423
Resolves several issues in UefiCpuPkg related to: 1. Unchecked returns leading to potential NULL or uninitialized access. 2. Potential unchecked integer overflows. 3. Incorrect comparison between integers of different sizes. Co-authored-by: kenlautner <85201046+kenlautner@users.noreply.github.com> Signed-off-by: Chris Fernald <chfernal@microsoft.com>
2024-11-13 | MdePkg: MdeLibs.dsc.inc: Apply StackCheckLibNull to All Module Types | Oliver Smith-Denny | 1 file, -8/+2
Now that the ResetVectors are USER_DEFINED modules, they will not be linked against StackCheckLibNull; they were the only modules causing issues. So we can remove the kludge we had before and the requirement for every DSC to include StackCheckLibNull for SEC modules, and just apply StackCheckLibNull globally. This also changes every DSC to drop the SEC definition of StackCheckLibNull. Continuous-integration-options: PatchCheck.ignore-multi-package Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
2024-11-13 | UefiCpuPkg: Make the ResetVector USER_DEFINED | Oliver Smith-Denny | 2 files, -2/+2
The x86 reset vector is the initial FW code to run on an AP. It should not link to any libraries and is implemented entirely in assembly. This module is currently labeled as SEC because it runs during the SEC phase, but as a SEC module it gets linked to all NULL libraries that are linked globally. This causes issues with StackCheckLib (though any NULL library being applied globally has the same issue) because BaseTools will attempt to link the library and add an extern to _ModuleEntryPoint, which does not exist for this module. Moving this module to USER_DEFINED instructs BaseTools not to link any NULL libraries to it, which is the desired behavior, and leads to a much cleaner global NULL library implementation, in this case for StackCheckLib. This change was tested on OVMF IA32/X64 and proved to work as before. Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
2024-11-12 | UefiCpuPkg: SmmProfile: Use public Architectural MSRs from MdePkg | Vivian Nowka-Keane | 2 files, -24/+35
Replaced local Msr defines with inclusion of Register/Amd/Msr.h. Signed-off-by: Vivian Nowka-Keane <vnowkakeane@linux.microsoft.com>
2024-11-12 | UefiCpuPkg: Use public Architectural MSRs from MdePkg | Vivian Nowka-Keane | 5 files, -24/+31
Replaced local Msr defines with inclusion of Register/Amd/Msr.h in Amd libraries. Signed-off-by: Vivian Nowka-Keane <vnowkakeane@linux.microsoft.com>
2024-11-11 | UefiCpuPkg/MtrrLib: Fix unit test read overflow | Michael D Kinney | 1 file, -1/+1
Change the conditional check to validate the array index before reading the array member, to prevent reading past the end of the buffer. Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
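A generic illustration of the fix pattern (not the exact MtrrLib unit-test expression): put the bounds check first so short-circuit evaluation prevents the out-of-range read.

```c
// Before: the element is read even when Index has already walked past the end.
//   while (Array[Index] != Sentinel && Index < ArrayCount) { Index++; }
//
// After: the index is validated before the array member is read.
while (Index < ArrayCount && Array[Index] != Sentinel) {
  Index++;
}
```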
2024-11-07 | UefiCpuPkg/SecCore: Consume PcdMaxMappingAddressBeforeTempRamExit | Jiaxin Wu | 3 files, -81/+126
Consume PcdMaxMappingAddressBeforeTempRamExit for page table creation in permanent memory before Temporary RAM exit. This patch creates the full page table in two steps: Step 1: Map up to the configured maximum address in the page table before the Temporary RAM exit. Step 2: Create the full-range page table after the Temporary RAM exit. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-11-07 | UefiCpuPkg/UefiCpuPkg.dec: Add PcdMaxMappingAddressBeforeTempRamExit | Jiaxin Wu | 1 file, -0/+8
This change is made for boot performance considerations. Before the Temporary RAM is disabled, the permanent memory is in the UC state, so creating the page table in permanent memory takes more time with larger page table sizes. Therefore, this patch adds PcdMaxMappingAddressBeforeTempRamExit to give the platform the capability to control the maximum mapping address in the page table before Temporary RAM exit. If the value is 0xFFFFFFFFFFFFFFFF, the firmware will map the entire physical address space. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
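A minimal sketch of the PCD semantics described above; only the PCD name comes from the patch, the helper names are placeholders:

```c
//
// Limit the pre-TempRamExit page table to the configured maximum address;
// MAX_UINT64 means "map the entire physical address space".
//
UINT64  MaxMappingAddress;

MaxMappingAddress = PcdGet64 (PcdMaxMappingAddressBeforeTempRamExit);
if (MaxMappingAddress == MAX_UINT64) {
  MaxMappingAddress = GetMaxSupportedPhysicalAddress ();  // placeholder helper
}

CreatePageTableUpTo (MaxMappingAddress);                  // placeholder helper
```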
2024-11-01 | UefiCpuPkg: Remove AMD 32-bit SMRAM save state map | Phil Noh | 3 files, -157/+101
Per AMD64 Architecture Programmer's Manual Volume 2: System Programming - 10.2.3 SMRAM State-Save Area (Rev 24593), the AMD64 architecture does not use the legacy SMM state-save area format (Table 10-2) for the 32-bit SMRAM save state map. Clean up the code for the invalid save state map. Signed-off-by: Phil Noh <Phil.Noh@amd.com>
2024-10-30 | UefiCpuPkg/MmUnblockMemoryLib: Check if buffer range is valid | Dun Tan | 1 file, -0/+90
Check whether the input buffer range is unblockable: 1. The input buffer range to unblock should be totally covered by one or more memory allocation HOBs. 2. All memory allocation HOBs that overlap with the input buffer range should be EfiRuntimeServicesData, EfiACPIMemoryNVS or EfiReservedMemoryType. Signed-off-by: Dun Tan <dun.tan@intel.com>
2024-10-29 | UefiCpuPkg/PiSmmCpuDxeSmm: Fix extraneous parentheses | Mike Beaton | 1 file, -1/+1
Without this change, when building OvmfPkg with -D SMM_REQUIRE using the XCODE5 toolchain we get: error: equality comparison with extraneous parentheses which stops the build. Signed-off-by: Mike Beaton <mjsbeaton@gmail.com>
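An illustration of the diagnostic class XCODE5 (clang) emits; the expression below is made up, not the one from PiSmmCpuDxeSmm:

```c
// Triggers "equality comparison with extraneous parentheses" with clang:
if ((Status == EFI_SUCCESS)) {
  // ...
}

// Fixed: a single set of parentheses.
if (Status == EFI_SUCCESS) {
  // ...
}
```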
2024-10-16 | UefiCpuPkg/PiSmmCpuDxeSmm: Save and restore CR2 only if SmiProfile enable | Jiaxin Wu | 1 file, -2/+18
A page fault (#PF) that triggers an update to the page table only occurs if SmiProfile is enabled. Therefore, it is only necessary to save and restore the CR2 register if SmiProfile is configured to be enabled. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
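A minimal sketch of the conditional save/restore, assuming a gating flag named mSmmProfileEnabled (the driver's actual symbol may differ); AsmReadCr2()/AsmWriteCr2() are the BaseLib accessors:

```c
UINTN  SavedCr2;

SavedCr2 = 0;
if (mSmmProfileEnabled) {        // assumed flag name
  SavedCr2 = AsmReadCr2 ();      // only SmiProfile-driven #PFs can clobber CR2
}

//
// ... dispatch the MMI handlers ...
//

if (mSmmProfileEnabled) {
  AsmWriteCr2 (SavedCr2);
}
```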
2024-10-12 | UefiCpuPkg/PiSmmCpuDxeSmm: Consume SmmCpuPlatformHookBeforeMmiHandler func | Jiaxin Wu | 1 file, -8/+23
This patch adds, in the PiSmmCpuDxeSmm driver, one round of wait/release synchronization between the BSP and APs to perform the SMM CPU platform hook, SmmCpuPlatformHookBeforeMmiHandler(), before executing the MMI handlers. With this function, the SMM CPU driver can perform platform-specific tasks after one round of BSP/AP synchronization (to make sure all APs are in SMI) and before the MMI handlers run. After the change, steps #1 and #2 are additional requirements if the MmCpuSyncModeTradition mode is selected. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-10-12 | UefiCpuPkg: Add SmmCpuPlatformHookBeforeMmiHandler | Jiaxin Wu | 2 files, -2/+35
This patch adds the SmmCpuPlatformHookBeforeMmiHandler interface to SmmCpuPlatformHookLib. The new API can be used to perform platform-specific tasks before executing the MMI handlers. For example, Intel can leverage this API to clear the pending SMI bit after all CPUs finish the sync and before the MMI handlers run. If so, the redundant SMI can be avoided after the CPU exits the current SMI. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-10-12 | UefiCpuPkg/PiSmmCpuDxeSmm: Clarification for BSP & APs Sync Flow | Jiaxin Wu | 1 file, -22/+22
This patch does not impact functionality. It aims to clarify the synchronization flow between the BSP and APs to enhance code readability and understanding: Steps #6 and #11 are the basic synchronization requirements for all cases. Step #1 is an additional requirement if the MmCpuSyncModeTradition mode is selected. Steps #1, #2, #3, #4, #5, #7, #8, #9, and #10 are additional requirements if the system needs to configure the MTRRs. Steps #9 and #10 are additional requirements if the system needs to support mSmmDebugAgentSupport. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-10-10 | UefiCpuPkg/MpLib: Remove NotifyOnS3SmmInitDonePpi | Zhiguang Liu | 2 files, -65/+0
Previously, the SMM S3 resume code required taking control of APs to perform SMM rebase, which would overwrite the context set by MpLib. As a result, MpLib needed to wake up APs using InitSipiSipi to restore the context after SMM S3 resume. With the recent change where SMM rebase occurs in the early PEI phase, the SMM S3 resume code no longer modifies AP context. Therefore, the forced use of InitSipiSipi after SMM S3 resume is no longer necessary. Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
2024-10-10 | UefiCpuPkg/S3: Skip CR3 modification in S3Resume for 64-bit PEI | Zhiguang Liu | 1 file, -3/+5
Previously, when PEI was 32-bit and DXE was 64-bit, the S3 resume code had to set or change the CR3 register before executing 64-bit code. However, now that both PEI and DXE may be 64-bit, this modification is unnecessary because PEI already uses sufficiently large page tables. Additionally, there is a bug in the current implementation: the CR3 changed during S3 resume maps only MMIO below 4G, which could lead to issues if an end-of-PEI notification attempts to access addresses above 4G. Overall, skipping the CR3 modification in S3Resume when PEI is 64-bit fixes the bug and also avoids unnecessary logic. Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
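A sketch of the early-out described above; the real S3Resume code may key off the build architecture rather than a runtime size check:

```c
//
// A 64-bit PEI already runs on page tables that cover what the 64-bit DXE
// entry path needs, so CR3 can be left alone; only a 32-bit PEI must install
// a long-mode-capable page table before jumping to 64-bit code.
//
if (sizeof (UINTN) == sizeof (UINT64)) {
  // 64-bit PEI: skip the CR3 switch.
} else {
  // 32-bit PEI: build and switch to the 64-bit page table as before.
}
```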
2024-10-04 | UefiCpuPkg: RiscV64: initialize FPU | Heinrich Schuchardt | 8 files, -0/+80
The OpenSSL library uses floating point registers. There is no guarantee that a prior firmware stage has enabled the FPU. Provide a library BaseRiscVFpuLib to * Enable the FPU and set it to state 'dirty'. * Clear the fcsr CSR. Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
2024-09-20 | UefiCpuPkg/MtrrLib: MtrrLibIsMtrrSupported always return FALSE in TD-Guest | Min M Xu | 1 file, -0/+7
Currently, TDX exposes the MTRR CPUID bit to TDX VMs. So, based on the CPUID, the guest software components (OVMF/TDVF and the guest kernel) will access MTRR MSRs. One problem with the guest using MTRR is that changing an MTRR setting requires setting CR0.CD=1, which causes a #VE for TDX. For the Linux kernel, there is a mechanism called SW-defined MTRR introduced by the patch https://lore.kernel.org/all/20230502120931.20719-4-jgross@suse.com/. If this is integrated for the TDX guest, the Linux kernel will not access any MTRR MSRs. So we update MtrrLibIsMtrrSupported() to always return FALSE for a TD guest; then TDVF will not access MTRR MSRs at all. Cc: Ray Ni <ray.ni@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Jiaxin Wu <jiaxin.wu@intel.com> Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Binbin Wu <binbin.wu@intel.com> Signed-off-by: Min Xu <min.m.xu@intel.com>
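A hedged sketch of a TD-guest early-out inside MtrrLibIsMtrrSupported(); the TDX probe shown here uses the CPUID leaf 0x21 "IntelTDX    " signature, which may differ from the exact check the patch uses:

```c
UINT32  MaxLeaf;
UINT32  RegEbx;
UINT32  RegEcx;
UINT32  RegEdx;

AsmCpuid (0x00, &MaxLeaf, NULL, NULL, NULL);        // CPUID leaf 0: max leaf in EAX
if (MaxLeaf >= 0x21) {
  AsmCpuidEx (0x21, 0, NULL, &RegEbx, &RegEcx, &RegEdx);
  if ((RegEbx == SIGNATURE_32 ('I', 'n', 't', 'e')) &&
      (RegEdx == SIGNATURE_32 ('l', 'T', 'D', 'X')) &&
      (RegEcx == SIGNATURE_32 (' ', ' ', ' ', ' ')))
  {
    //
    // TD guest: changing MTRRs requires CR0.CD=1, which causes a #VE,
    // so report MTRR as unsupported and never touch the MTRR MSRs.
    //
    return FALSE;
  }
}
```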
2024-09-17 | UefiCpuPkg/AmdSmmCpuFeaturesLib: Skip SMBASE configuration | Phil Noh | 2 files, -5/+28
This patch avoids configuring SMBASE if SmBase relocation has already been done. If gSmmBaseHobGuid is found, the SmBase info has been relocated and recorded in the SmBase array, so there is no need to do the relocation in SmmCpuFeaturesInitializeProcessor(). Signed-off-by: Phil Noh <Phil.Noh@amd.com>
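A hedged sketch of the early-out; gSmmBaseHobGuid is the HOB named in the commit, and GetFirstGuidHob() is the standard HobLib probe:

```c
//
// If the SMM base HOB exists, SMBASE relocation already happened and the
// per-CPU values are recorded in the SmBase array, so skip it here.
//
if (GetFirstGuidHob (&gSmmBaseHobGuid) != NULL) {
  return;
}
```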
2024-09-13 | UefiCpuPkg: Add StackCheckLib | Oliver Smith-Denny | 1 file, -2/+8
SecCore and SecCoreNative require StackCheckLib and so the NULL instance is linked against them here. Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
2024-09-06 | UefiCpuPkg/PiSmmCpuDxeSmm: Remove RestrictedMemoryAccess check for MM CPU | Jiaxin Wu | 6 files, -31/+46
The PcdCpuSmmRestrictedMemoryAccess is declared as either a dynamic or fixed PCD. It is not recommended for use in the MM CPU driver. Furthermore, IsRestrictedMemoryAccess() is only needed for SMM. Therefore, there is no need for MM to consume PcdCpuSmmRestrictedMemoryAccess. This patch adds an SMM-specific file for those functions; with that change, the MM CPU driver's dependency on PcdCpuSmmRestrictedMemoryAccess can be removed. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-09-06 | UefiCpuPkg/PiSmmCpuDxeSmm: Clean mCpuSmmRestrictedMemoryAccess | Jiaxin Wu | 1 file, -7/+4
Currently, mCpuSmmRestrictedMemoryAccess is only used by IsRestrictedMemoryAccess(), and IsRestrictedMemoryAccess() can consume PcdCpuSmmRestrictedMemoryAccess directly. Therefore, mCpuSmmRestrictedMemoryAccess can be removed to simplify the code logic. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-09-06 | UefiCpuPkg/PiSmmCpuDxeSmm: Update IfReadOnlyPageTableNeeded | Jiaxin Wu | 1 file, -18/+1
After commit 9f29fbd3, the full-mapping SMM page table is always created regardless of the value of PcdCpuSmmRestrictedMemoryAccess. If so, the SMM page table attributes can be set to read-only since there is no need to update the page table. So, this patch removes the restricted memory access check when setting the SMM page table attributes. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-09-06 | UefiCpuPkg/PiSmmCpuDxeSmm: Correct SetPageTableAttributes func usage | Jiaxin Wu | 2 files, -12/+8
SetPageTableAttributes() uses IfReadOnlyPageTableNeeded() to determine whether it is necessary to set the page table itself to read-only, and IfReadOnlyPageTableNeeded() has already taken into account the status of IsRestrictedMemoryAccess(). Therefore, there is no need for an additional call to IsRestrictedMemoryAccess() before calling SetPageTableAttributes(). Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-09-06 | UefiCpuPkg/PiSmmCpuDxeSmm: Deadloop if PFAddr is not supported by system | Jiaxin Wu | 1 file, -1/+1
Deadloop if the PF address is not supported by the system; there is no need to check whether SMM CPU restricted memory access is enabled. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-09-06 | UefiCpuPkg/PiSmmCpuDxeSmm: Always save and restore CR2 | Jiaxin Wu | 1 file, -14/+4
Following commit 9f29fbd3, the full-mapping SMM page table is always created regardless of the value of PcdCpuSmmRestrictedMemoryAccess. Consequently, a page fault (#PF) that triggers an update to the page table occurs only when SmiProfile is enabled. Therefore, it is necessary to save and restore the CR2 register when SmiProfile is configured to be enabled. Saving and restoring CR2 is not a heavy operation compared to saving and restoring CR3. As a result, the condition check for SmiProfile has been removed, and CR2 is now saved and restored unconditionally, without additional condition checks. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-09-06 | UefiCpuPkg/PiSmmCpuDxeSmm: Fix IsSmmCommBufferForbiddenAddress check | Jiaxin Wu | 2 files, -1/+5
SmiPFHandler depends on IsSmmCommBufferForbiddenAddress() to do the forbidden address check: For SMM, verifying whether an address is forbidden is necessary only when RestrictedMemoryAccess is enabled. For MM, all accessible addresses are recorded in the resource descriptor HOBs, so there is no need to check whether RestrictedMemoryAccess is enabled. This patch moves the RestrictedMemoryAccess check into the SMM IsSmmCommBufferForbiddenAddress() to align with the above behavior. With the change, SmiPFHandler no longer needs to check whether RestrictedMemoryAccess is enabled. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-09-06 | UefiCpuPkg/PiSmmCpuDxeSmm: Avoid to access MCA_CAP if CPU does not support | Jiaxin Wu | 1 file, -5/+3
Do not access the MCA_CAP MSR unless the CPU supports SmmRegFeatureControl. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-09-02 | UefiCpuPkg/UefiCpuPkg.ci.yaml: Add PrEval CI config | Joey Vagedes | 1 file, -0/+3
Adds an entry to the package's CI configuration file that enables policy 5 for stuart_pr_eval. With this policy, all INFs used by the package are extracted from the provided DSC file and compared against the list of changed *.inf (INF) files in the PR. If there is a match, stuart_pr_eval will mark this package as affected by the PR and needing to be tested. Signed-off-by: Joey Vagedes <joey.vagedes@gmail.com>
2024-08-30 | UefiCpuPkg: Using the new name of LoongArch CSR 0x20 register | Chao Li | 2 files, -2/+2
Since the LoongArch spec has adjusted the CSR 0x20 register name and MdePkg has added the new name, use the new name in UefiCpuPkg. Cc: Ray Ni <ray.ni@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Jiaxin Wu <jiaxin.wu@intel.com> Signed-off-by: Chao Li <lichao@loongson.cn>
2024-08-28 | UefiCpuPkg/MpInitLib: Skip X2APIC enabling when BSP in X2APIC already | Ray Ni | 1 file, -1/+3
The BSP's APIC mode is synced to all APs in CollectProcessorCount(). So, it's safe to skip the X2 APIC enabling in AutoEnableX2Apic() which runs later when BSP's APIC mode is X2 APIC already. Signed-off-by: Ray Ni <ray.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com>
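A sketch of the early-out, using LocalApicLib's GetApicMode(); since the BSP's mode was already synced to the APs, nothing remains to do when it is X2 APIC:

```c
if (GetApicMode () == LOCAL_APIC_MODE_X2APIC) {
  //
  // BSP (and therefore all APs) already run in X2 APIC mode.
  //
  return;
}
```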
2024-08-28 | UefiCpuPkg/MpInitLib: Sync BSP's APIC mode to APs in InitConfig path | Ray Ni | 2 files, -5/+28
The change saves the BSP's initial APIC mode and syncs it to all APs at their first wakeup. It allows certain platforms to switch to X2 APIC as early as possible and is also independent of CpuFeaturePei/Dxe. The platform should switch the BSP to X2 APIC mode before the CpuMpPeim runs. Signed-off-by: Ray Ni <ray.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com>
2024-08-28 | UefiCpuPkg/MpInitLib: Separate X2APIC enabling to subfunction | Ray Ni | 1 file, -23/+42
It's very confusing that auto X2 APIC enabling and APIC ID sorting are both performed inside CollectProcessorCount(). The change separates the X2 APIC enabling into AutoEnableX2Apic() and calls that from MpInitLibInitialize(). SortApicId() is called from MpInitLibInitialize() as well. Signed-off-by: Ray Ni <ray.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com>
2024-08-28 | UefiCpuPkg/UefiCpuPkg.dsc: Include PiSmmCpuStandaloneMm and required Libs | Jiaxin Wu | 1 file, -2/+8
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com> Cc: Ray Ni <ray.ni@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Star Zeng <star.zeng@intel.com> Cc: Dun Tan <dun.tan@intel.com> Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com> Cc: Wei6 Xu <wei6.xu@intel.com> Cc: Yuanhao Xie <yuanhao.xie@intel.com>
2024-08-28 | UefiCpuPkg/PiSmmCpuDxeSmm: Simplify SMM Profile Size Calculation | Jiaxin Wu | 4 files, -11/+11
The motivation of this change is to simplify the logic in StandaloneMmIpl when allocating the memory for SMM profile data. The IPL does not need to detect the CPU feature regarding the MSR DS Area; that change requires the PCD value to include the MSR DS Area. So, the size of SmmProfileData is simplified to PcdCpuSmmProfileSize directly. mMsrDsAreaSize will be within PcdCpuSmmProfileSize if mBtsSupported is TRUE. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com> Cc: Ray Ni <ray.ni@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Star Zeng <star.zeng@intel.com> Cc: Dun Tan <dun.tan@intel.com> Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com> Cc: Wei6 Xu <wei6.xu@intel.com> Cc: Yuanhao Xie <yuanhao.xie@intel.com>
2024-08-28 | UefiCpuPkg/PiSmmCpuDxeSmm: Avoid PcdCpuSmmProfileEnable check in MM | Jiaxin Wu | 12 files, -17/+83
For MM, the gMmProfileDataHobGuid Memory Allocation HOB is defined to indicate whether the SMM profile feature is enabled. If the HOB exists, the SMM profile base address and size are returned in the HOB, so there is no need to consume the PcdCpuSmmProfileEnable feature PCD to check whether the feature is enabled. To achieve this, add the IsSmmProfileEnabled() function. With this change, both MM and SMM can use the new function for the SMM profile feature check. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com> Cc: Ray Ni <ray.ni@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Star Zeng <star.zeng@intel.com> Cc: Dun Tan <dun.tan@intel.com> Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com> Cc: Wei6 Xu <wei6.xu@intel.com> Cc: Yuanhao Xie <yuanhao.xie@intel.com>
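A hedged sketch of how IsSmmProfileEnabled() can key off the memory allocation HOB for MM while keeping the feature PCD for SMM; the HOB-walking details are illustrative, not the exact implementation:

```c
BOOLEAN
IsSmmProfileEnabled (
  VOID
  )
{
  EFI_PEI_HOB_POINTERS  Hob;

  //
  // MM: the presence of the gMmProfileDataHobGuid memory allocation HOB is
  // the enable switch (it also carries the profile base address and size).
  //
  Hob.Raw = GetHobList ();
  while ((Hob.Raw = GetNextHob (EFI_HOB_TYPE_MEMORY_ALLOCATION, Hob.Raw)) != NULL) {
    if (CompareGuid (&Hob.MemoryAllocation->AllocDescriptor.Name, &gMmProfileDataHobGuid)) {
      return TRUE;
    }

    Hob.Raw = GET_NEXT_HOB (Hob);
  }

  //
  // SMM: fall back to the feature PCD as before.
  //
  return FeaturePcdGet (PcdCpuSmmProfileEnable);
}
```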
2024-08-28 | UefiCpuPkg/PiSmmCpuDxeSmm: Cleanup SMM_CPU_SYNC_MODE | Jiaxin Wu | 4 files, -30/+24
Use MM_CPU_SYNC_MODE instead of SMM_CPU_SYNC_MODE and clean up the duplicate definition. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com> Cc: Ray Ni <ray.ni@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Star Zeng <star.zeng@intel.com> Cc: Dun Tan <dun.tan@intel.com> Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com> Cc: Wei6 Xu <wei6.xu@intel.com> Cc: Yuanhao Xie <yuanhao.xie@intel.com>
2024-08-28 | UefiCpuPkg/PiSmmCpuDxeSmm: Refine DxeSmm PageTable update logic | Jiaxin Wu | 5 files, -144/+174
This patch refines the page table update logic for DxeSmm. For DxeSmm, the page table will be updated in the first SMI when SMM ready-to-lock happens: IF SMM Profile is TRUE: 1. Mark mProtectionMemRange attributes: SmrrBase: Present, SMM profile base: Present & Nx, MMRAM ranges: Present, MMIO ranges: Present & Nx. 2. Mark the ranges not in mProtectionMemRange as RP (non-present). IF SMM Profile is FALSE: 1. Mark non-MMRAM ranges as NX. 2. IF RestrictedMemoryAccess is TRUE: mark forbidden addresses as RP (IsUefiPageNotPresent is TRUE). Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com> Cc: Ray Ni <ray.ni@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Star Zeng <star.zeng@intel.com> Cc: Dun Tan <dun.tan@intel.com> Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com> Cc: Wei6 Xu <wei6.xu@intel.com> Cc: Yuanhao Xie <yuanhao.xie@intel.com>
2024-08-28 | UefiCpuPkg/PiSmmCpuDxeSmm: Add PiSmmCpuStandaloneMm.inf | Jiaxin Wu | 1 file, -0/+136
This patch is to add PiSmmCpuStandaloneMm.inf for MM CPU support. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com> Cc: Ray Ni <ray.ni@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Star Zeng <star.zeng@intel.com> Cc: Dun Tan <dun.tan@intel.com> Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com> Cc: Wei6 Xu <wei6.xu@intel.com> Cc: Yuanhao Xie <yuanhao.xie@intel.com>
2024-08-28 | UefiCpuPkg/PiSmmCpuDxeSmm: Check logging PF address for MM | Jiaxin Wu | 4 files, -0/+70
This patch makes sure that, for MM, only page fault addresses that should be logged run into the SmmProfilePFHandler. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com> Cc: Ray Ni <ray.ni@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Star Zeng <star.zeng@intel.com> Cc: Dun Tan <dun.tan@intel.com> Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com> Cc: Wei6 Xu <wei6.xu@intel.com> Cc: Yuanhao Xie <yuanhao.xie@intel.com>
2024-08-28 | UefiCpuPkg/PiSmmCpuDxeSmm: Start SMM Profile early for MM | Jiaxin Wu | 4 files, -5/+24
SMM Profile can be started early in the SMM CPU entry point since the page table for SMM profile is already ready; there is no need to wait for SMM ready-to-lock. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com> Cc: Ray Ni <ray.ni@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Star Zeng <star.zeng@intel.com> Cc: Dun Tan <dun.tan@intel.com> Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com> Cc: Wei6 Xu <wei6.xu@intel.com> Cc: Yuanhao Xie <yuanhao.xie@intel.com>
2024-08-28 | UefiCpuPkg/PiSmmCpuDxeSmm: Differentiate PerformRemainingTasks | Jiaxin Wu | 5 files, -103/+189
For MM: SMRAM, the page table itself, and the SMM paging state shall be configured once gEdkiiPiMmMemoryAttributesTableGuid is installed by the SMM core. This happens after the MmIpl entry point; PerformRemainingTasks will be called before the MmIpl entry point exits. For SMM: SMRAM, the page table itself, and the SMM paging state are still configured in the first SMI when SMM ready-to-lock happens. So, this patch differentiates PerformRemainingTasks for MM and SMM. Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com> Cc: Ray Ni <ray.ni@intel.com> Cc: Rahul Kumar <rahul1.kumar@intel.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Star Zeng <star.zeng@intel.com> Cc: Dun Tan <dun.tan@intel.com> Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com> Cc: Wei6 Xu <wei6.xu@intel.com> Cc: Yuanhao Xie <yuanhao.xie@intel.com>