path: root/target/i386
2024-10-31  target/i386: Introduce GraniteRapids-v2 model  (Tao Su, 1 file, -0/+17)

Update the GraniteRapids CPU model to add AVX10 and the missing features (ss, tsc-adjust, cldemote, movdiri, movdir64b).

Tested-by: Xuelian Guo <xuelian.guo@intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Link: https://lore.kernel.org/r/20241028024512.156724-7-tao1.su@linux.intel.com
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lore.kernel.org/r/20241031085233.425388-9-tao1.su@linux.intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

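For context, versioned CPU models in target/i386/cpu.c are expressed as X86CPUVersionDefinition entries whose .props list toggles individual features. A sketch of what the new version entry might look like, with the property list reconstructed from the commit text above (the exact entry in cpu.c may differ):

    static const X86CPUVersionDefinition granite_rapids_versions[] = {
        { .version = 1 },
        {
            .version = 2,
            .props = (PropValue[]) {
                { "ss", "on" },
                { "tsc-adjust", "on" },
                { "cldemote", "on" },
                { "movdiri", "on" },
                { "movdir64b", "on" },
                { "avx10", "on" },
                { "avx10-version", "1" },   /* AVX10.1, an assumption */
                { /* end of list */ },
            },
        },
        { /* end of list */ },
    };
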
2024-10-31  target/i386: Add AVX512 state when AVX10 is supported  (Tao Su, 1 file, -1/+9)

AVX10 state enumeration in CPUID leaf D and enabling in the XCR0 register are identical to AVX512 state, regardless of the supported vector lengths. Given that some E-cores will support AVX10 but not AVX512, add the AVX512 state components to the guest when AVX10 is enabled.

Based on a patch by Tao Su <tao1.su@linux.intel.com>

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Link: https://lore.kernel.org/r/20241031085233.425388-8-tao1.su@linux.intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: Add feature dependencies for AVX10  (Tao Su, 2 files, -0/+20)

Since the highest vector length supported by a processor implies that all lesser vector lengths are also supported, add dependencies between the supported vector lengths. If no vector length is supported, clear the AVX10 enable bit as well.

Note that the order of the AVX10-related dependencies must be kept as:

    CPUID_24_0_EBX_AVX10_128     -> CPUID_24_0_EBX_AVX10_256
    CPUID_24_0_EBX_AVX10_256     -> CPUID_24_0_EBX_AVX10_512
    CPUID_24_0_EBX_AVX10_VL_MASK -> CPUID_7_1_EDX_AVX10
    CPUID_7_1_EDX_AVX10          -> CPUID_24_0_EBX

so that the user is prevented from setting weird CPUID combinations, e.g. 256-bit and 512-bit vectors supported but 128-bit not, or no vector length supported while the AVX10 enable bit is still set.

Since AVX10_128 will be reserved as 1, adding these dependencies has the bonus that when the user sets -cpu host,-avx10-128, CPUID_7_1_EDX_AVX10 and CPUID_24_0_EBX are disabled automatically.

Tested-by: Xuelian Guo <xuelian.guo@intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Link: https://lore.kernel.org/r/20241028024512.156724-5-tao1.su@linux.intel.com
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lore.kernel.org/r/20241031085233.425388-7-tao1.su@linux.intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

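In cpu.c, such implications live in the feature_dependencies[] table, where clearing the .from feature forces the .to feature off. A sketch of the four entries described above, using the bit names from the commit message (the actual table entries may differ in detail):

    static FeatureDep feature_dependencies[] = {
        /* ... existing entries ... */
        { .from = { FEAT_24_0_EBX, CPUID_24_0_EBX_AVX10_128 },
          .to   = { FEAT_24_0_EBX, CPUID_24_0_EBX_AVX10_256 } },
        { .from = { FEAT_24_0_EBX, CPUID_24_0_EBX_AVX10_256 },
          .to   = { FEAT_24_0_EBX, CPUID_24_0_EBX_AVX10_512 } },
        { .from = { FEAT_24_0_EBX, CPUID_24_0_EBX_AVX10_VL_MASK },
          .to   = { FEAT_7_1_EDX,  CPUID_7_1_EDX_AVX10 } },
        { .from = { FEAT_7_1_EDX,  CPUID_7_1_EDX_AVX10 },
          .to   = { FEAT_24_0_EBX, ~0ull } },   /* no AVX10: clear whole leaf */
    };
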
2024-10-31  target/i386: add CPUID.24 features for AVX10  (Tao Su, 2 files, -0/+23)

Introduce features for the supported vector bit lengths.

Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Link: https://lore.kernel.org/r/20241028024512.156724-3-tao1.su@linux.intel.com
Link: https://lore.kernel.org/r/20241028024512.156724-4-tao1.su@linux.intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
Link: https://lore.kernel.org/r/20241031085233.425388-6-tao1.su@linux.intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: add AVX10 feature and AVX10 version property  (Tao Su, 3 files, -8/+63)

When the AVX10 enable bit is set, the 0x24 leaf is present as the "AVX10 Converged Vector ISA leaf", containing fields for the version number and the supported vector bit lengths. Introduce an avx10-version property so that the AVX10 version can be controlled by the user and by the CPU model. Since, per the spec, the AVX10 version can never be 0, the default value of avx10-version is set to 0 to detect whether it was specified by the user; the effective default then comes from the device model or, for the max model, from KVM's reported value.

Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Link: https://lore.kernel.org/r/20241028024512.156724-3-tao1.su@linux.intel.com
Link: https://lore.kernel.org/r/20241028024512.156724-4-tao1.su@linux.intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
Link: https://lore.kernel.org/r/20241031085233.425388-5-tao1.su@linux.intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: return bool from x86_cpu_filter_features  (Paolo Bonzini, 1 file, -9/+11)

Prepare for filtering non-boolean features such as the AVX10 version.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Link: https://lore.kernel.org/r/20241031085233.425388-4-tao1.su@linux.intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: do not rely on ExtSaveArea for accelerator-supported XCR0 bits  (Paolo Bonzini, 2 files, -6/+31)

Right now, QEMU is using the "feature" and "bits" fields of ExtSaveArea to query the accelerator for the support status of extended save areas. This is a problem for AVX10, which attaches two feature bits (AVX512F and AVX10) to the same extended save states.

To keep the AVX10 hacks to a minimum, limit the usage of esa->features and esa->bits. Instead, just query the accelerator for the 0xD leaf. Do it in common code, and clear esa->size if an extended save state is unsupported.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20241031085233.425388-3-tao1.su@linux.intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: cpu: set correct supported XCR0 features for TCG  (Paolo Bonzini, 1 file, -2/+4)

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20241031085233.425388-2-tao1.su@linux.intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: use + to put flags together  (Paolo Bonzini, 1 file, -12/+12)

This gives greater opportunity for reassociation on x86 targets, since addition can use the LEA instruction.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

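The trick relies on the flag bits being disjoint, so that '+' and '|' compute the same value while '+' can be folded into LEA. A standalone sketch of the idea (illustrative, not QEMU's actual code):

    #include <stdint.h>

    /* cf/pf/af/zf are each 0 or 1; the shifted values occupy disjoint
     * EFLAGS bit positions (CF=0, PF=2, AF=4, ZF=6), so the additions
     * can never carry and are equivalent to OR -- but an x86 host can
     * compute a + (b << 2) with a single LEA. */
    static inline uint32_t pack_flags(uint32_t cf, uint32_t pf,
                                      uint32_t af, uint32_t zf)
    {
        return cf + (pf << 2) + (af << 4) + (zf << 6);
    }
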
2024-10-31  target/i386: use higher-precision arithmetic to compute CF  (Paolo Bonzini, 1 file, -0/+37)

If the operands of the arithmetic instruction fit within a half-register, it's easiest to use a comparison instruction to compute the carry.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

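The idea as a standalone sketch (an assumed illustration, not QEMU's exact helper): widen the operands, and the carry-out of the narrow operation becomes a plain comparison on the wide result.

    #include <stdbool.h>
    #include <stdint.h>

    /* CF for an 8-bit ADD, computed in 32-bit arithmetic: the wide sum
     * cannot overflow, so the carry is simply "did the result exceed
     * the 8-bit range", i.e. one comparison instruction on most hosts. */
    static inline bool add8_cf(uint8_t a, uint8_t b)
    {
        uint32_t sum = (uint32_t)a + b;
        return sum > 0xff;
    }
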
2024-10-31  target/i386: use compiler builtin to compute PF  (Paolo Bonzini, 4 files, -48/+17)

This removes the 256-byte parity table from the executable.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

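x86's PF is set when the low byte of the result contains an even number of 1 bits, which maps directly onto the parity builtin. A minimal sketch (CC_P is EFLAGS bit 2; the exact QEMU helper differs):

    #include <stdint.h>

    #define CC_P 0x0004   /* PF bit position in EFLAGS */

    /* __builtin_parity() returns 1 for an odd number of set bits, so
     * PF (even parity of the low byte) is its complement. */
    static inline uint32_t compute_pf(uint8_t result)
    {
        return __builtin_parity(result) ? 0 : CC_P;
    }
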
2024-10-31  target/i386: make flag variables unsigned  (Paolo Bonzini, 1 file, -23/+23)

This makes it easier for the compiler to understand which bits are set, and it also removes the "cltq" instructions that canonicalize the output value as 32-bit signed.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: add a note about gen_jcc1  (Paolo Bonzini, 1 file, -0/+4)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: add a few more trivial CCPrepare cases  (Paolo Bonzini, 1 file, -0/+3)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: optimize TEST+Jxx sequences  (Paolo Bonzini, 1 file, -0/+22)

This is mostly used for TEST+JG and TEST+JLE, but it is easy to also cover JBE/JA and JL/JGE; it shaves about 0.5% of TCG ops.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: optimize computation of ZF from CC_OP_DYNAMIC  (Paolo Bonzini, 3 files, -3/+21)

Most uses of CC_OP_DYNAMIC are for CMP/JB/JE or similar sequences. We can optimize many of them to avoid computation of the flags. This eliminates both the TCG ops that set up the new cc_op and the helper instructions, because evaluating just ZF is much cheaper.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: Wrap cc_op_live with a validity check  (Richard Henderson, 4 files, -7/+21)

Assert that the op is known and that cc_op_live_ is populated.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: Introduce cc_op_size  (Richard Henderson, 3 files, -13/+26)

Replace arithmetic on cc_op with a helper function. Assert that the op has a size and that it is valid for the configuration.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240701025115.1265117-6-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: Rearrange CCOp  (Richard Henderson, 1 file, -6/+8)

Give the first few enumerators explicit integer constants, and align the BWLQ enumerators. This will be used to simplify ((op - CC_OP_*B) & 3).

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240701025115.1265117-4-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

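A sketch of the layout this enables (enumerator names are QEMU's; the explicit values here are illustrative, the real enum lives in target/i386/cpu.h):

    typedef enum CCOp {
        CC_OP_EFLAGS = 0,
        /* ... non-sized entries fill the low values ... */
        CC_OP_ADDB = 8,  CC_OP_ADDW, CC_OP_ADDL, CC_OP_ADDQ,
        CC_OP_ADCB = 12, CC_OP_ADCW, CC_OP_ADCL, CC_OP_ADCQ,
        /* ... every sized B/W/L/Q group starts at a multiple of 4 ... */
    } CCOp;

    /* Because each group is 4-aligned relative to CC_OP_ADDB, the
     * operand size of any sized op falls out of a single mask:
     *     size = (op - CC_OP_ADDB) & 3;    0=B, 1=W, 2=L, 3=Q
     */
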
2024-10-31  target/i386: remove CC_OP_CLR  (Paolo Bonzini, 5 files, -26/+4)

Just use CC_OP_EFLAGS; it is not that likely that the flags computed by CC_OP_CLR survive past the end of the basic block, in which case there is no need to spill cc_op_src. cc_op_src now does need spilling if the XOR is followed by a memory operation, but this only costs 0.2% extra TCG ops. They will be recouped by simplifications in how QEMU evaluates ZF at runtime, which are even greater with this change.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: Tidy cc_op_str usage  (Richard Henderson, 1 file, -6/+11)

Make const. Use the read-only strings directly; do not copy them into an on-stack buffer with snprintf. Allow for holes in the cc_op_str array, now present with CC_OP_POPCNT.

Fixes: 460231ad369 ("target/i386: give CC_OP_POPCNT low bits corresponding to MO_TL")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240701025115.1265117-2-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: use tcg_gen_ext_tl when applicable  (Paolo Bonzini, 1 file, -8/+8)

Prefer it to gen_ext_tl in the common case where the destination is known.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386/hvf: fix handling of XSAVE-related CPUID bits  (Paolo Bonzini, 1 file, -21/+31)

The call to xgetbv() is passing the ecx value for cpuid function 0xD, index 0. The xgetbv call thus returns false (OSXSAVE is bit 27, which is well out of the range of CPUID[0xD,0].ECX) and eax is not modified.

While fixing it, cache the whole computation of supported XCR0 bits, since it will be used for more than just CPUID leaf 0xD.

Furthermore, unsupported subleaves of CPUID 0xD (including all those corresponding to zero bits in the host's XCR0) must be hidden; if OSXSAVE is not set at all, the whole of CPUID leaf 0xD plus the XSAVE bit must be hidden.

Finally, unconditionally drop XSTATE_BNDREGS_MASK and XSTATE_BNDCSR_MASK; real hardware will only show them if the MPX bit is set in CPUID. This is never the case for hvf_get_supported_cpuid(), because QEMU's Hypervisor.framework support does not handle the VMX fields related to MPX (even in the unlikely possibility that the host has MPX enabled). So hide those bits in the new cache_host_xcr0().

Cc: Phil Dennis-Jordan <lists@philjordan.eu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

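For reference, the correct gate for XGETBV is CPUID.01H:ECX bit 27 (OSXSAVE); only once that bit is confirmed can XCR0 be read with ECX=0. A standalone host-side sketch of the corrected check (not the actual hvf helper):

    #include <stdbool.h>
    #include <stdint.h>
    #include <cpuid.h>

    static bool read_host_xcr0(uint64_t *xcr0)
    {
        uint32_t eax, ebx, ecx, edx;

        __cpuid(1, eax, ebx, ecx, edx);
        if (!(ecx & (1u << 27))) {      /* OSXSAVE: is XGETBV usable? */
            return false;
        }

        uint32_t lo, hi;
        __asm__ volatile("xgetbv" : "=a"(lo), "=d"(hi) : "c"(0));
        *xcr0 = ((uint64_t)hi << 32) | lo;
        return true;
    }
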
2024-10-31  target/i386: Expose new feature bits in CPUID 8000_0021_EAX/EBX  (Babu Moger, 2 files, -2/+18)

Newer AMD CPUs support the ERAPS (Enhanced Return Address Prediction Security) feature, which enables the auto-clear of RSB entries on a TLB flush, context switches and VMEXITs. The number of default RSB entries is reflected in RapSize.

Add the feature bit and feature word to support these features.

CPUID_Fn80000021_EAX:
    Bit 24:     ERAPS. Indicates support for enhanced return address predictor security.

CPUID_Fn80000021_EBX:
    Bits 31-24: Reserved.
    Bits 23-16: RapSize. Return Address Predictor size. RapSize x 8 is the minimum number of CALL instructions software needs to execute to flush the RAP.
    Bits 15-0:  MicrocodePatchSize. Read-only. Reports the size of the microcode patch in 16-byte multiples. If 0, the size of the patch is at most 5568 (15C0h) bytes.

Link: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/57238.zip
Signed-off-by: Babu Moger <babu.moger@amd.com>
Link: https://lore.kernel.org/r/7c62371fe60af1e9bbd853f5f8e949bf2d908bd0.1729807947.git.babu.moger@amd.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

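A sketch of how the new EBX word might be consumed, with the field layout taken from the table above (macro and helper names here are assumptions, not necessarily those added by the patch):

    #include <stdint.h>

    #define CPUID_8000_0021_EAX_ERAPS   (1u << 24)

    /* RapSize is CPUID_Fn80000021_EBX[23:16]; RapSize x 8 CALLs are the
     * minimum needed to flush the return address predictor, per the
     * description above. */
    static inline unsigned raps_min_flush_calls(uint32_t ebx)
    {
        return ((ebx >> 16) & 0xff) * 8;
    }
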
2024-10-31  target/i386: Expose bits related to SRSO vulnerability  (Babu Moger, 2 files, -4/+12)

Add the following bits related to Speculative Return Stack Overflow (SRSO). Guests can make use of these bits if supported. The bits are reported via CPUID Fn8000_0021_EAX:

    Bit 27: SBPB. Indicates support for the Selective Branch Predictor Barrier.
    Bit 28: IBPB_BRTYPE. MSR_PRED_CMD[IBPB] flushes all branch type predictions.
    Bit 29: SRSO_NO. Not vulnerable to SRSO.
    Bit 30: SRSO_USER_KERNEL_NO. Not vulnerable to SRSO at the user-kernel boundary.

Link: https://www.amd.com/content/dam/amd/en/documents/corporate/cr/speculative-return-stack-overflow-whitepaper.pdf
Link: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/57238.zip
Signed-off-by: Babu Moger <babu.moger@amd.com>
Link: https://lore.kernel.org/r/dadbd70c38f4e165418d193918a3747bd715c5f4.1729807947.git.babu.moger@amd.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: Add PerfMonV2 feature bit  (Sandipan Das, 2 files, -0/+30)

CPUID leaf 0x80000022, i.e. ExtPerfMonAndDbg, advertises new performance monitoring features for AMD processors. Bit 0 of EAX indicates support for Performance Monitoring Version 2 (PerfMonV2) features. If found to be set during PMU initialization, the EBX bits can be used to determine the number of available counters for different PMUs. It also denotes the availability of global control and status registers.

Add the required CPUID feature word and feature bit to allow guests to make use of the PerfMonV2 features.

Signed-off-by: Sandipan Das <sandipan.das@amd.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/a96f00ee2637674c63c61e9fc4dee343ea818053.1729807947.git.babu.moger@amd.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  target/i386: Fix minor typo in NO_NESTED_DATA_BP feature bit  (Babu Moger, 2 files, -2/+2)

Rename CPUID_8000_0021_EAX_No_NESTED_DATA_BP to CPUID_8000_0021_EAX_NO_NESTED_DATA_BP. No functional change intended.

Signed-off-by: Babu Moger <babu.moger@amd.com>
Link: https://lore.kernel.org/r/a6749acd125670d3930f4ca31736a91b1d965f2f.1729807947.git.babu.moger@amd.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-31  i386/cpu: Drop the check of phys_bits in host_cpu_realizefn()  (Xiaoyao Li, 1 file, -13/+3)

The check that cpu->phys_bits is in the range [32, TARGET_PHYS_ADDR_SPACE_BITS] in host_cpu_realizefn() duplicates the check in x86_cpu_realizefn(). Since the check in x86_cpu_realizefn() runs later and covers all the x86 cases, remove the one in host_cpu_realizefn().

Opportunistically adjust cpu->phys_bits directly in host_cpu_adjust_phys_bits(), which matches the function name better.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20240929085747.2023198-1-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-30  target/i386: fix CPUID check for LFENCE and SFENCE  (Paolo Bonzini, 1 file, -2/+2)

LFENCE and SFENCE were introduced with the original SSE instruction set; marking them incorrectly as cpuid(SSE2) causes failures for CPU models that lack SSE2, for example pentium3.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-22  target/i386: Remove ra parameter from ptw_translate  (Richard Henderson, 1 file, -9/+9)

This argument is no longer used.

Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20241013184733.1423747-4-richard.henderson@linaro.org>

2024-10-22  target/i386: Use probe_access_full_mmu in ptw_translate  (Richard Henderson, 1 file, -6/+4)

The probe_access_full_mmu function was designed for this purpose, and does not report the memory operation event to plugins.

Cc: qemu-stable@nongnu.org
Fixes: 6d03226b422 ("plugins: force slow path when plugins instrument memory ops")
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20241013184733.1423747-3-richard.henderson@linaro.org>

2024-10-22  target/i386: Walk NPT in guest real mode  (Alexander Graf, 1 file, -3/+14)

When translating virtual to physical addresses for a guest CPU that supports nested paging (NPT), we need to perform every page table walk access indirectly through the NPT, which we correctly do. However, we treat real mode (no page table walk) specially: in that case, we currently just skip any walks and translate VA -> PA directly. With NPT enabled, we also need to perform an NPT walk, doing GVA -> GPA -> HPA, which we have failed to do so far.

The net result is that TCG VMs with NPT enabled that execute real mode code (like SeaBIOS) end up with GPA==HPA mappings, which means the guest accesses host code and data. This typically shows up as a failure to boot guests.

This patch changes the page walk logic for NPT-enabled guests so that we always perform a GVA -> GPA translation and then skip any logic that requires an actual PTE. That way, all remaining logic to walk the NPT stays, and we successfully walk the NPT in real mode.

Cc: qemu-stable@nongnu.org
Fixes: fe441054bb3f0 ("target-i386: Add NPT support")
Signed-off-by: Alexander Graf <graf@amazon.com>
Reported-by: Eduard Vlad <evlad@amazon.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240921085712.28902-1-graf@amazon.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

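The shape of the fix, as illustrative pseudocode (the walker helpers here are hypothetical stand-ins, not QEMU's actual functions): real mode only short-circuits the guest-side walk; the nested walk still runs whenever NPT is active.

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t hwaddr;

    /* Hypothetical helpers standing in for the real page-table walkers. */
    hwaddr walk_guest_page_tables(uint64_t va);
    hwaddr walk_nested_page_tables(hwaddr gpa);

    static hwaddr mmu_translate(uint64_t va, bool guest_paging, bool npt)
    {
        /* GVA -> GPA: in real mode this is the identity mapping. */
        hwaddr gpa = guest_paging ? walk_guest_page_tables(va) : (hwaddr)va;

        /* GPA -> HPA: must run even for real-mode guests; skipping it
         * is exactly the GPA==HPA bug described above. */
        return npt ? walk_nested_page_tables(gpa) : gpa;
    }
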
2024-10-18  Merge tag 'pull-error-2024-10-18' of https://repo.or.cz/qemu/armbru into staging  (Peter Maydell, 1 file, -32/+27)

Error reporting patches for 2024-10-18

# -----BEGIN PGP SIGNATURE-----
#
# iQJGBAABCAAwFiEENUvIs9frKmtoZ05fOHC0AOuRhlMFAmcSXQQSHGFybWJydUBy
# ZWRoYXQuY29tAAoJEDhwtADrkYZTRKcP/R/nmE22MJBDT8LLZEaQpvkqEURpHFVY
# uHcLPBfezWy2A9qgWiPMKEs9Q7L3qpJq2FKCPFx7VyzctMcYt2W70AzVpaBOBkTN
# g5JAyFaJ3cGj6VT/HDZrBeIpySHZI1ynZyRqLvay5aV6l2dIzMWAcpFI4w6He0yJ
# 9CVV5z8K3zh7a7HjkBeWeKn75W2v6cE1PnRlPIsA4Q05LGVU6iHOhZ9LCJYpgIlL
# StJh1zlscSItMbHnfdx0iEiEuoP/nqwoFbA+XpDRzZOLX6+dm2oVwFoApv95bE+/
# CZ8QIy3zda6+V1AGhTfBqDV/NfZZCqzi58YPOo+ny4+sNKXsU7/z2OQzGNVd7NqF
# fpflJAPOe+1tuAd/c40VrJn/DN+TgYVV199kMNfbBojMNaoJh262uvQ9L0NuLcW+
# v0cKYRJsTIIHOFj7NwHR8ALY6ZlE3pdLvz9AivFuLLtK+RtfKw2YQvTDTmqXgRsG
# J6glqTeN+2M9cYb7/r6Kc/P9TGEaSEoCwmAadfmfwLSW/m1UkrqNzn+iC4m1iLe1
# bq+N1iW5T4nhibw8dFCvD4AwFSP9VQNAy5AlKW78Y+K/xAC2781A8PHV9QAIM1/t
# Kz6FRts0Jg6uyB0I7AAZ9k18i1oiEqoz3SjGWpQlTiI7VCMCpgHX6nvwWFPf3Zxa
# Rn0SUg10eUW9
# =sR8Q
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 18 Oct 2024 14:05:08 BST
# gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg: issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

* tag 'pull-error-2024-10-18' of https://repo.or.cz/qemu/armbru:
  qerror: QERR_PROPERTY_VALUE_OUT_OF_RANGE is no longer used, drop
  hw/intc/openpic: Improve errors for out of bounds property values
  target/i386/cpu: Improve errors for out of bounds property values
  target/i386/cpu: Avoid mixing signed and unsigned in property setters
  block: Adjust check_block_size() signature
  block: Improve errors about block sizes
  error: Drop superfluous #include "qapi/qmp/qerror.h"
  qga: Improve error for guest-set-user-password parameter @crypted
  qga/qapi-schema: Drop obsolete note on "unsupported" errors

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-10-18  target/i386/cpu: Improve errors for out of bounds property values  (Markus Armbruster, 1 file, -11/+9)

The error message for a "stepping" value that is out of bounds is a bit odd:

    $ qemu-system-x86_64 -cpu qemu64,stepping=16
    qemu-system-x86_64: can't apply global qemu64-x86_64-cpu.stepping=16: Property .stepping doesn't take value 16 (minimum: 0, maximum: 15)

The "can't apply global" part is an unfortunate artifact of -cpu's implementation. Left for another day.

The remainder feels overly verbose. Change it to:

    qemu64-x86_64-cpu: can't apply global qemu64-x86_64-cpu.stepping=16: parameter 'stepping' can be at most 15

Likewise for "family", "model", and "tsc-frequency".

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-ID: <20241010150144.986655-6-armbru@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>

2024-10-18  target/i386/cpu: Avoid mixing signed and unsigned in property setters  (Markus Armbruster, 1 file, -24/+21)

Properties "family", "model", and "stepping" are visited as signed integers. They are backed by bits in CPUX86State member @cpuid_version. The code to extract and insert these bits mixes signed and unsigned. Not actually wrong, but avoiding such mixing is good practice.

Visit them as unsigned integers instead. This adds a few mildly ugly casts in arguments of error_setg(). The next commit will get rid of them again.

Property "tsc-frequency" is also visited as a signed integer. The value ultimately flows into the kernel, where it is 31 bits unsigned. The QEMU code freely mixes int, uint32_t, int64_t. I elect not to attempt draining this swamp today.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-ID: <20241010150144.986655-5-armbru@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>

2024-10-17  target/i386: Use only 16 and 32-bit operands for IN/OUT  (Richard Henderson, 1 file, -4/+4)

The REX.W prefix is ignored for these instructions. Mirror the solution already used for INS/OUTS: X86_SIZE_z.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2581
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Cc: qemu-stable@nongnu.org
Link: https://lore.kernel.org/r/20241015004144.2111817-1-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-17  target/i386/tcg: Use DPL-level accesses for interrupts and call gates  (Paolo Bonzini, 1 file, -6/+11)

Stack accesses should be explicit and use the privilege level of the target stack. This ensures that SMAP is not applied when the target stack is in ring 3.

This fixes a bug wherein i386/tcg assumed that an interrupt return, or a far call using the CALL or JMP instruction, was always going from kernel or user mode to kernel mode when using a call gate. This assumption is violated if the call gate has a DPL that is greater than 0.

Analyzed-by: Robert R. Henry <rrh.henry@gmail.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/249
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-17  target/i386: assert that cc_op* and pc_save are preserved  (Paolo Bonzini, 1 file, -9/+3)

Now that all decoding is done before any code generation, there is no need anymore to save and restore cc_op* and pc_save; but, for the time being, assert that this is indeed the case.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-17  target/i386: list instructions still in translate.c  (Paolo Bonzini, 1 file, -0/+31)

Group them so that it is easier to figure out which two-byte opcodes to tackle together.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-17  target/i386: do not check PREFIX_LOCK in old-style decoder  (Paolo Bonzini, 1 file, -18/+8)

It is already checked before getting there.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-17  target/i386: convert CMPXCHG8B/CMPXCHG16B to new decoder  (Paolo Bonzini, 4 files, -129/+124)

The gen_cmpxchg8b and gen_cmpxchg16b functions even have the correct prototype already; the only thing that needs to be done is removing the gen_lea_modrm() call.

This moves the last LOCK-enabled instructions to the new decoder. It is now possible to assume that gen_multi0F is called only after checking that PREFIX_LOCK was not specified.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-17  target/i386: decode address before going back to translate.c  (Paolo Bonzini, 4 files, -118/+103)

There are now relatively few unconverted opcodes in translate.c (13 of them, including 8 for x87), and all of them have the same format, with a mod/rm byte and no immediate. A good next step is to remove the early bail out to disas_insn_x87/disas_insn_old, instead giving these legacy translator functions the same prototype as the other gen_* functions.

To do this, the X86DecodeInsn can be passed down to the places that used to fetch address bytes from the instruction stream. To make sure that everything is done cleanly, the CPUX86State* argument is removed.

As part of the unification, the gen_lea_modrm() name is now free, so rename gen_load_ea() to gen_lea_modrm(). This is just as good a name, and it makes the changes to translate.c easier to review.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-17  target/i386: convert bit test instructions to new decoder  (Paolo Bonzini, 4 files, -158/+183)

Code generation was rewritten; it reuses the same trick of using the CC_OP_SAR values for cc_op, but it tries to use CC_OP_ADCX or CC_OP_ADCOX instead of CC_OP_EFLAGS. This is a tiny bit more efficient in the common case where only CF is checked in the resulting flags.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-17  target/i386: Make sure SynIC state is really updated before KVM_RUN  (Vitaly Kuznetsov, 1 file, -0/+1)

The 'hyperv_synic' test from KVM unittests was observed to be flaky on certain hardware (it hangs sometimes). Debugging shows that the problem happens in hyperv_sint_route_new() when the test tries to set up a new SynIC route. The function bails out on:

    if (!synic->sctl_enabled) {
        goto cleanup;
    }

but the test writes to HV_X64_MSR_SCONTROL just before it starts establishing SINT routes. Further investigation shows that synic_update() (called from async_synic_update()) happens after the SINT setup attempt and not before.

Apparently, the comment before async_safe_run_on_cpu() in kvm_hv_handle_exit() does not correctly describe the guarantees async_safe_run_on_cpu() gives. In particular, async work added to a CPU is actually processed from qemu_wait_io_event(), which is not always called before KVM_RUN; i.e. kvm_cpu_exec() checks whether an exit request is pending for a CPU and, if not, keeps running the vCPU until it meets an exit it can't handle internally. Hyper-V specific MSR writes do not automatically trigger an exit.

Fix the issue by simply raising an exit request for the vCPU where the SynIC update was queued. This is not a performance-critical path, as SynIC state does not get updated very often (and async_safe_run_on_cpu() is a big hammer anyways).

Reported-by: Jan Richter <jarichte@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20240917160051.2637594-4-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-17  target/i386: Exclude 'hv-syndbg' from 'hv-passthrough'  (Vitaly Kuznetsov, 1 file, -2/+5)

Windows with the Hyper-V role enabled doesn't boot with 'hv-passthrough' when no debugger is configured. This significantly limits the usefulness of the feature, as there's no support for subtracting Hyper-V features from CPU flags at this moment (e.g. "-cpu host,hv-passthrough,-hv-syndbg" does not work). While this is also theoretically fixable, 'hv-syndbg' is likely very special and unneeded in the default set. Genuine Hyper-V doesn't seem to enable it either.

Introduce a 'skip_passthrough' flag in 'kvm_hyperv_properties' and use it as a one-off to skip 'hv-syndbg' when enabling features in 'hv-passthrough' mode. Note that "-cpu host,hv-passthrough,hv-syndbg" can still be used if needed.

As both 'hv-passthrough' and 'hv-syndbg' are debug features, the change should not have any effect on production environments.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20240917160051.2637594-3-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-17  target/i386: Fix conditional CONFIG_SYNDBG enablement  (Vitaly Kuznetsov, 2 files, -4/+9)

Putting the HYPERV_FEAT_SYNDBG entry under "#ifdef CONFIG_SYNDBG" in the 'kvm_hyperv_properties' array is wrong: as HYPERV_FEAT_SYNDBG is not the highest feature number, the result is an empty (zeroed) entry in the array (and not a skipped entry!). hyperv_feature_supported() is designed to check that all CPUID bits are set, but for a zeroed feature in 'kvm_hyperv_properties' it returns 'true', so QEMU considers HYPERV_FEAT_SYNDBG as always supported, regardless of whether the KVM host actually supports it.

To fix the issue, leave HYPERV_FEAT_SYNDBG's definition in the 'kvm_hyperv_properties' array; there's nothing wrong in having it defined even when 'CONFIG_SYNDBG' is not set. Instead, put the "hv-syndbg" CPU property under '#ifdef CONFIG_SYNDBG' to alter the existing behavior when the flag is silently skipped in !CONFIG_SYNDBG builds.

Leave an 'assert' sentinel in hyperv_feature_supported() making sure there are no 'holes' or improperly defined features in 'kvm_hyperv_properties'.

Fixes: d8701185f40c ("hw: hyperv: Initial commit for Synthetic Debugging device")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20240917160051.2637594-2-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

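The pitfall generalizes beyond this bug; a minimal standalone illustration (not QEMU code) of why an #ifdef around a non-final designated initializer leaves a zeroed hole rather than a shorter array:

    #include <stdio.h>

    enum { FEAT_A, FEAT_SYNDBG, FEAT_B, FEAT_MAX };

    struct prop { const char *name; unsigned cpuid_bits; };

    static const struct prop props[FEAT_MAX] = {
        [FEAT_A]      = { "feat-a", 0x1 },
    #ifdef CONFIG_SYNDBG
        [FEAT_SYNDBG] = { "hv-syndbg", 0x2 },
    #endif
        [FEAT_B]      = { "feat-b", 0x4 },
    };

    int main(void)
    {
        /* Without CONFIG_SYNDBG, props[FEAT_SYNDBG] is {NULL, 0}: any
         * check of the form (host & cpuid_bits) == cpuid_bits is
         * vacuously true for it, which is exactly the bug above. */
        printf("%s\n", props[FEAT_SYNDBG].name ? props[FEAT_SYNDBG].name
                                               : "(zeroed hole!)");
        return 0;
    }
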
2024-10-17  target/i386: Add support to save/load HWCR MSR  (Gao Shiyuan, 3 files, -0/+37)

KVM commit 191c8137a939 ("x86/kvm: Implement HWCR support") introduced support for emulating the HWCR MSR.

Add support for QEMU to save/load this MSR for migration purposes.

Signed-off-by: Gao Shiyuan <gaoshiyuan@baidu.com>
Signed-off-by: Wang Liang <wangliang44@baidu.com>
Link: https://lore.kernel.org/r/20241009095109.66843-1-gaoshiyuan@baidu.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

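A toy model of the usual save/load pair for one more MSR: on save the value is appended to the buffer handed to KVM_SET_MSRS, on load it is picked out of the KVM_GET_MSRS result. The index 0xc0010015 is HWCR's architectural MSR number (Linux's MSR_K7_HWCR); the surrounding code and the msr_hwcr field name are assumptions, not the exact QEMU patch.

    #include <stdint.h>

    #define MSR_K7_HWCR 0xc0010015u

    struct msr_entry { uint32_t index; uint64_t data; };

    /* Append HWCR to the outgoing MSR list; returns the new count. */
    static int save_hwcr(struct msr_entry *buf, int n, uint64_t msr_hwcr)
    {
        buf[n] = (struct msr_entry){ MSR_K7_HWCR, msr_hwcr };
        return n + 1;
    }

    /* Scan the returned MSR list and restore HWCR if present. */
    static void load_hwcr(const struct msr_entry *buf, int n,
                          uint64_t *msr_hwcr)
    {
        for (int i = 0; i < n; i++) {
            if (buf[i].index == MSR_K7_HWCR) {
                *msr_hwcr = buf[i].data;
            }
        }
    }
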
2024-10-17  target/i386: Add more features enumerated by CPUID.7.2.EDX  (Chao Gao, 1 file, -2/+2)

The following 5 bits in CPUID.7.2.EDX are supported by KVM; add support for them in QEMU. Each of them indicates that certain bits of IA32_SPEC_CTRL are supported. Those bits can control CPU speculation behavior, which can be used to defend against side-channel attacks.

    bit 0: intel-psfd. If 1, indicates bit 7 of the IA32_SPEC_CTRL MSR is supported. Bit 7 of this MSR disables the Fast Store Forwarding Predictor without disabling Speculative Store Bypass.
    bit 1: ipred-ctrl. If 1, indicates bits 3 and 4 of the IA32_SPEC_CTRL MSR are supported. Bit 3 of this MSR enables IPRED_DIS control for CPL3; bit 4 enables IPRED_DIS control for CPL0/1/2.
    bit 2: rrsba-ctrl. If 1, indicates bits 5 and 6 of the IA32_SPEC_CTRL MSR are supported. Bit 5 of this MSR disables RRSBA behavior for CPL3; bit 6 disables RRSBA behavior for CPL0/1/2.
    bit 3: ddpd-u. If 1, indicates bit 8 of the IA32_SPEC_CTRL MSR is supported. Bit 8 of this MSR disables the Data Dependent Prefetcher.
    bit 4: bhi-ctrl. If 1, indicates bit 10 of the IA32_SPEC_CTRL MSR is supported. Bit 10 of this MSR enables BHI_DIS_S behavior.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Link: https://lore.kernel.org/r/20240919051011.118309-1-chao.gao@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

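The bit positions above translate directly into feature-word defines; a sketch following the naming convention of the existing CPUID_7_* macros (the exact names added by the patch may differ):

    /* CPUID.(EAX=7,ECX=2):EDX */
    #define CPUID_7_2_EDX_PSFD        (1u << 0)  /* intel-psfd */
    #define CPUID_7_2_EDX_IPRED_CTRL  (1u << 1)
    #define CPUID_7_2_EDX_RRSBA_CTRL  (1u << 2)
    #define CPUID_7_2_EDX_DDPD_U      (1u << 3)
    #define CPUID_7_2_EDX_BHI_CTRL    (1u << 4)
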
2024-10-17  target/i386: Make invtsc migratable when user sets tsc-khz explicitly  (Xiaoyao Li, 1 file, -2/+9)

When the user sets tsc-frequency explicitly, the invtsc feature is actually migratable, because the TSC frequency is supposed to stay fixed during migration. See commit d99569d9d856 ("kvm: Allow invtsc migration if tsc-khz is set explicitly") for reference.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20240814075431.339209-10-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-10-17  target/i386: Construct CPUID 2 as stateful iff times > 1  (Xiaoyao Li, 1 file, -2/+4)

When times == 1, the CPUID leaf 2 is not stateful.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20240814075431.339209-6-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>