path: root/target/i386
Age | Commit message | Author | Files/Lines
2022-09-06 | target/i386: Make translator stop before the end of a page | Ilya Leoshkevich | 1 file, -24/+38

Right now the translator stops right *after* the end of a page, which breaks reporting of fault locations when the last instruction of a multi-insn translation block crosses a page boundary.

An implementation like the ones arm and s390x have would require an i386 length disassembler, which is burdensome to maintain. Another alternative would be to single-step at the end of a guest page, but this may come with a performance impact.

Fix by snapshotting disassembly state and restoring it after we figure out we crossed a page boundary. This includes rolling back cc_op updates and emitted ops.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1143
Message-Id: <20220817150506.592862-4-iii@linux.ibm.com>
[rth: Simplify end-of-insn cross-page checks.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

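For illustration, a minimal sketch of the snapshot-and-roll-back idea; the struct and field names here are hypothetical, not the actual QEMU code:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical state captured before decoding each instruction. */
    typedef struct DisasSnapshot {
        uint64_t pc;       /* guest PC at the start of the insn */
        int cc_op;         /* lazy condition-code state at that point */
        size_t num_ops;    /* how many TCG ops had been emitted so far */
    } DisasSnapshot;

    /*
     * Decode loop, in outline: take a snapshot, decode one insn, and if
     * the insn turned out to cross into the next page, restore the
     * snapshot (dropping the ops emitted for it and the cc_op update)
     * and end the translation block at the previous insn instead.
     */
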
2022-09-06 | accel/tcg: Add pc and host_pc params to gen_intermediate_code | Richard Henderson | 1 file, -2/+3

Pass these along to translator_loop -- pc may be used instead of tb->pc, and host_pc is currently unused. Adjust all targets at one time.

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2022-09-06 | accel/tcg: Remove translator_ldsw | Richard Henderson | 1 file, -1/+1

The only user can easily use translator_lduw and adjust the type to signed during the return.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

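A sketch of what the adjusted call site can look like, assuming the translator_lduw() signature from this era of the tree (the wrapper name load_sw is illustrative):

    /* Signed 16-bit load expressed via the unsigned helper. */
    static int16_t load_sw(CPUArchState *env, DisasContextBase *base,
                           uint64_t pc)
    {
        return (int16_t)translator_lduw(env, base, pc);
    }
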
2022-09-01 | target/i386: AVX+AES helpers prep | Paul Brook | 1 file, -19/+22

Make the AES vector helpers AVX-ready.

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-22-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: AVX pclmulqdq prep | Paul Brook | 1 file, -7/+22

Make the pclmulqdq helper AVX-ready.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-21-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: Rewrite blendv helpers | Paul Brook | 1 file, -62/+24

Rewrite the blendv helpers so that they can easily be extended to support the AVX encodings, which make all 4 arguments explicit.

No functional changes to the existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-20-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

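A rough sketch of the 4-operand shape, using the Reg type and B() element macro from ops_sse.h; this is simplified and ignores the SUFFIX/vector-length macro machinery the real helpers are generated with:

    /* Byte-wise blend: pick from s where the mask's top bit is set, else v. */
    void helper_pblendvb(CPUX86State *env, Reg *d, Reg *v, Reg *s, Reg *m)
    {
        int i;

        for (i = 0; i < 16; i++) {
            d->B(i) = (m->B(i) & 0x80) ? s->B(i) : v->B(i);
        }
    }
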
2022-09-01 | target/i386: Misc AVX helper prep | Paul Brook | 1 file, -49/+94

Fix up various vector helpers that either trivially extend to 256 bit, or don't have 256 bit variants.

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-19-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: Destructive FP helpers for AVX | Paul Brook | 1 file, -58/+43

Prepare the horizontal arithmetic vector helpers for AVX.

These currently use a dummy Reg typed variable to store the result then assign the whole register. This will cause 128 bit operations to corrupt the upper half of the register, so replace it with explicit temporaries and element assignments.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-18-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

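As a sketch of the temporaries pattern, here is a 128-bit horizontal add written without the SUFFIX glue macros (element macros as in ops_sse.h; a sketch, not the exact committed code):

    /* haddps: compute into scalars first so d == s aliasing is safe. */
    void helper_haddps(CPUX86State *env, Reg *d, Reg *s)
    {
        float32 r0 = float32_add(d->ZMM_S(0), d->ZMM_S(1), &env->sse_status);
        float32 r1 = float32_add(d->ZMM_S(2), d->ZMM_S(3), &env->sse_status);
        float32 r2 = float32_add(s->ZMM_S(0), s->ZMM_S(1), &env->sse_status);
        float32 r3 = float32_add(s->ZMM_S(2), s->ZMM_S(3), &env->sse_status);

        d->ZMM_S(0) = r0;
        d->ZMM_S(1) = r1;
        d->ZMM_S(2) = r2;
        d->ZMM_S(3) = r3;
    }
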
2022-09-01 | target/i386: Dot product AVX helper prep | Paul Brook | 1 file, -35/+45

Make the dpps and dppd helpers AVX-ready.

I can't see any obvious reason why dppd shouldn't work on 256 bit ymm registers, but both AMD and Intel agree that it's xmm only.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-17-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: reimplement AVX comparison helpers | Paul Brook | 3 files, -69/+78

AVX includes an additional set of comparison predicates, some of which our softfloat implementation does not expose as separate functions. Rewrite the helpers in terms of floatN_compare for future extensibility.

Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-24-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: Floating point arithmetic helper AVX prep | Paul Brook | 1 file, -41/+87

Prepare the "easy" floating point vector helpers for AVX.

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-16-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: Destructive vector helpers for AVX | Paul Brook | 1 file, -301/+269

These helpers need to take special care to avoid overwriting source values before the whole result has been calculated. Currently they use a dummy Reg typed variable to store the result then assign the whole register. This will cause 128 bit operations to corrupt the upper half of the register, so replace it with explicit temporaries and element assignments.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-14-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: Misc integer AVX helper prep | Paul Brook | 1 file, -86/+82

More preparatory work for AVX support in various integer vector helpers.

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-13-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: Rewrite simple integer vector helpers | Paul Brook | 1 file, -54/+27

Rewrite the "simple" vector integer helpers in preparation for AVX support. While the current code is able to use the same prototype for unary (a = F(b)) and binary (a = F(b, c)) operations, future changes will cause them to diverge.

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-12-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: Rewrite vector shift helper | Paul Brook | 1 file, -131/+108

Rewrite the vector shift helpers in preparation for AVX support (3-operand form and 256 bit vectors). For now keep the existing two-operand interface.

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-11-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: rewrite destructive 3DNow operations | Paolo Bonzini | 1 file, -16/+16

Remove use of the MOVE macro, since it will be purged from MMX/SSE as well.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: Add CHECK_NO_VEX | Paul Brook | 1 file, -0/+26

Reject invalid VEX encodings on MMX instructions.

Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-7-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

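The check is plausibly along these lines (PREFIX_VEX is the decoder's prefix flag in translate.c; the exact macro body may differ):

    /* MMX instructions have no VEX-encoded forms; raise #UD if one is seen. */
    #define CHECK_NO_VEX(s) do {                \
            if ((s)->prefix & PREFIX_VEX) {     \
                goto illegal_op;                \
            }                                   \
        } while (0)
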
2022-09-01 | target/i386: do not cast gen_helper_* function pointers | Paolo Bonzini | 1 file, -38/+37

Use a union to store the various possible kinds of function pointers, and access the correct one based on the flags.

SSEOpHelper_table6 and SSEOpHelper_table7 right now only have one case, but this would change with AVX's 3- and 4-argument operations. Use unions there too, to keep the code more similar for the three tables.

Extracted from a patch by Paul Brook <paul@nowt.org>.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

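A sketch of the union-based dispatch; the typedef names follow the SSEFunc_* convention used in translate.c, but the exact members chosen here are illustrative:

    typedef union SSEFuncs {
        SSEFunc_0_epp op1;      /* (env, reg_ptr, reg_ptr) */
        SSEFunc_0_eppi op1i;    /* (env, reg_ptr, reg_ptr, imm) */
        SSEFunc_0_ppi op2i;     /* (reg_ptr, reg_ptr, imm) */
    } SSEFuncs;

The flags stored alongside each table entry then tell the decoder which member is the live one, so no pointer casts are needed.
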
2022-09-01 | target/i386: Add size suffix to vector FP helpers | Paolo Bonzini | 3 files, -66/+67

For AVX we're going to need both 128 bit (xmm) and 256 bit (ymm) variants of floating point helpers. Add the register type suffix to the existing *PS and *PD helpers (SS and SD variants are only valid on 128 bit vectors).

No functional changes.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-15-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: isolate MMX code more | Paolo Bonzini | 1 file, -18/+32

Extracted from a patch by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: check SSE table flags instead of hardcoding opcodes | Paolo Bonzini | 1 file, -44/+31

Put more flags to work to avoid hardcoding lists of opcodes. The op7 case for SSE_OPF_CMP is included for homogeneity and because AVX needs it, but it is never used by SSE or MMX.

Extracted from a patch by Paul Brook <paul@nowt.org>.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: Move 3DNOW decoder | Paul Brook | 1 file, -13/+17

Handle 3DNOW instructions early to avoid complicating the MMX/SSE logic.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-25-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: Rework sse_op_table6/7 | Paul Brook | 1 file, -100/+132

Add a flags field to each row in sse_op_table6 and sse_op_table7.

Initially this is only used as a replacement for the magic SSE41_SPECIAL pointer. The other flags are mostly relevant for the AVX implementation but can be applied to SSE as well.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-6-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: Rework sse_op_table1 | Paul Brook | 1 file, -130/+183

Add a flags field to each row in sse_op_table1.

Initially this is only used as a replacement for the magic SSE_SPECIAL and SSE_DUMMY pointers; the other flags are mostly relevant for the AVX implementation but can be applied to SSE as well.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-5-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

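A sketch of what one row becomes after the rework (member names are illustrative):

    struct SSEOpHelper_table1 {
        /* one function per mandatory prefix: none, 66, F3, F2 */
        SSEFuncs fn[4];
        /* SSE_OPF_* bits replacing the SSE_SPECIAL/SSE_DUMMY sentinels */
        uint32_t flags;
    };
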
2022-09-01 | target/i386: Add ZMM_OFFSET macro | Paul Brook | 1 file, -33/+27

Add a convenience macro to get the address of an xmm_regs element within CPUX86State.

This was originally going to be the basis of an implementation that broke operations into 128 bit chunks. I scrapped that idea, so this is now a purely cosmetic change. But I think a worthwhile one - it reduces the number of function calls that need to be split over multiple lines.

No functional changes.

Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-9-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

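The macro is presumably along these lines:

    /* Byte offset of xmm_regs[reg] within CPUX86State. */
    #define ZMM_OFFSET(reg) offsetof(CPUX86State, xmm_regs[reg])
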
2022-09-01 | target/i386: formatting fixes | Paolo Bonzini | 1 file, -3/+4

Extracted from a patch by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: do not use MOVL to move data between SSE registers | Paolo Bonzini | 1 file, -2/+4

Write down explicitly the load/store sequence.

Extracted from a patch by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | target/i386: DPPS rounding fix | Paolo Bonzini | 1 file, -32/+35

The DPPS (Dot Product) instruction is defined to first sum pairs of intermediate results, then sum those values to get the final result, i.e. (A+B)+(C+D). We incrementally sum the results, i.e. ((A+B)+C)+D, which can result in incorrect rounding.

For consistency, also change the variable names to the ones used in the Intel SDM and implement DPPD following the manual.

Based on a patch by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

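A sketch of the corrected summation order, using softfloat's float32_add; the helper name and parameters (the four masked per-element products) are illustrative:

    /* Sum pairs first, then the pair sums: (A+B) + (C+D). */
    static float32 dpps_sum(float32 p0, float32 p1, float32 p2, float32 p3,
                            float_status *st)
    {
        float32 ab = float32_add(p0, p1, st);
        float32 cd = float32_add(p2, p3, st);
        return float32_add(ab, cd, st);
    }

The two orders differ because float addition is not associative: each intermediate add rounds, so ((A+B)+C)+D can round differently from (A+B)+(C+D).
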
2022-09-01 | target/i386: fix PHSUB* instructions with dest=src | Paolo Bonzini | 1 file, -20/+29

The computation must overwrite neither the destination nor the source before the last element has been computed.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | i386: do kvm_put_msr_feature_control() first thing when vCPU is reset | Vitaly Kuznetsov | 1 file, -5/+12

kvm_put_sregs2() fails to reset 'locked' CR4/CR0 bits upon vCPU reset when it is in VMX root operation. Do kvm_put_msr_feature_control() before kvm_put_sregs2() to (possibly) kick the vCPU out of VMX root operation. It also seems logical to do kvm_put_msr_feature_control() before kvm_put_nested_state() and not after it, especially when 'real' nested state is set.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220818150113.479917-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-09-01 | i386: reset KVM nested state upon CPU reset | Vitaly Kuznetsov | 1 file, -10/+27

Make sure env->nested_state is cleaned up when a vCPU is reset; it may be stale after an incoming migration, and kvm_arch_put_registers() may end up failing or putting the vCPU in a weird state.

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220818150113.479917-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-08-05 | target/i386: display deprecation status in '-cpu help' | Daniel P. Berrangé | 1 file, -0/+5

When the user queries CPU models via QMP there is a 'deprecated' flag present; however, the same is not shown by the CLI '-cpu help' command.

Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

2022-08-01 | misc: fix commonly doubled up words | Daniel P. Berrangé | 1 file, -1/+1

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220707163720.1421716-5-berrange@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>

2022-07-13 | hvf: Enable RDTSCP support | Cameron Esfahani | 3 files, -13/+23

Pass through RDPID and RDTSCP support in CPUID if the host supports it. Correctly detect if CPU_BASED_TSC_OFFSET and CPU_BASED2_RDTSCP would be supported in primary and secondary processor-based VM-execution controls. Enable RDTSCP in secondary processor controls if RDTSCP support is indicated in CPUID.

Signed-off-by: Cameron Esfahani <dirty@apple.com>
Message-Id: <20220214185605.28087-7-f4bug@amsat.org>
Tested-by: Silvio Moioli <moio@suse.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1011
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

2022-06-08 | Fix 'writeable' typos | Peter Maydell | 3 files, -3/+3

We have about 30 instances of the typo/variant spelling 'writeable', and over 500 of the more common 'writable'. Standardize on the latter. Change produced with:

    sed -i -e 's/\([Ww][Rr][Ii][Tt]\)[Ee]\([Aa][Bb][Ll][Ee]\)/\1\2/g' $(git grep -il writeable)

and then hand-undoing the instance in linux-headers/linux/kvm.h.

Most of these changes are in comments or documentation; the exceptions are:
 * a local variable in accel/hvf/hvf-accel-ops.c
 * a local variable in accel/kvm/kvm-all.c
 * the PMCR_WRITABLE_MASK macro in target/arm/internals.h
 * the EPT_VIOLATION_GPA_WRITABLE macro in target/i386/hvf/vmcs.h (which is never used anywhere)
 * the AR_TYPE_WRITABLE_MASK macro in target/i386/hvf/vmx.h (which is never used anywhere)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Message-id: 20220505095015.2714666-1-peter.maydell@linaro.org

2022-06-06 | x86: cpu: fixup number of addressable IDs for logical processors sharing cache | Igor Mammedov | 1 file, -4/+16

When QEMU is started with '-cpu host,host-cache-info=on', it passes through the host's number of logical processors sharing cache and the number of processor cores in the physical package. QEMU already fixes up the latter to correctly reflect the number of configured cores for the VM; however, the number of logical processors sharing cache still comes from the host CPU, which confuses a guest started with:

    -machine q35,accel=kvm \
    -cpu host,host-cache-info=on,l3-cache=off \
    -smp 20,sockets=2,dies=1,cores=10,threads=1 \
    -numa node,nodeid=0,memdev=ram-node0 \
    -numa node,nodeid=1,memdev=ram-node1 \
    -numa cpu,socket-id=0,node-id=0 \
    -numa cpu,socket-id=1,node-id=1

on a 2-socket Xeon 4210R host with 10 cores per socket, where CPUID[04H] reports:

    ...
    --- cache 3 ---
    cache type                         = unified cache (3)
    cache level                        = 0x3 (3)
    self-initializing cache level      = true
    fully associative cache            = false
    maximum IDs for CPUs sharing cache = 0x1f (31)
    maximum IDs for cores in pkg       = 0xf (15)
    ...

That doesn't match the number of logical processors the VM was configured with, and as a result a RHEL 9.0 guest complains:

    sched: CPU #10's llc-sibling CPU #0 is not on the same node! [node: 1 != 0]. Ignoring dependency.
    WARNING: CPU: 10 PID: 0 at arch/x86/kernel/smpboot.c:421 topology_sane.isra.0+0x67/0x80
    ...
    Call Trace:
     set_cpu_sibling_map+0x176/0x590
     start_secondary+0x5b/0x150
     secondary_startup_64_no_verify+0xc2/0xcb

Fix it by capping the max number of logical processors to vcpus/socket as configured, which fixes the issue.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2088311
Message-Id: <20220524151020.2541698-3-imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

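A sketch of the capping logic; per the SDM, EAX[25:14] of CPUID leaf 04H holds "maximum IDs for CPUs sharing this cache" minus one, and the variable names here are illustrative:

    /* Don't advertise more cache-sharing CPUs than the VM has per socket. */
    if (host_vcpus_per_cache > vcpus_per_socket) {
        *eax &= ~0x3FFC000;   /* clear bits 25:14 */
        *eax |= (pow2ceil(vcpus_per_socket) - 1) << 14;
    }
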
2022-06-06 | x86: cpu: make sure number of addressable IDs for processor cores meets the spec | Igor Mammedov | 1 file, -1/+1

According to Intel's CPUID[EAX=04H] documentation, the resulting bits 31 - 26 in EAX should be:

    "**** The nearest power-of-2 integer that is not smaller than (1 + EAX[31:26]) is the number of unique Core_IDs reserved for addressing different processor cores in a physical package. Core ID is a subset of bits of the initial APIC ID."

Ensure that the values stored in EAX[31-26] always meet this condition.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220524151020.2541698-2-imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

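Storing a power-of-2-rounded count satisfies the condition trivially, since (1 + EAX[31:26]) is then already a power of two. A sketch using QEMU's pow2ceil() (the surrounding code is illustrative, not the exact committed change):

    /* EAX[31:26] = (nearest power of 2 >= core count) - 1 */
    *eax &= ~0xFC000000;
    *eax |= (pow2ceil(cs->nr_cores) - 1) << 26;
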
2022-06-06 | target/i386: Fix wrong count setting | Yang Zhong | 1 file, -1/+1

The previous patch used the wrong count (index) value and so read the wrong value from CPUID(EAX=12,ECX=0):EAX. As a result the SGX1 instructions can't be exposed to the VM and the SGX device can't work in the VM.

Fixes: d19d6ffa0710 ("target/i386: introduce helper to access supported CPUID")
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220530131834.1222801-1-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-06-06 | target/i386/tcg: Fix masking of real-mode addresses with A20 bit | Stephen Michael Jothen | 1 file, -1/+3

The correct A20 masking is done if paging is enabled (protected mode) but it seems to have been forgotten in real mode. For example, from the AMD64 APM Vol. 2, section 1.2.4:

> If the sum of the segment base and effective address carries over into bit 20,
> that bit can be optionally truncated to mimic the 20-bit address wrapping of the
> 8086 processor by using the A20M# input signal to mask the A20 address bit.

Most BIOSes will enable the A20 line on boot, but I found that after disabling the A20 line again, the correct wrapping wasn't taking place.

`handle_mmu_fault' in target/i386/tcg/sysemu/excp_helper.c seems to be the culprit. In real mode, it fills the TLB with the raw unmasked address. However, for protected mode, the `mmu_translate' function does the correct A20 masking. The fix is then to just apply the A20 mask in the first branch of the if statement.

Signed-off-by: Stephen Michael Jothen <sjothen@gmail.com>
Message-Id: <Yo5MUMSz80jXtvt9@air-old.local>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

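A sketch of the one-line idea in the real-mode branch, assuming the x86_get_a20_mask() accessor from cpu.h (the surrounding variable names are illustrative):

    /* Real mode: identity mapping, but still honour the A20 gate. */
    paddr = addr & x86_get_a20_mask(env);
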
2022-05-25 | i386: Hyper-V Direct TLB flush hypercall | Vitaly Kuznetsov | 4 files, -0/+12

The Hyper-V TLFS allows L0 and L1 hypervisors to collaborate on handling L2's TLB flush hypercalls. With the correct setup, L2's TLB flush hypercalls can be handled by L0 directly, without the need to exit to L1.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-6-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-05-25 | i386: Hyper-V Support extended GVA ranges for TLB flush hypercalls | Vitaly Kuznetsov | 4 files, -0/+12

KVM has effectively supported "extended GVA ranges" (up to 4095 additional GFNs per hypercall) since the implementation of the Hyper-V PV TLB flush feature (Linux 4.18), as a full TLB flush was always performed regardless of the request. The "Extended GVA ranges for TLB flush hypercalls" feature bit wasn't exposed then. Now, as KVM gains support for fine-grained TLB flush handling, exposing this feature starts making sense.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-5-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-05-25 | i386: Hyper-V XMM fast hypercall input feature | Vitaly Kuznetsov | 4 files, -1/+11

The Hyper-V specification allows passing parameters for certain hypercalls in XMM registers ("XMM Fast Hypercall Input"). When the feature is in use, it allows for faster hypercall processing as KVM can avoid reading the guest's memory. KVM supports the feature since v5.14.

Rename HV_HYPERCALL_{PARAMS_XMM_AVAILABLE -> XMM_INPUT_AVAILABLE} to comply with KVM.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-4-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-05-25 | i386: Hyper-V Enlightened MSR bitmap feature | Vitaly Kuznetsov | 4 files, -0/+15

The newly introduced enlightenment allows the L0 (KVM) and L1 (Hyper-V) hypervisors to collaborate to avoid unnecessary updates to the L2 MSR-Bitmap upon vmexits.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-05-25 | i386: Use hv_build_cpuid_leaf() for HV_CPUID_NESTED_FEATURES | Vitaly Kuznetsov | 2 files, -11/+15

Previously, the HV_CPUID_NESTED_FEATURES.EAX CPUID leaf was handled differently, as it was only used to encode the supported eVMCS version range. In fact, there are also feature bits there (e.g. Enlightened MSR-Bitmap). In preparation for adding these features, move HV_CPUID_NESTED_FEATURES leaf handling to hv_build_cpuid_leaf() and drop the now-unneeded 'hyperv_nested'.

No functional change intended.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-05-25 | target/i386/kvm: Fix disabling MPX on "-cpu host" with MPX-capable host | Maciej S. Szmigiero | 1 file, -0/+8

Since KVM commit 5f76f6f5ff96 ("KVM: nVMX: Do not expose MPX VMX controls when guest MPX disabled") it is not possible to disable MPX on a "-cpu host" just by adding "-mpx" there if the host CPU does indeed support MPX. QEMU will fail to set MSR_IA32_VMX_TRUE_{EXIT,ENTRY}_CTLS MSRs in this case and so trigger an assertion failure.

Instead, besides "-mpx" one has to explicitly add also "-vmx-exit-clear-bndcfgs" and "-vmx-entry-load-bndcfgs" to the QEMU command line to make it work, which is a bit convoluted.

Make the MPX-related bits in FEAT_VMX_{EXIT,ENTRY}_CTLS dependent on MPX being actually enabled so such workarounds are no longer necessary.

Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Message-Id: <51aa2125c76363204cc23c27165e778097c33f0b.1653323077.git.maciej.szmigiero@oracle.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

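A sketch of what the dependency entries would look like in the feature_dependencies[] table in target/i386/cpu.c (the constant names exist in QEMU; the exact entries are illustrative):

    { .from = { FEAT_7_0_EBX,        CPUID_7_0_EBX_MPX },
      .to   = { FEAT_VMX_EXIT_CTLS,  VMX_VM_EXIT_CLEAR_BNDCFGS } },
    { .from = { FEAT_7_0_EBX,        CPUID_7_0_EBX_MPX },
      .to   = { FEAT_VMX_ENTRY_CTLS, VMX_VM_ENTRY_LOAD_BNDCFGS } },

With entries like these, clearing MPX automatically clears the dependent VMX control bits, so "-mpx" alone suffices.
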
2022-05-23 | target/i386: Remove LBREn bit check when accessing Arch LBR MSRs | Yang Weijiang | 1 file, -12/+9

Live migration can happen when the Arch LBR LBREn bit is cleared, e.g., when migration happens after the guest entered SMM mode. In this case, we still need to migrate the Arch LBR MSRs.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220517155024.33270-1-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-05-16 | Merge tag 'for_upstream' of git://git.kernel.org/pub/scm/virt/kvm/mst/qemu into staging | Richard Henderson | 1 file, -1/+1

virtio,pc,pci: fixes, cleanups, features

Most of CXL support; fixes and cleanups all over the place.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

# -----BEGIN PGP SIGNATURE-----
#
# iQFDBAABCAAtFiEEXQn9CHHI+FuUyooNKB8NuNKNVGkFAmKCuLIPHG1zdEByZWRo
# YXQuY29tAAoJECgfDbjSjVRpdDUH/12SmWaAo+0+SdIHgWFFxsmg3t/EdcO38fgi
# MV+GpYdbp6TlU3jdQhrMZYmFdkVVydBdxk93ujCLbFS0ixTsKj31j0IbZMfdcGgv
# SLqnV+E3JdHqnGP39q9a9rdwYWyqhkgHoldxilIFW76ngOSapaZVvnwnOMAMkf77
# 1LieL4/Xq7N9Ho86Zrs3IczQcf0czdJRDaFaSIu8GaHl8ELyuPhlSm6CSqqrEEWR
# PA/COQsLDbLOMxbfCi5v88r5aaxmGNZcGbXQbiH9qVHw65nlHyLH9UkNTdJn1du1
# f2GYwwa7eekfw/LCvvVwxO1znJrj02sfFai7aAtQYbXPvjvQiqA=
# =xdSk
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 16 May 2022 01:48:50 PM PDT
# gpg: using RSA key 5D09FD0871C8F85B94CA8A0D281F0DB8D28D5469
# gpg: issuer "mst@redhat.com"
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>" [undefined]
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469

* tag 'for_upstream' of git://git.kernel.org/pub/scm/virt/kvm/mst/qemu: (86 commits)
  vhost-user-scsi: avoid unlink(NULL) with fd passing
  virtio-net: don't handle mq request in userspace handler for vhost-vdpa
  vhost-vdpa: change name and polarity for vhost_vdpa_one_time_request()
  vhost-vdpa: backend feature should set only once
  vhost-net: fix improper cleanup in vhost_net_start
  vhost-vdpa: fix improper cleanup in net_init_vhost_vdpa
  virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa
  virtio-net: setup vhost_dev and notifiers for cvq only when feature is negotiated
  hw/i386/amd_iommu: Fix IOMMU event log encoding errors
  hw/i386: Make pic a property of common x86 base machine type
  hw/i386: Make pit a property of common x86 base machine type
  include/hw/pci/pcie_host: Correct PCIE_MMCFG_SIZE_MAX
  include/hw/pci/pcie_host: Correct PCIE_MMCFG_BUS_MASK
  docs/vhost-user: Clarifications for VHOST_USER_ADD/REM_MEM_REG
  vhost-user: more master/slave things
  virtio: add vhost support for virtio devices
  virtio: drop name parameter for virtio_init()
  virtio/vhost-user: dynamically assign VhostUserHostNotifiers
  hw/virtio/vhost-user: don't suppress F_CONFIG when supported
  include/hw: start documenting the vhost API
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2022-05-16 | target/i386: Fix sanity check on max APIC ID / X2APIC enablement | David Woodhouse | 1 file, -1/+1

The check on x86ms->apic_id_limit in pc_machine_done() had two problems.

Firstly, we need KVM to support the X2APIC API in order to allow IRQ delivery to APICs >= 255. So we need to call/check kvm_enable_x2apic(), which was done elsewhere in *some* cases but not all.

Secondly, microvm needs the same check. So move it from pc_machine_done() to x86_cpus_init() where it will work for both.

The check in kvm_cpu_instance_init() is now redundant and can be dropped.

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Acked-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20220314142544.150555-1-dwmw2@infradead.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2022-05-14 | target/i386: Support Arch LBR in CPUID enumeration | Yang Weijiang | 1 file, -1/+19

If CPUID.(EAX=07H, ECX=0):EDX[19] is set to 1, the processor supports Architectural LBRs. In this case, CPUID leaf 01CH indicates details of the Architectural LBR capabilities. XSAVE support for Architectural LBRs is enumerated in CPUID.(EAX=0DH, ECX=0FH).

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-9-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-05-14 | target/i386: introduce helper to access supported CPUID | Paolo Bonzini | 1 file, -16/+25

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>