path: root/target/i386/tcg

2024-08-21  target/i386: Fix tss access size in switch_tss_ra  (Richard Henderson; 1 file, -2/+3)

The two limit_max variables represent size - 1, just like the encoding in the GDT, thus the 'old' access was off by one. Access the minimal size of the new tss: the complete tss contains the iopb, which may be a larger block than the access api expects, and is irrelevant because the iopb is not accessed during the switch itself.

Fixes: 8b131065080a ("target/i386/tcg: use X86Access for TSS access")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2511
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240819074052.207783-1-richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
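
As an illustration of the size - 1 convention, here is a minimal, self-contained C model (illustrative only, not the QEMU code; the function name is made up for the example):

    #include <assert.h>
    #include <stdint.h>

    /* A descriptor limit field holds size - 1, so a check that a TSS of
     * needed_size bytes fits must compare against needed_size - 1. */
    static int tss_limit_ok(uint32_t limit_max, uint32_t needed_size)
    {
        return limit_max >= needed_size - 1;
    }

    int main(void)
    {
        /* a 104-byte 32-bit TSS is encoded with limit 103 */
        assert(tss_limit_ok(103, 104));   /* correct comparison */
        assert(!tss_limit_ok(103, 105));  /* an off-by-one rejects a valid TSS */
        return 0;
    }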

2024-08-21  target/i386: Fix carry flag for BLSI  (Richard Henderson; 4 files, -1/+42)

BLSI has inverted semantics for C as compared to the other two BMI1 instructions, BLSMSK and BLSR. Introduce CC_OP_BLSI* for this purpose.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2175
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240801075845.573075-3-richard.henderson@linaro.org>
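
For reference, the carry semantics in question, as a plain-C model of the three BMI1 instructions per the Intel SDM (this is a model of the architecture, not the QEMU implementation):

    #include <stdint.h>

    /* BLSR and BLSMSK set CF when the source is zero; BLSI sets CF when
     * the source is non-zero - the inversion that needed its own cc_op. */
    static uint64_t blsr(uint64_t src, int *cf)   { *cf = (src == 0); return src & (src - 1); }
    static uint64_t blsmsk(uint64_t src, int *cf) { *cf = (src == 0); return src ^ (src - 1); }
    static uint64_t blsi(uint64_t src, int *cf)   { *cf = (src != 0); return src & -src; }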

2024-08-21  target/i386: Split out gen_prepare_val_nz  (Richard Henderson; 1 file, -8/+14)

Split out the TCG_COND_TSTEQ logic from gen_prepare_eflags_z, and use it for CC_OP_BMILG* as well. Prepare for requiring both zero and non-zero senses.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240801075845.573075-2-richard.henderson@linaro.org>

2024-08-16  target/i386: allow access_ptr to force slow path on failed probe  (Alex Bennée; 1 file, -14/+13)

When we are using TCG plugin memory callbacks probe_access_internal will return TLB_MMIO to force the slow path for memory access. This results in probe_access returning NULL, but the x86 access_ptr function happily accepts an empty haddr, resulting in segfault hilarity. Check for an empty haddr to prevent the segfault and enable plugins to track all the memory operations for the x86 save/restore helpers. As we also want to run the slow path when instrumenting *-user, we should also drop the short-cutting test_ptr macro.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2489
Fixes: 6d03226b42 (plugins: force slow path when plugins instrument memory ops)
Reviewed-by: Alexandre Iooss <erdnaxe@crans.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240813202329.1237572-8-alex.bennee@linaro.org>
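
A sketch of the added guard (the wrapper name and call site are illustrative; probe_access() and the MMU access types are the real QEMU API):

    /* probe_access() returns NULL when the access must take the slow
     * path (e.g. TLB_MMIO set so plugins see the memory operation). */
    static void *access_ptr_checked(CPUX86State *env, vaddr addr, int size,
                                    int mmu_idx, uintptr_t ra)
    {
        void *haddr = probe_access(env, addr, size, MMU_DATA_LOAD, mmu_idx, ra);
        if (haddr == NULL) {
            /* no usable host pointer: the caller must fall back to the
             * byte-wise cpu_ld/st helpers instead of dereferencing */
            return NULL;
        }
        return haddr;
    }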

2024-08-13  target/i386: Assert MMX and XMM registers in range  (Richard Henderson; 1 file, -2/+7)

The mmx assert would fire without the fix for #2495.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/20240812025844.58956-4-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-08-13  target/i386: Use unit not type in decode_modrm  (Richard Henderson; 1 file, -4/+4)

Rather than enumerating the types that can produce MMX operands, examine the unit. No functional change.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/20240812025844.58956-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-08-13  target/i386: Do not apply REX to MMX operands  (Richard Henderson; 1 file, -1/+4)

Cc: qemu-stable@nongnu.org
Fixes: b3e22b2318a ("target/i386: add core of new i386 decoder")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2495
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240812025844.58956-2-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
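
The decode rule being fixed, sketched in C (a sketch only; the real decoder's field and parameter names differ):

    #include <stdbool.h>

    /* compute the operand register number; rex_r is the REX.R bit */
    static int decode_reg(int modrm, int rex_r, bool is_mmx)
    {
        int reg = (modrm >> 3) & 7;
        if (!is_mmx) {
            reg |= rex_r << 3;  /* REX extends GPR/XMM numbers only */
        }
        return reg;             /* MMX: always 0..7, REX.R ignored */
    }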

2024-08-05  target/i386: Fix VSIB decode  (Richard Henderson; 2 files, -11/+12)

With normal SIB, index == 4 indicates no index. With VSIB, there is no exception for VR4/VR12.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2474
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240805003130.1421051-3-richard.henderson@linaro.org
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
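
The distinction, sketched in C (illustrative names, not the QEMU decoder):

    #include <stdbool.h>

    /* decode the SIB/VSIB index field; rex_x is the REX.X bit */
    static int decode_index(int sib, int rex_x, bool is_vsib)
    {
        int index = ((sib >> 3) & 7) | (rex_x << 3);
        /* Normal SIB: index == 4 (ESP, without REX.X) means "no index";
         * 12 (R12) is still a valid index.  VSIB: the index names a
         * vector register and every value is valid, including VR4/VR12. */
        if (!is_vsib && index == 4) {
            return -1;  /* no index register */
        }
        return index;
    }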

2024-07-29  target/i386: Remove dead assignment to ss in do_interrupt64()  (Peter Maydell; 1 file, -3/+2)

Coverity points out that in do_interrupt64() in the "to inner privilege" codepath we set "ss = 0", but because we also set "new_stack = 1" there, later in the function we will always override that value of ss with "ss = 0 | dpl".

Remove the unnecessary initialization of ss, which allows us to reduce the scope of the variable to only where it is used. Borrow a comment from helper_lcall_protected() that explains what "0 | dpl" means here.

Resolves: Coverity CID 1527395
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240723162525.1585743-1-peter.maydell@linaro.org

2024-07-16  target/i386/tcg: save current task state before loading new one  (Paolo Bonzini; 1 file, -40/+45)

This is how the steps are ordered in the manual. EFLAGS.NT is overwritten after the fact in the saved image.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-07-16  target/i386/tcg: use X86Access for TSS access  (Paolo Bonzini; 1 file, -52/+58)

This takes care of probing the vaddr range in advance, and is also faster because it avoids repeated TLB lookups. It also matches the Intel manual better, as it says "Checks that the current (old) TSS, new TSS, and all segment descriptors used in the task switch are paged into system memory"; note however that it's not clear how the processor checks for segment descriptors, and this check is not included in the AMD manual.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
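
A sketch of the pattern, assuming the X86Access API introduced for this series (access_prepare()/access_stl(); exact signatures and the TSS field offsets are assumptions here, though 0x20/0x24 are the 32-bit TSS slots for EIP/EFLAGS):

    X86Access old_tss;

    /* probe the whole TSS range once; subsequent accesses cannot fault
     * and need no further TLB lookups */
    access_prepare(&old_tss, env, old_tss_base, old_tss_limit_max + 1,
                   MMU_DATA_STORE, mmu_index, retaddr);
    access_stl(&old_tss, old_tss_base + 0x20, next_eip);    /* saved EIP */
    access_stl(&old_tss, old_tss_base + 0x24, old_eflags);  /* saved EFLAGS */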

2024-07-16  target/i386/tcg: check for correct busy state before switching to a new task  (Paolo Bonzini; 1 file, -0/+5)

This step is listed in the Intel manual: "Checks that the new task is available (call, jump, exception, or interrupt) or busy (IRET return)". The AMD manual lists the same operation under the "Preventing recursion" paragraph of "12.3.4 Nesting Tasks", though it is not clear if the processor checks the busy bit in the IRET case.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-07-16  target/i386/tcg: Compute MMU index once  (Paolo Bonzini; 1 file, -13/+22)

Add the MMU index to the StackAccess struct, so that it can be cached or (in the next patch) computed from information that is not in CPUX86State.

Co-developed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-07-16  target/i386/tcg: Reorg push/pop within seg_helper.c  (Richard Henderson; 1 file, -222/+259)

Interrupts and call gates should use accesses with the DPL as the privilege level. While computing the applicable MMU index is easy, the harder thing is how to plumb it in the code.

One possibility could be to add a single argument to the PUSH* macros for the privilege level, but this is repetitive and risks confusion between the involved privilege levels. Another possibility is to pass both CPL and DPL, and adjust both PUSH* and POP* to use specific privilege levels (instead of using cpu_{ld,st}*_data). This makes the code more symmetric.

However, a more complicated but much nicer approach is to use a structure to contain the stack parameters, env, unwind return address, and rewrite the macros into functions. The struct provides an easy home for the MMU index as well.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-4-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
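
The shape of the struct-based rewrite might look like this (a sketch; field names are inferred from the description above, and cpu_stl_mmuidx_ra() is the existing per-MMU-index store helper):

    typedef struct StackAccess {
        CPUX86State *env;
        uintptr_t ra;          /* unwind return address */
        target_ulong ss_base;
        target_ulong sp;
        target_ulong sp_mask;
        int mmu_index;         /* may reflect the DPL, not just the CPL */
    } StackAccess;

    static void pushl(StackAccess *sa, uint32_t val)
    {
        sa->sp -= 4;
        cpu_stl_mmuidx_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask),
                          val, sa->mmu_index, sa->ra);
    }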

2024-07-16  target/i386/tcg: use PUSHL/PUSHW for error code  (Paolo Bonzini; 1 file, -9/+7)

Do not pre-decrement esp, let the macros subtract the appropriate operand size.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-07-16  target/i386/tcg: Allow IRET from user mode to user mode with SMAP  (Paolo Bonzini; 1 file, -9/+9)

This fixes a bug wherein i386/tcg assumed an interrupt return using the IRET instruction was always returning from kernel mode to either kernel mode or user mode. This assumption is violated when IRET is used as a clever way to restore thread state, as for example in the dotnet runtime. There, IRET returns from user mode to user mode.

The bug is that stack accesses from IRET and RETF, as well as accesses to the parameters in a call gate, are normal data accesses using the current CPL. This manifested itself as a page fault in the guest Linux kernel due to SMAP preventing the access.

This bug appears to have been in QEMU since the beginning.

Analyzed-by: Robert R. Henry <rrh.henry@gmail.com>
Co-developed-by: Robert R. Henry <rrh.henry@gmail.com>
Signed-off-by: Robert R. Henry <rrh.henry@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-07-16  target/i386/tcg: Remove SEG_ADDL  (Richard Henderson; 1 file, -6/+2)

This truncation is now handled by MMU_*32_IDX. The introduction of MMU_*32_IDX in fact applied correct 32-bit wraparound to 16-bit accesses with a high segment base (e.g. big real mode or vm86 mode), which did not use SEG_ADDL.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-07-16  target/i386/tcg: fix POP to memory in long mode  (Paolo Bonzini; 2 files, -1/+2)

In long mode, POP to memory will write a full 64-bit value. However, the call to gen_writeback() in gen_POP will use MO_32 because the decoding table is incorrect.

The bug was latent until commit aea49fbb01a ("target/i386: use gen_writeback() within gen_POP()", 2024-06-08), and then became visible because gen_op_st_v now receives op->ot instead of the "ot" returned by gen_pop_T0.

Analyzed-by: Clément Chigot <chigot@adacore.com>
Fixes: 5e9e21bcc4d ("target/i386: move 60-BF opcodes to new decoder", 2024-05-07)
Tested-by: Clément Chigot <chigot@adacore.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-28  target/i386: remove unused enum  (Paolo Bonzini; 1 file, -16/+0)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-28  target/i386: give CC_OP_POPCNT low bits corresponding to MO_TL  (Paolo Bonzini; 1 file, -2/+1)

Handle it like the other arithmetic cc_ops. This simplifies the implementation of bit test instructions a bit.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-28  target/i386: use cpu_cc_dst for CC_OP_POPCNT  (Paolo Bonzini; 3 files, -5/+5)

It is the only CCOp, among those that compute ZF from one of the cc_op_* registers, that uses cpu_cc_src. Do not make it the odd one out; instead, use cpu_cc_dst like the others.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-17  target/i386: convert CMPXCHG to new decoder  (Paolo Bonzini; 3 files, -80/+53)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-17  target/i386: convert XADD to new decoder  (Paolo Bonzini; 3 files, -36/+26)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-17  target/i386: convert LZCNT/TZCNT/BSF/BSR/POPCNT to new decoder  (Paolo Bonzini; 4 files, -76/+133)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-17  target/i386: convert SHLD/SHRD to new decoder  (Paolo Bonzini; 3 files, -84/+50)

Use the same flag generation code as SHL and SHR, but use the existing gen_shiftd_rm_T1 function to compute the result as well as CC_SRC.

Decoding-wise, SHLD/SHRD by immediate count as a 4-operand instruction because s->T0 and s->T1 actually occupy three op slots. The infrastructure used by opcodes in the 0F 3A table works fine.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-17  target/i386: adapt gen_shift_count for SHLD/SHRD  (Paolo Bonzini; 1 file, -10/+10)

SHLD/SHRD can have 3 register operands - s->T0, s->T1 and either 1 or CL - and therefore decode->op[2] is taken by the low part of the register being shifted. Pass X86_OP_* to gen_shift_count from its current callers and hardcode cpu_regs[R_ECX] as the shift count.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-17  target/i386: pull load/writeback out of gen_shiftd_rm_T1  (Paolo Bonzini; 1 file, -41/+14)

Use gen_ld_modrm/gen_st_modrm, moving them and gen_shift_flags to the caller. This way, gen_shiftd_rm_T1 becomes something that the new decoder can call.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-17  target/i386: convert non-grouped, helper-based 2-byte opcodes  (Paolo Bonzini; 5 files, -170/+206)

These have very simple generators and no need for complex group decoding. Apart from LAR/LSL, which are simplified to use gen_op_deposit_reg_v and movcond, the code is generally lifted from translate.c into the generators.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-17  target/i386: split X86_CHECK_prot into PE and VM86 checks  (Paolo Bonzini; 2 files, -4/+13)

SYSENTER is allowed in VM86 mode, but not in real mode. Split the check so that PE and !VM86 are covered by separate bits.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-17  target/i386: finish converting 0F AE to the new decoder  (Paolo Bonzini; 4 files, -194/+129)

This is already partly implemented due to VLDMXCSR and VSTMXCSR; finish the job.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-17  target/i386: fix bad sorting of entries in the 0F table  (Paolo Bonzini; 1 file, -47/+46)

Aesthetic change only.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-17  target/i386: replace read_crN helper with read_cr8  (Paolo Bonzini; 2 files, -16/+6)

All other control registers are stored plainly in CPUX86State.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-17  target/i386: convert MOV from/to CR and DR to new decoder  (Paolo Bonzini; 4 files, -84/+81)

Complete the implementation of the C and D operand types; then the operations are just MOVs.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-11  target/i386: fix processing of intercept 0 (read CR0)  (Paolo Bonzini; 2 files, -2/+3)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-11  target/i386: replace NoSeg special with NoLoadEA  (Paolo Bonzini; 3 files, -11/+9)

This is a bit more generic, as it can be applied to MPX as well.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-11  target/i386: change X86_ENTRYwr to use T0, use it for moves  (Paolo Bonzini; 2 files, -25/+25)

Just like X86_ENTRYr, X86_ENTRYwr is easily changed to use only T0. In this case, the motivation is to use it for the MOV instruction family. The case when you need to preserve the input value is the odd one, as it is used basically only for BLS* instructions.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-11  target/i386: change X86_ENTRYr to use T0  (Paolo Bonzini; 2 files, -20/+20)

I am not sure why I made it use T1. It is a bit more symmetric with respect to X86_ENTRYwr (which uses T0 for the "w"ritten operand and T1 for the "r"ead operand), but it is also less flexible because it does not let you apply zextT0/sextT0.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-11  target/i386: put BLS* input in T1, use generic flag writeback  (Paolo Bonzini; 2 files, -17/+11)

This makes for easier cpu_cc_* setup, and not using set_cc_op() should come in handy if QEMU ever implements APX.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-11  target/i386: rewrite flags writeback for ADCX/ADOX  (Paolo Bonzini; 1 file, -26/+35)

Avoid using set_cc_op() in preparation for implementing APX; treat CC_OP_EFLAGS similar to the case where we have the "opposite" cc_op (CC_OP_ADOX for ADCX and CC_OP_ADCX for ADOX), except the resulting cc_op is not CC_OP_ADCOX. This is written easily as two "if"s, whose conditions are both false for CC_OP_EFLAGS, both true for CC_OP_ADCOX, and one each true for CC_OP_ADCX/ADOX.

The new logic also makes it easy to drop usage of tmp0.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-11  target/i386: remove CPUX86State argument from generator functions  (Paolo Bonzini; 3 files, -289/+289)

The CPUX86State argument would only be used to fetch bytes, but that has to be done before the generator function is called. So remove it, and all temptation together with it.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-08  target/i386: fix size of EBP writeback in gen_enter()  (Mark Cave-Ayland; 1 file, -1/+1)

The calculation of FrameTemp is done using the size indicated by mo_pushpop() before being written back to EBP, but the final writeback to EBP is done using the size indicated by mo_stacksize().

In the case where mo_pushpop() is MO_32 and mo_stacksize() is MO_16, the final writeback to EBP is done using MO_16, which can leave junk in the top 16 bits of EBP after executing ENTER. Change the writeback of EBP to use the same size indicated by mo_pushpop() to ensure that the full value is written back.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2198
Message-ID: <20240606095319.229650-5-mark.cave-ayland@ilande.co.uk>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
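
The effect is easy to demonstrate with a standalone C model of a sized register writeback (illustrative only, not the TCG code):

    #include <assert.h>
    #include <stdint.h>

    /* write only the low `bytes` bytes of val into *reg, as a MO_16 or
     * MO_32 store into EBP would */
    static void writeback(uint32_t *reg, uint32_t val, int bytes)
    {
        uint32_t mask = (bytes == 4) ? UINT32_MAX : (1u << (bytes * 8)) - 1;
        *reg = (*reg & ~mask) | (val & mask);
    }

    int main(void)
    {
        uint32_t ebp = 0xAAAA1234;
        writeback(&ebp, 0x00005678, 2);  /* MO_16: upper half keeps 0xAAAA */
        assert(ebp == 0xAAAA5678);       /* the "junk" described above */
        writeback(&ebp, 0x00005678, 4);  /* MO_32, per mo_pushpop() */
        assert(ebp == 0x00005678);
        return 0;
    }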

2024-06-08  target/i386: fix SP when taking a memory fault during POP  (Mark Cave-Ayland; 1 file, -1/+1)

When OS/2 Warp configures its segment descriptors, many of them are configured with the P flag clear to allow for a fault-on-demand implementation. In the case where the stack value is POPped into the segment registers, the SP is incremented before calling gen_helper_load_seg() to validate the segment descriptor:

IN:
0xffef2c0c:  66 07    popl %es

OP:
 ld_i32 loc9,env,$0xfffffffffffffff8
 sub_i32 loc9,loc9,$0x1
 brcond_i32 loc9,$0x0,lt,$L0
 st16_i32 loc9,env,$0xfffffffffffffff8
 st8_i32 $0x1,env,$0xfffffffffffffffc

 ---- 0000000000000c0c 0000000000000000

 ext16u_i64 loc0,rsp
 add_i64 loc0,loc0,ss_base
 ext32u_i64 loc0,loc0
 qemu_ld_a64_i64 loc0,loc0,noat+un+leul,5
 add_i64 loc3,rsp,$0x4
 deposit_i64 rsp,rsp,loc3,$0x0,$0x10
 extrl_i64_i32 loc5,loc0
 call load_seg,$0x0,$0,env,$0x0,loc5
 add_i64 rip,rip,$0x2
 ext16u_i64 rip,rip
 exit_tb $0x0
 set_label $L0
 exit_tb $0x7fff58000043

If helper_load_seg() generates a fault when validating the segment descriptor, then as the SP has already been incremented, the topmost word of the stack is overwritten by the arguments pushed onto the stack by the CPU before taking the fault handler. As a consequence things rapidly go wrong upon return from the fault handler due to the corrupted stack.

Update the logic for the existing writeback condition so that a POP into the segment registers also calls helper_load_seg() first before incrementing the SP, so that if a fault occurs the SP remains unaltered.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2198
Message-ID: <20240606095319.229650-4-mark.cave-ayland@ilande.co.uk>
Fixes: cc1d28bdbe0 ("target/i386: move 00-5F opcodes to new decoder", 2024-05-07)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
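
At helper level, the corrected ordering amounts to the following sketch (helper_load_seg() and cpu_lduw_data_ra() are the real helpers; the surrounding code is condensed for illustration):

    /* read the selector without committing the new SP */
    int sel = cpu_lduw_data_ra(env, ss_base + (sp & sp_mask), ra);

    /* validate the descriptor first: on #NP/#GP this longjmps to the
     * fault handler with SP still unchanged */
    helper_load_seg(env, R_ES, sel);

    /* only reached when the load succeeded: commit the increment */
    sp += 4;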

2024-06-08  target/i386: use gen_writeback() within gen_POP()  (Mark Cave-Ayland; 1 file, -2/+2)

Instead of directly implementing the writeback using gen_op_st_v(), use the existing gen_writeback() function.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-ID: <20240606095319.229650-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-08  target/i386: use local X86DecodedOp in gen_POP()  (Mark Cave-Ayland; 1 file, -2/+4)

This will make subsequent changes a little easier to read.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-ID: <20240606095319.229650-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-08  target/i386: document use of DISAS_NORETURN  (Paolo Bonzini; 1 file, -0/+11)

DISAS_NORETURN suppresses the work normally done by gen_eob(), and therefore must be used in special cases only. Document them.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-08  target/i386: document incorrect semantics of watchpoint following MOV/POP SS  (Paolo Bonzini; 1 file, -0/+6)

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-08  target/i386: fix TF/RF handling for HLT  (Paolo Bonzini; 2 files, -4/+15)

HLT uses DISAS_NORETURN because the corresponding helper calls cpu_loop_exit(). However, while gen_eob() clears HF_RF_MASK and synthesizes a #DB exception if single-step is active, none of this is done by HLT. Note that the single-step trap is generated after the halt is finished.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-08  target/i386: fix INHIBIT_IRQ/TF/RF handling for PAUSE  (Paolo Bonzini; 1 file, -0/+4)

PAUSE uses DISAS_NORETURN because the corresponding helper calls cpu_loop_exit(). However, while HLT clears HF_INHIBIT_IRQ_MASK to correctly handle "STI; HLT", the same is missing from PAUSE. Also, gen_eob() clears HF_RF_MASK and synthesizes a #DB exception if single-step is active; none of this is done by HLT and PAUSE. Start fixing PAUSE; HLT will follow.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
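
For reference, a helper-level sketch of the end-of-instruction work that gen_eob() normally emits and that a DISAS_NORETURN path such as PAUSE (and later HLT) must replicate (simplified; the exact placement in the helpers is an assumption):

    env->hflags &= ~HF_INHIBIT_IRQ_MASK;  /* so "STI; PAUSE" unblocks IRQs */
    env->eflags &= ~RF_MASK;              /* RF is cleared after every insn */
    if (env->eflags & TF_MASK) {
        raise_exception_ra(env, EXCP01_DB, GETPC());  /* single-step #DB */
    }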

2024-06-08  target/i386: fix INHIBIT_IRQ/TF/RF handling for VMRUN  (Paolo Bonzini; 2 files, -13/+38)

From vm entry to exit, VMRUN is handled as a single instruction. It uses DISAS_NORETURN in order to avoid processing TF or RF before the first instruction executes in the guest. However, the corresponding handling is missing in vmexit. Add it, and at the same time reorganize the comments with quotes from the manual about the tasks performed by a #VMEXIT.

Another gen_eob() task that is missing in VMRUN is preparing the HF_INHIBIT_IRQ flag for the next instruction, in this case by loading it from the VMCB control state.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-06-08  target/i386: disable/enable breakpoints on vmentry/vmexit  (Paolo Bonzini; 1 file, -10/+15)

If the required DR7 (either from the VMCB or from the host save area) disables a breakpoint that was enabled prior to vmentry or vmexit, it is left enabled and will trigger EXCP_DEBUG. This causes a spurious #DB on the next crossing of the breakpoint.

To disable it, vmentry/vmexit must use cpu_x86_update_dr7 to load DR7. Because cpu_x86_update_dr7 takes a 32-bit argument, check reserved bits prior to calling cpu_x86_update_dr7, and do the same for DR6 as well for consistency.

This scenario is tested by the "host_rflags" test in kvm-unit-tests.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
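
A condensed sketch of the DR7 load described above (cpu_x86_update_dr7() is the real helper; the reserved-bit handling and control flow are assumptions for illustration):

    /* new_dr7 comes from the VMCB (vmentry) or host save area (vmexit) */
    if (new_dr7 >> 32) {
        /* high bits are reserved: reject before truncating, because
         * cpu_x86_update_dr7() takes a 32-bit argument */
        goto fail;  /* fail the world switch */
    }
    /* re-arms/disarms QEMU-side breakpoints, unlike a bare env->dr[7] = x */
    cpu_x86_update_dr7(env, (uint32_t)new_dr7);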