author    Stephen Michael Jothen <sjothen@gmail.com>  2022-05-25 17:33:36 +0200
committer Paolo Bonzini <pbonzini@redhat.com>         2022-06-06 09:26:53 +0200
commit    9f9dcb96a46a60fcf95f7baebafa3ec5e2a1b5ce (patch)
tree      39a864f8c231e394b43a51c8316c39a50146a24c /target
parent    ca127b3fc247517ec7d4dad291f2c0f90602ce5b (diff)
target/i386/tcg: Fix masking of real-mode addresses with A20 bit
The correct A20 masking is done if paging is enabled (protected mode), but it
seems to have been forgotten in real mode. For example, from the AMD64 APM
Vol. 2, section 1.2.4:

> If the sum of the segment base and effective address carries over into bit 20,
> that bit can be optionally truncated to mimic the 20-bit address wrapping of the
> 8086 processor by using the A20M# input signal to mask the A20 address bit.

Most BIOSes will enable the A20 line on boot, but I found that after disabling
the A20 line, the correct wrapping wasn't taking place.

`handle_mmu_fault' in target/i386/tcg/sysemu/excp_helper.c seems to be the
culprit. In real mode, it fills the TLB with the raw unmasked address. However,
for protected mode, the `mmu_translate' function does the correct A20 masking.

The fix, then, is simply to apply the A20 mask in the first branch of the if
statement.

Signed-off-by: Stephen Michael Jothen <sjothen@gmail.com>
Message-Id: <Yo5MUMSz80jXtvt9@air-old.local>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
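For illustration, here is a minimal standalone C sketch (not QEMU code) of the
wrapping behaviour the commit message describes: with the A20 line disabled,
bit 20 of the real-mode physical address is forced to zero, so a segment:offset
sum that carries into bit 20 wraps around as on an 8086. The mask constant
below is assumed to correspond to what x86_get_a20_mask() yields when A20 is
disabled.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* With A20 disabled, bit 20 of the address is cleared,
         * mimicking the 20-bit wrap-around of the 8086. */
        uint32_t a20_mask = ~(UINT32_C(1) << 20);    /* 0xffefffff */

        uint32_t seg_base = 0xFFFF0;   /* segment 0xFFFF shifted left by 4 */
        uint32_t eff_addr = 0x0010;    /* effective (offset) address */
        uint32_t linear   = seg_base + eff_addr;     /* carries into bit 20 */

        printf("unmasked: 0x%06x  masked: 0x%06x\n", linear, linear & a20_mask);
        /* prints: unmasked: 0x100000  masked: 0x000000 */
        return 0;
    }

Before this patch, real mode in `handle_mmu_fault' behaved like the "unmasked"
value above even with A20 disabled; the hunks below apply the mask, as
`mmu_translate' already does for protected mode.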
Diffstat (limited to 'target')
-rw-r--r--  target/i386/tcg/sysemu/excp_helper.c  4
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c
index e1b6d88..48feba7 100644
--- a/target/i386/tcg/sysemu/excp_helper.c
+++ b/target/i386/tcg/sysemu/excp_helper.c
@@ -359,6 +359,7 @@ static int handle_mmu_fault(CPUState *cs, vaddr addr, int size,
     CPUX86State *env = &cpu->env;
     int error_code = PG_ERROR_OK;
     int pg_mode, prot, page_size;
+    int32_t a20_mask;
     hwaddr paddr;
     hwaddr vaddr;
 
@@ -368,7 +369,8 @@ static int handle_mmu_fault(CPUState *cs, vaddr addr, int size,
 #endif
 
     if (!(env->cr[0] & CR0_PG_MASK)) {
-        paddr = addr;
+        a20_mask = x86_get_a20_mask(env);
+        paddr = addr & a20_mask;
 #ifdef TARGET_X86_64
         if (!(env->hflags & HF_LMA_MASK)) {
             /* Without long mode we can only address 32bits in real mode */