Probe for whether the specified guest write access is permitted.
If it is not permitted then an exception will be taken in the same
way as if this were a real write access (and we will not return).
Otherwise the function will return, and there will be a valid
entry in the TLB for this access.
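A minimal sketch of the idea, with simplified stand-in types rather than QEMU's real CPUArchState/TLB structures:

    #include <stdint.h>
    #include <stddef.h>

    #define TLB_SIZE  256
    #define PAGE_MASK (~(uint64_t)0xfff)

    typedef struct {
        uint64_t addr_write;   /* page-aligned tag for writable pages */
    } TLBEntry;

    static TLBEntry tlb[TLB_SIZE];

    /* Stand-in for QEMU's tlb_fill(): refills the entry for addr, or
     * raises a guest exception and does not return (e.g. via longjmp). */
    extern void tlb_fill_write(uint64_t addr, uintptr_t retaddr);

    /* Ensure a guest write to addr is permitted before starting a
     * multi-part store; afterwards the TLB holds a valid entry. */
    static void probe_write(uint64_t addr, uintptr_t retaddr)
    {
        size_t idx = (addr >> 12) & (TLB_SIZE - 1);
        if ((addr & PAGE_MASK) != tlb[idx].addr_write) {
            tlb_fill_write(addr, retaddr);   /* may not return */
        }
    }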
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
|
|
These modifiers control, on a per-memory-op basis, whether
unaligned memory accesses are allowed. The default setting
reflects the target's definition of ALIGNED_ONLY.
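A sketch of how the flags are encoded; the MO_ALIGN/MO_UNALN names match the patch, but the exact bit value here is illustrative:

    /* One bit in the memory-op value selects the non-default alignment
     * behaviour; its meaning flips with the target's ALIGNED_ONLY. */
    #define MO_AMASK  0x10
    #ifdef ALIGNED_ONLY
    #define MO_ALIGN  0          /* default: alignment already enforced */
    #define MO_UNALN  MO_AMASK   /* explicitly permit unaligned access */
    #else
    #define MO_ALIGN  MO_AMASK   /* explicitly require alignment */
    #define MO_UNALN  0          /* default: unaligned allowed */
    #endif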
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
The extra information is not yet used but it is now available.
This requires minor changes through all of the tcg backends.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Add a MemTxAttrs field to the IOTLB, and allow target-specific
code to set it via a new tlb_set_page_with_attrs() function;
pass the attributes through to the device when making IO accesses.
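A sketch of the resulting interface, with simplified types standing in for CPUState/hwaddr (the parameter list is an approximation):

    #include <stdint.h>

    typedef struct { unsigned unspecified : 1; unsigned secure : 1; } MemTxAttrs;
    #define MEMTXATTRS_UNSPECIFIED ((MemTxAttrs){ .unspecified = 1 })

    void tlb_set_page_with_attrs(void *cpu, uint64_t vaddr, uint64_t paddr,
                                 MemTxAttrs attrs, int prot, int mmu_idx,
                                 uint64_t size);

    /* The old entry point becomes a wrapper passing default attributes. */
    static inline void tlb_set_page(void *cpu, uint64_t vaddr, uint64_t paddr,
                                    int prot, int mmu_idx, uint64_t size)
    {
        tlb_set_page_with_attrs(cpu, vaddr, paddr, MEMTXATTRS_UNSPECIFIED,
                                prot, mmu_idx, size);
    }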
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
|
|
Make the CPU iotlb a structure rather than a plain hwaddr;
this will allow us to add transaction attributes to it.
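Roughly, each iotlb slot goes from a bare address to a small struct (details simplified; MemTxAttrs as in the sketch above):

    #include <stdint.h>

    typedef struct { unsigned unspecified : 1; } MemTxAttrs;  /* as above */

    /* Before: each iotlb slot was a plain hwaddr. After: */
    typedef struct CPUIOTLBEntry {
        uint64_t   addr;    /* the old hwaddr value */
        MemTxAttrs attrs;   /* room for transaction attributes */
    } CPUIOTLBEntry;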
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
|
|
Rather than retaining io_mem_read/write as simple wrappers around
the memory_region_dispatch_read/write functions, make the latter
public and change all the callers to use them, since we need to
touch all the callsites anyway to add MemTxAttrs and MemTxResult
support. Delete io_mem_read and io_mem_write entirely.
(All the callers currently pass MEMTXATTRS_UNSPECIFIED
and convert the return value back to bool or ignore it.)
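A sketch of a converted call site, with simplified stand-in types (the real MemoryRegion and dispatch live in QEMU's memory core):

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct { unsigned unspecified : 1; } MemTxAttrs;
    #define MEMTXATTRS_UNSPECIFIED ((MemTxAttrs){ .unspecified = 1 })
    typedef int MemTxResult;
    #define MEMTX_OK 0

    MemTxResult memory_region_dispatch_read(void *mr, uint64_t addr,
                                            uint64_t *pval, unsigned size,
                                            MemTxAttrs attrs);

    /* A converted caller: the old bool-returning io_mem_read(mr, addr,
     * &val, size) becomes a dispatch call with attributes, and the
     * MemTxResult is mapped back to the old bool where needed. */
    static bool read_ok(void *mr, uint64_t addr, uint64_t *val, unsigned size)
    {
        return memory_region_dispatch_read(mr, addr, val, size,
                                           MEMTXATTRS_UNSPECIFIED) == MEMTX_OK;
    }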
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
|
|
After the previous patch, TLBs will be flushed on every change to
the memory mapping. This patch augments that with synchronization
of the MemoryRegionSections referred to in the iotlb array.
With this change, it is guaranteed that iotlb_to_region will access
the correct memory map, even once the TLB is accessed outside
the BQL.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
New MIPS features depend on the access type, and an enum is more convenient
than using the numbers directly.
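The enum presumably looks like this, with values matching the raw numbers previously used (0 = load, 1 = store, 2 = fetch):

    typedef enum MMUAccessType {
        MMU_DATA_LOAD  = 0,
        MMU_DATA_STORE = 1,
        MMU_INST_FETCH = 2,
    } MMUAccessType;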
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
|
|
QEMU system mode page table walks are expensive. Measured by running
qemu-system-x86_64 under Intel PIN, a TLB miss and a walk of the
4-level page tables of a guest Linux OS take ~450 x86 instructions on
average.
The QEMU system mode TLB is implemented as a directly-mapped hashtable.
This structure suffers from conflict misses. Increasing the
associativity of the TLB may not be the solution to conflict misses, as
all the ways may have to be walked serially.
A victim TLB is a TLB used to hold translations evicted from the
primary TLB upon replacement. The victim TLB lies between the main TLB
and its refill path, and is of greater associativity (fully
associative in this patch). It takes longer to look up the victim TLB,
but it is likely cheaper than a full page table walk. The memory
translation path is changed as follows (a sketch of the victim lookup
follows the two lists):
Before Victim TLB:
1. Inline TLB lookup
2. Exit code cache on TLB miss.
3. Check for unaligned, IO accesses
4. TLB refill.
5. Do the memory access.
6. Return to code cache.
After Victim TLB:
1. Inline TLB lookup
2. Exit code cache on TLB miss.
3. Check for unaligned, IO accesses
4. Victim TLB lookup.
5. If victim TLB misses, TLB refill
6. Do the memory access.
7. Return to code cache
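A minimal sketch of steps 4 and 5, assuming a small fully associative victim array and simplified stand-in types (QEMU's real code differs in detail):

    #include <stdint.h>
    #include <stdbool.h>

    #define VTLB_SIZE 8   /* fully associative victim entries */

    typedef struct {
        uint64_t  page_tag;   /* page-aligned guest virtual address */
        uintptr_t addend;     /* host = guest + addend, for RAM pages */
    } TLBEntry;

    static TLBEntry vtlb[VTLB_SIZE];

    /* On a hit, promote the victim entry into the main TLB slot and
     * demote the evicted main entry into the victim TLB (a swap). */
    static bool victim_tlb_hit(TLBEntry *main_entry, uint64_t page_tag)
    {
        for (int i = 0; i < VTLB_SIZE; i++) {
            if (vtlb[i].page_tag == page_tag) {
                TLBEntry tmp = *main_entry;
                *main_entry = vtlb[i];
                vtlb[i] = tmp;
                return true;
            }
        }
        return false;   /* miss: fall through to the full TLB refill */
    }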
The advantage is that a victim TLB adds associativity to a directly
mapped TLB, and thus potentially fewer page table walks, while still
keeping the time taken to flush within reasonable limits. However,
placing a victim TLB before the refill path lengthens that path, as the
victim TLB is consulted before the TLB refill. The performance results
demonstrate that the pros outweigh the cons.
Some performance results, taken on SPECINT2006 train
datasets, a kernel boot, and the qemu configure script on an
Intel(R) Xeon(R) CPU E5620 @ 2.40GHz Linux machine, are shown in the
Google Doc link below.
https://docs.google.com/spreadsheets/d/1eiItzekZwNQOal_h-5iJmC4tMDi051m9qidi5_nwvH4/edit?usp=sharing
In summary, the victim TLB improves the performance of qemu-system-x86_64
by 11% on average across SPECINT2006, the kernel boot, and the qemu
configure script, with the highest improvement of 26% in 456.hmmer. The
victim TLB does not cause a performance degradation in any of the
measured benchmarks. Furthermore, the implemented victim TLB is
architecture independent and is expected to benefit other architectures
in QEMU as well.
Although there are measurement fluctuations, the performance
improvement is significant and by no means within the noise.
Signed-off-by: Xin Tong <trent.tong@gmail.com>
Message-id: 1407202523-23553-1-git-send-email-trent.tong@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
It is only included in cputlb.c now.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Add GETPC_EXT, which is used by MMU helpers to selectively calculate the code
address of the guest memory access, whether called from qemu_ld/st optimized
code or from a C function. Currently it supports only i386 and x86-64 hosts.
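For reference, the plain GETPC() used by C-called helpers is essentially the compiler's return address; GETPC_EXT additionally has to recognize returns from generated qemu_ld/st code (sketch; the host-specific recovery is omitted):

    /* Return address of the caller, biased by -1 so it points inside
     * the calling instruction rather than at the one after it. */
    #define GETPC()  ((uintptr_t)__builtin_return_address(0) - 1)

    /* GETPC_EXT (i386/x86-64 only here): if the return address lies in
     * the code cache, decode the embedded guest-code PC from the
     * generated qemu_ld/st sequence; otherwise the call came from a C
     * function and GETPC() is already the answer. */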
Signed-off-by: Yeongkyoon Lee <yeongkyoon.lee@samsung.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
target_phys_addr_t is unwieldy, violates the C standard (_t suffixes are
reserved), and its purpose doesn't match the name (most target_phys_addr_t
addresses are not target specific). Replace it with a finger-friendly,
standards-conformant hwaddr.
Outstanding patchsets can be fixed up with the command
git rebase -i --exec 'find -name "*.[ch]"
| xargs sed -i s/target_phys_addr_t/hwaddr/g' origin
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
Now that CONFIG_TCG_PASS_AREG0 is enabled for all targets,
remove dead code and support for the !CONFIG_TCG_PASS_AREG0 case.
Remove dyngen-exec.h and all references to it. Although hw/spapr_hcall.c
includes it, it does not seem to use it.
Remove the unused HELPER_CFLAGS.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|
|
w64 requires uintptr_t.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
|
|
Use uintptr_t instead of void * or unsigned long in
several op-related functions, in env->mem_io_pc, and in the
GETPC() macro.
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Optionally, make memory access helpers take a parameter for CPUState
instead of relying on global env.
On most targets, perform simple moves to reorder registers. On i386,
switch from regparm(3) calling convention to standard stack-based
version.
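A sketch of the signature change for one helper, following the pattern described (names and types are illustrative stand-ins):

    #include <stdint.h>

    typedef uint32_t target_ulong;            /* stand-in */
    typedef struct CPUArchState CPUArchState; /* opaque here */

    /* Before (a global register variable carried the CPU state):
     *   register CPUArchState *env asm(AREG0);
     *   uint32_t helper_ldl_mmu(target_ulong addr, int mmu_idx);
     * After (the CPU state is an explicit first parameter): */
    uint32_t helper_ldl_mmu(CPUArchState *env, target_ulong addr, int mmu_idx);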
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Use stack based calling convention (GCC default) for interfacing with
generated code instead of register based convention (regparm(3)).
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Instead of indirecting via io_mem_region, dispatch directly
through the MemoryRegion obtained from the iotlb or phys_page_find().
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
A step towards eliminating io indices.
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
We no longer use any of the lower bits of a ram_addr, so we might as well
use them for the io table index. This increases the number of potential
I/O handlers by a factor of 8.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|
|
Convert the fixed-address IO_MEM_RAM, IO_MEM_ROM, IO_MEM_UNASSIGNED,
and IO_MEM_NOTDIRTY io handlers to MemoryRegions. These aren't real
regions, since they are never added to the memory hierarchy, but they
allow reuse of the dispatch functionality.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|
|
The code sometimes uses range comparisons on io indexes (e.g.
index <= IO_MEM_ROM). Avoid these as they make moving to objects harder.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|
|
Currently mmio access goes directly to the io_mem_{read,write} arrays.
In preparation for eliminating them, add indirection via a function.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|
|
Pass a CPUState pointer to tlb_fill() instead of architecture-local
cpu_single_env hacks.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Add some comments to describe each file.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Historically the qemu tlb "addend" field was used for both RAM and IO accesses,
so needed to be able to hold both host addresses (unsigned long) and guest
physical addresses (target_phys_addr_t). However since the introduction of
the iotlb field it has only been used for RAM accesses.
This means we can change the type of addend to unsigned long, and remove
associated hacks in the big-endian TCG backends.
We can also remove the host dependence from target_phys_addr_t.
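The resulting entry looks roughly like this (target_ulong stands in for the guest virtual address type in this sketch):

    #include <stdint.h>

    typedef uint32_t target_ulong;   /* stand-in for this sketch */

    /* addend is now a plain host offset and no longer needs to be able
     * to hold a guest physical address. */
    typedef struct CPUTLBEntry {
        target_ulong addr_read;
        target_ulong addr_write;
        target_ulong addr_code;
        unsigned long addend;   /* host address = guest address + addend */
    } CPUTLBEntry;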
Signed-off-by: Paul Brook <paul@codesourcery.com>
|
|
Arrange various declarations so that non-CPU code can also access
them; adjust users.
Move CPU specific code to cpus.c.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
When splitting up unaligned IO accesses, ld calls slow_ld which was
clobbering retaddr.
AFAIK the problem only shows up when running emulations with -icount
that may abort TB execution on IO accesses.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
|
|
At the very least, a change like this requires discussion on the list.
The naming convention is goofy and it causes a massive merge problem. Something
like this _must_ be presented on the list first so people can provide input
and cope with it.
This reverts commit 99a0949b720a0936da2052cb9a46db04ffc6db29.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
Some not-so-obvious bits (slirp and Xen) were left alone for the time
being.
Signed-off-by: malc <av1474@comtv.ru>
|
|
kqemu introduces a number of restrictions on the i386 target. The worst is that
it prevents large memory from working in the default build.
Furthermore, kqemu is fundamentally flawed in a number of ways. It relies on
the TSC as a time source, which will not be reliable on a multiprocessor
system in userspace. Since most modern processors are multicore, this severely
limits the utility of kqemu.
kvm is a viable alternative for people looking to accelerate qemu and has the
benefit of being supported by the upstream Linux kernel. If someone can
implement workarounds to remove the restrictions introduced by kqemu, I'm
happy to avoid and/or revert this patch.
N.B. kqemu will still function in the 0.11 series but this patch removes it from
the 0.12 series.
Paul, please Ack or Nack this patch.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Basically a recursive ":%s/USE_KQEMU/CONFIG_KQEMU/g".
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7189 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
The attached patch updates the FSF address in the GPL/LGPL boilerplate
in most GPL/LGPLed files, and also in COPYING.LIB.
Signed-off-by: Stuart Brady <stuart.brady@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6162 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
Analogously to write accesses, we have to save the memory address on
read accesses as well, in order to support read watchpoints.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5739 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@4799 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
The IO index is now stored in its own field, instead of being wedged
into the vaddr field. This eliminates the ROMD and watchpoint host
pointer weirdness. The IO index space is expanded by 1 bit, and
several additional bits are made available in the TLB vaddr field.
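Schematically, with stand-in names, types, and sizes:

    #include <stdint.h>

    typedef uint32_t target_ulong;          /* stand-ins for this sketch */
    typedef uint64_t target_phys_addr_t;
    #define NB_MMU_MODES 2
    #define CPU_TLB_SIZE 256

    /* The IO index moves out of the vaddr tag into its own parallel
     * array; the freed vaddr bits become available as flag bits. */
    target_ulong        tlb_vaddr[NB_MMU_MODES][CPU_TLB_SIZE];  /* tag bits only */
    target_phys_addr_t  iotlb[NB_MMU_MODES][CPU_TLB_SIZE];      /* own IO field  */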
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@4704 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@3935 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
Add a note from Fabrice in the slow_st template.
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@3669 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
(patch from TeLeMan).
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@3665 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
allowing support of more than 2 mmu access modes.
Add a backward-compatibility is_user variable in target code where needed.
Implement a per-target cpu_mmu_index function, avoiding duplicated code
and #ifdef TARGET_xxx in the softmmu core functions.
Implement per-target MMU mode definitions. As an example, add a PowerPC
hypervisor mode definition and Alpha executive and kernel mode definitions.
Optimize the PowerPC case, precomputing mmu_idx when the MSR register
changes and using the same definition in the translation code.
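A sketch of the per-target hook this introduces (PowerPC-flavoured; the struct and field names are hypothetical):

    /* Hypothetical per-target definition; the softmmu core just calls
     * cpu_mmu_index(env) with no #ifdef TARGET_xxx. */
    typedef struct { int mmu_idx; /* precomputed on MSR writes */ } CPUPPCState;

    static inline int cpu_mmu_index(CPUPPCState *env)
    {
        return env->mmu_idx;
    }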
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@3384 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
the regex.
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@3177 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@3173 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@1752 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@1688 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@1676 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@1659 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@1526 c046a42c-6fe2-441c-8c8c-71466251a162
|
|
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@1189 c046a42c-6fe2-441c-8c8c-71466251a162
|