When running -cpu on a POWER7 system with PR KVM, we mask out the 1TB
MMU capability from the MMU type mask, but not the AMR bit.
This leads to us having a new MMU type that we don't check for in our
MMU management functions.
Add the new type, so that we don't have to worry about breakage there.
We're not going to use the TCG MMU management in that case anyway.
The long-term fix for this will be to move all these MMU management
functions to class callbacks.
Signed-off-by: Alexander Graf <agraf@suse.de>
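A minimal sketch of the idea, with illustrative names and bit values
(QEMU's actual flags differ): the MMU model is a flag word, so masking a
capability bit out of an existing model yields a new value that every
switch over MMU types must recognize.

    /* Illustrative flag layout, not QEMU's actual values. */
    #define MMU_64     0x00010000   /* 64-bit hash MMU     */
    #define MMU_1TSEG  0x00020000   /* 1TB segment support */
    #define MMU_AMR    0x00040000   /* Authority Mask Reg. */

    enum {
        MMU_TYPE_2_06  = MMU_64 | MMU_1TSEG | MMU_AMR | 0x3, /* full 2.06 */
        /* New: 2.06 with AMR but no 1TB segments, as left by PR KVM. */
        MMU_TYPE_2_06a = MMU_64 | MMU_AMR | 0x3,
    };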
|
|
After previous cleanups, the many scattered checks of env->mmu_model in
the ppc MMU implementation have, at least for "classic" hash MMUs, been
reduced (almost) to a single switch at the top of
cpu_ppc_handle_mmu_fault().
An explicit switch is still a pretty ugly way of handling this though. Now
that Andreas Färber's CPU QOM cleanups for ppc have gone in, it's quite
straightforward to instead make the handle_mmu_fault function a QOM method
on the CPU object.
This patch implements such a scheme, initializing the method pointer at
the same time as the mmu_model variable. We need to keep the latter around
for now, because of the MMU types (BookE, 4xx, et al) which haven't been
converted to the new scheme yet, and also for a few other uses. It would
be good to clean those up eventually.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
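The shape of the change, as a sketch (QEMU's class struct is
PowerPCCPUClass; the exact signature and the enum/handler names here
are assumptions):

    typedef struct PowerPCCPUClass {
        /* ... other class fields ... */
        int (*handle_mmu_fault)(CPUPPCState *env, target_ulong eaddr,
                                int rwx, int mmu_idx);
    } PowerPCCPUClass;

    /* Set at the same time as mmu_model, e.g. for a 64-bit hash CPU: */
    pcc->mmu_model = POWERPC_MMU_64B;
    pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;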
|
|
For softmmu builds the interface from the generic code to the target
specific MMU implementation is through the tlb_fill() function. For ppc
this is currently in mem_helper.c, whereas it would make more sense in
mmu_helper.c. This patch moves it, which also allows
cpu_ppc_handle_mmu_fault() to become a local function in mmu_helper.c
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
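Roughly the shape of the hook after the move, assuming the tlb_fill()
signature of this period (likely/unlikely are QEMU's branch hints):

    void tlb_fill(CPUPPCState *env, target_ulong addr, int is_write,
                  int mmu_idx, uintptr_t retaddr)
    {
        int ret = cpu_ppc_handle_mmu_fault(env, addr, is_write, mmu_idx);

        if (unlikely(ret != 0)) {
            if (likely(retaddr)) {
                cpu_restore_state(env, retaddr);  /* resync guest PC */
            }
            helper_raise_exception_err(env, env->exception_index,
                                       env->error_code);
        }
    }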
|
|
mmu_helper.c is, for obvious reasons, almost entirely concerned with
softmmu builds of qemu. However, it does contain one stub function which
is used when CONFIG_USER_ONLY=y - the user-only version of
cpu_ppc_handle_mmu_fault, which always triggers an exception. The entire
rest of the file is surrounded by #if !defined(CONFIG_USER_ONLY).
We clean this up by moving the user only stub into its own new file,
removing the ifdefs and building mmu_helper.c only when CONFIG_SOFTMMU
is set. This also lets us remove the #define of cpu_handle_mmu_fault to
cpu_ppc_handle_mmu_fault - that name is only used from generic code for
user only - so we just name our split user version by the generic name.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
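The split-out stub is small; roughly (SPR and exception names as in
QEMU, the exact error codes are assumed):

    int cpu_handle_mmu_fault(CPUPPCState *env, target_ulong address,
                             int rw, int mmu_idx)
    {
        int exception, error_code;

        if (rw == 2) {                      /* instruction fetch */
            exception = POWERPC_EXCP_ISI;
            error_code = 0x40000000;
        } else {                            /* data load/store   */
            exception = POWERPC_EXCP_DSI;
            error_code = 0x40000000;
            if (rw) {
                error_code |= 0x02000000;   /* fault on a store  */
            }
            env->spr[SPR_DAR] = address;
            env->spr[SPR_DSISR] = error_code;
        }
        env->exception_index = exception;
        env->error_code = error_code;
        return 1;                           /* always an exception */
    }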
|
|
mmu_ctx_t is currently defined in cpu.h. However it is used for temporary
information relating to mmu translation, and is only used in mmu_helper.c
and (now) mmu-hash{32,64}.c. Furthermore it contains information which
should be specific to particular MMU types. Therefore, move its definition
to mmu_helper.c. mmu-hash{32,64}.c are converted to use new data types
private to the relevant MMUs (identical to mmu_ctx_t for now, but that will
change in future patches).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
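For reference, a sketch of the structure being moved (field set close
to QEMU's of the period):

    typedef struct mmu_ctx_t {
        hwaddr raddr;        /* real address                  */
        hwaddr eaddr;        /* effective address             */
        int prot;            /* page protection bits          */
        hwaddr hash[2];      /* primary/secondary hash values */
        target_ulong ptem;   /* virtual segment ID | API      */
        int key;             /* access key                    */
        int nx;              /* non-execute area              */
    } mmu_ctx_t;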
|
|
The functions for looking up BATs (Block Address Translation - essentially
a level 0 TLB) are shared between the classic 32-bit hash MMUs and the
6xx style software loaded TLB implementations.
This patch splits out a copy for the 32-bit hash MMUs, to facilitate
cleaning it up. The remaining version is left, but cleaned up slightly
to no longer deal with PowerPC 601 peculiarities (601 has a hash MMU).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
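A sketch of the 32-bit hash copy's match logic (masks follow the OEA
BAT layout; BEPI/BRPN bits under BL are assumed zero, as the
architecture requires; the function name is illustrative):

    static int hash32_bat_hit(uint32_t batu, uint32_t batl,
                              uint32_t eaddr, uint32_t *raddr)
    {
        uint32_t bl   = (batu & 0x00001FFC) << 15;  /* block length mask */
        uint32_t bepi = batu & 0xFFFE0000;          /* eff. page index   */

        if ((eaddr & 0xFFFE0000 & ~bl) == bepi) {
            uint32_t brpn = batl & 0xFFFE0000;      /* real page number  */
            *raddr = brpn | (eaddr & bl) | (eaddr & 0x0001FFFF);
            return 1;
        }
        return 0;                                   /* no BAT hit        */
    }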
|
|
The get_pteg_offset() helper function is currently shared between 32-bit
and 64-bit hash mmus, taking a parameter for the hash pte size. In the
64-bit paths, it's only called in one place, and it's a trivial
calculation. This patch, therefore, open codes it for 64-bit. The
remaining version, which is used in two places is made 32-bit only and
moved to mmu-hash32.c.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
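The calculation in question, assuming QEMU-style names (a PTEG is eight
PTEs, so the group size is 8 * the PTE size):

    /* 32-bit version, now private to mmu-hash32.c: */
    static hwaddr get_pteg_offset32(CPUPPCState *env, hwaddr hash)
    {
        return (hash * HASH_PTEG_SIZE_32) & env->htab_mask;
    }

    /* 64-bit call site, open coded: */
    pteg_off = (hash * HASH_PTEG_SIZE_64) & env->htab_mask;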
|
|
The newly separated paths for 64-bit hash MMUs rely on several helper
functions which are still shared with 32-bit hash MMUs: pp_check(), check_prot() and
pte_update_flags(). While these don't have ugly ifdefs on the mmu type,
they're not very well thought out, so sharing them impedes cleaning up the
hash mmu paths. For now, put near-duplicate versions into mmu-hash64.c and
mmu-hash32.c, leaving the old version in mmu_helper.c for 6xx software
loaded tlb implementations. The hash 32 and software loaded
implementations are simplified slightly, using the fact that no 32-bit CPUs
implement the 3rd page protection bit.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
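A sketch of the 32-bit copy, using the classic two-bit key/PP encoding
(prot is a PAGE_READ/PAGE_WRITE bitmask; the function name is assumed):

    static int hash32_pp_prot(int key, int pp)
    {
        if (key == 0) {
            /* privileged key: only pp == 3 is read-only */
            return (pp == 3) ? PAGE_READ : PAGE_READ | PAGE_WRITE;
        }
        switch (pp) {
        case 0:  return 0;                        /* no access */
        case 1:
        case 3:  return PAGE_READ;                /* read only */
        default: return PAGE_READ | PAGE_WRITE;   /* pp == 2   */
        }
    }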
|
|
cpu_get_phys_page_debug() is a trivial wrapper around
get_physical_address(). But even the signature of
get_physical_address() has some things we'd like to clean up on a
per-mmu basis, so this patch moves the test on mmu model out to
cpu_get_phys_page_debug(), moving the version for 64-bit hash MMUs out
to mmu-hash64.c and the version for 32-bit hash MMUs to mmu-hash32.c
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
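After the move, the wrapper is just the dispatch (enum and helper names
as used elsewhere in this series; the legacy fallback name is assumed):

    hwaddr cpu_get_phys_page_debug(CPUPPCState *env, target_ulong addr)
    {
        switch (env->mmu_model) {
        case POWERPC_MMU_64B:
            return ppc_hash64_get_phys_page_debug(env, addr);
        case POWERPC_MMU_32B:
            return ppc_hash32_get_phys_page_debug(env, addr);
        default:
            /* remaining MMU types keep the shared path for now */
            return get_phys_page_debug_legacy(env, addr);
        }
    }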
|
|
cpu_ppc_handle_mmu_fault() calls get_physical_address() (whose behaviour
depends on MMU type) then, if that fails, issues an appropriate exception
- which again has a number of dependencies on MMU type.
This patch starts converting cpu_ppc_handle_mmu_fault() to have a
single switch on MMU type, calling MMU specific fault handler
functions which deal with both translation and exception delivery
appropriately for the MMU type. We convert 32-bit and 64-bit hash
MMUs to this new model, but the existing code is left in place for
other MMU types for now.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
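Sketch of the new top level (the per-MMU handlers both translate and
raise the ISI/DSI themselves on failure; the legacy helper name is
assumed):

    int cpu_ppc_handle_mmu_fault(CPUPPCState *env, target_ulong address,
                                 int rw, int mmu_idx)
    {
        switch (env->mmu_model) {
        case POWERPC_MMU_64B:
        case POWERPC_MMU_2_06:
            return ppc_hash64_handle_mmu_fault(env, address, rw, mmu_idx);
        case POWERPC_MMU_32B:
            return ppc_hash32_handle_mmu_fault(env, address, rw, mmu_idx);
        default:
            /* other MMU types: existing translate-then-raise code */
            return mmu_fault_legacy(env, address, rw, mmu_idx);
        }
    }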
|
|
Depending on the MSR state, for 64-bit hash MMUs, get_physical_address
can either call check_physical (which has further tests for mmu type)
or get_segment64. Similarly, for 32-bit hash MMUs we can either call
check_physical or get_bat() and get_segment32().
This patch splits off the whole get_physical_address() path for hash
MMUs into 32-bit and 64-bit versions, handling real mode correctly for
such MMUs without going to check_physical and rechecking the mmu type.
Correspondingly, the hash MMU specific paths in check_physical() are
removed.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
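Sketch of how the split 64-bit path starts (msr_ir/msr_dr are the MSR
translation bits as QEMU exposes them; rwx == 2 means instruction
fetch):

    if ((rwx == 2 && !msr_ir) || (rwx != 2 && !msr_dr)) {
        /* real mode: top nibble ignored, no check_physical() detour */
        ctx->raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
        ctx->prot  = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
        return 0;
    }
    return get_segment64(env, ctx, eaddr, rwx, access_type);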
|
|
Currently get_physical_address() first checks to see if translation is
enabled in the MSR, then in the translation on case switches on the mmu
type. Except that for BookE MMUs, translation is always on, and so it
has to switch in the "translation off" case as well and do the same thing
as the translation on path for those MMUs. Plus, even translation off
doesn't behave exactly the same on the various MMU types so there are
further mmu type checks in the "translation off" path.
As a first step to cleaning this up, this patch moves the switch on mmu
type to the top level, then makes the translation on/off check just for
those mmu types where it is meaningful.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The poorly named get_segment() function handles most of the address
translation logic for hash-based MMUs. It has many ugly conditionals on
whether the MMU is 32-bit or 64-bit.
This patch splits the function into 32 and 64-bit versions, using the
switch on mmu_type that's already in the caller
(get_physical_address()) to select the right one. Most of the
original function remains in mmu_helper.c to support the 6xx software
loaded TLB implementations (cleaning those up is a project for another
day).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
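Sketch of the 32-bit half after the split, assuming the classic segment
register layout: the SR is selected by the top four effective-address
bits, and its VSID feeds the primary hash.

    target_ulong sr = env->sr[eaddr >> 28];
    int ks          = (sr >> 30) & 1;                  /* supervisor key */
    int kp          = (sr >> 29) & 1;                  /* user key       */
    uint32_t vsid   = sr & 0x00FFFFFF;                 /* 24-bit VSID    */
    uint32_t hash   = vsid ^ ((eaddr >> 12) & 0xFFFF); /* primary hash   */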
|
|
32-bit and 64-bit hash MMU implementations currently share a find_pte
function. This results in a whole bunch of ugly conditionals in the shared
function, and not all that much actually shared code.
This patch separates out the 32-bit and 64-bit versions, putting them
in mmu-hash64.c and mmu-hash32.c, and removes the conditionals from
both versions.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
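The 64-bit version then reduces to a straight scan of one PTEG, roughly
(macro names are assumptions in QEMU's style; HASH_PTE_SIZE_64 is 16
bytes):

    static hwaddr hash64_find_pte(hwaddr pteg_off, uint64_t ptem,
                                  uint64_t *pte0p, uint64_t *pte1p)
    {
        for (int i = 0; i < 8; i++) {               /* 8 PTEs per PTEG */
            hwaddr off    = pteg_off + i * HASH_PTE_SIZE_64;
            uint64_t pte0 = ldq_phys(off);
            uint64_t pte1 = ldq_phys(off + 8);

            /* valid, and the abbreviated VPN (plus H bit) matches */
            if ((pte0 & HPTE64_V_VALID)
                && (pte0 & HPTE64_V_AVPN_MASK) == ptem) {
                *pte0p = pte0;
                *pte1p = pte1;
                return off;
            }
        }
        return -1;                                  /* not found */
    }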
|
|
Currently, support for both 32-bit and 64-bit hash MMUs shares an
implementation of pte_check. But there are enough differences that this
means the shared function has several very ugly conditionals on "is_64b".
This patch cleans things up by separating out the 64-bit version
(putting it into mmu-hash64.c) and the 32-bit hash version (putting it
in mmu-hash32.c). Another copy remains in mmu_helper.c, which is used
for the 6xx software loaded TLB paths.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
As a first step to disentangling the handling for 64-bit hash MMUs from
the rest, we move the code handling the Segment Lookaside Buffer (SLB)
(which only exists on 64-bit hash MMUs) into a new mmu-hash64.c file.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
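The core of the moved code is the ESID match, roughly (ppc_slb_t,
SLB_ESID_V and slb_nr follow QEMU's naming; 1TB segments are ignored
here):

    static ppc_slb_t *slb_lookup(CPUPPCState *env, target_ulong eaddr)
    {
        /* 256MB segment: compare effective segment ID + valid bit */
        uint64_t esid = (eaddr & ~0xFFFFFFFULL) | SLB_ESID_V;

        for (int n = 0; n < env->slb_nr; n++) {
            ppc_slb_t *slb = &env->slb[n];

            if (slb->esid == esid) {
                return slb;
            }
        }
        return NULL;
    }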
|
|
One LOG_MMU statement in mmu_helper.c has an odd check on the effective
address being translated. I can see no reason for this; I suspect it was
a debugging hack from long ago. This patch removes it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
This removes the never-used pte64_invalidate() function, and makes
ppcmas_tlb_check() static, since it's only used within mmu_helper.c.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The PowerPC 620 was the very first 64-bit PowerPC implementation, but
hardly anyone ever actually used the chips. qemu notionally supports the
620, but since we don't actually have code to implement the segment table,
the support is broken (quite likely in other ways too).
This patch, therefore, removes all remaining pieces of 620 support, to
stop it cluttering up the platforms we actually care about. This includes
removing support for the ASR register, used only on segment table based
machines.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Since HWADDR_PRIx is always the same now, use %016 for TARGET_PPC64 and
%08 for common code. This may slightly change the ppc64 debug output.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
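For example (HWADDR_PRIx is qemu's printf format for hwaddr; the
printed fields are illustrative):

    qemu_log("htab base %016" HWADDR_PRIx "\n", env->htab_base); /* ppc64  */
    qemu_log("BAT raddr %08" HWADDR_PRIx "\n", raddr);           /* common */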
|
|
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
For all our PPC targets the physical address space is at least
36 bits, so drop an unnecessary preprocessor conditional check
on TARGET_PHYS_ADDR_SPACE_BITS (erroneously introduced as part
of the change from target_phys_addr_t to hwaddr). This brings
this bit of code into line with the way we handle the other
cases which were originally checking TARGET_PHYS_ADDR_BITS in
order to avoid compiler complaints about overflowing a 32 bit type.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
target_phys_addr_t is unwieldy, violates the C standard (_t suffixes are
reserved) and its purpose doesn't match the name (most target_phys_addr_t
addresses are not target specific). Replace it with a finger-friendly,
standards conformant hwaddr.
Outstanding patchsets can be fixed up with the command
    git rebase -i --exec \
        "find . -name '*.[ch]' | xargs sed -i 's/target_phys_addr_t/hwaddr/g'" \
        origin
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
The hassle and compile time overhead of maintaining both 32-bit and 64-bit
capable source isn't worth the tiny performance advantage which is seen on
a minority of configurations. Switch to compiling libhw only once, with
target_phys_addr_t unconditionally typedefed to uint64_t.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
More recent Power server chips (i.e. based on the 64 bit hash MMU)
support more than just the traditional 4k and 16M page sizes. This
can get quite complicated, because which page sizes are supported,
which combinations are supported within an MMU segment and how these
page sizes are encoded both in the SLB entry and the hash PTE can vary
depending on the CPU model (they are not specified by the
architecture). In addition the firmware or hypervisor may not permit
use of certain page sizes, for various reasons. Whether various page
sizes are supported on KVM, for example, depends on whether the PR or
HV variant of KVM is in use, and on the page size of the memory
backing the guest's RAM.
This patch adds information to the CPUState and cpu defs to describe
the supported page sizes and encodings. Since TCG does not yet
support any extended page sizes, we just set this to NULL in the
static CPU definitions, expanding this to the default 4k and 16M page
sizes when we initialize the cpu state. When using KVM, however, we
instead determine available page sizes using the new
KVM_PPC_GET_SMMU_INFO call. For old kernels without that call, we use
some defaults, with some guesswork which should do the right thing for
existing HV and PR implementations. The fallback might not be correct
for future versions, but that's ok, because they'll have
KVM_PPC_GET_SMMU_INFO.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
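The added description is essentially two nested tables, close to this
shape (the array bound name is assumed):

    struct ppc_one_page_size {
        uint32_t page_shift;    /* page shift (0 if unused)          */
        uint32_t pte_enc;       /* encoding in the hash PTE (>> 12)  */
    };

    struct ppc_one_seg_page_size {
        uint32_t page_shift;    /* base page shift of segment (or 0) */
        uint32_t slb_enc;       /* SLB encoding for Book3S           */
        struct ppc_one_page_size enc[PPC_PAGE_SIZES_MAX_SZ];
    };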
|
|
The size of the EPN field in MAS2 depends on the page size. This patch
adds a mask to discard invalid bits in the EPN field.
Definition of EPN field from e500v2 RM:
EPN Effective page number: Depending on page size, only the bits
associated with a page boundary are valid. Bits that represent offsets
within a page are ignored and should be cleared.
There is a similar (but more complicated) definition in PowerISA V2.06.
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
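Sketch of the masking on a TLB write (MAS2_ATTRIB_MASK and the
page_shift plumbing are assumptions): attribute bits are kept, and only
EPN bits at or above the page boundary survive.

    static uint64_t mas2_epn_mask(int page_shift)
    {
        return ~((1ULL << page_shift) - 1);
    }

    tlb->mas2 = (val & mas2_epn_mask(page_shift))   /* page-aligned EPN */
              | (val & MAS2_ATTRIB_MASK);           /* W/I/M/G/E etc.   */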
|
|
Remove useless wrappers. In some cases 'int' parameters are
changed to uint32_t.
Make internal functions static.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
[agraf: fix kvm compilation]
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Move more MMU helpers from helper.c to mmu_helper.c.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
[update to current helper.c state]
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
When the code is moved together by the next patch, the compiler
detects a possible uninitialized variable use. Avoid the warning
by initializing the variables.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
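For instance, a helper's signature before and after (helper name from
the ppc target; the exact parameter list is assumed):

    /* before: state reached through the implicit AREG0 global */
    void helper_store_sr(target_ulong sr_num, target_ulong val);

    /* after: the state pointer is explicit, passed as cpu_env by TCG */
    void helper_store_sr(CPUPPCState *env, target_ulong sr_num,
                         target_ulong val);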
|
|
Move MMU, TLB, SLB and BAT ops to mmu_helper.c.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
|