path: root/target
Age | Commit message | Author | Files, lines (-/+)
2017-10-12 | target/arm: Implement SG instruction corner cases | Peter Maydell | 1 file, -1/+22

The common situation of the SG instruction is that it is executed from S&NSC memory by a CPU in NS state. That case is handled by v7m_handle_execute_nsc(). However the instruction also has defined behaviour in a couple of other cases:
* SG instruction in NS memory (behaves as a NOP)
* SG in S memory but CPU already secure (clears IT bits and does nothing else)
* SG instruction in v8M without Security Extension (NOP)
These can be implemented in translate.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-10-git-send-email-peter.maydell@linaro.org

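A rough sketch of how that translate.c handling might look (not the literal patch; 0xe97fe97f is the architectural SG encoding, and s->v8m_secure is assumed to track the security state the code is executing in):

    /* Sketch: SG reached through the ordinary Thumb decoder.
     * Only the "already Secure" case does any work.
     */
    if (insn == 0xe97fe97f) {
        if (!arm_dc_feature(s, ARM_FEATURE_M_SECURITY) || !s->v8m_secure) {
            /* SG in NS memory, or v8M without the Security Extension:
             * behaves as a NOP.
             */
            return;
        }
        /* SG in S memory with the CPU already Secure: clear the IT
         * bits and do nothing else.
         */
        s->condexec_cond = 0;
        s->condexec_mask = 0;
        return;
    }
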
2017-10-12 | target/arm: Support some Thumb insns being always unconditional | Peter Maydell | 1 file, -1/+47

A few Thumb instructions are always unconditional even inside an IT block (as opposed to being UNPREDICTABLE if used inside an IT block): BKPT, the v8M SG instruction, and the A profile HLT (debug halt) instruction.

This means we need to suppress the jump-over-instruction-on-condfail code generation (though the IT state still advances as usual and subsequent insns in the IT block may be conditional).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-9-git-send-email-peter.maydell@linaro.org

2017-10-12 | target-arm: Simplify insn_crosses_page() | Peter Maydell | 1 file, -21/+6

Recent changes have left insn_crosses_page() more complicated than it needed to be:
* it's only called from thumb_tr_translate_insn() so we know for certain that we're looking at a Thumb insn
* the caller's check for dc->pc >= dc->next_page_start - 3 means that dc->pc can't possibly be 4 aligned, so there's no need to check that (the check was partly there to ensure that we didn't treat an ARM insn as Thumb, I think)
* we now have thumb_insn_is_16bit() which lets us do a precise check of the length of the next insn, rather than opencoding an inaccurate check

Simplify it down to just loading the first half of the insn and calling thumb_insn_is_16bit() on it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-8-git-send-email-peter.maydell@linaro.org

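The simplified function then plausibly collapses to something like this (a sketch; arm_lduw_code() and the DisasContext fields follow QEMU's existing naming):

    static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
    {
        /* The caller only asks when we are within 4 bytes of the end
         * of the page and the insn is Thumb, so the insn crosses the
         * page iff its first halfword says it is a 32-bit insn.
         */
        uint16_t insn = arm_lduw_code(env, s->pc, s->sctlr_b);

        return !thumb_insn_is_16bit(s, insn);
    }
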
2017-10-12 | target/arm: Pull Thumb insn word loads up to top level | Peter Maydell | 1 file, -70/+108

Refactor the Thumb decode to do the loads of the instruction words at the top level rather than only loading the second half of a 32-bit Thumb insn in the middle of the decode.

This is simple apart from the awkward case of Thumb1, where the BL/BLX prefix and suffix instructions live in what in Thumb2 is the 32-bit insn space. To handle these we decode enough to identify whether we're looking at a prefix/suffix that we handle as a 16 bit insn, or a prefix that we're going to merge with the following suffix to consider as a 32 bit insn. The translation of the 16 bit cases then moves from disas_thumb2_insn() to disas_thumb_insn().

The refactoring has the benefit that we don't need to pass the CPUARMState* down into the decoder code any more, but the major reason for doing this is that some Thumb instructions must be always unconditional regardless of the IT state bits, so we need to know the whole insn before we emit the "skip this insn if the IT bits and cond state tell us to" code. (The always unconditional insns are BKPT, HLT and SG; the last of these is 32 bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-7-git-send-email-peter.maydell@linaro.org

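A sketch of the 16-vs-32-bit classification this implies (the Thumb encoding facts are architectural; the exact page-boundary condition reuses the next_page_start field mentioned above and is an approximation):

    static bool thumb_insn_is_16bit(DisasContext *s, uint32_t insn)
    {
        /* Halfwords whose top five bits are below 0b11101 are always
         * 16-bit instructions.
         */
        if ((insn >> 11) < 0x1d) {
            return true;
        }
        /* Top five bits 0b11101/0b11110/0b11111: the first half of a
         * 32-bit insn -- always, on a Thumb2 core.
         */
        if (arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
            return false;
        }
        /* Thumb1: merge a BL/BLX prefix (0b11110) with the following
         * suffix as one 32-bit insn, unless the suffix is on the next
         * page; lone prefixes and suffixes are handled as 16-bit insns.
         */
        if ((insn >> 11) == 0x1e && s->pc < s->next_page_start - 3) {
            return false;
        }
        return true;
    }
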
2017-10-12 | target-arm: Don't check for "Thumb2 or M profile" for not-Thumb1 | Peter Maydell | 1 file, -2/+1

The code which implements the Thumb1 split BL/BLX instructions is guarded by a check on "not M or THUMB2". All we really need to check here is "not THUMB2" (and we assume that elsewhere too, eg in the ARCH(6T2) test that UNDEFs the Thumb2 insns).

This doesn't change behaviour because all M profile cores have Thumb2 and so ARM_FEATURE_M implies ARM_FEATURE_THUMB2. (v6M implements a very restricted subset of Thumb2, but we can cross that bridge when we get to it with appropriate feature bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-6-git-send-email-peter.maydell@linaro.org

2017-10-12 | target/arm: Implement secure function return | Peter Maydell | 3 files, -10/+126

Secure function return happens when a non-secure function has been called using BLXNS and so has a particular magic LR value (either 0xfefffffe or 0xfeffffff). The function return via BX behaves specially when the new PC value is this magic value, in the same way that exception returns are handled.

Adjust our BX excret guards so that they recognize the function return magic number as well, and perform the function-return unstacking in do_v7m_exception_exit().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-5-git-send-email-peter.maydell@linaro.org

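A sketch of the adjusted guard (the magic values are from the description above; how the patch actually spells the bounds is an assumption):

    /* Sketch: a BX-style branch takes the magic-value slow path if the
     * destination is at or above min_magic.
     */
    uint32_t min_magic;

    if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY)) {
        /* Covers FNC_RETURN (0xfefffffe/0xfeffffff) and all
         * EXC_RETURN values (0xff000000 and up).
         */
        min_magic = 0xfefffffe;
    } else {
        /* EXC_RETURN magic values only. */
        min_magic = 0xff000000;
    }
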
2017-10-12 | target/arm: Implement BLXNS | Peter Maydell | 4 files, -2/+76

Implement the BLXNS instruction, which allows secure code to call non-secure code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-4-git-send-email-peter.maydell@linaro.org

2017-10-12 | target/arm: Implement SG instruction | Peter Maydell | 1 file, -5/+127

Implement the SG instruction, which we emulate 'by hand' in the exception handling code path.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-3-git-send-email-peter.maydell@linaro.org

2017-10-12 | target/arm: Add M profile secure MMU index values to get_a32_user_mem_index() | Peter Maydell | 1 file, -0/+4

Add the M profile secure MMU index values to the switch in get_a32_user_mem_index() so that LDRT/STRT work correctly rather than asserting at translate time.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-2-git-send-email-peter.maydell@linaro.org

2017-10-10 | tcg: remove addr argument from lookup_tb_ptr | Emilio G. Cota | 8 files, -27/+17

It is unlikely that we will ever want to call this helper passing an argument other than the current PC. So just remove the argument, and use the pc we already get from cpu_get_tb_cpu_state.

This change paves the way to having a common "tb_lookup" function.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

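In outline (a sketch of the signature change; the actual TB hash lookup is elided):

    /* Before: void *helper_lookup_tb_ptr(CPUArchState *env,
     *                                    target_ulong addr);
     * every target computed and passed an address.
     */

    /* After: the helper fetches the PC itself. */
    void *helper_lookup_tb_ptr(CPUArchState *env)
    {
        target_ulong cs_base, pc;
        uint32_t flags;

        cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
        /* ... look up the TB for (pc, cs_base, flags); return the
         * exit-to-main-loop stub if there is none ...
         */
        return NULL;    /* placeholder in this sketch */
    }
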
2017-10-09 | x86: Correct translation of some rdgsbase and wrgsbase encodings | Todd Eisenberger | 1 file, -2/+2

It looks like there was a transcription error when writing this code initially. The code previously only decoded src or dst of rax.

This resolves https://bugs.launchpad.net/qemu/+bug/1719984.

Signed-off-by: Todd Eisenberger <teisenbe@google.com>
Message-Id: <CAP26EVRNVb=Mq=O3s51w7fDhGVmf-e3XFFA73MRzc5b4qKBA4g@mail.gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

2017-10-09 | qom/cpu: move cpu_model null check to cpu_class_by_name() | Philippe Mathieu-Daudé | 13 files, -55/+2

and clean every implementation.

Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170917232842.14544-1-f4bug@amsat.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Artyom Tarasenko <atar4qemu@gmail.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

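With the check centralized, the common helper plausibly looks like this (a sketch; the surrounding QOM plumbing is elided):

    ObjectClass *cpu_class_by_name(const char *typename, const char *cpu_model)
    {
        CPUClass *cc = CPU_CLASS(object_class_by_name(typename));

        if (cpu_model == NULL) {
            /* One central NULL check instead of one per target. */
            return NULL;
        }
        return cc->class_by_name(cpu_model);
    }
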
2017-10-06 | target/arm: Factor out "get mmuidx for specified security state" | Peter Maydell | 1 file, -11/+21

For the SG instruction and secure function return we are going to want to do memory accesses using the MMU index of the CPU in secure state, even though the CPU is currently in non-secure state. Write arm_v7m_mmu_idx_for_secstate() to do this job, and use it in cpu_mmu_index().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-17-git-send-email-peter.maydell@linaro.org

2017-10-06 | target/arm: Fix calculation of secure mm_idx values | Peter Maydell | 1 file, -5/+7

In cpu_mmu_index() we try to do this:

    if (env->v7m.secure) {
        mmu_idx += ARMMMUIdx_MSUser;
    }

but it will give the wrong answer, because ARMMMUIdx_MSUser includes the 0x40 ARM_MMU_IDX_M field, and so does the mmu_idx we're adding to, and we'll end up with 0x8n rather than 0x4n.

This error is then nullified by the call to arm_to_core_mmu_idx() which masks out the high part, but we're about to factor out the code that calculates the ARMMMUIdx values so it can be used without passing it through arm_to_core_mmu_idx(), so fix this bug first.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-16-git-send-email-peter.maydell@linaro.org

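A standalone illustration of the arithmetic (the low index bits here are made up; only the doubled 0x40 tag matters):

    #include <stdio.h>

    #define ARM_MMU_IDX_M     0x40                    /* M profile tag bit */
    #define ARMMMUIdx_MSUser  (ARM_MMU_IDX_M | 0x4)   /* illustrative value */

    int main(void)
    {
        int mmu_idx = ARM_MMU_IDX_M | 0x1; /* some non-secure M profile index */

        /* Both operands carry 0x40, so the tag gets added twice: */
        int buggy = mmu_idx + ARMMMUIdx_MSUser;                    /* 0x85 */
        /* One way to express the fix: add only the untagged offset. */
        int fixed = mmu_idx + (ARMMMUIdx_MSUser & ~ARM_MMU_IDX_M); /* 0x45 */

        printf("buggy=0x%x fixed=0x%x\n", buggy, fixed); /* 0x8n vs 0x4n */
        return 0;
    }
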
2017-10-06 | target/arm: Implement security attribute lookups for memory accesses | Peter Maydell | 2 files, -2/+195

Implement the security attribute lookups for memory accesses in the get_phys_addr() functions, causing these to generate various kinds of SecureFault for bad accesses.

The major subtlety in this code relates to handling of the case when the security attributes the SAU assigns to the address don't match the current security state of the CPU. In the ARM ARM pseudocode for validating instruction accesses, the security attributes of the address determine whether the Secure or NonSecure MPU state is used. At face value, handling this would require us to encode the relevant bits of state into mmu_idx for both S and NS at once, which would result in our needing 16 mmu indexes. Fortunately we don't actually need to do this because a mismatch between address attributes and CPU state means either:
* some kind of fault (usually a SecureFault, but in theory perhaps a UserFault for unaligned access to Device memory)
* execution of the SG instruction in NS state from a Secure & NonSecure code region

The purpose of SG is simply to flip the CPU into Secure state, so we can handle it by emulating execution of that instruction directly in arm_v7m_cpu_do_interrupt(), which means we can treat all the mismatch cases as "throw an exception" and we don't need to encode the state of the other MPU bank into our mmu_idx values.

This commit doesn't include the actual emulation of SG; it also doesn't include implementation of the IDAU, which is a per-board way to specify hard-coded memory attributes for addresses, which override the CPU-internal SAU if they specify a more secure setting than the SAU is programmed to.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-15-git-send-email-peter.maydell@linaro.org

2017-10-06 | nvic: Implement Security Attribution Unit registers | Peter Maydell | 3 files, -0/+51

Implement the register interface for the SAU: SAU_CTRL, SAU_TYPE, SAU_RNR, SAU_RBAR and SAU_RLAR. None of the actual behaviour is implemented here; registers just read back as written.

When the CPU definition for Cortex-M33 is eventually added, its initfn will set cpu->sau_sregion, in the same way that we currently set cpu->pmsav7_dregion for the M3 and M4.

Number of SAU regions is typically a configurable CPU parameter, but this patch doesn't provide a QEMU CPU property for it. We can easily add one when we have a board that requires it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-14-git-send-email-peter.maydell@linaro.org

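A sketch of the read-back-as-written style for one of these registers (an excerpt from a hypothetical NVIC write switch; the offset and the state field names are illustrative, not checked against the patch):

    case 0xddc: /* SAU_RBAR -- illustrative offset */
    {
        int r = cpu->env.sau.rnr;   /* region selected by SAU_RNR */

        if (r >= cpu->sau_sregion) {
            return;                 /* region not implemented: ignore */
        }
        /* No behaviour yet: just latch the value for read-back. */
        cpu->env.sau.rbar[r] = value;
        return;
    }
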
2017-10-06 | target/arm: Add v8M support to exception entry code | Peter Maydell | 1 file, -20/+145

Add support for v8M and in particular the security extension to the exception entry code. This requires changes to:
* calculation of the exception-return magic LR value
* push the callee-saves registers in certain cases
* clear registers when taking non-secure exceptions to avoid leaking information from the interrupted secure code
* switch to the correct security state on entry
* use the vector table for the security state we're targeting

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-13-git-send-email-peter.maydell@linaro.org

2017-10-06 | target/arm: Add support for restoring v8M additional state context | Peter Maydell | 1 file, -0/+30

For v8M, exceptions from Secure to Non-Secure state will save callee-saved registers to the exception frame as well as the caller-saved registers. Add support for unstacking these registers in exception exit when necessary.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-12-git-send-email-peter.maydell@linaro.org

2017-10-06 | target/arm: Update excret sanity checks for v8M | Peter Maydell | 1 file, -15/+58

In v8M, more bits are defined in the exception-return magic values; update the code that checks these so we accept the v8M values when the CPU permits them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-11-git-send-email-peter.maydell@linaro.org

2017-10-06 | target/arm: Add new-in-v8M SFSR and SFAR | Peter Maydell | 2 files, -0/+14

Add the new M profile Secure Fault Status Register and Secure Fault Address Register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-10-git-send-email-peter.maydell@linaro.org

2017-10-06 | target/arm: Don't warn about exception return with PC low bit set for v8M | Peter Maydell | 1 file, -7/+15

In the v8M architecture, return from an exception to a PC which has bit 0 set is not UNPREDICTABLE; it is defined that bit 0 is discarded [R_HRJH]. Restrict our complaint about this to v7M.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-9-git-send-email-peter.maydell@linaro.org

2017-10-06 | target/arm: Warn about restoring to unaligned stack | Peter Maydell | 1 file, -0/+7

Attempting to do an exception return with an exception frame that is not 8-aligned is UNPREDICTABLE in v8M; warn about this. (It is not UNPREDICTABLE in v7M, and our implementation can handle the merely-4-aligned case fine, so we don't need to do anything except warn.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-8-git-send-email-peter.maydell@linaro.org

2017-10-06 | target/arm: Check for xPSR mismatch usage faults earlier for v8M | Peter Maydell | 1 file, -3/+27

ARM v8M specifies that the INVPC usage fault for mismatched xPSR exception field and handler mode bit should be checked before updating the PSR and SP, so that the fault is taken with the existing stack frame rather than by pushing a new one. Perform this check in the right place for v8M.

Since v7M specifies in its pseudocode that this usage fault check should happen later, we have to retain the original code for that check rather than being able to merge the two. (The distinction is architecturally visible but only in very obscure corner cases like attempting an invalid exception return with an exception frame in read only memory.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-7-git-send-email-peter.maydell@linaro.org

2017-10-06 | target/arm: Restore SPSEL to correct CONTROL register on exception return | Peter Maydell | 1 file, -13/+27

On exception return for v8M, the SPSEL bit in the EXC_RETURN magic value should be restored to the SPSEL bit in the CONTROL register bank specified by the EXC_RETURN.ES bit.

Add write_v7m_control_spsel_for_secstate() which behaves like write_v7m_control_spsel() but allows the caller to specify which CONTROL bank to use, reimplement write_v7m_control_spsel() in terms of it, and use it in exception return.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-6-git-send-email-peter.maydell@linaro.org

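The refactoring shape, as a sketch (the body of the secstate variant is elided; return_to_sp_process and exc_secure stand for locals derived from the EXC_RETURN value):

    static void write_v7m_control_spsel_for_secstate(CPUARMState *env,
                                                     bool new_spsel,
                                                     bool secstate)
    {
        /* ... update CONTROL.SPSEL for the given bank, and swap the
         * live SP only if that bank is the current security state ...
         */
    }

    static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
    {
        /* The old entry point becomes a wrapper for the current bank. */
        write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure);
    }

    /* Exception return then restores SPSEL into the bank picked by
     * EXC_RETURN.ES:
     */
    write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
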
2017-10-06 | target/arm: Restore security state on exception return | Peter Maydell | 1 file, -0/+2

Now that we can handle the CONTROL.SPSEL bit not necessarily being in sync with the current stack pointer, we can restore the correct security state on exception return. This happens before we start to read registers off the stack frame, but after we have taken possible usage faults for bad exception return magic values and updated CONTROL.SPSEL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-5-git-send-email-peter.maydell@linaro.org

2017-10-06 | target/arm: Prepare for CONTROL.SPSEL being nonzero in Handler mode | Peter Maydell | 2 files, -23/+50

In the v7M architecture, there is an invariant that if the CPU is in Handler mode then the CONTROL.SPSEL bit cannot be nonzero. This in turn means that the current stack pointer is always indicated by CONTROL.SPSEL, even though Handler mode always uses the Main stack pointer.

In v8M, this invariant is removed, and CONTROL.SPSEL may now be nonzero in Handler mode (though Handler mode still always uses the Main stack pointer).

In preparation for this change, change how we handle this bit: rename switch_v7m_sp() to the now more accurate write_v7m_control_spsel(), and make it check both the handler mode state and the SPSEL bit.

Note that this implicitly changes the point at which we switch active SP on exception exit from before we pop the exception frame to after it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-4-git-send-email-peter.maydell@linaro.org

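With the invariant gone, "which SP is live?" needs both pieces of state, not just CONTROL.SPSEL; a sketch of the predicate (the banked control[] array and mask name follow QEMU's v7M state layout, and are assumptions here):

    static bool v7m_using_psp(CPUARMState *env)
    {
        /* Handler mode always uses the Main stack; in Thread mode the
         * CONTROL.SPSEL bit picks Process vs Main.
         */
        return !arm_v7m_is_handler_mode(env) &&
            (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK);
    }
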
2017-10-06 | target/arm: Don't switch to target stack early in v7M exception return | Peter Maydell | 1 file, -32/+98

Currently our M profile exception return code switches to the target stack pointer relatively early in the process, before it tries to pop the exception frame off the stack. This is awkward for v8M for two reasons:
* in v8M the process vs main stack pointer is not selected purely by the value of CONTROL.SPSEL, so updating SPSEL and relying on that to switch to the right stack pointer won't work
* the stack we should be reading the stack frame from and the stack we will eventually switch to might not be the same if the guest is doing strange things

Change our exception return code to use a 'frame pointer' to read the exception frame rather than assuming that we can switch the live stack pointer this early.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-3-git-send-email-peter.maydell@linaro.org

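A sketch of the frame-pointer pattern (ldl_phys() is QEMU's physical-memory load; which SP frame_sp_p ends up pointing at is decided by the surrounding code, and the choice shown is illustrative):

    /* Read the frame through an explicit pointer to the SP we are
     * unstacking from, which need not be the SP we finally run on.
     */
    uint32_t *frame_sp_p = &env->regs[13];  /* or a banked SP */
    uint32_t frameptr = *frame_sp_p;

    env->regs[0] = ldl_phys(cs->as, frameptr);
    env->regs[1] = ldl_phys(cs->as, frameptr + 0x4);
    /* ... r2, r3, r12, lr, the return PC and xPSR follow ... */

    /* Only commit the popped SP once the frame has been consumed. */
    *frame_sp_p = frameptr + 0x20;
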
2017-10-06 | arm: Fix SMC reporting to EL2 when QEMU provides PSCI | Jan Kiszka | 2 files, -11/+25

This properly forwards SMC events to EL2 when PSCI is provided by QEMU itself and, thus, ARM_FEATURE_EL3 is off.

Found and tested with the Jailhouse hypervisor. Solution based on suggestions by Peter Maydell.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-id: 4f243068-aaea-776f-d18f-f9e05e7be9cd@siemens.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2017-10-06 | s390x/tcg: initialize machine check queue | Cornelia Huck | 1 file, -0/+2

Just as for external interrupts and I/O interrupts, we need to initialize mchk_index during cpu reset.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390/kvm: Support for get/set of extended TOD-Clock for guest | Collin L. Walling | 4 files, -8/+63

Provides an interface for getting and setting the guest's extended TOD-Clock via a single ioctl to kvm. If the ioctl fails because it is not supported by kvm, then we fall back to the old style of retrieving the clock via two ioctls.

Signed-off-by: Collin L. Walling <walling@linux.vnet.ibm.com>
Reviewed-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
[split failure change from epoch index change]
Message-Id: <20171004105751.24655-2-borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
[some cosmetic fixes]

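The fallback pattern, as a sketch (the helper names are assumptions, not necessarily the patch's):

    int r;
    uint8_t tod_high;   /* epoch index byte of the extended TOD */
    uint64_t tod_low;   /* classic 64-bit TOD value */

    /* Try the single extended-TOD ioctl first; fall back to the old
     * two-ioctl style on kernels that don't know about it.
     */
    r = kvm_s390_get_clock_ext(&tod_high, &tod_low);
    if (r == -ENXIO) {
        r = kvm_s390_get_clock(&tod_high, &tod_low);
    }
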
2017-10-06 | s390x/tcg: make STFL store into the lowcore | David Hildenbrand | 2 files, -2/+7

Using virtual memory access is wrong and will soon include low-address protection checks, which are to be bypassed for STFL.

STFL is a privileged instruction and using LowCore requires !CONFIG_USER_ONLY, so add the ifdef and move the declaration to the right place.

This was originally part of a bigger STFL(E) refactoring.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170927170027.8539-4-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390x: introduce and use S390_MAX_CPUS | David Hildenbrand | 1 file, -0/+2

Will be handy in the future.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928134609.16985-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | target/s390x: get rid of next_core_id | David Hildenbrand | 4 files, -9/+11

core_id is not needed by linux-user, as the core_id a.k.a. CPU address is only accessible from kernel space. Therefore, drop next_core_id and make cpu_index get autoassigned again for linux-user.

While at it, shield core_id and cpuid completely from linux-user. cpuid can also only be queried from kernel space.

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928134609.16985-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390x/cpumodel: fix max STFL(E) bit number | David Hildenbrand | 1 file, -1/+1

Not that it would matter in the near future, but it is actually 2048 bytes, therefore 16384 possible bits.

Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928134609.16985-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390x: raise CPU hotplug irq after really hotplugged | David Hildenbrand | 1 file, -8/+0

Let's move it into the machine, so we trigger the IRQ after setting ms->possible_cpus (which SCLP uses to construct the list of online CPUs).

This also fixes a problem reported by Thomas Huth, whereby qemu can be crashed using the none machine:

    qemu-s390x-softmmu -M none -monitor stdio
    -> device_add qemu-s390-cpu

Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928134609.16985-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390x/tcg: make idte/ipte use the new _real mmu | David Hildenbrand | 1 file, -4/+5

We don't wrap addresses in the mmu for the _real case, therefore the behavior should be unchanged.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-7-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390x/tcg: make testblock use the new _real mmu | David Hildenbrand | 1 file, -10/+2

Low address protection checks will be moved into the mmu later.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-6-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390x/tcg: make stora(g) use the new _real mmu | David Hildenbrand | 2 files, -8/+2

As we properly handle the return address now, we can drop potential_page_fault().

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-5-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390x/tcg: make lura(g) use the new _real mmu | David Hildenbrand | 2 files, -8/+2

Looks like lurag was not loading 64bit but only 32bit. As we properly handle the return address now, we can drop potential_page_fault().

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-4-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390x/tcg: add MMU for real addresses | David Hildenbrand | 4 files, -10/+40

This makes it easy to access real addresses (prefix) and in addition checks for valid memory addresses, which is missing when using e.g. stl_phys().

We can later reuse it to implement low address protection checks (then we might even decide to introduce yet another MMU for absolute addresses, just for handling storage keys and low address protection).

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-3-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390x/tcg: fix checking for invalid memory check | David Hildenbrand | 1 file, -1/+3

It should have been a >=, but let's directly perform a proper access check to also be able to deal with hotplugged memory later.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-2-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390x/kvm: fix and cleanup storing CPU status | David Hildenbrand | 1 file, -20/+42

env->psa is a 64bit value, while we copy 4 bytes into the save area, resulting always in 0 getting stored. Let's try to reduce such errors by using a proper structure.

While at it, use correct cpu->be conversion (and get_psw_mask()), as we will be reusing this code for TCG soon.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170922140338.6068-1-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

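A sketch of the struct-based approach (the layout shown is a fragment for illustration, not the full architected save area):

    /* Explicit field widths make the 32-vs-64-bit mismatch impossible
     * to write by accident.
     */
    typedef struct SaveArea {
        /* ... */
        uint64_t psw[2];
        uint32_t prefix;    /* 4 bytes: the prefix, properly converted */
        /* ... */
    } QEMU_PACKED SaveArea;

    sa->psw[0] = cpu_to_be64(get_psw_mask(env));
    sa->psw[1] = cpu_to_be64(env->psw.addr);
    sa->prefix = cpu_to_be32(env->psa);
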
2017-10-06 | s390x: use generic cpu_model parsing | Igor Mammedov | 3 files, -13/+7

Define default CPU type in generic way in machine class_init and let common machine code handle cpu_model parsing.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <1505998749-269631-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390x/tcg: add basic MSA features | David Hildenbrand | 6 files, -1/+140

The STFLE bits for the MSA (extension) facilities simply indicate that the respective instructions can be executed. The QUERY subfunction can then be used to identify which features exactly are available. Availability of subfunctions can also vary on real hardware. For now, we simply implement a CPU model without any available subfunctions except QUERY (which is always around).

As all MSA functions behave quite similarly, we can use one translation handler for now. Prepare the code for implementation of actual subfunctions.

At least MSA is helpful for now, as older Linux kernels require this facility when compiled for a z9 model. Allow to enable the facilities for the qemu cpu model.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170920153016.3858-4-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390x/tcg: move wrap_address() to internal.h | David Hildenbrand | 2 files, -14/+14

We want to use it in another file.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170920153016.3858-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-10-06 | s390x/tcg: implement spm (SET PROGRAM MASK) | David Hildenbrand | 3 files, -0/+15

This instruction was missing, and is used inside Linux in the context of CPACF.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170920153016.3858-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

2017-09-29 | console: purge curses bits from console.h | Gerd Hoffmann | 1 file, -0/+6

Handle the translation from vga chars to curses chars in curses_update() instead of console_write_ch(). Purge any curses support bits from ui/console.h include file.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170927103811.19249-1-kraxel@redhat.com

2017-09-27 | Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20170927a' into staging | Peter Maydell | 5 files, -6/+17

Migration pull 2017-09-27

# gpg: Signature made Wed 27 Sep 2017 14:56:23 BST
# gpg: using RSA key 0x0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7

* remotes/dgilbert/tags/pull-migration-20170927a:
  migration: Route more error paths
  migration: Route errors up through vmstate_save
  migration: wire vmstate_save_state errors up to vmstate_subsection_save
  migration: Check field save returns
  migration: check pre_save return in vmstate_save_state
  migration: pre_save return int
  migration: disable auto-converge during bulk block migration

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2017-09-27 | Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.11-20170927' into staging | Peter Maydell | 6 files, -324/+72

ppc patch queue 2017-09-27

Contains
* a number of Mac machine type fixes
* a number of embedded machine type fixes (preliminary to adding the Sam460ex board)
* an important fix for handling of migration with KVM PR
* assorted other minor fixes and cleanups

# gpg: Signature made Wed 27 Sep 2017 08:40:48 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-2.11-20170927: (26 commits)
  macio: use object link between MACIO_IDE and MAC_DBDMA object
  macio: pass channel into MACIOIDEState via qdev property
  mac_dbdma: remove DBDMA_init() function
  mac_dbdma: QOMify
  mac_dbdma: remove unused IO fields from DBDMAState
  spapr: fix the value of SDR1 in kvmppc_put_books_sregs()
  ppc/pnv: check for OPAL firmware file presence
  ppc: remove all unused CPU definitions
  ppc: remove unused CPU definitions
  spapr_pci: make index property mandatory
  macio: convert pmac_ide_ops from old_mmio
  ppc/pnv: Improve macro parenthesization
  spapr: introduce helpers to migrate HPT chunks and the end marker
  ppc/kvm: generalize the use of kvmppc_get_htab_fd()
  ppc/kvm: change kvmppc_get_htab_fd() to return -errno on error
  ppc: Fix OpenPIC model
  ppc/ide/macio: Add missing registers
  ppc/mac: More rework of the DBDMA emulation
  ppc/mac: Advertise a high clock frequency for NewWorld Macs
  ppc: QOMify g3beige machine
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2017-09-27 | migration: pre_save return int | Dr. David Alan Gilbert | 5 files, -6/+17

Modify the pre_save method on VMStateDescription to return an int rather than void so that it potentially can fail.

Changed zillions of devices to make them return 0; the only case I've made it return non-0 is hw/intc/s390_flic_kvm.c that already had an error_report/return case.

Note: If you add an error exit in your pre_save you must emit an error_report to say why.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170925112917.21340-2-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

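In outline (the hook type change follows from the description; the device hook below is a hypothetical example of a converted implementation):

    /* VMStateDescription hook, before: void (*pre_save)(void *opaque); */
    int (*pre_save)(void *opaque);   /* after: non-zero fails the save */

    /* A typical converted device hook: */
    static int my_device_pre_save(void *opaque)
    {
        /* ... marshal device state for migration ... */
        return 0;   /* on error: error_report(...); return -EINVAL; */
    }
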