path: root/target/ppc/cpu.h
Age | Commit message | Author | Files | Lines
2022-03-02 | target/ppc: trigger PERFM EBBs from power8-pmu.c | Daniel Henrique Barboza | 1 | -0/+5
This patch adds support for the EBB exceptions that are triggered by Performance Monitor alerts. This happens when a Performance Monitor alert occurs and MMCR0_EBE, BESCR_PME and BESCR_GE are set. fire_PMC_interrupt() will execute the raise_ebb_perfm_exception() helper, which will check the MMCR0_EBE, BESCR_PME and BESCR_GE bits. If all bits are set, do_ebb() will attempt to trigger a PERFM EBB event. If the EBB facility is enabled in both FSCR and HFSCR we consider that the EBB is valid and set BESCR_PMEO. After that, if we're running in problem state, fire a POWERPC_EXCP_PERFM_EBB immediately. Otherwise we'll queue a PPC_INTERRUPT_EBB. Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20220225101140.1054160-5-danielhb413@gmail.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
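As a rough, self-contained C sketch of the flow described above (all bit values, struct fields and names are placeholders for illustration, not the actual QEMU definitions):

    #include <stdbool.h>
    #include <stdint.h>

    /* Placeholder bit assignments, for illustration only. */
    enum {
        MMCR0_EBE  = 1u << 0,
        BESCR_PME  = 1u << 1,
        BESCR_GE   = 1u << 2,
        BESCR_PMEO = 1u << 3,
        FSCR_EBB   = 1u << 4,
        HFSCR_EBB  = 1u << 5,
    };

    struct pmu_state {
        uint64_t mmcr0, bescr, fscr, hfscr;
        bool msr_pr;          /* true when running in problem state */
        bool ebb_queued;      /* stands in for PPC_INTERRUPT_EBB */
        bool ebb_fired;       /* stands in for POWERPC_EXCP_PERFM_EBB */
    };

    /* Called when a Performance Monitor alert occurs. */
    static void perfm_alert_to_ebb(struct pmu_state *s)
    {
        if (!(s->mmcr0 & MMCR0_EBE) ||
            !(s->bescr & BESCR_PME) ||
            !(s->bescr & BESCR_GE)) {
            return;                       /* the alert does not become an EBB */
        }
        /* do_ebb(): the EBB facility must be enabled in FSCR and HFSCR. */
        if (!(s->fscr & FSCR_EBB) || !(s->hfscr & HFSCR_EBB)) {
            return;
        }
        s->bescr |= BESCR_PMEO;           /* record the PM EBB occurrence */
        if (s->msr_pr) {
            s->ebb_fired = true;          /* fire the exception right away */
        } else {
            s->ebb_queued = true;         /* postpone until problem state */
        }
    }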
2022-03-02 | target/ppc: add PPC_INTERRUPT_EBB and EBB exceptions | Daniel Henrique Barboza | 1 | -1/+4
PPC_INTERRUPT_EBB is a new interrupt that will be used to deliver EBB exceptions that had to be postponed because the thread wasn't in problem state at the time the event-based branch was supposed to occur. ISA 3.1 also defines two EBB exceptions: Performance Monitor EBB exception and External EBB exception. They are being added as POWERPC_EXCP_PERFM_EBB and POWERPC_EXCP_EXTERNAL_EBB. PPC_INTERRUPT_EBB will check BESCR bits to see the EBB type that occurred and trigger the appropriate exception. Both exceptions are doing the same thing in this first implementation: clear BESCR_GE and enter the branch with env->nip retrieved from SPR_EBBHR. The checks being done by the interrupt code are msr_pr and BESCR_GE states. All other checks (EBB facility check, BESCR_PME bit, specific bits related to the event type) must be done beforehand. Reviewed-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Message-Id: <20220225101140.1054160-4-danielhb413@gmail.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-02-18 | target/ppc: cpu_init: Move check_pow and QOM macros to a header | Fabiano Rosas | 1 | -0/+39
These will need to be accessed from other files once we move the CPUs code to separate files. The check_pow_hid0 and check_pow_hid0_74xx are too specific to be moved to a header so I'll deal with them later when splitting this code between the multiple CPU families. Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20220216162426.1885923-27-farosas@linux.ibm.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-02-18 | target/ppc: Introduce a vhyp framework for nested HV support | Nicholas Piggin | 1 | -0/+7
Introduce virtual hypervisor methods that can support a "Nested KVM HV" implementation using the bare metal 2-level radix MMU, and using HV exceptions to return from H_ENTER_NESTED (rather than cause interrupts). HV exceptions can now be raised in the TCG spapr machine when running a nested KVM HV guest. The main ones are the lev==1 syscall, the hdecr, hdsi and hisi, hv fu, and hv emu, and h_virt external interrupts. HV exceptions are intercepted in the exception handler code and instead of causing interrupts in the guest and switching the machine to HV mode, they go to the vhyp where it may exit the H_ENTER_NESTED hcall with the interrupt vector number as return value as required by the hcall API. Address translation is provided by the 2-level page table walker that is implemented for the bare metal radix MMU. The partition scope page table is pointed to the L1's partition scope by the get_pate vhc method. Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20220216102545.1808018-9-npiggin@gmail.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-02-18 | target/ppc: make vhyp get_pate method take lpid and return success | Nicholas Piggin | 1 | -1/+2
In preparation for implementing a full partition table option for vhyp, update the get_pate method to take an lpid and return a success/fail indicator. The spapr implementation currently just asserts lpid is always 0 and always returns success. Reviewed-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [ clg: checkpatch fixes ] Message-Id: <20220216102545.1808018-6-npiggin@gmail.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-02-09 | target/ppc: Remove PowerPC 601 CPUs | Cédric Le Goater | 1 | -33/+6
The PowerPC 601 processor is the first generation of processors to implement the PowerPC architecture. It was designed as a bridge processor and also could execute most of the instructions of the previous POWER architecture. It was found on the first Macs and IBM RS/6000 workstations. There is not much interest in keeping the CPU model of this POWER-PowerPC bridge processor. We have the 603 and 604 CPU models of the 60x family which implement the complete PowerPC instruction set. Cc: "Hervé Poussineau" <hpoussin@reactos.org> Cc: Laurent Vivier <laurent@vivier.eu> Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com> Message-Id: <20220203142756.1302515-1-clg@kaod.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-01-28 | target/ppc: Remove support for the PowerPC 602 CPU | Cédric Le Goater | 1 | -7/+1
The 602 was derived from the PowerPC 603, for the gaming market it seems. It was hardly used and no firmware supporting the CPU could be found. Drop support. Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-01-28 | target/ppc: 405: Rename MSR_POW to MSR_WE | Fabiano Rosas | 1 | -0/+1
Bit 13 is the Wait State Enable bit. Give it its proper name. As far as I can see we don't do anything with MSR_POW for the 405, so this change has no effect. Suggested-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20220118184448.852996-2-farosas@linux.ibm.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-01-18 | target/ppc: Finish removal of 401/403 CPUs | Cédric Le Goater | 1 | -1/+0
Commit c8f49e6b938e ("target/ppc: remove 401/403 CPUs") left a few things behind. Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20220117091541.1615807-1-clg@kaod.org> Signed-off-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20220118104150.1899661-3-clg@kaod.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-01-12 | target/ppc: Add MSR_ILE support to ppc_interrupts_little_endian | Fabiano Rosas | 1 | -1/+3
Some CPUs set ILE via an MSR bit. We can make ppc_interrupts_little_endian handle that case as well. Now we have a centralized way of determining the endianness of interrupts. This change has no functional impact. Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20220107222601.4101511-6-farosas@linux.ibm.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-01-12 | target/ppc: Add HV support to ppc_interrupts_little_endian | Fabiano Rosas | 1 | -8/+15
The ppc_interrupts_little_endian function could be used for interrupts delivered in Hypervisor mode, so add support for powernv8 and powernv9 to it. Also drop the comment because it is inaccurate: all CPUs that can run little endian can have interrupts in little endian. The point is whether they can take interrupts in an endianness different from MSR_LE. This change has no functional impact. Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20220107222601.4101511-5-farosas@linux.ibm.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-01-04 | target/ppc: Cache per-pmc insn and cycle count settings | Richard Henderson | 1 | -0/+3
This is the combination of frozen bit and counter type, on a per counter basis. So far this is only used by HFLAGS_INSN_CNT, but will be used more later. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> [danielhb: fixed PMC4 cyc_cnt shift, insn run latch code, MMCR0_FC handling, "PMC[1-6]" comment] Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Message-Id: <20220103224746.167831-2-danielhb413@gmail.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2022-01-04 | ppc/ppc405: Restore TCR and STR write handlers | Cédric Le Goater | 1 | -0/+2
The 405 timers were broken when booke support was added. The assumption was made that the register numbers were the same, but they are not:
SPR_BOOKE_TSR (0x150)
SPR_BOOKE_TCR (0x154)
SPR_40x_TSR (0x3D8)
SPR_40x_TCR (0x3DA)
Cc: Christophe Leroy <christophe.leroy@c-s.fr> Fixes: ddd1055b07fd ("PPC: booke timers") Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20211222064025.1541490-5-clg@kaod.org> Signed-off-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20220103063441.3424853-6-clg@kaod.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17 | PPC64/TCG: Implement 'rfebb' instruction | Daniel Henrique Barboza | 1 | -0/+13
An Event-Based Branch (EBB) allows applications to change the NIA when an event-based exception occurs. Event-based exceptions are enabled by setting the Branch Event Status and Control Register (BESCR). If the event-based exception is enabled when the exception occurs, an EBB happens. The following operations happen during an EBB:
- the Global Enable (GE) bit of BESCR is set to 0;
- bits 0-61 of the Event-Based Branch Return Register (EBBRR) are set to the effective address of the NIA that would have executed if the EBB didn't happen;
- instruction fetch and execution will continue at the effective address contained in the Event-Based Branch Handler Register (EBBHR).
The EBB Handler will process the event and then execute the Return From Event-Based Branch (rfebb) instruction. rfebb sets BESCR_GE and then redirects execution to the address pointed to by EBBRR. This process is described in the PowerISA v3.1, Book II, Chapter 6 [1]. This patch implements the rfebb instruction. Descriptions of all relevant BESCR bits are also added - this patch is only using BESCR_GE, but the next patches will use the remaining bits.
[1] https://wiki.raptorcs.com/w/images/f/f5/PowerISA_public.v3.1.pdf
Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Message-Id: <20211201151734.654994-9-danielhb413@gmail.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
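A minimal self-contained sketch of the two halves of that round trip, taking the event-based branch and returning from it with rfebb; bit positions and field names are placeholders, not the QEMU implementation:

    #include <stdint.h>

    #define BESCR_GE (1ull << 63)        /* Global Enable, illustrative position */

    struct ebb_regs {
        uint64_t bescr, ebbrr, ebbhr;
        uint64_t nia;                    /* next instruction address */
    };

    /* Taking the event-based branch when an enabled event occurs. */
    static void ebb_enter(struct ebb_regs *r)
    {
        r->bescr &= ~BESCR_GE;           /* GE <- 0 while the handler runs */
        r->ebbrr  = r->nia;              /* remember where execution would have continued */
        r->nia    = r->ebbhr;            /* branch to the registered handler */
    }

    /* rfebb: return from the event-based branch handler. */
    static void ebb_return(struct ebb_regs *r)
    {
        r->bescr |= BESCR_GE;            /* re-enable event-based branches */
        r->nia    = r->ebbrr;            /* resume the interrupted code */
    }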
2021-12-17 | target/ppc/power8-pmu.c: add PM_RUN_INST_CMPL (0xFA) event | Daniel Henrique Barboza | 1 | -0/+4
PM_RUN_INST_CMPL, instructions completed with the run latch set, is the architected PowerISA v3.1 event defined with PMC4SEL = 0xFA. Implement it by checking for the CTRL RUN bit before incrementing the counter. To make this work properly we also need to force a new translation block each time SPR_CTRL is written. A small tweak in pmu_increment_insns() is then needed to only increment this event if the thread has the run latch. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Message-Id: <20211201151734.654994-8-danielhb413@gmail.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
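The counting rule amounts to one extra check before the increment; a sketch with hypothetical names (the CTRL_RUN bit position and the function shape are assumptions, not the QEMU code):

    #include <stdint.h>

    #define CTRL_RUN (1u << 0)           /* run-latch bit, illustrative position */

    /* Add completed instructions to PMC4 only while the run latch is set. */
    static void count_run_inst_cmpl(uint64_t *pmc4, uint32_t ctrl,
                                    uint32_t num_insns)
    {
        if (!(ctrl & CTRL_RUN)) {
            return;                      /* run latch clear: the event does not count */
        }
        *pmc4 += num_insns;
    }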
2021-12-17 | target/ppc: enable PMU instruction count | Daniel Henrique Barboza | 1 | -0/+1
The PMU is already counting cycles by calculating time elapsed in nanoseconds. Counting instructions is a different matter and requires another approach. This patch adds the capability of counting completed instructions (Perf event PM_INST_CMPL) by counting the amount of instructions translated in each translation block right before exiting it. A new pmu_count_insns() helper in translation.c was added to do that. After verifying that the PMU is counting instructions, call helper_insns_inc(). This new helper from power8-pmu.c will add the instructions to the relevant counters. It'll also be responsible for triggering counter negative overflows as is already being done with cycles. To verify whether the PMU is counting instructions or not, a new hflags bit named 'HFLAGS_INSN_CNT' is introduced. This flag will match the internal state of the PMU. We'll be using this flag to avoid calling helper_insns_inc() when we do not have a valid instruction event being sampled. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Message-Id: <20211201151734.654994-7-danielhb413@gmail.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17 | target/ppc: enable PMU counter overflow with cycle events | Daniel Henrique Barboza | 1 | -0/+2
The PowerISA v3.1 defines that if the proper bits are set (MMCR0_PMC1CE for PMC1 and MMCR0_PMCjCE for the remaining PMCs), counter negative conditions are enabled. This means that if the counter value overflows (i.e. exceeds 0x80000000) a performance monitor alert will occur. This alert can trigger an event-based exception (to be implemented in the next patches) if the MMCR0_EBE bit is set. For now, overflowing the counter when the PMC is counting cycles will just trigger a performance monitor alert. This is done by starting the overflow timer to expire at the moment the overflow would occur. The timer will call fire_PMC_interrupt() (via cpu_ppc_pmu_timer_cb) which will trigger the PMU alert and, if the conditions are met, an EBB exception. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Message-Id: <20211201151734.654994-6-danielhb413@gmail.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
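Because cycles are accounted as nanoseconds (see the basic cycle count entry below), arming the overflow timer reduces to a subtraction; a hedged sketch, where the 0x80000000 counter-negative threshold comes from the message above and everything else is assumed:

    #include <stdint.h>

    #define PMC_COUNTER_NEGATIVE 0x80000000ull

    /* Nanoseconds from now until the counter would go negative, assuming the
     * PMC counts cycles and 1 cycle == 1 ns; 0 means the alert condition is
     * already met. */
    static uint64_t ns_until_pmc_overflow(uint64_t pmc_value)
    {
        if (pmc_value >= PMC_COUNTER_NEGATIVE) {
            return 0;
        }
        return PMC_COUNTER_NEGATIVE - pmc_value;
    }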
2021-12-17 | target/ppc: PMU basic cycle count for pseries TCG | Daniel Henrique Barboza | 1 | -0/+20
This patch adds the barebones of the PMU logic by enabling cycle counting. The overall logic goes as follows:
- the MMCR0 reg initial value is set to 0x80000000 (MMCR0_FC set) to avoid having to spin the PMU right at system init;
- to retrieve the events that are being profiled, pmc_get_event() will check the current MMCR0 and MMCR1 values and return the appropriate PMUEventType. For PMCs 1-4, event 0x2 is the implementation-dependent value of PMU_EVENT_INSTRUCTIONS and event 0x1E is the implementation-dependent value of PMU_EVENT_CYCLES. These events are supported by IBM Power chips since Power8, at least, and the Linux Perf driver makes use of these events until kernel v5.15. For PMC1, event 0xF0 is the architected PowerISA event for cycles. Event 0xFE is the architected PowerISA event for instructions;
- if the counter is frozen, either via the global MMCR0_FC bit or its individual frozen counter bits, PMU_EVENT_INACTIVE is returned;
- pmu_update_cycles() will go through each counter and update the values of all PMCs that are counting cycles. This function will be called every time an MMCR0 update is done to keep counter values up to date. Upcoming patches will use this function to allow the counters to be properly updated during read/write of the PMCs and MMCR1 writes.
Given that the base CPU frequency is fixed at 1 GHz for both the powernv and pseries clocks, cycle calculation assumes that 1 nanosecond equals 1 CPU cycle. The cycle value is then calculated by adding the time, in nanoseconds, elapsed since the last cycle update done via pmu_update_cycles(). Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Message-Id: <20211201151734.654994-3-danielhb413@gmail.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
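A self-contained sketch of that cycle bookkeeping, assuming the fixed 1 GHz clock so that elapsed nanoseconds equal elapsed cycles; the struct layout and function shape are illustrative, not the QEMU code:

    #include <stdint.h>

    struct pmc {
        uint64_t value;            /* current counter value */
        int counts_cycles;         /* derived from MMCR0/MMCR1 and the freeze bits */
    };

    /* Bring every cycle-counting PMC up to date and reset the baseline.
     * Called whenever MMCR0 changes or a PMC is read/written. */
    static void pmu_update_cycles(struct pmc *pmcs, int npmcs,
                                  uint64_t *base_ns, uint64_t now_ns)
    {
        uint64_t elapsed = now_ns - *base_ns;   /* ns == cycles at 1 GHz */

        for (int i = 0; i < npmcs; i++) {
            if (pmcs[i].counts_cycles) {
                pmcs[i].value += elapsed;
            }
        }
        *base_ns = now_ns;
    }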
2021-12-17 | target/ppc: introduce PMUEventType and PMU overflow timers | Daniel Henrique Barboza | 1 | -0/+15
This patch starts an IBM Power8+ compatible PMU implementation by adding the representation of PMU events that we are going to sample, PMUEventType. This enum represents a Perf event that is being sampled by a specific counter 'sprn'. Events that aren't available (i.e. no event was set in MMCR1) will be of type 'PMU_EVENT_INVALID'. Events that are inactive due to frozen counter bits state are of type 'PMU_EVENT_INACTIVE'. Other types added in this patch are PMU_EVENT_CYCLES and PMU_EVENT_INSTRUCTIONS. More types will be added later on. Let's also add the required PMU cycle overflow timers. They will be used to trigger cycle overflows when cycle events are being sampled. This timer will call cpu_ppc_pmu_timer_cb(), which in turn calls fire_PMC_interrupt(). Both functions are stubs that will be implemented later on when EBB support is added. Two new helper files are created to host this new logic. cpu_ppc_pmu_init() will init all overflow timers during CPU init time. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Message-Id: <20211201151734.654994-2-danielhb413@gmail.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
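The enumeration, sketched from the description above; the exact member list and spelling in the QEMU source may differ:

    /* One value per kind of event a PMC can be sampling. */
    typedef enum {
        PMU_EVENT_INVALID = 0,     /* nothing programmed in MMCR1 for this PMC */
        PMU_EVENT_INACTIVE,        /* counter frozen by MMCR0 freeze bits */
        PMU_EVENT_CYCLES,
        PMU_EVENT_INSTRUCTIONS,
        /* further event types are added by later patches */
    } PMUEventType;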
2021-12-17 | target/ppc: Remove the software TLB model of 7450 CPUs | Fabiano Rosas | 1 | -3/+1
(Applies to 7441, 7445, 7450, 7451, 7455, 7457, 7447, 7447a and 7448)
The QEMU-side software TLB implementation for the 7450 family of CPUs is being removed due to lack of known users in the real world. The last users in the code were removed by the two previous commits. A brief history: The feature was added in QEMU by commit 7dbe11acd8 ("Handle all MMU models in switches...") with the mention that Linux was not able to handle the TLB miss interrupts and the MMU model would be kept disabled. At some point later, commit 8ca3f6c382 ("Allow selection of all defined PowerPC 74xx (aka G4) CPUs.") enabled the model for the 7450 family without further justification. We have since the year 2011 [1] been unable to run OpenBIOS in the 7450s and have not heard of any other software that is used with those CPUs in QEMU. Attempts were made to find a guest OS that implemented the TLB miss handlers and none were found among Linux 5.15, FreeBSD 13, MacOS9, MacOSX and MorphOS 3.15. All CPUs that registered this feature were moved to an MMU model that replaces the software TLB with a QEMU hardware TLB implementation. They can now run the same software as the 7400 CPUs, including the OSes mentioned above.
References:
- https://bugs.launchpad.net/qemu/+bug/812398
  https://gitlab.com/qemu-project/qemu/-/issues/86
- https://lists.nongnu.org/archive/html/qemu-ppc/2021-11/msg00289.html
  message id: 20211119134431.406753-1-farosas@linux.ibm.com
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20211130230123.781844-4-farosas@linux.ibm.com> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17 | target/ppc: ppc_store_fpscr doesn't update bits 0 to 28 and 52 | Lucas Mateus Castro (alqotel) | 1 | -0/+4
This commit fixes the difference reported in the bug for the reserved bit 52. It does this by adding this bit to the mask of bits not to be directly altered in the ppc_store_fpscr function (the hardware used for comparison with QEMU was a Power9). Bits 0 to 27 were also added to the mask, as they are marked as reserved in the PowerISA, and bit 28 is a reserved extension of the DRN field (bits 29:31) that can't be set using mtfsfi, while the other DRN bits may be set using the mtfsfi instruction, so bit 28 was also added to the mask. Although this is a difference reported in the bug, since it's a reserved bit it may be a "don't care" case, as put in the bug report. Looking at the ISA it doesn't explicitly mention this bit can't be set, like it does for FEX and VX, so I'm unsure if this is necessary. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/266 Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br> Message-Id: <20211201163808.440385-4-lucas.araujo@eldorado.org.br> Signed-off-by: Cédric Le Goater <clg@kaod.org>
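One way to picture the fix is as a fixed-bit mask applied on store, using PowerISA bit numbering (bit 0 is the most significant bit of the 64-bit register); this is a hedged reconstruction for illustration, not the actual ppc_store_fpscr code, and whether the protected bits keep their old value or stay zero is an assumption here:

    #include <stdint.h>

    /* Bits 0-28 plus bit 52, i.e. the bits ppc_store_fpscr must not alter. */
    #define FPSCR_FIXED_BITS  (0xFFFFFFF800000000ull | (1ull << (63 - 52)))

    static uint64_t fpscr_store_masked(uint64_t old_fpscr, uint64_t new_val)
    {
        /* Keep the fixed bits as they were; take everything else from new_val. */
        return (old_fpscr & FPSCR_FIXED_BITS) | (new_val & ~FPSCR_FIXED_BITS);
    }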
2021-11-02 | target/ppc: Implement ppc_cpu_record_sigsegv | Richard Henderson | 1 | -3/+0
Record DAR, DSISR, and exception_index. That last means that we must exit to cpu_loop ourselves, instead of letting exception_index be overwritten. This is exactly what the user-mode ppc_cpu_tlb_fill does, so simply rename it as ppc_cpu_record_sigsegv. Reviewed-by: Warner Losh <imp@bsdimp.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-21 | target/ppc: add user read/write functions for MMCR2 | Daniel Henrique Barboza | 1 | -0/+9
Similar to the previous patch, let's add problem state read/write access to the MMCR2 SPR, which is also a group A PMU SPR that needs to be filtered to be read/written by userspace. Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Message-Id: <20211018010133.315842-4-danielhb413@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-10-21 | target/ppc: add user read/write functions for MMCR0 | Gustavo Romero | 1 | -0/+7
Userspace needs access to PMU SPRs to be able to operate the PMU. One of such SPRs is MMCR0. MMCR0, as defined by PowerISA v3.1, is classified as a 'group A' PMU register. This class of registers has common read/write rules that are governed by the MMCR0 PMCC bits. MMCR0 is also not fully exposed to problem state: only the MMCR0_FC, MMCR0_PMAO and MMCR0_PMAE bits are readable/writable in this case. This patch exposes MMCR0 to userspace by doing the following:
- two new callbacks, spr_read_MMCR0_ureg() and spr_write_MMCR0_ureg(), are added to be used as problem state read/write callbacks of UMMCR0. Both callbacks filter the bits userspace is able to read/write by using a MMCR0_UREG_MASK;
- problem state access control is done by the spr_groupA_read_allowed() and spr_groupA_write_allowed() helpers. These helpers read the current PMCC bits from DisasContext and check whether the read/write MMCR0 operation is valid or not;
- to avoid putting exclusive PMU logic into the already loaded translate.c file, let's create a new 'power8-pmu-regs.c.inc' file that will hold all the spr_read/spr_write functions of PMU registers. The 'power8' name of this new file intends to hint at the proven support of the PMU logic to be added. The code has been tested with the IBM POWER chip family, POWER8 being the oldest version tested. This doesn't mean that the PMU logic will break with any other PPC64 chip that implements Book3s, but rather that we can't assert that it works properly with any Book3s compliant chip.
CC: Gustavo Romero <gustavo.romero@linaro.org> Signed-off-by: Gustavo Romero <gromero@linux.ibm.com> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Message-Id: <20211018010133.315842-3-danielhb413@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
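The filtering those ureg callbacks perform boils down to masking with MMCR0_UREG_MASK; a sketch with assumed bit positions, where only the FC/PMAO/PMAE composition is taken from the text above:

    #include <stdint.h>

    /* Illustrative bit positions, not QEMU's definitions. */
    #define MMCR0_FC         (1ull << 31)
    #define MMCR0_PMAO       (1ull << 7)
    #define MMCR0_PMAE       (1ull << 27)
    #define MMCR0_UREG_MASK  (MMCR0_FC | MMCR0_PMAO | MMCR0_PMAE)

    /* Problem-state read: everything outside the user-visible bits reads as 0. */
    static uint64_t mmcr0_ureg_read(uint64_t mmcr0)
    {
        return mmcr0 & MMCR0_UREG_MASK;
    }

    /* Problem-state write: only the user-visible bits may be changed. */
    static uint64_t mmcr0_ureg_write(uint64_t mmcr0, uint64_t new_val)
    {
        return (mmcr0 & ~MMCR0_UREG_MASK) | (new_val & MMCR0_UREG_MASK);
    }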
2021-10-21 | target/ppc: add MMCR0 PMCC bits to hflags | Daniel Henrique Barboza | 1 | -0/+6
We're going to add PMU support for TCG PPC64 chips, based on IBM POWER8+ emulation and following PowerISA v3.1. This requires several PMU related registers to be exposed to userspace (problem state). PowerISA v3.1 dictates that the PMCC bits of the MMCR0 register control the level of access of the PMU registers to problem state. This patch starts things off by exposing both PMCC bits to hflags, allowing us to access them via DisasContext in the read/write callbacks that we're going to add next. Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Message-Id: <20211018010133.315842-2-danielhb413@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-10-21 | target/ppc: Filter mtmsr[d] input before setting MSR | Matheus Ferst | 1 | -0/+1
PowerISA says that mtmsr[d] "does not alter MSR[HV], MSR[S], MSR[ME], or MSR[LE]", but the current code only filters the GPR-provided value if L=1. This behavior caused some problems in FreeBSD, and a build option was added to work around the issue [1], but it seems that the bug was not reported in launchpad/gitlab. This patch addresses the issue in QEMU, so the option on FreeBSD should no longer be required. [1] https://cgit.freebsd.org/src/commit/?id=4efb1ca7d2a44cfb33d7f9e18bd92f8d68dcfee0 Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Message-Id: <20211015181940.197982-1-matheus.ferst@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
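The fix amounts to always merging the protected MSR bits back in, not only when L=1; a sketch, with the bit positions written out as I read them from the ISA (bits 3, 41, 51 and 63) but still to be treated as illustrative rather than the QEMU definitions:

    #include <stdint.h>

    /* MSR bits that mtmsr[d] must not alter. */
    #define MSR_HV  (1ull << (63 - 3))
    #define MSR_S   (1ull << (63 - 41))
    #define MSR_ME  (1ull << (63 - 51))
    #define MSR_LE  (1ull << (63 - 63))
    #define MSR_MTMSR_FIXED  (MSR_HV | MSR_S | MSR_ME | MSR_LE)

    /* Take the new value from the GPR but keep the protected bits as they are. */
    static uint64_t mtmsr_filter(uint64_t cur_msr, uint64_t gpr_val)
    {
        return (gpr_val & ~MSR_MTMSR_FIXED) | (cur_msr & MSR_MTMSR_FIXED);
    }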
2021-10-21 | linux-user: Fix XER access in ppc version of elf_core_copy_regs | Matheus Ferst | 1 | -1/+1
env->xer doesn't hold some bits of XER, like OV and CA. To write the complete register in the core dump we should read XER value with cpu_read_xer. Reported-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br> Fixes: da91a00f191f ("target-ppc: Split out SO, OV, CA fields from XER") Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Message-Id: <20211014223234.127012-4-matheus.ferst@eldorado.org.br> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-09-30 | target/ppc: add LPCR[HR] to DisasContext and hflags | Matheus Ferst | 1 | -0/+1
Add a Host Radix field (hr) in DisasContext with LPCR[HR] value to allow us to decide between Radix and HPT while validating instructions arguments. Note that PowerISA v3.1 does not require LPCR[HR] and PATE.HR to match if the thread is in ultravisor/hypervisor real addressing mode, so ctx->hr may be invalid if ctx->hv and ctx->dr are set. Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com> Message-Id: <20210917114751.206845-2-matheus.ferst@eldorado.org.br> Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-09-21 | include/exec: Move cpu_signal_handler declaration | Richard Henderson | 1 | -7/+0
There is nothing target specific about this. The implementation is host specific, but the declaration is 100% common. Reviewed-By: Warner Losh <imp@bsdimp.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 | target/ppc: Restrict cpu_exec_interrupt() handler to sysemu | Philippe Mathieu-Daudé | 1 | -2/+2
Restrict cpu_exec_interrupt() and its callees to sysemu. Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Warner Losh <imp@bsdimp.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20210911165434.531552-18-f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-27 | target/ppc: divided mmu_helper.c in 2 files | Lucas Mateus Castro (alqotel) | 1 | -0/+9
Divided mmu_helper.c into 2 files: functions inside #ifdef CONFIG_SOFTMMU stayed in mmu_helper.c, other functions moved to mmu_common.c. Updated meson.build to compile mmu_common.c and only compile mmu_helper.c when CONFIG_TCG is set. Moved function declarations, #defines and structs used by both files to internal.h, except for functions that use structures defined in cpu.h, which were moved to cpu.h. Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br> Message-Id: <20210723175627.72847-2-lucas.araujo@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 | target/ppc: Introduce ppc_interrupts_little_endian() | Greg Kurz | 1 | -0/+15
PowerPC CPUs use big endian by default but starting with POWER7, server grade CPUs use the ILE bit of the LPCR special purpose register to decide on the endianness to use when handling interrupts. This gives a clue to QEMU on the endianness the guest kernel is running, which is needed when generating an ELF dump of the guest or when delivering an FWNMI machine check interrupt. Commit 382d2db62bcb ("target-ppc: Introduce callback for interrupt endianness") added a class method to PowerPCCPUClass to model this: the default implementation returns a fixed "big endian" value, while POWER7 and newer do the LPCR_ILE check. This is suboptimal as it forces us to implement the method for every new CPU family, and it is very unlikely that this will ever be different from what we have today. We basically only have three cases to consider:
a) CPU doesn't have an LPCR => big endian
b) CPU has an LPCR but doesn't support the ILE bit => big endian
c) CPU has an LPCR and supports the ILE bit => little or big endian
Instead of class methods, introduce an inline helper that checks the ILE bit in the LPCR_MASK to decide on the outcome. The new helper is worded in terms of little endian instead of big endian, which allows us to drop a ! operator in ppc_cpu_do_fwnmi_machine_check(). Signed-off-by: Greg Kurz <groug@kaod.org> Message-Id: <20210622140926.677618-2-groug@kaod.org> Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
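The three cases collapse into a single mask-and-test; a self-contained sketch, where the LPCR_ILE position and the lpcr_mask field are stand-ins for whatever the CPU class actually exposes:

    #include <stdbool.h>
    #include <stdint.h>

    #define LPCR_ILE (1ull << 25)        /* illustrative bit position */

    struct ppc_cpu_model {
        uint64_t lpcr_mask;              /* LPCR bits this CPU implements */
    };

    /* Cases (a) and (b): ILE not implemented -> interrupts are big endian.
     * Case (c): the ILE bit in LPCR decides. */
    static bool ppc_interrupts_little_endian(const struct ppc_cpu_model *m,
                                             uint64_t lpcr)
    {
        return (m->lpcr_mask & LPCR_ILE) && (lpcr & LPCR_ILE);
    }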
2021-06-03 | target/ppc: Add infrastructure for prefixed insns | Richard Henderson | 1 | -0/+1
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Message-Id: <20210601193528.2533031-4-matheus.ferst@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-06-03 | target/ppc: overhauled and moved logic of storing fpscr | Bruno Larsen (billionai) | 1 | -6/+6
Followed the suggested overhaul to store_fpscr logic, and moved it to cpu.c where it can be accessed in !TCG builds. The overhaul was suggested because storing a value to fpscr should never raise an exception, so we could remove all the mess that happened with POWERPC_EXCP_FP. We also moved fpscr_set_rounding_mode into cpu.c as it could now be moved there, and it is needed when a value for the fpscr is being stored directly. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210527163522.23019-1-bruno.larsen@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-06-03 | target/ppc: remove ppc_cpu_dump_statistics | Bruno Larsen (billionai) | 1 | -1/+0
This function requires source code modification to be useful, which means it probably is not used often, and the move to using decodetree means the statistics won't even be collected anymore. Also removed setting dump_statistics in ppc_cpu_realize, since it was only useful in conjunction with ppc_cpu_dump_statistics. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br> Message-Id: <20210526202104.127910-3-bruno.larsen@eldorado.org.br> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Luis Pires <luis.pires@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-06-03 | target/ppc: fold ppc_store_ptcr into it's only caller | Bruno Larsen (billionai) | 1 | -1/+0
ppc_store_ptcr, defined in mmu_helper.c, was only used by helper_store_ptcr, in misc_helper.c. To avoid possible confusion, the function was folded into the helper. Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br> Message-Id: <20210526143516.125582-1-bruno.larsen@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-19 | target/ppc: Replace POWERPC_EXCP_BRANCH with DISAS_NORETURN | Richard Henderson | 1 | -2/+0
The translation of branch instructions always results in exit from the TB. Remove the synthetic "exception" after no more uses. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Message-Id: <20210517205025.3777947-4-matheus.ferst@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-19 | target/ppc: Replace POWERPC_EXCP_STOP with DISAS_EXIT_UPDATE | Richard Henderson | 1 | -1/+0
Remove the synthetic "exception" after no more uses. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Message-Id: <20210517205025.3777947-3-matheus.ferst@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-19 | target/ppc: Replace POWERPC_EXCP_SYNC with DISAS_EXIT | Richard Henderson | 1 | -1/+0
Remove the synthetic "exception" after no more uses. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Message-Id: <20210512185441.3619828-9-matheus.ferst@eldorado.org.br> Reviewed-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-19 | target/ppc: created ppc_{store,get}_vscr for generic vscr usage | Bruno Larsen (billionai) | 1 | -0/+2
Some functions unrelated to TCG use helper_m{t,f}vscr, so generic versions of those functions were added to cpu.c, in preparation for compilation without TCG. Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br> Message-Id: <20210512140813.112884-2-bruno.larsen@eldorado.org.br> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-19 | hw/ppc: moved has_spr to cpu.h | Lucas Mateus Castro (alqotel) | 1 | -0/+6
Moved has_spr to cpu.h as ppc_has_spr and turned it into an inline function. Change spr verification in pnv.c and spapr.c to a version that can compile in a !TCG environment. Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br> Message-Id: <20210507164146.67086-1-lucas.araujo@eldorado.org.br> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
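A sketch of what such an inline presence check can look like without relying on TCG; the array of per-SPR records and its name field are assumptions about the layout, not a quote of the QEMU code:

    #include <stdbool.h>

    #define PPC_NR_SPRS 1024

    struct spr_info {
        const char *name;                /* NULL when the SPR was never registered */
        /* read/write callbacks, default value, ... omitted */
    };

    /* True when the CPU registered the given SPR number. */
    static inline bool ppc_has_spr(const struct spr_info sprs[PPC_NR_SPRS], int sprn)
    {
        return sprn >= 0 && sprn < PPC_NR_SPRS && sprs[sprn].name != NULL;
    }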
2021-05-19 | target/ppc: moved ppc_store_lpcr to misc_helper.c | Lucas Mateus Castro (alqotel) | 1 | -0/+1
Moved the function ppc_store_lpcr from mmu-hash64.c to misc_helper.c and the prototype from mmu-hash64.h to cpu.h as it is a more appropriate place, but it will have to have its implementation moved to a new file as misc_helper.c should not be compiled in a !TCG environment. Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br> Message-Id: <20210506163941.106984-4-lucas.araujo@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-04 | target/ppc: Reduce the size of ppc_spr_t | Richard Henderson | 1 | -4/+8
We elide values when registering sprs; we might as well save space in the array as well. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210501022923.1179736-3-richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-04 | target/ppc: Add POWER10 exception model | Nicholas Piggin | 1 | -2/+3
POWER10 adds a new bit that modifies interrupt behaviour, LPCR[HAIL], and it removes support for the LPCR[AIL]=0b10 mode. Reviewed-by: Cédric Le Goater <clg@kaod.org> Tested-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Message-Id: <20210501072436.145444-3-npiggin@gmail.com> [dwg: Corrected tab indenting] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-04 | target/ppc: rework AIL logic in interrupt delivery | Nicholas Piggin | 1 | -8/+0
The AIL logic is becoming unmanageable spread all over powerpc_excp(), and it is slated to get even worse with POWER10 support. Move it all to a new helper function. Reviewed-by: Cédric Le Goater <clg@kaod.org> Tested-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Message-Id: <20210501072436.145444-2-npiggin@gmail.com> [dwg: Corrected tab indenting] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-04 | ppc: Rename current DAWR macros and variables | Ravi Bangoria | 1 | -2/+2
Power10 is introducing second DAWR. Use real register names (with suffix 0) from ISA for current macros and variables used by Qemu. One exception to this is KVM_REG_PPC_DAWR[X]. This is from kernel uapi header and thus not changed in kernel as well as Qemu. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Reviewed-by: Greg Kurz <groug@kaod.org> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20210412114433.129702-3-ravi.bangoria@linux.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-04 | target/ppc: Validate hflags with CONFIG_DEBUG_TCG | Richard Henderson | 1 | -0/+5
Verify that hflags was updated correctly whenever we change cpu state that is used by hflags. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210323184340.619757-11-richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-04 | target/ppc: Remove env->immu_idx and env->dmmu_idx | Richard Henderson | 1 | -4/+8
We weren't recording MSR_GS in hflags, which means that BookE memory accesses were essentially random vs Guest State. Instead of adding this bit directly, record the completed mmu indexes instead. This makes it obvious that we are recording exactly the information that we need. This also means that we can stop directly recording MSR_IR. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210323184340.619757-9-richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-04 | target/ppc: Remove MSR_SA and MSR_AP from hflags | Richard Henderson | 1 | -3/+1
Nothing within the translator -- or anywhere else for that matter -- checks MSR_SA or MSR_AP on the 602. This may be a mistake. However, for the moment, we need not record these bits in hflags. This allows us to simplify HFLAGS_VSX computation by moving it to overlap with MSR_VSX. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210323184340.619757-8-richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-04 | target/ppc: Put LPCR[GTSE] in hflags | Richard Henderson | 1 | -0/+1
Because this bit was not in hflags, the privilege check for tlb instructions was essentially random. Recompute hflags when storing to LPCR. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210323184340.619757-7-richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>