path: root/target/ppc
2018-10-23  Merge remote-tracking branch 'remotes/armbru/tags/pull-error-2018-10-22' into staging  (Peter Maydell, 1 file changed, -2/+2)

Error reporting patches for 2018-10-22

# gpg: Signature made Mon 22 Oct 2018 13:20:23 BST
# gpg: using RSA key 3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

* remotes/armbru/tags/pull-error-2018-10-22: (40 commits)
  error: Drop bogus "use error_setg() instead" admonitions
  vpc: Fail open on bad header checksum
  block: Clean up bdrv_img_create()'s error reporting
  vl: Simplify call of parse_name()
  vl: Fix exit status for -drive format=help
  blockdev: Convert drive_new() to Error
  vl: Assert drive_new() does not fail in default_drive()
  fsdev: Clean up error reporting in qemu_fsdev_add()
  spice: Clean up error reporting in add_channel()
  tpm: Clean up error reporting in tpm_init_tpmdev()
  numa: Clean up error reporting in parse_numa()
  vnc: Clean up error reporting in vnc_init_func()
  ui: Convert vnc_display_init(), init_keyboard_layout() to Error
  ui/keymaps: Fix handling of erroneous include files
  vl: Clean up error reporting in device_init_func()
  vl: Clean up error reporting in parse_fw_cfg()
  vl: Clean up error reporting in mon_init_func()
  vl: Clean up error reporting in machine_set_property()
  vl: Clean up error reporting in chardev_init_func()
  qom: Clean up error reporting in user_creatable_add_opts_foreach()
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2018-10-19  cpus hw target: Use warn_report() & friends to report warnings  (Markus Armbruster, 1 file changed, -2/+2)

Calling error_report() in a function that takes an Error ** argument is suspicious. Convert a few that are actually warnings to warn_report(). While there, split a warning consisting of multiple sentences to conform to conventions spelled out in warn_report()'s contract.

Cc: Alex Bennée <alex.bennee@linaro.org>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Fam Zheng <famz@redhat.com>
Cc: Wei Huang <wei@redhat.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20181017082702.5581-5-armbru@redhat.com>

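For illustration, the general shape of this kind of conversion (message text invented, not a hunk from the patch):

    /* Before: a warning funneled through the error-reporting channel */
    error_report("warning: only one %s device is supported", name);

    /* After: warn_report() adds the "warning:" prefix itself and keeps
     * warnings visually distinct from hard errors */
    warn_report("only one %s device is supported", name);
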
2018-10-18  target/ppc: Convert to HAVE_CMPXCHG128 and HAVE_ATOMIC128  (Richard Henderson, 3 files changed, -62/+88)

Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

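A rough sketch of the guard pattern this conversion introduces in the translator (the helper name here is illustrative; see translate.c for the real emission paths):

    #include "qemu/atomic128.h"

    if (HAVE_CMPXCHG128) {
        /* The host can do a real 128-bit compare-and-swap: call the
         * cmpxchg-based helper (name illustrative). */
        gen_helper_stqcx_parallel(cpu_crf[0], cpu_env, EA, lo, hi);
    } else {
        /* No host support: exit to the slow path, which stops all
         * other vCPUs and performs the access serially. */
        gen_helper_exit_atomic(cpu_env);
    }
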
2018-09-25  target/ppc/cpu-models: Re-group the 970 CPUs together again  (Thomas Huth, 1 file changed, -21/+19)

The addition of the POWER9 CPUs divided the entries for the 970 CPUs, which is a little bit confusing when you look at the code. So let's re-group the 970 CPUs together again, and since these chips are based on the POWER4 processor, also move them in front of the POWER5 chips.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-09-05  target/ppc/kvm: set vcpu as online/offline  (Nikunj A Dadhania, 2 files changed, -0/+16)

Set the newly added register (KVM_REG_PPC_ONLINE) to indicate whether the vcpu is online (1) or offline (0). KVM will use this information to set the RWMR register, which controls the PURR and SPURR accumulation.

Cc: paulus@samba.org
Signed-off-by: Nikunj A Dadhania <nikunj@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

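A minimal sketch of what setting such a register looks like on the QEMU side (the wrapper name is hypothetical; KVM_REG_PPC_ONLINE and kvm_set_one_reg() are real):

    #include "sysemu/kvm.h"

    static void kvmppc_set_online(CPUState *cs, bool online)
    {
        uint64_t val = online ? 1 : 0;  /* 1 = online, 0 = offline */

        /* One-register KVM interface; KVM uses this to program RWMR,
         * which steers PURR/SPURR accumulation between threads. */
        kvm_set_one_reg(cs, KVM_REG_PPC_ONLINE, &val);
    }
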
2018-08-28  ppc: Remove deprecated ppcemb target  (Thomas Huth, 5 files changed, -58/+5)

There is no known available OS for ppc anymore that uses page sizes below 4k, so it does not make much sense to keep wasting our time on building and testing the ppcemb-softmmu target. It has been deprecated for two releases, and nobody complained, so let's remove it now.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-08-21  ppc: add DBCR based debugging  (Roman Kapl, 4 files changed, -33/+107)

Add support for DBCR (debug control register) based debugging as used on BookE ppc. So far it supports only branch and single-step events, but these are the important ones. GDB in a Linux guest can now do single-stepping.

Signed-off-by: Roman Kapl <rka@sysgo.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-08-21  target/ppc: simplify bcdadd/sub functions  (Yasmin Beatriz, 1 file changed, -31/+18)

After solving a corner case in bcdsub, this patch simplifies the logic of both bcdadd/sub instructions by removing some unnecessary local flags. This commit also rearranges some if-else conditions in bcdadd to make it easier to read.

Signed-off-by: Yasmin Beatriz <yasmins@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-08-21  target/ppc: bcdsub fix sign when result is zero  (Yasmin Beatriz, 1 file changed, -0/+3)

When the result of bcdsub is equal to zero, the result sign could be set to negative in some cases, which does not follow the Power ISA specification for decimal integer arithmetic instructions.

Signed-off-by: Yasmin Beatriz <yasmins@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-08-21  target/ppc: Use non-arithmetic conversions for fp load/store  (Richard Henderson, 3 files changed, -30/+61)

Memory operations have no side effects on fp state. The use of "real" conversions between float64 and float32 would raise exceptions for SNaN and out-of-range inputs.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-08-21  target/ppc: Honor fpscr_ze semantics and tidy fre, fresqrt  (Richard Henderson, 1 file changed, -25/+37)

Divide by zero, exception taken, leaves the destination register unmodified. Therefore we must raise the exception before returning from the respective helpers. From helper_fre, divide by zero exception not taken, return the documented +/- 0.5.

At the same time, tidy the invalid exception checking so that we rely on softfloat for initial argument validation, and select the kind of invalid operand exception only when we know we must. Also pass and return float64 values directly rather than bouncing through the CPU_DoubleU union.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-08-21  target/ppc: Tidy helper_fsqrt  (Richard Henderson, 2 files changed, -16/+15)

Tidy the invalid exception checking so that we rely on softfloat for initial argument validation, and select the kind of invalid operand exception only when we know we must. Pass and return float64 values directly rather than bounce through the CPU_DoubleU union.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-08-21  target/ppc: Tidy helper_fadd, helper_fsub  (Richard Henderson, 2 files changed, -31/+23)

Tidy the invalid exception checking so that we rely on softfloat for initial argument validation, and select the kind of invalid operand exception only when we know we must. Pass and return float64 values directly rather than bounce through the CPU_DoubleU union.

Note that because we know float_flag_invalid was set, we do not have to re-check the signs of the infinities.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-08-21  target/ppc: Tidy helper_fmul  (Richard Henderson, 2 files changed, -15/+12)

Tidy the invalid exception checking so that we rely on softfloat for initial argument validation, and select the kind of invalid operand exception only when we know we must. Pass and return float64 values directly rather than bounce through the CPU_DoubleU union.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-08-21  target/ppc: Honor fpscr_ze semantics and tidy fdiv  (Richard Henderson, 2 files changed, -23/+29)

Divide by zero, exception taken, leaves the destination register unmodified. Therefore we must raise the exception before returning from helper_fdiv. Move the check from do_float_check_status into helper_fdiv.

At the same time, tidy the invalid exception checking so that we rely on softfloat for initial argument validation, and select the kind of invalid operand exception only when we know we must. Also pass and return float64 values directly rather than bouncing through the CPU_DoubleU union.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

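A condensed sketch of the resulting helper shape, based on the description above (the float_invalid_op_div/float_zero_divide_excp names and signatures are illustrative; see fpu_helper.c for the authoritative code):

    float64 helper_fdiv(CPUPPCState *env, float64 arg1, float64 arg2)
    {
        float64 ret = float64_div(arg1, arg2, &env->fp_status);
        int flags = get_float_exception_flags(&env->fp_status);

        if (unlikely(flags & float_flag_invalid)) {
            /* Let softfloat detect the invalid case first, then pick
             * the specific PowerPC invalid-operation exception. */
            float_invalid_op_div(env, GETPC(), arg1, arg2);
        }
        if (unlikely(flags & float_flag_divbyzero)) {
            /* Raise before returning so that, when the exception is
             * taken, the destination register stays unmodified. */
            float_zero_divide_excp(env, GETPC());
        }
        return ret;
    }
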
2018-08-21  target/ppc: Enable fp exceptions for user-only  (Richard Henderson, 2 files changed, -3/+14)

While just setting the MSR bits is sufficient, we can tidy the helper code by extracting the MSR test to a helper and then forcing it true for user-only.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

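A sketch of the extracted predicate, assuming the helper is shaped as described (MSR_FE0/MSR_FE1 are the real bit positions from cpu.h):

    static inline bool fp_exceptions_enabled(CPUPPCState *env)
    {
    #ifdef CONFIG_USER_ONLY
        return true;  /* user-only: always honor fp exceptions */
    #else
        return (env->msr & ((1U << MSR_FE0) | (1U << MSR_FE1))) != 0;
    #endif
    }
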
2018-07-07  target/ppc: fix build on ppc64 host  (Laurent Vivier, 1 file changed, -1/+1)

When I try to build a ppc64 target on a ppc64 host (gcc 8.1.1), I get:

  .../target/ppc/int_helper.c: In function 'helper_vinsertb':
  .../target/ppc/int_helper.c:1954:32: error: array subscript 18446744073709551608 is above array bounds of 'uint8_t[16]' {aka 'unsigned char[16]'} [-Werror=array-bounds]
       memmove(&r->u8[index], &b->u8[8 - sizeof(r->element)], \
                                      ^~~~~~~~~~~~~~~~~~~~~~
  .../target/ppc/int_helper.c:1965:1: note: in expansion of macro 'VINSERT'

If we compare with the macro for ppc64le, we can see that sizeof(r->element[0]) should be used instead of sizeof(r->element). VINSERT uses only u8, u16, u32 and u64, so the maximum value of sizeof(r->element[0]) is 8.

Suggested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

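The fix in isolation (elisions as in the macro):

    /* Broken: r->element names the whole 16-byte array, so sizeof() is
     * 16 and the source index underflows to (uint64_t)-8. */
    memmove(&r->u8[index], &b->u8[8 - sizeof(r->element)], ...);

    /* Fixed: sizeof(r->element[0]) is the element width (1, 2, 4 or 8). */
    memmove(&r->u8[index], &b->u8[8 - sizeof(r->element[0])], ...);
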
2018-07-03  Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.0-20180703' into staging  (Peter Maydell, 9 files changed, -341/+541)

ppc patch queue 2018-07-03

Here's a last minute pull request before today's soft freeze. Ideally I would have sent this earlier, but I was waiting for a couple of extra fixes I knew were close. And the freeze crept up on me, like always.

Most of the changes here are bugfixes in any case. There are some cleanups as well, which have been in my staging tree for a little while. There are a couple of truly new features (some extensions to the sam460ex platform), but these are low risk, since they only affect a new and not really stabilized machine type anyway.

Highlights are:
  * Mac platform improvements from Mark Cave-Ayland
  * Sam460ex improvements from BALATON Zoltan et al.
  * XICS interrupt handler cleanups from Cédric Le Goater
  * TCG improvements for atomic loads and stores from Richard Henderson
  * Assorted other bugfixes

# gpg: Signature made Tue 03 Jul 2018 06:55:22 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-3.0-20180703: (35 commits)
  ppc: Include vga cirrus card into the compiling process
  target/ppc: Relax reserved bitmask of indexed store instructions
  target/ppc: set is_jmp on ppc_tr_breakpoint_check
  spapr: compute default value of "hpt-max-page-size" later
  target/ppc/kvm: don't pass cpu to kvm_get_smmu_info()
  target/ppc/kvm: get rid of kvm_get_fallback_smmu_info()
  ppc440_uc: Basic emulation of PPC440 DMA controller
  sam460ex: Add RTC device
  hw/timer: Add basic M41T80 emulation
  ppc4xx_i2c: Rewrite to model hardware more closely
  hw/ppc: Give sam46ex its own config option
  fpu_helper.c: fix setting FPSCR[FI] bit
  target/ppc: Implement the rest of gen_st_atomic
  target/ppc: Implement the rest of gen_ld_atomic
  target/ppc: Use atomic min/max helpers
  target/ppc: Use MO_ALIGN for EXIWX and ECOWX
  target/ppc: Split out gen_st_atomic
  target/ppc: Split out gen_ld_atomic
  target/ppc: Split out gen_load_locked
  target/ppc: Tidy gen_conditional_store
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

# Conflicts:
#	hw/ppc/spapr.c

2018-07-03  target/ppc: Relax reserved bitmask of indexed store instructions  (BALATON Zoltan, 1 file changed, -1/+1)

The PPC440 User Manual says that if bit 31 is set, the contents of CR[CR0] are undefined for indexed store instructions, but this form is not invalid. Other PPC variants conforming to recent ISA versions, where this bit may be reserved, should ignore reserved bits and not raise an invalid instruction exception. In particular, MorphOS has an stwx instruction with bit 31 set and currently fails to boot because of this. With this patch it gets further.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-07-03  target/ppc: set is_jmp on ppc_tr_breakpoint_check  (Emilio G. Cota, 1 file changed, -0/+1)

The use of GDB breakpoints was broken by b0c2d52 ("target/ppc: convert to TranslatorOps", 2018-02-16). Fix it by setting is_jmp, so that we break from the translation loop as originally intended.

Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

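Schematically, the fix is a single line in ppc_tr_breakpoint_check:

    /* After emitting the debug exception, tell the generic translator
     * loop that this translation block ends here. */
    ctx->base.is_jmp = DISAS_NORETURN;
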
2018-07-03  target/ppc/kvm: don't pass cpu to kvm_get_smmu_info()  (Greg Kurz, 1 file changed, -9/+8)

In a future patch the machine code will need to retrieve the MMU information from KVM during machine initialization, before the CPUs are created. KVM_PPC_GET_SMMU_INFO is a VM class ioctl, and thus we don't need to have a CPU object around: we just need KVM to be initialized, and we use the kvm_state global. This patch does just that.

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-07-03  target/ppc/kvm: get rid of kvm_get_fallback_smmu_info()  (Greg Kurz, 1 file changed, -97/+20)

Now that we check that our MMU configuration is supported by KVM, rather than adjusting it to match KVM, it doesn't really make sense to have a fallback for kvm_get_smmu_info(). If KVM is too old or buggy to provide the details, we should rather treat this as an error.

This patch thus adds error reporting to kvm_get_smmu_info() and gets rid of the fallback code. QEMU will now terminate if KVM fails to provide MMU details. This may break some very old setups, but the simplification is worth the sacrifice.

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

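A sketch of the new contract (error message wording illustrative; the ioctl, capability check, and Error ** convention are real QEMU interfaces):

    static void kvm_get_smmu_info(struct kvm_ppc_smmu_info *info, Error **errp)
    {
        if (kvm_check_extension(kvm_state, KVM_CAP_PPC_GET_SMMU_INFO)) {
            int ret = kvm_vm_ioctl(kvm_state, KVM_PPC_GET_SMMU_INFO, info);
            if (ret == 0) {
                return;
            }
        }
        /* No fallback any more: old or buggy KVM is now a hard error. */
        error_setg(errp, "KVM is unable to provide MMU information");
    }
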
2018-07-03  fpu_helper.c: fix setting FPSCR[FI] bit  (John Arbuckle, 1 file changed, -0/+8)

The FPSCR[FI] bit indicates whether the last floating point instruction had a result that was rounded. Each consecutive floating point instruction is supposed to set this bit to the correct value, but currently this bit is not set as often as it should be. I have verified that this is the behavior of a real PowerPC 950. This patch fixes the problem by setting this bit after each floating point instruction.

Page 63, table 2-4, is where the description of this bit can be found: https://www.pdfdrive.net/powerpc-microprocessor-family-the-programming-environments-for-32-e3087633.html

Signed-off-by: John Arbuckle <programmingkidx@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

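Schematically, the per-instruction update looks like this (a sketch; exact placement within the helpers is illustrative, FPSCR_FI and float_flag_inexact are real names):

    /* FI must be recomputed after every floating point instruction:
     * set if the result was rounded (inexact), cleared otherwise. */
    if (get_float_exception_flags(&env->fp_status) & float_flag_inexact) {
        env->fpscr |= 1 << FPSCR_FI;
    } else {
        env->fpscr &= ~(1 << FPSCR_FI);
    }
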
2018-07-03  target/ppc: Implement the rest of gen_st_atomic  (Richard Henderson, 1 file changed, -1/+25)

The store twin case was stubbed out. For now, implement it only within a serial context, forcing parallel execution to synchronize. It would be possible to implement it with a cmpxchg loop, if we care, but the loose alignment requirements (simply not crossing a 32-byte boundary) might send us back to the serial context anyway.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-07-03  target/ppc: Implement the rest of gen_ld_atomic  (Richard Henderson, 1 file changed, -4/+79)

These cases were stubbed out. For now, implement them only within a serial context, forcing parallel execution to synchronize. It would be possible to implement these with cmpxchg loops, if we care.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-07-03  target/ppc: Use atomic min/max helpers  (Richard Henderson, 1 file changed, -3/+19)

These operations were previously unimplemented for ppc.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

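The TCG primitives that make this possible, in illustrative use (dst/EA/src stand for the usual translate-time temporaries; the tcg_gen_atomic_* generators and DEF_MEMOP are real):

    /* Signed/unsigned atomic min/max read-modify-write on guest memory;
     * TCG expands these atomically in parallel mode, inline otherwise. */
    tcg_gen_atomic_fetch_smin_tl(dst, EA, src, ctx->mem_idx, DEF_MEMOP(MO_Q));
    tcg_gen_atomic_fetch_umax_tl(dst, EA, src, ctx->mem_idx, DEF_MEMOP(MO_Q));
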
2018-07-03  target/ppc: Use MO_ALIGN for EXIWX and ECOWX  (Richard Henderson, 1 file changed, -21/+4)

This avoids the need for gen_check_align entirely.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-07-03  target/ppc: Split out gen_st_atomic  (Richard Henderson, 1 file changed, -48/+49)

Move the guts of ST_ATOMIC to a function. Use foo_tl for the operations instead of foo_i32 or foo_i64 specifically. Use MO_ALIGN instead of an explicit call to gen_check_align.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-07-03  target/ppc: Split out gen_ld_atomic  (Richard Henderson, 1 file changed, -53/+52)

Move the guts of LD_ATOMIC to a function. Use foo_tl for the operations instead of foo_i32 or foo_i64 specifically. Use MO_ALIGN instead of an explicit call to gen_check_align.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-07-03  target/ppc: Split out gen_load_locked  (Richard Henderson, 1 file changed, -17/+18)

Leave only the minimal amount of code within the LDAR macro, moving the rest of the code into gen_load_locked. Use MO_ALIGN and remove the explicit call to gen_check_align.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-07-03  target/ppc: Tidy gen_conditional_store  (Richard Henderson, 1 file changed, -17/+11)

Leave only the minimal amount of code within the STCX macro, moving the rest of the code into gen_conditional_store. Remove the explicit call to gen_check_align; the matching LDAX will have already checked alignment, and we verify the same address.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-07-03  target/ppc: Remove POWERPC_EXCP_STCX  (Richard Henderson, 2 files changed, -19/+0)

Always use the gen_conditional_store implementation that uses atomic_cmpxchg. Make sure to clear reserve_addr across most interrupts crossing the cpu_loop.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-07-03  target/ppc: Use atomic cmpxchg for STQCX  (Richard Henderson, 3 files changed, -33/+100)

When running in a parallel context, we must use a helper in order to perform the 128-bit atomic operation. When running in a serial context, do the compare before the store.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-07-03  target/ppc: Use atomic store for STQ  (Richard Henderson, 3 files changed, -8/+45)

Section 1.4 of the Power ISA v3.0B states that this insn is single-copy atomic. As we cannot (yet) issue 128-bit stores within TCG, use the generic helpers provided.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-07-03  target/ppc: Use atomic load for LQ and LQARX  (Richard Henderson, 4 files changed, -25/+94)

Section 1.4 of the Power ISA v3.0B states that both of these instructions are single-copy atomic. As we cannot (yet) issue 128-bit loads within TCG, use the generic helpers provided.

Since TCG cannot (yet) return a 128-bit value, add a slot within CPUPPCState for returning the high half of a 128-bit return value. This solution is preferred to the helper assigning to architectural registers directly, as it avoids clobbering all TCG live values.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

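A sketch of the resulting parallel-context load helper, under the assumption that the helper and field names match the description above (the generic 16-byte atomic load helper is config-dependent):

    uint64_t helper_lq_le_parallel(CPUPPCState *env, target_ulong addr,
                                   uint32_t opidx)
    {
        /* Single-copy atomic 16-byte load via the generic helper. */
        Int128 ret = helper_atomic_ldo_le_mmu(env, addr, opidx, GETPC());

        /* TCG can only return 64 bits: stash the high half in the new
         * CPUPPCState slot and let the translator copy it into the
         * target register. */
        env->retxh = int128_gethi(ret);
        return int128_getlo(ret);
    }
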
2018-07-03  target/ppc: Add do_unaligned_access hook  (Richard Henderson, 3 files changed, -1/+23)

This allows faults from MO_ALIGN to have the same effect as from gen_check_align.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-07-02  hw/ppc: Use the IEC binary prefix definitions  (Philippe Mathieu-Daudé, 1 file changed, -4/+4)

It eases code review: the unit is explicit.

Patch generated using:

  $ git grep -E '(1024|2048|4096|8192|(<<|>>).?(10|20|30))' hw/ include/hw/

and modified manually.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20180625124238.25339-33-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

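For example, with the IEC definitions from qemu/units.h (call site invented for illustration):

    #include "qemu/units.h"

    /* Explicit unit... */
    memory_region_init_ram(ram, NULL, "ram", 512 * MiB, &error_fatal);
    /* ...instead of the less readable raw byte count: */
    memory_region_init_ram(ram, NULL, "ram", 536870912, &error_fatal);
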
2018-06-27  compiler: add a sizeof_field() macro  (Stefan Hajnoczi, 1 file changed, -5/+5)

Determining the size of a field is useful when you don't have a struct variable handy. Open-coding this is ugly. This patch adds the sizeof_field() macro, which is similar to typeof_field(). Existing instances are updated to use the macro.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20180614164431.29305-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

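The macro itself is tiny; this is its standard C formulation (example struct invented for illustration):

    #define sizeof_field(type, field) sizeof(((type *)0)->field)

    /* Usage: no struct variable needed. */
    struct Header { uint32_t magic; uint8_t payload[64]; };
    size_t n = sizeof_field(struct Header, payload);   /* 64 */
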
2018-06-22  spapr: Don't rewrite mmu capabilities in KVM mode  (David Gibson, 2 files changed, -68/+70)

Currently during KVM initialization on POWER, kvm_fixup_page_sizes() rewrites a bunch of information in the cpu state to reflect the capabilities of the host MMU and KVM. This overwrites the information that's already there reflecting how the TCG implementation of the MMU will operate.

This means that we can get guest-visibly different behaviour between KVM and TCG (and between different KVM implementations). That's bad. It also prevents migration between KVM and TCG.

The pseries machine type now has filtering of the pagesizes it allows the guest to use, which means it can present a consistent model of the MMU across all accelerators. So, we can now replace kvm_fixup_page_sizes() with kvm_check_mmu(), which merely verifies that the expected cpu model can be faithfully handled by KVM, rather than updating the cpu model to match KVM.

We call kvm_check_mmu() from the spapr cpu reset code. This is a hack: conceptually it makes more sense where fixup_page_sizes() was, in the KVM cpu init path. However, doing that would require moving the platform's pagesize filtering much earlier, which would require a lot of work making further adjustments. There wouldn't be a lot of concrete point to doing that, since the only KVM implementation which has the awkward MMU restrictions is KVM HV, which can only work with an spapr guest anyway.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>

2018-06-22  target/ppc: Add ppc_hash64_filter_pagesizes()  (David Gibson, 2 files changed, -0/+62)

The paravirtualized PAPR platform sometimes needs to restrict the guest to using only some of the page sizes actually supported by the host's MMU. At the moment this is handled in KVM specific code, but for consistency we want to apply the same limitations to all accelerators. This makes a start on this by providing a helper function in the cpu code to allow platform code to remove some of the cpu's page size definitions via a caller supplied callback.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>

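The helper's shape as described (a signature sketch; see the mmu-hash64 headers for the authoritative declaration):

    /* The machine supplies a predicate; page-size definitions for which
     * it returns false are removed from the cpu's table. */
    void ppc_hash64_filter_pagesizes(PowerPCCPU *cpu,
                                     bool (*cb)(void *opaque, uint32_t pagesize),
                                     void *opaque);
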
2018-06-22  spapr: Use maximum page size capability to simplify memory backend checking  (David Gibson, 2 files changed, -20/+0)

The way we used to handle KVM allowable guest pagesizes for PAPR guests required some convoluted checking of memory attached to the guest. The allowable pagesizes advertised to the guest cpus depended on the memory which was attached at boot, but then we needed to ensure that any memory later hotplugged didn't change which pagesizes were allowed.

Now that we have an explicit machine option to control the allowable maximum pagesize, we can simplify this. We just check all memory backends against that declared pagesize. We check base and cold-plugged memory at reset time, and hotplugged memory at pre_plug() time.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>

2018-06-21  target/ppc: Add missing opcode for icbt on PPC440  (BALATON Zoltan, 1 file changed, -0/+2)

According to the PPC440 User Manual, PPC440 has multiple opcodes for the icbt instruction: one for compatibility with older cores, and two 440-specific opcodes, one of which is defined in BookE. QEMU only implements two of these; add the missing one.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-06-21  fpu_helper.c: fix helper_fpscr_clrbit() function  (John Arbuckle, 1 file changed, -0/+28)

Fix the helper_fpscr_clrbit() function so it correctly sets the FEX and VX bits.

Determining the value for the Floating Point Status and Control Register's (FPSCR) FEX bit is supposed to be done like this:

  FEX = (VX & VE) | (OX & OE) | (UX & UE) | (ZX & ZE) | (XX & XE)

It is described as "the logical OR of all the floating-point exception bits masked by their respective enable bits". It was not implemented correctly: the value of FEX would stay on even when all the other bits were set to off.

The VX bit is described as "the logical OR of all of the invalid operation exceptions". This bit was also not implemented correctly: it too would stay on when all the other bits were set to off.

My main source of information is an IBM document called "PowerPC Microprocessor Family: The Programming Environments for 32-Bit Microprocessors". Page 62 is where the FPSCR information is located. This is an older copy than the one I use, but it is still very useful: https://www.pdfdrive.net/powerpc-microprocessor-family-the-programming-environments-for-32-e3087633.html

I use a G3 and G5 iMac to compare bit values with QEMU. This patch fixed all the problems I was having with these bits.

Signed-off-by: John Arbuckle <programmingkidx@gmail.com>
[dwg: Re-wrapped commit message]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

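A sketch of the corrected recomputation, using the FPSCR_* bit positions from cpu.h (the function name is hypothetical; the formula is the architectural one quoted above):

    /* FEX is the OR of each exception bit ANDed with its enable bit. */
    static inline uint32_t fpscr_calc_fex(uint32_t fpscr)
    {
        return ((fpscr >> FPSCR_VX) & (fpscr >> FPSCR_VE) & 1) |
               ((fpscr >> FPSCR_OX) & (fpscr >> FPSCR_OE) & 1) |
               ((fpscr >> FPSCR_UX) & (fpscr >> FPSCR_UE) & 1) |
               ((fpscr >> FPSCR_ZX) & (fpscr >> FPSCR_ZE) & 1) |
               ((fpscr >> FPSCR_XX) & (fpscr >> FPSCR_XE) & 1);
    }
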
2018-06-21  target/ppc: Add kvmppc_hpt_needs_host_contiguous_pages() helper  (David Gibson, 2 files changed, -2/+21)

KVM HV has a restriction that for HPT mode guests, guest pages must be hpa contiguous as well as gpa contiguous. We have to account for that in various places. We determine whether we're subject to this restriction from the SMMU information exposed by KVM.

Planned cleanups to the way we handle this will require knowing whether this restriction is in play in wider parts of the code. So, expose a helper function which returns it. This does mean some redundant calls to kvm_get_smmu_info(), but they'll go away again with future cleanups.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>

2018-06-21  target/ppc: Allow cpu compatibility checks based on type, not instance  (David Gibson, 2 files changed, -6/+25)

ppc_check_compat() is used in a number of places to check if a cpu object supports a certain compatibility mode, subject to various constraints. It takes a PowerPCCPU *, however it really only depends on the cpu's class. We have upcoming cases where it would be useful to make compatibility checks before we fully instantiate the cpu objects.

ppc_type_check_compat() will now make an equivalent check, but based on a CPU's QOM typename instead of an instantiated CPU object.

We make use of the new interface in several places in spapr, where we're essentially making a global check, rather than one specific to a particular cpu. This avoids some ugly uses of first_cpu to grab a "representative" instance.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>

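The two checks side by side (a signature sketch based on the description; see compat.c for the authoritative declarations):

    /* Instance-level check: needs a fully constructed cpu object. */
    bool ppc_check_compat(PowerPCCPU *cpu, uint32_t compat_pvr,
                          uint32_t min_compat_pvr, uint32_t max_compat_pvr);

    /* Class-level equivalent: only needs the QOM typename, so it can
     * run before any cpu has been instantiated. */
    bool ppc_type_check_compat(const char *cputype, uint32_t compat_pvr,
                               uint32_t min_compat_pvr, uint32_t max_compat_pvr);
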
2018-06-16  target/ppc, spapr: Move VPA information to machine_data  (David Gibson, 3 files changed, -32/+22)

CPUPPCState currently contains a number of fields containing the state of the VPA. The VPA is a PAPR-specific concept covering several guest/host shared memory areas used to communicate some information with the hypervisor. As a PAPR concept this is really machine-specific information, although it is per-cpu, so it doesn't really belong in the core CPU state structure.

There's also other information that's per-cpu, but platform/machine specific. So create a (void *)machine_data in PowerPCCPU which can be used by the machine to locate per-cpu data. Initialization, lifetime and cleanup of machine_data are entirely up to the machine type.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>

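Hypothetical machine-side usage of the new hook (struct and function names invented for illustration; only the machine_data field itself comes from the patch):

    typedef struct SpaprCpuState {
        uint64_t vpa_addr;   /* VPA state now lives with the machine */
    } SpaprCpuState;

    static void spapr_cpu_init(PowerPCCPU *cpu)
    {
        /* The machine owns machine_data: it allocates, uses and frees it. */
        cpu->machine_data = g_new0(SpaprCpuState, 1);
    }
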
2018-06-16  target/ppc: drop empty #if/#endif block  (Greg Kurz, 1 file changed, -2/+0)

Commit 9d6f106552fa moved the last line in this block to somewhere else, but it forgot to remove the now useless #if/#endif.

Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-06-16  target/ppc: Don't require private l1d cache on POWER8 for cap_ppc_safe_cache  (Suraj Jitindar Singh, 1 file changed, -1/+18)

For cap_ppc_safe_cache to be set to workaround, we require both an l1d cache flush instruction and a private l1d cache. On POWER8, don't require a private l1d cache. This means a guest on a POWER8 machine can make use of the cache flush workarounds.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-06-12  target/ppc: Allow PIR read in privileged mode  (luporl, 1 file changed, -1/+1)

According to PowerISA, the PIR register should be readable in privileged mode also, not only in hypervisor privileged mode.

PowerISA 3.0, 4.3.3 Processor Identification Register: "Read access to the PIR is privileged; write access is not provided." Figure 18 in section 4.4.4 explicitly confirms that mfspr PIR is privileged and doesn't require hypervisor state.

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Alexander Graf <agraf@suse.de>
Cc: qemu-ppc@nongnu.org
Signed-off-by: Leandro Lupori <leandro.lupori@gmail.com>
Reviewed-by: Jose Ricardo Ziviani <joserz@linux.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-06-12  target/ppc: extend eieio for POWER9  (Cédric Le Goater, 1 file changed, -2/+23)

POWER9 introduced a new variant of the eieio instruction using bit 6 as a hint to tell the CPU it is a store-forwarding barrier. The usage of this eieio extension was recently added in Linux 4.17, which activated "support for a store forwarding barrier at kernel entry/exit".

Unfortunately, it is not possible to insert this new eieio instruction without considerable change in ppc_tr_translate_insn(). So instead we loosen the QEMU eieio instruction mask and modify the gen_eieio() helper to test for bit 6. On non-POWER9 CPUs, bit 6 is simply ignored, but a warning is emitted, as this is not an instruction software should be using.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

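A condensed sketch of the resulting decode, assuming bit 6 in IBM numbering corresponds to mask 0x02000000 of the 32-bit opcode (the exact barrier choice and the warning path are illustrative):

    static void gen_eieio(DisasContext *ctx)
    {
        TCGBar bar = TCG_MO_LD_ST;

        if (ctx->opcode & 0x02000000) {
            /* POWER9's store-forwarding-barrier form; pre-POWER9 CPUs
             * ignore the bit, and a guest-error warning is logged. */
            bar = TCG_MO_ST_ST;
        }
        tcg_gen_mb(bar | TCG_BAR_SC);
    }
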