2019-09-03  s390x/tcg: Fix length calculation in probe_write_access()  (David Hildenbrand; 1 file, -1/+1)
Hm... how did that "-" slip in (-TARGET_PAGE_SIZE would be correct). This currently makes us exceed one page in a single probe_write() call, essentially leaving some memory unchecked. Fixes: c5a7392cfb96 ("s390x/tcg: Provide probe_write_access helper") Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190826075112.25637-3-david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
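For reference, a standalone sketch of the bytes-remaining-in-this-page calculation that a probe loop of this kind caps each probe_write() call at; the page-size macros and values below are assumed for illustration and are not the s390x code itself.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assumed values for illustration; real targets define these themselves. */
    #define TARGET_PAGE_SIZE 4096ULL
    #define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))   /* == -TARGET_PAGE_SIZE */

    /* Bytes from addr up to the end of its page: the quantity a probe loop
     * must cap each probe_write() call at so that no call crosses a page. */
    static uint64_t bytes_left_in_page(uint64_t addr)
    {
        return -(addr | TARGET_PAGE_MASK);
    }

    int main(void)
    {
        /* Spot-check against the more obvious formulation. */
        for (uint64_t addr = 0; addr < 3 * TARGET_PAGE_SIZE; addr += 77) {
            uint64_t expect = TARGET_PAGE_SIZE - (addr & ~TARGET_PAGE_MASK);
            assert(bytes_left_in_page(addr) == expect);
        }
        printf("0x1ff8 -> %llu bytes to page end\n",
               (unsigned long long)bytes_left_in_page(0x1ff8));
        return 0;
    }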
2019-09-03  s390x/tcg: Use guest_addr_valid() instead of h2g_valid() in probe_write_access()  (David Hildenbrand; 1 file, -1/+1)
If I'm not completely wrong, we are dealing with guest addresses here and not with host addresses. Use the right check. Fixes: c5a7392cfb96 ("s390x/tcg: Provide probe_write_access helper") Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190826075112.25637-2-david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  tcg: Check for watchpoints in probe_write()  (David Hildenbrand; 1 file, -2/+13)
Let size > 0 indicate a promise to write to those bytes. Check for write watchpoints in the probed range. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20190823100741.9621-10-david@redhat.com> [rth: Recompute index after tlb_fill; check TLB_WATCHPOINT.] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  cputlb: Handle watchpoints via TLB_WATCHPOINT  (Richard Henderson; 3 files, -118/+90)
The raising of exceptions from check_watchpoint, buried inside of the I/O subsystem, is fundamentally broken. We do not have the helper return address with which we can unwind guest state. Replace PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT. Move the call to cpu_check_watchpoint into the cputlb helpers where we do have the helper return address. This allows watchpoints on RAM to bypass the full i/o access path. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
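A compile-and-run sketch of the mechanism this change leans on: slow-path conditions live as flag bits in the low (sub-page) bits of the TLB comparator, so a watchpointed page fails the fast-path compare and the access is handled in the cputlb helpers, which do have the helper return address. The flag names, values, and the exact shape of the check are invented for the demo; QEMU's real fast-path test is more involved.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative layout only: flags are kept in the bits below the
     * page-size boundary of the TLB comparator. */
    #define PAGE_BITS       12
    #define PAGE_MASK       (~(((uint64_t)1 << PAGE_BITS) - 1))
    #define TLB_INVALID     ((uint64_t)1 << 0)
    #define TLB_MMIO        ((uint64_t)1 << 1)
    #define TLB_WATCHPOINT  ((uint64_t)1 << 2)

    /* One equality test does double duty: any flag stored in the comparator's
     * low bits makes the page-aligned address compare unequal, diverting the
     * access off the fast path, where cpu_check_watchpoint() can run. */
    static int fast_path_hit(uint64_t tlb_comparator, uint64_t addr)
    {
        return (addr & PAGE_MASK) == tlb_comparator;
    }

    int main(void)
    {
        uint64_t plain   = 0x40000000ULL;                    /* ordinary RAM page */
        uint64_t watched = 0x40001000ULL | TLB_WATCHPOINT;   /* watchpointed page */

        assert(fast_path_hit(plain, 0x40000123ULL));
        assert(!fast_path_hit(watched, 0x40001123ULL));      /* forced off fast path */
        printf("watchpointed pages never take the fast path\n");
        return 0;
    }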
2019-09-03  cputlb: Remove double-alignment in store_helper  (Richard Henderson; 1 file, -2/+1)
We have already aligned page2 to the start of the next page. There is no reason to do that a second time. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  cputlb: Fix size operand for tlb_fill on unaligned store  (Richard Henderson; 1 file, -1/+4)
We are currently passing the size of the full write to the tlb_fill for the second page. Instead pass the real size of the write to that page. This argument is unused within all tlb_fill, except to be logged via tracing, so in practice this makes no difference. But in a moment we'll need the value of size2 for watchpoints, and if we've computed the value we might as well use it. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
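A minimal standalone sketch of the computation described above, with assumed page-size macros: page2 is the start of the second page, and size2 is the number of bytes of the store that land on it.

    #include <assert.h>
    #include <stdint.h>

    #define TARGET_PAGE_SIZE 4096u
    #define TARGET_PAGE_MASK (~(uint64_t)(TARGET_PAGE_SIZE - 1))

    /* For a store of 'size' bytes at 'addr' that crosses a page boundary,
     * compute how many bytes fall on the second page.  This is the value
     * that should be handed to tlb_fill for that page (and later used for
     * watchpoint checks), rather than the full 'size'. */
    static unsigned bytes_on_second_page(uint64_t addr, unsigned size)
    {
        uint64_t page2 = (addr + size) & TARGET_PAGE_MASK;  /* start of 2nd page */
        return (unsigned)(addr + size - page2);
    }

    int main(void)
    {
        /* An 8-byte store starting 3 bytes before a page boundary:
         * 3 bytes land on page one, 5 bytes on page two. */
        assert(bytes_on_second_page(0x1ffd, 8) == 5);
        return 0;
    }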
2019-09-03  exec: Factor out cpu_watchpoint_address_matches  (Richard Henderson; 2 files, -16/+36)
We want to move the check for watchpoints from memory_region_section_get_iotlb to tlb_set_page_with_attrs. Isolate the loop over watchpoints to an exported function. Rename the existing cpu_watchpoint_address_matches to watchpoint_address_matches, since it doesn't actually have a cpu argument. Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
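The exported helper is essentially a range-overlap predicate applied to each watchpoint; here is a minimal standalone version of that predicate (the real function also accumulates the watchpoint's BP_MEM_* flags, which is omitted here).

    #include <assert.h>
    #include <stdint.h>

    typedef uint64_t vaddr;

    /* Does the access [addr, addr+len) overlap the watched range
     * [wp_addr, wp_addr+wp_len)? */
    static int watchpoint_overlaps(vaddr wp_addr, vaddr wp_len,
                                   vaddr addr, vaddr len)
    {
        return addr < wp_addr + wp_len && wp_addr < addr + len;
    }

    int main(void)
    {
        assert(watchpoint_overlaps(0x1000, 4, 0x1002, 8));   /* partial overlap */
        assert(!watchpoint_overlaps(0x1000, 4, 0x1004, 4));  /* adjacent, no hit */
        return 0;
    }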
2019-09-03  cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK  (Richard Henderson; 2 files, -67/+24)
We had two different mechanisms to force a recheck of the tlb. Before TLB_RECHECK was introduced, we had a PAGE_WRITE_INV bit that would immediately set TLB_INVALID_MASK, which automatically means that a second check of the tlb entry fails. We can use the same mechanism to handle small pages. Conserve TLB_* bits by removing TLB_RECHECK. Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  exec: Factor out core logic of check_watchpoint()  (David Hildenbrand; 2 files, -8/+25)
We want to perform the same checks in probe_write() to trigger a cpu exit before doing any modifications. We'll have to pass a PC. Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190823100741.9621-9-david@redhat.com> [rth: Use vaddr for len, like other watchpoint functions; Move user-only stub to static inline.] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  exec: Move user-only watchpoint stubs inline  (Richard Henderson; 2 files, -24/+25)
Let the user-only watchpoint stubs resolve to empty inline functions. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
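What "resolve to empty inline functions" looks like in practice, as a header-style sketch that compiles on its own; CONFIG_USER_ONLY is the real QEMU guard macro, but the prototypes and return value are simplified assumptions, not the actual declarations.

    #include <stdint.h>

    typedef uint64_t vaddr;
    typedef struct CPUState CPUState;   /* opaque for this sketch */

    #ifdef CONFIG_USER_ONLY
    /* User-mode emulation has no guest-debug watchpoints, so the hooks
     * compile away to nothing instead of being out-of-line stubs. */
    static inline int cpu_watchpoint_insert(CPUState *cpu, vaddr addr,
                                            vaddr len, int flags,
                                            void **watchpoint)
    {
        return -1;   /* not supported in user-only mode (assumed value) */
    }

    static inline void cpu_watchpoint_remove_all(CPUState *cpu, int mask)
    {
    }

    static inline void cpu_check_watchpoint(CPUState *cpu, vaddr addr,
                                            vaddr len, int flags)
    {
    }
    #endif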
2019-09-03  target/sparc: sun4u Invert Endian TTE bit  (Tony Nguyen; 2 files, -1/+9)
This bit configures endianness of PCI MMIO devices. It is used by Solaris and OpenBSD sunhme drivers. Tested working on OpenBSD. Unfortunately Solaris 10 had an unrelated keyboard issue blocking testing... another inch towards Solaris 10 on SPARC64 =) Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Message-Id: <3c8d5181a584f1b3712d3d8d66801b13cecb4b88.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  target/sparc: Add TLB entry with attributes  (Tony Nguyen; 1 file, -14/+18)
Append MemTxAttrs to interfaces so we can pass along the upcoming Invert Endian TTE bit on SPARC64. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <f8fcc3138570c460ef289a6b34ba7715ba36f99e.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  cputlb: Byte swap memory transaction attribute  (Tony Nguyen; 2 files, -0/+14)
Notice the new byte-swap attribute and force the transaction through the memory slow path. Required by architectures that can invert endianness of memory transactions, e.g. SPARC64 has the Invert Endian TTE bit. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <2a10a1f1c00a894af1212c8f68ef09c2966023c1.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  memory: Single byte swap along the I/O path  (Tony Nguyen; 5 files, -142/+23)
Now that MemOp has been pushed down into the memory API, and callers are encoding endianness, we can collapse byte swaps along the I/O path into the accelerator and target independent adjust_endianness. Collapsing byte swaps along the I/O path enables additional endian inversion logic, e.g. SPARC64 Invert Endian TTE bit, with redundant byte swaps cancelling out. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Suggested-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Message-Id: <911ff31af11922a9afba9b7ce128af8b8b80f316.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
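To make the "single byte swap" concrete, here is a small standalone program in the spirit of adjust_endianness: the MemOp carries both the access size and whether a swap is needed, and applying the same swap twice cancels out, which is why pushing MemOp down the I/O path lets redundant swaps disappear. The MO_* values and the helper are simplified stand-ins, not QEMU's definitions.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified MemOp encoding: low bits = log2(size), one bit = "a byte
     * swap is needed relative to the host representation". */
    typedef enum {
        MO_8  = 0, MO_16 = 1, MO_32 = 2, MO_64 = 3,
        MO_SIZE  = 3,
        MO_BSWAP = 8,
    } MemOp;

    static uint64_t bswap_size(uint64_t x, unsigned size)
    {
        switch (size) {
        case 2:  return __builtin_bswap16(x);
        case 4:  return __builtin_bswap32(x);
        case 8:  return __builtin_bswap64(x);
        default: return x;                 /* single byte: nothing to do */
        }
    }

    /* The single point where endianness is adjusted along the I/O path. */
    static uint64_t adjust_endianness(uint64_t val, MemOp op)
    {
        if (op & MO_BSWAP) {
            val = bswap_size(val, 1u << (op & MO_SIZE));
        }
        return val;
    }

    int main(void)
    {
        /* Two back-to-back swaps cancel out, so a redundant pair along the
         * path can simply be dropped. */
        uint64_t v = 0x11223344;
        assert(adjust_endianness(adjust_endianness(v, MO_32 | MO_BSWAP),
                                 MO_32 | MO_BSWAP) == v);
        printf("0x%08x -> 0x%08x\n", (uint32_t)v,
               (uint32_t)adjust_endianness(v, MO_32 | MO_BSWAP));
        return 0;
    }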
2019-09-03  cputlb: Replace size and endian operands for MemOp  (Tony Nguyen; 2 files, -89/+87)
Preparation for collapsing the two byte swaps adjust_endianness and handle_bswap into the former. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <755b7104410956b743e1f1e9c34ab87db113360f.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  memory: Access MemoryRegion with endianness  (Tony Nguyen; 9 files, -23/+75)
Preparation for collapsing the two byte swaps adjust_endianness and handle_bswap into the former. Call memory_region_dispatch_{read|write} with endianness encoded into the "MemOp op" operand. This patch does not change any behaviour as memory_region_dispatch_{read|write} is yet to handle the endianness. Once it does handle endianness, callers with byte swaps can collapse them into adjust_endianness. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Message-Id: <8066ab3eb037c0388dfadfe53c5118429dd1de3a.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  exec: Hard code size with MO_{8|16|32|64}  (Tony Nguyen; 1 file, -9/+9)
The temporarily no-op size_memop was introduced to aid the conversion of the memory_region_dispatch_{read|write} operand "unsigned size" into "MemOp op". Now that size_memop is implemented, hard code the size again, but with MO_{8|16|32|64}. This is more expressive and avoids size_memop calls. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <99f69701cad294db638f84abebc58115e1b9de9a.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  target/mips: Hard code size with MO_{8|16|32|64}  (Tony Nguyen; 1 file, -2/+2)
The temporarily no-op size_memop was introduced to aid the conversion of the memory_region_dispatch_{read|write} operand "unsigned size" into "MemOp op". Now that size_memop is implemented, hard code the size again, but with MO_{8|16|32|64}. This is more expressive and avoids size_memop calls. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com> Message-Id: <99c4459d5c1dc9013820be3dbda9798165c15b99.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  hw/s390x: Hard code size with MO_{8|16|32|64}  (Tony Nguyen; 1 file, -2/+1)
The temporarily no-op size_memop was introduced to aid the conversion of the memory_region_dispatch_{read|write} operand "unsigned size" into "MemOp op". Now that size_memop is implemented, hard code the size again, but with MO_{8|16|32|64}. This is more expressive and avoids size_memop calls. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <76dc97273a8eb5e10170ffc16526863df808f487.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  memory: Access MemoryRegion with MemOp  (Tony Nguyen; 3 files, -12/+24)
Convert memory_region_dispatch_{read|write} operand "unsigned size" into a "MemOp op". Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <1dd82df5801866743f838f1d046475115a1d32da.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  cputlb: Access MemoryRegion with MemOp  (Tony Nguyen; 1 file, -4/+4)
The memory_region_dispatch_{read|write} operand "unsigned size" is being converted into a "MemOp op". Convert interfaces by using no-op size_memop. After all interfaces are converted, size_memop will be implemented and the memory_region_dispatch_{read|write} operand "unsigned size" will be converted into a "MemOp op". As size_memop is a no-op, this patch does not change any behaviour. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <c4571c76467ade83660970f7ef9d7292297f1908.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  exec: Access MemoryRegion with MemOp  (Tony Nguyen; 2 files, -11/+13)
The memory_region_dispatch_{read|write} operand "unsigned size" is being converted into a "MemOp op". Convert interfaces by using no-op size_memop. After all interfaces are converted, size_memop will be implemented and the memory_region_dispatch_{read|write} operand "unsigned size" will be converted into a "MemOp op". As size_memop is a no-op, this patch does not change any behaviour. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <3b042deef0a60dd49ae2320ece92120ba6027f2b.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  hw/vfio: Access MemoryRegion with MemOp  (Tony Nguyen; 1 file, -2/+4)
The memory_region_dispatch_{read|write} operand "unsigned size" is being converted into a "MemOp op". Convert interfaces by using no-op size_memop. After all interfaces are converted, size_memop will be implemented and the memory_region_dispatch_{read|write} operand "unsigned size" will be converted into a "MemOp op". As size_memop is a no-op, this patch does not change any behaviour. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <e70ff5814ac3656974180db6375397c43b0bc8b8.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  hw/virtio: Access MemoryRegion with MemOp  (Tony Nguyen; 1 file, -2/+5)
The memory_region_dispatch_{read|write} operand "unsigned size" is being converted into a "MemOp op". Convert interfaces by using no-op size_memop. After all interfaces are converted, size_memop will be implemented and the memory_region_dispatch_{read|write} operand "unsigned size" will be converted into a "MemOp op". As size_memop is a no-op, this patch does not change any behaviour. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <ebf1f78029d5ac1de1739a11d679740a87a1f02f.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  hw/intc/armv7m_nic: Access MemoryRegion with MemOp  (Tony Nguyen; 1 file, -4/+8)
The memory_region_dispatch_{read|write} operand "unsigned size" is being converted into a "MemOp op". Convert interfaces by using no-op size_memop. After all interfaces are converted, size_memop will be implemented and the memory_region_dispatch_{read|write} operand "unsigned size" will be converted into a "MemOp op". As size_memop is a no-op, this patch does not change any behaviour. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <21113bae2f54b45176701e0bf595937031368ae6.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  hw/s390x: Access MemoryRegion with MemOp  (Tony Nguyen; 1 file, -3/+5)
The memory_region_dispatch_{read|write} operand "unsigned size" is being converted into a "MemOp op". Convert interfaces by using no-op size_memop. After all interfaces are converted, size_memop will be implemented and the memory_region_dispatch_{read|write} operand "unsigned size" will be converted into a "MemOp op". As size_memop is a no-op, this patch does not change any behaviour. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <2f41da26201fb9b0339c2b7fde34df864f7f9ea8.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  target/mips: Access MemoryRegion with MemOp  (Tony Nguyen; 1 file, -2/+3)
The memory_region_dispatch_{read|write} operand "unsigned size" is being converted into a "MemOp op". Convert interfaces by using no-op size_memop. After all interfaces are converted, size_memop will be implemented and the memory_region_dispatch_{read|write} operand "unsigned size" will be converted into a "MemOp op". As size_memop is a no-op, this patch does not change any behaviour. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com> Message-Id: <af407f0a34dc95ef5aaf2c00dffda7c65df23c3a.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  memory: Introduce size_memop  (Tony Nguyen; 1 file, -0/+10)
The memory_region_dispatch_{read|write} operand "unsigned size" is being converted into a "MemOp op". Introduce no-op size_memop to aid preparatory conversion of interfaces. Once interfaces are converted, size_memop will be implemented to return a MemOp from size in bytes. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <35b8ee74020f67cf40848fb7d5f127cf96c851d6.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
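As a hedged sketch of where this is heading: once implemented, size_memop maps a power-of-two byte count onto the MO_* size encoding (its interim no-op form simply passed the value through). The enum and implementation below are illustrative assumptions, not the QEMU header.

    #include <assert.h>

    /* Simplified MemOp size encoding (log2 of the access size in bytes). */
    typedef enum { MO_8 = 0, MO_16 = 1, MO_32 = 2, MO_64 = 3 } MemOp;

    /* Assumed final form: valid only for power-of-two sizes 1, 2, 4, 8. */
    static MemOp size_memop(unsigned size)
    {
        return (MemOp)__builtin_ctz(size);
    }

    int main(void)
    {
        assert(size_memop(1) == MO_8);
        assert(size_memop(2) == MO_16);
        assert(size_memop(4) == MO_32);
        assert(size_memop(8) == MO_64);
        return 0;
    }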
2019-09-03  tcg: TCGMemOp is now accelerator independent MemOp  (Tony Nguyen; 39 files, -399/+421)
Preparation for collapsing the two byte swaps, adjust_endianness and handle_bswap, along the I/O path. Target dependent attributes are conditionalized upon NEED_CPU_H. Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Acked-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <81d9cd7d7f5aaadfa772d6c48ecee834e9cf7882.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  target/arm: Don't abort on M-profile exception return in linux-user mode  (Peter Maydell; 1 file, -1/+20)
An attempt to do an exception-return (branch to one of the magic addresses) in linux-user mode for M-profile should behave like a normal branch, because linux-user mode is always going to be in 'handler' mode. This used to work, but we broke it when we added support for the M-profile security extension in commit d02a8698d7ae2bfed. In that commit we allowed even handler-mode calls to magic return values to be checked for and dealt with by causing an EXCP_EXCEPTION_EXIT exception to be taken, because this is needed for the FNC_RETURN return-from-non-secure-function-call handling. For system mode we added a check in do_v7m_exception_exit() to make any spurious calls from Handler mode behave correctly, but forgot that linux-user mode would also be affected. How an attempted return-from-non-secure-function-call in linux-user mode should be handled is not clear -- on real hardware it would result in return to secure code (not to the Linux kernel) which could then handle the error in any way it chose. For QEMU we take the simple approach of treating this erroneous return the same way it would be handled on a CPU without the security extensions -- treat it as a normal branch. The upshot of all this is that for linux-user mode we should never do any of the bx_excret magic, so the code change is simple. This ought to be a weird corner case that only affects broken guest code (because Linux user processes should never be attempting to do exception returns or NS function returns), except that the code that assigns addresses in RAM for the process and stack in our linux-user code does not attempt to avoid this magic address range, so legitimate code attempting to return to a trampoline routine on the stack can fall into this case. This change fixes those programs, but we should also look at restricting the range of memory we use for M-profile linux-user guests to the area that would be real RAM in hardware. Cc: qemu-stable@nongnu.org Reported-by: Christophe Lyon <christophe.lyon@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20190822131534.16602-1-peter.maydell@linaro.org Fixes: https://bugs.launchpad.net/qemu/+bug/1840922 Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  target/arm: Free TCG temps in trans_VMOV_64_sp()  (Peter Maydell; 1 file, -0/+2)
The function neon_store_reg32() doesn't free the TCG temp that it is passed, so the caller must do that. We got this right in most places but forgot to free the TCG temps in trans_VMOV_64_sp(). Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190827121931.26836-1-peter.maydell@linaro.org
2019-09-03  include/exec/cpu-defs.h: fix typo  (Alex Bennée; 1 file, -1/+1)
Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190828165307.18321-10-alex.bennee@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  atomic_template: fix indentation in GEN_ATOMIC_HELPER  (Emilio G. Cota; 1 file, -1/+1)
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190828165307.18321-8-alex.bennee@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  tcg/README: fix typo s/afterwise/afterwards/  (Emilio G. Cota; 1 file, -1/+1)
Afterwise is "wise after the fact", as in "hindsight". Here we meant "afterwards" (as in "subsequently"). Fix it. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190828165307.18321-7-alex.bennee@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  includes: remove stale [smp|max]_cpus externs  (Alex Bennée; 1 file, -2/+0)
Commit a5e0b3311 removed these in favour of querying machine properties. Remove the extern declarations as well. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190828165307.18321-6-alex.bennee@linaro.org Cc: Like Xu <like.xu@linux.intel.com> Message-Id: <20190711130546.18578-1-alex.bennee@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  hw/net/xilinx_axi: Use object_initialize_child for correct ref. counting  (Philippe Mathieu-Daudé; 1 file, -9/+8)
As explained in commit aff39be0ed97: Both functions, object_initialize() and object_property_add_child(), increase the reference counter of the new object, so one of the references has to be dropped afterwards to get the reference counting right. Otherwise the child object will not be properly cleaned up when the parent gets destroyed. Thus let's now use object_initialize_child() instead to get the reference counting right. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190823143249.8096-7-philmd@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
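To make the refcount reasoning above concrete, a toy standalone model (this is a stand-in for the QOM behaviour described, not the QOM API): both the init step and the add-child step take a reference, so the creator has to drop one afterwards; object_initialize_child() bundles that pattern into a single call.

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Toy object model mirroring the refcount flow described above. */
    typedef struct Obj { int ref; } Obj;

    static void obj_ref(Obj *o)   { o->ref++; }
    static void obj_unref(Obj *o) { if (--o->ref == 0) { printf("child freed\n"); free(o); } }

    /* Stand-in for object_initialize(): the new object holds one reference. */
    static Obj *toy_initialize(void)
    {
        Obj *o = malloc(sizeof(*o));
        o->ref = 1;
        return o;
    }

    /* Stand-in for object_property_add_child(): the parent takes a reference. */
    static void toy_add_child(Obj *child)
    {
        obj_ref(child);
    }

    int main(void)
    {
        Obj *child = toy_initialize();   /* ref = 1, held by the creator     */
        toy_add_child(child);            /* ref = 2, parent now also owns it */
        obj_unref(child);                /* creator drops its reference ...  */
        assert(child->ref == 1);         /* ... leaving the parent as owner  */
        obj_unref(child);                /* what the parent's teardown does  */
        return 0;
    }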
2019-09-03  hw/dma/xilinx_axi: Use object_initialize_child for correct ref. counting  (Philippe Mathieu-Daudé; 1 file, -8/+8)
As explained in commit aff39be0ed97: Both functions, object_initialize() and object_property_add_child(), increase the reference counter of the new object, so one of the references has to be dropped afterwards to get the reference counting right. Otherwise the child object will not be properly cleaned up when the parent gets destroyed. Thus let's now use object_initialize_child() instead to get the reference counting right. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190823143249.8096-6-philmd@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  hw/arm/fsl-imx: Add the cpu as child of the SoC object  (Philippe Mathieu-Daudé; 2 files, -2/+6)
Child properties form the composition tree. All objects need to be a child of another object. Objects can only be a child of one object. Respect this with the i.MX SoC, to get a cleaner composition tree. Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190823143249.8096-5-philmd@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  hw/arm: Use sysbus_init_child_obj for correct reference counting  (Philippe Mathieu-Daudé; 1 file, -2/+2)
Both object_initialize() and qdev_set_parent_bus() increase the reference counter of the new object, so one of the references has to be dropped afterwards to get the reference counting right. In machine model code this refcount leak is not particularly problematic because (unlike devices) machines will never be created on demand via QMP, and they are never destroyed. But in any case let's use the new sysbus_init_child_obj() instead to get the reference counting right here. Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190823143249.8096-4-philmd@redhat.com [PMM: rewrote commit message] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  hw/arm: Use object_initialize_child for correct reference counting  (Philippe Mathieu-Daudé; 3 files, -17/+16)
As explained in commit aff39be0ed97: Both functions, object_initialize() and object_property_add_child(), increase the reference counter of the new object, so one of the references has to be dropped afterwards to get the reference counting right. Otherwise the child object will not be properly cleaned up when the parent gets destroyed. Thus let's now use object_initialize_child() instead to get the reference counting right. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190823143249.8096-3-philmd@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  hw/arm: Use ARM_CPU_TYPE_NAME() macro when appropriate  (Philippe Mathieu-Daudé; 8 files, -11/+15)
Commit ba1ba5cca introduced the ARM_CPU_TYPE_NAME() macro. Unify the code base by using it in all places. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190823143249.8096-2-philmd@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  target/arm: Fix SMMLS argument order  (Richard Henderson; 1 file, -2/+18)
The previous simplification got the order of operands to the subtraction wrong. Since the 64-bit product is the subtrahend, we must use a 64-bit subtract to properly compute the borrow from the low-part of the product. Fixes: 5f8cd06ebcf5 ("target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR") Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190829013258.16102-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
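A standalone illustration of the borrow issue: SMMLS returns the high 32 bits of (Ra << 32) - Rn*Rm, so the subtraction must be done at 64 bits for the borrow out of the low half to reach the result. This mirrors the architectural definition, not the QEMU decoder code.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Correct: subtract the full 64-bit product from Ra:0, keep the high half. */
    static int32_t smmls_ref(int32_t ra, int32_t rn, int32_t rm)
    {
        int64_t acc  = (int64_t)ra << 32;
        int64_t prod = (int64_t)rn * rm;
        return (int32_t)((acc - prod) >> 32);
    }

    /* Wrong: subtracting only the high parts ignores the borrow generated
     * whenever the low 32 bits of the product are non-zero. */
    static int32_t smmls_broken(int32_t ra, int32_t rn, int32_t rm)
    {
        int64_t prod = (int64_t)rn * rm;
        return ra - (int32_t)(prod >> 32);
    }

    int main(void)
    {
        int32_t ra = 5, rn = 3, rm = 7;        /* product 21: low half non-zero */
        assert(smmls_ref(ra, rn, rm) == 4);    /* ((5 << 32) - 21) >> 32 == 4   */
        assert(smmls_broken(ra, rn, rm) == 5); /* borrow lost                   */
        printf("ref=%d broken=%d\n", smmls_ref(ra, rn, rm), smmls_broken(ra, rn, rm));
        return 0;
    }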
2019-09-03  hw/arm/smmuv3: Remove spurious error messages on IOVA invalidations  (Eric Auger; 2 files, -8/+12)
An IOVA/ASID invalidation is notified to all IOMMU Memory Regions through smmuv3_inv_notifiers_iova/smmuv3_notify_iova. When the notification occurs it is possible that some of the PCIe devices associated with the notified regions do not have a valid stream table entry. In that case we output LOG_GUEST_ERROR messages, for example: "invalid sid=<SID> (L1STD span=0)" or "smmuv3_notify_iova error decoding the configuration for iommu mr=<MR>". This is unfortunate as the user gets the impression that there are some translation decoding errors whereas there are not. This patch adds a new field in SMMUEventInfo that tells whether the detection of an invalid STE must lead to an error report. invalid_ste_allowed is set before doing the invalidations and kept unset on actual translation. The other configuration decoding error messages are kept since if the STE is valid then the rest of the config must be correct. Signed-off-by: Eric Auger <eric.auger@redhat.com> Message-id: 20190822172350.12008-6-eric.auger@redhat.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  hw/arm/smmuv3: Log a guest error when decoding an invalid STE  (Eric Auger; 1 file, -0/+1)
Log a guest error when encountering an invalid STE. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190822172350.12008-5-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  memory: Remove unused memory_region_iommu_replay_all()  (Eric Auger; 2 files, -19/+0)
memory_region_iommu_replay_all is not used. Remove it. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reported-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Message-id: 20190822172350.12008-2-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  aspeed/timer: Provide back-pressure information for short periods  (Andrew Jeffery; 1 file, -1/+16)
First up: This is not the way the hardware behaves. However, it helps resolve real-world problems with short periods being used under Linux. Commit 4451d3f59f2a ("clocksource/drivers/fttmr010: Fix set_next_event handler") in Linux fixed the timer driver to correctly schedule the next event for the Aspeed controller, and in combination with 5daa8212c08e ("ARM: dts: aspeed: Describe random number device") Linux will now set a timer with a period as low as 1us. Configuring a QEMU timer with such a short period results in spending time handling the interrupt in the model rather than executing guest code, leading to noticeable "sticky" behaviour in the guest. The behaviour of Linux is correct with respect to the hardware, so we need to improve our handling under emulation. The approach chosen is to provide back-pressure information by calculating an acceptable minimum number of ticks to be set on the model. Under Linux an additional read is added in the timer configuration path to detect back-pressure, which will never occur on hardware. However if back-pressure is observed, the driver alerts the clock event subsystem, which then performs its own next event dilation via a config option - d1748302f70b ("clockevents: Make minimum delay adjustments configurable"). A minimum period of 5us was experimentally determined on a Lenovo T480s, which I've increased to 20us for "safety". Signed-off-by: Andrew Jeffery <andrew@aj.id.au> Reviewed-by: Joel Stanley <joel@jms.id.au> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Cédric Le Goater <clg@kaod.org> Message-id: 20190704055150.4899-1-clg@kaod.org [clg: - changed the computation of min_ticks to be done each time the timer value is reloaded. It removes the ordering issue of the timer and scu reset handlers but is slightly slower - introduced TIMER_MIN_NS - introduced calculate_min_ticks() ] Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
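A standalone sketch of the clamp described above, using the calculate_min_ticks() name mentioned in the note: convert the TIMER_MIN_NS floor into ticks at the timer's input clock and never load fewer. The function shape and the 1 MHz demo rate are assumptions, not the model's exact code.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NANOSECONDS_PER_SECOND 1000000000ULL
    #define TIMER_MIN_NS           20000ULL      /* 20 us floor, per the message */

    /* Smallest tick count we allow the guest to load, given the timer's rate. */
    static uint64_t calculate_min_ticks(uint64_t rate_hz, uint64_t value)
    {
        uint64_t min_ticks = (TIMER_MIN_NS * rate_hz) / NANOSECONDS_PER_SECOND;

        return value < min_ticks ? min_ticks : value;
    }

    int main(void)
    {
        uint64_t rate = 1000000;   /* a 1 MHz timer tick, assumed for the demo */

        /* A 1-tick (1 us) request is clamped up to 20 us worth of ticks. */
        assert(calculate_min_ticks(rate, 1) == 20);
        /* A long period is left alone. */
        assert(calculate_min_ticks(rate, 100000) == 100000);
        printf("min ticks at %llu Hz: %llu\n",
               (unsigned long long)rate,
               (unsigned long long)calculate_min_ticks(rate, 0));
        return 0;
    }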
2019-09-03  target/arm: Take exceptions on ATS instructions when needed  (Peter Maydell; 1 file, -15/+92)
The translation table walk for an ATS instruction can result in various faults. In general these are just reported back via the PAR_EL1 fault status fields, but in some cases the architecture requires that the fault is turned into an exception: * synchronous stage 2 faults of any kind during AT S1E0* and AT S1E1* instructions executed from NS EL1 fault to EL2 or EL3 * synchronous external aborts are taken as Data Abort exceptions (This is documented in the v8A Arm ARM DDI0487A.e D5.2.11 and G5.13.4.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 20190816125802.25877-3-peter.maydell@linaro.org
2019-09-03  target/arm: Allow ARMCPRegInfo read/write functions to throw exceptions  (Peter Maydell; 3 files, -1/+18)
Currently the only part of an ARMCPRegInfo which is allowed to cause a CPU exception is the access function, which returns a value indicating that some flavour of UNDEF should be generated. For the ATS system instructions, we would like to conditionally generate exceptions as part of the writefn, because some faults during the page table walk (like external aborts) should cause an exception to be raised rather than returning a value. There are several ways we could do this: * plumb the GETPC() value from the top level set_cp_reg/get_cp_reg helper functions through into the readfn and writefn hooks * add extra readfn_with_ra/writefn_with_ra hooks that take the GETPC() value * require the ATS instructions to provide a dummy accessfn, which serves no purpose except to cause the code generation to emit TCG ops to sync the CPU state * add an ARM_CP_ flag to mark the ARMCPRegInfo as possibly throwing an exception in its read/write hooks, and make the codegen sync the CPU state before calling the hooks if the flag is set This patch opts for the last of these, as it is fairly simple to implement and doesn't require invasive changes like updating the readfn/writefn hook function prototype signature. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 20190816125802.25877-2-peter.maydell@linaro.org
2019-09-03  target/arm: Factor out unallocated_encoding for aarch32  (Richard Henderson; 2 files, -12/+13)
Make this a static function private to translate.c. Thus we can use the same idiom between aarch64 and aarch32 without actually sharing function implementations. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190826151536.6771-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  Revert "target/arm: Use unallocated_encoding for aarch32"  (Richard Henderson; 5 files, -15/+21)
This reverts commit 3cb36637157088892e9e33ddb1034bffd1251d3b. Despite the fact that the text for the call to gen_exception_insn is identical for aarch64 and aarch32, the implementation inside gen_exception_insn is totally different. This fixes exceptions raised from aarch64. Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190826151536.6771-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>