Age | Commit message | Author | Files | Lines
2022-03-06 | pci-bridge/xio3130_downstream: Fix error handling | Jonathan Cameron | 1 | -1/+1
Wrong goto label, so msi cleanup would not occur if there is an error in the ssvid initialization. Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Message-Id: <20220218102303.7061-2-Jonathan.Cameron@huawei.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-06 | pci-bridge/xio3130_upstream: Fix error handling | Jonathan Cameron | 1 | -1/+1
Goto label is incorrect so msi cleanup would not occur if there is an error in the ssvid initialization. Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Message-Id: <20220218102303.7061-1-Jonathan.Cameron@huawei.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
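Both of these fixes correct the same goto-unwind ordering bug in two bridge models. The following self-contained sketch (hypothetical function names, not the QEMU code itself) illustrates the pattern being restored: cleanup labels must mirror init order, so a failure in the step after MSI setup has to jump to the label that still undoes MSI.

    #include <stdio.h>

    /* Illustrative stubs; in the real devices these are msi_init(),
     * pci_bridge_ssvid_init() and msi_uninit(). */
    static int msi_setup(void)     { return 0; }
    static int ssvid_setup(void)   { return -1; }  /* pretend this step fails */
    static void msi_teardown(void) { puts("msi cleaned up"); }

    static int realize_example(void)
    {
        int rc = msi_setup();
        if (rc < 0) {
            goto err_bridge;
        }
        rc = ssvid_setup();
        if (rc < 0) {
            goto err_msi;      /* the fix: previously jumped past msi_teardown() */
        }
        return 0;

    err_msi:
        msi_teardown();
    err_bridge:
        return rc;
    }

    int main(void)
    {
        return realize_example() < 0 ? 1 : 0;
    }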
2022-03-06 | pcie: Add 1.2 version token for the Power Management Capability | Łukasz Gieryk | 1 | -0/+1
Signed-off-by: Łukasz Gieryk <lukasz.gieryk@linux.intel.com> Message-Id: <20220217174504.1051716-5-lukasz.maniak@linux.intel.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-06 | pcie: Add a helper to the SR/IOV API | Łukasz Gieryk | 2 | -1/+15
Convenience function for retrieving the PCIDevice object of the N-th VF. Signed-off-by: Łukasz Gieryk <lukasz.gieryk@linux.intel.com> Reviewed-by: Knut Omang <knuto@ifi.uio.no> Message-Id: <20220217174504.1051716-4-lukasz.maniak@linux.intel.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
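As a rough idea of what such an accessor can look like — the struct layout and names below are illustrative assumptions, not QEMU's exact definitions:

    /* Hedged sketch: the PF is assumed to keep an array of the VF devices it
     * registered at SR/IOV capability setup time. */
    typedef struct PCIDeviceStub PCIDeviceStub;
    struct PCIDeviceStub {
        PCIDeviceStub **vf;       /* VFs registered for this physical function */
        unsigned int    num_vfs;  /* how many VFs are currently instantiated */
    };

    static PCIDeviceStub *sriov_get_vf_at_index(PCIDeviceStub *pf, int n)
    {
        if (n < 0 || (unsigned int)n >= pf->num_vfs) {
            return NULL;   /* out of range: no such VF */
        }
        return pf->vf[n];
    }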
2022-03-06 | pcie: Add some SR/IOV API documentation in docs/pcie_sriov.txt | Knut Omang | 1 | -0/+115
Add a small intro + minimal documentation for how to implement SR/IOV support for an emulated device. Signed-off-by: Knut Omang <knuto@ifi.uio.no> Message-Id: <20220217174504.1051716-3-lukasz.maniak@linux.intel.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-06 | pcie: Add support for Single Root I/O Virtualization (SR/IOV) | Knut Omang | 9 | -26/+470
This patch provides the building blocks for creating an SR/IOV PCIe Extended Capability header and register/unregister SR/IOV Virtual Functions. Signed-off-by: Knut Omang <knuto@ifi.uio.no> Message-Id: <20220217174504.1051716-2-lukasz.maniak@linux.intel.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-06 | virtio-net: Unlimit tx queue size if peer is vdpa | Eugenio Pérez | 1 | -5/+8
Since the introduction of a configurable tx queue size in 9b02e1618cf2 ("virtio-net: enable configurable tx queue size"), the code has limited the maximum tx queue size for backends other than vhost_user. Like vhost_user, vhost_vdpa devices should already deal with buffers that cross memory regions, so let's use the full tx size. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20220217175029.2517071-1-eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
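A hedged sketch of the peer-type check the commit describes; the identifier names follow QEMU conventions but should be treated as assumptions here:

    /* Backends known to handle descriptors crossing memory regions get the
     * full queue size; everything else keeps the smaller legacy cap. */
    static int max_tx_queue_size_sketch(int peer_driver, int requested)
    {
        switch (peer_driver) {
        case NET_CLIENT_DRIVER_VHOST_USER:
        case NET_CLIENT_DRIVER_VHOST_VDPA:   /* newly allowed by this change */
            return requested;                /* up to VIRTQUEUE_MAX_SIZE */
        default:
            return MIN(requested, VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE);
        }
    }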
2022-03-06 | hw/pci-bridge/pxb: Fix missing swizzle | Jonathan Cameron | 1 | -0/+6
pxb_map_irq_fn() handled the necessary removal of the swizzle applied to the PXB interrupts by the bus to which it was attached, but neglected to apply the normal swizzle for PCI root ports on the expander bridge. As a result, on ARM virt the PME interrupts for a second RP on a PXB instance were misrouted to #45 rather than #46. Tested with a selection of different configurations with 1 to 5 RPs per PXB instance. Note that on my x86 test setup the PME interrupts are not triggered, so I haven't been able to test this there. Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com> Message-Id: <20220118174855.19325-1-Jonathan.Cameron@huawei.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
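The conventional root-port swizzle the fix re-applies rotates the interrupt pin by the device's slot number. A small self-contained illustration (not the QEMU function itself):

    #include <stdio.h>

    /* Standard PCI bridge swizzle: effective parent pin = (slot + pin) mod 4. */
    static int pci_swizzle(int slot, int pin)
    {
        return (slot + pin) % 4;   /* PCI has 4 interrupt pins, INTA..INTD */
    }

    int main(void)
    {
        /* Two root ports in consecutive slots map the same INTA to different
         * parent pins, hence different IRQ lines (e.g. #45 vs #46). */
        printf("RP in slot 1, INTA -> parent pin %d\n", pci_swizzle(1, 0));
        printf("RP in slot 2, INTA -> parent pin %d\n", pci_swizzle(2, 0));
        return 0;
    }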
2022-03-06 | hw/i386/pc_piix: Mark the machine types from version 1.4 to 1.7 as deprecated | Thomas Huth | 2 | -0/+9
The list of machine types grows larger and larger each release, and it is unlikely that many people still use the very old ones for live migration. QEMU v1.7 was released more than 8 years ago, so most people should have updated their machines to a newer version at least once in those 8 years. Thus let's mark the very old 1.x machine types as deprecated now. Signed-off-by: Thomas Huth <thuth@redhat.com> Message-Id: <20220117191639.278497-1-thuth@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-06 | tests/qtest/virtio-iommu-test: Check bypass config | Jean-Philippe Brucker | 1 | -0/+2
The bypass config field should be initialized to 1 by default. Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Message-Id: <20220214124356.872985-5-jean-philippe@linaro.org> Acked-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Thomas Huth <thuth@redhat.com>
2022-03-06 | virtio-iommu: Support bypass domain | Jean-Philippe Brucker | 1 | -5/+34
The driver can create a bypass domain by passing the VIRTIO_IOMMU_ATTACH_F_BYPASS flag on the ATTACH request. Bypass domains perform slightly better than domains with identity mappings since they skip translation. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Message-Id: <20220214124356.872985-4-jean-philippe@linaro.org> Acked-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-06 | virtio-iommu: Default to bypass during boot | Jean-Philippe Brucker | 3 | -9/+56
Currently the virtio-iommu device must be programmed before it allows DMA from any PCI device. This can make the VM entirely unusable when a virtio-iommu driver isn't present, for example in a bootloader that loads the OS from storage. Similarly to the other vIOMMU implementations, default to DMA bypassing the IOMMU during boot. Add a "boot-bypass" property, defaulting to true, that lets users change this behavior. Replace the VIRTIO_IOMMU_F_BYPASS feature, which didn't support bypass before feature negotiation, with VIRTIO_IOMMU_F_BYPASS_CONFIG. We add the bypass field to the migration stream without introducing subsections, based on the assumption that this virtio-iommu device isn't being used in production enough to require cross-version migration at the moment (all previous versions required workarounds since they didn't support ACPI and boot-bypass). Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Message-Id: <20220214124356.872985-3-jean-philippe@linaro.org> Acked-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
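For illustration, a device property with the name and default described above would typically be declared like this in QEMU; the structure field name below is an assumption, only the property name and its default come from the commit message:

    /* Hedged sketch of the property declaration, assuming a boot_bypass
     * field in the VirtIOIOMMU state structure. */
    static Property virtio_iommu_properties_sketch[] = {
        DEFINE_PROP_BOOL("boot-bypass", VirtIOIOMMU, boot_bypass, true),
        DEFINE_PROP_END_OF_LIST(),
    };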
2022-03-06 | hw/i386: Replace magic number with field length calculation | Dov Murik | 1 | -3/+6
Replace the literal magic number 48 with a length calculation (32 bytes at the end of the firmware after the table footer + 16 bytes of the OVMF table footer GUID). No functional change intended. Signed-off-by: Dov Murik <dovmurik@linux.ibm.com> Message-Id: <20220222071906.2632426-3-dovmurik@linux.ibm.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
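The arithmetic behind the replaced constant, reconstructed from the commit message as a tiny self-contained example:

    #include <stdio.h>

    /* 32 trailing bytes after the table footer plus the 16-byte footer GUID. */
    #define OVMF_FLASH_TRAILING_BYTES 32
    #define GUID_LEN                  16

    int main(void)
    {
        printf("offset from end of flash = %d\n",
               OVMF_FLASH_TRAILING_BYTES + GUID_LEN);   /* prints 48 */
        return 0;
    }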
2022-03-06 | hw/i386: Improve bounds checking in OVMF table parsing | Dov Murik | 1 | -1/+8
When pc_system_parse_ovmf_flash() parses the optional GUIDed table at the end of the OVMF flash memory area, the table length field is checked for sizes that are too small, but not for sizes that are too big (bigger than the flash content itself). Add a check for the maximal size of the OVMF table, and an error report in case the size is invalid. In such a case, an error like this will be displayed during launch: qemu-system-x86_64: OVMF table has invalid size 4047 and table parsing is skipped. Signed-off-by: Dov Murik <dovmurik@linux.ibm.com> Message-Id: <20220222071906.2632426-2-dovmurik@linux.ibm.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
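A hedged sketch of the added upper-bound check; the variable names are illustrative, only the error text comes from the commit message:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* The size declared in the table footer must fit in the flash area that
     * precedes the footer block; otherwise parsing is skipped and the caller
     * reports "OVMF table has invalid size <N>". */
    static bool ovmf_table_size_valid(size_t tot_len, size_t flash_size,
                                      size_t footer_block_len)
    {
        return tot_len >= sizeof(uint16_t) &&
               tot_len <= flash_size - footer_block_len;
    }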
2022-03-06 | intel_iommu: support snoop control | Jason Wang | 3 | -1/+15
Snoop control (SC) is required for some kernel features like vhost-vDPA, so this patch implements the basic SC feature. The idea is pretty simple: software-emulated DMA is always coherent, so in that case we can simply advertise the ECAP_SC bit. For VFIO and vhost, things would be much more complicated, so this patch simply fails the IOMMU notifier registration. In the future, we may want to have a dedicated notifier flag or similar mechanism to indicate coherency, so that VFIO could advertise it if it has VFIO_DMA_CC_IOMMU; for the vhost kernel backend we don't need that, since it's a software backend. Signed-off-by: Jason Wang <jasowang@redhat.com> Message-Id: <20220214060346.72455-1-jasowang@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-06 | vhost-vdpa: make notifiers _init()/_uninit() symmetric | Laurent Vivier | 1 | -10/+10
vhost_vdpa_host_notifiers_init() initializes queue notifiers for queues "dev->vq_index" to queue "dev->vq_index + dev->nvqs", whereas vhost_vdpa_host_notifiers_uninit() uninitializes the same notifiers for queue "0" to queue "dev->nvqs". This asymmetry seems buggy, fix that by using dev->vq_index as the base for both. Fixes: d0416d487bd5 ("vhost-vdpa: map virtqueue notification area if possible") Cc: jasowang@redhat.com Signed-off-by: Laurent Vivier <lvivier@redhat.com> Message-Id: <20220211161309.1385839-1-lvivier@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
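A schematic, self-contained illustration of the symmetry being restored; QEMU's real init/uninit bodies are elided behind placeholders:

    /* Both directions now walk the same [vq_index, vq_index + nvqs) window
     * instead of [0, nvqs). */
    struct dev_stub { int vq_index; int nvqs; };

    static void notifier_init(struct dev_stub *d, int qidx)   { (void)d; (void)qidx; }
    static void notifier_uninit(struct dev_stub *d, int qidx) { (void)d; (void)qidx; }

    static void host_notifiers_init(struct dev_stub *dev)
    {
        for (int i = dev->vq_index; i < dev->vq_index + dev->nvqs; i++) {
            notifier_init(dev, i);
        }
    }

    static void host_notifiers_uninit(struct dev_stub *dev)
    {
        /* previously started at 0, tearing down the wrong queue range */
        for (int i = dev->vq_index; i < dev->vq_index + dev->nvqs; i++) {
            notifier_uninit(dev, i);
        }
    }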
2022-03-06 | hw/virtio: vdpa: Fix leak of host-notifier memory-region | Laurent Vivier | 1 | -0/+1
If the call to virtio_queue_set_host_notifier_mr() fails, the host-notifier memory region should be freed. This problem can trigger a coredump with some vDPA drivers (mlx5, but not the vdpasim) if we unplug the virtio-net card from the guest after a stop/start. The same fix was done for vhost-user in 1f89d3b91e3e ("hw/virtio: Fix leak of host-notifier memory-region"). Fixes: d0416d487bd5 ("vhost-vdpa: map virtqueue notification area if possible") Cc: jasowang@redhat.com Resolves: https://bugzilla.redhat.com/2027208 Signed-off-by: Laurent Vivier <lvivier@redhat.com> Message-Id: <20220211170259.1388734-1-lvivier@redhat.com> Cc: qemu-stable@nongnu.org Acked-by: Jason Wang <jasowang@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-04 | hw/vhost-user-i2c: Add support for VIRTIO_I2C_F_ZERO_LENGTH_REQUEST | Viresh Kumar | 2 | -2/+12
VIRTIO_I2C_F_ZERO_LENGTH_REQUEST is a mandatory feature that must be implemented by everyone. Add support for it. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Message-Id: <fc47ab63b1cd414319c9201e8d6c7705b5ec3bd9.1644490993.git.viresh.kumar@linaro.org> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-04 | virtio: fix the condition for iommu_platform not supported | Halil Pasic | 1 | -5/+7
Commit 04ceb61a40 ("virtio: Fail if iommu_platform is requested, but unsupported") claims to fail device hotplug when iommu_platform is requested but not supported by the (vhost) device. At first glance the condition for detecting that situation looks perfect, but because of a certain peculiarity of iommu_platform it isn't. In fact the aforementioned commit introduces a regression: it breaks virtio-fs support for Secure Execution, and most likely also for AMD SEV or any other confidential guest scenario that relies on encrypted guest memory. The same also applies to any other vhost device that does not support _F_ACCESS_PLATFORM. The peculiarity is that iommu_platform and _F_ACCESS_PLATFORM conflate "device can not access all of the guest RAM" and "iova != gpa, thus the device needs to translate iova". Confidential guest technologies currently rely on the device/hypervisor offering _F_ACCESS_PLATFORM, so that, after the feature has been negotiated, the guest grants access to the portions of memory the device needs to see. So for confidential guests, generally, _F_ACCESS_PLATFORM is about restricted access to memory, not about the addresses used being something other than guest physical addresses. This is the very reason why commit f7ef7e6e3b ("vhost: correctly turn on VIRTIO_F_IOMMU_PLATFORM") fences _F_ACCESS_PLATFORM from the vhost device that does not need it, because on the vhost interface it only means "I/O address translation is needed". This patch takes inspiration from f7ef7e6e3b ("vhost: correctly turn on VIRTIO_F_IOMMU_PLATFORM") and uses the same condition for detecting the situation when _F_ACCESS_PLATFORM is requested but no I/O translation is done by the device, and thus no device capability is needed. In this situation, claiming that the device does not support iommu_platform=on is counter-productive. So let us stop doing that! Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reported-by: Jakob Naucke <Jakob.Naucke@ibm.com> Fixes: 04ceb61a40 ("virtio: Fail if iommu_platform is requested, but unsupported") Acked-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com> Tested-by: Daniel Henrique Barboza <danielhb413@gmail.com> Cc: Kevin Wolf <kwolf@redhat.com> Cc: qemu-stable@nongnu.org Message-Id: <20220207112857.607829-1-pasic@linux.ibm.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com>
2022-03-04 | vhost-user: fix VirtQ notifier cleanup | Xueming Li | 2 | -19/+31
During vhost-user device cleanup, the notifier MR was removed and the notifier address munmapped in the event-handling thread; a VM CPU thread concurrently writing to the notifier then failed with an invalid-address access error. This happens because the MR is still being referenced and accessed in another thread while the underlying notifier mmap address is freed and becomes invalid. This patch uses RCU and munmaps the notifiers in a callback after the memory flatview update finishes. Fixes: 44866521bd6e ("vhost-user: support registering external host notifiers") Cc: qemu-stable@nongnu.org Signed-off-by: Xueming Li <xuemingl@nvidia.com> Message-Id: <20220207071929.527149-3-xuemingl@nvidia.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-04 | vhost-user: remove VirtQ notifier restore | Xueming Li | 2 | -19/+1
The notifier is set when the vhost-user backend asks QEMU to mmap an FD at an offset. When the vhost-user backend restarts or gets killed, the VQ notifier FD and mmap addresses become invalid. After a backend restart, the MR containing the invalid address would be restored and notifier accesses would fail. Moreover, QEMU should munmap the notifier and release the underlying hardware resources so that the backend can restart and allocate hardware notifier resources correctly; QEMU shouldn't reference and use the resources of a disconnected backend. This patch removes the VQ notifier restore and uses the default vhost-user notifier to avoid invalid address accesses. After a backend restart, the backend should ask QEMU to install a hardware notifier if needed. Fixes: 44866521bd6e ("vhost-user: support registering external host notifiers") Cc: qemu-stable@nongnu.org Signed-off-by: Xueming Li <xuemingl@nvidia.com> Message-Id: <20220207071929.527149-2-xuemingl@nvidia.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-04 | hw/smbios: add assertion to ensure handles of tables 19 and 32 do not collide | Ani Sinha | 1 | -0/+6
Since change dcf359832eec02 ("hw/smbios: fix table memory corruption with large memory vms") we reserve additional space between handle numbers of tables 17 and 19 for large VMs. This may cause table 19 to collide with table 32 in their handle numbers for those large VMs. This change adds an assertion to ensure numbers do not collide. If they do, qemu crashes with useful debug information for taking additional steps. Signed-off-by: Ani Sinha <ani@anisinha.ca> Reviewed-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220223143322.927136-8-ani@anisinha.ca> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-04 | hw/smbios: fix overlapping table handle numbers with large memory vms | Ani Sinha | 1 | -4/+15
The current smbios table implementation splits the main memory into 16 GiB (DIMM-like) chunks. With the current smbios table assignment code, we can have only 512 such chunks before the 16-bit handle numbers in the header for tables 17 and 19 conflict. A guest with more than 8 TiB of memory will hit this limitation and fail with the following assertion in isa-debugcon: ASSERT_EFI_ERROR (Status = Already started) ASSERT /builddir/build/BUILD/edk2-ca407c7246bf/OvmfPkg/SmbiosPlatformDxe/SmbiosPlatformDxe.c(125): !EFI_ERROR (Status) This change adds an additional offset between tables 17 and 19 handle numbers when configuring VMs larger than 8 TiB of memory. The value of the offset is calculated to be equal to the additional space required to be reserved in order to accommodate more DIMM entries without the table handles colliding. In normal cases where the VM memory is smaller than or equal to 8 TiB, this offset value is 0. Hence in this case, no additional handle numbers are reserved and table handle values remain as before. Since smbios memory is not transmitted over the wire during migration, this change can break migration for large-memory VMs if the guest is in the middle of generating the tables during migration. However, in those situations, qemu generates invalid table handles anyway with or without this fix. Hence, we do not preserve the old bug by introducing compat knobs/machine types. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2023977 Signed-off-by: Ani Sinha <ani@anisinha.ca> Reviewed-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220223143322.927136-7-ani@anisinha.ca> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
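A rough, self-contained illustration of the arithmetic, using the 16 GiB chunk size and the 512-handle gap stated above; the exact offset formula in QEMU may differ:

    #include <stdio.h>

    #define GIB             (1ULL << 30)
    #define DIMM_CHUNK      (16 * GIB)   /* memory is split into 16 GiB devices */
    #define T17_TO_T19_GAP  512          /* handles available before colliding */

    int main(void)
    {
        unsigned long long ram = 9ULL << 40;                  /* 9 TiB guest */
        unsigned long long dimms = (ram + DIMM_CHUNK - 1) / DIMM_CHUNK;
        long long extra = (long long)dimms - T17_TO_T19_GAP;  /* offset to reserve */

        printf("type-17 entries: %llu, extra handle offset needed: %lld\n",
               dimms, extra > 0 ? extra : 0);   /* 576 entries, offset 64 */
        return 0;
    }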
2022-03-04 | hw/smbios: code cleanup - use macro definitions for table header handles | Ani Sinha | 1 | -12/+26
This is a minor cleanup. Using macro definitions makes the code more readable. It is at once clear which tables use which handle numbers in their header. It also makes it easy to calculate the gaps between the numbers and update them if needed. Reviewed-by: Igor Mammedov <imammedo@redhat.com> Signed-off-by: Ani Sinha <ani@anisinha.ca> Message-Id: <20220223143322.927136-6-ani@anisinha.ca> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
2022-03-04 | hw/acpi/erst: clean up unused IS_UEFI_CPER_RECORD macro | Ani Sinha | 1 | -5/+0
This change is cosmetic. The IS_UEFI_CPER_RECORD macro definition, added as part of the ERST implementation, seems to be unused. Remove it. CC: Eric DeVolder <eric.devolder@oracle.com> Reviewed-by: Eric DeVolder <eric.devolder@oracle.com> Signed-off-by: Ani Sinha <ani@anisinha.ca> Message-Id: <20220223143322.927136-5-ani@anisinha.ca> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-04 | docs/acpi/erst: add device id for ACPI ERST device in pci-ids.txt | Ani Sinha | 1 | -0/+1
Add the device ID for the ERST device to pci-ids.txt; it was missed when the ERST-related patches were reviewed. CC: Eric DeVolder <eric.devolder@oracle.com> Reviewed-by: Eric DeVolder <eric.devolder@oracle.com> Signed-off-by: Ani Sinha <ani@anisinha.ca> Message-Id: <20220223143322.927136-4-ani@anisinha.ca> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-04 | MAINTAINERS: no need to add my name explicitly as a reviewer for VIOT tables | Ani Sinha | 1 | -1/+0
I am already listed as a reviewer for ACPI/SMBIOS subsystem. There is no need to again add me as a reviewer for ACPI/VIOT. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Ani Sinha <ani@anisinha.ca> Message-Id: <20220223143322.927136-3-ani@anisinha.ca> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-04 | ACPI ERST: specification for ERST support | Eric DeVolder | 2 | -0/+201
Information on the implementation of the ACPI ERST support. Signed-off-by: Eric DeVolder <eric.devolder@oracle.com> Acked-by: Ani Sinha <ani@anisinha.ca> Message-Id: <20220223143322.927136-2-ani@anisinha.ca> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-03-04 | qom: assert integer does not overflow | Michael S. Tsirkin | 1 | -1/+5
QOM reference counting is not designed with an infinite number of references in mind; taking a reference in a loop without dropping one will overflow the integer. This is generally a symptom of a reference leak (a missing deref, commonly as part of error handling - such as the one fixed here: https://lore.kernel.org/r/20220228095058.27899-1-sgarzare%40redhat.com ). All this can lead to either freeing the object too early (memory corruption) or never freeing it (memory leak). If we happen to dereference at just the right time (when it's wrapping around to 0), we might eventually assert when dereferencing, but the real problem is an extra object_ref, so let's assert there to make such issues cleaner and easier to debug. Some micro-benchmarking shows that with fetch-and-add this is essentially free on x86. Since multiple threads could be incrementing in parallel, we assert around INT_MAX to make sure none of them approach the wrap-around point: this way we get a memory leak and not a memory corruption, and the former is generally easier to debug. Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
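A self-contained sketch of the idea in plain C11 atomics (not QEMU's object_ref() itself): take the reference with fetch-and-add and assert well before the counter can wrap.

    #include <assert.h>
    #include <limits.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Toy stand-in for a reference-counted object. */
    typedef struct { atomic_uint ref; } RefObj;

    static void ref_sketch(RefObj *obj)
    {
        unsigned old = atomic_fetch_add(&obj->ref, 1);
        /* leave headroom for concurrent incrementers, as the commit describes:
         * asserting around INT_MAX turns corruption into an obvious leak */
        assert(old < INT_MAX);
    }

    int main(void)
    {
        RefObj o = { .ref = 1 };
        ref_sketch(&o);
        printf("refcount now %u\n", (unsigned)o.ref);
        return 0;
    }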
2022-03-03 | Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20220302' into staging | Peter Maydell | 20 | -193/+737
target-arm queue:
* mps3-an547: Add missing user ahb interfaces
* hw/arm/mps2-tz.c: Update AN547 documentation URL
* hw/input/tsc210x: Don't abort on bad SPI word widths
* hw/i2c: flatten pca954x mux device
* target/arm: Support PSCI 1.1 and SMCCC 1.0
* target/arm: Fix early free of TCG temp in handle_simd_shift_fpint_conv()
* tests/qtest: add qtests for npcm7xx sdhci
* Implement FEAT_LVA
* Implement FEAT_LPA
* Implement FEAT_LPA2 (but do not enable it yet)
* Report KVM's actual PSCI version to guest in dtb
* ui/cocoa.m: Fix updateUIInfo threading issues
* ui/cocoa.m: Remove unnecessary NSAutoreleasePools
# gpg: Signature made Wed 02 Mar 2022 20:52:06 GMT
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20220302: (26 commits)
  ui/cocoa.m: Remove unnecessary NSAutoreleasePools
  ui/cocoa.m: Fix updateUIInfo threading issues
  target/arm: Report KVM's actual PSCI version to guest in dtb
  target/arm: Implement FEAT_LPA2
  target/arm: Advertise all page sizes for -cpu max
  target/arm: Validate tlbi TG matches translation granule in use
  target/arm: Fix TLBIRange.base for 16k and 64k pages
  target/arm: Introduce tlbi_aa64_get_range
  target/arm: Extend arm_fi_to_lfsc to level -1
  target/arm: Implement FEAT_LPA
  target/arm: Implement FEAT_LVA
  target/arm: Prepare DBGBVR and DBGWVR for FEAT_LVA
  target/arm: Honor TCR_ELx.{I}PS
  target/arm: Use MAKE_64BIT_MASK to compute indexmask
  target/arm: Pass outputsize down to check_s2_mmu_setup
  target/arm: Move arm_pamax out of line
  target/arm: Fault on invalid TCR_ELx.TxSZ
  target/arm: Set TCR_EL1.TSZ for user-only
  hw/registerfields: Add FIELD_SEX<N> and FIELD_SDP<N>
  tests/qtest: add qtests for npcm7xx sdhci
  ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | Merge remote-tracking branch 'remotes/dgilbert-gitlab/tags/pull-migration-20220302b' into staging | Peter Maydell | 22 | -201/+435
Migration/HMP/Virtio pull 2022-03-02
A bit of a mix this time:
* Minor fixes from myself, Hanna, and Jack
* VNC password rework by Stefan and Fabian
* Postcopy changes from Peter X that are the start of a larger series to come
* Removing the prehistoric load_state_old code from Peter M
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
# gpg: Signature made Wed 02 Mar 2022 18:25:12 GMT
# gpg: using RSA key 45F5C71B4A0CB7FB977A9FA90516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>" [full]
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* remotes/dgilbert-gitlab/tags/pull-migration-20220302b:
  migration: Remove load_state_old and minimum_version_id_old
  tests: Pass in MigrateStart** into test_migrate_start()
  migration: Add migration_incoming_transport_cleanup()
  migration: postcopy_pause_fault_thread() never fails
  migration: Enlarge postcopy recovery to capture !-EIO too
  migration: Move static var in ram_block_from_stream() into global
  migration: Add postcopy_thread_create()
  migration: Dump ramblock and offset too when non-same-page detected
  migration: Introduce postcopy channels on dest node
  migration: Tracepoint change in postcopy-run bottom half
  migration: Finer grained tracepoints for POSTCOPY_LISTEN
  migration: Dump sub-cmd name in loadvm_process_command tp
  migration/rdma: set the REUSEADDR option for destination
  qapi/monitor: allow VNC display id in set/expire_password
  qapi/monitor: refactor set/expire_password with enums
  monitor/hmp: add support for flag argument with value
  virtiofsd: Let meson check for statx.stx_mnt_id
  clock-vmstate: Add missing END_OF_LIST
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | ui/cocoa.m: Remove unnecessary NSAutoreleasePools | Peter Maydell | 1 | -6/+0
In commit 6e657e64cdc478 in 2013 we added some autorelease pools to deal with complaints from macOS when we made calls into Cocoa from threads that didn't have automatically created autorelease pools. Later on, macOS got stricter about forbidding cross-thread Cocoa calls, and in commit 5588840ff77800e839d8 we restructured the code to avoid them. This left the autorelease pool creation in several functions without any purpose; delete it. We still need the pool in cocoa_refresh() for the clipboard related code which is called directly there. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Akihiko Odaki <akihiko.odaki@gmail.com> Tested-by: Akihiko Odaki <akihiko.odaki@gmail.com> Message-id: 20220224101330.967429-3-peter.maydell@linaro.org
2022-03-02 | ui/cocoa.m: Fix updateUIInfo threading issues | Peter Maydell | 1 | -3/+22
The updateUIInfo method makes Cocoa API calls. It also calls back into QEMU functions like dpy_set_ui_info(). To do this safely, we need to follow two rules: * Cocoa API calls are made on the Cocoa UI thread * When calling back into QEMU we must hold the iothread lock Fix the places where we got this wrong, by taking the iothread lock while executing updateUIInfo, and moving the call in cocoa_switch() inside the dispatch_async block. Some of the Cocoa UI methods which call updateUIInfo are invoked as part of the initial application startup, while we're still doing the little cross-thread dance described in the comment just above call_qemu_main(). This meant they were calling back into the QEMU UI layer before we'd actually finished initializing our display and registered the DisplayChangeListener, which isn't really valid. Once updateUIInfo takes the iothread lock, we no longer get away with this, because during this startup phase the iothread lock is held by the QEMU main-loop thread which is waiting for us to finish our display initialization. So we must suppress updateUIInfo until applicationDidFinishLaunching allows the QEMU main-loop thread to continue. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Akihiko Odaki <akihiko.odaki@gmail.com> Tested-by: Akihiko Odaki <akihiko.odaki@gmail.com> Message-id: 20220224101330.967429-2-peter.maydell@linaro.org
2022-03-02 | target/arm: Report KVM's actual PSCI version to guest in dtb | Peter Maydell | 3 | -3/+15
When we're using KVM, the PSCI implementation is provided by the kernel, but QEMU has to tell the guest about it via the device tree. Currently we look at the KVM_CAP_ARM_PSCI_0_2 capability to determine if the kernel is providing at least PSCI 0.2, but if the kernel provides a newer version than that we will still only tell the guest it has PSCI 0.2. (This is fairly harmless; it just means the guest won't use newer parts of the PSCI API.) The kernel exposes the specific PSCI version it is implementing via the ONE_REG API; use this to report in the dtb that the PSCI implementation is 1.0-compatible if appropriate. (The device tree binding currently only distinguishes "pre-0.2", "0.2-compatible" and "1.0-compatible".) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Akihiko Odaki <akihiko.odaki@gmail.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Andrew Jones <drjones@redhat.com> Message-id: 20220224134655.1207865-1-peter.maydell@linaro.org
2022-03-02 | target/arm: Implement FEAT_LPA2 | Richard Henderson | 4 | -15/+112
This feature widens physical addresses (and intermediate physical addresses for 2-stage translation) from 48 to 52 bits, when using 4k or 16k pages. This introduces the DS bit to TCR_ELx, which is RES0 unless the page size is enabled and supports LPA2, resulting in the effective value of DS for a given table walk. The DS bit changes the format of the page table descriptor slightly, moving the PS field out to TCR so that all pages have the same sharability and repurposing those bits of the page table descriptor for the highest bits of the output address. Do not yet enable FEAT_LPA2; we need extra plumbing to avoid tickling an old kernel bug. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-17-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | target/arm: Advertise all page sizes for -cpu max | Richard Henderson | 1 | -0/+4
We support 16k pages, but do not advertise that in ID_AA64MMFR0. The value 0 in the TGRAN*_2 fields indicates that stage2 lookups defer to the same support as stage1 lookups. This setting is deprecated, so indicate support for all stage2 page sizes directly. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20220301215958.157011-16-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | target/arm: Validate tlbi TG matches translation granule in use | Richard Henderson | 1 | -3/+7
For FEAT_LPA2, we will need other ARMVAParameters, which themselves depend on the translation granule in use. We might as well validate that the given TG matches; the architecture "does not require that the instruction invalidates any entries" if this is not true. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-15-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | target/arm: Fix TLBIRange.base for 16k and 64k pages | Richard Henderson | 1 | -2/+3
The shift of the BaseADDR field depends on the translation granule in use. Fixes: 84940ed8255 ("target/arm: Add support for FEAT_TLBIRANGE") Reported-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-14-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | target/arm: Introduce tlbi_aa64_get_range | Richard Henderson | 1 | -34/+24
Merge tlbi_aa64_range_get_length and tlbi_aa64_range_get_base, returning a structure containing both results. Pass in the ARMMMUIdx, rather than the digested two_ranges boolean. This is in preparation for FEAT_LPA2, where the interpretation of 'value' depends on the effective value of DS for the regime. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-13-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | target/arm: Extend arm_fi_to_lfsc to level -1 | Richard Henderson | 1 | -6/+29
With FEAT_LPA2, rather than introducing translation level 4, we introduce level -1, below the current level 0. Extend arm_fi_to_lfsc to handle these faults. Assert that this new translation level does not leak into fault types for which it is not defined, which allows some masking of fi->level to be removed. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | target/arm: Implement FEAT_LPA | Richard Henderson | 4 | -5/+19
This feature widens physical addresses (and intermediate physical addresses for 2-stage translation) from 48 to 52 bits, when using 64k pages. The only thing left at this point is to handle the extra bits in the TTBR and in the table descriptors. Note that PAR_EL1 and HPFAR_EL2 are nominally extended, but we don't mask out the high bits when writing to those registers, so no changes are required there. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | target/arm: Implement FEAT_LVA | Richard Henderson | 5 | -2/+16
This feature is relatively small, as it applies only to 64k pages and thus requires no additional changes to the table descriptor walking algorithm, only a change to the minimum TSZ (which is the inverse of the maximum virtual address space size). Note that this feature widens VBAR_ELx, but we already treat the register as being 64 bits wide. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | target/arm: Prepare DBGBVR and DBGWVR for FEAT_LVA | Richard Henderson | 1 | -8/+24
The original A.a revision of the AArch64 ARM required that we force-extend the addresses in these registers from 49 bits. This language has been loosened via a combination of IMPLEMENTATION DEFINED and CONSTRAINED UNPREDICTABLE to allow consideration of the entire aligned address. This means that we do not have to consider whether or not FEAT_LVA is enabled, and decide from which bit an address might need to be extended. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | target/arm: Honor TCR_ELx.{I}PS | Richard Henderson | 2 | -16/+57
This field controls the output (intermediate) physical address size of the translation process. V8 requires raising an AddressSize fault if the page tables are programmed incorrectly, such that any intermediate descriptor address, or the final translated address, is out of range. Add a PS field to ARMVAParameters, and properly compute outputsize in get_phys_addr_lpae. Test the descaddr as extracted from TTBR and from page table entries. Restrict descaddrmask so that we won't raise the fault for v7. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | target/arm: Use MAKE_64BIT_MASK to compute indexmask | Richard Henderson | 1 | -2/+2
The macro is a bit more readable than the inlined computation. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
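For reference, a self-contained illustration of the readability argument; the macro body below matches the meaning of QEMU's MAKE_64BIT_MASK in include/qemu/bitops.h.

    #include <stdint.h>
    #include <stdio.h>

    /* length contiguous set bits, starting at bit position shift */
    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    int main(void)
    {
        int bits = 39;   /* e.g. input-address bits covered by the table walk */
        uint64_t indexmask = MAKE_64BIT_MASK(0, bits);
        printf("indexmask = 0x%016llx\n", (unsigned long long)indexmask);
        return 0;
    }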
2022-03-02 | target/arm: Pass outputsize down to check_s2_mmu_setup | Richard Henderson | 1 | -11/+10
Pass down the width of the output address from translation. For now this is still just PAMax, but a subsequent patch will compute the correct value from TCR_ELx.{I}PS. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | target/arm: Move arm_pamax out of line | Richard Henderson | 2 | -18/+23
We will shortly share parts of this function with other portions of address translation. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | target/arm: Fault on invalid TCR_ELx.TxSZ | Richard Henderson | 2 | -4/+29
Without FEAT_LVA, the behaviour of programming an invalid value is IMPLEMENTATION DEFINED. With FEAT_LVA, programming an invalid minimum value requires a Translation fault. It is most self-consistent to choose to generate the fault always. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | target/arm: Set TCR_EL1.TSZ for user-only | Richard Henderson | 1 | -1/+2
Set this as the kernel would, to 48 bits, to keep the computation of the address space correct for PAuth. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02 | hw/registerfields: Add FIELD_SEX<N> and FIELD_SDP<N> | Richard Henderson | 1 | -1/+47
Add new macros to manipulate signed fields within the register. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-2-richard.henderson@linaro.org Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
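A self-contained demonstration of the signed-field extraction these macros wrap (QEMU's FIELD_SEX<N> are built on sextract32/sextract64); the helper below mirrors that semantics:

    #include <stdint.h>
    #include <stdio.h>

    /* Extract a length-bit field starting at bit start and sign-extend it:
     * shift the field up to the top of the word, then arithmetic-shift back. */
    static int32_t sextract32_demo(uint32_t value, int start, int length)
    {
        return (int32_t)(value << (32 - length - start)) >> (32 - length);
    }

    int main(void)
    {
        uint32_t reg = 0x0000F800;   /* 5-bit field at bit 11 holds 0b11111 */
        printf("unsigned: %u, signed: %d\n",
               (reg >> 11) & 0x1F,                 /* 31 */
               sextract32_demo(reg, 11, 5));       /* -1 */
        return 0;
    }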