author    Peter Xu <peterx@redhat.com>            2017-02-07 16:28:05 +0800
committer Michael S. Tsirkin <mst@redhat.com>     2017-02-17 21:52:31 +0200
commit    dfbd90e5b9b772b1c5a52bad9f6dbabb385a7dc2 (patch)
tree      bdc09ff657a4167a4237fab3b4632fcd1b4b16ca /hw/vfio
parent    4a4b88fbe1a95e80a2e29830e69e1deded407fc1 (diff)
vfio: allow to notify unmap for very large region
The Linux vfio driver supports VFIO_IOMMU_UNMAP_DMA on a very large region. A QEMU IOMMU implementation can leverage this to clean up the existing page mappings for an entire iova address space, by sending a notification whose IOTLB carries an extremely large addr_mask. However, the current vfio_iommu_map_notify() does not allow that: it insists that the translated address in the IOTLB falls into a RAM range. That check makes sense for map operations, but it means little for unmaps. This patch moves the check into the map path only, so unmaps are handled faster (no need to translate again), and we can better support unmapping a very large region even when it covers non-RAM or non-existent ranges.

Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
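For illustration, here is a minimal sketch, not part of this patch, of the kind of notification the commit message describes: a vIOMMU flushing an entire iova address space with a single IOTLB whose addr_mask is huge and whose perm is IOMMU_NONE. It assumes the memory_region_notify_iommu() call and IOMMUTLBEntry layout from QEMU's memory API of this period ("exec/memory.h"); the helper name and its MemoryRegion argument are hypothetical.

#include "qemu/osdep.h"
#include "exec/memory.h"

/* Hypothetical helper: ask every registered IOMMU notifier (e.g. VFIO's)
 * to drop all mappings for this IOMMU region in one shot. */
static void example_unmap_whole_space(MemoryRegion *iommu_mr)
{
    IOMMUTLBEntry entry = {
        .target_as = &address_space_memory,
        .iova = 0,
        .translated_addr = 0,      /* ignored by the unmap path */
        .addr_mask = (hwaddr)-1,   /* "extremely large addr_mask": whole space */
        .perm = IOMMU_NONE,        /* IOMMU_NONE marks this as an unmap */
    };

    memory_region_notify_iommu(iommu_mr, entry);
}

Before this change, vfio_iommu_map_notify() could drop such a notification whenever the translated address did not resolve to RAM; afterwards the unmap path skips that translation entirely.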
Diffstat (limited to 'hw/vfio')
-rw-r--r--  hw/vfio/common.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 42c4790..f3ba9b9 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -352,11 +352,10 @@ static void vfio_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
 
     rcu_read_lock();
 
-    if (!vfio_get_vaddr(iotlb, &vaddr, &read_only)) {
-        goto out;
-    }
-
     if ((iotlb->perm & IOMMU_RW) != IOMMU_NONE) {
+        if (!vfio_get_vaddr(iotlb, &vaddr, &read_only)) {
+            goto out;
+        }
         /*
          * vaddr is only valid until rcu_read_unlock(). But after
          * vfio_dma_map has set up the mapping the pages will be
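As a condensed sketch (not verbatim) of how vfio_iommu_map_notify() ends up structured after this change, combining the hunk above with the vfio_dma_map()/vfio_dma_unmap() helpers already present in hw/vfio/common.c:

static void vfio_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
{
    ...
    rcu_read_lock();

    if ((iotlb->perm & IOMMU_RW) != IOMMU_NONE) {
        /* map path: the IOTLB must still translate to guest RAM */
        if (!vfio_get_vaddr(iotlb, &vaddr, &read_only)) {
            goto out;
        }
        vfio_dma_map(container, iova, iotlb->addr_mask + 1, vaddr, read_only);
    } else {
        /* unmap path: no translation needed, so a region of any size,
         * including non-RAM or non-existent ranges, can be unmapped */
        vfio_dma_unmap(container, iova, iotlb->addr_mask + 1);
    }
out:
    rcu_read_unlock();
}

Error handling and the vaddr-validity comment shown in the diff are omitted here for brevity.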