author     Alex Williamson <alex.williamson@redhat.com>  2014-08-05 13:05:52 -0600
committer  Alex Williamson <alex.williamson@redhat.com>  2014-08-05 13:05:52 -0600
commit     c048be5cc92ae201c339d46984476c4629275ed6
tree       36038db6fb1159301daccc8dcd1ecec080849054  /hw/misc
parent     69f87f713069f1f70f86cb65883f7d43e3aa21de
vfio: Fix MSI-X vector expansion
When new MSI-X vectors are enabled we need to disable MSI-X and
re-enable it with the correct number of vectors. That means we need
to reprogram the eventfd triggers for each vector. Prior to f4d45d47
vector->use tracked whether a vector was masked or unmasked and we
could always pick the KVM path when available for unmasked vectors.
Now vfio doesn't track mask state itself, and vector->use and virq
remain configured even for masked vectors. Therefore we need to ask
the MSI-X code whether a vector is masked in order to select the
correct signaling path. As noted in the comment, MSI relies on
hardware to handle masking.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org # QEMU 2.1
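For context on what "reprogram the eventfd triggers" means at the kernel interface: the whole MSI-X vector range is handed to the kernel in a single VFIO_DEVICE_SET_IRQS call, with one eventfd (or -1 for an unused vector) per entry. The sketch below is illustrative only and not code from this patch; set_msix_triggers, device_fd, and vector_fds are invented names, while the struct, flags, and ioctl are the VFIO uAPI from <linux/vfio.h>.

```c
/*
 * Illustrative sketch, not QEMU code: programming one eventfd per MSI-X
 * vector in a single VFIO_DEVICE_SET_IRQS call.
 */
#include <linux/vfio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>

static int set_msix_triggers(int device_fd, const int32_t *vector_fds, int nvec)
{
    size_t argsz = sizeof(struct vfio_irq_set) + nvec * sizeof(int32_t);
    struct vfio_irq_set *irq_set = calloc(1, argsz);
    int ret;

    irq_set->argsz = argsz;
    /* Use eventfds as the trigger mechanism for the whole vector range. */
    irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
    irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
    irq_set->start = 0;
    irq_set->count = nvec;
    /* An fd of -1 leaves that vector without a trigger (unused vector). */
    memcpy(irq_set->data, vector_fds, nvec * sizeof(int32_t));

    ret = ioctl(device_fd, VFIO_DEVICE_SET_IRQS, irq_set);
    free(irq_set);
    return ret;
}
```

Because the call covers the whole range at once, expanding the vector count means rebuilding this fds array, which is why the per-vector path selection below matters.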
Diffstat (limited to 'hw/misc')
-rw-r--r--  hw/misc/vfio.c  38
1 file changed, 29 insertions, 9 deletions
```diff
diff --git a/hw/misc/vfio.c b/hw/misc/vfio.c
index 0b9eba0..e88b610 100644
--- a/hw/misc/vfio.c
+++ b/hw/misc/vfio.c
@@ -120,11 +120,20 @@ typedef struct VFIOINTx {
 } VFIOINTx;
 
 typedef struct VFIOMSIVector {
-    EventNotifier interrupt; /* eventfd triggered on interrupt */
-    EventNotifier kvm_interrupt; /* eventfd triggered for KVM irqfd bypass */
+    /*
+     * Two interrupt paths are configured per vector.  The first, is only used
+     * for interrupts injected via QEMU.  This is typically the non-accel path,
+     * but may also be used when we want QEMU to handle masking and pending
+     * bits.  The KVM path bypasses QEMU and is therefore higher performance,
+     * but requires masking at the device.  virq is used to track the MSI route
+     * through KVM, thus kvm_interrupt is only available when virq is set to a
+     * valid (>= 0) value.
+     */
+    EventNotifier interrupt;
+    EventNotifier kvm_interrupt;
     struct VFIODevice *vdev; /* back pointer to device */
     MSIMessage msg; /* cache the MSI message so we know when it changes */
-    int virq; /* KVM irqchip route for QEMU bypass */
+    int virq;
     bool use;
 } VFIOMSIVector;
 
@@ -681,13 +690,24 @@ static int vfio_enable_vectors(VFIODevice *vdev, bool msix)
     fds = (int32_t *)&irq_set->data;
 
     for (i = 0; i < vdev->nr_vectors; i++) {
-        if (!vdev->msi_vectors[i].use) {
-            fds[i] = -1;
-        } else if (vdev->msi_vectors[i].virq >= 0) {
-            fds[i] = event_notifier_get_fd(&vdev->msi_vectors[i].kvm_interrupt);
-        } else {
-            fds[i] = event_notifier_get_fd(&vdev->msi_vectors[i].interrupt);
+        int fd = -1;
+
+        /*
+         * MSI vs MSI-X - The guest has direct access to MSI mask and pending
+         * bits, therefore we always use the KVM signaling path when setup.
+         * MSI-X mask and pending bits are emulated, so we want to use the
+         * KVM signaling path only when configured and unmasked.
+         */
+        if (vdev->msi_vectors[i].use) {
+            if (vdev->msi_vectors[i].virq < 0 ||
+                (msix && msix_is_masked(&vdev->pdev, i))) {
+                fd = event_notifier_get_fd(&vdev->msi_vectors[i].interrupt);
+            } else {
+                fd = event_notifier_get_fd(&vdev->msi_vectors[i].kvm_interrupt);
+            }
         }
+
+        fds[i] = fd;
     }
 
     ret = ioctl(vdev->fd, VFIO_DEVICE_SET_IRQS, irq_set);
```
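The core of the fix is the per-vector choice between the two eventfds. Pulled out of the diff as a standalone helper for readability, this is a sketch rather than code from the tree: pick_vector_fd is an invented name, while VFIOMSIVector, msix_is_masked(), and event_notifier_get_fd() are the QEMU identifiers used in the patch.

```c
/*
 * Sketch of the selection rule introduced by this patch; not actual QEMU
 * code.  pick_vector_fd() is a hypothetical helper name.
 */
static int pick_vector_fd(VFIOMSIVector *vector, PCIDevice *pdev,
                          bool msix, int nr)
{
    if (!vector->use) {
        return -1;  /* unused vector: no trigger programmed */
    }

    /*
     * MSI mask and pending bits live in hardware, so a configured KVM route
     * can always be used.  MSI-X masking is emulated by QEMU, so a masked
     * vector must stay on the QEMU injection path even when a route exists.
     */
    if (vector->virq < 0 || (msix && msix_is_masked(pdev, nr))) {
        return event_notifier_get_fd(&vector->interrupt);
    }

    return event_notifier_get_fd(&vector->kvm_interrupt);
}
```

With that rule in place, vfio_enable_vectors() fills fds[i] per vector before issuing the VFIO_DEVICE_SET_IRQS ioctl, as shown in the diff above.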