path: root/hw/virtio/vhost.c
Age  Commit message  Author  Files  Lines
8 days  vhost: Do not abort on log-stop error  (Hanna Czenczek, 1 file, -1/+2)
Failing to stop logging in a vhost device is not exactly fatal. We can log such an error, but there is no need to abort the whole qemu process because of it. Signed-off-by: Hanna Czenczek <hreitz@redhat.com> Message-Id: <20250724125928.61045-3-hreitz@redhat.com> Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
8 days  vhost: Do not abort on log-start error  (Hanna Czenczek, 1 file, -1/+2)
Commit 3688fec8923 ("memory: Add Error** argument to .log_global_start() handler") enabled vhost_log_global_start() to return a proper error, but did not change it to do so; instead, it still aborts the whole process on error. This crash can be reproduced by e.g. killing a virtiofsd daemon before initiating migration. In such a case, qemu should not crash, but just make the attempted migration fail. Buglink: https://issues.redhat.com/browse/RHEL-94534 Reported-by: Tingting Mao <timao@redhat.com> Signed-off-by: Hanna Czenczek <hreitz@redhat.com> Message-Id: <20250724125928.61045-2-hreitz@redhat.com> Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
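A minimal sketch of the error path this fix implies, assuming QEMU's MemoryListener and Error APIs as described in the commits above; the helper vhost_set_log_base_all() is a hypothetical stand-in, not the actual patch.

/* Report the failure through the Error ** channel introduced by commit
 * 3688fec8923 instead of aborting the whole process. */
static bool vhost_log_global_start(MemoryListener *listener, Error **errp)
{
    int r = vhost_set_log_base_all();   /* hypothetical stand-in */

    if (r < 0) {
        /* Previously: abort(); now the attempted migration simply fails. */
        error_setg_errno(errp, -r, "Failed to start vhost dirty logging");
        return false;
    }
    return true;
}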
2025-07-14  vhost: add a helper for force stopping a device  (Daniil Tatianin, 1 file, -13/+39)
This adds an ability to skip GET_VRING_BASE during device stop entirely, and thus the expensive drain operation that this call entails as well, which may be useful during a non-graceful shutdown in case the guest operating system hangs or refuses to react to a previously requested ACPI shutdown for whatever reason. Signed-off-by: Daniil Tatianin <d-tatianin@yandex-team.ru> Message-Id: <20250609212547.2859224-3-d-tatianin@yandex-team.ru> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
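A self-contained sketch of the idea in plain C; the names and return convention are illustrative, not the actual helper added by this commit.

#include <stdbool.h>

/* Hypothetical stand-ins for the real backend calls. */
static int backend_get_vring_base(void) { return 0; }
static void vring_teardown(void) { }

/* A "force" flag lets callers skip the expensive GET_VRING_BASE round-trip
 * (and the drain it implies) during a non-graceful shutdown. */
static int virtqueue_stop_sketch(bool force)
{
    int last_avail_idx = 0;

    if (!force) {
        /* Graceful path: ask the backend for the last available index so it
         * can be restored when the device is started again. */
        last_avail_idx = backend_get_vring_base();
    }
    vring_teardown();
    return last_avail_idx;
}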
2025-07-14  vhost: Fix used memslot tracking when destroying a vhost device  (David Hildenbrand, 1 file, -27/+10)
When we unplug a vhost device, we end up calling vhost_dev_cleanup() where we do a memory_listener_unregister(). This memory_listener_unregister() call will end up disconnecting the listener from the address space through listener_del_address_space(). In that process, we effectively communicate the removal of all memory regions from that listener, resulting in region_del() + commit() callbacks getting triggered. So in case of vhost, we end up calling vhost_commit() with no remaining memory slots (0). In vhost_commit() we end up overwriting the global variables used_memslots / used_shared_memslots, used for detecting the number of free memslots. With used_memslots / used_shared_memslots set to 0 by vhost_commit() during device removal, we'll later assume that the other vhost devices still have plenty of memslots left when calling vhost_get_free_memslots(). Let's fix it by simply removing the global variables and depending only on the actual per-device count. Easy to reproduce by adding two vhost-user devices to a VM and then hot-unplugging one of them. While at it, detect unexpected underflows in vhost_get_free_memslots() and issue a warning. Reported-by: yuanminghao <yuanmh12@chinatelecom.cn> Link: https://lore.kernel.org/qemu-devel/20241121060755.164310-1-yuanmh12@chinatelecom.cn/ Fixes: 2ce68e4cf5be ("vhost: add vhost_has_free_slot() interface") Cc: Igor Mammedov <imammedo@redhat.com> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20250603111336.1858888-1-david@redhat.com> Reviewed-by: Igor Mammedov <imammedo@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
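A small self-contained sketch of the per-device accounting this commit switches to: free slots are derived from each device's own used count rather than a global that vhost_commit() can clobber during unplug. Types, limits, and the warning text are illustrative.

#include <stdio.h>

struct vhost_dev_sketch {
    unsigned int mem_slots_used;     /* slots this device currently consumes */
    unsigned int mem_slots_limit;    /* backend limit for this device */
    struct vhost_dev_sketch *next;
};

static unsigned int vhost_get_free_memslots_sketch(struct vhost_dev_sketch *head)
{
    unsigned int free_slots = ~0u;   /* ~0u: no vhost device imposes a limit */

    for (struct vhost_dev_sketch *d = head; d; d = d->next) {
        if (d->mem_slots_used > d->mem_slots_limit) {
            /* Unexpected underflow: warn instead of wrapping around. */
            fprintf(stderr, "vhost: used memslots exceed backend limit\n");
            return 0;
        }
        unsigned int cur = d->mem_slots_limit - d->mem_slots_used;
        if (cur < free_slots) {
            free_slots = cur;
        }
    }
    return free_slots;
}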
2025-05-14  vhost: return failure if stop virtqueue failed in vhost_dev_stop  (Haoqian He, 1 file, -10/+13)
This patch captures the error of vhost_virtqueue_stop() in vhost_dev_stop() and returns the error upward. Specifically, if QEMU is disconnected from the vhost backend, some actions in vhost_dev_stop() will fail, such as sending vhost-user messages to the backend (GET_VRING_BASE, SET_VRING_ENABLE) and vhost_reset_status. Considering that both set_vring_enable and vhost_reset_status require setting the specific virtio feature bit, we can capture vhost_virtqueue_stop()'s error to indicate that QEMU has lost connection with the backend. This patch is a preparatory patch for 'vhost-user: return failure if backend crashes when live migration', which makes live migration aware of the loss of connection with the vhost-user backend and aborts the live migration. Signed-off-by: Haoqian He <haoqian.he@smartx.com> Message-Id: <20250416024729.3289157-3-haoqian.he@smartx.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
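A sketch of the propagation pattern described above: remember the first per-queue failure but keep stopping the remaining queues, so callers such as migration can notice a lost backend connection. The per-queue helper is a stand-in, not the real vhost_virtqueue_stop().

/* Stand-in for the real per-queue stop call. */
static int stop_one_queue_sketch(int idx) { (void)idx; return 0; }

static int vhost_dev_stop_sketch(int nvqs)
{
    int ret = 0;

    for (int i = 0; i < nvqs; i++) {
        int r = stop_one_queue_sketch(i);
        if (r < 0 && ret == 0) {
            ret = r;        /* remember the first error ... */
        }
        /* ... but keep stopping the remaining queues regardless. */
    }
    return ret;
}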
2025-04-24  cleanup: Drop pointless return at end of function  (Markus Armbruster, 1 file, -1/+0)
A few functions now end with a label. The next commit will clean them up. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20250407082643.2310002-3-armbru@redhat.com> [Straightforward conflict with commit 988ad4ccebb6 (hw/loongarch/virt: Fix cpuslot::cpu set at last in virt_cpu_plug()) resolved]
2024-12-20  include: Rename sysemu/ -> system/  (Philippe Mathieu-Daudé, 1 file, -1/+1)
Headers in include/sysemu/ are not only related to system *emulation*, they are also used by virtualization. Rename as system/ which is clearer. Files renamed manually then mechanical change using sed tool. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Lei Yang <leiyang@redhat.com> Message-Id: <20241203172445.28576-1-philmd@linaro.org>
2024-11-26  vhost: fail device start if iotlb update fails  (Prasad Pandit, 1 file, -1/+12)
While starting a vhost device, updating iotlb entries via 'vhost_device_iotlb_miss' may return an error: "qemu-kvm: vhost_device_iotlb_miss: 700871,700871: Fail to update device iotlb". Fail device start when such an error occurs. Signed-off-by: Prasad Pandit <pjp@fedoraproject.org> Message-Id: <20241107113247.46532-1-ppandit@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
2024-10-03  vhost: Remove unused vhost_dev_{load|save}_inflight  (Dr. David Alan Gilbert, 1 file, -56/+0)
vhost_dev_load_inflight and vhost_dev_save_inflight have been unused since they were added in 2019 by: 5ad204bf2a ("vhost-user: Support transferring inflight buffer between qemu and backend") Remove them, and their helper vhost_dev_resize_inflight. Signed-off-by: Dr. David Alan Gilbert <dave@treblig.org> Reviewed-by: Igor Mammedov <imammedo@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Michael Tokarev <mjt@tls.msk.ru> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-09-11  vhost_net: configure all host notifiers in a single MR transaction  (zuoboqun, 1 file, -3/+3)
This allows the vhost_net device, which has multiple virtqueues, to batch the setup of all its host notifiers. This significantly reduces the vhost_net device starting and stopping time: e.g. the time spent on enabling notifiers drops from 630ms to 75ms and the time spent on disabling notifiers drops from 441ms to 45ms for a VM with 192 vCPUs and 15 vhost-user-net devices (64vq per device) in our case. Signed-off-by: zuoboqun <zuoboqun@baidu.com> Message-Id: <20240816070835.8309-1-zuoboqun@baidu.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
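A sketch of the batching pattern, assuming QEMU's memory_region_transaction_begin()/commit() API; the per-queue call is a hypothetical stand-in for the real host-notifier setup, so this is illustrative rather than the actual patch.

static void enable_all_notifiers_sketch(int total_queues)
{
    /* Open one memory-region transaction so the flatview and ioeventfd
     * changes are committed once, not once per virtqueue. */
    memory_region_transaction_begin();
    for (int i = 0; i < total_queues; i++) {
        set_host_notifier_sketch(i, true);   /* hypothetical per-queue call */
    }
    memory_region_transaction_commit();
}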
2024-07-01  vhost: Perform memory section dirty scans once per iteration  (Si-Wei Liu, 1 file, -6/+61)
On setups with one or more virtio-net devices with vhost on, the cost of each dirty-tracking iteration grows with the number of queues that are set up. For example, during migration of an idle guest the following is observed with virtio-net and vhost=on: 48 queues -> 78.11% [.] vhost_dev_sync_region.isra.13; 8 queues -> 40.50% [.] vhost_dev_sync_region.isra.13; 1 queue -> 6.89% [.] vhost_dev_sync_region.isra.13; 2 devices, 1 queue -> 18.60% [.] vhost_dev_sync_region.isra.14. With high memory dirtying rates the symptom is lack of convergence as soon as there is a vhost device with a sufficiently high number of queues, or a sufficient number of vhost devices. On every migration iteration (every 100msecs), the *shared log* is redundantly queried once per vhost queue configured in the guest. For the virtqueue data this is necessary, but not for the memory sections, which are the same for every queue; so essentially we end up scanning the dirty log too often. To fix that, select one vhost device responsible for scanning the log for memory-section dirty tracking. It is selected when we enable the logger (during migration) and cleared when we disable the logger. If the vhost logger device goes away for some reason, the logger is re-selected from the remaining vhost devices. After making the mem-section logger a singleton instance, a constant cost of 7%-9% (like the 1-queue report) is seen, no matter how many queues or how many vhost devices are configured: 48 queues -> 8.71% [.] vhost_dev_sync_region.isra.13; 2 devices, 8 queues -> 7.97% [.] vhost_dev_sync_region.isra.14. Co-developed-by: Joao Martins <joao.m.martins@oracle.com> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com> Message-Id: <1710448055-11709-2-git-send-email-si-wei.liu@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com>
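A self-contained sketch of the singleton mem-section logger described above: one elected device scans the shared dirty log for memory sections, while every device still syncs its own virtqueue used rings. All names are illustrative.

#include <stddef.h>

struct dev_sketch { int id; };

static struct dev_sketch *mem_logger;   /* elected when logging starts */

static void sync_mem_sections(struct dev_sketch *d) { (void)d; /* scan shared log */ }
static void sync_vring_regions(struct dev_sketch *d) { (void)d; /* per-vq rings   */ }

static void log_start_sketch(struct dev_sketch *d)
{
    if (!mem_logger) {
        mem_logger = d;         /* first device to start logging is elected */
    }
}

static void log_stop_sketch(struct dev_sketch *d)
{
    if (mem_logger == d) {
        mem_logger = NULL;      /* a remaining device gets re-elected later */
    }
}

static void log_sync_sketch(struct dev_sketch *d)
{
    if (d == mem_logger) {
        sync_mem_sections(d);   /* memory sections: scanned once per round */
    }
    sync_vring_regions(d);      /* used rings: always scanned per device */
}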
2024-07-01  vhost: dirty log should be per backend type  (Si-Wei Liu, 1 file, -12/+33)
There could be a mix of both vhost-user and vhost-kernel clients in the same QEMU process, where separate vhost loggers for the specific vhost type have to be used. Make the vhost logger per backend type, and have them properly reference counted. Suggested-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com> Message-Id: <1710448055-11709-1-git-send-email-si-wei.liu@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
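A self-contained sketch of a per-backend-type, reference-counted dirty log: vhost-kernel and vhost-user devices in one process each share their own log. The enum values and log layout are illustrative, not QEMU's actual structures.

#include <stddef.h>
#include <stdlib.h>

enum backend_type_sketch { BACKEND_KERNEL, BACKEND_USER, BACKEND_MAX };

struct vhost_log_sketch {
    size_t size;
    unsigned refcnt;
    unsigned char *data;
};

static struct vhost_log_sketch *logs[BACKEND_MAX];

static struct vhost_log_sketch *vhost_log_get_sketch(enum backend_type_sketch t,
                                                     size_t size)
{
    if (!logs[t]) {
        logs[t] = calloc(1, sizeof(*logs[t]));
        logs[t]->data = calloc(1, size);
        logs[t]->size = size;
    }
    logs[t]->refcnt++;          /* each device sharing the log takes a ref */
    return logs[t];
}

static void vhost_log_put_sketch(enum backend_type_sketch t,
                                 struct vhost_log_sketch *log)
{
    if (--log->refcnt == 0) {
        if (logs[t] == log) {
            logs[t] = NULL;
        }
        free(log->data);
        free(log);
    }
}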
2024-04-23  memory: Add Error** argument to .log_global_start() handler  (Cédric Le Goater, 1 file, -1/+2)
Modify all .log_global_start() handlers to take an Error** parameter and return a bool. Adapt memory_global_dirty_log_start() to interrupt the loop over handlers on the first error. In such a case, a rollback is performed to stop dirty logging on all listeners where it was previously enabled. Cc: Stefano Stabellini <sstabellini@kernel.org> Cc: Anthony Perard <anthony.perard@citrix.com> Cc: Paul Durrant <paul@xen.org> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: David Hildenbrand <david@redhat.com> Signed-off-by: Cédric Le Goater <clg@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Link: https://lore.kernel.org/r/20240320064911.545001-10-clg@redhat.com [peterx: modify & enrich the comment for listener_add_address_space() ] Signed-off-by: Peter Xu <peterx@redhat.com>
2024-03-26  vdpa-dev: Fix initialisation order to restore VDUSE compatibility  (Kevin Wolf, 1 file, -1/+7)
VDUSE requires that virtqueues are first enabled before the DRIVER_OK status flag is set; with the current API of the kernel module, it is impossible to support the opposite order in our block export code because userspace is not notified when a virtqueue is enabled. This requirement also matches the normal initialisation order as done by the generic vhost code in QEMU. However, commit 6c482547 accidentally changed the order for vdpa-dev and thereby broke access to VDUSE devices. This changes vdpa-dev to use the normal order again and use the standard vhost callback .vhost_set_vring_enable for this. VDUSE devices can be used with vdpa-dev again after this fix. vhost_net intentionally avoided enabling the vrings for vdpa and does this manually later, while it does enable them for other vhost backends. Reflect this in the vhost_net code and return early for vdpa, so that the behaviour doesn't change for this device. Cc: qemu-stable@nongnu.org Fixes: 6c4825476a43 ('vdpa: move vhost_vdpa_set_vring_ready to the caller') Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20240315155949.86066-1-kwolf@redhat.com> Reviewed-by: Eugenio Pérez <eperezma@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2024-03-12  hw/virtio/vhost: Fix missing ERRP_GUARD() for error_prepend()  (Zhao Liu, 1 file, -0/+2)
As the comment in qapi/error explains, passing @errp to error_prepend() requires ERRP_GUARD(): * = Why, when and how to use ERRP_GUARD() = * * Without ERRP_GUARD(), use of the @errp parameter is restricted: ... * - It should not be passed to error_prepend(), error_vprepend() or * error_append_hint(), because that doesn't work with &error_fatal. * ERRP_GUARD() lifts these restrictions. * * To use ERRP_GUARD(), add it right at the beginning of the function. * @errp can then be used without worrying about the argument being * NULL or &error_fatal. ERRP_GUARD() avoids the case where, with @errp set to &error_fatal, the user can't see this additional information, because exit() happens in error_setg() before the information is added [1]. In hw/virtio/vhost.c, there are 2 functions passing @errp to error_prepend() without ERRP_GUARD(): - vhost_save_backend_state() - vhost_load_backend_state() Their @errp both point to callers' @local_err. However, since these APIs are defined in include/hw/virtio/vhost.h, it is necessary to protect their @errp with ERRP_GUARD(). To follow the requirement of @errp, add the missing ERRP_GUARD() at their beginning. [1]: Issue description in the commit message of commit ae7c80a7bd73 ("error: New macro ERRP_GUARD()"). Cc: "Michael S. Tsirkin" <mst@redhat.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Message-ID: <20240311033822.3142585-27-zhao1.liu@linux.intel.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
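The rule this commit enforces, sketched against QEMU's error API (qapi/error.h): a function that passes its own @errp to error_prepend() starts with ERRP_GUARD(). The function body and the do_save_sketch() helper are hypothetical.

int vhost_save_backend_state_sketch(Error **errp)
{
    ERRP_GUARD();   /* makes *errp safe to use even if the caller passed
                     * NULL or &error_fatal */

    if (do_save_sketch(errp) < 0) {          /* hypothetical helper */
        error_prepend(errp, "Failed to save vhost backend state: ");
        return -1;
    }
    return 0;
}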
2023-11-07  vhost: Add high-level state save/load functions  (Hanna Czenczek, 1 file, -0/+204)
vhost_save_backend_state() and vhost_load_backend_state() can be used by vhost front-ends to easily save and load the back-end's state to/from the migration stream. Because we do not know the full state size ahead of time, vhost_save_backend_state() simply reads the data in 1 MB chunks, and writes each chunk consecutively into the migration stream, prefixed by its length. EOF is indicated by a 0-length chunk. Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Hanna Czenczek <hreitz@redhat.com> Message-Id: <20231016134243.68248-7-hreitz@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
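A self-contained sketch of the stream framing described above (length-prefixed chunks of at most 1 MiB, terminated by a zero-length chunk). For brevity the state comes from a caller-provided buffer; the real code reads it from the back-end in chunks precisely because the total size is not known up front.

#include <stdint.h>
#include <stdio.h>

enum { CHUNK_MAX = 1024 * 1024 };

static void save_backend_state_sketch(FILE *stream,
                                      const uint8_t *state, size_t total)
{
    size_t off = 0;

    while (off < total) {
        uint64_t len = total - off > CHUNK_MAX ? CHUNK_MAX : total - off;

        fwrite(&len, sizeof(len), 1, stream);        /* chunk length prefix */
        fwrite(state + off, 1, (size_t)len, stream); /* chunk payload */
        off += (size_t)len;
    }

    uint64_t eof = 0;
    fwrite(&eof, sizeof(eof), 1, stream);            /* 0-length chunk = EOF */
}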
2023-11-07  vhost-user: Interface for migration state transfer  (Hanna Czenczek, 1 file, -0/+37)
Add the interface for transferring the back-end's state during migration as defined previously in vhost-user.rst. Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Hanna Czenczek <hreitz@redhat.com> Message-Id: <20231016134243.68248-6-hreitz@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-11-01  cpr: relax vhost migration blockers  (Steve Sistare, 1 file, -1/+1)
vhost blocks migration if logging is not supported to track dirty memory, and vhost-user blocks it if the log cannot be saved to a shm fd. vhost-vdpa blocks migration if both hosts do not support all the device's features using a shadow VQ, for tracking requests and dirty memory. vhost-scsi blocks migration if storage cannot be shared across hosts, or if state cannot be migrated. None of these conditions apply if the old and new qemu processes do not run concurrently, and if new qemu starts on the same host as old, which is the case for cpr. Narrow the scope of these blockers so they only apply to normal mode. They will not block cpr modes when they are added in subsequent patches. No functional change until a new mode is added. Signed-off-by: Steve Sistare <steven.sistare@oracle.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com> Message-ID: <1698263069-406971-5-git-send-email-steven.sistare@oracle.com>
2023-10-23  Merge tag 'for_upstream' of https://git.kernel.org/pub/scm/virt/kvm/mst/qemu into staging  (Stefan Hajnoczi, 1 file, -0/+9)
virtio,pc,pci: features, cleanups. Infrastructure for vhost-vdpa shadow work, piix south bridge rework, reconnect for vhost-user-scsi, dummy ACPI QTG DSM for cxl, tests, cleanups, fixes all over the place. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> # gpg: Signature made Sun 22 Oct 2023 02:18:43 PDT # gpg: using RSA key 5D09FD0871C8F85B94CA8A0D281F0DB8D28D5469 # gpg: issuer "mst@redhat.com" # gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>" [full] # gpg: aka "Michael S. Tsirkin <mst@redhat.com>" [full] # Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67 # Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469 * tag 'for_upstream' of https://git.kernel.org/pub/scm/virt/kvm/mst/qemu: (62 commits) intel-iommu: Report interrupt remapping faults, fix return value MAINTAINERS: Add include/hw/intc/i8259.h to the PC chip section vhost-user: Fix protocol feature bit conflict tests/acpi: Update DSDT.cxl with QTG DSM hw/cxl: Add QTG _DSM support for ACPI0017 device tests/acpi: Allow update of DSDT.cxl hw/i386/cxl: ensure maxram is greater than ram size for calculating cxl range vhost-user: fix lost reconnect vhost-user-scsi: start vhost when guest kicks vhost-user-scsi: support reconnect to backend vhost: move and rename the conn retry times vhost-user-common: send get_inflight_fd once hw/i386/pc_piix: Make PIIX4 south bridge usable in PC machine hw/isa/piix: Implement multi-process QEMU support also for PIIX4 hw/isa/piix: Resolve duplicate code regarding PCI interrupt wiring hw/isa/piix: Reuse PIIX3's PCI interrupt triggering in PIIX4 hw/isa/piix: Rename functions to be shared for PCI interrupt triggering hw/isa/piix: Reuse PIIX3 base class' realize method in PIIX4 hw/isa/piix: Share PIIX3's base class with PIIX4 hw/isa/piix: Harmonize names of reset control memory regions ... Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-10-22  virtio: call ->vhost_reset_device() during reset  (Stefan Hajnoczi, 1 file, -0/+9)
vhost-user-scsi has a VirtioDeviceClass->reset() function that calls ->vhost_reset_device(). The other vhost devices don't notify the vhost device upon reset. Stateful vhost devices may need to handle device reset in order to free resources or prevent stale device state from interfering after reset. Call ->vhost_reset_device() from virtio_reset() so that vhost devices are notified of device reset. This patch affects behavior as follows: - vhost-kernel: No change in behavior since ->vhost_reset_device() is not implemented. - vhost-user: back-ends that negotiate VHOST_USER_PROTOCOL_F_RESET_DEVICE now receive a VHOST_USER_DEVICE_RESET message upon device reset. Otherwise there is no change in behavior. DPDK, SPDK, libvhost-user, and the vhost-user-backend crate do not negotiate VHOST_USER_PROTOCOL_F_RESET_DEVICE automatically. - vhost-vdpa: an extra SET_STATUS 0 call is made during device reset. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20231004014532.1228637-4-stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com> Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
2023-10-20  migration: simplify blockers  (Steve Sistare, 1 file, -6/+2)
Modify migrate_add_blocker and migrate_del_blocker to take an Error ** reason. This allows migration to own the Error object, so that if an error occurs in migrate_add_blocker, migration code can free the Error and clear the client handle, simplifying client code. It also simplifies the migrate_del_blocker call site. In addition, this is a pre-requisite for a proposed future patch that would add a mode argument to migration requests to support live update, and maintain a list of blockers for each mode. A blocker may apply to a single mode or to multiple modes, and passing Error** will allow one Error object to be registered for multiple modes. No functional change. Signed-off-by: Steve Sistare <steven.sistare@oracle.com> Tested-by: Michael Galaxy <mgalaxy@akamai.com> Reviewed-by: Michael Galaxy <mgalaxy@akamai.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com> Message-ID: <1697634216-84215-1-git-send-email-steven.sistare@oracle.com>
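A sketch of the client-side pattern after this change, assuming the post-patch prototypes (migrate_add_blocker()/migrate_del_blocker() taking an Error ** reason): migration owns the Error object, frees it, and clears the handle, so client code no longer has to. Exact signatures should be checked against migration/blocker.h.

static Error *vhost_migration_blocker;   /* hypothetical per-device handle */

static int block_migration_sketch(Error **errp)
{
    error_setg(&vhost_migration_blocker,
               "Migration disabled: vhost device lacks dirty logging");
    /* If adding the blocker fails, migration frees the Error and clears
     * the handle for us. */
    return migrate_add_blocker(&vhost_migration_blocker, errp);
}

static void unblock_migration_sketch(void)
{
    /* Frees the Error (if any) and clears vhost_migration_blocker. */
    migrate_del_blocker(&vhost_migration_blocker);
}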
2023-10-12  memory,vhost: Allow for marking memory device memory regions unmergeable  (David Hildenbrand, 1 file, -2/+2)
Let's allow for marking memory regions unmergeable, to teach flatview code and vhost to not merge adjacent aliases to the same memory region into a larger memory section; instead, we want separate aliases to stay separate such that we can atomically map/unmap aliases without affecting other aliases. This is desired for virtio-mem mapping device memory located on a RAM memory region via multiple aliases into a memory region container, resulting in separate memslots that can get (un)mapped atomically. As an example with virtio-mem, the layout would look something like this: [...] 0000000240000000-00000020bfffffff (prio 0, i/o): device-memory 0000000240000000-000000043fffffff (prio 0, i/o): virtio-mem 0000000240000000-000000027fffffff (prio 0, ram): alias memslot-0 @mem2 0000000000000000-000000003fffffff 0000000280000000-00000002bfffffff (prio 0, ram): alias memslot-1 @mem2 0000000040000000-000000007fffffff 00000002c0000000-00000002ffffffff (prio 0, ram): alias memslot-2 @mem2 0000000080000000-00000000bfffffff [...] Without unmergeable memory regions, all three memslots would get merged into a single memory section. For example, when mapping another alias (e.g., virtio-mem-memslot-3) or when unmapping any of the mapped aliases, memory listeners will first get notified about the removal of the big memory section to then get notified about re-adding of the new (differently merged) memory section(s). In an ideal world, memory listeners would be able to deal with that atomically, like KVM nowadays does. However, (a) supporting this for other memory listeners (vhost-user, vfio) is fairly hard: temporary removal can result in all kinds of issues on concurrent access to guest memory; and (b) this handling is undesired, because temporarily removing+readding can consume quite some time on bigger memslots and is not efficient (e.g., vfio unpinning and repinning pages ...). Let's allow for marking a memory region unmergeable, such that we can atomically (un)map aliases to the same memory region, similar to (un)mapping individual DIMMs. Similarly, teach vhost code to not redo what flatview core stopped doing: don't merge such sections. Merging in vhost code is really only relevant for handling random holes in boot memory where, without this merging, the vhost-user backend wouldn't be able to mmap() some boot memory backed by hugetlb. We'll use this for virtio-mem next. Message-ID: <20230926185738.277351-18-david@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  memory-device,vhost: Support automatic decision on the number of memslots  (David Hildenbrand, 1 file, -1/+13)
We want to support memory devices that can automatically decide how many memslots they will use. In the worst case, they have to use a single memslot. The target use cases are virtio-mem and the hyper-v balloon. Let's calculate a reasonable limit such a memory device may use, and instruct the device to make a decision based on that limit. Use a simple heuristic that considers: * A memslot soft-limit for all memory devices of 256; also, to not consume too many memslots -- which could harm performance. * Actually still free and unreserved memslots * The percentage of the remaining device memory region that memory device will occupy. Further, while we properly check before plugging a memory device whether there still are free memslots, we have other memslot consumers (such as boot memory, PCI BARs) that don't perform any checks and might dynamically consume memslots without any prior reservation. So we might succeed in plugging a memory device, but once we dynamically map a PCI BAR we would be in trouble. Doing accounting / reservation / checks for all such users is problematic (e.g., sometimes we might temporarily split boot memory into two memslots, triggered by the BIOS). We use the historic magic memslot number of 509 as orientation: supporting 256 memory-device memslots (leaving 253 for boot memory and other devices) has been proven to work reliably. We'll fall back to suggesting a single memslot if we don't have at least 509 total memslots. Plugging vhost devices with fewer than 509 memslots available while we have memory devices plugged that consume multiple memslots due to automatic decisions can be problematic. Most configurations might just fail due to "limit < used + reserved", however, it can also happen that these memory devices would suddenly consume memslots that would actually be required by other memslot consumers (boot, PCI BARs) later. Note that this has always been sketchy with vhost devices that support only a small number of memslots; but we don't want to make it any worse. So let's keep it simple and simply reject plugging such vhost devices in such a configuration. Eventually, all vhost devices that want to be fully compatible with such memory devices should support a decent number of memslots (>= 509). Message-ID: <20230926185738.277351-13-david@redhat.com> Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  vhost: Add vhost_get_max_memslots()  (David Hildenbrand, 1 file, -0/+11)
Let's add vhost_get_max_memslots(). Message-ID: <20230926185738.277351-12-david@redhat.com> Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  memory-device,vhost: Support memory devices that dynamically consume memslots  (David Hildenbrand, 1 file, -4/+14)
We want to support memory devices that have a dynamically managed memory region container as device memory region. This device memory region maps multiple RAM memory subregions (e.g., aliases to the same RAM memory region), whereby these subregions can be (un)mapped on demand. Each RAM subregion will consume a memslot in KVM and vhost, resulting in such a new device consuming memslots dynamically, and initially usually 0. We already track the number of used vs. required memslots for all memslots. From that, we can derive the number of reserved memslots that must not be used otherwise. The target use case is virtio-mem and the hyper-v balloon, which will dynamically map aliases to RAM memory region into their device memory region container. Properly document what's supported and what's not and extend the vhost memslot check accordingly. Message-ID: <20230926185738.277351-10-david@redhat.com> Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  vhost: Return number of free memslots  (David Hildenbrand, 1 file, -2/+2)
Let's return the number of free slots instead of only checking whether there is a free slot. This is a preparation for memory devices that consume multiple memslots. Message-ID: <20230926185738.277351-6-david@redhat.com> Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  vhost: Remove vhost_backend_can_merge() callback  (David Hildenbrand, 1 file, -5/+1)
Checking whether the memory regions are equal is sufficient: if they are equal, then most certainly the contained fd is equal. The whole vhost-user memslot handling is suboptimal and overly complicated. We shouldn't have to look up a RAM memory region we got notified about in vhost_user_get_mr_data() using a host pointer. But that requires a bigger rework -- especially an alternative vhost_set_mem_table() backend call that simply consumes MemoryRegionSections. For now, let's just drop vhost_backend_can_merge(). Message-ID: <20230926185738.277351-3-david@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Igor Mammedov <imammedo@redhat.com> Acked-by: Igor Mammedov <imammedo@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  vhost: Rework memslot filtering and fix "used_memslot" tracking  (David Hildenbrand, 1 file, -9/+47)
Having multiple vhost devices, some filtering out fd-less memslots and some not, can mess up the "used_memslot" accounting. Consequently our "free memslot" checks become unreliable and we might run out of free memslots at runtime later. An example sequence which can trigger a potential issue that involves different vhost backends (vhost-kernel and vhost-user) and hotplugged memory devices can be found at [1]. Let's make the filtering mechanism less generic and distinguish between backends that support private memslots (without a fd) and ones that only support shared memslots (with a fd). Track the used_memslots for both cases separately and use the corresponding value when required. Note: Most probably we should filter out MAP_PRIVATE fd-based RAM regions (for example, via memory-backend-memfd,...,shared=off or as default with memory-backend-file) as well. When not using MAP_SHARED, it might not work as expected. Add a TODO for now. [1] https://lkml.kernel.org/r/fad9136f-08d3-3fd9-71a1-502069c000cf@redhat.com Message-ID: <20230926185738.277351-2-david@redhat.com> Fixes: 988a27754bbb ("vhost: allow backends to filter memory sections") Cc: Tiwei Bie <tiwei.bie@intel.com> Acked-by: Igor Mammedov <imammedo@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-06  hw/virtio/vhost: Silence compiler warnings in vhost code when using -Wshadow  (Thomas Huth, 1 file, -4/+4)
Rename a variable in vhost_dev_sync_region() and remove a superfluous declaration in vhost_commit() to make this code compilable with "-Wshadow". Signed-off-by: Thomas Huth <thuth@redhat.com> Message-ID: <20231004114809.105672-1-thuth@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-By: Michael Tokarev <mjt@tls.msk.ru> Signed-off-by: Markus Armbruster <armbru@redhat.com>
2023-08-03  vhost: fix the fd leak  (Li Feng, 1 file, -0/+2)
When vhost-user reconnects to the backend, the notifier should be cleaned up. Otherwise, the fd resources will be exhausted. Fixes: f9a09ca3ea ("vhost: add support for configure interrupt") Signed-off-by: Li Feng <fengli@smartx.com> Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com> Message-Id: <20230731121018.2856310-2-fengli@smartx.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Tested-by: Fiona Ebner <f.ebner@proxmox.com>
2023-07-10  vhost: register and change IOMMU flag depending on Device-TLB state  (Viktor Prutyanov, 1 file, -12/+26)
The guest can disable or never enable Device-TLB. In these cases, it can't be used even if enabled in QEMU. So, check the Device-TLB state before registering the IOMMU notifier and select the unmap flag depending on it. Also, implement a way to change the IOMMU notifier flag if the Device-TLB state is changed. Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2001312 Signed-off-by: Viktor Prutyanov <viktor@daynix.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20230626091258.24453-2-viktor@daynix.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-06-28  exec/memory: Add symbol for memory listener priority for device backend  (Isaku Yamahata, 1 file, -1/+1)
Add MEMORY_LISTENER_PRIORITY_DEV_BACKEND for the symbolic value for memory listener to replace the hard-coded value 10 for the device backend. No functional change intended. Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <8314d91688030d7004e96958f12e2c83fb889245.1687279702.git.isaku.yamahata@intel.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-06-26  vhost: fix vhost_dev_enable_notifiers() error case  (Laurent Vivier, 1 file, -29/+36)
In vhost_dev_enable_notifiers(), if virtio_bus_set_host_notifier(true) fails, we call vhost_dev_disable_notifiers(), which executes virtio_bus_set_host_notifier(false) on all queues, even on queues that have failed to be initialized. This triggers a core dump in memory_region_del_eventfd(): virtio_bus_set_host_notifier: unable to init event notifier: Too many open files (-24) vhost VQ 1 notifier binding failed: 24 .../softmmu/memory.c:2611: memory_region_del_eventfd: Assertion `i != mr->ioeventfd_nb' failed. Fix the problem by providing to vhost_dev_disable_notifiers() the number of queues to disable. Fixes: 8771589b6f81 ("vhost: simplify vhost_dev_enable_notifiers") Cc: longpeng2@huawei.com Signed-off-by: Laurent Vivier <lvivier@redhat.com> Message-Id: <20230602162735.3670785-1-lvivier@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
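A self-contained sketch of the fix: pass the number of queues that were actually set up to the disable helper, so the error path never touches queues whose notifier initialization already failed. The per-queue helper is a stand-in, not the real virtio_bus_set_host_notifier().

#include <stdbool.h>

/* Stand-in for the real per-queue notifier call. */
static int set_host_notifier_sketch(int vq, bool assign)
{
    (void)vq; (void)assign;
    return 0;
}

static void disable_notifiers_sketch(int nvqs)
{
    for (int i = 0; i < nvqs; i++) {
        set_host_notifier_sketch(i, false);
    }
}

static int enable_notifiers_sketch(int nvqs)
{
    for (int i = 0; i < nvqs; i++) {
        int r = set_host_notifier_sketch(i, true);
        if (r < 0) {
            disable_notifiers_sketch(i);   /* only undo queues [0, i) */
            return r;
        }
    }
    return 0;
}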
2023-06-23  vhost: release virtqueue objects in error path  (Prasad Pandit, 1 file, -1/+2)
The vhost_dev_start() function does not release virtqueue objects when event_notifier_init() fails. Release the virtqueue objects and log a message about the failure. Signed-off-by: Prasad Pandit <pjp@fedoraproject.org> Message-Id: <20230529114333.31686-3-ppandit@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Fixes: f9a09ca3ea ("vhost: add support for configure interrupt") Reviewed-by: Peter Xu <peterx@redhat.com> Cc: qemu-stable@nongnu.org Acked-by: Jason Wang <jasowang@redhat.com>
2023-06-23  vhost: release memory_listener object in error path  (Prasad Pandit, 1 file, -0/+3)
The vhost_dev_start() function does not release the memory_listener object in case of an error. This may crash the guest when vhost is unable to set the memory table: stack trace of thread 125653: Program terminated with signal SIGSEGV, Segmentation fault #0 memory_listener_register (qemu-kvm + 0x6cda0f) #1 vhost_dev_start (qemu-kvm + 0x699301) #2 vhost_net_start (qemu-kvm + 0x45b03f) #3 virtio_net_set_status (qemu-kvm + 0x665672) #4 qmp_set_link (qemu-kvm + 0x548fd5) #5 net_vhost_user_event (qemu-kvm + 0x552c45) #6 tcp_chr_connect (qemu-kvm + 0x88d473) #7 tcp_chr_new_client (qemu-kvm + 0x88cf83) #8 tcp_chr_accept (qemu-kvm + 0x88b429) #9 qio_net_listener_channel_func (qemu-kvm + 0x7ac07c) #10 g_main_context_dispatch (libglib-2.0.so.0 + 0x54e2f) Release memory_listener objects in the error path. Signed-off-by: Prasad Pandit <pjp@fedoraproject.org> Message-Id: <20230529114333.31686-2-ppandit@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Fixes: c471ad0e9b ("vhost_net: device IOTLB support") Cc: qemu-stable@nongnu.org Acked-by: Jason Wang <jasowang@redhat.com>
2023-06-23  hw/virtio: Remove unnecessary 'virtio-access.h' header  (Philippe Mathieu-Daudé, 1 file, -1/+0)
None of these files use the VirtIO Load/Store API declared by "hw/virtio/virtio-access.h". This header probably crept in via copy/pasting; remove it. Note, "virtio-access.h" is target-specific, so any file including it also becomes tainted as target-specific. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Acked-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Thomas Huth <thuth@redhat.com> Message-Id: <20230524093744.88442-10-philmd@linaro.org> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2023-05-19  vhost: expose function vhost_dev_has_iommu()  (Cindy Lu, 1 file, -1/+1)
To support vIOMMU in vdpa, we need to expose the function vhost_dev_has_iommu(); vdpa will use this function to check whether vIOMMU is enabled. Signed-off-by: Cindy Lu <lulu@redhat.com> Message-Id: <20230510054631.2951812-2-lulu@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-04-21  vhost: Drop unused eventfd_add|del hooks  (Peter Xu, 1 file, -14/+0)
These hooks were introduced in: 80a1ea3748 ("memory: move ioeventfd ops to MemoryListener", 2012-02-29) But they seem to be never used. Drop them. Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20230306193209.516011-1-peterx@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Acked-by: Jason Wang <jasowang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-03-07  vdpa: move vhost reset after get vring base  (Eugenio Pérez, 1 file, -0/+3)
The function vhost.c:vhost_dev_stop calls the vhost operation vhost_dev_start(false). In the case of vdpa it totally resets and wipes the device, making the fetching of the vring base (virtqueue state) totally useless. The kernel backend does not use the vhost_dev_start vhost op callback, but vhost-user does. A patch to make vhost_user_dev_start more similar to vdpa is desirable, but it can be added on top. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20230303172445.1089785-8-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-01-08  vhost: configure all host notifiers in a single MR transaction  (Longpeng, 1 file, -0/+24)
This allows the vhost device to batch the setup of all its host notifiers. This significantly reduces the device starting time: e.g. the time spent on enabling notifiers drops from 376ms to 9.1ms for a VM with 64 vCPUs and 3 vhost-vDPA generic devices (vdpa_sim_blk, 64vq per device). Signed-off-by: Longpeng <longpeng2@huawei.com> Message-Id: <20221227072015.3134-3-longpeng2@huawei.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-01-08  vhost: simplify vhost_dev_enable_notifiers  (Longpeng, 1 file, -16/+4)
Simplify the error path in vhost_dev_enable_notifiers by using vhost_dev_disable_notifiers directly. Signed-off-by: Longpeng <longpeng2@huawei.com> Message-Id: <20221227072015.3134-2-longpeng2@huawei.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-01-08  vhost: add support for configure interrupt  (Cindy Lu, 1 file, -1/+77)
Add functions to support configure interrupt. The configure interrupt process will start in vhost_dev_start and stop in vhost_dev_stop. Also add the functions to support vhost_config_pending and vhost_config_mask. Signed-off-by: Cindy Lu <lulu@redhat.com> Message-Id: <20221222070451.936503-8-lulu@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-12-21  vhost: fix vq dirty bitmap syncing when vIOMMU is enabled  (Jason Wang, 1 file, -20/+64)
When vIOMMU is enabled, the vq->used_phys is actually an IOVA, not a GPA. So we need to translate it to a GPA before syncing; otherwise we may hit a crash, since the IOVA could be out of the scope of the GPA log size. This can be observed when using virtio-IOMMU with vhost on a guest with 1G of memory. Fixes: c471ad0e9bd46 ("vhost_net: device IOTLB support") Cc: qemu-stable@nongnu.org Tested-by: Lei Yang <leiyang@redhat.com> Reported-by: Yalan Zhang <yalzhang@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Message-Id: <20221216033552.77087-1-jasowang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-12-01  vhost: enable vrings in vhost_dev_start() for vhost-user devices  (Stefano Garzarella, 1 file, -5/+39)
Commit 02b61f38d3 ("hw/virtio: incorporate backend features in features") properly negotiates VHOST_USER_F_PROTOCOL_FEATURES with the vhost-user backend, but we forgot to enable vrings as specified in docs/interop/vhost-user.rst: If ``VHOST_USER_F_PROTOCOL_FEATURES`` has not been negotiated, the ring starts directly in the enabled state. If ``VHOST_USER_F_PROTOCOL_FEATURES`` has been negotiated, the ring is initialized in a disabled state and is enabled by ``VHOST_USER_SET_VRING_ENABLE`` with parameter 1. Some vhost-user front-ends already did this by calling vhost_ops.vhost_set_vring_enable() directly: - backends/cryptodev-vhost.c - hw/net/virtio-net.c - hw/virtio/vhost-user-gpio.c But most didn't do that, so we would leave the vrings disabled and some backends would not work. We observed this issue with the rust version of virtiofsd [1], which uses the event loop [2] provided by the vhost-user-backend crate where requests are not processed if vring is not enabled. Let's fix this issue by enabling the vrings in vhost_dev_start() for vhost-user front-ends that don't already do this directly. Same thing also in vhost_dev_stop() where we disable vrings. [1] https://gitlab.com/virtio-fs/virtiofsd [2] https://github.com/rust-vmm/vhost/blob/240fc2966/crates/vhost-user-backend/src/event_loop.rs#L217 Fixes: 02b61f38d3 ("hw/virtio: incorporate backend features in features") Reported-by: German Maglione <gmaglione@redhat.com> Tested-by: German Maglione <gmaglione@redhat.com> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Acked-by: Raphael Norwitz <raphael.norwitz@nutanix.com> Message-Id: <20221123131630.52020-1-sgarzare@redhat.com> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20221130112439.2527228-3-alex.bennee@linaro.org> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
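A self-contained sketch of the rule from docs/interop/vhost-user.rst applied here: if VHOST_USER_F_PROTOCOL_FEATURES (bit 30) was negotiated, rings start disabled and must be enabled explicitly; otherwise they already start enabled. set_vring_enable_sketch() stands in for the real vhost_ops->vhost_set_vring_enable() callback.

#include <stdint.h>

#define VHOST_USER_F_PROTOCOL_FEATURES 30   /* per the vhost-user spec */

/* Stand-in for the real back-end call. */
static int set_vring_enable_sketch(int enable) { (void)enable; return 0; }

static int start_vrings_sketch(uint64_t acked_features)
{
    if (!(acked_features & (1ULL << VHOST_USER_F_PROTOCOL_FEATURES))) {
        return 0;   /* ring already starts in the enabled state */
    }
    /* Protocol features negotiated: ring starts disabled, enable it now. */
    return set_vring_enable_sketch(1);
}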
2022-11-07  vhost: expose vhost_virtqueue_stop()  (Kangjie Xu, 1 file, -4/+4)
Expose vhost_virtqueue_stop(); we need to use it when resetting a virtqueue. Signed-off-by: Kangjie Xu <kangjie.xu@linux.alibaba.com> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20221017092558.111082-9-xuanzhuo@linux.alibaba.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-11-07  vhost: expose vhost_virtqueue_start()  (Kangjie Xu, 1 file, -4/+4)
Expose vhost_virtqueue_start(); we need to use it when restarting a virtqueue. Signed-off-by: Kangjie Xu <kangjie.xu@linux.alibaba.com> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20221017092558.111082-8-xuanzhuo@linux.alibaba.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-10-07  hw/virtio: add some vhost-user trace events  (Alex Bennée, 1 file, -0/+6)
These are useful for tracing the lifetime of vhost-user connections. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20220802095010.3330793-10-alex.bennee@linaro.org> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
2022-08-17  hw/virtio: gracefully handle unset vhost_dev vdev  (Alex Bennée, 1 file, -3/+7)
I've noticed asserts firing because we query the status of vdev after a vhost connection is closed down. Rather than faulting on the NULL dereference, just quietly reply false. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20220728135503.1060062-3-alex.bennee@linaro.org> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-06-27  vhost: setup error eventfd and dump errors  (Konstantin Khlebnikov, 1 file, -0/+37)
Vhost has error notifications, so let's log them like other errors. For each virtqueue, set up an eventfd for vring error notifications. Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> [vsementsov: rename patch, change commit message and dump error like other errors in the file] Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Message-Id: <20220623161325.18813-3-vsementsov@yandex-team.ru> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>
2022-06-16  vhost: also check queue state in the vhost_dev_set_log error routine  (Ni Xun, 1 file, -0/+4)
When checking the queue state in the vhost_dev_set_log routine, the error path misses the check; this patch also checks the queue state in the error case. Fixes: 1e5a050f5798 ("check queue state in the vhost_dev_set_log routine") Signed-off-by: Ni Xun <richardni@tencent.com> Reviewed-by: Zhigang Lu <tonnylu@tencent.com> Message-Id: <OS0PR01MB57139163F3F3955960675B52EAA79@OS0PR01MB5713.jpnprd01.prod.outlook.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>