path: root/hw/virtio
Age    Commit message    Author    Files    Lines
2020-04-01virtio-iommu: depend on PCIPaolo Bonzini1-1/+1
The virtio-iommu device attaches itself to a PCI bus, so it makes no sense to include it unless PCI is supported---and in fact compilation fails without this change. Reported-by: Gerd Hoffmann <kraxel@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-31vhost-vsock: fix double close() in the realize() error pathStefano Garzarella1-1/+5
vhost_dev_cleanup() closes the vhostfd parameter passed to vhost_dev_init(), so this patch avoids closing it twice in the vhost_vsock_device_realize() error path. Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Message-Id: <20200331075910.42529-1-sgarzare@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
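A minimal sketch of the pattern behind the fix (plain C illustration, not the actual QEMU code): once the cleanup helper has closed the descriptor, the error path must treat it as already consumed.

    #include <unistd.h>

    /* cleanup() closes the fd and invalidates the caller's copy, so a later
     * close() on the error path is skipped instead of closing twice. */
    static void cleanup(int *fd)
    {
        if (*fd >= 0) {
            close(*fd);
            *fd = -1;
        }
    }

    static int realize_or_fail(int vhostfd)
    {
        /* ... initialization fails ... */
        cleanup(&vhostfd);       /* closes vhostfd and marks it invalid */
        if (vhostfd >= 0) {      /* now false: no double close() */
            close(vhostfd);
        }
        return -1;
    }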
2020-03-29virtio-iommu: avoid memleak in the unrealizePan Nengyuan1-0/+3
req_vq/event_vq are not freed in unrealize; fix that. Also clean up the 's->as_by_busptr' hash table in unrealize to fix another leak. Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com> Acked-by: Eric Auger <eric.auger@redhat.com> Message-Id: <20200328005705.29898-3-pannengyuan@huawei.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
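A hedged sketch of what such an unrealize cleanup typically looks like (type and field names are taken from the commit text; the exact upstream code may differ):

    static void virtio_iommu_device_unrealize(DeviceState *dev, Error **errp)
    {
        VirtIOIOMMU *s = VIRTIO_IOMMU(dev);

        g_hash_table_destroy(s->as_by_busptr);   /* drop the per-bus hash table */
        virtio_delete_queue(s->req_vq);          /* free the request virtqueue */
        virtio_delete_queue(s->event_vq);        /* free the event virtqueue */
        virtio_cleanup(VIRTIO_DEVICE(dev));
    }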
2020-03-16misc: Replace zero-length arrays with flexible array member (automatic)Philippe Mathieu-Daudé1-2/+2
Description copied from Linux kernel commit from Gustavo A. R. Silva (see [3]): --v-- description start --v-- The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these ones is a flexible array member [1], introduced in C99: struct foo { int stuff; struct boo array[]; }; By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kind of undefined behavior bugs from being unadvertenly introduced [2] to the Linux codebase from now on. --^-- description end --^-- Do the similar housekeeping in the QEMU codebase (which uses C99 since commit 7be41675f7cb). All these instances of code were found with the help of the following Coccinelle script: @@ identifier s, m, a; type t, T; @@ struct s { ... t m; - T a[0]; + T a[]; }; @@ identifier s, m, a; type t, T; @@ struct s { ... t m; - T a[0]; + T a[]; } QEMU_PACKED; [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=76497732932f [3] https://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux.git/commit/?id=17642a2fbd2c1 Inspired-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-08vhost-vsock: fix error message outputNick Erdmann1-1/+1
error_setg_errno takes a positive error number, so we should not invert errno's sign. Signed-off-by: Nick Erdmann <n@nirf.de> Message-Id: <04df3f47-c93b-1d02-d250-f9bda8dbc0fa@nirf.de> Reviewed-by: Ján Tomko <jtomko@redhat.com> Fixes: fc0b9b0e1cbb ("vhost-vsock: add virtio sockets device") Reviewed-by: Laurent Vivier <laurent@vivier.eu> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
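The rule in a nutshell (illustrative sketch, not the exact call site):

    /* Wrong: errno is already positive, negating it hides the real error. */
    error_setg_errno(errp, -errno, "vhost-vsock: failed to set guest cid");

    /* Right: pass the positive error number as-is. */
    error_setg_errno(errp, errno, "vhost-vsock: failed to set guest cid");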
2020-03-08vhost: correctly turn on VIRTIO_F_IOMMU_PLATFORMJason Wang1-1/+11
We turn on device IOTLB via VIRTIO_F_IOMMU_PLATFORM unconditionally, even on platforms without IOMMU support. This can lead to unnecessary IOTLB transactions which hurt performance. Fix this by checking whether the device is backed by an IOMMU and disabling device IOTLB otherwise. Reported-by: Halil Pasic <pasic@linux.ibm.com> Tested-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Message-Id: <20200302042454.24814-1-jasowang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
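The core of the idea, as a hedged sketch (the helper name is an assumption; the real change lives in the vhost feature negotiation path):

    /* Only keep VIRTIO_F_IOMMU_PLATFORM when a vIOMMU actually backs the
     * device; otherwise device IOTLB just adds translation round-trips. */
    if (!vhost_dev_has_iommu(dev)) {
        features &= ~(1ULL << VIRTIO_F_IOMMU_PLATFORM);
    }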
2020-02-27Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into stagingPeter Maydell10-15/+1135
virtio, pc: fixes, features New virtio iommu. Unrealize memory leaks. In-band kick/call support. Bugfixes, documentation all over the place. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> # gpg: Signature made Thu 27 Feb 2020 08:46:33 GMT # gpg: using RSA key 5D09FD0871C8F85B94CA8A0D281F0DB8D28D5469 # gpg: issuer "mst@redhat.com" # gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>" [full] # gpg: aka "Michael S. Tsirkin <mst@redhat.com>" [full] # Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67 # Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469 * remotes/mst/tags/for_upstream: (30 commits) Fixed assert in vhost_user_set_mem_table_postcopy vhost-user: only set slave channel for first vq acpi: cpuhp: document CPHP_GET_CPU_ID_CMD command libvhost-user: implement in-band notifications docs: vhost-user: add in-band kick/call messages libvhost-user: handle NOFD flag in call/kick/err better libvhost-user-glib: use g_main_context_get_thread_default() libvhost-user-glib: fix VugDev main fd cleanup libvhost-user: implement VHOST_USER_PROTOCOL_F_REPLY_ACK MAINTAINERS: add virtio-iommu related files hw/arm/virt: Add the virtio-iommu device tree mappings virtio-iommu-pci: Add virtio iommu pci support virtio-iommu: Support migration virtio-iommu: Implement fault reporting virtio-iommu: Implement translate virtio-iommu: Implement map/unmap virtio-iommu: Implement attach/detach command virtio-iommu: Decode the command payload virtio-iommu: Add skeleton virtio: gracefully handle invalid region caches ... Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-27Fixed assert in vhost_user_set_mem_table_postcopyRaphael Norwitz1-1/+1
The current vhost_user_set_mem_table_postcopy() implementation populates each region of the VHOST_USER_SET_MEM_TABLE message without first checking if there are more than VHOST_MEMORY_MAX_NREGIONS already populated. This can cause memory corruption if too many regions are added to the message during the postcopy step. This change moves an existing assert up such that attempting to construct a VHOST_USER_SET_MEM_TABLE message with too many memory regions will gracefully bring down qemu instead of corrupting memory. Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com> Signed-off-by: Peter Turschmid <peter.turschm@nutanix.com> Message-Id: <1579143426-18305-2-git-send-email-raphael.norwitz@nutanix.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
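Conceptually the change moves the bounds check before the slot is written rather than after (sketch with assumed names):

    /* For each RAM section that should be sent to the backend: */
    assert(fd_num < VHOST_MEMORY_MAX_NREGIONS);    /* check BEFORE populating */
    msg.payload.memory.regions[fd_num] = region;   /* safe: slot is in range */
    fds[fd_num++] = fd;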
2020-02-27vhost-user: only set slave channel for first vqAdrian Moreno1-3/+5
When multiqueue is enabled, a vhost_dev is created for each queue pair. However, only one slave channel is needed. Fixes: 4bbeeba023f2 (vhost-user: add slave-req-fd support) Cc: marcandre.lureau@redhat.com Signed-off-by: Adrian Moreno <amorenoz@redhat.com> Message-Id: <20200121214553.28459-1-amorenoz@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-02-27virtio-iommu-pci: Add virtio iommu pci supportEric Auger2-0/+105
This patch adds virtio-iommu-pci, which is the pci proxy for the virtio-iommu device. Currently non DT integration is not yet supported by the kernel. So the machine must implement a hotplug handler for the virtio-iommu-pci device that creates the device tree iommu-map bindings as documented in kernel documentation: Documentation/devicetree/bindings/virtio/iommu.txt Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20200214132745.23392-9-eric.auger@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-02-27virtio-iommu: Support migrationEric Auger1-10/+99
Add Migration support. We rely on recently added gtree and qlist migration. We only migrate the domain gtree. The endpoint gtree is re-constructed in a post-load operation. Signed-off-by: Eric Auger <eric.auger@redhat.com> Acked-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20200214132745.23392-8-eric.auger@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-02-27virtio-iommu: Implement fault reportingEric Auger2-5/+66
The event queue allows reporting asynchronous errors. The translate function now injects faults when relevant. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20200214132745.23392-7-eric.auger@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-02-27virtio-iommu: Implement translateEric Auger2-1/+60
This patch implements the translate callback Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20200214132745.23392-6-eric.auger@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-02-27virtio-iommu: Implement map/unmapEric Auger2-2/+62
This patch implements virtio_iommu_map/unmap. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20200214132745.23392-5-eric.auger@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-02-27virtio-iommu: Implement attach/detach commandEric Auger2-2/+315
This patch implements the endpoint attach/detach to/from a domain. Domain and endpoint internal datatypes are introduced. Both are stored in RB trees. The domain owns a list of endpoints attached to it. Also, helpers to get/put endpoints and domains are introduced. As for the IOMMU memory regions, a callback is called on PCI bus enumeration that initializes an IOMMU memory region for a given device on the bus hierarchy. The PCI bus hierarchy is stored locally in IOMMUPciBus and IOMMUDevice objects. At the time of the enumeration, the bus number may not be computed yet. So operations that need to retrieve the IOMMUDevice and its IOMMU memory region from the bus number and devfn, once the bus number is guaranteed to be frozen, use a lazily populated array of IOMMUPciBus. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20200214132745.23392-4-eric.auger@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-02-27virtio-iommu: Decode the command payloadEric Auger2-12/+68
This patch adds the command payload decoding and introduces the functions that will do the actual command handling. Those functions are not yet implemented. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20200214132745.23392-3-eric.auger@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-02-27virtio-iommu: Add skeletonEric Auger4-0/+278
This patch adds the skeleton for the virtio-iommu device. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20200214132745.23392-2-eric.auger@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-02-27virtio: gracefully handle invalid region cachesStefan Hajnoczi1-8/+91
The virtqueue code sets up MemoryRegionCaches to access the virtqueue guest RAM data structures. The code currently assumes that VRingMemoryRegionCaches is initialized before device emulation code accesses the virtqueue. An assertion will fail in vring_get_region_caches() when this is not true. Device fuzzing found a case where this assumption is false (see below). Virtqueue guest RAM addresses can also be changed from a vCPU thread while an IOThread is accessing the virtqueue. This breaks the same assumption but this time the caches could become invalid partway through the virtqueue code. The code fetches the caches RCU pointer multiple times so we will need to validate the pointer every time it is fetched. Add checks each time we call vring_get_region_caches() and treat invalid caches as a nop: memory stores are ignored and memory reads return 0. The fuzz test failure is as follows: $ qemu -M pc -device virtio-blk-pci,id=drv0,drive=drive0,addr=4.0 \ -drive if=none,id=drive0,file=null-co://,format=raw,auto-read-only=off \ -drive if=none,id=drive1,file=null-co://,file.read-zeroes=on,format=raw \ -display none \ -qtest stdio endianness outl 0xcf8 0x80002020 outl 0xcfc 0xe0000000 outl 0xcf8 0x80002004 outw 0xcfc 0x7 write 0xe0000000 0x24 0x00ffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab5cffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab0000000001 inb 0x4 writew 0xe000001c 0x1 write 0xe0000014 0x1 0x0d The following error message is produced: qemu-system-x86_64: /home/stefanha/qemu/hw/virtio/virtio.c:286: vring_get_region_caches: Assertion `caches != NULL' failed. The backtrace looks like this: #0 0x00007ffff5520625 in raise () at /lib64/libc.so.6 #1 0x00007ffff55098d9 in abort () at /lib64/libc.so.6 #2 0x00007ffff55097a9 in _nl_load_domain.cold () at /lib64/libc.so.6 #3 0x00007ffff5518a66 in annobin_assert.c_end () at /lib64/libc.so.6 #4 0x00005555559073da in vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:286 #5 vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:283 #6 0x000055555590818d in vring_used_flags_set_bit (mask=1, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398 #7 virtio_queue_split_set_notification (enable=0, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398 #8 virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:451 #9 0x0000555555908512 in virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:444 #10 0x00005555558c697a in virtio_blk_handle_vq (s=0x5555575c57e0, vq=0x5555575ceea0) at qemu/hw/block/virtio-blk.c:775 #11 0x0000555555907836 in virtio_queue_notify_aio_vq (vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:2244 #12 0x0000555555cb5dd7 in aio_dispatch_handlers (ctx=ctx@entry=0x55555671a420) at util/aio-posix.c:429 #13 0x0000555555cb67a8 in aio_dispatch (ctx=0x55555671a420) at util/aio-posix.c:460 #14 0x0000555555cb307e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260 #15 0x00007ffff7bbc510 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0 #16 0x0000555555cb5848 in glib_pollfds_poll () at util/main-loop.c:219 #17 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242 #18 main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518 #19 0x00005555559b20c9 in main_loop () at vl.c:1683 #20 
0x0000555555838115 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4441 Reported-by: Alexander Bulekov <alxndr@bu.edu> Cc: Michael Tsirkin <mst@redhat.com> Cc: Cornelia Huck <cohuck@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: qemu-stable@nongnu.org Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20200207104619.164892-1-stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
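The pattern applied throughout, per the commit text, is roughly (sketch):

    VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
    if (!caches) {
        return;    /* caches are invalid: treat the access as a nop */
    }
    /* reads through an invalid cache return 0, writes are dropped */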
2020-02-25virtio-crypto: do delete ctrl_vq in virtio_crypto_device_unrealizePan Nengyuan1-1/+2
Similar to other virtio devices, ctrl_vq is not deleted in virtio_crypto_device_unrealize; this patch fixes it. This device already maintains vq pointers, so we use the new virtio_delete_queue function directly to do the cleanup. The leak stack: Direct leak of 10752 byte(s) in 3 object(s) allocated from: #0 0x7f4c024b1970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970) #1 0x7f4c018be49d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d) #2 0x55a2f8017279 in virtio_add_queue /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio.c:2333 #3 0x55a2f8057035 in virtio_crypto_device_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio-crypto.c:814 #4 0x55a2f8005d80 in virtio_device_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio.c:3531 #5 0x55a2f8497d1b in device_set_realized /mnt/sdb/qemu-new/qemu_test/qemu/hw/core/qdev.c:891 #6 0x55a2f8b48595 in property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:2238 #7 0x55a2f8b54fad in object_property_set_qobject /mnt/sdb/qemu-new/qemu_test/qemu/qom/qom-qobject.c:26 #8 0x55a2f8b4de2c in object_property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:1390 #9 0x55a2f80609c9 in virtio_crypto_pci_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio-crypto-pci.c:58 Reported-by: Euler Robot <euler.robot@huawei.com> Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com> Cc: "Gonglei (Arei)" <arei.gonglei@huawei.com> Message-Id: <20200225075554.10835-5-pannengyuan@huawei.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-02-25virtio-pmem: do delete rq_vq in virtio_pmem_unrealizePan Nengyuan1-0/+1
Similar to other virtio devices, rq_vq is not deleted in virtio_pmem_unrealize; this patch fixes it. The device already maintains a vq pointer, so we use the new virtio_delete_queue function directly to do the cleanup. Reported-by: Euler Robot <euler.robot@huawei.com> Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com> Message-Id: <20200225075554.10835-4-pannengyuan@huawei.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-02-25vhost-user-fs: convert to the new virtio_delete_queue functionPan Nengyuan1-6/+9
Use the new virtio_delete_queue function for cleanup. Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com> Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com> Cc: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20200225075554.10835-3-pannengyuan@huawei.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
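In spirit, the conversion replaces index-based deletion with pointer-based deletion using the vq pointers the device already keeps (sketch; field names are assumptions):

    /* Before: delete by queue index. */
    virtio_del_queue(vdev, 0);                    /* hiprio queue */

    /* After: delete through the pointers saved at realize time. */
    virtio_delete_queue(fs->hiprio_vq);
    for (i = 0; i < fs->conf.num_request_queues; i++) {
        virtio_delete_queue(fs->req_vqs[i]);
    }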
2020-02-25vhost-user-fs: do delete virtio_queues in unrealizePan Nengyuan1-0/+9
Similar to other virtio devices (https://patchwork.kernel.org/patch/11399237/), the virtio queues are not deleted in unrealize, nor in the error path in realize; this patch fixes these memleaks. The leak stack is as follows: Direct leak of 57344 byte(s) in 1 object(s) allocated from: #0 0x7f15784fb970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970) #1 0x7f157790849d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d) #2 0x55587a1bf859 in virtio_add_queue /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio.c:2333 #3 0x55587a2071d5 in vuf_device_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/vhost-user-fs.c:212 #4 0x55587a1ae360 in virtio_device_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio.c:3531 #5 0x55587a63fb7b in device_set_realized /mnt/sdb/qemu-new/qemu_test/qemu/hw/core/qdev.c:891 #6 0x55587acf03f5 in property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:2238 #7 0x55587acfce0d in object_property_set_qobject /mnt/sdb/qemu-new/qemu_test/qemu/qom/qom-qobject.c:26 #8 0x55587acf5c8c in object_property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:1390 #9 0x55587a8e22a2 in pci_qdev_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/pci/pci.c:2095 #10 0x55587a63fb7b in device_set_realized /mnt/sdb/qemu-new/qemu_test/qemu/hw/core/qdev.c:891 #11 0x55587acf03f5 in property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:2238 #12 0x55587acfce0d in object_property_set_qobject /mnt/sdb/qemu-new/qemu_test/qemu/qom/qom-qobject.c:26 #13 0x55587acf5c8c in object_property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:1390 #14 0x55587a496d65 in qdev_device_add /mnt/sdb/qemu-new/qemu_test/qemu/qdev-monitor.c:679 Reported-by: Euler Robot <euler.robot@huawei.com> Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com> Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com> Cc: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20200225075554.10835-2-pannengyuan@huawei.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-02-20hw/virtio: Let vhost_memory_map() use a boolean 'is_write' argumentPhilippe Mathieu-Daudé1-4/+4
The 'is_write' argument is either 0 or 1. Convert it to a boolean type. Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2020-02-20hw/virtio: Let virtqueue_map_iovec() use a boolean 'is_write' argumentPhilippe Mathieu-Daudé1-3/+4
The 'is_write' argument is either 0 or 1. Convert it to a boolean type. Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2020-01-27Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into stagingPeter Maydell22-23/+23
* Register qdev properties as class properties (Marc-André) * Cleanups (Philippe) * virtio-scsi fix (Pan Nengyuan) * Tweak Skylake-v3 model id (Kashyap) * x86 UCODE_REV support and nested live migration fix (myself) * Advisory mode for pvpanic (Zhenwei) # gpg: Signature made Fri 24 Jan 2020 20:16:23 GMT # gpg: using RSA key BFFBD25F78C7AE83 # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full] # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full] # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83 * remotes/bonzini/tags/for-upstream: (58 commits) build-sys: clean up flags included in the linker command line target/i386: Add the 'model-id' for Skylake -v3 CPU models qdev: use object_property_help() qapi/qmp: add ObjectPropertyInfo.default-value qom: introduce object_property_help() qom: simplify qmp_device_list_properties() vl: print default value in object help qdev: register properties as class properties qdev: move instance properties to class properties qdev: rename DeviceClass.props qdev: set properties with device_class_set_props() object: return self in object_ref() object: release all props object: add object_class_property_add_link() object: express const link with link property object: add direct link flag object: rename link "child" to "target" object: check strong flag with & object: do not free class properties object: add object_property_set_default ... Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-01-24qdev: set properties with device_class_set_props()Marc-André Lureau22-23/+23
The following patch will need to handle properties registration during class_init time. Let's use a device_class_set_props() setter. spatch --macro-file scripts/cocci-macro-file.h --sp-file ./scripts/coccinelle/qdev-set-props.cocci --keep-comments --in-place --dir . @@ typedef DeviceClass; DeviceClass *d; expression val; @@ - d->props = val + device_class_set_props(d, val) Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Message-Id: <20200110153039.1379601-20-marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
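The mechanical change per device looks like this (illustrative 'foo' device, not an actual QEMU device):

    static void foo_class_init(ObjectClass *klass, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(klass);

        /* Before: dc->props = foo_properties; */
        device_class_set_props(dc, foo_properties);
    }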
2020-01-23vhost-user: Print unexpected slave message typesDr. David Alan Gilbert1-1/+1
When we receive an unexpected message type on the slave fd, print the type. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2020-01-23vhost: coding style fixMichael S. Tsirkin1-3/+3
Drop a trailing whitespace. Make line shorter. Fixes: 76525114736e8 ("vhost: Only align sections for vhost-user") Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-01-22vhost: Only align sections for vhost-userDr. David Alan Gilbert1-16/+18
I added hugepage alignment code in c1ece84e7c9 to deal with vhost-user + postcopy which needs aligned pages when using userfault. However, on x86 the lower 2MB of address space tends to be shotgun'd with small fragments around the 512-640k range - e.g. video RAM, and with HyperV synic pages tend to sit around there - again splitting it up. The alignment code complains with a 'Section rounded to ...' error and gives up. Since vhost-user already filters out devices without an fd (see vhost-user.c vhost_user_mem_section_filter) it shouldn't be affected by those overlaps. Turn the alignment off on vhost-kernel so that it doesn't try and align, and thus won't hit the rounding issues. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20200116202414.157959-3-dgilbert@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
2020-01-22vhost: Add names to section rounded warningDr. David Alan Gilbert1-3/+4
Add the memory region names to section rounding/alignment warnings. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20200116202414.157959-2-dgilbert@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-01-22vhost-vsock: delete vqs in vhost_vsock_unrealize to avoid memleaksPan Nengyuan1-2/+10
Receive/transmit/event vqs are not cleaned up in vhost_vsock_unrealize. This patch saves the receive/transmit vq pointers in realize() and cleans up the vqs through those pointers in unrealize(). The leak stack is as follows: Direct leak of 21504 byte(s) in 3 object(s) allocated from: #0 0x7f86a1356970 (/lib64/libasan.so.5+0xef970) ??:? #1 0x7f86a09aa49d (/lib64/libglib-2.0.so.0+0x5249d) ??:? #2 0x5604852f85ca (./x86_64-softmmu/qemu-system-x86_64+0x2c3e5ca) /mnt/sdb/qemu/hw/virtio/virtio.c:2333 #3 0x560485356208 (./x86_64-softmmu/qemu-system-x86_64+0x2c9c208) /mnt/sdb/qemu/hw/virtio/vhost-vsock.c:339 #4 0x560485305a17 (./x86_64-softmmu/qemu-system-x86_64+0x2c4ba17) /mnt/sdb/qemu/hw/virtio/virtio.c:3531 #5 0x5604858e6b65 (./x86_64-softmmu/qemu-system-x86_64+0x322cb65) /mnt/sdb/qemu/hw/core/qdev.c:865 #6 0x5604861e6c41 (./x86_64-softmmu/qemu-system-x86_64+0x3b2cc41) /mnt/sdb/qemu/qom/object.c:2102 Reported-by: Euler Robot <euler.robot@huawei.com> Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com> Message-Id: <20200115062535.50644-1-pannengyuan@huawei.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-01-06virtio: reset region cache on queue deletionYuri Benditovich1-0/+1
https://bugzilla.redhat.com/show_bug.cgi?id=1708480 Fix leak of region reference that prevents complete device deletion on hot unplug. Cc: qemu-stable@nongnu.org Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com> Message-Id: <20191226043649.14481-2-yuri.benditovich@daynix.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-01-06virtio-mmio: update queue size on guest writeDenis Plotnikov1-1/+2
Some guests read back the queue size after writing it. Always update the size on write, otherwise they might be confused. Cc: qemu-stable@nongnu.org Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com> Message-Id: <20191224081446.17003-1-dplotnikov@virtuozzo.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
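A hedged sketch of the fix in the MMIO QueueNum write handler (simplified; surrounding register handling omitted): the written size is pushed into the virtio core immediately so a read-back returns the new value.

    case VIRTIO_MMIO_QUEUE_NUM:
        virtio_queue_set_num(vdev, vdev->queue_sel, value);  /* update core now */
        break;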
2020-01-05vhost-user: add VHOST_USER_RESET_DEVICE to reset devicesRaphael Norwitz1-1/+7
Add a VHOST_USER_RESET_DEVICE message which resets the vhost-user backend: it disables all rings and resets all internal state, leaving the backend ready to be reinitialized. A backend has to report that it supports this feature with the VHOST_USER_PROTOCOL_F_RESET_DEVICE protocol feature bit. If it does so, the new message is used instead of sending a RESET_OWNER, which has had inconsistent implementations. Signed-off-by: David Vrabel <david.vrabel@nutanix.com> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com> Message-Id: <1572385083-5254-2-git-send-email-raphael.norwitz@nutanix.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-01-05virtio-mmio: Clear v2 transport state on soft resetJean-Philippe Brucker1-0/+14
At the moment, when the guest writes a status of 0, we only reset the virtio core state but not the virtio-mmio state. The virtio-mmio specification says (v1.1 cs01, 4.2.2.1 Device Requirements: MMIO Device Register Layout): Upon reset, the device MUST clear all bits in InterruptStatus and ready bits in the QueueReady register for all queues in the device. The core already takes care of InterruptStatus by clearing isr, but we still need to clear QueueReady. It would be tempting to clear all registers, but since the specification doesn't say anything more, guests could rely on the registers keeping their state across reset. Linux, for example, relies on this for GuestPageSize in the legacy MMIO transport. Fixes: 44e687a4d9ab ("virtio-mmio: implement modern (v2) personality (virtio-1)") Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Message-Id: <20191213095410.1516119-1-jean-philippe@linaro.org> Reviewed-by: Sergio Lopez <slp@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
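A sketch of the soft-reset helper implied by the description (struct and field names are assumptions):

    static void virtio_mmio_soft_reset(VirtIOMMIOProxy *proxy)
    {
        int i;

        if (proxy->legacy) {
            return;    /* legacy transport keeps its register state */
        }
        for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
            proxy->vqs[i].enabled = 0;    /* clear QueueReady for every queue */
        }
    }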
2020-01-05virtio: don't enable notifications during pollingStefan Hajnoczi1-6/+6
Virtqueue notifications are not necessary during polling, so we disable them. This allows the guest driver to avoid MMIO vmexits. Unfortunately the virtio-blk and virtio-scsi handler functions re-enable notifications, defeating this optimization. Fix virtio-blk and virtio-scsi emulation so they leave notifications disabled. The key thing to remember for correctness is that polling always checks one last time after ending its loop, therefore it's safe to lose the race when re-enabling notifications at the end of polling. There is a measurable performance improvement of 5-10% with the null-co block driver. Real-life storage configurations will see a smaller improvement because the MMIO vmexit overhead contributes less to latency. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20191209210957.65087-1-stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-01-05virtio-pci: disable vring processing when bus-mastering is disabledMichael Roth2-11/+36
Currently the SLOF firmware for pseries guests will disable/re-enable a PCI device multiple times via IO/MEM/MASTER bits of PCI_COMMAND register after the initial probe/feature negotiation, as it tends to work with a single device at a time at various stages like probing and running block/network bootloaders without doing a full reset in-between. In QEMU, when PCI_COMMAND_MASTER is disabled we disable the corresponding IOMMU memory region, so DMA accesses (including to vring fields like idx/flags) will no longer undergo the necessary translation. Normally we wouldn't expect this to happen since it would be misbehavior on the driver side to continue driving DMA requests. However, in the case of pseries, with iommu_platform=on, we trigger the following sequence when tearing down the virtio-blk dataplane ioeventfd in response to the guest unsetting PCI_COMMAND_MASTER: #2 0x0000555555922651 in virtqueue_map_desc (vdev=vdev@entry=0x555556dbcfb0, p_num_sg=p_num_sg@entry=0x7fffe657e1a8, addr=addr@entry=0x7fffe657e240, iov=iov@entry=0x7fffe6580240, max_num_sg=max_num_sg@entry=1024, is_write=is_write@entry=false, pa=0, sz=0) at /home/mdroth/w/qemu.git/hw/virtio/virtio.c:757 #3 0x0000555555922a89 in virtqueue_pop (vq=vq@entry=0x555556dc8660, sz=sz@entry=184) at /home/mdroth/w/qemu.git/hw/virtio/virtio.c:950 #4 0x00005555558d3eca in virtio_blk_get_request (vq=0x555556dc8660, s=0x555556dbcfb0) at /home/mdroth/w/qemu.git/hw/block/virtio-blk.c:255 #5 0x00005555558d3eca in virtio_blk_handle_vq (s=0x555556dbcfb0, vq=0x555556dc8660) at /home/mdroth/w/qemu.git/hw/block/virtio-blk.c:776 #6 0x000055555591dd66 in virtio_queue_notify_aio_vq (vq=vq@entry=0x555556dc8660) at /home/mdroth/w/qemu.git/hw/virtio/virtio.c:1550 #7 0x000055555591ecef in virtio_queue_notify_aio_vq (vq=0x555556dc8660) at /home/mdroth/w/qemu.git/hw/virtio/virtio.c:1546 #8 0x000055555591ecef in virtio_queue_host_notifier_aio_poll (opaque=0x555556dc86c8) at /home/mdroth/w/qemu.git/hw/virtio/virtio.c:2527 #9 0x0000555555d02164 in run_poll_handlers_once (ctx=ctx@entry=0x55555688bfc0, timeout=timeout@entry=0x7fffe65844a8) at /home/mdroth/w/qemu.git/util/aio-posix.c:520 #10 0x0000555555d02d1b in try_poll_mode (timeout=0x7fffe65844a8, ctx=0x55555688bfc0) at /home/mdroth/w/qemu.git/util/aio-posix.c:607 #11 0x0000555555d02d1b in aio_poll (ctx=ctx@entry=0x55555688bfc0, blocking=blocking@entry=true) at /home/mdroth/w/qemu.git/util/aio-posix.c:639 #12 0x0000555555d0004d in aio_wait_bh_oneshot (ctx=0x55555688bfc0, cb=cb@entry=0x5555558d5130 <virtio_blk_data_plane_stop_bh>, opaque=opaque@entry=0x555556de86f0) at /home/mdroth/w/qemu.git/util/aio-wait.c:71 #13 0x00005555558d59bf in virtio_blk_data_plane_stop (vdev=<optimized out>) at /home/mdroth/w/qemu.git/hw/block/dataplane/virtio-blk.c:288 #14 0x0000555555b906a1 in virtio_bus_stop_ioeventfd (bus=bus@entry=0x555556dbcf38) at /home/mdroth/w/qemu.git/hw/virtio/virtio-bus.c:245 #15 0x0000555555b90dbb in virtio_bus_stop_ioeventfd (bus=bus@entry=0x555556dbcf38) at /home/mdroth/w/qemu.git/hw/virtio/virtio-bus.c:237 #16 0x0000555555b92a8e in virtio_pci_stop_ioeventfd (proxy=0x555556db4e40) at /home/mdroth/w/qemu.git/hw/virtio/virtio-pci.c:292 #17 0x0000555555b92a8e in virtio_write_config (pci_dev=0x555556db4e40, address=<optimized out>, val=1048832, len=<optimized out>) at /home/mdroth/w/qemu.git/hw/virtio/virtio-pci.c:613 I.e. the calling code is only scheduling a one-shot BH for virtio_blk_data_plane_stop_bh, but somehow we end up trying to process an additional virtqueue entry before we get there. 
This is likely due to the following check in virtio_queue_host_notifier_aio_poll: static bool virtio_queue_host_notifier_aio_poll(void *opaque) { EventNotifier *n = opaque; VirtQueue *vq = container_of(n, VirtQueue, host_notifier); bool progress; if (!vq->vring.desc || virtio_queue_empty(vq)) { return false; } progress = virtio_queue_notify_aio_vq(vq); namely the call to virtio_queue_empty(). In this case, since no new requests have actually been issued, shadow_avail_idx == last_avail_idx, so we actually try to access the vring via vring_avail_idx() to get the latest non-shadowed idx: int virtio_queue_empty(VirtQueue *vq) { bool empty; ... if (vq->shadow_avail_idx != vq->last_avail_idx) { return 0; } rcu_read_lock(); empty = vring_avail_idx(vq) == vq->last_avail_idx; rcu_read_unlock(); return empty; but since the IOMMU region has been disabled we get a bogus value (0 usually), which causes virtio_queue_empty() to falsely report that there are entries to be processed, which causes errors such as: "virtio: zero sized buffers are not allowed" or "virtio-blk missing headers" and puts the device in an error state. This patch works around the issue by introducing virtio_set_disabled(), which sets a 'disabled' flag to bypass checks like virtio_queue_empty() when bus-mastering is disabled. Since we'd check this flag at all the same sites as vdev->broken, we replace those checks with an inline function which checks for either vdev->broken or vdev->disabled. The 'disabled' flag is only migrated when set, which should be fairly rare, but to maintain migration compatibility we disable it's use for older machine types. Users requiring the use of the flag in conjunction with older machine types can set it explicitly as a virtio-device option. NOTES: - This leaves some other oddities in play, like the fact that DRIVER_OK also gets unset in response to bus-mastering being disabled, but not restored (however the device seems to continue working) - Similarly, we disable the host notifier via virtio_bus_stop_ioeventfd(), which seems to move the handling out of virtio-blk dataplane and back into the main IO thread, and it ends up staying there till a reset (but otherwise continues working normally) Cc: David Gibson <david@gibson.dropbear.id.au>, Cc: Alexey Kardashevskiy <aik@ozlabs.ru> Cc: "Michael S. Tsirkin" <mst@redhat.com> Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com> Message-Id: <20191120005003.27035-1-mdroth@linux.vnet.ibm.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-01-05virtio: update queue size on guest writeMichael S. Tsirkin1-0/+2
Some guests read back queue size after writing it. Update the size immediately upon write, otherwise they get confused. In particular this is the case for seabios. Reported-by: Roman Kagan <rkagan@virtuozzo.com> Suggested-by: Denis Plotnikov <dplotnikov@virtuozzo.com> Cc: qemu-stable@nongnu.org Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-01-05virtio-balloon: fix memory leak while attach virtio-balloon devicePan Nengyuan1-0/+7
ivq/dvq/svq/free_page_vq are not cleaned up in virtio_balloon_device_unrealize; the memory leak stack is as follows: Direct leak of 14336 byte(s) in 2 object(s) allocated from: #0 0x7f99fd9d8560 in calloc (/usr/lib64/libasan.so.3+0xc7560) #1 0x7f99fcb20015 in g_malloc0 (/usr/lib64/libglib-2.0.so.0+0x50015) #2 0x557d90638437 in virtio_add_queue hw/virtio/virtio.c:2327 #3 0x557d9064401d in virtio_balloon_device_realize hw/virtio/virtio-balloon.c:793 #4 0x557d906356f7 in virtio_device_realize hw/virtio/virtio.c:3504 #5 0x557d9073f081 in device_set_realized hw/core/qdev.c:876 #6 0x557d908b1f4d in property_set_bool qom/object.c:2080 #7 0x557d908b655e in object_property_set_qobject qom/qom-qobject.c:26 Reported-by: Euler Robot <euler.robot@huawei.com> Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com> Message-Id: <1575444716-17632-2-git-send-email-pannengyuan@huawei.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com>
2020-01-05virtio: make virtio_delete_queue idempotentMichael S. Tsirkin1-0/+1
Let's make sure calling this twice is harmless - no known instances, but seems safer. Suggested-by: Pan Nengyuan <pannengyuan@huawei.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-01-05virtio: add ability to delete vq through a pointerMichael S. Tsirkin1-5/+10
Devices tend to maintain vq pointers; allow deleting a vq through such a pointer. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com>
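The new helper complements the existing index-based virtio_del_queue(); a sketch of its use (handler name is illustrative):

    VirtQueue *vq = virtio_add_queue(vdev, 128, my_handle_output);
    /* ... later, in unrealize() ... */
    virtio_delete_queue(vq);    /* delete via the pointer the device kept */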
2019-12-17configure: simplify vhost condition with KconfigMarc-André Lureau2-2/+5
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-12-13virtio-fs: fix MSI-X nvectors calculationStefan Hajnoczi1-1/+2
The following MSI-X vectors are required: * VIRTIO Configuration Change * hiprio virtqueue * requests virtqueues Fix the calculation to reserve enough MSI-X vectors. Otherwise guest drivers fall back to a sub-optimal configuration where all virtqueues share a single vector. This change does not break live migration compatibility since vhost-user-fs-pci devices are not migratable yet. Reported-by: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20191209110759.35227-1-stefanha@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
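The corrected count, in spirit (sketch; the actual field names may differ):

    /* one vector for config changes, one for hiprio, one per request queue */
    nvectors = 1 + 1 + conf->num_request_queues;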
2019-12-13vhost-user-fs: remove "vhostfd" propertyMarc-André Lureau1-1/+0
The property doesn't make much sense for a vhost-user device. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Message-Id: <20191116112016.14872-1-marcandre.lureau@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-11-06virtio: notify virtqueue via host notifier when availableStefan Hajnoczi2-1/+12
Host notifiers are used in several cases: 1. Traditional ioeventfd where virtqueue notifications are handled in the main loop thread. 2. IOThreads (aio_handle_output) where virtqueue notifications are handled in an IOThread AioContext. 3. vhost where virtqueue notifications are handled by kernel vhost or a vhost-user device backend. Most virtqueue notifications from the guest use the ioeventfd mechanism, but there are corner cases where QEMU code calls virtio_queue_notify(). This currently honors the host notifier for the IOThreads aio_handle_output case, but not for the vhost case. The result is that vhost does not receive virtqueue notifications from QEMU when virtio_queue_notify() is called. This patch extends virtio_queue_notify() to set the host notifier whenever it is enabled instead of calling the vq->(aio_)handle_output() function directly. We track the host notifier state for each virtqueue separately since some devices may use it only for certain virtqueues. This fixes the vhost case although it does add a trip through the eventfd for the traditional ioeventfd case. I don't think it's worth adding a fast path for the traditional ioeventfd case because calling virtio_queue_notify() is rare when ioeventfd is enabled. Reported-by: Felipe Franciosi <felipe@nutanix.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20191105140946.165584-1-stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
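A hedged sketch of the resulting dispatch in virtio_queue_notify() (simplified; the per-queue flag name is an assumption):

    void virtio_queue_notify(VirtIODevice *vdev, int n)
    {
        VirtQueue *vq = &vdev->vq[n];

        if (vq->host_notifier_enabled) {
            /* Kick through the eventfd so ioeventfd/vhost consumers see it. */
            event_notifier_set(&vq->host_notifier);
        } else if (vq->handle_output) {
            vq->handle_output(vdev, vq);
        }
    }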
2019-10-29virtio: Use auto rcu_read macrosDr. David Alan Gilbert1-42/+23
Use RCU_READ_LOCK_GUARD and WITH_RCU_READ_LOCK_GUARD to replace the manual rcu_read_(un)lock calls. I think the only change is virtio_load which was missing unlocks in error paths; those end up being fatal errors so it's not that important anyway. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20191028161109.60205-1-dgilbert@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
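The transformation in the small (sketch):

    /* Before */
    rcu_read_lock();
    do_something(vq);
    rcu_read_unlock();

    /* After: the guard drops the RCU read lock automatically at end of scope,
     * including on early returns in error paths. */
    RCU_READ_LOCK_GUARD();
    do_something(vq);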
2019-10-29virtio/vhost: Use auto_rcu_read macrosDr. David Alan Gilbert1-3/+1
Use RCU_READ_LOCK_GUARD instead of manual rcu_read_(un)lock Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20191025103403.120616-2-dgilbert@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2019-10-29Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into ↵Peter Maydell1-0/+7
staging # gpg: Signature made Tue 29 Oct 2019 02:33:36 GMT # gpg: using RSA key EF04965B398D6211 # gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>" [marginal] # gpg: WARNING: This key is not certified with sufficiently trusted signatures! # gpg: It is not certain that the signature belongs to the owner. # Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211 * remotes/jasowang/tags/net-pull-request: COLO-compare: Fix incorrect `if` logic virtio-net: prevent offloads reset on migration virtio: new post_load hook net: add tulip (dec21143) driver Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-29virtio: new post_load hookMichael S. Tsirkin1-0/+7
Post load hook in virtio vmsd is called early while device is processed, and when VirtIODevice core isn't fully initialized. Most device specific code isn't ready to deal with a device in such state, and behaves weirdly. Add a new post_load hook in a device class instead. Devices should use this unless they specifically want to verify the migration stream as it's processed, e.g. for bounds checking. Cc: qemu-stable@nongnu.org Suggested-by: "Dr. David Alan Gilbert" <dgilbert@redhat.com> Cc: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
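A sketch of how a device would use the new class hook (illustrative 'foo' device; the exact hook signature is an assumption):

    static int foo_post_load_device(VirtIODevice *vdev)
    {
        /* Runs after the whole VirtIODevice state has been loaded, so core
         * fields such as features and queues are already valid here. */
        return 0;
    }

    static void foo_class_init(ObjectClass *klass, void *data)
    {
        VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);

        vdc->post_load = foo_post_load_device;
    }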
2019-10-28Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into stagingPeter Maydell2-92/+977
virtio: features, tests libqos update with support for virtio 1. Packed ring support for virtio. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> # gpg: Signature made Fri 25 Oct 2019 12:47:59 BST # gpg: using RSA key 281F0DB8D28D5469 # gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>" [full] # gpg: aka "Michael S. Tsirkin <mst@redhat.com>" [full] # Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67 # Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469 * remotes/mst/tags/for_upstream: (25 commits) virtio: drop unused virtio_device_stop_ioeventfd() function libqos: add VIRTIO PCI 1.0 support libqos: extract Legacy virtio-pci.c code libqos: make the virtio-pci BAR index configurable libqos: expose common virtqueue setup/cleanup functions libqos: add MSI-X callbacks to QVirtioPCIDevice libqos: pass full QVirtQueue to set_queue_address() libqos: add iteration support to qpci_find_capability() libqos: access VIRTIO 1.0 vring in little-endian libqos: implement VIRTIO 1.0 FEATURES_OK step libqos: enforce Device Initialization order libqos: add missing virtio-9p feature negotiation tests/virtio-blk-test: set up virtqueue after feature negotiation virtio-scsi-test: add missing feature negotiation libqos: extend feature bits to 64-bit libqos: read QVIRTIO_MMIO_VERSION register tests/virtio-blk-test: read config space after feature negotiation virtio: add property to enable packed virtqueue vhost_net: enable packed ring support virtio: event suppression support for packed ring ... Signed-off-by: Peter Maydell <peter.maydell@linaro.org>