Commit log for path: root/include (newest first)
2024-01-05  util/oslib: Have qemu_prealloc_mem() handler return a boolean  (Philippe Mathieu-Daudé; 1 file, -1/+3)
Following the example documented since commit e3fe3988d7 ("error: Document Error API usage rules"), have qemu_prealloc_mem() return a boolean indicating whether an error is set or not. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Message-Id: <20231120213301.24349-19-philmd@linaro.org>
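The convention this series adopts, documented in qapi/error.h: on failure the function sets *errp and returns false; on success it returns true, so callers can test the return value directly instead of inspecting a local Error *. A minimal compilable sketch of the pattern (hypothetical function, not the QEMU prototype):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct Error Error;   /* stand-in for QEMU's opaque Error type */

/* On failure: set *errp (error_setg() in real QEMU code) and return
 * false. On success: return true. */
static bool prealloc_mem(size_t size, Error **errp)
{
    if (size == 0) {
        /* real code would do: error_setg(errp, "size must be non-zero"); */
        return false;
    }
    return true;
}

int main(void)
{
    Error *err = NULL;
    if (!prealloc_mem(0, &err)) {    /* test the bool, not err */
        fprintf(stderr, "preallocation failed\n");
        return 1;
    }
    return 0;
}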
2024-01-05  backends: Have HostMemoryBackendClass::alloc() handler return a boolean  (Philippe Mathieu-Daudé; 1 file, -1/+9)
Following the example documented since commit e3fe3988d7 ("error: Document Error API usage rules"), have HostMemoryBackendClass::alloc return a boolean indicating whether an error is set or not. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org> Reviewed-by: Gavin Shan <gshan@redhat.com> Message-Id: <20231120213301.24349-17-philmd@linaro.org>
2024-01-05  memory: Have memory_region_init_ram_from_fd() handler return a boolean  (Philippe Mathieu-Daudé; 1 file, -1/+3)
Following the example documented since commit e3fe3988d7 ("error: Document Error API usage rules"), have memory_region_init_ram_from_fd return a boolean indicating whether an error is set or not. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Message-Id: <20231120213301.24349-14-philmd@linaro.org>
2024-01-05  memory: Have memory_region_init_ram_from_file() handler return a boolean  (Philippe Mathieu-Daudé; 1 file, -1/+3)
Following the example documented since commit e3fe3988d7 ("error: Document Error API usage rules"), have memory_region_init_ram_from_file return a boolean indicating whether an error is set or not. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Message-Id: <20231120213301.24349-13-philmd@linaro.org>
2024-01-05  memory: Have memory_region_init_resizeable_ram() return a boolean  (Philippe Mathieu-Daudé; 1 file, -1/+3)
Following the example documented since commit e3fe3988d7 ("error: Document Error API usage rules"), have memory_region_init_resizeable_ram return a boolean indicating whether an error is set or not. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Message-Id: <20231120213301.24349-12-philmd@linaro.org>
2024-01-05  memory: Have memory_region_init_rom_device() handler return a boolean  (Philippe Mathieu-Daudé; 1 file, -1/+3)
Following the example documented since commit e3fe3988d7 ("error: Document Error API usage rules"), have memory_region_init_rom_device return a boolean indicating whether an error is set or not. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Message-Id: <20231120213301.24349-11-philmd@linaro.org>
2024-01-05  memory: Have memory_region_init_rom_device_nomigrate() return a boolean  (Philippe Mathieu-Daudé; 1 file, -1/+3)
Following the example documented since commit e3fe3988d7 ("error: Document Error API usage rules"), have memory_region_init_rom_device_nomigrate() return a boolean indicating whether an error is set or not. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Message-Id: <20231120213301.24349-9-philmd@linaro.org>
2024-01-05  memory: Have memory_region_init_rom() handler return a boolean  (Philippe Mathieu-Daudé; 1 file, -1/+3)
Following the example documented since commit e3fe3988d7 ("error: Document Error API usage rules"), have memory_region_init_rom() return a boolean indicating whether an error is set or not. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org> Reviewed-by: Gavin Shan <gshan@redhat.com> Message-Id: <20231120213301.24349-8-philmd@linaro.org>
2024-01-05  memory: Have memory_region_init_ram() handler return a boolean  (Philippe Mathieu-Daudé; 1 file, -1/+3)
Following the example documented since commit e3fe3988d7 ("error: Document Error API usage rules"), have memory_region_init_ram() return a boolean indicating whether an error is set or not. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Message-Id: <20231120213301.24349-7-philmd@linaro.org>
2024-01-05  memory: Have memory_region_init_rom_nomigrate() handler return a boolean  (Philippe Mathieu-Daudé; 1 file, -1/+3)
Following the example documented since commit e3fe3988d7 ("error: Document Error API usage rules"), have memory_region_init_rom_nomigrate return a boolean indicating whether an error is set or not. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Message-Id: <20231120213301.24349-4-philmd@linaro.org> [PMD: Only update 'readonly' field on success (Manos Pitsidianakis)] Message-Id: <af352e7d-3346-4705-be77-6eed86858d18@linaro.org>
2024-01-05  memory: Have memory_region_init_ram_nomigrate() handler return a boolean  (Philippe Mathieu-Daudé; 1 file, -1/+3)
Following the example documented since commit e3fe3988d7 ("error: Document Error API usage rules"), have memory_region_init_ram_nomigrate return a boolean indicating whether an error is set or not. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Message-Id: <20231120213301.24349-3-philmd@linaro.org>
2024-01-05  memory: Have memory_region_init_ram_flags_nomigrate() return a boolean  (Philippe Mathieu-Daudé; 1 file, -1/+3)
Following the example documented since commit e3fe3988d7 ("error: Document Error API usage rules"), have memory_region_init_ram_flags_nomigrate() return a boolean indicating whether an error is set or not. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Message-Id: <20231120213301.24349-2-philmd@linaro.org>
2024-01-05  hw/mips: Inline 'bios.h' definitions  (Philippe Mathieu-Daudé; 1 file, -14/+0)
There is no universal BIOS; each machine needs a specific one. Move the machine-specific definitions to each machine's code and remove this bogus header. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20231122184334.18201-1-philmd@linaro.org>
2024-01-05  hw/ppc/xive2_regs: Remove unnecessary 'cpu.h' inclusion  (Philippe Mathieu-Daudé; 1 file, -1/+1)
xive2_regs.h only requires declarations from "qemu/bswap.h". Include it instead of the huge target-specific "cpu.h". Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Cédric Le Goater <clg@redhat.com> Message-Id: <20231122183920.17905-1-philmd@linaro.org>
2024-01-05  hw/core/cpu: Update description of CPUState::node  (Philippe Mathieu-Daudé; 1 file, -1/+1)
'next_cpu' was converted to 'node' in commit bdc44640cb ("cpu: Use QTAILQ for CPU list"). Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20231129183243.15859-1-philmd@linaro.org>
2024-01-05  hw/core/cpu: Remove final vestiges of dynamic state tracing  (Philippe Mathieu-Daudé; 1 file, -3/+0)
The dynamic state tracing was removed in commit d0aaf08bb9. Fixes: d0aaf08bb9 ("tcg: remove the final vestiges of dstate") Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20231129182734.15565-1-philmd@linaro.org>
2024-01-05  hw/core: Add machine_class_default_cpu_type()  (Philippe Mathieu-Daudé; 1 file, -0/+6)
Add a helper to return a machine's default CPU type. If the machine is restricted to a single CPU type, use that type as the default. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20231116163726.28952-1-philmd@linaro.org>
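A sketch of the described logic, using a simplified stand-in for MachineClass (the field names valid_cpu_types, a NULL-terminated list, and default_cpu_type are assumed from QEMU's machine API):

typedef struct MachineClass {
    const char *default_cpu_type;
    const char **valid_cpu_types;   /* NULL-terminated, or NULL */
} MachineClass;

static const char *machine_class_default_cpu_type(const MachineClass *mc)
{
    if (mc->valid_cpu_types && !mc->valid_cpu_types[1]) {
        /* Only one CPU type is allowed, so it must be the default. */
        return mc->valid_cpu_types[0];
    }
    return mc->default_cpu_type;
}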
2024-01-05  cpu: Add helper cpu_model_from_type()  (Gavin Shan; 1 file, -0/+13)
Add helper cpu_model_from_type() to extract the CPU model name from the CPU type name in two circumstances: (1) The CPU type name is the combination of the CPU model name and a suffix. (2) The CPU type name is the same as the CPU model name. The helper will be used in subsequent commits to convert the CPU type name to the CPU model name. Suggested-by: Igor Mammedov <imammedo@redhat.com> Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20231114235628.534334-6-gshan@redhat.com> [PMD: Mention returned string must be released with g_free()] Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
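A sketch of the extraction with glib, passing the architecture suffix explicitly (the real helper derives it from the target; the function name here is illustrative):

#include <glib.h>
#include <string.h>

/* Returns the CPU model name; release the result with g_free(). */
static char *cpu_model_from_type_sketch(const char *type, const char *suffix)
{
    if (g_str_has_suffix(type, suffix)) {
        /* case (1): type name == model name + suffix */
        return g_strndup(type, strlen(type) - strlen(suffix));
    }
    /* case (2): type name == model name */
    return g_strdup(type);
}

For example, cpu_model_from_type_sketch("cortex-a57-arm-cpu", "-arm-cpu") returns "cortex-a57".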
2023-12-30  hw/pci: Constify VMState  (Richard Henderson; 1 file, -1/+1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20231221031652.119827-45-richard.henderson@linaro.org>
2023-12-29  migration: Make VMStateDescription.subsections const  (Richard Henderson; 1 file, -1/+1)
Allow the array of pointers itself to be const. Propagate this through the copies of this field. Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20231221031652.119827-2-richard.henderson@linaro.org>
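After the change both the array and the descriptions it points to can live in read-only storage; a sketch of the resulting field shape (the authoritative declaration is in include/migration/vmstate.h):

typedef struct VMStateDescription VMStateDescription;

struct VMStateDescription {
    /* ... other fields elided ... */
    /* const array of pointers to const descriptions */
    const VMStateDescription * const *subsections;
};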
2023-12-26  Merge tag 'for_upstream' of https://git.kernel.org/pub/scm/virt/kvm/mst/qemu into staging  (Stefan Hajnoczi; 3 files, -13/+38)
virtio,pc,pci: features, cleanups, fixes
vhost-scsi support for worker ioctls
fixes, cleanups all over the place.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# -----BEGIN PGP SIGNATURE-----
#
# iQFDBAABCAAtFiEEXQn9CHHI+FuUyooNKB8NuNKNVGkFAmWKohIPHG1zdEByZWRo
# YXQuY29tAAoJECgfDbjSjVRpG2YH/1rJGV8TQm4V8kcGP9wOknPAMFADnEFdFmrB
# V+JEDnyKrdcEZLPRh0b846peWRJhC13iL7Ks3VNjeVsfE9TyzNyNDpUzCJPfYFjR
# 3m8ChLDvE9tKBA5/hXMIcgDXaYcPIrPvHyl4HG8EQn7oaeMpS2uecKqDpDDvNXGq
# oNamNvqimFSqA+3ChzA+0Qt07Ts7xFEw4OEXSwfRXlsam/dhQG0SI+crRheHuvFb
# HR8EwmNydA1D/M51AuBNuvX36u3SnPWm7Anp5711SZ1b59unshI0ztIqIJnGkvYe
# qpUJSmxR6ulwWe4nQfb+GhBsuJ2j2ORC7YfXyAT7mw8rds8loaI=
# =cNy2
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 26 Dec 2023 04:51:14 EST
# gpg: using RSA key 5D09FD0871C8F85B94CA8A0D281F0DB8D28D5469
# gpg: issuer "mst@redhat.com"
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>" [full]
# gpg:                 aka "Michael S. Tsirkin <mst@redhat.com>" [full]
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
#      Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* tag 'for_upstream' of https://git.kernel.org/pub/scm/virt/kvm/mst/qemu: (21 commits)
  vdpa: move memory listener to vhost_vdpa_shared
  vdpa: use dev_shared in vdpa_iommu
  vdpa: use VhostVDPAShared in vdpa_dma_map and unmap
  vdpa: move iommu_list to vhost_vdpa_shared
  vdpa: remove msg type of vhost_vdpa
  vdpa: move backend_cap to vhost_vdpa_shared
  vdpa: move iotlb_batch_begin_sent to vhost_vdpa_shared
  vdpa: move file descriptor to vhost_vdpa_shared
  vdpa: use vdpa shared for tracing
  vdpa: move shadow_data to vhost_vdpa_shared
  vdpa: move iova_range to vhost_vdpa_shared
  vdpa: move iova tree to the shared struct
  vdpa: add VhostVDPAShared
  vdpa: do not set virtio status bits if unneeded
  Fix bugs when VM shutdown with virtio-gpu unplugged
  vhost-scsi: fix usage of error_reportf_err()
  hw/acpi: propagate vcpu hotplug after switch to modern interface
  vhost-scsi: Add support for a worker thread per virtqueue
  vhost: Add worker backend callouts
  tests: bios-tables-test: Rename smbios type 4 related test functions
  ...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-12-26  vdpa: move memory listener to vhost_vdpa_shared  (Eugenio Pérez; 1 file, -1/+1)
Next patches will register the vhost_vdpa memory listener while the VM is migrating at the destination, so we can map the memory to the device before stopping the VM at the source. The main goal is to reduce the downtime. However, the destination QEMU is unaware of which vhost_vdpa device will register its memory_listener. If the source guest has CVQ enabled, it will be the CVQ device. Otherwise, it will be the first one. Move the memory listener to a common place rather than always in the first / last vhost_vdpa. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20231221174322.3130442-14-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-12-26  vdpa: use dev_shared in vdpa_iommu  (Eugenio Pérez; 1 file, -1/+1)
The memory listener functions can call these too. Make vdpa_iommu work with VhostVDPAShared. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20231221174322.3130442-13-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-12-26  vdpa: use VhostVDPAShared in vdpa_dma_map and unmap  (Eugenio Pérez; 1 file, -2/+2)
The callers only have the shared information by the end of this series. Start converting these functions. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20231221174322.3130442-12-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-12-26  vdpa: move iommu_list to vhost_vdpa_shared  (Eugenio Pérez; 1 file, -1/+1)
Next patches will register the vhost_vdpa memory listener while the VM is migrating at the destination, so we can map the memory to the device before stopping the VM at the source. The main goal is to reduce the downtime. However, the destination QEMU is unaware of which vhost_vdpa device will register its memory_listener. If the source guest has CVQ enabled, it will be the CVQ device. Otherwise, it will be the first one. Move the iommu_list member to VhostVDPAShared so all vhost_vdpa can use it, rather than always in the first / last vhost_vdpa. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20231221174322.3130442-11-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-12-26  vdpa: remove msg type of vhost_vdpa  (Eugenio Pérez; 1 file, -1/+0)
It is always VHOST_IOTLB_MSG_V2. We can always bring it back per vhost_dev if needed. This change makes it easier for vhost_vdpa_map and unmap to depend only on VhostVDPAShared instead of on vhost_vdpa. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20231221174322.3130442-10-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-12-26  vdpa: move backend_cap to vhost_vdpa_shared  (Eugenio Pérez; 1 file, -0/+3)
Next patches will register the vhost_vdpa memory listener while the VM is migrating at the destination, so we can map the memory to the device before stopping the VM at the source. The main goal is to reduce the downtime. However, the destination QEMU is unaware of which vhost_vdpa device will register its memory_listener. If the source guest has CVQ enabled, it will be the CVQ device. Otherwise, it will be the first one. Move the backend_cap member to VhostVDPAShared so all vhost_vdpa can use it, rather than always in the first / last vhost_vdpa. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20231221174322.3130442-9-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-12-26  vdpa: move iotlb_batch_begin_sent to vhost_vdpa_shared  (Eugenio Pérez; 1 file, -1/+2)
Next patches will register the vhost_vdpa memory listener while the VM is migrating at the destination, so we can map the memory to the device before stopping the VM at the source. The main goal is to reduce the downtime. However, the destination QEMU is unaware of which vhost_vdpa device will register its memory_listener. If the source guest has CVQ enabled, it will be the CVQ device. Otherwise, it will be the first one. Move the iotlb_batch_begin_sent member to VhostVDPAShared so all vhost_vdpa can use it, rather than always in the first / last vhost_vdpa. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20231221174322.3130442-8-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-12-26  vdpa: move file descriptor to vhost_vdpa_shared  (Eugenio Pérez; 1 file, -1/+1)
Next patches will register the vhost_vdpa memory listener while the VM is migrating at the destination, so we can map the memory to the device before stopping the VM at the source. The main goal is to reduce the downtime. However, the destination QEMU is unaware of which vhost_vdpa device will register its memory_listener. If the source guest has CVQ enabled, it will be the CVQ device. Otherwise, it will be the first one. Move the file descriptor to VhostVDPAShared so all vhost_vdpa can use it, rather than always in the first / last vhost_vdpa. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20231221174322.3130442-7-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-12-26  vdpa: move shadow_data to vhost_vdpa_shared  (Eugenio Pérez; 1 file, -2/+3)
Next patches will register the vhost_vdpa memory listener while the VM is migrating at the destination, so we can map the memory to the device before stopping the VM at the source. The main goal is to reduce the downtime. However, the destination QEMU is unaware of which vhost_vdpa device will register its memory_listener. If the source guest has CVQ enabled, it will be the CVQ device. Otherwise, it will be the first one. Move the shadow_data member to VhostVDPAShared so all vhost_vdpa can use it, rather than always in the first or last vhost_vdpa. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20231221174322.3130442-5-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-12-26  vdpa: move iova_range to vhost_vdpa_shared  (Eugenio Pérez; 1 file, -1/+2)
Next patches will register the vhost_vdpa memory listener while the VM is migrating at the destination, so we can map the memory to the device before stopping the VM at the source. The main goal is to reduce the downtime. However, the destination QEMU is unaware of which vhost_vdpa device will register its memory_listener. If the source guest has CVQ enabled, it will be the CVQ device. Otherwise, it will be the first one. Move the iova range to VhostVDPAShared so all vhost_vdpa can use it, rather than always in the first or last vhost_vdpa. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20231221174322.3130442-4-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-12-26  vdpa: move iova tree to the shared struct  (Eugenio Pérez; 1 file, -2/+2)
Next patches will register the vhost_vdpa memory listener while the VM is migrating at the destination, so we can map the memory to the device before stopping the VM at the source. The main goal is to reduce the downtime. However, the destination QEMU is unaware of which vhost_vdpa device will register its memory_listener. If the source guest has CVQ enabled, it will be the CVQ device. Otherwise, it will be the first one. Move the iova tree to VhostVDPAShared so all vhost_vdpa can use it, rather than always in the first or last vhost_vdpa. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20231221174322.3130442-3-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-12-26  vdpa: add VhostVDPAShared  (Eugenio Pérez; 1 file, -0/+5)
It will hold properties shared among all vhost_vdpa instances associated with the same device. For example, we just need one iova_tree or one memory listener for the entire device. Next patches will register the vhost_vdpa memory listener at the beginning of the VM migration at the destination. This enables QEMU to map the memory to the device before stopping the VM at the source, instead of doing it while both source and destination are stopped, thus minimizing the downtime. However, the destination QEMU is unaware of which vhost_vdpa struct will register its memory_listener. If the source guest has CVQ enabled, it will be the one associated with the CVQ. Otherwise, it will be the first one. Save the memory-operation-related members in a common place rather than always in the first / last vhost_vdpa. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20231221174322.3130442-2-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
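Reconstructed from the subjects of this series, the shared struct ends up collecting roughly these members (a sketch, not the verbatim header; the authoritative definition lives with the vhost-vdpa code):

/* One instance shared by every vhost_vdpa of the same device. */
typedef struct VhostVDPAShared {
    int device_fd;                   /* moved from struct vhost_vdpa */
    MemoryListener listener;         /* registered once per device */
    struct vhost_vdpa_iova_range iova_range;
    VhostIOVATree *iova_tree;        /* single IOVA tree per device */
    QLIST_HEAD(, vdpa_iommu) iommu_list;
    uint64_t backend_cap;
    bool iotlb_batch_begin_sent;
    bool shadow_data;                /* shadow virtqueue data path */
} VhostVDPAShared;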
2023-12-25  vhost-scsi: Add support for a worker thread per virtqueue  (Mike Christie; 1 file, -0/+1)
This adds support for vhost-scsi to be able to create a worker thread per virtqueue. Right now for vhost-net we get a worker thread per tx/rx virtqueue pair, which scales nicely as we add more virtqueues and CPUs, but for scsi we get a single worker thread that's shared by all virtqueues. When trying to send IO to more than 2 virtqueues, the single thread becomes a bottleneck. This patch adds a new setting, worker_per_virtqueue, which can be set to: false: Existing behavior where we get the single worker thread. true: Create a worker per IO virtqueue. Signed-off-by: Mike Christie <michael.christie@oracle.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20231204231618.21962-3-michael.christie@oracle.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
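Usage would look something like this (the WWPN value is hypothetical):

-device vhost-scsi-pci,wwpn=naa.5001405324af0985,worker_per_virtqueue=true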
2023-12-25  vhost: Add worker backend callouts  (Mike Christie; 1 file, -0/+14)
This adds the vhost backend callouts for the worker ioctls added in the 6.4 linux kernel commit: c1ecd8e95007 ("vhost: allow userspace to create workers") Signed-off-by: Mike Christie <michael.christie@oracle.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20231204231618.21962-2-michael.christie@oracle.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
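The new members are function pointers in the vhost backend ops wrapping those ioctls (VHOST_NEW_WORKER, VHOST_FREE_WORKER, VHOST_ATTACH_VRING_WORKER, VHOST_GET_VRING_WORKER); a sketch of the shape, with the exact QEMU member names and signatures treated as assumptions:

struct vhost_dev;
struct vhost_worker_state;
struct vhost_vring_worker;

typedef struct VhostOps {
    /* ... existing backend callouts ... */
    int (*vhost_new_worker)(struct vhost_dev *dev,
                            struct vhost_worker_state *worker);
    int (*vhost_free_worker)(struct vhost_dev *dev,
                             struct vhost_worker_state *worker);
    int (*vhost_attach_vring_worker)(struct vhost_dev *dev,
                                     struct vhost_vring_worker *worker);
    int (*vhost_get_vring_worker)(struct vhost_dev *dev,
                                  struct vhost_vring_worker *worker);
} VhostOps;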
2023-12-25  include/ui/rect.h: fix qemu_rect_init() mis-assignment  (Elen Avan; 1 file, -1/+1)
Signed-off-by: Elen Avan <elen.avan@bk.ru> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2051 Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2050 Fixes: a200d53b1fde "virtio-gpu: replace PIXMAN for region/rect test" Cc: qemu-stable@nongnu.org Reviewed-by: Michael Tokarev <mjt@tls.msk.ru> Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-12-21  Merge tag 'pull-loongarch-20231221' of https://gitlab.com/gaosong/qemu into staging  (Stefan Hajnoczi; 1 file, -1/+1)
pull-loongarch-20231221
# -----BEGIN PGP SIGNATURE-----
#
# iLMEAAEKAB0WIQS4/x2g0v3LLaCcbCxAov/yOSY+3wUCZYPyvQAKCRBAov/yOSY+
# 38/vBADT3b+Wo/2AeXOO3OXOM1VBhIvzDjY1OWytuJpkF3JGW45cMLqgtIgMj8h7
# NtzRS3JbFYbYuxITeeo1Ppl6dAD0pCZjIU6OCBxAJ6ADPsE/xD8nYWrMGqYVXg7E
# hN0Cno2sf6dmJ0QxUxn7G+cUuvNtnGaDSZE+RAkjtzq1nvx7CQ==
# =mte5
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 21 Dec 2023 03:09:33 EST
# gpg: using RSA key B8FF1DA0D2FDCB2DA09C6C2C40A2FFF239263EDF
# gpg: Good signature from "Song Gao <m17746591750@163.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B8FF 1DA0 D2FD CB2D A09C 6C2C 40A2 FFF2 3926 3EDF
* tag 'pull-loongarch-20231221' of https://gitlab.com/gaosong/qemu:
  target/loongarch: Add timer information dump support
  hw/loongarch/virt: Align high memory base address with super page size
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-12-21  virtio-blk: add iothread-vq-mapping parameter  (Stefan Hajnoczi; 1 file, -0/+2)
Add the iothread-vq-mapping parameter to assign virtqueues to IOThreads. Store the vq:AioContext mapping in the new struct VirtIOBlockDataPlane->vq_aio_context[] field and refactor the code to use the per-vq AioContext instead of the BlockDriverState's AioContext. Reimplement --device virtio-blk-pci,iothread= and non-IOThread mode by assigning all virtqueues to the IOThread and main loop's AioContext in vq_aio_context[], respectively. The comment in struct VirtIOBlockDataPlane about EventNotifiers is stale. Remove it. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20231220134755.814917-5-stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
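A usage sketch for this device, mirroring the property syntax introduced by the qdev patch below (drive and IOThread IDs are hypothetical):

--device '{"driver":"virtio-blk-pci","drive":"drive0","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,2]},{"iothread":"iothread1","vqs":[1,3]}]}'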
2023-12-21  qdev: add IOThreadVirtQueueMappingList property type  (Stefan Hajnoczi; 1 file, -0/+5)
virtio-blk and virtio-scsi devices will need a way to specify the mapping between IOThreads and virtqueues. At the moment all virtqueues are assigned to a single IOThread or the main loop. This single thread can be a CPU bottleneck, so it is necessary to allow finer-grained assignment to spread the load. Introduce DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST() so devices can take a parameter that maps virtqueues to IOThreads. The command-line syntax for this new property is as follows: --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]}' IOThreads are specified by name and virtqueues are specified by 0-based index. It will be common to simply assign virtqueues round-robin across a set of IOThreads. A convenient syntax that does not require specifying individual virtqueue indices is available: --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]}' Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20231220134755.814917-4-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-12-21  qdev-properties: alias all object class properties  (Stefan Hajnoczi; 1 file, -2/+2)
qdev_alias_all_properties() aliases a DeviceState's qdev properties onto an Object. This is used for VirtioPCIProxy types so that --device virtio-blk-pci has properties of its embedded --device virtio-blk-device object. Currently this function is implemented using qdev properties. Change the function to use QOM object class properties instead. This works because qdev properties create QOM object class properties, but it also catches any QOM object class-only properties that have no qdev properties. This change ensures that properties of devices are shown with --device foo,\? even if they are QOM object class properties. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20231220134755.814917-2-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-12-21  string-output-visitor: show structs as "<omitted>"  (Stefan Hajnoczi; 1 file, -3/+3)
StringOutputVisitor crashes when it visits a struct because ->start_struct() is NULL. Show "<omitted>" instead of crashing. This is necessary because the virtio-blk-pci iothread-vq-mapping parameter that I'd like to introduce soon is a list of IOThreadMapping structs. This patch is a quick fix to solve the crash, but the long-term solution is replacing StringOutputVisitor with something that can handle the full gamut of values in QEMU. Cc: Markus Armbruster <armbru@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20231212134934.500289-1-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-12-21  block: remove outdated AioContext locking comments  (Stefan Hajnoczi; 3 files, -10/+4)
The AioContext lock no longer exists. There is one noteworthy change:

- * More specifically, these functions use BDRV_POLL_WHILE(bs), which
- * requires the caller to be either in the main thread and hold
- * the BlockdriverState (bs) AioContext lock, or directly in the
- * home thread that runs the bs AioContext. Calling them from
- * another thread in another AioContext would cause deadlocks.
+ * More specifically, these functions use BDRV_POLL_WHILE(bs), which requires
+ * the caller to be either in the main thread or directly in the home thread
+ * that runs the bs AioContext. Calling them from another thread in another
+ * AioContext would cause deadlocks.

I am not sure whether deadlocks are still possible. Maybe they have just moved to the fine-grained locks that have replaced the AioContext. Since I am not sure if the deadlocks are gone, I have kept the substance unchanged and just removed mention of the AioContext. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-ID: <20231205182011.1976568-15-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-12-21  job: remove outdated AioContext locking comments  (Stefan Hajnoczi; 1 file, -20/+0)
The AioContext lock no longer exists. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-ID: <20231205182011.1976568-14-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-12-21  aio: remove aio_context_acquire()/aio_context_release() API  (Stefan Hajnoczi; 1 file, -17/+0)
Delete these functions because nothing calls them anymore. I introduced these APIs in commit 98563fc3ec44 ("aio: add aio_context_acquire() and aio_context_release()") in 2014. It's with a sigh of relief that I delete these APIs almost 10 years later. Thanks to Paolo Bonzini's vision for multi-queue QEMU, we got an understanding of where the code needed to go in order to remove the limitations of the original dataplane and of the IOThread/AioContext approach that followed it. Emanuele Giuseppe Esposito had the splendid determination to convert large parts of the codebase so that they no longer needed the AioContext lock. This was a painstaking process, both in the actual code changes required and the iterations of code review that Emanuele eked out of Kevin and me over many months. Kevin Wolf tackled multitudes of graph locking conversions to protect in-flight I/O from run-time changes to the block graph, as well as the clang Thread Safety Analysis annotations that allow the compiler to check whether the graph lock is being used correctly. And me, well, I'm just here to add some pizzazz to the QEMU multi-queue block layer :). Thank you to everyone who helped with this effort, including Eric Blake, code reviewer extraordinaire, and others who I've forgotten to mention. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-ID: <20231205182011.1976568-11-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-12-21  aio-wait: draw equivalence between AIO_WAIT_WHILE() and AIO_WAIT_WHILE_UNLOCKED()  (Stefan Hajnoczi; 1 file, -12/+4)
Now that the AioContext lock no longer exists, AIO_WAIT_WHILE() and AIO_WAIT_WHILE_UNLOCKED() are equivalent. A future patch will get rid of AIO_WAIT_WHILE_UNLOCKED(). Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-ID: <20231205182011.1976568-10-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-12-21  scsi: remove AioContext locking  (Stefan Hajnoczi; 1 file, -14/+0)
The AioContext lock no longer has any effect. Remove it. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-ID: <20231205182011.1976568-9-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-12-21  block: remove bdrv_co_lock()  (Stefan Hajnoczi; 1 file, -14/+0)
The bdrv_co_lock() and bdrv_co_unlock() functions are already no-ops. Remove them. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20231205182011.1976568-8-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-12-21  block: remove AioContext locking  (Stefan Hajnoczi; 3 files, -9/+5)
This is the big patch that removes aio_context_acquire()/aio_context_release() from the block layer and affected block layer users. There isn't a clean way to split this patch and the reviewers are likely the same group of people, so I decided to do it in one patch. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Paul Durrant <paul@xen.org> Message-ID: <20231205182011.1976568-7-stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-12-21  graph-lock: remove AioContext locking  (Stefan Hajnoczi; 1 file, -19/+2)
Stop acquiring/releasing the AioContext lock in bdrv_graph_wrlock()/bdrv_graph_unlock() since the lock no longer has any effect. The distinction between bdrv_graph_wrunlock() and bdrv_graph_wrunlock_ctx() becomes meaningless and they can be collapsed into one function. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231205182011.1976568-6-stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-12-21  virtio-scsi: replace AioContext lock with tmf_bh_lock  (Stefan Hajnoczi; 1 file, -1/+2)
Protect the Task Management Function BH state with a lock. The TMF BH runs in the main loop thread. An IOThread might process a TMF at the same time as the TMF BH is running. Therefore tmf_bh_list and tmf_bh must be protected by a lock. Run TMF request completion in the IOThread using aio_wait_bh_oneshot(). This avoids more locking to protect the virtqueue and SCSI layer state. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231205182011.1976568-2-stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
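The protected state amounts to a BH pointer plus a pending-request list; sketched with QEMU's primitives (the exact field layout in the VirtIOSCSI state struct is assumed from the commit message):

/* TMF requests are deferred to a main-loop BH. IOThreads and the BH
 * both touch these fields, so every access takes tmf_bh_lock. */
QemuMutex tmf_bh_lock;
QEMUBH *tmf_bh;                             /* runs in the main loop */
QTAILQ_HEAD(, VirtIOSCSIReq) tmf_bh_list;   /* pending TMF requests */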