author | Eugenio Pérez <eperezma@redhat.com> | 2023-12-21 18:43:12 +0100
committer | Michael S. Tsirkin <mst@redhat.com> | 2023-12-26 04:51:07 -0500
commit | ae25ff41b72366248aa89bdc8be58aa86f67e4c3
tree | e40bb2b6f847ce174fef7fbe7ea44429a0820975 /net
parent | 5edb02e8004c2f1d5026a02cd9378046973f47af
vdpa: move iova_range to vhost_vdpa_shared
Next patches will register the vhost_vdpa memory listener while the VM
is migrating at the destination, so we can map the memory to the device
before stopping the VM at the source. The main goal is to reduce the
downtime.
However, the destination QEMU is unaware of which vhost_vdpa device will
register its memory_listener. If the source guest has CVQ enabled, it
will be the CVQ device. Otherwise, it will be the first one.
Move the iova range to VhostVDPAShared so all vhost_vdpa instances can
use it, rather than keeping it only in the first or last vhost_vdpa.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20231221174322.3130442-4-eperezma@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Diffstat (limited to 'net')
-rw-r--r-- | net/vhost-vdpa.c | 10
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 10703e5..7be2c30 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -354,8 +354,8 @@ static void vhost_vdpa_net_data_start_first(VhostVDPAState *s)
     migration_add_notifier(&s->migration_state,
                            vdpa_net_migration_state_notifier);
     if (v->shadow_vqs_enabled) {
-        v->shared->iova_tree = vhost_iova_tree_new(v->iova_range.first,
-                                                   v->iova_range.last);
+        v->shared->iova_tree = vhost_iova_tree_new(v->shared->iova_range.first,
+                                                   v->shared->iova_range.last);
     }
 }
 
@@ -591,8 +591,8 @@ out:
      * and it is not worth it for the moment.
      */
     if (!v->shared->iova_tree) {
-        v->shared->iova_tree = vhost_iova_tree_new(v->iova_range.first,
-                                                   v->iova_range.last);
+        v->shared->iova_tree = vhost_iova_tree_new(v->shared->iova_range.first,
+                                                   v->shared->iova_range.last);
     }
 
     r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer,
@@ -1688,12 +1688,12 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->always_svq = svq;
     s->migration_state.notify = NULL;
     s->vhost_vdpa.shadow_vqs_enabled = svq;
-    s->vhost_vdpa.iova_range = iova_range;
     s->vhost_vdpa.shadow_data = svq;
     if (queue_pair_index == 0) {
         vhost_vdpa_net_valid_svq_features(features,
                                           &s->vhost_vdpa.migration_blocker);
         s->vhost_vdpa.shared = g_new0(VhostVDPAShared, 1);
+        s->vhost_vdpa.shared->iova_range = iova_range;
     } else if (!is_datapath) {
         s->cvq_cmd_out_buffer = mmap(NULL, vhost_vdpa_net_cvq_cmd_page_len(),
                                      PROT_READ | PROT_WRITE,
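The change above is small, but the ownership shift matters: the IOVA range now
lives once per device rather than once per queue pair. Below is a minimal
standalone sketch of that layout; the scaffolding (main(), the simplified
structs, calloc in place of g_new0) is illustrative only and is not QEMU's
actual code, though the field names follow the diff and
struct vhost_vdpa_iova_range mirrors the Linux vhost UAPI.

/*
 * Sketch: one VhostVDPAShared per device holds the iova_range; every
 * per-queue vhost_vdpa points at it, so whichever device registers the
 * memory listener (CVQ or the first data queue) sees the same range.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Mirrors struct vhost_vdpa_iova_range from the Linux vhost UAPI. */
struct vhost_vdpa_iova_range {
    uint64_t first;
    uint64_t last;
};

/* Simplified stand-ins for QEMU's VhostVDPAShared / struct vhost_vdpa. */
typedef struct {
    struct vhost_vdpa_iova_range iova_range; /* moved here by this patch */
} VhostVDPAShared;

typedef struct {
    VhostVDPAShared *shared; /* no private iova_range field any more */
} VhostVDPA;

int main(void)
{
    struct vhost_vdpa_iova_range range = { .first = 0, .last = UINT64_MAX };

    /* queue_pair_index == 0 allocates the shared block and fills it once. */
    VhostVDPAShared *shared = calloc(1, sizeof(*shared));
    shared->iova_range = range;

    /* Data queues and the CVQ all point at the same block... */
    VhostVDPA data_vq = { .shared = shared };
    VhostVDPA cvq = { .shared = shared };

    /* ...so either one can seed the IOVA tree with identical bounds. */
    printf("cvq sees [0x%" PRIx64 ", 0x%" PRIx64 "]\n",
           cvq.shared->iova_range.first, cvq.shared->iova_range.last);
    printf("data vq sees [0x%" PRIx64 ", 0x%" PRIx64 "]\n",
           data_vq.shared->iova_range.first, data_vq.shared->iova_range.last);

    free(shared);
    return 0;
}

This is also why the hunk at line 1688 moves the assignment under the
queue_pair_index == 0 branch: only the first vhost_vdpa allocates the shared
block, so the range is stored exactly once and read by all peers.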