author    Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>  2021-08-24 11:38:30 +0300
committer Hanna Reitz <hreitz@redhat.com>  2021-09-01 12:57:31 +0200
commit    2a6511dfeb0d1bd10211b264177afbc360f9bd9d (patch)
tree      e4e023d38227c34fa14055449d049713198769a9 /block/backup.c
parent    f8b9504bac3a658af81cb19aec9572aa086799e2 (diff)
block/backup: set copy_range and compress after filter insertion
We are going to publish the copy-before-write filter, so it would be initialized through options. Still, we don't want to publish the compress and copy-range options, as:

1. The modern way to enable compression is to use the compress filter.
2. For copy-range it's unclear how to make a proper interface:
   - it has an experimental prefix for the backup job anyway
   - the whole BackupPerf structure doesn't make sense for the filter

So, let's just add the copy-range possibility to the filter later if needed.

Still, we are going to continue supporting compression and experimental copy-range in the backup job. So, set these options after filter insertion.

Note that we could drop the "compress" argument of bdrv_cbw_append() now, as well as "perf". The only reason for not doing so is that, at the time this patch was prepared, the big series around it had already been reviewed, and I want to avoid extra rebase conflicts to simplify review of the following version.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20210824083856.17408-9-vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
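To make the reordering concrete, here is a minimal before/after sketch of the relevant calls in backup_job_create(), condensed from the diff below (error handling and surrounding setup omitted; this is illustrative, not a complete listing):

    /* Before: compression and the BackupPerf copy-range setting were
     * passed straight into the copy-before-write filter at creation. */
    cbw = bdrv_cbw_append(bs, target, filter_node_name,
                          cluster_size, perf, compress, &bcs, errp);

    /* After: the filter is inserted without them, and the backup job
     * applies compression and the experimental copy-range setting to the
     * block-copy state only once the filter is in place. */
    cbw = bdrv_cbw_append(bs, target, filter_node_name,
                          cluster_size, false, &bcs, errp);
    /* ... */
    block_copy_set_copy_opts(bcs, perf->use_copy_range, compress);

This keeps compress and copy-range as backup-job options without exposing them through the soon-to-be-public filter's option set.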
Diffstat (limited to 'block/backup.c')
-rw-r--r--  block/backup.c  3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/backup.c b/block/backup.c
index 84f9a5f..b31fd99 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -504,7 +504,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     }
     cbw = bdrv_cbw_append(bs, target, filter_node_name,
-                          cluster_size, perf, compress, &bcs, errp);
+                          cluster_size, false, &bcs, errp);
     if (!cbw) {
         goto error;
     }
@@ -530,6 +530,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     job->len = len;
     job->perf = *perf;
+    block_copy_set_copy_opts(bcs, perf->use_copy_range, compress);
     block_copy_set_progress_meter(bcs, &job->common.job.progress);
     block_copy_set_speed(bcs, speed);