author     John Snow <jsnow@redhat.com>    2019-07-29 16:35:55 -0400
committer  John Snow <jsnow@redhat.com>    2019-08-16 16:28:03 -0400
commit     7e30dd618ebfe3ab1ed54f2e98ba75d799c0be20 (patch)
tree       8f6c4d6398603583e13417c75229c76758ee13d6 /block/trace-events
parent     dba8700f16ebda0632977c303f66021407971081 (diff)
block/backup: teach TOP to never copy unallocated regions
Presently, if sync=TOP is selected, we mark the entire bitmap as dirty, so the
write notifier handler dutifully copies out even regions that are unallocated
in the top layer.
Fix this in three parts:
1. Mark the bitmap as being initialized before the first yield.
2. After the first yield but before the backup loop, interrogate the
allocation status asynchronously and initialize the bitmap.
3. Teach the write notifier to interrogate allocation status if it is
invoked during bitmap initialization.
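The shape of the scheme, reduced to a toy model (all names below are
illustrative only, not the actual block/backup.c code, which uses QEMU's
block dirty bitmaps, coroutines, and job progress API):

    /* Toy model of the sync=TOP initialization scheme described above.
     * All identifiers (ToyBackupJob, initialize_bitmap, ...) are
     * hypothetical; they only mirror the three steps in the message. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CHUNKS 8

    typedef struct ToyBackupJob {
        bool dirty[CHUNKS];     /* one bit per chunk, initially all set    */
        bool allocated[CHUNKS]; /* allocation status of the top layer      */
        bool initializing;      /* step 1: set before the first "yield"    */
        int64_t total;          /* progress ceiling, starts at disk length */
        int64_t done;           /* progress floor, grows as chunks copy    */
    } ToyBackupJob;

    /* Step 2: after the first yield, walk the disk, drop bitmap bits for
     * unallocated chunks and shrink the total (the ceiling moves down). */
    static void initialize_bitmap(ToyBackupJob *job)
    {
        for (int i = 0; i < CHUNKS; i++) {
            if (!job->allocated[i]) {
                job->dirty[i] = false;
                job->total--;          /* total progress decreases */
            }
        }
        job->initializing = false;
    }

    /* Step 3: the write notifier checks allocation itself if it fires
     * while the bitmap is still being initialized. */
    static void write_notifier(ToyBackupJob *job, int chunk)
    {
        if (job->initializing && !job->allocated[chunk]) {
            return;                    /* never copy unallocated regions */
        }
        if (job->dirty[chunk]) {
            job->dirty[chunk] = false;
            job->done++;               /* current progress increases */
            printf("copied chunk %d (%ld/%ld)\n",
                   chunk, (long)job->done, (long)job->total);
        }
    }

    int main(void)
    {
        ToyBackupJob job = {
            .dirty     = { true, true, true, true, true, true, true, true },
            .allocated = { true, false, true, false, true, false, true, true },
            .initializing = true,      /* step 1 */
            .total = CHUNKS,
            .done = 0,
        };

        write_notifier(&job, 1);   /* write hits an unallocated chunk: skipped */
        initialize_bitmap(&job);   /* step 2 */
        write_notifier(&job, 2);   /* allocated chunk: copied */
        return 0;
    }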
As an effect of this patch, the job progress for TOP backups
now behaves like this:
- total progress starts at bdrv_length.
- As allocation status is interrogated, total progress decreases.
- As blocks are copied, current progress increases.
Taken together, the floor and ceiling move to meet each other.
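For instance (hypothetical numbers, not taken from this patch): on a 10 GiB
disk whose top layer has only 3 GiB allocated, total progress starts at
10 GiB and shrinks toward 3 GiB as allocation status is learned, while
current progress climbs from 0 toward 3 GiB as those blocks are copied; the
job reports completion when the two meet.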
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20190716000117.25219-10-jsnow@redhat.com
[Remove ret = -ECANCELED change. --js]
[Squash in conflict resolution based on Max's patch --js]
Message-id: c8b0ab36-79c8-0b4b-3193-4e12ed8c848b@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Diffstat (limited to 'block/trace-events')
-rw-r--r--  block/trace-events | 1 +
1 file changed, 1 insertion, 0 deletions
diff --git a/block/trace-events b/block/trace-events
index d724df0..04209f0 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -41,6 +41,7 @@ mirror_yield_in_flight(void *s, int64_t offset, int in_flight) "s %p offset %" P
 backup_do_cow_enter(void *job, int64_t start, int64_t offset, uint64_t bytes) "job %p start %" PRId64 " offset %" PRId64 " bytes %" PRIu64
 backup_do_cow_return(void *job, int64_t offset, uint64_t bytes, int ret) "job %p offset %" PRId64 " bytes %" PRIu64 " ret %d"
 backup_do_cow_skip(void *job, int64_t start) "job %p start %"PRId64
+backup_do_cow_skip_range(void *job, int64_t start, uint64_t bytes) "job %p start %"PRId64" bytes %"PRId64
 backup_do_cow_process(void *job, int64_t start) "job %p start %"PRId64
 backup_do_cow_read_fail(void *job, int64_t start, int ret) "job %p start %"PRId64" ret %d"
 backup_do_cow_write_fail(void *job, int64_t start, int ret) "job %p start %"PRId64" ret %d"