author     Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>   2019-01-15 18:26:50 -0500
committer  John Snow <jsnow@redhat.com>   2019-01-15 18:26:50 -0500
commit     1eaf1b0fdf41069b4b3e67eae88da0d781261792
tree       076362574f940fad39b153c7c15415d0842f1ac0
parent     bb6a0ec10ee3f791835f1479a8a3226f64cb6d75
block/mirror: fix and improve do_sync_target_write
Use bdrv_dirty_bitmap_next_dirty_area() instead of
bdrv_dirty_iter_next_area(), because of the following problems with
bdrv_dirty_iter_next_area():
1. When using HBitmap iterators, unaligned offsets must be handled
carefully: the first call to hbitmap_iter_next() may return a value
less than the original offset (specifically, the original offset
rounded down to the bitmap granularity). do_sync_target_write() does
not do this handling; a first sketch after this list illustrates the
hazard.
2. bdrv_dirty_iter_next_area() handles an unaligned max_offset
incorrectly. Look at the code:
    if (max_offset == iter->bitmap->size) {
        /* If max_offset points to the image end, round it up by the
         * bitmap granularity */
        gran_max_offset = ROUND_UP(max_offset, granularity);
    } else {
        gran_max_offset = max_offset;
    }

    ret = hbitmap_iter_next(&iter->hbi, false);
    if (ret < 0 || ret + granularity > gran_max_offset) {
        return false;
    }
Now assume that max_offset != iter->bitmap->size but max_offset is
still unaligned. If 0 < ret < max_offset we have found a dirty area,
yet the function can still return false (when
ret + granularity > max_offset). For example, with granularity = 512
and an unaligned max_offset = 1000, a dirty chunk at ret = 512 gives
ret + granularity = 1024 > 1000, so the dirty area [512, 1000) is
missed; a second sketch after this list walks through these numbers.
3. bdrv_dirty_iter_next_area() uses an inefficient loop to find the
end of the dirty area. Use the more efficient hbitmap_next_zero()
instead, as bdrv_dirty_bitmap_next_dirty_area() does; a third sketch
after this list contrasts the two approaches.
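
To make problem 1 concrete, here is a minimal self-contained sketch
(toy code, not the QEMU API: the next_dirty() helper, the int-array
bitmap and the 512-byte granularity are all assumptions for
illustration) of how an iterator that rounds the start offset down to
the bitmap granularity hands the caller bytes before the requested
offset unless the result is clamped back up:

    #include <stdio.h>
    #include <stdint.h>

    #define GRANULARITY 512  /* assumed bitmap granularity: bytes per bit */

    /* Toy model of an HBitmap-style iterator: returns the byte offset
     * of the next dirty chunk at or around 'start', rounded DOWN to
     * the granularity, or -1 if nothing is dirty. */
    static int64_t next_dirty(const int *bits, int nbits, int64_t start)
    {
        for (int i = (int)(start / GRANULARITY); i < nbits; i++) {
            if (bits[i]) {
                return (int64_t)i * GRANULARITY;
            }
        }
        return -1;
    }

    int main(void)
    {
        int bits[4] = {0, 1, 0, 0};  /* chunk [512, 1024) is dirty */
        int64_t offset = 700;        /* unaligned request */

        int64_t ret = next_dirty(bits, 4, offset);
        printf("iterator returned %lld for requested offset %lld\n",
               (long long)ret, (long long)offset);

        /* Without clamping, the caller would also process bytes
         * [512, 700) that lie before the requested offset: */
        if (ret >= 0 && ret < offset) {
            ret = offset;            /* the missing rounding-up step */
        }
        printf("clamped start: %lld\n", (long long)ret);
        return 0;
    }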
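
Problem 2 reduces to plain arithmetic. A minimal sketch (the concrete
values granularity = 512, bitmap_size = 4096, max_offset = 1000 and
ret = 512 are illustrative assumptions, not taken from the patch) of
the quoted rejection test with an unaligned max_offset that does not
equal the bitmap size:

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    int main(void)
    {
        int64_t granularity = 512;
        int64_t bitmap_size = 4096;  /* assumed image/bitmap size   */
        int64_t max_offset  = 1000;  /* unaligned, != bitmap_size   */
        int64_t ret         = 512;   /* a dirty chunk inside range  */

        /* Mirrors the logic quoted above: max_offset is only rounded
         * up when it equals the bitmap size. */
        int64_t gran_max_offset =
            (max_offset == bitmap_size)
                ? ((max_offset + granularity - 1) / granularity) * granularity
                : max_offset;

        bool rejected = (ret < 0 || ret + granularity > gran_max_offset);
        printf("dirty area at %lld < max_offset %lld, yet rejected: %s\n",
               (long long)ret, (long long)max_offset,
               rejected ? "true" : "false");
        return 0;
    }

Compiled and run, this prints "rejected: true": the genuine dirty area
at [512, 1000) is reported as absent.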
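
For problem 3, a minimal sketch (a toy int array stands in for
HBitmap; end_by_loop() and next_zero() are hypothetical helpers, not
the QEMU functions) contrasting the bit-by-bit loop with a single
next-zero lookup; a real HBitmap can answer the next-zero query from
its multi-level structure instead of visiting every bit:

    #include <stdio.h>

    #define NBITS 16

    /* Bit-by-bit loop: advance one granularity-sized chunk per
     * iteration until a clean bit is seen. */
    static int end_by_loop(const int *bits, int start)
    {
        int end = start;
        while (end < NBITS && bits[end]) {
            end++;
        }
        return end;
    }

    /* Next-zero style lookup: the first clean bit at or after 'start'
     * in one call.  This toy version only models the interface; the
     * point is that the caller needs a single query, not a loop. */
    static int next_zero(const int *bits, int start)
    {
        for (int i = start; i < NBITS; i++) {
            if (!bits[i]) {
                return i;
            }
        }
        return NBITS;
    }

    int main(void)
    {
        int bits[NBITS] = {0, 1, 1, 1, 1, 0};  /* dirty area: bits 1..4 */

        printf("end by loop:      %d\n", end_by_loop(bits, 1));
        printf("end by next_zero: %d\n", next_zero(bits, 1));
        return 0;
    }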
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>