author     Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>  2021-12-17 17:46:54 +0100
committer  Kevin Wolf <kwolf@redhat.com>                            2022-01-14 12:03:16 +0100
commit     96054c76ff2db74165385a69f234c57a6bbc941e
tree       7426f47bd77a9a4153f2c88fd043947daf5580e8 /qemu-img.c
parent     51cd8bddd63540514d44808f7920811439baa253
qemu-img: make is_allocated_sectors() more efficient
Consider the case where the whole buffer is zero and its end is unaligned.

If i <= tail, we return 1 and do one unaligned WRITE; a read-modify-write
(RMW) cycle happens. If i > tail, we do one aligned WRITE_ZERO (or skip it
if the target is already zeroed) and again one unaligned WRITE; RMW still
happens.

Let's do better: don't fragment the whole-zero buffer, and report it as
ZERO. If the target is zeroed, we then do nothing at all and avoid the RMW.
If the target is not zeroed, one unaligned WRITE_ZERO should not be much
worse than one unaligned WRITE.
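[Editor's note: to make the before/after behaviour concrete, here is a minimal
standalone sketch, not QEMU code: the per-sector zero test is abstracted into a
bool array, and first_chunk_old()/first_chunk_new() are hypothetical helpers
that model only the chunking decision of is_allocated_sectors().]

    #include <stdbool.h>
    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    /*
     * Old behaviour: scan until the sector kind flips, then round the
     * chunk end around the alignment boundary, even when the scan
     * consumed the whole buffer.
     */
    static int first_chunk_old(const bool *zero, int n, long sector_num,
                               int alignment, bool *chunk_is_zero)
    {
        bool is_zero = zero[0];
        int i, tail;

        for (i = 1; i < n && zero[i] == is_zero; i++) {
            /* nothing: just find where the zero/data kind changes */
        }

        tail = (sector_num + i) & (alignment - 1);
        if (tail) {
            if (is_zero && i <= tail) {
                is_zero = false;                  /* small zero tail: call it data */
            }
            if (!is_zero) {
                i = MIN(i + alignment - tail, n); /* round data end up */
            } else {
                i -= tail;                        /* round zero end down */
            }
        }
        *chunk_is_zero = is_zero;
        return i;
    }

    /*
     * Patched behaviour: if the whole buffer is one kind (i == n),
     * report it as a single chunk and skip the tail rounding entirely.
     */
    static int first_chunk_new(const bool *zero, int n, long sector_num,
                               int alignment, bool *chunk_is_zero)
    {
        bool is_zero = zero[0];
        int i;

        for (i = 1; i < n && zero[i] == is_zero; i++) {
        }
        if (i == n) {
            *chunk_is_zero = is_zero;
            return n;
        }
        return first_chunk_old(zero, n, sector_num, alignment, chunk_is_zero);
    }

    int main(void)
    {
        /* 10 all-zero sectors starting at sector 0, alignment of 8 sectors. */
        bool zero[10] = { true, true, true, true, true,
                          true, true, true, true, true };
        bool z;

        /* Old: tail = 10 & 7 = 2, chunk rounded down to 8 zero sectors;
         * the remaining 2 zero sectors end up in an unaligned write later. */
        printf("old: %d sectors\n", first_chunk_old(zero, 10, 0, 8, &z));

        /* New: early return, all 10 sectors reported as one ZERO chunk. */
        printf("new: %d sectors\n", first_chunk_new(zero, 10, 0, 8, &z));
        return 0;
    }

In the sketch the old path splits a uniform zero buffer into an aligned
chunk plus a tail, which is exactly the fragmentation the patch removes.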
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20211217164654.1184218-3-vsementsov@virtuozzo.com>
Tested-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Diffstat (limited to 'qemu-img.c')
-rw-r--r--  qemu-img.c | 23 +++++++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)
@@ -1171,19 +1171,34 @@ static int is_allocated_sectors(const uint8_t *buf, int n, int *pnum,
         }
     }
 
+    if (i == n) {
+        /*
+         * The whole buf is the same.
+         * No reason to split it into chunks, so return now.
+         */
+        *pnum = i;
+        return !is_zero;
+    }
+
     tail = (sector_num + i) & (alignment - 1);
     if (tail) {
         if (is_zero && i <= tail) {
-            /* treat unallocated areas which only consist
-             * of a small tail as allocated. */
+            /*
+             * For sure next sector after i is data, and it will rewrite this
+             * tail anyway due to RMW. So, let's just write data now.
+             */
             is_zero = false;
         }
         if (!is_zero) {
-            /* align up end offset of allocated areas. */
+            /* If possible, align up end offset of allocated areas. */
             i += alignment - tail;
             i = MIN(i, n);
         } else {
-            /* align down end offset of zero areas. */
+            /*
+             * For sure next sector after i is data, and it will rewrite this
+             * tail anyway due to RMW. Better is avoid RMW and write zeroes up
+             * to aligned bound.
+             */
             i -= tail;
         }
     }
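[Editor's note: a worked illustration of the tail arithmetic above, with
values invented for the example. With alignment = 8 and sector_num = 5, a
scan that stops at i = 6 gives tail = (5 + 6) & 7 = 3. A data chunk is then
rounded up to i = 6 + (8 - 3) = 11, capped at n, so it ends on the aligned
boundary at sector 16; a zero chunk longer than the tail is rounded down to
i = 6 - 3 = 3, so the zero region ends at sector 8 and the following data
write starts aligned, avoiding the RMW.]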