From 48557b138383aaf69c2617ca9a88bfb394fc50ec Mon Sep 17 00:00:00 2001
From: Vladimir Sementsov-Ogievskiy
Date: Tue, 6 Aug 2019 18:26:11 +0300
Subject: util/hbitmap: strict hbitmap_reset

hbitmap_reset has an unobvious property: it rounds the requested region
up. This can provoke bugs, as in the recently fixed write-blocking mode
of mirror: a user calls reset on an unaligned region, not keeping in
mind that there may be unrelated dirty bytes covered by the rounded-up
region, and the information about this unrelated "dirtiness" will be
lost.

Make hbitmap_reset strict: assert that the arguments are aligned,
allowing only one exception: when @start + @count == hb->orig_size.
This is needed to accommodate users of hbitmap_next_dirty_area, which
cares about hb->orig_size.

Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Max Reitz
Message-Id: <20190806152611.280389-1-vsementsov@virtuozzo.com>
[Maintainer edit: Max's suggestions from on-list. --js]
[Maintainer edit: Eric's suggestion for aligned macro. --js]
Signed-off-by: John Snow
---
 util/hbitmap.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/util/hbitmap.c b/util/hbitmap.c
index fd44c89..66db87c 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -476,6 +476,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
     /* Compute range in the last layer. */
     uint64_t first;
     uint64_t last = start + count - 1;
+    uint64_t gran = 1ULL << hb->granularity;
+
+    assert(QEMU_IS_ALIGNED(start, gran));
+    assert(QEMU_IS_ALIGNED(count, gran) || (start + count == hb->orig_size));
 
     trace_hbitmap_reset(hb, start, count,
                         start >> hb->granularity, last >> hb->granularity);
--
cgit v1.1
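
As an illustration of the contract the new assertions enforce, here is a
minimal, self-contained C sketch (not QEMU code). The is_aligned() helper
stands in for QEMU's QEMU_IS_ALIGNED() macro, and the granularity and
orig_size values are hypothetical, chosen so that the bitmap's size is not
a multiple of its granularity:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for QEMU_IS_ALIGNED(): true iff n is a multiple of m. */
static bool is_aligned(uint64_t n, uint64_t m)
{
    return (n % m) == 0;
}

/* Mirrors the two assertions added to hbitmap_reset(): @start must be
 * granularity-aligned, and @count must be too, unless the region runs
 * exactly to the end of the bitmap (start + count == orig_size). */
static bool reset_args_valid(uint64_t start, uint64_t count,
                             unsigned granularity, uint64_t orig_size)
{
    uint64_t gran = 1ULL << granularity;

    return is_aligned(start, gran) &&
           (is_aligned(count, gran) || start + count == orig_size);
}

int main(void)
{
    unsigned granularity = 9;   /* hypothetical: 512-byte granularity */
    uint64_t orig_size = 1000;  /* hypothetical: not a multiple of 512 */

    /* Fully aligned region: accepted. */
    assert(reset_args_valid(0, 512, granularity, orig_size));

    /* Unaligned count, but the region ends exactly at orig_size:
     * the single allowed exception. */
    assert(reset_args_valid(512, 488, granularity, orig_size));

    /* Unaligned start: exactly the case the strict check now rejects. */
    assert(!reset_args_valid(100, 512, granularity, orig_size));

    return 0;
}

Before this patch, the third call would have been silently expanded to the
surrounding aligned region (bytes 0..1023 here), clearing dirty bits the
caller never asked about; with the strict check, such a caller must align
the region itself before calling hbitmap_reset.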