author    Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>    2020-12-11 21:39:24 +0300
committer Eric Blake <eblake@redhat.com>    2021-02-03 08:14:00 -0600
commit    801625e69d947b0f9291d77bb1deb7545525c66d (patch)
tree      ba4a16aaf67f7e295e367ebbfbeae375451e304c /block
parent    98ca45494fcd6bf0336ecd559e440b6de6ea4cd3 (diff)
block/throttle-groups: throttle_group_co_io_limits_intercept(): 64bit bytes
The function is called from 64bit io handlers, and bytes is just passed
to throttle_account(), which is 64bit too (unsigned, though). So, let's
convert the intermediate argument to 64bit too.

This patch is the first in the 64-bit-blocklayer series: we are
generally moving to int64_t for both offset and bytes parameters on all
io paths. The main motivation is the realization of a 64-bit
write_zeroes operation for fast zeroing of large disk chunks, up to the
whole disk. We chose a signed type to be consistent with off_t (which
is signed) and to allow a signed return type (where a negative value
means error).

Patch-correctness audit by Eric Blake:

Callers with 32-bit arguments, where this patch now causes widening,
which is safe:
  block/block-backend.c: blk_do_preadv() passes 'unsigned int'
  block/block-backend.c: blk_do_pwritev_part() passes 'unsigned int'
  block/throttle.c: throttle_co_pwrite_zeroes() passes 'int'
  block/throttle.c: throttle_co_pdiscard() passes 'int'

Callers with 64-bit arguments, where this patch fixes a potential
narrowing bug; it is easy enough to trace that these callers are still
capped at 2G actions:
  block/throttle.c: throttle_co_preadv() passes 'uint64_t'
  block/throttle.c: throttle_co_pwritev() passes 'uint64_t'

Implementation in question:
  block/throttle-groups.c: throttle_group_co_io_limits_intercept()
  takes 'unsigned int bytes' and passes it on as the argument to
  util/throttle.c: throttle_account(uint64_t)

All safe: the patch fixes a latent bug and introduces no 64-bit gotchas
once throttle_co_p{read,write}v are relaxed, assuming
throttle_account() is not buggy.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20201211183934.169161-7-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
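For illustration, a minimal standalone C sketch of the conversions the
audit relies on (not QEMU code; intercept() and fake_throttle_account()
are hypothetical stand-ins for throttle_group_co_io_limits_intercept()
and throttle_account()): an unsigned int argument widens losslessly to
int64_t, a non-negative int64_t converts losslessly to uint64_t, and
the assert guards the signed type's negative (error) range.

    #include <assert.h>
    #include <inttypes.h>
    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t accounted;

    /* Hypothetical stand-in for util/throttle.c throttle_account(uint64_t). */
    static void fake_throttle_account(uint64_t bytes)
    {
        accounted += bytes;
    }

    /* Hypothetical stand-in for the post-patch parameter shape of
     * throttle_group_co_io_limits_intercept(). */
    static void intercept(int64_t bytes)
    {
        /* Negative values are reserved for errors on these paths. */
        assert(bytes >= 0);
        /* Non-negative int64_t -> uint64_t is value-preserving. */
        fake_throttle_account(bytes);
    }

    int main(void)
    {
        unsigned int small = UINT_MAX;  /* 32-bit caller: widening is lossless */
        uint64_t large = 3ULL << 30;    /* 64-bit caller: 3 GiB, no longer narrowed */

        intercept(small);               /* unsigned int -> int64_t preserves the value */
        intercept((int64_t)large);      /* 3 GiB fits in int64_t, cast is lossless */

        printf("accounted %" PRIu64 " bytes\n", accounted);
        return 0;
    }

The signed parameter keeps the interface consistent with off_t, while
the assert makes the non-negative contract explicit at the boundary.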
Diffstat (limited to 'block')
-rw-r--r--  block/throttle-groups.c | 5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/block/throttle-groups.c b/block/throttle-groups.c
index abd16ed..fb203c3 100644
--- a/block/throttle-groups.c
+++ b/block/throttle-groups.c
@@ -358,12 +358,15 @@ static void schedule_next_request(ThrottleGroupMember *tgm, bool is_write)
* @is_write: the type of operation (read/write)
*/
void coroutine_fn throttle_group_co_io_limits_intercept(ThrottleGroupMember *tgm,
- unsigned int bytes,
+ int64_t bytes,
bool is_write)
{
bool must_wait;
ThrottleGroupMember *token;
ThrottleGroup *tg = container_of(tgm->throttle_state, ThrottleGroup, ts);
+
+ assert(bytes >= 0);
+
qemu_mutex_lock(&tg->lock);
/* First we check if this I/O has to be throttled. */
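To see the latent bug the audit mentions, here is a second minimal
sketch (again not QEMU code; intercept_old() is a hypothetical
stand-in for the pre-patch signature): a 64-bit byte count passed to
an 'unsigned int' parameter is implicitly reduced modulo 2^32, so a
request larger than UINT_MAX would be under-accounted.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the pre-patch signature, which took
     * 'unsigned int bytes'. */
    static void intercept_old(unsigned int bytes)
    {
        printf("accounted %u bytes\n", bytes);
    }

    int main(void)
    {
        uint64_t bytes = (1ULL << 32) + 512;   /* 4 GiB + 512: exceeds UINT_MAX */

        printf("requested %" PRIu64 " bytes\n", bytes);
        intercept_old(bytes);  /* implicit conversion wraps modulo 2^32: only 512 */
        return 0;
    }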