author     Alberto Garcia <berto@igalia.com>  2018-08-02 17:50:24 +0300
committer  Kevin Wolf <kwolf@redhat.com>      2018-08-15 12:50:39 +0200
commit     5d8e4ca035f5a21e8634eb63a678bed55a1a94f9 (patch)
tree       449d58957c53efd81a8e3f760cc63491bfc5f811
parent     ef7a6a3c2a7725b169d054aa7487f9738bd6c4a6 (diff)
throttle-groups: Skip the round-robin if a member is being drained
In the throttling code, after an I/O request has been completed, the next one is selected from a different member using a round-robin algorithm. This ensures that all members get a chance to finish their pending I/O requests.

However, if a group member has its I/O limits disabled (because it's being drained) then we should always give it priority in order to have all its pending requests finished as soon as possible.

If we don't do this we could have a member in the process of being drained waiting for the throttled requests of other members, for which the I/O limits still apply.

This can have additional consequences: if we're running in qtest mode (with QEMU_CLOCK_VIRTUAL) then timers can only fire if we advance the clock manually, so attempting to drain a block device can hang QEMU in the BDRV_POLL_WHILE() loop at the end of bdrv_do_drained_begin().

Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
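The selection logic can be illustrated in isolation. The following is a minimal, self-contained C sketch of a round-robin token search with the drain short-circuit tried first; the Member struct and the has_pending_reqs()/next_token() helpers are simplified stand-ins invented for this example, not QEMU's actual ThrottleGroupMember API.

/* Standalone model: a circular list of group members, each with
 * per-direction pending request counts and a drain flag. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct Member {
    const char *name;
    int pending[2];           /* pending requests: [0] = read, [1] = write */
    bool io_limits_disabled;  /* set while the member is being drained */
    struct Member *next;      /* circular list of group members */
} Member;

static bool has_pending_reqs(const Member *m, bool is_write)
{
    return m->pending[is_write] > 0;
}

/* Pick the member whose request runs next.  Normally this walks the
 * ring round-robin starting from the current token so that every
 * member makes progress; a draining member with pending requests is
 * returned immediately instead, mirroring the check this patch adds. */
static Member *next_token(Member *tgm, Member *token, bool is_write)
{
    if (has_pending_reqs(tgm, is_write) && tgm->io_limits_disabled) {
        return tgm;   /* don't make a draining member wait */
    }

    Member *start = token;
    do {
        token = token->next;
    } while (token != start && !has_pending_reqs(token, is_write));
    return token;
}

int main(void)
{
    Member a = { "a", {1, 0}, false, NULL };
    Member b = { "b", {2, 0}, true,  NULL };  /* b is being drained */
    a.next = &b;
    b.next = &a;

    /* Round-robin alone would advance the token from b to a, since a
     * also has a pending read; the short-circuit returns b instead. */
    printf("next: %s\n", next_token(&b, &b, false)->name);  /* prints "next: b" */
    return 0;
}

Without the short-circuit the round-robin walk would select a here, leaving the draining member b waiting behind a's throttled read, which is exactly the situation the patch avoids.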
-rw-r--r--  block/throttle-groups.c | 9 +++++++++
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/block/throttle-groups.c b/block/throttle-groups.c
index e297b04..d46c56b 100644
--- a/block/throttle-groups.c
+++ b/block/throttle-groups.c
@@ -221,6 +221,15 @@ static ThrottleGroupMember *next_throttle_token(ThrottleGroupMember *tgm,
     ThrottleGroup *tg = container_of(ts, ThrottleGroup, ts);
     ThrottleGroupMember *token, *start;
 
+    /* If this member has its I/O limits disabled then it means that
+     * it's being drained. Skip the round-robin search and return tgm
+     * immediately if it has pending requests. Otherwise we could be
+     * forcing it to wait for other member's throttled requests. */
+    if (tgm_has_pending_reqs(tgm, is_write) &&
+        atomic_read(&tgm->io_limits_disabled)) {
+        return tgm;
+    }
+
     start = token = tg->tokens[is_write];
 
     /* get next bs round in round robin style */