author     Denis V. Lunev <den@openvz.org>    2016-05-25 21:48:45 -0600
committer  Kevin Wolf <kwolf@redhat.com>      2016-06-08 10:21:08 +0200
commit     443668ca408cdb7f01e1a70a58518bb100c3e9d1 (patch)
tree       668a5dad129a866dcba61c066a193f42e5bdabd9 /tests/qemu-iotests/154.out
parent     6ed5546fa7bf12c5b87ef76bafb86e1d77ed6e85 (diff)
block: split write_zeroes always
We should split requests even if they are less than write_zeroes_alignment. For example, we can have the following request:

    offset 62k
    size 4k
    write_zeroes_alignment 64k

The original code sent one request covering two qcow2 clusters, which resulted in both clusters being allocated. By splitting the request, we can cater to the case where one of the two clusters can be zeroed as a whole, so that only one cluster is allocated after the operation.

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Eric Blake <eblake@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
Message-Id: <1463476543-3087-2-git-send-email-den@openvz.org>
[eblake: avoid exceeding nb_sectors, hoist alignment checks out of the loop, and update the testsuite to show that the patch works]
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
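The splitting described above can be illustrated with a small standalone C sketch. This is not the QEMU code touched by the patch; the names split_write_zeroes and emit_fragment are hypothetical, and the example only shows the idea of clamping each chunk at the next write_zeroes_alignment boundary.

    /*
     * Minimal standalone sketch (not the actual QEMU write_zeroes path) of
     * splitting a zero request at write_zeroes_alignment boundaries.
     */
    #include <inttypes.h>
    #include <stdio.h>

    /* Stand-in for the per-fragment driver call made by the real code. */
    static void emit_fragment(uint64_t offset, uint64_t bytes)
    {
        printf("write_zeroes: offset=%" PRIu64 " bytes=%" PRIu64 "\n",
               offset, bytes);
    }

    /*
     * Split [offset, offset + bytes) so that no fragment crosses an
     * 'alignment' boundary.  With offset = 62k, bytes = 4k, alignment = 64k
     * this yields [62k, 64k) and [64k, 66k); the second fragment starts on
     * a cluster boundary, so the driver has a chance to zero that cluster
     * without allocating it.
     */
    static void split_write_zeroes(uint64_t offset, uint64_t bytes,
                                   uint64_t alignment)
    {
        while (bytes > 0) {
            uint64_t chunk = bytes;
            uint64_t to_boundary = alignment - (offset % alignment);

            /* Clamp the chunk at the next alignment boundary. */
            if (chunk > to_boundary) {
                chunk = to_boundary;
            }
            emit_fragment(offset, chunk);
            offset += chunk;
            bytes -= chunk;
        }
    }

    int main(void)
    {
        split_write_zeroes(62 * 1024, 4 * 1024, 64 * 1024);
        return 0;
    }

Running the sketch prints two 2048-byte fragments. The updated 154.out hunks below show the corresponding effect in the test image: each former 8192-byte data extent is now a 4096-byte data extent paired with a 4096-byte zero extent.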
Diffstat (limited to 'tests/qemu-iotests/154.out')
-rw-r--r--  tests/qemu-iotests/154.out  18
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/tests/qemu-iotests/154.out b/tests/qemu-iotests/154.out
index 8946b73..b9d27c5 100644
--- a/tests/qemu-iotests/154.out
+++ b/tests/qemu-iotests/154.out
@@ -106,11 +106,14 @@ read 1024/1024 bytes at offset 67584
read 5120/5120 bytes at offset 68608
5 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
[{ "start": 0, "length": 32768, "depth": 1, "zero": true, "data": false},
-{ "start": 32768, "length": 8192, "depth": 0, "zero": false, "data": true, "offset": 20480},
+{ "start": 32768, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 20480},
+{ "start": 36864, "length": 4096, "depth": 0, "zero": true, "data": false},
{ "start": 40960, "length": 8192, "depth": 1, "zero": true, "data": false},
-{ "start": 49152, "length": 8192, "depth": 0, "zero": false, "data": true, "offset": 28672},
+{ "start": 49152, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 24576},
+{ "start": 53248, "length": 4096, "depth": 0, "zero": true, "data": false},
{ "start": 57344, "length": 8192, "depth": 1, "zero": true, "data": false},
-{ "start": 65536, "length": 8192, "depth": 0, "zero": false, "data": true, "offset": 36864},
+{ "start": 65536, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 28672},
+{ "start": 69632, "length": 4096, "depth": 0, "zero": true, "data": false},
{ "start": 73728, "length": 134144000, "depth": 1, "zero": true, "data": false}]
== spanning two clusters, non-zero after request ==
@@ -145,11 +148,14 @@ read 7168/7168 bytes at offset 65536
read 1024/1024 bytes at offset 72704
1 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
[{ "start": 0, "length": 32768, "depth": 1, "zero": true, "data": false},
-{ "start": 32768, "length": 8192, "depth": 0, "zero": false, "data": true, "offset": 20480},
+{ "start": 32768, "length": 4096, "depth": 0, "zero": true, "data": false},
+{ "start": 36864, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 20480},
{ "start": 40960, "length": 8192, "depth": 1, "zero": true, "data": false},
-{ "start": 49152, "length": 8192, "depth": 0, "zero": false, "data": true, "offset": 28672},
+{ "start": 49152, "length": 4096, "depth": 0, "zero": true, "data": false},
+{ "start": 53248, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 24576},
{ "start": 57344, "length": 8192, "depth": 1, "zero": true, "data": false},
-{ "start": 65536, "length": 8192, "depth": 0, "zero": false, "data": true, "offset": 36864},
+{ "start": 65536, "length": 4096, "depth": 0, "zero": true, "data": false},
+{ "start": 69632, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 28672},
{ "start": 73728, "length": 134144000, "depth": 1, "zero": true, "data": false}]
== spanning two clusters, partially overwriting backing file ==