path: root/nbd/client.c
2023-09-07  nbd: drop unused nbd_start_negotiate() aio_context argument  (Stefan Hajnoczi; 1 file, -8/+3)
aio_context is always NULL, so drop it. Suggested-by: Fabiano Rosas <farosas@suse.de> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-ID: <20230830224802.493686-3-stefanha@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com>
2023-09-07  nbd: drop unused nbd_receive_negotiate() aio_context argument  (Stefan Hajnoczi; 1 file, -3/+2)
aio_context is always NULL, so drop it. Suggested-by: Fabiano Rosas <farosas@suse.de> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-ID: <20230830224802.493686-2-stefanha@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com>
2023-07-19  nbd: Use enum for various negotiation modes  (Eric Blake; 1 file, -21/+25)
Deciphering the hard-coded list of integer return values from nbd_start_negotiate() will only get more confusing when adding support for 64-bit extended headers. Better is to name things in an enum. Although the function in question is private to client.c, putting the enum in a public header and including an enum-to-string conversion will allow its use in more places in upcoming patches. The enum is intentionally laid out so that operators like <= can be used to group multiple modes with similar characteristics, and where the least powerful mode has value 0, even though this patch does not exploit that. No semantic change intended. Signed-off-by: Eric Blake <eblake@redhat.com> Message-ID: <20230608135653.2918540-9-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
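For illustration only, an enum of the kind described above could be laid out along these lines; the value names and the lookup helper are paraphrased from the commit message, not quoted from the patch. The ordering from least to most capable is what makes grouping comparisons such as mode >= NBD_MODE_STRUCTURED meaningful:

    typedef enum NBDMode {
        /* Ordered from least to most capable, so '<=' groups related modes. */
        NBD_MODE_OLDSTYLE,     /* oldstyle negotiation, no option phase */
        NBD_MODE_EXPORT_NAME,  /* newstyle, but only NBD_OPT_EXPORT_NAME is safe */
        NBD_MODE_SIMPLE,       /* newstyle, simple replies only */
        NBD_MODE_STRUCTURED,   /* newstyle with structured replies */
        NBD_MODE_EXTENDED,     /* newstyle with 64-bit extended headers */
    } NBDMode;

    /* assumed name for the enum-to-string conversion mentioned above */
    const char *nbd_mode_lookup(NBDMode mode);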
2023-07-19  nbd/client: Add safety check on chunk payload length  (Eric Blake; 1 file, -0/+12)
Our existing use of structured replies either reads into a qiov capped at 32M (NBD_CMD_READ) or caps allocation to 1000 bytes (see NBD_MAX_MALLOC_PAYLOAD in block/nbd.c). But the existing length checks are rather late; if we encounter a buggy (or malicious) server that sends a super-large payload length, we should drop the connection right then rather than assuming the layer on top will be careful. This becomes more important when we permit 64-bit lengths which are even more likely to have the potential for attempted denial of service abuse. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Message-ID: <20230608135653.2918540-8-eblake@redhat.com>
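A sketch of the kind of early bounds check described above; the constant, struct and helper names are assumptions for illustration, not quotes from the patch:

    /* Drop the connection as soon as the advertised payload length is
     * implausibly large, before attempting any allocation or read. */
    if (chunk->length > NBD_MAX_BUFFER_SIZE + sizeof(NBDStructuredReadData)) {
        error_setg(errp, "server chunk type %d (%s) has payload length %" PRIu32
                   " exceeding the permitted maximum %d",
                   chunk->type, nbd_reply_type_lookup(chunk->type),
                   chunk->length, NBD_MAX_BUFFER_SIZE);
        return -EINVAL;
    }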
2023-07-19  nbd: s/handle/cookie/ to match NBD spec  (Eric Blake; 1 file, -7/+7)
Externally, libnbd exposed the 64-bit opaque marker for each client NBD packet as the "cookie", because it was less confusing when contrasted with 'struct nbd_handle *' holding all libnbd state. It also avoids confusion between the noun 'handle' as a way to identify a packet and the verb 'handle' for reacting to things like signals. Upstream NBD changed their spec to favor the name "cookie" based on libnbd's recommendations[1], so we can do likewise. [1] https://github.com/NetworkBlockDevice/nbd/commit/ca4392eb2b Signed-off-by: Eric Blake <eblake@redhat.com> Message-ID: <20230608135653.2918540-6-eblake@redhat.com> [eblake: typo fix] Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-07-19  nbd/client: Use smarter assert  (Eric Blake; 1 file, -3/+4)
Assigning strlen() to a uint32_t and then asserting that it isn't too large doesn't catch the case of an input string 4G in length. Thankfully, the incoming strings can never be that large: if the export name or query is reflecting a string the client got from the server, we already guarantee that we dropped the NBD connection if the server sent more than 32M in a single reply to our NBD_OPT_* request; if the export name is coming from qemu, nbd_receive_negotiate() asserted that strlen(info->name) <= NBD_MAX_STRING_SIZE; and similarly, a query string via x->dirty_bitmap coming from the user was bounds-checked in either qemu-nbd or by the limitations of QMP. Still, it doesn't hurt to be more explicit in how we write our assertions to not have to analyze whether inadvertent wraparound is possible. Fixes: 93676c88 ("nbd: Don't send oversize strings", v4.2.0) Reported-by: Dr. David Alan Gilbert <dave@treblig.org> Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Message-ID: <20230608135653.2918540-2-eblake@redhat.com>
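A minimal before/after sketch of the pattern being fixed; variable and field names are illustrative:

    /* Before: strlen() is truncated to 32 bits *before* the assertion,
     * so a (hypothetical) 4GiB string would wrap around and pass. */
    uint32_t len = strlen(info->name);
    assert(len <= NBD_MAX_STRING_SIZE);

    /* After: check the untruncated size_t value, narrow only afterwards. */
    size_t full_len = strlen(info->name);
    assert(full_len <= NBD_MAX_STRING_SIZE);
    uint32_t len = full_len;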
2021-09-29  block/nbd: drop connection_co  (Vladimir Sementsov-Ogievskiy; 1 file, -2/+0)
OK, that's a big rewrite of the logic. Pre-patch we have an always-running coroutine - connection_co. It does reply receiving and reconnecting. And it leads to a lot of difficult and unobvious code around drained sections and context switch. We also abuse the bs->in_flight counter, which is increased for connection_co and temporarily decreased in points where we want to allow a drained section to begin. One of these places is in another file: in nbd_read_eof() in nbd/client.c. We also cancel reconnect and requests waiting for reconnect on drained begin, which is not correct. And this patch fixes that. Let's finally drop this always-running coroutine and go another way: do both reconnect and receiving in request coroutines. The detailed list of changes is below (in the sequence of diff hunks).

1. receiving coroutines are woken directly from nbd_channel_error, when we change s->state

2. nbd_co_establish_connection_cancel(): we don't have drain_begin now, and in nbd_teardown_connection() all requests should already be finished (and reconnect is done from request). So nbd_co_establish_connection_cancel() is called from nbd_cancel_in_flight() (to cancel the request that is doing nbd_co_establish_connection()) and from reconnect_delay_timer_cb() (previously we didn't need it, as reconnect delay only should cancel active requests, not the reconnection itself). But now reconnection itself is done in the separate thread (we now call nbd_client_connection_enable_retry() in nbd_open()), and we need to cancel the requests that wait in nbd_co_establish_connection() now).

2A. We do receive headers in the request coroutine. But we also should dispatch replies for other pending requests. So, nbd_connection_entry() is turned into nbd_receive_replies(), which does reply dispatching while it receives other request headers, and returns when it receives the requested header.

3. All old stuff around drained sections and context switch is dropped. In detail:
- we don't need to move connection_co to a new aio context, as we don't have connection_co anymore
- we don't have a fake "request" of connection_co (extra increasing in_flight), so don't care with it in drain_begin/end
- we don't stop reconnection during drained section anymore. This means that drain_begin may wait for a long time (up to reconnect_delay). But that's an improvement and more correct behavior, see below[*]

4. In nbd_teardown_connection() we don't have to wait for connection_co, as it is dropped. And cleanup for s->ioc and nbd_yank is moved here from the removed connection_co.

5. In nbd_co_do_establish_connection() we now should handle NBD_CLIENT_CONNECTING_NOWAIT: if a new request comes when we are in NBD_CLIENT_CONNECTING_NOWAIT, it still should call nbd_co_establish_connection() (who knows, maybe the connection was already established by another thread in the background). But we shouldn't wait: if nbd_co_establish_connection() can't return a new channel immediately the request should fail (we are in NBD_CLIENT_CONNECTING_NOWAIT state).

6. nbd_reconnect_attempt() is simplified: it's now easier to wait for other requests in the caller, so here we just assert that fact. Also delay time is now initialized here: we can easily detect the first attempt and start a timer.

7. nbd_co_reconnect_loop() is dropped, we don't need it. Reconnect retries are fully handled by the thread (nbd/client-connection.c), the delay timer we initialize in nbd_reconnect_attempt(), and we don't have to bother with s->drained and friends. nbd_reconnect_attempt() is now called from nbd_co_send_request().

8. nbd_connection_entry is dropped: reconnect is now handled by nbd_co_send_request(), receiving a reply is now handled by nbd_receive_replies(): all handled from request coroutines.

9. So, welcome the new nbd_receive_replies(), called from a request coroutine, that receives the reply header instead of nbd_connection_entry(). Like with sending requests, only one coroutine may receive at a moment. So we introduce receive_mutex, which is locked around nbd_receive_reply(). It also protects some related fields. Still, a full audit of thread-safety in the nbd driver is a separate task. The new function waits for a reply with the specified handle being received and works rather simply (a rough sketch follows this log entry). Under the mutex:
- if the current handle is 0, do the receive by hand. If another handle is received - switch to the other request coroutine, release the mutex and yield. Otherwise return success
- if the current handle == the requested handle, we are done
- otherwise, release the mutex and yield

10. In nbd_co_send_request() we now do nbd_reconnect_attempt() if needed. Also, waiting in the free_sema queue we now wait for one of two conditions:
- connectED, in_flight < MAX_NBD_REQUESTS (so we can start a new one)
- connectING, in_flight == 0, so we can call nbd_reconnect_attempt()
And this logic is protected by s->send_mutex. Also, on failure we don't have to care about the removed s->connection_co.

11. nbd_co_do_receive_one_chunk(): now instead of yield() and waiting for s->connection_co we just call the new nbd_receive_replies().

12. nbd_co_receive_one_chunk(): the place where s->reply.handle becomes 0, which means that handling of the whole reply is finished. Here we need to wake one of the coroutines sleeping in nbd_receive_replies(). If none are sleeping - do nothing. That's another behavior change: we don't have an endless recv() in the idle time. It may be considered as a drawback. If so, it may be fixed later.

13. nbd_reply_chunk_iter_receive(): don't care about the removed connection_co, just ping in_flight waiters.

14. Don't create connection_co, enable retry in the connection thread (we don't have our own reconnect loop anymore).

15. We now need to add a nbd_co_establish_connection_cancel() call in nbd_cancel_in_flight(), to cancel the request that is doing a connection attempt.

[*] OK, now we don't cancel reconnect on drain begin. That's correct: the reconnect feature leads to the possibility of long-running requests (up to reconnect delay). Still, drain begin is not a reason to kill long requests. We should wait for them. This also means that we can again reproduce a dead-lock, described in 8c517de24a8a1dcbeb54e7e12b5b0fda42a90ace. Why we are OK with it:
1. Now this is not an absolutely-dead dead-lock: the vm is unfrozen after the reconnect delay. Actually 8c517de24a8a1dc fixed a bug in NBD logic that was not described in 8c517de24a8a1dc and led to a forever dead-lock. The problem was that nobody woke the free_sema queue, but drain_begin can't finish until there is a request in the free_sema queue. Now we have a reconnect delay timer that works well.
2. It's not a problem of the NBD driver, but of the ide code, because it does drain_begin under the global mutex; the problem doesn't reproduce when using scsi instead of ide.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20210902103805.25686-5-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: grammar and comment tweaks] Signed-off-by: Eric Blake <eblake@redhat.com>
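A very rough sketch of the receive scheme from items 9-12 above, under the assumption of fields receive_mutex and reply on BDRVNBDState and an assumed wake helper nbd_recv_coroutine_wake(); this paraphrases the description, it is not the patch itself:

    static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t handle)
    {
        while (true) {
            qemu_co_mutex_lock(&s->receive_mutex);
            if (s->reply.handle == handle) {
                /* Another coroutine already received our header for us. */
                qemu_co_mutex_unlock(&s->receive_mutex);
                return 0;
            }
            if (s->reply.handle == 0) {
                /* Nobody else is receiving: read the next reply header ourselves. */
                if (nbd_receive_reply(s->bs, s->ioc, &s->reply, NULL) <= 0) {
                    qemu_co_mutex_unlock(&s->receive_mutex);
                    return -EIO;
                }
                if (s->reply.handle == handle) {
                    qemu_co_mutex_unlock(&s->receive_mutex);
                    return 0;
                }
                /* The header belongs to another request: wake its coroutine. */
                nbd_recv_coroutine_wake(s, s->reply.handle);  /* assumed helper */
            }
            /* Wait until somebody receives our header and wakes us up. */
            qemu_co_mutex_unlock(&s->receive_mutex);
            qemu_coroutine_yield();
        }
    }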
2020-07-10  nbd: Use ERRP_GUARD()  (Vladimir Sementsov-Ogievskiy; 1 file, -0/+5)
If we want to check an error after an errp-function call, we need to introduce local_err and then propagate it to errp. Instead, use the ERRP_GUARD() macro; the benefits are:
1. No need for an explicit error_propagate() call
2. No need for an explicit local_err variable: use errp directly
3. ERRP_GUARD() leaves errp as is if it's not NULL or &error_fatal, which means that we don't break error_abort (we'll abort on error_set, not on error_propagate)
If we want to add some info to errp (by error_prepend() or error_append_hint()), we must use the ERRP_GUARD() macro. Otherwise, this info will not be added when errp == &error_fatal (the program will exit prior to the error_append_hint() or error_prepend() call). Fix several such cases, e.g. in nbd_read().
This commit is generated by the command:
  sed -n '/^Network Block Device (NBD)$/,/^$/{s/^F: //p}' MAINTAINERS | \
    xargs git ls-files | grep '\.[hc]$' | \
    xargs spatch \
      --sp-file scripts/coccinelle/errp-guard.cocci \
      --macro-file scripts/cocci-macro-file.h \
      --in-place --no-show-diff --max-width 80
Reported-by: Kevin Wolf <kwolf@redhat.com> Reported-by: Greg Kurz <groug@kaod.org> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> [Commit message tweaked] Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20200707165037.1026246-8-armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> [ERRP_AUTO_PROPAGATE() renamed to ERRP_GUARD(), and auto-propagated-errp.cocci to errp-guard.cocci. Commit message tweaked again.]
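A before/after sketch of the pattern; the functions are modelled on the nbd_read() case mentioned above but simplified, and the names nbd_read_old/nbd_read_new are illustrative only:

    /* Before: a local Error exists only so the message can be decorated. */
    static int nbd_read_old(QIOChannel *ioc, void *buf, size_t size, Error **errp)
    {
        Error *local_err = NULL;

        if (qio_channel_read_all(ioc, buf, size, &local_err) < 0) {
            error_prepend(&local_err, "Failed to read data: ");
            error_propagate(errp, local_err);
            return -1;
        }
        return 0;
    }

    /* After: ERRP_GUARD() lets us decorate errp directly, even when the
     * caller passed NULL or &error_fatal. */
    static int nbd_read_new(QIOChannel *ioc, void *buf, size_t size, Error **errp)
    {
        ERRP_GUARD();

        if (qio_channel_read_all(ioc, buf, size, errp) < 0) {
            error_prepend(errp, "Failed to read data: ");
            return -1;
        }
        return 0;
    }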
2019-11-18  nbd: Don't send oversize strings  (Eric Blake; 1 file, -3/+15)
Qemu as server currently won't accept export names larger than 256 bytes, nor create dirty bitmap names longer than 1023 bytes, so most uses of qemu as client or server have no reason to get anywhere near the NBD spec maximum of a 4k limit per string. However, we weren't actually enforcing things, ignoring when the remote side violates the protocol on input, and also having several code paths where we send oversize strings on output (for example, qemu-nbd --description could easily send more than 4k). Tighten things up as follows:
client:
- Perform bounds check on export name and dirty bitmap request prior to handing it to server
- Validate that copied server replies are not too long (ignoring NBD_INFO_* replies that are not copied is not too bad)
server:
- Perform bounds check on export name and description prior to advertising it to client
- Reject client name or metadata query that is too long
- Adjust things to allow full 4k name limit rather than previous 256 byte limit
Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20191114024635.11363-4-eblake@redhat.com> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
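The client-side half of this is essentially a bounds check of the following shape (a sketch; the exact call sites and error wording differ, and NBD_MAX_STRING_SIZE is the 4k spec limit):

    if (strlen(info->name) > NBD_MAX_STRING_SIZE) {
        error_setg(errp, "export name too long to send to server");
        return -EINVAL;
    }
    if (info->x_dirty_bitmap &&
        strlen(info->x_dirty_bitmap) > NBD_MAX_STRING_SIZE) {
        error_setg(errp, "bitmap name too long to send to server");
        return -EINVAL;
    }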
2019-09-24  nbd/client: Add hint when TLS is missing  (Eric Blake; 1 file, -0/+1)
I received an off-list report of failure to connect to an NBD server expecting an x509 certificate, when the client was attempting something similar to this command line:

$ ./x86_64-softmmu/qemu-system-x86_64 -name 'blah' -machine q35 -nodefaults \
  -object tls-creds-x509,id=tls0,endpoint=client,dir=$path_to_certs \
  -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0,addr=0x6 \
  -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=nbd:localhost:9000,werror=stop,rerror=stop,tls-creds=tls0 \
  -device scsi-hd,id=image1,drive=drive_image1,bootindex=0
qemu-system-x86_64: -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=nbd:localhost:9000,werror=stop,rerror=stop,tls-creds=tls0: TLS negotiation required before option 7 (go)
server reported: Option 0x7 not permitted before TLS

The problem? As specified, -drive is trying to pass tls-creds to the raw format driver instead of the nbd protocol driver, but before we get to the point where we can detect that raw doesn't know what to do with tls-creds, the nbd driver has already failed because the server complained. The fix to the broken command line? Pass '...,file.tls-creds=tls0' to ensure the tls-creds option is handed to nbd, not raw. But since the error message was rather cryptic, I'm trying to improve the error message. With this patch, the error message adds a line:

qemu-system-x86_64: -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=nbd:localhost:9000,werror=stop,rerror=stop,tls-creds=tls0: TLS negotiation required before option 7 (go)
Did you forget a valid tls-creds?
server reported: Option 0x7 not permitted before TLS

And with luck, someone grepping for that error message will find this commit message and figure out their command line mistake. Sadly, the only mention of file.tls-creds in our docs relates to an --image-opts use of PSK encryption with qemu-img as the client, rather than x509 certificate encryption with qemu-kvm as the client. CC: Tingting Mao <timao@redhat.com> CC: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190907172055.26870-1-eblake@redhat.com> [eblake: squash in iotest 233 fix] Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
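The added hint corresponds roughly to this fragment in the option-error handling path (paraphrased; nbd_opt_lookup() is the existing option-name helper):

    case NBD_REP_ERR_TLS_REQD:
        error_setg(errp, "TLS negotiation required before option %" PRIu32 " (%s)",
                   option, nbd_opt_lookup(option));
        error_append_hint(errp, "Did you forget a valid tls-creds?\n");
        break;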
2019-09-05  nbd: Tolerate more errors to structured reply request  (Eric Blake; 1 file, -30/+33)
A server may have a reason to reject a request for structured replies, beyond just not recognizing them as a valid request; similarly, it may have a reason for rejecting a request for a meta context. It doesn't hurt us to continue talking to such a server; otherwise 'qemu-nbd --list' of such a server fails to display all available details about the export. Encountered when temporarily tweaking nbdkit to reply with NBD_REP_ERR_POLICY. Present since structured reply support was first added (commit d795299b reused starttls handling, but starttls is different in that we can't fall back to other behavior on any error). Note that for an unencrypted client trying to connect to a server that requires encryption, this defers the point of failure to when we finally execute a strict command (such as NBD_OPT_GO or NBD_OPT_LIST), now that the intermediate NBD_OPT_STRUCTURED_REPLY does not diagnose NBD_REP_ERR_TLS_REQD as fatal; but as the protocol eventually gets us to a command where we can't continue onwards, the changed error message doesn't cause any security concerns. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190824172813.29720-3-eblake@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> [eblake: fix iotest 233]
2019-09-05  nbd: Use g_autofree in a few places  (Eric Blake; 1 file, -15/+7)
Thanks to our recent move to use glib's g_autofree, I can join the bandwagon. Getting rid of gotos is fun ;) There are probably more places where we could register cleanup functions and get rid of more gotos; this patch just focuses on the labels that existed merely to call g_free. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190824172813.29720-2-eblake@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
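A small sketch of the pattern (the function and variable names here are made up for illustration): with g_autofree the buffer is released on every return path, so the cleanup label and its goto disappear.

    static int read_payload(QIOChannel *ioc, uint32_t payload_len, Error **errp)
    {
        g_autofree char *payload = g_malloc(payload_len + 1);

        if (nbd_read(ioc, payload, payload_len, "payload", errp) < 0) {
            return -1;              /* freed automatically; no goto needed */
        }
        payload[payload_len] = '\0';
        return 0;
    }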
2019-08-16  Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2019-08-15' into staging  (Peter Maydell; 1 file, -5/+11)
nbd patches for 2019-08-15
- Addition of InetSocketAddress keep-alive
- Addition of BDRV_REQ_PREFETCH for more efficient copy-on-read
- Initial refactoring in preparation of NBD reconnect
# gpg: Signature made Thu 15 Aug 2019 19:28:41 BST
# gpg: using RSA key A7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>" [full]
# gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>" [full]
# gpg: aka "[jpeg image of size 6874]" [full]
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A
* remotes/ericb/tags/pull-nbd-2019-08-15:
  block/nbd: refactor nbd connection parameters
  block/nbd: add cmdline and qapi parameter reconnect-delay
  block/nbd: move from quit to state
  block/nbd: use non-blocking io channel for nbd negotiation
  block/nbd: split connection_co start out of nbd_client_connect
  nbd: improve CMD_CACHE: use BDRV_REQ_PREFETCH
  block/stream: use BDRV_REQ_PREFETCH
  block: implement BDRV_REQ_PREFETCH
  qapi: Add InetSocketAddress member keep-alive
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  Include qemu/queue.h slightly less  (Markus Armbruster; 1 file, -0/+1)
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190812052359.30071-20-armbru@redhat.com>
2019-08-15  block/nbd: use non-blocking io channel for nbd negotiation  (Vladimir Sementsov-Ogievskiy; 1 file, -5/+11)
There is no reason to use a blocking channel for negotiation, and we'll benefit from this in the upcoming reconnect feature, as qio_channel reads and writes will do qemu_coroutine_yield() while waiting for I/O completion. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20190618114328.55249-3-vsementsov@virtuozzo.com> Signed-off-by: Eric Blake <eblake@redhat.com>
2019-04-08  nbd/client: Fix error message for server with unusable sizing  (Eric Blake; 1 file, -1/+1)
Add a missing space to the error message used when giving up on a server that insists on an alignment which renders the last few bytes of the export unreadable. Fixes: 3add3ab78 Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190404145226.32649-1-eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com>
2019-04-01  nbd/client: Reject inaccessible tail of inconsistent server  (Eric Blake; 1 file, -0/+8)
The NBD spec suggests that a server should never advertise a size inconsistent with its minimum block alignment, as that tail is effectively inaccessible to a compliant client obeying those block constraints. Since we have a habit of rounding up rather than truncating, to avoid losing the last few bytes of user input, and we cannot access the tail when the server advertises bogus block sizing, abort the connection to alert the server to fix their bug. And rejecting such servers matches what we already did for a min_block that was not a power of 2 or which was larger than max_block. Does not impact either qemu (which always sends properly aligned sizes) or nbdkit (which does not send minimum block requirements yet); so this is mostly aimed at new NBD server implementations, and ensures that the rest of our code can assume the size is aligned. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190330155704.24191-1-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
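The added rejection amounts to a check of this shape (a sketch; field names follow NBDExportInfo, the exact message differs):

    if (info->min_block && !QEMU_IS_ALIGNED(info->size, info->min_block)) {
        error_setg(errp, "server size %" PRIu64 " is not a multiple of its "
                   "advertised minimum block size %" PRIu32,
                   info->size, info->min_block);
        return -EINVAL;
    }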
2019-02-25  nbd: Use low-level QIOChannel API in nbd_read_eof()  (Kevin Wolf; 1 file, -9/+37)
Instead of using the convenience wrapper qio_channel_read_all_eof(), use the lower level QIOChannel API. This means duplicating some code, but we'll need this because this coroutine yield is special: We want it to be interruptible so that nbd_client_attach_aio_context() can correctly reenter the coroutine. This moves the bdrv_dec/inc_in_flight() pair into nbd_read_eof(), so that connection_co will always sit in this exact qio_channel_yield() call when bdrv_drain() returns. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
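A condensed sketch of the interruptible read loop described above; error handling and the exact in-flight bookkeeping are simplified relative to the real function:

    static coroutine_fn int nbd_read_eof(BlockDriverState *bs, QIOChannel *ioc,
                                         void *buffer, size_t size, Error **errp)
    {
        bool partial = false;

        assert(size);
        while (size > 0) {
            struct iovec iov = { .iov_base = buffer, .iov_len = size };
            ssize_t len = qio_channel_readv(ioc, &iov, 1, errp);

            if (len == QIO_CHANNEL_ERR_BLOCK) {
                bdrv_dec_in_flight(bs);
                qio_channel_yield(ioc, G_IO_IN);   /* interruptible yield point */
                bdrv_inc_in_flight(bs);
                continue;
            } else if (len < 0) {
                return -EIO;
            } else if (len == 0) {
                if (partial) {
                    error_setg(errp,
                               "Unexpected end-of-file before all bytes were read");
                    return -EIO;
                }
                return 0;   /* clean EOF before any data */
            }
            partial = true;
            buffer = (char *)buffer + len;
            size -= len;
        }
        return 1;
    }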
2019-02-25  nbd: Move nbd_read_eof() to nbd/client.c  (Kevin Wolf; 1 file, -1/+21)
The only caller of nbd_read_eof() is nbd_receive_reply(), so it doesn't have to live in the header file, but can move next to its caller. Also add the missing coroutine_fn to the function and its caller. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
2019-02-04  nbd: generalize usage of nbd_read  (Vladimir Sementsov-Ogievskiy; 1 file, -59/+29)
We generally do very similar things around nbd_read: error_prepend specifying what we have tried to read, and be_to_cpu conversion of integers. So, it seems reasonable to move common things to helper functions, which: 1. simplify code a bit 2. generalize nbd_read error descriptions, all starting with "Failed to read" 3. make it more difficult to forget to convert things from BE Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20190128165830.165170-1-vsementsov@virtuozzo.com> [eblake: rename macro to DEF_NBD_READ_N and formatting tweaks; checkpatch has false positive complaint] Signed-off-by: Eric Blake <eblake@redhat.com>
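The DEF_NBD_READ_N helper mentioned above is approximately of this shape (a sketch of the generated nbd_read16/32/64 readers; the real header may differ in detail):

    #define DEF_NBD_READ_N(bits)                                        \
    static inline int nbd_read##bits(QIOChannel *ioc,                   \
                                     uint##bits##_t *val,               \
                                     const char *desc, Error **errp)    \
    {                                                                   \
        /* Read, prefix any error with what was being read, convert BE. */ \
        if (nbd_read(ioc, val, sizeof(*val), desc, errp) < 0) {         \
            return -1;                                                  \
        }                                                               \
        *val = be##bits##_to_cpu(*val);                                 \
        return 0;                                                       \
    }

    DEF_NBD_READ_N(16)
    DEF_NBD_READ_N(32)
    DEF_NBD_READ_N(64)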
2019-01-21  nbd/client: Work around 3.0 bug for listing meta contexts  (Eric Blake; 1 file, -0/+19)
Commit 3d068aff forgot to advertise available qemu: contexts when the client requests a list with 0 queries. Furthermore, 3.0 shipped with a qemu-img hack of x-dirty-bitmap (commit 216ee365) that _silently_ acts as though the entire image is clean if a requested bitmap is not present. Both bugs have been recently fixed, so that a modern qemu server gives full context output right away, and the client refuses a connection if a requested x-dirty-bitmap was not found. Still, it is likely that there will be users that have to work with a mix of old and new qemu versions, depending on which features get backported where, at which point being able to rely on 'qemu-img --list' output to know for sure whether a given NBD export has the desired dirty bitmap is much nicer than blindly connecting and risking that the entire image may appear clean. We can make our --list code smart enough to work around buggy servers by tracking whether we've seen any qemu: replies in the original 0-query list; if not, repeat with a single query on "qemu:" (which may still have no replies, but then we know for sure we didn't trip up on the server bug). Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190117193658.16413-21-eblake@redhat.com>
2019-01-21  nbd/client: Add meta contexts to nbd_receive_export_list()  (Eric Blake; 1 file, -2/+39)
We want to be able to detect whether a given qemu NBD server is exposing the right export(s) and dirty bitmaps, at least for regression testing. We could use 'nbd-client -l' from the upstream NBD project to list exports, but it's annoying to rely on out-of-tree binaries; furthermore, nbd-client doesn't necessarily know about all of the qemu NBD extensions. Thus, we plan on adding a new mode to qemu-nbd that merely sniffs all possible information from the server during handshake phase, then disconnects and dumps the information. This patch continues the work of the previous patch, by adding the ability to track the list of available meta contexts into NBDExportInfo. It benefits from the recent refactoring patches with a new nbd_list_meta_contexts() that reuses much of the same framework as setting a meta context. Note: a malicious server could exhaust memory of a client by feeding an unending loop of contexts; perhaps we could place a limit on how many we are willing to receive. But this is no different from our earlier analysis on a server sending an unending list of exports, and the death of a client due to memory exhaustion when the client was going to exit soon anyways is not really a denial of service attack. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190117193658.16413-19-eblake@redhat.com>
2019-01-21  nbd/client: Add nbd_receive_export_list()  (Eric Blake; 1 file, -2/+130)
We want to be able to detect whether a given qemu NBD server is exposing the right export(s) and dirty bitmaps, at least for regression testing. We could use 'nbd-client -l' from the upstream NBD project to list exports, but it's annoying to rely on out-of-tree binaries; furthermore, nbd-client doesn't necessarily know about all of the qemu NBD extensions. Thus, we plan on adding a new mode to qemu-nbd that merely sniffs all possible information from the server during handshake phase, then disconnects and dumps the information. This patch adds the low-level client code for grabbing the list of exports. It benefits from the recent refactoring patches, in order to share as much code as possible when it comes to doing validation of server replies. The resulting information is stored in an array of NBDExportInfo which has been expanded to any description string, along with a convenience function for freeing the list. Note: a malicious server could exhaust memory of a client by feeding an unending loop of exports; perhaps we should place a limit on how many we are willing to receive. But note that a server could reasonably be serving an export for every file in a large directory, where an arbitrary limit in the client means we can't list anything from such a server; the same happens if we just run until the client fails to malloc() and thus dies by an abort(), where the limit is no longer arbitrary but determined by available memory. Since the client is already planning on being short-lived, it's hard to call this a denial of service attack that would starve off other uses, so it does not appear to be a security issue. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Message-Id: <20190117193658.16413-18-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2019-01-21  nbd/client: Refactor nbd_opt_go() to support NBD_OPT_INFO  (Eric Blake; 1 file, -14/+22)
Rename the function to nbd_opt_info_or_go() with an added parameter and slight changes to comments and trace messages, in order to reuse the function for NBD_OPT_INFO. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190117193658.16413-17-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2019-01-21  nbd/client: Pull out oldstyle size determination  (Eric Blake; 1 file, -17/+32)
Another refactoring creating nbd_negotiate_finish_oldstyle() for further reuse during 'qemu-nbd --list'. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Message-Id: <20190117193658.16413-16-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2019-01-21  nbd/client: Split handshake into two functions  (Eric Blake; 1 file, -50/+95)
An upcoming patch will add the ability for qemu-nbd to list the services provided by an NBD server. Share the common code of the TLS handshake by splitting the initial exchange into a separate function, leaving only the export handling in the original function. Functionally, there should be no change in behavior in this patch, although some of the code motion may be difficult to follow due to indentation changes (view with 'git diff -w' for a smaller changeset). I considered an enum for the return code coordinating state between the two functions, but in the end just settled with ample comments. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190117193658.16413-15-eblake@redhat.com>
2019-01-21  nbd/client: Refactor return of nbd_receive_negotiate()  (Eric Blake; 1 file, -28/+23)
The function could only ever return 0 or -EINVAL; make this clearer by dropping a useless 'fail:' label. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190117193658.16413-14-eblake@redhat.com>
2019-01-21  nbd/client: Split out nbd_receive_one_meta_context()  (Eric Blake; 1 file, -57/+90)
Extract portions of nbd_negotiate_simple_meta_context() to a new function nbd_receive_one_meta_context() that copies the pattern of nbd_receive_list() for performing the argument validation of one reply. The error message when the server replies with more than one context changes slightly, but that shouldn't happen in the common case. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190117193658.16413-13-eblake@redhat.com>
2019-01-21  nbd/client: Split out nbd_send_meta_query()  (Eric Blake; 1 file, -20/+44)
Refactor nbd_negotiate_simple_meta_context() to pull out the code that can be reused to send a LIST request for 0 or 1 query. No semantic change. The old comment about 'sizeof(uint32_t)' being equivalent to '/* number of queries */' is no longer needed, now that we are computing 'sizeof(queries)' instead. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Message-Id: <20190117193658.16413-12-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2019-01-21  nbd/client: Change signature of nbd_negotiate_simple_meta_context()  (Eric Blake; 1 file, -28/+25)
Pass 'info' instead of three separate parameters related to info, when requesting the server to set the meta context. Update the NBDExportInfo struct to rename the received id field to match the fact that we are currently overloading the field to match whatever context the user supplied through the x-dirty-bitmap hack, as well as adding a TODO comment to remind future patches about a desire to request two contexts at once. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190117193658.16413-11-eblake@redhat.com>
2019-01-21  nbd/client: Move export name into NBDExportInfo  (Eric Blake; 1 file, -21/+18)
Refactor the 'name' parameter of nbd_receive_negotiate() from being a separate parameter into being part of the in-out 'info'. This also spills over to a simplification of nbd_opt_go(). The main driver for this refactoring is that an upcoming patch would like to add support to qemu-nbd to list information about all exports available on a server, where the name(s) will be provided by the server instead of the client. But another benefit is that we can now allow the client to explicitly specify the empty export name "" even when connecting to an oldstyle server (even if qemu is no longer such a server after commit 7f7dfe2a). Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190117193658.16413-10-eblake@redhat.com>
2019-01-21  nbd/client: Refactor nbd_receive_list()  (Eric Blake; 1 file, -33/+58)
Right now, nbd_receive_list() is only called by nbd_receive_query_exports(), which in turn is only called if the server lacks NBD_OPT_GO but has working option negotiation, and is merely used as a quality-of-implementation trick since servers can't give decent errors for NBD_OPT_EXPORT_NAME. However, servers that lack NBD_OPT_GO are becoming increasingly rare (nbdkit was a latecomer, in Aug 2018, but qemu has been such a server since commit f37708f6 in July 2017 and released in 2.10), so it no longer makes sense to micro-optimize that function for performance. Furthermore, when debugging a server's implementation, tracing the full reply (both names and descriptions) is useful, not to mention that upcoming patches adding 'qemu-nbd --list' will want to collect that data. And when you consider that a server can send an export name up to the NBD protocol length limit of 4k; but our current NBD_MAX_NAME_SIZE is only 256, we can't trace all valid server names without more storage, but 4k is large enough that the heap is better than the stack for long names. Thus, I'm changing the division of labor, with nbd_receive_list() now always malloc'ing a result on success (the malloc is bounded by the fact that we reject servers with a reply length larger than 32M), and moving the comparison to 'wantname' to the caller. There is a minor change in behavior where a server with 0 exports (an immediate NBD_REP_ACK reply) is now no longer distinguished from a server without LIST support (NBD_REP_ERR_UNSUP); this information could be preserved with a complication to the calling contract to provide a bit more information, but I didn't see the point. After all, the worst that can happen if our guess at a match is wrong is that the caller will get a cryptic disconnect when NBD_OPT_EXPORT_NAME fails (which is no different from what would happen if we had not tried LIST), while treating an empty list as immediate failure would prevent connecting to really old servers that really did lack LIST. Besides, NBD servers with 0 exports are rare (qemu can do it when using QMP nbd-server-start without nbd-server-add - but qemu understands NBD_OPT_GO and thus won't tickle this change in behavior). Fix the spelling of foundExport to match coding standards while in the area. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190117193658.16413-9-eblake@redhat.com>
2019-01-05  nbd/client: Drop pointless buf variable  (Eric Blake; 1 file, -15/+7)
There's no need to read into a temporary buffer (oversized since commit 7d3123e1) followed by a byteswap into a uint64_t to check for a magic number via memcmp(), when the code immediately below demonstrates reading into the uint64_t then byteswapping in place and checking for a magic number via integer math. What's more, having a different error message when the server's first reply byte is 0 is unusual - it's no different from any other wrong magic number, and we already detected short reads. That whole strlen() issue has been present and useless since commit 1d45f8b5 in 2010; perhaps it was leftover debugging (since the correct magic number happens to be ASCII)? Make the error messages more consistent and detailed while touching things. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20181215135324.152629-9-eblake@redhat.com>
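The resulting pattern is roughly the following (a sketch; the constant name and error wording are approximate, and the helpers are shown with the signatures they had at that time):

    uint64_t magic;

    if (nbd_read(ioc, &magic, sizeof(magic), errp) < 0) {
        error_prepend(errp, "Failed to read initial magic: ");
        return -EINVAL;
    }
    magic = be64_to_cpu(magic);      /* byteswap in place, no scratch buffer */
    if (magic != NBD_INIT_MAGIC) {
        error_setg(errp, "Bad initial magic received: 0x%" PRIx64, magic);
        return -EINVAL;
    }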
2019-01-05  qemu-nbd: Fail earlier for -c/-d on non-linux  (Eric Blake; 1 file, -17/+1)
Connecting to a /dev/nbdN device is a Linux-specific action. We were already masking -c and -d from 'qemu-nbd --help' on non-linux. However, while -d fails with a sensible error message, it took hunting through a couple of files to prove that. What's more, the code for -c doesn't fail until after it has created a pthread and tried to open a device - possibly even printing an error message with %m on a non-Linux platform in spite of the comment that %m is glibc-specific. Make the failure happen sooner, then get rid of stubs that are no longer needed because of the early exits. While at it: tweak the blank newlines in --help output to be consistent, whether or not built on Linux. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20181215135324.152629-7-eblake@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2019-01-05  nbd/client: More consistent error messages  (Eric Blake; 1 file, -9/+12)
Consolidate on using decimal (not hex), on outputting the option reply name (not just value), and a consistent comma between clauses, when the client reports protocol discrepancies from the server. While it won't affect normal operation, it makes debugging additions easier. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20181215135324.152629-6-eblake@redhat.com>
2019-01-04  nbd/client: Trace all server option error messages  (Eric Blake; 1 file, -0/+2)
Not all servers send free-form text alongside option error replies, but for servers that do (such as qemu), we pass the server's message as a hint alongside our own error reporting. However, it would also be useful to trace such server messages, since we can't guarantee how the hint may be consumed. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20181218225714.284495-3-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2018-10-03  nbd: Don't take address of fields in packed structs  (Peter Maydell; 1 file, -22/+22)
Taking the address of a field in a packed struct is a bad idea, because it might not be actually aligned enough for that pointer type (and thus cause a crash on dereference on some host architectures). Newer versions of clang warn about this. Avoid the bug by not using the "modify in place" byte swapping functions. This patch was produced with the following spatch script:

@@
expression E;
@@
-be16_to_cpus(&E);
+E = be16_to_cpu(E);
@@
expression E;
@@
-be32_to_cpus(&E);
+E = be32_to_cpu(E);
@@
expression E;
@@
-be64_to_cpus(&E);
+E = be64_to_cpu(E);
@@
expression E;
@@
-cpu_to_be16s(&E);
+E = cpu_to_be16(E);
@@
expression E;
@@
-cpu_to_be32s(&E);
+E = cpu_to_be32(E);
@@
expression E;
@@
-cpu_to_be64s(&E);
+E = cpu_to_be64(E);

Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20180927164200.15097-1-peter.maydell@linaro.org> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: rebase, and squash in missed changes] Signed-off-by: Eric Blake <eblake@redhat.com>
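The hazard and the fix in miniature (struct and field names are illustrative, not taken from the patch):

    struct NBDPackedHdr {
        uint32_t magic;
        uint32_t length;
    } QEMU_PACKED;

    static void fixup_hdr(struct NBDPackedHdr *hdr)
    {
        /* Before: be32_to_cpus(&hdr->length); forms a pointer to a packed
         * field, which may be misaligned and can crash on some hosts. */
        /* After: read the member, convert, write it back; the compiler
         * knows the member access may be unaligned and emits safe code. */
        hdr->magic = be32_to_cpu(hdr->magic);
        hdr->length = be32_to_cpu(hdr->length);
    }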
2018-07-02  nbd/client: Add x-dirty-bitmap to query bitmap from server  (Eric Blake; 1 file, -2/+2)
In order to test that the NBD server is properly advertising dirty bitmaps, we need a bare minimum client that can request and read the context. Since feature freeze for 3.0 is imminent, this is the smallest workable patch, which replaces the qemu block status report with the results of the NBD server's dirty bitmap (making it very easy to use 'qemu-img map --output=json' to learn where the dirty portions are). Note that the NBD protocol defines a dirty section with the same bit but opposite sense that normal "base:allocation" uses to report an allocated section; so in qemu-img map output, "data":true corresponds to clean, "data":false corresponds to dirty. A more complete solution that allows dirty bitmaps to be queried at the same time as normal block status will be required before this addition can lose the x- prefix. Until then, the fact that this replaces normal status with dirty status means actions like 'qemu-img convert' will likely misbehave due to treating dirty regions of the file as if they are unallocated. The next patch adds an iotest to exercise this new code. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20180702191458.28741-2-eblake@redhat.com>
2018-05-04  nbd/client: Fix error messages during NBD_INFO_BLOCK_SIZE  (Eric Blake; 1 file, -4/+10)
A missing space makes for poor error messages, and sizes can't go negative. Also, we missed diagnosing a server that sends a maximum block size less than the minimum. Fixes: 081dd1fe CC: qemu-stable@nongnu.org Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20180501154654.943782-1-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2018-05-04  nbd/client: fix nbd_negotiate_simple_meta_context  (Vladimir Sementsov-Ogievskiy; 1 file, -2/+2)
Initialize the received variable. Otherwise, it is possible for the server to answer without any contexts, but we will set context_id to something random (received_id is not initialized either) and return 1, which is wrong. To solve it, just initialize received to false. Initialize received_id too, just to make all possible checkers happy. The bug was introduced in 78a33ab58782efdb206de14 "nbd: BLOCK_STATUS for standard get_block_status function: client part" with the whole function. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20180427142002.21930-2-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> CC: qemu-stable@nongnu.org Signed-off-by: Eric Blake <eblake@redhat.com>
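The fix in miniature (local variable names as described above):

    /* Without these initializers, a server replying NBD_REP_ACK with no
     * preceding NBD_REP_META_CONTEXT would leave both values as stack
     * garbage, and the function could claim success with a random id. */
    bool received = false;
    uint32_t received_id = 0;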
2018-04-02  nbd: trace meta context negotiation  (Eric Blake; 1 file, -0/+2)
Having a more detailed log of the interaction between client and server is invaluable in debugging how meta context negotiation actually works. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20180330130950.1931229-1-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2018-04-02  nbd/client: Correctly handle bad server REP_META_CONTEXT  (Eric Blake; 1 file, -7/+21)
It's never a good idea to blindly read for size bytes as returned by the server without first validating that the size is within bounds; a malicious or buggy server could cause us to hang or get out of sync from reading further messages. It may be smarter to try and teach the client to cope with unexpected context ids by silently ignoring them instead of hanging up on the server, but for now, if the server doesn't reply with exactly the one context we expect, it's easier to just give up - however, if we give up for any reason other than an I/O failure, we might as well try to politely tell the server we are quitting rather than continuing. Fix some typos in the process. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20180329231837.1914680-1-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2018-03-13  nbd: BLOCK_STATUS for standard get_block_status function: client part  (Vladimir Sementsov-Ogievskiy; 1 file, -0/+117)
Minimal implementation: only one extent in the server's answer is supported. The flag NBD_CMD_FLAG_REQ_ONE is used to force this behavior. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20180312152126.286890-6-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: grammar tweaks, fix min_block check and 32-bit cap, use -1 instead of errno on failure in nbd_negotiate_simple_meta_context, ensure that block status makes progress on success] Signed-off-by: Eric Blake <eblake@redhat.com>
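For reference, the extent descriptor carried by a block-status reply is just two 32-bit big-endian fields, per the NBD spec; QEMU declares an equivalent packed struct, roughly:

    typedef struct NBDExtent {
        uint32_t length;  /* length of the extent in bytes */
        uint32_t flags;   /* NBD_STATE_HOLE, NBD_STATE_ZERO, ... */
    } QEMU_PACKED NBDExtent;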
2018-03-06  qio: non-default context for TLS handshake  (Peter Xu; 1 file, -0/+1)
A new parameter "context" is added to qio_channel_tls_handshake() to allow the TLS handshake to be run on a non-default context. Still, no functional change. Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
2018-03-01  nbd/client: fix error messages in nbd_handle_reply_err  (Vladimir Sementsov-Ogievskiy; 1 file, -12/+12)
1. NBD_REP_ERR_INVALID is not only about length, so, make message more general 2. hex format is not very good: it's hard to read something like "option a (set meta context)", so switch to dec. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <1518702707-7077-6-git-send-email-vsementsov@virtuozzo.com> [eblake: expand scope of patch: ALL uses of nbd_opt_lookup and nbd_rep_lookup are now decimal] Signed-off-by: Eric Blake <eblake@redhat.com>
2018-01-10  nbd: rename nbd_option and nbd_opt_reply  (Vladimir Sementsov-Ogievskiy; 1 file, -6/+6)
Rename nbd_option and nbd_opt_reply to NBDOption and NBDOptionReply to correspond to Qemu coding style and other structures here. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20171122101958.17065-5-vsementsov@virtuozzo.com> Signed-off-by: Eric Blake <eblake@redhat.com>
2017-11-17  nbd/client: Don't hard-disconnect on ESHUTDOWN from server  (Eric Blake; 1 file, -6/+0)
The NBD spec says that a server may fail any transmission request with ESHUTDOWN when it is apparent that no further request from the client can be successfully honored. The client is supposed to then initiate a soft shutdown (wait for all remaining in-flight requests to be answered, then send NBD_CMD_DISC). However, since qemu's server never uses ESHUTDOWN errors, this code was mostly untested since its introduction in commit b6f5d3b5. More recently, I learned that nbdkit as the NBD server is able to send ESHUTDOWN errors, so I finally tested this code, and noticed that our client was special-casing ESHUTDOWN to cause a hard shutdown (immediate disconnect, with no NBD_CMD_DISC), but only if the server sends this error as a simple reply. Further investigation found that commit d2febedb introduced a regression where structured replies behave differently than simple replies - but that the structured reply behavior is more in line with the spec (even if we still lack code in nbd-client.c to properly quit sending further requests). So this patch reverts the portion of b6f5d3b5 that introduced an improper hard-disconnect special-case at the lower level, and leaves the future enhancement of a nicer soft-disconnect at the higher level for another day. CC: qemu-stable@nongnu.org Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20171113194857.13933-1-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2017-11-17  nbd/client: Use error_prepend() correctly  (Eric Blake; 1 file, -24/+26)
When using error_prepend(), it is necessary to end with a space in the format string; otherwise, messages come out incorrectly, such as when connecting to a socket that hangs up immediately:

can't open device nbd://localhost:10809/: Failed to read dataUnexpected end-of-file before all bytes were read

Originally botched in commit e44ed99d, then several more instances added in the meantime. Pre-existing and not fixed here: we are inconsistent on capitalization; some of our messages start with lower case, and others start with upper, although the use of error_prepend() is much nicer to read when all fragments consistently start with lower. CC: qemu-stable@nongnu.org Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20171113152424.25381-1-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Markus Armbruster <armbru@redhat.com>
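In other words (a two-line sketch):

    /* Wrong: the fragments run together, as in the example above. */
    error_prepend(errp, "Failed to read data");
    /* Right: terminate the prefix with ": " (or at least a space). */
    error_prepend(errp, "Failed to read data: ");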
2017-11-09  nbd/client: Nicer trace of structured reply  (Eric Blake; 1 file, -1/+3)
It's useful to know which structured reply chunk is being processed. Missed in commit d2febedb. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20171108215703.9295-4-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2017-10-30  nbd: Minimal structured read for client  (Vladimir Sementsov-Ogievskiy; 1 file, -0/+12)
Minimal implementation: for a structured error, only error_report() the error message. Note that test 83 is now more verbose, because the implementation prints more warnings about unexpected communication errors; perhaps future patches should tone things down by using trace messages instead of warnings, but the common case of successful communication is no noisier than before. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20171027104037.8319-13-eblake@redhat.com>
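For orientation, the structured reply chunk header that this client code starts to parse looks like this on the wire, per the NBD protocol (QEMU declares an equivalent packed struct; field comments are a sketch):

    typedef struct NBDStructuredReplyChunk {
        uint32_t magic;   /* NBD_STRUCTURED_REPLY_MAGIC (0x668e33ef) */
        uint16_t flags;   /* NBD_REPLY_FLAG_DONE, ... */
        uint16_t type;    /* NBD_REPLY_TYPE_OFFSET_DATA, _ERROR, ... */
        uint64_t handle;  /* matches the handle of the original request */
        uint32_t length;  /* length of the chunk payload that follows */
    } QEMU_PACKED NBDStructuredReplyChunk;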