path: root/nbd/server.c
Age | Commit message | Author | Files | Lines (-/+)
2020-01-08 | nbd: fix uninitialized variable warning | Pan Nengyuan | 1 | -9/+1
Fixes: /mnt/sdb/qemu/nbd/server.c: In function 'nbd_handle_request': /mnt/sdb/qemu/nbd/server.c:2313:9: error: 'ret' may be used uninitialized in this function [-Werror=maybe-uninitialized] int ret; Reported-by: Euler Robot <euler.robot@huawei.com> Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Laurent Vivier <laurent@vivier.eu> Message-Id: <20200108025132.46956-1-pannengyuan@huawei.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2019-11-18 | nbd: Don't send oversize strings | Eric Blake | 1 | -5/+15
Qemu as server currently won't accept export names larger than 256 bytes, nor create dirty bitmap names longer than 1023 bytes, so most uses of qemu as client or server have no reason to get anywhere near the NBD spec maximum of a 4k limit per string. However, we weren't actually enforcing things, ignoring when the remote side violates the protocol on input, and also having several code paths where we send oversize strings on output (for example, qemu-nbd --description could easily send more than 4k). Tighten things up as follows: client: - Perform bounds check on export name and dirty bitmap request prior to handing it to server - Validate that copied server replies are not too long (ignoring NBD_INFO_* replies that are not copied is not too bad) server: - Perform bounds check on export name and description prior to advertising it to client - Reject client name or metadata query that is too long - Adjust things to allow full 4k name limit rather than previous 256 byte limit Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20191114024635.11363-4-eblake@redhat.com> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
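As a rough illustration of the kind of bounds check described above (the constant name, message text, and error handling here are assumptions for the sketch, not code copied from qemu), a server-side validation could look like:

#include <stdio.h>
#include <string.h>

#define NBD_MAX_STRING_SIZE_EXAMPLE 4096   /* illustrative name for the spec's 4k limit */

/* Reject any NBD string longer than the protocol maximum before acting on it. */
static int check_nbd_string_len(const char *what, const char *str)
{
    size_t len = strlen(str);

    if (len > NBD_MAX_STRING_SIZE_EXAMPLE) {
        fprintf(stderr, "%s of length %zu exceeds maximum of %d bytes\n",
                what, len, NBD_MAX_STRING_SIZE_EXAMPLE);
        return -1;
    }
    return 0;
}

int main(void)
{
    return check_nbd_string_len("export name", "my-export") ? 1 : 0;
}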
2019-11-18 | nbd/server: Prefer heap over stack for parsing client names | Eric Blake | 1 | -10/+15
As long as we limit NBD names to 256 bytes (the bare minimum permitted by the standard), stack-allocation works for parsing a name received from the client. But as mentioned in a comment, we eventually want to permit up to the 4k maximum of the NBD standard, which is too large for stack allocation; so switch everything in the server to use heap allocation. For now, there is no change in actually supported name length. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20191114024635.11363-2-eblake@redhat.com> [eblake: fix uninit variable compile failure] Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2019-09-24 | nbd: Grab aio context lock in more places | Eric Blake | 1 | -4/+18
When iothreads are in use, the failure to grab the aio context results in an assertion failure during blk_unref, when trying to unlock a mutex that was not locked. In short, all calls to nbd_export_put need to be done while within the correct aio context. But since nbd_export_put can recursively reach itself via nbd_export_close, and recursively grabbing the context would deadlock, we can't do the context grab directly in those functions, but must do so in their callers. Hoist the use of the correct aio_context from nbd_export_new() to its caller qmp_nbd_server_add(). Then tweak qmp_nbd_server_remove(), nbd_eject_notifier(), and nbd_export_close_all() to grab the right context, so that all callers within qemu now own the context before nbd_export_put() can call blk_unref(). Remaining uses in qemu-nbd don't matter (since that use case does not support iothreads). Suggested-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190917023917.32226-1-eblake@redhat.com> Reviewed-by: Sergio Lopez <slp@redhat.com>
2019-09-24 | nbd/server: attach client channel to the export's AioContext | Sergio Lopez | 1 | -0/+5
On creation, the export's AioContext is set to the same one as the BlockBackend, while the AioContext in the client QIOChannel is left untouched. As a result, when using data-plane, nbd_client_receive_next_request() schedules coroutines in the IOThread AioContext, while the client's QIOChannel is serviced from the main_loop, potentially triggering the assertion at qio_channel_restart_[read|write]. To fix this, as soon as we have the export corresponding to the client, we call qio_channel_attach_aio_context() to attach the QIOChannel context to the export's AioContext. This matches the logic in blk_aio_attached(). RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1748253 Signed-off-by: Sergio Lopez <slp@redhat.com> Message-Id: <20190912110032.26395-1-slp@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com>
2019-09-05 | nbd: Implement server use of NBD FAST_ZERO | Eric Blake | 1 | -2/+6
The server side is fairly straightforward: we can always advertise support for detection of fast zero, and implement it by mapping the request to the block layer BDRV_REQ_NO_FALLBACK. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190823143726.27062-5-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> [eblake: update iotests 223, 233]
2019-09-05 | nbd: Prepare for NBD_CMD_FLAG_FAST_ZERO | Eric Blake | 1 | -0/+5
Commit fe0480d6 and friends added BDRV_REQ_NO_FALLBACK as a way to avoid wasting time on a preliminary write-zero request that will later be rewritten by actual data, if it is known that the write-zero request will use a slow fallback; but in doing so, could not optimize for NBD. The NBD specification is now considering an extension that will allow passing on those semantics; this patch updates the new protocol bits and 'qemu-nbd --list' output to recognize the bit, as well as the new errno value possible when using the new flag; while upcoming patches will improve the client to use the feature when present, and the server to advertise support for it. The NBD spec recommends (but does not require) that ENOTSUP be avoided for all but failures of a fast zero (the only time it is mandatory to avoid an ENOTSUP failure is when fast zero is supported but not requested during write zeroes; the questionable use is returning ENOTSUP for other actions like a normal write request). However, clients that get an unexpected ENOTSUP will either already be treating it the same as EINVAL, or may appreciate the extra bit of information. We were equally loose in returning EOVERFLOW in more situations than recommended by the spec, so if it turns out to be a problem in practice, a later patch can tighten handling for both error codes. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190823143726.27062-3-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> [eblake: tweak commit message, also handle EOPNOTSUPP]
2019-09-05 | nbd: Improve per-export flag handling in server | Eric Blake | 1 | -28/+34
When creating a read-only image, we are still advertising support for TRIM and WRITE_ZEROES to the client, even though the client should not be issuing those commands. But seeing this requires looking across multiple functions: All callers to nbd_export_new() passed a single flag based solely on whether the export allows writes. Later, we then pass a constant set of flags to nbd_negotiate_options() (namely, the set of flags which we always support, at least for writable images), which is then further dynamically modified with NBD_FLAG_SEND_DF based on client requests for structured options. Finally, when processing NBD_OPT_EXPORT_NAME or NBD_OPT_EXPORT_GO we bitwise-or the original caller's flag with the runtime set of flags we've built up over several functions. Let's refactor things to instead compute a baseline of flags as soon as possible which gets shared between multiple clients, in nbd_export_new(), and changing the signature for the callers to pass in a simpler bool rather than having to figure out flags. We can then get rid of the 'myflags' parameter to various functions, and instead refer to client for everything we need (we still have to perform a bitwise-OR for NBD_FLAG_SEND_DF during NBD_OPT_EXPORT_NAME and NBD_OPT_EXPORT_GO, but it's easier to see what is being computed). This lets us quit advertising senseless flags for read-only images, as well as making the next patch for exposing FAST_ZERO support easier to write. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190823143726.27062-2-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> [eblake: improve commit message, update iotest 223]
2019-09-05 | nbd: Use g_autofree in a few places | Eric Blake | 1 | -8/+4
Thanks to our recent move to use glib's g_autofree, I can join the bandwagon. Getting rid of gotos is fun ;) There are probably more places where we could register cleanup functions and get rid of more gotos; this patch just focuses on the labels that existed merely to call g_free. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190824172813.29720-2-eblake@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
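For readers unfamiliar with the pattern, here is a minimal, self-contained sketch of the kind of cleanup this enables (a generic example assuming glib's g_autofree attribute, not code from nbd/server.c): the annotated buffer is freed on every return path, so the error path no longer needs a goto to a g_free label.

#include <glib.h>
#include <stdio.h>

static int parse_name(const char *input, size_t max_len)
{
    g_autofree char *name = g_malloc(max_len + 1);   /* freed automatically on return */

    if (g_strlcpy(name, input, max_len + 1) > max_len) {
        return -1;            /* no goto/g_free needed on the error path */
    }
    printf("parsed name: %s\n", name);
    return 0;                 /* ...nor on the success path */
}

int main(void)
{
    return parse_name("example-export", 256) < 0;
}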
2019-09-05 | nbd: Advertise multi-conn for shared read-only connections | Eric Blake | 1 | -1/+3
The NBD specification defines NBD_FLAG_CAN_MULTI_CONN, which can be advertised when the server promises cache consistency between simultaneous clients (basically, rules that determine what FUA and flush from one client are able to guarantee for reads from another client). When we don't permit simultaneous clients (such as qemu-nbd without -e), the bit makes no sense; and for writable images, we probably have a lot more work before we can declare that actions from one client are cache-consistent with actions from another. But for read-only images, where flush isn't changing any data, we might as well advertise multi-conn support. What's more, advertisement of the bit makes it easier for clients to determine if 'qemu-nbd -e' was in use, where a second connection will succeed rather than hang until the first client goes away. This patch affects qemu as server in advertising the bit. We may want to consider patches to qemu as client to attempt parallel connections for higher throughput by spreading the load over those connections when a server advertises multi-conn, but for now sticking to one connection per nbd:// BDS is okay. See also: https://bugzilla.redhat.com/1708300 Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190815185024.7010-1-eblake@redhat.com> [eblake: tweak blockdev-nbd.c to not request shared when writable, fix iotest 233] Reviewed-by: John Snow <jsnow@redhat.com>
2019-08-16 | block/dirty-bitmap: add bdrv_dirty_bitmap_get | John Snow | 1 | -1/+1
Add a public interface for get. While we're at it, rename "bdrv_get_dirty_bitmap_locked" to "bdrv_dirty_bitmap_get_locked". (There are more functions to rename to the bdrv_dirty_bitmap_VERB form, but they will wait until the conclusion of this series.) Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190709232550.10724-11-jsnow@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>
2019-08-16 | Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2019-08-15' into staging | Peter Maydell | 1 | -8/+35
nbd patches for 2019-08-15 - Addition of InetSocketAddress keep-alive - Addition of BDRV_REQ_PREFETCH for more efficient copy-on-read - Initial refactoring in preparation of NBD reconnect # gpg: Signature made Thu 15 Aug 2019 19:28:41 BST # gpg: using RSA key A7A16B4A2527436A # gpg: Good signature from "Eric Blake <eblake@redhat.com>" [full] # gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>" [full] # gpg: aka "[jpeg image of size 6874]" [full] # Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A * remotes/ericb/tags/pull-nbd-2019-08-15: block/nbd: refactor nbd connection parameters block/nbd: add cmdline and qapi parameter reconnect-delay block/nbd: move from quit to state block/nbd: use non-blocking io channel for nbd negotiation block/nbd: split connection_co start out of nbd_client_connect nbd: improve CMD_CACHE: use BDRV_REQ_PREFETCH block/stream: use BDRV_REQ_PREFETCH block: implement BDRV_REQ_PREFETCH qapi: Add InetSocketAddress member keep-alive Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 | Include qemu/queue.h slightly less | Markus Armbruster | 1 | -0/+1
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190812052359.30071-20-armbru@redhat.com>
2019-08-15 | nbd: improve CMD_CACHE: use BDRV_REQ_PREFETCH | Vladimir Sementsov-Ogievskiy | 1 | -8/+35
This helps to avoid extra I/O, allocations, and memory copying. We assume here that CMD_CACHE is always used with copy-on-read, as otherwise it's a no-op. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190725100550.33801-4-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com>
2019-06-13 | nbd/server: Nicer spelling of max BLOCK_STATUS reply length | Eric Blake | 1 | -5/+8
Commit 3d068aff (3.0) introduced NBD_MAX_BITMAP_EXTENTS as a limit on how large we would allow a reply to NBD_CMD_BLOCK_STATUS to grow when it is visiting a qemu:dirty-bitmap: context. Later, commit fb7afc79 (3.1) reused the constant to limit base:allocation context replies, although the name is now less appropriate in that situation. Rename things, and improve the macro to use units.h for better legibility. Then reformat the comment to comply with checkpatch rules added in the meantime. No semantic change. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190510151735.29687-1-eblake@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2019-06-04 | block: Add BlockBackend.ctx | Kevin Wolf | 1 | -2/+3
This adds a new parameter to blk_new() which requires its callers to declare from which AioContext this BlockBackend is going to be used (or the locks of which AioContext need to be taken anyway). The given context is only stored and kept up to date when changing AioContexts. Actually applying the stored AioContext to the root node is saved for another commit. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-06-04 | nbd-server: Call blk_set_allow_aio_context_change() | Kevin Wolf | 1 | -0/+1
The NBD server uses an AioContext notifier, so it can tolerate that its BlockBackend is switched to a different AioContext. Before we start actually calling bdrv_try_set_aio_context(), which checks for consistency, outside of test cases, we need to make sure that the NBD server actually allows this. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
2019-04-08 | nbd/server: Don't fail NBD_OPT_INFO for byte-aligned sources | Eric Blake | 1 | -5/+8
In commit 0c1d50bd, I added a couple of TODO comments about whether we consult bl.request_alignment when responding to NBD_OPT_INFO. At the time, qemu as server was hard-coding an advertised alignment of 512 to clients that promised to obey constraints, and there was no function for getting at a device's preferred alignment. But in hindsight, advertising 512 when the block device prefers 1 caused other compliance problems, and commit b0245d64 changed one of the two TODO comments to advertise a more accurate alignment. Time to fix the other TODO. Doesn't really impact qemu as client (our normal client doesn't use NBD_OPT_INFO, and qemu-nbd --list promises to obey block sizes), but it might prove useful to other clients. Fixes: b0245d64 Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190403030526.12258-4-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2019-04-08 | nbd/server: Trace client noncompliance on unaligned requests | Eric Blake | 1 | -1/+16
We've recently added traces for clients to flag server non-compliance; let's do the same for servers to flag client non-compliance. According to the spec, if the client requests NBD_INFO_BLOCK_SIZE, it is promising to send all requests aligned to those boundaries. Of course, if the client does not request NBD_INFO_BLOCK_SIZE, then it made no promises so we shouldn't flag anything; and because we are willing to handle clients that made no promises (the spec allows us to use NBD_REP_ERR_BLOCK_SIZE_REQD if we had been unwilling), we already have to handle unaligned requests (which the block layer already does on our behalf). So even though the spec allows us to return EINVAL for clients that promised to behave, it's easier to always answer unaligned requests. Still, flagging non-compliance can be useful in debugging a client that is trying to be maximally portable. Qemu as client used to have one spot where it sent non-compliant requests: if the server sends an unaligned reply to NBD_CMD_BLOCK_STATUS, and the client was iterating over the entire disk, the next request would start at that unaligned point; this was fixed in commit a39286dd when the client was taught to work around server non-compliance; but is equally fixed if the server is patched to not send unaligned replies in the first place (yes, qemu 4.0 as server still has few such bugs, although they will be patched in 4.1). Fortunately, I did not find any more spots where qemu as client was non-compliant. I was able to test the patch by using the following hack to convince qemu-io to run various unaligned commands, coupled with serving 512-byte alignment by intentionally omitting '-f raw' on the server while viewing server traces. | diff --git i/nbd/client.c w/nbd/client.c | index 427980bdd22..1858b2aac35 100644 | --- i/nbd/client.c | +++ w/nbd/client.c | @@ -449,6 +449,7 @@ static int nbd_opt_info_or_go(QIOChannel *ioc, uint32_t opt, | nbd_send_opt_abort(ioc); | return -1; | } | + info->min_block = 1;//hack | if (!is_power_of_2(info->min_block)) { | error_setg(errp, "server minimum block size %" PRIu32 | " is not a power of two", info->min_block); Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190403030526.12258-3-eblake@redhat.com> [eblake: address minor review nits] Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2019-04-08 | nbd/server: Fix blockstatus trace | Eric Blake | 1 | -6/+3
Don't increment remaining_bytes until we know that we will actually be including the current block status extent in the reply; otherwise, the value traced will include a bytes value that is oversized by the length of the next block status extent which did not get sent because it instead ended the loop. Fixes: fb7afc79 Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190403030526.12258-2-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2019-04-01 | nbd/server: Advertise actual minimum block size | Eric Blake | 1 | -5/+8
Both NBD_CMD_BLOCK_STATUS and structured NBD_CMD_READ will split their reply according to bdrv_block_status() boundaries. If the block device has a request_alignment smaller than 512, but we advertise a block alignment of 512 to the client, then this can result in the server reply violating client expectations by reporting a smaller region of the export than what the client is permitted to address (although this is less of an issue for qemu 4.0 clients, given recent client patches to overlook our non-compliance at EOF). Since it's always better to be strict in what we send, it is worth advertising the actual minimum block limit rather than blindly rounding it up to 512. Note that this patch is not foolproof - it is still possible to provoke non-compliant server behavior using: $ qemu-nbd --image-opts driver=blkdebug,align=512,image.driver=file,image.filename=/path/to/non-aligned-file That is arguably a bug in the blkdebug driver (it should never pass back block status smaller than its alignment, even if it has to make multiple bdrv_get_status calls and determine the least-common-denominator status among the group to return). It may also be possible to observe issues with a backing layer with smaller alignment than the active layer, although so far I have been unable to write a reliable iotest for that scenario (but again, an issue like that could be argued to be a bug in the block layer, or something where we need a flag to bdrv_block_status() to state whether the result must be aligned to the current layer's limits or can be subdivided for accuracy when chasing backing files). Anyways, as blkdebug is not normally used, and as this patch makes our server more interoperable with qemu 3.1 clients, it is worth applying now, even while we still work on a larger patch series for the 4.1 timeframe to have byte-accurate file lengths. Note that the iotests output changes - for 223 and 233, we can see the server's better granularity advertisement; and for 241, the three test cases have the following effects: - natural alignment: the server's smaller alignment is now advertised, and the hole reported at EOF is now the right result; we've gotten rid of the server's non-compliance - forced server alignment: the server still advertises 512 bytes, but still sends a mid-sector hole. This is still a server compliance bug, which needs to be fixed in the block layer in a later patch; output does not change because the client is already being tolerant of the non-compliance - forced client alignment: the server's smaller alignment means that the client now sees the server's status change mid-sector without any protocol violations, but the fact that the map shows an unaligned mid-sector hole is evidence of the block layer problems with aligned block status, to be fixed in a later patch Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190329042750.14704-7-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> [eblake: rebase to enhanced iotest 241 coverage]
2019-03-12 | block/dirty-bitmaps: add block_dirty_bitmap_check function | John Snow | 1 | -2/+1
Instead of checking against busy, inconsistent, or read only directly, use a check function with permissions bits that let us streamline the checks without reproducing them in many places. Included in this patch are permissions changes that simply add the inconsistent check to existing permissions call spots, without addressing existing bugs. In general, this means that busy+readonly checks become BDRV_BITMAP_DEFAULT, which checks against all three conditions. busy-only checks become BDRV_BITMAP_ALLOW_RO. Notably, remove allows inconsistent bitmaps, so it doesn't follow the pattern. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20190301191545.8728-4-jsnow@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>
2019-03-12 | block/dirty-bitmaps: unify qmp_locked and user_locked calls | John Snow | 1 | -3/+3
These mean the same thing now. Unify them and rename the merged call bdrv_dirty_bitmap_busy to indicate semantically what we are describing, as well as help disambiguate from the various _locked and _unlocked versions of bitmap helpers that refer to mutex locks. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20190223000614.13894-8-jsnow@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>
2019-03-12 | nbd: change error checking order for bitmaps | John Snow | 1 | -5/+5
Check that the bitmap is not in use before checking that it is not enabled (i.e. not recording guest writes). The bitmap being busy was likely at the behest of the user, so this error has a greater chance of being understood by the user. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20190223000614.13894-6-jsnow@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>
2019-03-06 | qemu-nbd: add support for authorization of TLS clients | Daniel P. Berrange | 1 | -5/+5
Currently any client which can complete the TLS handshake is able to use the NBD server. The server admin can turn on the 'verify-peer' option for the x509 creds to require the client to provide a x509 certificate. This means the client will have to acquire a certificate from the CA before they are permitted to use the NBD server. This is still a fairly low bar to cross. This adds a '--tls-authz OBJECT-ID' option to the qemu-nbd command which takes the ID of a previously added 'QAuthZ' object instance. This will be used to validate the client's x509 distinguished name. Clients failing the authorization check will not be permitted to use the NBD server. For example to setup authorization that only allows connection from a client whose x509 certificate distinguished name is CN=laptop.example.com,O=Example Org,L=London,ST=London,C=GB escape the commas in the name and use: qemu-nbd --object tls-creds-x509,id=tls0,dir=/home/berrange/qemutls,\ endpoint=server,verify-peer=yes \ --object 'authz-simple,id=auth0,identity=CN=laptop.example.com,,\ O=Example Org,,L=London,,ST=London,,C=GB' \ --tls-creds tls0 \ --tls-authz authz0 \ ....other qemu-nbd args... NB: a real shell command line would not have leading whitespace after the line continuation, it is just included here for clarity. Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Daniel P. Berrange <berrange@redhat.com> Message-Id: <20190227162035.18543-2-berrange@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: split long line in --help text, tweak 233 to show that whitespace after ,, in identity= portion is actually okay] Signed-off-by: Eric Blake <eblake@redhat.com>
2019-02-11 | nbd/server: Kill pointless shadowed variable | Eric Blake | 1 | -1/+0
lgtm.com pointed out that commit 678ba275 introduced a shadowed declaration of local variable 'bs'; thankfully, the inner 'bs' obtained by 'blk_bs(blk)' matches the outer one given that we had 'blk_insert_bs(blk, bs, errp)' a few lines earlier, and there are no later uses of 'bs' beyond the scope of the 'if (bitmap)' to care if we change the value stored in 'bs' while traveling the backing chain to find a bitmap. So simply get rid of the extra declaration. Reported-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190207191357.6665-1-eblake@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com>
2019-02-04 | nbd: generalize usage of nbd_read | Vladimir Sementsov-Ogievskiy | 1 | -18/+9
We generally do very similar things around nbd_read: error_prepend specifying what we have tried to read, and be_to_cpu conversion of integers. So, it seems reasonable to move common things to helper functions, which: 1. simplify code a bit 2. generalize nbd_read error descriptions, all starting with "Failed to read" 3. make it more difficult to forget to convert things from BE Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20190128165830.165170-1-vsementsov@virtuozzo.com> [eblake: rename macro to DEF_NBD_READ_N and formatting tweaks; checkpatch has false positive complaint] Signed-off-by: Eric Blake <eblake@redhat.com>
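To show the shape of the helper pattern the commit describes, here is a standalone sketch that reads an exact number of bytes, names what was being read in the error message, and converts from big endian. POSIX read() and glibc's be32toh()/htobe32() stand in for QEMU's QIOChannel helpers; all names are illustrative.

#include <endian.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Read exactly len bytes, reporting "Failed to read <what>" on error. */
static int read_exact(int fd, void *buf, size_t len, const char *what)
{
    uint8_t *p = buf;
    while (len) {
        ssize_t r = read(fd, p, len);
        if (r <= 0) {
            if (r < 0 && errno == EINTR) {
                continue;
            }
            fprintf(stderr, "Failed to read %s\n", what);
            return -1;
        }
        p += r;
        len -= r;
    }
    return 0;
}

/* Read a big-endian 32-bit value; the caller always gets host byte order. */
static int read_be32(int fd, uint32_t *val, const char *what)
{
    if (read_exact(fd, val, sizeof(*val), what) < 0) {
        return -1;
    }
    *val = be32toh(*val);
    return 0;
}

int main(void)
{
    int fds[2];
    uint32_t out = htobe32(0xcafe), in = 0;

    if (pipe(fds) < 0 || write(fds[1], &out, sizeof(out)) != sizeof(out)) {
        return 1;
    }
    if (read_be32(fds[0], &in, "option length") < 0) {
        return 1;
    }
    printf("read 0x%x\n", in);
    return 0;
}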
2019-01-21 | nbd/server: Favor [u]int64_t over off_t | Eric Blake | 1 | -9/+9
Although our compile-time environment is set up so that we always support long files with 64-bit off_t, we have no guarantee whether off_t is the same type as int64_t. This requires casts when printing values, and prevents us from directly using qemu_strtoi64() (which will be done in the next patch). Let's just flip to uint64_t where possible, and stick to int64_t for detecting failure of blk_getlength(); we also keep the assertions added in the previous patch that the resulting values fit in 63 bits. The overflow check in nbd_co_receive_request() was already sane (request->from is validated to fit in 63 bits, and request->len is 32 bits, so the addition can't overflow 64 bits), but rewrite it in a form easier to recognize as a typical overflow check. Rename the variable 'description' to keep line lengths reasonable. Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190117193658.16413-7-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
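For context, the "typical overflow check" shape referred to above looks roughly like this (a generic sketch with illustrative names, assuming the offset has already been validated to fit in 63 bits and the length is a 32-bit count):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* True if [from, from + len) stays inside an export of export_size bytes,
 * written so the addition itself can never wrap around. */
static bool request_in_bounds(uint64_t from, uint32_t len, uint64_t export_size)
{
    return from <= export_size && len <= export_size - from;
}

int main(void)
{
    printf("%d %d\n",
           request_in_bounds(0, 512, 1024),                 /* 1: fits */
           request_in_bounds(1ULL << 62, 512, 1ULL << 62)); /* 0: runs off the end */
    return 0;
}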
2019-01-21 | nbd/server: Hoist length check to qmp_nbd_server_add | Eric Blake | 1 | -7/+3
We only had two callers to nbd_export_new; qemu-nbd.c always passed a valid offset/length pair (because it already checked the file length, to ensure that offset was in bounds), while blockdev-nbd.c always passed 0/-1. Then nbd_export_new reduces the size to a multiple of BDRV_SECTOR_SIZE (can only happen when offset is not sector-aligned, since bdrv_getlength() currently rounds up) (someday, it would be nice to have byte-accurate lengths - but not today). However, I'm finding it easier to work with the code if we are consistent on having both callers pass in a valid length, and just assert that things are sane in nbd_export_new, meaning that no negative values were passed, and that offset+size does not exceed 63 bits (as that really is a fundamental limit to later operations, whether we use off_t or uint64_t). Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190117193658.16413-6-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2019-01-15 | dirty-bitmap: improve bdrv_dirty_bitmap_next_zero | Vladimir Sementsov-Ogievskiy | 1 | -1/+1
Add a bytes parameter to the function, to limit the searched range. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2019-01-14 | nbd: Merge nbd_export_bitmap into nbd_export_new | Eric Blake | 1 | -47/+40
We only have one caller that wants to export a bitmap name, which it does right after creation of the export. But there is still a brief window of time where an NBD client could see the export but not the dirty bitmap, which a robust client would have to interpret as meaning the entire image should be treated as dirty. Better is to eliminate the window entirely, by inlining nbd_export_bitmap() into nbd_export_new(), and refusing to create the bitmap in the first place if the requested bitmap can't be located. We also no longer need logic for setting a different bitmap name compared to the bitmap being exported. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190111194720.15671-8-eblake@redhat.com>
2019-01-14 | nbd: Merge nbd_export_set_name into nbd_export_new | Eric Blake | 1 | -29/+23
The existing NBD code had a weird split where nbd_export_new() created an export but did not add it to the list of exported names until a later nbd_export_set_name() came along and grabbed a second reference on the object; later, the first call to nbd_export_close() drops the second reference while removing the export from the list. This is in part because the QAPI NbdServerRemoveNode enum documents the possibility of adding a mode where we could do a soft disconnect: preventing new clients, but waiting for existing clients to gracefully quit, based on the mode used when calling nbd_export_close(). But in spite of all that, note that we never change the name of an NBD export while it is exposed, which means it is easier to just inline the process of setting the name as part of creating the export. Inline the contents of nbd_export_set_name() and nbd_export_set_description() into the two points in an export lifecycle where they matter, then adjust both callers to pass the name up front. Note that for creation, all callers pass a non-NULL name, (passing NULL at creation was for old style servers, but we removed support for that in commit 7f7dfe2a), so we can add an assert and do things unconditionally; but for cleanup, because of the dual nature of nbd_export_close(), we still have to be careful to avoid use-after-free. Along the way, add a comment reminding ourselves of the potential of adding a middle mode disconnect. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190111194720.15671-5-eblake@redhat.com>
2019-01-14 | nbd: Only require disabled bitmap for read-only exports | Eric Blake | 1 | -2/+5
Our initial implementation of x-nbd-server-add-bitmap put in a restriction because of incremental backups: in that usage, we are exporting one qcow2 file (the temporary overlay target of a blockdev-backup sync:none job) and a dirty bitmap owned by a second qcow2 file (the source of the blockdev-backup, which is the backing file of the temporary). While both qcow2 files are still writable (the target in order to capture copy-on-write of old contents, and the source in order to track live guest writes in the meantime), the NBD client expects to see constant data, including the dirty bitmap. An enabled bitmap in the source would be modified by guest writes, which is at odds with the NBD export being a read-only constant view, hence the initial code choice of enforcing a disabled bitmap (the intent is that the exposed bitmap was disabled in the same transaction that started the blockdev-backup job, although we don't want to track enough state to actually enforce that). However, consider the case of a bitmap contained in a read-only node (including when the bitmap is found in a backing layer of the active image). Because the node can't be modified, the bitmap won't change due to writes, regardless of whether it is still enabled. Forbidding the export unless the bitmap is disabled is awkward, particularly since we can't change the bitmap to be disabled (because the node is read-only). Alternatively, consider the case of live storage migration, where management directs the destination to create a writable NBD server, then performs a drive-mirror from the source to the target, prior to doing the rest of the live migration. Since storage migration can be time-consuming, it may be wise to let the destination include a dirty bitmap to track which portions it has already received, where even if the migration is interrupted and restarted, the source can query the destination block status in order to potentially minimize re-sending data that has not changed in the meantime on a second attempt. Such code has not been written, and might not be trivial (after all, a cluster being marked dirty in the bitmap does not necessarily guarantee it has the desired contents), but it makes sense that letting an active dirty bitmap be exposed and changing alongside writes may prove useful in the future. Solve both issues by gating the restriction against a disabled bitmap to only happen when the caller has requested a read-only export, and where the BDS that owns the bitmap (whether it is the BDS handed to nbd_export_new() or one from its backing chain) is still writable. We could drop the check altogether (if management apps are prepared to deal with a changing bitmap even on a read-only image), but for now keeping a check for the read-only case still stands a chance of preventing management errors. Update iotest 223 to show the looser behavior by leaving a bitmap enabled the whole run; note that we have to tear down and re-export a node when handling an error. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190111194720.15671-4-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2018-11-30 | nbd/server: Advertise all contexts in response to bare LIST | Eric Blake | 1 | -0/+1
The NBD spec, and even our code comment, says that if the client asks for NBD_OPT_LIST_META_CONTEXT with 0 queries, then we should reply with (a possibly-compressed representation of) ALL contexts that we are willing to let them try. But commit 3d068aff forgot to advertise qemu:dirty-bitmap:FOO. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20181130023232.3079982-2-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2018-11-19 | nbd/server: Ignore write errors when replying to NBD_OPT_ABORT | Eric Blake | 1 | -4/+8
Commit 37ec36f6 intentionally ignores errors when trying to reply to an NBD_OPT_ABORT request for plaintext clients, but did not make the same change for a TLS server. Since NBD_OPT_ABORT is documented as being a potential for an EPIPE when the client hangs up without waiting for our reply, we don't need to pollute the server's output with that failure. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20181117223221.2198751-1-eblake@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2018-11-19 | nbd: fix whitespace in server error message | Daniel P. Berrangé | 1 | -1/+1
A space was missing after the option number was printed: Option 0x8not permitted before TLS becomes Option 0x8 not permitted before TLS This fixes commit 3668328303429f3bc93ab3365c66331600b06a2d Author: Eric Blake <eblake@redhat.com> Date: Fri Oct 14 13:33:09 2016 -0500 nbd: Send message along with server NBD_REP_ERR errors Signed-off-by: Daniel P. Berrangé <berrange@redhat.com> Message-Id: <20181116155325.22428-2-berrange@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: move lone space to next line] Signed-off-by: Eric Blake <eblake@redhat.com>
2018-10-29 | nbd: forbid use of frozen bitmaps | John Snow | 1 | -2/+2
Whether it's "locked" or "frozen", it's in use and should not be allowed for the purposes of this operation. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20181002230218.13949-7-jsnow@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>
2018-10-03 | nbd/server: drop old-style negotiation | Vladimir Sementsov-Ogievskiy | 1 | -38/+15
After the previous commit, nbd_client_new's first parameter is always NULL. Let's drop it, along with the corresponding old-style negotiation code path, which is unreachable now. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20181003170228.95973-3-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: re-wrap short line] Signed-off-by: Eric Blake <eblake@redhat.com>
2018-10-03 | nbd/server: fix NBD_CMD_CACHE | Vladimir Sementsov-Ogievskiy | 1 | -1/+2
We should not take the structured-read branch for a CACHE command; fix that. The bug was introduced in bc37b06a5cde24 ("nbd/server: introduce NBD_CMD_CACHE") along with the whole feature, and affects the 3.0.0 release. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> CC: qemu-stable@nongnu.org Message-Id: <20181003144738.70670-1-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: commit message typo fix] Signed-off-by: Eric Blake <eblake@redhat.com>
2018-10-03 | nbd: Don't take address of fields in packed structs | Peter Maydell | 1 | -12/+12
Taking the address of a field in a packed struct is a bad idea, because it might not be actually aligned enough for that pointer type (and thus cause a crash on dereference on some host architectures). Newer versions of clang warn about this. Avoid the bug by not using the "modify in place" byte swapping functions. This patch was produced with the following spatch script: @@ expression E; @@ -be16_to_cpus(&E); +E = be16_to_cpu(E); @@ expression E; @@ -be32_to_cpus(&E); +E = be32_to_cpu(E); @@ expression E; @@ -be64_to_cpus(&E); +E = be64_to_cpu(E); @@ expression E; @@ -cpu_to_be16s(&E); +E = cpu_to_be16(E); @@ expression E; @@ -cpu_to_be32s(&E); +E = cpu_to_be32(E); @@ expression E; @@ -cpu_to_be64s(&E); +E = cpu_to_be64(E); Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20180927164200.15097-1-peter.maydell@linaro.org> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: rebase, and squash in missed changes] Signed-off-by: Eric Blake <eblake@redhat.com>
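To make the hazard concrete, here is a standalone illustration of the pattern the spatch script rewrites. The struct is illustrative, and ntohl/htonl stand in for qemu's be32_to_cpu/cpu_to_be32 helpers.

#include <arpa/inet.h>   /* ntohl, htonl */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct __attribute__((packed)) example_hdr {
    uint8_t  type;
    uint32_t magic;      /* misaligned: sits at offset 1 because of the packing */
};

int main(void)
{
    struct example_hdr h = { .type = 1, .magic = htonl(0x12345678) };

    /* Unsafe shape: passing &h.magic to a helper that dereferences a plain
     * uint32_t * can fault on strict-alignment hosts, because the pointer
     * loses the "packed" information.
     * Safe shape (what the conversion produces): read the field by value,
     * swap it, and store it back; the compiler emits alignment-safe code. */
    h.magic = ntohl(h.magic);

    printf("magic = 0x%" PRIx32 "\n", h.magic);
    return 0;
}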
2018-09-26 | nbd/server: send more than one extent of base:allocation context | Vladimir Sementsov-Ogievskiy | 1 | -19/+60
This is necessary for efficient block-status export, for clients which support it. (qemu is not yet such a client, but could become one.) Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20180704112302.471456-3-vsementsov@virtuozzo.com> [eblake: grammar tweaks] Signed-off-by: Eric Blake <eblake@redhat.com>
2018-09-26 | nbd/server: fix bitmap export | Vladimir Sementsov-Ogievskiy | 1 | -1/+4
The bitmap_to_extents function is broken: it switches the dirty variable after every iteration, even though it can process only part of a dirty (or zero) area in one iteration when that area is too large for one extent. Fortunately, the bug doesn't produce wrong extent flags: it just inserts a zero-length extent between sequential extents representing a large dirty (or zero) area. However, zero-length extents are forbidden by the NBD protocol. So, a careful client should consider such a reply a server fault, while a less-careful one will likely ignore zero-length extents. The bug can only be triggered by a client that requests block status for nearly 4G at once (a request of 4G and larger is impossible per the protocol, and requests smaller than 4G less the bitmap granularity cause the loop to quit iterating rather than revisit the tail of the large area); it also cannot trigger if the client used the NBD_CMD_FLAG_REQ_ONE flag. Since qemu 3.0 as client (using the x-dirty-bitmap extension) always passes the flag, it is immune; and we are not aware of other open-source clients that know how to request qemu:dirty-bitmap:FOO contexts. Clients that want to avoid the bug could cap block status requests to a smaller length, such as 2G or 3G. Fix this by handling the dirty variable more carefully. The bug was introduced in 3d068aff16 ("nbd/server: implement dirty bitmap export") along with the whole function, and is present in the v3.0.0 release. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20180914165116.23182-1-vsementsov@virtuozzo.com> CC: qemu-stable@nongnu.org Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: improved commit message] Signed-off-by: Eric Blake <eblake@redhat.com>
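A toy version of the loop invariant the fix restores might look like the sketch below (simplified types, not qemu's bitmap_to_extents): the dirty/clean state is derived from the data on every outer iteration rather than toggled, and long runs are split into 32-bit-sized chunks without ever emitting a zero-length extent.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uint32_t length; bool dirty; } Extent;

static size_t bits_to_extents(const bool *bits, size_t nb_bits,
                              uint64_t granularity,
                              Extent *out, size_t max_extents)
{
    size_t n = 0;
    size_t i = 0;

    while (i < nb_bits && n < max_extents) {
        bool dirty = bits[i];          /* derived from the data, not toggled */
        size_t start = i;

        while (i < nb_bits && bits[i] == dirty) {
            i++;
        }

        uint64_t remaining = (uint64_t)(i - start) * granularity;
        /* NBD extent lengths are 32 bits: split long runs into chunks that
         * all carry the same dirty flag; never emit a zero-length extent. */
        while (remaining > 0 && n < max_extents) {
            uint32_t chunk = remaining > UINT32_MAX ? UINT32_MAX
                                                    : (uint32_t)remaining;
            out[n++] = (Extent){ .length = chunk, .dirty = dirty };
            remaining -= chunk;
        }
    }
    return n;
}

int main(void)
{
    bool bits[] = { true, true, false, true };
    Extent out[8];
    size_t n = bits_to_extents(bits, 4, 65536, out, 8);
    return n == 3 ? 0 : 1;   /* expect: dirty, clean, dirty */
}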
2018-07-07 | nbd/server: fix nbd_co_send_block_status | Vladimir Sementsov-Ogievskiy | 1 | -2/+3
Call nbd_co_send_extents() with the correct length parameter (extent.length may be smaller than the original length). Also, switch the length parameter type to uint32_t, to correspond with request->len and the similar nbd_co_send_bitmap(). Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20180704112302.471456-2-vsementsov@virtuozzo.com> Signed-off-by: Eric Blake <eblake@redhat.com>
2018-07-02 | nbd/server: Fix dirty bitmap logic regression | Eric Blake | 1 | -1/+1
In my hurry to fix a build failure, I introduced a logic bug. The assertion conditional is backwards, meaning that qemu will now abort instead of reporting dirty bitmap status. The bug can only be tickled by an NBD client using an exported dirty bitmap (which is still an experimental QMP command), so it's not the end of the world for supported usage (and neither 'make check' nor qemu-iotests fails); but it also shows that we really want qemu-io support for reading dirty bitmaps if only so that I can add iotests coverage to prevent future brown-bag-of-shame events like this one. Fixes: 45eb6fb6 Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20180622153509.375130-1-eblake@redhat.com>
2018-06-22 | nbd/server: Silence gcc false positive | Eric Blake | 1 | -1/+2
The code has a while() loop that always initialized 'end', and the loop always executes at least once (as evidenced by the assert() just prior to the loop). But some versions of gcc still complain that 'end' is used uninitialized, so silence them. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180622125814.345274-1-eblake@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
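The shape of the false positive, in a standalone form (illustrative only, not the actual server loop):

#include <assert.h>
#include <stdio.h>

int main(void)
{
    int values[] = { 3, 1, 4 };
    int count = 3;
    int end = 0;   /* redundant initialization, only to quiet gcc's
                    * -Wmaybe-uninitialized; the loop always sets it */

    assert(count > 0);
    for (int i = 0; i < count; i++) {
        end = values[i];
    }
    printf("last value: %d\n", end);
    return 0;
}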
2018-06-21 | nbd/server: introduce NBD_CMD_CACHE | Vladimir Sementsov-Ogievskiy | 1 | -4/+7
Handle the NBD CACHE command. Just perform the read, without sending the read data back. The caching mechanism itself should be provided by the exported node's driver chain. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20180413143156.11409-1-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: fix two missing case labels in switch statements] Signed-off-by: Eric Blake <eblake@redhat.com>
2018-06-21 | nbd/server: implement dirty bitmap export | Vladimir Sementsov-Ogievskiy | 1 | -23/+255
Handle a new NBD meta namespace: "qemu", and corresponding queries: "qemu:dirty-bitmap:<export bitmap name>". With the new metadata context negotiated, BLOCK_STATUS query will reply with dirty-bitmap data, converted to extents. The new public function nbd_export_bitmap selects which bitmap to export. For now, only one bitmap may be exported. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20180609151758.17343-5-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: wording tweaks, minor cleanups, additional tracing] Signed-off-by: Eric Blake <eblake@redhat.com>
2018-06-21 | nbd/server: add nbd_meta_empty_or_pattern helper | Vladimir Sementsov-Ogievskiy | 1 | -27/+62
Add nbd_meta_pattern() and nbd_meta_empty_or_pattern() helpers for metadata query parsing. nbd_meta_pattern() will be reused for the "qemu" namespace in following patches. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20180609151758.17343-4-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: comment tweaks] Signed-off-by: Eric Blake <eblake@redhat.com>
2018-06-21 | nbd/server: refactor NBDExportMetaContexts | Vladimir Sementsov-Ogievskiy | 1 | -12/+11
Use NBDExport pointer instead of just export name: there is no need to store a duplicated name in the struct; moreover, NBDExport will be used further. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20180609151758.17343-3-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: commit message grammar tweak] Signed-off-by: Eric Blake <eblake@redhat.com>
2018-06-21 | nbd/server: fix trace | Vladimir Sementsov-Ogievskiy | 1 | -4/+10
Return code = 1 doesn't mean that we parsed base:allocation. Use correct traces in both -parsed and -skipped cases. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20180609151758.17343-2-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: comment tweaks] Signed-off-by: Eric Blake <eblake@redhat.com>