Diffstat (limited to 'docs/devel')
-rw-r--r--   docs/devel/migration/CPR.rst        |  5
-rw-r--r--   docs/devel/migration/main.rst       |  4
-rw-r--r--   docs/devel/migration/postcopy.rst   | 36
-rw-r--r--   docs/devel/migration/vfio.rst       | 35
-rw-r--r--   docs/devel/qapi-code-gen.rst        | 28
-rw-r--r--   docs/devel/qapi-domain.rst          | 35
-rw-r--r--   docs/devel/rust.rst                 | 11
-rw-r--r--   docs/devel/testing/functional.rst   |  2
-rw-r--r--   docs/devel/testing/main.rst         |  4
-rw-r--r--   docs/devel/tracing.rst              |  2
10 files changed, 110 insertions(+), 52 deletions(-)
diff --git a/docs/devel/migration/CPR.rst b/docs/devel/migration/CPR.rst
index 7897873..0a0fd4f 100644
--- a/docs/devel/migration/CPR.rst
+++ b/docs/devel/migration/CPR.rst
@@ -152,8 +152,7 @@ cpr-transfer mode
This mode allows the user to transfer a guest to a new QEMU instance
on the same host with minimal guest pause time, by preserving guest
RAM in place, albeit with new virtual addresses in new QEMU. Devices
-and their pinned memory pages will also be preserved in a future QEMU
-release.
+and their pinned memory pages are also preserved for VFIO and IOMMUFD.
The user starts new QEMU on the same host as old QEMU, with command-
line arguments to create the same machine, plus the ``-incoming``
@@ -322,6 +321,6 @@ Futures
cpr-transfer mode is based on a capability to transfer open file
descriptors from old to new QEMU. In the future, descriptors for
-vfio, iommufd, vhost, and char devices could be transferred,
+vhost and char devices could be transferred,
preserving those devices and their kernel state without interruption,
even if they do not explicitly support live migration.
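
As background for the descriptor-transfer capability described above, the
following is a minimal, self-contained POSIX sketch of handing an open file
descriptor to another process over a Unix domain socket with SCM_RIGHTS.
This is a generic illustration only, not QEMU's actual cpr-transfer code,
and the function name ``send_one_fd`` is invented::

    /* Generic POSIX illustration, not QEMU code: pass an open fd to a peer
     * process over an AF_UNIX socket using SCM_RIGHTS ancillary data. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int send_one_fd(int sock, int fd)
    {
        char byte = 0;                  /* at least one data byte is required */
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union {
            struct cmsghdr align;
            char buf[CMSG_SPACE(sizeof(int))];
        } ctrl;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = ctrl.buf, .msg_controllen = sizeof(ctrl.buf),
        };
        struct cmsghdr *cmsg;

        memset(&ctrl, 0, sizeof(ctrl));
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;   /* the ancillary payload is an fd */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }
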
diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
index cdd4f4a..6493c1d 100644
--- a/docs/devel/migration/main.rst
+++ b/docs/devel/migration/main.rst
@@ -508,8 +508,8 @@ An iterative device must provide:
the point that stream bandwidth limits tell it to stop. Each call
generates one section.
- - A ``save_live_complete_precopy`` function that must transmit the
- last section for the device containing any remaining data.
+ - A ``save_complete`` function that must transmit the last section for
+ the device containing any remaining data.
- A ``load_state`` function used to load sections generated by
any of the save functions that generate sections.
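
To make the relationship between these hooks concrete, here is a small,
self-contained C sketch. It deliberately does not use QEMU's real handler
declarations: the struct, the callback signatures and the ``mydev_*`` names
are invented for illustration, and only the hook roles (iterate,
``save_complete``, ``load_state``) mirror the description above::

    /* Illustration only -- a mock of the iterative-device contract described
     * above, not QEMU's real migration API. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct MockIterativeHooks {
        /* Called repeatedly during precopy; each call emits one section.
         * Returns false once there is nothing more worth sending now. */
        bool (*save_live_iterate)(void *opaque);
        /* Transmits the last section with any remaining data. */
        void (*save_complete)(void *opaque);
        /* Parses any section produced by the save-side functions. */
        void (*load_state)(void *opaque);
    } MockIterativeHooks;

    static bool mydev_save_live_iterate(void *opaque) { puts("section"); return false; }
    static void mydev_save_complete(void *opaque) { puts("last section"); }
    static void mydev_load_state(void *opaque) { puts("load section"); }

    static const MockIterativeHooks mydev_hooks = {
        .save_live_iterate = mydev_save_live_iterate,
        .save_complete     = mydev_save_complete,
        .load_state        = mydev_load_state,
    };

    int main(void)
    {
        while (mydev_hooks.save_live_iterate(NULL)) {
            /* bandwidth-limited precopy iterations */
        }
        mydev_hooks.save_complete(NULL);   /* source side: final pass */
        mydev_hooks.load_state(NULL);      /* destination side: replay sections */
        return 0;
    }
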
diff --git a/docs/devel/migration/postcopy.rst b/docs/devel/migration/postcopy.rst
index 82e7a84..e319388 100644
--- a/docs/devel/migration/postcopy.rst
+++ b/docs/devel/migration/postcopy.rst
@@ -33,25 +33,6 @@ will now cause the transition from precopy to postcopy.
It can be issued immediately after migration is started or any
time later on. Issuing it after the end of a migration is harmless.
-Blocktime is a postcopy live migration metric, intended to show how
-long the vCPU was in state of interruptible sleep due to pagefault.
-That metric is calculated both for all vCPUs as overlapped value, and
-separately for each vCPU. These values are calculated on destination
-side. To enable postcopy blocktime calculation, enter following
-command on destination monitor:
-
-``migrate_set_capability postcopy-blocktime on``
-
-Postcopy blocktime can be retrieved by query-migrate qmp command.
-postcopy-blocktime value of qmp command will show overlapped blocking
-time for all vCPU, postcopy-vcpu-blocktime will show list of blocking
-time per vCPU.
-
-.. note::
- During the postcopy phase, the bandwidth limits set using
- ``migrate_set_parameter`` is ignored (to avoid delaying requested pages that
- the destination is waiting for).
-
Postcopy internals
==================
@@ -312,3 +293,20 @@ explicitly) to be sent in a separate preempt channel, rather than queued in
the background migration channel. Anyone who cares about latencies of page
faults during a postcopy migration should enable this feature. By default,
it's not enabled.
+
+Postcopy blocktime statistics
+-----------------------------
+
+Blocktime is a postcopy live migration metric intended to show how
+long a vCPU was blocked in interruptible sleep due to a page fault.
+The metric is calculated both as an overlapped value across all
+vCPUs and separately for each vCPU. These values are calculated on
+the destination side. To enable postcopy blocktime calculation,
+enter the following command on the destination monitor:
+
+``migrate_set_capability postcopy-blocktime on``
+
+Postcopy blocktime can be retrieved with the ``query-migrate`` QMP
+command. Its ``postcopy-blocktime`` value shows the overlapped
+blocking time for all vCPUs, while ``postcopy-vcpu-blocktime`` shows
+the list of per-vCPU blocking times.
diff --git a/docs/devel/migration/vfio.rst b/docs/devel/migration/vfio.rst
index 673e354..0790e50 100644
--- a/docs/devel/migration/vfio.rst
+++ b/docs/devel/migration/vfio.rst
@@ -75,12 +75,12 @@ VFIO implements the device hooks for the iterative approach as follows:
in the non-multifd mode.
In the multifd mode it just emits either a dummy EOS marker.
-* A ``save_live_complete_precopy`` function that sets the VFIO device in
- _STOP_COPY state and iteratively copies the data for the VFIO device until
- the vendor driver indicates that no data remains.
- In the multifd mode it just emits a dummy EOS marker.
+* A ``save_complete`` function that sets the VFIO device in _STOP_COPY
+ state and iteratively copies the data for the VFIO device until the
+ vendor driver indicates that no data remains. In the multifd mode it
+ just emits a dummy EOS marker.
-* A ``save_live_complete_precopy_thread`` function that in the multifd mode
+* A ``save_complete_precopy_thread`` function that in the multifd mode
provides thread handler performing multifd device state transfer.
It sets the VFIO device to _STOP_COPY state, iteratively reads the data
from the VFIO device and queues it for multifd transmission until the vendor
@@ -195,12 +195,12 @@ Live migration save path
|
Then the VFIO device is put in _STOP_COPY state
(FINISH_MIGRATE, _ACTIVE, _STOP_COPY)
- .save_live_complete_precopy() is called for each active device
+ .save_complete() is called for each active device
For the VFIO device: in the non-multifd mode iterate in
- .save_live_complete_precopy() until
+ .save_complete() until
pending data is 0
In the multifd mode this iteration is done in
- .save_live_complete_precopy_thread() instead.
+ .save_complete_precopy_thread() instead.
|
(POSTMIGRATE, _COMPLETED, _STOP_COPY)
Migraton thread schedules cleanup bottom half and exits
@@ -247,3 +247,22 @@ The multifd VFIO device state transfer is controlled by
"x-migration-multifd-transfer" VFIO device property. This property defaults to
AUTO, which means that VFIO device state transfer via multifd channels is
attempted in configurations that otherwise support it.
+
+Since the target QEMU needs to load device state buffers in order, it
+needs to queue incoming buffers until they can be loaded into the device.
+This means that a malicious QEMU source could theoretically cause the
+target QEMU to allocate unlimited amounts of memory for such in-flight buffers.
+
+The "x-migration-max-queued-buffers-size" property allows capping the total size
+of these VFIO device state buffers queued at the destination.
+
+Because a malicious QEMU source causing an OOM on the target is not expected
+to be a realistic threat in most VFIO live migration use cases, and because
+the right value depends on the particular setup, this queued buffers size
+limit is disabled by default (it is set to UINT64_MAX).
+
+Some host platforms (like ARM64) require that the VFIO device config is
+loaded only after all iterables have been loaded, during the
+non-iterable loading phase. Such interlocking is controlled by the
+"x-migration-load-config-after-iter" VFIO device property, which in its
+default setting (AUTO) enables it only on platforms that actually require it.
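
The cap described above amounts to a simple accounting check before each
out-of-order buffer is queued. The following self-contained C sketch
illustrates the idea only; it is not the actual QEMU implementation, and the
``QueuedBufferCap`` type and function names are invented::

    /* Illustration only: bound the total bytes of device-state buffers that
     * may sit queued at the destination waiting to be loaded in order. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct QueuedBufferCap {
        uint64_t max_bytes;     /* UINT64_MAX effectively disables the cap */
        uint64_t queued_bytes;  /* total size of buffers currently queued  */
    } QueuedBufferCap;

    /* Returns true if the buffer may be queued, false if queuing it would
     * exceed the configured limit (the migration should then fail rather
     * than let a malicious source grow memory use without bound). */
    bool buffer_cap_try_queue(QueuedBufferCap *cap, size_t buf_len)
    {
        if (buf_len > cap->max_bytes - cap->queued_bytes) {
            return false;
        }
        cap->queued_bytes += buf_len;
        return true;
    }

    /* Once a queued buffer has been loaded into the device, release it. */
    void buffer_cap_release(QueuedBufferCap *cap, size_t buf_len)
    {
        cap->queued_bytes -= buf_len;
    }
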
diff --git a/docs/devel/qapi-code-gen.rst b/docs/devel/qapi-code-gen.rst
index 231cc0f..dfdbeac 100644
--- a/docs/devel/qapi-code-gen.rst
+++ b/docs/devel/qapi-code-gen.rst
@@ -876,25 +876,35 @@ structuring content.
Headings and subheadings
~~~~~~~~~~~~~~~~~~~~~~~~
-A free-form documentation comment containing a line which starts with
-some ``=`` symbols and then a space defines a section heading::
+Free-form documentation does not start with ``@SYMBOL`` and can contain
+arbitrary rST markup. Headings can be marked up using the standard rST
+syntax::
##
- # = This is a top level heading
+ # *************************
+ # This is a level 2 heading
+ # *************************
#
# This is a free-form comment which will go under the
# top level heading.
##
##
- # == This is a second level heading
+ # This is a third level heading
+ # ==============================
+ #
+ # Level 4
+ # _______
+ #
+ # Level 5
+ # ^^^^^^^
+ #
+ # Level 6
+ # """""""
##
-A heading line must be the first line of the documentation
-comment block.
-
-Section headings must always be correctly nested, so you can only
-define a third-level heading inside a second-level heading, and so on.
+Level 1 headings are reserved for use by the generated documentation
+page itself, leaving level 2 as the highest level that should be used.
Documentation markup
diff --git a/docs/devel/qapi-domain.rst b/docs/devel/qapi-domain.rst
index 1123872..fe540d1 100644
--- a/docs/devel/qapi-domain.rst
+++ b/docs/devel/qapi-domain.rst
@@ -9,7 +9,7 @@ in Sphinx is provided by the QAPI Domain, located in
`Python Domain
<https://www.sphinx-doc.org/en/master/usage/domains/python.html>`_
included with Sphinx, but provides special directives and roles
-speciically for annotating and documenting QAPI definitions
+for annotating and documenting QAPI definitions
specifically.
A `Domain
@@ -101,7 +101,7 @@ without types. The QAPI domain uses this class for features, returns,
and enum values.
TypedField:
- * Creates a grouped, typed field. Multiple adjacent entres will be
+ * Creates a grouped, typed field. Multiple adjacent entries will be
merged into one section, and the content will form a bulleted list.
* *Must* take at least one argument, but supports up to two -
nominally, a name and a type.
@@ -242,6 +242,37 @@ Example::
}
+``:return-nodesc:``
+-------------------
+
+Document the return type of a QAPI command, without an accompanying
+description.
+
+:availability: This field list is only available in the body of the
+ Command directive.
+:syntax: ``:return-nodesc: type``
+:type: `sphinx.util.docfields.Field
+ <https://pydoc.dev/sphinx/latest/sphinx.util.docfields.Field.html?private=1>`_
+
+
+Example::
+
+ .. qapi:command:: query-replay
+ :since: 5.2
+
+ Retrieve the record/replay information. It includes current
+ instruction count which may be used for ``replay-break`` and
+ ``replay-seek`` commands.
+
+ :return-nodesc: ReplayInfo
+
+ .. qmp-example::
+
+ -> { "execute": "query-replay" }
+ <- { "return": {
+ "mode": "play", "filename": "log.rr", "icount": 220414 }
+ }
+
``:value:``
-----------
diff --git a/docs/devel/rust.rst b/docs/devel/rust.rst
index dc8c441..b673753 100644
--- a/docs/devel/rust.rst
+++ b/docs/devel/rust.rst
@@ -351,7 +351,7 @@ Writing procedural macros
'''''''''''''''''''''''''
By conventions, procedural macros are split in two functions, one
-returning ``Result<proc_macro2::TokenStream, MacroError>`` with the body of
+returning ``Result<proc_macro2::TokenStream, syn::Error>`` with the body of
the procedural macro, and the second returning ``proc_macro::TokenStream``
which is the actual procedural macro. The former's name is the same as
the latter with the ``_or_error`` suffix. The code for the latter is more
@@ -361,18 +361,19 @@ from the type after ``as`` in the invocation of ``parse_macro_input!``::
#[proc_macro_derive(Object)]
pub fn derive_object(input: TokenStream) -> TokenStream {
let input = parse_macro_input!(input as DeriveInput);
- let expanded = derive_object_or_error(input).unwrap_or_else(Into::into);
- TokenStream::from(expanded)
+ derive_object_or_error(input)
+ .unwrap_or_else(syn::Error::into_compile_error)
+ .into()
}
The ``qemu_api_macros`` crate has utility functions to examine a
``DeriveInput`` and perform common checks (e.g. looking for a struct
-with named fields). These functions return ``Result<..., MacroError>``
+with named fields). These functions return ``Result<..., syn::Error>``
and can be used easily in the procedural macro function::
fn derive_object_or_error(input: DeriveInput) ->
- Result<proc_macro2::TokenStream, MacroError>
+ Result<proc_macro2::TokenStream, Error>
{
is_c_repr(&input, "#[derive(Object)]")?;
diff --git a/docs/devel/testing/functional.rst b/docs/devel/testing/functional.rst
index 9e56dd1..3728bab 100644
--- a/docs/devel/testing/functional.rst
+++ b/docs/devel/testing/functional.rst
@@ -65,7 +65,7 @@ directory should be your build folder. For example::
The test framework will automatically purge any scratch files created during
the tests. If needing to debug a failed test, it is possible to keep these
-files around on disk by setting ```QEMU_TEST_KEEP_SCRATCH=1``` as an env
+files around on disk by setting ``QEMU_TEST_KEEP_SCRATCH=1`` as an env
variable. Any preserved files will be deleted the next time the test is run
without this variable set.
diff --git a/docs/devel/testing/main.rst b/docs/devel/testing/main.rst
index 6b18ed8..2b5cb0c 100644
--- a/docs/devel/testing/main.rst
+++ b/docs/devel/testing/main.rst
@@ -604,9 +604,9 @@ below steps to debug it:
2. Add "V=1" to the command line, try again, to see the verbose output.
3. Further add "DEBUG=1" to the command line. This will pause in a shell prompt
in the container right before testing starts. You could either manually
- build QEMU and run tests from there, or press Ctrl-D to let the Docker
+ build QEMU and run tests from there, or press :kbd:`Ctrl+d` to let the Docker
testing continue.
-4. If you press Ctrl-D, the same building and testing procedure will begin, and
+4. If you press :kbd:`Ctrl+d`, the same building and testing procedure will begin, and
will hopefully run into the error again. After that, you will be dropped to
the prompt for debug.
diff --git a/docs/devel/tracing.rst b/docs/devel/tracing.rst
index 043bed7..f4557ee 100644
--- a/docs/devel/tracing.rst
+++ b/docs/devel/tracing.rst
@@ -76,7 +76,7 @@ The "io/trace.h" file must be created manually with an #include of the
corresponding "trace/trace-<subdir>.h" file that will be generated in the
builddir::
- $ echo '#include "trace/trace-io.h"' >io/trace.h
+ $ (echo '/* SPDX-License-Identifier: GPL-2.0-or-later */' ; echo '#include "trace/trace-io.h"') >io/trace.h
While it is possible to include a trace.h file from outside a source file's own
sub-directory, this is discouraged in general. It is strongly preferred that