author     Richard Henderson <richard.henderson@linaro.org>  2021-10-13 11:43:29 -0700
committer  Richard Henderson <richard.henderson@linaro.org>  2021-10-13 11:43:29 -0700
commit     e5b2333f24ff207f08cf96e73d2e11438c985801 (patch)
tree       14589d17d9316913fe5a6fdc772bd28a64fa2cb0 /docs
parent     984b2b504942d9adb31d7743f02138fb79a73c12 (diff)
parent     76e366e728549b3324cc2dee6745d6a4f1af18e6 (diff)
Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20211013' into staging
Use MO_128 for 16-byte atomic memory operations.
Add cpu_ld/st_mmu memory primitives.
Move helper_ld/st memory helpers out of tcg.h.
Canonicalize alignment flags in MemOp.

# gpg: Signature made Wed 13 Oct 2021 10:48:45 AM PDT
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [ultimate]

* remotes/rth/tags/pull-tcg-20211013:
  tcg: Canonicalize alignment flags in MemOp
  tcg: Move helper_*_mmu decls to tcg/tcg-ldst.h
  target/arm: Use cpu_*_mmu instead of helper_*_mmu
  target/sparc: Use cpu_*_mmu instead of helper_*_mmu
  target/s390x: Use cpu_*_mmu instead of helper_*_mmu
  target/mips: Use 8-byte memory ops for msa load/store
  target/mips: Use cpu_*_data_ra for msa load/store
  accel/tcg: Move cpu_atomic decls to exec/cpu_ldst.h
  accel/tcg: Add cpu_{ld,st}*_mmu interfaces
  target/hexagon: Implement cpu_mmu_index
  target/s390x: Use MO_128 for 16 byte atomics
  target/ppc: Use MO_128 for 16 byte atomics
  target/i386: Use MO_128 for 16 byte atomics
  target/arm: Use MO_128 for 16 byte atomics
  memory: Log access direction for invalid accesses

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Diffstat (limited to 'docs')
-rw-r--r--  docs/devel/loads-stores.rst | 52
1 file changed, 43 insertions(+), 9 deletions(-)
diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst
index 568274b..8f0035c 100644
--- a/docs/devel/loads-stores.rst
+++ b/docs/devel/loads-stores.rst
@@ -68,15 +68,19 @@ Regexes for git grep
- ``\<ldn_\([hbl]e\)?_p\>``
- ``\<stn_\([hbl]e\)?_p\>``
-``cpu_{ld,st}*_mmuidx_ra``
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+``cpu_{ld,st}*_mmu``
+~~~~~~~~~~~~~~~~~~~~
-These functions operate on a guest virtual address plus a context,
-known as a "mmu index" or ``mmuidx``, which controls how that virtual
-address is translated. The meaning of the indexes are target specific,
-but specifying a particular index might be necessary if, for instance,
-the helper requires an "always as non-privileged" access rather that
-the default access for the current state of the guest CPU.
+These functions operate on a guest virtual address, plus a context
+known as a "mmu index" which controls how that virtual address is
+translated, plus a ``MemOp`` which contains alignment requirements
+among other things. The ``MemOp`` and mmu index are combined into
+a single argument of type ``MemOpIdx``.
+
+The meaning of the indexes is target specific, but specifying a
+particular index might be necessary if, for instance, the helper
+requires an "always as non-privileged" access rather than the
+default access for the current state of the guest CPU.
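As a minimal sketch of how a helper might use this interface (the
helper name ``helper_my_ldl`` and its ``mmu_idx`` parameter are
illustrative, not part of the patch), an aligned little-endian 32-bit
load could look like::

    #include "exec/cpu_ldst.h"

    /* Aligned 32-bit little-endian load: including MO_ALIGN in the
     * MemOp makes an unaligned address raise an alignment fault. */
    uint32_t helper_my_ldl(CPUArchState *env, target_ulong addr,
                           uint32_t mmu_idx)
    {
        MemOpIdx oi = make_memop_idx(MO_LEUL | MO_ALIGN, mmu_idx);

        return cpu_ldl_le_mmu(env, addr, oi, GETPC());
    }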
These functions may cause a guest CPU exception to be taken
(e.g. for an alignment fault or MMU fault) which will result in
@@ -99,6 +103,35 @@ function, which is a return address into the generated code [#gpc]_.
Function names follow the pattern:
+load: ``cpu_ld{size}{end}_mmu(env, ptr, oi, retaddr)``
+
+store: ``cpu_st{size}{end}_mmu(env, ptr, val, oi, retaddr)``
+
+``size``
+ - ``b`` : 8 bits
+ - ``w`` : 16 bits
+ - ``l`` : 32 bits
+ - ``q`` : 64 bits
+
+``end``
+ - (empty) : for target endian, or 8 bit sizes
+ - ``_be`` : big endian
+ - ``_le`` : little endian
+
+Regexes for git grep:
+ - ``\<cpu_ld[bwlq](_[bl]e)\?_mmu\>``
+ - ``\<cpu_st[bwlq](_[bl]e)\?_mmu\>``
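Read concretely (a sketch; ``env``, ``addr``, ``retaddr`` and the two
``MemOpIdx`` values are assumed to be in scope), the pattern expands to
names such as::

    /* 64-bit big-endian load, then 16-bit little-endian store,
     * each taking a combined MemOp + mmu index argument. */
    uint64_t v = cpu_ldq_be_mmu(env, addr, oi_q, retaddr);
    cpu_stw_le_mmu(env, addr + 8, 0x1234, oi_w, retaddr);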
+
+
+``cpu_{ld,st}*_mmuidx_ra``
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+These functions work like the ``cpu_{ld,st}_mmu`` functions except
+that the ``mmuidx`` parameter is not combined with a ``MemOp``,
+and therefore there is no required alignment supplied or enforced.
+
+Function names follow the pattern:
+
load: ``cpu_ld{sign}{size}{end}_mmuidx_ra(env, ptr, mmuidx, retaddr)``
store: ``cpu_st{size}{end}_mmuidx_ra(env, ptr, val, mmuidx, retaddr)``
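A one-line sketch of the difference from the ``_mmu`` variants
(``env``, ``addr`` and ``mmu_idx`` assumed to be in scope)::

    /* Bare mmu index, no MemOp: no alignment requirement can be
     * expressed, so none is enforced; endianness is the target's. */
    uint32_t v = cpu_ldl_mmuidx_ra(env, addr, mmu_idx, GETPC());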
@@ -132,7 +165,8 @@ of the guest CPU, as determined by ``cpu_mmu_index(env, false)``.
These are generally the preferred way to do accesses by guest
virtual address from helper functions, unless the access should
-be performed with a context other than the default.
+be performed with a context other than the default, or alignment
+should be enforced for the access.
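For comparison, a default-context access needs neither an explicit mmu
index nor a ``MemOp`` (a sketch, with ``env`` and ``addr`` assumed to
be in scope)::

    /* The mmu index is taken from cpu_mmu_index(env, false) and
     * no alignment is enforced. */
    uint32_t v = cpu_ldl_data_ra(env, addr, GETPC());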
Function names follow the pattern: