From 9d49cc632417db08ba2fecf2e91b423656ce92a8 Mon Sep 17 00:00:00 2001 From: Andrew Waterman Date: Thu, 6 Jul 2023 17:18:59 -0700 Subject: Fix formatting errors in Supervisor and Svnapot chapters --- src/supervisor.adoc | 58 ++++++++++++++++++++++++++--------------------------- 1 file changed, 29 insertions(+), 29 deletions(-) diff --git a/src/supervisor.adoc b/src/supervisor.adoc index 2a376d6..cc74c64 100644 --- a/src/supervisor.adoc +++ b/src/supervisor.adoc @@ -263,11 +263,11 @@ formatted as shown in Figures <> and <> respectively. [[sipreg-standard]] -.Standard portion (bits 15:0)of `sip`. +.Standard portion (bits 15:0) of `sip`. include::images/bytefield/sipreg-standard.edn[] [[siereg-standard]] -.Statndard portion (bits 15:0)of `sie`. +.Statndard portion (bits 15:0) of `sie`. include::images/bytefield/siereg-standard.edn[] @@ -336,7 +336,7 @@ The counter-enable register `scounteren` is a 32-bit register that controls the availability of the hardware performance monitoring counters to U-mode. -When the CY, TM, IR, or HPM_n_ bit in the `scounteren` register is +When the CY, TM, IR, or HPM__n__ bit in the `scounteren` register is clear, attempts to read the `cycle`, `time`, `instret`, or `hpmcountern` register while executing in U-mode will cause an illegal instruction exception. When one of these bits is set, access to the corresponding @@ -882,20 +882,20 @@ The behavior of SFENCE.VMA depends on _rs1_ and _rs2_ as follows: made to any level of the page tables, for all address spaces. The fence also invalidates all address-translation cache entries, for all address spaces. -* If __rs1__=`x0` and __rs2__≥``x0``, the fence orders all +* If __rs1__=`x0` and __rs2__≠``x0``, the fence orders all reads and writes made to any level of the page tables, but only for the address space identified by integer register _rs2_. Accesses to _global_ mappings (see <>) are not ordered. The fence also invalidates all address-translation cache entries matching the address space identified by integer register _rs2_, except for entries containing global mappings. -* If __rs1__≥``x0`` and __rs2__=`x0`, the fence orders only +* If __rs1__≠``x0`` and __rs2__=`x0`, the fence orders only reads and writes made to leaf page table entries corresponding to the virtual address in __rs1__, for all address spaces. The fence also invalidates all address-translation cache entries that contain leaf page table entries corresponding to the virtual address in _rs1_, for all address spaces. -* If __rs1__≥``x0`` and __rs2__≥``x0``, the +* If __rs1__≠``x0`` and __rs2__≠``x0``, the fence orders only reads and writes made to leaf page table entries corresponding to the virtual address in _rs1_, for the address space identified by integer register _rs2_. Accesses to global mappings are @@ -908,7 +908,7 @@ If the value held in _rs1_ is not a valid virtual address, then the SFENCE.VMA instruction has no effect. No exception is raised in this case. -When __rs2__≥``x0``, bits SXLEN-1:ASIDMAX of the value held +When __rs2__≠``x0``, bits SXLEN-1:ASIDMAX of the value held in _rs2_ are reserved for future standard use. Until their use is defined by a standard extension, they should be zeroed by software and ignored by current implementations. Furthermore, if @@ -1202,7 +1202,7 @@ either mapping being used. Global mappings need not be stored redundantly in address-translation caches for multiple ASIDs. 
Additionally, they need not be flushed from local address-translation caches when an SFENCE.VMA instruction is -executed with __rs2__≥``x0``. +executed with __rs2__≠``x0``. ==== The RSW field is reserved for use by supervisor software; the @@ -1286,29 +1286,29 @@ region). A virtual address _va_ is translated into a physical address _pa_ as follows: -. Let _a_ be ``satp``.__ppn__ X PAGESIZE, and let __i__= LEVELS - 1. (For Sv32, PAGESIZE=2^12^ and LEVELS=2.) The `satp` register must be +. Let _a_ be ``satp``.__ppn__×PAGESIZE, and let __i__=LEVELS-1. (For Sv32, PAGESIZE=2^12^ and LEVELS=2.) The `satp` register must be _active_, i.e., the effective privilege mode must be S-mode or U-mode. -. Let _pte_ be the value of the PTE at address __a__+__va.vpn[i] X PTESIZE. (For Sv32, PTESIZE=4.) If accessing _pte_ violates a PMA or PMP check, raise an access-fault exception corresponding to the original access type. -. If _pte.v_=0, or if _pte.r_=0 and _pte.w_=1, or if any bits or encodings that are reserved for future standard use are set within _pte_, stop and raise a page-fault exception corresponding to the original access type. -. Otherwise, the PTE is valid. If __pte.r__=1 or __pte.x__=1, go to step 5. Otherwise, this PTE is a pointer to the next level of the page table. Let __i=i__-1. If i<0, stop and raise a page-fault exception corresponding to the original access type. Otherwise, let -__a=pte.ppn__ X PAGESIZE and go to step 2. +. Let _pte_ be the value of the PTE at address __a__+__va__.__vpn__[__i__]×PTESIZE. (For Sv32, PTESIZE=4.) If accessing _pte_ violates a PMA or PMP check, raise an access-fault exception corresponding to the original access type. +. If _pte_._v_=0, or if _pte_._r_=0 and _pte_._w_=1, or if any bits or encodings that are reserved for future standard use are set within _pte_, stop and raise a page-fault exception corresponding to the original access type. +. Otherwise, the PTE is valid. If __pte__.__r__=1 or __pte__.__x__=1, go to step 5. Otherwise, this PTE is a pointer to the next level of the page table. Let __i=i__-1. If __i__<0, stop and raise a page-fault exception corresponding to the original access type. Otherwise, let +__a__=__pte__.__ppn__×PAGESIZE and go to step 2. . A leaf PTE has been found. Determine if the requested memory access is -allowed by the _pte.r_, _pte.w_, _pte.x_, and _pte.u_ bits, given the current privilege mode and the value of the SUM and MXR fields of the `mstatus` register. If not, stop and raise a page-fault exception corresponding to the original access type. -. If _i>0_ and _pte.ppn_[i-1:0] ≠ 0, this is a misaligned superpage; stop and raise a page-fault exception corresponding to the original access type. -. If _pte.a_=0, or if the original memory access is a store and _pte.d_=0, either raise a page-fault exception corresponding to the original access type, or: +allowed by the _pte_._r_, _pte_._w_, _pte_._x_, and _pte_._u_ bits, given the current privilege mode and the value of the SUM and MXR fields of the `mstatus` register. If not, stop and raise a page-fault exception corresponding to the original access type. +. If _i>0_ and _pte_._ppn_[__i__-1:0] ≠ 0, this is a misaligned superpage; stop and raise a page-fault exception corresponding to the original access type. +. 
If _pte_._a_=0, or if the original memory access is a store and _pte_._d_=0, either raise a page-fault exception corresponding to the original access type, or: * If a store to _pte_ would violate a PMA or PMP check, raise an access-fault exception corresponding to the original access type. * Perform the following steps atomically: -** Compare _pte_ to the value of the PTE at address __a__+__va.vpn[i]__ X PTESIZE. -** If the values match, set _pte.a_ to 1 and, if the -original memory access is a store, also set _pte.d_ to 1. -** If the comparison fails, return to step 2 +** Compare _pte_ to the value of the PTE at address __a__+__va.vpn__[__i__]×PTESIZE. +** If the values match, set _pte_._a_ to 1 and, if the +original memory access is a store, also set _pte_._d_ to 1. +** If the comparison fails, return to step 2. . The translation is successful. The translated physical address is given as follows: * _pa.pgoff_ = _va.pgoff_. -* If _i_>0, then this is a superpage translation and __pa.ppn[i__-1:0] = _va.vpn[i_-1:0]. -* _pa.ppn_[LEVELS - 1:__i__] = _pte.ppn_[LEVELS - 1:__i__]. +* If _i_>0, then this is a superpage translation and __pa.ppn__[__i__-1:0] = __va.vpn__[__i__-1:0]. +* _pa.ppn_[LEVELS-1:__i__] = _pte_._ppn_[LEVELS-1:__i__]. All implicit accesses to the address-translation data structures in this algorithm are performed using width PTESIZE. @@ -1613,7 +1613,7 @@ The Svnapot extension depends on Sv39. .Page table entry encodings when __pte__.N=1 [%autowidth,float="center",align="center",cols="^,^,<,^",options="header"] |=== -|i |_pte.ppn[i]_ |Description |_pte.napot_bits_ +|i |_pte_._ppn_[_i_] |Description |_pte_.__napot_bits__ |0 + 0 + 0 + @@ -1647,14 +1647,14 @@ except that: * If the encoding in _pte_ is valid according to <>, then instead of returning the original value of _pte_, implicit reads of a NAPOT PTE return a copy -of _pte_ in which __pte.ppn[i][pte.napot_bits__-1:0] is replaced by -__vpn[i][pte.napot_bits__-1:0]. If the encoding in _pte_ is reserved according to +of _pte_ in which __pte__.__ppn__[__i__][__pte__.__napot_bits__-1:0] is replaced by +__vpn__[__i__][__pte__.__napot_bits__-1:0]. If the encoding in _pte_ is reserved according to <>, then a page-fault exception must be raised. * Implicit reads of NAPOT page table entries may create address-translation cache entries mapping -_a_ + _j_ X PTESIZE to a copy of _pte_ in which _pte.ppn[i][pte.napot_bits_-1:0] +_a_ + _j_×PTESIZE to a copy of _pte_ in which _pte_._ppn_[_i_][_pte_.__napot_bits__-1:0] is replaced by _vpn[i][pte.napot_bits_-1:0], for any or all _j_ such that -__j >> napot_bits__ = __vpn[i] >> napot_bits__, all for the address space identified in _satp_ as loaded by step 1. +__j__ >> __napot_bits__ = __vpn__[__i__] >> __napot_bits__, all for the address space identified in _satp_ as loaded by step 1. [NOTE] ==== @@ -1711,7 +1711,7 @@ __ [%autowidth,float="center",align="center",cols="^,^,<,^",options="header"] |=== -|i |_pte.ppn[i]_ |Description |_pte.napot_bits_ +|i |_pte_._ppn_[_i_] |Description |_pte_.__napot_bits__ |0 + 0 + 0 + @@ -1756,7 +1756,7 @@ allow system software to determine which sizes are supported. Other sizes may remain deliberately excluded, so that PPN bits not being used to indicate a valid NAPOT region size (e.g., the least-significant -bit of _pte.ppn[i]_) may be repurposed for other uses in the +bit of _pte_._ppn_[_i_]) may be repurposed for other uses in the future. However, in case finer-grained intermediate page size support proves not -- cgit v1.1
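
For reference while reviewing the reformatted hunk at line 1286, the Sv32 walk described there can be summarized in C. This is an illustrative sketch only, not part of the specification or of this patch: the helper `read_phys32`, the `access_t` type, and the choice to simply raise a fault when an A/D update would be required are assumptions, and the PMA/PMP, permission (SUM/MXR), and reserved-encoding checks from steps 2, 3, and 5 are omitted for brevity.

[source,c]
----
#include <stdbool.h>
#include <stdint.h>

#define PAGESIZE 4096u  /* 2^12 (step 1) */
#define PTESIZE  4u     /* Sv32 PTEs are 4 bytes (step 2) */
#define LEVELS   2      /* Sv32 uses a two-level page table */

#define PTE_V (1u << 0)
#define PTE_R (1u << 1)
#define PTE_W (1u << 2)
#define PTE_X (1u << 3)
#define PTE_A (1u << 6)
#define PTE_D (1u << 7)

/* Hypothetical helper: read a 32-bit PTE from physical memory. */
extern uint32_t read_phys32(uint64_t pa);

typedef enum { ACCESS_READ, ACCESS_WRITE, ACCESS_FETCH } access_t;

/* Returns true and fills *pa on success; false stands in for "raise a fault". */
bool sv32_translate(uint32_t satp_ppn, uint32_t va, access_t acc, uint64_t *pa)
{
    uint64_t a = (uint64_t)satp_ppn * PAGESIZE;              /* step 1 */

    for (int i = LEVELS - 1; i >= 0; i--) {
        uint32_t vpn_i = (va >> (12 + 10 * i)) & 0x3ffu;
        uint32_t pte = read_phys32(a + (uint64_t)vpn_i * PTESIZE);  /* step 2 */

        if (!(pte & PTE_V) || (!(pte & PTE_R) && (pte & PTE_W)))
            return false;                                    /* step 3: page fault */

        if (!(pte & (PTE_R | PTE_X))) {                      /* step 4: non-leaf, descend */
            a = (uint64_t)(pte >> 10) * PAGESIZE;
            continue;                                        /* i < 0 after the loop: fault */
        }

        /* step 5: r/w/x/u permission checks against the access type omitted here */

        uint32_t ppn = pte >> 10;
        if (i > 0 && (ppn & 0x3ffu) != 0)
            return false;                                    /* step 6: misaligned superpage */

        if (!(pte & PTE_A) || (acc == ACCESS_WRITE && !(pte & PTE_D)))
            return false;                                    /* step 7: takes the "raise a
                                                                page fault" option; the atomic
                                                                A/D update path is not modeled */

        /* step 8: a superpage takes its low PPN bits from the virtual address */
        uint64_t pa_ppn = (i > 0) ? ((ppn & ~0x3ffull) | ((va >> 12) & 0x3ffu)) : ppn;
        *pa = (pa_ppn << 12) | (va & 0xfffu);
        return true;
    }

    return false;                                            /* ran past level 0: page fault */
}
----

Because Sv32 physical addresses can be up to 34 bits wide, the result is returned as a `uint64_t` even though Sv32 virtual addresses are 32 bits.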