[[machine]]
== Machine-Level ISA, Version 1.13

This chapter describes the machine-level operations available in machine-mode (M-mode), which is the highest privilege mode in a RISC-V hart. M-mode is used for low-level access to a hardware platform and is the first mode entered at reset. M-mode can also be used to implement features that are too difficult or expensive to implement in hardware directly. The RISC-V machine-level ISA contains a common core that is extended depending on which other privilege levels are supported and other details of the hardware implementation.

=== Machine-Level CSRs

In addition to the machine-level CSRs described in this section, M-mode code can access all CSRs at lower privilege levels.

[[misa]]
==== Machine ISA (`misa`) Register

The `misa` CSR is a *WARL* read-write register reporting the ISA supported by the hart. This register must be readable in any implementation, but a value of zero can be returned to indicate the `misa` register has not been implemented, requiring that CPU capabilities be determined through a separate non-standard mechanism.

.Machine ISA register (misa)
include::images/bytefield/misareg.edn[]

The MXL (Machine XLEN) field encodes the native base integer ISA width as shown in <<misabase>>. The MXL field is read-only. If `misa` is nonzero, the MXL field indicates the effective XLEN in M-mode, a constant termed _MXLEN_. XLEN is never greater than MXLEN, but XLEN might be smaller than MXLEN in less-privileged modes.

[[misabase]]
.Encoding of MXL field in `misa`
[%autowidth,float="center",align="center",cols=">,>",options="header",]
|===
|MXL |XLEN

|1 +
2 +
3
|32 +
64 +
_Reserved_
|===

The `misa` CSR is MXLEN bits wide.

[NOTE]
====
The base width can be quickly ascertained using branches on the sign of the returned `misa` value, and possibly a shift left by one and a second branch on the sign. These checks can be written in assembly code without knowing the register width (MXLEN) of the hart. The base width is given by __MXLEN=2^MXL+4^__.

The base width can also be found if `misa` is zero, by placing the immediate 2 in a register, then shifting the register left by 31 bits. If the result is zero, the hart is RV32; otherwise it is RV64.
====
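As a concrete illustration of the sign-branch technique described in the note above, the following sketch determines the base width without knowing MXLEN in advance. The label names are illustrative only.

.Sketch: determining MXLEN from `misa` using sign branches
....
    csrr  t0, misa
    beqz  t0, misa_unimplemented  # misa == 0: fall back to the shift-immediate trick
    bgez  t0, mxlen_is_32         # MXL MSB clear => MXL = 1 => MXLEN = 32
    slli  t0, t0, 1               # shift MXL's low bit into the sign position
    bgez  t0, mxlen_is_64         # MXL = 2 => MXLEN = 64
    # otherwise MXL = 3 (reserved encoding)
....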
The Extensions field encodes the presence of the standard extensions, with a single bit per letter of the alphabet (bit 0 encodes the presence of extension "A", bit 1 encodes the presence of extension "B", through to bit 25, which encodes "Z"). The "I" bit will be set for the RV32I and RV64I base ISAs, and the "E" bit will be set for RV32E and RV64E. The Extensions field is a *WARL* field that can contain writable bits where the implementation allows the supported ISA to be modified. At reset, the Extensions field shall contain the maximal set of supported extensions, and "I" shall be selected over "E" if both are available.

When a standard extension is disabled by clearing its bit in `misa`, the instructions and CSRs defined or modified by the extension revert to their defined or reserved behaviors as if the extension is not implemented.

[NOTE]
====
For a given RISC-V execution environment, an instruction, extension, or other feature of the RISC-V ISA is ordinarily judged to be _implemented_ or not by the observable execution behavior in that environment. For example, the F extension is said to be implemented for an execution environment if and only if the instructions that the RISC-V Unprivileged ISA defines for F execute as specified.

With this definition of _implemented_, disabling an extension by clearing its bit in `misa` results in the extension being considered _not implemented_ in M-mode. For example, setting `misa`.F=0 results in the F extension being not implemented for M-mode, because the F extension's instructions will not act as the Unprivileged ISA requires but may instead raise an illegal-instruction exception.

Defining the term _implemented_ based strictly on the observable behavior might conflict with other common understandings of the same word. In particular, although common usage may allow for the combination "implemented but disabled," in this document it is considered a contradiction in terms, because _disabled_ implies execution will not behave as required for the feature to be considered _implemented_. In the same vein, "implemented and enabled" is redundant here; "implemented" suffices.
====

.Encoding of Extensions field in `misa`. All bits that are reserved for future use must return zero when read.
[%autowidth,float="center",align="center",cols=">,>,<",options="header",]
|===
|Bit |Character |Description

|0 +
1 +
2 +
3 +
4 +
5 +
6 +
7 +
8 +
9 +
10 +
11 +
12 +
13 +
14 +
15 +
16 +
17 +
18 +
19 +
20 +
21 +
22 +
23 +
24 +
25
|A +
B +
C +
D +
E +
F +
G +
H +
I +
J +
K +
L +
M +
N +
O +
P +
Q +
R +
S +
T +
U +
V +
W +
X +
Y +
Z
|Atomic extension +
B extension +
Compressed extension +
Double-precision floating-point extension +
RV32E/64E base ISA +
Single-precision floating-point extension +
_Reserved_ +
Hypervisor extension +
RV32I/64I base ISA +
_Reserved_ +
_Reserved_ +
_Reserved_ +
Integer Multiply/Divide extension +
_Tentatively reserved for User-Level Interrupts extension_ +
_Reserved_ +
_Tentatively reserved for Packed-SIMD extension_ +
Quad-precision floating-point extension +
_Reserved_ +
Supervisor mode implemented +
_Reserved_ +
User mode implemented +
Vector extension +
_Reserved_ +
Non-standard extensions present +
_Reserved_ +
_Reserved_
|===

The "X" bit will be set if there are any non-standard extensions.

When the "B" bit is 1, the implementation supports the instructions provided by the Zba, Zbb, and Zbs extensions. When the "B" bit is 0, the implementation might not support one or more of the Zba, Zbb, or Zbs extensions.

When the "M" bit is 1, the implementation supports all multiply and divide instructions defined by the M extension. When the "M" bit is 0, the implementation might not support those instructions. However, if the Zmmul extension is supported, then the multiply instructions it specifies are supported irrespective of the value of the "M" bit.

When the "S" bit is 1, the implementation supports supervisor mode. When the "S" bit is 0, the implementation might not support supervisor mode.

When the "U" bit is 1, the implementation supports user mode. When the "U" bit is 0, the implementation might not support user mode.

[NOTE]
====
The `misa` CSR exposes a rudimentary catalog of CPU features to machine-mode code. More extensive information can be obtained in machine mode by probing other machine registers and examining other ROM storage in the system as part of the boot process.

We require that lower privilege levels execute environment calls instead of reading CPU registers to determine features available at each privilege level. This enables virtualization layers to alter the ISA observed at any level, and supports a much richer command interface without burdening hardware designs.
====
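A minimal sketch of such an M-mode probe: test the "M" bit (bit 12) before relying on hardware multiply/divide. The label name is illustrative.

.Sketch: probing `misa` for the M extension
....
    csrr  t0, misa
    srli  t0, t0, 12          # move the "M" bit (bit 12) to bit 0
    andi  t0, t0, 1
    beqz  t0, emulate_muldiv  # bit clear: M might not be supported
....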
The "E" bit is read-only. Unless `misa` is all read-only zero, the "E" bit always reads as the complement of the "I" bit. If an execution environment supports both RV32E and RV32I, software can select RV32E by clearing the "I" bit.

If an ISA feature _x_ depends on an ISA feature _y_, then attempting to enable feature _x_ but disable feature _y_ results in both features being disabled. For example, setting "F"=0 and "D"=1 results in both "F" and "D" being cleared. Similarly, setting "U"=0 and "S"=1 results in both "U" and "S" being cleared.

An implementation may impose additional constraints on the collective setting of two or more `misa` fields, in which case they function collectively as a single *WARL* field. An attempt to write an unsupported combination causes those bits to be set to some supported combination.

Writing `misa` may increase IALIGN, e.g., by disabling the "C" extension. If an instruction that would write `misa` increases IALIGN, and the subsequent instruction's address is not IALIGN-bit aligned, the write to `misa` is suppressed, leaving `misa` unchanged.

When software enables an extension that was previously disabled, then all state uniquely associated with that extension is UNSPECIFIED, unless otherwise specified by that extension.

NOTE: Although one of the bits 25--0 in `misa` being set to 1 implies that the corresponding feature is implemented, the inverse is not necessarily true: one of these bits being clear does not necessarily imply that the corresponding feature is not implemented. This follows from the fact that, when a feature is not implemented, the corresponding opcodes and CSRs become reserved, not necessarily illegal.
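Because the Extensions field is *WARL*, a write is not guaranteed to take effect; a sketch of disabling the "C" extension and confirming the result follows (the label is illustrative).

.Sketch: attempting to disable the "C" extension via `misa`
....
    li    t0, 4            # bit 2 = "C"
    csrc  misa, t0         # WARL: the clear may be ignored
    csrr  t0, misa
    andi  t0, t0, 4
    bnez  t0, c_still_enabled
....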
==== Machine Vendor ID (`mvendorid`) Register

The `mvendorid` CSR is a 32-bit read-only register providing the JEDEC manufacturer ID of the provider of the core. This register must be readable in any implementation, but a value of 0 can be returned to indicate the field is not implemented or that this is a non-commercial implementation.

//.Vendor ID register (`mvendorid`)
//image::png/mvendorid.png[align="center"]

.Vendor ID register (`mvendorid`)
include::images/bytefield/mvendorid.edn[]

JEDEC manufacturer IDs are ordinarily encoded as a sequence of one-byte continuation codes `0x7f`, terminated by a one-byte ID not equal to `0x7f`, with an odd parity bit in the most-significant bit of each byte. `mvendorid` encodes the number of one-byte continuation codes in the Bank field, and encodes the final byte in the Offset field, discarding the parity bit. For example, the JEDEC manufacturer ID `0x7f 0x7f 0x7f 0x7f 0x7f 0x7f 0x7f 0x7f 0x7f 0x7f 0x7f 0x7f 0x8a` (twelve continuation codes followed by `0x8a`) would be encoded in the `mvendorid` CSR as `0x60a`.

[NOTE]
====
In JEDEC's parlance, the bank number is one greater than the number of continuation codes; hence, the `mvendorid` Bank field encodes a value that is one less than the JEDEC bank number.

***

Previously the vendor ID was to be a number allocated by RISC-V International, but this duplicates the work of JEDEC in maintaining a manufacturer ID standard. At time of writing, registering a manufacturer ID with JEDEC has a one-time cost of $500.
====

==== Machine Architecture ID (`marchid`) Register

The `marchid` CSR is an MXLEN-bit read-only register encoding the base microarchitecture of the hart. This register must be readable in any implementation, but a value of 0 can be returned to indicate the field is not implemented. The combination of `mvendorid` and `marchid` should uniquely identify the type of hart microarchitecture that is implemented.

.Machine Architecture ID (`marchid`) register
include::images/bytefield/marchid.edn[]

Open-source project architecture IDs are allocated globally by RISC-V International, and have non-zero architecture IDs with a zero most-significant bit (MSB). Commercial architecture IDs are allocated by each commercial vendor independently, but must have the MSB set and cannot contain zero in the remaining MXLEN-1 bits.

[NOTE]
====
The intent is for the architecture ID to represent the microarchitecture associated with the repo around which development occurs rather than a particular organization. Commercial fabrications of open-source designs should (and might be required by the license to) retain the original architecture ID. This will aid in reducing fragmentation and tool support costs, as well as provide attribution.

Open-source architecture IDs are administered by RISC-V International and should only be allocated to released, functioning open-source projects. Commercial architecture IDs can be managed independently by any registered vendor, but are required to have IDs disjoint from the open-source architecture IDs (MSB set) to prevent collisions if a vendor wishes to use both closed-source and open-source microarchitectures.

The convention adopted within the following Implementation field can be used to segregate branches of the same architecture design, including by organization. The `misa` register also helps distinguish different variants of a design.
====

==== Machine Implementation ID (`mimpid`) Register

The `mimpid` CSR provides a unique encoding of the version of the processor implementation. This register must be readable in any implementation, but a value of 0 can be returned to indicate that the field is not implemented. The Implementation value should reflect the design of the RISC-V processor itself and not any surrounding system.

.Machine Implementation ID (`mimpid`) register
include::images/bytefield/mimpid.edn[]

[NOTE]
====
The format of this field is left to the provider of the architecture source code, but will often be printed by standard tools as a hexadecimal string without any leading or trailing zeros, so the Implementation value can be left-justified (i.e., filled in from the most-significant nibble down) with subfields aligned on nibble boundaries to ease human readability.
====

==== Hart ID (`mhartid`) Register

The `mhartid` CSR is an MXLEN-bit read-only register containing the integer ID of the hardware thread running the code. This register must be readable in any implementation. Hart IDs might not necessarily be numbered contiguously in a multiprocessor system, but at least one hart must have a hart ID of zero. Hart IDs must be unique within the execution environment.

.Hart ID (`mhartid`) register
include::images/bytefield/mhartid.edn[]

[NOTE]
====
In certain cases, we must ensure exactly one hart runs some code (e.g., at reset), and so require one hart to have a known hart ID of zero. For efficiency, system implementers should aim to reduce the magnitude of the largest hart ID used in a system.
====
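For example, a common boot pattern (a sketch; the symbol names are illustrative) elects hart 0 to perform one-time initialization while the remaining harts park until the platform's wake-up mechanism releases them.

.Sketch: electing hart 0 at boot using `mhartid`
....
    csrr  t0, mhartid
    beqz  t0, boot_hart   # only the hart with ID zero initializes the system
park:
    wfi                   # secondary harts wait to be woken (e.g., by an IPI)
    j     park
boot_hart:
    # ... one-time platform initialization ...
....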
==== Machine Status (`mstatus` and `mstatush`) Registers

The `mstatus` register is an MXLEN-bit read/write register formatted as shown in <<mstatusreg-rv32>> for RV32 and <<mstatusreg>> for RV64. The `mstatus` register keeps track of and controls the hart’s current operating state. A restricted view of `mstatus` appears as the `sstatus` register in the S-level ISA.

[[mstatusreg-rv32]]
.Machine-mode status (`mstatus`) register for RV32
include::images/wavedrom/mstatusreg-rv321.edn[]

[[mstatusreg]]
.Machine-mode status (`mstatus`) register for RV64
include::images/wavedrom/mstatusreg.edn[]

For RV32 only, `mstatush` is a 32-bit read/write register formatted as shown in <<mstatushreg>>. Bits 30:4 of `mstatush` generally contain the same fields found in bits 62:36 of `mstatus` for RV64. Fields SD, SXL, and UXL do not exist in `mstatush`.

[[mstatushreg]]
.Additional machine-mode status (`mstatush`) register for RV32.
include::images/wavedrom/mstatushreg.edn[]

[[privstack]]
===== Privilege and Global Interrupt-Enable Stack in `mstatus` register

Global interrupt-enable bits, MIE and SIE, are provided for M-mode and S-mode respectively. These bits are primarily used to guarantee atomicity with respect to interrupt handlers in the current privilege mode.

[NOTE]
====
The global __x__IE bits are located in the low-order bits of `mstatus`, allowing them to be atomically set or cleared with a single CSR instruction.
====

When a hart is executing in privilege mode _x_, interrupts are globally enabled when __x__IE=1 and globally disabled when __x__IE=0. Interrupts for lower-privilege modes, __w__<__x__, are always globally disabled regardless of the setting of any global __w__IE bit for the lower-privilege mode. Interrupts for higher-privilege modes, __y__>__x__, are always globally enabled regardless of the setting of the global __y__IE bit for the higher-privilege mode. Higher-privilege-level code can use separate per-interrupt enable bits to disable selected higher-privilege-mode interrupts before ceding control to a lower-privilege mode.

If supervisor mode is not implemented, then SIE and SPIE are read-only 0.

[NOTE]
====
A higher-privilege mode _y_ could disable all of its interrupts before ceding control to a lower-privilege mode, but this would be unusual, as it would leave only a synchronous trap, non-maskable interrupt, or reset as means to regain control of the hart.
====

To support nested traps, each privilege mode _x_ that can respond to interrupts has a two-level stack of interrupt-enable bits and privilege modes. __x__PIE holds the value of the interrupt-enable bit active prior to the trap, and __x__PP holds the previous privilege mode. The __x__PP fields can only hold privilege modes up to _x_, so MPP is two bits wide and SPP is one bit wide. When a trap is taken from privilege mode _y_ into privilege mode _x_, __x__PIE is set to the value of __x__IE; __x__IE is set to 0; and __x__PP is set to _y_.

[NOTE]
====
For lower privilege modes, any trap (synchronous or asynchronous) is usually taken at a higher privilege mode with interrupts disabled upon entry. The higher-level trap handler will either service the trap and return using the stacked information, or, if not returning immediately to the interrupted context, will save the privilege stack before re-enabling interrupts, so only one entry per stack is required.
====

An MRET or SRET instruction is used to return from a trap in M-mode or S-mode respectively. When executing an __x__RET instruction, supposing __x__PP holds the value _y_, __x__IE is set to __x__PIE; the privilege mode is changed to _y_; __x__PIE is set to 1; and __x__PP is set to the least-privileged supported mode (U if U-mode is implemented, else M). If __y__≠M, __x__RET also sets MPRV=0.
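M-mode firmware typically uses this mechanism to drop into a lower-privilege mode. A minimal sketch, assuming S-mode is implemented and the target entry address is in `a0`:

.Sketch: using MRET to enter S-mode at the address in `a0`
....
    li    t0, 0x1800      # mstatus.MPP field (bits 12:11)
    csrc  mstatus, t0     # clear MPP
    li    t0, 0x0800
    csrs  mstatus, t0     # MPP = 01 (S-mode)
    csrw  mepc, a0        # address at which execution resumes after mret
    mret                  # enter S-mode at mepc
....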
[NOTE]
====
Setting __x__PP to the least-privileged supported mode on an __x__RET helps identify software bugs in the management of the two-level privilege-mode stack.
====

[NOTE]
====
Trap handlers must be designed to neither enable interrupts nor cause exceptions during the phase of handling where the trap handler preserves the critical state information required to handle and resume from the trap. An exception or interrupt in this critical phase may lead to a trap that overwrites such critical state, resulting in the loss of data needed to recover from the initial trap. Further, if an exception occurs in the code path needed to handle traps, then such a situation may lead to an infinite loop of traps. To prevent this, trap handlers must be meticulously designed to identify and safely manage exceptions within their operational flow.
====

__x__PP fields are *WARL* fields that can hold only privilege mode _x_ and any implemented privilege mode lower than _x_. If privilege mode _x_ is not implemented, then __x__PP must be read-only 0.

[NOTE]
====
M-mode software can determine whether a privilege mode is implemented by writing that mode to MPP then reading it back.

If the machine provides only U and M modes, then only a single hardware storage bit is required to represent either 00 or 11 in MPP.
====
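A sketch of that probe for S-mode (the label is illustrative): write the S-mode encoding to MPP and check whether it sticks.

.Sketch: probing MPP to discover whether S-mode is implemented
....
    li    t0, 0x1800             # MPP field mask (bits 12:11)
    li    t1, 0x0800             # MPP = 01 (S-mode)
    csrc  mstatus, t0            # clear MPP
    csrs  mstatus, t1            # attempt to set MPP = S
    csrr  t2, mstatus
    and   t2, t2, t0
    bne   t2, t1, s_mode_absent  # WARL: the value did not stick
....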
[[machine-double-trap]]
===== Double Trap Control in `mstatus` Register

A double trap typically arises during a sensitive phase in trap handling operations -- when an exception or interrupt occurs while the trap handler (the component responsible for managing these events) is in a non-reentrant state. This non-reentrancy usually occurs in the early phase of trap handling, wherein the trap handler has not yet preserved the necessary state to handle and resume from the trap. The occurrence of a trap during this phase can lead to an overwrite of critical state information, resulting in the loss of data needed to recover from the initial trap. The trap that caused this critical error condition is henceforth called the _unexpected trap_. Trap handlers are designed to neither enable interrupts nor cause exceptions during this phase of handling. However, managing Hardware-Error exceptions, which may occur unpredictably, presents significant challenges in trap handler implementation due to the potential risk of a double trap.

The M-mode-disable-trap (`MDT`) bit is a *WARL* field introduced by the Smdbltrp extension. Upon reset, the `MDT` field is set to 1. When the `MDT` bit is set to 1 by an explicit CSR write, the `MIE` (Machine Interrupt Enable) bit is cleared to 0. For RV64, this clearing occurs regardless of the value written, if any, to the `MIE` bit by the same write. The `MIE` bit can only be set to 1 by an explicit CSR write if the `MDT` bit is already 0 or is being set to 0 by the same write; the latter is possible only for RV64, because for RV32 the `MDT` bit is in `mstatush` whereas the `MIE` bit is in `mstatus`.

When a trap is to be taken into M-mode, if the `MDT` bit is currently 0, it is then set to 1, and the trap is delivered as expected. However, if `MDT` is already set to 1, then this is an _unexpected trap_. When the Smrnmi extension is implemented, a trap caused by an RNMI is not considered an _unexpected trap_, irrespective of the state of the `MDT` bit. A trap caused by an RNMI does not set the `MDT` bit. However, a trap that occurs when executing in M-mode with `mnstatus`.NMIE set to 0 is an _unexpected trap_.

In the event of an _unexpected trap_, the handling is as follows:

* When the Smrnmi extension is implemented and `mnstatus`.NMIE is 1, the hart traps to the RNMI handler. To deliver this trap, the `mnepc` and `mncause` registers are written with the values that the _unexpected trap_ would have written to the `mepc` and `mcause` registers respectively. The privilege mode information fields in the `mnstatus` register are written to indicate M-mode, and its `NMIE` field is set to 0.
+
[NOTE]
====
The consequence of this specification is that on occurrence of a double trap, the RNMI handler is not provided with the information that a trap reports in the `mtval` and `mtval2` registers. This information, if needed, can be obtained by the RNMI handler by decoding the instruction at the address in `mnepc` and examining its source register contents.
====

* When the Smrnmi extension is not implemented, or if the Smrnmi extension is implemented and `mnstatus`.NMIE is 0, the hart enters a critical-error state without updating any architectural state, including the `pc`. This state involves ceasing execution, disabling all interrupts (including NMIs), and asserting a `critical-error` signal to the platform.
+
[NOTE]
====
The actions performed by the platform when a hart asserts a `critical-error` signal are platform-specific. The range of possible actions includes restarting the affected hart or restarting the entire platform, among others.
====

The MRET and SRET instructions, when executed in M-mode, set the `MDT` bit to 0. If the new privilege mode is U, VS, or VU, then `sstatus`.SDT is also set to 0. Additionally, if it is VU, then `vsstatus`.SDT is also set to 0.

The MNRET instruction, provided by the Smrnmi extension, sets the `MDT` bit to 0 if the new privilege mode is not M. If it is U, VS, or VU, then `sstatus`.SDT is also set to 0. Additionally, if it is VU, then `vsstatus`.SDT is also set to 0.

[[xlen-control]]
===== Base ISA Control in `mstatus` Register

For RV64 harts, the SXL and UXL fields are *WARL* fields that control the value of XLEN for S-mode and U-mode, respectively. The encoding of these fields is the same as the MXL field of `misa`, shown in <<misabase>>. The effective XLEN in S-mode and U-mode are termed _SXLEN_ and _UXLEN_, respectively.

When MXLEN=32, the SXL and UXL fields do not exist, and SXLEN=32 and UXLEN=32.

When MXLEN=64, if S-mode is not supported, then SXL is read-only zero. Otherwise, it is a *WARL* field that encodes the current value of SXLEN. In particular, an implementation may make SXL a read-only field whose value always ensures that SXLEN=MXLEN.

When MXLEN=64, if U-mode is not supported, then UXL is read-only zero. Otherwise, it is a *WARL* field that encodes the current value of UXLEN. In particular, an implementation may make UXL a read-only field whose value always ensures that UXLEN=MXLEN or UXLEN=SXLEN.

If S-mode is implemented, the set of legal values that the UXL field may assume excludes those that would cause UXLEN to be greater than SXLEN.

Whenever XLEN in any mode is set to a value less than the widest supported XLEN, all operations must ignore source operand register bits above the configured XLEN, and must sign-extend results to fill the entire widest supported XLEN in the destination register. Similarly, `pc` bits above XLEN are ignored, and when the `pc` is written, it is sign-extended to fill the widest supported XLEN.
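As an illustration, a sketch for RV64 harts, assuming U-mode is implemented and UXL (which occupies `mstatus` bits 33:32) is writable rather than read-only:

.Sketch: requesting UXLEN=32 on an RV64 hart
....
    csrr  t0, mstatus
    li    t1, 3
    slli  t1, t1, 32
    not   t1, t1
    and   t0, t0, t1       # clear the UXL field (mstatus bits 33:32)
    li    t1, 1
    slli  t1, t1, 32
    or    t0, t0, t1       # UXL = 1 => UXLEN = 32
    csrw  mstatus, t0      # WARL: hardware may not accept the value
    csrr  t0, mstatus      # read back to confirm
....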
[NOTE]
====
We require that operations always fill the entire underlying hardware registers with defined values to avoid implementation-defined behavior.

To reduce hardware complexity, the architecture imposes no checks that lower-privilege modes have XLEN settings less than or equal to the next-higher privilege mode. In practice, such settings would almost always be a software bug, but machine operation is well-defined even in this case.
====

Some HINT instructions are encoded as integer computational instructions that overwrite their destination register with its current value, e.g., `c.addi x8, 0`. When such a HINT is executed with XLEN < MXLEN and bits MXLEN..XLEN of the destination register not all equal to bit XLEN-1, it is implementation-defined whether bits MXLEN..XLEN of the destination register are unchanged or are overwritten with copies of bit XLEN-1.

NOTE: This definition allows implementations to elide register writeback for some HINTs, while allowing them to execute other HINTs in the same manner as other integer computational instructions. The implementation choice is observable only by privilege modes with an XLEN setting greater than the current XLEN; it is invisible to the current privilege mode.

===== Memory Privilege in `mstatus` Register

The MPRV (Modify PRiVilege) bit modifies the _effective privilege mode_, i.e., the privilege level at which loads and stores execute. When MPRV=0, loads and stores behave as normal, using the translation and protection mechanisms of the current privilege mode. When MPRV=1, load and store memory addresses are translated and protected, and endianness is applied, as though the current privilege mode were set to MPP. Instruction address-translation and protection are unaffected by the setting of MPRV. MPRV is read-only 0 if U-mode is not supported.

An MRET or SRET instruction that changes the privilege mode to a mode less privileged than M also sets MPRV=0.

The MXR (Make eXecutable Readable) bit modifies the privilege with which loads access virtual memory. When MXR=0, only loads from pages marked readable (R=1 in <>) will succeed. When MXR=1, loads from pages marked either readable or executable (R=1 or X=1) will succeed. MXR has no effect when page-based virtual memory is not in effect. MXR is read-only 0 if S-mode is not supported.

[NOTE]
====
The MPRV and MXR mechanisms were conceived to improve the efficiency of M-mode routines that emulate missing hardware features, e.g., misaligned loads and stores. MPRV obviates the need to perform address translation in software. MXR allows instruction words to be loaded from pages marked execute-only.

The current privilege mode and the privilege mode specified by MPP might have different XLEN settings. When MPRV=1, load and store memory addresses are treated as though the current XLEN were set to MPP’s XLEN, following the rules in <<xlen-control>>.
====

The SUM (permit Supervisor User Memory access) bit modifies the privilege with which S-mode loads and stores access virtual memory. When SUM=0, S-mode memory accesses to pages that are accessible by U-mode (U=1 in <>) will fault. When SUM=1, these accesses are permitted. SUM has no effect when page-based virtual memory is not in effect. Note that, while SUM is ordinarily ignored when not executing in S-mode, it _is_ in effect when MPRV=1 and MPP=S. SUM is read-only 0 if S-mode is not supported or if `satp`.MODE is read-only 0.

The MXR and SUM mechanisms only affect the interpretation of permissions encoded in page-table entries. In particular, they have no impact on whether access-fault exceptions are raised due to PMAs or PMP.
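A minimal sketch of the emulation pattern MPRV enables (M-mode code, assuming interrupts are disabled and `mstatus`.MPP already holds the trapped mode; MPRV is `mstatus` bit 17):

.Sketch: an M-mode load performed with the trapped mode's translation via MPRV
....
    li    t0, 0x20000      # mstatus.MPRV (bit 17)
    csrs  mstatus, t0      # subsequent loads/stores use MPP's translation
    lw    t1, 0(a0)        # load from the virtual address in a0
    csrc  mstatus, t0      # restore normal M-mode accesses
....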
===== Endianness Control in `mstatus` and `mstatush` Registers

The MBE, SBE, and UBE bits in `mstatus` and `mstatush` are *WARL* fields that control the endianness of memory accesses other than instruction fetches. Instruction fetches are always little-endian.

MBE controls whether non-instruction-fetch memory accesses made from M-mode (assuming `mstatus`.MPRV=0) are little-endian (MBE=0) or big-endian (MBE=1).

If S-mode is not supported, SBE is read-only 0. Otherwise, SBE controls whether explicit load and store memory accesses made from S-mode are little-endian (SBE=0) or big-endian (SBE=1).

If U-mode is not supported, UBE is read-only 0. Otherwise, UBE controls whether explicit load and store memory accesses made from U-mode are little-endian (UBE=0) or big-endian (UBE=1).

For _implicit_ accesses to supervisor-level memory management data structures, such as page tables, endianness is always controlled by SBE. Since changing SBE alters the implementation’s interpretation of these data structures, if any such data structures remain in use across a change to SBE, M-mode software must follow such a change to SBE by executing an SFENCE.VMA instruction with _rs1_=`x0` and _rs2_=`x0`.

[NOTE]
====
Only in contrived scenarios will a given memory-management data structure be interpreted as both little-endian and big-endian. In practice, SBE will only be changed at runtime on world switches, in which case neither the old nor new memory-management data structure will be reinterpreted in a different endianness.
====
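A sketch of the rule above for RV32, assuming `mstatush`.SBE is bit 4 as shown in the `mstatush` register figure:

.Sketch: switching S-mode to big-endian data accesses on RV32
....
    li    t0, 0x10         # mstatush.SBE (bit 4)
    csrs  mstatush, t0     # S-mode loads/stores become big-endian
    sfence.vma x0, x0      # required if page tables remain in use across the change
....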
==== Machine Environment Configuration (`menvcfg`) Register

When Ssdbltrp is not implemented, the `sstatus`.SDT, `vsstatus`.SDT, and `henvcfg`.DTE bits are read-only zero.

When XLEN=32, `menvcfgh` is a 32-bit read/write register that aliases bits 63:32 of `menvcfg`. The `menvcfgh` register does not exist when XLEN=64.

If U-mode is not supported, then registers `menvcfg` and `menvcfgh` do not exist.

==== Machine Security Configuration (`mseccfg`) Register

`mseccfg` is an optional 64-bit read/write register, formatted as shown in <<mseccfg>>, that controls security features.

[[mseccfg]]
.Machine security configuration (`mseccfg`) register.
include::images/wavedrom/mseccfg.edn[]

The definitions of the SSEED and USEED fields are furnished by the entropy-source extension, Zkr. The definitions of the RLB, MMWP, and MML fields are furnished by the PMP-enhancement extension, Smepmp. The definition of the PMM field is furnished by the Smmpm extension.

The Zicfilp extension adds the `MLPE` field in `mseccfg`. When the `MLPE` field is 1, the Zicfilp extension is enabled in M-mode. When the `MLPE` field is 0, the Zicfilp extension is not enabled in M-mode, and the following rules apply to M-mode:

* The hart does not update the `ELP` state; it remains as `NO_LP_EXPECTED`.
* The `LPAD` instruction operates as a no-op.

When XLEN=32 only, `mseccfgh` is a 32-bit read/write register that aliases bits 63:32 of `mseccfg`. Register `mseccfgh` does not exist when XLEN=64.
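For example, a sketch of turning on the Smepmp lockdown controls; the field positions assumed here (MML at bit 0, MMWP at bit 1) come from the Smepmp specification, not from this document:

.Sketch: setting Smepmp machine-mode lockdown bits in `mseccfg`
....
    csrsi mseccfg, 0x3     # set MML (bit 0) and MMWP (bit 1); Smepmp makes both sticky
    csrr  t0, mseccfg      # read back to confirm which bits are implemented
....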
=== Machine-Level Memory-Mapped Registers

==== Machine Timer (`mtime` and `mtimecmp`) Registers

Platforms provide a real-time counter, exposed as a memory-mapped machine-mode read-write register, `mtime`. `mtime` must increment at constant frequency, and the platform must provide a mechanism for determining the period of an `mtime` tick. The `mtime` register will wrap around if the count overflows. The `mtime` register has 64-bit precision on all RV32 and RV64 systems.

Platforms provide a 64-bit memory-mapped machine-mode timer compare register (`mtimecmp`). A machine timer interrupt becomes pending whenever `mtime` contains a value greater than or equal to `mtimecmp`, treating the values as unsigned integers. The interrupt remains posted until `mtimecmp` becomes greater than `mtime` (typically as a result of writing `mtimecmp`). The interrupt will only be taken if interrupts are enabled and the MTIE bit is set in the `mie` register.

.Machine time register (memory-mapped control register).
include::images/bytefield/mtime.edn[]

.Machine time compare register (memory-mapped control register).
include::images/bytefield/mtimecmp.edn[]

[NOTE]
====
The timer facility is defined to use wall-clock time rather than a cycle counter to support modern processors that run with a highly variable clock frequency to save energy through dynamic voltage and frequency scaling.

Accurate real-time clocks (RTCs) are relatively expensive to provide (requiring a crystal or MEMS oscillator) and have to run even when the rest of the system is powered down, so there is usually only one in a system, located in a different frequency/voltage domain from the processors. Hence, the RTC must be shared by all the harts in a system, and accesses to the RTC will potentially incur the penalty of a voltage-level-shifter and clock-domain crossing. It is thus more natural to expose `mtime` as a memory-mapped register than as a CSR.

Lower privilege levels do not have their own `timecmp` registers. Instead, machine-mode software can implement any number of virtual timers on a hart by multiplexing the next timer interrupt into the `mtimecmp` register.

Simple fixed-frequency systems can use a single clock for both cycle counting and wall-clock time.
====

If the result of the comparison between `mtime` and `mtimecmp` changes, it is guaranteed to be reflected in MTIP eventually, but not necessarily immediately.

[NOTE]
====
A spurious timer interrupt might occur if an interrupt handler increments `mtimecmp` then immediately returns, because MTIP might not yet have fallen in the interim. All software should be written to assume this event is possible, but most software should assume this event is extremely unlikely. It is almost always more performant to incur an occasional spurious timer interrupt than to poll MTIP until it falls.
====

In RV32, memory-mapped writes to `mtimecmp` modify only one 32-bit part of the register. For RV64, naturally aligned 64-bit memory accesses to the `mtime` and `mtimecmp` registers are additionally supported and are atomic. The following RV32 code sequence sets a 64-bit `mtimecmp` value without spuriously generating a timer interrupt due to the intermediate value of the comparand:

.Sample code for setting the 64-bit time comparand in RV32, assuming a little-endian memory system and that the registers live in a strongly ordered I/O region. Storing -1 to the low-order bits of `mtimecmp` prevents `mtimecmp` from temporarily becoming smaller than the lesser of the old and new values.
....
# New comparand is in a1:a0.
li t0, -1
la t1, mtimecmp
sw t0, 0(t1)    # No smaller than old value.
sw a1, 4(t1)    # No smaller than new value.
sw a0, 0(t1)    # New value.
....

The `time` CSR is a read-only shadow of the memory-mapped `mtime` register. When XLEN=32, the `timeh` CSR is a read-only shadow of the upper 32 bits of the memory-mapped `mtime` register, while `time` shadows only the lower 32 bits of `mtime`.

When `mtime` changes, it is guaranteed to be reflected in `time` and `timeh` eventually, but not necessarily immediately.

=== Machine-Mode Privileged Instructions

==== Environment Call and Breakpoint

include::images/wavedrom/mm-env-call.edn[]

The ECALL instruction is used to make a request to the supporting execution environment. When executed in U-mode, S-mode, or M-mode, it generates an environment-call-from-U-mode exception, environment-call-from-S-mode exception, or environment-call-from-M-mode exception, respectively, and performs no other operation.

[NOTE]
====
ECALL generates a different exception for each originating privilege mode so that environment call exceptions can be selectively delegated. A typical use case for Unix-like operating systems is to delegate to S-mode the environment-call-from-U-mode exception but not the others.
====

The EBREAK instruction is used by debuggers to cause control to be transferred back to a debugging environment. Unless overridden by an external debug environment, EBREAK raises a breakpoint exception and performs no other operation.

[NOTE]
====
As described in the "C" Standard Extension for Compressed Instructions in Volume I of this manual, the C.EBREAK instruction performs the same operation as the EBREAK instruction.
====

ECALL and EBREAK cause the receiving privilege mode’s `epc` register to be set to the address of the ECALL or EBREAK instruction itself, _not_ the address of the following instruction. As ECALL and EBREAK cause synchronous exceptions, they are not considered to retire, and should not increment the `minstret` CSR.

[[otherpriv]]
==== Trap-Return Instructions

Instructions to return from trap are encoded under the PRIV minor opcode.

include::images/wavedrom/trap-return.edn[]

To return after handling a trap, there are separate trap return instructions per privilege level, MRET and SRET. MRET is always provided. SRET must be provided if supervisor mode is supported, and should raise an illegal-instruction exception otherwise. SRET should also raise an illegal-instruction exception when TSR=1 in `mstatus`, as described in <>. An __x__RET instruction can be executed in privilege mode _x_ or higher, where executing a lower-privilege __x__RET instruction will pop the relevant lower-privilege interrupt enable and privilege mode stack. Attempting to execute an __x__RET instruction in a mode less privileged than _x_ will raise an illegal-instruction exception. In addition to manipulating the privilege stack as described in <<privstack>>, __x__RET sets the `pc` to the value stored in the `__x__epc` register.

If the A extension is supported, the __x__RET instruction is allowed to clear any outstanding LR address reservation, but is not required to. Trap handlers should explicitly clear the reservation if required (e.g., by using a dummy SC) before executing the __x__RET.

[NOTE]
====
If __x__RET instructions always cleared LR reservations, it would be impossible to single-step through LR/SC sequences using a debugger.
====
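A sketch of the dummy-SC idiom, assuming the A extension is supported; `scratch` is an illustrative M-mode-writable word, not a name defined by this specification:

.Sketch: clearing an LR reservation before a trap return
....
    la    t0, scratch
    sc.w  x0, x0, (t0)   # dummy SC: invalidates any reservation; outcome is ignored
    mret
....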
[[wfi]]
==== Wait for Interrupt

The Wait for Interrupt instruction (WFI) informs the implementation that the current hart can be stalled until an interrupt might need servicing. Execution of the WFI instruction can also be used to inform the hardware platform that suitable interrupts should preferentially be routed to this hart. WFI is available in all privileged modes, and optionally available to U-mode. This instruction may raise an illegal-instruction exception when TW=1 in `mstatus`, as described in <>.

include::images/wavedrom/wfi.edn[]

If an enabled interrupt is present or later becomes present while the hart is stalled, the interrupt trap will be taken on the following instruction, i.e., execution resumes in the trap handler and `mepc` = `pc` + 4.

[NOTE]
====
The following instruction takes the interrupt trap so that a simple return from the trap handler will execute code after the WFI instruction.
====

Implementations are permitted to resume execution for any reason, even if an enabled interrupt has not become pending. Hence, a legal implementation is to simply implement the WFI instruction as a NOP.

[NOTE]
====
If the implementation does not stall the hart on execution of the instruction, then the interrupt will be taken on some instruction in the idle loop containing the WFI, and on a simple return from the handler, the idle loop will resume execution.
====

The WFI instruction can also be executed when interrupts are disabled. The operation of WFI must be unaffected by the global interrupt bits in `mstatus` (MIE and SIE) and the delegation register `mideleg` (i.e., the hart must resume if a locally enabled interrupt becomes pending, even if it has been delegated to a less-privileged mode), but should honor the individual interrupt enables (e.g., MTIE) (i.e., implementations should avoid resuming the hart if the interrupt is pending but not individually enabled). WFI is also required to resume execution for locally enabled interrupts pending at any privilege level, regardless of the global interrupt enable at each privilege level.

If the event that causes the hart to resume execution does not cause an interrupt to be taken, execution will resume at `pc` + 4, and software must determine what action to take, including looping back to repeat the WFI if there was no actionable event, as sketched at the end of this section.

[NOTE]
====
By allowing wakeup when interrupts are disabled, an alternate entry point to an interrupt handler can be called that does not require saving the current context, as the current context can be saved or discarded before the WFI is executed.

As implementations are free to implement WFI as a NOP, software must explicitly check for any relevant pending but disabled interrupts in the code following a WFI, and should loop back to the WFI if no suitable interrupt was detected. The `mip` or `sip` registers can be interrogated to determine the presence of any interrupt in machine or supervisor mode respectively.

The operation of WFI is unaffected by the delegation register settings.

WFI is defined so that an implementation can trap into a higher privilege mode, either immediately on encountering the WFI or after some interval, to initiate a machine-mode transition to a lower power state, for example.

***

The same "wait-for-event" template might be used for possible future extensions that wait on memory locations changing, or message arrival.
====
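A minimal sketch of that idle loop for M-mode, with global interrupts disabled so the wakeup is handled inline; `t1` holds an illustrative mask of awaited `mip` bits, and the labels are illustrative:

.Sketch: a WFI idle loop that tolerates spurious wakeups and NOP implementations
....
    csrci mstatus, 0x8     # clear mstatus.MIE (bit 3): wake without trapping
idle:
    csrr  t0, mip
    and   t0, t0, t1       # any awaited interrupt pending?
    bnez  t0, service
    wfi                    # may resume for any reason, or be a NOP
    j     idle
....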
==== Custom SYSTEM Instructions

The subspace of the SYSTEM major opcode shown in <<customsys>> is designated for custom use. It is recommended that these instructions use bits 29:28 to designate the minimum required privilege mode, as do other SYSTEM instructions.

[[customsys]]
.SYSTEM instruction encodings designated for custom use.
include::images/bytefield/cust-sys-instr.edn[]

[[reset]]
=== Reset

Upon reset, a hart’s privilege mode is set to M. The `mstatus` fields MIE and MPRV are reset to 0. If little-endian memory accesses are supported, the `mstatus`/`mstatush` field MBE is reset to 0. The `misa` register is reset to enable the maximal set of supported extensions, as described in <<misa>>. For implementations with the "A" standard extension, there is no valid load reservation. The `pc` is set to an implementation-defined reset vector. The `mcause` register is set to a value indicating the cause of the reset. Writable PMP registers’ A and L fields are set to 0, unless the platform mandates a different reset value for some PMP registers’ A and L fields. If the hypervisor extension is implemented, the `hgatp`.MODE and `vsatp`.MODE fields are reset to 0. If the Smrnmi extension is implemented, the `mnstatus`.NMIE field is reset to 0. If the Zicfilp extension is implemented, the `mseccfg`.MLPE field is reset to 0. No *WARL* field contains an illegal value. All other hart state is UNSPECIFIED.

The `mcause` values after reset have implementation-specific interpretation, but the value 0 should be returned on implementations that do not distinguish different reset conditions. Implementations that distinguish different reset conditions should only use 0 to indicate the most complete reset.

[NOTE]
====
Some designs may have multiple causes of reset (e.g., power-on reset, external hard reset, brownout detected, watchdog timer elapse, sleep-mode wakeup), which machine-mode software and debuggers may wish to distinguish.

`mcause` reset values may alias `mcause` values following synchronous exceptions. There should be no ambiguity in this overlap, since on reset the `pc` is typically set to a different value than on other traps.
====

[[nmi]]
=== Non-Maskable Interrupts

Non-maskable interrupts (NMIs) are only used for hardware error conditions, and cause an immediate jump to an implementation-defined NMI vector running in M-mode regardless of the state of a hart’s interrupt enable bits. The `mepc` register is written with the virtual address of the instruction that was interrupted, and `mcause` is set to a value indicating the source of the NMI. The NMI can thus overwrite state in an active machine-mode interrupt handler.

The values written to `mcause` on an NMI are implementation-defined. The high Interrupt bit of `mcause` should be set to indicate that this was an interrupt. An Exception Code of 0 is reserved to mean "unknown cause", and implementations that do not distinguish sources of NMIs via the `mcause` register should return 0 in the Exception Code.

Unlike resets, NMIs do not reset processor state, enabling diagnosis, reporting, and possible containment of the hardware error.

[[pma]]
=== Physical Memory Attributes

The physical memory map for a complete system includes various address ranges, some corresponding to memory regions and some to memory-mapped control registers, portions of which might not be accessible. Some memory regions might not support reads, writes, or execution; some might not support subword or subblock accesses; some might not support atomic operations; and some might not support cache coherence or might have different memory models.
[[pma]]
=== Physical Memory Attributes

The physical memory map for a complete system includes various address ranges, some corresponding to memory regions and some to memory-mapped control registers, portions of which might not be accessible. Some memory regions might not support reads, writes, or execution; some might not support subword or subblock accesses; some might not support atomic operations; and some might not support cache coherence or might have different memory models. Similarly, memory-mapped control registers vary in their supported access widths, support for atomic operations, and whether read and write accesses have associated side effects. In RISC-V systems, these properties and capabilities of each region of the machine's physical address space are termed _physical memory attributes_ (PMAs). This section describes RISC-V PMA terminology and how RISC-V systems implement and check PMAs.

PMAs are inherent properties of the underlying hardware and rarely change during system operation. Unlike physical memory protection values described in <>, PMAs do not vary by execution context. The PMAs of some memory regions are fixed at chip design time (for example, for an on-chip ROM). Others are fixed at board design time, depending, for example, on which other chips are connected to off-chip buses. Off-chip buses might also support devices that could be changed on every power cycle (cold pluggable) or dynamically while the system is running (hot pluggable). Some devices might be configurable at run time to support different uses that imply different PMAs; for example, an on-chip scratchpad RAM might be cached privately by one core in one end-application, or accessed as a shared non-cached memory in another end-application.

Most systems will require that at least some PMAs are dynamically checked in hardware later in the execution pipeline after the physical address is known, as some operations will not be supported at all physical memory addresses, and some operations require knowing the current setting of a configurable PMA attribute. While many other architectures specify some PMAs in the virtual memory page tables and use the TLB to inform the pipeline of these properties, this approach injects platform-specific information into a virtualized layer and can cause system errors unless attributes are correctly initialized in each page-table entry for each physical memory region. In addition, the available page sizes might not be optimal for specifying attributes in the physical memory space, leading to address-space fragmentation and inefficient use of expensive TLB entries.

For RISC-V, we separate out specification and checking of PMAs into a separate hardware structure, the _PMA checker_. In many cases, the attributes are known at system design time for each physical address region, and can be hardwired into the PMA checker. Where the attributes are run-time configurable, platform-specific memory-mapped control registers can be provided to specify these attributes at a granularity appropriate to each region on the platform (e.g., for an on-chip SRAM that can be flexibly divided between cacheable and uncacheable uses).

PMAs are checked for any access to physical memory, including accesses that have undergone virtual-to-physical memory translation. To aid in system debugging, we strongly recommend that, where possible, RISC-V processors precisely trap physical memory accesses that fail PMA checks. Precisely trapped PMA violations manifest as instruction, load, or store access-fault exceptions, distinct from virtual-memory page-fault exceptions. Precise PMA traps might not always be possible, for example, when probing a legacy bus architecture that uses access failures as part of the discovery mechanism. In this case, error responses from peripheral devices will be reported as imprecise bus-error interrupts.
PMAs must also be readable by software to correctly access certain devices or to correctly configure other hardware components that access memory, such as DMA engines. As PMAs are tightly tied to a given physical platform’s organization, many details are inherently platform-specific, as is the means by which software can learn the PMA values for a platform. Some devices, particularly legacy buses, do not support discovery of PMAs and so will give error responses or time out if an unsupported access is attempted. Typically, platform-specific machine-mode code will extract PMAs and ultimately present this information to higher-level less-privileged software using some standard representation.

Where platforms support dynamic reconfiguration of PMAs, an interface will be provided to set the attributes by passing requests to a machine-mode driver that can correctly reconfigure the platform. For example, switching cacheability attributes on some memory regions might involve platform-specific operations, such as cache flushes, that are available only to machine mode.

==== Main Memory versus I/O Regions

The most important characterization of a given memory address range is whether it holds regular main memory or I/O devices. Regular main memory is required to have a number of properties, specified below, whereas I/O devices can have a much broader range of attributes. Memory regions that do not fit into regular main memory, for example, device scratchpad RAMs, are categorized as I/O regions.

NOTE: What previous versions of this specification termed _vacant_ regions are no longer a distinct category; they are now described as I/O regions that are not accessible (i.e., lacking read, write, and execute permissions). Main memory regions that are not accessible are also allowed.

==== Supported Access Type PMAs

Access types specify which access widths, from 8-bit byte to long multi-word burst, are supported, and also whether misaligned accesses are supported for each access width.

[NOTE]
====
Although software running on a RISC-V hart cannot directly generate bursts to memory, software might have to program DMA engines to access I/O devices and might therefore need to know which access sizes are supported.
====

Main memory regions always support read and write of all access widths required by the attached devices, and can specify whether instruction fetch is supported.

[NOTE]
====
Some platforms might mandate that all of main memory support instruction fetch. Other platforms might prohibit instruction fetch from some main memory regions.

In some cases, the design of a processor or device accessing main memory might support other widths, but must be able to function with the types supported by the main memory.
====

I/O regions can specify which combinations of read, write, or execute accesses to which data widths are supported.

For systems with page-based virtual memory, I/O and memory regions can specify which combinations of hardware page-table reads and hardware page-table writes are supported.

[NOTE]
====
Unix-like operating systems generally require that all of cacheable main memory supports page-table walks.
====

==== Atomicity PMAs

Atomicity PMAs describe which atomic instructions are supported in an address region. Support for atomic instructions is divided into two categories: _LR/SC_ and _AMOs_.

[NOTE]
====
Some platforms might mandate that all of cacheable main memory support all atomic operations required by the attached processors.
====
===== AMO PMA

Within AMOs, there are four levels of support: _AMONone_, _AMOSwap_, _AMOLogical_, and _AMOArithmetic_. AMONone indicates that no AMO operations are supported. AMOSwap indicates that only `amoswap` instructions are supported in this address range. AMOLogical indicates that swap instructions plus all the logical AMOs (`amoand`, `amoor`, `amoxor`) are supported. AMOArithmetic indicates that all RISC-V AMOs are supported. For each level of support, naturally aligned AMOs of a given width are supported if the underlying memory region supports reads and writes of that width. Main memory and I/O regions may only support a subset or none of the processor-supported atomic operations.

.Classes of AMOs supported by I/O regions.
[%autowidth,float="center",align="center",cols="<,<",options="header"]
|===
|AMO Class |Supported Operations

|AMONone +
AMOSwap +
AMOLogical +
AMOArithmetic
|_None_ +
`amoswap` +
above + `amoand`, `amoor`, `amoxor` +
above + `amoadd`, `amomin`, `amomax`, `amominu`, `amomaxu`
|===

[NOTE]
====
We recommend providing at least AMOLogical support for I/O regions where possible.
====

===== Reservability PMA

For _LR/SC_, there are three levels of support indicating combinations of the reservability and eventuality properties: _RsrvNone_, _RsrvNonEventual_, and _RsrvEventual_. RsrvNone indicates that no LR/SC operations are supported (the location is non-reservable). RsrvNonEventual indicates that the operations are supported (the location is reservable), but without the eventual success guarantee described in the unprivileged ISA specification. RsrvEventual indicates that the operations are supported and provide the eventual success guarantee.

[NOTE]
====
We recommend providing RsrvEventual support for main memory regions where possible. Most I/O regions will not support LR/SC accesses, as these are most conveniently built on top of a cache-coherence scheme, but some may support RsrvNonEventual or RsrvEventual.

When LR/SC is used for memory locations marked RsrvNonEventual, software should provide alternative fall-back mechanisms used when lack of progress is detected.
====
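For example, software that must run correctly on regions marked RsrvNonEventual might bound its LR/SC retries and fall back to an AMO-based lock. This is a sketch only; the retry budget and the assumption that the region supports at least AMOSwap are illustrative.

[source,asm]
----
# Acquire a spinlock at the address in a0 (0 = free, 1 = held).
acquire:
    li      t1, 64              # bounded retry budget (illustrative tuning value)
lr_loop:
    lr.w.aq t0, (a0)            # load-reserve the lock word
    bnez    t0, retry           # lock currently held
    li      t0, 1
    sc.w    t0, t0, (a0)        # attempt to claim the lock
    beqz    t0, acquired        # store-conditional succeeded
retry:
    addi    t1, t1, -1
    bnez    t1, lr_loop
    # No eventual-success guarantee observed: fall back to an AMO-based lock.
amo_fallback:
    li      t0, 1
    amoswap.w.aq t0, t0, (a0)   # returns previous value; 0 means the lock was free
    bnez    t0, amo_fallback
acquired:
----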
==== Misaligned Atomicity Granule PMA

The misaligned atomicity granule PMA provides constrained support for misaligned AMOs. This PMA, if present, specifies the size of a _misaligned atomicity granule_, a naturally aligned power-of-two number of bytes. Specific supported values for this PMA are represented by MAG__NN__; e.g., MAG16 indicates the misaligned atomicity granule is at least 16 bytes.

The misaligned atomicity granule PMA applies only to AMOs, loads and stores defined in the base ISAs, and loads and stores of no more than XLEN bits defined in the F, D, and Q extensions. For an instruction in that set, if all accessed bytes lie within the same misaligned atomicity granule, the instruction will not raise an exception for reasons of address alignment, and the instruction will give rise to only one memory operation for the purposes of RVWMO, i.e., it will execute atomically.

If a misaligned AMO accesses a region that does not specify a misaligned atomicity granule PMA, or if not all accessed bytes lie within the same misaligned atomicity granule, then an exception is raised. For regular loads and stores that access such a region, or for which not all accessed bytes lie within the same granule, either an exception is raised or the access proceeds but is not guaranteed to be atomic. Implementations may raise access-fault exceptions instead of address-misaligned exceptions for some misaligned accesses, indicating the instruction should not be emulated by a trap handler.

NOTE: LR/SC instructions are unaffected by this PMA and so always raise an exception when misaligned. Vector memory accesses are also unaffected, so might execute non-atomically even when contained within a misaligned atomicity granule. Implicit accesses are similarly unaffected by this PMA.

==== Memory-Ordering PMAs

Regions of the address space are classified as either _main memory_ or _I/O_ for the purposes of ordering by the FENCE instruction and atomic-instruction ordering bits.

Accesses by one hart to main memory regions are observable not only by other harts but also by other devices with the capability to initiate requests in the main memory system (e.g., DMA engines). Coherent main memory regions always have either the RVWMO or RVTSO memory model. Incoherent main memory regions have an implementation-defined memory model.

Accesses by one hart to an I/O region are observable not only by other harts and bus mastering devices but also by the targeted I/O devices, and I/O regions may be accessed with either _relaxed_ or _strong_ ordering. Accesses to an I/O region with relaxed ordering are generally observed by other harts and bus mastering devices in a manner similar to the ordering of accesses to an RVWMO memory region, as discussed in the I/O Ordering section in the RVWMO Explanatory Material appendix of Volume I of this specification. By contrast, accesses to an I/O region with strong ordering are generally observed by other harts and bus mastering devices in program order.

Each strongly ordered I/O region specifies a numbered ordering channel, which is a mechanism by which ordering guarantees can be provided between different I/O regions. Channel 0 is used to indicate point-to-point strong ordering only, where only accesses by the hart to the single associated I/O region are strongly ordered. Channel 1 is used to provide global strong ordering across all I/O regions. Any accesses by a hart to any I/O region associated with channel 1 can only be observed to have occurred in program order by all other harts and I/O devices, including relative to accesses made by that hart to relaxed I/O regions or strongly ordered I/O regions with different channel numbers. In other words, any access to a region in channel 1 is equivalent to executing a `fence io,io` instruction before and after the instruction. Larger channel numbers provide program ordering to accesses by that hart across any regions with the same channel number.

Systems might support dynamic configuration of ordering properties on each memory region.

[NOTE]
====
Strong ordering can be used to improve compatibility with legacy device driver code, or to enable increased performance compared to insertion of explicit ordering instructions when the implementation is known to not reorder accesses.

Local strong ordering (channel 0) is the default form of strong ordering as it is often straightforward to provide if there is only a single in-order communication path between the hart and the I/O device.

Generally, different strongly ordered I/O regions can share the same ordering channel without additional ordering hardware if they share the same interconnect path and the path does not reorder requests.
====
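For instance, a driver using a relaxed-ordering I/O region can impose any ordering it needs with explicit fences, as in the following sketch. The device register addresses are hypothetical.

[source,asm]
----
    li      t0, 0x10000000      # assumed MMIO base of a relaxed-ordering I/O region
    li      t1, 1
    sw      t1, 0(t0)           # program a command register
    fence   io, io              # ensure the command is observed before the doorbell
    sw      t1, 4(t0)           # ring the doorbell register
----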
==== Coherence and Cacheability PMAs

Coherence is a property defined for a single physical address, and indicates that writes to that address by one agent will eventually be made visible to other coherent agents in the system. Coherence is not to be confused with the memory consistency model of a system, which defines what values a memory read can return given the previous history of reads and writes to the entire memory system. In RISC-V platforms, the use of hardware-incoherent regions is discouraged due to software complexity, performance, and energy impacts.

The cacheability of a memory region should not affect the software view of the region except for differences reflected in other PMAs, such as main memory versus I/O classification, memory ordering, supported accesses and atomic operations, and coherence. For this reason, we treat cacheability as a platform-level setting managed by machine-mode software only. Where a platform supports configurable cacheability settings for a memory region, a platform-specific machine-mode routine will change the settings and flush caches if necessary, so the system is only incoherent during the transition between cacheability settings. This transitory state should not be visible to lower privilege levels.

[NOTE]
====
Coherence is straightforward to provide for a shared memory region that is not cached by any agent. The PMA for such a region would simply indicate it should not be cached in a private or shared cache.

Coherence is also straightforward for read-only regions, which can be safely cached by multiple agents without requiring a cache-coherence scheme. The PMA for this region would indicate that it can be cached, but that writes are not supported.

Some read-write regions might only be accessed by a single agent, in which case they can be cached privately by that agent without requiring a coherence scheme. The PMA for such regions would indicate they can be cached. The data can also be cached in a shared cache, as other agents should not access the region.

If an agent can cache a read-write region that is accessible by other agents, whether caching or non-caching, a cache-coherence scheme is required to avoid use of stale values. In regions lacking hardware cache coherence (hardware-incoherent regions), cache coherence can be implemented entirely in software, but software coherence schemes are notoriously difficult to implement correctly and often have severe performance impacts due to the need for conservative software-directed cache-flushing. Hardware cache-coherence schemes require more complex hardware and can impact performance due to the cache-coherence probes, but are otherwise invisible to software.

For each hardware cache-coherent region, the PMA would indicate that the region is coherent and which hardware coherence controller to use if the system has multiple coherence controllers. For some systems, the coherence controller might be an outer-level shared cache, which might itself access further outer-level cache-coherence controllers hierarchically.

Most memory regions within a platform will be coherent to software, because they will be fixed as either uncached, read-only, hardware cache-coherent, or only accessed by one agent.
====

If a PMA indicates non-cacheability, then accesses to that region must be satisfied by the memory itself, not by any caches.
[NOTE]
====
For implementations with a cacheability-control mechanism, the situation may arise that a program uncacheably accesses a memory location that is currently cache-resident. In this situation, the cached copy must be ignored. This constraint is necessary to prevent more-privileged modes’ speculative cache refills from affecting the behavior of less-privileged modes’ uncacheable accesses.
====

==== Idempotency PMAs

Idempotency PMAs describe whether reads and writes to an address region are idempotent. Main memory regions are assumed to be idempotent. For I/O regions, idempotency on reads and writes can be specified separately (e.g., reads are idempotent but writes are not). If accesses are non-idempotent, i.e., there is potentially a side effect on any read or write access, then speculative or redundant accesses must be avoided. For the purposes of defining the idempotency PMAs, changes in observed memory ordering created by redundant accesses are not considered a side effect.

[NOTE]
====
While hardware should always be designed to avoid speculative or redundant accesses to memory regions marked as non-idempotent, it is also necessary to ensure software or compiler optimizations do not generate spurious accesses to non-idempotent memory regions.

Non-idempotent regions might not support misaligned accesses. Misaligned accesses to such regions should raise access-fault exceptions rather than address-misaligned exceptions, indicating that software should not emulate the misaligned access using multiple smaller accesses, which could cause unexpected side effects.
====

For non-idempotent regions, implicit reads and writes must not be performed early or speculatively, with the following exceptions. When a non-speculative implicit read is performed, an implementation is permitted to additionally read any of the bytes within a naturally aligned power-of-2 region containing the address of the non-speculative implicit read. Furthermore, when a non-speculative instruction fetch is performed, an implementation is permitted to additionally read any of the bytes within the _next_ naturally aligned power-of-2 region of the same size (with the address of the region taken modulo 2^XLEN^). The results of these additional reads may be used to satisfy subsequent early or speculative implicit reads. The size of these naturally aligned power-of-2 regions is implementation-defined, but, for systems with page-based virtual memory, must not exceed the smallest supported page size.

[[pmp]]
=== Physical Memory Protection

To support secure processing and contain faults, it is desirable to limit the physical addresses accessible by software running on a hart. An optional physical memory protection (PMP) unit provides per-hart machine-mode control registers to allow physical memory access privileges (read, write, execute) to be specified for each physical memory region. The PMP values are checked in parallel with the PMA checks described in <>.

The granularity of PMP access control settings is platform-specific, but the standard PMP encoding supports regions as small as four bytes. Certain regions’ privileges can be hardwired; for example, some regions might be visible only in machine mode and in no lower-privilege mode.

[NOTE]
====
Platforms vary widely in demands for physical memory protection, and some platforms may provide other PMP structures in addition to or instead of the scheme described in this section.
====
PMP checks are applied to all accesses whose effective privilege mode is S or U, including instruction fetches and data accesses in S and U mode, and data accesses in M-mode when the MPRV bit in `mstatus` is set and the MPP field in `mstatus` contains S or U. PMP checks are also applied to page-table accesses for virtual-address translation, for which the effective privilege mode is S. Optionally, PMP checks may additionally apply to M-mode accesses, in which case the PMP registers themselves are locked, so that even M-mode software cannot change them until the hart is reset. In effect, PMP can _grant_ permissions to S and U modes, which by default have none, and can _revoke_ permissions from M-mode, which by default has full permissions.

PMP violations are always trapped precisely at the processor.

==== Physical Memory Protection CSRs

PMP entries are described by an 8-bit configuration register and one MXLEN-bit address register. Some PMP settings additionally use the address register associated with the preceding PMP entry. Up to 64 PMP entries are supported. Implementations may implement zero, 16, or 64 PMP entries; the lowest-numbered PMP entries must be implemented first. All PMP CSR fields are *WARL* and may be read-only zero. PMP CSRs are accessible only to M-mode.

The PMP configuration registers are densely packed into CSRs to minimize context-switch time. For RV32, sixteen CSRs, `pmpcfg0`–`pmpcfg15`, hold the configurations `pmp0cfg`–`pmp63cfg` for the 64 PMP entries, as shown in <>. For RV64, eight even-numbered CSRs, `pmpcfg0`, `pmpcfg2`, …, `pmpcfg14`, hold the configurations for the 64 PMP entries, as shown in <>. For RV64, the odd-numbered configuration registers, `pmpcfg1`, `pmpcfg3`, …, `pmpcfg15`, are illegal.

[NOTE]
====
RV64 harts use `pmpcfg2`, rather than `pmpcfg1`, to hold configurations for PMP entries 8–15. This design reduces the cost of supporting multiple MXLEN values, since the configurations for PMP entries 8–11 appear in `pmpcfg2`[31:0] for both RV32 and RV64.
====

[[pmpcfg-rv32]]
.RV32 PMP configuration CSR layout.
include::images/bytefield/pmp-rv32.edn[]

[[pmpcfg-rv64]]
.RV64 PMP configuration CSR layout.
include::images/bytefield/pmp-rv64.edn[]

The PMP address registers are CSRs named `pmpaddr0`–`pmpaddr63`. Each PMP address register encodes bits 33–2 of a 34-bit physical address for RV32, as shown in <>. For RV64, each PMP address register encodes bits 55–2 of a 56-bit physical address, as shown in <>. Not all physical address bits may be implemented, and so the `pmpaddr` registers are *WARL*.

[NOTE]
====
The Sv32 page-based virtual-memory scheme described in <> supports 34-bit physical addresses for RV32, so the PMP scheme must support addresses wider than XLEN for RV32. The Sv39 and Sv48 page-based virtual-memory schemes described in <> and <> support a 56-bit physical address space, so the RV64 PMP address registers impose the same limit.
====

[[pmpaddr-rv32]]
.PMP address register format, RV32.
include::images/bytefield/pmpaddr-rv32.edn[]

[[pmpaddr-rv64]]
.PMP address register format, RV64.
include::images/bytefield/pmpaddr-rv64.edn[]

<> shows the layout of a PMP configuration register. The R, W, and X bits, when set, indicate that the PMP entry permits read, write, and instruction execution, respectively. When one of these bits is clear, the corresponding access type is denied. The R, W, and X fields form a collective *WARL* field for which the combinations with R=0 and W=1 are reserved. The remaining two fields, A and L, are described in the following sections.

[[pmpcfg]]
.PMP configuration register format.
include::images/bytefield/pmpcfg.edn[]
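As an illustration of this packed layout, M-mode code on RV64 might extract the configuration byte for PMP entry 5, which resides in bits 47:40 of `pmpcfg0`, as in the following sketch:

[source,asm]
----
    csrr  t0, pmpcfg0           # configurations for PMP entries 0-7 on RV64
    srli  t0, t0, 40            # move pmp5cfg (bits 47:40) down to bits 7:0
    andi  t0, t0, 0xff          # t0 = pmp5cfg
----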
Attempting to fetch an instruction from a PMP region that does not have execute permissions raises an instruction access-fault exception. Attempting to execute a load or load-reserved instruction which accesses a physical address within a PMP region without read permissions raises a load access-fault exception. Attempting to execute a store, store-conditional, or AMO instruction which accesses a physical address within a PMP region without write permissions raises a store access-fault exception.

===== Address Matching

The A field in a PMP entry's configuration register encodes the address-matching mode of the associated PMP address register. The encoding of this field is shown in <>. When A=0, this PMP entry is disabled and matches no addresses. Two other address-matching modes are supported: naturally aligned power-of-2 regions (NAPOT), including the special case of naturally aligned four-byte regions (NA4); and the top boundary of an arbitrary range (TOR). These modes support four-byte granularity.

[[pmpcfg-a]]
.Encoding of A field in PMP configuration registers.
[%autowidth,float="center",align="center",cols=">,^,<",options="header"]
|===
|A |Name |Description

|0 +
1 +
2 +
3
|OFF +
TOR +
NA4 +
NAPOT
|Null region (disabled) +
Top of range +
Naturally aligned four-byte region +
Naturally aligned power-of-two region, ≥8 bytes
|===

NAPOT ranges make use of the low-order bits of the associated address register to encode the size of the range, as shown in <>.

[[pmpcfg-napot]]
.NAPOT range encoding in PMP address and configuration registers.
[%autowidth,float="center",align="center",cols="^,^,<",options="header"]
|===
|`pmpaddr` |`pmpcfg`.A |Match type and size

|`yyyy...yyyy` +
`yyyy...yyy0` +
`yyyy...yy01` +
`yyyy...y011` +
... +
`yy01...1111` +
`y011...1111` +
`0111...1111` +
`1111...1111`
|NA4 +
NAPOT +
NAPOT +
NAPOT +
... +
NAPOT +
NAPOT +
NAPOT +
NAPOT
|4-byte NAPOT range +
8-byte NAPOT range +
16-byte NAPOT range +
32-byte NAPOT range +
... +
2^XLEN^-byte NAPOT range +
2^XLEN+1^-byte NAPOT range +
2^XLEN+2^-byte NAPOT range +
2^XLEN+3^-byte NAPOT range
|===

If TOR is selected, the associated address register forms the top of the address range, and the preceding PMP address register forms the bottom of the address range. If PMP entry __i__'s A field is set to TOR, the entry matches any address _y_ such that `pmpaddr~i-1~` ≤ _y_ < `pmpaddr~i~` (irrespective of the value of `pmpcfg~i-1~`). If PMP entry 0's A field is set to TOR, zero is used for the lower bound, and so it matches any address _y_ < `pmpaddr~0~`.
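As a worked example of the NAPOT encoding, the following sketch grants read, write, and execute permissions to a 4 KiB region based at physical address 0x80000000 using PMP entry 0. Here `pmpaddr0` = (0x80000000 >> 2) | 0x1FF: nine trailing one bits below a zero bit encode a 2^12^-byte range, per the table above. The example assumes the other entries packed into `pmpcfg0` are intentionally left OFF, since the whole CSR is overwritten.

[source,asm]
----
    li    t0, 0x200001FF        # (0x80000000 >> 2) | 0x1FF : 4 KiB NAPOT range
    csrw  pmpaddr0, t0
    li    t1, 0x1F              # R|W|X (0x7) with A=NAPOT (3 << 3)
    csrw  pmpcfg0, t1           # clobbers the other packed entries (assumed OFF)
----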
==== Physical Memory Protection and Paging

When paging is enabled, instructions that access virtual memory may result in multiple physical-memory accesses, including implicit references to the page tables. The PMP checks apply to all of these accesses. The effective privilege mode for implicit page-table accesses is S.

Implementations with virtual memory are permitted to perform address translations speculatively and earlier than required by an explicit memory access, and are permitted to cache them in address translation cache structures, including possibly caching the identity mappings from effective address to physical address used in Bare translation modes and M-mode. The PMP settings for the resulting physical address may be checked (and possibly cached) at any point between the address translation and the explicit memory access. Hence, when the PMP settings are modified, M-mode software must synchronize the PMP settings with the virtual memory system and any PMP or address-translation caches. This is accomplished by executing an SFENCE.VMA instruction with _rs1_=`x0` and _rs2_=`x0`, after the PMP CSRs are written. See <> for additional synchronization requirements when the hypervisor extension is implemented.

If page-based virtual memory is not implemented, memory accesses check the PMP settings synchronously, so no SFENCE.VMA is needed.
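A minimal update sequence might therefore look like the following sketch, with the new address and configuration values assumed to have been computed earlier:

[source,asm]
----
    csrw  pmpaddr1, t0          # new bounds for PMP entry 1
    csrw  pmpcfg0, t1           # updated packed configuration (RV64 layout)
    sfence.vma x0, x0           # synchronize PMP changes with any cached translations
----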