[[supervisor]]
== Supervisor-Level ISA, Version 1.12
This chapter describes the RISC-V supervisor-level architecture, which
contains a common core that is used with various supervisor-level
address translation and protection schemes.
[NOTE]
====
Supervisor mode is deliberately restricted in terms of interactions with
underlying physical hardware, such as physical memory and device
interrupts, to support clean virtualization. In this spirit, certain
supervisor-level facilities, including requests for timer and
interprocessor interrupts, are provided by implementation-specific
mechanisms. In some systems, a supervisor execution environment (SEE)
provides these facilities in a manner specified by a supervisor binary
interface (SBI). Other systems supply these facilities directly, through
some other implementation-defined mechanism.
====
=== Supervisor CSRs
A number of CSRs are provided for the supervisor.
[NOTE]
====
The supervisor should only view CSR state that should be visible to a
supervisor-level operating system. In particular, there is no
information about the existence (or non-existence) of higher privilege
levels (machine level or other) visible in the CSRs accessible by the
supervisor.
Many supervisor CSRs are a subset of the equivalent machine-mode CSR,
and the machine-mode chapter should be read first to help understand the
supervisor-level CSR descriptions.
====
[[sstatus]]
==== Supervisor Status Register (`sstatus`)
The `sstatus` register is an SXLEN-bit read/write register formatted as
shown in <<sstatusreg-rv32>> when SXLEN=32
and <<sstatusreg>> when SXLEN=64. The `sstatus`
register keeps track of the processor's current operating state.
[[sstatusreg-rv32]]
.Supervisor-mode status register (`sstatus`) when SXLEN=32.
include::images/bytefield/sstatus32-1.edn[]
include::images/bytefield/sstatus32-2.edn[]
[[sstatusreg]]
.Supervisor-mode status register (`sstatus`) when SXLEN=64.
include::images/bytefield/sstatus64.edn[]
The SPP bit indicates the privilege level at which a hart was executing
before entering supervisor mode. When a trap is taken, SPP is set to 0
if the trap originated from user mode, or 1 otherwise. When an SRET
instruction (see <<otherpriv>>) is executed to
return from the trap handler, the privilege level is set to user mode if
the SPP bit is 0, or supervisor mode if the SPP bit is 1; SPP is then
set to 0.
The SIE bit enables or disables all interrupts in supervisor mode. When
SIE is clear, interrupts are not taken while in supervisor mode. When
the hart is running in user-mode, the value in SIE is ignored, and
supervisor-level interrupts are enabled. The supervisor can disable
individual interrupt sources using the `sie` CSR.
The SPIE bit indicates whether supervisor interrupts were enabled prior
to trapping into supervisor mode. When a trap is taken into supervisor
mode, SPIE is set to SIE, and SIE is set to 0. When an SRET instruction
is executed, SIE is set to SPIE, then SPIE is set to 1.
The `sstatus` register is a subset of the `mstatus` register.
[NOTE]
====
In a straightforward implementation, reading or writing any field in
`sstatus` is equivalent to reading or writing the homonymous field in
`mstatus`.
====
===== Base ISA Control in `sstatus` Register
The UXL field controls the value of XLEN for U-mode, termed _UXLEN_,
which may differ from the value of XLEN for S-mode, termed _SXLEN_. The
encoding of UXL is the same as that of the MXL field of `misa`, shown in
<<misabase>>.
When SXLEN=32, the UXL field does not exist, and UXLEN=32. When
SXLEN=64, it is a *WARL* field that encodes the current value of UXLEN. In
particular, an implementation may make UXL be a read-only field whose
value always ensures that UXLEN=SXLEN.
If UXLEN≠SXLEN, instructions executed in the narrower
mode must ignore source register operand bits above the configured XLEN,
and must sign-extend results to fill the widest supported XLEN in the
destination register.
If UXLEN latexmath:[$<$] SXLEN, user-mode instruction-fetch addresses
and load and store effective addresses are taken modulo
latexmath:[$2^{\text{UXLEN}}$]. For example, when UXLEN=32 and SXLEN=64,
user-mode memory accesses reference the lowest 4 GiB of the address space.
[[sum]]
===== Memory Privilege in `sstatus` Register
The MXR (Make eXecutable Readable) bit modifies the privilege with which
loads access virtual memory. When MXR=0, only loads from pages marked
readable (R=1 in <<sv32pte>>) will succeed. When
MXR=1, loads from pages marked either readable or executable (R=1 or
X=1) will succeed. MXR has no effect when page-based virtual memory is
not in effect.
The SUM (permit Supervisor User Memory access) bit modifies the
privilege with which S-mode loads and stores access virtual memory. When
SUM=0, S-mode memory accesses to pages that are accessible by U-mode
(U=1 in <<sv32pte>>) will fault. When SUM=1, these
accesses are permitted. SUM has no effect when page-based virtual memory
is not in effect, nor when executing in U-mode. Note that S-mode can
never execute instructions from user pages, regardless of the state of
SUM.
SUM is read-only 0 if `satp`.MODE is read-only 0.
[NOTE]
====
The SUM mechanism prevents supervisor software from inadvertently
accessing user memory. Operating systems can execute the majority of
code with SUM clear; the few code segments that should access user
memory can temporarily set SUM.
The SUM mechanism does not avail S-mode software of permission to
execute instructions in user code pages. Legitimate use cases for
execution from user memory in supervisor context are rare in general and
nonexistent in POSIX environments. However, bugs in supervisors that
lead to arbitrary code execution are much easier to exploit if the
supervisor exploit code can be stored in a user buffer at a virtual
address chosen by an attacker.
Some non-POSIX single address space operating systems do allow certain
privileged software to partially execute in supervisor mode, while most
programs run in user mode, all in a shared address space. This use case
can be realized by mapping the physical code pages at multiple virtual
addresses with different permissions, possibly with the assistance of
the instruction page-fault handler to direct supervisor software to use
the alternate mapping.
====
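[NOTE]
====
The bracketing pattern described above can be sketched as follows in
RISC-V assembly (non-normative; it assumes `a0` holds a user virtual
address that has already been range-checked):

[source,asm]
----
.equ SSTATUS_SUM, (1 << 18)    # SUM is bit 18 of sstatus

copy_byte_from_user:
    li   t0, SSTATUS_SUM
    csrs sstatus, t0           # permit S-mode reads of U=1 pages
    lbu  a0, 0(a0)             # the actual user-memory access
    csrc sstatus, t0           # revoke the permission immediately
    ret
----
====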
===== Endianness Control in `sstatus` Register
The UBE bit is a *WARL* field that controls the endianness of explicit memory
accesses made from U-mode, which may differ from the endianness of
memory accesses in S-mode. An implementation may make UBE be a read-only
field that always specifies the same endianness as for S-mode.
UBE controls whether explicit load and store memory accesses made from
U-mode are little-endian (UBE=0) or big-endian (UBE=1).
UBE has no effect on instruction fetches, which are _implicit_ memory
accesses that are always little-endian.
For _implicit_ accesses to supervisor-level memory management data
structures, such as page tables, S-mode endianness always applies and
UBE is ignored.
[NOTE]
====
Standard RISC-V ABIs are expected to be purely little-endian-only or
big-endian-only, with no accommodation for mixing endianness.
Nevertheless, endianness control has been defined so as to permit an OS
of one endianness to execute user-mode programs of the opposite
endianness.
====
==== Supervisor Trap Vector Base Address Register (`stvec`)
The `stvec` register is an SXLEN-bit read/write register that holds trap
vector configuration, consisting of a vector base address (BASE) and a
vector mode (MODE).
.Supervisor trap vector base address register (`stvec`).
include::images/bytefield/stvec.edn[]
The BASE field in `stvec` is a *WARL* field that can hold any valid virtual or
physical address, subject to the following alignment constraints: the
address must be 4-byte aligned, and MODE settings other than Direct
might impose additional alignment constraints on the value in the BASE
field.
[[stvec-mode]]
.Encoding of `stvec` MODE field.
[%autowidth,float="center",align="center",cols=">,^,<",options="header",]
|===
|Value |Name |Description
|0 +
1 +
≥2
|Direct +
Vectored +
-
|All exceptions set `pc` to BASE. +
Asynchronous interrupts set `pc` to BASE+4×cause. +
_Reserved_
|===
The encoding of the MODE field is shown in
<<stvec-mode>>. When MODE=Direct, all traps into
supervisor mode cause the `pc` to be set to the address in the BASE
field. When MODE=Vectored, all synchronous exceptions into supervisor
mode cause the `pc` to be set to the address in the BASE field, whereas
interrupts cause the `pc` to be set to the address in the BASE field
plus four times the interrupt cause number. For example, a
supervisor-mode timer interrupt (see <<scauses>>)
causes the `pc` to be set to BASE+`0x14`. Setting MODE=Vectored may
impose a stricter alignment constraint on BASE.
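[NOTE]
====
For example (non-normative), a supervisor could install a vectored
handler table as follows; `trap_vectors` is a hypothetical symbol whose
alignment must satisfy the implementation's vectored-mode constraint:

[source,asm]
----
    la   t0, trap_vectors    # BASE: 4-byte aligned at minimum
    ori  t0, t0, 1           # MODE=1 (Vectored) in the low bits
    csrw stvec, t0
----
====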
==== Supervisor Interrupt Registers (`sip` and `sie`)
The `sip` register is an SXLEN-bit read/write register containing
information on pending interrupts, while `sie` is the corresponding
SXLEN-bit read/write register containing interrupt enable bits.
Interrupt cause number _i_ (as reported in CSR `scause`,
<<scause>>) corresponds with bit _i_ in both `sip` and
`sie`. Bits 15:0 are allocated to standard interrupt causes only, while
bits 16 and above are designated for platform or custom use.
.Supervisor interrupt-pending register (`sip`).
include::images/bytefield/sip.edn[]
.Supervisor interrupt-enable register (`sie`).
include::images/bytefield/sie.edn[]
An interrupt _i_ will trap to S-mode if both of the following are true:
(a) either the current privilege mode is S and the SIE bit in the
`sstatus` register is set, or the current privilege mode has less
privilege than S-mode; and (b) bit _i_ is set in both `sip` and `sie`.
These conditions for an interrupt trap to occur must be evaluated in a
bounded amount of time from when an interrupt becomes, or ceases to be,
pending in `sip`, and must also be evaluated immediately following the
execution of an SRET instruction or an explicit write to a CSR on which
these interrupt trap conditions expressly depend (including `sip`, `sie`
and `sstatus`).
Interrupts to S-mode take priority over any interrupts to lower
privilege modes.
Each individual bit in register `sip` may be writable or may be
read-only. When bit _i_ in `sip` is writable, a pending interrupt _i_
can be cleared by writing 0 to this bit. If interrupt _i_ can become
pending but bit _i_ in `sip` is read-only, the implementation must
provide some other mechanism for clearing the pending interrupt (which
may involve a call to the execution environment).
A bit in `sie` must be writable if the corresponding interrupt can ever
become pending. Bits of `sie` that are not writable are read-only zero.
The standard portions (bits 15:0) of registers `sip` and `sie` are
formatted as shown in Figures <<sipreg-standard>>
and <<siereg-standard>> respectively.
[[sipreg-standard]]
.Standard portion (bits 15:0) of `sip`.
include::images/bytefield/sipreg-standard.edn[]
[[siereg-standard]]
.Standard portion (bits 15:0) of `sie`.
include::images/bytefield/siereg-standard.edn[]
Bits `sip`.SEIP and `sie`.SEIE are the interrupt-pending and
interrupt-enable bits for supervisor-level external interrupts. If
implemented, SEIP is read-only in `sip`, and is set and cleared by the
execution environment, typically through a platform-specific interrupt
controller.
Bits `sip`.STIP and `sie`.STIE are the interrupt-pending and
interrupt-enable bits for supervisor-level timer interrupts. If
implemented, STIP is read-only in `sip`, and is set and cleared by the
execution environment.
Bits `sip`.SSIP and `sie`.SSIE are the interrupt-pending and
interrupt-enable bits for supervisor-level software interrupts. If
implemented, SSIP is writable in `sip` and may also be set to 1 by a
platform-specific interrupt controller.
[NOTE]
====
Interprocessor interrupts are sent to other harts by
implementation-specific means, which will ultimately cause the SSIP bit
to be set in the recipient hart’s `sip` register.
====
Each standard interrupt type (SEI, STI, or SSI) may not be implemented,
in which case the corresponding interrupt-pending and interrupt-enable
bits are read-only zeros. All bits in `sip` and `sie` are *WARL* fields. The
implemented interrupts may be found by writing one to every bit location
in `sie`, then reading back to see which bit positions hold a one.
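[NOTE]
====
A sketch of this discovery procedure (non-normative; it assumes the
probe runs before supervisor interrupts are enabled):

[source,asm]
----
    li   t0, -1
    csrw sie, t0         # attempt to set every enable bit
    csrr t1, sie         # 1s remain only at implemented positions
    csrw sie, x0         # disable all supervisor interrupts again
----
====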
[NOTE]
====
The `sip` and `sie` registers are subsets of the `mip` and `mie`
registers. Reading any implemented field, or writing any writable field,
of `sip`/`sie` effects a read or write of the homonymous field of
`mip`/`mie`.
Bits 3, 7, and 11 of `sip` and `sie` correspond to the machine-mode
software, timer, and external interrupts, respectively. Since most
platforms will choose not to make these interrupts delegatable from
M-mode to S-mode, they are shown as 0 in
<<sipreg-standard>> and <<siereg-standard>>.
====
Multiple simultaneous interrupts destined for supervisor mode are
handled in the following decreasing priority order: SEI, SSI, STI.
==== Supervisor Timers and Performance Counters
Supervisor software uses the same hardware performance monitoring
facility as user-mode software, including the `time`, `cycle`, and
`instret` CSRs. The implementation should provide a mechanism to modify
the counter values.
The implementation must provide a facility for scheduling timer
interrupts in terms of the real-time counter, `time`.
==== Counter-Enable Register (`scounteren`)
.Counter-enable register (`scounteren`)
include::images/bytefield/scounteren.edn[]
The counter-enable register `scounteren` is a 32-bit register that
controls the availability of the hardware performance monitoring
counters to U-mode.
When the CY, TM, IR, or HPM__n__ bit in the `scounteren` register is
clear, attempts to read the `cycle`, `time`, `instret`, or
`hpmcounter`__n__ register while executing in U-mode will cause an illegal instruction
exception. When one of these bits is set, access to the corresponding
register is permitted.
`scounteren` must be implemented. However, any of the bits may be
read-only zero, indicating reads to the corresponding counter will cause
an exception when executing in U-mode. Hence, they are effectively
*WARL* fields.
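[NOTE]
====
For example (non-normative), a supervisor that wishes to expose
`cycle`, `time`, and `instret` to U-mode, but none of the
`hpmcounter`s, could write:

[source,asm]
----
    li   t0, 0x7             # CY (bit 0) | TM (bit 1) | IR (bit 2)
    csrw scounteren, t0      # bits that are read-only zero stay zero
----
====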
[NOTE]
====
The setting of a bit in `mcounteren` does not affect whether the
corresponding bit in `scounteren` is writable. However, U-mode may only
access a counter if the corresponding bits in `scounteren` and
`mcounteren` are both set.
====
==== Supervisor Scratch Register (`sscratch`)
The `sscratch` register is an SXLEN-bit read/write register, dedicated
for use by the supervisor. Typically, `sscratch` is used to hold a
pointer to the hart-local supervisor context while the hart is executing
user code. At the beginning of a trap handler, `sscratch` is swapped
with a user register to provide an initial working register.
.Supervisor Scratch Register
include::images/bytefield/sscratch.edn[]
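[NOTE]
====
A typical trap-handler prologue using this convention might look as
follows (a non-normative RV64 sketch; the layout of the hart-local
context block is a hypothetical choice):

[source,asm]
----
trap_entry:
    csrrw sp, sscratch, sp   # sp <- &context, sscratch <- user sp
    sd    t0, 0(sp)          # free up a working register
    csrr  t0, sscratch
    sd    t0, 8(sp)          # save the user sp into the context block
----
====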
==== Supervisor Exception Program Counter (`sepc`)
`sepc` is an SXLEN-bit read/write register formatted as shown in
<<epcreg>>. The low bit of `sepc` (`sepc[0]`) is always zero. On implementations that support only IALIGN=32, the two low bits (`sepc[1:0]`) are always zero.
If an implementation allows IALIGN to be either 16 or 32 (by changing
CSR `misa`, for example), then, whenever IALIGN=32, bit `sepc[1]` is
masked on reads so that it appears to be 0. This masking occurs also for
the implicit read by the SRET instruction. Though masked, `sepc[1]`
remains writable when IALIGN=32.
`sepc` is a *WARL* register that must be able to hold all valid virtual
addresses. It need not be capable of holding all possible invalid
addresses. Prior to writing `sepc`, implementations may convert an
invalid address into some other invalid address that `sepc` is capable
of holding.
When a trap is taken into S-mode, `sepc` is written with the virtual
address of the instruction that was interrupted or that encountered the
exception. Otherwise, `sepc` is never written by the implementation,
though it may be explicitly written by software.
[[epcreg]]
.Supervisor exception program counter register.
include::images/bytefield/epcreg.edn[]
[[scause]]
==== Supervisor Cause Register (`scause`)
The `scause` register is an SXLEN-bit read-write register formatted as
shown in <<scausereg>>. When a trap is taken into
S-mode, `scause` is written with a code indicating the event that
caused the trap. Otherwise, `scause` is never written by the
implementation, though it may be explicitly written by software.
The Interrupt bit in the `scause` register is set if the trap was caused
by an interrupt. The Exception Code field contains a code identifying
the last exception or interrupt. <<scauses>> lists
the possible exception codes for the current supervisor ISAs. The
Exception Code is a *WLRL* field. It is required to hold the values 0–31
(i.e., bits 4–0 must be implemented), but otherwise it is only
guaranteed to hold supported exception codes.
[[scausereg]]
.Supervisor Cause register `scause`.
include::images/bytefield/scausereg.edn[]
[[scauses]]
.Supervisor cause register (`scause`) values after trap. Synchronous exception priorities are given by <<exception-priority>>.
[%autowidth,float="center",align="center",cols=">,>,3",options="header"]
|===
|Interrupt |Exception Code |Description
|1 +
1 +
1 +
1 +
1 +
1 +
1 +
1
|0 +
1 +
2-4 +
5 +
6-8 +
9 +
10-15 +
≥16
|_Reserved_ +
Supervisor software interrupt +
_Reserved_ +
Supervisor timer interrupt +
_Reserved_ +
Supervisor external interrupt +
_Reserved_ +
_Designated for platform use_
|0 +
0 +
0 +
0 +
0 +
0 +
0 +
0 +
0 +
0 +
0 +
0 +
0 +
0 +
0 +
0 +
0 +
0 +
0 +
0
|0 +
1 +
2 +
3 +
4 +
5 +
6 +
7 +
8 +
9 +
10-11 +
12 +
13 +
14 +
15 +
16-23 +
24-31 +
32-47 +
48-63 +
≥64
|Instruction address misaligned +
Instruction access fault +
Illegal instruction +
Breakpoint +
Load address misaligned +
Load access fault +
Store/AMO address misaligned +
Store/AMO access fault +
Environment call from U-mode +
Environment call from S-mode +
_Reserved_ +
Instruction page fault +
Load page fault +
_Reserved_ +
Store/AMO page fault +
_Reserved_ +
_Designated for custom use_ +
_Reserved_ +
_Designated for custom use_ +
_Reserved_
|===
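[NOTE]
====
Because the Interrupt bit occupies the most-significant bit of
`scause`, a handler can dispatch with a sign test followed by compares
on the Exception Code. A non-normative sketch (the `handle_*` labels
are hypothetical):

[source,asm]
----
    csrr t0, scause
    bltz t0, handle_interrupt   # MSB set: asynchronous interrupt
    li   t1, 12
    beq  t0, t1, handle_ipf     # code 12: instruction page fault
    li   t1, 13
    beq  t0, t1, handle_lpf     # code 13: load page fault
    j    handle_other
----
====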
==== Supervisor Trap Value (`stval`) Register
The `stval` register is an SXLEN-bit read-write register formatted as
shown in <<stvalreg>>. When a trap is taken into
S-mode, `stval` is written with exception-specific information to assist
software in handling the trap. Otherwise, `stval` is never written by
the implementation, though it may be explicitly written by software. The
hardware platform will specify which exceptions must set `stval`
informatively and which may unconditionally set it to zero.
If `stval` is written with a nonzero value when a breakpoint,
address-misaligned, access-fault, or page-fault exception occurs on an
instruction fetch, load, or store, then `stval` will contain the
faulting virtual address.
[[stvalreg]]
.Supervisor Trap Value register.
include::images/bytefield/stvalreg.edn[]
If `stval` is written with a nonzero value when a misaligned load or
store causes an access-fault or page-fault exception, then `stval` will
contain the virtual address of the portion of the access that caused the
fault.
If `stval` is written with a nonzero value when an instruction
access-fault or page-fault exception occurs on a system with
variable-length instructions, then `stval` will contain the virtual
address of the portion of the instruction that caused the fault, while
`sepc` will point to the beginning of the instruction.
The `stval` register can optionally also be used to return the faulting
instruction bits on an illegal instruction exception (`sepc` points to
the faulting instruction in memory). If `stval` is written with a
nonzero value when an illegal-instruction exception occurs, then `stval`
will contain the shortest of:
* the actual faulting instruction
* the first ILEN bits of the faulting instruction
* the first SXLEN bits of the faulting instruction
The value loaded into `stval` on an illegal-instruction exception is
right-justified and all unused upper bits are cleared to zero.
For other traps, `stval` is set to zero, but a future standard may
redefine `stval`’s setting for other traps.
`stval` is a *WARL* register that must be able to hold all valid virtual
addresses and the value 0. It need not be capable of holding all
possible invalid addresses. Prior to writing `stval`, implementations
may convert an invalid address into some other invalid address that
`stval` is capable of holding. If the feature to return the faulting
instruction bits is implemented, `stval` must also be able to hold all
values less than latexmath:[$2^N$], where latexmath:[$N$] is the smaller
of SXLEN and ILEN.
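[NOTE]
====
Since returning the instruction bits is optional, an
illegal-instruction handler must tolerate `stval`=0. A non-normative
sketch; `fetch_insn` is a hypothetical routine that reloads the
instruction via `sepc`:

[source,asm]
----
handle_illegal:
    csrr a0, stval        # faulting instruction bits, or zero
    bnez a0, have_insn    # nonzero: the hardware supplied them
    csrr a0, sepc
    jal  fetch_insn       # fallback: read the instruction from memory
have_insn:
    j    emulate          # hypothetical: decode and emulate/signal
----
====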
==== Supervisor Environment Configuration Register (`senvcfg`)
The `senvcfg` CSR is an SXLEN-bit read/write register, formatted as
shown in <<senvcfg>>, that controls certain
characteristics of the U-mode execution environment.
[[senvcfg]]
.Supervisor environment configuration register (`senvcfg`).
include::images/bytefield/senvcfg.edn[]
If bit FIOM (Fence of I/O implies Memory) is set to one in `senvcfg`,
FENCE instructions executed in U-mode are modified so the requirement to
order accesses to device I/O implies also the requirement to order main
memory accesses. <<senvcfg-FIOM>> details the modified
interpretation of FENCE instruction bits PI, PO, SI, and SO in U-mode
when FIOM=1.
Similarly, for U-mode when FIOM=1, if an atomic instruction that
accesses a region ordered as device I/O has its _aq_ and/or _rl_ bit
set, then that instruction is ordered as though it accesses both device
I/O and memory.
If `satp`.MODE is read-only zero (always Bare), the implementation may
make FIOM read-only zero.
[[senvcfg-FIOM]]
.Modified interpretation of FENCE predecessor and successor sets in U-mode when FIOM=1.
[%autowidth,float="center",align="center",cols="^,<",options="header"]
|===
|Instruction bit |Meaning when set
|PI +
PO
|Predecessor device input and memory reads (PR implied) +
Predecessor device output and memory writes (PW implied)
|SI +
SO
|Successor device input and memory reads (SR implied) +
Successor device output and memory writes (SW implied)
|===
[NOTE]
====
Bit FIOM exists for a specific circumstance when an I/O device is being
emulated for U-mode and both of the following are true: (a) the emulated
device has a memory buffer that should be I/O space but is actually
mapped to main memory via address translation, and (b) multiple physical
harts are involved in accessing this emulated device from U-mode.
A hypervisor running in S-mode without the benefit of the hypervisor
extension of <<hypervisor>> may need to emulate
a device for U-mode if paravirtualization cannot be employed. If the
same hypervisor provides a virtual machine (VM) with multiple virtual
harts, mapped one-to-one to real harts, then multiple harts may
concurrently access the emulated device, perhaps because: (a) the guest
OS within the VM assigns device interrupt handling to one hart while the
device is also accessed by a different hart outside of an interrupt
handler, or (b) control of the device (or partial control) is being
migrated from one hart to another, such as for interrupt load balancing
within the VM. For such cases, guest software within the VM is expected
to properly coordinate access to the (emulated) device across multiple
harts using mutex locks and/or interprocessor interrupts as usual, which
in part entails executing I/O fences. But those I/O fences may not be
sufficient if some of the device ``I/O'' is actually main memory,
unknown to the guest. Setting FIOM=1 modifies those fences (and all
other I/O fences executed in U-mode) to include main memory, too.
Software can always avoid the need to set FIOM by never using main
memory to emulate a device memory buffer that should be I/O space.
However, this choice usually requires trapping all U-mode accesses to
the emulated buffer, which might have a noticeable impact on
performance. The alternative offered by FIOM is sufficiently inexpensive
to implement that we consider it worth supporting even if only rarely
enabled.
====
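[NOTE]
====
FIOM occupies bit 0 of `senvcfg`, so enabling this behavior is a single
CSR-set (non-normative):

[source,asm]
----
    csrsi senvcfg, 1    # set FIOM: U-mode I/O fences also order memory
----
====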
The definition of the CBZE field will be furnished by the forthcoming
Zicboz extension. Its allocation within `senvcfg` may change prior to
the ratification of that extension.
The definitions of the CBCFE and CBIE fields will be furnished by the
forthcoming Zicbom extension. Their allocations within `senvcfg` may
change prior to the ratification of that extension.
[[satp]]
==== Supervisor Address Translation and Protection (`satp`) Register
The `satp` register is an SXLEN-bit read/write register, formatted as
shown in <<rv32satp>> for SXLEN=32 and
<<rv64satp>> for SXLEN=64, which controls
supervisor-mode address translation and protection. This register holds
the physical page number (PPN) of the root page table, i.e., its
supervisor physical address divided by 4 KiB; an address space identifier
(ASID), which facilitates address-translation fences on a
per-address-space basis; and the MODE field, which selects the current
address-translation scheme. Further details on the access to this
register are described in <<virt-control>>.
[[rv32satp]]
.Supervisor address translation and protection register `satp` when SXLEN=32.
include::images/bytefield/rv32satp.edn[]
[NOTE]
====
Storing a PPN in `satp`, rather than a physical address, supports a
physical address space larger than 4 GiB for RV32.
The `satp`.PPN field might not be capable of holding all physical page
numbers. Some platform standards might place constraints on the values
`satp`.PPN may assume, e.g., by requiring that all physical page numbers
corresponding to main memory be representable.
====
[[rv64satp]]
.Supervisor address translation and protection register `satp` when SXLEN=64, for MODE values Bare, Sv39, Sv48, and Sv57.
include::images/bytefield/rv64satp.edn[]
[NOTE]
====
We store the ASID and the page table base address in the same CSR to
allow the pair to be changed atomically on a context switch. Swapping
them non-atomically could pollute the old virtual address space with new
translations, or vice-versa. This approach also slightly reduces the
cost of a context switch.
====
<<satp-mode>> shows the encodings of the MODE field when
SXLEN=32 and SXLEN=64. When MODE=Bare, supervisor virtual addresses are
equal to supervisor physical addresses, and there is no additional
memory protection beyond the physical memory protection scheme described
in <<pmp>>. To select MODE=Bare, software must write
zero to the remaining fields of `satp` (bits 30–0 when SXLEN=32, or bits
59–0 when SXLEN=64). Attempting to select MODE=Bare with a nonzero
pattern in the remaining fields has an UNSPECIFIED effect on the value that the
remaining fields assume and an UNSPECIFIED effect on address translation and
protection behavior.
When SXLEN=32, the `satp` encodings corresponding to MODE=Bare and
ASID[8:7]=3 are designated for custom use, whereas the encodings
corresponding to MODE=Bare and ASID[8:7]≠3 are reserved
for future standard use. When SXLEN=64, all `satp` encodings
corresponding to MODE=Bare are reserved for future standard use.
[NOTE]
====
Version 1.11 of this standard stated that the remaining fields in `satp`
had no effect when MODE=Bare. Making these fields reserved facilitates
future definition of additional translation and protection modes,
particularly in RV32, for which all patterns of the existing MODE field
have already been allocated.
====
When SXLEN=32, the only other valid setting for MODE is Sv32, a paged
virtual-memory scheme described in <<sv32>>.
When SXLEN=64, three paged virtual-memory schemes are defined: Sv39,
Sv48, and Sv57, described in <<sv39>>, <<sv48>>,
and <<sv57>>, respectively. One additional scheme, Sv64, will be
defined in a later version of this specification. The remaining MODE
settings are reserved for future use and may define different
interpretations of the other fields in `satp`.
Implementations are not required to support all MODE settings, and if
`satp` is written with an unsupported MODE, the entire write has no
effect; no fields in `satp` are modified.
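[NOTE]
====
This write-and-read-back behavior gives software a way to probe for
supported modes. A non-normative RV64 sketch that prefers Sv48 and
falls back to Sv39 (it assumes `a0` holds the root-table PPN, `a1` the
ASID, and that the code runs from an identity-mapped region):

[source,asm]
----
    slli t0, a1, 44          # ASID -> satp[59:44]
    or   t0, t0, a0          # root PPN -> satp[43:0]
    li   t1, 9               # MODE=9: Sv48
    slli t1, t1, 60
    or   t1, t1, t0
    csrw satp, t1
    csrr t2, satp
    beq  t1, t2, 1f          # readback matches: Sv48 is supported
    li   t1, 8               # MODE=8: Sv39 fallback
    slli t1, t1, 60
    or   t1, t1, t0
    csrw satp, t1
1:  sfence.vma               # discard any stale translations
----
====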
The number of ASID bits is UNSPECIFIED and may be zero. The number of implemented
ASID bits, termed _ASIDLEN_, may be determined by writing one to every
bit position in the ASID field, then reading back the value in `satp` to
see which bit positions in the ASID field hold a one. The
least-significant bits of ASID are implemented first: that is, if
ASIDLEN latexmath:[$>$] 0, ASID[ASIDLEN-1:0] is writable. The maximal
value of ASIDLEN, termed ASIDMAX, is 9 for Sv32 or 16 for Sv39, Sv48,
and Sv57.
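[NOTE]
====
A non-normative sketch of this probe on RV64, assuming a valid
MODE/PPN pair is already installed in `satp`:

[source,asm]
----
    csrr t0, satp
    li   t1, 0xFFFF
    slli t1, t1, 44      # all-ones pattern in the ASID field [59:44]
    or   t2, t0, t1
    csrw satp, t2
    csrr t2, satp
    srli t2, t2, 44
    li   t1, 0xFFFF
    and  a0, t2, t1      # a0 = mask of writable ASID bits
    csrw satp, t0        # restore the original satp value
    sfence.vma           # drop translations tagged with the probe ASID
----

ASIDLEN is then one more than the index of the highest set bit in `a0`.
====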
<<<
[[satp-mode]]
.Encoding of `satp` MODE field.
[%autowidth,float="center",align="center",cols="^,^,<",options="header"]
|===
3+|*SXLEN=32*
|Value |Name |Description
|0 +
1
|Bare +
Sv32
|No translation or protection. +
Page-based 32-bit virtual addressing (see <<sv32>>).
3+|*SXLEN=64*
|Value |Name |Description
|0 +
1-7 +
8 +
9 +
10 +
11 +
12-13 +
14-15
|Bare +
- +
Sv39 +
Sv48 +
Sv57 +
Sv64 +
- +
-
|No translation or protection. +
_Reserved for standard use_ +
Page-based 39-bit virtual addressing (see <<sv39>>). +
Page-based 48-bit virtual addressing (see <<sv48>>). +
Page-based 57-bit virtual addressing (see <<sv57>>). +
_Reserved for page-based 64-bit virtual addressing._ +
_Reserved for standard use_ +
_Designated for custom use_
|===
[NOTE]
====
For many applications, the choice of page size has a substantial
performance impact. A large page size increases TLB reach and loosens
the associativity constraints on virtually indexed, physically tagged
caches. At the same time, large pages exacerbate internal fragmentation,
wasting physical memory and possibly cache capacity.
After much deliberation, we have settled on a conventional page size of
4 KiB for both RV32 and RV64. We expect this decision to ease the
porting of low-level runtime software and device drivers. The TLB reach
problem is ameliorated by transparent superpage support in modern
operating systems. cite:[transparent-superpages] Additionally, multi-level TLB hierarchies are quite
inexpensive relative to the multi-level cache hierarchies whose address
space they map.
====
The `satp` register is considered _active_ when the effective privilege
mode is S-mode or U-mode. Executions of the address-translation
algorithm may only begin using a given value of `satp` when `satp` is
active.
[NOTE]
====
Translations that began while `satp` was active are not required to
complete or terminate when `satp` is no longer active, unless an
SFENCE.VMA instruction matching the address and ASID is executed. The
SFENCE.VMA instruction must be used to ensure that updates to the
address-translation data structures are observed by subsequent implicit
reads to those structures by a hart.
====
Note that writing `satp` does not imply any ordering constraints between
page-table updates and subsequent address translations, nor does it
imply any invalidation of address-translation caches. If the new address
space’s page tables have been modified, or if an ASID is reused, it may
be necessary to execute an SFENCE.VMA instruction (see
<<sfence.vma>>) after, or in some cases before, writing
`satp`.
[NOTE]
====
Not requiring implementations to flush address-translation caches upon
`satp` writes reduces the cost of context switches, provided a
sufficiently large ASID space.
====
=== Supervisor Instructions
In addition to the SRET instruction defined in <<otherpriv>>, one new supervisor-level instruction is provided.
[[sfence.vma]]
==== Supervisor Memory-Management Fence Instruction
include::images/wavedrom/sfencevma.edn[]
The supervisor memory-management fence instruction SFENCE.VMA is used to
synchronize updates to in-memory memory-management data structures with
current execution. Instruction execution causes implicit reads and
writes to these data structures; however, these implicit references are
ordinarily not ordered with respect to explicit loads and stores.
Executing an SFENCE.VMA instruction guarantees that any previous stores
already visible to the current RISC-V hart are ordered before certain
implicit references by subsequent instructions in that hart to the
memory-management data structures. The specific set of operations
ordered by SFENCE.VMA is determined by _rs1_ and _rs2_, as described
below. SFENCE.VMA is also used to invalidate entries in the
address-translation cache associated with a hart (see <<sv32algorithm>>). Further details on the behavior of this instruction are described in <<virt-control>> and <<pmp-vmem>>.
[NOTE]
====
The SFENCE.VMA is used to flush any local hardware caches related to
address translation. It is specified as a fence rather than a TLB flush
to provide cleaner semantics with respect to which instructions are
affected by the flush operation and to support a wider variety of
dynamic caching structures and memory-management schemes. SFENCE.VMA is
also used by higher privilege levels to synchronize page table writes
and the address translation hardware.
====
SFENCE.VMA orders only the local hart’s implicit references to the
memory-management data structures.
[NOTE]
====
Consequently, other harts must be notified separately when the
memory-management data structures have been modified. One approach is to
use 1) a local data fence to ensure local writes are visible globally,
then 2) an interprocessor interrupt to the other thread, then 3) a local
SFENCE.VMA in the interrupt handler of the remote thread, and finally 4)
signal back to originating thread that operation is complete. This is,
of course, the RISC-V analog to a TLB shootdown.
====
For the common case that the translation data structures have only been
modified for a single address mapping (i.e., one page or superpage),
_rs1_ can specify a virtual address within that mapping to effect a
translation fence for that mapping only. Furthermore, for the common
case that the translation data structures have only been modified for a
single address-space identifier, _rs2_ can specify the address space.
The behavior of SFENCE.VMA depends on _rs1_ and _rs2_ as follows:
* If __rs1__=`x0` and __rs2__=`x0`, the fence orders all reads and writes
made to any level of the page tables, for all address spaces. The fence
also invalidates all address-translation cache entries, for all address
spaces.
* If __rs1__=`x0` and __rs2__≠``x0``, the fence orders all
reads and writes made to any level of the page tables, but only for the
address space identified by integer register _rs2_. Accesses to _global_
mappings (see <<translation>>) are not ordered. The
fence also invalidates all address-translation cache entries matching
the address space identified by integer register _rs2_, except for
entries containing global mappings.
* If __rs1__≠``x0`` and __rs2__=`x0`, the fence orders only
reads and writes made to leaf page table entries corresponding to the
virtual address in __rs1__, for all address spaces. The fence also
invalidates all address-translation cache entries that contain leaf page
table entries corresponding to the virtual address in _rs1_, for all
address spaces.
* If __rs1__≠``x0`` and __rs2__≠``x0``, the
fence orders only reads and writes made to leaf page table entries
corresponding to the virtual address in _rs1_, for the address space
identified by integer register _rs2_. Accesses to global mappings are
not ordered. The fence also invalidates all address-translation cache
entries that contain leaf page table entries corresponding to the
virtual address in _rs1_ and that match the address space identified by
integer register _rs2_, except for entries containing global mappings.
If the value held in _rs1_ is not a valid virtual address, then the
SFENCE.VMA instruction has no effect. No exception is raised in this
case.
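[NOTE]
====
In instruction syntax, the four forms above are (non-normative; `a0`
holds a virtual address within the affected mapping and `a1` holds an
ASID):

[source,asm]
----
    sfence.vma x0, x0    # all mappings, all address spaces
    sfence.vma x0, a1    # all non-global mappings in ASID a1
    sfence.vma a0, x0    # one mapping, all address spaces
    sfence.vma a0, a1    # one non-global mapping in ASID a1
----
====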
When __rs2__≠``x0``, bits SXLEN-1:ASIDMAX of the value held
in _rs2_ are reserved for future standard use. Until their use is
defined by a standard extension, they should be zeroed by software and
ignored by current implementations. Furthermore, if
ASIDLEN<ASIDMAX, the implementation shall ignore bits
ASIDMAX-1:ASIDLEN of the value held in _rs2_.
[NOTE]
====
It is always legal to over-fence, e.g., by fencing only based on a
subset of the bits in _rs1_ and/or _rs2_, and/or by simply treating all
SFENCE.VMA instructions as having _rs1_=`x0` and/or _rs2_=`x0`. For
example, simpler implementations can ignore the virtual address in _rs1_
and the ASID value in _rs2_ and always perform a global fence. The
choice not to raise an exception when an invalid virtual address is held
in _rs1_ facilitates this type of simplification.
====
An implicit read of the memory-management data structures may return any
translation for an address that was valid at any time since the most
recent SFENCE.VMA that subsumes that address. The ordering implied by
SFENCE.VMA does not place implicit reads and writes to the
memory-management data structures into the global memory order in a way
that interacts cleanly with the standard RVWMO ordering rules. In
particular, even though an SFENCE.VMA orders prior explicit accesses
before subsequent implicit accesses, and those implicit accesses are
ordered before their associated explicit accesses, SFENCE.VMA does not
necessarily place prior explicit accesses before subsequent explicit
accesses in the global memory order. These implicit loads also need not
otherwise obey normal program order semantics with respect to prior
loads or stores to the same address.
[NOTE]
====
A consequence of this specification is that an implementation may use
any translation for an address that was valid at any time since the most
recent SFENCE.VMA that subsumes that address. In particular, if a leaf
PTE is modified but a subsuming SFENCE.VMA is not executed, either the
old translation or the new translation will be used, but the choice is
unpredictable. The behavior is otherwise well-defined.
In a conventional TLB design, it is possible for multiple entries to
match a single address if, for example, a page is upgraded to a
superpage without first clearing the original non-leaf PTE’s valid bit
and executing an SFENCE.VMA with __rs1__=`x0`. In this case, a similar
remark applies: it is unpredictable whether the old non-leaf PTE or the
new leaf PTE is used, but the behavior is otherwise well defined.
Another consequence of this specification is that it is generally unsafe
to update a PTE using a set of stores of a width less than the width of
the PTE, as it is legal for the implementation to read the PTE at any
time, including when only some of the partial stores have taken effect.
***
This specification permits the caching of PTEs whose V (Valid) bit is
clear. Operating systems must be written to cope with this possibility,
but implementers are reminded that eagerly caching invalid PTEs will
reduce performance by causing additional page faults.
====
Implementations must only perform implicit reads of the translation data
structures pointed to by the current contents of the `satp` register or
a subsequent valid (V=1) translation data structure entry, and must only
raise exceptions for implicit accesses that are generated as a result of
instruction execution, not those that are performed speculatively.
Changes to the `sstatus` fields SUM and MXR take effect immediately,
without the need to execute an SFENCE.VMA instruction. Changing
`satp`.MODE from Bare to other modes and vice versa also takes effect
immediately, without the need to execute an SFENCE.VMA instruction.
Likewise, changes to `satp`.ASID take effect immediately.
[TIP]
====
The following common situations typically require executing an
SFENCE.VMA instruction:
* When software recycles an ASID (i.e., reassociates it with a different
page table), it should _first_ change `satp` to point to the new page
table using the recycled ASID, _then_ execute SFENCE.VMA with __rs1__=`x0`
and _rs2_ set to the recycled ASID. Alternatively, software can execute
the same SFENCE.VMA instruction while a different ASID is loaded into
`satp`, provided the next time `satp` is loaded with the recycled ASID,
it is simultaneously loaded with the new page table (see the sketch after this tip).
* If the implementation does not provide ASIDs, or software chooses to
always use ASID 0, then after every `satp` write, software should
execute SFENCE.VMA with __rs1__=`x0`. In the common case that no global
translations have been modified, _rs2_ should be set to a register other
than `x0` but which contains the value zero, so that global translations
are not flushed.
* If software modifies a non-leaf PTE, it should execute SFENCE.VMA with
__rs1__=`x0`. If any PTE along the traversal path had its G bit set, _rs2_
must be `x0`; otherwise, _rs2_ should be set to the ASID for which the
translation is being modified.
* If software modifies a leaf PTE, it should execute SFENCE.VMA with
_rs1_ set to a virtual address within the page. If any PTE along the
traversal path had its G bit set, _rs2_ must be `x0`; otherwise, _rs2_
should be set to the ASID for which the translation is being modified.
* For the special cases of increasing the permissions on a leaf PTE and
changing an invalid PTE to a valid leaf, software may choose to execute
the SFENCE.VMA lazily. After modifying the PTE but before executing
SFENCE.VMA, either the new or old permissions will be used. In the
latter case, a page-fault exception might occur, at which point software
should execute SFENCE.VMA in accordance with the previous bullet point.
====
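[NOTE]
====
The ASID-recycling sequence from the first bullet might look as
follows (a non-normative RV64/Sv39 sketch; `a0` holding the new
root-table PPN and `a1` the recycled ASID are assumptions):

[source,asm]
----
    slli t0, a1, 44      # recycled ASID -> satp[59:44]
    or   t0, t0, a0      # new root PPN -> satp[43:0]
    li   t1, 8           # MODE=8 (Sv39)
    slli t1, t1, 60
    or   t0, t0, t1
    csrw satp, t0        # first: switch to the new table under the ASID
    sfence.vma x0, a1    # then: fence the recycled address space
----
====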
If a hart employs an address-translation cache, that cache must appear
to be private to that hart. In particular, the meaning of an ASID is
local to a hart; software may choose to use the same ASID to refer to
different address spaces on different harts.
[NOTE]
====
A future extension could redefine ASIDs to be global across the SEE,
enabling such options as shared translation caches and hardware support
for broadcast TLB shootdown. However, as OSes have evolved to
significantly reduce the scope of TLB shootdowns using novel
ASID-management techniques, we expect the local-ASID scheme to remain
attractive for its simplicity and possibly better scalability.
====
For implementations that make `satp`.MODE read-only zero (always Bare),
attempts to execute an SFENCE.VMA instruction might raise an illegal
instruction exception.
[[sv32]]
=== Sv32: Page-Based 32-bit Virtual-Memory Systems
When Sv32 is written to the MODE field in the `satp` register (see
<<satp>>), the supervisor operates in a 32-bit paged
virtual-memory system. In this mode, supervisor and user virtual
addresses are translated into supervisor physical addresses by
traversing a radix-tree page table. Sv32 is supported when SXLEN=32 and
is designed to include mechanisms sufficient for supporting modern
Unix-based operating systems.
[NOTE]
====
The initial RISC-V paged virtual-memory architectures have been designed
as straightforward implementations to support existing operating
systems. We have architected page table layouts to support a hardware
page-table walker. Software TLB refills are a performance bottleneck on
high-performance systems, and are especially troublesome with decoupled
specialized coprocessors. An implementation can choose to implement
software TLB refills using a machine-mode trap handler as an extension
to M-mode.
***
Some ISAs architecturally expose _virtually indexed, physically tagged_
caches, in that accesses to the same physical address via different
virtual addresses might not be coherent unless the virtual addresses lie
within the same cache set. Implicitly, this specification does not
permit such behavior to be architecturally exposed.
====
[[translation]]
==== Addressing and Memory Protection
Sv32 implementations support a 32-bit virtual address space, divided
into pages. An Sv32 virtual address is partitioned into a virtual page
number (VPN) and page offset, as shown in <<sv32va>>.
When Sv32 virtual memory mode is selected in the MODE field of the
`satp` register, supervisor virtual addresses are translated into
supervisor physical addresses via a two-level page table. The 20-bit VPN
is translated into a 22-bit physical page number (PPN), while the 12-bit
page offset is untranslated. The resulting supervisor-level physical
addresses are then checked using any physical memory protection
structures (<<pmp>>), before being directly
converted to machine-level physical addresses. If necessary,
supervisor-level physical addresses are zero-extended to the number of
physical address bits found in the implementation.
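[NOTE]
====
The arithmetic of the decomposition, for a virtual address in `a0`, is
sketched below (this illustrates the field boundaries, not the hardware
walk itself):

[source,asm]
----
    srli t0, a0, 22      # t0 = VPN[1]: va[31:22] (10 bits)
    srli t1, a0, 12
    andi t1, t1, 0x3FF   # t1 = VPN[0]: va[21:12] (10 bits)
    li   t2, 0xFFF
    and  t2, a0, t2      # t2 = page offset: va[11:0] (untranslated)
----
====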
[NOTE]
====
For example, consider an RV32 system supporting 34 bits of physical
address. When the value of `satp`.MODE is Sv32, a 34-bit physical
address is produced directly, and therefore no zero-extension is needed.
When the value of `satp`.MODE is Bare, the 32-bit virtual address is
translated (unmodified) into a 32-bit physical address, and then that
physical address is zero-extended into a 34-bit machine-level physical
address.
====
[[sv32va]]
.Sv32 virtual address.
include::images/bytefield/sv32va.edn[]
Sv32 page tables consist of 2^10^ page-table entries
(PTEs), each of four bytes. A page table is exactly the size of a page
and must always be aligned to a page boundary. The physical page number
of the root page table is stored in the `satp` register.
[[sv32pa]]
.Sv32 physical address.
include::images/bytefield/sv32pa.edn[]
[[sv32pte]]
.Sv32 page table entry.
include::images/bytefield/sv32pte.edn[]
The PTE format for Sv32 is shown in <<sv32pte>>.
The V bit indicates whether the PTE is valid; if it is 0, all other bits
in the PTE are don’t-cares and may be used freely by software. The
permission bits, R, W, and X, indicate whether the page is readable,
writable, and executable, respectively. When all three are zero, the PTE
is a pointer to the next level of the page table; otherwise, it is a
leaf PTE. Writable pages must also be marked readable; the contrary
combinations are reserved for future use. <<pteperm>>
summarizes the encoding of the permission bits.
[[pteperm]]
.Encoding of PTE R/W/X fields.
[%autowidth,float="center",align="center",cols="^,^,^,<",options="header"]
|===
|X |W |R |Meaning
|0 +
0 +
0 +
0 +
1 +
1 +
1 +
1
|0 +
0 +
1 +
1 +
0 +
0 +
1 +
1
|0 +
1 +
0 +
1 +
0 +
1 +
0 +
1
|Pointer to next level of page table. +
Read-only page. +
_Reserved for future use._ +
Read-write page. +
Execute-only page. +
Read-execute page. +
_Reserved for future use._ +
Read-write-execute page.
|===
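[NOTE]
====
A non-normative sketch of assembling a leaf PTE from these fields
(Sv32 layout: the PPN occupies PTE bits 31:10, flags occupy bits 9:0;
the choice of a supervisor read-write page with A and D preset is
illustrative):

[source,asm]
----
.equ PTE_V, (1 << 0)
.equ PTE_R, (1 << 1)
.equ PTE_W, (1 << 2)
.equ PTE_X, (1 << 3)
.equ PTE_U, (1 << 4)
.equ PTE_G, (1 << 5)
.equ PTE_A, (1 << 6)
.equ PTE_D, (1 << 7)

    # a0 = physical page number of the target page
    slli t0, a0, 10                              # PPN -> PTE[31:10]
    ori  t0, t0, PTE_V | PTE_R | PTE_W | PTE_A | PTE_D
    # t0 is now a valid supervisor read-write leaf PTE
----
====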
Attempting to fetch an instruction from a page that does not have
execute permissions raises a fetch page-fault exception. Attempting to
execute a load or load-reserved instruction whose effective address lies
within a page without read permissions raises a load page-fault
exception. Attempting to execute a store, store-conditional, or AMO
instruction whose effective address lies within a page without write
permissions raises a store page-fault exception.
[NOTE]
====
AMOs never raise load page-fault exceptions. Since any unreadable page
is also unwritable, attempting to perform an AMO on an unreadable page
always raises a store page-fault exception.
====
The U bit indicates whether the page is accessible to user mode. U-mode
software may only access the page when U=1. If the SUM bit in the
`sstatus` register is set, supervisor mode software may also access
pages with U=1. However, supervisor code normally operates with the SUM
bit clear, in which case, supervisor code will fault on accesses to
user-mode pages. Irrespective of SUM, the supervisor may not execute
code on pages with U=1.
[NOTE]
====
An alternative PTE format would support different permissions for
supervisor and user. We omitted this feature because it would be largely
redundant with the SUM mechanism (see <<sum>>) and would require more encoding space in the PTE.
====
The G bit designates a _global_ mapping. Global mappings are those that
exist in all address spaces. For non-leaf PTEs, the global setting
implies that all mappings in the subsequent levels of the page table are
global. Note that failing to mark a global mapping as global merely
reduces performance, whereas marking a non-global mapping as global is a
software bug that, after switching to an address space with a different
non-global mapping for that address range, can unpredictably result in
either mapping being used.
[NOTE]
====
Global mappings need not be stored redundantly in address-translation
caches for multiple ASIDs. Additionally, they need not be flushed from
local address-translation caches when an SFENCE.VMA instruction is
executed with __rs2__≠``x0``.
====
The RSW field is reserved for use by supervisor software; the
implementation shall ignore this field.
Each leaf PTE contains an accessed (A) and dirty (D) bit. The A bit
indicates the virtual page has been read, written, or fetched from since
the last time the A bit was cleared. The D bit indicates the virtual
page has been written since the last time the D bit was cleared.
Two schemes to manage the A and D bits are permitted:
* When a virtual page is accessed and the A bit is clear, or is written
and the D bit is clear, a page-fault exception is raised.
* When a virtual page is accessed and the A bit is clear, or is written
and the D bit is clear, the implementation sets the corresponding bit(s)
in the PTE. The PTE update must be atomic with respect to other accesses
to the PTE, and must atomically check that the PTE is valid and grants
sufficient permissions. Updates of the A bit may be performed as a
result of speculation, but updates to the D bit must be exact (i.e., not
speculative), and observed in program order by the local hart.
Furthermore, the PTE update must appear in the global memory order no
later than the explicit memory access, or any subsequent explicit memory
access to that virtual page by the local hart. The ordering on loads and
stores provided by FENCE instructions and the acquire/release bits on
atomic instructions also orders the PTE updates associated with those
loads and stores as observed by remote harts.
+
The PTE update is not required to be atomic with respect to the explicit
memory access that caused the update, and the sequence is interruptible.
However, the hart must not perform the explicit memory access before the
PTE update is globally visible.
All harts in a system must employ the same PTE-update scheme.
[NOTE]
====
Prior versions of this specification required PTE A bit updates to be
exact, but allowing the A bit to be updated as a result of speculation
simplifies the implementation of address translation prefetchers. System
software typically uses the A bit as a page replacement policy hint, but
does not require exactness for functional correctness. On the other
hand, D bit updates are still required to be exact and performed in
program order, as the D bit affects the functional correctness of page
eviction.
Implementations are of course still permitted to perform both A and D
bit updates only in an exact manner.
In both cases, requiring atomicity ensures that the PTE update will not
be interrupted by other intervening writes to the page table, as such
interruptions could lead to A/D bits being set on PTEs that have been
reused for other purposes, on memory that has been reclaimed for other
purposes, and so on. Simple implementations may instead generate
page-fault exceptions.
The A and D bits are never cleared by the implementation. If the
supervisor software does not rely on accessed and/or dirty bits, e.g., if
it does not swap memory pages to secondary storage or if the pages are
being used to map I/O space, it should always set them to 1 in the PTE
to improve performance.
====
Any level of PTE may be a leaf PTE, so in addition to 4 KiB pages, Sv32
supports 4 MiB _megapages_. A megapage must be virtually and physically
aligned to a 4 MiB boundary; a page-fault exception is raised if the
physical address is insufficiently aligned.
For non-leaf PTEs, the D, A, and U bits are reserved for future standard
use. Until their use is defined by a standard extension, they must be
cleared by software for forward compatibility.
For implementations with both page-based virtual memory and the "A"
standard extension, the LR/SC reservation set must lie completely within
a single base physical page (i.e., a naturally aligned 4 KiB physical-memory
region).
[[sv32algorithm]]
==== Virtual Address Translation Process
A virtual address _va_ is translated into a physical address _pa_ as follows:
. Let _a_ be ``satp``.__ppn__×PAGESIZE, and let __i__=LEVELS-1. (For Sv32, PAGESIZE=2^12^ and LEVELS=2.) The `satp` register must be
_active_, i.e., the effective privilege mode must be S-mode or U-mode.
. Let _pte_ be the value of the PTE at address __a__+__va__.__vpn__[__i__]×PTESIZE. (For Sv32, PTESIZE=4.) If accessing _pte_ violates a PMA or PMP check, raise an access-fault exception corresponding to the original access type.
. If _pte_._v_=0, or if _pte_._r_=0 and _pte_._w_=1, or if any bits or encodings that are reserved for future standard use are set within _pte_, stop and raise a page-fault exception corresponding to the original access type.
. Otherwise, the PTE is valid. If __pte__.__r__=1 or __pte__.__x__=1, go to step 5. Otherwise, this PTE is a pointer to the next level of the page table. Let __i=i__-1. If __i__<0, stop and raise a page-fault exception corresponding to the original access type. Otherwise, let
__a__=__pte__.__ppn__×PAGESIZE and go to step 2.
. A leaf PTE has been found. Determine if the requested memory access is
allowed by the _pte_._r_, _pte_._w_, _pte_._x_, and _pte_._u_ bits, given the current privilege mode and the value of the SUM and MXR fields of the `mstatus` register. If not, stop and raise a page-fault exception corresponding to the original access type.
. If _i_>0 and _pte_._ppn_[__i__-1:0] ≠ 0, this is a misaligned superpage; stop and raise a page-fault exception corresponding to the original access type.
. If _pte_._a_=0, or if the original memory access is a store and _pte_._d_=0, either raise a page-fault exception corresponding to the original access type, or:
* If a store to _pte_ would violate a PMA or PMP check,
raise an access-fault exception corresponding to the original access
type.
* Perform the following steps atomically:
** Compare _pte_ to the value of the PTE at address __a__+__va.vpn__[__i__]×PTESIZE.
** If the values match, set _pte_._a_ to 1 and, if the
original memory access is a store, also set _pte_._d_ to 1.
** If the comparison fails, return to step 2.
. The translation is successful. The translated physical address is
given as follows:
* _pa.pgoff_ = _va.pgoff_.
* If _i_>0, then this is a superpage translation and __pa.ppn__[__i__-1:0] = __va.vpn__[__i__-1:0].
* _pa.ppn_[LEVELS-1:__i__] = _pte_._ppn_[LEVELS-1:__i__].
All implicit accesses to the address-translation data structures in this
algorithm are performed using width PTESIZE.
[NOTE]
====
This implies, for example, that an Sv48 implementation may not use two
separate 4B reads to non-atomically access a single 8B PTE, and that A/D
bit updates performed by the implementation are treated as atomically
updating the entire PTE, rather than just the A and/or D bit alone (even
though the PTE value does not otherwise change).
====
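
For concreteness, the following non-normative C sketch models the walk for Sv32 (LEVELS=2, PTESIZE=4), using the second A/D-update scheme. The helpers `pt_read32` and `pt_cas32` are assumptions standing in for PMA/PMP-checked physical-memory accesses, and the step-5 permission check against the privilege mode and the SUM and MXR fields is elided. Sv39, Sv48, and Sv57 differ only in LEVELS, PTESIZE, and field widths.

[source,c]
----
#include <stdbool.h>
#include <stdint.h>

#define PAGESIZE 4096u
#define LEVELS   2          /* Sv32 */
#define PTESIZE  4          /* Sv32 */
#define PTE_V    (1u << 0)
#define PTE_R    (1u << 1)
#define PTE_W    (1u << 2)
#define PTE_X    (1u << 3)
#define PTE_A    (1u << 6)
#define PTE_D    (1u << 7)

typedef enum { ACC_FETCH, ACC_LOAD, ACC_STORE } access_t;

/* Hypothetical helpers: PMA/PMP-checked physical-memory read and
 * atomic compare-and-swap of a 4-byte PTE. */
uint32_t pt_read32(uint64_t pa);
bool pt_cas32(uint64_t pa, uint32_t expected, uint32_t desired);

/* Returns true and fills *pa on success; false models a page fault. */
bool sv32_translate(uint32_t satp_ppn, uint32_t va, access_t acc,
                    uint64_t *pa)
{
    uint64_t a = (uint64_t)satp_ppn * PAGESIZE;           /* step 1 */
    int i = LEVELS - 1;
    while (i >= 0) {
        uint32_t vpn = (va >> (12 + 10 * i)) & 0x3ffu;
        uint64_t pte_addr = a + (uint64_t)vpn * PTESIZE;
        uint32_t pte = pt_read32(pte_addr);               /* step 2 */

        if (!(pte & PTE_V) || (!(pte & PTE_R) && (pte & PTE_W)))
            return false;                                 /* step 3 */

        if (!(pte & (PTE_R | PTE_X))) {                   /* step 4: pointer */
            a = (uint64_t)(pte >> 10) * PAGESIZE;
            i--;
            continue;
        }

        /* Step 5 (R/W/X/U permission check) is elided in this sketch. */

        uint32_t ppn = pte >> 10;
        if (i > 0 && (ppn & 0x3ffu))                      /* step 6 */
            return false;              /* misaligned megapage */

        uint32_t need = PTE_A | (acc == ACC_STORE ? PTE_D : 0u);
        if ((pte & need) != need &&
            !pt_cas32(pte_addr, pte, pte | need))         /* step 7 */
            continue;                  /* PTE changed underfoot: redo step 2 */

        if (i > 0)                                        /* step 8: superpage */
            ppn = (ppn & ~0x3ffu) | ((va >> 12) & 0x3ffu);
        *pa = ((uint64_t)ppn << 12) | (va & 0xfffu);
        return true;
    }
    return false;                      /* i < 0 in step 4: page fault */
}
----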
The results of implicit address-translation reads in step 2 may be held
in a read-only, incoherent _address-translation cache_ but not shared
with other harts. The address-translation cache may hold an arbitrary
number of entries, including an arbitrary number of entries for the same
address and ASID. Entries in the address-translation cache may then
satisfy subsequent step 2 reads if the ASID associated with the entry
matches the ASID loaded in step 1 or if the entry is associated with a
_global_ mapping. To ensure that implicit reads observe writes to the
same memory locations, an SFENCE.VMA instruction must be executed after
the writes to flush the relevant cached translations.
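
For example, supervisor software that has just modified a leaf PTE might flush the stale translation with a wrapper like the following; this is a hypothetical illustration of the single-address, single-ASID form, not a required idiom:

[source,c]
----
#include <stdint.h>

/* Hypothetical wrapper: order a completed PTE write before subsequent
 * implicit reads of the translation for va in the given address space. */
static inline void flush_one_mapping(uintptr_t va, unsigned long asid)
{
    __asm__ volatile("sfence.vma %0, %1" : : "r"(va), "r"(asid) : "memory");
}
----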
The address-translation cache cannot be used in step 7; accessed and
dirty bits may only be updated in memory directly.
[NOTE]
====
It is permitted for multiple address-translation cache entries to
co-exist for the same address. This represents the fact that in a
conventional TLB hierarchy, it is possible for multiple entries to match
a single address if, for example, a page is upgraded to a superpage
without first clearing the original non-leaf PTE’s valid bit and
executing an SFENCE.VMA with _rs1_=`x0`, or if multiple TLBs exist in
parallel at a given level of the hierarchy. In this case, just as if an
SFENCE.VMA is not executed between a write to the memory-management
tables and subsequent implicit read of the same address: it is
unpredictable whether the old non-leaf PTE or the new leaf PTE is used,
but the behavior is otherwise well defined.
====
Implementations may also execute the address-translation algorithm
speculatively at any time, for any virtual address, as long as `satp` is
active (as defined in <<satp>>). Such speculative
executions have the effect of pre-populating the address-translation
cache.
Speculative executions of the address-translation algorithm behave as
non-speculative executions of the algorithm do, except that they must
not set the dirty bit for a PTE, they must not trigger an exception, and
they must not create address-translation cache entries if those entries
would have been invalidated by any SFENCE.VMA instruction executed by
the hart since the speculative execution of the algorithm began.
[NOTE]
====
For instance, it is illegal for both non-speculative and speculative
executions of the translation algorithm to begin, read the level 2 page
table, pause while the hart executes an SFENCE.VMA with
_rs1_=_rs2_=`x0`, then resume using the now-stale level 2 PTE, as
subsequent implicit reads could populate the address-translation cache
with stale PTEs.
In many implementations, an SFENCE.VMA instruction with _rs1_=`x0` will
therefore either terminate all previously-launched speculative
executions of the address-translation algorithm (for the specified ASID,
if applicable), or simply wait for them to complete (in which case any
address-translation cache entries created will be invalidated by the
SFENCE.VMA as appropriate). Likewise, an SFENCE.VMA instruction with
__rs1__≠``x0`` generally must either ensure that
previously-launched speculative executions of the address-translation
algorithm (for the specified ASID, if applicable) are prevented from
creating new address-translation cache entries mapping leaf PTEs, or
wait for them to complete.
A consequence of implementations being permitted to read the translation
data structures arbitrarily early and speculatively is that at any time,
all page table entries reachable by executing the algorithm may be
loaded into the address-translation cache.
Although it would be uncommon to place page tables in non-idempotent
memory, there is no explicit prohibition against doing so. Since the
algorithm may only touch page tables reachable from the root page table
indicated in `satp`, the range of addresses that an implementation's
page table walker will touch is fully under supervisor control.
***
The algorithm does not admit the possibility of ignoring high-order PPN
bits for implementations with narrower physical addresses.
====
[[sv39]]
=== Sv39: Page-Based 39-bit Virtual-Memory System
This section describes a simple paged virtual-memory system for
SXLEN=64, which supports 39-bit virtual address spaces. The design of
Sv39 follows the overall scheme of Sv32, and this section details only
the differences between the schemes.
[NOTE]
====
We specified multiple virtual memory systems for RV64 to relieve the
tension between providing a large address space and minimizing
address-translation cost. For many systems, 39 bits of virtual-address space is
ample, and so Sv39 suffices. Sv48 increases the virtual address space to
48 bits, but increases the physical memory capacity dedicated to page tables,
the latency of page-table traversals, and the size of hardware
structures that store virtual addresses. Sv57 increases the virtual
address space, page table capacity requirement, and translation latency
even further.
====
[[addressing-and-memory-protection]]
==== Addressing and Memory Protection
Sv39 implementations support a 39-bit virtual address space, divided
into pages. An Sv39 address is partitioned as shown in
<<sv39va>>. Instruction fetch addresses and load and
store effective addresses, which are 64 bits, must have bits 63–39 all
equal to bit 38, or else a page-fault exception will occur. The 27-bit
VPN is translated into a 44-bit PPN via a three-level page table, while
the 12-bit page offset is untranslated.
[NOTE]
====
When mapping between narrower and wider addresses, RISC-V zero-extends a
narrower physical address to a wider size. The mapping between 64-bit
virtual addresses and the 39-bit usable address space of Sv39 is not
based on zero-extension but instead follows an entrenched convention
that allows an OS to use one or a few of the most-significant bits of a
full-size (64-bit) virtual address to quickly distinguish user and
supervisor address regions.
====
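
A non-normative C sketch of the required check: sign-extending the low 39 bits must reproduce the original 64-bit address.

[source,c]
----
#include <stdbool.h>
#include <stdint.h>

/* True if bits 63-39 of va all equal bit 38, as Sv39 requires. */
static inline bool sv39_va_canonical(uint64_t va)
{
    return (uint64_t)(((int64_t)(va << 25)) >> 25) == va;
}
----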
[[sv39va]]
.Sv39 virtual address.
include::images/bytefield/sv39va.edn[]
[[sv39pa]]
.Sv39 physical address.
include::images/bytefield/sv39pa.edn[]
[[sv39pte]]
.Sv39 page table entry.
include::images/bytefield/sv39pte.edn[]
Sv39 page tables contain 2^9^ page-table entries (PTEs),
each of eight bytes. A page table is exactly the size of a page and must
always be aligned to a page boundary. The physical page number of the
root page table is stored in the `satp` register's PPN field.
The PTE format for Sv39 is shown in <<sv39pte>>.
Bits 9-0 have the same meaning as for Sv32. Bit 63 is reserved for use
by the Svnapot extension in <<svnapot>>. If Svnapot is not
implemented, bit 63 remains reserved and must be zeroed by software for
forward compatibility, or else a page-fault exception is raised. Bits
62-61 are reserved for use by the Svpbmt extension in
<<svpbmt>>. If Svpbmt is not implemented, bits 62-61 remain
reserved and must be zeroed by software for forward compatibility, or
else a page-fault exception is raised. Bits 60-54 are reserved for
future standard use and, until their use is defined by some standard
extension, must be zeroed by software for forward compatibility. If any
of these bits are set, a page-fault exception is raised.
[NOTE]
====
We reserved several PTE bits for a possible extension that improves
support for sparse address spaces by allowing page-table levels to be
skipped, reducing memory usage and TLB refill latency. These reserved
bits may also be used to facilitate research experimentation. The cost
is a reduction of the physical address space, which is presently ample. When it
no longer suffices, the reserved bits that remain unallocated could be
used to expand the physical address space.
====
Any level of PTE may be a leaf PTE, so in addition to 4 KiB pages, Sv39
supports 2 MiB _megapages_ and 1 GiB _gigapages_, each of which must be virtually and physically aligned to a boundary equal to its size. A page-fault exception is raised if the physical address is insufficiently aligned.
The algorithm for virtual-to-physical address translation is the same as
in <<sv32algorithm>>, except LEVELS equals 3 and PTESIZE equals 8.
[[sv48]]
=== Sv48: Page-Based 48-bit Virtual-Memory System
This section describes a simple paged virtual-memory system for
SXLEN=64, which supports 48-bit virtual address spaces. Sv48 is intended
for systems for which a 39-bit virtual address space is insufficient. It
closely follows the design of Sv39, simply adding an additional level of
page table, and so this section only details the differences between the
two schemes.
Implementations that support Sv48 must also support Sv39.
[NOTE]
====
Systems that support Sv48 can also support Sv39 at essentially no cost,
and so should do so to maintain compatibility with supervisor software
that assumes Sv39.
====
[[addressing-and-memory-protection-1]]
==== Addressing and Memory Protection
Sv48 implementations support a 48-bit virtual address space, divided
into pages. An Sv48 address is partitioned as shown in
<<sv48va>>. Instruction fetch addresses and load and
store effective addresses, which are 64 bits, must have bits 63–48 all
equal to bit 47, or else a page-fault exception will occur. The 36-bit
VPN is translated into a 44-bit PPN via a four-level page table, while
the 12-bit page offset is untranslated.
[[sv48va]]
.Sv48 virtual address.
include::images/bytefield/sv48va.edn[]
[[sv48pa]]
.Sv48 physical address.
include::images/bytefield/sv48pa.edn[]
[[sv48pte]]
.Sv48 page table entry.
include::images/bytefield/sv48pte.edn[]
The PTE format for Sv48 is shown in <<sv48pte>>.
Bits 63-54 and 9-0 have the same meaning as for Sv39. Any level of PTE
may be a leaf PTE, so in addition to pages, Sv48 supports _megapages_,
_gigapages_, and _terapages_, each of which must be virtually and
physically aligned to a boundary equal to its size. A page-fault
exception is raised if the physical address is insufficiently aligned.
The algorithm for virtual-to-physical address translation is the same as
in <<sv32algorithm>>, except LEVELS equals 4 and
PTESIZE equals 8.
[[sv57]]
=== Sv57: Page-Based 57-bit Virtual-Memory System
This section describes a simple paged virtual-memory system for
SXLEN=64, which supports 57-bit virtual address spaces. Sv57 is
intended for systems for which a 48-bit virtual address space is
insufficient. It closely follows the design of Sv48, simply adding an
additional level of page table, and so this section only details the
differences between the two schemes.
Implementations that support Sv57 must also support Sv48.
[NOTE]
====
Systems that support Sv57 can also support Sv48 at essentially no cost,
and so should do so to maintain compatibility with supervisor software
that assumes Sv48.
====
[[addressing-and-memory-protection-2]]
==== Addressing and Memory Protection
Sv57 implementations support a 57-bit virtual address space, divided
into pages. An Sv57 address is partitioned as shown in
<<sv57va>>. Instruction fetch addresses and load and
store effective addresses, which are 64 bits, must have bits 63–57 all
equal to bit 56, or else a page-fault exception will occur. The 45-bit
VPN is translated into a 44-bit PPN via a five-level page table, while
the 12-bit page offset is untranslated.
[[sv57va]]
.Sv57 virtual address.
include::images/bytefield/sv57va.edn[]
[[sv57pa]]
.Sv57 physical address.
include::images/bytefield/sv57pa.edn[]
[[sv57pte]]
.Sv57 page table entry.
include::images/bytefield/sv57pte.edn[]
The PTE format for Sv57 is shown in <<sv57pte>>.
Bits 63–54 and 9–0 have the same meaning as for Sv39. Any level of PTE
may be a leaf PTE, so in addition to pages, Sv57 supports _megapages_,
_gigapages_, _terapages_, and _petapages_, each of which must be
virtually and physically aligned to a boundary equal to its size. A
page-fault exception is raised if the physical address is insufficiently
aligned.
The algorithm for virtual-to-physical address translation is the same as
in <<sv32algorithm>>, except LEVELS equals 5 and
PTESIZE equals 8.
[[svnapot]]
== "Svnapot" Standard Extension for NAPOT Translation Contiguity, Version 1.0
In Sv39, Sv48, and Sv57, when a PTE has N=1, the PTE represents a
translation that is part of a range of contiguous virtual-to-physical
translations with the same values for PTE bits 5–0. Such ranges must be
of a naturally aligned power-of-2 (NAPOT) granularity larger than the
base page size.
The Svnapot extension depends on Sv39.
[[ptenapot]]
.Page table entry encodings when __pte__.N=1.
[%autowidth,float="center",align="center",cols="^,^,<,^",options="header"]
|===
|i |_pte_._ppn_[_i_] |Description |_pte_.__napot_bits__
|0 +
0 +
0 +
0 +
0 +
≥1
|`x xxxx xxx1` +
`x xxxx xx1x` +
`x xxxx x1xx` +
`x xxxx 1000` +
`x xxxx 0xxx` +
`x xxxx xxxx`
|_Reserved_ +
_Reserved_ +
_Reserved_ +
64 KiB contiguous region +
_Reserved_ +
_Reserved_
| - +
- +
- +
4 +
- +
-
|===
NAPOT PTEs behave identically to non-NAPOT PTEs within the
address-translation algorithm in <<sv32algorithm>>,
except that:
* If the encoding in _pte_ is valid according to
<<ptenapot>>, then instead of returning the original
value of _pte_, implicit reads of a NAPOT PTE return a copy
of _pte_ in which __pte__.__ppn__[__i__][__pte__.__napot_bits__-1:0] is replaced by
__vpn__[__i__][__pte__.__napot_bits__-1:0]. If the encoding in _pte_ is reserved according to
<<ptenapot>>, then a page-fault exception must be raised.
* Implicit reads of NAPOT page table entries may create
address-translation cache entries mapping
_a_ + _j_×PTESIZE to a copy of _pte_ in which _pte_._ppn_[_i_][_pte_.__napot_bits__-1:0]
is replaced by __vpn__[__i__][__pte__.__napot_bits__-1:0], for any or all _j_ such that
__j__ >> __napot_bits__ = __vpn__[__i__] >> __napot_bits__, all for the address space identified in `satp` as loaded by step 1.
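
For the single standardized NAPOT size (64 KiB, __napot_bits__=4), the substitution described above reduces to replacing the low four PPN bits, as in this illustrative helper (validity of the encoding is assumed to have been checked already):

[source,c]
----
#include <stdint.h>

/* For a valid 64 KiB NAPOT leaf (napot_bits = 4), an implicit read
 * returns a copy of the PTE whose ppn[0][3:0] is replaced with
 * vpn[0][3:0]. The encoding check (low PPN bits = 0b1000) is assumed
 * to have been performed already. */
static inline uint64_t napot64k_effective_ppn(uint64_t pte_ppn, uint64_t vpn0)
{
    return (pte_ppn & ~0xfull) | (vpn0 & 0xfull);
}
----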
[NOTE]
====
The motivation for a NAPOT PTE is that it can be cached in a TLB as one
or more entries representing the contiguous region as if it were a
single (large) page covered by a single translation. This compaction can
help relieve TLB pressure in some scenarios. The encoding is designed to
fit within the pre-existing Sv39, Sv48, and Sv57 PTE formats so as not
to disrupt existing implementations or designs that choose not to
implement the scheme. It is also designed so as not to complicate the
definition of the address-translation algorithm.
The address-translation cache abstraction captures the behavior that
would result from the creation of a single TLB entry covering the entire
NAPOT region. It is also designed to be consistent with implementations
that support NAPOT PTEs by splitting the NAPOT region into TLB entries
covering any smaller power-of-two region sizes. For example, a 64 KiB
NAPOT PTE might trigger the creation of 16 standard 4 KiB TLB entries,
all with contents generated from the NAPOT PTE (even if the PTEs for the
other 4 KiB regions have different contents).
In typical usage scenarios, NAPOT PTEs in the same region will have the
same attributes, same PPNs, and same values for bits 5-0. RSW remains
reserved for supervisor software control. It is the responsibility of
the OS and/or hypervisor to configure the page tables in such a way that
there are no inconsistencies between NAPOT PTEs and other NAPOT or
non-NAPOT PTEs that overlap the same address range. If an update needs
to be made, the OS generally should first mark all of the PTEs invalid,
then issue SFENCE.VMA instruction(s) covering all 4 KiB regions within
the range (either via a single SFENCE.VMA with _rs1_=`x0`, or with
multiple SFENCE.VMA instructions with _rs1_≠`x0`), then update the PTE(s), as described in <<sfence.vma>>, unless any inconsistencies are known to be benign. If any inconsistencies do exist, then the effect is the same as when SFENCE.VMA
is used incorrectly: one of the translations will be chosen, but the
choice is unpredictable.
If an implementation chooses to use a NAPOT PTE (or cached version
thereof), it might not consult the PTE directly specified by the
algorithm in <<sv32algorithm>> at all. Therefore, the D
and A bits may not be identical across all mappings of the same address
range even in typical use cases. The operating system must query all
NAPOT aliases of a page to determine whether that page has been accessed
and/or is dirty. If the OS manually sets the A and/or D bits for a page,
it is recommended that the OS also set the A and/or D bits for other
NAPOT aliases as appropriate in order to avoid unnecessary traps.
Just as with normal PTEs, TLBs are permitted to cache NAPOT PTEs whose V
(Valid) bit is clear.
Depending on need, the NAPOT scheme may be extended to other
intermediate page sizes and/or to other levels of the page table in the
future. The encoding is designed to accommodate other NAPOT sizes should
that need arise. For example:
[%autowidth,float="center",align="center",cols="^,^,<,^",options="header"]
|===
|i |_pte_._ppn_[_i_] |Description |_pte_.__napot_bits__
|0 +
0 +
0 +
0 +
0 +
... +
1 +
1 +
...
|`x xxxx xxx1` +
`x xxxx xx10` +
`x xxxx x100` +
`x xxxx 1000` +
`x xxx1 0000` +
... +
`x xxxx xxx1` +
`x xxxx xx10` +
...
|8 KiB contiguous region +
16 KiB contiguous region +
32 KiB contiguous region +
64 KiB contiguous region +
128 KiB contiguous region +
... +
4 MiB contiguous region +
8 MiB contiguous region +
...
| 1 +
2 +
3 +
4 +
5 +
... +
1 +
2 +
...
|===
In such a case, an implementation may or may not support all options.
The discoverability mechanism for this extension would be extended to
allow system software to determine which sizes are supported.
Other sizes may remain deliberately excluded, so that PPN bits not being
used to indicate a valid NAPOT region size (e.g., the least-significant
bit of _pte_._ppn_[_i_]) may be repurposed for other uses in the
future.
However, in case finer-grained intermediate page size support proves not
to be useful, we have chosen to standardize only 64 KiB support as a
first step.
====
[[svpbmt]]
== "Svpbmt" Standard Extension for Page-Based Memory Types, Version 1.0
In Sv39, Sv48, and Sv57, bits 62-61 of a leaf page table entry indicate
the use of page-based memory types that override the PMA(s) for the
associated memory pages. The encoding for the PBMT bits is captured in
<<pbmt>>.
The Svpbmt extension depends on Sv39.
[[pbmt]]
.Encodings for PBMT field in Sv39, Sv48, and Sv57 PTEs. Attributes not mentioned are inherited from the PMA associated with the physical address.
[%autowidth,float="center",align="center",cols="^,^,<",options="header"]
|===
|Mode |Value |Requested Memory Attributes
|PMA +
NC +
IO +
-
|0 +
1 +
2 +
3
|None +
Non-cacheable, idempotent, weakly-ordered (RVWMO), main memory +
Non-cacheable, non-idempotent, strongly-ordered (I/O ordering), I/O +
_Reserved for future standard use_
|===
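
As a sketch, the PBMT field can be written into bits 62-61 of a leaf PTE with a helper like the following; the enumerators mirror the values in the table above:

[source,c]
----
#include <stdint.h>

/* PBMT values from the encoding table, placed in PTE bits 62-61. */
enum pbmt { PBMT_PMA = 0, PBMT_NC = 1, PBMT_IO = 2 };   /* 3 is reserved */

static inline uint64_t pte_set_pbmt(uint64_t pte, enum pbmt mode)
{
    return (pte & ~(3ull << 61)) | ((uint64_t)mode << 61);
}
----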
[NOTE]
====
Future extensions may provide more and/or finer-grained control over
which PMAs can be overridden.
====
For non-leaf PTEs, bits 62-61 are reserved for future standard use.
Until their use is defined by a standard extension, they must be cleared
by software for forward compatibility, or else a page-fault exception is
raised.
For leaf PTEs, setting bits 62-61 to the value 3 is reserved for future
standard use. Until this value is defined by a standard extension, using
this reserved value in a leaf PTE raises a page-fault exception.
If the underlying physical memory attribute for a page is vacant, the
PBMT settings do not override that attribute.
When PBMT settings change a page's attributes from main memory to I/O
or vice versa, memory accesses to such pages obey the memory ordering
rules of the final effective attribute, as follows.
If the underlying physical memory attribute for a page is I/O, and the
page has PBMT=NC, then accesses to that page obey RVWMO. However,
accesses to such pages are considered to be _both_ I/O and main memory
accesses for the purposes of FENCE, _.aq_, and _.rl_.
If the underlying physical memory attribute for a page is main memory,
and the page has PBMT=IO, then accesses to that page obey strong channel
0 I/O ordering rules.
However, accesses to
such pages are considered to be _both_ I/O and main memory accesses for
the purposes of FENCE, _.aq_, and _.rl_.
[NOTE]
====
A device driver written to rely on I/O strong ordering rules will not
operate correctly if the address range is mapped with PBMT=NC. As such,
this configuration is discouraged.
It will often still be useful to map physical I/O regions using PBMT=NC
so that write combining and speculative accesses can be performed. Such
optimizations will likely improve performance when applied with adequate
care.
====
When Svpbmt is used with non-zero PBMT encodings, it is possible for
multiple virtual aliases of the same physical page to exist
simultaneously with different memory attributes. It is also possible for
a U-mode or S-mode mapping through a PTE with Svpbmt enabled to observe
different memory attributes for a given region of physical memory than a
concurrent access to the same page performed by M-mode or when
MODE=Bare. In such cases, the behaviors dictated by the attributes
(including coherence, which is otherwise unaffected) may be violated.
Accessing the same location using different attributes that are both
non-cacheable (e.g., NC and IO) does not cause loss of coherence, but
might result in weaker memory ordering than the stricter attribute
ordinarily guarantees. Executing a `fence iorw, iorw` instruction
between such accesses suffices to prevent loss of memory ordering.
Accessing the same location using different cacheability attributes may
cause loss of coherence. Executing the following sequence between such
accesses prevents both loss of coherence and loss of memory ordering:
`fence iorw, iorw`, followed by `cbo.flush` to an address of that
location, followed by a `fence iorw, iorw`.
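
Expressed as hypothetical GNU C inline assembly (and assuming assembler support for the Zicbom mnemonics), the sequence might look like:

[source,c]
----
/* Hypothetical sequence for accessing one location under different
 * cacheability attributes: fence, flush the cache block, fence again. */
static inline void change_attr_sync(void *addr)
{
    __asm__ volatile("fence iorw, iorw" ::: "memory");
    __asm__ volatile("cbo.flush 0(%0)" : : "r"(addr) : "memory");
    __asm__ volatile("fence iorw, iorw" ::: "memory");
}
----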
[NOTE]
====
It follows that, if the same location might later be referenced using
the original attributes, then this sequence must be repeated beforehand.
***
In certain cases, a weaker sequence might suffice to prevent loss of
coherence. These situations will be detailed following the forthcoming
formalization of the interaction of the RVWMO memory model with the
instructions in the Zicbom extension.
====
When two-stage address translation is enabled within the H extension,
the page-based memory types are also applied in two stages. First, if
`hgatp`.MODE is not equal to zero, non-zero G-stage PTE PBMT bits
override the attributes in the PMA to produce an intermediate set of
attributes. Otherwise, the PMAs serve as the intermediate attributes.
Second, if `vsatp`.MODE is not equal to zero, non-zero VS-stage PTE PBMT
bits override the intermediate attributes to produce the final set of
attributes used by accesses to the page in question. Otherwise, the
intermediate attributes are used as the final set of attributes.
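
A non-normative sketch of this two-stage resolution, with PBMT values standing in for the attribute sets they select (callers would pass 0 for a stage whose MODE is zero):

[source,c]
----
/* Non-normative model: a non-zero PBMT value at each enabled stage
 * overrides the attributes produced by the previous stage; zero leaves
 * them unchanged. */
static inline unsigned effective_pbmt(unsigned g_stage, unsigned vs_stage)
{
    unsigned attrs = 0;                 /* 0: attributes come from the PMAs */
    if (g_stage != 0)
        attrs = g_stage;                /* G-stage override  */
    if (vs_stage != 0)
        attrs = vs_stage;               /* VS-stage override */
    return attrs;
}
----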
[[svinval]]
== "Svinval" Standard Extension for Fine-Grained Address-Translation Cache Invalidation, Version 1.0
The Svinval extension splits SFENCE.VMA, HFENCE.VVMA, and HFENCE.GVMA
instructions into finer-grained invalidation and ordering operations
that can be more efficiently batched or pipelined on certain classes of
high-performance implementation.
include::images/wavedrom/sinvalvma.edn[]
The SINVAL.VMA instruction invalidates any address-translation cache
entries that an SFENCE.VMA instruction with the same values of _rs1_ and
_rs2_ would invalidate. However, unlike SFENCE.VMA, SINVAL.VMA
instructions are only ordered with respect to SFENCE.VMA,
SFENCE.W.INVAL, and SFENCE.INVAL.IR instructions as defined below.
include::images/wavedrom/sfencewinval.edn[]
include::images/wavedrom/sfenceinvalir.edn[]
The SFENCE.W.INVAL instruction guarantees that any previous stores
already visible to the current RISC-V hart are ordered before subsequent
SINVAL.VMA instructions executed by the same hart. The SFENCE.INVAL.IR
instruction guarantees that any previous SINVAL.VMA instructions
executed by the current hart are ordered before subsequent implicit
references by that hart to the memory-management data structures.
When executed in order (but not necessarily consecutively) by a single
hart, the sequence SFENCE.W.INVAL, SINVAL.VMA, and SFENCE.INVAL.IR has
the same effect as a hypothetical SFENCE.VMA instruction in which:
* the values of _rs1_ and _rs2_ for the SFENCE.VMA are the same as those
used in the SINVAL.VMA,
* reads and writes prior to the SFENCE.W.INVAL are considered to be
those prior to the SFENCE.VMA, and
* reads and writes following the SFENCE.INVAL.IR are considered to be
those subsequent to the SFENCE.VMA.
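
For example, a batched invalidation of a contiguous range for one ASID might be written as follows; this hypothetical sketch assumes assembler support for the Svinval mnemonics and a 4 KiB base page size:

[source,c]
----
#include <stddef.h>
#include <stdint.h>

/* Hypothetical batch invalidation for one ASID: one ordering fence on
 * each side of a pipelined series of SINVAL.VMA instructions. */
static inline void sinval_range(uintptr_t va, size_t npages,
                                unsigned long asid)
{
    __asm__ volatile("sfence.w.inval" ::: "memory");
    for (size_t i = 0; i < npages; i++)
        __asm__ volatile("sinval.vma %0, %1"
                         : : "r"(va + i * 4096), "r"(asid) : "memory");
    __asm__ volatile("sfence.inval.ir" ::: "memory");
}
----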
include::images/wavedrom/hinvalvvma.edn[]
include::images/wavedrom/hinvalgvma.edn[]
If the hypervisor extension is implemented, the Svinval extension also
provides two additional instructions: HINVAL.VVMA and HINVAL.GVMA. These
have the same semantics as SINVAL.VMA, except that they combine with
SFENCE.W.INVAL and SFENCE.INVAL.IR to replace HFENCE.VVMA and
HFENCE.GVMA, respectively, instead of SFENCE.VMA. In addition,
HINVAL.GVMA uses VMIDs instead of ASIDs.
SINVAL.VMA, HINVAL.VVMA, and HINVAL.GVMA require the same permissions
and raise the same exceptions as SFENCE.VMA, HFENCE.VVMA, and
HFENCE.GVMA, respectively. In particular, an attempt to execute any of
these instructions in U-mode always raises an illegal instruction
exception, and an attempt to execute SINVAL.VMA or HINVAL.GVMA in S-mode
or HS-mode when `mstatus`.TVM=1 also raises an illegal instruction
exception. An attempt to execute HINVAL.VVMA or HINVAL.GVMA in VS-mode
or VU-mode, or to execute SINVAL.VMA in VU-mode, raises a virtual
instruction exception. When `hstatus`.VTVM=1, an attempt to execute
SINVAL.VMA in VS-mode also raises a virtual instruction exception.
[NOTE]
====
SFENCE.W.INVAL and SFENCE.INVAL.IR instructions do not need to be
trapped when `mstatus`.TVM=1 or when `hstatus`.VTVM=1, as they only have
ordering effects but no visible side effects. Trapping of the SINVAL.VMA
instruction is sufficient to enable emulation of the intended overall
TLB maintenance functionality.
In typical usage, software will invalidate a range of virtual addresses
in the address-translation caches by executing an SFENCE.W.INVAL
instruction, executing a series of SINVAL.VMA, HINVAL.VVMA, or
HINVAL.GVMA instructions to the addresses (and optionally ASIDs or
VMIDs) in question, and then executing an SFENCE.INVAL.IR instruction.
High-performance implementations will be able to pipeline the
address-translation cache invalidation operations, and will defer any
pipeline stalls or other memory ordering enforcement until an
SFENCE.W.INVAL, SFENCE.INVAL.IR, SFENCE.VMA, HFENCE.GVMA, or HFENCE.VVMA
instruction is executed.
Simpler implementations may implement SINVAL.VMA, HINVAL.VVMA, and
HINVAL.GVMA identically to SFENCE.VMA, HFENCE.VVMA, and HFENCE.GVMA,
respectively, while implementing SFENCE.W.INVAL and SFENCE.INVAL.IR
instructions as no-ops.
====