path: root/target/arm/t32.decode
author    Peter Maydell <peter.maydell@linaro.org> 2021-06-28 14:58:32 +0100
committer Peter Maydell <peter.maydell@linaro.org> 2021-07-02 11:48:37 +0100
commit    f4ae6c8cbda8d9b21290e9b8ae21b785ca24aace (patch)
tree      e48fada5b87faf37247fafdd739889b5ea6a44f0 /target/arm/t32.decode
parent    d43ebd9dc8a268195dcc8219ced96f9e3bdc4050 (diff)
target/arm: Implement MVE long shifts by immediate
The MVE extension to v8.1M includes some new shift instructions which sit entirely within the non-coprocessor part of the encoding space and which operate only on general-purpose registers. They take up the space which was previously UNPREDICTABLE MOVS and ORRS encodings with Rm == 13 or 15.

Implement the long shifts by immediate, which perform shifts on a pair of general-purpose registers treated as a 64-bit quantity, with an immediate shift count between 1 and 32.

Awkwardly, because the MOVS and ORRS trans functions do not UNDEF for the Rm==13,15 case, we need to explicitly emit code to UNDEF for the cases where v8.1M now requires that. (Trying to change MOVS and ORRS is too difficult, because the functions that generate the code are shared between a dozen different kinds of arithmetic or logical instruction for all A32, T16 and T32 encodings, and for some insns and some encodings Rm==13,15 are valid.)

We make the helper functions we need for UQSHLL and SQSHLL take a 32-bit value which the helper casts to int8_t because we'll need these helpers also for the shift-by-register insns, where the shift count might be < 0 or > 32.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-16-peter.maydell@linaro.org
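The last paragraph's point about the helper signature can be illustrated with a minimal sketch (this is not QEMU's actual helper, and `sqshll_sketch` is a hypothetical name): a 64-bit signed saturating left shift whose count arrives as a 32-bit value and is truncated to int8_t. The immediate forms only ever pass 1..32, but the truncation keeps negative and >32 counts meaningful for the later shift-by-register forms.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch only: 64-bit signed saturating left shift with a 32-bit
 * shift count truncated to int8_t, as the commit message describes. */
static int64_t sqshll_sketch(int64_t n, uint32_t shift_in, bool *sat)
{
    int8_t shift = (int8_t)shift_in;  /* counts < 0 or > 32 survive the cast */

    if (n == 0) {
        return 0;                     /* zero never shifts into saturation */
    }
    if (shift <= -64) {
        return n < 0 ? -1 : 0;        /* deep right shift: all sign bits */
    }
    if (shift < 0) {
        return n >> -shift;           /* negative count means shift right */
    }
    if (shift < 64 &&
        (int64_t)((uint64_t)n << shift) >> shift == n) {
        return (int64_t)((uint64_t)n << shift);  /* fits: no saturation */
    }
    *sat = true;                      /* overflow: saturate and flag it */
    return n < 0 ? INT64_MIN : INT64_MAX;
}
```

The unsigned cast before the left shift avoids the undefined behaviour of left-shifting a negative signed value in C; shifting back down and comparing detects overflow.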
Diffstat (limited to 'target/arm/t32.decode')
-rw-r--r--  target/arm/t32.decode | 28
1 file changed, 28 insertions, 0 deletions
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
index 0f9326c..d740320 100644
--- a/target/arm/t32.decode
+++ b/target/arm/t32.decode
@@ -48,6 +48,13 @@
&mcr !extern cp opc1 crn crm opc2 rt
&mcrr !extern cp opc1 crm rt rt2
+&mve_shl_ri rdalo rdahi shim
+
+# rdahi: bits [3:1] from insn, bit 0 is 1
+# rdalo: bits [3:1] from insn, bit 0 is 0
+%rdahi_9 9:3 !function=times_2_plus_1
+%rdalo_17 17:3 !function=times_2
+
# Data-processing (register)
%imm5_12_6 12:3 6:2
@@ -59,12 +66,33 @@
@S_xrr_shi ....... .... . rn:4 .... .... .. shty:2 rm:4 \
&s_rrr_shi shim=%imm5_12_6 s=1 rd=0
+@mve_shl_ri ....... .... . ... . . ... ... . .. .. .... \
+ &mve_shl_ri shim=%imm5_12_6 rdalo=%rdalo_17 rdahi=%rdahi_9
+
{
TST_xrri 1110101 0000 1 .... 0 ... 1111 .... .... @S_xrr_shi
AND_rrri 1110101 0000 . .... 0 ... .... .... .... @s_rrr_shi
}
BIC_rrri 1110101 0001 . .... 0 ... .... .... .... @s_rrr_shi
{
+ # The v8.1M MVE shift insns overlap in encoding with MOVS/ORRS
+ # and are distinguished by having Rm==13 or 15. Those are UNPREDICTABLE
+ # cases for MOVS/ORRS. We decode the MVE cases first, ensuring that
+ # they explicitly call unallocated_encoding() for cases that must UNDEF
+ # (eg "using a new shift insn on a v8.1M CPU without MVE"), and letting
+ # the rest fall through (where ORR_rrri and MOV_rxri will end up
+ # handling them as r13 and r15 accesses with the same semantics as A32).
+ [
+ LSLL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 00 1111 @mve_shl_ri
+ LSRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 01 1111 @mve_shl_ri
+ ASRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 10 1111 @mve_shl_ri
+
+ UQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 00 1111 @mve_shl_ri
+ URSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 01 1111 @mve_shl_ri
+ SRSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 10 1111 @mve_shl_ri
+ SQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 11 1111 @mve_shl_ri
+ ]
+
MOV_rxri 1110101 0010 . 1111 0 ... .... .... .... @s_rxr_shi
ORR_rrri 1110101 0010 . .... 0 ... .... .... .... @s_rrr_shi
}
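The `%rdalo_17 17:3 !function=times_2` and `%rdahi_9 9:3 !function=times_2_plus_1` field definitions in the hunk above can be read as follows, sketched in plain C (QEMU's real extraction helpers also take a `DisasContext *` argument, omitted here for a self-contained illustration): decodetree extracts bits [3:1] of each register number from the insn, and the implied low bit is 0 for RdaLo (an even register) and 1 for RdaHi (the following odd one).

```c
#include <stdint.h>

/* The !function= helpers, minus QEMU's DisasContext* parameter. */
static int times_2(int x)        { return x * 2; }
static int times_2_plus_1(int x) { return x * 2 + 1; }

/* Hand-rolled equivalents of the decodetree field extraction:
 * %rdalo_17 takes insn bits [19:17], %rdahi_9 takes bits [11:9]. */
static int extract_rdalo(uint32_t insn)
{
    return times_2((insn >> 17) & 7);
}

static int extract_rdahi(uint32_t insn)
{
    return times_2_plus_1((insn >> 9) & 7);
}
```

So an encoding with bits [19:17] = 0b010 names RdaLo = r4, and bits [11:9] = 0b011 name RdaHi = r7, keeping the pair in an even/odd register arrangement.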