author    Craig Topper <craig.topper@sifive.com>  2021-04-22 09:33:24 -0700
committer Craig Topper <craig.topper@sifive.com>  2021-04-22 09:50:07 -0700
commit    77f14c96e53a4b4bbef9f5b4c925f24eab1b5835 (patch)
tree      ac0d82f0ae33fa9654d222d9d4d42b43ee480d31 /lldb/source/Commands/CommandObjectExpression.cpp
parent    deda60fcaf0be162e893ff68d8d91355e3ac5542 (diff)
[RISCV] Use stack temporary to splat two GPRs into SEW=64 vector on RV32.
Rather than splatting each GPR separately and using bit manipulation to merge them in the vector domain, copy the data to the stack and splat it using a strided load with an x0 stride. At least on some implementations this vector load is optimized to not perform a load for each element. This is equivalent to how we move i64 to f64 on RV32.

I've only implemented this for the intrinsic fallbacks in this patch. I think we do similar splatting/shifting/oring in other places. If this is approved, I'll refactor the others to share the code.

Differential Revision: https://reviews.llvm.org/D101002
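For reference, here is a minimal sketch of the idea in C using RVV intrinsics. This is an illustration only, not the patch itself (the patch changes the RISC-V SelectionDAG lowering); the __riscv_-prefixed intrinsic naming follows the current RVV intrinsics spec, and the splat_i64_rv32 helper name is made up for the example.

#include <riscv_vector.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: build a vector whose SEW=64 elements all hold
   the value formed by two 32-bit GPRs, on RV32 where no single GPR
   can carry an i64. */
vint64m1_t splat_i64_rv32(uint32_t lo, uint32_t hi, size_t vl) {
  /* Spill both halves to one contiguous 8-byte stack slot so they
     form a single 64-bit value in memory (little-endian: lo first). */
  uint32_t halves[2] = {lo, hi};
  int64_t slot;
  memcpy(&slot, halves, sizeof(slot));
  /* Strided load with stride 0 (the x0 stride in the generated code):
     every element reads the same address, replicating the 64-bit
     value across the vector with no cross-element bit twiddling. */
  return __riscv_vlse64_v_i64m1(&slot, 0, vl);
}

As the message notes, the payoff is that an implementation which recognizes the zero stride can satisfy the whole splat with a single memory access and an internal broadcast, instead of one load per element.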
Diffstat (limited to 'lldb/source/Commands/CommandObjectExpression.cpp')
0 files changed, 0 insertions, 0 deletions