author    Philip Reames <preames@rivosinc.com>  2023-09-07 16:01:16 -0700
committer GitHub <noreply@github.com>  2023-09-07 16:01:16 -0700
commit    b4a99f1cd67d7ba89a015c39c13dc92224d49963 (patch)
tree      599d0be05d7a6ce545110d0bbac2382ded9cd7d3 /flang/lib/Frontend/CompilerInvocation.cpp
parent    9208065a7b51958b44ea012259097e742825b7f4 (diff)
[RISCV] Lower constant build_vectors with few non-sign bits via vsext (#65648)
If we have a build_vector such as [i64 0, i64 3, i64 1, i64 2], we instead lower it as vsext([i8 0, i8 3, i8 1, i8 2]). For vectors with 4 or fewer elements, the resulting narrow vector can be generated via scalar materialization. For shuffles which get lowered to vrgathers, constant build_vectors of small constants are idiomatic. As such, this change covers all shuffles with an output type of 4 or fewer elements.

I deliberately started narrow here. I think it makes sense to expand this to longer vectors, but we need a more robust profit model on the recursive expansion. It's questionable whether we want to do the vsext if we're going to generate a constant pool load for the narrower type anyways.

One possibility for future exploration is to allow the narrower VT to be less than 8 bits. We can't use vsext for that, but we could use something analogous to our widening interleave lowering, with some extra shifts and ands.
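The core legality check is whether every element of the build_vector sign-extends losslessly from a narrower element type (in LLVM this falls out of `ComputeNumSignBits`). The standalone helper below is a hypothetical sketch of that check, not the actual SelectionDAG code from this commit: given the 64-bit constants, it finds the narrowest width in {8, 16, 32} from which each value sign-extends back to its original 64-bit value.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper (illustrative only, not the LLVM API): return the
// narrowest element width W in {8, 16, 32} such that every constant
// sign-extends losslessly from W bits back to 64 bits, or 64 if no
// narrowing is possible. For [0, 3, 1, 2] this yields 8, matching the
// vsext([i8 0, i8 3, i8 1, i8 2]) lowering described above.
unsigned narrowestSExtWidth(const std::vector<int64_t> &Consts) {
  for (unsigned W : {8u, 16u, 32u}) {
    bool AllFit = true;
    for (int64_t C : Consts) {
      // Keep only the low W bits, then sign-extend them back to 64 bits
      // via an arithmetic right shift; narrowing is lossless iff the
      // round trip reproduces the original value.
      int64_t Trunc =
          static_cast<int64_t>(static_cast<uint64_t>(C) << (64 - W)) >>
          (64 - W);
      if (Trunc != C) {
        AllFit = false;
        break;
      }
    }
    if (AllFit)
      return W;
  }
  return 64;
}
```

When this returns a width narrower than 64, the build_vector of small constants can be materialized at that width and widened with a single vsext; when it returns 64, the fallback (e.g. a constant pool load) remains.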
Diffstat (limited to 'flang/lib/Frontend/CompilerInvocation.cpp')
0 files changed, 0 insertions, 0 deletions