path: root/llvm/lib/Target/X86/Utils/X86ShuffleDecode.cpp
authorChandler Carruth <chandlerc@gmail.com>2014-08-02 10:39:15 +0000
committerChandler Carruth <chandlerc@gmail.com>2014-08-02 10:39:15 +0000
commit4c57955fe33914e0b514ac7aeaaa223828112251 (patch)
tree3e9f6c90fdc1afb0582c68f35e5bee4d77c7f8f3 /llvm/lib/Target/X86/Utils/X86ShuffleDecode.cpp
parentd10b29240c2103d957bba87618d88aee52d51a0a (diff)
downloadllvm-4c57955fe33914e0b514ac7aeaaa223828112251.zip
llvm-4c57955fe33914e0b514ac7aeaaa223828112251.tar.gz
llvm-4c57955fe33914e0b514ac7aeaaa223828112251.tar.bz2
[x86] Largely complete the use of PSHUFB in the new vector shuffle
lowering with a small addition to it and adding PSHUFB combining.

There is one obvious place in the new vector shuffle lowering where we should form PSHUFBs directly: when without them we will unpack a vector of i8s across two different registers and do a potentially 4-way blend as i16s only to re-pack them into i8s afterward. This is the crazy expensive fallback path for i8 shuffles and we can just directly use pshufb here as it will always be cheaper (the unpack and pack are two instructions so even a single shuffle between them hits our three instruction limit for forming PSHUFB).

However, this doesn't generate very good code in many cases, and it leaves a bunch of common patterns not using PSHUFB. So this patch also adds support for extracting a shuffle mask from PSHUFB in the X86 lowering code, and uses it to handle PSHUFBs in the recursive shuffle combining. This allows us to combine through them, combine multiple ones together, and generally produce sufficiently high quality code.

Extracting the PSHUFB mask is annoyingly complex because it could be either pre-legalization or post-legalization. At least this doesn't have to deal with re-materialized constants. =] I've added decode routines to handle the different patterns that show up at this level and we dispatch through them as appropriate.

The two primary test cases are updated. For the v16 test case there is still a lot of room for improvement. Since I was going through it systematically I left behind a bunch of FIXME lines that I'm hoping to turn into ALL lines by the end of this.

llvm-svn: 214628
Diffstat (limited to 'llvm/lib/Target/X86/Utils/X86ShuffleDecode.cpp')
-rw-r--r-- llvm/lib/Target/X86/Utils/X86ShuffleDecode.cpp | 20
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/llvm/lib/Target/X86/Utils/X86ShuffleDecode.cpp b/llvm/lib/Target/X86/Utils/X86ShuffleDecode.cpp
index 83ee12b..863da74 100644
--- a/llvm/lib/Target/X86/Utils/X86ShuffleDecode.cpp
+++ b/llvm/lib/Target/X86/Utils/X86ShuffleDecode.cpp
@@ -208,7 +208,6 @@ void DecodeVPERM2X128Mask(MVT VT, unsigned Imm,
}
}
-/// \brief Decode PSHUFB masks stored in an LLVM Constant.
void DecodePSHUFBMask(const ConstantDataSequential *C,
SmallVectorImpl<int> &ShuffleMask) {
Type *MaskTy = C->getType();
@@ -240,6 +239,25 @@ void DecodePSHUFBMask(const ConstantDataSequential *C,
}
}
+void DecodePSHUFBMask(ArrayRef<uint64_t> RawMask,
+ SmallVectorImpl<int> &ShuffleMask) {
+ for (int i = 0, e = RawMask.size(); i < e; ++i) {
+ uint64_t M = RawMask[i];
+ // For AVX vectors with 32 bytes the base of the shuffle is the half of
+ // the vector we're inside.
+ int Base = i < 16 ? 0 : 16;
+ // If the high bit (7) of the byte is set, the element is zeroed.
+ if (M & (1 << 7))
+ ShuffleMask.push_back(SM_SentinelZero);
+ else {
+ int Index = Base + M;
+ assert((Index >= 0 && (unsigned)Index < RawMask.size()) &&
+ "Out of bounds shuffle index for pshufb instruction!");
+ ShuffleMask.push_back(Index);
+ }
+ }
+}
+
/// DecodeVPERMMask - this decodes the shuffle masks for VPERMQ/VPERMPD.
/// No VT provided since it only works on 256-bit, 4 element vectors.
void DecodeVPERMMask(unsigned Imm, SmallVectorImpl<int> &ShuffleMask) {