author: Craig Topper <craig.topper@intel.com> 2019-06-21 17:24:21 +0000
committer: Craig Topper <craig.topper@intel.com> 2019-06-21 17:24:21 +0000
commit: 6af1be96641f34e10bf3b4866f72571b63fab27c (patch)
tree: 5316df5a0d97530c01855f9f7d587b6109a68927 /llvm/lib/Target/WebAssembly/WebAssemblyFixFunctionBitcasts.cpp
parent: 4c9def4a51ac10d9a249f31ea712c32474e89914 (diff)
[X86] Use vmovq for v4i64/v4f64/v8i64/v8f64 vzmovl.
We already use vmovq for the v2i64/v2f64 vzmovl pattern, but for v4i64/v4f64/v8i64/v8f64 we were emitting blendpd+xorpd when optimizing for speed, or movsd+xorpd under optsize.

I think the blend with 0 or movss/movsd is only needed for vXi32, where we don't have an instruction that can move 32 bits from one xmm register to another while zeroing the upper bits. movq is no worse than blendpd on any known CPU.

llvm-svn: 364079
Diffstat (limited to 'llvm/lib/Target/WebAssembly/WebAssemblyFixFunctionBitcasts.cpp')
0 files changed, 0 insertions, 0 deletions