author | Craig Topper <craig.topper@intel.com> | 2019-06-21 19:10:21 +0000
committer | Craig Topper <craig.topper@intel.com> | 2019-06-21 19:10:21 +0000
commit | 4649a051bf0b80732ebe805c65a40756e883df6a (patch)
tree | 36d74cb0cd7ffc5277ae353aab39efc04f836a40
parent | 410b650e674496e61506fa88f3026759b8759d0f (diff)
[X86] Add DAG combine to turn (vzmovl (insert_subvector undef, X, 0)) into (insert_subvector allzeros, (vzmovl X), 0)
128/256-bit scalar_to_vectors are canonicalized to (insert_subvector undef, (scalar_to_vector), 0). We have isel patterns that try to match this pattern when it is used by a vzmovl, so that we can use a 128-bit instruction and a subreg_to_reg.
This patch detects the insert_subvector undef portion of this pattern and pulls it through the vzmovl, creating a narrower vzmovl and an insert_subvector allzeros. We can then match the insert_subvector into a subreg_to_reg operation by itself. Then we can fall back on the existing (vzmovl (scalar_to_vector)) patterns.
Note, while the scalar_to_vector case is the motivating case, I didn't restrict the combine to just that case. I'm also wondering about shrinking any 256/512-bit vzmovl to an extract_subvector+vzmovl+insert_subvector(allzeros), but I fear that would have bad implications for shuffle combining.
I also think there is more canonicalization we can do with vzmovl of loads, or scalar_to_vector of loads, to create vzload.
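As a rough illustration only (not the actual patch in D63512), a DAG combine of this shape in X86ISelLowering.cpp could look roughly like the sketch below. The helper name combineVZMovlOfInsertSubvector and its exact wiring into the VZEXT_MOVL combine are assumptions; getZeroVector is the file-local helper X86ISelLowering.cpp already uses to build all-zeros vectors.

```cpp
// Hedged sketch, not the committed code: fold
//   (vzmovl (insert_subvector undef, X, 0))
//     -> (insert_subvector allzeros, (vzmovl X), 0)
// The function name and placement are hypothetical; the real combine would
// hang off the X86ISD::VZEXT_MOVL handling in X86ISelLowering.cpp.
static SDValue combineVZMovlOfInsertSubvector(SDNode *N, SelectionDAG &DAG,
                                              const X86Subtarget &Subtarget) {
  SDValue Src = N->getOperand(0);
  EVT VT = N->getValueType(0);
  SDLoc DL(N);

  // Match (insert_subvector undef, X, 0) feeding the X86ISD::VZEXT_MOVL.
  if (Src.getOpcode() != ISD::INSERT_SUBVECTOR ||
      !Src.getOperand(0).isUndef() || !isNullConstant(Src.getOperand(2)))
    return SDValue();

  SDValue SubVec = Src.getOperand(1);
  EVT SubVT = SubVec.getValueType();

  // Build the narrower vzmovl directly on the subvector ...
  SDValue NarrowMovl = DAG.getNode(X86ISD::VZEXT_MOVL, DL, SubVT, SubVec);

  // ... and re-insert it into an all-zeros vector of the wide type, which
  // the existing patterns can select as a subreg_to_reg by itself.
  SDValue Zeros = getZeroVector(VT.getSimpleVT(), Subtarget, DAG, DL);
  return DAG.getNode(ISD::INSERT_SUBVECTOR, DL, VT, Zeros, NarrowMovl,
                     Src.getOperand(2));
}
```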
Differential Revision: https://reviews.llvm.org/D63512
llvm-svn: 364095