author    Jakub Jelinek <jakub@redhat.com>  2020-02-06 11:08:59 +0100
committer Jakub Jelinek <jakub@redhat.com>  2020-02-06 11:08:59 +0100
commit    3f740c67dbb90177aa71d3c60ef9b0fd2f44dbd9 (patch)
tree      da4f56c7d249b3940ba60ff223273b8326db16fe /gcc/tree-vector-builder.c
parent    cb3f06480a17f98579704b9927632627a3814c5c (diff)
i386: Improve avx* vector concatenation [PR93594]
The following testcase shows that for the _mm256_set*_m128i and similar
intrinsics we sometimes generate bad code.  All 4 routines express the same
thing, a 128-bit vector zero padded to a 256-bit vector, but only the 3rd one
actually emits the desired

	vmovdqa	%xmm0, %xmm0

insn; the others emit

	vpxor	%xmm1, %xmm1, %xmm1
	vinserti128	$0x1, %xmm1, %ymm0, %ymm0

instead.  The problem is that the cast builtins use UNSPEC_CAST, which is
simplified after reload using a splitter, but during combine it prevents
optimizations.  We do have avx_vec_concat* patterns that generate efficient
code, both for this low part + zero concatenation special case and for other
cases too, so the following define_insn_and_split just recognizes an
avx_vec_concat made of the low half of a cast and some other reg.

2020-02-06  Jakub Jelinek  <jakub@redhat.com>

	PR target/93594
	* config/i386/predicates.md (avx_identity_operand): New predicate.
	* config/i386/sse.md (*avx_vec_concat<mode>_1): New
	define_insn_and_split.

	* gcc.target/i386/avx2-pr93594.c: New test.
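For context, here is a minimal sketch of the kind of source that hits this:
four routines that all zero-extend a 128-bit vector to 256 bits through
different AVX/AVX2 intrinsics.  This is an illustration, not the actual
gcc.target/i386/avx2-pr93594.c test, and the function names are made up.
With the patch, each should compile down to a single vmovdqa %xmm0, %xmm0
at -O2 -mavx2 instead of the vpxor/vinserti128 pair.

	#include <immintrin.h>

	__m256i
	set_hi_zero (__m128i x)
	{
	  /* High half explicitly zero, low half from x.  */
	  return _mm256_set_m128i (_mm_setzero_si128 (), x);
	}

	__m256i
	setr_hi_zero (__m128i x)
	{
	  /* Same thing via the "reversed" argument-order variant.  */
	  return _mm256_setr_m128i (x, _mm_setzero_si128 ());
	}

	__m256i
	cast_insert_zero (__m128i x)
	{
	  /* Cast to 256 bits, then insert zeros into the high lane.  */
	  return _mm256_inserti128_si256 (_mm256_castsi128_si256 (x),
					  _mm_setzero_si128 (), 1);
	}

	__m256i
	insert_into_zero (__m128i x)
	{
	  /* Insert x into the low lane of an all-zero 256-bit vector.  */
	  return _mm256_inserti128_si256 (_mm256_setzero_si256 (), x, 0);
	}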
Diffstat (limited to 'gcc/tree-vector-builder.c')
0 files changed, 0 insertions, 0 deletions