author    | Andrew Pinski <apinski@marvell.com> | 2021-10-10 01:28:59 +0000
committer | Andrew Pinski <apinski@marvell.com> | 2021-10-10 02:05:12 +0000
commit    | 882d806c1a8f9d2d2ade1133de88d63e5d4fe40c
tree      | c96e6eba543dbcb010bef443d8df2d587a8fed0b
parent    | c9db17b8803e8ac294016a68352ffdcfa0699ab4
tree-optimization: wrong code due to signed one-bit integer and "a?-1:0" [PR102622]
It turns out this is a latent bug, though not entirely latent.
In GCC 9 and 10, phi-opt would transform a?-1:0 (even for a signed 1-bit integer)
to -(type)a, but since the type is a one-bit integer the negation is
undefined. GCC 11 fixed the problem by checking for the a?pow2cst:0 transformation
before the a?-1:0 transformation.
When I moved the transformations to match.pd, I swapped the order without paying
attention and thought nothing of it, since no testcase was failing because of it.
This patch fixes the problem on trunk by swapping the order in match.pd and
adding a comment explaining why the order matters.
I will try to come up with a patch for the GCC 9 and 10 series later on which
fixes the problem there too.
Note that I didn't include the original testcase, which requires the vectorizer and
AVX-512F, as I can't figure out the right dg options to restrict it to AVX-512F;
instead I came up with a testcase that shows the problem, and moreover demonstrates
it on the 9/10 series as mentioned.
OK? Bootstrapped and tested on x86_64-linux-gnu.
PR tree-optimization/102622
gcc/ChangeLog:
* match.pd: Swap the order of a?pow2cst:0 and a?-1:0 transformations.
Swap the order of a?0:pow2cst and a?0:-1 transformations.
gcc/testsuite/ChangeLog:
* gcc.c-torture/execute/bitfld-10.c: New test.