author     Jakub Jelinek <jakub@redhat.com>   2020-11-26 10:50:23 +0100
committer  Jakub Jelinek <jakub@redhat.com>   2020-11-26 10:50:23 +0100
commit     39f5e9aded23e8b7e0e7080fc6020478b9c5b7b5 (patch)
tree       78e54cc7831648237feab7f647b73b7bdc8d9fef /gcc
parent     776a37f6ac5682dae9a1ef07bc04570ea80f42ca (diff)
match.pd: Avoid ICE with shifts [PR97979]
My recent wide_int_binop changes caused an ICE on this testcase.
The problem is that a shift whose amount has the MSB set now fails to optimize
into a constant (IMHO we should treat out-of-bounds shifts the same way later),
but there is a precedent for that already - e.g. division by zero also fails
to optimize into a constant. I think it is better if path isolation
checks for these UBs and does something the user chooses (__builtin_trap vs.
__builtin_unreachable, and either a deferred warning about the UB or
nothing).
This patch simply doesn't perform the optimization if int_const_binop failed.
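For context, here is a small standalone C sketch (not part of the patch; the
helper names and constants are only illustrative) of the match.pd fold in
question, (X & C2) << C1 -> (X << C1) & (C2 << C1), and of the negative shift
count that leaves int_const_binop with no mask constant to return:

    /* Compile-only illustration; the hypothetical helpers fold_ok and
       fold_bails are not part of GCC or of this patch.  */
    #include <stdio.h>

    /* Valid shift count: the mask constant exists, so the fold can turn
       (x & 0x123) << 3 into (x << 3) & (0x123 << 3), i.e. (x << 3) & 0x918.  */
    static int
    fold_ok (int x)
    {
      return (x & 0x123) << 3;
    }

    /* Negative shift count: undefined behaviour, and 0x123 << -3 is not a
       computable constant, so the pattern has to give up instead of ICEing.
       Mirrors gcc.c-torture/compile/pr97979.c; never executed here.  */
    static int
    fold_bails (int x)
    {
      return (x & 0x123) << -3;
    }

    int
    main (void)
    {
      int x = 0x7777;
      /* Both forms of the valid fold agree.  */
      printf ("%d %d\n", fold_ok (x), (x << 3) & (0x123 << 3));
      (void) fold_bails;
      return 0;
    }

With a valid count the mask (here 0x918) is a real constant, so the rewritten
form is equivalent; with a negative count there is no such constant, which is
exactly the NULL result from int_const_binop that the patch now checks for.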
2020-11-26 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/97979
* match.pd ((X {&,^,|} C2) << C1 into (X << C1) {&,^,|} (C2 << C1)):
Only optimize if int_const_binop returned non-NULL.
* gcc.dg/pr97979.c: New test.
* gcc.c-torture/compile/pr97979.c: New test.
Diffstat (limited to 'gcc')
-rw-r--r--  gcc/match.pd                                   |  3
-rw-r--r--  gcc/testsuite/gcc.c-torture/compile/pr97979.c  |  7
-rw-r--r--  gcc/testsuite/gcc.dg/pr97979.c                 | 13

3 files changed, 22 insertions(+), 1 deletion(-)
diff --git a/gcc/match.pd b/gcc/match.pd
index 4d290ad..f8b6515 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -3119,7 +3119,8 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
    (shift (convert?:s (bit_op:s @0 INTEGER_CST@2)) INTEGER_CST@1)
    (if (tree_nop_conversion_p (type, TREE_TYPE (@0)))
     (with { tree mask = int_const_binop (shift, fold_convert (type, @2), @1); }
-     (bit_op (shift (convert @0) @1) { mask; }))))))
+     (if (mask)
+      (bit_op (shift (convert @0) @1) { mask; })))))))
 
 /* ~(~X >> Y) -> X >> Y (for arithmetic shift). */
 (simplify
diff --git a/gcc/testsuite/gcc.c-torture/compile/pr97979.c b/gcc/testsuite/gcc.c-torture/compile/pr97979.c
new file mode 100644
index 0000000..f4f88a4
--- /dev/null
+++ b/gcc/testsuite/gcc.c-torture/compile/pr97979.c
@@ -0,0 +1,7 @@
+/* PR tree-optimization/97979 */
+
+int
+foo (int x)
+{
+  return (x & 0x123) << -3;
+}
diff --git a/gcc/testsuite/gcc.dg/pr97979.c b/gcc/testsuite/gcc.dg/pr97979.c
new file mode 100644
index 0000000..44aaff2
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr97979.c
@@ -0,0 +1,13 @@
+/* PR tree-optimization/97979 */
+/* { dg-do compile } */
+/* { dg-options "-O2 -fno-tree-ccp" } */
+
+short a = 0;
+int b = 0;
+
+void
+foo (void)
+{
+  unsigned short d = b;
+  a = d >> -2U;  /* { dg-warning "right shift count >= width of type" } */
+}