author     Jakub Jelinek <jakub@redhat.com>  2020-11-26 10:50:23 +0100
committer  Jakub Jelinek <jakub@redhat.com>  2020-11-26 10:50:23 +0100
commit     39f5e9aded23e8b7e0e7080fc6020478b9c5b7b5 (patch)
tree       78e54cc7831648237feab7f647b73b7bdc8d9fef /gcc/gimple-isel.cc
parent     776a37f6ac5682dae9a1ef07bc04570ea80f42ca (diff)
match.pd: Avoid ICE with shifts [PR97979]
My recent wide_int_binop changes caused an ICE on this testcase.
The problem is that a shift whose amount has the MSB set now fails to fold
into a constant (IMHO we should treat out-of-bounds shifts the same way
later), but there is precedent for that already: e.g. division by zero
also fails to fold into a constant. I think it is better if path isolation
checks for these UBs and does something the user chooses (__builtin_trap vs.
__builtin_unreachable, and either a deferred warning about the UB or
nothing).
This patch simply skips the optimization when int_const_binop fails.
2020-11-26  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/97979
	* match.pd ((X {&,^,|} C2) << C1 into (X << C1) {&,^,|} (C2 << C1)):
	Only optimize if int_const_binop returned non-NULL.

	* gcc.dg/pr97979.c: New test.
	* gcc.c-torture/compile/pr97979.c: New test.
Diffstat (limited to 'gcc/gimple-isel.cc')
0 files changed, 0 insertions, 0 deletions