author	Andrew Pinski <quic_apinski@quicinc.com>	2024-05-20 00:16:40 -0700
committer	Andrew Pinski <quic_apinski@quicinc.com>	2024-05-21 07:24:36 -0700
commit	49c87d22535ac4f8aacf088b3f462861c26cacb4 (patch)
tree	507f030e4aced5f9f608c86b7d5b87d3e6121085 /gcc/cp/std-name-hint.gperf
parent	232a86f9640cde6908d0875b8df52c36030c5b5e (diff)
match: Disable `(type)zero_one_valuep*CST` for 1bit signed types [PR115154]
The problem here is that the pattern added in r13-1162-g9991d84d2a8435
assumes it is well defined to multiply zero_one_valuep by the truncated,
converted integer constant. That holds for all types except signed 1-bit types,
where `a * -1` is produced, which is undefined.
So disable this pattern for 1-bit signed types.
Note the pattern added in r14-3432-gddd64a6ec3b38e is able to work around the undefinedness except when
`-fsanitize=undefined` is turned on, which is why I added a testcase for that.
Bootstrapped and tested on x86_64-linux-gnu with no regressions.
PR tree-optimization/115154
gcc/ChangeLog:
* match.pd (convert (mult zero_one_valued_p@1 INTEGER_CST@2)): Disable
for 1bit signed types.
gcc/testsuite/ChangeLog:
* c-c++-common/ubsan/signed1bitfield-1.c: New test.
* gcc.c-torture/execute/signed1bitfield-1.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>