author     Jakub Jelinek <jakub@redhat.com>  2023-08-30 10:47:21 +0200
committer  Jakub Jelinek <jakub@redhat.com>  2023-08-30 10:47:21 +0200
commit     49a3b35c4068091900b657cd36e5cffd41ef0c47
tree       9cd98fd9070619aabedc849ab946b939e5eba950
parent     0394184cebc15e5e3f13d04d9ffbc787a16018bd

store-merging: Fix up >= 64 bit insertion [PR111015]
The following testcase shows that we mishandle bit insertion for
info->bitsize >= 64.  The problem is in using an unsigned
HOST_WIDE_INT shift + subtraction + build_int_cst to compute the
mask: the shift invokes UB at compile time for info->bitsize of 64
and larger, and on the testcase, where info->bitsize is 70, it
happens to compute a mask of 0x3f rather than the intended
0x3f'ffffffff'ffffffff.
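To make the failure mode concrete, here is a minimal, self-contained
sketch (illustration only, not GCC code; the 70-bit width follows
from the masks above, since 0x3f'ffffffff'ffffffff == 2^70 - 1, and
the x86-64 modulo-64 shift wraparound is simulated explicitly so the
demo itself stays well-defined):

  #include <stdio.h>

  int main (void)
  {
    unsigned int bitsize = 70;  /* width of the inserted bit range */

    /* The old code effectively computed (1 << bitsize) - 1 in a
       64-bit type.  A shift count >= 64 is undefined behaviour; on
       x86-64 the hardware takes the count modulo 64, mimicked here.  */
    unsigned long long buggy = (1ULL << (bitsize % 64)) - 1;
    printf ("buggy mask: 0x%llx\n", buggy);  /* prints 0x3f */

    /* The intended mask, 2^70 - 1, does not even fit in 64 bits,
       which is why the fix moves to wide_int.  */
    return 0;
  }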
The patch fixes that by using wide_int wi::mask + wide_int_to_tree,
so it handles masks of any precision (up to WIDE_INT_MAX_PRECISION ;) ).
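As a minimal sketch of the new pattern (variable names like int_type
and info follow the surrounding gimple-ssa-store-merging.cc code; the
exact hunk in the commit may differ):

  /* Build the all-ones mask at the full precision of int_type,
     with no host-side 64-bit shift involved.  */
  tree mask
    = wide_int_to_tree (int_type,
                        wi::mask (info->bitsize, false,
                                  TYPE_PRECISION (int_type)));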
2023-08-30 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/111015
* gimple-ssa-store-merging.cc
(imm_store_chain_info::output_merged_store): Use wi::mask and
wide_int_to_tree instead of unsigned HOST_WIDE_INT shift and
build_int_cst to build BIT_AND_EXPR mask.
* gcc.dg/pr111015.c: New test.
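The committed gcc.dg/pr111015.c is not reproduced here; the following
is only a hedged sketch of the kind of input that exercises the bug:
adjacent bit-field stores that store merging can fuse into a single
bit insertion wider than 64 bits (which needs a bit-field wider than
64 bits, hence unsigned __int128):

  struct S
  {
    unsigned __int128 lo : 1;
    unsigned __int128 mid : 70;  /* insertion wider than 64 bits */
    unsigned __int128 hi : 57;
  };

  void
  set (struct S *s, unsigned __int128 v)
  {
    s->lo = 1;   /* adjacent stores: candidates for store merging */
    s->mid = v;  /* 70-bit insertion; with the buggy 0x3f mask the
                    bits above bit 5 of the field were mishandled */
  }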