author		Jakub Jelinek <jakub@redhat.com>	2025-05-06 13:00:10 +0200
committer	Jakub Jelinek <jakub@gcc.gnu.org>	2025-05-06 13:06:37 +0200
commit		a14d65f81e18e70144ceddfc3142a8103984919d (patch)
tree		bcd9c178fcd60efdfcdb6e37309d10cc7253cb3e
parent		941a1b4a00664f23812af00ea56e9795a42a50a4 (diff)
gimple-fold: Fix fold_truth_andor_for_ifcombine [PR120074]
The following testcase ICEs because of a mismatch between wide_int
precision, in particular lr_and_mask has 32-bit precision while sign
has 16-bit.

decode_field_reference ensures that {ll,lr,rl,rr}_and_mask has
{ll,lr,rl,rr}_bitsize precision, so the
  ll_and_mask |= sign;
and
  rl_and_mask |= sign;
and
  ll_and_mask &= sign;
and
  rl_and_mask &= sign;
cases should work right: sign has {ll,rl}_bitsize precision in those
cases.  The problem is that nothing until much later guarantees that
ll_bitsize == lr_bitsize or rl_bitsize == rr_bitsize.

In the testcase there is
  ((b ^ a) & 3) < 0
where a is 16-bit and b is 32-bit, so it is the lsignbit handling, and
because of the xor the xor operand is moved to the *r_and_mask, leaving
ll_and_mask a 16-bit 3 and lr_and_mask a 32-bit 3.

Now, either b in the above case would be INTEGER_CST, in which case,
if rr_arg was also INTEGER_CST, we'd use the l_const && r_const case
and try to handle it, or we'd run into (though much later)
  if (ll_bitsize != lr_bitsize || rl_bitsize != rr_bitsize
  ...
    return 0;

One possibility is dealing with the different precision using
wide_int::from.  Another option, used in this patch as it is safest,
is
+              if (ll_bitsize != lr_bitsize)
+                return 0;
               if (!lr_and_mask.get_precision ())
                 lr_and_mask = sign;
               else
                 lr_and_mask &= sign;
and similarly in the other hunk, i.e. punt early if there is a
mismatch.  And yet another option would be to compute the sign
          wide_int sign = wi::mask (ll_bitsize - 1, true, ll_bitsize);
          /* If ll_arg is zero-extended and we're testing the sign bit, we know
             what the result should be.  Shifting the sign bit out of sign will
             get us to mask the entire field out, yielding zero, i.e., the sign
             bit of the zero-extended value.  We know the masked value is being
             compared with zero, so the compare will get us the result we're
             looking for: TRUE if EQ_EXPR, FALSE if NE_EXPR.  */
          if (lsignbit > ll_bitsize && ll_unsignedp)
            sign <<= 1;
once again for the lr_and_mask and rr_and_mask cases using rl_bitsize.
As we just return 0 anyway unless l_const && r_const, if l_const and
r_const are false it doesn't really matter what is chosen; but for the
const cases it matters, and I'm not sure what is right.  So the second
option might be safest.

2025-05-06  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/120074
	* gimple-fold.cc (fold_truth_andor_for_ifcombine): For the
	lsignbit && l_xor case, punt if ll_bitsize != lr_bitsize.
	Similarly for the rsignbit && r_xor case, punt if
	rl_bitsize != rr_bitsize.  Formatting fix.

	* gcc.dg/pr120074.c: New test.

(cherry picked from commit 81475602c3dd57ff6987e5f902814e8e3a0a0dde)
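[For comparison, a minimal sketch of the wide_int::from alternative
mentioned above.  This is not what the patch does; the variables
(lr_and_mask, sign, lr_bitsize) are those of
fold_truth_andor_for_ifcombine, the placement is assumed, and whether
UNSIGNED is the right extension is exactly the open question raised
above:
               /* Sketch only: extend SIGN (ll_bitsize precision) to
                  lr_bitsize precision instead of punting on the
                  mismatch.  */
               if (!lr_and_mask.get_precision ())
                 lr_and_mask = wide_int::from (sign, lr_bitsize, UNSIGNED);
               else
                 lr_and_mask &= wide_int::from (sign, lr_bitsize, UNSIGNED);
with the analogous change for rr_and_mask in the r_xor hunk.]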
-rw-r--r--	gcc/gimple-fold.cc	6
-rw-r--r--	gcc/testsuite/gcc.dg/pr120074.c	20
2 files changed, 25 insertions, 1 deletion
diff --git a/gcc/gimple-fold.cc b/gcc/gimple-fold.cc
index b645613..a64922a 100644
--- a/gcc/gimple-fold.cc
+++ b/gcc/gimple-fold.cc
@@ -8300,6 +8300,8 @@ fold_truth_andor_for_ifcombine (enum tree_code code, tree truth_type,
             ll_and_mask &= sign;
           if (l_xor)
             {
+              if (ll_bitsize != lr_bitsize)
+                return 0;
               if (!lr_and_mask.get_precision ())
                 lr_and_mask = sign;
               else
@@ -8321,6 +8323,8 @@ fold_truth_andor_for_ifcombine (enum tree_code code, tree truth_type,
             rl_and_mask &= sign;
           if (r_xor)
             {
+              if (rl_bitsize != rr_bitsize)
+                return 0;
               if (!rr_and_mask.get_precision ())
                 rr_and_mask = sign;
               else
@@ -8728,7 +8732,7 @@ fold_truth_andor_for_ifcombine (enum tree_code code, tree truth_type,
   wide_int lr_mask, rr_mask;
   if (lr_and_mask.get_precision ())
     lr_mask = wi::lshift (wide_int::from (lr_and_mask, rnprec, UNSIGNED),
-                         xlr_bitpos);
+                          xlr_bitpos);
   else
     lr_mask = wi::shifted_mask (xlr_bitpos, lr_bitsize, false, rnprec);
   if (rr_and_mask.get_precision ())
diff --git a/gcc/testsuite/gcc.dg/pr120074.c b/gcc/testsuite/gcc.dg/pr120074.c
new file mode 100644
index 0000000..3f31516
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr120074.c
@@ -0,0 +1,20 @@
+/* PR tree-optimization/120074 */
+/* { dg-do compile } */
+/* { dg-options "-O1 -fno-tree-copy-prop -fno-tree-forwprop -fno-tree-ccp" } */
+
+int foo (int);
+short a;
+int b;
+
+int
+bar (int d, int e)
+{
+  return d < 0 || d > __INT_MAX__ >> e;
+}
+
+int
+main ()
+{
+  int f = bar ((b ^ a) & 3, __SIZEOF_INT__ * __CHAR_BIT__ - 2);
+  foo (f);
+}
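
[The precision rule behind the ICE can be illustrated against GCC's
internal wide-int.h API.  The fragment below is a sketch for
exposition only, not standalone code; the names m16/m32 are invented
and a checking-enabled build is assumed:
  /* wide_int binary operators such as &= require both operands to
     have the same precision; in the PR, ll_and_mask ended up a
     16-bit 3 while lr_and_mask was a 32-bit 3.  */
  wide_int m16 = wi::mask (2, false, 16);	/* 16-bit 0x3.  */
  wide_int m32 = wi::mask (2, false, 32);	/* 32-bit 0x3.  */
  /* m32 &= m16;  -- precision mismatch, trips a checking assert.  */
  m32 &= wide_int::from (m16, 32, UNSIGNED);	/* OK once widened.  */
]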