From 06ac981ffb3c0d6997f2e1c01ffaf6253b6a244f Mon Sep 17 00:00:00 2001
From: Jakub Jelinek
Date: Tue, 31 Aug 2021 10:29:23 +0200
Subject: tree-ssa-ccp: Fix up bit_value_binop on RSHIFT_EXPR [PR102134]

As mentioned in the PR, this hunk is guarded with
!wi::neg_p (r1val | r1mask, sgn), which means that if sgn is UNSIGNED
it is always true, but r1val | r1mask as a widest_int is still
sign-extended.  That means wi::clz (arg) returns 0,
wi::get_precision (arg) returns some very large number
(WIDE_INT_MAX_PRECISION, 576 bits on x86_64) and width is 64, so we end
up with an lzcount of -512 where the code afterwards expects a
non-negative lzcount.
For arg without the sign bit set the code works right: such numbers are
zero-extended, so wi::clz returns wi::get_precision (arg) - width plus
the number of leading zero bits within the width precision.

The patch fixes it by handling the sign-extension case specially; it
could also be done through a wi::neg_p (arg) check, but testing for
lzcount == 0 works identically.

2021-08-31  Jakub Jelinek

	PR tree-optimization/102134
	* tree-ssa-ccp.c (bit_value_binop) <RSHIFT_EXPR>: If sgn is
	UNSIGNED and r1val | r1mask has the MSB set, ensure lzcount
	doesn't become negative.

	* gcc.c-torture/execute/pr102134.c: New test.
---
 gcc/tree-ssa-ccp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

(limited to 'gcc/tree-ssa-ccp.c')

diff --git a/gcc/tree-ssa-ccp.c b/gcc/tree-ssa-ccp.c
index f4a99ac..70ce6a4 100644
--- a/gcc/tree-ssa-ccp.c
+++ b/gcc/tree-ssa-ccp.c
@@ -1695,7 +1695,8 @@ bit_value_binop (enum tree_code code, signop sgn, int width,
 	  /* Logical right shift, or zero sign bit.  */
 	  widest_int arg = r1val | r1mask;
 	  int lzcount = wi::clz (arg);
-	  lzcount -= wi::get_precision (arg) - width;
+	  if (lzcount)
+	    lzcount -= wi::get_precision (arg) - width;
 	  widest_int tmp = wi::mask <widest_int> (width, false);
 	  tmp = wi::lrshift (tmp, lzcount);
 	  tmp = wi::lrshift (tmp, wi::bit_and_not (r2val, r2mask));
-- 
cgit v1.1