author		Jakub Jelinek <jakub@redhat.com>	2024-01-15 09:58:37 +0100
committer	Jakub Jelinek <jakub@redhat.com>	2024-01-15 09:58:37 +0100
commit		8a1927856a10c5ff1b86504c345c8be449b53c30 (patch)
tree		7d22951c261dbceb6665d944c70d2cb328a0dabe /gcc/gimple-lower-bitint.cc
parent		0627d1f5340c693699ad36fa2b741ff11d6f026a (diff)
lower-bitint: Fix up handling of INTEGER_CSTs in handle_operand in right shifts or comparisons [PR113370]
The INTEGER_CST code uses the remainder bits when deciding whether to use
the whole constant or just part of it and extend it at runtime, and
furthermore uses it to avoid emitting all the bits even when using the
(almost) whole constant.

The problem is that the prec % (2 * limb_prec) computation it uses is
appropriate only for the normal lowering of mergeable operations, where we
process 2 limbs at a time in a loop starting with the least significant
limbs and process the remaining 0-2 limbs after the loop (there with
constant indexes).  For that case it is ok not to emit the upper
prec % (2 * limb_prec) bits into the constant, because those bits will be
extracted using INTEGER_CST idx and so will be used directly in the
statements as INTEGER_CSTs.

For other cases, where we either process just a single limb in a loop,
process it downwards (e.g. non-equality comparisons) or with some runtime
addends (some shifts), there is either at most one limb lowered with
INTEGER_CST idx after the loop (e.g. for right shift) or before the loop
(e.g. non-equality comparisons), or all limbs are processed with
non-INTEGER_CST indexes (e.g. for left shift, when m_var_msb is set).

Now, the m_var_msb case is already handled through

	  if (m_var_msb)
	    type = TREE_TYPE (op);
	  else
	    /* If we have a guarantee the most significant partial limb
	       (if any) will be only accessed through handle_operand
	       with INTEGER_CST idx, we don't need to include the partial
	       limb in .rodata.  */
	    type = build_bitint_type (prec - rem, 1);

but for the right shifts or comparisons the prec - rem computation when
rem was prec % (2 * limb_prec) was incorrect, so the following patch fixes
it to use the remainder for 2 limbs only if m_upwards_2limb and the
remainder for 1 limb otherwise.

2024-01-15  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/113370
	* gimple-lower-bitint.cc (bitint_large_huge::handle_operand): Only
	set rem to prec % (2 * limb_prec) if m_upwards_2limb, otherwise
	set it to just prec % limb_prec.

	* gcc.dg/torture/bitint-48.c: New test.
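To make the remainder arithmetic concrete, here is a minimal standalone
sketch (not part of the patch; remainder_bits is a hypothetical helper
that mirrors the fixed expression, and 64-bit limbs are assumed):

	#include <stdbool.h>
	#include <stdio.h>

	/* Hypothetical helper mirroring the fixed expression from the
	   patch; illustrative only, not GCC code.  */
	static unsigned int
	remainder_bits (unsigned int prec, unsigned int limb_prec,
			bool m_upwards_2limb)
	{
	  return prec % ((m_upwards_2limb ? 2 : 1) * limb_prec);
	}

	int
	main (void)
	{
	  const unsigned int limb_prec = 64;	/* assumption: 64-bit limbs */
	  const unsigned int prec = 193;	/* e.g. a _BitInt(193) constant */

	  /* Mergeable 2-limbs-at-a-time lowering: the upper 65 bits are
	     accessed with INTEGER_CST idx after the loop, so only
	     prec - 65 == 128 bits need to go into .rodata.  */
	  printf ("2-limb rem = %u\n", remainder_bits (prec, limb_prec, true));

	  /* Right shift / comparison lowering: only the single partial limb
	     is guaranteed to use INTEGER_CST idx, so prec - 1 == 192 bits
	     must be emitted; the old unconditional 2-limb remainder would
	     wrongly omit the limb holding bits 128..191, which is read with
	     a runtime index.  */
	  printf ("1-limb rem = %u\n", remainder_bits (prec, limb_prec, false));
	  return 0;
	}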
Diffstat (limited to 'gcc/gimple-lower-bitint.cc')
-rw-r--r--	gcc/gimple-lower-bitint.cc	4
1 file changed, 2 insertions, 2 deletions
diff --git a/gcc/gimple-lower-bitint.cc b/gcc/gimple-lower-bitint.cc
index ac7fa30..672a9a3 100644
--- a/gcc/gimple-lower-bitint.cc
+++ b/gcc/gimple-lower-bitint.cc
@@ -869,7 +869,7 @@ bitint_large_huge::handle_operand (tree op, tree idx)
&& m_data[m_data_cnt + 1] == NULL_TREE))
{
unsigned int prec = TYPE_PRECISION (TREE_TYPE (op));
- unsigned int rem = prec % (2 * limb_prec);
+ unsigned int rem = prec % ((m_upwards_2limb ? 2 : 1) * limb_prec);
int ext;
unsigned min_prec = bitint_min_cst_precision (op, ext);
if (m_first)
@@ -996,7 +996,7 @@ bitint_large_huge::handle_operand (tree op, tree idx)
if (m_data[m_data_cnt + 1] == integer_type_node)
{
unsigned int prec = TYPE_PRECISION (TREE_TYPE (op));
- unsigned rem = prec % (2 * limb_prec);
+ unsigned rem = prec % ((m_upwards_2limb ? 2 : 1) * limb_prec);
int ext = wi::neg_p (wi::to_wide (op)) ? -1 : 0;
tree c = m_data[m_data_cnt];
unsigned min_prec = TYPE_PRECISION (TREE_TYPE (c));
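For context, the kind of source that exercises this path looks roughly
like the following hypothetical snippet (this is not the actual
gcc.dg/torture/bitint-48.c added by the patch, whose contents are not
shown on this page): a wide _BitInt constant right-shifted by a runtime
amount, so its limbs are read with non-constant indexes.

	#if __BITINT_MAXWIDTH__ >= 193
	unsigned _BitInt(193)
	foo (int s)
	{
	  /* Hypothetical example: a 193-bit INTEGER_CST operand shifted
	     right by a runtime amount.  */
	  return (unsigned _BitInt(193))
		 0x1234567890abcdef1234567890abcdef12345678uwb >> s;
	}
	#endif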