author    Aldy Hernandez <aldyh@redhat.com>    2023-07-14 12:16:17 +0200
committer Aldy Hernandez <aldyh@redhat.com>    2023-07-17 09:17:59 +0200
commit    56cf8b01fe1d4d4bb33a107e5d490f589d5f05bc (patch)
tree      a6f850cc864497e4f67fb69a79b9ad506054e39b /gcc/value-range.cc
parent    0407ae8a7732d90622a65ddf1798c9d51d450e9d (diff)
Normalize irange_bitmask before union/intersect.
The bit twiddling in union/intersect for the value/mask pair requires
the pair to be normalized so that unknown bits have a value of 0, which
keeps the math simple. Normalizing at construction slowed VRP by 1.5%,
so I opted to normalize before updating the bitmask in range-ops, since
it was the only user. However, with upcoming changes there will be
multiple setters of the mask (IPA and CCP), so we need something more
general.
I played with various alternatives and settled on normalizing before
union/intersect, which are the only operations that need the bits
cleared. With this patch there is no noticeable difference in
performance, either in VRP or in overall compilation.
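As a rough sketch of the scheme described above (a Python model with illustrative names, not GCC's actual irange_bitmask API): each bitmask is a value/mask pair where a mask bit of 1 marks the bit as unknown, and normalization clears the value bits sitting under unknown positions, which is what lets union and intersect reduce to plain bitwise operations:

```python
# Illustrative value/mask model (hypothetical names, not GCC's API).
# mask bit = 1 -> the bit is unknown
# mask bit = 0 -> the bit is known, given by the matching value bit

def normalize(value, mask):
    """Force value bits under unknown positions to 0."""
    return value & ~mask, mask

def union_(va, ma, vb, mb):
    """Result knows a bit only if both sides know it and agree on it."""
    va, ma = normalize(va, ma)
    vb, mb = normalize(vb, mb)
    mask = ma | mb | (va ^ vb)   # disagreement makes a bit unknown
    return (va & vb) & ~mask, mask

def intersect(va, ma, vb, mb):
    """Result knows a bit if either side knows it (conflicts ignored here)."""
    va, ma = normalize(va, ma)
    vb, mb = normalize(vb, mb)
    mask = ma & mb
    return (va | vb) & ~mask, mask
```

For example, union_(0b1010, 0b0000, 0b1000, 0b0000) yields value 0b1000 with mask 0b0010: the two sides disagree on bit 1, so it becomes unknown in the result. The key point of the patch is visible in the first two lines of each operation: normalizing the inputs there, rather than at construction or in every setter, keeps the assertion out of the hot path.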
gcc/ChangeLog:
* value-range.cc (irange_bitmask::verify_mask): Mask need not be
normalized.
* value-range.h (irange_bitmask::union_): Normalize beforehand.
(irange_bitmask::intersect): Same.
Diffstat (limited to 'gcc/value-range.cc')
-rw-r--r-- | gcc/value-range.cc | 3 |
1 file changed, 0 insertions, 3 deletions
diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index 011bdbd..2abf57b 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -1953,9 +1953,6 @@ void
 irange_bitmask::verify_mask () const
 {
   gcc_assert (m_value.get_precision () == m_mask.get_precision ());
-  // Unknown bits must have their corresponding value bits cleared as
-  // it simplifies union and intersect.
-  gcc_assert (wi::bit_and (m_mask, m_value) == 0);
 }
 
 void