author     Aldy Hernandez <aldyh@redhat.com>  2024-04-23 10:12:56 +0200
committer  Aldy Hernandez <aldyh@redhat.com>  2024-04-28 21:03:01 +0200
commit     d71308d5a681de008888ea291136c162e5b46c7c (patch)
tree       47c8e3131d4a7b74f97a78c9acfb997e0f5b66f3 /libcpp/init.cc
parent     3b9abfd2df5fe720798aab1e21b4a11876607561 (diff)
download   gcc-d71308d5a681de008888ea291136c162e5b46c7c.zip
           gcc-d71308d5a681de008888ea291136c162e5b46c7c.tar.gz
           gcc-d71308d5a681de008888ea291136c162e5b46c7c.tar.bz2
Callers of irange_bitmask must normalize value/mask pairs.
As per the documentation, an irange_bitmask must have every bit that is
unknown in the mask cleared to 0 in the value field. Even though we
document that value/mask pairs must be normalized, we do not enforce
this, opting instead to normalize on the fly in union and intersect.
Avoiding this lazy enforcement, as well as the extra saving and
restoring involved in returning the changed status, gives us a
performance increase of 1.25% for VRP and 1.51% for ipa-CP.
gcc/ChangeLog:
* tree-ssa-ccp.cc (ccp_finalize): Normalize before calling
set_bitmask.
* value-range.cc (irange::intersect_bitmask): Calculate changed
irange_bitmask bits on our own.
(irange::union_bitmask): Same.
(irange_bitmask::verify_mask): Verify that bits are normalized.
* value-range.h (irange_bitmask::union_): Do not normalize.
Remove return value.
(irange_bitmask::intersect): Same.
Diffstat (limited to 'libcpp/init.cc')
0 files changed, 0 insertions, 0 deletions