path: root/gcc/tree-ssanames.cc
author    Aldy Hernandez <aldyh@redhat.com>    2023-07-14 12:16:17 +0200
committer Aldy Hernandez <aldyh@redhat.com>    2023-07-17 09:17:59 +0200
commit  56cf8b01fe1d4d4bb33a107e5d490f589d5f05bc (patch)
tree    a6f850cc864497e4f67fb69a79b9ad506054e39b /gcc/tree-ssanames.cc
parent  0407ae8a7732d90622a65ddf1798c9d51d450e9d (diff)
Normalize irange_bitmask before union/intersect.
The bit twiddling in union/intersect for the value/mask pair must be
normalized so that the unknown bits have a value of 0, in order to make
the math simpler.  Normalizing at construction slowed VRP by 1.5%, so I
opted to normalize before updating the bitmask in range-ops, since it
was the only user.  However, with upcoming changes there will be
multiple setters of the mask (IPA and CCP), so we need something more
general.

I played with various alternatives, and settled on normalizing before
union/intersect, which were the ones needing the bits cleared.  With
this patch, there's no noticeable difference in performance, either in
VRP or in overall compilation.

gcc/ChangeLog:

	* value-range.cc (irange_bitmask::verify_mask): Mask need not
	be normalized.
	* value-range.h (irange_bitmask::union_): Normalize beforehand.
	(irange_bitmask::intersect): Same.
Diffstat (limited to 'gcc/tree-ssanames.cc')
0 files changed, 0 insertions, 0 deletions