path: root/gcc/value-range.cc
2023-08-21  [frange] Return false if nothing changed in union_nans().  (Aldy Hernandez, 1 file, -5/+31)
When one operand is a known NAN, we always return TRUE from union_nans(), even if no change occurred. This patch fixes the oversight. gcc/ChangeLog: * value-range.cc (frange::union_nans): Return false if nothing changed. (range_tests_floats): New test.
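A minimal sketch of the idea, in plain C++ with made-up types (not the actual frange internals): when merging NAN state during a union, report a change only if a NAN flag actually flips.

  struct nan_bits { bool pos; bool neg; };

  bool
  union_nans (nan_bits &self, const nan_bits &other)
  {
    nan_bits old = self;
    self.pos |= other.pos;
    self.neg |= other.neg;
    // Returning true unconditionally here would tell callers (for example
    // a cache) that something changed even when the union was a no-op.
    return self.pos != old.pos || self.neg != old.neg;
  }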
2023-08-18  [irange] Return FALSE if updated bitmask is unchanged [PR110753]  (Aldy Hernandez, 1 file, -0/+18)
The mask/value pair we track in the irange is a bit fickle in that it can sometimes contradict the bitmask inherent in the range. This can happen when a series of calculations yields a combination such as: [3, 1000] MASK 0xfffffffe VALUE 0x0 The mask/value above implies that the lowest bit is a known 0, which would exclude the 3 in the range. At one time we tried keeping mask and ranges 100% consistent, but the performance penalty was too high (5% in VRP). Also, it's unclear whether the intersection of two incompatible known bits should make the whole range undefined, or just the contradicting bits. This is all documented in irange::get_bitmask(). We could revisit both of these assumptions in the future. In this testcase IPA ends up with a range where the lower 2 bits are expected to be 0, but the range is [1,1]. [irange] long int [1, 1] MASK 0xfffffffffffffffc VALUE 0x0 This causes irange::union_bitmask() to think an update occurred, when no semantic change happened, thus triggering an assert in IPA-cp. We could get rid of the assert, but it's cleaner to make irange::{union,intersect}_bitmask always tell the truth. Besides, the ranger's cache also depends on union being truthful. PR ipa/110753 gcc/ChangeLog: * value-range.cc (irange::union_bitmask): Return FALSE if updated bitmask is semantically equivalent to the original mask. (irange::intersect_bitmask): Same. (irange::get_bitmask): Add comment. gcc/testsuite/ChangeLog: * gcc.dg/tree-ssa/pr110753.c: New test.
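The "tell the truth" rule can be illustrated with a small self-contained sketch (field names and helpers are assumptions, not the GCC irange internals): compare the bitmask the caller can observe, not the raw stored one.

  #include <cstdint>

  // A 1 bit in `mask` means "unknown"; known bits live in `value`.
  struct bitmask
  {
    uint64_t value;
    uint64_t mask;
    bool operator== (const bitmask &o) const
    { return value == o.value && mask == o.mask; }
  };

  // What callers actually observe: the stored mask refined by whatever the
  // range bounds already imply (a singleton range, for instance, knows every
  // bit).  Contradictions are glossed over here, as in the commit above.
  static bitmask
  effective (const bitmask &stored, const bitmask &from_range)
  {
    uint64_t m = stored.mask & from_range.mask;   // unknown only if both sides are unsure
    return { (stored.value | from_range.value) & ~m, m };
  }

  // Report a change only if the effective bitmask changed.  An update that
  // is invisible to callers must return false, or the IPA-cp lattice and
  // the ranger cache are told that a no-op changed something.
  static bool
  update_bitmask (bitmask &stored, const bitmask &updated,
                  const bitmask &from_range)
  {
    bitmask before = effective (stored, from_range);
    stored = updated;
    return !(effective (stored, from_range) == before);
  }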
2023-07-17  Normalize irange_bitmask before union/intersect.  (Aldy Hernandez, 1 file, -3/+0)
The value/mask pair used in the union/intersect bit twiddling must be normalized so that the unknown bits have a value of 0, in order to make the math simpler. Normalizing at construction slowed VRP by 1.5% so I opted to normalize before updating the bitmask in range-ops, since it was the only user. However, with upcoming changes there will be multiple setters of the mask (IPA and CCP), so we need something more general. I played with various alternatives, and settled on normalizing before union/intersect which were the ones needing the bits cleared. With this patch, there's no noticeable difference in performance either in VRP or in overall compilation. gcc/ChangeLog: * value-range.cc (irange_bitmask::verify_mask): Mask need not be normalized. * value-range.h (irange_bitmask::union_): Normalize beforehand. (irange_bitmask::intersect): Same.
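A self-contained sketch of the normalization, under the assumption of a simple 64-bit value/mask pair (this is not the GCC irange_bitmask code):

  #include <cstdint>

  // A mask bit of 1 means "unknown"; normalizing forces every unknown bit
  // of `value` to 0 so union/intersect can assume one canonical form.
  struct bitmask { uint64_t value; uint64_t mask; };

  static void
  normalize (bitmask &b)
  {
    b.value &= ~b.mask;   // unknown bits carry no payload
  }

  // With both operands normalized, intersecting the tracked information is
  // plain bit twiddling (conflicting known bits are ignored here, since the
  // contradiction question is left open in the commits above).
  static bitmask
  intersect (bitmask a, bitmask b)
  {
    normalize (a);
    normalize (b);
    uint64_t unknown = a.mask & b.mask;   // known if either side knows the bit
    return { (a.value | b.value) & ~unknown, unknown };
  }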
2023-07-07  A singleton irange has all known bits.  (Aldy Hernandez, 1 file, -1/+18)
gcc/ChangeLog: * value-range.cc (irange::get_bitmask_from_range): Return all the known bits for a singleton. (irange::set_range_from_bitmask): Set a range of a singleton when all bits are known.
2023-07-07  The caller to irange::intersect (wide_int, wide_int) must normalize the range.  (Aldy Hernandez, 1 file, -2/+5)
Per the function comment, the caller to intersect(wide_int, wide_int) must handle the mask. This means it must also normalize the range if anything changed. gcc/ChangeLog: * value-range.cc (irange::intersect): Leave normalization to caller.
2023-07-07  Implement value/mask tracking for irange.  (Aldy Hernandez, 1 file, -91/+157)
Integer ranges (irange) currently track known 0 bits. We've wanted to track known 1 bits for some time, and instead of tracking known 0 and known 1's separately, it has been suggested we track a value/mask pair similarly to what we do for CCP and RTL. This patch implements such a thing. With this we now track a VALUE integer which are the known values, and a MASK which tells us which bits contain meaningful information. This allows us to fix a handful of enhancement requests, such as PR107043 and PR107053. There is a 4.48% performance penalty for VRP and 0.42% in overall compilation for this entire patchset. It is expected and in line with the loss incurred when we started tracking known 0 bits. This patch just provides the value/mask tracking support. All the nonzero users (range-op, IPA, CCP, etc), are still using the nonzero nomenclature. For that matter, this patch reimplements the nonzero accessors with the value/mask functionality. In follow-up patches I will enhance these passes to use the value/mask information, and fix the aforementioned PRs. gcc/ChangeLog: * data-streamer-in.cc (streamer_read_value_range): Adjust for value/mask. * data-streamer-out.cc (streamer_write_vrange): Same. * range-op.cc (operator_cast::fold_range): Same. * value-range-pretty-print.cc (vrange_printer::print_irange_bitmasks): Same. * value-range-storage.cc (irange_storage::write_lengths_address): Same. (irange_storage::set_irange): Same. (irange_storage::get_irange): Same. (irange_storage::size): Same. (irange_storage::dump): Same. * value-range-storage.h: Same. * value-range.cc (debug): New. (irange_bitmask::dump): New. (add_vrange): Adjust for value/mask. (irange::operator=): Same. (irange::set): Same. (irange::verify_range): Same. (irange::operator==): Same. (irange::contains_p): Same. (irange::irange_single_pair_union): Same. (irange::union_): Same. (irange::intersect): Same. (irange::invert): Same. (irange::get_nonzero_bits_from_range): Rename to... (irange::get_bitmask_from_range): ...this. (irange::set_range_from_nonzero_bits): Rename to... (irange::set_range_from_bitmask): ...this. (irange::set_nonzero_bits): Rename to... (irange::update_bitmask): ...this. (irange::get_nonzero_bits): Rename to... (irange::get_bitmask): ...this. (irange::intersect_nonzero_bits): Rename to... (irange::intersect_bitmask): ...this. (irange::union_nonzero_bits): Rename to... (irange::union_bitmask): ...this. (irange_bitmask::verify_mask): New. * value-range.h (class irange_bitmask): New. (irange_bitmask::set_unknown): New. (irange_bitmask::unknown_p): New. (irange_bitmask::irange_bitmask): New. (irange_bitmask::get_precision): New. (irange_bitmask::get_nonzero_bits): New. (irange_bitmask::set_nonzero_bits): New. (irange_bitmask::operator==): New. (irange_bitmask::union_): New. (irange_bitmask::intersect): New. (class irange): Friend vrange_printer. (irange::varying_compatible_p): Adjust for bitmask. (irange::set_varying): Same. (irange::set_nonzero): Same. gcc/testsuite/ChangeLog: * gcc.dg/tree-ssa/pr107009.c: Adjust irange dumping for value/mask changes. * gcc.dg/tree-ssa/vrp-unreachable.c: Same. * gcc.dg/tree-ssa/vrp122.c: Same.
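A rough model of the representation this commit introduces (field names are assumptions; the real class is irange_bitmask in value-range.h), showing how the old nonzero-bits view is just a projection of the value/mask pair:

  #include <cstdint>

  // CCP/RTL-style pair.  mask: 1 = "this bit is unknown";
  // value: the known bit values (0 where unknown).
  struct value_mask
  {
    uint64_t value;
    uint64_t mask;
  };

  // The old nonzero-bits view falls out of the pair: a bit may be nonzero
  // unless it is known (mask bit 0) to be zero (value bit 0).
  static uint64_t
  get_nonzero_bits (const value_mask &vm)
  {
    return vm.value | vm.mask;
  }

  // Conversely, "only these bits may be nonzero" means every other bit is a
  // known zero, which is all the old representation could express.
  static value_mask
  set_nonzero_bits (uint64_t nonzero)
  {
    return { 0, nonzero };
  }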
2023-06-29  Tidy up the range normalization code.  (Aldy Hernandez, 1 file, -51/+48)
There are a few spots where a range is being altered in-place, but we fail to normalize the range. This patch makes sure we always call normalize_kind(), and that normalize_kind in turn calls verify_range to make sure everything is canonical. gcc/ChangeLog: * value-range.cc (frange::set): Do not call verify_range. (frange::normalize_kind): Verify range. (frange::union_nans): Do not call verify_range. (frange::union_): Same. (frange::intersect): Same. (irange::irange_single_pair_union): Call normalize_kind if necessary. (irange::union_): Same. (irange::intersect): Same. (irange::set_range_from_nonzero_bits): Verify range. (irange::set_nonzero_bits): Call normalize_kind if necessary. (irange::get_nonzero_bits): Tweak comment. (irange::intersect_nonzero_bits): Call normalize_kind if necessary. (irange::union_nonzero_bits): Same. * value-range.h (irange::normalize_kind): Verify range.
2023-06-27  Implement ipa_vr hashing.  (Aldy Hernandez, 1 file, -15/+0)
Implement hashing for ipa_vr. When all is said and done, all these patches incur a 7.64% slowdown for ipa-cp, which is entirely covered by the similar 7% increase in this area last week. So we get type agnostic ranges with "infinite" range precision close to free. There is no change in overall compilation. gcc/ChangeLog: * ipa-prop.cc (struct ipa_vr_ggc_hash_traits): Adjust for use with ipa_vr instead of value_range. (gt_pch_nx): Same. (gt_ggc_mx): Same. (ipa_get_value_range): Same. * value-range.cc (gt_pch_nx): Move to ipa-prop.cc and adjust for ipa_vr. (gt_ggc_mx): Same.
2023-05-25  Disallow setting of NANs in frange setter unless setting trees.  (Aldy Hernandez, 1 file, -8/+1)
frange::set() is confusing in that we can set a NAN by specifying a bound of +-NAN, even though we technically disallow NANs in the setter because the kind can never be VR_NAN. This wart exists so that get_tree_range(), which builds a range out of a tree from the source, works correctly. It's ugly, and it showed its limitation while implementing LTO streaming of ranges. This patch disallows passing NAN bounds in frange::set() and fixes get_tree_range. gcc/ChangeLog: * value-query.cc (range_query::get_tree_range): Set NAN directly if necessary. * value-range.cc (frange::set): Assert that bounds are not NAN.
2023-05-25  Hash known NANs correctly for franges.  (Aldy Hernandez, 1 file, -7/+7)
We're ICEing when trying to hash a known NAN. This is unnoticeable because the only user would be IPA, and even so, it currently doesn't handle floats. However, handling floats is a flip of a switch, so it's best to handle them already. gcc/ChangeLog: * value-range.cc (add_vrange): Handle known NANs.
2023-05-23  Remove buggy special case in irange::invert [PR109934].  (Aldy Hernandez, 1 file, -8/+0)
This patch removes a buggy special case in irange::invert which seems to have been broken for a while, and probably never triggered because the legacy code was handled elsewhere, and the non-legacy code was using an int_range_max of int_range<255>, which made it extremely unlikely for num_ranges to equal 255. However, with auto-resizing ranges, int_range_max will start off at 3 and can hit this bogus code in the unswitching code. PR tree-optimization/109934 gcc/ChangeLog: * value-range.cc (irange::invert): Remove buggy special case. gcc/testsuite/ChangeLog: * gcc.dg/tree-ssa/pr109934.c: New test.
2023-05-17  Provide support for copying unsupported ranges.  (Aldy Hernandez, 1 file, -1/+4)
The unsupported_range class is provided for completeness' sake. It is a way to set VARYING/UNDEFINED ranges for unsupported ranges (currently anything not float, integer, or pointer). You can't do anything with them except set_varying and set_undefined. We will trap on any other operation. This patch provides a way to copy them, just in case they creep in. This could happen in IPA under certain circumstances. gcc/ChangeLog: * value-range.cc (vrange::operator=): Add a stub to copy unsupported ranges. * value-range.h (is_a <unsupported_range>): New. (Value_Range::operator=): Support copying unsupported ranges.
2023-05-15  Add auto-resizing capability to irange's [PR109695]  (Aldy Hernandez, 1 file, -0/+14)
<tldr> We can now have int_range<N, RESIZABLE=false> for automatically resizable ranges. int_range_max is now int_range<3, true> for a 69X reduction in size from current trunk, and 6.9X reduction from GCC12. This incurs a 5% performance penalty for VRP that is more than covered by our > 13% improvements recently. </tldr> int_range_max is the temporary range object we use in the ranger for integers. With the conversion to wide_int, this structure bloated up significantly because wide_ints are huge (80 bytes a piece) and are about 10 times as big as a plain tree. Since the temporary object requires 255 sub-ranges, that's 255 * 80 * 2, plus the control word. This means the structure grew from 4112 bytes to 40912 bytes. This patch adds the ability to resize ranges as needed, defaulting to no resizing, while int_range_max now defaults to 3 sub-ranges (instead of 255) and grows to 255 when the range being calculated does not fit. For example: int_range<1> foo; // 1 sub-range with no resizing. int_range<5> foo; // 5 sub-ranges with no resizing. int_range<5, true> foo; // 5 sub-ranges with resizing. I ran some tests and found that 3 sub-ranges cover 99% of cases, so I've set the int_range_max default to that: typedef int_range<3, /*RESIZABLE=*/true> int_range_max; We don't bother growing incrementally, since the default covers most cases and we have a 255 hard-limit. This hard limit could be reduced to 128, since my tests never saw a range needing more than 124, but we could do that as a follow-up if needed. With 3 sub-ranges, int_range_max is now 592 bytes versus 40912 for trunk, and versus 4112 bytes for GCC12! The penalty is 5.04% for VRP and 3.02% for threading, with no noticeable change in overall compilation (0.27%). This is more than covered by our 13.26% improvements for the legacy removal + wide_int conversion. I think this approach is a good alternative, while providing us with flexibility going forward. For example, we could try defaulting to 8 sub-ranges for a noticeable improvement in VRP. We could also use large sub-ranges for switch analysis to avoid resizing. Another approach I tried was always resizing. With this, we could drop the whole int_range<N> nonsense, and have irange just hold a resizable range. This simplified things, but incurred a 7% penalty on ipa_cp. This was hard to pinpoint, and I'm not entirely convinced this wasn't some artifact of valgrind. However, until we're sure, let's avoid massive changes, especially since IPA changes are coming up. For the curious, a particular hot spot for IPA in this area was: ipcp_vr_lattice::meet_with_1 (const value_range *other_vr) { ... ... value_range save (m_vr); m_vr.union_ (*other_vr); return m_vr != save; } The problem isn't the resizing (since we do that at most once) but the fact that for some functions with lots of callers we end up with a huge range that gets copied and compared for every meet operation. Maybe the IPA algorithm could be adjusted somehow? Anywhooo... for now there is nothing to worry about, since value_range still has 2 subranges and is not resizable. But we should probably think what if anything we want to do here, as I envision IPA using infinite ranges here (well, int_range_max) and handling frange's, etc. gcc/ChangeLog: PR tree-optimization/109695 * value-range.cc (irange::operator=): Resize range. (irange::union_): Same. (irange::intersect): Same. (irange::invert): Same. (int_range_max): Default to 3 sub-ranges and resize as needed. * value-range.h (irange::maybe_resize): New. (~int_range): New. (int_range::int_range): Adjust for resizing. (int_range::operator=): Same.
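A simplified sketch of the resizing scheme (not the GCC int_range template; bounds are modeled as plain 64-bit integers here):

  #include <cstdint>
  #include <cstring>

  // A small inline buffer serves the common case; the first request for
  // more sub-ranges jumps straight to the hard limit, so at most one
  // allocation ever happens.
  template <unsigned N, bool RESIZABLE = false>
  class small_range_storage
  {
    static const unsigned HARD_LIMIT = 255;
    uint64_t m_inline[2 * N];   // one lower and one upper bound per sub-range
    uint64_t *m_base = m_inline;
    unsigned m_capacity = N;
  public:
    ~small_range_storage ()
    {
      if (m_base != m_inline)
        delete[] m_base;
    }
    unsigned capacity () const { return m_capacity; }
    // Grow only when a computation actually needs more pairs.
    void maybe_resize (unsigned needed)
    {
      if (!RESIZABLE || needed <= m_capacity || m_capacity >= HARD_LIMIT)
        return;
      uint64_t *p = new uint64_t[2 * HARD_LIMIT];
      std::memcpy (p, m_base, 2 * m_capacity * sizeof (uint64_t));
      m_base = p;
      m_capacity = HARD_LIMIT;
    }
  };

int_range_max in the commit above would then correspond to something like small_range_storage<3, true>: three inline pairs, growable once to 255.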
2023-05-15  Only return changed=true in union_nonzero when appropriate.  (Aldy Hernandez, 1 file, -2/+3)
irange::union_ was being overly pessimistic in its return value. It was returning true even when the nonzero mask was possibly the same. The reason for this is because the nonzero mask is not entirely kept up to date. We avoid setting it up when a new range is set (from a set, intersect, union, etc), because calculating a mask from a range is measurably expensive. However, irange::get_nonzero_bits() will always return the correct mask because it will calculate the nonzero mask inherent in the range on the fly and bitwise or it with the saved mask. This was an optimization because last release it was a big penalty to keep the mask up to date. This may not necessarily be the case with the conversion to wide_int's. We should investigate. Just to be clear, the result from get_nonzero_bits() is always correct as seen by the user, but the wide_int in the irange does not contain all the information, since part of the nonzero bits can be determined by the range itself, on the fly. The fix here is to only report a change when the result the user sees (callers of get_nonzero_bits()) has actually changed when unioning bits. This allows ipcp_vr_lattice::meet_with_1 to avoid unnecessary copies when determining if a range changed. This patch yields a 6.89% improvement to the ipa_cp pass. I'm including the IPA changes in this patch, as it's a testcase of sorts for the change. gcc/ChangeLog: * ipa-cp.cc (ipcp_vr_lattice::meet_with_1): Avoid unnecessary range copying. * value-range.cc (irange::union_nonzero_bits): Return TRUE only when range changed.
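For illustration, the mask that a range's bounds imply can be computed on the fly roughly like this (a plain-C++ sketch of the standard trick, not the GCC implementation):

  #include <cstdint>

  // Which bits may be nonzero for an unsigned range [lo, hi] with lo <= hi?
  // Bits above the highest bit in which lo and hi differ are fixed to lo's
  // bits; everything at or below that bit may vary.
  static uint64_t
  nonzero_bits_from_range (uint64_t lo, uint64_t hi)
  {
    uint64_t diff = lo ^ hi;
    if (diff == 0)
      return lo;                                  // singleton: exactly lo's set bits
    uint64_t varying = ~0ULL >> __builtin_clzll (diff);
    return lo | varying;                          // e.g. [3, 1000] -> 0x3ff
  }

get_nonzero_bits() can then combine this range-derived mask with the saved one, so the caller always sees the full picture even though the stored mask alone is allowed to go stale.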
2023-05-03  Allow varying ranges of unknown types in irange::verify_range [PR109711]  (Aldy Hernandez, 1 file, -0/+7)
The old legacy code allowed building ranges of unknown types so passes like IPA could build and propagate VARYING. For now it's easiest to allow the old behavior, it's not like you can do anything with these ranges except build them and copy them. Eventually we should convert all users of set_varying() to use supported types. I will address this in my upcoming IPA work. PR tree-optimization/109711 gcc/ChangeLog: * value-range.cc (irange::verify_range): Allow types of error_mark_node.
2023-05-01  Cleanup irange::set.  (Aldy Hernandez, 1 file, -126/+49)
Now that anti-ranges are no more and iranges contain wide_ints instead of trees, various cleanups are possible. This is one of a handful of patches improving the performance of irange::set() which is not on a hot path, but quite sensitive because it is so pervasive. gcc/ChangeLog: * gimple-range-op.cc (cfn_ffs::fold_range): Use the correct precision. * gimple-ssa-warn-alloca.cc (alloca_call_type): Use <2> for invalid_range, as it is an inverse range. * tree-vrp.cc (find_case_label_range): Avoid trees. * value-range.cc (irange::irange_set): Delete. (irange::irange_set_1bit_anti_range): Delete. (irange::irange_set_anti_range): Delete. (irange::set): Cleanup. * value-range.h (class irange): Remove irange_set, irange_set_anti_range, irange_set_1bit_anti_range. (irange::set_undefined): Remove set to m_type.
2023-05-01  Convert internal representation of irange to wide_ints.  (Aldy Hernandez, 1 file, -149/+118)
gcc/ChangeLog: * range-op.cc (update_known_bitmask): Adjust for irange containing wide_ints internally. * tree-ssanames.cc (set_nonzero_bits): Same. * tree-ssanames.h (set_nonzero_bits): Same. * value-range-storage.cc (irange_storage::set_irange): Same. (irange_storage::get_irange): Same. * value-range.cc (irange::operator=): Same. (irange::irange_set): Same. (irange::irange_set_1bit_anti_range): Same. (irange::irange_set_anti_range): Same. (irange::set): Same. (irange::verify_range): Same. (irange::contains_p): Same. (irange::irange_single_pair_union): Same. (irange::union_): Same. (irange::irange_contains_p): Same. (irange::intersect): Same. (irange::invert): Same. (irange::set_range_from_nonzero_bits): Same. (irange::set_nonzero_bits): Same. (mask_to_wi): Same. (irange::intersect_nonzero_bits): Same. (irange::union_nonzero_bits): Same. (gt_ggc_mx): Same. (gt_pch_nx): Same. (tree_range): Same. (range_tests_strict_enum): Same. (range_tests_misc): Same. (range_tests_nonzero_bits): Same. * value-range.h (irange::type): Same. (irange::varying_compatible_p): Same. (irange::irange): Same. (int_range::int_range): Same. (irange::set_undefined): Same. (irange::set_varying): Same. (irange::lower_bound): Same. (irange::upper_bound): Same.
2023-05-01  Replace vrp_val* with wide_ints.  (Aldy Hernandez, 1 file, -31/+6)
This patch removes all uses of vrp_val_{min,max} in favor of irange_val_*, which are wide_int based. This will leave only one use of vrp_val_*, which returns trees, in range_of_ssa_name_with_loop_info(), because it needs to work with non-integers (floats, etc). In a follow-up patch, this function will also be cleaned up such that vrp_val_* can be deleted. The functions min_limit and max_limit in range-op.cc are now useless as they're basically irange_val*. I didn't rename them yet to avoid churn. I'll do it in a later patch. gcc/ChangeLog: * gimple-range-fold.cc (adjust_pointer_diff_expr): Rewrite with irange_val*. (vrp_val_max): New. (vrp_val_min): New. * gimple-range-op.cc (cfn_strlen::fold_range): Use irange_val_*. * range-op.cc (max_limit): Same. (min_limit): Same. (plus_minus_ranges): Same. (operator_rshift::op1_range): Same. (operator_cast::inside_domain_p): Same. * value-range.cc (vrp_val_is_max): Delete. (vrp_val_is_min): Delete. (range_tests_misc): Use irange_val_*. * value-range.h (vrp_val_is_min): Delete. (vrp_val_is_max): Delete. (vrp_val_max): Delete. (irange_val_min): New. (vrp_val_min): Delete. (irange_val_max): New. * vr-values.cc (check_for_binary_op_overflow): Use irange_val_*.
2023-05-01  Conversion to irange wide_int API.  (Aldy Hernandez, 1 file, -190/+282)
This converts the irange API to use wide_ints exclusively, along with its users. This patch will slow down VRP, as there will be more useless wide_int to tree conversions. However, this slowdown is only temporary, as a follow-up patch will convert the internal representation of iranges to wide_ints for a net overall gain in performance. gcc/ChangeLog: * fold-const.cc (expr_not_equal_to): Convert to irange wide_int API. * gimple-fold.cc (size_must_be_zero_p): Same. * gimple-loop-versioning.cc (loop_versioning::prune_loop_conditions): Same. * gimple-range-edge.cc (gcond_edge_range): Same. (gimple_outgoing_range::calc_switch_ranges): Same. * gimple-range-fold.cc (adjust_imagpart_expr): Same. (adjust_realpart_expr): Same. (fold_using_range::range_of_address): Same. (fold_using_range::relation_fold_and_or): Same. * gimple-range-gori.cc (gori_compute::gori_compute): Same. (range_is_either_true_or_false): Same. * gimple-range-op.cc (cfn_toupper_tolower::get_letter_range): Same. (cfn_clz::fold_range): Same. (cfn_ctz::fold_range): Same. * gimple-range-tests.cc (class test_expr_eval): Same. * gimple-ssa-warn-alloca.cc (alloca_call_type): Same. * ipa-cp.cc (ipa_value_range_from_jfunc): Same. (propagate_vr_across_jump_function): Same. (decide_whether_version_node): Same. * ipa-prop.cc (ipa_get_value_range): Same. * ipa-prop.h (ipa_range_set_and_normalize): Same. * range-op.cc (get_shift_range): Same. (value_range_from_overflowed_bounds): Same. (value_range_with_overflow): Same. (create_possibly_reversed_range): Same. (equal_op1_op2_relation): Same. (not_equal_op1_op2_relation): Same. (lt_op1_op2_relation): Same. (le_op1_op2_relation): Same. (gt_op1_op2_relation): Same. (ge_op1_op2_relation): Same. (operator_mult::op1_range): Same. (operator_exact_divide::op1_range): Same. (operator_lshift::op1_range): Same. (operator_rshift::op1_range): Same. (operator_cast::op1_range): Same. (operator_logical_and::fold_range): Same. (set_nonzero_range_from_mask): Same. (operator_bitwise_or::op1_range): Same. (operator_bitwise_xor::op1_range): Same. (operator_addr_expr::fold_range): Same. (pointer_plus_operator::wi_fold): Same. (pointer_or_operator::op1_range): Same. (INT): Same. (UINT): Same. (INT16): Same. (UINT16): Same. (SCHAR): Same. (UCHAR): Same. (range_op_cast_tests): Same. (range_op_lshift_tests): Same. (range_op_rshift_tests): Same. (range_op_bitwise_and_tests): Same. (range_relational_tests): Same. * range.cc (range_zero): Same. (range_nonzero): Same. * range.h (range_true): Same. (range_false): Same. (range_true_and_false): Same. * tree-data-ref.cc (split_constant_offset_1): Same. * tree-ssa-loop-ch.cc (entry_loop_condition_is_static): Same. * tree-ssa-loop-unswitch.cc (struct unswitch_predicate): Same. (find_unswitching_predicates_for_bb): Same. * tree-ssa-phiopt.cc (value_replacement): Same. * tree-ssa-threadbackward.cc (back_threader::find_taken_edge_cond): Same. * tree-ssanames.cc (ssa_name_has_boolean_range): Same. * tree-vrp.cc (find_case_label_range): Same. * value-query.cc (range_query::get_tree_range): Same. * value-range.cc (irange::set_nonnegative): Same. (frange::contains_p): Same. (frange::singleton_p): Same. (frange::internal_singleton_p): Same. (irange::irange_set): Same. (irange::irange_set_1bit_anti_range): Same. (irange::irange_set_anti_range): Same. (irange::set): Same. (irange::operator==): Same. (irange::singleton_p): Same. (irange::contains_p): Same. (irange::set_range_from_nonzero_bits): Same. (DEFINE_INT_RANGE_INSTANCE): Same. (INT): Same. (UINT): Same. (SCHAR): Same. 
(UINT128): Same. (UCHAR): Same. (range): New. (tree_range): New. (range_int): New. (range_uint): New. (range_uint128): New. (range_uchar): New. (range_char): New. (build_range3): Convert to irange wide_int API. (range_tests_irange3): Same. (range_tests_int_range_max): Same. (range_tests_strict_enum): Same. (range_tests_misc): Same. (range_tests_nonzero_bits): Same. (range_tests_nan): Same. (range_tests_signed_zeros): Same. * value-range.h (Value_Range::Value_Range): Same. (irange::set): Same. (irange::nonzero_p): Same. (irange::contains_p): Same. (range_includes_zero_p): Same. (irange::set_nonzero): Same. (irange::set_zero): Same. (contains_zero_p): Same. (frange::contains_p): Same. * vr-values.cc (simplify_using_ranges::op_with_boolean_value_range_p): Same. (bounds_of_var_in_loop): Same. (simplify_using_ranges::legacy_fold_cond_overflow): Same.
2023-05-01  Merge irange::union/intersect into irange_union/intersect.  (Aldy Hernandez, 1 file, -4/+7)
gcc/ChangeLog: * value-range.cc (irange::irange_union): Rename to... (irange::union_): ...this. (irange::irange_intersect): Rename to... (irange::intersect): ...this. * value-range.h (irange::union_): Delete. (irange::intersect): Delete.
2023-05-01  Remove irange::tree_{lower,upper}_bound.  (Aldy Hernandez, 1 file, -18/+18)
gcc/ChangeLog: * value-range.cc (irange::irange_set_anti_range): Remove uses of tree_lower_bound and tree_upper_bound. (irange::verify_range): Same. (irange::operator==): Same. (irange::singleton_p): Same. * value-range.h (irange::tree_lower_bound): Delete. (irange::tree_upper_bound): Delete. (irange::lower_bound): Delete. (irange::upper_bound): Delete. (irange::zero_p): Remove uses of tree_lower_bound and tree_upper_bound.
2023-05-01  Remove irange::{min,max,kind}.  (Aldy Hernandez, 1 file, -49/+0)
gcc/ChangeLog: * tree-ssa-loop-niter.cc (refine_value_range_using_guard): Remove kind() call. (determine_value_range): Same. (record_nonwrapping_iv): Same. (infer_loop_bounds_from_signedness): Same. (scev_var_range_cant_overflow): Same. * tree-vrp.cc (operand_less_p): Delete. * tree-vrp.h (operand_less_p): Delete. * value-range.cc (get_legacy_range): Remove uses of deprecated API. (irange::value_inside_range): Delete. * value-range.h (vrange::kind): Delete. (irange::num_pairs): Remove check of m_kind. (irange::min): Delete. (irange::max): Delete.
2023-04-27  Normalize addresses in IPA before calling range_op_handler [PR109639]  (Aldy Hernandez, 1 file, -0/+3)
The old legacy code would allow building ranges containing symbolics, even though the entire ranger ecosystem does not handle them. These were normalized into non-zero ranges by helper functions in VRP (range_fold_*_expr) before calling the ranger. The only users of these functions should have been legacy VRP, which is no more. However, a handful of users crept into IPA, even though these functions should never have been called outside of VRP or vr-values. The issue here is that IPA is building a range of [&foo, &foo] and expecting range_fold_binary to normalize it to non-zero. Fixed by adding a helper function before calling the range_op handler. I think this covers the problematic ranges. If not, I'll come up with something more generalized that does not involve polluting irange::set with the normalization code. After all, this only involves a handful of IPA places. I've also added an assert in irange::set() making it easier to detect any possible fallout without having to drill deep into the setter. gcc/ChangeLog: PR tree-optimization/109639 * ipa-cp.cc (ipa_value_range_from_jfunc): Normalize range. (propagate_vr_across_jump_function): Same. * ipa-fnsummary.cc (evaluate_conditions_for_known_args): Same. * ipa-prop.h (ipa_range_set_and_normalize): New. * value-range.cc (irange::set): Assert min and max are INTEGER_CST.
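The helper named in the ChangeLog, ipa_range_set_and_normalize, presumably looks roughly like the following sketch; the body here is reconstructed from the description above, not copied from the GCC source.

  // Sketch only: turn a tree constant coming from an IPA jump function into
  // something irange can represent before handing it to range-ops.  An
  // address like &foo cannot be a numeric endpoint, but it is known non-NULL.
  static void
  range_set_and_normalize (irange &r, tree val)
  {
    if (TREE_CODE (val) == ADDR_EXPR)
      r.set_nonzero (TREE_TYPE (val));   // [&foo, &foo] becomes "pointer != 0"
    else
      r.set (val, val);                  // ordinary constant: singleton range
  }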
2023-04-26  Remove legacy range support.  (Aldy Hernandez, 1 file, -1151/+37)
This patch removes all the code paths guarded by legacy_mode_p(), thus allowing us to re-use the int_range<1> idiom for a range of one sub-range. This allows us to represent these simple ranges in a more efficient manner. gcc/ChangeLog: * range-op.cc (range_op_cast_tests): Remove legacy support. * value-range-storage.h (vrange_allocator::alloc_irange): Same. * value-range.cc (irange::operator=): Same. (get_legacy_range): Same. (irange::copy_legacy_to_multi_range): Delete. (irange::copy_to_legacy): Delete. (irange::irange_set_anti_range): Delete. (irange::set): Remove legacy support. (irange::verify_range): Same. (irange::legacy_lower_bound): Delete. (irange::legacy_upper_bound): Delete. (irange::legacy_equal_p): Delete. (irange::operator==): Remove legacy support. (irange::singleton_p): Same. (irange::value_inside_range): Same. (irange::contains_p): Same. (intersect_ranges): Delete. (irange::legacy_intersect): Delete. (union_ranges): Delete. (irange::legacy_union): Delete. (irange::legacy_verbose_union_): Delete. (irange::legacy_verbose_intersect): Delete. (irange::irange_union): Remove legacy support. (irange::irange_intersect): Same. (irange::intersect): Same. (irange::invert): Same. (ranges_from_anti_range): Delete. (gt_pch_nx): Adjust for legacy removal. (gt_ggc_mx): Same. (range_tests_legacy): Delete. (range_tests_misc): Adjust for legacy removal. (range_tests): Same. * value-range.h (class irange): Same. (irange::legacy_mode_p): Delete. (ranges_from_anti_range): Delete. (irange::nonzero_p): Adjust for legacy removal. (irange::lower_bound): Same. (irange::upper_bound): Same. (irange::union_): Same. (irange::intersect): Same. (irange::set_nonzero): Same. (irange::set_zero): Same. * vr-values.cc (simplify_using_ranges::legacy_fold_cond_overflow): Same.
2023-04-26  Remove range_has_numeric_bounds_p.  (Aldy Hernandez, 1 file, -9/+3)
gcc/ChangeLog: * value-range.cc (irange::copy_legacy_to_multi_range): Rewrite use of range_has_numeric_bounds_p with irange API. (range_has_numeric_bounds_p): Delete. * value-range.h (range_has_numeric_bounds_p): Delete.
2023-04-26  Fix swapping of ranges.  (Aldy Hernandez, 1 file, -47/+0)
The legacy range code has logic to swap out of order endpoints in the irange constructor. The new irange code expects the caller to fix any inconsistencies, thus speeding up the common case. However, this means that when we remove legacy, any stragglers must be fixed. This patch fixes the 3 culprits found during the conversion. gcc/ChangeLog: * range-op.cc (operator_cast::op1_range): Use create_possibly_reversed_range. (operator_bitwise_and::simple_op1_range_solver): Same. * value-range.cc (swap_out_of_order_endpoints): Delete. (irange::set): Remove call to swap_out_of_order_endpoints.
2023-04-26  Convert users of legacy API to get_legacy_range() function.  (Aldy Hernandez, 1 file, -26/+63)
This patch converts the users of the legacy API to a function called get_legacy_range() which will return the pieces of the soon to be removed API (min, max, and kind). This is a temporary measure while these users are converted. In upcoming patches I will convert most users, but most of the middle-end warning uses will remain. Naive attempts to remove them showed that a lot of these uses are quite dependent on the anti-range idiom, and converting them to the new API broke the tests, even when the conversion was conceptually correct. Perhaps someone who understands these passes could take a stab at it. In the meantime, the legacy uses can be trivially found by grepping for get_legacy_range. gcc/ChangeLog: * builtins.cc (determine_block_size): Convert use of legacy API to get_legacy_range. * gimple-array-bounds.cc (check_out_of_bounds_and_warn): Same. (array_bounds_checker::check_array_ref): Same. * gimple-ssa-warn-restrict.cc (builtin_memref::extend_offset_range): Same. * ipa-cp.cc (ipcp_store_vr_results): Same. * ipa-fnsummary.cc (set_switch_stmt_execution_predicate): Same. * ipa-prop.cc (struct ipa_vr_ggc_hash_traits): Same. (ipa_write_jump_function): Same. * pointer-query.cc (get_size_range): Same. * tree-data-ref.cc (split_constant_offset): Same. * tree-ssa-strlen.cc (get_range): Same. (maybe_diag_stxncpy_trunc): Same. (strlen_pass::get_len_or_size): Same. (strlen_pass::count_nonzero_bytes_addr): Same. * tree-vect-patterns.cc (vect_get_range_info): Same. * value-range.cc (irange::maybe_anti_range): Remove. (get_legacy_range): New. (irange::copy_to_legacy): Use get_legacy_range. (ranges_from_anti_range): Same. * value-range.h (class irange): Remove maybe_anti_range. (get_legacy_range): New. * vr-values.cc (check_for_binary_op_overflow): Convert use of legacy API to get_legacy_range. (compare_ranges): Same. (compare_range_with_value): Same. (bounds_of_var_in_loop): Same. (find_case_label_ranges): Same. (simplify_using_ranges::simplify_switch_using_ranges): Same.
2023-04-26  Remove irange::constant_p.  (Aldy Hernandez, 1 file, -14/+0)
gcc/ChangeLog: * value-range-pretty-print.cc (vrange_printer::visit): Remove constant_p use. * value-range.cc (irange::constant_p): Remove. (irange::get_nonzero_bits_from_range): Remove constant_p use. * value-range.h (class irange): Remove constant_p. (irange::num_pairs): Remove constant_p use.
2023-04-26  Remove symbolics from irange.  (Aldy Hernandez, 1 file, -135/+4)
gcc/ChangeLog: * value-range.cc (irange::copy_legacy_to_multi_range): Remove symbolics support. (irange::set): Same. (irange::legacy_lower_bound): Same. (irange::legacy_upper_bound): Same. (irange::contains_p): Same. (range_tests_legacy): Same. (irange::normalize_addresses): Remove. (irange::normalize_symbolics): Remove. (irange::symbolic_p): Remove. * value-range.h (class irange): Remove symbolic_p, normalize_symbolics, and normalize_addresses. * vr-values.cc (simplify_using_ranges::two_valued_val_range_p): Remove symbolics support.
2023-04-26  Remove irange::may_contain_p.  (Aldy Hernandez, 1 file, -8/+0)
The deprecated irange::may_contain_p method differed from contains_p in that it could handle symbolics, which no longer exist in VRP. gcc/ChangeLog: * value-range.cc (irange::may_contain_p): Remove. * value-range.h (range_includes_zero_p): Rewrite may_contain_p usage with contains_p. * vr-values.cc (compare_range_with_value): Same.
2023-04-25  Remove default constructor to nan_state.  (Aldy Hernandez, 1 file, -2/+1)
I think it's best to specify the default behavior of nan_state, since it's not obvious that nan_state() defaults to TRUE. Also, this avoids the ugly nan_state(false, false) idiom. gcc/ChangeLog: * value-range.cc (frange::set): Adjust constructor. * value-range.h (nan_state::nan_state): Replace default constructor with one taking an argument.
2023-04-23  Handle NANs in frange::operator== [PR109593]  (Aldy Hernandez, 1 file, -0/+10)
An earlier patch, commit 10e481b154c5fc63e6ce4b449ce86cecb87a6015 ("Return true from operator== for two identical ranges containing NAN."), removed the check for NANs, which caused us to read from m_min and m_max, which are undefined for NANs. gcc/ChangeLog: PR tree-optimization/109593 * value-range.cc (frange::operator==): Handle NANs.
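A plain-C++ sketch of the equality semantics being restored (illustrative fields, not the GCC frange class): compare NAN state before ever looking at the numeric bounds.

  // operator== asks "do the two objects describe the same set of values?",
  // so two NAN-only ranges compare equal, and their numeric bounds -- which
  // are undefined in that state -- must never be read.
  struct float_range
  {
    bool undefined;   // empty range
    bool known_nan;   // contains nothing but NAN
    double lo, hi;    // meaningless when undefined or known_nan is set

    bool operator== (const float_range &o) const
    {
      if (undefined || o.undefined)
        return undefined == o.undefined;
      if (known_nan || o.known_nan)
        return known_nan == o.known_nan;   // compare state, not bounds
      return lo == o.lo && hi == o.hi;
    }
  };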
2023-04-18  Add GTY support for vrange.  (Aldy Hernandez, 1 file, -0/+85)
IPA currently puts *some* irange's in GC memory. When I contribute support for generic ranges in IPA, we'll need to change this to vrange. This patch adds GTY support for both vrange and frange. gcc/ChangeLog: * value-range.cc (gt_ggc_mx): New. (gt_pch_nx): New. * value-range.h (class vrange): Add GTY marker. (class frange): Same. (gt_ggc_mx): Remove. (gt_pch_nx): Remove.
2023-04-18  Declare dconstm0 to go along with dconst0 and friends.  (Aldy Hernandez, 1 file, -4/+3)
Negating dconst0 is getting pretty old, and we will keep adding copies of the same idiom. Fixed by adding a dconstm0 constant to go along with dconst1, dconstm1, etc. gcc/ChangeLog: * emit-rtl.cc (init_emit_once): Initialize dconstm0. * gimple-range-op.cc (class cfn_signbit): Remove dconstm0 declaration. * range-op-float.cc (zero_range): Use dconstm0. (zero_to_inf_range): Same. * real.h (dconstm0): New. * value-range.cc (frange::flush_denormals_to_zero): Use dconstm0. (frange::set_zero): Do not declare dconstm0.
2023-04-18  Return true from operator== for two identical ranges containing NAN.  (Aldy Hernandez, 1 file, -10/+0)
The == operator for ranges signifies that two ranges contain the same thing, not that they are ultimately equal. So [2,4] == [2,4], even though one may be a 2 and the other may be a 3. Similarly with two VARYING ranges. There is an oversight in frange::operator== where we are returning false for two identical NANs. This is causing us to never cache NANs in sbr_sparse_bitmap::set_bb_range. gcc/ChangeLog: * value-range.cc (frange::operator==): Adjust for NAN. (range_tests_nan): Remove some NAN tests.
2023-04-18  Add inchash support for vrange.  (Aldy Hernandez, 1 file, -0/+52)
This patch provides inchash support for vrange. It is along the lines of the streaming support I just posted and will be used for IPA hashing of ranges. gcc/ChangeLog: * inchash.cc (hash::add_real_value): New. * inchash.h (class hash): Add add_real_value. * value-range.cc (add_vrange): New. * value-range.h (inchash::add_vrange): New.
2023-03-28  range-op-float: Only flush_denormals_to_zero for +-*/ [PR109154]  (Jakub Jelinek, 1 file, -2/+0)
As discussed in the PR, flushing denormals to zero on every frange::set might be harmful for e.g. x < 0.0 comparisons, because we then on both sides use ranges that include zero [-Inf, -0.0] on the true side, and [-0.0, +Inf] NAN on the false side, rather than [-Inf, nextafter (-0.0, -Inf)] on the true side. The following patch does it only in range_operator_float::fold_range which is right now used for +-*/ (both normal and reverse ops of those). Though, I don't see any difference on the testcase in the PR, but not sure what I should be looking at and the reduced testcase there has undefined behavior. 2023-03-28 Jakub Jelinek <jakub@redhat.com> PR tree-optimization/109154 * value-range.h (frange::flush_denormals_to_zero): Make it public rather than private. * value-range.cc (frange::set): Don't call flush_denormals_to_zero here. * range-op-float.cc (range_operator_float::fold_range): Call flush_denormals_to_zero.
2023-03-23  ranger: Ranger meets aspell  (Jakub Jelinek, 1 file, -2/+2)
I've noticed a comment typo in tree-vrp.cc and decided to quickly skim aspell -c on the ranger sources (with quick I on everything that looked ok or roughly ok). But not being a native English speaker, I could get stuff wrong. 2023-03-23 Jakub Jelinek <jakub@redhat.com> * value-range.cc (irange::irange_union, irange::intersect): Fix comment spelling bugs. * gimple-range-trace.cc (range_tracer::do_header): Likewise. * gimple-range-trace.h: Likewise. * gimple-range-edge.cc: Likewise. (gimple_outgoing_range_stmt_p, gimple_outgoing_range::switch_edge_range, gimple_outgoing_range::edge_range_p): Likewise. * gimple-range.cc (gimple_ranger::prefill_stmt_dependencies, gimple_ranger::fold_stmt, gimple_ranger::register_transitive_infer, assume_query::assume_query, assume_query::calculate_phi): Likewise. * gimple-range-edge.h: Likewise. * value-range.h (Value_Range::set, Value_Range::lower_bound, Value_Range::upper_bound, frange::set_undefined): Likewise. * gimple-range-gori.h (range_def_chain::depend, gori_map::m_outgoing, gori_compute): Likewise. * gimple-range-fold.h (fold_using_range): Likewise. * gimple-range-path.cc (path_range_query::compute_ranges_in_phis): Likewise. * gimple-range-gori.cc (range_def_chain::in_chain_p, range_def_chain::dump, gori_map::calculate_gori, gori_compute::compute_operand_range_switch, gori_compute::logical_combine, gori_compute::refine_using_relation, gori_compute::compute_operand1_range, gori_compute::may_recompute_p): Likewise. * gimple-range.h: Likewise. (enable_ranger): Likewise. * range-op.h (empty_range_varying): Likewise. * value-query.h (value_query): Likewise. * gimple-range-cache.cc (block_range_cache::set_bb_range, block_range_cache::dump, ssa_global_cache::clear_global_range, temporal_cache::temporal_value, temporal_cache::current_p, ranger_cache::range_of_def, ranger_cache::propagate_updated_value, ranger_cache::range_from_dom, ranger_cache::register_inferred_value): Likewise. * gimple-range-fold.cc (fur_edge::get_phi_operand, fur_stmt::get_operand, gimple_range_adjustment, fold_using_range::range_of_phi, fold_using_range::relation_fold_and_or): Likewise. * value-range-storage.h (irange_storage_slot::MAX_INTS): Likewise. * value-query.cc (range_query::value_of_expr, range_query::value_on_edge, range_query::query_relation): Likewise. * tree-vrp.cc (remove_unreachable::remove_and_update_globals, intersect_range_with_nonzero_bits): Likewise. * gimple-range-infer.cc (gimple_infer_range::check_assume_func, exit_range): Likewise. * value-relation.h: Likewise. (equiv_oracle, relation_trio::relation_trio, value_relation, value_relation::value_relation, pe_min): Likewise. * range-op-float.cc (range_operator_float::rv_fold, frange_arithmetic, foperator_unordered_equal::op1_range, foperator_div::rv_fold): Likewise. * gimple-range-op.cc (cfn_clz::fold_range): Likewise. * value-relation.cc (equiv_oracle::query_relation, equiv_oracle::register_equiv, equiv_oracle::add_equiv_to_block, value_relation::apply_transitive, relation_chain_head::find_relation, dom_oracle::query_relation, dom_oracle::find_relation_block, dom_oracle::find_relation_dom, path_oracle::register_equiv): Likewise. * range-op.cc (range_operator::wi_fold_in_parts_equiv, create_possibly_reversed_range, adjust_op1_for_overflow, operator_mult::wi_fold, operator_exact_divide::op1_range, operator_cast::lhs_op1_relation, operator_cast::fold_pair, operator_cast::fold_range, operator_abs::wi_fold, range_op_cast_tests, range_op_lshift_tests): Likewise.
2023-03-22  frange: Implement nan_state class [PR109008]  (Aldy Hernandez, 1 file, -3/+15)
This patch implements a nan_state class, that allows us to query or pass around the NANness of an frange. We can store +NAN, -NAN, +-NAN, or not-a-NAN with it. I tried to touch as little as possible, leaving other cleanups to the next release. For example, we should replace the m_*_nan fields in frange with nan_state, and provide relevant accessors to nan_state (isnan, etc). PR tree-optimization/109008 gcc/ChangeLog: * value-range.cc (frange::set): Add nan_state argument. * value-range.h (class nan_state): New. (frange::get_nan_state): New.
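A sketch of what such a nan_state-style helper could look like (accessor names are assumptions; the real class is declared in value-range.h):

  // Records whether +NAN and/or -NAN are possible, independently of the
  // numeric bounds of the range.
  class nan_state_sketch
  {
    bool m_pos, m_neg;
  public:
    nan_state_sketch (bool pos, bool neg) : m_pos (pos), m_neg (neg) {}
    bool pos_p () const { return m_pos; }            // +NAN possible
    bool neg_p () const { return m_neg; }            // -NAN possible
    bool nan_p () const { return m_pos || m_neg; }   // any NAN possible
  };

With such an object, a setter along the lines of frange::set (type, lb, ub, nan_state (false, false)) can say "definitely not a NAN" without overloading the meaning of the bounds.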
2023-02-03  irange: Compare nonzero bits in irange with widest_int [PR108639]  (Aldy Hernandez, 1 file, -2/+9)
The problem here is we are trying to compare two ranges with different precisions and the == operator in wide_int is complaining. Interestingly, the problem is not the nonzero bits, but the fact that the entire ranges have different precisions. The reason we don't ICE when comparing the sub-ranges, is because the code in irange::operator== works on trees, and tree_int_cst_equal is promoting the comparison to a widest int: if (TREE_CODE (t1) == INTEGER_CST && TREE_CODE (t2) == INTEGER_CST && wi::to_widest (t1) == wi::to_widest (t2)) return 1; This is why we don't see the ICE until the nonzero bits comparison is done on wide ints. I think we should maintain the current equality behavior, and follow suit in the nonzero bit comparison. I have also fixed the legacy equality code, even though technically nonzero bits shouldn't appear in legacy. But better safe than sorry. PR tree-optimization/108639 gcc/ChangeLog: * value-range.cc (irange::legacy_equal_p): Compare nonzero bits as widest_int. (irange::operator==): Same.
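A plain-C++ illustration of the widening idea (this is not the wide_int API; __int128 merely stands in for widest_int): extend both operands to one common wide representation, honoring each type's signedness, and only then compare.

  #include <cstdint>

  static __int128
  widen (uint64_t bits, unsigned precision, bool is_signed)
  {
    if (precision < 64)
      bits &= (1ULL << precision) - 1;
    __int128 v = bits;
    if (is_signed && ((bits >> (precision - 1)) & 1))
      v -= (__int128) 1 << precision;   // sign-extend the top bit
    return v;
  }

  static bool
  nonzero_bits_equal (uint64_t a, unsigned prec_a, bool sign_a,
                      uint64_t b, unsigned prec_b, bool sign_b)
  {
    // Operands of different precisions never meet in a precision-checked
    // operator==; they are compared in the widened domain instead.
    return widen (a, prec_a, sign_a) == widen (b, prec_b, sign_b);
  }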
2023-01-02  Update copyright years.  (Jakub Jelinek, 1 file, -1/+1)
2022-11-12  [frange] Avoid testing signed zero test for -fno-signed-zeros.  (Aldy Hernandez, 1 file, -4/+5)
This patch moves a test that is meant to only work for signed zeros into range_tests_signed_zeros. I am not aware of any architectures where this is failing, but it is annoying to see selftests failing when -fno-signed-zeros is used. gcc/ChangeLog: * value-range.cc (range_tests_signbit): Move to set from here... (range_tests_signed_zeros): ...to here.
2022-11-10  Do not specify NAN sign in frange::set_nonnegative.  (Aldy Hernandez, 1 file, -5/+7)
After further reading of the IEEE 754 standard, it has become clear that there are no guarantees with regards to the sign of a NAN when it comes to any operation other than copy, copysign, abs, and negate. Currently, set_nonnegative() is only used in one place in ranger applicable to floating point values, when expanding unknown calls. Since we already specially handle copy, copysign, abs, and negate, all the calls to set_nonnegative() must be NAN-sign agnostic. The cleanest solution is to leave the sign unspecified in frange::set_nonnegative(). Any special case must be handled by the caller. gcc/ChangeLog: * value-range.cc (frange::set_nonnegative): Remove NAN sign handling. (range_tests_signed_zeros): Adjust test.
2022-11-09  Clear NAN when reading back a global range if necessary.  (Aldy Hernandez, 1 file, -0/+9)
When reading back from the global store, we must clear the NAN bit if necessary. The reason it's not happening is because the constructor sets a NAN by default (when HONOR_NANS). We must be careful to clear the NAN bit if the original range didn't have a NAN. I have commented the reason we use the constructor instead of filling out the fields by hand, because it wasn't clear when re-reading this code. PR tree-optimization/107569 gcc/ChangeLog: * value-range-storage.cc (frange_storage_slot::get_frange): Clear NAN if appropriate. * value-range.cc (range_tests_floats): New test.
2022-11-08  Provide normalized and denormal format version of real_isdenormal.  (Aldy Hernandez, 1 file, -2/+3)
Implement a variant of real_isdenormal() to be used within real.cc where the argument is known to be in denormal format. Rewrite real_isdenormal() for use outside of real.cc where the argument is known to be normalized. gcc/ChangeLog: * real.cc (real_isdenormal): New. (encode_ieee_single): Call real_isdenormal. (encode_ieee_double): Same. (encode_ieee_extended): Same. (encode_ieee_quad): Same. (encode_ieee_half): Same. (encode_arm_bfloat_half): Same. * real.h (real_isdenormal): Add mode argument. Rewrite for normalized values. * value-range.cc (frange::flush_denormals_to_zero): Pass mode to real_isdenormal.
2022-11-02  Fix bug in frange::contains_p() for signed zeros.  (Aldy Hernandez, 1 file, -1/+9)
The contains_p() code wasn't returning true for non-singleton ranges containing signed zeros. With this patch we now handle: -0.0 exists in [-3, +5.0] +0.0 exists in [-3, +5.0] gcc/ChangeLog: * value-range.cc (frange::contains_p): Fix signed zero handling. (range_tests_signed_zeros): New test.
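A tiny plain-double illustration of the behavior the fix guarantees (this is not the GCC code, which works on REAL_VALUE_TYPE endpoints): both zeros must count as members of any range that spans zero.

  #include <cassert>

  static bool
  range_contains (double lo, double hi, double val)
  {
    return lo <= val && val <= hi;   // IEEE comparison treats -0.0 == +0.0
  }

  int
  main ()
  {
    assert (range_contains (-3.0, 5.0, -0.0));   // -0.0 exists in [-3, +5.0]
    assert (range_contains (-3.0, 5.0, +0.0));   // +0.0 exists in [-3, +5.0]
  }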
2022-11-01  Intersect with nonzero bits can indicate change incorrectly.  (Andrew MacLeod, 1 file, -0/+4)
* value-range.cc (irange::intersect_nonzero_bits): If new non-zero mask is the same as original, flag no change.
2022-10-28  Change remaining flag_finite_math_only use in value-range.cc.  (Aldy Hernandez, 1 file, -1/+1)
gcc/ChangeLog: * value-range.cc (range_tests_floats): Use HONOR_INFINITIES.
2022-10-26  Convert flag_finite_math_only uses in frange to HONOR_*.  (Aldy Hernandez, 1 file, -3/+3)
As mentioned earlier, we should be using HONOR_* on types rather than flag_finite_math_only. gcc/ChangeLog: * value-range.cc (frange::set): Use HONOR_*. (frange::verify_range): Same. * value-range.h (frange_val_min): Same. (frange_val_max): Same.
2022-10-24  Check HONOR_NANS instead of flag_finite_math_only in frange::verify_range.  (Aldy Hernandez, 1 file, -8/+25)
[Jakub and other FP experts, would this be OK, or am I missing something?] Vax does not seem to have !flag_finite_math_only, but float_type_node does not HONOR_NANS. The check in frange::verify_range depended on flag_finite_math_only, which is technically not correct since frange::set_varying() checks HONOR_NANS instead of flag_finite_math_only. I'm actually getting tired of flag_finite_math_only and !flag_finite_math_only discrepancies in the selftests (Vax and rx-elf come to mind). I think we should just test both alternatives in the selftests as in this patch. We could also check flag_finite_math_only=0 with a float_type_node that does not HONOR_NANs, but I have no idea how to twiddle FLOAT_MODE_FORMAT temporarily, and that may be overthinking it. PR tree-optimization/107365 gcc/ChangeLog: * value-range.cc (frange::verify_range): Predicate NAN check in VARYING range on HONOR_NANS instead of flag_finite_math_only. (range_tests_floats): Same. (range_tests_floats_various): New. (range_tests): Call range_tests_floats_various.