path: root/gcc/value-range.cc
2024-06-17  Rename Value_Range to value_range.  (Aldy Hernandez, 1 file, -2/+2)
Now that all remaining users of value_range have been renamed to int_range<>, we can reclaim value_range as a temporary, thus removing the annoying CamelCase. gcc/ChangeLog: * data-streamer-in.cc (streamer_read_value_range): Rename Value_Range to value_range. * data-streamer.h (streamer_read_value_range): Same. * gimple-pretty-print.cc (dump_ssaname_info): Same. * gimple-range-cache.cc (ssa_block_ranges::dump): Same. (ssa_lazy_cache::merge): Same. (block_range_cache::dump): Same. (ssa_cache::merge_range): Same. (ssa_cache::dump): Same. (ranger_cache::edge_range): Same. (ranger_cache::propagate_cache): Same. (ranger_cache::fill_block_cache): Same. (ranger_cache::resolve_dom): Same. (ranger_cache::range_from_dom): Same. (ranger_cache::register_inferred_value): Same. * gimple-range-fold.cc (op1_range): Same. (op2_range): Same. (fold_relations): Same. (fold_using_range::range_of_range_op): Same. (fold_using_range::range_of_phi): Same. (fold_using_range::range_of_call): Same. (fold_using_range::condexpr_adjust): Same. (fold_using_range::range_of_cond_expr): Same. (fur_source::register_outgoing_edges): Same. * gimple-range-fold.h (gimple_range_type): Same. (gimple_range_ssa_p): Same. * gimple-range-gori.cc (gori_compute::compute_operand_range): Same. (gori_compute::logical_combine): Same. (gori_compute::refine_using_relation): Same. (gori_compute::compute_operand1_range): Same. (gori_compute::compute_operand2_range): Same. (gori_compute::compute_operand1_and_operand2_range): Same. (gori_calc_operands): Same. (gori_name_helper): Same. * gimple-range-infer.cc (gimple_infer_range::check_assume_func): Same. (gimple_infer_range::gimple_infer_range): Same. (infer_range_manager::maybe_adjust_range): Same. (infer_range_manager::add_range): Same. * gimple-range-infer.h: Same. * gimple-range-op.cc (gimple_range_op_handler::gimple_range_op_handler): Same. (gimple_range_op_handler::calc_op1): Same. (gimple_range_op_handler::calc_op2): Same. (gimple_range_op_handler::maybe_builtin_call): Same. * gimple-range-path.cc (path_range_query::internal_range_of_expr): Same. (path_range_query::ssa_range_in_phi): Same. (path_range_query::compute_ranges_in_phis): Same. (path_range_query::compute_ranges_in_block): Same. (path_range_query::add_to_exit_dependencies): Same. * gimple-range-trace.cc (debug_seed_ranger): Same. * gimple-range.cc (gimple_ranger::range_of_expr): Same. (gimple_ranger::range_on_entry): Same. (gimple_ranger::range_on_edge): Same. (gimple_ranger::range_of_stmt): Same. (gimple_ranger::prefill_stmt_dependencies): Same. (gimple_ranger::register_inferred_ranges): Same. (gimple_ranger::register_transitive_inferred_ranges): Same. (gimple_ranger::export_global_ranges): Same. (gimple_ranger::dump_bb): Same. (assume_query::calculate_op): Same. (assume_query::calculate_phi): Same. (assume_query::dump): Same. (dom_ranger::range_of_stmt): Same. * ipa-cp.cc (ipcp_vr_lattice::meet_with_1): Same. (ipa_vr_operation_and_type_effects): Same. (ipa_value_range_from_jfunc): Same. (propagate_bits_across_jump_function): Same. (propagate_vr_across_jump_function): Same. (ipcp_store_vr_results): Same. * ipa-cp.h: Same. * ipa-fnsummary.cc (evaluate_conditions_for_known_args): Same. (evaluate_properties_for_edge): Same. * ipa-prop.cc (struct ipa_vr_ggc_hash_traits): Same. (ipa_vr::get_vrange): Same. (ipa_vr::streamer_read): Same. (ipa_vr::streamer_write): Same. (ipa_vr::dump): Same. (ipa_set_jfunc_vr): Same. (ipa_compute_jump_functions_for_edge): Same. (ipcp_get_parm_bits): Same. (ipcp_update_vr): Same. 
(ipa_record_return_value_range): Same. (ipa_return_value_range): Same. * ipa-prop.h (ipa_return_value_range): Same. (ipa_record_return_value_range): Same. * range-op.h (range_cast): Same. * tree-ssa-dom.cc (dom_opt_dom_walker::set_global_ranges_from_unreachable_edges): Same. (cprop_operand): Same. * tree-ssa-loop-ch.cc (loop_static_stmt_p): Same. * tree-ssa-loop-niter.cc (record_nonwrapping_iv): Same. * tree-ssa-loop-split.cc (split_at_bb_p): Same. * tree-ssa-phiopt.cc (value_replacement): Same. * tree-ssa-strlen.cc (get_range): Same. * tree-ssa-threadedge.cc (hybrid_jt_simplifier::simplify): Same. (hybrid_jt_simplifier::compute_exit_dependencies): Same. * tree-ssanames.cc (set_range_info): Same. (duplicate_ssa_name_range_info): Same. * tree-vrp.cc (remove_unreachable::handle_early): Same. (remove_unreachable::remove_and_update_globals): Same. (execute_ranger_vrp): Same. * value-query.cc (range_query::value_of_expr): Same. (range_query::value_on_edge): Same. (range_query::value_of_stmt): Same. (range_query::value_on_entry): Same. (range_query::value_on_exit): Same. (range_query::get_tree_range): Same. * value-range-storage.cc (vrange_storage::set_vrange): Same. * value-range.cc (Value_Range::dump): Same. (value_range::dump): Same. (debug): Same. * value-range.h (enum value_range_discriminator): Same. (class vrange): Same. (class Value_Range): Same. (class value_range): Same. (Value_Range::Value_Range): Same. (value_range::value_range): Same. (Value_Range::~Value_Range): Same. (value_range::~value_range): Same. (Value_Range::set_type): Same. (value_range::set_type): Same. (Value_Range::init): Same. (value_range::init): Same. (Value_Range::operator=): Same. (value_range::operator=): Same. (Value_Range::operator==): Same. (value_range::operator==): Same. (Value_Range::operator!=): Same. (value_range::operator!=): Same. (Value_Range::supports_type_p): Same. (value_range::supports_type_p): Same. * vr-values.cc (simplify_using_ranges::fold_cond_with_ops): Same. (simplify_using_ranges::legacy_fold_cond): Same.
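A minimal usage sketch, not part of the commit, of how a type-agnostic temporary reads after the rename; "ranger", "name" and "stmt" are placeholder variables:

    value_range r (TREE_TYPE (name));   // may hold an irange, prange or frange
    if (ranger->range_of_expr (r, name, stmt))
      {
        // r now holds an integer, pointer or float range, as appropriate
      }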
2024-06-12  pretty_printer: make all fields private  (David Malcolm, 1 file, -2/+2)
No functional change intended. gcc/analyzer/ChangeLog: * access-diagram.cc (access_range::dump): Update for fields of pretty_printer becoming private. * call-details.cc (call_details::dump): Likewise. * call-summary.cc (call_summary::dump): Likewise. (call_summary_replay::dump): Likewise. * checker-event.cc (checker_event::debug): Likewise. * constraint-manager.cc (range::dump): Likewise. (bounded_range::dump): Likewise. (constraint_manager::dump): Likewise. * engine.cc (exploded_node::dump): Likewise. (exploded_path::dump): Likewise. (exploded_path::dump_to_file): Likewise. * feasible-graph.cc (feasible_graph::dump_feasible_path): Likewise. * program-point.cc (program_point::dump): Likewise. * program-state.cc (extrinsic_state::dump_to_file): Likewise. (sm_state_map::dump): Likewise. (program_state::dump_to_file): Likewise. * ranges.cc (symbolic_byte_offset::dump): Likewise. (symbolic_byte_range::dump): Likewise. * record-layout.cc (record_layout::dump): Likewise. * region-model-reachability.cc (reachable_regions::dump): Likewise. * region-model.cc (region_to_value_map::dump): Likewise. (region_model::dump): Likewise. (model_merger::dump): Likewise. * region-model.h (one_way_id_map<T>::dump): Likewise. * region.cc (region_offset::dump): Likewise. (region::dump): Likewise. * sm-malloc.cc (deallocator_set::dump): Likewise. * store.cc (uncertainty_t::dump): Likewise. (binding_key::dump): Likewise. (bit_range::dump): Likewise. (byte_range::dump): Likewise. (binding_map::dump): Likewise. (binding_cluster::dump): Likewise. (store::dump): Likewise. * supergraph.cc (supergraph::dump_dot_to_file): Likewise. (superedge::dump): Likewise. * svalue.cc (svalue::dump): Likewise. gcc/c-family/ChangeLog: * c-ada-spec.cc (dump_ads): Update for fields of pretty_printer becoming private. * c-pretty-print.cc: Likewise throughout. gcc/c/ChangeLog: * c-objc-common.cc (print_type): Update for fields of pretty_printer becoming private. (c_tree_printer): Likewise. gcc/cp/ChangeLog: * cxx-pretty-print.cc: Update throughout for fields of pretty_printer becoming private. * error.cc: Likewise. gcc/ChangeLog: * diagnostic.cc (diagnostic_context::urls_init): Update for fields of pretty_printer becoming private. (diagnostic_context::print_any_cwe): Likewise. (diagnostic_context::print_any_rules): Likewise. (diagnostic_context::print_option_information): Likewise. * diagnostic.h (diagnostic_format_decoder): Likewise. (diagnostic_prefixing_rule): Likewise, fixing typo. * digraph.cc (test_dump_to_dot): Likewise. * digraph.h (digraph<GraphTraits>::dump_dot_to_file): Likewise. * dumpfile.cc (dump_pretty_printer::emit_any_pending_textual_chunks): Likewise. * gimple-pretty-print.cc (print_gimple_stmt): Likewise. (print_gimple_expr): Likewise. (print_gimple_seq): Likewise. (dump_ssaname_info_to_file): Likewise. (gimple_dump_bb): Likewise. * graph.cc (print_graph_cfg): Likewise. (start_graph_dump): Likewise. * langhooks.cc (lhd_print_error_function): Likewise. * lto-wrapper.cc (print_lto_docs_link): Likewise. * pretty-print.cc (pp_set_real_maximum_length): Convert to... (pretty_printer::set_real_maximum_length): ...this. (pp_clear_state): Convert to... (pretty_printer::clear_state): ...this. (pp_wrap_text): Update for pp_remaining_character_count_for_line becoming a member function. (urlify_quoted_string): Update for fields of pretty_printer becoming private. (pp_format): Convert to... (pretty_printer::format): ...this. Reduce the scope of local variables "old_line_length" and "old_wrapping_mode" and make const. 
Reduce the scope of locals "args", "new_chunk_array", "curarg", "any_unnumbered", and "any_numbered". (pp_output_formatted_text): Update for fields of pretty_printer becoming private. (pp_flush): Likewise. (pp_really_flush): Likewise. (pp_set_line_maximum_length): Likewise. (pp_set_prefix): Convert to... (pretty_printer::set_prefix): ...this. (pp_take_prefix): Update for fields of pretty_printer gaining "m_" prefixes. (pp_destroy_prefix): Likewise. (pp_emit_prefix): Convert to... (pretty_printer::emit_prefix): ...this. (pretty_printer::pretty_printer): Update both ctors for fields gaining "m_" prefixes. (pretty_printer::~pretty_printer): Likewise for dtor. (pp_append_text): Update for pp_emit_prefix becoming pretty_printer::emit_prefix. (pp_remaining_character_count_for_line): Convert to... (pretty_printer::remaining_character_count_for_line): ...this. (pp_character): Update for above change. (pp_maybe_space): Convert to... (pretty_printer::maybe_space): ...this. (pp_begin_url): Convert to... (pretty_printer::begin_url): ...this. (get_end_url_string): Update for fields of pretty_printer becoming private. (pp_end_url): Convert to... (pretty_printer::end_url): ...this. (selftest::test_pretty_printer::test_pretty_printer): Update for fields of pretty_printer becoming private. (selftest::test_urls): Likewise. (selftest::test_null_urls): Likewise. (selftest::test_urlification): Likewise. * pretty-print.h (pp_line_cutoff): Convert from macro to inline function. (pp_prefixing_rule): Likewise. (pp_wrapping_mode): Likewise. (pp_format_decoder): Likewise. (pp_needs_newline): Likewise. (pp_indentation): Likewise. (pp_translate_identifiers): Likewise. (pp_show_color): Likewise. (pp_buffer): Likewise. (pp_get_prefix): Add forward decl to allow friend decl. (pp_take_prefix): Likewise. (pp_destroy_prefix): Likewise. (class pretty_printer): Fix typo in leading comment. Add "friend" decls for the various new accessor functions that were formerly macros and for pp_get_prefix, pp_take_prefix, and pp_destroy_prefix. Make all fields private. (pretty_printer::set_output_stream): New. (pretty_printer::set_prefix): New decl. (pretty_printer::emit_prefix): New decl. (pretty_printer::format): New decl. (pretty_printer::maybe_space): New decl. (pretty_printer::supports_urls_p): New. (pretty_printer::get_url_format): New. (pretty_printer::set_url_format): New. (pretty_printer::begin_url): New decl. (pretty_printer::end_url): New decl. (pretty_printer::set_verbatim_wrapping): New. (pretty_printer::set_padding): New. (pretty_printer::get_padding): New. (pretty_printer::clear_state): New decl. (pretty_printer::set_real_maximum_length): New decl. (pretty_printer::remaining_character_count_for_line): New decl. (pretty_printer::buffer): Rename to... (pretty_printer::m_buffer): ...this. (pretty_printer::prefix): Rename to... (pretty_printer::m_prefix): ...this; (pretty_printer::padding): Rename to... (pretty_printer::m_padding): ...this; (pretty_printer::maximum_length): Rename to... (pretty_printer::m_maximum_length): ...this; (pretty_printer::indent_skip): Rename to... (pretty_printer::m_indent_skip): ...this; (pretty_printer::wrapping): Rename to... (pretty_printer::m_wrapping): ...this; (pretty_printer::format_decoder): Rename to... (pretty_printer::m_format_decoder): ...this; (pretty_printer::emitted_prefix): Rename to... (pretty_printer::m_emitted_prefix): ...this; (pretty_printer::need_newline): Rename to... (pretty_printer::m_need_newline): ...this; (pretty_printer::translate_identifiers): Rename to... 
(pretty_printer::m_translate_identifiers): ...this; (pretty_printer::show_color): Rename to... (pretty_printer::m_show_color): ...this; (pretty_printer::url_format): Rename to... (pretty_printer::m_url_format): ...this; (pp_get_prefix): Reformat. (pp_format_postprocessor): New inline function. (pp_take_prefix): Move decl to before class pretty_printer. (pp_destroy_prefix): Likewise. (pp_set_prefix): Convert to inline function. (pp_emit_prefix): Convert to inline function. (pp_format): Convert to inline function. (pp_maybe_space): Convert to inline function. (pp_begin_url): Convert to inline function. (pp_end_url): Convert to inline function. (pp_set_verbatim_wrapping): Convert from macro to inline function, renaming... (pp_set_verbatim_wrapping_): ...this. * print-rtl.cc (dump_value_slim): Update for fields of pretty_printer becoming private. (dump_insn_slim): Likewise. (dump_rtl_slim): Likewise. * print-tree.cc (print_node): Likewise. * sched-rgn.cc (dump_rgn_dependencies_dot): Likewise. * text-art/canvas.cc (canvas::print_to_pp): Likewise. (canvas::debug): Likewise. (selftest::test_canvas_urls): Likewise. * text-art/dump.h (dump_to_file): Likewise. * text-art/selftests.cc (selftest::assert_canvas_streq): Likewise. * text-art/style.cc (style::print_changes): Likewise. * text-art/styled-string.cc (styled_string::from_fmt_va): Likewise. * tree-diagnostic-path.cc (control_flow_tests): Update for pp_show_color becoming an inline function. * tree-loop-distribution.cc (dot_rdg_1): Update for fields of pretty_printer becoming private. * tree-pretty-print.cc (maybe_init_pretty_print): Likewise. * value-range.cc (vrange::dump): Likewise. (irange_bitmask::dump): Likewise. gcc/fortran/ChangeLog: * error.cc (gfc_clear_pp_buffer): Likewise. (gfc_warning): Likewise. (gfc_warning_check): Likewise. (gfc_error_opt): Likewise. (gfc_error_check): Likewise. gcc/jit/ChangeLog: * jit-recording.cc (recording::function::dump_to_dot): Update for fields of pretty_printer becoming private. gcc/testsuite/ChangeLog: * gcc.dg/plugin/analyzer_cpython_plugin.c (dump_refcnt_info): Update for fields of pretty_printer becoming private. Signed-off-by: David Malcolm <dmalcolm@redhat.com>
2024-06-12  pretty_printer: rename instances named "buffer" to "pp"  (David Malcolm, 1 file, -13/+13)
Various pretty_printer instances are named "buffer", but a pretty_printer *has* a buffer, rather than *is* a buffer. For example, pp_buffer (buffer)->digit_buffer is referring to "buffer"'s buffer's digit_buffer. This mechanical patch renames such variables to "pp", which I find much clearer; the above becomes: pp_buffer (pp)->digit_buffer i.e. "pp's buffer's digit_buffer". No functional change intended. Signed-off-by: David Malcolm <dmalcolm@redhat.com> gcc/c-family/ChangeLog: * c-ada-spec.cc: Rename pretty_printer "buffer" to "pp" throughout. gcc/ChangeLog: * gimple-pretty-print.cc: Rename pretty_printer "buffer" to "pp" throughout. * print-tree.cc (print_node): Likewise. * tree-loop-distribution.cc (dot_rdg_1): Likewise. * tree-pretty-print.h (dump_location): Likewise. * value-range.cc (vrange::dump): Likewise. (irange_bitmask::dump): Likewise. Signed-off-by: David Malcolm <dmalcolm@redhat.com>
2024-06-03  Remove value_range typedef.  (Aldy Hernandez, 1 file, -14/+0)
Now that pointers and integers have been disambiguated from irange, and all the pointer range temporaries use prange, we can reclaim value_range as a general purpose range container. This patch removes the typedef, in favor of int_range_max, thus providing slightly better ranges in places. I have also used int_range<1> or <2> when it's known ahead of time how big the range will be, thus saving a few words. In a follow-up patch I will rename the Value_Range temporary to value_range. No change in performance. gcc/ChangeLog: * builtins.cc (expand_builtin_strnlen): Replace value_range use with int_range_max or irange when appropriate. (determine_block_size): Same. * fold-const.cc (minmax_from_comparison): Same. * gimple-array-bounds.cc (check_out_of_bounds_and_warn): Same. (array_bounds_checker::check_array_ref): Same. * gimple-fold.cc (size_must_be_zero_p): Same. * gimple-predicate-analysis.cc (find_var_cmp_const): Same. * gimple-ssa-sprintf.cc (get_int_range): Same. (format_integer): Same. (try_substitute_return_value): Same. (handle_printf_call): Same. * gimple-ssa-warn-restrict.cc (builtin_memref::extend_offset_range): Same. * graphite-sese-to-poly.cc (add_param_constraints): Same. * internal-fn.cc (get_min_precision): Same. * match.pd: Same. * pointer-query.cc (get_size_range): Same. * range-op.cc (get_shift_range): Same. (operator_trunc_mod::op1_range): Same. (operator_trunc_mod::op2_range): Same. * range.cc (range_negatives): Same. * range.h (range_positives): Same. (range_negatives): Same. * tree-affine.cc (expr_to_aff_combination): Same. * tree-data-ref.cc (compute_distributive_range): Same. (nop_conversion_for_offset_p): Same. (split_constant_offset): Same. (split_constant_offset_1): Same. (dr_step_indicator): Same. * tree-dfa.cc (get_ref_base_and_extent): Same. * tree-scalar-evolution.cc (iv_can_overflow_p): Same. * tree-ssa-math-opts.cc (optimize_spaceship): Same. * tree-ssa-pre.cc (insert_into_preds_of_block): Same. * tree-ssa-reassoc.cc (optimize_range_tests_to_bit_test): Same. * tree-ssa-strlen.cc (compare_nonzero_chars): Same. (dump_strlen_info): Same. (get_range_strlen_dynamic): Same. (set_strlen_range): Same. (maybe_diag_stxncpy_trunc): Same. (strlen_pass::get_len_or_size): Same. (strlen_pass::handle_builtin_string_cmp): Same. (strlen_pass::count_nonzero_bytes_addr): Same. (strlen_pass::handle_integral_assign): Same. * tree-switch-conversion.cc (bit_test_cluster::emit): Same. * tree-vect-loop-manip.cc (vect_gen_vector_loop_niters): Same. (vect_do_peeling): Same. * tree-vect-patterns.cc (vect_get_range_info): Same. (vect_recog_divmod_pattern): Same. * tree.cc (get_range_pos_neg): Same. * value-range.cc (debug): Remove value_range variants. * value-range.h (value_range): Remove typedef. * vr-values.cc (simplify_using_ranges::op_with_boolean_value_range_p): Replace value_range use with int_range_max or irange when appropriate. (check_for_binary_op_overflow): Same. (simplify_using_ranges::legacy_fold_cond_overflow): Same. (find_case_label_ranges): Same. (simplify_using_ranges::simplify_abs_using_ranges): Same. (test_for_singularity): Same. (simplify_using_ranges::simplify_compare_using_ranges_1): Same. (simplify_using_ranges::simplify_casted_compare): Same. (simplify_using_ranges::simplify_switch_using_ranges): Same. (simplify_conversion_using_ranges): Same. (simplify_using_ranges::two_valued_val_range_p): Same.
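An illustrative sketch of the replacements described above; "type" is a placeholder integral type and "lo"/"hi" are placeholder INTEGER_CST trees:

    int_range_max r;                                           // general-purpose temporary
    int_range<2> t (type, wi::to_wide (lo), wi::to_wide (hi)); // when the subrange count is known to be small
    r.union_ (t);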
2024-05-17  [prange] Drop range to VARYING if the bitmask intersection made it so [PR115131]  (Aldy Hernandez, 1 file, -0/+21)
If the intersection of the bitmasks made the range span the entire domain, normalize the range to VARYING. gcc/ChangeLog: PR middle-end/115131 * value-range.cc (prange::intersect): Set VARYING if intersection of bitmasks made the range span the entire domain. (range_tests_misc): New test.
2024-05-16  Revert "Revert: "Enable prange support.""  (Aldy Hernandez, 1 file, -0/+1)
This reverts commit d7bb8eaade3cd3aa70715c8567b4d7b08098e699 and enables prange support again.
2024-05-10  [prange] Fix thinko in prange::update_bitmask() [PR115026]  (Aldy Hernandez, 1 file, -1/+1)
gcc/ChangeLog: PR tree-optimization/115026 * value-range.cc (prange::update_bitmask): Use operand bitmask.
2024-05-10  Revert: "Enable prange support." [PR114985]  (Aldy Hernandez, 1 file, -1/+0)
This reverts commit 36e877996936abd8bd08f8b1d983c8d1023a5842 until the IPA pass is fixed with regards to POINTER = POINTER <RELOP> POINTER.
2024-05-08  Enable prange support.  (Aldy Hernandez, 1 file, -0/+1)
This throws the switch on prange. After this patch, it is no longer valid to store a pointer in an irange (or vice versa). Instead, they must go in prange, which is faster and more memory efficient. I will push this now, so I have time to do any follow-up bugfixing before going on paternity leave. There are various cleanups we plan on doing after this patch (faster intersect/union, remove range-op-mixed.h, remove value_range in favor of int_range_max, reclaim the name for the Value_Range temporary, clean up range-ops, etc etc). But we will hold off on those for now to make it easier to revert this patch, if for some reason we need to do so while I'm away. Tested on x86-64 Linux. gcc/ChangeLog: * gimple-range-cache.cc (sbr_sparse_bitmap::sbr_sparse_bitmap): Change irange to prange. * gimple-range-fold.cc (fold_using_range::fold_stmt): Same. (fold_using_range::range_of_address): Same. * gimple-range-fold.h (range_of_address): Same. * gimple-range-infer.cc (gimple_infer_range::add_nonzero): Same. * gimple-range-op.cc (class cfn_strlen): Same. * gimple-range-path.cc (path_range_query::adjust_for_non_null_uses): Same. * gimple-ssa-warn-access.cc (pass_waccess::check_pointer_uses): Same. * tree-ssa-structalias.cc (find_what_p_points_to): Same. * range-op-ptr.cc (range_op_table::initialize_pointer_ops): Remove hybrid entries in table. * range-op.cc (range_op_table::range_op_table): Add pointer entries for bitwise and/or and min/max. * value-range.cc (irange::verify_range): Add assert. * value-range.h (irange::varying_compatible_p): Remove check for error_mark_node. (irange::supports_p): Remove pointer support. * ipa-cp.h (ipa_supports_p): Add prange support.
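A hedged sketch of the split described above; "type" is a placeholder, and the dispatch is spelled out only for illustration (the type-agnostic temporary handles either kind transparently):

    if (irange::supports_p (type))        // integral types
      {
        int_range_max r;
        r.set_varying (type);
        // ... integer range math ...
      }
    else if (prange::supports_p (type))   // pointer types
      {
        prange p;
        p.set_nonzero (type);             // e.g. a pointer known to be non-NULL
        // ... pointer range math ...
      }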
2024-05-04  Add prange implementation for get_legacy_range.  (Aldy Hernandez, 1 file, -2/+33)
gcc/ChangeLog: * value-range.cc (get_legacy_range): New version for prange.
2024-05-04  Add hashing support for prange.  (Aldy Hernandez, 1 file, -0/+16)
gcc/ChangeLog: * value-range.cc (add_vrange): Add prange support.
2024-05-04  Implement basic prange class.  (Aldy Hernandez, 1 file, -5/+298)
This provides a bare prange class with bounds and bitmasks. It will be a drop-in replacement for pointer ranges, so we can pull their support from irange. The range-op code will be contributed as a follow-up. The code is disabled by default, as irange::supports_p still accepts pointers: inline bool irange::supports_p (const_tree type) { return INTEGRAL_TYPE_P (type) || POINTER_TYPE_P (type); } Once the prange operators are implemented in range-ops, pointer support will be removed from irange to activate pranges. gcc/ChangeLog: * value-range-pretty-print.cc (vrange_printer::visit): New. * value-range-pretty-print.h: Declare prange visit() method. * value-range.cc (vrange::operator=): Add prange support. (vrange::operator==): Same. (prange::accept): New. (prange::set_nonnegative): New. (prange::set): New. (prange::contains_p): New. (prange::singleton_p): New. (prange::lbound): New. (prange::ubound): New. (prange::union_): New. (prange::intersect): New. (prange::operator=): New. (prange::operator==): New. (prange::invert): New. (prange::verify_range): New. (prange::update_bitmask): New. (range_tests_misc): Use prange. * value-range.h (enum value_range_discriminator): Add VR_PRANGE. (class prange): New. (Value_Range::init): Add prange support. (Value_Range::operator=): Same. (Value_Range::supports_type_p): Same. (prange::prange): New. (prange::supports_p): New. (prange::supports_type_p): New. (prange::set_undefined): New. (prange::set_varying): New. (prange::set_nonzero): New. (prange::set_zero): New. (prange::contains_p): New. (prange::zero_p): New. (prange::nonzero_p): New. (prange::type): New. (prange::lower_bound): New. (prange::upper_bound): New. (prange::varying_compatible_p): New. (prange::get_bitmask): New. (prange::fits_p): New.
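A small sketch using only methods named in the ChangeLog above; "ptr_type" is a placeholder pointer type:

    prange p;
    p.set_nonzero (ptr_type);   // pointer known to be non-NULL
    if (p.nonzero_p ())
      {
        // NULL can be excluded here
      }
    prange q;
    q.set_zero (ptr_type);      // pointer known to be NULL
    p.union_ (q);               // now covers both cases: no longer known non-NULL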
2024-05-01  Cleanups to unsupported_range.  (Aldy Hernandez, 1 file, -3/+7)
Here are some cleanups to unsupported_range so the assignment operator takes an unsupported_range and behaves like the other ranges. This makes subsequent cleanups easier. gcc/ChangeLog: * value-range.cc (unsupported_range::union_): Cast vrange to unsupported_range. (unsupported_range::intersect): Same. (unsupported_range::operator=): Make argument an unsupported_range. * value-range.h: New constructor.
2024-04-28  Callers of irange_bitmask must normalize value/mask pairs.  (Aldy Hernandez, 1 file, -2/+5)
As per the documentation, irange_bitmask must have the unknown bits in the mask set to 0 in the value field. Even though we say we must have normalized value/mask bits, we don't enforce it, opting to normalize on the fly in union and intersect. Avoiding this lazy enforcing as well as the extra saving/restoring involved in returning the changed status, gives us a performance increase of 1.25% for VRP and 1.51% for ipa-CP. gcc/ChangeLog: * tree-ssa-ccp.cc (ccp_finalize): Normalize before calling set_bitmask. * value-range.cc (irange::intersect_bitmask): Calculate changed irange_bitmask bits on our own. (irange::union_bitmask): Same. (irange_bitmask::verify_mask): Verify that bits are normalized. * value-range.h (irange_bitmask::union_): Do not normalize. Remove return value. (irange_bitmask::intersect): Same.
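A plain-integer illustration of the invariant (the real code uses wide_int): a mask bit of 1 means the bit is unknown, 0 means it is known, and the value field carries the known bits, so normalizing simply clears the unknown positions in the value:

    #include <cstdint>

    uint64_t mask  = 0xfffffff0;    // low 4 bits known, the rest unknown
    uint64_t value = 0x0000010c;    // bit 8 is unknown yet set: not normalized
    uint64_t norm  = value & ~mask; // 0xc: the form callers must now provide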
2024-04-28  Remove range_zero and range_nonzero.  (Aldy Hernandez, 1 file, -3/+4)
Remove legacy range_zero and range_nonzero as they return by value, which make it not work in a separate irange and prange world. Also, we already have set_zero and set_nonzero methods in vrange. gcc/ChangeLog: * range-op-ptr.cc (pointer_plus_operator::wi_fold): Use method range setters instead of out of line functions. (pointer_min_max_operator::wi_fold): Same. (pointer_and_operator::wi_fold): Same. (pointer_or_operator::wi_fold): Same. * range-op.cc (operator_negate::fold_range): Same. (operator_addr_expr::fold_range): Same. (range_op_cast_tests): Same. * range.cc (range_zero): Remove. (range_nonzero): Remove. * range.h (range_zero): Remove. (range_nonzero): Remove. * value-range.cc (range_tests_misc): Use method instead of out of line function.
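A sketch of the method setters that replace the removed helpers; "type" is a placeholder integral type:

    int_range_max r;
    r.set_zero (type);      // previously: r = range_zero (type);
    r.set_nonzero (type);   // previously: r = range_nonzero (type);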
2024-04-28  Move get_bitmask_from_range out of irange class.  (Aldy Hernandez, 1 file, -26/+26)
prange will also have bitmasks, so it will need to use get_bitmask_from_range. gcc/ChangeLog: * value-range.cc (get_bitmask_from_range): Move out of irange class. (irange::get_bitmask): Call function instead of internal method. * value-range.h (class irange): Remove get_bitmask_from_range.
2024-04-28  Accept a vrange in get_legacy_range.  (Aldy Hernandez, 1 file, -1/+16)
In preparation for prange, make get_legacy_range take a generic vrange, not just an irange. gcc/ChangeLog: * value-range.cc (get_legacy_range): Make static and add another version of get_legacy_range that takes a vrange. * value-range.h (class irange): Remove unnecessary friendship with get_legacy_range. (get_legacy_range): Accept a vrange.
2024-04-28  Remove GTY support for vrange and derived classes.  (Aldy Hernandez, 1 file, -73/+0)
Now that we have a vrange storage class to save ranges in long-term memory, there is no need for GTY markers for any of the vrange classes, since they should never live in GC. gcc/ChangeLog: * value-range-storage.h: Remove friends. * value-range.cc (gt_ggc_mx): Remove. (gt_pch_nx): Remove. * value-range.h (class vrange): Remove GTY markers. (class irange): Same. (class int_range): Same. (class frange): Same. (gt_ggc_mx): Remove. (gt_pch_nx): Remove.
2024-04-28  Move bitmask routines to vrange base class.  (Aldy Hernandez, 1 file, -2/+14)
Any range can theoretically have a bitmask of set bits. This patch moves the bitmask accessors to the base class. This cleans up some users in IPA*, and will provide a cleaner interface when prange is in place. gcc/ChangeLog: * ipa-cp.cc (propagate_bits_across_jump_function): Access bitmask through base class. (ipcp_store_vr_results): Same. * ipa-prop.cc (ipa_compute_jump_functions_for_edge): Same. (ipcp_get_parm_bits): Same. (ipcp_update_vr): Same. * range-op-mixed.h (update_known_bitmask): Change argument to vrange. * range-op.cc (update_known_bitmask): Same. * value-range.cc (vrange::update_bitmask): New. (irange::set_nonzero_bits): Move to vrange class. (irange::get_nonzero_bits): Same. * value-range.h (class vrange): Add update_bitmask, get_bitmask, get_nonzero_bits, and set_nonzero_bits. (class irange): Make bitmask methods virtual overrides. (class Value_Range): Add get_bitmask and update_bitmask.
2024-04-28  Add tree versions of lower and upper bounds to vrange.  (Aldy Hernandez, 1 file, -20/+36)
This patch adds vrange::lbound() and vrange::ubound() that return trees. These can be used in generic code that is type agnostic, and avoids special casing for pointers and integers in places where we handle both. It also cleans up a wart in the Value_Range class. gcc/ChangeLog: * tree-ssa-loop-niter.cc (refine_value_range_using_guard): Convert bound to wide_int. * value-range.cc (Value_Range::lower_bound): Remove. (Value_Range::upper_bound): Remove. (unsupported_range::lbound): New. (unsupported_range::ubound): New. (frange::lbound): New. (frange::ubound): New. (irange::lbound): New. (irange::ubound): New. * value-range.h (class vrange): Add lbound() and ubound(). (class irange): Same. (class frange): Same. (class unsupported_range): Same. (class Value_Range): Rename lower_bound and upper_bound to lbound and ubound respectively.
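A sketch of the type-agnostic accessors added here; "vr" stands for any supported range (irange, frange or prange):

    tree lo = vr.lbound ();   // lower bound as a tree constant
    tree hi = vr.ubound ();   // upper bound as a tree constant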
2024-04-28  Make vrange an abstract class.  (Aldy Hernandez, 1 file, -22/+40)
Explicitly make vrange an abstract class. This involves fleshing out the unsupported_range overrides which we were inheriting by default from vrange. gcc/ChangeLog: * value-range.cc (unsupported_range::accept): Move down. (vrange::contains_p): Rename to... (unsupported_range::contains_p): ...this. (vrange::singleton_p): Rename to... (unsupported_range::singleton_p): ...this. (vrange::set): Rename to... (unsupported_range::set): ...this. (vrange::type): Rename to... (unsupported_range::type): ...this. (vrange::supports_type_p): Rename to... (unsupported_range::supports_type_p): ...this. (vrange::set_undefined): Rename to... (unsupported_range::set_undefined): ...this. (vrange::set_varying): Rename to... (unsupported_range::set_varying): ...this. (vrange::union_): Rename to... (unsupported_range::union_): ...this. (vrange::intersect): Rename to... (unsupported_range::intersect): ...this. (vrange::zero_p): Rename to... (unsupported_range::zero_p): ...this. (vrange::nonzero_p): Rename to... (unsupported_range::nonzero_p): ...this. (vrange::set_nonzero): Rename to... (unsupported_range::set_nonzero): ...this. (vrange::set_zero): Rename to... (unsupported_range::set_zero): ...this. (vrange::set_nonnegative): Rename to... (unsupported_range::set_nonnegative): ...this. (vrange::fits_p): Rename to... (unsupported_range::fits_p): ...this. (unsupported_range::operator=): New. (frange::fits_p): New. * value-range.h (class vrange): Make an abstract class. (class unsupported_range): Declare override methods.
2024-04-09  Fix up duplicated words mostly in comments, part 2  (Jakub Jelinek, 1 file, -1/+1)
Another patch from eyeballing git grep -v 'long long\|optab optab\|template template\|double double' | grep ' \([a-zA-Z]\+\) \1 ' output, this time in gcc/ subdirectory. 2024-04-09 Jakub Jelinek <jakub@redhat.com> gcc/ * expr.cc (convert_mode_scalar): Fix duplicated words in comment; into into -> it into. * function.h (function::cond_uids): Fix duplicated words in comment; same same -> same. * config/riscv/riscv-vector-costs.cc (costs::adjust_vect_cost_per_loop): Fix duplicated words in comment; model model -> model. * config/riscv/riscv-vector-builtins-shapes.cc (build_base): Fix duplicated words in comment; for for -> for. * config/riscv/riscv-avlprop.cc (pass_avlprop::execute): Fix duplicated words in comment; more more -> more. * config/aarch64/driver-aarch64.cc (host_detect_local_cpu): Fix duplicated words in comment; be be -> be. * tree-profile.cc (masking_vectors): Fix duplicated words in comment; has has -> has, the the -> the. * value-range.cc (irange::set_range_from_bitmask): Fix duplicated words in comment; the the -> the. * gcov.cc (add_condition_counts): Fix duplicated words in comment; to to -> to. * vr-values.cc (get_scev_info): Fix duplicated words in comment; the the -> to the. * tree-vrp.cc (fully_replaceable): Fix duplicated words in comment; by by -> by. * mode-switching.cc (single_succ_confluence_n): Fix duplicated words in comment; the the -> the. * tree-ssa-phiopt.cc (value_replacement): Fix duplicated words in comment; can can -> we can. * gimple-range-phi.cc (phi_analyzer::process_phi): Fix duplicated words in comment; it it -> it is. * tree-ssa-sccvn.cc (visit_phi): Fix duplicated words in comment; to to -> to. * rtl-ssa/accesses.h (use_info::next_debug_insn_use): Fix duplicated words in comment; if if -> if. * doc/options.texi (InverseMask): Fix duplicated words; and and -> and. Change take to takes. * doc/invoke.texi (fanalyzer-undo-inlining): Fix duplicated words; be be -> be. (-minline-memops-threshold): Likewise. gcc/analyzer/ * analyzer.opt (Wanalyzer-undefined-behavior-strtok): Fix duplicated words; in in -> in. * program-state.cc (sm_state_map::replay_call_summary): Fix duplicated words in comment; to to -> to. (program_state::replay_call_summary): Likewise. * region-model.cc (region_model::replay_call_summary): Likewise. gcc/c/ * c-decl.cc (previous_tag): Fix duplicated words in comment; the the -> the. (diagnose_mismatched_decls): Fix duplicated words in comment; about about -> about. gcc/cp/ * constexpr.cc (build_new_constexpr_heap_type): Fix duplicated words in comment; is is -> is. * cp-tree.def (CO_RETURN_EXPR): Fix duplicated words in comment; for for -> for. * parser.cc (fixup_blocks_walker): Fix duplicated words in comment; is is -> is. * semantics.cc (fixup_template_type): Fix duplicated words in comment; for for -> for. (finish_omp_for): Fix duplicated words in comment; the the -> the. * pt.cc (more_specialized_fn): Fix duplicated words in comment; think think -> think. (type_targs_deducible_from): Fix duplicated words in comment; the the -> the. gcc/jit/ * docs/topics/expressions.rst (Constructor expressions): Fix duplicated words; have have -> have.
2024-01-03  Update copyright years.  (Jakub Jelinek, 1 file, -1/+1)
2023-11-03  Remove simple ranges from trailing zero bitmasks.  (Andrew MacLeod, 1 file, -0/+30)
During the intersection operation, it can be helpful to remove any low-end ranges when the bitmask has trailing zeros. This prevents obviously incorrect ranges from appearing without requiring a bitmask check. * value-range.cc (irange_bitmask::adjust_range): New. (irange::intersect_bitmask): Call adjust_range. * value-range.h (irange_bitmask::adjust_range): New prototype.
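A plain-integer illustration of why the low end can be discarded (the exact subrange surgery is done by adjust_range): with two trailing bits known zero, every member of the range is a multiple of 4, so the bottom of the range can be rounded up accordingly:

    #include <cstdint>

    uint64_t lo = 1, hi = 10;                          // range [1, 10]
    uint64_t align  = 4;                               // 1 << (trailing known-zero bits)
    uint64_t new_lo = (lo + align - 1) & ~(align - 1); // 4: smallest multiple of 4 >= 1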
2023-10-25  Faster irange union for appending ranges.  (Andrew MacLeod, 1 file, -1/+44)
A common pattern is to append a range to an existing range via union. This optimizes that process. * value-range.cc (irange::union_append): New. (irange::union_): Call union_append when appropriate. * value-range.h (irange::union_append): New prototype.
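A sketch of the pattern this fast path targets; "type" and "prec" (its precision) are placeholders, and the appended subrange lies entirely above the existing one:

    int_range_max r (type, wi::shwi (1, prec), wi::shwi (5, prec));    // [1, 5]
    int_range<1>  n (type, wi::shwi (10, prec), wi::shwi (15, prec));  // [10, 15]
    r.union_ (n);   // [1, 5][10, 15]: handled by the new union_append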
2023-10-15  wide-int: Fix estimation of buffer sizes for wide_int printing [PR111800]  (Jakub Jelinek, 1 file, -5/+4)
As mentioned in the PR, my estimations on needed buffer size for wide_int and especially widest_int printing were incorrect, I've used get_len () in the estimations, but that is true only for !wi::neg_p (x) values. Under the hood, we have 3 ways to print numbers. print_decs which if if ((wi.get_precision () <= HOST_BITS_PER_WIDE_INT) || (wi.get_len () == 1)) uses sprintf which always fits into WIDE_INT_PRINT_BUFFER_SIZE (positive or negative) and otherwise uses print_hex, print_decu which if if ((wi.get_precision () <= HOST_BITS_PER_WIDE_INT) || (wi.get_len () == 1 && !wi::neg_p (wi))) uses sprintf which always fits into WIDE_INT_PRINT_BUFFER_SIZE (positive only) and print_hex, which doesn't print most significant limbs which are zero and the first limb which is non-zero prints such that redundant 0 hex digits aren't printed, while all limbs below that are printed with "%016" PRIx64. For wi::neg_p (x) values, the first limb of the precision is always non-zero, so we print all the limbs for the precision. So, the current estimations are accurate if !wi::neg_p (x), or when print_decs will be used and x.get_len () == 1, otherwise we need to use estimation based on get_precision () rather than get_len (). I've introduced new inlines print_{dec{,s,u},hex}_buf_size which compute the needed buffer length in bytes and return true if WIDE_INT_PRINT_BUFFER_SIZE isn't sufficient and caller should XALLOCAVEC the buffer. 2023-10-15 Jakub Jelinek <jakub@redhat.com> PR tree-optimization/111800 gcc/ * wide-int-print.h (print_dec_buf_size, print_decs_buf_size, print_decu_buf_size, print_hex_buf_size): New inline functions. * wide-int.cc (assert_deceq): Use print_dec_buf_size. (assert_hexeq): Use print_hex_buf_size. * wide-int-print.cc (print_decs): Use print_decs_buf_size. (print_decu): Use print_decu_buf_size. (print_hex): Use print_hex_buf_size. (pp_wide_int_large): Use print_dec_buf_size. * value-range.cc (irange_bitmask::dump): Use print_hex_buf_size. * value-range-pretty-print.cc (vrange_printer::print_irange_bitmasks): Likewise. * tree-ssa-loop-niter.cc (do_warn_aggressive_loop_optimizations): Use print_dec_buf_size. Use TYPE_SIGN macro in print_dec call argument. gcc/c-family/ * c-warn.cc (match_case_to_enum_1): Assert w.get_precision () is smaller or equal to WIDE_INT_MAX_INL_PRECISION rather than w.get_len () is smaller or equal to WIDE_INT_MAX_INL_ELTS.
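A hedged sketch of the calling convention described above; the exact signatures are assumptions based on the description, with "w" and "sgn" as placeholders:

    char buf[WIDE_INT_PRINT_BUFFER_SIZE];
    char *p = buf;
    unsigned int len;
    if (print_dec_buf_size (w, sgn, &len))  // true: the stack buffer is too small
      p = XALLOCAVEC (char, len);
    print_dec (w, p, sgn);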
2023-10-12  wide-int: Allow up to 16320 bits wide_int and change widest_int precision to 32640 bits [PR102989]  (Jakub Jelinek, 1 file, -5/+12)
32640 bits [PR102989] As mentioned in the _BitInt support thread, _BitInt(N) is currently limited by the wide_int/widest_int maximum precision limitation, which is depending on target 191, 319, 575 or 703 bits (one less than WIDE_INT_MAX_PRECISION). That is fairly low limit for _BitInt, especially on the targets with the 191 bit limitation. The following patch bumps that limit to 16319 bits on all arches (which support _BitInt at all), which is the limit imposed by INTEGER_CST representation (unsigned char members holding number of HOST_WIDE_INT limbs). In order to achieve that, wide_int is changed from a trivially copyable type which contained just an inline array of WIDE_INT_MAX_ELTS (3, 5, 9 or 11 limbs depending on target) limbs into a non-trivially copy constructible, copy assignable and destructible type which for the usual small cases (up to WIDE_INT_MAX_INL_ELTS which is the former WIDE_INT_MAX_ELTS) still uses an inline array of limbs, but for larger precisions uses heap allocated limb array. This makes wide_int unusable in GC structures, so for dwarf2out which was the only place which needed it there is a new rwide_int type (restricted wide_int) which supports only up to RWIDE_INT_MAX_ELTS limbs inline and is trivially copyable (dwarf2out should never deal with large _BitInt constants, those should have been lowered earlier). Similarly, widest_int has been changed from a trivially copyable type which contained also an inline array of WIDE_INT_MAX_ELTS limbs (but unlike wide_int didn't contain precision and assumed that to be WIDE_INT_MAX_PRECISION) into a non-trivially copy constructible, copy assignable and destructible type which has always WIDEST_INT_MAX_PRECISION precision (32640 bits currently, twice as much as INTEGER_CST limitation allows) and unlike wide_int decides depending on get_len () value whether it uses an inline array (again, up to WIDE_INT_MAX_INL_ELTS) or heap allocated one. In wide-int.h this means we need to estimate an upper bound on how many limbs will wide-int.cc (usually, sometimes wide-int.h) need to write, heap allocate if needed based on that estimation and upon set_len which is done at the end if we guessed over WIDE_INT_MAX_INL_ELTS and allocated dynamically, while we actually need less than that copy/deallocate. The unexact guesses are needed because the exact computation of the length in wide-int.cc is sometimes quite complex and especially canonicalize at the end can decrease it. widest_int is again because of this not usable in GC structures, so cfgloop.h has been changed to use fixed_wide_int_storage <WIDE_INT_MAX_INL_PRECISION> and punt if we'd have larger _BitInt based iterators, programs having more than 128-bit iterators will be hopefully rare and I think it is fine to treat loops with more than 2^127 iterations as effectively possibly infinite, omp-general.cc is changed to use fixed_wide_int_storage <1024>, as it better should support scores with the same precision on all arches. Code which used WIDE_INT_PRINT_BUFFER_SIZE sized buffers for printing wide_int/widest_int into buffer had to be changed to use XALLOCAVEC for larger lengths. On x86_64, the patch in --enable-checking=yes,rtl,extra configured bootstrapped cc1plus enlarges the .text section by 1.01% - from 0x25725a5 to 0x25e5555 and similarly at least when compiling insn-recog.cc with the usual bootstrap option slows compilation down by 1.01%, user 4m22.046s and 4m22.384s on vanilla trunk vs. 4m25.947s and 4m25.581s on patched trunk. 
I'm afraid some code size growth and compile time slowdown is unavoidable in this case, we use wide_int and widest_int everywhere, and while the rare cases are marked with UNLIKELY macros, it still means extra checks for it. The patch also regresses +FAIL: gm2/pim/fail/largeconst.mod, -O +FAIL: gm2/pim/fail/largeconst.mod, -O -g +FAIL: gm2/pim/fail/largeconst.mod, -O3 -fomit-frame-pointer +FAIL: gm2/pim/fail/largeconst.mod, -O3 -fomit-frame-pointer -finline-functions +FAIL: gm2/pim/fail/largeconst.mod, -Os +FAIL: gm2/pim/fail/largeconst.mod, -g +FAIL: gm2/pim/fail/largeconst2.mod, -O +FAIL: gm2/pim/fail/largeconst2.mod, -O -g +FAIL: gm2/pim/fail/largeconst2.mod, -O3 -fomit-frame-pointer +FAIL: gm2/pim/fail/largeconst2.mod, -O3 -fomit-frame-pointer -finline-functions +FAIL: gm2/pim/fail/largeconst2.mod, -Os +FAIL: gm2/pim/fail/largeconst2.mod, -g tests, which previously were rejected with error: constant literal ‘12345678912345678912345679123456789123456789123456789123456789123456791234567891234567891234567891234567891234567912345678912345678912345678912345678912345679123456789123456789’ exceeds internal ZTYPE range kind of errors, but now are accepted. Seems the FE tries to parse constants into widest_int in that case and only diagnoses if widest_int overflows, that seems wrong, it should at least punt if stuff doesn't fit into WIDE_INT_MAX_PRECISION, but perhaps far less than that, if it wants support for middle-end for precisions above 128-bit, it better should be using BITINT_TYPE. Will file a PR and defer to Modula2 maintainer. 2023-10-12 Jakub Jelinek <jakub@redhat.com> PR c/102989 * wide-int.h: Adjust file comment. (WIDE_INT_MAX_INL_ELTS): Define to former value of WIDE_INT_MAX_ELTS. (WIDE_INT_MAX_INL_PRECISION): Define. (WIDE_INT_MAX_ELTS): Change to 255. Assert that WIDE_INT_MAX_INL_ELTS is smaller than WIDE_INT_MAX_ELTS. (RWIDE_INT_MAX_ELTS, RWIDE_INT_MAX_PRECISION, WIDEST_INT_MAX_ELTS, WIDEST_INT_MAX_PRECISION): Define. (WI_BINARY_RESULT_VAR, WI_UNARY_RESULT_VAR): Change write_val callers to pass 0 as a new argument. (class widest_int_storage): Likewise. (widest_int, widest2_int): Change typedefs to use widest_int_storage rather than fixed_wide_int_storage. (enum wi::precision_type): Add INL_CONST_PRECISION enumerator. (struct binary_traits): Add partial specializations for INL_CONST_PRECISION. (generic_wide_int): Add needs_write_val_arg static data member. (int_traits): Likewise. (wide_int_storage): Replace val non-static data member with a union u of it and HOST_WIDE_INT *valp. Declare copy constructor, copy assignment operator and destructor. Add unsigned int argument to write_val. (wide_int_storage::wide_int_storage): Initialize precision to 0 in the default ctor. Remove unnecessary {}s around STATIC_ASSERTs. Assert in non-default ctor T's precision_type is not INL_CONST_PRECISION and allocate u.valp for large precision. Add copy constructor. (wide_int_storage::~wide_int_storage): New. (wide_int_storage::operator=): Add copy assignment operator. In assignment operator remove unnecessary {}s around STATIC_ASSERTs, assert ctor T's precision_type is not INL_CONST_PRECISION and if precision changes, deallocate and/or allocate u.valp. (wide_int_storage::get_val): Return u.valp rather than u.val for large precision. (wide_int_storage::write_val): Likewise. Add an unused unsigned int argument. (wide_int_storage::set_len): Use write_val instead of writing val directly. (wide_int_storage::from, wide_int_storage::from_array): Adjust write_val callers. 
(wide_int_storage::create): Allocate u.valp for large precisions. (wi::int_traits <wide_int_storage>::get_binary_precision): New. (fixed_wide_int_storage::fixed_wide_int_storage): Make default ctor defaulted. (fixed_wide_int_storage::write_val): Add unused unsigned int argument. (fixed_wide_int_storage::from, fixed_wide_int_storage::from_array): Adjust write_val callers. (wi::int_traits <fixed_wide_int_storage>::get_binary_precision): New. (WIDEST_INT): Define. (widest_int_storage): New template class. (wi::int_traits <widest_int_storage>): New. (trailing_wide_int_storage::write_val): Add unused unsigned int argument. (wi::get_binary_precision): Use wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_precision rather than get_precision on get_binary_result. (wi::copy): Adjust write_val callers. Don't call set_len if needs_write_val_arg. (wi::bit_not): If result.needs_write_val_arg, call write_val again with upper bound estimate of len. (wi::sext, wi::zext, wi::set_bit): Likewise. (wi::bit_and, wi::bit_and_not, wi::bit_or, wi::bit_or_not, wi::bit_xor, wi::add, wi::sub, wi::mul, wi::mul_high, wi::div_trunc, wi::div_floor, wi::div_ceil, wi::div_round, wi::divmod_trunc, wi::mod_trunc, wi::mod_floor, wi::mod_ceil, wi::mod_round, wi::lshift, wi::lrshift, wi::arshift): Likewise. (wi::bswap, wi::bitreverse): Assert result.needs_write_val_arg is false. (gt_ggc_mx, gt_pch_nx): Remove generic template for all generic_wide_int, instead add functions and templates for each storage of generic_wide_int. Make functions for generic_wide_int <wide_int_storage> and templates for generic_wide_int <widest_int_storage <N>> deleted. (wi::mask, wi::shifted_mask): Adjust write_val calls. * wide-int.cc (zeros): Decrease array size to 1. (BLOCKS_NEEDED): Use CEIL. (canonize): Use HOST_WIDE_INT_M1. (wi::from_buffer): Pass 0 to write_val. (wi::to_mpz): Use CEIL. (wi::from_mpz): Likewise. Pass 0 to write_val. Use WIDE_INT_MAX_INL_ELTS instead of WIDE_INT_MAX_ELTS. (wi::mul_internal): Use WIDE_INT_MAX_INL_PRECISION instead of MAX_BITSIZE_MODE_ANY_INT in automatic array sizes, for prec above WIDE_INT_MAX_INL_PRECISION estimate precision from lengths of operands. Use XALLOCAVEC allocated buffers for prec above WIDE_INT_MAX_INL_PRECISION. (wi::divmod_internal): Likewise. (wi::lshift_large): For len > WIDE_INT_MAX_INL_ELTS estimate it from xlen and skip. (rshift_large_common): Remove xprecision argument, add len argument with len computed in caller. Don't return anything. (wi::lrshift_large, wi::arshift_large): Compute len here and pass it to rshift_large_common, for lengths above WIDE_INT_MAX_INL_ELTS using estimations from xlen if possible. (assert_deceq, assert_hexeq): For lengths above WIDE_INT_MAX_INL_ELTS use XALLOCAVEC allocated buffer. (test_printing): Use WIDE_INT_MAX_INL_PRECISION instead of WIDE_INT_MAX_PRECISION. * wide-int-print.h (WIDE_INT_PRINT_BUFFER_SIZE): Use WIDE_INT_MAX_INL_PRECISION instead of WIDE_INT_MAX_PRECISION. * wide-int-print.cc (print_decs, print_decu, print_hex): For lengths above WIDE_INT_MAX_INL_ELTS use XALLOCAVEC allocated buffer. * tree.h (wi::int_traits<extended_tree <N>>): Change precision_type to INL_CONST_PRECISION for N == ADDR_MAX_PRECISION. (widest_extended_tree): Use WIDEST_INT_MAX_PRECISION instead of WIDE_INT_MAX_PRECISION. (wi::ints_for): Use int_traits <extended_tree <N> >::precision_type instead of hard coded CONST_PRECISION. (widest2_int_cst): Use WIDEST_INT_MAX_PRECISION instead of WIDE_INT_MAX_PRECISION. 
(wi::extended_tree <N>::get_len): Use WIDEST_INT_MAX_PRECISION rather than WIDE_INT_MAX_PRECISION. (wi::ints_for::zero): Use wi::int_traits <wi::extended_tree <N> >::precision_type instead of wi::CONST_PRECISION. * tree.cc (build_replicated_int_cst): Formatting fix. Use WIDE_INT_MAX_INL_ELTS rather than WIDE_INT_MAX_ELTS. * print-tree.cc (print_node): Don't print TREE_UNAVAILABLE on INTEGER_CSTs, TREE_VECs or SSA_NAMEs. * double-int.h (wi::int_traits <double_int>::precision_type): Change to INL_CONST_PRECISION from CONST_PRECISION. * poly-int.h (struct poly_coeff_traits): Add partial specialization for wi::INL_CONST_PRECISION. * cfgloop.h (bound_wide_int): New typedef. (struct nb_iter_bound): Change bound type from widest_int to bound_wide_int. (struct loop): Change nb_iterations_upper_bound, nb_iterations_likely_upper_bound and nb_iterations_estimate type from widest_int to bound_wide_int. * cfgloop.cc (record_niter_bound): Return early if wi::min_precision of i_bound is too large for bound_wide_int. Adjustments for the widest_int to bound_wide_int type change in non-static data members. (get_estimated_loop_iterations, get_max_loop_iterations, get_likely_max_loop_iterations): Adjustments for the widest_int to bound_wide_int type change in non-static data members. * tree-vect-loop.cc (vect_transform_loop): Likewise. * tree-ssa-loop-niter.cc (do_warn_aggressive_loop_optimizations): Use XALLOCAVEC allocated buffer for i_bound len above WIDE_INT_MAX_INL_ELTS. (record_estimate): Return early if wi::min_precision of i_bound is too large for bound_wide_int. Adjustments for the widest_int to bound_wide_int type change in non-static data members. (wide_int_cmp): Use bound_wide_int instead of widest_int. (bound_index): Use bound_wide_int instead of widest_int. (discover_iteration_bound_by_body_walk): Likewise. Use widest_int::from to convert it to widest_int when passed to record_niter_bound. (maybe_lower_iteration_bound): Use widest_int::from to convert it to widest_int when passed to record_niter_bound. (estimate_numbers_of_iteration): Don't record upper bound if loop->nb_iterations has too large precision for bound_wide_int. (n_of_executions_at_most): Use widest_int::from. * tree-ssa-loop-ivcanon.cc (remove_redundant_iv_tests): Adjust for the widest_int to bound_wide_int changes. * match.pd (fold_sign_changed_comparison simplification): Use wide_int::from on wi::to_wide instead of wi::to_widest. * value-range.h (irange::maybe_resize): Avoid using memcpy on non-trivially copyable elements. * value-range.cc (irange_bitmask::dump): Use XALLOCAVEC allocated buffer for mask or value len above WIDE_INT_PRINT_BUFFER_SIZE. * fold-const.cc (fold_convert_const_int_from_int, fold_unary_loc): Use wide_int::from on wi::to_wide instead of wi::to_widest. * tree-ssa-ccp.cc (bit_value_binop): Zero extend r1max from width before calling wi::udiv_trunc. * lto-streamer-out.cc (output_cfg): Adjustments for the widest_int to bound_wide_int type change in non-static data members. * lto-streamer-in.cc (input_cfg): Likewise. (lto_input_tree_1): Use WIDE_INT_MAX_INL_ELTS rather than WIDE_INT_MAX_ELTS. For length above WIDE_INT_MAX_INL_ELTS use XALLOCAVEC allocated buffer. Formatting fix. * data-streamer-in.cc (streamer_read_wide_int, streamer_read_widest_int): Likewise. * tree-affine.cc (aff_combination_expand): Use placement new to construct name_expansion. (free_name_expansion): Destruct name_expansion. * gimple-ssa-strength-reduction.cc (struct slsr_cand_d): Change index type from widest_int to offset_int. 
(class incr_info_d): Change incr type from widest_int to offset_int. (alloc_cand_and_find_basis, backtrace_base_for_ref, restructure_reference, slsr_process_ref, create_mul_ssa_cand, create_mul_imm_cand, create_add_ssa_cand, create_add_imm_cand, slsr_process_add, cand_abs_increment, replace_mult_candidate, replace_unconditional_candidate, incr_vec_index, create_add_on_incoming_edge, create_phi_basis_1, replace_conditional_candidate, record_increment, record_phi_increments_1, phi_incr_cost_1, phi_incr_cost, lowest_cost_path, total_savings, ncd_with_phi, ncd_of_cand_and_phis, nearest_common_dominator_for_cands, insert_initializers, all_phi_incrs_profitable_1, replace_one_candidate, replace_profitable_candidates): Use offset_int rather than widest_int and wi::to_offset rather than wi::to_widest. * real.cc (real_to_integer): Use WIDE_INT_MAX_INL_ELTS rather than 2 * WIDE_INT_MAX_ELTS and for words above that use XALLOCAVEC allocated buffer. * tree-ssa-loop-ivopts.cc (niter_for_exit): Use placement new to construct tree_niter_desc and destruct it on failure. (free_tree_niter_desc): Destruct tree_niter_desc if value is non-NULL. * gengtype.cc (main): Remove widest_int handling. * graphite-isl-ast-to-gimple.cc (widest_int_from_isl_expr_int): Use WIDEST_INT_MAX_ELTS instead of WIDE_INT_MAX_ELTS. * gimple-ssa-warn-alloca.cc (pass_walloca::execute): Use WIDE_INT_MAX_INL_PRECISION instead of WIDE_INT_MAX_PRECISION and assert get_len () fits into it. * value-range-pretty-print.cc (vrange_printer::print_irange_bitmasks): For mask or value lengths above WIDE_INT_MAX_INL_ELTS use XALLOCAVEC allocated buffer. * gimple-ssa-sprintf.cc (adjust_range_for_overflow): Use wide_int::from on wi::to_wide instead of wi::to_widest. * omp-general.cc (score_wide_int): New typedef. (omp_context_compute_score): Use score_wide_int instead of widest_int and adjust for those changes. (struct omp_declare_variant_entry): Change score and score_in_declare_simd_clone non-static data member type from widest_int to score_wide_int. (omp_resolve_late_declare_variant, omp_resolve_declare_variant): Use score_wide_int instead of widest_int and adjust for those changes. (omp_lto_output_declare_variant_alt): Likewise. (omp_lto_input_declare_variant_alt): Likewise. * godump.cc (go_output_typedef): Assert get_len () is smaller than WIDE_INT_MAX_INL_ELTS. gcc/c-family/ * c-warn.cc (match_case_to_enum_1): Use wi::to_wide just once instead of 3 times, assert get_len () is smaller than WIDE_INT_MAX_INL_ELTS. gcc/testsuite/ * gcc.dg/bitint-38.c: New test.
2023-08-31  Add overflow API for plus minus mult on range  (Jiufu Guo, 1 file, -0/+12)
In previous reviews, adding overflow APIs to range-op would be useful. Those APIs could help to check if overflow happens when operating between two 'range's, like: plus, minus, and mult. Previous discussions are here: https://gcc.gnu.org/pipermail/gcc-patches/2023-July/624067.html https://gcc.gnu.org/pipermail/gcc-patches/2023-July/624701.html gcc/ChangeLog: * range-op-mixed.h (operator_plus::overflow_free_p): New declare. (operator_minus::overflow_free_p): New declare. (operator_mult::overflow_free_p): New declare. * range-op.cc (range_op_handler::overflow_free_p): New function. (range_operator::overflow_free_p): New default function. (operator_plus::overflow_free_p): New function. (operator_minus::overflow_free_p): New function. (operator_mult::overflow_free_p): New function. * range-op.h (range_op_handler::overflow_free_p): New declare. (range_operator::overflow_free_p): New declare. * value-range.cc (irange::nonnegative_p): New function. (irange::nonpositive_p): New function. * value-range.h (irange::nonnegative_p): New declare. (irange::nonpositive_p): New declare.
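A hedged sketch of how the new query might be used; the handler construction and argument types are assumptions, with "type" and "prec" as placeholders:

    range_op_handler handler (PLUS_EXPR);
    int_range<1> a (type, wi::shwi (0, prec), wi::shwi (100, prec));
    int_range<1> b (type, wi::shwi (0, prec), wi::shwi (7, prec));
    if (handler.overflow_free_p (a, b))
      {
        // a + b cannot overflow for any pair of values in these ranges
      }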
2023-08-21  [frange] Return false if nothing changed in union_nans().  (Aldy Hernandez, 1 file, -5/+31)
When one operand is a known NAN, we always return TRUE from union_nans(), even if no change occurred. This patch fixes the oversight. gcc/ChangeLog: * value-range.cc (frange::union_nans): Return false if nothing changed. (range_tests_floats): New test.
2023-08-18  [irange] Return FALSE if updated bitmask is unchanged [PR110753]  (Aldy Hernandez, 1 file, -0/+18)
The mask/value pair we track in the irange is a bit fickle in that it can sometimes contradict the bitmask inherent in the range. This can happen when a series of calculations yield a combination such as: [3, 1000] MASK 0xfffffffe VALUE 0x0 The mask/value above implies that the lowest bit is a known 0, which would exclude the 3 in the range. At one time we tried keeping mask and ranges 100% consistent, but the performance penalty was too high (5% in VRP). Also, it's unclear whether the intersection of two incompatible known bits should make the whole range undefined, or just the contradicting bits. This is all documented in irange::get_bitmask(). We could revisit both of these assumptions in the future. In this testcase IPA ends up with a range where the lower 2 bits are expected to be 0, but the range is [1,1]. [irange] long int [1, 1] MASK 0xfffffffffffffffc VALUE 0x0 This causes irange::union_bitmask() to think an update occurred, when no semantic change happened, thus triggering an assert in IPA-cp. We could get rid of the assert, but it's cleaner to make irange::{union,intersect}_bitmask always tell the truth. Beside, the ranger's cache also depends on union being truthful. PR ipa/110753 gcc/ChangeLog: * value-range.cc (irange::union_bitmask): Return FALSE if updated bitmask is semantically equivalent to the original mask. (irange::intersect_bitmask): Same. (irange::get_bitmask): Add comment. gcc/testsuite/ChangeLog: * gcc.dg/tree-ssa/pr110753.c: New test.
2023-07-17Normalize irange_bitmask before union/intersect.Aldy Hernandez1-3/+0
The value/mask pair must be normalized so that unknown bits carry a value of 0 before the bit twiddling in union/intersect, in order to keep the math simple. Normalizing at construction slowed VRP by 1.5% so I opted to normalize before updating the bitmask in range-ops, since it was the only user. However, with upcoming changes there will be multiple setters of the mask (IPA and CCP), so we need something more general. I played with various alternatives, and settled on normalizing before union/intersect, which are the operations that need the bits cleared. With this patch, there's no noticeable difference in performance either in VRP or in overall compilation. gcc/ChangeLog: * value-range.cc (irange_bitmask::verify_mask): Mask need not be normalized. * value-range.h (irange_bitmask::union_): Normalize beforehand. (irange_bitmask::intersect): Same.
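A minimal sketch of the normalization idea, assuming plain 64-bit words in place of wide_ints and made-up names: clearing the value bits under the unknown mask first keeps the union formula down to a few logical operations.

  #include <cstdint>
  #include <cstdio>

  // A set bit in 'mask' means "this bit is unknown".
  struct bitmask { uint64_t value, mask; };

  // Normal form: unknown bits carry a 0 in 'value', so two pairs that mean
  // the same thing are also bitwise identical.
  static void normalize (bitmask &b)
  {
    b.value &= ~b.mask;
  }

  // Union of the knowledge in two pairs: a bit stays known only if both
  // sides know it and agree on its value.
  static bitmask union_ (bitmask a, bitmask b)
  {
    normalize (a);
    normalize (b);
    uint64_t mask = a.mask | b.mask | (a.value ^ b.value);
    return { a.value & b.value & ~mask, mask };
  }

  int main ()
  {
    bitmask a = { 0x5, 0x4 };   // un-normalized: bit 2 unknown yet set in value
    bitmask b = { 0x1, 0x4 };   // same knowledge, already normalized
    bitmask u = union_ (a, b);
    printf ("value %#llx mask %#llx\n",
            (unsigned long long) u.value, (unsigned long long) u.mask);
  }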
2023-07-07A singleton irange has all known bits.Aldy Hernandez1-1/+18
gcc/ChangeLog: * value-range.cc (irange::get_bitmask_from_range): Return all the known bits for a singleton. (irange::set_range_from_bitmask): Set a range of a singleton when all bits are known.
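Spelled out in the same toy value/mask terms (made-up names, not the GCC functions named above): a singleton [x, x] determines every bit, and a pair whose unknown mask is empty collapses the range to a single value.

  #include <cstdint>
  #include <cstdio>

  struct bitmask { uint64_t value, mask; };   // set mask bit = unknown bit

  // [x, x] leaves nothing unknown: mask 0, value x.
  static bitmask bitmask_from_singleton (uint64_t x)
  {
    return { x, 0 };
  }

  // If every bit is known, the range collapses to a single value.
  static bool singleton_from_bitmask (bitmask b, uint64_t &lo, uint64_t &hi)
  {
    if (b.mask != 0)
      return false;
    lo = hi = b.value;
    return true;
  }

  int main ()
  {
    uint64_t lo, hi;
    if (singleton_from_bitmask (bitmask_from_singleton (42), lo, hi))
      printf ("[%llu, %llu]\n", (unsigned long long) lo, (unsigned long long) hi);
  }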
2023-07-07The caller to irange::intersect (wide_int, wide_int) must normalize the range.Aldy Hernandez1-2/+5
Per the function comment, the caller to intersect(wide_int, wide_int) must handle the mask. This means it must also normalize the range if anything changed. gcc/ChangeLog: * value-range.cc (irange::intersect): Leave normalization to caller.
2023-07-07Implement value/mask tracking for irange.Aldy Hernandez1-91/+157
Integer ranges (irange) currently track known 0 bits. We've wanted to track known 1 bits for some time, and instead of tracking known 0 and known 1's separately, it has been suggested we track a value/mask pair similarly to what we do for CCP and RTL. This patch implements such a thing. With this we now track a VALUE integer holding the known bit values, and a MASK which tells us which bits contain meaningful information. This allows us to fix a handful of enhancement requests, such as PR107043 and PR107053. There is a 4.48% performance penalty for VRP and 0.42% in overall compilation for this entire patchset. It is expected and in line with the loss incurred when we started tracking known 0 bits. This patch just provides the value/mask tracking support. All the nonzero users (range-op, IPA, CCP, etc.) are still using the nonzero nomenclature. For that matter, this patch reimplements the nonzero accessors with the value/mask functionality. In follow-up patches I will enhance these passes to use the value/mask information, and fix the aforementioned PRs. gcc/ChangeLog: * data-streamer-in.cc (streamer_read_value_range): Adjust for value/mask. * data-streamer-out.cc (streamer_write_vrange): Same. * range-op.cc (operator_cast::fold_range): Same. * value-range-pretty-print.cc (vrange_printer::print_irange_bitmasks): Same. * value-range-storage.cc (irange_storage::write_lengths_address): Same. (irange_storage::set_irange): Same. (irange_storage::get_irange): Same. (irange_storage::size): Same. (irange_storage::dump): Same. * value-range-storage.h: Same. * value-range.cc (debug): New. (irange_bitmask::dump): New. (add_vrange): Adjust for value/mask. (irange::operator=): Same. (irange::set): Same. (irange::verify_range): Same. (irange::operator==): Same. (irange::contains_p): Same. (irange::irange_single_pair_union): Same. (irange::union_): Same. (irange::intersect): Same. (irange::invert): Same. (irange::get_nonzero_bits_from_range): Rename to... (irange::get_bitmask_from_range): ...this. (irange::set_range_from_nonzero_bits): Rename to... (irange::set_range_from_bitmask): ...this. (irange::set_nonzero_bits): Rename to... (irange::update_bitmask): ...this. (irange::get_nonzero_bits): Rename to... (irange::get_bitmask): ...this. (irange::intersect_nonzero_bits): Rename to... (irange::intersect_bitmask): ...this. (irange::union_nonzero_bits): Rename to... (irange::union_bitmask): ...this. (irange_bitmask::verify_mask): New. * value-range.h (class irange_bitmask): New. (irange_bitmask::set_unknown): New. (irange_bitmask::unknown_p): New. (irange_bitmask::irange_bitmask): New. (irange_bitmask::get_precision): New. (irange_bitmask::get_nonzero_bits): New. (irange_bitmask::set_nonzero_bits): New. (irange_bitmask::operator==): New. (irange_bitmask::union_): New. (irange_bitmask::intersect): New. (class irange): Friend vrange_printer. (irange::varying_compatible_p): Adjust for bitmask. (irange::set_varying): Same. (irange::set_nonzero): Same. gcc/testsuite/ChangeLog: * gcc.dg/tree-ssa/pr107009.c: Adjust irange dumping for value/mask changes. * gcc.dg/tree-ssa/vrp-unreachable.c: Same. * gcc.dg/tree-ssa/vrp122.c: Same.
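A toy model of how the old nonzero-bits view can sit on top of a value/mask pair (made-up names and plain 64-bit words; this only mirrors what the accessors plausibly reduce to, not the actual irange_bitmask code): a bit may be nonzero if it is unknown or known to be 1, and recording nonzero bits just marks those bits unknown and the rest known zero.

  #include <cstdint>
  #include <cstdio>

  // Value/mask pair in the CCP/RTL style: a set bit in 'mask' is unknown,
  // a clear bit is known and takes its value from 'value'.
  struct bitmask_sketch
  {
    uint64_t value;   // values of the known bits
    uint64_t mask;    // 1 = this bit is unknown

    // A bit may be nonzero if it is unknown or known to be 1.
    uint64_t get_nonzero_bits () const { return value | mask; }

    // Recording nonzero bits only conveys known-zero bits: everything
    // allowed to be nonzero stays unknown, the rest becomes known 0.
    void set_nonzero_bits (uint64_t nonzero)
    {
      value = 0;
      mask = nonzero;
    }
  };

  int main ()
  {
    bitmask_sketch bm;
    bm.set_nonzero_bits (0xff);          // low byte unknown, upper bits known 0
    printf ("%#llx\n", (unsigned long long) bm.get_nonzero_bits ());   // 0xff

    bm.value = 0x1;                      // bit 0 known 1, bits 1-3 unknown,
    bm.mask = 0xe;                       // everything else known 0
    printf ("%#llx\n", (unsigned long long) bm.get_nonzero_bits ());   // 0xf
  }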
2023-06-29Tidy up the range normalization code.Aldy Hernandez1-51/+48
There are a few spots where a range is being altered in place, but we fail to normalize the range. This patch makes sure we always call normalize_kind(), and that normalize_kind in turn calls verify_range to make sure everything is canonical. gcc/ChangeLog: * value-range.cc (frange::set): Do not call verify_range. (frange::normalize_kind): Verify range. (frange::union_nans): Do not call verify_range. (frange::union_): Same. (frange::intersect): Same. (irange::irange_single_pair_union): Call normalize_kind if necessary. (irange::union_): Same. (irange::intersect): Same. (irange::set_range_from_nonzero_bits): Verify range. (irange::set_nonzero_bits): Call normalize_kind if necessary. (irange::get_nonzero_bits): Tweak comment. (irange::intersect_nonzero_bits): Call normalize_kind if necessary. (irange::union_nonzero_bits): Same. * value-range.h (irange::normalize_kind): Verify range.
2023-06-27Implement ipa_vr hashing.Aldy Hernandez1-15/+0
Implement hashing for ipa_vr. When all is said and done, all these patches incur a 7.64% slowdown for ipa-cp, which is entirely covered by the similar 7% increase in this area last week. So we get type agnostic ranges with "infinite" range precision close to free. There is no change in overall compilation. gcc/ChangeLog: * ipa-prop.cc (struct ipa_vr_ggc_hash_traits): Adjust for use with ipa_vr instead of value_range. (gt_pch_nx): Same. (gt_ggc_mx): Same. (ipa_get_value_range): Same. * value-range.cc (gt_pch_nx): Move to ipa-prop.cc and adjust for ipa_vr. (gt_ggc_mx): Same.
2023-05-25Disallow setting of NANs in frange setter unless setting trees.Aldy Hernandez1-8/+1
frange::set() is confusing in that we can set a NAN by specifying a bound of +-NAN, even though we technically disallow NANs in the setter because the kind can never be VR_NAN. This is a wart that exists so that get_tree_range(), which builds a range out of a tree from the source, can work correctly. It's ugly, and it showed its limitation while implementing LTO streaming of ranges. This patch disallows passing NAN bounds in frange::set() and fixes get_tree_range. gcc/ChangeLog: * value-query.cc (range_query::get_tree_range): Set NAN directly if necessary. * value-range.cc (frange::set): Assert that bounds are not NAN.
2023-05-25Hash known NANs correctly for franges.Aldy Hernandez1-7/+7
We're ICEing when trying to hash a known NAN. This is unnoticeable because the only user would be IPA, and even so, it currently doesn't handle floats. However, handling floats is a flip of a switch, so it's best to handle them already. gcc/ChangeLog: * value-range.cc (add_vrange): Handle known NANs.
2023-05-23Remove buggy special case in irange::invert [PR109934].Aldy Hernandez1-8/+0
This patch removes a buggy special case in irange::invert which seems to have been broken for a while, and probably never triggered because the legacy code was handled elsewhere, and the non-legacy code was using an int_range_max of int_range<255> which made it extremely unlikely for num_ranges == 255. However, with auto-resizing ranges, int_range_max will start off at 3 and can hit this bogus code in the unswitching code. PR tree-optimization/109934 gcc/ChangeLog: * value-range.cc (irange::invert): Remove buggy special case. gcc/testsuite/ChangeLog: * gcc.dg/tree-ssa/pr109934.c: New test.
2023-05-17Provide support for copying unsupported ranges.Aldy Hernandez1-1/+4
The unsupported_range class is provided for completeness' sake. It is a way to set VARYING/UNDEFINED ranges for unsupported ranges (currently anything not float, integer, or pointer). You can't do anything with them except set_varying and set_undefined. We will trap on any other operation. This patch provides a way to copy them, just in case they creep in. This could happen in IPA under certain circumstances. gcc/ChangeLog: * value-range.cc (vrange::operator=): Add a stub to copy unsupported ranges. * value-range.h (is_a <unsupported_range>): New. (Value_Range::operator=): Support copying unsupported ranges.
2023-05-15Add auto-resizing capability to irange's [PR109695]Aldy Hernandez1-0/+14
<tldr> We can now have int_range<N, RESIZABLE=false> for automatically resizable ranges. int_range_max is now int_range<3, true> for a 69X reduction in size from current trunk, and 6.9X reduction from GCC12. This incurs a 5% performance penalty for VRP that is more than covered by our > 13% improvements recently. </tldr> int_range_max is the temporary range object we use in the ranger for integers. With the conversion to wide_int, this structure bloated up significantly because wide_ints are huge (80 bytes apiece) and are about 10 times as big as a plain tree. Since the temporary object requires 255 sub-ranges, that's 255 * 80 * 2, plus the control word. This means the structure grew from 4112 bytes to 40912 bytes. This patch adds the ability to resize ranges as needed, defaulting to no resizing, while int_range_max now defaults to 3 sub-ranges (instead of 255) and grows to 255 when the range being calculated does not fit. For example: int_range<1> foo; // 1 sub-range with no resizing. int_range<5> foo; // 5 sub-ranges with no resizing. int_range<5, true> foo; // 5 sub-ranges with resizing. I ran some tests and found that 3 sub-ranges cover 99% of cases, so I've set the int_range_max default to that: typedef int_range<3, /*RESIZABLE=*/true> int_range_max; We don't bother growing incrementally, since the default covers most cases and we have a 255 hard-limit. This hard limit could be reduced to 128, since my tests never saw a range needing more than 124, but we could do that as a follow-up if needed. With 3-subranges, int_range_max is now 592 bytes versus 40912 for trunk, and versus 4112 bytes for GCC12! The penalty is 5.04% for VRP and 3.02% for threading, with no noticeable change in overall compilation (0.27%). This is more than covered by our 13.26% improvements for the legacy removal + wide_int conversion. I think this approach is a good alternative, while providing us with flexibility going forward. For example, we could try defaulting to 8 sub-ranges for a noticeable improvement in VRP. We could also use large sub-ranges for switch analysis to avoid resizing. Another approach I tried was always resizing. With this, we could drop the whole int_range<N> nonsense, and have irange just hold a resizable range. This simplified things, but incurred a 7% penalty on ipa_cp. This was hard to pinpoint, and I'm not entirely convinced this wasn't some artifact of valgrind. However, until we're sure, let's avoid massive changes, especially since IPA changes are coming up. For the curious, a particular hot spot for IPA in this area was: ipcp_vr_lattice::meet_with_1 (const value_range *other_vr) { ... ... value_range save (m_vr); m_vr.union_ (*other_vr); return m_vr != save; } The problem isn't the resizing (since we do that at most once) but the fact that for some functions with lots of callers we end up with a huge range that gets copied and compared for every meet operation. Maybe the IPA algorithm could be adjusted somehow? Anywhooo... for now there is nothing to worry about, since value_range still has 2 subranges and is not resizable. But we should probably think about what, if anything, we want to do here, as I envision IPA using infinite ranges here (well, int_range_max) and handling frange's, etc. gcc/ChangeLog: PR tree-optimization/109695 * value-range.cc (irange::operator=): Resize range. (irange::union_): Same. (irange::intersect): Same. (irange::invert): Same. (int_range_max): Default to 3 sub-ranges and resize as needed. * value-range.h (irange::maybe_resize): New. (~int_range): New. 
(int_range::int_range): Adjust for resizing. (int_range::operator=): Same.
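A rough, self-contained model of the resizing scheme under heavy simplifications (a made-up class, plain ints standing in for wide_int bounds, and none of irange's real invariants): N bound pairs live inline in the object, and the storage is swapped once for a heap block of the hard maximum when a computation outgrows them.

  #include <cstring>
  #include <cstdio>

  // Toy model of int_range<N, RESIZABLE>: N sub-range bounds inline,
  // optionally switching to heap storage of the hard maximum on demand.
  template<unsigned N, bool RESIZABLE = false>
  class int_range_sketch
  {
  public:
    static constexpr unsigned HARD_MAX = 255;

    int_range_sketch () : m_base (m_ranges), m_max (N), m_num (0) {}
    ~int_range_sketch ()
    {
      if (m_base != m_ranges)
        delete[] m_base;
    }
    // Copying omitted for brevity; the real class fixes up m_base on copy.
    int_range_sketch (const int_range_sketch &) = delete;

    // Grow (once) if the requested number of pairs does not fit inline.
    void maybe_resize (unsigned needed)
    {
      if (!RESIZABLE || needed <= m_max || m_max == HARD_MAX)
        return;
      int *heap = new int[2 * HARD_MAX];
      memcpy (heap, m_base, 2 * m_num * sizeof (int));
      m_base = heap;
      m_max = HARD_MAX;
    }

    void append (int lo, int hi)
    {
      maybe_resize (m_num + 1);
      if (m_num == m_max)
        return;                       // full and not resizable: drop it (toy only)
      m_base[2 * m_num] = lo;
      m_base[2 * m_num + 1] = hi;
      ++m_num;
    }

    unsigned num_pairs () const { return m_num; }

  private:
    int *m_base;                      // points at m_ranges or at a heap block
    unsigned m_max, m_num;
    int m_ranges[2 * N];              // inline storage for N sub-ranges
  };

  // Mirrors the patch's default of 3 inline sub-ranges plus resizing.
  typedef int_range_sketch<3, true> int_range_max_sketch;

  int main ()
  {
    int_range_max_sketch r;
    for (int i = 0; i < 10; i++)
      r.append (10 * i, 10 * i + 1);  // outgrows the 3 inline pairs, resizes once
    printf ("%u pairs\n", r.num_pairs ());
  }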
2023-05-15Only return changed=true in union_nonzero when appropriate.Aldy Hernandez1-2/+3
irange::union_ was being overly pessimistic in its return value. It was failing to return false when the nonzero mask was possibly the same. The reason for this is that the nonzero mask is not entirely kept up to date. We avoid setting it up when a new range is set (from a set, intersect, union, etc.), because calculating a mask from a range is measurably expensive. However, irange::get_nonzero_bits() will always return the correct mask because it will calculate the nonzero mask inherent in the range on the fly and combine it with the saved mask. This was an optimization because last release it was a big penalty to keep the mask up to date. This may not necessarily be the case with the conversion to wide_int's. We should investigate. Just to be clear, the result from get_nonzero_bits() is always correct as seen by the user, but the wide_int in the irange does not contain all the information, since part of the nonzero bits can be determined by the range itself, on the fly. The fix here is to report a change only when the result the user sees (through callers of get_nonzero_bits()) actually changed when unioning bits. This allows ipcp_vr_lattice::meet_with_1 to avoid unnecessary copies when determining if a range changed. This patch yields a 6.89% improvement to the ipa_cp pass. I'm including the IPA changes in this patch, as it's a testcase of sorts for the change. gcc/ChangeLog: * ipa-cp.cc (ipcp_vr_lattice::meet_with_1): Avoid unnecessary range copying. * value-range.cc (irange::union_nonzero_bits): Return TRUE only when range changed.
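A small self-contained model of the two moving parts above (made-up names, 64-bit words instead of wide_ints; the bitwise-AND combination is simply the natural way to intersect two known-zero constraints and is an assumption here, not a quote of the GCC code): the mask callers see is the saved mask restricted by what the bounds already rule out, and the union reports a change only when that visible mask moves.

  #include <cstdint>
  #include <cstdio>

  // Single-pair unsigned range plus a saved "may be nonzero" mask that is
  // not always kept in sync with the bounds.
  struct urange
  {
    uint64_t lo, hi;
    uint64_t saved_nonzero;
  };

  // Nonzero bits inherent in the bounds alone: nothing in [lo, hi] can set
  // a bit above hi's highest set bit.
  static uint64_t nonzero_from_range (const urange &r)
  {
    if (r.hi == 0)
      return 0;
    int bits = 64 - __builtin_clzll (r.hi);
    return bits == 64 ? ~0ULL : ((1ULL << bits) - 1);
  }

  // What callers of get_nonzero_bits () see: both constraints combined.
  static uint64_t get_nonzero_bits (const urange &r)
  {
    return r.saved_nonzero & nonzero_from_range (r);
  }

  // Fold another mask into the saved one, but report "changed" only when
  // the user-visible result moved, not merely the saved word.
  static bool union_nonzero_bits (urange &r, uint64_t other)
  {
    uint64_t before = get_nonzero_bits (r);
    r.saved_nonzero |= other;
    return get_nonzero_bits (r) != before;
  }

  int main ()
  {
    urange r = { 0, 0xff, 0x0f };
    bool changed = union_nonzero_bits (r, 0xf00);   // bits already excluded by hi
    printf ("changed = %d, visible = %#llx\n",
            changed, (unsigned long long) get_nonzero_bits (r));  // 0, 0xf
  }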
2023-05-03Allow varying ranges of unknown types in irange::verify_range [PR109711]Aldy Hernandez1-0/+7
The old legacy code allowed building ranges of unknown types so passes like IPA could build and propagate VARYING. For now it's easiest to allow the old behavior; it's not like you can do anything with these ranges except build them and copy them. Eventually we should convert all users of set_varying() to use supported types. I will address this in my upcoming IPA work. PR tree-optimization/109711 gcc/ChangeLog: * value-range.cc (irange::verify_range): Allow types of error_mark_node.
2023-05-01Cleanup irange::set.Aldy Hernandez1-126/+49
Now that anti-ranges are no more and iranges contain wide_ints instead of trees, various cleanups are possible. This is one of a handful of patches improving the performance of irange::set() which is not on a hot path, but quite sensitive because it is so pervasive. gcc/ChangeLog: * gimple-range-op.cc (cfn_ffs::fold_range): Use the correct precision. * gimple-ssa-warn-alloca.cc (alloca_call_type): Use <2> for invalid_range, as it is an inverse range. * tree-vrp.cc (find_case_label_range): Avoid trees. * value-range.cc (irange::irange_set): Delete. (irange::irange_set_1bit_anti_range): Delete. (irange::irange_set_anti_range): Delete. (irange::set): Cleanup. * value-range.h (class irange): Remove irange_set, irange_set_anti_range, irange_set_1bit_anti_range. (irange::set_undefined): Remove set to m_type.
2023-05-01Convert internal representation of irange to wide_ints.Aldy Hernandez1-149/+118
gcc/ChangeLog: * range-op.cc (update_known_bitmask): Adjust for irange containing wide_ints internally. * tree-ssanames.cc (set_nonzero_bits): Same. * tree-ssanames.h (set_nonzero_bits): Same. * value-range-storage.cc (irange_storage::set_irange): Same. (irange_storage::get_irange): Same. * value-range.cc (irange::operator=): Same. (irange::irange_set): Same. (irange::irange_set_1bit_anti_range): Same. (irange::irange_set_anti_range): Same. (irange::set): Same. (irange::verify_range): Same. (irange::contains_p): Same. (irange::irange_single_pair_union): Same. (irange::union_): Same. (irange::irange_contains_p): Same. (irange::intersect): Same. (irange::invert): Same. (irange::set_range_from_nonzero_bits): Same. (irange::set_nonzero_bits): Same. (mask_to_wi): Same. (irange::intersect_nonzero_bits): Same. (irange::union_nonzero_bits): Same. (gt_ggc_mx): Same. (gt_pch_nx): Same. (tree_range): Same. (range_tests_strict_enum): Same. (range_tests_misc): Same. (range_tests_nonzero_bits): Same. * value-range.h (irange::type): Same. (irange::varying_compatible_p): Same. (irange::irange): Same. (int_range::int_range): Same. (irange::set_undefined): Same. (irange::set_varying): Same. (irange::lower_bound): Same. (irange::upper_bound): Same.
2023-05-01Replace vrp_val* with wide_ints.Aldy Hernandez1-31/+6
This patch removes all uses of vrp_val_{min,max} in favor of irange_val_*, which are wide_int based. This leaves only one use of vrp_val_*, which returns trees, in range_of_ssa_name_with_loop_info(), because it needs to work with non-integers (floats, etc.). In a follow-up patch, this function will also be cleaned up such that vrp_val_* can be deleted. The functions min_limit and max_limit in range-op.cc are now useless as they're basically irange_val*. I didn't rename them yet to avoid churn. I'll do it in a later patch. gcc/ChangeLog: * gimple-range-fold.cc (adjust_pointer_diff_expr): Rewrite with irange_val*. (vrp_val_max): New. (vrp_val_min): New. * gimple-range-op.cc (cfn_strlen::fold_range): Use irange_val_*. * range-op.cc (max_limit): Same. (min_limit): Same. (plus_minus_ranges): Same. (operator_rshift::op1_range): Same. (operator_cast::inside_domain_p): Same. * value-range.cc (vrp_val_is_max): Delete. (vrp_val_is_min): Delete. (range_tests_misc): Use irange_val_*. * value-range.h (vrp_val_is_min): Delete. (vrp_val_is_max): Delete. (vrp_val_max): Delete. (irange_val_min): New. (vrp_val_min): Delete. (irange_val_max): New. * vr-values.cc (check_for_binary_op_overflow): Use irange_val_*.
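The wide_int-based replacements boil down to computing type extremes from precision and signedness instead of going through cached trees. A toy 64-bit version of that idea (val_min/val_max are made-up helpers, not the irange_val_* signatures):

  #include <cstdint>
  #include <cstdio>

  // Largest value representable in 'prec' bits with the given signedness.
  static uint64_t val_max (unsigned prec, bool signed_p)
  {
    uint64_t all = (prec >= 64) ? ~UINT64_C (0) : (UINT64_C (1) << prec) - 1;
    return signed_p ? all >> 1 : all;
  }

  // Smallest value: 0 for unsigned, -(max + 1) for signed two's complement.
  static int64_t val_min (unsigned prec, bool signed_p)
  {
    return signed_p ? -(int64_t) val_max (prec, true) - 1 : 0;
  }

  int main ()
  {
    printf ("signed 16-bit:   [%lld, %llu]\n",
            (long long) val_min (16, true), (unsigned long long) val_max (16, true));
    printf ("unsigned 16-bit: [%lld, %llu]\n",
            (long long) val_min (16, false), (unsigned long long) val_max (16, false));
  }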
2023-05-01Conversion to irange wide_int API.Aldy Hernandez1-190/+282
This converts the irange API to use wide_ints exclusively, along with its users. This patch will slow down VRP, as there will be more useless wide_int to tree conversions. However, this slowdown is only temporary, as a follow-up patch will convert the internal representation of iranges to wide_ints for a net overall gain in performance. gcc/ChangeLog: * fold-const.cc (expr_not_equal_to): Convert to irange wide_int API. * gimple-fold.cc (size_must_be_zero_p): Same. * gimple-loop-versioning.cc (loop_versioning::prune_loop_conditions): Same. * gimple-range-edge.cc (gcond_edge_range): Same. (gimple_outgoing_range::calc_switch_ranges): Same. * gimple-range-fold.cc (adjust_imagpart_expr): Same. (adjust_realpart_expr): Same. (fold_using_range::range_of_address): Same. (fold_using_range::relation_fold_and_or): Same. * gimple-range-gori.cc (gori_compute::gori_compute): Same. (range_is_either_true_or_false): Same. * gimple-range-op.cc (cfn_toupper_tolower::get_letter_range): Same. (cfn_clz::fold_range): Same. (cfn_ctz::fold_range): Same. * gimple-range-tests.cc (class test_expr_eval): Same. * gimple-ssa-warn-alloca.cc (alloca_call_type): Same. * ipa-cp.cc (ipa_value_range_from_jfunc): Same. (propagate_vr_across_jump_function): Same. (decide_whether_version_node): Same. * ipa-prop.cc (ipa_get_value_range): Same. * ipa-prop.h (ipa_range_set_and_normalize): Same. * range-op.cc (get_shift_range): Same. (value_range_from_overflowed_bounds): Same. (value_range_with_overflow): Same. (create_possibly_reversed_range): Same. (equal_op1_op2_relation): Same. (not_equal_op1_op2_relation): Same. (lt_op1_op2_relation): Same. (le_op1_op2_relation): Same. (gt_op1_op2_relation): Same. (ge_op1_op2_relation): Same. (operator_mult::op1_range): Same. (operator_exact_divide::op1_range): Same. (operator_lshift::op1_range): Same. (operator_rshift::op1_range): Same. (operator_cast::op1_range): Same. (operator_logical_and::fold_range): Same. (set_nonzero_range_from_mask): Same. (operator_bitwise_or::op1_range): Same. (operator_bitwise_xor::op1_range): Same. (operator_addr_expr::fold_range): Same. (pointer_plus_operator::wi_fold): Same. (pointer_or_operator::op1_range): Same. (INT): Same. (UINT): Same. (INT16): Same. (UINT16): Same. (SCHAR): Same. (UCHAR): Same. (range_op_cast_tests): Same. (range_op_lshift_tests): Same. (range_op_rshift_tests): Same. (range_op_bitwise_and_tests): Same. (range_relational_tests): Same. * range.cc (range_zero): Same. (range_nonzero): Same. * range.h (range_true): Same. (range_false): Same. (range_true_and_false): Same. * tree-data-ref.cc (split_constant_offset_1): Same. * tree-ssa-loop-ch.cc (entry_loop_condition_is_static): Same. * tree-ssa-loop-unswitch.cc (struct unswitch_predicate): Same. (find_unswitching_predicates_for_bb): Same. * tree-ssa-phiopt.cc (value_replacement): Same. * tree-ssa-threadbackward.cc (back_threader::find_taken_edge_cond): Same. * tree-ssanames.cc (ssa_name_has_boolean_range): Same. * tree-vrp.cc (find_case_label_range): Same. * value-query.cc (range_query::get_tree_range): Same. * value-range.cc (irange::set_nonnegative): Same. (frange::contains_p): Same. (frange::singleton_p): Same. (frange::internal_singleton_p): Same. (irange::irange_set): Same. (irange::irange_set_1bit_anti_range): Same. (irange::irange_set_anti_range): Same. (irange::set): Same. (irange::operator==): Same. (irange::singleton_p): Same. (irange::contains_p): Same. (irange::set_range_from_nonzero_bits): Same. (DEFINE_INT_RANGE_INSTANCE): Same. (INT): Same. (UINT): Same. (SCHAR): Same. 
(UINT128): Same. (UCHAR): Same. (range): New. (tree_range): New. (range_int): New. (range_uint): New. (range_uint128): New. (range_uchar): New. (range_char): New. (build_range3): Convert to irange wide_int API. (range_tests_irange3): Same. (range_tests_int_range_max): Same. (range_tests_strict_enum): Same. (range_tests_misc): Same. (range_tests_nonzero_bits): Same. (range_tests_nan): Same. (range_tests_signed_zeros): Same. * value-range.h (Value_Range::Value_Range): Same. (irange::set): Same. (irange::nonzero_p): Same. (irange::contains_p): Same. (range_includes_zero_p): Same. (irange::set_nonzero): Same. (irange::set_zero): Same. (contains_zero_p): Same. (frange::contains_p): Same. * vr-values.cc (simplify_using_ranges::op_with_boolean_value_range_p): Same. (bounds_of_var_in_loop): Same. (simplify_using_ranges::legacy_fold_cond_overflow): Same.
2023-05-01Merge irange::union/intersect into irange_union/intersect.Aldy Hernandez1-4/+7
gcc/ChangeLog: * value-range.cc (irange::irange_union): Rename to... (irange::union_): ...this. (irange::irange_intersect): Rename to... (irange::intersect): ...this. * value-range.h (irange::union_): Delete. (irange::intersect): Delete.
2023-05-01Remove irange::tree_{lower,upper}_bound.Aldy Hernandez1-18/+18
gcc/ChangeLog: * value-range.cc (irange::irange_set_anti_range): Remove uses of tree_lower_bound and tree_upper_bound. (irange::verify_range): Same. (irange::operator==): Same. (irange::singleton_p): Same. * value-range.h (irange::tree_lower_bound): Delete. (irange::tree_upper_bound): Delete. (irange::lower_bound): Delete. (irange::upper_bound): Delete. (irange::zero_p): Remove uses of tree_lower_bound and tree_upper_bound.
2023-05-01Remove irange::{min,max,kind}.Aldy Hernandez1-49/+0
gcc/ChangeLog: * tree-ssa-loop-niter.cc (refine_value_range_using_guard): Remove kind() call. (determine_value_range): Same. (record_nonwrapping_iv): Same. (infer_loop_bounds_from_signedness): Same. (scev_var_range_cant_overflow): Same. * tree-vrp.cc (operand_less_p): Delete. * tree-vrp.h (operand_less_p): Delete. * value-range.cc (get_legacy_range): Remove uses of deprecated API. (irange::value_inside_range): Delete. * value-range.h (vrange::kind): Delete. (irange::num_pairs): Remove check of m_kind. (irange::min): Delete. (irange::max): Delete.