path: root/libstdc++-v3/include/bits/hashtable.h
2024-06-13  libstdc++: Improve diagnostics for invalid std::hash specializations [PR115420]  (Jonathan Wakely, 1 file, -0/+2)
When using a key type without a valid std::hash specialization the unordered containers give confusing diagnostics about the default constructor being deleted. Add a static_assert that will fail for disabled std::hash specializations (and for a subset of custom hash functions). libstdc++-v3/ChangeLog: PR libstdc++/115420 * include/bits/hashtable.h (_Hashtable): Add static_assert to check that hash function is copy constructible. * testsuite/23_containers/unordered_map/115420.cc: New test.
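A user-level illustration of the scenario above (Key and KeyHash are illustrative names, not part of the patch): a key type with no std::hash specialization now trips the new static_assert with a message about the hash requirement, while supplying a custom hasher keeps the container usable as before.

    #include <unordered_map>
    #include <cstddef>

    struct Key
    {
        int id;
        bool operator==(const Key& other) const noexcept { return id == other.id; }
    };

    // std::hash<Key> is not specialized, so the next line would now fail with a
    // static_assert about the hash function rather than a note about a deleted
    // default constructor:
    // std::unordered_map<Key, int> bad;

    // A hand-written hasher works as usual.
    struct KeyHash
    {
        std::size_t operator()(const Key& k) const noexcept
        { return static_cast<std::size_t>(k.id); }
    };

    int main()
    {
        std::unordered_map<Key, int, KeyHash> good;
        good[Key{1}] = 2;
    }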
2024-06-12  libstdc++: Do not use memset in _Hashtable::clear()  (Jonathan Wakely, 1 file, -6/+4)
Using memset is incorrect if the __bucket_ptr type is non-trivial, or does not use an all-zero bit pattern for its null value. Replace the three uses of memset with std::fill_n to set the pointers to nullptr. libstdc++-v3/ChangeLog: * include/bits/hashtable.h (_Hashtable::clear): Do not use memset to zero out bucket pointers. (_Hashtable::_M_assign_elements): Likewise.
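A minimal sketch of the replacement described above, with illustrative names rather than the real _Hashtable members: std::fill_n works for any pointer type, whereas memset assumes a trivially copyable representation whose null value is the all-zero bit pattern.

    #include <algorithm>
    #include <cstddef>

    // Reset every bucket pointer to its null value without assuming anything
    // about the pointer's bit representation.
    void reset_buckets(int** buckets, std::size_t count)
    {
        std::fill_n(buckets, count, nullptr);
    }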
2024-06-10  libstdc++: [_Hashtable] Optimize destructor  (François Dumont, 1 file, -1/+1)
The _Hashtable destructor does not need to call the clear() method, which, in addition to destroying all nodes, also resets all buckets to nullptr; deallocating the nodes is sufficient. libstdc++-v3/ChangeLog: * include/bits/hashtable.h (~_Hashtable()): Replace clear call with a _M_deallocate_nodes call.
2024-03-22  libstdc++: Destroy allocators in re-inserted container nodes [PR114401]  (Jonathan Wakely, 1 file, -6/+6)
The allocator objects in container node handles were not being destroyed after the node was re-inserted into a container. They are stored in a union and so need to be explicitly destroyed when the node becomes empty. The containers were zeroing the node handle's pointer, which makes it empty, causing the handle's destructor to think there's nothing to clean up. Add a new member function to the node handle which destroys the allocator and zeros the pointer. Change the containers to call that instead of just changing the pointer manually. We can also remove the _M_empty member of the union which is not necessary. libstdc++-v3/ChangeLog: PR libstdc++/114401 * include/bits/hashtable.h (_Hashtable::_M_reinsert_node): Call release() on node handle instead of just zeroing its pointer. (_Hashtable::_M_reinsert_node_multi): Likewise. (_Hashtable::_M_merge_unique): Likewise. (_Hashtable::_M_merge_multi): Likewise. * include/bits/node_handle.h (_Node_handle_common::release()): New member function. (_Node_handle_common::_Optional_alloc::_M_empty): Remove unnecessary union member. (_Node_handle_common): Declare _Hashtable as a friend. * include/bits/stl_tree.h (_Rb_tree::_M_reinsert_node_unique): Call release() on node handle instead of just zeroing its pointer. (_Rb_tree::_M_reinsert_node_equal): Likewise. (_Rb_tree::_M_reinsert_node_hint_unique): Likewise. (_Rb_tree::_M_reinsert_node_hint_equal): Likewise. * testsuite/23_containers/multiset/modifiers/114401.cc: New test. * testsuite/23_containers/set/modifiers/114401.cc: New test. * testsuite/23_containers/unordered_multiset/modifiers/114401.cc: New test. * testsuite/23_containers/unordered_set/modifiers/114401.cc: New test.
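The user-visible operation exercised by this fix can be sketched as follows: the handle returned by extract() owns the node (and a copy of the allocator), and re-inserting it must leave the handle truly empty so its destructor has nothing left to release.

    #include <unordered_set>
    #include <cassert>
    #include <utility>

    int main()
    {
        std::unordered_set<int> a{1, 2, 3};
        std::unordered_set<int> b;

        auto nh = a.extract(2);      // node handle owns the node and an allocator copy
        b.insert(std::move(nh));     // re-insertion hands the node over to b...
        assert(nh.empty());          // ...and must leave the handle empty
        assert(b.count(2) == 1);
    }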
2024-01-24  libstdc++: [_Hashtable] Remove useless check for _M_before_begin node  (Huanghui Nie, 1 file, -9/+4)
When removing the first node of a bucket it is useless to check whether this bucket is the one containing the _M_before_begin node. The bucket's before-begin node is already transferred to the next pointed-to bucket regardless of whether it is the container's before-begin node. libstdc++-v3/ChangeLog: * include/bits/hashtable.h (_Hashtable<>::_M_remove_bucket_begin): Remove _M_before_begin check and clean up implementation. Co-authored-by: Théo Papadopoulo <papadopoulo@gmail.com>
2024-01-03  Update copyright years.  (Jakub Jelinek, 1 file, -1/+1)
2023-12-31  libstdc++: [_Hashtable] Extend the small size optimization  (François Dumont, 1 file, -26/+123)
A number of methods were still not using the small size optimization, which prefers an O(N) linear search over a hash computation as long as N is small. libstdc++-v3/ChangeLog: * include/bits/hashtable.h: Move comment about all equivalent values being next to each other in the class documentation header. (_M_reinsert_node, _M_merge_unique): Implement small size optimization. (_M_find_tr, _M_count_tr, _M_equal_range_tr): Likewise.
2023-11-16  libstdc++: Only declare feature test macros in standard headers  (Jonathan Wakely, 1 file, -5/+2)
This change moves the definitions of feature test macros (or strictly speaking, the requests for <bits/version.h> to define them) so that only standard headers define them. For example, <bits/shared_ptr.h> will no longer define macros related to std::shared_ptr, only <memory> and <version> will define them. This means that __cpp_lib_shared_ptr_arrays will not be defined by <future> or by other headers that include <bits/shared_ptr.h>. It will only be defined when <memory> has been included. This will discourage users from relying on transitive includes. As a result, internal headers that need to query the macros should use the internal macros like __glibcxx_shared_ptr_arrays instead of __cpp_lib_shared_ptr_arrays, as those internal macros are defined by the internal headers after icluding <bits/version.h>. There are some exceptions to this rule, because __cpp_lib_is_constant_evaluated is defined by bits/c++config.h and so is available everywhere, and __cpp_lib_three_way_comparison is defined by <compare> which several headers are explicitly specified to include, so its macro is guaranteed to be usable too. N.B. not many internal headers actually need an explicit include of <bits/version.h>, because most of them include <type_traits> and so get all the __glibcxx_foo internal macros from there. libstdc++-v3/ChangeLog: * include/bits/algorithmfwd.h: Do not define standard feature test macro here. * include/bits/align.h: Likewise. Test internal macros instead of standard macros. * include/bits/alloc_traits.h: Likewise. * include/bits/allocator.h: Likewise. * include/bits/atomic_base.h: Likewise. * include/bits/atomic_timed_wait.h: Likewise. * include/bits/atomic_wait.h: Likewise. * include/bits/basic_string.h: Likewise. * include/bits/basic_string.tcc: Likewise. * include/bits/char_traits.h: Likewise. * include/bits/chrono.h: Likewise. * include/bits/cow_string.h: Likewise. * include/bits/forward_list.h: Likewise. * include/bits/hashtable.h: Likewise. * include/bits/ios_base.h: Likewise. * include/bits/memory_resource.h: Likewise. * include/bits/move.h: Likewise. * include/bits/move_only_function.h: Likewise. * include/bits/node_handle.h: Likewise. * include/bits/ptr_traits.h: Likewise. * include/bits/range_access.h: Likewise. * include/bits/ranges_algo.h: Likewise. * include/bits/ranges_cmp.h: Likewise. * include/bits/ranges_util.h: Likewise. * include/bits/semaphore_base.h: Likewise. * include/bits/shared_ptr.h: Likewise. * include/bits/shared_ptr_atomic.h: Likewise. * include/bits/shared_ptr_base.h: Likewise. * include/bits/stl_algo.h: Likewise. * include/bits/stl_algobase.h: Likewise. * include/bits/stl_function.h: Likewise. * include/bits/stl_iterator.h: Likewise. * include/bits/stl_list.h: Likewise. * include/bits/stl_map.h: Likewise. * include/bits/stl_pair.h: Likewise. * include/bits/stl_queue.h: Likewise. * include/bits/stl_stack.h: Likewise. * include/bits/stl_tree.h: Likewise. * include/bits/stl_uninitialized.h: Likewise. * include/bits/stl_vector.h: Likewise. * include/bits/unique_ptr.h: Likewise. * include/bits/unordered_map.h: Likewise. * include/bits/uses_allocator_args.h: Likewise. * include/bits/utility.h: Likewise. * include/bits/erase_if.h: Add comment. * include/std/algorithm: Define standard feature test macros here. * include/std/atomic: Likewise. * include/std/array: Likewise. * include/std/chrono: Likewise. * include/std/condition_variable: Likewise. * include/std/deque: Likewise. * include/std/format: Likewise. * include/std/functional: Likewise. 
* include/std/forward_list: Likewise. * include/std/ios: Likewise. * include/std/iterator: Likewise. * include/std/list: Likewise. * include/std/map: Likewise. * include/std/memory: Likewise. * include/std/numeric: Likewise. * include/std/queue: Likewise. * include/std/ranges: Likewise. * include/std/regex: Likewise. * include/std/set: Likewise. * include/std/stack: Likewise. * include/std/stop_token: Likewise. * include/std/string: Likewise. * include/std/string_view: * include/std/tuple: Likewise. * include/std/unordered_map: * include/std/unordered_set: * include/std/utility: Likewise. * include/std/vector: Likewise. * include/std/scoped_allocator: Query internal macros instead of standard macros.
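A sketch of the convention described above from the user's side: include a standard header (here <unordered_set>, which is one of the headers that defines the macro relevant to this file) and test the standard __cpp_lib_* macro; internal library headers instead test the __glibcxx_* form defined via <bits/version.h>.

    #include <unordered_set>

    #if defined(__cpp_lib_generic_unordered_lookup) \
        && __cpp_lib_generic_unordered_lookup >= 201811L
    // Heterogeneous lookup members (find, count, contains, equal_range)
    // of the unordered containers are available.
    #endif

    int main() { }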
2023-11-09  libstdc++: [_Hashtable] Use RAII type to manage rehash functor state  (François Dumont, 1 file, -41/+17)
Replace usage of __try/__catch with a RAII type to restore rehash functor state when needed. libstdc++-v3/ChangeLog: * include/bits/hashtable_policy.h (_RehashStateGuard): New. (_Insert_base<>::_M_insert_range(_IIt, _IIt, const _NodeGet&, false_type)): Adapt. * include/bits/hashtable.h (__rehash_guard_t): New. (__rehash_state): Remove. (_M_rehash): Remove. (_M_rehash_aux): Rename into _M_rehash. (_M_assign_elements, _M_insert_unique_node, _M_insert_multi_node): Adapt. (rehash): Adapt.
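An illustrative sketch of the RAII pattern described above, not the real _RehashStateGuard (the policy type here is a hypothetical stand-in): the guard saves the rehash-policy state and restores it in its destructor unless the operation completed and the guard was dismissed.

    #include <cstddef>

    struct policy { std::size_t next_resize = 0; };   // hypothetical stand-in

    struct rehash_state_guard
    {
        explicit rehash_state_guard(policy& p) : _pol(&p), _saved(p) { }
        ~rehash_state_guard() { if (_pol) *_pol = _saved; }   // roll back on exception
        void dismiss() noexcept { _pol = nullptr; }           // keep the new state
    private:
        policy* _pol;
        policy  _saved;
    };

    int main()
    {
        policy p;
        {
            rehash_state_guard guard(p);
            p.next_resize = 42;   // speculative update that might be abandoned
            guard.dismiss();      // the insertion succeeded: commit the new state
        }
    }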
2023-10-19  libstdc++: [_Hashtable] Do not reuse untrusted cached hash code  (François Dumont, 1 file, -2/+17)
On merge, reuse a merged node's possibly cached hash code only if both containers use the same hash functor type and that hash functor is stateless. Using function pointers or std::function as the hash functor will prevent reusing the cached hash code. libstdc++-v3/ChangeLog: * include/bits/hashtable_policy.h (_Hash_code_base::_M_hash_code(const _Hash&, const _Hash_node_value<>&)): Remove. (_Hash_code_base::_M_hash_code<_H2>(const _H2&, const _Hash_node_value<>&)): Remove. * include/bits/hashtable.h (_M_src_hash_code<_H2>(const _H2&, const key_type&, const __node_value_type&)): New. (_M_merge_unique<>, _M_merge_multi<>): Use the latter. * testsuite/23_containers/unordered_map/modifiers/merge.cc (test04, test05, test06): New test cases.
2023-08-16  libstdc++: Replace all manual FTM definitions and use  (Arsen Arsenović, 1 file, -4/+5)
libstdc++-v3/ChangeLog: * libsupc++/typeinfo: Switch to bits/version.h for __cpp_lib_constexpr_typeinfo. * libsupc++/new: Switch to bits/version.h for __cpp_lib_{launder,hardware_interference_size,destroying_delete}. (launder): Guard behind __cpp_lib_launder. (hardware_destructive_interference_size) (hardware_constructive_interference_size): Guard behind __cpp_lib_hardware_interference_size. * libsupc++/exception: Switch to bits/version.h for __cpp_lib_uncaught_exceptions. (uncaught_exceptions): Guard behind __cpp_lib_uncaught_exceptions. * libsupc++/compare: Switch to bits/version.h for __cpp_lib_three_way_comparison. (three_way_comparable, three_way_comparable_with) (compare_three_way, weak_order, strong_order, partial_order): Guard behind __cpp_lib_three_way_comparison >= 201907L. * include/std/chrono: Drop __cpp_lib_chrono definition. * include/std/vector: Switch to bits/version.h for __cpp_lib_erase_if. (erase, erase_if): Guard behind __cpp_lib_erase_if. * include/std/variant: Switch to bits/version.h for __cpp_lib_variant. Guard whole header behind that FTM. * include/std/utility: Switch to bits/version.h for __cpp_lib_{exchange_function,constexpr_algorithms,as_const}, __cpp_lib_{integer_comparison_functions,to_underlying}, and __cpp_lib_unreachable. (exchange): Guard behind __cpp_lib_exchange_function. (cmp_equal, cmp_not_equal, cmp_less, cmp_greater, cmp_less_equal) (cmp_greater_equal, in_range): Guard behind __cpp_lib_integer_comparison_functions. (to_underlying): Guard behind __cpp_lib_to_underlying. (unreachable): Guard behind __cpp_lib_unreachable. * include/std/type_traits: Switch to bits/version.h for __cpp_lib_is_{null_pointer,final,nothrow_convertible,aggregate}, __cpp_lib_is_{constant_evaluated,invocable,layout_compatible}, __cpp_lib_is_{pointer_interconvertible,scoped_enum,swappable}, __cpp_lib_{logical_traits,reference_from_temporary,remove_cvref}, __cpp_lib_{result_of_sfinae,transformation_trait_aliases}, __cpp_lib_{type_identity,type_trait_variable_templates}, __cpp_lib_{unwrap_ref,void_t,integral_constant_callable}, __cpp_lib_{bool_constant,bounded_array_traits}, and __cpp_lib_has_unique_object_representations. (integral_constant::operator()): Guard behind __cpp_lib_integral_constant_callable. (bool_constant): Guard behind __cpp_lib_bool_constant. (conjunction, disjunction, negation, conjunction_v, disjunction_v) (negation_v): Guard behind __cpp_lib_logical_traits. (is_null_pointer): Guard behind __cpp_lib_is_null_pointer. (is_final): Guard behind __cpp_lib_is_final. (is_nothrow_convertible, is_nothrow_convertible_v): Guard behind __cpp_lib_is_nothrow_convertible. (remove_const_t, remove_volatile_t, remove_cv_t) (add_const_t, add_volatile_t, add_cv_t): Guard behind __cpp_lib_transformation_trait_aliases. (void_t): Guard behind __cpp_lib_void_t. (is_swappable_with_v, is_nothrow_swappable_with_v) (is_swappable_with, is_nothrow_swappable_with): Guard behind __cpp_lib_is_swappable. (is_nothrow_invocable_r, is_invocable_r, invoke_result) (is_invocable, invoke_result_t): Guard behind __cpp_lib_is_invocable. 
(alignment_of_v, extent_v, has_virtual_destructor_v) (is_abstract_v, is_arithmetic_v, is_array_v) (is_assignable_v, is_base_of_v, is_class_v, is_compound_v) (is_constructible_v, is_const_v, is_convertible_v) (is_copy_assignable_v, is_copy_constructible_v) (is_default_constructible_v, is_destructible_v) (is_empty_v, is_enum_v, is_final_v, is_floating_point_v) (is_function_v, is_fundamental_v, is_integral_v) (is_invocable_r_v, is_invocable_v, is_literal_type_v) (is_lvalue_reference_v, is_member_function_pointer_v) (is_member_object_pointer_v, is_member_pointer_v) (is_move_assignable_v, is_move_constructible_v) (is_nothrow_assignable_v, is_nothrow_constructible_v) (is_nothrow_copy_assignable_v, is_nothrow_copy_constructible_v) (is_nothrow_default_constructible_v, is_nothrow_destructible_v) (is_nothrow_invocable_r_v, is_nothrow_invocable_v) (is_nothrow_move_assignable_v, is_nothrow_move_constructible_v) (is_null_pointer_v, is_object_v, is_pod_v, is_pointer_v) (is_polymorphic_v, is_reference_v, is_rvalue_reference_v) (is_same_v, is_scalar_v, is_signed_v, is_standard_layout_v) (is_trivially_assignable_v, is_trivially_constructible_v) (is_trivially_copyable_v, is_trivially_copy_assignable_v) (is_trivially_copy_constructible_v) (is_trivially_default_constructible_v) (is_trivially_destructible_v, is_trivially_move_assignable_v) (is_trivially_move_constructible_v, is_trivial_v, is_union_v) (is_unsigned_v, is_void_v, is_volatile_v, rank_v, as variadic): Guard behind __cpp_lib_type_trait_variable_templates. (has_unique_object_representations) (has_unique_object_representations_v): Guard behind __cpp_lib_has_unique_object_representation. (is_aggregate): Guard behind __cpp_lib_is_aggregate. (remove_cvref, remove_cvref_t): Guard behind __cpp_lib_remove_cvref. (type_identity, type_identity_t): Guard behind __cpp_lib_type_identity. (unwrap_reference, unwrap_reference_t, unwrap_ref_decay) (unwrap_ref_decay_t): Guard behind __cpp_lib_unwrap_ref. (is_bounded_array_v, is_unbounded_array_v, is_bounded_array) (is_unbounded_array): Guard behind __cpp_lib_bounded_array_traits. (is_scoped_enum, is_scoped_enum_v): Guard behind __cpp_lib_is_scoped_enum. (reference_constructs_from_temporary) (reference_constructs_from_temporary_v): Guard behind __cpp_lib_reference_from_temporary. * include/std/tuple: Switch to bits/version.h for __cpp_lib_{constexpr_tuple,tuple_by_type,apply_make_from_tuple}. (get<T>): Guard behind __cpp_lib_tuple_by_type. (apply): Guard behind __cpp_lib_apply. (make_from_tuple): Guard behind __cpp_lib_make_from_tuple. * include/std/syncstream: Switch to bits/version.h for __cpp_lib_syncbuf. Guard header behind that FTM. * include/std/string_view: Switch to bits/version.h for __cpp_lib_{string_{view,contains},constexpr_string_view} and __cpp_lib_starts_ends_with. (basic_string_view::starts_with, basic_string_view::ends_with): Guard behind __cpp_lib_starts_ends_with. [C++23 && _GLIBCXX_HOSTED && !defined(__cpp_lib_string_contains)]: Assert as impossible ithout a bug in C++23. * include/std/string: Switch to bits/version.h for __cpp_lib_erase_if. (erase, erase_if): Guard behind __cpp_lib_erase_if. * include/std/thread: Switch to bits/version.h for __cpp_lib_jthread. * include/std/stop_token: Switch to bits/version.h for __cpp_lib_jthread. * include/std/spanstream: Switch to bits/version.h for __cpp_lib_spanstream. Guard header behind that FTM. * include/std/span: Switch to bits/version.h for __cpp_lib_span. Guard header behind that FTM. 
* include/std/source_location: Switch to bits/version.h for __cpp_lib_source_location. Guard header with that FTM. * include/std/shared_mutex: Switch to bits/version.h for __cpp_lib_shared{,_timed}_mutex. (shared_mutex): Guard behind __cpp_lib_shared_mutex. * include/std/semaphore: Switch to bits/version.h for __cpp_lib_semaphore. Guard header behind that FTM. * include/std/ranges: Switch to bits/version.h for __cpp_lib_ranges_{zip,chunk{,_by},slide,join_with}, __cpp_lib_ranges_{repeat_stride,cartesian_product,as_rvalue}, and __cpp_lib_ranges_{as_const,enumerate,iota}. (ranges::zip et al, ranges::chunk et al, ranges::slide et al) (ranges::chunk_by et al, ranges::join_with et al) (ranges::stride et al, ranges::cartesian_product et al) (ranges::as_rvalue et al, ranges::as_const et al) (ranges::enumerate et al): Guard behind appropriate FTM. * include/std/optional: Switch to bits/version.h for __cpp_lib_optional. Guard header behind that FTM. * include/std/numeric: Switch to bits/version.h for __cpp_lib_{gcd{,_lcm},lcm,constexpr_numeric,interpolate} and __cpp_lib_parallel_algorithm. (gcd, lcm): Guard behind __cpp_lib_gcd_lcm. (midpoint): Guard behind __cpp_lib_interpolate. * include/std/numbers: Switch to bits/version.h for __cpp_lib_math_constants. Guard header behind that FTM. * include/std/mutex: Switch to bits/version.h for __cpp_lib_scoped_lock. (scoped_Lock): Guard behind __cpp_lib_scoped_lock. * include/std/memory_resource: Switch to bits/version.h for __cpp_lib_{polymorphic_allocator,memory_resource}. (synchronized_pool_resource): Guard behind __cpp_lib_memory_resource >= 201603L. (polymorphic_allocator): Guard behind __cpp_lib_polymorphic_allocator. * include/std/memory: Switch to bits/version.h for __cpp_lib_{parallel_algorithm,atomic_value_initialization}. * include/std/list: Switch to bits/version.h for __cpp_lib_erase_if. (erase, erase_if): Guard behind __cpp_lib_erase_if. * include/std/latch: Switch to bits/version.h for __cpp_lib_latch. Guard header behind that FTM. * include/std/iterator: Switch to bits/version.h for __cpp_lib_null_iterators. * include/std/iomanip: Switch to bits/version.h for __cpp_lib_quoted_string_io. (quoted): Guard behind __cpp_lib_quoted_string_io. * include/std/functional: Switch to bits/version.h for __cpp_lib_{invoke{,_r},constexpr_functional,bind_front} and __cpp_lib_{not_fn,booyer_moore_searcher}. (invoke): Guard behind __cpp_lib_invoke. (invoke_r): Guard behind __cpp_lib_invoke_r. (bind_front): Guard behind __cpp_lib_bind_front. (not_fn): Guard behind __cpp_lib_not_fn. (boyer_moore_searcher, boyer_moore_horspool_searcher): Guard definition behind __cpp_lib_boyer_moore_searcher. * include/std/forward_list: Switch to bits/version.h for __cpp_lib_erase_if. (erase, erase_if): Guard behind __cpp_lib_erase_if. * include/std/format: Switch to bits/version.h for __cpp_lib_format. Guard header behind that FTM. * include/std/filesystem: Switch to bits/version.h for __cpp_lib_filesystem. Guard header behind that FTM. * include/std/expected: Switch to bits/version.h for __cpp_lib_expected. Guard header behind it. * include/std/execution: Switch to bits/version.h for __cpp_lib_{execution,parallel_algorithm}. Guard header behind either. * include/std/deque: Switch to bits/version.h for __cpp_lib_erase_if. (erase, erase_if): Guard behind __cpp_lib_erase_if. * include/std/coroutine: Switch to bits/version.h for __cpp_lib_coroutine. Guard header behind that FTM. * include/std/concepts: Switch to bits/version.h for __cpp_lib_concepts. Guard header behind that FTM. 
* include/std/complex: Switch to bits/version.h for __cpp_lib_{complex_udls,constexpr_complex}. (operator""if, operator""i, operator""il): Guard behind __cpp_lib_complex_udls. * include/std/charconv: Swtich to bits/version.h for __cpp_lib_{to_chars,constexpr_charconv}. * include/std/bitset: Switch to bits/version.h for __cpp_lib_constexpr_bitset. * include/std/bit: Switch to bits/version.h for __cpp_lib_{bit_cast,byteswap,bitops,int_pow2,endian}. (bit_cast): Guard behind __cpp_lib_bit_cast. (byteswap): Guard behind __cpp_lib_byteswap. (rotl, rotr, countl_zero, countl_one, countr_zero, countr_one) (popcount): Guard behind __cpp_lib_bitops. (has_single_bit, bit_ceil, bit_floor, bit_width): Guard behind __cpp_lib_int_pow2. (endian): Guard behind __cpp_lib_endian. * include/std/barrier: Switch to bits/version.h for __cpp_lib_barrier. Guard header behind that FTM. * include/std/atomic: Switch to bits/version.h for __cpp_lib_atomic_{is_always_lock_free,float,ref} and __cpp_lib_lock_free_type_aliases. (*::is_always_lock_free): Guard behind __cpp_lib_atomic_is_always_lock_free. (atomic<float>): Guard behind __cpp_lib_atomic_float. (atomic_ref): Guard behind __cpp_lib_atomic_ref. (atomic_signed_lock_free, atomic_unsigned_lock_free): Guard behind __cpp_lib_atomic_lock_free_type_aliases. * include/std/array: Switch to bits/version.h for __cpp_lib_to_array. (to_array): Guard behind __cpp_lib_to_array. * include/std/any: Switch to bits/version.h for __cpp_lib_any. Guard header behind that FTM. * include/std/algorithm: Switch to bits/version.h for __cpp_lib_parallel_algorithm. * include/c_global/cstddef: Switch to bits/version.h for __cpp_lib_byte. (byte): Guard behind __cpp_lib_byte. * include/c_global/cmath: Switch to bits/version.h for __cpp_lib_{hypot,interpolate}. (hypot3): Guard behind __cpp_lib_hypot. (lerp): Guard behind __cpp_lib_interpolate. * include/c_compatibility/stdatomic.h: Switch to bits/stl_version.h for __cpp_lib_atomic. Guard header behind that FTM. * include/bits/utility.h: Switch to bits/version.h for __cpp_lib_{tuple_element_t,integer_sequence,ranges_zip}. (tuple_element_t): Guard behind __cpp_lib_tuple_element_t. (integer_sequence et al): Guard behind __cpp_lib_integer_sequence. * include/bits/uses_allocator_args.h: Switch to bits/version.h for __cpp_lib_make_obj_using_allocator. Guard header behind that FTM. * include/bits/unordered_map.h: Switch to bits/version.h for __cpp_lib_unordered_map_try_emplace. (try_emplace): Guard behind __cpp_lib_unordered_map_try_emplace. * include/bits/unique_ptr.h: Switch to bits/version.h for __cpp_lib_{constexpr_memory,make_unique}. (make_unique): Guard behind __cpp_lib_make_unique. * include/bits/stl_vector.h: Switch to bits/version.h for __cpp_lib_constexpr_vector. * include/bits/stl_uninitialized.h: Switch to bits/version.h for __cpp_lib_raw_memory_algorithms. (uninitialized_default_construct) (uninitialized_default_construct_n, uninitialized_move) (uninitialized_move_n, uninitialized_value_construct) (uninitialized_value_construct_n): Guard behind __cpp_lib_raw_memory_algorithms. * include/bits/stl_tree.h: Switch to bits/version.h for __cpp_lib_generic_associative_lookup. * include/bits/stl_stack.h: Switch to bits/version.h for __cpp_lib_adaptor_iterator_pair_constructor. (stack): Guard iterator-pair constructor behind __cpp_lib_adaptor_iterator_pair_constructor. * include/bits/stl_queue.h: Switch to bits/version.h for __cpp_lib_adaptor_iterator_pair_constructor. 
(queue): Guard iterator-pair constructor behind __cpp_lib_adaptor_iterator_pair_constructor. * include/bits/stl_pair.h: Switch to bits/version.h for __cpp_lib_{concepts,tuples_by_type}. (get): Guard type-getting overloads behind __cpp_lib_tuples_by_type. * include/bits/stl_map.h: Switch to bits/version.h for __cpp_lib_map_try_emplace. (map<>::try_emplace): Guard behind __cpp_lib_map_try_emplace. * include/bits/stl_list.h: Switch to bits/version.h for __cpp_lib_list_remove_return_type. (__remove_return_type, _GLIBCXX_LIST_REMOVE_RETURN_TYPE_TAG) [C++20]: guard behind __cpp_lib_list_remove_return_type instead. * include/bits/stl_iterator.h: Switch to bits/version.h for __cpp_lib_{constexpr_iterator,array_constexpr} and __cpp_lib_{make_reverse_iterator,move_iterator_concept}. (make_reverse_iterator): Guard behind __cpp_lib_make_reverse_iterator. (iterator_concept et al): Guard __cpp_lib_move_iterator_concept changes behind that FTM. * include/bits/stl_function.h: Switch to bits/version.h for __cpp_lib_transparent_operators. (equal_to, not_equal_to, greater, less, greater_equal) (less_equal, bit_and, bit_or, bit_xor, bit_not, logical_and) (logical_or, logical_not, plus, minus, multiplies, divides) (modulus, negate): Guard '= void' fwdecls behind __cpp_lib_transparent_operators. (plus<void>, minus<void>, multiplies<void>, divides<void>) (modulus<void>, negate<void>, logical_and<void>, logical_or<void>) (logical_not<void>, bit_and<void>, bit_or<void>, bit_xor<void>) (equal_to<void>, not_equal_to<void>, greater<void>, less<void>) (greater_equal<void>, less_equal<void>, bit_not<void>) (__has_is_transparent): Guard behind __cpp_lib_transparent_operators. * include/bits/stl_algobase.h: Switch to bits/version.h for __cpp_lib_robust_nonmodifying_seq_ops. (robust equal, mismatch): Guard behind __cpp_lib_nonmember_container_access. * include/bits/stl_algo.h: Swtich to bits/version.h for __cpp_lib_{clamp,sample}. (clamp): Guard behind __cpp_lib_clamp. (sample): Guard behind __cpp_lib_sample. * include/bits/specfun.h: Switch to bits/version.h for __cpp_lib_math_special_functions and __STDCPP_MATH_SPEC_FUNCS__. * include/bits/shared_ptr_base.h: Switch to bits/version.h for __cpp_lib_{smart_ptr_for_overwrite,shared_ptr_arrays}. (_Sp_overwrite_tag): Guard behind __cpp_lib_smart_ptr_for_overwrite. * include/bits/shared_ptr_atomic.h: Switch to bits/version.h for __cpp_lib_atomic_shared_ptr. * include/bits/shared_ptr.h: Switch to bits/version.h for __cpp_lib_{enable_shared_from_this,shared_ptr_weak_type}. (shared_ptr<T>::weak_type): Guard behind __cpp_lib_shared_ptr_weak_type. (enable_shared_from_this<T>::weak_from_this): Guard behind __cpp_lib_enable_shared_from_this. * include/bits/ranges_cmp.h: Switch to bits/version.h for __cpp_lib_ranges. * include/bits/ranges_algo.h: Switch to bits/version.h for __cpp_lib_{shift,ranges_{contains,find_last,fold,iota}}. * include/bits/range_access.h: Switch to bits/version.h for __cpp_lib_nonmember_container_access (size, empty, data): Guard behind __cpp_lib_nonmember_container_access. (ssize): Guard behind __cpp_lib_ssize. * include/bits/ptr_traits.h: Switch to bits/version.h. for __cpp_lib_{constexpr_memory,to_address}. (to_address): Guard behind __cpp_lib_to_address. * include/bits/node_handle.h: Switch to bits/version.h for __cpp_lib_node_extract. Guard header behind that FTM. * include/bits/move_only_function.h: Switch to bits/version.h for __cpp_lib_move_only_function. Guard header behind that FTM. 
* include/bits/move.h: Switch to bits/version.h for __cpp_lib_addressof_constexpr. * include/bits/ios_base.h: Switch to bits/version.h for __cpp_lib_ios_noreplace. (noreplace): Guard with __cpp_lib_ios_noreplace. * include/bits/hashtable.h: Switch to bits/version.h for __cpp_lib_generic_unordered_lookup. (_M_equal_range_tr, _M_count_tr, _M_find_tr): Guard behind __cpp_lib_generic_unordered_lookup. * include/bits/forward_list.h: Switch to bits/version.h for __cpp_lib_list_remove_return_type. (__remove_return_type): Guard behind __cpp_lib_list_remove_return_type. * include/bits/erase_if.h: Switch to bits/version.h for __cpp_lib_erase_if. * include/bits/cow_string.h: Switch to bits/version.h for __cpp_lib_constexpr_string. * include/bits/chrono.h: Swtich to bits/version.h for __cpp_lib_chrono{,_udls}. (ceil): Guard behind __cpp_lib_chrono. (operator""ns et al): Guard behind __cpp_lib_chrono_udls. * include/bits/char_traits.h: Switch to bits/version.h for __cpp_lib_constexpr_char_traits. * include/bits/basic_string.h: Switch to bits/version.h for __cpp_lib_{constexpr_string,string_{resize_and_overwrite,udls}}. (resize_and_overwrite): Guard behind __cpp_lib_string_resize_and_overwrite. (operator""s): Guard behind __cpp_lib_string_udls. * include/bits/atomic_wait.h: Switch to bits/version.h for __cpp_lib_atomic_wait. Guard header behind that FTM. * include/bits/atomic_base.h: Switch to bits/version.h for __cpp_lib_atomic_value_initialization and __cpp_lib_atomic_flag_test. (atomic_flag::test): Guard behind __cpp_lib_atomic_flag_test, rather than C++20. * include/bits/allocator.h: Switch to bits/version.h for __cpp_lib_incomplete_container_elements. * include/bits/alloc_traits.h: Switch to using bits/version.h for __cpp_lib_constexpr_dynamic_alloc and __cpp_lib_allocator_traits_is_always_equal. * include/bits/align.h: Switch to bits/version.h for defining __cpp_lib_assume_aligned. (assume_aligned): Guard with __cpp_lib_assume_aligned. * include/bits/algorithmfwd.h: Switch to bits/version.h for defining __cpp_lib_constexpr_algorithms. * include/std/stacktrace: Switch to bits/version.h for __cpp_lib_stacktrace. Guard header behind that FTM. * testsuite/23_containers/array/tuple_interface/get_neg.cc: Update line numbers.
2023-05-10  libstdc++: [_Hashtable] Implement several small methods implicitly inline  (François Dumont, 1 file, -105/+76)
Make implementation of 3 simple _Hashtable methods implicitly inline. Avoid usage of const_iterator abstraction within _Hashtable implementation. Replace several usages of __node_type* with expected __node_ptr. libstdc++-v3/ChangeLog: * include/bits/hashtable_policy.h (_NodeBuilder<>::_S_build): Use __node_ptr. (_ReuseOrAllocNode<>): Use __node_ptr in place of __node_type*. (_AllocNode<>): Likewise. (_Equality<>::_M_equal): Remove const_iterator usages. Only preserved to call std::is_permutation in the non-unique key implementation. * include/bits/hashtable.h (_Hashtable<>::_M_update_begin()): Capture _M_begin() once. (_Hashtable<>::_M_bucket_begin(size_type)): Implement implicitly inline. (_Hashtable<>::_M_insert_bucket_begin): Likewise. (_Hashtable<>::_M_remove_bucket_begin): Likewise. (_Hashtable<>::_M_compute_hash_code): Use __node_ptr rather than const_iterator. (_Hashtable<>::find): Likewise. (_Hashtable<>::_M_emplace): Likewise. (_Hashtable<>::_M_insert_unique): Likewise.
2023-01-16  Update copyright years.  (Jakub Jelinek, 1 file, -1/+1)
2022-06-15  libstdc++: [_Hashtable] Insert range of types convertible to value_type PR 105717  (François Dumont, 1 file, -10/+20)
Fix insertion of a range of instances convertible to value_type. libstdc++-v3/ChangeLog: PR libstdc++/105717 * include/bits/hashtable_policy.h (_ConvertToValueType): New. * include/bits/hashtable.h (_Hashtable<>::_M_insert_unique_aux): New. (_Hashtable<>::_M_insert(_Arg&&, const _NodeGenerator&, true_type)): Use the latter. (_Hashtable<>::_M_insert(_Arg&&, const _NodeGenerator&, false_type)): Likewise. (_Hashtable(_InputIterator, _InputIterator, size_type, const _Hash&, const _Equal&, const allocator_type&, true_type)): Use this.insert range. (_Hashtable(_InputIterator, _InputIterator, size_type, const _Hash&, const _Equal&, const allocator_type&, false_type)): Use _M_insert. * testsuite/23_containers/unordered_map/cons/56112.cc: Check how many times conversion is done. * testsuite/23_containers/unordered_map/insert/105717.cc: New test. * testsuite/23_containers/unordered_set/insert/105717.cc: New test.
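The kind of range the fix is about can be illustrated like this: each element of the source range is convertible to the map's value_type and, per the test mentioned above, should be converted exactly once during insertion.

    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    int main()
    {
        // pair<const char*, int> converts to unordered_map's value_type,
        // pair<const std::string, int>.
        std::vector<std::pair<const char*, int>> src{{"one", 1}, {"two", 2}};
        std::unordered_map<std::string, int> m(src.begin(), src.end());
        return m.size() == 2 ? 0 : 1;
    }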
2022-05-26  libstdc++: Refactor includes for unordered containers  (Jonathan Wakely, 1 file, -2/+1)
This moves some #include directives to the relevant place. For example, <bits/hashtable_policy.h> needs <bits/stl_pair.h> so should include it directly instead of relying on <unordered_map> and <unordered_set> to do so first. libstdc++-v3/ChangeLog: * include/bits/functional_hash.h (__is_fast_hash): Add doxygen comment. * include/bits/hashtable.h: Do not include <bits/stl_function.h> here. * include/bits/hashtable_policy.h: Include <bits/stl_pair.h> and <bits/functional_hash.h>. * include/bits/unordered_map.h: Include required headers. * include/bits/unordered_set.h: Likewise. * include/std/unordered_map: Do not include headers for indirect dependencies. * include/std/unordered_set: Likewise.
2022-05-26  libstdc++: Make headers include their prerequisites  (Nathan Sidwell, 1 file, -0/+2)
These headers were relying on their includers having already included some prerequisites. That makes them unsuitable to be header-units. So directly include the needed headers. Reviewed-by: Jonathan Wakely <jwakely@redhat.com> libstdc++-v3/ChangeLog: * include/bits/hashtable.h: Include required headers. * include/bits/hashtable_policy.h: Likewise. * include/bits/stl_heap.h: Likewise. * include/bits/stl_iterator_base_funcs.h: Likewise.
2022-01-05  libstdc++: Optimize operations on small size hashtable [PR 68303]  (François Dumont, 1 file, -26/+161)
When the hasher is identified as slow and the number of elements in the container is limited, use a brute-force loop over those elements to look for a given key using the key_equal functor. For the moment the default threshold below which the container is considered small is 20. libstdc++-v3/ChangeLog: PR libstdc++/68303 * include/bits/hashtable_policy.h (_Hashtable_hash_traits<_Hash>): New. (_Hash_code_base<>::_M_hash_code(const _Hash_node_value<>&)): New. (_Hashtable_base<>::_M_key_equals): New. (_Hashtable_base<>::_M_equals): Use the latter. (_Hashtable_base<>::_M_key_equals_tr): New. (_Hashtable_base<>::_M_equals_tr): Use the latter. * include/bits/hashtable.h (_Hashtable<>::__small_size_threshold()): New, use _Hashtable_hash_traits. (_Hashtable<>::find): Loop through elements to look for key if size is lower than __small_size_threshold(). (_Hashtable<>::_M_emplace(true_type, _Args&&...)): Likewise. (_Hashtable<>::_M_insert_unique(_Kt&&, _Args&&, const _NodeGenerator&)): Likewise. (_Hashtable<>::_M_compute_hash_code(const_iterator, const key_type&)): New. (_Hashtable<>::_M_emplace(const_iterator, false_type, _Args&&...)): Use the latter. (_Hashtable<>::_M_find_before_node(const key_type&)): New. (_Hashtable<>::_M_erase(true_type, const key_type&)): Use the latter. (_Hashtable<>::_M_erase(false_type, const key_type&)): Likewise. * src/c++11/hashtable_c++0x.cc: Include <bits/functional_hash.h>. * testsuite/util/testsuite_performance.h (report_performance): Use a width of 9 to display memory. * testsuite/performance/23_containers/insert_erase/unordered_small_size.cc: New performance test case.
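An illustrative sketch of the strategy (the node layout and names are hypothetical, not the real _Hashtable internals): below the small-size threshold, scan the singly linked list of elements with key_equal and never touch the hasher.

    #include <cstddef>
    #include <functional>

    struct node { int key; node* next; };   // hypothetical element node

    // Returns the matching node, or nullptr if the key is absent or the
    // container is too large for the brute-force path.
    node* small_find(node* first, std::size_t size, int key,
                     std::size_t threshold = 20)
    {
        if (size > threshold)
            return nullptr;                  // caller falls back to the hashed lookup
        std::equal_to<int> eq;
        for (node* p = first; p != nullptr; p = p->next)
            if (eq(p->key, key))
                return p;
        return nullptr;
    }

    int main()
    {
        node n2{2, nullptr};
        node n1{1, &n2};
        return small_find(&n1, 2, 2) == &n2 ? 0 : 1;
    }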
2022-01-03  Update copyright years.  (Jakub Jelinek, 1 file, -1/+1)
2021-11-15  libstdc++: Unordered containers merge re-use hash code  (François Dumont, 1 file, -4/+6)
When merging two unordered containers that use the same hasher we can reuse the hash code from the cache, if any. Also, in the context of a merge operation on a multi-container, use the previous insert iterator as a hint for the next insert. libstdc++-v3/ChangeLog: * include/bits/hashtable_policy.h (_Hash_code_base<>::_M_hash_code(const _Hash&, const _Hash_node_value<_Value, true>&)): New. (_Hash_code_base<>::_M_hash_code<_H2>(const _H2&, const _Hash_node_value<>&)): New. * include/bits/hashtable.h (_Hashtable<>::_M_merge_unique): Use the latter. (_Hashtable<>::_M_merge_multi): Likewise. * testsuite/23_containers/unordered_multiset/modifiers/merge.cc (test05): New test. * testsuite/23_containers/unordered_set/modifiers/merge.cc (test04): New test.
2021-11-09  libstdc++: [_GLIBCXX_DEBUG] Implement unordered container merge  (François Dumont, 1 file, -5/+12)
The _GLIBCXX_DEBUG unordered containers need a dedicated merge implementation so that any existing iterator on the transfered nodes is properly invalidated. Add typedef/using declarations for everything used as-is from normal implementation. libstdc++-v3/ChangeLog: * include/bits/hashtable_policy.h (__distance_fw): Replace class keyword with typename. * include/bits/hashtable.h (_Hashtable<>::_M_merge_unique): Remove noexcept qualification. Use const_iterator for node extraction/reinsert. (_Hashtable<>::_M_merge_multi): Likewise. Compute new hash code before extract. * include/debug/safe_container.h (_Safe_container<>): Make all methods protected. * include/debug/safe_unordered_container.h (_Safe_unordered_container<>::_UContInvalidatePred<_ExtractKey, _Source>): New. (_Safe_unordered_container<>::_UMContInvalidatePred<_ExtractKey, _Source>): New. (_Safe_unordered_container<>::_UContMergeGuard<_Source, _InvalidatePred>): New. (_Safe_unordered_container<>::_S_uc_guard<_ExtractKey, _Source>): New. (_Safe_unordered_container<>::_S_umc_guard<_ExtractKey, _Source>): New. (_Safe_unordered_container<>::_M_invalide_all): Make public. (_Safe_unordered_container<>::_M_invalide_if): Likewise. (_Safe_unordered_container<>::_M_invalide_local_if): Likewise. * include/debug/unordered_map (unordered_map<>::mapped_type, pointer, const_pointer): New typedef. (unordered_map<>::reference, const_reference, difference_type): New typedef. (unordered_map<>::get_allocator, empty, size, max_size): Add usings. (unordered_map<>::bucket_count, max_bucket_count, bucket): Add usings. (unordered_map<>::hash_function, key_equal, count, contains): Add usings. (unordered_map<>::operator[], at, rehash, reserve): Add usings. (unordered_map<>::merge): New. (unordered_multimap<>::mapped_type, pointer, const_pointer): New typedef. (unordered_multimap<>::reference, const_reference, difference_type): New typedef. (unordered_multimap<>::get_allocator, empty, size, max_size): Add usings. (unordered_multimap<>::bucket_count, max_bucket_count, bucket): Add usings. (unordered_multimap<>::hash_function, key_equal, count, contains): Add usings. (unordered_multimap<>::rehash, reserve): Add usings. (unordered_multimap<>::merge): New. * include/debug/unordered_set (unordered_set<>::mapped_type, pointer, const_pointer): New typedef. (unordered_set<>::reference, const_reference, difference_type): New typedef. (unordered_set<>::get_allocator, empty, size, max_size): Add usings. (unordered_set<>::bucket_count, max_bucket_count, bucket): Add usings. (unordered_set<>::hash_function, key_equal, count, contains): Add usings. (unordered_set<>::rehash, reserve): Add usings. (unordered_set<>::merge): New. (unordered_multiset<>::mapped_type, pointer, const_pointer): New typedef. (unordered_multiset<>::reference, const_reference, difference_type): New typedef. (unordered_multiset<>::get_allocator, empty, size, max_size): Add usings. (unordered_multiset<>::bucket_count, max_bucket_count, bucket): Add usings. (unordered_multiset<>::hash_function, key_equal, count, contains): Add usings. (unordered_multiset<>::rehash, reserve): Add usings. (unordered_multiset<>::merge): New. * testsuite/23_containers/unordered_map/debug/merge1_neg.cc: New test. * testsuite/23_containers/unordered_map/debug/merge2_neg.cc: New test. * testsuite/23_containers/unordered_map/debug/merge3_neg.cc: New test. * testsuite/23_containers/unordered_map/debug/merge4_neg.cc: New test. * testsuite/23_containers/unordered_multimap/debug/merge1_neg.cc: New test. 
* testsuite/23_containers/unordered_multimap/debug/merge2_neg.cc: New test. * testsuite/23_containers/unordered_multimap/debug/merge3_neg.cc: New test. * testsuite/23_containers/unordered_multimap/debug/merge4_neg.cc: New test. * testsuite/23_containers/unordered_multiset/debug/merge1_neg.cc: New test. * testsuite/23_containers/unordered_multiset/debug/merge2_neg.cc: New test. * testsuite/23_containers/unordered_multiset/debug/merge3_neg.cc: New test. * testsuite/23_containers/unordered_multiset/debug/merge4_neg.cc: New test. * testsuite/23_containers/unordered_set/debug/merge1_neg.cc: New test. * testsuite/23_containers/unordered_set/debug/merge2_neg.cc: New test. * testsuite/23_containers/unordered_set/debug/merge3_neg.cc: New test. * testsuite/23_containers/unordered_set/debug/merge4_neg.cc: New test. * testsuite/util/testsuite_abi.h: [_GLIBCXX_DEBUG] Use normal unordered container implementation.
2021-10-21  libstdc++: Suppress Doxygen docs for more implementation details  (Jonathan Wakely, 1 file, -0/+2)
libstdc++-v3/ChangeLog: * include/bits/alloc_traits.h: Suppress doxygen documentation. * include/bits/allocated_ptr.h: Likewise. * include/bits/enable_special_members.h: Likewise. * include/bits/hashtable.h: Likewise. * include/bits/hashtable_policy.h: Likewise. * include/bits/uses_allocator.h: Likewise. * include/bits/node_handle.h: Document node handles and suppress documentation for protected members. * include/std/any: Suppress documentation for implementation details.
2021-10-09  libstdc++: Avoid instantiation of _Hash_node before it's needed  (Jonathan Wakely, 1 file, -8/+9)
This is a step towards restoring support for incomplete types in unordered containers (PR 53339). We do not need to instantiate the node type to get its value_type member, because we know that the value type is the first template parameter. We can deduce that template argument using a custom trait and a partial specialization for _Hash_node. If we wanted to support custom hash node types we could still use typename _Tp::value_type in the primary template of that trait, but that seems unnecessary. The other change needed is to defer a static assert at class scope, so that it is done when the types are complete. We must have a complete type in the destructor, so we can do it there instead. libstdc++-v3/ChangeLog: * include/bits/hashtable.h: Move static assertion to destructor. * include/bits/hashtable_policy.h: Deduce value type from node type without instantiating it.
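A sketch of the trait technique described above, with hypothetical names standing in for _Hash_node and the real trait: a partial specialization reads the first template argument back out of the node type, so the value type is known without instantiating the node, and the element type may stay incomplete.

    #include <type_traits>

    struct incomplete;                      // deliberately never defined

    template<typename T>
    struct hash_node;                       // stand-in for _Hash_node, never instantiated here

    template<typename N>
    struct node_value_type;                 // primary template left undefined

    template<typename T>
    struct node_value_type<hash_node<T>>    // deduce T from the node type
    { using type = T; };

    static_assert(std::is_same<node_value_type<hash_node<incomplete>>::type,
                               incomplete>::value,
                  "value type deduced without instantiating the node type");

    int main() { }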
2021-10-01  libstdc++: Add std::__conditional_t alias template  (Jonathan Wakely, 1 file, -7/+7)
This change is inspired by the suggestion in http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1715r0.html The new std::__conditional_t alias template is functionally equivalent to std::conditional_t but should be more efficient to compile, due to only ever instantiating two specializations (std::__conditional<true> and std::__conditional<false>) rather than a new specialization for every use of std::conditional. The new alias template is also available in C++11, unlike the C++14 std::conditional_t alias. Signed-off-by: Jonathan Wakely <jwakely@redhat.com> libstdc++-v3/ChangeLog: * include/std/type_traits (__conditional): New class template for internal uses of std::conditional. (__conditional_t): New alias template to replace conditional_t. (__and_, __or_, __result_of_memfun, __result_of_memobj): Use __conditional_t instead of conditional::type. * include/bits/atomic_base.h (__atomic_impl::_Diff): Likewise. * include/bits/hashtable.h (_Hashtable): Likewise. * include/bits/hashtable_policy.h (_Node_iterator, _Insert_base) (_Local_iterator): Likewise. Replace typedefs with using-declarations. * include/bits/move.h (move_if_noexcept): Use __conditional_t. * include/bits/parse_numbers.h (_Select_int_base): Likewise. * include/bits/ptr_traits.h (__make_not_void): Likewise. * include/bits/ranges_algobase.h (__copy_or_move_backward) (__copy_or_move): Likewise. * include/bits/ranges_base.h (borrowed_iterator_t): Likewise. * include/bits/ranges_util.h (borrowed_subrange_t): Likewise. * include/bits/regex_compiler.h (_BracketMatcher): Use __conditional_t. Replace typedefs with using-declarations. * include/bits/shared_ptr_base.h (__shared_count): Use __conditional_t. * include/bits/stl_algobase.h (__copy_move, __copy_move_backward): Likewise. * include/bits/stl_iterator.h (__detail::__clamp_iter_cat) (reverse_iterator::iterator_concept) (__make_move_if_noexcept_iterator) (iterator_traits<common_iterator<_It, _Sent>>) (iterator_traits<counted_iterator<_It>>): Likewise. * include/bits/stl_pair.h (_PCC, pair::operator=): Likewise. * include/bits/stl_tree.h (_Rb_tree::insert_return_type) (_Rb_tree::_M_clone_node): Likewise. * include/bits/unique_ptr.h (unique_ptr(unique_ptr<U,E>&&)): Likewise. * include/bits/uses_allocator.h (__uses_alloc): Likewise. (__is_uses_allocator_predicate): Likewise. * include/debug/functions.h (__foreign_iterator_aux2): Likewise. * include/experimental/any (any::_Manager, __any_caster): Likewise. * include/experimental/executor (async_completion): Likewise. * include/experimental/functional (__boyer_moore_base_t): Likewise. * include/std/any (any::_Manager): Likewise. * include/std/functional (__boyer_moore_base_t): Likewise. * include/std/ranges (borrowed_iterator_t) (borrowed_subrange_t, __detail::__maybe_present_t) (__detail::__maybe_const_t, split_view): Likewise. * include/std/tuple (__empty_not_final, tuple::operator=): Likewise. * include/std/variant (__detail::__variant::__get_t): Likewise.
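The shape of the technique is sketched below with stand-in names (underscore-prefixed names are reserved, so the sketch avoids them): whichever branch is selected, only the two bool specializations are ever instantiated, instead of one std::conditional specialization per distinct <B, T, F> combination.

    #include <type_traits>

    template<bool>
    struct select                       // primary template handles the 'true' case
    {
        template<typename T, typename>
        using type = T;
    };

    template<>
    struct select<false>                // single extra specialization for 'false'
    {
        template<typename, typename F>
        using type = F;
    };

    template<bool B, typename T, typename F>
    using select_t = typename select<B>::template type<T, F>;

    static_assert(std::is_same<select_t<true, int, long>, int>::value, "");
    static_assert(std::is_same<select_t<false, int, long>, long>::value, "");

    int main() { }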
2021-07-22  libstdc++: Fix non-default constructors for hash containers [PR101583]  (Jonathan Wakely, 1 file, -5/+12)
When I added the new mixin to _Hashtable, I forgot to explicitly construct it in each non-default constructor. That means you can't use any constructors unless all three of the hash function, equality function, and allocator are all default constructible. libstdc++-v3/ChangeLog: PR libstdc++/101583 * include/bits/hashtable.h (_Hashtable): Replace mixin with _Enable_default_ctor. Construct it explicitly in all non-forwarding, non-defaulted constructors. * testsuite/23_containers/unordered_map/cons/default.cc: Check non-default constructors can be used. * testsuite/23_containers/unordered_set/cons/default.cc: Likewise.
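The user-level case affected by this bug can be sketched like so (salted_hash is an illustrative hasher, not from the patch): a hasher without a default constructor must not prevent the bucket-count-plus-hasher constructor from being used.

    #include <cstddef>
    #include <unordered_set>

    struct salted_hash
    {
        explicit salted_hash(std::size_t s) : salt(s) { }   // no default constructor
        std::size_t operator()(int x) const noexcept
        { return static_cast<std::size_t>(x) ^ salt; }
        std::size_t salt;
    };

    int main()
    {
        // Only the hasher is non-default-constructible; this is the kind of
        // constructor call the fix described above re-enables.
        std::unordered_set<int, salted_hash> s(8, salted_hash(0x9e3779b9));
        s.insert(42);
        return s.count(42) == 1 ? 0 : 1;
    }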
2021-07-20  libstdc++: fix is_default_constructible for hash containers [PR 100863]  (Jonathan Wakely, 1 file, -1/+14)
The recent change to _Hashtable_ebo_helper for this PR broke the is_default_constructible trait for a hash container with a non-default constructible allocator. That happens because the constructor needs to be user-provided in order to initialize the member, and so is not defined as deleted when the type is not default constructible. By making _Hashtable derive from _Enable_special_members we can ensure that the default constructor for the std::unordered_xxx containers is deleted when it would be ill-formed. This makes the trait give the correct answer. Signed-off-by: Jonathan Wakely <jwakely@redhat.com> libstdc++-v3/ChangeLog: PR libstdc++/100863 * include/bits/hashtable.h (_Hashtable): Conditionally delete default constructor by deriving from _Enable_special_members. * testsuite/23_containers/unordered_map/cons/default.cc: New test. * testsuite/23_containers/unordered_set/cons/default.cc: New test.
2021-06-04  libstdc++: Add feature test macro for heterogeneous lookup in unordered containers  (Jonathan Wakely, 1 file, -2/+4)
Also update the C++20 status docs. Signed-off-by: Jonathan Wakely <jwakely@redhat.com> libstdc++-v3/ChangeLog: * doc/xml/manual/status_cxx2020.xml: * doc/html/*: Regenerate. * include/bits/hashtable.h (__cpp_lib_generic_unordered_lookup): Define. * include/std/version (__cpp_lib_generic_unordered_lookup): Define. * testsuite/23_containers/unordered_map/operations/1.cc: Check feature test macro. * testsuite/23_containers/unordered_set/operations/1.cc: Likewise.
2021-05-24  libstdc++: Limit allocation on iterator insertion in Hashtable [PR 96088]  (François Dumont, 1 file, -13/+46)
When inserting into unordered_multiset or unordered_multimap first instantiate the node to store and compute the hash code from it to avoid a potential intermediate key_type instantiation. When inserting into unordered_set or unordered_map check if invoking the hash functor with container key_type is noexcept and invoking the same hash functor with key part of the iterator value_type can throw. In this case create a temporary key_type instance at Hashtable level and use it to compute the hash code. This temporary instance is moved to the final storage location if needed. libstdc++-v3/ChangeLog: PR libstdc++/96088 * include/bits/hashtable_policy.h (_Select2nd): New. (_NodeBuilder<>): New. (_ReuseOrAllocNode<>::operator()): Use variadic template args. (_AllocNode<>::operator()): Likewise. * include/bits/hashtable.h (_Hashtable<>::__node_builder_t): New. (_Hashtable<>::_M_insert_unique<>(_Kt&&, _Arg&&, const _NodeGenerator&)): New. (_Hashtable<>::_S_forward_key): New. (_Hashtable<>::_M_insert): Use latter. (_Hashtable<>::_M_insert(const_iterator, _Arg&&, const _NodeGenerator&, false_type)): Instantiate node first, compute hash code second. * testsuite/23_containers/unordered_map/96088.cc: New test. * testsuite/23_containers/unordered_multimap/96088.cc: New test. * testsuite/23_containers/unordered_multiset/96088.cc: New test. * testsuite/23_containers/unordered_set/96088.cc: New test. * testsuite/util/replacement_memory_operators.h (counter::_M_increment): New. (counter::_M_decrement): New. (counter::reset()): New.
2021-04-09  libstdc++: Fix invalid constexpr function in C++11 mode [PR 99985]  (Jonathan Wakely, 1 file, -2/+8)
I keep forgetting that a constexpr function in C++11 has to be a single return statement. libstdc++-v3/ChangeLog: PR libstdc++/99985 * include/bits/hashtable.h (_Hashtable::_S_nothrow_move()): Fix to be a valid constexpr function in C++11. * testsuite/23_containers/unordered_set/cons/99985.cc: New test.
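The restriction being worked around is the C++11 rule that a constexpr function body must consist of a single return statement, so any branching has to go through the conditional operator; a minimal illustration:

    // Valid in C++11: one return statement, branching via ?:
    constexpr int abs11(int x)
    { return x < 0 ? -x : x; }

    // Only valid from C++14 onwards: multiple statements in a constexpr body.
    // constexpr int abs14(int x) { if (x < 0) return -x; return x; }

    static_assert(abs11(-3) == 3, "evaluated at compile time");

    int main() { }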
2021-04-08  libstdc++: Simplify noexcept-specifiers for move constructors  (Jonathan Wakely, 1 file, -12/+14)
This puts the logic for the noexcept-specifier in one place, and then reuses it elsewhere. This means checking whether the move constructor can throw doesn't need to do overload resolution and then check whether some other constructor can throw, we just get the answer directly. libstdc++-v3/ChangeLog: * include/bits/hashtable.h (_Hashtable::_S_nothrow_move()): New function to determine noexcept-specifier for move constructors. (_Hashtable): Use _S_nothrow_move() on move constructors. * testsuite/23_containers/unordered_map/cons/noexcept_move_construct.cc: Correct static assertion message. * testsuite/23_containers/unordered_multimap/cons/noexcept_move_construct.cc: Likewise. * testsuite/23_containers/unordered_multiset/cons/noexcept_move_construct.cc: Likewise. * testsuite/23_containers/unordered_set/cons/noexcept_move_construct.cc: Likewise.
2021-02-09  libstdc++: Add unordered containers heterogeneous lookup  (François Dumont, 1 file, -0/+201)
Add unordered containers heterogeneous lookup member functions find, count, contains and equal_range in C++20. Those members are considered for overload resolution only if hash and equal functors used to instantiate the container have a nested is_transparent type. libstdc++-v3/ChangeLog: * include/bits/stl_tree.h (__has_is_transparent, __has_is_transparent_t): Move... * include/bits/stl_function.h: ...here. * include/bits/hashtable_policy.h (_Hash_code_base<>::_M_hash_code_tr): New.. (_Hashtable_base<>::_M_equals_tr): New. * include/bits/hashtable.h (_Hashtable<>::_M_find_tr, _Hashtable<>::_M_count_tr, _Hashtable<>::_M_equal_range_tr): New member function templates to perform heterogeneous lookup. (_Hashtable<>::_M_find_before_node_tr): New. (_Hashtable<>::_M_find_node_tr): New. * include/bits/unordered_map.h (unordered_map::find<>, unordered_map::count<>, unordered_map::contains<>, unordered_map::equal_range<>): New member function templates to perform heterogeneous lookup. (unordered_multimap::find<>, unordered_multimap::count<>, unordered_multimap::contains<>, unordered_multimap::equal_range<>): Likewise. * include/bits/unordered_set.h (unordered_set::find<>, unordered_set::count<>, unordered_set::contains<>, unordered_set::equal_range<>): Likewise. (unordered_multiset::find<>, unordered_multiset::count<>, unordered_multiset::contains<>, unordered_multiset::equal_range<>): Likewise. * include/debug/unordered_map (unordered_map::find<>, unordered_map::equal_range<>): Likewise. (unordered_multimap::find<>, unordered_multimap::equal_range<>): Likewise. * include/debug/unordered_set (unordered_set::find<>, unordered_set::equal_range<>): Likewise. (unordered_multiset::find<>, unordered_multiset::equal_range<>): Likewise. * testsuite/23_containers/unordered_map/operations/1.cc: New test. * testsuite/23_containers/unordered_multimap/operations/1.cc: New test. * testsuite/23_containers/unordered_multiset/operations/1.cc: New test. * testsuite/23_containers/unordered_set/operations/1.cc: New test.
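A usage sketch of the C++20 feature described above (sv_hash is an illustrative name): because both the hasher and key_equal expose is_transparent, find() can take a std::string_view without materialising a temporary std::string.

    #include <functional>
    #include <string>
    #include <string_view>
    #include <unordered_set>

    struct sv_hash
    {
        using is_transparent = void;
        std::size_t operator()(std::string_view s) const noexcept
        { return std::hash<std::string_view>{}(s); }
    };

    int main()
    {
        std::unordered_set<std::string, sv_hash, std::equal_to<>> s{"hashtable"};
        auto it = s.find(std::string_view("hashtable"));   // no temporary std::string
        return it != s.end() ? 0 : 1;
    }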
2021-01-04  Update copyright years.  (Jakub Jelinek, 1 file, -1/+1)
2020-10-20  libstdc++: Refactor _Hashtable to prepare for custom pointers  (François Dumont, 1 file, -117/+134)
Limit usage of node pointers in _Hashtable implementation details so that when we move to allocator custom pointer type we do not need to add an _Alloc template parameter everywhere which would impact abi. This is done by reviewing node type definition. It is now based on new basic types which are not pointer dependant. The _Hashtable helper types are now using those new node base types and do not receive node pointers. libstdc++-v3/ChangeLog * include/bits/hashtable_policy.h (_Hash_node_value_base<>): Remove _Hash_node_base inheritance. (_Hash_node_code_cache<bool _Cache_hash_code>): New. (_Hash_node_value<typename _Value, bool _Cache_hash_code>): New. (_Hash_node<>): Inherits _Hash_node_base<> and _Hash_node_value<>. (_Map_base<>::__node_type): Remove. (_Map_base<>::iterator): Remove. (_Insert_base<>::__hash_cached): New. (_Insert_base<>::__constant_iterators): New. (_Insert_base<>::__hashtable_alloc): New. (_Insert_base<>::__node_type): Remove. (_Insert_base<>::__node_ptr): New. (_Hash_code_base<>): Remove specializations. (_Hash_code_base<>::__node_type): Remove. (_Hash_code_base<>::_M_bucket_index(const __node_type*, size_t)): Replace by... (_Hash_code_base<>::_M_bucket_index(const _Hash_node_value<>&, size_t)): ...this. (_Hash_code_base<>::_M_store_code(__node_type*, __hash_code)): Replace by... (_Hash_code_base<>::_M_store_code(_Hash_node_code_cache<>&, __hash_code)): ...this. (_Hash_code_base<>::_M_copy_code(__node_type*, const __node_type*)): Replace by... (_Hash_code_base<>::_M_copy_code(_Hash_node_code_cache<>&, const _Hash_node_code_base<>&)): ...this. (_Hashtable_base<>::__constant_iterators): Remove. (_Hashtable_base<>::__unique_keys): Remove. (_Hashtable_base<>::__node_type): Remove. (_Hashtable_base<>::iterator): Remove. (_Hashtable_base<>::const_iterator): Remove. (_Hashtable_base<>::local_iterator): Remove. (_Hashtable_base<>::const_local_iterator): Remove. (_Hashtable_base<>::__ireturn_type): Remove. (_Hashtable_base<>::_Equal_hash_code<>::_S_equals): Replace by... (_Hashtable_base<>::_S_equals(__hash_code, const _Hash_node_code_hash<>&)): ...this. (_Hashtable_base<>::_Equal_hash_code<>::_S_node_equals): Replace by... (_Hashtable_base<>::_S_node_equals(__hash_code, const _Hash_node_code_hash<>&)): ...this. (_Hashtable_base<>::_Equal_hash_code<>): Remove. (_Hashtable_base<>::_M_equals): Adapt. (_Hashtable_baxe<>::_M_node_equals): Adapt. (_Equality<>::_M_equal): Adapt. (_Hashtable_alloc<>::__node_ptr): New. (_Hashtable_alloc<>::__bucket_type): Rename into... (_Hashtable_alloc<>::__node_base_ptr): ...this. (_Hashtable_alloc<>::__bucket_alloc_type): Rename into... (_Hashtable_alloc<>::__buckets_alloc_type): ...this. (_Hashtable_alloc<>::__bucket_alloc_traits): Rename into... (_Hashtable_alloc<>::__buckets_alloc_traits): ...this. (_Hashtable_alloc<>::__buckets_ptr): New. (_Hashtable_alloc<>::_M_allocate_node): Adapt. (_Hashtable_alloc<>::_M_deallocate_node): Adapt. (_Hashtable_alloc<>::_M_deallocate_node_ptr): Adapt. (_Hashtable_alloc<>::_M_deallocate_nodes): Adapt. (_Hashtable_alloc<>::_M_allocate_buckets): Adapt. (_Hashtable_alloc<>::_M_deallocate_buckets): Adapt. * include/bits/hashtable.h (_Hashtable<>): Adapt.
2020-08-26  libstdc++: Rename _Hashtable _H1, _H2 and _Hash template parameters  (François Dumont, 1 file, -362/+330)
Limit our _Hashtable implementation to ranged hash. _H1 is now rename to _Hash matching the _Hash functor used for unordered containers. _H2 is now renamed to _RangeHash. Former _Hash simply becomes _Unused. Remove _ExtractKey storage. libstdc++-v3/ChangeLog: * include/bits/hashtable_policy.h (_Hashtable<>): Rename _H1 into _Hash _H2 into _RangeHash and _Hash into _Unused. (_Hastable_base<>): Likewise. (_Map_base<>): Likewise. (_Insert_base<>): Likewise. (_Insert<>): Likewise. (_Rehash_base<>): Likewise. (_Local_iterator_base<>): Likewise. (_Hash_code_base<>): Likewise. (_Hash_code_base<_Key, _Value, _ExtractKey, _H1, _H2, _Hash, false>): Remove. (_Hash_code_base<_Key, _Value, _ExtractKey, _H1, _H2, _Hash, true>): Remove. (_Hash_code_base<_Key, _Value, _ExtractKey, _Hash, _RangeHas, _Unused, bool>): Remove _Hashtable_ebo_helper<2, _RangeHash> base type.. (_Hash_code_base<>::_M_bucket_index(const _Key&, __hash_code, size_t)): Replace by... (_Hash_code_base<>::_M_bucket_index(__hash_code, size_t)): ...this. (_Local_iterator<>): Remove _H1 and _H2 template parameters. (_Local_const_iterator<>): Likewise. (_Equality<>): Likewise. (_Map_base<>::operator[](const key_type&): Adapt. (_Map_base<>::operator[](key_type&&): Adapt. (_Identity::operator()): Add noexcept. (_Select1st::operator()): Likewise. (_Hash_code_base<>): Remove _Hashtable_ebo_helper<0, _ExtractKey> base type. (_Hash_code_base::_M_extract): Remove. * include/bits/hashtable.h (_Hashtable<>): Remove _H1 and _H2 template parameters. Remove _ExtractKey from constructors. (_Hashtable<>::_M_insert_unique_node(const key_type&, size_t, __hash_code, __node_type*, size_t)): Replace by... (_Hashtable<>::_M_insert_unique_node(size_t, __hash_code, __node_type*, size_t)): ...this. (_Hashtable<>::_M_insert_muti_node(__node_type*, const key_type&, __hash_code, __node_type*)): Replace by... (_Hashtable<>::_M_insert_multi_node(__node_type*, __hash_code, __node_type*)): ...this. (_Hashtable<>::__key_extract): Remove. * include/bits/node_handle.h: Adapt.
2020-08-12libstdc++: Make self-move well-defined for containers [PR 85828]Jonathan Wakely1-0/+3
The C++ LWG recently confirmed that self-move assignment should not have undefined behaviour for standard containers (see the proposed resolution of LWG 2839). The result should be a valid but unspecified value, just like other times when a container is moved from. Our std::list, std::__cxx11::basic_string and unordered containers all have bugs which result in undefined behaviour. For std::list the problem is that we clear the previous contents using _M_clear() instead of clear(). This means the _M_next, _M_prev and _M_size members are not zeroed, and so after we "update" them (with their existing values), we are left with dangling pointers and a non-zero size, but no elements. For the unordered containers the problem is similar. _Hashtable first deallocates the existing contents, then takes ownership of the pointers from the RHS object (which has just had its contents deallocated so the pointers are dangling). For std::basic_string it's a little more subtle. When the string is local (i.e. fits in the SSO buffer) we use char_traits::copy to copy the contents from this->data() to __rhs.data(). When &__rhs == this that copy violates the precondition that the ranges don't overlap. We only need to check for self-move for this case where it's local, because the only other case that can be true for self-move is that it's non-local but the allocators compare equal. In that case the data pointer is neither deallocated nor leaked, so the result is well-defined. This patch also makes a small optimization for std::deque move assignment, to use the efficient move when is_always_equal is false, but the allocators compare equal at runtime. Finally, we need to remove all the Debug Mode checks which abort the program when a self-move is detected, because it's not undefined to do that. Before PR 85828 can be closed we should also look into fixing std::shuffle so it doesn't do any redundant self-swaps. libstdc++-v3/ChangeLog: PR libstdc++/85828 * include/bits/basic_string.h (operator=(basic_string&&)): Check for self-move before copying with char_traits::copy. * include/bits/hashtable.h (operator=(_Hashtable&&)): Check for self-move. * include/bits/stl_deque.h (_M_move_assign1(deque&&, false_type)): Check for equal allocators. * include/bits/stl_list.h (_M_move_assign(list&&, true_type)): Call clear() instead of _M_clear(). * include/debug/formatter.h (__msg_self_move_assign): Change comment. * include/debug/macros.h (__glibcxx_check_self_move_assign): (_GLIBCXX_DEBUG_VERIFY): Remove. * include/debug/safe_container.h (operator=(_Safe_container&&)): Remove assertion check for safe move and make it well-defined. * include/debug/safe_iterator.h (operator=(_Safe_iterator&&)): Remove assertion check for self-move. * include/debug/safe_local_iterator.h (operator=(_Safe_local_iterator&&)): Likewise. * testsuite/21_strings/basic_string/cons/char/self_move.cc: New test. * testsuite/23_containers/deque/cons/self_move.cc: New test. * testsuite/23_containers/forward_list/cons/self_move.cc: New test. * testsuite/23_containers/list/cons/self_move.cc: New test. * testsuite/23_containers/set/cons/self_move.cc: New test. * testsuite/23_containers/unordered_set/cons/self_move.cc: New test. * testsuite/23_containers/vector/cons/self_move.cc: New test.
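The guarantee described above can be exercised with a small snippet (a minimal illustration, not one of the new tests):

    #include <unordered_set>
    #include <utility>
    #include <cassert>

    int main()
    {
        std::unordered_set<int> s{1, 2, 3};
        s = std::move(s);        // self-move: valid but unspecified value, no UB
        s.clear();               // the object is still usable afterwards
        s.insert(42);
        assert(s.count(42) == 1);
    }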
2020-07-29libstdc++: Fix unordered containers move constructors noexcept qualificationFrançois Dumont1-7/+33
The _Hashtable move constructor is wrongly qualified as noexcept(true) regardless of the _Equal and _H1 copy constructor qualifications. The _Hashtable allocator-aware move constructor is missing its noexcept qualification, as are those of the unordered containers that depend on it. libstdc++-v3/ChangeLog: * include/bits/hashtable.h (_Hashtable(_Hashtable&& __ht, __node_alloc_type&& __a, true_type)): Add noexcept qualification. (_Hashtable(_Hashtable&&)): Fix noexcept qualification. (_Hashtable(_Hashtable&&, const allocator_type&)): Add noexcept qualification. * include/bits/unordered_map.h (unordered_map(unordered_map&&, const allocator_type&)): Add noexcept qualification. (unordered_multimap(unordered_multimap&&, const allocator_type&)): Likewise. * include/bits/unordered_set.h (unordered_set(unordered_set&&, const allocator_type&)): Likewise. (unordered_multiset(unordered_multiset&&, const allocator_type&)): Likewise. * include/debug/unordered_map (unordered_map(unordered_map&&, const allocator_type&)): Likewise. (unordered_multimap(unordered_multimap&&, const allocator_type&)): Likewise. * include/debug/unordered_set (unordered_set(unordered_set&&, const allocator_type&)): Likewise. (unordered_multiset(unordered_multiset&&, const allocator_type&)): Likewise. * testsuite/23_containers/unordered_map/allocator/default_init.cc: New test. * testsuite/23_containers/unordered_map/cons/noexcept_default_construct.cc: New test. * testsuite/23_containers/unordered_map/cons/noexcept_move_construct.cc: New test. * testsuite/23_containers/unordered_map/modifiers/move_assign.cc: New test. * testsuite/23_containers/unordered_multimap/cons/noexcept_default_construct.cc: New test. * testsuite/23_containers/unordered_multimap/cons/noexcept_move_construct.cc: New test. * testsuite/23_containers/unordered_multiset/cons/noexcept_default_construct.cc: New test. * testsuite/23_containers/unordered_multiset/cons/noexcept_move_construct.cc: New test. * testsuite/23_containers/unordered_set/allocator/default_init.cc: New test. * testsuite/23_containers/unordered_set/cons/noexcept_default_construct.cc: New test. * testsuite/23_containers/unordered_set/cons/noexcept_move_construct.cc: New test.
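The intended effect can be observed with a type trait query; the expected values below follow from the qualification described above (this is an illustration, not one of the new tests):

    #include <unordered_set>
    #include <type_traits>
    #include <cstddef>
    #include <cstdio>

    struct ThrowingCopyHash
    {
        ThrowingCopyHash() = default;
        ThrowingCopyHash(const ThrowingCopyHash&) { }   // potentially throwing copy
        std::size_t operator()(int x) const noexcept
        { return static_cast<std::size_t>(x); }
    };

    int main()
    {
        // Expected: 1 for the default hash, 0 once the hash functor's copy
        // constructor may throw, per the fixed noexcept qualification.
        std::printf("%d %d\n",
            int(std::is_nothrow_move_constructible<std::unordered_set<int>>::value),
            int(std::is_nothrow_move_constructible<
                    std::unordered_set<int, ThrowingCopyHash>>::value));
    }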
2020-07-28libstdc++: Do not over-size hashtable buckets on range insertionFrançois Dumont1-22/+68
We used to consider the range size on insertion, but for unique-key containers not all range values might actually be inserted, which resulted in over-sizing. In this case we now only consider the user's reservation; if there is none, the container adapts to the elements actually inserted. libstdc++-v3/ChangeLog: * include/bits/hashtable.h (_Hashtable<>(_InputIterator, _InputIterator, size_t, const _H1&, const _H2&, const _Hash&, const _Equal&, const _ExtractKey&, const allocator_type&, true_type)): New. (_Hashtable<>(_InputIterator, _InputIterator, size_t, const _H1&, const _H2&, const _Hash&, const _Equal&, const _ExtractKey&, const allocator_type&, false_type)): New. (_Hashtable<>(_InputIterator, _InputIterator, size_t, const _H1&, const _H2&, const _Hash&, const _Equal&, const _ExtractKey&, const allocator_type&)): Delegate to the two new constructors above. (operator=(initializer_list<value_type>)): Rehash if too small. (_M_insert(_Arg&&, const _NodeGenerator&, true_type)): Remove size_t len parameter. * include/bits/hashtable_policy.h (_Insert_base<>::_M_insert_range): Do not try to get input range distance. * testsuite/23_containers/unordered_set/cons/bucket_hint.cc: New test. * testsuite/23_containers/unordered_set/modifiers/insert.cc: New test.
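As an illustration of the sizing change (not one of the new tests): inserting a duplicate-heavy range into a unique-key container should size the table for the elements actually kept, not for the whole input range.

    #include <unordered_set>
    #include <vector>
    #include <iostream>
    #include <cstddef>

    int main()
    {
        std::vector<int> range(1000);
        for (std::size_t i = 0; i < range.size(); ++i)
            range[i] = int(i % 3);               // only 3 distinct keys

        std::unordered_set<int> s(range.begin(), range.end());
        // Previously the constructor reserved buckets for 1000 elements;
        // now the bucket count only needs to cover the 3 stored elements.
        std::cout << s.size() << ' ' << s.bucket_count() << '\n';
    }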
2020-07-27libstdc++: Review _Hashtable count, equal_range _M_erase(false_type,) codeFrançois Dumont1-82/+81
Simplify the operator[] implementation using the find method. Review several _Hashtable method implementations to limit computation of the bucket index. Introduce _M_update_bbegin to simplify the code. libstdc++-v3/ChangeLog: * include/bits/hashtable_policy.h (_Map_base<>::at): Use _Hashtable<>::find. (_Hashtable_base<>::_Equal_hash_code<>::_S_node_equals): New. (_Hashtable_base<>::_M_node_equals): New, use latter. (_Equality<_Key, _Value, _Alloc, _ExtractKey, _Equal, _H1, _H2, _Hash, _RehashPolicy, false>::_M_equal): Adapt to use latter. * include/bits/hashtable.h (_Hashtable<>::_M_update_bbegin): New. (_Hashtable<>::_M_assign): Use latter. (_Hashtable<>::_M_move_assign): Likewise. (_Hashtable<>(_Hashtable<>&&)): Likewise. (_Hashtable<>(_Hashtable<>&&, const allocator_type&)): Likewise. (_Hashtable<>::swap): Likewise. (_Hashtable<>::find): Build iterator directly from _M_find_node result. (_Hashtable<>::count): Use _Hashtable<>::find. (_Hashtable<>::equal_range): Likewise. (_Hashtable<>::_M_erase(false_type, const key_type&)): Use _M_node_equals.
2020-02-12libstdc++: Add missing std:: qualification of a forward callFrançois Dumont1-1/+1
* include/bits/hashtable.h (_Hashtable<>(_Hashtable&&, std::allocator_type&)): Add missing std namespace qualification to forward call.
2020-01-16libstdc++: Improve unordered containers == operator (PR 91263)François Dumont1-0/+7
Avoid comparing elements with operator== multiple times by replacing uses of find and equal_range with equivalent inlined code that uses operator== instead of the container's equality comparison predicate. This is valid because the standard requires that operator== is a refinement of the equality predicate. Also replace the _S_is_permutation function with std::is_permutation, which wasn't yet implemented when this code was first written. PR libstdc++/91263 * include/bits/hashtable.h (_Hashtable<>): Make _Equality<> friend. * include/bits/hashtable_policy.h: Include <bits/stl_algo.h>. (_Equality_base): Remove. (_Equality<>::_M_equal): Review implementation. Use std::is_permutation. * testsuite/23_containers/unordered_multiset/operators/1.cc (Hash, Equal, test02, test03): New. * testsuite/23_containers/unordered_set/operators/1.cc (Hash, Equal, test02, test03): New.
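A short reminder of the semantics being optimized here: unordered container equality compares the two containers as (multi)sets of values using operator==, regardless of iteration order.

    #include <unordered_set>
    #include <cassert>

    int main()
    {
        std::unordered_multiset<int> a{1, 1, 2, 3};
        std::unordered_multiset<int> b{3, 2, 1, 1};
        assert(a == b);          // same values and multiplicities, any order
        b.insert(1);
        assert(!(a == b));       // multiplicities now differ
    }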
2020-01-09PR libstdc++/92124 fix incorrect unordered container move assignmentFrançois Dumont1-36/+37
* include/bits/hashtable.h (_Hashtable<>::__alloc_node_gen_t): New template alias. (_Hashtable<>::__fwd_value_for): New. (_Hashtable<>::_M_assign_elements<>): Remove _NodeGenerator template parameter. (_Hashtable<>::_M_assign<>): Add _Ht template parameter. (_Hashtable<>::operator=(const _Hashtable<>&)): Adapt. (_Hashtable<>::_M_move_assign): Adapt. Replace std::move_if_noexcept with std::move. (_Hashtable<>::_Hashtable(const _Hashtable&)): Adapt. (_Hashtable<>::_Hashtable(const _Hashtable&, const allocator_type&)): Adapt. (_Hashtable<>::_Hashtable(_Hashtable&&, const allocator_type&)): Adapt. * testsuite/23_containers/unordered_set/92124.cc: New. From-SVN: r280028
2020-01-01Update copyright years.Jakub Jelinek1-1/+1
From-SVN: r279813
2019-06-17Simplify node ownership in _Hashtable membersFrançois Dumont1-145/+138
Introduce an RAII type to manage nodes in unordered containers while they are being inserted. If the caller always owns a node until it is inserted, then the insertion functions don't need to deallocate on failure. This allows a FIXME in the node re-insertion API to be removed. Also change extract(const key_type&) to not call extract(const_iterator) anymore. This avoids looping through the bucket nodes again to find the node before the one being extracted. 2019-06-17 François Dumont <fdumont@gcc.gnu.org> Jonathan Wakely <jwakely@redhat.com> * include/bits/hashtable.h (struct _Hashtable::_Scoped_node): New type. (_Hashtable::_M_insert_unique_node): Add key_type parameter. Don't deallocate node if insertion fails. (_Hashtable::_M_insert_multi_node): Likewise. (_Hashtable::_M_reinsert_node): Pass additional key argument. (_Hashtable::_M_reinsert_node_multi): Likewise. Remove FIXME. (_Hashtable::_M_extract_node(size_t, __node_base*)): New function. (_Hashtable::extract(const_iterator)): Use _M_extract_node. (_Hashtable::extract(const _Key&)): Likewise. (_Hashtable::_M_merge_unique): Pass additional key argument. (_Hashtable::_M_emplace<Args>(true_type, Args&&...)): Likewise. Use _Scoped_node. (_Hashtable::_M_insert): Likewise. * include/bits/hashtable_policy.h (_Map_base::operator[]): Likewise. (_Hashtable_alloc): Add comments to functions with misleading names. Co-Authored-By: Jonathan Wakely <jwakely@redhat.com> From-SVN: r272381
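A minimal sketch of the RAII idea (hypothetical names, simplified from the real _Scoped_node, and ignoring value destruction): the guard keeps ownership of a freshly allocated node and only gives it up once the node has been linked into the table, so every failure path deallocates automatically.

    #include <memory>
    #include <utility>

    template<typename Node, typename Alloc>
    struct scoped_node
    {
        Alloc& alloc;           // allocator used to reclaim the node on failure
        Node*  node;            // owned until release() is called

        scoped_node(Alloc& a, Node* n) : alloc(a), node(n) { }

        scoped_node(const scoped_node&) = delete;
        scoped_node& operator=(const scoped_node&) = delete;

        ~scoped_node()
        {
            if (node)
                std::allocator_traits<Alloc>::deallocate(alloc, node, 1);
        }

        // Called once the node has been successfully inserted into the container.
        Node* release() { return std::exchange(node, nullptr); }
    };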
2019-06-03Rename variables and cleanup comments.François Dumont1-108/+112
2019-06-03 François Dumont <fdumont@gcc.gnu.org> Rename variables and cleanup comments. * include/bits/hashtable_policy.h * include/bits/hashtable.h From-SVN: r271876
2019-06-03Enforce allocator::value_type consistency for containers in C++2aJonathan Wakely1-1/+1
In previous standards it is undefined for a container and its allocator to have a different value_type. Libstdc++ has traditionally allowed it as an extension, automatically rebinding the allocator to the container's value_type. Since GCC 8.1 that extension has been disabled for C++11 and later when __STRICT_ANSI__ is defined (i.e. for -std=c++11, -std=c++14, -std=c++17 and -std=c++2a). Since the acceptance of P1463R1 into the C++2a draft an incorrect allocator::value_type now requires a diagnostic. This patch implements that by enabling the static_assert for -std=gnu++2a as well. * doc/xml/manual/status_cxx2020.xml: Document P1463R1 status. * include/bits/forward_list.h [__cplusplus > 201703]: Enable allocator::value_type assertion for C++2a. * include/bits/hashtable.h: Likewise. * include/bits/stl_deque.h: Likewise. * include/bits/stl_list.h: Likewise. * include/bits/stl_map.h: Likewise. * include/bits/stl_multimap.h: Likewise. * include/bits/stl_multiset.h: Likewise. * include/bits/stl_set.h: Likewise. * include/bits/stl_vector.h: Likewise. * testsuite/23_containers/deque/48101-3_neg.cc: New test. * testsuite/23_containers/forward_list/48101-3_neg.cc: New test. * testsuite/23_containers/list/48101-3_neg.cc: New test. * testsuite/23_containers/map/48101-3_neg.cc: New test. * testsuite/23_containers/multimap/48101-3_neg.cc: New test. * testsuite/23_containers/multiset/48101-3_neg.cc: New test. * testsuite/23_containers/set/48101-3_neg.cc: New test. * testsuite/23_containers/unordered_map/48101-3_neg.cc: New test. * testsuite/23_containers/unordered_multimap/48101-3_neg.cc: New test. * testsuite/23_containers/unordered_multiset/48101-3_neg.cc: New test. * testsuite/23_containers/unordered_set/48101-3_neg.cc: New test. * testsuite/23_containers/vector/48101-3_neg.cc: New test. From-SVN: r271866
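A hypothetical illustration of the diagnostic (the exact assertion text may differ): with -std=c++20, or -std=gnu++2a at the time, instantiating a container whose allocator has a mismatched value_type is rejected.

    #include <unordered_set>
    #include <functional>
    #include <memory>

    // Forming the alias alone is fine; instantiating it is what triggers the
    // static_assert about allocator value_type consistency.
    using Mismatched = std::unordered_set<int, std::hash<int>, std::equal_to<int>,
                                          std::allocator<long>>;

    // Mismatched s;   // expected to fail to compile in strict/C++2a modes:
    //                 // the allocator's value_type (long) is not int

    int main() { }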
2019-05-17PR libstdc++/85965 move is_invocable assertions againJonathan Wakely1-6/+0
This is another attempt to reduce how often the assertions are evaluated, so that code which doesn't try to use the function objects doesn't need them to be invocable. For _Rb_tree we access the _M_key_compare object directly, so can't put the assertions in an accessor function for it. However, every invocation of _M_key_compare is accompanied by a use of _S_key, so the assertions can be put in there. For _Hashtable there are member functions that are consistently used to obtain a hash code or test for equality, so the assertions can go in those members. PR libstdc++/85965 * include/bits/hashtable.h (_Hashtable::~_Hashtable()): Remove static assertions from the destructor. * include/bits/hashtable_policy.h (_Hash_code_base::_M_hash_code): Move static_assert for hash function to here. (_Hash_table_base::_M_equals): Move static_assert for equality predicate to here. * include/bits/stl_tree.h (_Rb_tree::_S_value(_Const_Link_type)): Remove. (_Rb_tree::_S_key(_Const_Link_type)): Move assertions here. Access the value directly instead of calling _S_value. (_Rb_tree::_S_value(_Const_Base_ptr)): Remove. (_Rb_tree::_S_key(_Const_Base_ptr)): Do downcast and forward to _S_key(_Const_Link_type). * testsuite/23_containers/set/85965.cc: Check construction, destruction, assignment and size() do not trigger the assertions. * testsuite/23_containers/unordered_set/85965.cc: Likewise. * testsuite/23_containers/map/48101_neg.cc: Call find and adjust expected errors. * testsuite/23_containers/multimap/48101_neg.cc: Likewise. * testsuite/23_containers/multiset/48101_neg.cc: Likewise. * testsuite/23_containers/set/48101_neg.cc: Likewise. * testsuite/23_containers/unordered_map/48101_neg.cc: Likewise. * testsuite/23_containers/unordered_multimap/48101_neg.cc: Likewise. * testsuite/23_containers/unordered_multiset/48101_neg.cc: Likewise. * testsuite/23_containers/unordered_set/48101_neg.cc: Likewise. From-SVN: r271323
2019-05-04hashtable.h (_Hashtable<>::rehash): Review comment.François Dumont1-1/+2
2019-05-04 François Dumont <fdumont@gcc.gnu.org> * include/bits/hashtable.h (_Hashtable<>::rehash): Review comment. * include/bits/hashtable_policy.h (_Prime_rehash_policy::_M_bkt_for_elements): Use __builtin_ceill. (_Power2_rehash_policy::_M_bkt_for_elements): Likewise. (_Power2_rehash_policy::_M_next_bkt): Enforce returning a result not smaller than input value rather than always greater. Preserve _M_next_resize if called with 0 input. Use __builtin_floorl. (_Power2_rehash_policy::_M_need_rehash): Rehash only if number of elements + number of insertions is greater than _M_next_resize. Start with 11 buckets if not told otherwise. Use __builtin_floorl. (_Rehash_base<>::reserve): Use rehash policy _M_bkt_for_elements. * src/c++11/hashtable_c++0x.cc (_Prime_rehash_policy::_M_next_bkt): Preserve _M_next_resize if called with 0 input. Use __builtin_floorl. (_Prime_rehash_policy::_M_need_rehash): Start with 11 buckets if not told otherwise. Use __builtin_floorl. * testsuite/23_containers/unordered_set/hash_policy/71181.cc: Adapt test to also validate _Power2_rehash_policy. * testsuite/23_containers/unordered_set/hash_policy/power2_rehash.cc: Adapt. From-SVN: r270868
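The user-visible side of the reserve change can be sketched like this (bucket counts are implementation details, so the snippet only observes whether inserting the reserved number of elements forces another rehash):

    #include <unordered_set>
    #include <iostream>

    int main()
    {
        std::unordered_set<int> s;
        s.reserve(100);                       // ask the rehash policy for room
        const auto buckets_after_reserve = s.bucket_count();

        for (int i = 0; i < 100; ++i)
            s.insert(i);

        // With reserve() based on _M_bkt_for_elements, no further growth is
        // expected for exactly 100 insertions.
        std::cout << buckets_after_reserve << ' ' << s.bucket_count() << '\n';
    }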
2019-03-26PR libstdc++/85965 delay static assertions until types are completeJonathan Wakely1-5/+6
The static assertions added for PR libstdc++/48101 were at class scope and so were evaluated too eagerly, when it might not be possible to determine whether the function objects are invocable with the key types. The problematic cases are where the key type is not known to be convertible to the argument type(s) of the function object until later, after a type has been completed. Specifically, if the key type is a pointer to a derived class and the function object's argument type is a pointer to a base class, then the derived-to-base conversion is only valid once the derived type is complete. By moving the static assertions to the destructor they will only be evaluated when the destructor is instantiated, at which point whether the key type can be passed to the function object should be knowable. The ideal place to do the checks would be only when the function objects are actually invoked, but that would mean adding the checks in numerous places, so the destructor is used instead. The tests need to be adjusted because the "required from here" line is now the location of the destructor, not the point of instantiation in the test file. For the map and multimap tests which check two specializations, the dg-error matching the assertion text matches both cases. Also check the diagnostic output for the template arguments, to ensure both specializations trigger the assertion. PR libstdc++/85965 * include/bits/hashtable.h (_Hashtable): Move static assertions to destructor so they are not evaluated until the _Key type is complete. * include/bits/stl_tree.h (_Rb_tree): Likewise. * testsuite/23_containers/set/85965.cc: New test. * testsuite/23_containers/unordered_set/85965.cc: New test. * testsuite/23_containers/map/48101_neg.cc: Replace "here" errors with regexp matching the corresponding _Rb_tree specialization. * testsuite/23_containers/multimap/48101_neg.cc: Likewise. * testsuite/23_containers/multiset/48101_neg.cc: Remove "here" error. * testsuite/23_containers/set/48101_neg.cc: Likewise. * testsuite/23_containers/unordered_map/48101_neg.cc: Likewise. * testsuite/23_containers/unordered_multimap/48101_neg.cc: Likewise. * testsuite/23_containers/unordered_multiset/48101_neg.cc: Likewise. * testsuite/23_containers/unordered_set/48101_neg.cc: Likewise. From-SVN: r269949
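A sketch of the kind of code that motivated the move (simplified, hypothetical types): the key is a pointer to a class that is still incomplete while the container type is instantiated, and the function objects take pointers to its base.

    #include <unordered_set>
    #include <functional>
    #include <cstddef>

    struct Base { };

    template<typename Derived>
    struct Registry
    {
        struct Hash
        {
            std::size_t operator()(const Base* p) const
            { return std::hash<const Base*>()(p); }
        };
        struct Eq
        {
            bool operator()(const Base* a, const Base* b) const { return a == b; }
        };
        // Derived is incomplete when Registry<Derived> is used as a base below,
        // so "is Hash invocable with Derived*?" cannot be answered yet; the
        // class-scope assertions fired here, the delayed ones do not.
        std::unordered_set<Derived*, Hash, Eq> items;
    };

    struct Widget : Base, Registry<Widget> { };

    int main()
    {
        Widget w;
        w.items.insert(&w);   // by now Widget is complete, so this is fine
    }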
2019-02-26PR libstdc++/89477 constrain deduction guides for maps and setsJonathan Wakely1-0/+7
The Compare, Hash, and Pred template parameters should be constrained in the C++17 deduction guides for associative and unordered containers. The deduction guides for stack, queue and priority_queue are already constrained, but this patch makes them use the _RequireNotAllocator helper instead of reproducing the logic each time. PR libstdc++/89477 * include/bits/alloc_traits.h (_RequireNotAllocator): New helper for container deduction guides. * include/bits/hashtable.h (_RequireNotAllocatorOrIntegral): Likewise. * include/bits/stl_map.h (map): Use _RequireNotAllocator to constrain parameters in deduction guides. * include/bits/stl_multimap.h (multimap): Likewise. * include/bits/stl_multiset.h (multiset): Likewise. * include/bits/stl_queue.h (queue, priority_queue): Likewise. * include/bits/stl_set.h (set): Likewise. * include/bits/stl_stack.h (stack): Likewise. * include/bits/unordered_map.h (unordered_map, unordered_multimap): use _RequireNotAllocator and _RequireNotAllocatorOrIntegral to constrain parameters in deduction guides. * include/bits/unordered_set.h (unordered_set, unordered_multiset): Likewise. * testsuite/23_containers/map/cons/deduction.cc: Test additional deduction cases. * testsuite/23_containers/multiset/cons/deduction.cc: Likewise. * testsuite/23_containers/set/cons/deduction.cc: Likewise. * testsuite/23_containers/unordered_map/cons/deduction.cc: Likewise. * testsuite/23_containers/unordered_multimap/cons/deduction.cc: Likewise. * testsuite/23_containers/unordered_multiset/cons/deduction.cc: Likewise. * testsuite/23_containers/unordered_set/cons/deduction.cc: Likewise. From-SVN: r269234
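A small example of the deductions being constrained (C++17): the allocator passed after the bucket count must not be deduced as a hash functor, which is what the _RequireNotAllocator-style constraints ensure.

    #include <unordered_set>
    #include <vector>
    #include <memory>
    #include <type_traits>

    int main()
    {
        std::vector<int> v{3, 1, 2, 2};

        // Deduces std::unordered_set<int> via the
        // (iterator, iterator, bucket_count, allocator) guide.
        std::unordered_set s(v.begin(), v.end(), 8, std::allocator<int>());

        static_assert(std::is_same_v<decltype(s), std::unordered_set<int>>);
        return s.size() == 3 ? 0 : 1;
    }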
2019-01-21Fix after P0600.Ulrich Drepper1-1/+1
gcc/testsuite/ChangeLog 2019-02-20 Ulrich Drepper <drepper@redhat.com> Fix after P0600. * g++.dg/init/new39.C: Don't just ignore result of new. libstdc++/ChangeLog 2019-02-20 Ulrich Drepper <drepper@redhat.com> Implement C++20 P0600r1. * include/backward/hash_map: Add nodiscard attribute to empty. * include/backward/hash_set: Likewise. * backward/hashtable.h: Likewise. * include/bits/basic_string.h: Likewise. * include/bits/forward_list.h: Likewise. * include/bits/hashtable.h: Likewise. * include/bits/regex.h: Likewise. * include/bits/stl_deque.h: Likewise. * include/bits/stl_list.h: Likewise. * include/bits/stl_map.h: Likewise. * include/bits/stl_multimap.h: Likewise. * include/bits/stl_multiset.h: Likewise. * include/bits/stl_queue.h: Likewise. * include/bits/stl_set.h: Likewise. * include/bits/stl_stack.h: Likewise. * include/bits/stl_tree.h: Likewise. * include/bits/stl_vector.h: Likewise. * include/bits/unordered_map.h: Likewise. * include/bits/unordered_set.h: Likewise. * include/debug/array: Likewise. * include/experimental/any: Likewise. * include/experimental/bits/fs_path.h: Likewise. * include/experimental/internet: Likewise. * include/experimental/string_view: Likewise. * include/ext/pb_ds/detail/bin_search_tree_/info_fn_imps.hpp: Likewise. * include/ext/pb_ds/detail/binary_heap_/binary_heap_.hpp: Likewise. * include/ext/pb_ds/detail/binary_heap_/info_fn_imps.hpp: Likewise. * include/ext/pb_ds/detail/cc_hash_table_map_/cc_ht_map_.hpp: Likewise. * include/ext/pb_ds/detail/cc_hash_table_map_/info_fn_imps.hpp: Likewise. * include/ext/pb_ds/detail/cc_hash_table_map_/size_fn_imps.hpp: Likewise. * include/ext/pb_ds/detail/gp_hash_table_map_/gp_ht_map_.hpp: Likewise. * include/ext/pb_ds/detail/gp_hash_table_map_/info_fn_imps.hpp: Likewise. * include/ext/pb_ds/detail/left_child_next_sibling_heap_/info_fn_imps.hpp: Likewise. * include/ext/pb_ds/detail/left_child_next_sibling_heap_/left_child_next_sibling_heap_.hpp: Likewise. * include/ext/pb_ds/detail/list_update_map_/info_fn_imps.hpp: Likewise. * include/ext/pb_ds/detail/list_update_map_/lu_map_.hpp: Likewise. * include/ext/pb_ds/detail/ov_tree_map_/info_fn_imps.hpp: Likewise. * include/ext/pb_ds/detail/ov_tree_map_/ov_tree_map_.hp: Likewise. * include/ext/pb_ds/detail/pat_trie_/info_fn_imps.hpp: Likewise. * include/ext/pb_ds/detail/pat_trie_/pat_trie_.hpp: Likewise. * include/ext/pb_ds/detail/rc_binomial_heap_/rc.hpp: Likewise. * include/ext/pb_ds/detail/tree_trace_base.hpp: Likewise. * include/ext/pb_ds/trie_policy.hpp: Likewise. * include/ext/rope: Likewise. * include/ext/slist: Likewise. * include/ext/vstring.h: Likewise. * include/profile/array: Likewise. * include/std/array: Likewise. * include/tr1/array: Likewise. * include/tr1/hashtable.h: Likewise. * include/tr1/regex: Likewise. * include/tr2/dynamic_bitset: Likewise. * include/bits/alloc_traits.h: Add nodiscard attribute to allocate. * include/experimental/memory_resource: Likewise. * include/ext/alloc_traits.h: Likewise. * include/ext/array_allocator.h: Likewise. * include/ext/bitmap_allocator.h: Likewise. * include/ext/debug_allocator.h: Likewise. * include/ext/extptr_allocator.h: Likewise. * include/ext/mt_allocator.h: Likewise. * include/ext/new_allocator.h: Likewise. * include/ext/pool_allocator.h: Likewise. * include/ext/throw_allocator.h: Likewise. * include/std/scoped_allocator: Likewise. * libsupc++/eh_alloc.cc: Likewise. * include/std/future: Add nodiscard attribute to async. * libsupc++/new: Add nodiscard attribute to new. From-SVN: r268111
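For the hashtable-based containers the user-visible effect is simply that discarding the result of empty() (and of an allocator's allocate()) now warns; for example:

    #include <unordered_set>

    int main()
    {
        std::unordered_set<int> s{1, 2, 3};
        // s.empty();          // would now trigger -Wunused-result ([[nodiscard]])
        if (!s.empty())
            s.clear();
    }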
2019-01-01Update copyright years.Jakub Jelinek1-1/+1
From-SVN: r267494