path: root/malloc
Age | Commit message | Author | Files/Lines
35 hours | malloc: Remove dumped heap support | Wilco Dijkstra | 2 files changed, -582/+15
Remove support for obsolete dumped heaps. Dumping heaps was discontinued 8 years ago; however, loading a dumped heap is still supported. This blocks changes and improvements to the malloc data structures, so it is time to remove it. Ancient binaries that still call malloc_set_state will now get the -1 error code. Update tst-mallocstate.c to just check for this. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2 days | malloc: Hoist common unlock out of if-else control block | Dev Jain | 1 file changed, -2/+1
We currently unlock the arena mutex in arena_get_retry() unconditionally. Therefore, hoist out the unlock from the if-else control block. Signed-off-by: Dev Jain <dev.jain@arm.com> Reviewed-by: DJ Delorie <dj@redhat.com>
11 days | malloc: Cleanup libc_realloc | Wilco Dijkstra | 1 file changed, -15/+11
Minor cleanup of libc_realloc: remove unnecessary special cases for mmap, move ar_ptr initialization, first check for oldmem == NULL. Reviewed-by: DJ Delorie <dj@redhat.com>
11 days | atomics: Remove unused atomics | Wilco Dijkstra | 2 files changed, -4/+4
Remove all unused atomics. Replace uses of catomic_increment and catomic_decrement with atomic_fetch_add_relaxed which maps to a standard compiler builtin. Relaxed memory ordering is correct for simple counters since they only need atomicity. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
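As a rough illustration of the substitution described above, using standard C11 atomics rather than glibc's internal atomic macros (the counter name is made up for the example):

    #include <stdatomic.h>

    /* A simple statistics counter: only atomicity is needed, not ordering,
       so relaxed memory order is sufficient.  */
    static atomic_size_t n_mmaps;

    static void
    count_mmap (void)
    {
      /* Stands in for replacing catomic_increment (&n_mmaps) with
         atomic_fetch_add_relaxed (&n_mmaps, 1).  */
      atomic_fetch_add_explicit (&n_mmaps, 1, memory_order_relaxed);
    }

    static void
    count_munmap (void)
    {
      atomic_fetch_sub_explicit (&n_mmaps, 1, memory_order_relaxed);
    }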
11 days | malloc: check "negative" tcache_key values by hand | Samuel Thibault | 1 file changed, -1/+2
Check them by hand instead of relying on the undefined cases that arise from casting uintptr_t into intptr_t.
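A hedged sketch of what checking a "small negative" unsigned value by hand can look like; the threshold and helper name below are placeholders, not the actual glibc code:

    #include <stdint.h>
    #include <stdbool.h>

    /* An unsigned value represents a small negative number if it lies within
       THRESHOLD of UINTPTR_MAX, i.e. it would be a small negative value when
       reinterpreted as signed.  Comparing directly avoids converting an
       out-of-range uintptr_t to intptr_t.  */
    #define SMALL_NEG_THRESHOLD ((uintptr_t) 0x100)

    static bool
    is_small_or_small_negative (uintptr_t key)
    {
      return key < SMALL_NEG_THRESHOLD
             || key > UINTPTR_MAX - SMALL_NEG_THRESHOLD;
    }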
13 days | malloc: Fix Os build on some ABIs | Adhemerval Zanella | 1 file changed, -0/+6
I have not checked with all versions for all ABIs, but I saw failures with gcc-14 on arm, alpha, hppa, i686, sparc, sh4, and microblaze. Reviewed-by: Collin Funk <collin.funk1@gmail.com>
2025-08-29 | malloc: add tst-mxfast to hugetlb exclusion list | DJ Delorie | 1 file changed, -0/+1
tst-mxfast needs GLIBC_TUNABLES to be set to its own value. Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2025-08-27 | malloc: Support hugepages in mremap_chunk | Wilco Dijkstra | 2 files changed, -5/+35
Add mremap_chunk support for mmap()ed chunks using hugepages by accounting for their alignment, to prevent the mremap call from failing in most cases where the size passed is not a multiple of the hugepage size. It also improves robustness when reallocating hugepages: since mremap is much less likely to fail, running out of memory when reallocating to a larger size and having to copy the old contents after mremap fails is also less likely. To track whether an mmap()ed chunk uses hugepages, keep a flag in the lowest bit of the mchunk_prev_size field, which is set after a call to sysmalloc_mmap and accessed later in mremap_chunk. Create macros for getting and setting this bit, and for masking the bit off when accessing the field for mmap()ed chunks. Since the alignment cannot be lower than 8 bytes, this flag cannot affect the alignment data. Add malloc/tst-tcfree4-malloc-check to the tests-exclude-malloc-check list, as malloc-check prevents the tcache from being used to store chunks. This test caused failures due to a bug in mem2chunk_check that will be fixed in a later patch. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
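A rough sketch of flag macros along these lines; the structure and names are illustrative, assuming only that the low bit of the prev-size field is unused for mmap()ed chunks thanks to the minimum 8-byte alignment:

    #include <stddef.h>

    /* Illustrative chunk header; for an mmap()ed chunk the prev_size field is
       otherwise unused, so its lowest bit can record "backed by hugepages".  */
    struct chunk_hdr
    {
      size_t prev_size;   /* low bit: hugepage flag (mmap()ed chunks only) */
      size_t size;
    };

    #define HUGEPAGE_FLAG ((size_t) 1)

    /* Set after a hugepage mapping is created (e.g. in sysmalloc_mmap).  */
    #define set_hugepage(p)       ((p)->prev_size |= HUGEPAGE_FLAG)

    /* Tested later, e.g. when deciding how to align an mremap call.  */
    #define chunk_is_hugepage(p)  (((p)->prev_size & HUGEPAGE_FLAG) != 0)

    /* Mask the flag off whenever the field is read as a real offset.  */
    #define prev_size_nomask(p)   ((p)->prev_size & ~HUGEPAGE_FLAG)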
2025-08-27 | malloc: Change mmap chunk layout | Wilco Dijkstra | 2 files changed, -28/+34
Change the mmap chunk layout to be identical to a normal chunk. This makes it safe for tcache to hold mmap chunks and simplifies size calculations in memsize and musable. Add mmap_base() and mmap_size() macros to simplify code. Reviewed-by: Cupertino Miranda <cupertino.miranda@oracle.com>
2025-08-19 | malloc: Fix tst bug in malloc/tst-free-errno-malloc-hugetlb1.c | caiyinyu | 1 file changed, -1/+1
When transparent hugepages (THP) are configured to 32MB on x86/loongarch systems, the current big_size value may not be sufficiently large to guarantee that free(ptr) [1] will call munmap(ptr_aligned, big_size). Tested on x86_64 and loongarch64. PS: Without this patch and using 32M THP, there is about a 50% chance that malloc/tst-free-errno-malloc-hugetlb1 will fail on both x86_64 and loongarch64. [1] malloc/tst-free-errno.c:

      ...
      errno = 1789;
      /* This call to free() is supposed to call
         munmap (ptr_aligned, big_size);
         which increases the number of VMAs by 1,
         which is supposed to fail.  */
   -> free (ptr);
      TEST_VERIFY (get_errno () == 1789);
    }
      ...

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2025-08-10 | malloc: Fix checking for small negative values of tcache_key | Samuel Thibault | 1 file changed, -1/+1
tcache_key is unsigned, so we should explicitly convert it to signed before taking its absolute value.
2025-08-10 | malloc: Make sure tcache_key is odd enough | Samuel Thibault | 1 file changed, -0/+16
We want tcache_key not to be a commonly-occurring value in memory, so ensure a minimum number of one bits and zero bits. We also need it to be non-zero: otherwise, even if tcache_double_free_verify sets e->key to 0 before calling __libc_free, it gets called again by __libc_free, thus looping indefinitely. Fixes: c968fe50628db74b52124d863cd828225a1d305c ("malloc: Use tailcalls in __libc_free")
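A sketch of generating a key with these properties (non-zero, with a minimum number of both one bits and zero bits); the entropy source and thresholds are placeholders rather than the actual implementation:

    #include <stdint.h>
    #include <stdlib.h>

    #define KEY_BITS ((int) (sizeof (uintptr_t) * 8))
    #define MIN_BITS (KEY_BITS / 4)

    /* Require a minimum number of set and clear bits so the key is unlikely
       to collide with common in-memory patterns such as 0, -1, small
       integers or mostly-zero pointers, and reject 0 outright.  */
    static uintptr_t
    pick_tcache_key (void)
    {
      uintptr_t key;
      do
        arc4random_buf (&key, sizeof key);   /* placeholder entropy source */
      while (key == 0
             || __builtin_popcountll (key) < MIN_BITS
             || KEY_BITS - __builtin_popcountll (key) < MIN_BITS);
      return key;
    }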
2025-08-08 | malloc: Fix MALLOC_DEBUG | Wilco Dijkstra | 1 file changed, -2/+2
MALLOC_DEBUG only works on locked arenas, so move the call to check_inuse_chunk from __libc_free() to _int_free_chunk(). Regress now passes if MALLOC_DEBUG is enabled. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-08-08 | malloc: Support THP in arenas | Wilco Dijkstra | 1 file changed, -3/+8
Arenas support huge pages but not transparent huge pages. Add this by also checking mp_.thp_pagesize when creating a new arena, and use madvise. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-08-08 | malloc: Remove use of __curbrk | Wilco Dijkstra | 1 file changed, -5/+3
Remove an odd use of __curbrk and use MORECORE (0) instead. This fixes Hurd build since it doesn't define this symbol. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-08-04Revert "Remove use of __curbrk."Wilco Dijkstra1-3/+5
This reverts commit 1ee0b771a9c0cd2b882fe7acd38deddb7d4fbef2.
2025-08-04Revert "Improve MALLOC_DEBUG"Wilco Dijkstra1-2/+2
This reverts commit 4b3e65682d1895a651653d82f05c66ead8dfcf3b.
2025-08-04Revert "Enable THP on arenas"Wilco Dijkstra1-8/+3
This reverts commit 77d3e739360ebb49bae6ecfd4181e4e1692f6362.
2025-08-04Revert "Use _int_free_chunk in tcache_thread_shutdown"Wilco Dijkstra1-6/+2
This reverts commit 05ef6a49746faedb4262db1476449c1c2c822e95.
2025-08-04Revert "Remove dumped heap support"Wilco Dijkstra2-15/+582
This reverts commit 8f57caa7fdcb7ab3016897a056ccf386061e7734.
2025-08-04Revert "malloc: Cleanup libc_realloc"Wilco Dijkstra1-11/+15
This reverts commit dea1e52af38c20eae37ec09727f17ab8fde87f55.
2025-08-04Revert "Change mmap representation"Wilco Dijkstra2-24/+28
This reverts commit 4b74591022e88639dcaefb8c4a2e405d301a59e2.
2025-08-04 | Remove use of __curbrk. | Wilco Dijkstra | 1 file changed, -5/+3
2025-08-04 | Improve MALLOC_DEBUG | Wilco Dijkstra | 1 file changed, -2/+2
2025-08-04 | Enable THP on arenas | Wilco Dijkstra | 1 file changed, -3/+8
2025-08-04 | Use _int_free_chunk in tcache_thread_shutdown | Wilco Dijkstra | 1 file changed, -2/+6
2025-08-04 | Remove dumped heap support | Wilco Dijkstra | 2 files changed, -582/+15
2025-08-04 | malloc: Cleanup libc_realloc | Wilco Dijkstra | 1 file changed, -15/+11
Minor cleanup of libc_realloc: remove unnecessary special cases for mmap, move ar_ptr initialization, first check for oldmem == NULL.
2025-08-04 | Change mmap representation | Wilco Dijkstra | 2 files changed, -28/+24
2025-08-02 | malloc: Cleanup sysmalloc_mmap | Wilco Dijkstra | 1 file changed, -72/+19
Cleanup sysmalloc_mmap - simplify padding since it is always a constant. Remove av parameter which is only used in do_check_chunk, but since it may be NULL for mmap, it will cause a crash in checking mode. Remove the odd check on mmap in do_check_chunk. Reviewed-by: DJ Delorie <dj@redhat.com>
2025-08-02 | malloc: Improve checked_request2size | Wilco Dijkstra | 2 files changed, -28/+13
Change checked_request2size to return SIZE_MAX for huge inputs. This ensures large allocation requests stay large and can't be confused with a small allocation. As a result several existing checks against PTRDIFF_MAX become redundant. Reviewed-by: DJ Delorie <dj@redhat.com>
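A simplified sketch of the saturating conversion described above; the overhead and alignment constants are illustrative rather than the exact glibc definitions:

    #include <stdint.h>
    #include <stddef.h>

    #define CHUNK_OVERHEAD     (2 * sizeof (size_t))
    #define MALLOC_ALIGN_MASK  (2 * sizeof (size_t) - 1)

    /* Convert a user request into a chunk size.  Huge requests saturate to
       SIZE_MAX, so they stay huge and cannot be mistaken for a small
       allocation by later size comparisons.  */
    static inline size_t
    checked_request2size_sketch (size_t req)
    {
      if (req > PTRDIFF_MAX)
        return SIZE_MAX;
      return (req + CHUNK_OVERHEAD + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK;
    }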
2025-08-02 | malloc: Cleanup madvise defines | Wilco Dijkstra | 1 file changed, -11/+2
Remove redundant ifdefs for madvise/THP. Reviewed-by: DJ Delorie <dj@redhat.com>
2025-08-02 | malloc: Fix MAX_TCACHE_SMALL_SIZE | Wilco Dijkstra | 1 file changed, -10/+8
MAX_TCACHE_SMALL_SIZE should use chunk size since it is used after checked_request2size. Increase limit of tcache_max_bytes by 1 since all comparisons use '<'. As a result, the last tcache entry is now used as expected. Reviewed-by: DJ Delorie <dj@redhat.com>
2025-07-29 | malloc: Enable THP always support on hugetlb tunable | William Hunt | 1 file changed, -7/+11
Enable support for the THP "always" mode when glibc.malloc.hugetlb=1, as the tunable currently only gives explicit support in malloc for the THP madvise mode by aligning to a huge page size. Add a thp_mode parameter to mp_ and check in madvise_thp whether the system is using madvise mode; otherwise the `__madvise` call is useless. thp_mode defaults to unsupported, but it is updated if the hugetlb tunable is set. Performance of xalancbmk improves by 4.9% on Neoverse V2 when THP always mode is set on the system and glibc.malloc.hugetlb=1. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
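A standalone sketch of detecting whether the system-wide THP mode is "always", by reading the standard sysfs file (the active mode is shown in brackets); this illustrates the idea only and is not the glibc tunables code:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Returns 1 if /sys/kernel/mm/transparent_hugepage/enabled selects
       "always", 0 otherwise (including when the file does not exist).  */
    static int
    thp_mode_is_always (void)
    {
      char buf[64];
      int fd = open ("/sys/kernel/mm/transparent_hugepage/enabled", O_RDONLY);
      if (fd < 0)
        return 0;
      ssize_t n = read (fd, buf, sizeof buf - 1);
      close (fd);
      if (n <= 0)
        return 0;
      buf[n] = '\0';
      return strstr (buf, "[always]") != NULL;
    }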
2025-07-29 | malloc: Remove redundant NULL check | Wilco Dijkstra | 1 file changed, -4/+3
Remove a redundant NULL check from tcache_get_n. Reviewed-by: Cupertino Miranda <cupertino.miranda@oracle.com>
2025-07-14 | malloc: fix definition for MAX_TCACHE_SMALL_SIZE | Cupertino Miranda | 1 file changed, -1/+1
Reviewed-by: Arjun Shankar <arjun@redhat.com>
2025-06-26 | malloc: Cleanup tcache_init() | Wilco Dijkstra | 1 file changed, -26/+8
Cleanup tcache_init() by using the new __libc_malloc2 interface. Reviewed-by: Cupertino Miranda <cupertino.miranda@oracle.com>
2025-06-26 | malloc: replace instances of __builtin_expect with __glibc_unlikely | William Hunt | 2 files changed, -26/+25
Replaced all instances of __builtin_expect with __glibc_unlikely within malloc.c and malloc-debug.c. This improves the portability of glibc by avoiding direct calls to GNU C built-in functions. Since all the expected results from calls to __builtin_expect were 0, __glibc_likely was never needed as a replacement. Multiple calls to __builtin_expect within a single if statement have been replaced with one call to __glibc_unlikely, which wraps every condition. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
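For reference, __glibc_unlikely is a thin wrapper provided by glibc's <sys/cdefs.h>; a minimal before/after sketch of the substitution (the condition and function are made up):

    #include <sys/cdefs.h>
    #include <stddef.h>

    static int
    check_sketch (const void *mem)
    {
      /* Before: if (__builtin_expect (mem == NULL, 0))
         After: the wrapper expands to the same builtin where available and
         degrades to a plain condition on other compilers.  */
      if (__glibc_unlikely (mem == NULL))
        return -1;
      return 0;
    }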
2025-06-26 | malloc: refactored aligned_OK and misaligned_chunk | William Hunt | 2 files changed, -12/+10
Renamed aligned_OK to misaligned_mem so as to be similar to misaligned_chunk, and inverted any assertions using the macro. Made misaligned_chunk call misaligned_mem after chunk2mem rather than bitmasking with the malloc alignment itself, since misaligned_chunk is meant to test the data chunk itself rather than the header, and the compiler will optimise the addition so the ternary operator is not needed. Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
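A rough sketch of the renamed macros and how they relate; the exact glibc definitions may differ:

    #include <stdint.h>
    #include <stddef.h>

    #define MALLOC_ALIGNMENT   (2 * sizeof (size_t))
    #define MALLOC_ALIGN_MASK  (MALLOC_ALIGNMENT - 1)

    /* User data starts after the two size_t header words.  */
    #define chunk2mem(p)        ((void *) ((char *) (p) + 2 * sizeof (size_t)))

    /* Non-zero when a user-data pointer is misaligned.  */
    #define misaligned_mem(m)   ((uintptr_t) (m) & MALLOC_ALIGN_MASK)

    /* A chunk is judged by the user pointer it hands out, so this simply
       applies misaligned_mem after chunk2mem.  */
    #define misaligned_chunk(p) misaligned_mem (chunk2mem (p))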
2025-06-19 | malloc: Link large tcache tests with $(shared-thread-library) | Florian Weimer | 1 file changed, -52/+58
Introduce tests-link-with-libpthread to list tests that require linking with libpthread, and use that to generate dependencies on $(shared-thread-library) for all multi-threaded tests. Fixes build failures of commit cde5caa4bb21d5c474b9e4762cc847bcbc70e481 ("malloc: add testing for large tcache support") on Hurd. Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2025-06-18 | malloc: Cleanup _mid_memalign | Wilco Dijkstra | 1 file changed, -14/+7
Remove unused 'address' parameter from _mid_memalign and callers. Fix off-by-one alignment calculation in __libc_pvalloc. Reviewed-by: DJ Delorie <dj@redhat.com>
2025-06-17 | malloc: Sort tests-exclude-largetcache in Makefile | H.J. Lu | 1 file changed, -2/+2
This fixes: FAIL: lint-makefiles Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
2025-06-16 | malloc: add testing for large tcache support | Cupertino Miranda | 1 file changed, -0/+16
This patch adds large tcache support tests by re-executing malloc tests using the tunable: glibc.malloc.tcache_max=1048576 Test names are postfixed with "largetcache". Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2025-06-16 | malloc: add tcache support for large chunk caching | Cupertino Miranda | 1 file changed, -82/+227
The existing tcache implementation in glibc focuses on caching smaller allocations, limiting the cached allocation size to 1KB. This patch changes the tcache implementation to allow caching chunks of any size. The implementation adds extra bins (linked lists) which store chunks for different ranges of allocation sizes. Bin selection is done in powers of 2 and chunks are inserted in increasing size order within each bin. The last bin contains all remaining allocation sizes. By default the patch preserves the existing behaviour, limiting caches to 1KB chunks, but it now allows the maximum cached chunk size to be increased with the tunable glibc.malloc.tcache_max. It also now checks whether a chunk was mmapped, in which case __libc_free will not add it to the tcache. Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
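A hedged sketch of power-of-2 bin selection for sizes above the small range; the boundaries, bin count, and names are illustrative, not the actual implementation:

    #include <stddef.h>

    #define SMALL_LIMIT   1024   /* sizes up to this use the exact small bins */
    #define N_LARGE_BINS  16     /* one bin per power-of-2 range, last is catch-all */

    /* Map a chunk size above SMALL_LIMIT to a large-bin index: bin 0 covers
       (1KB, 2KB], bin 1 covers (2KB, 4KB], and so on; anything beyond the
       covered ranges falls into the last bin.  */
    static size_t
    large_bin_index (size_t size)
    {
      size_t idx = 0;
      size_t limit = SMALL_LIMIT * 2;
      while (idx < N_LARGE_BINS - 1 && size > limit)
        {
          limit *= 2;
          idx++;
        }
      return idx;
    }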
2025-06-03 | malloc: Count tcache entries downwards | Wilco Dijkstra | 1 file changed, -14/+15
Currently tcache requires 2 global variable accesses to determine whether a block can be added to the tcache. Change the counts array to 'num_slots' to indicate the number of entries that could still be added. If 'num_slots' reaches zero, no more blocks can be added. If the entries pointer is not NULL, at least one block is available for allocation. Now each tcache bin can support a different maximum number of entries, and they can be individually switched on or off (a zero-initialized num_slots and entries pointer means the tcache bin is not available for free or malloc). Reviewed-by: DJ Delorie <dj@redhat.com>
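A minimal sketch of the counting-down scheme with hypothetical structures; the real tcache code differs in detail:

    #include <stddef.h>
    #include <stdbool.h>

    struct entry { struct entry *next; };

    struct bin
    {
      struct entry *entries;   /* singly linked free list; NULL when empty */
      size_t num_slots;        /* remaining capacity; 0 means "do not add" */
    };

    /* Freeing into the bin: one counter check, counting down to zero instead
       of comparing a count against a separate global limit.  */
    static bool
    bin_put (struct bin *b, struct entry *e)
    {
      if (b->num_slots == 0)
        return false;
      b->num_slots--;
      e->next = b->entries;
      b->entries = e;
      return true;
    }

    /* Allocating from the bin: a non-NULL entries pointer means at least one
       block is available.  */
    static struct entry *
    bin_get (struct bin *b)
    {
      struct entry *e = b->entries;
      if (e == NULL)
        return NULL;
      b->entries = e->next;
      b->num_slots++;
      return e;
    }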
2025-05-14 | malloc: Improve performance of __libc_calloc | Wilco Dijkstra | 1 file changed, -27/+43
Improve performance of __libc_calloc by splitting it into 2 parts: first handle the tcache fastpath, then do the rest in a separate tailcalled function. This results in significant performance gains since __libc_calloc doesn't need to setup a frame. On Neoverse V2, bench-calloc-simple improves by 5.0% overall. Bench-calloc-thread 1 improves by 24%. Reviewed-by: DJ Delorie <dj@redhat.com>
2025-05-12 | malloc: Improve malloc initialization | Wilco Dijkstra | 4 files changed, -61/+11
Move malloc initialization to __libc_early_init. Use a hidden __ptmalloc_init for initialization and a weak call to avoid pulling in the system malloc in a static binary. All previous initialization checks can now be removed. Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-05-12 | malloc: Improved double free detection in the tcache | David Lau | 3 files changed, -14/+76
The previous double-free detection did not account for an attacker using a terminating null byte that overflows from the previous chunk to change the size of a memory chunk, and therefore the tcache bin it is being sorted into, so that the check in 'tcache_double_free_verify' would pass even though it is a double free. Solution: let 'tcache_double_free_verify' iterate over all tcache entries to detect double frees. This patch only protects against buffer overflows of one byte, but off-by-one errors are arguably the most common errors to be made. Alternatives considered: store the size of a memory chunk in big-endian order, so the chunk size would not get overwritten because entries in the tcache are not that big; or move the tcache_key before the actual memory chunk so that it does not have to be checked at all, which would work better in general but would also increase memory usage. Signed-off-by: David Lau <david.lau@fau.de> Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
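A simplified sketch of the full scan described above; the structures and bin count are hypothetical:

    #include <stddef.h>
    #include <stdbool.h>

    #define N_BINS 64

    struct tc_entry { struct tc_entry *next; };
    struct tcache_sketch { struct tc_entry *bins[N_BINS]; };

    /* Scan every bin rather than only the one matching the (possibly
       corrupted) size: a one-byte overflow into the size field can change
       which bin a chunk sorts into, so only a full scan reliably catches the
       repeated free.  */
    static bool
    is_double_free (const struct tcache_sketch *tc, const struct tc_entry *chunk)
    {
      for (int i = 0; i < N_BINS; i++)
        for (const struct tc_entry *e = tc->bins[i]; e != NULL; e = e->next)
          if (e == chunk)
            return true;
      return false;
    }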
2025-05-01 | malloc: Inline tcache_try_malloc | Wilco Dijkstra | 1 file changed, -44/+7
Inline tcache_try_malloc into calloc since it is the only caller. Also fix usize2tidx and use it in __libc_malloc, __libc_calloc and _mid_memalign. The result is simpler, cleaner code. Reviewed-by: DJ Delorie <dj@redhat.com>
2025-04-16 | malloc: move tcache_init out of hot tcache paths | Cupertino Miranda | 1 file changed, -12/+6
This patch moves all calls to tcache_init out of the tcache hot paths. There is no reason to initialize the tcache in the hot path, and we need to be able to check tcache != NULL in any case because of the tcache_thread_shutdown function, so moving tcache_init off the hot path can only be beneficial. The patch also removes the initialization of the tcache within the __libc_free call: it only makes sense to initialize the tcache for a thread after it calls one of the allocation functions. The patch also removes the save/restore of errno from the tcache_init code, as it is no longer needed.
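A small sketch of the resulting control flow: the hot path only tests the thread-local pointer, and the initialization happens on the cold path once the thread actually allocates; all names are illustrative, not the glibc code:

    #include <stddef.h>
    #include <stdlib.h>

    /* Hypothetical per-thread cache holding a single spare block, just enough
       to show the shape of the fast and slow paths.  */
    struct cache_sketch { void *spare; };

    static __thread struct cache_sketch *cache_ptr;

    /* Cold path: the tcache_init-equivalent work lives here, off the hot
       path, and only runs once the thread allocates something.  */
    static void *
    alloc_slow (size_t size)
    {
      if (cache_ptr == NULL)
        cache_ptr = calloc (1, sizeof *cache_ptr);
      return malloc (size);
    }

    void *
    alloc_sketch (size_t size)
    {
      /* Hot path: a single thread-local NULL check, no init call.  */
      struct cache_sketch *c = cache_ptr;
      if (c != NULL && c->spare != NULL)
        {
          void *p = c->spare;
          c->spare = NULL;
          return p;
        }
      return alloc_slow (size);
    }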