path: root/malloc
9 days ago  linux: Add support for getrandom vDSO  (Adhemerval Zanella, 1 file, -2/+2)
Linux 6.11 has getrandom() in vDSO. It operates on a thread-local opaque state allocated with mmap using flags specified by the vDSO.

Multiple states are allocated at once, as many as fit into a page, and these are held in an array of available states to be doled out to each thread upon first use, and recycled when a thread terminates. As these states run low, more are allocated.

To make this procedure async-signal-safe, a simple guard is used in the LSB of the opaque state address, falling back to the syscall if there's reentrancy contention.

Also, _Fork() is handled by blocking signals on opaque state allocation (so _Fork() always sees a consistent state even if it interrupts a getrandom() call) and by iterating over the thread stack cache on reclaim_stack. Each opaque state will be in the free states list (grnd_alloc.states) or allocated to a running thread.

The cancellation is handled by always using GRND_NONBLOCK flags while calling the vDSO, and falling back to the cancellable syscall if the kernel returns EAGAIN (would block). Since getrandom is not defined by POSIX and cancellation is supported as an extension, the cancellation is handled as 'may occur' instead of 'shall occur' [1], meaning that if the vDSO does not block (the expected behavior) getrandom will not act as a cancellation entrypoint. It avoids a pthread_testcancel call on the fast path (different than 'shall occur' functions, like sem_wait()).

It is currently enabled for x86_64, which is available in Linux 6.11, and aarch64, powerpc32, powerpc64, loongarch64, and s390x, which are available in Linux 6.12.

Link: https://pubs.opengroup.org/onlinepubs/9799919799/nframe.html [1]
Co-developed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Tested-by: Jason A. Donenfeld <Jason@zx2c4.com> # x86_64
Tested-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> # x86_64, aarch64
Tested-by: Xi Ruoyao <xry111@xry111.site> # x86_64, aarch64, loongarch64
Tested-by: Stefan Liebler <stli@linux.ibm.com> # s390x
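The LSB-guard idea can be sketched in isolation. The following is a minimal illustration with hypothetical names, not the actual glibc implementation, and with the plain getrandom() syscall standing in for the vDSO call:

  #include <stdatomic.h>
  #include <stdint.h>
  #include <sys/random.h>

  /* Hypothetical per-thread slot holding the address of the opaque vDSO
     state.  The low bit doubles as an "in use" guard; this works because
     the states are at least pointer-aligned.  */
  static _Thread_local _Atomic uintptr_t grnd_state;

  ssize_t
  getrandom_sketch (void *buf, size_t len, unsigned int flags)
  {
    uintptr_t s = atomic_load_explicit (&grnd_state, memory_order_relaxed);

    /* No state yet, or we interrupted an in-flight call (e.g. from a
       signal handler): use the plain syscall instead of the vDSO.  */
    if (s == 0 || (s & 1) != 0)
      return getrandom (buf, len, flags);

    /* Mark the state busy before touching it ...  */
    atomic_store_explicit (&grnd_state, s | 1, memory_order_relaxed);

    /* ... call the vDSO entry with (void *) s as the opaque state (the
       plain syscall stands in for it in this sketch) ...  */
    ssize_t ret = getrandom (buf, len, flags);

    /* ... and release the state again.  */
    atomic_store_explicit (&grnd_state, s, memory_order_relaxed);
    return ret;
  }

A handler that interrupts the function between the two stores sees the guard bit set and takes the syscall path, which is what makes the state hand-out async-signal-safe.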
2024-08-20  malloc: Link threading tests with $(shared-thread-library)  (Samuel Thibault, 1 file, -0/+6)
Fixes build failures on Hurd.
2024-07-27  malloc: Link threading tests with $(shared-thread-library)  (Florian Weimer, 1 file, -0/+2)
Fixes build failures on Hurd.
2024-07-22  malloc: add multi-threaded tests for aligned_alloc/calloc/malloc  (Miguel Martín, 3 files, -0/+172)
Improve aligned_alloc/calloc/malloc test coverage by adding multi-threaded tests with random memory allocations and with/without cross-thread memory deallocations.

Perform a number of memory allocation calls with random sizes limited to 0xffff. Use the existing DSO ('malloc/tst-aligned_alloc-lib.c') to randomize allocator selection.

The multi-threaded allocation/deallocation is staged as described below:

- Stage 1: Half of the threads will be allocating memory and the other half will be waiting for them to finish the allocation.
- Stage 2: Half of the threads will be allocating memory and the other half will be deallocating memory.
- Stage 3: Half of the threads will be deallocating memory and the second half waiting on them to finish.

Add 'malloc/tst-aligned-alloc-random-thread.c' where each thread will deallocate only the memory that was previously allocated by itself.

Add 'malloc/tst-aligned-alloc-random-thread-cross.c' where each thread will deallocate memory that was previously allocated by another thread.

The intention is to be able to utilize existing malloc testing to ensure that similar allocation APIs are also exposed to the same rigors.

Reviewed-by: Arjun Shankar <arjun@redhat.com>
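The staging maps naturally onto a barrier between phases. A minimal sketch under that assumption (illustrative only, not the test's actual code; compile with -pthread):

  #include <pthread.h>
  #include <stdlib.h>

  #define NTHREADS 8
  #define NALLOCS  256

  static pthread_barrier_t barrier;
  static void *blocks[NTHREADS][NALLOCS];

  static void *
  worker (void *arg)
  {
    size_t id = (size_t) arg;
    int first_half = id < NTHREADS / 2;
    unsigned int seed = (unsigned int) id + 1;

    /* Stage 1: the first half allocates while the second half waits.  */
    if (first_half)
      for (int i = 0; i < NALLOCS; i++)
        blocks[id][i] = malloc (rand_r (&seed) % 0xffff + 1);
    pthread_barrier_wait (&barrier);

    /* Stage 2: the second half allocates while the first half frees.  */
    if (first_half)
      for (int i = 0; i < NALLOCS; i++)
        free (blocks[id][i]);
    else
      for (int i = 0; i < NALLOCS; i++)
        blocks[id][i] = malloc (rand_r (&seed) % 0xffff + 1);
    pthread_barrier_wait (&barrier);

    /* Stage 3: the second half frees while the first half waits.  */
    if (!first_half)
      for (int i = 0; i < NALLOCS; i++)
        free (blocks[id][i]);
    pthread_barrier_wait (&barrier);
    return NULL;
  }

  int
  main (void)
  {
    pthread_t t[NTHREADS];
    pthread_barrier_init (&barrier, NULL, NTHREADS);
    for (size_t i = 0; i < NTHREADS; i++)
      pthread_create (&t[i], NULL, worker, (void *) i);
    for (size_t i = 0; i < NTHREADS; i++)
      pthread_join (t[i], NULL);
    pthread_barrier_destroy (&barrier);
    return 0;
  }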
2024-07-22  malloc: avoid global locks in tst-aligned_alloc-lib.c  (Miguel Martín, 1 file, -19/+20)
Make sure the DSO used by aligned_alloc/calloc/malloc tests does not get a global lock on multithreaded tests.

Reviewed-by: Arjun Shankar <arjun@redhat.com>
2024-07-19  Fix usage of _STACK_GROWS_DOWN and _STACK_GROWS_UP defines [BZ 31989]  (John David Anglin, 1 file, -1/+1)
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Reviewed-By: Andreas K. Hüttel <dilfridge@gentoo.org>
2024-06-24  mtrace: make shell commands robust against meta characters  (Andreas Schwab, 1 file, -2/+2)
Use the list form of the open function to avoid interpreting meta characters in the arguments.
2024-06-20  malloc: Replace shell/Perl gate in mtrace  (Florian Weimer, 1 file, -4/+17)
The previous version expanded $0 and $@ twice. The new version defines a q no-op shell command. The Perl syntax error is masked by the eval Perl function. The q { … } construct is executed by the shell without errors because the q shell function was defined, but treated as a non-expanding quoted string by Perl, effectively hiding its contents from the Perl interpreter.

As before, the script is read by require instead of executed directly, to avoid infinite recursion because the #! line contains /bin/sh.

Introduce the “fatal” function to produce diagnostics that are not suppressed by “do”. Use “do” instead of “require” because it has fewer requirements on the executed script than “require”. Prefix relative paths with './' because “do” (and “require” before) searches for the script in @INC if the path is relative and does not start with './'. Use $_ to make the trampoline shorter. Add an Emacs mode marker to identify the script as a Perl script.
2024-06-20  malloc: Always install mtrace (bug 31892)  (Florian Weimer, 2 files, -7/+4)
Generation of the Perl script does not depend on Perl, so we can always install it even if $(PERL) is not set during the build. Change the malloc/mtrace.pl text substitution not to rely on $(PERL). Instead use PATH at run time to find the Perl interpreter.

The Perl interpreter cannot directly execute a script that starts with “#! /bin/sh”: it always executes it with /bin/sh. There is no Perl command-line switch to disable this behavior. Instead, use the Perl require function to execute the script. The additional shift calls remove the “.” shell arguments. Perl interprets the “.” as a string concatenation operator, making the expression syntactically valid.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2024-06-04  malloc: New test to check malloc alternate path using memory obstruction  (sayan paul, 2 files, -0/+73)
The test aims to ensure that malloc uses the alternate path to allocate memory when sbrk() or brk() fails. To achieve this, the test first creates an obstruction at the current program break, verifies the obstruction with a failing sbrk(), then checks whether malloc still returns a valid pointer, thus inferring that malloc() used mmap() instead of brk() or sbrk() to allocate the memory.

Reviewed-by: Arjun Shankar <arjun@redhat.com>
Reviewed-by: Zack Weinberg <zack@owlfolio.org>
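The obstruction trick can be sketched as follows (illustrative only, not the test's actual code; a real test would prefer MAP_FIXED_NOREPLACE over MAP_FIXED to avoid clobbering an existing mapping):

  #include <assert.h>
  #include <stdint.h>
  #include <stdlib.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int
  main (void)
  {
    long ps = sysconf (_SC_PAGESIZE);

    /* Map a PROT_NONE "wall" at the (page-aligned) current program break
       so that brk/sbrk can no longer grow the heap.  */
    uintptr_t brk_now = (uintptr_t) sbrk (0);
    uintptr_t wall_addr = (brk_now + ps - 1) & ~((uintptr_t) ps - 1);
    void *wall = mmap ((void *) wall_addr, ps, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    assert (wall != MAP_FAILED);

    /* Growing the break past the wall must now fail.  */
    assert (sbrk (16 * ps) == (void *) -1);

    /* Requests below the mmap threshold would normally extend the heap
       via sbrk; with the break blocked, malloc has to fall back to mmap
       internally and should still succeed.  */
    for (int i = 0; i < 1024; i++)
      assert (malloc (0x4000) != NULL);

    return 0;
  }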
2024-05-14  malloc: Improve aligned_alloc and calloc test coverage.  (Joe Simmons-Talbott, 5 files, -0/+151)
Add a DSO (malloc/tst-aligned_alloc-lib.so) that can be used during testing to interpose malloc with a call that randomly uses either aligned_alloc, __libc_malloc, or __libc_calloc in the place of malloc. Use LD_PRELOAD with the DSO to mirror malloc/tst-malloc.c testing as an example in malloc/tst-malloc-random.c.

Add malloc/tst-aligned-alloc-random.c as another example that does a number of malloc calls with randomly sized, but limited to 0xffff, requests.

The intention is to be able to utilize existing malloc testing to ensure that similar allocation APIs are also exposed to the same rigors.

Reviewed-by: DJ Delorie <dj@redhat.com>
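The interposition approach can be pictured as a preloaded DSO whose malloc forwards to one of the other allocation entry points. A reduced sketch with an ad-hoc selection scheme, not the real DSO's logic (__libc_malloc and __libc_calloc are exported glibc entry points):

  #include <stdlib.h>

  extern void *__libc_malloc (size_t);
  extern void *__libc_calloc (size_t, size_t);

  void *
  malloc (size_t size)
  {
    /* rand() may itself allocate, so a real interposer uses a simpler
       pseudo-random source; a static counter stands in here.  */
    static unsigned int counter;
    switch (counter++ % 3)
      {
      case 0:
        /* aligned_alloc wants the size to be a multiple of the
           alignment, so round the request up.  */
        return aligned_alloc (16, (size + 15) & ~(size_t) 15);
      case 1:
        return __libc_calloc (1, size);
      default:
        return __libc_malloc (size);
      }
  }

Built with -shared -fPIC and injected via LD_PRELOAD, such a library routes an ordinary malloc-based test through the other allocators without changing the test itself.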
2024-05-10  malloc/Makefile: Split and sort tests  (H.J. Lu, 1 file, -64/+102)
Put each test on a separate line and sort tests.

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2024-01-12  Make __getrandom_nocancel set errno and add a _nostatus version  (Xi Ruoyao, 1 file, -1/+3)
The __getrandom_nocancel function returns errors as negative values instead of setting errno. This is inconsistent with other _nocancel functions and it breaks "TEMP_FAILURE_RETRY (__getrandom_nocancel (p, n, 0))" in __arc4random_buf. Use INLINE_SYSCALL_CALL instead of INTERNAL_SYSCALL_CALL to fix this issue.

But __getrandom_nocancel has been avoiding touching errno for a reason; see BZ 29624. So add a __getrandom_nocancel_nostatus function and use it in tcache_key_initialize.

Signed-off-by: Xi Ruoyao <xry111@xry111.site>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
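For context, TEMP_FAILURE_RETRY (from <unistd.h>, with _GNU_SOURCE) only retries when the call returns -1 with errno set to EINTR, so a callee that reports errors as negative return values never triggers the retry. A small illustration:

  #define _GNU_SOURCE
  #include <errno.h>
  #include <stdio.h>
  #include <sys/random.h>
  #include <unistd.h>

  int
  main (void)
  {
    unsigned char buf[16];

    /* getrandom sets errno on failure, so the retry loop works: the macro
       expands to roughly
         ({ long r; do r = getrandom (...); while (r == -1 && errno == EINTR); r; })
       A function returning -EINTR instead of (-1, errno == EINTR) falls
       straight through this loop.  */
    ssize_t n = TEMP_FAILURE_RETRY (getrandom (buf, sizeof buf, 0));
    if (n < 0)
      perror ("getrandom");
    else
      printf ("got %zd random bytes\n", n);
    return 0;
  }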
2024-01-01  Update copyright dates not handled by scripts/update-copyrights  (Paul Eggert, 3 files, -3/+3)
I've updated copyright dates in glibc for 2024. This is the patch for the changes not generated by scripts/update-copyrights and subsequent build / regeneration of generated files.
2024-01-01  Update copyright dates with scripts/update-copyrights  (Paul Eggert, 90 files, -90/+90)
2023-11-29  malloc: Improve MAP_HUGETLB with glibc.malloc.hugetlb=2  (Adhemerval Zanella, 1 file, -3/+10)
Even for explicit large page support, allocation might use mmap without the hugepage bit set if the requested size is smaller than mmap_threshold. For this case where mmap is issued, MAP_HUGETLB is set iff the allocation size is larger than the used large page.

To force such allocations to use large pages, also tune the mmap_threshold (if it is not explicitly set by a tunable). This forces allocation to follow the sbrk path, which will fall back to mmap (which will try large pages before falling back to the default mmap).

Checked on x86_64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
2023-11-22  malloc: Use __get_nprocs on arena_get2 (BZ 30945)  (Adhemerval Zanella, 1 file, -1/+1)
This restores the 2.33 semantics for arena_get2. It was changed by 11a02b035b46 to avoid arena_get2 calling malloc (back when __get_nprocs was refactored to use a scratch_buffer - 903bc7dcc2acafc). The __get_nprocs function has since been refactored again and now also avoids calling malloc.

Commit 11a02b035b46 did not take into consideration any performance implication, which should have been discussed properly. The __get_nprocs_sched is still used as a fallback mechanism if procfs and sysfs are not accessible.

Checked on x86_64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
2023-11-07  malloc: Decorate malloc maps  (Adhemerval Zanella, 2 files, -0/+9)
Add anonymous mmap annotations on loader malloc, malloc when it allocates memory with mmap, and on malloc arena. The /proc/self/maps will now print:

  [anon: glibc: malloc arena]
  [anon: glibc: malloc]
  [anon: glibc: loader malloc]

On arena allocation, glibc annotates only the read/write mapping.

Checked on x86_64-linux-gnu and aarch64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
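On Linux, such annotations are applied with the PR_SET_VMA_ANON_NAME prctl (kernel 5.17+, anonymous private mappings only). A stand-alone sketch of the mechanism, independent of glibc internals:

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/mman.h>
  #include <sys/prctl.h>
  #include <unistd.h>

  /* Constants from <linux/prctl.h>, for older userspace headers.  */
  #ifndef PR_SET_VMA
  # define PR_SET_VMA 0x53564d41
  # define PR_SET_VMA_ANON_NAME 0
  #endif

  int
  main (void)
  {
    size_t len = 1 << 20;
    void *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
      return 1;

    /* Name the anonymous mapping; it then shows up in /proc/self/maps as
       "[anon: demo arena]".  Fails with EINVAL on kernels built without
       CONFIG_ANON_VMA_NAME, which is harmless for this demo.  */
    if (prctl (PR_SET_VMA, PR_SET_VMA_ANON_NAME, p, len, "demo arena") != 0)
      perror ("prctl");

    /* Show the annotated line.  */
    char cmd[64];
    snprintf (cmd, sizeof cmd, "grep anon: /proc/%d/maps", (int) getpid ());
    system (cmd);

    munmap (p, len);
    return 0;
  }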
2023-10-23  malloc: Fix tst-tcfree3 build csky-linux-gnuabiv2 with fortify source  (Adhemerval Zanella, 2 files, -3/+2)
With gcc 13.1 with --enable-fortify-source=2, tst-tcfree3 fails to build on csky-linux-gnuabiv2 with:

  ../string/bits/string_fortified.h: In function ‘do_test’:
  ../string/bits/string_fortified.h:26:8: error: inlining failed in call to ‘always_inline’ ‘memcpy’: target specific option mismatch
     26 | __NTH (memcpy (void *__restrict __dest, const void *__restrict __src,
        |        ^~~~~~
  ../misc/sys/cdefs.h:81:62: note: in definition of macro ‘__NTH’
     81 | # define __NTH(fct) __attribute__ ((__nothrow__ __LEAF)) fct
        |                                                          ^~~
  tst-tcfree3.c:45:3: note: called from here
     45 |   memcpy (c, a, 32);
        |   ^~~~~~~~~~~~~~~~~

Instead of relying on -O0 to avoid malloc/free from being optimized away, disable the builtin.

Reviewed-by: DJ Delorie <dj@redhat.com>
2023-08-15  malloc: Remove bin scanning from memalign (bug 30723)  (Florian Weimer, 2 files, -166/+10)
On the test workload (mpv --cache=yes with VP9 video decoding), the bin scanning has a very poor success rate (less than 2%). The tcache scanning has about 50% success rate, so keep that. Update comments in malloc/tst-memalign-2 to indicate the purpose of the tests.

Even with the scanning removed, the additional merging opportunities since commit 542b1105852568c3ebc712225ae78b ("malloc: Enable merging of remainders in memalign (bug 30723)") are sufficient to pass the existing large bins test.

Remove leftover variables from _int_free from refactoring in the same commit.

Reviewed-by: DJ Delorie <dj@redhat.com>
2023-08-11  malloc: Enable merging of remainders in memalign (bug 30723)  (Florian Weimer, 1 file, -76/+121)
Previously, calling _int_free from _int_memalign could put remainders into the tcache or into fastbins, where they are invisible to the low-level allocator. This results in missed merge opportunities because once these freed chunks become available to the low-level allocator, further memalign allocations (even of the same size) are likely obstructing merges.

Furthermore, during forwards merging in _int_memalign, do not completely give up when the remainder is too small to serve as a chunk on its own. We can still give it back if it can be merged with the following unused chunk. This makes it more likely that memalign calls in a loop achieve a compact memory layout, independently of initial heap layout.

Drop some useless (unsigned long) casts along the way, and tweak the style to more closely match GNU on changed lines.

Reviewed-by: DJ Delorie <dj@redhat.com>
2023-07-26  malloc: Fix set-freeres.c with gcc 6  (Adhemerval Zanella Netto, 1 file, -0/+46)
Old GCC might trigger the "comparison will always evaluate as ‘true’" warning for a static build:

  set-freeres.c:87:14: error: the comparison will always evaluate as ‘true’ for the address of ‘__libc_getgrgid_freemem_ptr’ will never be NULL [-Werror=address]
     if (&__ptr != NULL) \

So add pragma weak for all affected usages.

Checked on x86_64 and i686 with gcc 6 and 13.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
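The pattern at issue is a NULL check on the address of a weak symbol: declaring the symbol weak makes the check meaningful (its address really can be NULL when the symbol is undefined) and avoids -Waddress complaints. A reduced sketch with a hypothetical symbol name:

  #include <stdio.h>

  /* With the weak attribute (or "#pragma weak buffer_to_free"), the symbol
     may legitimately stay undefined, so &buffer_to_free can be NULL and
     the check below is not a tautology for the compiler.  */
  extern void *buffer_to_free __attribute__ ((weak));

  void
  free_all (void)
  {
    if (&buffer_to_free != NULL && buffer_to_free != NULL)
      {
        puts ("freeing buffer_to_free");
        /* free (buffer_to_free); in the real code.  */
      }
  }

  int
  main (void)
  {
    free_all ();   /* Prints nothing when the symbol is not defined.  */
    return 0;
  }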
2023-07-06  realloc: Limit chunk reuse to only growing requests [BZ #30579]  (Siddhesh Poyarekar, 1 file, -8/+15)
The trim_threshold is too aggressive a heuristic to decide if chunk reuse is OK for reallocated memory; for repeated small, shrinking allocations it leads to internal fragmentation and for repeated larger allocations that fragmentation may blow up even worse due to the dynamic nature of the threshold.

Limit reuse only when it is within the alignment padding, which is 2 * size_t for heap allocations and a page size for mmapped allocations. There's the added wrinkle of THP, but this fix ignores it for now, pessimizing that case in favor of keeping fragmentation low.

This resolves BZ #30579.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reported-by: Nicolas Dusart <nicolas@freedelity.be>
Reported-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
2023-06-12  malloc: Decrease resource usage for malloc tests  (Adhemerval Zanella Netto, 1 file, -12/+11)
The tst-mallocfork2 and tst-mallocfork3 tests create a large number of subprocesses, around 11k for the former and 20k for the latter, to check malloc async-signal-safety on both fork and _Fork. However, they do not really exercise allocation patterns different from the other malloc tests, and the spawned processes just exit without any extra computation. The tst-malloc-tcache-leak test is similar, but creates 100k threads and already checks the result with mallinfo.

These tests are also very sensitive to system load (since they heavily stress kernel resource allocation), and adding them to the THP tunable and mcheck test variants increases the pressure even more. For THP the fork tests do not add any more coverage than the other tests. The mcheck variant is also not enabled for tst-malloc-tcache-leak.

Checked on x86_64-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-06-06  Move {read,write}_all functions to a dedicated header  (Frédéric Bérat, 2 files, -60/+2)
Since these functions are used in both catgets/gencat.c and malloc/memusage{,stat}.c, it makes sense to move them into a dedicated header where they can be inlined.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
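For context, a read_all-style helper loops until the requested count is reached, coping with short reads and EINTR. A sketch of such a helper (an illustration, not the contents of the new header):

  #include <errno.h>
  #include <unistd.h>

  /* Read exactly LEN bytes from FD into BUF, retrying on short reads and
     EINTR.  Returns the number of bytes read, which is less than LEN only
     on end-of-file, or -1 on error.  */
  ssize_t
  read_all_sketch (int fd, void *buf, size_t len)
  {
    char *p = buf;
    size_t done = 0;
    while (done < len)
      {
        ssize_t n = read (fd, p + done, len - done);
        if (n == 0)
          break;                /* End of file.  */
        if (n < 0)
          {
            if (errno == EINTR)
              continue;
            return -1;
          }
        done += n;
      }
    return done;
  }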
2023-06-02  Fix a few more typos I missed in previous round -- BZ 25337  (Paul Pluzhnikov, 1 file, -1/+1)
2023-06-02  Fix all the remaining misspellings -- BZ 25337  (Paul Pluzhnikov, 6 files, -14/+14)
2023-06-01  malloc/{memusage.c, memusagestat.c}: fix warn unused result  (Frédéric Bérat, 2 files, -16/+86)
Fix unused result warnings, detected when _FORTIFY_SOURCE is enabled in glibc.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-05-08  aligned_alloc: conform to C17  (DJ Delorie, 5 files, -6/+116)
This patch adds the strict checking for power-of-two alignments in aligned_alloc(), and updates the manual accordingly.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
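The user-visible effect: a non-power-of-two alignment is now rejected. For example (the failing call returns NULL; errno is expected to be EINVAL):

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
    /* Valid: 64 is a power of two.  */
    void *ok = aligned_alloc (64, 1024);
    printf ("aligned_alloc (64, 1024) -> %p\n", ok);
    free (ok);

    /* Invalid under C17: 48 is not a power of two, so glibc now fails.  */
    errno = 0;
    void *bad = aligned_alloc (48, 1024);
    printf ("aligned_alloc (48, 1024) -> %p, errno=%d\n", bad, errno);
    free (bad);
    return 0;
  }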
2023-05-02  malloc: Really fix tst-memalign-3 link against threads  (Samuel Thibault, 1 file, -1/+2)
All the tst malloc variants need the thread linking flags.
2023-05-02  malloc: Fix tst-memalign-3 link against threads  (Samuel Thibault, 1 file, -0/+1)
2023-04-20  malloc: Add missing shared thread library flags  (Adhemerval Zanella, 1 file, -0/+1)
So tst-memalign-3 builds on Hurd.
2023-04-18  malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)  (DJ Delorie, 4 files, -82/+268)
Based on these comments in malloc.c:

  size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
  from a non-main arena.  This is only set immediately before handing
  the chunk to the user, if necessary.

  The NON_MAIN_ARENA flag is never set for unsorted chunks, so it
  does not have to be taken into account in size comparisons.

When we pull a chunk off the unsorted list (or any list) we need to make sure that flag is set properly before returning the chunk.

Use the rounded-up size for chunk_ok_for_memalign().

Do not scan the arena for reusable chunks if there's no arena.

Account for chunk overhead when determining if a chunk is a reuse candidate.

mcheck interferes with memalign, so skip mcheck variants of memalign tests.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2023-04-05  malloc: Only set pragma weak for rpc freemem if required  (Adhemerval Zanella, 1 file, -2/+4)
Both __rpc_freemem and __rpc_thread_destroy are only used if the compat symbols are required.
2023-03-29  memalign: Support scanning for aligned chunks.  (DJ Delorie, 3 files, -28/+390)
This patch adds a chunk scanning algorithm to the _int_memalign code path that reduces heap fragmentation by reusing already aligned chunks instead of always looking for chunks of larger sizes and splitting them. The tcache macros are extended to allow removing a chunk from the middle of the list.

The goal is to fix the pathological use cases where heaps grow continuously in workloads that are heavy users of memalign.

Note that tst-memalign-2 checks for tcache operation, which malloc-check bypasses.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2023-03-29  malloc: Use C11 atomics on memusage  (Adhemerval Zanella, 1 file, -82/+111)
Checked on x86_64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
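To illustrate the kind of change involved, here is a generic C11 <stdatomic.h> counter/peak tracker of the sort memusage-style tooling needs (a sketch, not the memusage.c code):

  #include <stdatomic.h>
  #include <stdio.h>

  /* Counters updated from interposed allocation hooks must be
     thread-safe; C11 atomics make that explicit.  */
  static _Atomic unsigned long total_allocs;
  static _Atomic unsigned long peak_heap;

  static void
  note_alloc (unsigned long current_heap)
  {
    atomic_fetch_add_explicit (&total_allocs, 1, memory_order_relaxed);

    /* Classic compare-exchange loop to track a maximum.  */
    unsigned long peak = atomic_load_explicit (&peak_heap,
                                               memory_order_relaxed);
    while (current_heap > peak
           && !atomic_compare_exchange_weak_explicit (&peak_heap, &peak,
                                                      current_heap,
                                                      memory_order_relaxed,
                                                      memory_order_relaxed))
      ;
  }

  int
  main (void)
  {
    note_alloc (128);
    note_alloc (64);
    printf ("allocs=%lu peak=%lu\n",
            (unsigned long) total_allocs, (unsigned long) peak_heap);
    return 0;
  }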
2023-03-29  Remove --enable-tunables configure option  (Adhemerval Zanella Netto, 4 files, -137/+5)
Make tunables always supported. The configure option was added in glibc 2.25 and some features require it (such as the hwcap mask, huge pages support, and lock elision tuning). Removing it also simplifies the build permutations.

Changes from v1:
* Remove the glibc.rtld.dynamic_sort changes, it is orthogonal and needs more discussion.
* Clean up more code.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-03-29  Remove --disable-experimental-malloc option  (Adhemerval Zanella, 1 file, -4/+0)
It has been the default since 2.26 and the option has bitrotted over the years. Building with it makes multiple malloc tests fail:

  FAIL: malloc/tst-memalign-2
  FAIL: malloc/tst-memalign-2-malloc-hugetlb1
  FAIL: malloc/tst-memalign-2-malloc-hugetlb2
  FAIL: malloc/tst-memalign-2-mcheck
  FAIL: malloc/tst-mxfast-malloc-hugetlb1
  FAIL: malloc/tst-mxfast-malloc-hugetlb2
  FAIL: malloc/tst-tcfree2
  FAIL: malloc/tst-tcfree2-malloc-hugetlb1
  FAIL: malloc/tst-tcfree2-malloc-hugetlb2

Checked on x86_64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
2023-03-28  Allow building with --disable-nscd again  (Flavio Cruz, 1 file, -0/+6)
The change 88677348b4de breaks the build with undefined references to the NSCD functions.
2023-03-27  Move libc_freeres_ptrs and libc_subfreeres to hidden/weak functions  (Adhemerval Zanella Netto, 2 files, -29/+136)
They are both used by __libc_freeres to free all library malloc-allocated resources, to help tooling like mtrace or valgrind with memory leak tracking.

The current scheme uses assembly markers and linker script entries to consolidate the free routine function pointers in the RELRO segment and the to-be-freed buffers in BSS.

This patch changes it to use specific free functions for libc_freeres_ptrs buffers and to call the function pointer array directly with call_function_static_weak. It allows the removal of both the internal macros and the linker script sections.

Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-03-08  malloc: Fix transposed arguments in sysmalloc_mmap_fallback call  (Robert Morell, 1 file, -2/+2)
git commit 0849eed45daa ("malloc: Move MORECORE fallback mmap to sysmalloc_mmap_fallback") moved a block of code from sysmalloc to a new helper function sysmalloc_mmap_fallback(), but 'pagesize' is used for the 'minsize' argument and 'MMAP_AS_MORECORE_SIZE' for the 'pagesize' argument.

Fixes: 0849eed45daa ("malloc: Move MORECORE fallback mmap to sysmalloc_mmap_fallback")
Signed-off-by: Robert Morell <rmorell@nvidia.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2023-02-22  malloc: remove redundant check of unsorted bin corruption  (Ayush Mittal, 1 file, -2/+0)
* malloc/malloc.c (_int_malloc): remove redundant check of unsorted bin corruption

With commit b90ddd08f6dd688e651df9ee89ca3a69ff88cd0c ("malloc: Additional checks for unsorted bin integrity"), the same check of (bck->fd != victim) is performed before the check for unsorted chunk corruption, which was added in bdc3009b8ff0effdbbfb05eb6b10966753cbf9b8 ("Added check before removing from unsorted list"):

  3773   if (__glibc_unlikely (bck->fd != victim)
  3774       || __glibc_unlikely (victim->fd != unsorted_chunks (av)))
  3775     malloc_printerr ("malloc(): unsorted double linked list corrupted");
  ..
  3815   /* remove from unsorted list */
  3816   if (__glibc_unlikely (bck->fd != victim))
  3817     malloc_printerr ("malloc(): corrupted unsorted chunks 3");
  3818   unsorted_chunks (av)->bk = bck;
  ..

So this extra check can be removed.

Signed-off-by: Maninder Singh <maninder1.s@samsung.com>
Signed-off-by: Ayush Mittal <ayush.m@samsung.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
2023-01-06  Update copyright dates not handled by scripts/update-copyrights  (Joseph Myers, 3 files, -3/+3)
I've updated copyright dates in glibc for 2023. This is the patch for the changes not generated by scripts/update-copyrights and subsequent build / regeneration of generated files.
2023-01-06  Update copyright dates with scripts/update-copyrights  (Joseph Myers, 87 files, -87/+87)
2022-12-22  Avoid use of atoi in malloc  (Joseph Myers, 1 file, -7/+12)
This patch is analogous to commit a3708cf6b0a5a68e2ed1ce3db28a03ed21d368d2.

atoi has undefined behavior on out-of-range input, which makes it problematic to use anywhere in glibc that might be processing input out-of-range for atoi but not specified to produce undefined behavior for the function calling atoi. In conjunction with the C2x strtol changes, use of atoi in libc can also result in localplt test failures because the redirection for strtol does not interact properly with the libc_hidden_proto call for __isoc23_strtol for the call in the inline atoi implementation.

In malloc/arena.c, this issue shows up for atoi calls that are only compiled for --disable-tunables (thus with the x86_64-linux-gnu-minimal configuration of build-many-glibcs.py, for example). Change those atoi calls to use strtol directly, as in the previous such changes.

Tested for x86_64 (--disable-tunables).
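The difference in a nutshell: atoi is undefined when the value does not fit in int, while strtol clamps the result and reports ERANGE. A small, generic comparison (not the arena.c change itself):

  #include <errno.h>
  #include <limits.h>
  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
    const char *input = "99999999999999999999";

    /* atoi (input) would be undefined behavior here: the value is far
       out of range for int and atoi gives no way to detect that.  */

    errno = 0;
    char *end;
    long v = strtol (input, &end, 10);
    if (errno == ERANGE)
      printf ("out of range, clamped to %ld\n", v);   /* LONG_MAX */
    else if (end == input)
      puts ("not a number");
    else
      printf ("parsed %ld\n", v);
    return 0;
  }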
2022-12-08  realloc: Return unchanged if request is within usable size  (Siddhesh Poyarekar, 2 files, -0/+33)
If there is enough space in the chunk to satisfy the new size, return the old pointer as is, thus avoiding any locks or reallocations. The only real place this has a benefit is in large chunks that tend to get satisfied with mmap, since there is a large enough spare size (up to a page) for it to matter. For allocations on heap, the extra size is typically barely a few bytes (up to 15) and it's unlikely that it would make much difference in performance.

Also added a smoke test to ensure that the old pointer is returned unchanged if the new size to realloc is within usable size of the old pointer.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
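The behavior can be observed with malloc_usable_size (a glibc extension from <malloc.h>). A sketch along the lines of the smoke test, not the actual malloc/tst-realloc code:

  #include <assert.h>
  #include <malloc.h>
  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
    /* A large request is typically served by mmap, leaving page-rounding
       slack beyond the requested size.  */
    size_t req = 256 * 1024;
    void *p = malloc (req);
    assert (p != NULL);

    size_t usable = malloc_usable_size (p);
    printf ("requested %zu, usable %zu\n", req, usable);

    /* Growing within the usable size should hand back the same pointer
       with this change.  */
    if (usable > req)
      {
        void *q = realloc (p, usable);
        assert (q == p);
        p = q;
      }

    free (p);
    return 0;
  }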
2022-11-01  malloc: Use uintptr_t for pointer alignment  (Carlos Eduardo Seo, 1 file, -3/+3)
Avoid integer casts that assume unsigned long can represent pointers.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
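Illustrated generically (not the actual malloc.c lines): alignment arithmetic goes through uintptr_t, which is defined to round-trip pointers, rather than unsigned long, which need not be pointer-sized on every ABI:

  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Round P up to the next multiple of ALIGN (a power of two), without
     assuming that unsigned long can hold a pointer.  */
  static void *
  align_up (void *p, size_t align)
  {
    uintptr_t u = (uintptr_t) p;
    u = (u + align - 1) & ~(uintptr_t) (align - 1);
    return (void *) u;
  }

  int
  main (void)
  {
    char buf[64];
    void *aligned = align_up (buf + 1, 16);
    printf ("%p -> %p\n", (void *) (buf + 1), aligned);
    return 0;
  }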
2022-10-28  Remove unused scratch_buffer_dupfree  (Szabolcs Nagy, 4 files, -63/+0)
Turns out the scratch_buffer_dupfree internal API was unused since commit ef0700004bf0dccf493a5e8e21f71d9e7972ea9f ("stdlib: Simplify buffer management in canonicalize"). And the related test in malloc/tst-scratch_buffer had issues, so it's better to remove it completely.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-10-28  malloc: Use uintptr_t in alloc_buffer  (Szabolcs Nagy, 1 file, -3/+3)
The values represent pointers and not sizes. The members of struct alloc_buffer are already uintptr_t.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-10-13  malloc: Switch global_max_fast to uint8_t  (Florian Weimer, 1 file, -1/+1)
MAX_FAST_SIZE is 160 at most, so a uint8_t is sufficient. This makes it harder to use memory corruption, by overwriting global_max_fast with a large value, to fundamentally alter malloc behavior.

Reviewed-by: DJ Delorie <dj@redhat.com>