path: root/malloc/malloc.c
2018-02-21  Mechanically remove _IO_ name aliases for types and constants.  (Zack Weinberg; 1 file changed, -3/+3)
This patch mechanically removes all remaining uses, and the definitions, of the following libio name aliases:

	name                        replaced with
	----                        -------------
	_IO_FILE                    FILE
	_IO_fpos_t                  __fpos_t
	_IO_fpos64_t                __fpos64_t
	_IO_size_t                  size_t
	_IO_ssize_t                 ssize_t or __ssize_t
	_IO_off_t                   off_t
	_IO_off64_t                 off64_t
	_IO_pid_t                   pid_t
	_IO_uid_t                   uid_t
	_IO_wint_t                  wint_t
	_IO_va_list                 va_list or __gnuc_va_list
	_IO_BUFSIZ                  BUFSIZ
	_IO_cookie_io_functions_t   cookie_io_functions_t
	__io_read_fn                cookie_read_function_t
	__io_write_fn               cookie_write_function_t
	__io_seek_fn                cookie_seek_function_t
	__io_close_fn               cookie_close_function_t

I used __fpos_t and __fpos64_t instead of fpos_t and fpos64_t because the definitions of fpos_t and fpos64_t depend on the largefile mode. I used __ssize_t and __gnuc_va_list in a handful of headers where namespace cleanliness might be relevant even though they're internal-use-only. In all other cases, I used the public-namespace name.

There are a tiny handful of places where I left a use of 'struct _IO_FILE' alone, because it was being used together with 'struct _IO_FILE_plus' or 'struct _IO_FILE_complete' in the same arithmetic expression.

Because this patch was almost entirely done with search and replace, I may have introduced indentation botches. I did proofread the diff, but I may have missed something. The ChangeLog below calls out all of the places where this was not a pure search-and-replace change.

Installed stripped libraries and executables are unchanged by this patch, except that some assertions in vfscanf.c change line numbers.

	* libio/libio.h (_IO_FILE): Delete; all uses changed to FILE.
	(_IO_fpos_t): Delete; all uses changed to __fpos_t.
	(_IO_fpos64_t): Delete; all uses changed to __fpos64_t.
	(_IO_size_t): Delete; all uses changed to size_t.
	(_IO_ssize_t): Delete; all uses changed to ssize_t or __ssize_t.
	(_IO_off_t): Delete; all uses changed to off_t.
	(_IO_off64_t): Delete; all uses changed to off64_t.
	(_IO_pid_t): Delete; all uses changed to pid_t.
	(_IO_uid_t): Delete; all uses changed to uid_t.
	(_IO_wint_t): Delete; all uses changed to wint_t.
	(_IO_va_list): Delete; all uses changed to va_list or __gnuc_va_list.
	(_IO_BUFSIZ): Delete; all uses changed to BUFSIZ.
	(_IO_cookie_io_functions_t): Delete; all uses changed to cookie_io_functions_t.
	(__io_read_fn): Delete; all uses changed to cookie_read_function_t.
	(__io_write_fn): Delete; all uses changed to cookie_write_function_t.
	(__io_seek_fn): Delete; all uses changed to cookie_seek_function_t.
	(__io_close_fn): Delete; all uses changed to cookie_close_function_t.
	* libio/iofopncook.c: Remove unnecessary forward declarations.
	* libio/iolibio.h: Correct outdated commentary.
	* malloc/malloc.c (__malloc_stats): Remove unnecessary casts.
	* stdio-common/fxprintf.c (__fxprintf_nocancel): Remove unnecessary casts.
	* stdio-common/getline.c: Use _IO_getdelim directly.  Don't redefine ssize_t.
	* stdio-common/printf_fp.c, stdio-common/printf_fphex.c
	* stdio-common/printf_size.c: Don't redefine size_t or FILE.  Remove outdated comments.
	* stdio-common/vfscanf.c: Don't redefine va_list.
2018-02-10  [BZ #22830] malloc_stats: restore cancellation for stderr correctly.  (Zack Weinberg; 1 file changed, -1/+1)
malloc_stats means to disable cancellation for writes to stderr while it runs, but it restores stderr->_flags2 with |= instead of =, so what it actually does is disable cancellation on stderr permanently.

	[BZ #22830]
	* malloc/malloc.c (__malloc_stats): Restore stderr->_flags2 correctly.
	* malloc/tst-malloc-stats-cancellation.c: New test case.
	* malloc/Makefile: Add new test case.
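The fix is easiest to see as the usual save-and-restore idiom. The following is a stand-alone sketch, not the glibc source: the generic stream_flags and NOTCANCEL names stand in for stderr->_flags2 and _IO_FLAGS2_NOTCANCEL.

    #include <stdio.h>

    enum { NOTCANCEL = 0x2 };          /* stand-in for _IO_FLAGS2_NOTCANCEL */
    static int stream_flags;           /* stand-in for stderr->_flags2 */

    static void
    print_stats (void)
    {
      int old_flags = stream_flags;    /* remember the caller's flags */
      stream_flags |= NOTCANCEL;       /* disable cancellation while printing */

      fputs ("arena statistics would be printed here\n", stderr);

      /* The bug: `stream_flags |= old_flags;' leaves NOTCANCEL set forever.
         The fix: assign the saved value back unchanged.  */
      stream_flags = old_flags;
    }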
2018-01-29  malloc: Use assert.h's assert macro  (Samuel Thibault; 1 file changed, -7/+4)
This avoids assert definition conflicts if some of the headers used by malloc.c happen to include assert.h. Malloc still needs a malloc-avoiding implementation, which we get by redirecting __assert_fail to malloc's __malloc_assert.

	* malloc/malloc.c: Include <assert.h>.
	(assert): Do not define.
	[!defined NDEBUG] (__assert_fail): Define to __malloc_assert.
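In outline, the redirection looks like the sketch below; the __malloc_assert prototype is shown only for illustration and its argument names are assumptions, but the overall shape follows the description above.

    #include <assert.h>

    /* malloc's own reporting function; it must not allocate memory itself.  */
    extern void __malloc_assert (const char *assertion, const char *file,
                                 unsigned int line, const char *function)
      __attribute__ ((noreturn));

    #if !defined NDEBUG
    /* assert() expands to a call to __assert_fail; send those calls to
       __malloc_assert instead, so a failing assertion inside malloc never
       re-enters malloc.  */
    # define __assert_fail(assertion, file, line, function) \
             __malloc_assert (assertion, file, line, function)
    #endif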
2018-01-18  Fix integer overflows in internal memalign and malloc functions [BZ #22343]  (Arjun Shankar; 1 file changed, -8/+22)
When posix_memalign is called with an alignment less than MALLOC_ALIGNMENT and a requested size close to SIZE_MAX, it falls back to malloc code (because the alignment of a block returned by malloc is sufficient to satisfy the call). In this case, an integer overflow in _int_malloc leads to posix_memalign incorrectly returning successfully.

Upon fixing this and writing a somewhat thorough regression test, it was discovered that when posix_memalign is called with an alignment larger than MALLOC_ALIGNMENT (so it uses _int_memalign instead) and a requested size close to SIZE_MAX, a different integer overflow in _int_memalign leads to posix_memalign incorrectly returning successfully.

Both integer overflows affect other memory allocation functions that use _int_malloc (one affected malloc in x86) or _int_memalign as well. This commit fixes both integer overflows.

In addition to this, it adds a regression test to guard against false successful allocations by the following memory allocation functions when called with too-large allocation sizes and, where relevant, various valid alignments: malloc, realloc, calloc, reallocarray, memalign, posix_memalign, aligned_alloc, valloc, and pvalloc.
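The underlying hazard is the unchecked request-to-chunk-size conversion. Below is a self-contained sketch of the kind of check that prevents the wrap-around; request2size_checked, ALIGN_MASK and MIN_CHUNK are illustrative stand-ins, not the names used in malloc.c.

    #include <errno.h>
    #include <stdint.h>

    #define ALIGN_MASK (2 * sizeof (size_t) - 1)   /* stand-in for MALLOC_ALIGN_MASK */
    #define MIN_CHUNK  (4 * sizeof (size_t))       /* stand-in for MINSIZE */

    /* Turn a request into a padded, aligned chunk size, refusing to wrap.  */
    static int
    request2size_checked (size_t req, size_t *out)
    {
      if (req > SIZE_MAX - MIN_CHUNK - ALIGN_MASK)
        {
          errno = ENOMEM;
          return -1;                          /* caller must report failure */
        }
      size_t sz = (req + sizeof (size_t) + ALIGN_MASK) & ~ALIGN_MASK;
      *out = sz < MIN_CHUNK ? MIN_CHUNK : sz;
      return 0;
    }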
2018-01-12  malloc: Ensure that the consolidated fast chunk has a sane size.  (Istvan Kurucsai; 1 file changed, -0/+6)
2018-01-01  Update copyright dates with scripts/update-copyrights.  (Joseph Myers; 1 file changed, -1/+1)
	* All files with FSF copyright notices: Update copyright dates using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2017-11-30  Fix integer overflow in malloc when tcache is enabled [BZ #22375]  (Arjun Shankar; 1 file changed, -1/+2)
When the per-thread cache is enabled, __libc_malloc uses request2size (which does not perform an overflow check) to calculate the chunk size from the requested allocation size. This leads to an integer overflow causing malloc to incorrectly return the last successfully allocated block when called with a very large size argument (close to SIZE_MAX). This commit uses checked_request2size instead, removing the overflow.
2017-11-23  malloc: Call tcache destructor in arena_thread_freeres  (Florian Weimer; 1 file changed, -7/+16)
It does not make sense to register separate cleanup functions for arena and tcache since they're always going to be called together. Call the tcache cleanup function from within arena_thread_freeres since it at least makes the order of those cleanups clear in the code. Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2017-11-15  malloc: Account for all heaps in an arena in malloc_info [BZ #22439]  (Florian Weimer; 1 file changed, -4/+13)
This commit adds a "subheaps" field to the malloc_info output that shows the number of heaps that were allocated to extend a non-main arena. Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2017-11-15  malloc: Add missing arena lock in malloc_info [BZ #22408]  (Florian Weimer; 1 file changed, -4/+12)
Obtain the size information while the arena lock is acquired, and only print it later.
2017-10-24  Add single-threaded path to _int_malloc  (Wilco Dijkstra; 1 file changed, -25/+38)
This patch adds single-threaded fast paths to _int_malloc.

	* malloc/malloc.c (_int_malloc): Add SINGLE_THREAD_P path.
2017-10-24  Add single-threaded path to malloc/realloc/calloc/memalign  (Wilco Dijkstra; 1 file changed, -9/+41)
This patch adds a single-threaded fast path to malloc, realloc, calloc and memalign. When we're single-threaded, we can bypass arena_get (which always locks the arena it returns) and just use the main arena. Also avoid retrying a different arena since there is just the main arena.

	* malloc/malloc.c (__libc_malloc): Add SINGLE_THREAD_P path.
	(__libc_realloc): Likewise.
	(_mid_memalign): Likewise.
	(__libc_calloc): Likewise.
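Abridged sketch of the fast path this adds to __libc_malloc; the identifiers are the ones named in malloc.c, but the surrounding code and the tcache logic are omitted, so this is not the verbatim source.

      void *victim;

      if (SINGLE_THREAD_P)
        {
          /* Only one thread exists: use the main arena directly, without
             taking its lock and without any fallback to another arena.  */
          victim = _int_malloc (&main_arena, bytes);
          assert (!victim || chunk_is_mmapped (mem2chunk (victim))
                  || &main_arena == arena_for_chunk (mem2chunk (victim)));
          return victim;
        }

      /* Multi-threaded: lock an arena and retry elsewhere on failure.  */
      arena_get (ar_ptr, bytes);
      victim = _int_malloc (ar_ptr, bytes);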
2017-10-20  Fix build issue with SINGLE_THREAD_P  (Wilco Dijkstra; 1 file changed, -0/+3)
Add sysdep-cancel.h include.

	* malloc/malloc.c (sysdep-cancel.h): Add include.
2017-10-20  Add single-threaded path to _int_free  (Wilco Dijkstra; 1 file changed, -14/+29)
This patch adds single-threaded fast paths to _int_free. Bypass the explicit locking for larger allocations.

	* malloc/malloc.c (_int_free): Add SINGLE_THREAD_P fast paths.
2017-10-19  Fix deadlock in _int_free consistency check  (Wilco Dijkstra; 1 file changed, -9/+12)
This patch fixes a deadlock in the fastbin consistency check. If we fail the fast check due to concurrent modifications to the next chunk or system_mem, we should not lock if we already have the arena lock. Simplify the check to make it obviously correct.

	* malloc/malloc.c (_int_free): Fix deadlock bug in consistency check.
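The shape of the corrected check, as a sketch: the arena lock is taken only when the caller does not already hold it, and both conditions are re-evaluated together under that lock. Identifiers follow malloc.c, and the error call is abridged.

      if (!have_lock)
        {
          __libc_lock_lock (av->mutex);
          fail = (chunksize_nomask (chunk_at_offset (p, size)) <= 2 * SIZE_SZ
                  || chunksize (chunk_at_offset (p, size)) >= av->system_mem);
          __libc_lock_unlock (av->mutex);
        }

      if (fail)
        malloc_printerr ("free(): invalid next size (fast)");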
2017-10-18  Fix build failure on tilepro due to unsupported atomics  (Wilco Dijkstra; 1 file changed, -1/+2)
* malloc/malloc.c (malloc_state): Use int for have_fastchunks since not all targets support atomics on bool.
2017-10-17  Improve malloc initialization sequence  (Wilco Dijkstra; 1 file changed, -109/+88)
The current malloc initialization is quite convoluted. Instead of sometimes calling malloc_consolidate from ptmalloc_init, call malloc_init_state early so that the main_arena is always initialized. The special initialization can now be removed from malloc_consolidate. This also fixes BZ #22159.

Check all calls to malloc_consolidate and remove calls that are redundant initialization after ptmalloc_init, like in int_mallinfo and __libc_mallopt (but keep the latter as consolidation is required for set_max_fast). Update comments to improve clarity.

Remove impossible initialization check from _int_malloc, fix assert in do_check_malloc_state to ensure arena->top != 0. Fix the obvious bugs in do_check_free_chunk and do_check_remalloced_chunk to enable single threaded malloc debugging (do_check_malloc_state is not thread safe!).

	[BZ #22159]
	* malloc/arena.c (ptmalloc_init): Call malloc_init_state.
	* malloc/malloc.c (do_check_free_chunk): Fix build bug.
	(do_check_remalloced_chunk): Fix build bug.
	(do_check_malloc_state): Add assert that checks arena->top.
	(malloc_consolidate): Remove initialization.
	(int_mallinfo): Remove call to malloc_consolidate.
	(__libc_mallopt): Clarify why malloc_consolidate is needed.
2017-10-17  Use relaxed atomics for malloc have_fastchunks  (Wilco Dijkstra; 1 file changed, -32/+20)
Currently free typically uses 2 atomic operations per call. The have_fastchunks flag indicates whether there are recently freed blocks in the fastbins. This is purely an optimization to avoid calling malloc_consolidate too often and to avoid the overhead of walking all fast bins even if all are empty during a sequence of allocations. However, using catomic_or to update the flag is completely unnecessary since it can be changed into a simple boolean and accessed using relaxed atomics. There is no change in multi-threaded behaviour given the flag is already approximate (it may be set when there are no blocks in any fast bins, or it may be clear when there are free blocks that could be consolidated).

Performance of malloc/free improves by 27% on a simple benchmark on AArch64 (both single and multithreaded). The number of load/store exclusive instructions is reduced by 33%. Bench-malloc-thread speeds up by ~3% in all cases.

	* malloc/malloc.c (FASTCHUNKS_BIT): Remove.
	(have_fastchunks): Remove.
	(clear_fastchunks): Remove.
	(set_fastchunks): Remove.
	(malloc_state): Add have_fastchunks.
	(malloc_init_state): Use have_fastchunks.
	(do_check_malloc_state): Remove incorrect invariant checks.
	(_int_malloc): Use have_fastchunks.
	(_int_free): Likewise.
	(malloc_consolidate): Likewise.
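A self-contained illustration of the approach using C11 atomics; glibc itself uses its internal atomic_store_relaxed/atomic_load_relaxed wrappers on a member of struct malloc_state rather than the names shown here.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Hint flag: "some fastbin may be non-empty".  It is only approximate,
       so relaxed ordering is sufficient.  */
    static atomic_bool have_fastchunks;

    static void
    note_fastbin_insert (void)
    {
      atomic_store_explicit (&have_fastchunks, true, memory_order_relaxed);
    }

    static bool
    consolidation_worthwhile (void)
    {
      return atomic_load_explicit (&have_fastchunks, memory_order_relaxed);
    }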
2017-10-17  Inline tcache functions  (Wilco Dijkstra; 1 file changed, -2/+2)
The functions tcache_get and tcache_put show up in profiles as they are a critical part of the tcache code. Inline them to give tcache a 16% performance gain. Since this improves multi-threaded cases as well, it helps offset any potential performance loss due to adding single-threaded fast paths.

	* malloc/malloc.c (tcache_put): Inline.
	(tcache_get): Inline.
2017-10-06  malloc: Fix tcache leak after thread destruction [BZ #22111]  (Carlos O'Donell; 1 file changed, -3/+5)
The malloc tcache added in 2.26 will leak all of the elements remaining in the cache and the cache structure itself when a thread exits. The defect is that we do not set tcache_shutting_down early enough, and the thread simply recreates the tcache and places the elements back onto a new tcache which is subsequently lost as the thread exits (unfreed memory). The fix is relatively simple, move the setting of tcache_shutting_down earlier in tcache_thread_freeres. We add a test case which uses mallinfo and some heuristics to look for unaccounted for memory usage between the start and end of a thread start/join loop. It is very reliable at detecting that there is a leak given the number of iterations. Without the fix the test will consume 122MiB of leaked memory.
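In outline, the ordering that matters in tcache_thread_freeres looks like this; a sketch based on the description above, using the tcache identifiers from malloc.c, with bounds and count bookkeeping omitted.

    static void
    tcache_thread_freeres (void)
    {
      tcache_perthread_struct *tcache_tmp = tcache;

      if (tcache_tmp == NULL)
        return;

      /* Must happen before any chunk is freed: freeing may otherwise
         re-create the tcache and repopulate it, and that copy is lost
         when the thread exits.  */
      tcache_shutting_down = true;
      tcache = NULL;

      for (size_t i = 0; i < TCACHE_MAX_BINS; ++i)
        while (tcache_tmp->entries[i] != NULL)
          {
            tcache_entry *e = tcache_tmp->entries[i];
            tcache_tmp->entries[i] = e->next;
            __libc_free (e);
          }

      __libc_free (tcache_tmp);
    }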
2017-08-31  malloc: Remove the internal_function attribute  (Florian Weimer; 1 file changed, -13/+4)
2017-08-31  malloc: Resolve compilation failure in NDEBUG mode  (Florian Weimer; 1 file changed, -18/+7)
In _int_free, the locked variable is not used if NDEBUG is defined.
2017-08-31  malloc: Change top_check return type to void  (Florian Weimer; 1 file changed, -1/+1)
After commit ec2c1fcefb200c6cb7e09553f3c6af8815013d83, (malloc: Abort on heap corruption, without a backtrace), the function always returns 0.
2017-08-30  malloc: Remove corrupt arena flag  (Florian Weimer; 1 file changed, -13/+0)
This is no longer needed because we now abort immediately once heap corruption is detected.
2017-08-30  malloc: Remove check_action variable [BZ #21754]  (Florian Weimer; 1 file changed, -122/+30)
Clean up calls to malloc_printerr and trim its argument list. This also removes a few bits of work done before calling malloc_printerr (such as unlocking operations). The tunable/environment variable still enables the lightweight additional malloc checking, but mallopt (M_CHECK_ACTION) no longer has any effect.
2017-08-30  malloc: Abort on heap corruption, without a backtrace [BZ #21754]  (Florian Weimer; 1 file changed, -19/+4)
The stack trace printing caused deadlocks and has itself been targeted by code execution exploits.
2017-08-10  malloc: Avoid optimizer warning with GCC 7 and -O3  (Florian Weimer; 1 file changed, -4/+16)
2017-07-11  Avoid backtrace from __stack_chk_fail [BZ #12189]  (H.J. Lu; 1 file changed, -2/+4)
__stack_chk_fail is called on a corrupted stack, and a stack backtrace is very unreliable against a corrupted stack. __libc_message is changed to accept enum __libc_message_action and call BEFORE_ABORT only if action includes do_backtrace. __fortify_fail_abort is added to avoid a backtrace from __stack_chk_fail.

	[BZ #12189]
	* debug/Makefile (CFLAGS-tst-ssp-1.c): New.
	(tests): Add tst-ssp-1 if -fstack-protector works.
	* debug/fortify_fail.c: Include <stdbool.h>.
	(_fortify_fail_abort): New function.
	(__fortify_fail): Call _fortify_fail_abort.
	(__fortify_fail_abort): Add a hidden definition.
	* debug/stack_chk_fail.c: Include <stdbool.h>.
	(__stack_chk_fail): Call __fortify_fail_abort, instead of __fortify_fail.
	* debug/tst-ssp-1.c: New file.
	* include/stdio.h (__libc_message_action): New enum.
	(__libc_message): Replace int with enum __libc_message_action.
	(__fortify_fail_abort): New hidden prototype.
	* malloc/malloc.c (malloc_printerr): Update __libc_message calls.
	* sysdeps/posix/libc_fatal.c (__libc_message): Replace int with enum __libc_message_action.  Call BEFORE_ABORT only if action includes do_backtrace.
	(__libc_fatal): Update __libc_message call.
2017-07-06  Add per-thread cache to malloc  (DJ Delorie; 1 file changed, -9/+341)
	* config.make.in: Enable experimental malloc option.
	* configure.ac: Likewise.
	* configure: Regenerate.
	* manual/install.texi: Document it.
	* INSTALL: Regenerate.
	* malloc/Makefile: Likewise.
	* malloc/malloc.c: Add per-thread cache (tcache).
	(tcache_put): New.
	(tcache_get): New.
	(tcache_thread_freeres): New.
	(tcache_init): New.
	(__libc_malloc): Use cached chunks if available.
	(__libc_free): Initialize tcache if needed.
	(__libc_realloc): Likewise.
	(__libc_calloc): Likewise.
	(_int_malloc): Prefill tcache when appropriate.
	(_int_free): Likewise.
	(do_set_tcache_max): New.
	(do_set_tcache_count): New.
	(do_set_tcache_unsorted_limit): New.
	* manual/probes.texi: Document new probes.
	* malloc/arena.c: Add new tcache tunables.
	* elf/dl-tunables.list: Likewise.
	* manual/tunables.texi: Document them.
	* NEWS: Mention the per-thread cache.
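The two cache primitives at the heart of this change can be sketched as follows. The struct layout and names follow the 2.26-era code described above, but size checks, count limits and the chunk/user-pointer conversion details are simplified; mchunkptr and chunk2mem are malloc.c internals.

    enum { TCACHE_MAX_BINS = 64 };

    typedef struct tcache_entry
    {
      struct tcache_entry *next;              /* singly linked free list */
    } tcache_entry;

    typedef struct tcache_perthread_struct
    {
      char counts[TCACHE_MAX_BINS];
      tcache_entry *entries[TCACHE_MAX_BINS];
    } tcache_perthread_struct;

    static __thread tcache_perthread_struct *tcache;

    /* Push a just-freed chunk onto the per-thread bin tc_idx.  */
    static void
    tcache_put (mchunkptr chunk, size_t tc_idx)
    {
      tcache_entry *e = (tcache_entry *) chunk2mem (chunk);
      e->next = tcache->entries[tc_idx];
      tcache->entries[tc_idx] = e;
      ++(tcache->counts[tc_idx]);
    }

    /* Pop a cached chunk from bin tc_idx; the caller checks it is non-empty.  */
    static void *
    tcache_get (size_t tc_idx)
    {
      tcache_entry *e = tcache->entries[tc_idx];
      tcache->entries[tc_idx] = e->next;
      --(tcache->counts[tc_idx]);
      return (void *) e;
    }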
2017-05-03  Tweak realloc/MREMAP comment to be more accurate.  (DJ Delorie; 1 file changed, -3/+3)
MMap'd memory isn't shrunk without MREMAP, but IIRC this is intentional for performance reasons. Regardless, this patch tweaks the existing comment to be more accurate wrt the existing code.

	[BZ #21411]
	* malloc/malloc.c: Tweak realloc/MREMAP comment to be more accurate.
2017-04-18  malloc: Turn cfree into a compatibility symbol  (Florian Weimer; 1 file changed, -2/+3)
2017-04-01  Call the right helper function when setting mallopt M_ARENA_MAX (BZ #21338)  (Wladimir J. van der Laan; 1 file changed, -1/+1)
Fixes a typo introduced in commit be7991c0705e35b4d70a419d117addcd6c627319. This caused mallopt(M_ARENA_MAX) as well as the environment variable MALLOC_ARENA_MAX to not work as intended, because it set the wrong internal parameter.

	[BZ #21338]
	* malloc/malloc.c: Call do_set_arena_max for M_ARENA_MAX instead of the incorrect do_set_arena_test.
2017-03-17  Further harden glibc malloc metadata against 1-byte overflows.  (DJ Delorie; 1 file changed, -0/+2)
Additional check for chunk_size == next->prev->chunk_size in unlink().

2017-03-17  Chris Evans  <scarybeasts@gmail.com>

	* malloc/malloc.c (unlink): Add consistency check between size and next->prev->size, to further harden against 1-byte overflows.
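The added consistency check amounts to the comparison below, a sketch of the unlink() addition; P is the chunk being unlinked, the accessors are the malloc.c internals, and the error call is abridged.

      /* The size recorded in the free chunk's own header must equal the
         prev_size copy stored at the start of the next chunk; a 1-byte
         overflow into either field breaks the equality.  */
      if (chunksize (P) != prev_size (next_chunk (P)))
        malloc_printerr ("corrupted size vs. prev_size");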
2017-03-01  Narrowing the visibility of libc-internal.h even further.  (Zack Weinberg; 1 file changed, -1/+1)
posix/wordexp-test.c used libc-internal.h for PTR_ALIGN_DOWN; similar to what was done with libc-diag.h, I have split the definitions of cast_to_integer, ALIGN_UP, ALIGN_DOWN, PTR_ALIGN_UP, and PTR_ALIGN_DOWN to a new header, libc-pointer-arith.h.

It then occurred to me that the remaining declarations in libc-internal.h are mostly to do with early initialization, and probably most of the files including it, even in the core code, don't need it anymore. Indeed, only 19 files actually need what remains of libc-internal.h. 23 others need libc-diag.h instead, and 12 need libc-pointer-arith.h instead. No file needs more than one of them, and 16 don't need any of them! So, with this patch, libc-internal.h stops including libc-diag.h as well as losing the pointer arithmetic macros, and all including files are adjusted.

	* include/libc-pointer-arith.h: New file.  Define cast_to_integer, ALIGN_UP, ALIGN_DOWN, PTR_ALIGN_UP, and PTR_ALIGN_DOWN here.
	* include/libc-internal.h: Definitions of above macros moved from here.  Don't include libc-diag.h anymore either.
	* posix/wordexp-test.c: Include stdint.h and libc-pointer-arith.h.  Don't include libc-internal.h.
	* debug/pcprofile.c, elf/dl-tunables.c, elf/soinit.c, io/openat.c
	* io/openat64.c, misc/ptrace.c, nptl/pthread_clock_gettime.c
	* nptl/pthread_clock_settime.c, nptl/pthread_cond_common.c
	* string/strcoll_l.c, sysdeps/nacl/brk.c
	* sysdeps/unix/clock_settime.c
	* sysdeps/unix/sysv/linux/i386/get_clockfreq.c
	* sysdeps/unix/sysv/linux/ia64/get_clockfreq.c
	* sysdeps/unix/sysv/linux/powerpc/get_clockfreq.c
	* sysdeps/unix/sysv/linux/sparc/sparc64/get_clockfreq.c: Don't include libc-internal.h.
	* elf/get-dynamic-info.h, iconv/loop.c
	* iconvdata/iso-2022-cn-ext.c, locale/weight.h, locale/weightwc.h
	* misc/reboot.c, nis/nis_table.c, nptl_db/thread_dbP.h
	* nscd/connections.c, resolv/res_send.c, soft-fp/fmadf4.c
	* soft-fp/fmasf4.c, soft-fp/fmatf4.c, stdio-common/vfscanf.c
	* sysdeps/ieee754/dbl-64/e_lgamma_r.c
	* sysdeps/ieee754/dbl-64/k_rem_pio2.c
	* sysdeps/ieee754/flt-32/e_lgammaf_r.c
	* sysdeps/ieee754/flt-32/k_rem_pio2f.c
	* sysdeps/ieee754/ldbl-128/k_tanl.c
	* sysdeps/ieee754/ldbl-128ibm/k_tanl.c
	* sysdeps/ieee754/ldbl-96/e_lgammal_r.c
	* sysdeps/ieee754/ldbl-96/k_tanl.c, sysdeps/nptl/futex-internal.h: Include libc-diag.h instead of libc-internal.h.
	* elf/dl-load.c, elf/dl-reloc.c, locale/programs/locarchive.c
	* nptl/nptl-init.c, string/strcspn.c, string/strspn.c
	* malloc/malloc.c, sysdeps/i386/nptl/tls.h
	* sysdeps/nacl/dl-map-segments.h, sysdeps/x86_64/atomic-machine.h
	* sysdeps/unix/sysv/linux/spawni.c
	* sysdeps/x86_64/nptl/tls.h: Include libc-pointer-arith.h instead of libc-internal.h.
	* elf/get-dynamic-info.h, sysdeps/nacl/dl-map-segments.h
	* sysdeps/x86_64/atomic-machine.h: Add multiple include guard.
2017-01-01  Update copyright dates with scripts/update-copyrights.  (Joseph Myers; 1 file changed, -1/+1)
2016-10-28  malloc: Update comments about chunk layout  (Florian Weimer; 1 file changed, -10/+30)
2016-10-28  sysmalloc: Initialize previous size field of mmaped chunks  (Florian Weimer; 1 file changed, -0/+1)
With different encodings of the header, the previous zero initialization may be insufficient and produce an invalid encoding.
2016-10-28  malloc: Use accessors for chunk metadata access  (Florian Weimer; 1 file changed, -63/+84)
This change allows us to change the encoding of these struct members in a centralized fashion.
2016-10-27  Static inline functions for mallopt helpers  (Siddhesh Poyarekar; 1 file changed, -34/+93)
Make mallopt helper functions for each mallopt parameter so that they can be called consistently in other areas, like setting tunables.

	* malloc/malloc.c (do_set_mallopt_check): New function.
	(do_set_mmap_threshold): Likewise.
	(do_set_mmaps_max): Likewise.
	(do_set_top_pad): Likewise.
	(do_set_perturb_byte): Likewise.
	(do_set_trim_threshold): Likewise.
	(do_set_arena_max): Likewise.
	(do_set_arena_test): Likewise.
	(__libc_mallopt): Use them.
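The resulting structure is roughly the following abridged sketch with only two parameters shown; mp_ is malloc.c's parameter block, and locking, probes and the remaining cases are omitted.

    static inline int
    do_set_mmap_threshold (size_t value)
    {
      mp_.mmap_threshold = value;
      mp_.no_dyn_threshold = 1;        /* user override disables auto-tuning */
      return 1;
    }

    static inline int
    do_set_top_pad (size_t value)
    {
      mp_.top_pad = value;
      return 1;
    }

    int
    __libc_mallopt (int param_number, int value)
    {
      int res = 0;
      switch (param_number)
        {
        case M_MMAP_THRESHOLD:  res = do_set_mmap_threshold (value);  break;
        case M_TOP_PAD:         res = do_set_top_pad (value);         break;
        /* ... the other parameters dispatch to their do_set_* helper ... */
        }
      return res;
    }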
2016-10-26  malloc: Remove malloc_get_state, malloc_set_state [BZ #19473]  (Florian Weimer; 1 file changed, -2/+0)
After the removal of __malloc_initialize_hook, newly compiled Emacs binaries are no longer able to use these interfaces. malloc_get_state is only used during the Emacs build process, so we provide a stub implementation only. Existing Emacs binaries will not call this stub function, but still reference the symbol.

The rewritten tst-mallocstate test constructs a dumped heap which should approximate what existing Emacs binaries pass to glibc malloc.
2016-10-26  Remove redundant definitions of M_ARENA_* macros  (Siddhesh Poyarekar; 1 file changed, -5/+0)
The M_ARENA_MAX and M_ARENA_TEST macros are defined in malloc.c as well as malloc.h, and the former is unnecessary. This patch removes the duplicate. Tested on x86_64 to verify that the generated code remains unchanged barring changed line numbers to __malloc_assert.

	* malloc/malloc.c (M_ARENA_TEST, M_ARENA_MAX): Remove.
2016-10-26  Document the M_ARENA_* mallopt parameters  (Siddhesh Poyarekar; 1 file changed, -1/+0)
The M_ARENA_* mallopt parameters are in wide use in production to control the number of arenas that a long-lived process creates, and hence there is no point in stating that this interface is non-public. Document this interface and remove the obsolete comment.

	* manual/memory.texi (M_ARENA_TEST): Add documentation.
	(M_ARENA_MAX): Likewise.
	* malloc/malloc.c: Remove obsolete comment.
2016-09-21  malloc: Manual part of conversion to __libc_lock  (Florian Weimer; 1 file changed, -1/+1)
This removes the old mutex_t-related definitions from malloc-machine.h, too.
2016-09-06  malloc: Automated part of conversion to __libc_lock  (Florian Weimer; 1 file changed, -20/+20)
2016-08-03  elf: dl-minimal malloc needs to respect fundamental alignment  (Florian Weimer; 1 file changed, -63/+0)
The dynamic linker currently uses __libc_memalign for TLS-related allocations. The goal is to switch to malloc instead. If the minimal malloc follows the ABI fundamental alignment, we can assume that malloc provides this alignment, and thus skip explicit alignment in a few cases as an optimization. It was requested on libc-alpha that MALLOC_ALIGNMENT should be used, although this results in wasted space if MALLOC_ALIGNMENT is larger than the fundamental alignment. (The dynamic linker cannot assume that the non-minimal malloc will provide an alignment of MALLOC_ALIGNMENT; the ABI provides _Alignof (max_align_t) only.)
2016-06-20  Revert __malloc_initialize_hook symbol poisoning  (Florian Weimer; 1 file changed, -3/+3)
It turns out the Emacs-internal malloc implementation uses __malloc_* symbols. If glibc poisons them in <stdc-predef.h>, Emacs will no longer compile.
2016-06-11  malloc_usable_size: Use correct size for dumped fake mapped chunks  (Florian Weimer; 1 file changed, -1/+6)
The adjustment for the size computation in commit 1e8a8875d69e36d2890b223ffe8853a8ff0c9512 is needed in malloc_usable_size, too.
2016-06-10  malloc: Remove __malloc_initialize_hook from the API [BZ #19564]  (Florian Weimer; 1 file changed, -1/+15)
__malloc_initialize_hook is interposed by application code, so the usual approach to define a compatibility symbol does not work. This commit adds a new mechanism based on #pragma GCC poison in <stdc-predef.h>.
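The mechanism itself is a one-liner: once a translation unit has seen the pragma, any later mention of the identifier is a hard compile-time error. Illustrative usage below.

    #pragma GCC poison __malloc_initialize_hook

    /* After the pragma, declarations like the following no longer compile:
       void (*__malloc_initialize_hook) (void) = my_init;              */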
2016-06-08  malloc: Correct size computation in realloc for dumped fake mmapped chunks  (Florian Weimer; 1 file changed, -4/+8)
For regular mmapped chunks there are two size fields (hence a reduction by 2 * SIZE_SZ bytes), but for fake chunks, we only have one size field, so we need to subtract SIZE_SZ bytes. This was initially reported as Emacs bug 23726.
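The arithmetic described above, written out as a small sketch: mmapped_usable_size is a hypothetical helper, while SIZE_SZ, chunksize and the dumped-chunk test DUMPED_MAIN_ARENA_CHUNK are the malloc.c internals the description refers to.

    static size_t
    mmapped_usable_size (mchunkptr p)
    {
      if (DUMPED_MAIN_ARENA_CHUNK (p))
        return chunksize (p) - SIZE_SZ;       /* fake chunk: one size field  */
      else
        return chunksize (p) - 2 * SIZE_SZ;   /* real mmapped chunk: two     */
    }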
2016-05-24  malloc: Correct malloc alignment on 32-bit architectures [BZ #6527]  (Florian Weimer; 1 file changed, -14/+2)
After the heap rewriting added in commit 4cf6c72fd2a482e7499c29162349810029632c3f (malloc: Rewrite dumped heap for compatibility in __malloc_set_state), we can change malloc alignment for new allocations because the alignment of old allocations no longer matters.

We need to increase the malloc state version number, so that binaries containing dumped heaps of the new layout will not try to run on previous versions of glibc, resulting in obscure crashes.

This commit addresses a failure of tst-malloc-thread-fail on the affected architectures (32-bit ppc and mips) because the test checks pointer alignment.