Age  Commit message  Author  Files  Lines
2025-02-25math: Add optimization barrier to ensure a1 + u.d is not reused [BZ #30664]John David Anglin1-0/+3
A number of fma tests started to fail on hppa when gcc was changed to use Ranger rather than EVRP. Eventually I found that the value of a1 + u.d in this block of code was being computed in FE_TOWARDZERO mode and not in the original rounding mode:

    if (TININESS_AFTER_ROUNDING)
      {
        w.d = a1 + u.d;
        if (w.ieee.exponent == 109)
          return w.d * 0x1p-108;
      }

This caused the exponent value to be wrong and the wrong return path to be used. Here we add an optimization barrier after the rounding mode is reset to ensure that the previous value of a1 + u.d is not reused. Signed-off-by: John David Anglin <dave.anglin@bell.net>
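The barrier technique described above can be illustrated with a small sketch. glibc's real macro lives in its internal math_private.h; the names `opt_barrier` and `barrier_sum` below are illustrative, and the only mechanism assumed is the standard GCC empty-asm idiom:

```c
#include <assert.h>

/* Sketch of an optimization barrier in the style of glibc's
   math_opt_barrier: the empty asm with a "+m" constraint tells the
   compiler the value may have been read and modified, so a result
   computed before a rounding-mode change cannot be reused after it.
   Names here are illustrative, not glibc's actual internals.  */
#define opt_barrier(x) \
  ({ __typeof (x) __x = (x); __asm__ ("" : "+m" (__x)); __x; })

/* Force a1 through the barrier before the addition, so the sum is
   evaluated in the rounding mode current at this point rather than a
   value the compiler computed earlier under a different mode.  */
double
barrier_sum (double a1, double u)
{
  return opt_barrier (a1) + u;
}
```

The empty asm costs no instructions; it only constrains what the optimizer may assume about the value's lifetime.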
2025-02-25RISC-V: Fix IFUNC resolver cannot access gp pointerYangyu Chen1-6/+11
In some cases, an IFUNC resolver may need to access the gp pointer to access global variables. Such an object may have l_relocated == 0 at this time, in which case the IFUNC resolver will fail to access a global variable and cause a SIGSEGV. This patch fixes the issue by relaxing the check of l_relocated in elf_machine_runtime_setup, but adds a check for the SHARED case to avoid using this code in statically linked executables. Such objects have already set up the gp pointer in the load_gp function, and l->l_scope will be NULL for a PIE object, so using this code to set up the gp pointer again for static-pie would cause a SIGSEGV in glibc, as in the original bug BZ #31317. I have also reproduced and checked BZ #31317 using the mold commit bed5b1731b ("illumos: Treat absolute symbols specially"); this patch fixes the issue. Also, we previously used the wrong gp pointer, because ref->st_value is not the relocated address but just the offset from the base address of the ELF object. An edge case can occur if we reference the gp pointer in an IFUNC resolver in a PIE object, although it will not happen in compiler-generated code since -pie disables relaxation to gp. In this case, the gp would be initialized incorrectly since ref->st_value is not the address after relocation. This patch fixes the issue by adding l->l_addr to ref->st_value to get the relocated address for the gp pointer. We don't use the SYMBOL_ADDRESS macro here because __global_pointer$ is a special symbol of SHN_ABS type, but the load_gp function addresses it PC-relatively using lla. Closes: BZ #32269 Fixes: 96d1b9ac23 ("RISC-V: Fix the static-PIE non-relocated object check") Co-authored-by: Vivian Wang <dramforever@live.com> Signed-off-by: Yangyu Chen <cyy@cyyself.name>
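The address fix boils down to one computation. A sketch with simplified stand-ins for glibc's link_map and ELF symbol types (the struct names below are made up for illustration):

```c
#include <stdint.h>

typedef uintptr_t ElfW_Addr;

/* Simplified stand-ins for glibc's link_map and ElfW(Sym).  */
struct link_map_sketch { ElfW_Addr l_addr; };   /* object's load base */
struct sym_sketch { ElfW_Addr st_value; };      /* link-time offset */

/* The runtime address of the gp symbol is the load base plus the
   symbol's link-time value.  Using ref->st_value alone, as the old
   code did, is only correct for objects loaded at address zero.  */
ElfW_Addr
relocated_gp (const struct link_map_sketch *l, const struct sym_sketch *ref)
{
  return l->l_addr + ref->st_value;
}
```

For a non-PIE executable l_addr is 0 and the old and new computations agree, which is why the bug only surfaced with PIE objects.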
2025-02-24AArch64: Remove LP64 and ILP32 ifdefsWilco Dijkstra6-61/+16
Remove LP64 and ILP32 ifdefs. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-24AArch64: Simplify lrintWilco Dijkstra1-51/+0
Simplify lrint. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-24AArch64: Remove AARCH64_R macroWilco Dijkstra4-33/+16
Remove AArch64_R relocation macro. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-24AArch64: Cleanup pointer manglingWilco Dijkstra4-58/+27
Cleanup pointer mangling. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-24AArch64: Remove PTR_REG definesWilco Dijkstra7-68/+41
Remove PTR_REG defines. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-24AArch64: Remove PTR_ARG/SIZE_ARG definesWilco Dijkstra35-106/+0
This series removes various ILP32 defines that are now no longer needed. Remove PTR_ARG/SIZE_ARG. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-24stdlib: Add single-threaded fast path to rand()Wilco Dijkstra1-0/+7
Improve performance of rand() and __random() by adding a single-threaded fast path. Bench-random-lock shows about 5x speedup on Neoverse V1. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
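The fast-path pattern can be sketched as follows. Here `single_threaded` stands in for glibc's internal SINGLE_THREAD_P check, and the generator shown is the simple TYPE_0 linear congruential step rather than glibc's default additive generator:

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int state = 1;
static bool single_threaded = true;   /* stand-in for SINGLE_THREAD_P */

/* TYPE_0 linear congruential step, as in stdlib/random_r.c.  */
static unsigned int
step (void)
{
  state = state * 1103515245 + 12345;
  return (state >> 16) & 0x7fff;
}

/* If only one thread exists, skip the lock entirely: no atomic
   read-modify-write, no cache-line contention.  The slow path keeps
   the lock for programs that have spawned threads.  */
unsigned int
my_random (void)
{
  if (single_threaded)
    return step ();
  pthread_mutex_lock (&lock);
  unsigned int r = step ();
  pthread_mutex_unlock (&lock);
  return r;
}
```

The speedup comes entirely from avoiding the lock's atomic operations on the common single-threaded path; the generator itself is unchanged.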
2025-02-24Increase the amount of data tested in stdio-common/tst-fwrite-pipe.cStefan Liebler1-2/+2
The number of iterations and the length of the string are not high enough on some systems, causing the test to return false positives. Testcase stdio-common/tst-fwrite-bz29459.c was fixed in the same way in 1b6f868625403d6b7683af840e87d2b18d5d7731 (Increase the amount of data tested in stdio-common/tst-fwrite-bz29459.c, 2025-02-14). Testcases stdio-common/tst-fwrite-bz29459.c and stdio-common/tst-fwrite-pipe.c were introduced in 596a61cf6b51ce2d58b8ca4e1d1f4fdfe1440dbc (libio: Start to return errors when flushing fwrite's buffer [BZ #29459], 2025-01-28).
2025-02-24posix: Rewrite cpuset testsFrédéric Bérat5-83/+249
Rewrite the cpuset macros tests to cover more use cases and port them to the new test infrastructure. The use cases include bad-actor access attempts before and after the CPU set structure. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@redhat.com>
2025-02-24support: Add support_next_to_fault_before support functionFrédéric Bérat2-10/+39
Refactor support_next_to_fault and add the support_next_to_fault_before method, which returns a buffer with a protected page before it, so that buffer underflow accesses can be tested. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@redhat.com>
2025-02-23math: Fix `unknown type name '__float128'` for clang 3.4 to 3.8.1 (bug 32694)koraynilay1-2/+2
When compiling a program that includes <bits/floatn.h> using a clang version between 3.4 (included) and 3.8.1 (included), clang will fail with `unknown type name '__float128'; did you mean '__cfloat128'?`. This change fixes the clang prerequisite macro call in floatn.h to check for clang 3.9 instead of 3.4, since support for __float128 was actually enabled in 3.9 by:

    commit 50f29e06a1b6a38f0bba9360cbff72c82d46cdd4
    Author: Nemanja Ivanovic <nemanja.i.ibm@gmail.com>
    Date:   Wed Apr 13 09:49:45 2016 +0000

        Enable support for __float128 in Clang

This fixes bug 32694. Signed-off-by: koraynilay <koray.fra@gmail.com> Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
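The version gate itself is a simple comparison. A sketch of the predicate as a function (the name is made up; floatn.h expresses this with preprocessor tests on __clang_major__ and __clang_minor__):

```c
/* __float128 support landed in clang 3.9, so the prerequisite check
   must accept 3.9 and later rather than 3.4 and later.  This function
   is a sketch of the condition the preprocessor check encodes.  */
int
clang_supports_float128 (int major, int minor)
{
  return major > 3 || (major == 3 && minor >= 9);
}
```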
2025-02-21nptl: clear the whole rseq area before registrationMichael Jeanson2-6/+6
Due to the extensible nature of the rseq area we can't explicitly initialize fields that are not part of the ABI yet. It was agreed with upstream that all new fields will be documented as zero-initialized by userspace. Future kernels configured with CONFIG_DEBUG_RSEQ will validate the content of all fields during registration. Replace the explicit field initialization with a memset of the whole rseq area, which will cover fields as they are added to future kernels. Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Reviewed-by: Florian Weimer <fweimer@redhat.com>
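The initialization change can be sketched like this; the struct below is a simplified stand-in for the kernel's extensible rseq ABI, not the real definition:

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the extensible rseq area: the allocation
   may be larger than the fields the current ABI defines.  */
struct rseq_area_sketch
{
  uint32_t cpu_id_start;
  uint32_t cpu_id;
  uint64_t rseq_cs;
  uint32_t flags;
  unsigned char future_fields[32];  /* not part of the ABI yet */
};

/* Zero the whole area instead of assigning known fields one by one:
   fields appended by future kernels are then zero-initialized too,
   which is what CONFIG_DEBUG_RSEQ validation expects.  */
void
prepare_rseq (struct rseq_area_sketch *r)
{
  memset (r, 0, sizeof *r);
}
```

The point is robustness against ABI growth: per-field assignments silently miss fields the build does not know about, while the memset covers the entire allocation.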
2025-02-21aarch64: Add GCS test with signal handlerYury Khrustalev2-0/+106
Test that when we return from a function that enabled GCS at runtime we get SIGSEGV. Also test that ucontext contains GCS block with the GCS pointer. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-21aarch64: Add GCS tests for dlopenYury Khrustalev8-1/+101
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-21aarch64: Add GCS tests for transitive dependenciesYury Khrustalev11-16/+195
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-21aarch64: Add tests for Guarded Control StackYury Khrustalev15-1/+186
These tests validate that GCS tunable works as expected depending on the GCS markings in the test binaries. Tests validate both static and dynamically linked binaries. These new tests are AArch64 specific. Moreover, they are included only if linker supports the "-z gcs=<value>" option. If built, these tests will run on systems with and without HWCAP_GCS. In the latter case the tests will be reported as UNSUPPORTED. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-21aarch64: Add configure checks for GCS supportYury Khrustalev2-0/+116
- Add check that linker supports -z gcs=... - Add checks that main and test compiler support -mbranch-protection=gcs Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-20manual: Mark setlogmask as AS-unsafe and AC-unsafe.Carlos O'Donell1-1/+1
This fixes the check-safety.sh failure with commit ad9c4c536115ba38be3e63592a632709ec8209b4, and correctly marks the function AS-unsafe and AC-unsafe due to the use of the non-recursive lock. Tested on x86_64 without regressions. Reviewed-by: Frédéric Bérat <fberat@redhat.com>
2025-02-20AArch64: Add SVE memsetWilco Dijkstra4-0/+129
Add SVE memset based on the generic memset, with a predicated store for sizes < 16. Unaligned memsets of 128-1024 bytes are improved by ~20% on average by using aligned stores for the last 64 bytes. Performance of the random memset benchmark improves by ~2% on Neoverse V1. Reviewed-by: Yury Khrustalev <yury.khrustalev@arm.com>
2025-02-20x86 (__HAVE_FLOAT128): Defined to 0 for Intel SYCL compiler [BZ #32723]H.J. Lu1-2/+6
The Intel compiler always defines __INTEL_LLVM_COMPILER. When SYCL is enabled by -fsycl, it also defines SYCL_LANGUAGE_VERSION. Since the Intel SYCL compiler doesn't support _Float128 (https://github.com/intel/llvm/issues/16903), define __HAVE_FLOAT128 to 0 for the Intel SYCL compiler. This fixes BZ #32723. Signed-off-by: H.J. Lu <hjl.tools@gmail.com> Reviewed-by: Sam James <sam@gentoo.org>
2025-02-19manual: Document setlogmask as MT-safe.Carlos O'Donell1-4/+1
setlogmask(3) was made MT-safe in glibc-2.33 with the fix for bug 26100. Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-02-17math: Consolidate acosf and asinf internal tablesAdhemerval Zanella5-32/+85
The libm size improvement built with gcc-14, "--enable-stack-protector=strong --enable-bind-now=yes --enable-fortify-source=2":

Before:
   text    data     bss     dec     hex filename
 582292     844      12  583148   8e5ec aarch64-linux-gnu/math/libm.so
 975133    1076      12  976221   ee55d x86_64-linux-gnu/math/libm.so
1203586    5608     368 1209562  1274da powerpc64le-linux-gnu/math/libm.so

After:
   text    data     bss     dec     hex filename
 581972     844      12  582828   8e4ac aarch64-linux-gnu/math/libm.so
 974941    1076      12  976029   ee49d x86_64-linux-gnu/math/libm.so
1203394    5608     368 1209370  12741a powerpc64le-linux-gnu/math/libm.so

Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
2025-02-17math: Consolidate acospif and asinpif internal tablesAdhemerval Zanella5-104/+120
The libm size improvement built with gcc-14, "--enable-stack-protector=strong --enable-bind-now=yes --enable-fortify-source=2":

Before:
   text    data     bss     dec     hex filename
 583444     844      12  584300   8ea6c aarch64-linux-gnu/math/libm.so
 976349    1076      12  977437   eea1d x86_64-linux-gnu/math/libm.so
1204738    5608     368 1210714  12795a powerpc64le-linux-gnu/math/libm.so

After:
   text    data     bss     dec     hex filename
 582292     844      12  583148   8e5ec aarch64-linux-gnu/math/libm.so
 975133    1076      12  976221   ee55d x86_64-linux-gnu/math/libm.so
1203586    5608     368 1209562  1274da powerpc64le-linux-gnu/math/libm.so

Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
2025-02-17math: Consolidate cospif and sinpif internal tablesAdhemerval Zanella5-115/+124
The libm size improvement built with gcc-14, "--enable-stack-protector=strong --enable-bind-now=yes --enable-fortify-source=2":

Before:
   text    data     bss     dec     hex filename
 584500     844      12  585356   8ee8c aarch64-linux-gnu/math/libm.so
 977341    1076      12  978429   eedfd x86_64-linux-gnu/math/libm.so
1205762    5608     368 1211738  127d5a powerpc64le-linux-gnu/math/libm.so

After:
   text    data     bss     dec     hex filename
 583444     844      12  584300   8ea6c aarch64-linux-gnu/math/libm.so
 976349    1076      12  977437   eea1d x86_64-linux-gnu/math/libm.so
1204738    5608     368 1210714  12795a powerpc64le-linux-gnu/math/libm.so

Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
2025-02-16htl: don't export __pthread_default_rwlockattr anymore.gfleury3-3/+0
Since now all symbols that use it are in libc. Message-ID: <20250216145434.7089-11-gfleury@disroot.org>
2025-02-16htl: move pthread_rwlock_init into libc.gfleury8-9/+15
Signed-off-by: gfleury <gfleury@disroot.org> Message-ID: <20250216145434.7089-10-gfleury@disroot.org>
2025-02-16htl: move pthread_rwlock_destroy into libc.gfleury8-9/+17
Signed-off-by: gfleury <gfleury@disroot.org> Message-ID: <20250216145434.7089-9-gfleury@disroot.org>
2025-02-16htl: move pthread_rwlock_{rdlock, timedrdlock, timedwrlock, wrlock, clockrdlock, clockwrlock} into libc.gfleury14-40/+95
Signed-off-by: gfleury <gfleury@disroot.org> Message-ID: <20250216145434.7089-8-gfleury@disroot.org>
2025-02-16htl: move pthread_rwlock_unlock into libc.gfleury10-10/+16
Signed-off-by: gfleury <gfleury@disroot.org> Message-ID: <20250216145434.7089-7-gfleury@disroot.org>
2025-02-16htl: move pthread_rwlock_tryrdlock, pthread_rwlock_trywrlock into libc.gfleury9-15/+32
Signed-off-by: gfleury <gfleury@disroot.org> Message-ID: <20250216145434.7089-6-gfleury@disroot.org>
2025-02-16htl: move pthread_rwlockattr_getpshared, pthread_rwlockattr_setpshared into libc.gfleury9-11/+36
Signed-off-by: gfleury <gfleury@disroot.org> Message-ID: <20250216145434.7089-5-gfleury@disroot.org>
2025-02-16htl: move pthread_rwlockattr_destroy into libc.gfleury8-5/+18
Signed-off-by: gfleury <gfleury@disroot.org> Message-ID: <20250216145434.7089-4-gfleury@disroot.org>
2025-02-16htl: move pthread_rwlockattr_init into libc.gfleury8-5/+18
Signed-off-by: gfleury <gfleury@disroot.org> Message-ID: <20250216145434.7089-3-gfleury@disroot.org>
2025-02-16htl: move __pthread_default_rwlockattr into libc.gfleury4-1/+4
Signed-off-by: gfleury <gfleury@disroot.org> Message-ID: <20250216145434.7089-2-gfleury@disroot.org>
2025-02-15Fix tst-aarch64-pkey to handle ENOSPC as not supportedAurelien Jarno1-0/+4
The syscall pkey_alloc can return ENOSPC to indicate either that all keys are in use or that the system runs in a mode in which memory protection keys are disabled. In such a case the test should not fail but just return unsupported. This matches the behaviour of the generic tst-pkey. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> Reviewed-by: Florian Weimer <fweimer@redhat.com>
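The test's error handling amounts to mapping errno values from a failed pkey_alloc to a result code. A sketch (the helper name is made up; exit code 77 follows glibc's test-harness convention for unsupported tests):

```c
#include <errno.h>

enum { RESULT_FAIL = 1, EXIT_UNSUPPORTED = 77 };

/* pkey_alloc failures that mean "protection keys unusable on this
   system" should report UNSUPPORTED rather than FAIL: ENOSYS means no
   kernel support, and ENOSPC now also covers "all keys in use, or
   pkeys disabled system-wide".  Any other errno is a real failure.  */
int
classify_pkey_alloc_failure (int err)
{
  switch (err)
    {
    case ENOSYS:
    case ENOSPC:
      return EXIT_UNSUPPORTED;
    default:
      return RESULT_FAIL;
    }
}
```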
2025-02-14Increase the amount of data tested in stdio-common/tst-fwrite-bz29459.cTulio Magno Quites Machado Filho1-2/+2
The number of iterations and the length of the string are not high enough on some systems causing the test to return false-positives. Fixes: 596a61cf6b (libio: Start to return errors when flushing fwrite's buffer [BZ #29459], 2025-01-28) Reported-by: Florian Weimer <fweimer@redhat.com>
2025-02-13elf: Keep using minimal malloc after early DTV resize (bug 32412)Florian Weimer4-0/+117
If an auditor loads many TLS-using modules during startup, it is possible to trigger DTV resizing. Previously, the DTV was marked as allocated by the main malloc afterwards, even if the minimal malloc was still in use. With this change, _dl_resize_dtv marks the resized DTV as allocated with the minimal malloc. The new test reuses TLS-using modules from other auditing tests. Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-13libio: Initialize _total_written for all kinds of streamsTulio Magno Quites Machado Filho2-1/+1
Move the initialization code to a general place instead of keeping it specific to file-backed streams. Fixes: 596a61cf6b (libio: Start to return errors when flushing fwrite's buffer [BZ #29459], 2025-01-28) Reported-by: Florian Weimer <fweimer@redhat.com> Reviewed-by: Arjun Shankar <arjun@redhat.com>
2025-02-13malloc: Add size check when moving fastbin->tcacheBen Kallus1-0/+3
By overwriting a forward link in a fastbin chunk that is subsequently moved into the tcache, it's possible to get malloc to return an arbitrary address [0]. When a chunk is fetched from a fastbin, its size is checked against the expected chunk size for that fastbin (see malloc.c:3991). This patch adds a similar check for chunks being moved from a fastbin to tcache, which renders obsolete the exploitation technique described above. Now updated to use __glibc_unlikely instead of __builtin_expect, as requested. [0]: https://github.com/shellphish/how2heap/blob/master/glibc_2.39/fastbin_reverse_into_tcache.c Signed-off-by: Ben Kallus <benjamin.p.kallus.gr@dartmouth.edu> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
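The added check mirrors the existing fastbin sanity test. A simplified sketch (fastbin_index follows malloc.c's definition; the surrounding logic is reduced for illustration and is not the patch's code):

```c
#include <stdbool.h>
#include <stddef.h>

#define SIZE_SZ (sizeof (size_t))

/* Map a chunk size to its fastbin index, as malloc.c's
   fastbin_index() does: fastbin spacing is 16 bytes on 64-bit
   targets and 8 bytes on 32-bit targets.  */
static size_t
fastbin_index (size_t size)
{
  return (size >> (SIZE_SZ == 8 ? 4 : 3)) - 2;
}

/* When moving a chunk from fastbin IDX into the tcache, reject it
   unless its size field maps back to the same fastbin.  A forged
   forward pointer aimed at an arbitrary address is very unlikely to
   find a plausible size field there, so the overwrite technique from
   fastbin_reverse_into_tcache no longer yields a usable pointer.  */
bool
fastbin_chunk_ok (size_t chunk_size, size_t idx)
{
  return fastbin_index (chunk_size) == idx;
}
```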
2025-02-13nss: Improve network number parsers (bz 32573, 32575)Tobias Stoeckmann6-14/+109
Make sure that numbers never overflow uint32_t in inet_network to properly validate octets encountered in IPv4 addresses. Avoid malloca in NSS networks file code because /etc/networks lines can be arbitrarily long. Instead of handcrafting the input for inet_network by adding ".0" octets if they are missing, just left shift the result. Also, do not accept invalid entries, but ignore the line instead. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> Signed-off-by: Tobias Stoeckmann <tobias@stoeckmann.org>
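A sketch of the overflow-safe parsing, reduced to the octet loop and the left-shift for missing octets (the function name and exact structure are illustrative, not the patch's code; notably, octal/hex octet forms accepted by the real inet_network are omitted here):

```c
#include <ctype.h>
#include <stdbool.h>
#include <stdint.h>

/* Parse a dotted network number of 1-4 decimal octets into a full
   32-bit value.  Each octet is range-checked as it accumulates, so
   the arithmetic can never overflow uint32_t, and missing trailing
   octets are handled by left-shifting the result instead of
   textually appending ".0" octets.  Invalid input is rejected.  */
bool
parse_network (const char *s, uint32_t *out)
{
  uint32_t res = 0;
  int octets = 0;
  for (;;)
    {
      if (!isdigit ((unsigned char) *s))
        return false;
      uint32_t val = 0;
      while (isdigit ((unsigned char) *s))
        {
          val = val * 10 + (uint32_t) (*s++ - '0');
          if (val > 255)
            return false;       /* octet out of range: reject line */
        }
      res = (res << 8) | val;
      octets++;
      if (*s == '\0')
        break;
      if (*s++ != '.' || octets == 4)
        return false;
    }
  *out = res << (8 * (4 - octets));
  return true;
}
```

Because the range check runs after every digit, the accumulator never exceeds 2559 before rejection, which is why no wider integer type is needed.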
2025-02-13nptl: Remove unused __g_refs comment.Carlos O'Donell1-5/+0
In the block comment for __pthread_cond_wait_common we mention __g_refs, but the implementation no longer uses group references.
2025-02-13advisories: Fix up GLIBC-SA-2025-0001Siddhesh Poyarekar1-0/+15
Add ref for the test case as well as backports. Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2025-02-13AArch64: Improve codegen for SVE powfYat Long Poon1-58/+59
Improve memory access with indexed/unpredicated instructions. Eliminate register spills. Speedup on Neoverse V1: 3%. Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2025-02-13AArch64: Improve codegen for SVE powYat Long Poon1-103/+142
Move constants to struct. Improve memory access with indexed/unpredicated instructions. Eliminate register spills. Speedup on Neoverse V1: 24%. Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2025-02-13AArch64: Improve codegen for SVE erfcfYat Long Poon1-6/+6
Reduce number of MOV/MOVPRFXs and use unpredicated FMUL. Replace MUL with LSL. Speedup on Neoverse V1: 6%. Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2025-02-13Aarch64: Improve codegen in SVE exp and users, and update expf_inlineLuna Lamb5-49/+59
Use unpredicated muls, and improve memory access. 7%, 3% and 1% improvement in the throughput microbenchmark on Neoverse V1, for exp, exp2 and cosh respectively. Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2025-02-13Aarch64: Improve codegen in SVE asinhLuna Lamb1-34/+77
Use unpredicated muls, use lanewise mla's and improve memory access. 1% regression in throughput microbenchmark on Neoverse V1. Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2025-02-13math: Improve layout of exp/exp10 dataWilco Dijkstra1-2/+4
GCC aligns global data to 16 bytes if their size is >= 16 bytes. This patch changes the exp_data struct slightly so that the fields are better aligned and without gaps. As a result on targets that support them, more load-pair instructions are used in exp. Exp10 is improved by moving invlog10_2N later so that neglog10_2hiN and neglog10_2loN can be loaded using load-pair. The exp benchmark improves 2.5%, "144bits" by 7.2%, "768bits" by 12.7% on Neoverse V2. Exp10 improves by 1.5%. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>