path: root/sysdeps/unix/sysv/linux
Age | Commit message | Author | Files | Lines
29 hours | Implement C23 pown [HEAD, master] | Joseph Myers | 30 | -0/+209
C23 adds various <math.h> function families originally defined in TS 18661-4. Add the pown functions, which are like pow but with an integer exponent. That exponent has type long long int in C23; it was intmax_t in TS 18661-4, and as with other interfaces changed after their initial appearance in the TS, I don't think we need to support the original version of the interface. The test inputs are based on the subset of test inputs for pow that use integer exponents that fit in long long.

As the first such template implementation that saves and restores the rounding mode internally (to avoid possible issues with directed rounding and intermediate overflows or underflows in the wrong rounding mode), support also needed to be added for using SET_RESTORE_ROUND* in such template function implementations. This required math-type-macros-float128.h to include <fenv_private.h>, so it can tell whether SET_RESTORE_ROUNDF128 is defined. In turn, the include order with <fenv_private.h> included before <math_private.h> broke loongarch builds, revealing that sysdeps/loongarch/math_private.h is really a fenv_private.h file (maybe implemented internally before the consistent split of those headers in 2018?) and needed to be renamed to fenv_private.h to avoid errors with duplicate macro definitions if <math_private.h> is included after <fenv_private.h>.

The underlying implementation uses __ieee754_pow functions (called more than once in some cases, where the exponent does not fit in the floating type). I expect a custom implementation for a given format, one that only handles integer exponents but handles larger exponents directly, could be faster and more accurate in some cases. I encourage searching for worst cases for ulps error for these implementations (necessarily non-exhaustively, given the size of the input space).

Tested for x86_64 and x86, and with build-many-glibcs.py.
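A minimal sketch of the semantics described above (not glibc's implementation; the helper name and the recursive splitting are illustrative only, and intermediate overflow is ignored):

    #include <math.h>

    /* pown-like semantics: x raised to an integer power.  glibc uses
       __ieee754_pow internally and saves/restores the rounding mode;
       this naive version only illustrates the exponent handling.  */
    double
    pown_sketch (double x, long long n)
    {
      /* Exponents with magnitude up to 2^53 convert to double exactly.  */
      if (n >= -(1LL << 53) && n <= (1LL << 53))
        return pow (x, (double) n);
      /* Larger exponents: split into two halves, since mathematically
         pow (x, a) * pow (x, b) == pow (x, a + b).  */
      long long half = n / 2;
      return pown_sketch (x, half) * pown_sketch (x, n - half);
    }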
2 days | linux: Fix integer overflow warnings when including <sys/mount.h> [BZ #32708] | Collin Funk | 1 | -1/+1
Compiling a file that includes <sys/mount.h> with gcc -Wshift-overflow=2 -Wsystem-headers causes a warning, since 1 << 31 is undefined behavior on platforms where int is 32 bits.
Signed-off-by: Collin Funk <collin.funk1@gmail.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
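The class of fix involved (the macro names below are placeholders, not the actual <sys/mount.h> constant):

    /* Shifting 1 into the sign bit of a 32-bit int is undefined behavior and
       triggers -Wshift-overflow=2; the well-defined spelling uses an unsigned
       constant.  MOUNT_FLAG_EXAMPLE is a placeholder name.  */
    #define MOUNT_FLAG_EXAMPLE_BAD  (1 << 31)   /* UB when int is 32 bits.  */
    #define MOUNT_FLAG_EXAMPLE      (1U << 31)  /* 0x80000000 as unsigned int.  */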
4 days | Add _FORTIFY_SOURCE support for inet_pton | Aaron Merey | 32 | -0/+32
Add function __inet_pton_chk which calls __chk_fail when the size of argument dst is too small. inet_pton is redirected to __inet_pton_chk or __inet_pton_warn when _FORTIFY_SOURCE is > 0. Also add tests to debug/tst-fortify.c, update the abilist with __inet_pton_chk and mention inet_pton fortification in maint.texi. Co-authored-by: Frédéric Bérat <fberat@redhat.com> Reviewed-by: Florian Weimer <fweimer@redhat.com>
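A small demonstration of what the new check catches (assuming a glibc and compiler with this fortification; the program aborts at run time instead of overflowing the buffer):

    #include <arpa/inet.h>

    int
    main (void)
    {
      char dst[4];       /* Too small: AF_INET6 needs 16 bytes.  */
      /* Built with -O2 -D_FORTIFY_SOURCE=2, this call is redirected to
         __inet_pton_chk, which calls __chk_fail and aborts.  */
      inet_pton (AF_INET6, "2001:db8::1", dst);
      return 0;
    }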
4 days | Update kernel version to 6.13 in header constant tests | Joseph Myers | 3 | -4/+4
There are no new constants covered by tst-mman-consts.py, tst-mount-consts.py or tst-sched-consts.py in Linux 6.13 that need any header changes, so update the kernel version in those tests. (tst-pidfd-consts.py will need updating separately along with adding new constants to glibc.) Tested with build-many-glibcs.py.
7 days | debug: Improve '%n' fortify detection (BZ 30932) | Adhemerval Zanella | 1 | -13/+8
The 7bb8045ec0 patch made the '%n' fortify check ignore EMFILE errors while trying to open /proc/self/maps; this introduced a security issue, because EMFILE can be attacker-controlled, making the check ineffective in some cases. The EMFILE failure is reinstated, but with a different error message.

Also, to reduce the false positives of the hardening in cases where no new files can be opened, _dl_readonly_area now uses _dl_find_object to check whether the memory area is within a writable ELF segment. The procfs method is still used as a fallback.

Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Arjun Shankar <arjun@redhat.com>
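For reference, the behavior this check implements (an illustration of the existing fortify semantics, not the new detection code): with fortification enabled, '%n' is only honored when the format string lives in read-only memory.

    #include <stdio.h>

    int
    main (void)
    {
      int n;
      char fmt[] = "%d%n\n";        /* Writable copy of the format string.  */
      printf ("%d%n\n", 1, &n);     /* Literal in .rodata: allowed.  */
      printf (fmt, 1, &n);          /* Under -O2 -D_FORTIFY_SOURCE=2 this aborts.  */
      return 0;
    }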
7 days | Add _FORTIFY_SOURCE support for inet_ntop | Frédéric Bérat | 32 | -0/+32
- Create the __inet_ntop_chk routine that verifies that the builtin size of the destination buffer is at least as big as the size given by the user.
- Redirect calls from inet_ntop to __inet_ntop_chk or __inet_ntop_warn.
- Update the abilist for this new routine.
- Update the manual to mention the new fortification.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
14 days | Implement C23 powr | Joseph Myers | 30 | -0/+209
C23 adds various <math.h> function families originally defined in TS 18661-4. Add the powr functions, which are like pow, but with simpler handling of special cases (based on exp(y*log(x)), so negative x and 0^0 are domain errors, powers of -0 are always +0 or +Inf never -0 or -Inf, and 1^+-Inf and Inf^0 are also domain errors, while NaN^0 and 1^NaN are NaN). The test inputs are taken from those for pow, with appropriate adjustments (including removing all tests that would be domain errors from those in auto-libm-test-in and adding some more such tests in libm-test-powr.inc).

The underlying implementation uses __ieee754_pow functions after dealing with all special cases that need to be handled differently. It might be a little faster (avoiding a wrapper and redundant checks for special cases) to have an underlying implementation built separately for both pow and powr with compile-time conditionals for special-case handling, but I expect the benefit of that would be limited given that both functions will end up needing to use the same logic for computing pow outside of special cases.

My understanding is that powr(negative, qNaN) should raise "invalid": that the rule on "invalid" for an argument outside the domain of the function takes precedence over a quiet NaN argument producing a quiet NaN result with no exceptions raised (for rootn it's explicit that the 0th root of qNaN raises "invalid"). I've raised this on the WG14 reflector to confirm the intent.

Tested for x86_64 and x86, and with build-many-glibcs.py.
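A sketch of the special-case handling described above, for double (not glibc's implementation; error reporting is simplified to setting errno and returning NaN):

    #include <errno.h>
    #include <math.h>

    double
    powr_sketch (double x, double y)
    {
      /* Unlike pow, NaN^0 and 1^NaN are NaN.  */
      if (isnan (x) || isnan (y))
        return x + y;
      /* Domain errors: negative x, 0^0, 1^+-Inf and Inf^0.  */
      if (x < 0.0 || (x == 0.0 && y == 0.0)
          || (x == 1.0 && isinf (y)) || (isinf (x) && y == 0.0))
        {
          errno = EDOM;
          return nan ("");
        }
      /* Everything else follows the exp (y * log (x)) definition.  */
      return exp (y * log (x));
    }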
2025-03-13 | elf: Canonicalize $ORIGIN in an explicit ld.so invocation [BZ 25263] | Adhemerval Zanella | 1 | -0/+23
When an executable is invoked directly, we calculate $ORIGIN by calling readlink on /proc/self/exe, which the Linux kernel resolves to the target of any symlinks. However, if an executable is run through ld.so, we cannot use /proc/self/exe and instead use the path given as an argument. This leads to a different calculation of $ORIGIN, which is most notable in that it causes ldd to behave differently (e.g., by not finding a library) from directly running the program.

To make the behavior consistent, take advantage of the fact that the kernel also resolves /proc/self/fd/ symlinks to the target of any symlinks in the same manner, so once we have opened the main executable in order to load it, replace the user-provided path with the result of calling readlink("/proc/self/fd/N"). (On non-Linux platforms this resolution does not happen and so no behavior change is needed.)

__fd_to_filename requires _fitoa_word and _itoa_word, which for 32-bit targets pulls in a lot of definitions from _itoa.c (due to _ITOA_NEEDED being defined). To simplify the build, move the required function to a new file, _fitoa_word.c.

Checked on x86_64-linux-gnu and i686-linux-gnu.
Co-authored-by: Geoffrey Thomas <geofft@ldpreload.com>
Reviewed-by: Geoffrey Thomas <geofft@ldpreload.com>
Tested-by: Geoffrey Thomas <geofft@ldpreload.com>
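An illustration of the technique as a standalone program (not the ld.so code): once a file has been opened, readlink on /proc/self/fd/<fd> yields the kernel-resolved path, with all symlinks resolved, just as /proc/self/exe does for the running executable.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main (int argc, char **argv)
    {
      if (argc < 2)
        return 1;
      int fd = open (argv[1], O_RDONLY);
      if (fd < 0)
        return 1;
      char link[64], real[4096];
      snprintf (link, sizeof link, "/proc/self/fd/%d", fd);
      ssize_t len = readlink (link, real, sizeof real - 1);
      if (len < 0)
        return 1;
      real[len] = '\0';
      /* Prints the fully resolved path the kernel associates with fd.  */
      printf ("%s -> %s\n", argv[1], real);
      return 0;
    }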
2025-03-12 | Update syscall lists for Linux 6.13 | Joseph Myers | 26 | -2/+106
Linux 6.13 adds four new syscalls. Update syscall-names.list and regenerate the arch-syscall.h headers with build-many-glibcs.py update-syscalls. Tested with build-many-glibcs.py.
2025-03-12 | Linux: Add new test misc/tst-sched_setattr-thread | Florian Weimer | 2 | -0/+117
The straightforward sched_getattr call serves as a test for bug 32781, too. Reviewed-by: Joseph Myers <josmyers@redhat.com>
2025-03-12 | Linux: Remove attribute access from sched_getattr (bug 32781) | Florian Weimer | 1 | -1/+1
The GCC attribute expects an element count, not bytes.
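A minimal illustration of the distinction (a generic example, not the sched_getattr declaration): the size operand of GCC's access attribute names a parameter holding the number of elements of the pointed-to type, so a byte count is only correct when the element type is one byte wide.

    /* Correct: nelems counts int elements written through buf.  */
    void fill_ints (int *buf, unsigned int nelems)
         __attribute__ ((access (write_only, 1, 2)));

    /* Also correct, because the element type is 1 byte wide.  */
    void fill_bytes (char *buf, unsigned int nbytes)
         __attribute__ ((access (write_only, 1, 2)));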
2025-03-12 | Linux: Add the pthread_gettid_np function (bug 27880) | Florian Weimer | 32 | -0/+32
Current Bionic has this function, with enhanced error checking (the undefined case terminates the process). Reviewed-by: Joseph Myers <josmyers@redhat.com>
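Usage sketch, assuming the Bionic-compatible prototype pid_t pthread_gettid_np (pthread_t), available under _GNU_SOURCE:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main (void)
    {
      /* Returns the kernel TID of the given (live) thread; for the calling
         thread it matches gettid ().  */
      pid_t tid = pthread_gettid_np (pthread_self ());
      printf ("tid = %d, gettid () = %d\n", (int) tid, (int) gettid ());
      return 0;
    }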
2025-03-07 | nptl: extend test coverage for sched_yield | Sergey Kolosov | 2 | -3/+38
We add sched_yield() API testing to the existing thread affinity test case because it allows us to test sched_yield() operation in the following scenarios:

* On the main thread.
* On multiple threads simultaneously.
* On every CPU the system reports, simultaneously.

This ensures we exercise sched_yield() in as many scenarios as we would exercise calls to the affinity functions. Additionally, the test is improved by adding a semaphore to coordinate all the running threads, so that an early-starting thread won't consume CPU resources that could be used to start the other threads.

Co-authored-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2025-03-07 | Implement C23 rsqrt | Joseph Myers | 30 | -0/+209
C23 adds various <math.h> function families originally defined in TS 18661-4. Add the rsqrt functions (1/sqrt(x)). The test inputs are taken from those for sqrt. Tested for x86_64 and x86, and with build-many-glibcs.py.
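The mathematical definition, for reference (a naive sketch; the glibc implementation handles rounding and special cases more carefully than composing the two operations):

    #include <math.h>

    /* rsqrt (x) computes 1/sqrt(x).  */
    double
    rsqrt_sketch (double x)
    {
      return 1.0 / sqrt (x);
    }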
2025-03-05 | sysdeps: linux: Add BTRFS_SUPER_MAGIC to pathconf | Ronan Pigott | 2 | -0/+4
btrfs has a maximum link count of 65535. Include this value in pathconf to give the real maximum link count for this filesystem.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
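How the change is observable from user code (the path below is a placeholder for a file on a btrfs filesystem):

    #include <stdio.h>
    #include <unistd.h>

    int
    main (void)
    {
      /* After this change, returns 65535 for files on btrfs.  */
      long link_max = pathconf ("/mnt/btrfs/some-file", _PC_LINK_MAX);
      printf ("LINK_MAX: %ld\n", link_max);
      return 0;
    }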
2025-03-05 | linux: Prefix AT_HWCAP with 0x on LD_SHOW_AUXV | Adhemerval Zanella | 1 | -4/+4
Suggested-by: Stefan Liebler <stli@linux.ibm.com> Reviewed-by: Stefan Liebler <stli@linux.ibm.com>
2025-03-05 | Remove dl-procinfo.h | Adhemerval Zanella | 2 | -6/+0
powerpc was the only architecture with arch-specific hooks for LD_SHOW_AUXV, and with the information moved to ld diagnostics there is no need to keep the _dl_procinfo hook. Checked with a build for all affected ABIs. Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2025-03-05 | powerpc: Remove unused dl-procinfo.h | Adhemerval Zanella | 2 | -0/+2
The _dl_string_platform is moved to hwcapinfo.h, since it is only used by hwcapinfo.c and the test-get_hwcap internal test. Checked on powerpc64le-linux-gnu. Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2025-03-05 | powerpc: Move AT_HWCAP descriptions to ld diagnostics | Adhemerval Zanella | 5 | -118/+179
The ld.so diagnostics output already prints AT_HWCAP values, but only in hexadecimal. To avoid duplicating the strings, consolidate the hwcap_names from cpu-features.h into a new file, dl-hwcap-info.h (this also improves the hwcap string descriptions with more values). For future AT_HWCAP3/AT_HWCAP4 extensions, it is just a matter of adding them to dl-hwcap-info.c so that both the ld diagnostics and tunable filtering will parse the new values. Checked on powerpc64le-linux-gnu. Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2025-02-28 | Remove unused dl-procinfo.h | Wilco Dijkstra | 9 | -299/+4
Remove unused _dl_hwcap_string defines. As a result, many dl-procinfo.h headers can be removed. This also removes target-specific _dl_procinfo implementations which only printed HWCAP strings using _dl_hwcap_string. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-24 | AArch64: Remove LP64 and ILP32 ifdefs | Wilco Dijkstra | 3 | -29/+7
Remove LP64 and ILP32 ifdefs. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-24 | AArch64: Cleanup pointer mangling | Wilco Dijkstra | 1 | -24/+11
Cleanup pointer mangling. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-24 | AArch64: Remove PTR_REG defines | Wilco Dijkstra | 1 | -1/+1
Remove PTR_REG defines. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-24 | AArch64: Remove PTR_ARG/SIZE_ARG defines | Wilco Dijkstra | 5 | -13/+0
This series removes various ILP32 defines that are now no longer needed. Remove PTR_ARG/SIZE_ARG. Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-21 | nptl: clear the whole rseq area before registration | Michael Jeanson | 1 | -6/+5
Due to the extensible nature of the rseq area we can't explicitly initialize fields that are not part of the ABI yet. It was agreed with upstream that all new fields will be documented as zero-initialized by userspace. Future kernels configured with CONFIG_DEBUG_RSEQ will validate the content of all fields during registration. Replace the explicit field initialization with a memset of the whole rseq area, which will cover fields as they are added to future kernels. Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-02-21 | aarch64: Add GCS test with signal handler | Yury Khrustalev | 2 | -0/+106
Test that when we return from a function that enabled GCS at runtime we get SIGSEGV. Also test that the ucontext contains a GCS block with the GCS pointer. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-21 | aarch64: Add GCS tests for dlopen | Yury Khrustalev | 7 | -0/+100
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-21 | aarch64: Add GCS tests for transitive dependencies | Yury Khrustalev | 11 | -16/+195
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-21 | aarch64: Add tests for Guarded Control Stack | Yury Khrustalev | 15 | -1/+186
These tests validate that the GCS tunable works as expected depending on the GCS markings in the test binaries. The tests cover both statically and dynamically linked binaries. These new tests are AArch64 specific. Moreover, they are included only if the linker supports the "-z gcs=<value>" option. If built, these tests will run on systems with and without HWCAP_GCS. In the latter case the tests will be reported as UNSUPPORTED. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-15 | Fix tst-aarch64-pkey to handle ENOSPC as not supported | Aurelien Jarno | 1 | -0/+4
The pkey_alloc syscall can return ENOSPC to indicate either that all keys are in use or that the system runs in a mode in which memory protection keys are disabled. In such a case the test should not fail and should just return unsupported. This matches the behaviour of the generic tst-pkey. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> Reviewed-by: Florian Weimer <fweimer@redhat.com>
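The handling pattern described, sketched as a helper (simplified; the actual test uses the glibc test framework and FAIL_UNSUPPORTED):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <sys/mman.h>

    /* Returns 1 if memory protection keys are usable, 0 if they should be
       treated as unsupported, -1 on an unexpected error.  */
    static int
    pkey_support (void)
    {
      int key = pkey_alloc (0, 0);
      if (key >= 0)
        {
          pkey_free (key);
          return 1;
        }
      /* ENOSYS: no kernel support.  ENOSPC: all keys in use, or keys are
         disabled system-wide, so report "unsupported" rather than failing.  */
      if (errno == ENOSYS || errno == ENOSPC)
        return 0;
      return -1;
    }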
2025-01-30 | ld.so: Decorate BSS mappings | Petr Malat | 2 | -12/+41
Decorate BSS mappings with [anon: glibc: .bss <file>], for example [anon: glibc: .bss /lib/libc.so.6]. The string ".bss" is already used by bionic, so use the same, but add the filename as well. If the name would be longer than what the kernel allows, drop the directory part of the path. Refactor the glibc.mem.decorate_maps check into a separate function and use it to avoid assembling a name that would not be used later. Signed-off-by: Petr Malat <oss@malat.biz> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-01-30 | nptl: Add support for setup guard pages with MADV_GUARD_INSTALL | Adhemerval Zanella | 1 | -0/+2
Linux 6.13 (662df3e5c3766) added a lightweight way to define guard areas through the madvise syscall. Instead of protecting the guard region with PROT_NONE through mprotect, userland can madvise the same area with a special flag, and the kernel ensures that accessing the area will trigger a SIGSEGV (as for a PROT_NONE mapping). The madvise way has the advantage of less kernel memory consumption for the process page table (one less VMA per guard area), and slightly less contention in the kernel (also due to the fewer VMA areas being tracked).

pthread_create allocates a new thread stack in two ways: if a guard area is set (the default) it allocates the memory range required using PROT_NONE and then mprotects the usable stack area. Otherwise, if a guard page is not set, it allocates the region with the required flags. For the MADV_GUARD_INSTALL support, the stack area region is allocated with the required flags and then the guard region is installed. If the kernel does not support it, the usual way is used instead (and MADV_GUARD_INSTALL is disabled for future stack creations).

The stack allocation strategy is recorded on the pthread struct, and it is used in case the guard region needs to be resized. To avoid needing an extra field, the 'user_stack' member is repurposed and renamed to 'stack_mode'.

This patch also adds a proper test for the pthread guard.

I checked on x86_64, aarch64, powerpc64le, and hppa with kernel 6.13.0-rc7.
Reviewed-by: DJ Delorie <dj@redhat.com>
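A simplified sketch of the guard-installation fallback logic (not the actual allocatestack.c code; the MADV_GUARD_INSTALL value is defined here only so the sketch builds against pre-6.13 headers, and should be checked against your kernel's uapi headers):

    #include <sys/mman.h>

    #ifndef MADV_GUARD_INSTALL
    # define MADV_GUARD_INSTALL 102   /* Assumed asm-generic value.  */
    #endif

    /* Try the lightweight madvise-based guard first; fall back to the
       traditional PROT_NONE mapping (one extra VMA) if the kernel lacks
       MADV_GUARD_INSTALL.  */
    static int
    install_stack_guard (void *guard, size_t size)
    {
      if (madvise (guard, size, MADV_GUARD_INSTALL) == 0)
        return 0;
      return mprotect (guard, size, PROT_NONE);
    }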
2025-01-21 | aarch64: Add HWCAP_GCS | Yury Khrustalev | 2 | -4/+1
Use upper 32 bits of HWCAP. Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
2025-01-20 | Linux: Do not check unused bytes after sched_getattr in tst-sched_setattr | Florian Weimer | 1 | -11/+0
Linux 6.13 was released with a change that overwrites those bytes. This means that the check_unused subtest fails. Update the manual accordingly. Tested-by: Xi Ruoyao <xry111@xry111.site> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-01-20 | aarch64: Use __alloc_gcs in makecontext | Szabolcs Nagy | 1 | -30/+8
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-01-20 | aarch64: Process gnu properties in static exe | Szabolcs Nagy | 1 | -0/+14
Unlike for BTI, the kernel does not process GCS properties so update GL(dl_aarch64_gcs) before the GCS status is set. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-01-20 | aarch64: Enable GCS in static linked exe | Szabolcs Nagy | 1 | -0/+48
Use the ARCH_SETUP_TLS hook to enable GCS in the static linked case. The system call must be inlined and then GCS is enabled on a top level stack frame that does not return and has no exception handlers above it. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-01-20 | aarch64: Add glibc.cpu.aarch64_gcs tunable | Szabolcs Nagy | 2 | -0/+45
This tunable controls Guarded Control Stack (GCS) for the process.

  0 = disabled: do not enable GCS
  1 = enforced: check markings and fail if any binary is not marked
  2 = optional: check markings but keep GCS off if a binary is unmarked
  3 = override: enable GCS, markings are ignored

By default it is 0, so GCS is disabled; value 1 will enable GCS. The status is stored into GL(dl_aarch64_gcs) early and only applied later, since enabling GCS is tricky: it must happen on a top level stack frame. GL is used instead of GLRO because it may need updates depending on loaded libraries that happen after read-only protection is applied; however, library-marking-based GCS setting is not yet implemented. Describe the new tunable in the manual.

Co-authored-by: Yury Khrustalev <yury.khrustalev@arm.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-01-20 | aarch64: Add GCS support for makecontext | Szabolcs Nagy | 2 | -2/+63
Changed the makecontext logic: previously the first setcontext jumped straight to the user callback function and the return address was set to __startcontext. This does not work when GCS is enabled, as the integrity of the return address is protected, so instead the context is set up such that setcontext jumps to __startcontext, which calls the user callback (passed in x20).

The map_shadow_stack syscall is used to allocate a suitably sized GCS (which includes some reserved area to account for altstack signal handlers and otherwise supports the maximum number of 16-byte-aligned stack frames on the given stack); however, the GCS is never freed, as the lifetime of ucontext and the related stack is user managed.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-01-20 | aarch64: Add GCS support for setcontext | Szabolcs Nagy | 4 | -9/+83
Userspace ucontext needs to store GCSPR, it does not have to be compatible with the kernel ucontext. For now we use the linux struct gcs_context layout but only use the gcspr field from it. Similar implementation to the longjmp code, supports switching GCS if the target GCS is capped, and unwinding a continuous GCS to a previous state. Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2025-01-20 | aarch64: Add GCS support to vfork | Szabolcs Nagy | 1 | -1/+6
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2025-01-16 | Linux: Add tests that check that TLS and rseq area are separate | Florian Weimer | 6 | -0/+213
The new test elf/tst-rseq-tls-range-4096-static reliably detected the extra TLS allocation problem (tcb_offset was dropped from the allocation size) on aarch64. It also failed with a crash in dlopen *before* the extra TLS changes, so TLS alignment with static dlopen was already broken. Reviewed-by: Michael Jeanson <mjeanson@efficios.com>
2025-01-16 | Linux: Fixes for getrandom fork handling | Florian Weimer | 1 | -3/+23
Careful updates of grnd_alloc.len are required to ensure that after fork, grnd_alloc.states does not contain entries that are also encountered by __getrandom_reset_state in TCBs. For the same reason, it is necessary to overwrite the TCB state pointer with NULL before updating grnd_alloc.states in __getrandom_vdso_release.

Before this change, different TCBs could share the same getrandom state after multi-threaded fork. This would be a critical security bug (predictable randomness) if not caught during development. The additional check in stdlib/tst-arc4random-thread makes it more likely that the test fails due to the bugs mentioned above.

Both __getrandom_reset_state and __getrandom_vdso_release could put reserved NULL pointers into the states array. This is also fixed with this commit. After these changes, no null pointers were observed in the states array during testing.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-01-14 | affinity-inheritance: Overallocate CPU sets | Stefan Liebler | 1 | -3/+4
Some kernels on S390 appear to return a CPU affinity mask based on configured processors rather than the ones online. Overallocate the CPU set to match that, but operate only on the ones online. Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org> Co-authored-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
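The overallocation pattern described, as a small helper (illustrative; the test code differs in detail):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Size the dynamically allocated CPU set by the number of configured
       processors, not the number currently online, so sched_getaffinity
       has room for the mask some s390 kernels report.  */
    static cpu_set_t *
    allocate_cpu_set (size_t *size)
    {
      long n = sysconf (_SC_NPROCESSORS_CONF);
      cpu_set_t *set = CPU_ALLOC (n);
      if (set != NULL)
        *size = CPU_ALLOC_SIZE (n);
      return set;
    }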
2025-01-13 | sh4: ensure FPSCR.PR==0 when executing FRCHG [BZ #27543] | mirabilos | 3 | -0/+10
If the bit is not 0, the operations FRCHG and FSCHG are undefined and cause a trap; qemu now checks for this as well, so we set it to 0 temporarily and restore the old value in getcontext afterwards (setcontext/swapcontext already do so).

From the discussion in the bug report, this can probably be optimised in one place, but none of the people involved are SH4 assembly experts, this patch is field-tested, and it's not a code path run often. The other question, what happens if a signal occurs while the bit is temporarily 0, is also still unsolved, but to fix that a kernel change is most likely needed; this patch trades a certain trap on many CPUs for a hard-to-get trap in a signal handler if a signal is delivered during the few instructions the PR bit is temporarily set to 0, so it's not a regression for most users.

See the BZ and https://bugs.launchpad.net/qemu/+bug/1796520 for related discussion, references and review comments.

Signed-off-by: mirabilos <tg@debian.org>
Reviewed-by: Oleg Endo <olegendo@gcc.gnu.org>
Tested-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-01-13 | aarch64: Use 64-bit variable to access the special registers | Adhemerval Zanella | 2 | -2/+2
clang issues:

  error: value size does not match register size specified by the constraint and modifier [-Werror,-Wasm-operand-widths]

while trying to use 32-bit variables with 'mrs' to get/set the fpsr, dczid_el0, and ctr registers.
2025-01-10 | Linux: Update internal copy of '<sys/rseq.h>' | Michael Jeanson | 1 | -0/+11
Sync the internal copy of '<sys/rseq.h>' with the latest Linux kernel 'include/uapi/linux/rseq.h'. Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-01-10 | nptl: Move the rseq area to the 'extra TLS' block | Michael Jeanson | 10 | -48/+205
Move the rseq area to the newly added 'extra TLS' block, this is the last step in adding support for the rseq extended ABI. The size of the rseq area is now dynamic and depends on the rseq features reported by the kernel through the elf auxiliary vector. This will allow applications to use rseq features past the 32 bytes of the original rseq ABI as they become available in future kernels. Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Reviewed-by: Florian Weimer <fweimer@redhat.com>
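How the per-thread rseq area is located after this change, from an application's point of view (a sketch; it assumes a compiler providing __builtin_thread_pointer on the target):

    #include <stdio.h>
    #include <sys/rseq.h>

    int
    main (void)
    {
      /* __rseq_offset and __rseq_size are public glibc symbols describing
         the registered rseq area relative to the thread pointer; the size
         is now dynamic, following the features reported by the kernel.  */
      struct rseq *rs
        = (struct rseq *) ((char *) __builtin_thread_pointer () + __rseq_offset);
      printf ("rseq area %p, size %u, cpu_id %u\n",
              (void *) rs, __rseq_size, rs->cpu_id);
      return 0;
    }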
2025-01-10 | nptl: Introduce <rseq-access.h> for RSEQ_* accessors | Michael Jeanson | 1 | -0/+8
In preparation to move the rseq area to the 'extra TLS' block, we need accessors based on the thread pointer and the rseq offset. The ONCE variant of the accessors ensures single-copy atomicity for loads and stores which is required for all fields once the registration is active. A separate header is required to allow including <atomic.h> which results in an include loop when added to <tcb-access.h>. Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-01-10 | nptl: add rtld_hidden_proto to __rseq_size and __rseq_offset | Michael Jeanson | 2 | -15/+28
This allows accessing the internal aliases of __rseq_size and __rseq_offset from ld.so without ifdefs and avoids dynamic symbol binding at run time for both variables. Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Reviewed-by: Florian Weimer <fweimer@redhat.com>