path: root/sysdeps/powerpc
Age         Commit message  (Author; files changed, lines -removed/+added)
2025-06-25  powerpc: Remove modf optimization  (Adhemerval Zanella; 4 files, -62/+4)

The generic implementation is slightly more optimized than the powerpc
one: it has a more optimized inf/nan check (using integer rather than
FP-unit checks, along with branch prediction hints) and removes one
branch by issuing trunc instead of a floor/ceil combination (which also
generates less code).  On power10 with gcc 14.2.1:

    reciprocal-throughput     master   patch    difference
    workload-0_1              1.1351   0.9067     20.12%
    workload-1_maxint         1.4230   0.9040     36.47%
    workload-maxint_maxfloat  1.5038   0.9076     39.65%
    workload-integral         1.1280   0.9111     19.23%

    latency                   master   patch    difference
    workload-0_1              1.1440   2.7117   -137.03%
    workload-1_maxint         4.0556   2.7070     33.25%
    workload-maxint_maxfloat  3.2122   2.7164     15.43%
    workload-integral         3.2381   2.7281     15.75%

Checked on powerpc64le-linux-gnu.

Reviewed-by: Sachin Monga <smonga@linux.ibm.com>
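For reference, a minimal C sketch of the trunc-based approach this and
the following modff commit describe; it is an illustration, not the
actual glibc code, and the function name is made up:

    #include <math.h>
    #include <stdint.h>
    #include <string.h>

    double
    modf_sketch (double x, double *iptr)
    {
      uint64_t ix;
      memcpy (&ix, &x, sizeof ix);
      /* Inf/nan check on the exponent bits alone: no FP-unit compare,
         and the branch is hinted as unlikely.  */
      if (__builtin_expect (((ix >> 52) & 0x7ff) == 0x7ff, 0))
        {
          *iptr = x;
          return isnan (x) ? x : copysign (0.0, x);
        }
      /* A single trunc replaces the floor/ceil pair.  */
      *iptr = trunc (x);
      return copysign (x - *iptr, x);
    }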
2025-06-25  powerpc: Remove modff optimization  (Adhemerval Zanella; 4 files, -57/+10)

The generic implementation is slightly more optimized than the powerpc
one: it has a more optimized inf/nan check (using integer rather than
FP-unit checks, along with branch prediction hints) and removes one
branch by issuing trunc instead of a floor/ceil combination (which also
generates less code).  On power10 with gcc 14.2.1:

    reciprocal-throughput     master   patch    difference
    workload-0_1              1.5210   1.3942      8.34%
    workload-1_maxint         2.0926   1.3940     33.38%
    workload-maxint_maxfloat  1.7851   1.3940     21.91%
    workload-integral         1.5216   1.3941      8.37%

    latency                   master   patch    difference
    workload-0_1              1.5928   2.6337    -65.35%
    workload-1_maxint         3.2929   2.6337     20.02%
    workload-maxint_maxfloat  1.9697   2.6341    -33.73%
    workload-integral         2.0597   2.6337    -27.87%

Checked on powerpc64le-linux-gnu.

Reviewed-by: Sachin Monga <smonga@linux.ibm.com>
2025-06-23  powerpc: use .machine power10 in POWER10 assembler sources  (Andreas Schwab; 5 files, -5/+5)
They were misattributed as POWER9 sources.
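For illustration, the directive in question at the top of a POWER10
assembler source (a generic sketch, not one of the five glibc files):

	.machine power10
	/* POWER10-only instructions (e.g. lxvp/stxvp, setbc) may follow;
	   with .machine power9 the assembler would reject them.  */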
2025-06-19  i386: Update ___tls_get_addr to preserve vector registers  (H.J. Lu; 1 file, -0/+5)

The compiler generates the following instruction sequence for dynamic
TLS access:

	leal	tls_var@tlsgd(,%ebx,1), %eax
	call	___tls_get_addr@PLT

The CALL instruction is transparent to the compiler, which assumes all
registers, except for EFLAGS, AX, CX, and DX, are unchanged after CALL.
But ___tls_get_addr is a normal function which doesn't preserve any
vector registers.

1. Rename the generic __tls_get_addr function to
   ___tls_get_addr_internal.
2. Change ___tls_get_addr to a wrapper function with implementations
   for FNSAVE, FXSAVE, XSAVE and XSAVEC to save and restore all vector
   registers.
3. dl-tlsdesc-dynamic.h has:

	_dl_tlsdesc_dynamic:
		/* Like all TLS resolvers, preserve call-clobbered
		   registers.  We need two scratch regs anyway.  */
		subl	$32, %esp
		cfi_adjust_cfa_offset (32)

   It is wrong to use

	movl	%ebx, -28(%esp)
	movl	%esp, %ebx
	cfi_def_cfa_register(%ebx)
	...
	mov	%ebx, %esp
	cfi_def_cfa_register(%esp)
	movl	-28(%esp), %ebx

   to preserve EBX on the stack.  Fix it with:

	movl	%ebx, 28(%esp)
	movl	%esp, %ebx
	cfi_def_cfa_register(%ebx)
	...
	mov	%ebx, %esp
	cfi_def_cfa_register(%esp)
	movl	28(%esp), %ebx

4. Update _dl_tlsdesc_dynamic to call ___tls_get_addr_internal
   directly.
5. Add have-test-mtls-traditional to compile tst-tls23-mod.c with the
   traditional TLS variant to verify the fix.
6. Define DL_RUNTIME_RESOLVE_REALIGN_STACK in sysdeps/x86/sysdep.h.

This fixes BZ #32996.

Co-Authored-By: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
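A minimal module that produces the sequence above when built with the
traditional TLS dialect (a hypothetical reproduction, not the actual
tst-tls23-mod.c):

    /* gcc -m32 -fPIC -shared -mtls-dialect=gnu tls-mod.c -o tls-mod.so
       The access below compiles to the leal/call ___tls_get_addr@PLT
       pair, and the compiler may keep vector values live across it.  */
    __thread int tls_var;

    int
    get_tls_var (void)
    {
      return tls_var;
    }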
2025-06-18  powerpc: Remove assembler workarounds  (Andreas Schwab; 4 files, -101/+34)
Now that we require at least binutils 2.39 the support for POWER9 and POWER10 instructions can be assumed.
2025-06-16ppc64le: Revert "powerpc: Optimized strcmp for power10" (CVE-2025-5702)Carlos O'Donell5-239/+1
This reverts commit 3367d8e180848030d1646f088759f02b8dfe0d6f Reason for revert: Power10 strcmp clobbers non-volatile vector registers (Bug 33056) Tested on ppc64le without regression.
2025-06-16ppc64le: Revert "powerpc : Add optimized memchr for POWER10" (Bug 33059)Carlos O'Donell5-369/+11
This reverts commit b9182c793caa05df5d697427c0538936e6396d4b Reason for revert: Power10 memchr clobbers v20 vector register (Bug 33059) This is not a security issue, unlike CVE-2025-5745 and CVE-2025-5702. Tested on ppc64le without regression.
2025-06-16ppc64le: Revert "powerpc: Fix performance issues of strcmp power10" ↵Carlos O'Donell1-95/+66
(CVE-2025-5702) This reverts commit 90bcc8721ef82b7378d2b080141228660e862d56 This change is in the chain of the final revert that fixes the CVE i.e. 3367d8e180848030d1646f088759f02b8dfe0d6f Reason for revert: Power10 strcmp clobbers non-volatile vector registers (Bug 33056) Tested on ppc64le with no regressions.
2025-06-16ppc64le: Revert "powerpc: Optimized strncmp for power10" (CVE-2025-5745)Carlos O'Donell5-304/+1
This reverts commit 23f0d81608d0ca6379894ef81670cf30af7fd081 Reason for revert: Power10 strncmp clobbers non-volatile vector registers (Bug 33060) Tested on ppc64le with no regressions.
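For background on these reverts: under the ELFv2 ABI, vector registers
v20-v31 are non-volatile, so any routine that uses them must save and
restore them.  A rough leaf-function sketch (assembler syntax and
offsets simplified, using the protected zone below the stack pointer):

	/* v20 (= vs52) is non-volatile; park it below r1 first.  */
	stxv	52, -16(1)
	...		/* code that may clobber v20 */
	lxv	52, -16(1)
	blr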
2025-06-02  math: Optimize float ilogb/llogb  (Adhemerval Zanella; 3 files, -0/+45)

It removes the wrapper by moving the error/EDOM handling to an
out-of-line implementation (__math_invalidf_i/__math_invalidf_li).
Also, __glibc_unlikely is used on the error paths, since it helps code
generation on recent gcc.

The code now builds to the following with gcc-14 on aarch64:

    0000000000000000 <__ilogbf>:
       0:	1e260000	fmov	w0, s0
       4:	d3577801	ubfx	x1, x0, #23, #8
       8:	340000e1	cbz	w1, 24 <__ilogbf+0x24>
       c:	5101fc20	sub	w0, w1, #0x7f
      10:	7103fc3f	cmp	w1, #0xff
      14:	54000040	b.eq	1c <__ilogbf+0x1c>	// b.none
      18:	d65f03c0	ret
      1c:	12b00000	mov	w0, #0x7fffffff		// #2147483647
      20:	14000000	b	0 <__math_invalidf_i>
      24:	53175800	lsl	w0, w0, #9
      28:	340000a0	cbz	w0, 3c <__ilogbf+0x3c>
      2c:	5ac01000	clz	w0, w0
      30:	12800fc1	mov	w1, #0xffffff81		// #-127
      34:	4b000020	sub	w0, w1, w0
      38:	d65f03c0	ret
      3c:	320107e0	mov	w0, #0x80000001		// #-2147483647
      40:	14000000	b	0 <__math_invalidf_i>

Some ABIs require additional adjustments:

  * i386 and m68k require using the template version, since both
    provide __ieee754_ilogb implementations.
  * loongarch uses a custom implementation as well.
  * powerpc64le also has a custom implementation for POWER9, which is
    also used for the float and float128 versions.  The generic
    e_ilogb.c implementation is moved on powerpc to keep the current
    code as-is.

Checked on aarch64-linux-gnu and x86_64-linux-gnu.

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2025-06-02  math: Optimize double ilogb/llogb  (Adhemerval Zanella; 3 files, -0/+45)

It removes the wrapper by moving the error/EDOM handling to an
out-of-line implementation (__math_invalid_i/__math_invalid_li).
Also, __glibc_unlikely is used on the error paths, since it helps code
generation on recent gcc.

The code now builds to the following with gcc-14 on aarch64:

    0000000000000000 <__ilogb>:
       0:	9e660000	fmov	x0, d0
       4:	d374f801	ubfx	x1, x0, #52, #11
       8:	340000e1	cbz	w1, 24 <__ilogb+0x24>
       c:	510ffc20	sub	w0, w1, #0x3ff
      10:	711ffc3f	cmp	w1, #0x7ff
      14:	54000040	b.eq	1c <__ilogb+0x1c>	// b.none
      18:	d65f03c0	ret
      1c:	12b00000	mov	w0, #0x7fffffff		// #2147483647
      20:	14000000	b	0 <__math_invalid_i>
      24:	d374cc00	lsl	x0, x0, #12
      28:	b40000a0	cbz	x0, 3c <__ilogb+0x3c>
      2c:	dac01000	clz	x0, x0
      30:	12807fc1	mov	w1, #0xfffffc01		// #-1023
      34:	4b000020	sub	w0, w1, w0
      38:	d65f03c0	ret
      3c:	320107e0	mov	w0, #0x80000001		// #-2147483647
      40:	14000000	b	0 <__math_invalid_i>

Some ABIs require additional adjustments:

  * i386 and m68k require using the template version, since both
    provide __ieee754_ilogb implementations.
  * loongarch uses a custom implementation as well.
  * powerpc64le also has a custom implementation for POWER9, which is
    also used for the float and float128 versions.  The generic
    e_ilogb.c implementation is moved on powerpc to keep the current
    code as-is.

Checked on aarch64-linux-gnu and x86_64-linux-gnu.

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
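A rough C rendering of what the aarch64 code above computes (error
paths simplified; the out-of-line error helper is glibc-internal, so a
plain return stands in for it here):

    #include <limits.h>
    #include <math.h>
    #include <stdint.h>
    #include <string.h>

    int
    ilogb_sketch (double x)
    {
      uint64_t ix;
      memcpy (&ix, &x, sizeof ix);
      int e = (ix >> 52) & 0x7ff;
      if (__builtin_expect (e == 0, 0))
        {
          ix <<= 12;                /* drop sign and exponent bits */
          if (ix == 0)
            return FP_ILOGB0;       /* x == +/-0: EDOM in the real code */
          return -1023 - __builtin_clzll (ix);   /* subnormal */
        }
      if (__builtin_expect (e == 0x7ff, 0))
        return INT_MAX;             /* inf/nan: EDOM in the real code */
      return e - 1023;              /* normal */
    }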
2025-05-14  powerpc64le: Remove configure check for objcopy >= 2.26.  (Stefan Liebler; 2 files, -77/+0)

Since the minimum binutils version was raised to >= 2.26, the configure
check testing for --update-section support is no longer needed.

Reviewed-by: Peter Bergner <bergner@tenstorrent.com>
2025-05-09  Implement C23 compoundn  (Joseph Myers; 4 files, -2/+6)

C23 adds various <math.h> function families originally defined in
TS 18661-4.  Add the compoundn functions, which compute (1+X) to the
power Y for integer Y (and X at least -1).  The integer exponent has
type long long int in C23; it was intmax_t in TS 18661-4, and as with
other interfaces changed after their initial appearance in the TS, I
don't think we need to support the original version of the interface.
Note that these functions are "compoundn" with a trailing "n", *not*
"compound" (CORE-MATH has the wrong name, for example).

As with pown, I strongly encourage searching for worst cases for ulps
error for these implementations (necessarily non-exhaustively, given
the size of the input space).  I also expect a custom implementation
for a given format could be much faster as well as more accurate (I
haven't tested or benchmarked the CORE-MATH implementation for
binary32); this is one of the more complicated and less efficient
functions to implement in a type-generic way.

As with exp2m1 and exp10m1, this showed up places where the powerpc64le
IFUNC setup is not as self-contained as one might hope (in this case,
without the changes specific to powerpc64le, there were undefined
references to __GI___expf128).

Tested for x86_64 and x86, and with build-many-glibcs.py.
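A naive reference for the new functions' semantics (illustration only;
a real implementation needs far more care with accuracy, overflow, and
the X == -1 and Y == 0 corner cases):

    #include <math.h>

    /* compoundn (x, y) == (1 + x)**y for integer y, x >= -1.  */
    double
    compoundn_naive (double x, long long int y)
    {
      return exp ((double) y * log1p (x));
    }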
2025-05-06  powerpc: Remove POWER7 strncasecmp optimization  (Adhemerval Zanella; 12 files, -270/+4)

These routines are not extensively used (the gnulib documentation even
recommends using a replacement [1]), and there is already a POWER8
version that uses proper vectorized instructions.

[1] https://www.gnu.org/software/gnulib/manual/gnulib.html#C-strings

Checked with a build for some powerpc variations.

Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2025-04-10  powerpc: Remove relocation cache flush code for power64  (Florian Weimer; 1 file, -15/+0)

This is only needed for -mno-secure-plt, and this linkage mode is not
supported with powerpc64 and powerpc64le.

Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2025-03-12  math: Refactor how to use libm-test-ulps  (Adhemerval Zanella; 4 files, -3517/+0)

The current approach tracks the maximum supported math errors by
explicitly setting them per function and architecture.  On newer
implementations or new compiler versions, the file is updated with
higher values whenever tests report them.  The idea is to track the
maximum known error and to update the manual with the obtained values.

Constantly updating libm-test-ulps adds little value: it is usually a
mechanical change done by the maintainer; for past releases it was
usually ignored whether an ulp change resulted from a compiler
regression; and the math tests already have a maximum ulp error that
triggers a regression.  A recent update after the new, correctly
rounded acosf implementation [1] showed that the libm-test-ulps entry
was indeed due to a compiler issue.

This patch removes all arch-specific libm-test-ulps files, adds
system-generic ones where applicable, and changes their semantics.  The
generic files now track specific implementation constraints, such as
whether the implementation is expected to be correctly rounded or
whether a system has different error expectations.  Multiple
libm-test-ulps files can now be defined, with system-specific ones
overriding the generic ones; this covers the case where an
arch-specific implementation shows worse precision than the generic
one, for instance cbrtf on i686.

Regressions are now reported only if an implementation shows errors
larger than 9 ulps (13 for IBM long double), unless overridden by
libm-test-ulps, and the maximum error is no longer printed at the end
of the tests.  The regen-ulps rule is also removed, since it does not
make sense to update libm-test-ulps automatically.

The manual error table is also removed; Paul Zimmermann and others have
been tracking libm precision with a more comprehensive analysis for
several releases, so link to his work instead.

[1] https://sourceware.org/git/?p=glibc.git;a=commit;h=9cc9f8e11e8fb8f54f1e84d9f024917634a78201
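For context, entries in a libm-test-ulps file look roughly like this
(illustrative function and values):

    Function: "acos":
    float: 1
    double: 1
    ldouble: 2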
2025-03-05  Remove dl-procinfo.h  (Adhemerval Zanella; 2 files, -2/+0)

powerpc was the only architecture with arch-specific hooks for
LD_SHOW_AUXV, and with the information moved to ld diagnostics there is
no need to keep the _dl_procinfo hook.

Checked with a build for all affected ABIs.

Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2025-03-05  powerpc: Remove unused dl-procinfo.h  (Adhemerval Zanella; 4 files, -131/+107)

The _dl_string_platform is moved to hwcapinfo.h, since it is only used
by hwcapinfo.c and the test-get_hwcap internal test.

Checked on powerpc64le-linux-gnu.

Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2025-03-05  powerpc: Move cache geometry information to ld diagnostics  (Adhemerval Zanella; 2 files, -66/+50)

Moved from the LD_SHOW_AUXV output.

Checked on powerpc64le-linux-gnu.

Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2025-03-05  powerpc: Move AT_HWCAP descriptions to ld diagnostics  (Adhemerval Zanella; 3 files, -93/+48)

The ld.so diagnostics output already prints AT_HWCAP values, but only
in hexadecimal.  To avoid duplicating the strings, consolidate
hwcap_names from cpu-features.h into a new file, dl-hwcap-info.h (this
also improves the hwcap string descriptions with more values).  For
future AT_HWCAP3/AT_HWCAP4 extensions, new values only need to be added
to dl-hwcap-info.c for both the ld diagnostics and the tunable
filtering to parse them.

Checked on powerpc64le-linux-gnu.

Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
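The diagnostics in question can be inspected by invoking the dynamic
linker directly; for example (the loader path varies by distribution,
shown here for a typical powerpc64le install):

    $ /lib64/ld64.so.2 --list-diagnostics | grep -i hwcap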
2025-02-12  math: Use tanpif from CORE-MATH  (Adhemerval Zanella; 2 files, -4/+1)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic tanpif.  The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    latency                master    patched   improvement
    x86_64                 85.1683   47.7990   43.88%
    x86_64v2               76.8219   41.4679   46.02%
    x86_64v3               73.7775   37.7734   48.80%
    aarch64 (Neoverse)     35.4514   18.0742   49.02%
    power8                 22.7604   10.1054   55.60%
    power10                22.1358    9.9553   55.03%

    reciprocal-throughput  master    patched   improvement
    x86_64                 41.0174   19.4718   52.53%
    x86_64v2               34.8565   11.3761   67.36%
    x86_64v3               34.0325    9.6989   71.50%
    aarch64 (Neoverse)     25.4349    9.2017   63.82%
    power8                 13.8626    3.8486   72.24%
    power10                11.7933    3.6420   69.12%

Reviewed-by: DJ Delorie <dj@redhat.com>
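A naive reference for the whole *pif family's semantics, here and in
the sinpif/cospif/atanpif/atan2pif/asinpif/acospif commits below
(tanpi(x) = tan(pi*x), and analogously for the others); multiplying by
pi first already rounds, so unlike CORE-MATH this is not correctly
rounded and gets exact half-integer cases wrong:

    #include <math.h>

    float
    tanpif_naive (float x)
    {
      return tanf ((float) M_PI * x);
    }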
2025-02-12  math: Use sinpif from CORE-MATH  (Adhemerval Zanella; 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic sinpif.  The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    latency                master    patched   improvement
    x86_64                 47.5710   38.4455   19.18%
    x86_64v2               46.8828   40.7563   13.07%
    x86_64v3               44.0034   34.1497   22.39%
    aarch64 (Neoverse)     19.2493   14.1968   26.25%
    power8                 23.5312   16.3854   30.37%
    power10                22.6485   10.2888   54.57%

    reciprocal-throughput  master    patched   improvement
    x86_64                 21.8858   11.6717   46.67%
    x86_64v2               22.0620   11.9853   45.67%
    x86_64v3               21.5653   11.3291   47.47%
    aarch64 (Neoverse)     13.0615    6.5499   49.85%
    power8                 16.2030    6.9580   57.06%
    power10                12.8911    4.2858   66.75%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12  math: Use cospif from CORE-MATH  (Adhemerval Zanella; 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic cospif.  The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    latency                master    patched   improvement
    x86_64                 47.4679   38.4157   19.07%
    x86_64v2               46.9686   38.3329   18.39%
    x86_64v3               43.8929   31.8510   27.43%
    aarch64 (Neoverse)     18.8867   13.2089   30.06%
    power8                 22.9435    7.8023   65.99%
    power10                15.4472    7.77505  49.67%

    reciprocal-throughput  master    patched   improvement
    x86_64                 20.9518   11.4991   45.12%
    x86_64v2               19.8699   10.5921   46.69%
    x86_64v3               19.3475    9.3998   51.42%
    aarch64 (Neoverse)     12.5767    6.2158   50.58%
    power8                 15.0566    3.2654   78.31%
    power10                 9.2866    3.1147   66.46%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12  math: Use atanpif from CORE-MATH  (Adhemerval Zanella; 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic atanpif.  The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    latency                master    patched   improvement
    x86_64                 66.3296   52.7558   20.46%
    x86_64v2               66.0429   51.4007   22.17%
    x86_64v3               60.6294   48.7876   19.53%
    aarch64 (Neoverse)     24.3163   20.9110   14.00%
    power8                 16.5766   13.3620   19.39%
    power10                16.5115   13.4072   18.80%

    reciprocal-throughput  master    patched   improvement
    x86_64                 30.8599   16.0866   47.87%
    x86_64v2               29.2286   15.4688   47.08%
    x86_64v3               23.0960   12.8510   44.36%
    aarch64 (Neoverse)     15.4619   10.6752   30.96%
    power8                  7.9200    5.2483   33.73%
    power10                 6.8539    4.6262   32.50%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12  math: Use atan2pif from CORE-MATH  (Adhemerval Zanella; 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic atan2pif.  The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    latency                master    patched   improvement
    x86_64                 79.4006   70.8726   10.74%
    x86_64v2               77.5136   69.1424   10.80%
    x86_64v3               71.8050   68.1637    5.07%
    aarch64 (Neoverse)     27.8363   24.7700   11.02%
    power8                 39.3893   17.2929   56.10%
    power10                19.7200   16.8187   14.71%

    reciprocal-throughput  master    patched   improvement
    x86_64                 38.3457   30.9471   19.29%
    x86_64v2               37.4023   30.3112   18.96%
    x86_64v3               33.0713   24.4891   25.95%
    aarch64 (Neoverse)     19.3683   15.3259   20.87%
    power8                 19.5507    8.27165  57.69%
    power10                 9.05331   7.63775  15.64%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12  math: Use asinpif from CORE-MATH  (Adhemerval Zanella; 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic asinpif.  The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    latency                master    patched   improvement
    x86_64                 46.4996   41.6126   10.51%
    x86_64v2               46.7551   38.8235   16.96%
    x86_64v3               42.6235   33.7603   20.79%
    aarch64 (Neoverse)     17.4161   14.3604   17.55%
    power8                 10.7347    9.0193   15.98%
    power10                10.6420    9.0362   15.09%

    reciprocal-throughput  master    patched   improvement
    x86_64                 24.7208   16.5544   33.03%
    x86_64v2               24.2177   14.8938   38.50%
    x86_64v3               20.5617   10.5452   48.71%
    aarch64 (Neoverse)     13.4827    7.17613  46.78%
    power8                  6.46134   3.56089  44.89%
    power10                 5.79007   3.49544  39.63%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12  math: Use acospif from CORE-MATH  (Adhemerval Zanella; 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic acospif.  The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    latency                master    patched   improvement
    x86_64                 54.8281   42.9070   21.74%
    x86_64v2               54.1717   42.7497   21.08%
    x86_64v3               49.3552   34.1512   30.81%
    aarch64 (Neoverse)     17.9395   14.3733   19.88%
    power8                 20.3110    8.8609   56.37%
    power10                11.3113    8.84067  21.84%

    reciprocal-throughput  master    patched   improvement
    x86_64                 21.2301   14.4803   31.79%
    x86_64v2               20.6858   13.9506   32.56%
    x86_64v3               16.1944   11.3377   29.99%
    aarch64 (Neoverse)     11.4474    7.13282  37.69%
    power8                 10.6916    3.57547  66.56%
    power10                 4.64269   3.54145  23.72%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-05  powerpc64le: Also avoid IFUNC for __mempcpy  (Florian Weimer; 1 file, -0/+1)

Code used during early static startup in elf/dl-tls.c uses __mempcpy.

Fixes commit cbd9fd236981717d3d4ee942986ea912e9707c32 ("Consolidate
TLS block allocation for static binaries with ld.so").

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
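For reference, __mempcpy behaves like memcpy but returns the end of the
destination; a portable equivalent:

    #include <string.h>

    void *
    mempcpy_equiv (void *dest, const void *src, size_t n)
    {
      memcpy (dest, src, n);
      return (char *) dest + n;
    }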
2025-01-09  Move <thread_pointer.h> to kernel-independent sysdeps directories  (Florian Weimer; 1 file, -0/+0)

Hurd is expected to use the same thread ABI as Linux.

Reviewed-by: Michael Jeanson <mjeanson@efficios.com>
2025-01-09  math: Fix acosf when building with gcc <= 11  (Adhemerval Zanella; 1 file, -2/+0)

GCC <= 11 wrongly assumes the rounding mode is to-nearest and performs
constant folding where it should instead defer evaluation to run time,
since the result is not exact [1].

[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=57245
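A small illustration of the bug class (not the acosf-specific case): a
division folded at compile time is rounded to-nearest, while the same
operation evaluated at run time honors the dynamic rounding mode.  On
a typical x86_64 build without -frounding-math:

    #include <fenv.h>
    #include <stdio.h>

    int
    main (void)
    {
      fesetround (FE_DOWNWARD);
      volatile float d = 3.0f;    /* volatile blocks constant folding */
      /* 1.0f/3.0f folded at compile time (round-to-nearest) differs in
         the last bit from the run-time result under FE_DOWNWARD.  */
      printf ("folded:  %a\n", 1.0f / 3.0f);
      printf ("runtime: %a\n", 1.0f / d);
      return 0;
    }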
2025-01-07  math: update powerpc ulps (this time LE)  (Andreas K. Hüttel; 1 file, -8/+8)

Linux bogsucker 6.1.55-gentoo-dist-hardened #1 SMP Sun Oct 1 18:03:02 UTC 2023 ppc64le POWER9 (architected), altivec supported CHRP IBM pSeries (emulated by qemu) GNU/Linux

Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
2025-01-03  math: update powerpc ulps  (Andreas K. Hüttel; 1 file, -25/+25)

Linux timberdoodle 6.1.60-gentoo-dist-hardened #1 SMP Fri Dec 1 22:10:49 UTC 2023 ppc64 POWER9 (architected), altivec supported CHRP IBM pSeries (emulated by qemu) GNU/Linux

Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
2025-01-02  elf: Remove the remaining uses of GET_ADDR_OFFSET  (Florian Weimer; 1 file, -1/+0)

Expand the macro where it is used in static definitions of
__tls_get_addr.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-01-02  powerpc: Update acosf ulps  (Florian Weimer; 1 file, -0/+2)
As seen on powerpc64le-linux-gnu with GCC 11 defaulting to POWER9 instructions.
2025-01-01  Update copyright in generated files by running "make"  (Paul Eggert; 1 file, -6/+9)
2025-01-01  Update copyright dates with scripts/update-copyrights  (Paul Eggert; 567 files, -567/+567)
2024-12-20  elf: Introduce is_rtld_link_map  (Florian Weimer; 1 file, -2/+2)

Unconditionally define it to false for static builds.  This avoids the
awkward use of weak_extern for _dl_rtld_map in checks that cannot
possibly be true on static builds.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
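A sketch of the pattern described (the predicate name matches the
commit, but the body is illustrative rather than the actual glibc
code):

    #include <stdbool.h>

    struct link_map;                      /* opaque here */
    extern struct link_map _dl_rtld_map;  /* only exists in shared builds */

    static inline bool
    is_rtld_link_map (struct link_map *l)
    {
    #ifdef SHARED
      return l == &_dl_rtld_map;          /* the ld.so link map */
    #else
      return false;                       /* static builds have no ld.so */
    #endif
    }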
2024-12-18  math: Use tanhf from CORE-MATH  (Adhemerval Zanella; 2 files, -8/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows slightly better performance than the generic tanhf.
The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    Latency                master     patched   improvement
    x86_64                  51.5273   41.0951   20.25%
    x86_64v2                47.7021   39.1526   17.92%
    x86_64v3                45.0373   34.2737   23.90%
    i686                   133.9970   83.8596   37.42%
    aarch64 (Neoverse)      21.5439   14.7961   31.32%
    power10                 13.3301    8.4406   36.68%

    reciprocal-throughput  master     patched   improvement
    x86_64                  24.9493   12.8547   48.48%
    x86_64v2                20.7051   12.7761   38.29%
    x86_64v3                19.2492   11.0851   42.41%
    i686                    78.6498   29.8211   62.08%
    aarch64 (Neoverse)      11.6026    7.11487  38.68%
    power10                  6.3328    2.8746   54.61%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-12-18  math: Use sinhf from CORE-MATH  (Adhemerval Zanella; 2 files, -8/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows slightly better performance than the generic sinhf.
The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    Latency                master     patched    improvement
    x86_64                  52.6819    49.1489    6.71%
    x86_64v2                49.1162    42.9447   12.57%
    x86_64v3                46.9732    39.9157   15.02%
    i686                   141.1470   129.6410    8.15%
    aarch64 (Neoverse)      20.8539    17.1288   17.86%
    power10                 14.5258     9.1906   36.73%

    reciprocal-throughput  master     patched    improvement
    x86_64                  27.5553    23.9395   13.12%
    x86_64v2                21.6423    20.3219    6.10%
    x86_64v3                21.4842    16.0224   25.42%
    i686                    87.9709    86.1626    2.06%
    aarch64 (Neoverse)      15.1919    12.2744   19.20%
    power10                  7.2188     5.2611   27.12%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-12-18  math: Use coshf from CORE-MATH  (Adhemerval Zanella; 2 files, -8/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode), although it shows worse performance than the current one.  The
current implementation's performance comes mainly from its internal
use of the optimized expf implementation, and it shows a maximum error
of 2 ulps for FE_TONEAREST and 3 ulps for other rounding modes.  The
code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    Latency                master     patched    improvement
    x86_64                  40.6995    49.0737   -20.58%
    x86_64v2                40.5841    44.3604    -9.30%
    x86_64v3                39.3879    39.7502    -0.92%
    i686                   112.3380   129.8570   -15.59%
    aarch64 (Neoverse)      18.6914    17.0946     8.54%
    power10                 11.1343     9.3245    16.25%

    reciprocal-throughput  master     patched    improvement
    x86_64                  18.6471    24.1077   -29.28%
    x86_64v2                17.7501    20.2946   -14.34%
    x86_64v3                17.8262    17.1877     3.58%
    i686                    64.1454    86.5645   -34.95%
    aarch64 (Neoverse)       9.77226   12.2314   -25.16%
    power10                  4.0200     5.3316   -32.63%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-12-18  math: Use atanhf from CORE-MATH  (Adhemerval Zanella; 2 files, -8/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows slightly better performance than the generic atanhf.
The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    Latency                master     patched    improvement
    x86_64                  59.4930    45.8568   22.92%
    x86_64v2                59.5705    45.5804   23.48%
    x86_64v3                53.1838    37.7155   29.08%
    i686                   169.354    133.5940   21.12%
    aarch64 (Neoverse)      26.0781    16.9829   34.88%
    power10                 15.6591    10.7623   31.27%

    reciprocal-throughput  master     patched    improvement
    x86_64                  23.5903    18.5766   21.25%
    x86_64v2                22.6489    18.2683   19.34%
    x86_64v3                19.0401    13.9474   26.75%
    i686                    97.6034   107.3260   -9.96%
    aarch64 (Neoverse)      15.3664     9.57846  37.67%
    power10                  6.8877     4.6242   32.86%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-12-18  math: Use atan2f from CORE-MATH  (Adhemerval Zanella; 2 files, -16/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows slightly better performance than the generic atan2f.
The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    Latency                master     patched    improvement
    x86_64                  68.1175    69.2014   -1.59%
    x86_64v2                66.9884    66.0081    1.46%
    x86_64v3                57.7034    61.6407   -6.82%
    i686                   189.8690   152.7560   19.55%
    aarch64 (Neoverse)      32.6151    24.5382   24.76%
    power10                 21.7282    17.1896   20.89%

    reciprocal-throughput  master     patched    improvement
    x86_64                  34.5202    31.6155    8.41%
    x86_64v2                32.6379    30.3372    7.05%
    x86_64v3                34.3677    23.6455   31.20%
    i686                   157.7290    75.8308   51.92%
    aarch64 (Neoverse)      27.7788    16.2671   41.44%
    power10                 15.5715     8.1588   47.60%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-12-18  math: Use atanf from CORE-MATH  (Adhemerval Zanella; 2 files, -11/+3)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows slightly better performance than the generic atanf.
The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    Latency                master     patched    improvement
    x86_64                  56.8265    53.6842    5.53%
    x86_64v2                54.8177    53.6842    2.07%
    x86_64v3                46.2915    48.7034   -5.21%
    i686                   158.3760   108.9560   31.20%
    aarch64 (Neoverse)      21.687     20.5893    5.06%
    power10                 13.1903    13.5012   -2.36%

    reciprocal-throughput  master     patched    improvement
    x86_64                  16.6787    16.7601   -0.49%
    x86_64v2                16.6983    16.7601   -0.37%
    x86_64v3                16.2268    12.1391   25.19%
    i686                   138.6840    36.0640   74.00%
    aarch64 (Neoverse)      11.8012    10.3565   12.24%
    power10                  5.3212     4.2894   19.39%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-12-18  math: Use asinhf from CORE-MATH  (Adhemerval Zanella; 2 files, -8/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows slightly better performance than the generic asinhf.
The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    Latency                master     patched    improvement
    x86_64                  64.5128    56.9717   11.69%
    x86_64v2                63.3065    57.2666    9.54%
    x86_64v3                62.8719    51.4170   18.22%
    i686                   189.1630   137.635    27.24%
    aarch64 (Neoverse)      25.3551    20.5757   18.85%
    power10                 17.9712    13.3302   25.82%

    reciprocal-throughput  master     patched    improvement
    x86_64                  20.0844    15.4731   22.96%
    x86_64v2                19.2919    15.4000   20.17%
    x86_64v3                18.7226    11.9009   36.44%
    i686                   103.7670    80.2681   22.65%
    aarch64 (Neoverse)      12.5005     8.68969  30.49%
    power10                  7.2220     5.03617  30.27%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-12-18  math: Use asinf from CORE-MATH  (Adhemerval Zanella; 2 files, -8/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows slightly better performance than the generic asinf.
The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    Latency                master     patched    improvement
    x86_64                  42.8237    35.2460   17.70%
    x86_64v2                43.3711    35.9406   17.13%
    x86_64v3                35.0335    30.5744   12.73%
    i686                   213.8780   104.4710   51.15%
    aarch64 (Neoverse)      17.2937    13.6025   21.34%
    power10                 12.0227     7.4241   38.25%

    reciprocal-throughput  master     patched    improvement
    x86_64                  13.6770    15.5231  -13.50%
    x86_64v2                13.8722    16.0446  -15.66%
    x86_64v3                13.6211    13.2753    2.54%
    i686                   186.7670    45.4388   75.67%
    aarch64 (Neoverse)       9.96089    9.39285   5.70%
    power10                  4.9862     3.7819   24.15%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-12-18  math: Use acoshf from CORE-MATH  (Adhemerval Zanella; 2 files, -8/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows slightly better performance than the generic acoshf.
The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    Latency                master    patched   improvement
    x86_64                 61.2471   58.7742    4.04%
    x86_64-v2              62.6519   59.0523    5.75%
    x86_64-v3              58.7408   50.1393   14.64%
    aarch64                24.8580   21.3317   14.19%
    power10                17.0469   13.1345   22.95%

    reciprocal-throughput  master    patched   improvement
    x86_64                 16.1618   15.1864    6.04%
    x86_64-v2              15.7729   14.7563    6.45%
    x86_64-v3              14.1669   11.9568   15.60%
    aarch64                10.911     9.5486   12.49%
    power10                 6.38196   5.06734  20.60%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-12-18  math: Use acosf from CORE-MATH  (Adhemerval Zanella; 2 files, -8/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows slightly better performance than the generic acosf.
The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

    Latency                master     patched    improvement
    x86_64                  52.5098    36.6312   30.24%
    x86_64v2                53.0217    37.3091   29.63%
    x86_64v3                42.8501    32.3977   24.39%
    i686                   207.3960   109.4000   47.25%
    aarch64                 21.3694    13.7871   35.48%
    power10                 14.5542     7.2891   49.92%

    reciprocal-throughput  master     patched    improvement
    x86_64                  14.1487    15.9508  -12.74%
    x86_64v2                14.3293    16.1899  -12.98%
    x86_64v3                13.6563    12.6161    7.62%
    i686                   158.4060    45.7354   71.13%
    aarch64                 12.5515     9.19233  26.76%
    power10                  5.7868     3.3487   42.13%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-12-18  powerpc: Update libm-test-ulps  (Adhemerval Zanella; 1 file, -0/+120)
Regen to add new functions acospi, asinpi, atan2pi, atanpi, and tanpi.
2024-12-12  Implement C23 atan2pi  (Joseph Myers; 1 file, -0/+1)
C23 adds various <math.h> function families originally defined in TS 18661-4. Add the atan2pi functions (atan2(y,x)/pi). Tested for x86_64 and x86, and with build-many-glibcs.py.
2024-12-11  Implement C23 atanpi  (Joseph Myers; 1 file, -0/+1)
C23 adds various <math.h> function families originally defined in TS 18661-4. Add the atanpi functions (atan(x)/pi). Tested for x86_64 and x86, and with build-many-glibcs.py.