path: root/sysdeps/arc
Age | Commit message | Author | Files | Lines
2025-01-02 | elf: Introduce generic <dl-tls.h> | Florian Weimer | 1 | -30/+0

On arc, the definition of TLS_DTV_UNALLOCATED now comes from <dl-dtv.h>.

For x86-64 x32, a separate version is needed because unsigned long int is
32 bits on this target.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

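For context, the generic definition that <dl-dtv.h> supplies looks roughly like the sketch below; the exact spelling in the tree may differ, and the x32 variant has to express the sentinel without assuming a 64-bit unsigned long int.

  /* Sketch of the generic sentinel for a not-yet-allocated DTV entry
     (illustrative; see sysdeps/generic/dl-dtv.h for the real header).  */
  #ifndef TLS_DTV_UNALLOCATED
  # define TLS_DTV_UNALLOCATED ((void *) -1l)
  #endif
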
2025-01-01 | Update copyright dates with scripts/update-copyrights | Paul Eggert | 43 | -43/+43

2024-12-18 | math: Use tanhf from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic tanhf.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  51.5273  41.0951  20.25%
x86_64v2  47.7021  39.1526  17.92%
x86_64v3  45.0373  34.2737  23.90%
i686  133.9970  83.8596  37.42%
aarch64 (Neoverse)  21.5439  14.7961  31.32%
power10  13.3301  8.4406  36.68%

reciprocal-throughput  master  patched  improvement
x86_64  24.9493  12.8547  48.48%
x86_64v2  20.7051  12.7761  38.29%
x86_64v3  19.2492  11.0851  42.41%
i686  78.6498  29.8211  62.08%
aarch64 (Neoverse)  11.6026  7.11487  38.68%
power10  6.3328  2.8746  54.61%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

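The math_config.h error handling mentioned throughout these CORE-MATH conversions boils down to the internal __math_oflowf/__math_uflowf helpers; a rough, generic sketch (not the actual patch, with made-up function and parameter names):

  /* Rough sketch only: report a float range error through the internal
     math_config.h helpers, which raise the FP exception and set errno
     to ERANGE.  */
  #include <stdint.h>
  #include <math_config.h>   /* glibc-internal header */

  static float
  report_range_error (uint32_t sign, int is_overflow)
  {
    return is_overflow ? __math_oflowf (sign) : __math_uflowf (sign);
  }
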
2024-12-18 | math: Use sinhf from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic sinhf.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  52.6819  49.1489  6.71%
x86_64v2  49.1162  42.9447  12.57%
x86_64v3  46.9732  39.9157  15.02%
i686  141.1470  129.6410  8.15%
aarch64 (Neoverse)  20.8539  17.1288  17.86%
power10  14.5258  9.1906  36.73%

reciprocal-throughput  master  patched  improvement
x86_64  27.5553  23.9395  13.12%
x86_64v2  21.6423  20.3219  6.10%
x86_64v3  21.4842  16.0224  25.42%
i686  87.9709  86.1626  2.06%
aarch64 (Neoverse)  15.1919  12.2744  19.20%
power10  7.2188  5.2611  27.12%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-12-18 | math: Use coshf from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode),
although it shows worse performance than the current one. The current
implementation's performance comes mainly from its internal use of the
optimized expf implementation, and it shows a maximum error of 2 ULPs for
FE_TONEAREST and 3 ULPs for the other rounding modes.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  40.6995  49.0737  -20.58%
x86_64v2  40.5841  44.3604  -9.30%
x86_64v3  39.3879  39.7502  -0.92%
i686  112.3380  129.8570  -15.59%
aarch64 (Neoverse)  18.6914  17.0946  8.54%
power10  11.1343  9.3245  16.25%

reciprocal-throughput  master  patched  improvement
x86_64  18.6471  24.1077  -29.28%
x86_64v2  17.7501  20.2946  -14.34%
x86_64v3  17.8262  17.1877  3.58%
i686  64.1454  86.5645  -34.95%
aarch64 (Neoverse)  9.77226  12.2314  -25.16%
power10  4.0200  5.3316  -32.63%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-12-18 | math: Use atanhf from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic atanhf.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  59.4930  45.8568  22.92%
x86_64v2  59.5705  45.5804  23.48%
x86_64v3  53.1838  37.7155  29.08%
i686  169.354  133.5940  21.12%
aarch64 (Neoverse)  26.0781  16.9829  34.88%
power10  15.6591  10.7623  31.27%

reciprocal-throughput  master  patched  improvement
x86_64  23.5903  18.5766  21.25%
x86_64v2  22.6489  18.2683  19.34%
x86_64v3  19.0401  13.9474  26.75%
i686  97.6034  107.3260  -9.96%
aarch64 (Neoverse)  15.3664  9.57846  37.67%
power10  6.8877  4.6242  32.86%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-12-18 | math: Use atan2f from CORE-MATH | Adhemerval Zanella | 2 | -10/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic atan2f.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  68.1175  69.2014  -1.59%
x86_64v2  66.9884  66.0081  1.46%
x86_64v3  57.7034  61.6407  -6.82%
i686  189.8690  152.7560  19.55%
aarch64 (Neoverse)  32.6151  24.5382  24.76%
power10  21.7282  17.1896  20.89%

reciprocal-throughput  master  patched  improvement
x86_64  34.5202  31.6155  8.41%
x86_64v2  32.6379  30.3372  7.05%
x86_64v3  34.3677  23.6455  31.20%
i686  157.7290  75.8308  51.92%
aarch64 (Neoverse)  27.7788  16.2671  41.44%
power10  15.5715  8.1588  47.60%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-12-18 | math: Use atanf from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic atanf.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  56.8265  53.6842  5.53%
x86_64v2  54.8177  53.6842  2.07%
x86_64v3  46.2915  48.7034  -5.21%
i686  158.3760  108.9560  31.20%
aarch64 (Neoverse)  21.687  20.5893  5.06%
power10  13.1903  13.5012  -2.36%

reciprocal-throughput  master  patched  improvement
x86_64  16.6787  16.7601  -0.49%
x86_64v2  16.6983  16.7601  -0.37%
x86_64v3  16.2268  12.1391  25.19%
i686  138.6840  36.0640  74.00%
aarch64 (Neoverse)  11.8012  10.3565  12.24%
power10  5.3212  4.2894  19.39%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-12-18 | math: Use asinhf from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic asinhf.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  64.5128  56.9717  11.69%
x86_64v2  63.3065  57.2666  9.54%
x86_64v3  62.8719  51.4170  18.22%
i686  189.1630  137.635  27.24%
aarch64 (Neoverse)  25.3551  20.5757  18.85%
power10  17.9712  13.3302  25.82%

reciprocal-throughput  master  patched  improvement
x86_64  20.0844  15.4731  22.96%
x86_64v2  19.2919  15.4000  20.17%
x86_64v3  18.7226  11.9009  36.44%
i686  103.7670  80.2681  22.65%
aarch64 (Neoverse)  12.5005  8.68969  30.49%
power10  7.2220  5.03617  30.27%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-12-18 | math: Use asinf from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic asinf.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  42.8237  35.2460  17.70%
x86_64v2  43.3711  35.9406  17.13%
x86_64v3  35.0335  30.5744  12.73%
i686  213.8780  104.4710  51.15%
aarch64 (Neoverse)  17.2937  13.6025  21.34%
power10  12.0227  7.4241  38.25%

reciprocal-throughput  master  patched  improvement
x86_64  13.6770  15.5231  -13.50%
x86_64v2  13.8722  16.0446  -15.66%
x86_64v3  13.6211  13.2753  2.54%
i686  186.7670  45.4388  75.67%
aarch64 (Neoverse)  9.96089  9.39285  5.70%
power10  4.9862  3.7819  24.15%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-12-18 | math: Use acoshf from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic acoshf.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  61.2471  58.7742  4.04%
x86_64-v2  62.6519  59.0523  5.75%
x86_64-v3  58.7408  50.1393  14.64%
aarch64  24.8580  21.3317  14.19%
power10  17.0469  13.1345  22.95%

reciprocal-throughput  master  patched  improvement
x86_64  16.1618  15.1864  6.04%
x86_64-v2  15.7729  14.7563  6.45%
x86_64-v3  14.1669  11.9568  15.60%
aarch64  10.911  9.5486  12.49%
power10  6.38196  5.06734  20.60%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-12-18 | math: Use acosf from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic acosf.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  52.5098  36.6312  30.24%
x86_64v2  53.0217  37.3091  29.63%
x86_64v3  42.8501  32.3977  24.39%
i686  207.3960  109.4000  47.25%
aarch64  21.3694  13.7871  35.48%
power10  14.5542  7.2891  49.92%

reciprocal-throughput  master  patched  improvement
x86_64  14.1487  15.9508  -12.74%
x86_64v2  14.3293  16.1899  -12.98%
x86_64v3  13.6563  12.6161  7.62%
i686  158.4060  45.7354  71.13%
aarch64  12.5515  9.19233  26.76%
power10  5.7868  3.3487  42.13%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-12-02 | elf: Consolidate stackinfo.h | Adhemerval Zanella | 1 | -33/+0

And use a sane default for the generic implementation.

Reviewed-by: Florian Weimer <fweimer@redhat.com>

2024-11-22 | math: Use tanf from CORE-MATH | Adhemerval Zanella | 2 | -7/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic tanf.

The code was adapted to glibc style, to use the definitions of
math_config.h, to remove errno handling, and to use a generic 128-bit
routine for ABIs that do not support it natively.

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (neoverse1,
gcc 13.2.1), and powerpc (POWER10, gcc 13.2.1):

latency  master  patched  improvement
x86_64  82.3961  54.8052  33.49%
x86_64v2  82.3415  54.8052  33.44%
x86_64v3  69.3661  50.4864  27.22%
i686  219.271  45.5396  79.23%
aarch64  29.2127  19.1951  34.29%
power10  19.5060  16.2760  16.56%

reciprocal-throughput  master  patched  improvement
x86_64  28.3976  19.7334  30.51%
x86_64v2  28.4568  19.7334  30.65%
x86_64v3  21.1815  16.1811  23.61%
i686  105.016  15.1426  85.58%
aarch64  18.1573  10.7681  40.70%
power10  8.7207  8.7097  0.13%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-11-22 | math: Use lgammaf from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic lgammaf.

The code was adapted to glibc style, to use the definitions of
math_config.h, to remove errno handling, to use math_narrow_eval on the
overflow path, and to make it reentrant.

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (M1, gcc 13.2.1),
and powerpc (POWER10, gcc 13.2.1):

latency  master  patched  improvement
x86_64  86.5609  70.3278  18.75%
x86_64v2  78.3030  69.9709  10.64%
x86_64v3  74.7470  59.8457  19.94%
i686  387.355  229.761  40.68%
aarch64  40.8341  33.7563  17.33%
power10  26.5520  16.1672  39.11%
powerpc  28.3145  17.0625  39.74%

reciprocal-throughput  master  patched  improvement
x86_64  68.0461  48.3098  29.00%
x86_64v2  55.3256  47.2476  14.60%
x86_64v3  52.3015  38.9028  25.62%
i686  340.848  195.707  42.58%
aarch64  36.8000  30.5234  17.06%
power10  20.4043  12.6268  38.12%
powerpc  22.6588  13.8866  38.71%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-11-22 | math: Use erfcf from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic erfcf.

The code was adapted to glibc style and to use the definitions of
math_config.h.

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (M1, gcc 13.2.1),
and powerpc (POWER10, gcc 13.2.1):

latency  master  patched  improvement
x86_64  98.8796  66.2142  33.04%
x86_64v2  98.9617  67.4221  31.87%
x86_64v3  87.4161  53.1754  39.17%
aarch64  33.8336  22.0781  34.75%
power10  21.1750  13.5864  35.84%
powerpc  21.4694  13.8149  35.65%

reciprocal-throughput  master  patched  improvement
x86_64  48.5620  27.6731  43.01%
x86_64v2  47.9497  28.3804  40.81%
x86_64v3  42.0255  18.1355  56.85%
aarch64  24.3938  13.4041  45.05%
power10  10.4919  6.1881  41.02%
powerpc  11.763  6.76468  42.49%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-11-22 | math: Use erff from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic erff.

The code was adapted to glibc style and to use the definitions of
math_config.h.

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (M1, gcc 13.2.1),
and powerpc (POWER10, gcc 13.2.1):

latency  master  patched  improvement
x86_64  85.7363  45.1372  47.35%
x86_64v2  86.6337  38.5816  55.47%
x86_64v3  71.3810  34.0843  52.25%
i686  190.143  97.5014  48.72%
aarch64  34.9091  14.9320  57.23%
power10  38.6160  8.5188  77.94%
powerpc  39.7446  8.45781  78.72%

reciprocal-throughput  master  patched  improvement
x86_64  35.1739  14.7603  58.04%
x86_64v2  34.5976  11.2283  67.55%
x86_64v3  27.3260  9.8550  63.94%
i686  91.0282  30.8840  66.07%
aarch64  22.5831  6.9615  69.17%
power10  18.0386  3.0918  82.86%
powerpc  20.7277  3.63396  82.47%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-11-22 | math: Use cbrtf from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic cbrtf.

The code was adapted to glibc style and to use the definitions of
math_config.h.

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (M1, gcc 13.2.1),
and powerpc (POWER10, gcc 13.2.1):

latency  master  patched  improvement
x86_64  68.6348  36.8908  46.25%
x86_64v2  67.3418  36.6968  45.51%
x86_64v3  63.4981  32.7859  48.37%
aarch64  29.3172  12.1496  58.56%
power10  18.0845  8.8893  50.85%
powerpc  18.0859  8.79527  51.37%

reciprocal-throughput  master  patched  improvement
x86_64  36.4369  13.3565  63.34%
x86_64v2  37.3611  13.1149  64.90%
x86_64v3  31.6024  11.2102  64.53%
aarch64  18.6866  7.3474  60.68%
power10  9.4758  3.6329  61.66%
powerpc  9.58896  3.90439  59.28%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

2024-11-19 | Fix femode_t conditionals for arc and or1k | Joseph Myers | 1 | -1/+1

Two of the architecture bits/fenv.h headers define femode_t if
__GLIBC_USE (IEC_60559_BFP_EXT), instead of the correct condition
__GLIBC_USE (IEC_60559_BFP_EXT_C23) (both were added after commit
0175c9e9be5f0b2000859666b6e1ef3696f1123b, but were probably first developed
before it and then not updated to take account of its changes).

This results in failures of the installed headers check for fenv.h when
building with GCC 15 (which defaults to -std=gnu23; we don't yet have an
installed-headers test specifically for C23 mode and don't yet require a
compiler with such a mode for building glibc) together with a combination
of options leaving C23 features enabled, since the declarations of
functions using femode_t use the correct conditions; see
<https://sourceware.org/pipermail/libc-testresults/2024q4/013163.html>.

Fix the conditionals to get <fenv.h> to work correctly in C23 mode again.

Tested with build-many-glibcs.py (arc-linux-gnu, arc-linux-gnuhf,
or1k-linux-gnu-hard, or1k-linux-gnu-soft).

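The fix is purely in the preprocessor guard; a sketch of the corrected condition follows (the femode_t typedef shown is illustrative, not necessarily the exact ARC definition):

  /* bits/fenv.h (sketch): femode_t must be guarded by the C23 macro,
     matching the declarations of fegetmode/fesetmode in <fenv.h>.  */
  #if __GLIBC_USE (IEC_60559_BFP_EXT_C23)   /* was: IEC_60559_BFP_EXT */
  typedef unsigned int femode_t;            /* illustrative typedef */
  # define FE_DFL_MODE ((const femode_t *) -1L)
  #endif
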
2024-11-01 | math: Use log10p1f from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic log10p1f.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (M1, gcc 13.2.1),
and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  68.5251  32.2627  52.92%
x86_64v2  68.8912  32.7887  52.41%
x86_64v3  59.3427  27.0521  54.41%
i686  162.026  103.383  36.19%
aarch64  26.8513  14.5695  45.74%
power10  12.7426  8.4929  33.35%
powerpc  16.6768  9.29135  44.29%

reciprocal-throughput  master  patched  improvement
x86_64  26.0969  12.4023  52.48%
x86_64v2  25.0045  11.0748  55.71%
x86_64v3  20.5610  10.2995  49.91%
i686  89.8842  78.5211  12.64%
aarch64  17.1200  9.4832  44.61%
power10  6.7814  6.4258  5.24%
powerpc  15.769  7.6825  51.28%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-11-01 | math: Use log1pf from CORE-MATH | Adhemerval Zanella | 2 | -9/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic log1pf.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (M1, gcc 13.2.1),
and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  71.8142  38.9668  45.74%
x86_64v2  71.9094  39.1321  45.58%
x86_64v3  60.1000  32.4016  46.09%
i686  147.105  104.258  29.13%
aarch64  26.4439  14.0050  47.04%
power10  19.4874  9.4146  51.69%
powerpc  17.6145  8.00736  54.54%

reciprocal-throughput  master  patched  improvement
x86_64  19.7604  12.7254  35.60%
x86_64v2  19.0039  11.9455  37.14%
x86_64v3  16.8559  11.9317  29.21%
i686  82.3426  73.9718  10.17%
aarch64  14.4665  7.9614  44.97%
power10  11.9974  8.4117  29.89%
powerpc  7.15222  6.0914  14.83%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-11-01 | math: Use log2p1f from CORE-MATH | Adhemerval Zanella | 2 | -6/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic log2p1f.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  70.1462  47.0090  32.98%
x86_64v2  70.2513  47.6160  32.22%
x86_64v3  60.4840  39.9443  33.96%
i686  164.068  122.909  25.09%
aarch64  25.9169  16.9207  34.71%
power10  18.1261  9.8592  45.61%
powerpc  17.2683  9.38665  45.64%

reciprocal-throughput  master  patched  improvement
x86_64  26.2240  16.4082  37.43%
x86_64v2  25.0911  15.7480  37.24%
x86_64v3  20.9371  11.7264  43.99%
i686  90.4209  95.3073  -5.40%
aarch64  16.8537  8.9561  46.86%
power10  12.9401  6.5555  49.34%
powerpc  9.01763  7.54745  16.30%

The performance decrease for i686 is mostly due to the use of the x87 FPU;
when building with '-msse2 -mfpmath=sse':

  master  patched  improvement
latency  164.068  102.982  37.23%
reciprocal-throughput  89.1968  82.5117  7.49%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-11-01 | math: Use expm1f from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic expm1f.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  96.7402  36.4026  62.37%
x86_64v2  97.5391  33.4625  65.69%
x86_64v3  82.1778  30.8668  62.44%
i686  120.58  94.8302  21.35%
aarch64  32.3558  12.8881  60.17%
power10  23.5087  9.8574  58.07%
powerpc  23.4776  9.06325  61.40%

reciprocal-throughput  master  patched  improvement
x86_64  27.8224  15.9255  42.76%
x86_64v2  27.8364  9.6438  65.36%
x86_64v3  20.3227  9.6146  52.69%
i686  63.5629  59.4718  6.44%
aarch64  17.4838  7.1082  59.34%
power10  12.4644  8.7829  29.54%
powerpc  14.2152  5.94765  58.16%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-11-01 | math: Use exp2m1f from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic exp2m1f.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow). The only change
is to handle FLT_MAX_EXP for FE_DOWNWARD or FE_TOWARDZERO. The benchmark
inputs are based on the exp2f ones.

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  40.6042  48.7104  -19.96%
x86_64v2  40.7506  35.9032  11.90%
x86_64v3  35.2301  31.7956  9.75%
i686  102.094  94.6657  7.28%
aarch64  18.2704  15.1387  17.14%
power10  11.9444  8.2402  31.01%

reciprocal-throughput  master  patched  improvement
x86_64  20.8683  16.1428  22.64%
x86_64v2  19.5076  10.4474  46.44%
x86_64v3  19.2106  10.4014  45.86%
i686  56.4054  59.3004  -5.13%
aarch64  12.0781  7.3953  38.77%
power10  6.5306  5.9388  9.06%

The generic implementation calls __ieee754_exp2f, and x86_64 provides an
optimized ifunc version (built with -mfma -mavx2, not correctly rounded).
This explains the performance difference for x86_64. The same holds for
i686, where the ABI provides an optimized __ieee754_exp2f version built
with '-msse2 -mfpmath=sse'. When built with the same flags, the new
algorithm shows better performance:

  master  patched  improvement
latency  102.094  91.2823  10.59%
reciprocal-throughput  56.4054  52.7984  6.39%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-11-01 | math: Use exp10m1f from CORE-MATH | Adhemerval Zanella | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic exp10m1f.

The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow). I mostly fixed
some small issues in corner cases (sNaN handling, -INFINITY, a specific
overflow check).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency  master  patched  improvement
x86_64  45.4690  49.5845  -9.05%
x86_64v2  46.1604  36.2665  21.43%
x86_64v3  37.8442  31.0359  17.99%
i686  121.367  93.0079  23.37%
aarch64  21.1126  15.0165  28.87%
power10  12.7426  8.4929  33.35%

reciprocal-throughput  master  patched  improvement
x86_64  19.6005  17.4005  11.22%
x86_64v2  19.6008  11.1977  42.87%
x86_64v3  17.5427  10.2898  41.34%
i686  59.4215  60.9675  -2.60%
aarch64  13.9814  7.9173  43.37%
power10  6.7814  6.4258  5.24%

The generic implementation calls __ieee754_exp10f, which has an optimized
version that is not correctly rounded; this is the main culprit of the
latency difference for x86_64 and the throughput difference for i686.

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-10-11 | replace tgammaf by the CORE-MATH implementation | Paul Zimmermann | 2 | -5/+0

The CORE-MATH implementation is correctly rounded (for any rounding mode).
This can be checked by exhaustive tests in a few minutes, since there are
fewer than 2^32 values to check against, for example, GNU MPFR. This patch
also adds some bench values for tgammaf.

Tested on x86_64 and x86 (cfarm26).

With the initial GNU libc code it gave on an Intel(R) Core(TM) i7-8700:

"tgammaf": { "": { "duration": 3.50188e+09, "iterations": 2e+07, "max": 602.891, "min": 65.1415, "mean": 175.094 } }

With the new code:

"tgammaf": { "": { "duration": 3.30825e+09, "iterations": 5e+07, "max": 211.592, "min": 32.0325, "mean": 66.1649 } }

With the initial GNU libc code it gave on cfarm26 (i686):

"tgammaf": { "": { "duration": 3.70505e+09, "iterations": 6e+06, "max": 2420.23, "min": 243.154, "mean": 617.509 } }

With the new code:

"tgammaf": { "": { "duration": 3.24497e+09, "iterations": 1.8e+07, "max": 1238.15, "min": 101.155, "mean": 180.276 } }

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>

Changes in v2:
- include <math.h> (fix the linknamespace failures)
- restored original benchtests/strcoll-inputs/filelist#en_US.UTF-8 file
- restored original wrapper code (math/w_tgammaf_compat.c), except for the dealing with the sign
- removed the tgammaf/float entries in all libm-test-ulps files
- address other comments from Joseph Myers (https://sourceware.org/pipermail/libc-alpha/2024-July/158736.html)

Changes in v3:
- pass NULL argument for signgam from w_tgammaf_compat.c
- use of math_narrow_eval
- added more comments

Changes in v4:
- initialize local_signgam to 0 in math/w_tgamma_template.c
- replace sysdeps/ieee754/dbl-64/gamma_productf.c by a dummy file

Changes in v5:
- do not mention local_signgam any more in math/w_tgammaf_compat.c
- initialize local_signgam to 1 instead of 0 in w_tgamma_template.c and added a comment

Changes in v6:
- pass NULL as 2nd argument of __ieee754_gammaf_r in w_tgammaf_compat.c, and check for NULL in e_gammaf_r.c

Changes in v7:
- added Signed-off-by line for Alexei Sibidanov (author of the code)

Changes in v8:
- added Signed-off-by line for Paul Zimmermann (submitter of the patch)

Changes in v9:
- address comments from the review by Adhemerval Zanella

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

2024-09-25 | arc: Cleanup arcbe | Pavel Kozlov | 3 | -8/+4

Remove the mention of the arcbe ABI to avoid misleading anyone.
The ARC big endian ABI is no longer supported.

Reviewed-by: Florian Weimer <fweimer@redhat.com>

2024-09-25 | arc: Remove HAVE_ARC_BE macro and disable big-endian port | Florian Weimer | 2 | -13/+5
It is no longer needed, now that ARC is always little endian.
2024-08-11 | ARC: Regenerate ULPs | Pavel Kozlov | 2 | -0/+81
Regenerate fpu and soft-fp ULPs. Based on results from HSDK-4xD board with GCC 14 build. Including new tests added by 07972839108495245d8b93ca546462b3f4dad47f.
2024-06-17 | Convert to autoconf 2.72 (vanilla release, no distribution patches) | Andreas K. Hüttel | 1 | -63/+57
As discussed at the patch review meeting Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org> Reviewed-by: Simon Chopin <simon.chopin@canonical.com>
2024-06-17 | Implement C23 logp1 | Joseph Myers | 2 | -0/+20

C23 adds various <math.h> function families originally defined in
TS 18661-4. Add the logp1 functions (aliases for the log1p functions - the
name is intended to be more consistent with the new log2p1 and log10p1,
where clearly it would have been very confusing to name those functions
log21p and log101p). As aliases rather than new functions, the content of
this patch is somewhat different from those actually adding new functions.

Tests are shared with log1p, so this patch *does* mechanically update all
affected libm-test-ulps files to expect the same errors for both functions.

The vector versions of log1p on aarch64 and x86_64 are *not* updated to
have logp1 aliases (and thus there are no corresponding header, tests,
abilist or ulps changes for vector functions either). It would be
reasonable for such vector aliases and corresponding changes to other files
to be made separately. For now, the log1p tests instead avoid testing logp1
in the vector case (a Makefile change is needed to avoid problems with
grep, used in generating the .c files for vector function tests, matching
more than one ALL_RM_TEST line in a file testing multiple functions with
the same inputs, when it assumes that the .inc file only has a single such
line).

Tested for x86_64 and x86, and with build-many-glibcs.py.

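Conceptually the change just publishes another name for the existing log1p entry points; a hand-wavy illustration only (the real patch goes through glibc's type-generic template machinery and ABI lists rather than a literal weak_alias):

  /* Illustration: logp1 exposed as another name for log1p.  */
  double __log1p (double __x);
  weak_alias (__log1p, log1p)
  weak_alias (__log1p, logp1)
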
2024-04-19 | login: Check default sizes of structs utmp, utmpx, lastlog | Florian Weimer | 1 | -0/+3
The default <utmp-size.h> is for ports with a 64-bit time_t. Ports with a 32-bit time_t or with __WORDSIZE_TIME64_COMPAT32=1 need to override it. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
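A minimal sketch of the kind of compile-time check this enables; the size macros are assumed to come from <utmp-size.h>, and their per-port values are not shown here:

  /* Sketch: verify the record sizes did not change by accident.  */
  #include <utmp.h>
  #include <utmp-size.h>   /* assumed to provide UTMP_SIZE and LASTLOG_SIZE */

  _Static_assert (sizeof (struct utmp) == UTMP_SIZE, "struct utmp size");
  _Static_assert (sizeof (struct lastlog) == LASTLOG_SIZE, "struct lastlog size");
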
2024-02-01 | string: Use builtins for ffs and ffsll | Adhemerval Zanella Netto | 1 | -0/+2

It allows the removal of a lot of arch-specific implementations.

Checked on x86_64, aarch64, powerpc64.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>

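What "use builtins" means for the generic C fallback, roughly (a sketch; the real file also keeps the usual internal aliases):

  /* string/ffs.c, simplified sketch: defer to the compiler builtin.  */
  int
  __ffs (int i)
  {
    return __builtin_ffs (i);
  }
  weak_alias (__ffs, ffs)
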
2024-01-01 | Update copyright dates with scripts/update-copyrights | Paul Eggert | 44 | -44/+44
2023-07-17 | configure: Use autoconf 2.71 | Siddhesh Poyarekar | 1 | -38/+51
Bump autoconf requirement to 2.71 to allow regenerating configure on more recent distributions. autoconf 2.71 has been in Fedora since F36 and is the current version in Debian stable (bookworm). It appears to be current in Gentoo as well. All sysdeps configure and preconfigure scripts have also been regenerated; all changes are trivial transformations that do not affect functionality. Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-05-30 | Fix misspellings in sysdeps/ -- BZ 25337 | Paul Pluzhnikov | 1 | -1/+1
2023-02-17 | ARC: fpu: add extra capability check before use of sqrt and fma builtins | Pavel Kozlov | 2 | -4/+24

Add an extra check for compiler definitions to ensure that the compiler
provides hardware sqrt and fma FPU instructions; otherwise use the software
implementation.

Since divide/sqrt and FMA hardware support on the CPU side is optional, the
compiler can be configured to generate hardware FPU instructions but
without using the FDDIV, FDSQRT, FSDIV, FSSQRT, FDMADD and FSMADD
instructions. In that case the __builtin_sqrt and __builtin_sqrtf provided
by the compiler cannot be used inside the glibc code: these builtins are
used in the implementations of the sqrt() and sqrtf() functions, but at the
same time they unfold back into calls to sqrt() and sqrtf(). So it is
possible to end up with code like this:

0001c4b4 <__ieee754_sqrtf>:
   1c4b4:  0001 0000   b  0   ;1c4b4 <__ieee754_sqrtf>

The same is also true for __builtin_fma and __builtin_fmaf.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

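The shape of the added check, as a sketch: only advertise the builtin when the compiler is configured to emit the hardware instruction. The feature-macro spelling below is an assumption, not a quote from the patch; the real code tests the ARC FPU macros predefined by GCC.

  /* math-use-builtins-sqrt.h style sketch (FPU macro name assumed).  */
  #if defined __ARC_FPU_SP_DIV__       /* FSSQRT/FSDIV actually emitted */
  # define USE_SQRTF_BUILTIN 1
  #else
  # define USE_SQRTF_BUILTIN 0         /* fall back to the generic sqrtf */
  #endif
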
2023-01-06 | Update copyright dates with scripts/update-copyrights | Joseph Myers | 44 | -44/+44
2022-11-29 | ARC: update definitions in elf/elf.h | Shahab Vahedi | 1 | -4/+4
While porting ARCv2 to elfutils [1], it was brought up that the necessary changes to the project's libelf/elf.h must come from glibc, because they sync it from glibc [2]. Therefore, this patch is to update ARC entries in elf/elf.h. The majority of the update is about adding new definitions, specially for the relocations. However, there is one rename, one deletion, and one change: - R_ARC_JUMP_SLOT renamed to R_ARC_JMP_SLOT to match binutils. - R_ARC_B26 removed because it is unused and deprecated. - R_ARC_TLS_DTPOFF_S9 changed from 0x4a to the correct value 0x49. Finally, a specific SHT class for ARC has been added to glibcelf.py. Else, it would result in a collision: _register_elf_h(Sht, ranges=True, File "/src/glibc/scripts/glibcelf.py", line x, in _register_elf_h raise ValueError('duplicate value {}: {}, {}'.format( ValueError: duplicate value 1879048193: SHT_ARC_ATTRIBUTES, SHT_X86_64_UNWIND [1] https://sourceware.org/pipermail/elfutils-devel/2022q4/005530.html [2] https://sourceware.org/pipermail/elfutils-devel/2022q4/005548.html No regression has been observed after applying this patch. Below follows the result: UNSUPPORTED: crypt/cert UNSUPPORTED: elf/tst-audit22 FAIL: elf/tst-audit25a FAIL: elf/tst-audit25b FAIL: elf/tst-bz15311 FAIL: elf/tst-bz28937 FAIL: elf/tst-dlmopen4 UNSUPPORTED: elf/tst-dlopen-self-container UNSUPPORTED: elf/tst-dlopen-tlsmodid-container UNSUPPORTED: elf/tst-glibc-hwcaps-prepend-cache UNSUPPORTED: elf/tst-ldconfig-bad-aux-cache UNSUPPORTED: elf/tst-ldconfig-ld_so_conf-update UNSUPPORTED: elf/tst-pldd UNSUPPORTED: elf/tst-preload-pthread-libc XPASS: elf/tst-protected1a XPASS: elf/tst-protected1b FAIL: elf/tst-tls-allocation-failure-static-patched FAIL: elf/tst-tls1 FAIL: elf/tst-tls3 FAIL: elf/tst-tlsalign-extern UNSUPPORTED: elf/tst-valgrind-smoke UNSUPPORTED: grp/tst-initgroups1 UNSUPPORTED: grp/tst-initgroups2 UNSUPPORTED: io/tst-getcwd-smallbuff UNSUPPORTED: locale/tst-localedef-path-norm FAIL: localedata/sort-test UNSUPPORTED: localedata/tst-localedef-hardlinks FAIL: malloc/tst-malloc-thread-fail-malloc-check FAIL: malloc/tst-malloc_info-malloc-check UNSUPPORTED: math/test-fesetexcept-traps UNSUPPORTED: math/test-fexcept-traps UNSUPPORTED: math/test-nearbyint-except UNSUPPORTED: math/test-nearbyint-except-2 UNSUPPORTED: misc/tst-adjtimex UNSUPPORTED: misc/tst-clock_adjtime FAIL: misc/tst-misalign-clone FAIL: misc/tst-misalign-clone-internal UNSUPPORTED: misc/tst-ntp_adjtime UNSUPPORTED: misc/tst-pkey UNSUPPORTED: misc/tst-rseq UNSUPPORTED: misc/tst-rseq-disable UNSUPPORTED: misc/tst-syslog UNSUPPORTED: misc/tst-ttyname FAIL: nptl/test-cond-printers FAIL: nptl/test-condattr-printers FAIL: nptl/test-mutex-printers FAIL: nptl/test-mutexattr-printers FAIL: nptl/test-rwlock-printers FAIL: nptl/test-rwlockattr-printers UNSUPPORTED: nptl/tst-pthread-gdb-attach UNSUPPORTED: nptl/tst-pthread-gdb-attach-static UNSUPPORTED: nptl/tst-pthread-getattr UNSUPPORTED: nptl/tst-rseq-nptl UNSUPPORTED: nss/tst-nss-compat1 UNSUPPORTED: nss/tst-nss-db-endgrent UNSUPPORTED: nss/tst-nss-db-endpwent UNSUPPORTED: nss/tst-nss-files-hosts-long UNSUPPORTED: nss/tst-nss-gai-actions UNSUPPORTED: nss/tst-nss-test3 UNSUPPORTED: nss/tst-reload1 UNSUPPORTED: nss/tst-reload2 UNSUPPORTED: posix/bug-ga2 UNSUPPORTED: posix/bug-ga2-mem FAIL: posix/globtest UNSUPPORTED: posix/tst-vfork3 UNSUPPORTED: posix/tst-vfork3-mem UNSUPPORTED: resolv/mtrace-tst-leaks2 UNSUPPORTED: resolv/tst-leaks2 UNSUPPORTED: resolv/tst-resolv-ai_idn UNSUPPORTED: resolv/tst-resolv-ai_idn-latin1 UNSUPPORTED: 
resolv/tst-resolv-res_init UNSUPPORTED: resolv/tst-resolv-res_init-thread UNSUPPORTED: rt/tst-bz28213 UNSUPPORTED: rt/tst-mqueue1 UNSUPPORTED: rt/tst-mqueue10 UNSUPPORTED: rt/tst-mqueue2 UNSUPPORTED: rt/tst-mqueue3 UNSUPPORTED: rt/tst-mqueue4 UNSUPPORTED: rt/tst-mqueue5 UNSUPPORTED: rt/tst-mqueue6 UNSUPPORTED: rt/tst-mqueue8 UNSUPPORTED: rt/tst-mqueue8x UNSUPPORTED: rt/tst-mqueue9 UNSUPPORTED: stdlib/test-bz22786 UNSUPPORTED: stdlib/tst-system UNSUPPORTED: string/test-bcopy UNSUPPORTED: string/test-memmove UNSUPPORTED: string/tst-memmove-overflow UNSUPPORTED: string/tst-strerror UNSUPPORTED: string/tst-strsignal UNSUPPORTED: time/tst-clock_settime UNSUPPORTED: time/tst-settimeofday Summary of test results: 21 FAIL 4184 PASS 69 UNSUPPORTED 16 XFAIL 2 XPASS Signed-off-by: Shahab Vahedi <shahab@synopsys.com> Signed-off-by: Vineet Gupta <vineet.gupta@linux.dev>
2022-11-03 | elf: Introduce <dl-call_tls_init_tp.h> and call_tls_init_tp (bug 29249) | Florian Weimer | 1 | -2/+1
This makes it more likely that the compiler can compute the strlen argument in _startup_fatal at compile time, which is required to avoid a dependency on strlen this early during process startup. Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2022-10-18 | Use PTR_MANGLE and PTR_DEMANGLE unconditionally in C sources | Florian Weimer | 1 | -2/+0
In the future, this will result in a compilation failure if the macros are unexpectedly undefined (due to header inclusion ordering or header inclusion missing altogether). Assembler sources are more difficult to convert. In many cases, they are hand-optimized for the mangling and no-mangling variants, which is why they are not converted. sysdeps/s390/s390-32/__longjmp.c and sysdeps/s390/s390-64/__longjmp.c are special: These are C sources, but most of the implementation is in assembler, so the PTR_DEMANGLE macro has to be undefined in some cases, to match the assembler style. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2022-10-18 | Introduce <pointer_guard.h>, extracted from <sysdep.h> | Florian Weimer | 1 | -0/+1
This allows us to define a generic no-op version of PTR_MANGLE and PTR_DEMANGLE. In the future, we can use PTR_MANGLE and PTR_DEMANGLE unconditionally in C sources, avoiding an unintended loss of hardening due to missing include files or unlucky header inclusion ordering. In i386 and x86_64, we can avoid a <tls.h> dependency in the C code by using the computed constant from <tcb-offsets.h>. <sysdep.h> no longer includes these definitions, so there is no cyclic dependency anymore when computing the <tcb-offsets.h> constants. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
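A sketch of the generic fallback this makes possible, so C code can use the macros unconditionally (illustrative, not the verbatim header):

  /* <pointer_guard.h> fallback sketch: ports without pointer mangling
     get no-op definitions instead of leaving the macros undefined.  */
  #ifndef PTR_MANGLE
  # define PTR_MANGLE(var)
  # define PTR_DEMANGLE(var)
  #endif
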
2022-09-26 | Use atomic_exchange_release/acquire | Wilco Dijkstra | 1 | -1/+1
Rename atomic_exchange_rel/acq to use atomic_exchange_release/acquire since these map to the standard C11 atomic builtins. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
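A caller-side illustration of the rename (the variable is made up; only the macro spelling changes):

  /* Sketch using the glibc-internal <atomic.h> macros.  */
  #include <atomic.h>

  static int futex_word;

  static int
  release_lock_word (void)
  {
    /* Formerly spelled: atomic_exchange_rel (&futex_word, 0);  */
    return atomic_exchange_release (&futex_word, 0);
  }
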
2022-08-26 | csu: Change start code license to have link exception | Szabolcs Nagy | 1 | -1/+18

The start code can get linked into dynamically linked executables, where
the LGPL would require shipping the source or linkable binaries when the
executable is distributed.

On some targets the license exception was missing in start.S (which is
compiled into crt1.o and Scrt1.o, which may end up linked into PDE and PIE
binaries).

I did not review what other code may end up in executables; this just fixes
the start.S license inconsistency across targets.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>

2022-06-15 | elf: Remove ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA | Fangrui Song | 1 | -21/+0

If an executable has copy relocations for extern protected data, that can
only work if the library containing the definition is built with the
assumptions that (a) the compiler emits GOT-generating relocations and
(b) the linker produces R_*_GLOB_DAT instead of R_*_RELATIVE. Otherwise the
library uses its own definition directly and the executable accesses a
stale copy.

Note: the GOT relocations defeat the purpose of protected visibility as an
optimization, but allow rtld to make the executable and library use the
same copy when copy relocations are present; it turns out this never worked
perfectly.

ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA has strange semantics when both a.so
and b.so define protected var and the executable copy relocates var: b.so
accesses its own copy even with GLOB_DAT. The behavior change is from
commit 62da1e3b00b51383ffa7efc89d8addda0502e107 (x86) and was then copied
to nios2 (ae5eae7cfc9c4a8297ff82ec6b794faca1976ecc) and arc
(0e7d930c4c11de896fe807f67fa1eb756c9c1e05). Without
ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA, b.so accesses the copy relocated
data like a.so.

There is now a warning for copy relocation on protected symbols since
commit 7374c02b683b7110b853a32496a619410364d70b. It's extremely unlikely
anyone relies on the ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA behavior, so
let's remove it: this removes a check in the symbol lookup code.

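A toy example of the setup being discussed: a protected-visibility definition in a shared library that a non-PIC executable may copy-relocate (names made up):

  /* Compiled into a shared library, e.g. libfoo.so.  */
  __attribute__ ((visibility ("protected"))) int protected_var = 42;

  int *
  library_address_of_var (void)
  {
    /* Without GOT-generating relocations this binds to the library's own
       copy, while the executable may access its copy-relocated duplicate.  */
    return &protected_var;
  }
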
2022-05-30 | arc: Remove _dl_skip_args usage | Adhemerval Zanella | 1 | -15/+2

Since commit ad43cac44a the generic code already shuffles argv/envp/auxv on
the stack to remove ld.so's own arguments, and thus _dl_skip_args is always
0. So there is no need to adjust argc or argv.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>

2022-05-17 | rtld: Remove DL_ARGV_NOT_RELRO and make _dl_skip_args const | Szabolcs Nagy | 1 | -4/+0
_dl_skip_args is always 0, so the target specific code that modifies argv after relro protection is applied is no longer used. After the patch relro protection is applied to _dl_argv consistently on all targets. Reviewed-by: Florian Weimer <fweimer@redhat.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2022-04-26 | elf: Replace PI_STATIC_AND_HIDDEN with opposite HIDDEN_VAR_NEEDS_DYNAMIC_RELOC | Fangrui Song | 2 | -3/+0

PI_STATIC_AND_HIDDEN indicates whether accesses to internal linkage
variables and hidden visibility variables in a shared object (ld.so) need
dynamic relocations (usually R_*_RELATIVE). PI (position independent) in
the macro name is a misnomer: a code sequence using the GOT is typically
position-independent as well, but using dynamic relocations does not meet
the requirement.

Not defining PI_STATIC_AND_HIDDEN is legacy, and we expect that all new
ports will define it. More ports currently define PI_STATIC_AND_HIDDEN than
not, so change the configure default accordingly. No functional change.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

2022-01-01 | Update copyright dates with scripts/update-copyrights | Paul Eggert | 45 | -45/+45

I used these shell commands:

../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")

and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 7061 files FOO.

I then removed trailing white space from math/tgmath.h,
support/tst-support-open-dev-null-range.c, and
sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following obscure
pre-commit check failure diagnostics from Savannah. I don't know why I run
into these diagnostics whereas others evidently do not.

remote: *** 912-#endif
remote: *** 913:
remote: *** 914-
remote: *** error: lines with trailing whitespace found
...
remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines

2021-12-28 | malloc: Remove memusage.h | Adhemerval Zanella | 1 | -21/+0

And use machine-sp.h instead. The Linux implementation is based on the
already provided CURRENT_STACK_FRAME (used in nptl code), and
STACK_GROWS_UPWARD is replaced with _STACK_GROWS_UP.

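For reference, one common in-tree spelling of CURRENT_STACK_FRAME that such a machine-sp.h wrapper can reuse (a sketch; the exact definition varies by port):

  /* Approximate the current stack frame by the address of a local variable.  */
  #ifndef CURRENT_STACK_FRAME
  # define CURRENT_STACK_FRAME  ({ char __csf; &__csf; })
  #endif
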