|
The libm size improvement built with gcc-14, "--enable-stack-protector=strong
--enable-bind-now=yes --enable-fortify-source=2":
Before:
text data bss dec hex filename
582292 844 12 583148 8e5ec aarch64-linux-gnu/math/libm.so
975133 1076 12 976221 ee55d x86_64-linux-gnu/math/libm.so
1203586 5608 368 1209562 1274da powerpc64le-linux-gnu/math/libm.so
After:
text data bss dec hex filename
581972 844 12 582828 8e4ac aarch64-linux-gnu/math/libm.so
974941 1076 12 976029 ee49d x86_64-linux-gnu/math/libm.so
1203394 5608 368 1209370 12741a powerpc64le-linux-gnu/math/libm.so
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
|
|
The libm size improvement built with gcc-14, "--enable-stack-protector=strong
--enable-bind-now=yes --enable-fortify-source=2":
Before:
text data bss dec hex filename
583444 844 12 584300 8ea6c aarch64-linux-gnu/math/libm.so
976349 1076 12 977437 eea1d x86_64-linux-gnu/math/libm.so
1204738 5608 368 1210714 12795a powerpc64le-linux-gnu/math/libm.so
After:
text data bss dec hex filename
582292 844 12 583148 8e5ec aarch64-linux-gnu/math/libm.so
975133 1076 12 976221 ee55d x86_64-linux-gnu/math/libm.so
1203586 5608 368 1209562 1274da powerpc64le-linux-gnu/math/libm.so
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
|
|
The libm size improvement built with gcc-14, "--enable-stack-protector=strong
--enable-bind-now=yes --enable-fortify-source=2":
Before:
text data bss dec hex filename
584500 844 12 585356 8ee8c aarch64-linux-gnu/math/libm.so
977341 1076 12 978429 eedfd x86_64-linux-gnu/math/libm.so
1205762 5608 368 1211738 127d5a powerpc64le-linux-gnu/math/libm.so
After:
text data bss dec hex filename
583444 844 12 584300 8ea6c aarch64-linux-gnu/math/libm.so
976349 1076 12 977437 eea1d x86_64-linux-gnu/math/libm.so
1204738 5608 368 1210714 12795a powerpc64le-linux-gnu/math/libm.so
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
|
|
since now all symbols that use it are in libc
Message-ID: <20250216145434.7089-11-gfleury@disroot.org>
|
|
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-10-gfleury@disroot.org>
|
|
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-9-gfleury@disroot.org>
|
|
clockrdlock, clockwrlock} into libc.
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-8-gfleury@disroot.org>
|
|
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-7-gfleury@disroot.org>
|
|
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-6-gfleury@disroot.org>
|
|
libc.
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-5-gfleury@disroot.org>
|
|
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-4-gfleury@disroot.org>
|
|
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-3-gfleury@disroot.org>
|
|
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-2-gfleury@disroot.org>
|
|
The syscall pkey_alloc can return ENOSPC to indicate either that all
keys are in use or that the system runs in a mode in which memory
protection keys are disabled. In such a case the test should not fail and
should just return unsupported.
This matches the behaviour of the generic tst-pkey.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
Improve memory access with indexed/unpredicated instructions.
Eliminate register spills. Speedup on Neoverse V1: 3%.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Move constants to struct. Improve memory access with indexed/unpredicated
instructions. Eliminate register spills. Speedup on Neoverse V1: 24%.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Reduce number of MOV/MOVPRFXs and use unpredicated FMUL.
Replace MUL with LSL. Speedup on Neoverse V1: 6%.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Use unpredicated muls, and improve memory access.
7%, 3% and 1% improvement in throughput microbenchmark on Neoverse V1,
for exp, exp2 and cosh respectively.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Use unpredicated muls, use lanewise mla's and improve memory access.
1% regression in throughput microbenchmark on Neoverse V1.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
GCC aligns global data to 16 bytes if its size is >= 16 bytes. This patch
changes the exp_data struct slightly so that the fields are better aligned
and without gaps. As a result on targets that support them, more load-pair
instructions are used in exp. Exp10 is improved by moving invlog10_2N later
so that neglog10_2hiN and neglog10_2loN can be loaded using load-pair.
The exp benchmark improves 2.5%, "144bits" by 7.2%, "768bits" by 12.7% on
Neoverse V2. Exp10 improves by 1.5%.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
The libm size improvement built with "--enable-stack-protector=strong
--enable-bind-now=yes --enable-fortify-source=2":
Before:
text data bss dec hex filename
585192 860 12 586064 8f150 aarch64-linux-gnu/math/libm.so
960775 1068 12 961855 ead3f x86_64-linux-gnu/math/libm.so
1189174 5544 368 1195086 123c4e powerpc64le-linux-gnu/math/libm.so
After:
text data bss dec hex filename
584952 860 12 585824 8f060 aarch64-linux-gnu/math/libm.so
960615 1068 12 961695 eac9f x86_64-linux-gnu/math/libm.so
1189078 5544 368 1194990 123bee powerpc64le-linux-gnu/math/libm.so
There are small code changes for x86_64 and powerpc64le, which do not
affect performance; but on aarch64 with gcc-14 I see slightly better
code generation due to the use of ldq for floating-point constant loading.
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
|
|
The libm size improvement built with "--enable-stack-protector=strong
--enable-bind-now=yes --enable-fortify-source=2":
Before:
text data bss dec hex filename
587304 860 12 588176 8f990 aarch64-linux-gnu-master/math/libm.so
962855 1068 12 963935 eb55f x86_64-linux-gnu-master/math/libm.so
1191222 5544 368 1197134 12444e powerpc64le-linux-gnu-master/math/libm.so
After:
text data bss dec hex filename
585192 860 12 586064 8f150 aarch64-linux-gnu/math/libm.so
960775 1068 12 961855 ead3f x86_64-linux-gnu/math/libm.so
1189174 5544 368 1195086 123c4e powerpc64le-linux-gnu/math/libm.so
There are small code changes for x86_64 and powerpc64le, which do not
affect performance; but on aarch64 with gcc-14 I see slightly better
code generation due to the use of ldq for floating-point constant loading.
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
|
|
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic tanpif.
The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).
Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):
latency master patched improvement
x86_64 85.1683 47.7990 43.88%
x86_64v2 76.8219 41.4679 46.02%
x86_64v3 73.7775 37.7734 48.80%
aarch64 (Neoverse) 35.4514 18.0742 49.02%
power8 22.7604 10.1054 55.60%
power10 22.1358 9.9553 55.03%
reciprocal-throughput master patched improvement
x86_64 41.0174 19.4718 52.53%
x86_64v2 34.8565 11.3761 67.36%
x86_64v3 34.0325 9.6989 71.50%
aarch64 (Neoverse) 25.4349 9.2017 63.82%
power8 13.8626 3.8486 72.24%
power10 11.7933 3.6420 69.12%
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic sinpif.
The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).
Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):
latency master patched improvement
x86_64 47.5710 38.4455 19.18%
x86_64v2 46.8828 40.7563 13.07%
x86_64v3 44.0034 34.1497 22.39%
aarch64 (Neoverse) 19.2493 14.1968 26.25%
power8 23.5312 16.3854 30.37%
power10 22.6485 10.2888 54.57%
reciprocal-throughput master patched improvement
x86_64 21.8858 11.6717 46.67%
x86_64v2 22.0620 11.9853 45.67%
x86_64v3 21.5653 11.3291 47.47%
aarch64 (Neoverse) 13.0615 6.5499 49.85%
power8 16.2030 6.9580 57.06%
power10 12.8911 4.2858 66.75%
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic cospif.
The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).
Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):
latency master patched improvement
x86_64 47.4679 38.4157 19.07%
x86_64v2 46.9686 38.3329 18.39%
x86_64v3 43.8929 31.8510 27.43%
aarch64 (Neoverse) 18.8867 13.2089 30.06%
power8 22.9435 7.8023 65.99%
power10 15.4472 7.77505 49.67%
reciprocal-throughput master patched improvement
x86_64 20.9518 11.4991 45.12%
x86_64v2 19.8699 10.5921 46.69%
x86_64v3 19.3475 9.3998 51.42%
aarch64 (Neoverse) 12.5767 6.2158 50.58%
power8 15.0566 3.2654 78.31%
power10 9.2866 3.1147 66.46%
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic atanpif.
The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).
Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):
latency master patched improvement
x86_64 66.3296 52.7558 20.46%
x86_64v2 66.0429 51.4007 22.17%
x86_64v3 60.6294 48.7876 19.53%
aarch64 (Neoverse) 24.3163 20.9110 14.00%
power8 16.5766 13.3620 19.39%
power10 16.5115 13.4072 18.80%
reciprocal-throughput master patched improvement
x86_64 30.8599 16.0866 47.87%
x86_64v2 29.2286 15.4688 47.08%
x86_64v3 23.0960 12.8510 44.36%
aarch64 (Neoverse) 15.4619 10.6752 30.96%
power8 7.9200 5.2483 33.73%
power10 6.8539 4.6262 32.50%
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic atan2pif.
The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).
Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):
latency master patched improvement
x86_64 79.4006 70.8726 10.74%
x86_64v2 77.5136 69.1424 10.80%
x86_64v3 71.8050 68.1637 5.07%
aarch64 (Neoverse) 27.8363 24.7700 11.02%
power8 39.3893 17.2929 56.10%
power10 19.7200 16.8187 14.71%
reciprocal-throughput master patched improvement
x86_64 38.3457 30.9471 19.29%
x86_64v2 37.4023 30.3112 18.96%
x86_64v3 33.0713 24.4891 25.95%
aarch64 (Neoverse) 19.3683 15.3259 20.87%
power8 19.5507 8.27165 57.69%
power10 9.05331 7.63775 15.64%
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic asinpif.
The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).
Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):
latency master patched improvement
x86_64 46.4996 41.6126 10.51%
x86_64v2 46.7551 38.8235 16.96%
x86_64v3 42.6235 33.7603 20.79%
aarch64 (Neoverse) 17.4161 14.3604 17.55%
power8 10.7347 9.0193 15.98%
power10 10.6420 9.0362 15.09%
reciprocal-throughput master patched improvement
x86_64 24.7208 16.5544 33.03%
x86_64v2 24.2177 14.8938 38.50%
x86_64v3 20.5617 10.5452 48.71%
aarch64 (Neoverse) 13.4827 7.17613 46.78%
power8 6.46134 3.56089 44.89%
power10 5.79007 3.49544 39.63%
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic acospif.
The code was adapted to glibc style and to use the definitions of
math_config.h (to handle errno, overflow, and underflow).
Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):
latency master patched improvement
x86_64 54.8281 42.9070 21.74%
x86_64v2 54.1717 42.7497 21.08%
x86_64v3 49.3552 34.1512 30.81%
aarch64 (Neoverse) 17.9395 14.3733 19.88%
power8 20.3110 8.8609 56.37%
power10 11.3113 8.84067 21.84%
reciprocal-throughput master patched improvement
x86_64 21.2301 14.4803 31.79%
x86_64v2 20.6858 13.9506 32.56%
x86_64v3 16.1944 11.3377 29.99%
aarch64 (Neoverse) 11.4474 7.13282 37.69%
power8 10.6916 3.57547 66.56%
power10 4.64269 3.54145 23.72%
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
Like already done in various other places and advised by Roland in
https://lists.gnu.org/archive/html/bug-hurd/2012-04/msg00124.html
|
|
The RPC stub will write a string anyway.
|
|
since all symbols that use it are now in libc
Message-ID: <20250209200108.865599-9-gfleury@disroot.org>
|
|
Message-ID: <20250209200108.865599-8-gfleury@disroot.org>
|
|
Message-ID: <20250209200108.865599-7-gfleury@disroot.org>
|
|
Message-ID: <20250209200108.865599-6-gfleury@disroot.org>
|
|
into libc.
Message-ID: <20250209200108.865599-5-gfleury@disroot.org>
|
|
Message-ID: <20250209200108.865599-4-gfleury@disroot.org>
|
|
Message-ID: <20250209200108.865599-3-gfleury@disroot.org>
|
|
Message-ID: <20250209200108.865599-2-gfleury@disroot.org>
|
|
Code used during early static startup in elf/dl-tls.c uses
__mempcpy.
Fixes commit cbd9fd236981717d3d4ee942986ea912e9707c32 ("Consolidate
TLS block allocation for static binaries with ld.so").
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
The logic was copied wrong from CORE-MATH.
|
|
It's not necessary to introduce temporaries because the compiler
is able to evaluate l_soname just once in constructs like:
l_soname (l) != NULL && strcmp (l_soname (l), LIBC_SO) != 0
|
|
So that they can eventually be called separately from dlopen.
|
|
|
|
This reduces code size and dependencies on ld.so internals from
libc.so.
Fixes commit f4c142bb9fe6b02c0af8cfca8a920091e2dba44b
("arm: Use _dl_find_object on __gnu_Unwind_Find_exidx (BZ 31405)").
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
sysdeps/pthread/sem_open.c: call pthread_setcancelstate directly
since the forward declaration is gone on hurd too
Message-ID: <20250201080202.494671-1-gfleury@disroot.org>
|
|
The logic was copied wrong from CORE-MATH.
|
|
It was copied wrong from CORE-MATH.
|
|
The test uses ARCH_MIN_GUARD_SIZE, and the sysdep.h include is not
required.
|
|
Decorate BSS mappings with [anon: glibc: .bss <file>], for example
[anon: glibc: .bss /lib/libc.so.6]. The string ".bss" is already used
by bionic so use the same, but add the filename as well. If the name
would be longer than what the kernel allows, drop the directory part
of the path.
Refactor the glibc.mem.decorate_maps check into a separate function and use
it to avoid assembling a name that would not be used.
Signed-off-by: Petr Malat <oss@malat.biz>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|