Age | Commit message | Author | Files | Lines |
|
And replace with a scratch_buffer.
|
|
The function is refactored so double and long double / _Float128 have
different stack allocations. For double, allocating the maximum
required size for all possible inputs requires about 480 bytes,
so the memory is allocated on the stack. The long double / _Float128
case requires about 6240 bytes, so a scratch_buffer is used instead.
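For illustration only, a minimal sketch of the scratch_buffer pattern,
assuming a hypothetical helper that needs ndigits bytes of working space
(this is not the actual printf code; <scratch_buffer.h> is a
glibc-internal header):

  #include <scratch_buffer.h>
  #include <stddef.h>

  static int
  format_big (size_t ndigits)
  {
    struct scratch_buffer buf;
    scratch_buffer_init (&buf);      /* Starts with ~1 KiB of stack space.  */
    /* Falls back to the heap only when the request exceeds the on-stack
       space; returns false on allocation failure.  */
    if (!scratch_buffer_set_array_size (&buf, ndigits, 1))
      return -1;
    char *digits = buf.data;
    /* ... fill digits[0 .. ndigits - 1] and emit them ...  */
    (void) digits;
    scratch_buffer_free (&buf);      /* No-op unless the heap was used.  */
    return 0;
  }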
|
|
Use a fixed-size array instead. The maximum number of arguments
is determined by macro tricks.
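As a hedged illustration of the kind of macro trick that can bound an
argument count at compile time (the names below are hypothetical, not
taken from the actual change):

  #include <stdio.h>

  /* Expands to the number of arguments passed, up to 8.  */
  #define COUNT_ARGS(...) \
    COUNT_ARGS_ (__VA_ARGS__, 8, 7, 6, 5, 4, 3, 2, 1)
  #define COUNT_ARGS_(a1, a2, a3, a4, a5, a6, a7, a8, n, ...) n

  #define MAX_ARGS COUNT_ARGS (int, long, double, void *)   /* 4 */

  int
  main (void)
  {
    int slots[MAX_ARGS];   /* Fixed-size array sized by the macro.  */
    printf ("%zu slots\n", sizeof slots / sizeof slots[0]);
    return 0;
  }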
|
|
This patch makes build-many-glibcs.py use the new GMP 6.3.0 and MPFR
4.2.1 releases.
Tested with build-many-glibcs.py (host-libraries, compilers and glibcs
builds).
|
|
Add common emojis to the transliterable characters (mostly
faces and hearts), and transliterate them to old-fashioned
smileys.
Signed-off-by: Colin Leroy-Mira <colin@colino.net>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
Previously, if an entry was marked unusable for any reason, but had
not timed out yet, the assert would trigger.
One way to get into such a state is if a data change is detected during
re-validation of an entry. This causes the entry to be marked as not
usable. If nscd exits soon after that, the clock jumps backwards, and
nscd is restarted, then the cache re-validation run after startup
triggers the removed assert.
The change is more complicated than just the removal of the assert
because entries marked as not usable should be garbage-collected in
the second pass. To make this happen, it is necessary to update some
book-keeping data.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
|
|
According to glibc memcmp microbenchmark test results (with the generic
memcmp as baseline), this implementation shows a performance improvement
except when the length is less than 3; details are as below:
  Name             Percent of time reduced
  memcmp-lasx      16%-74%
  memcmp-lsx       20%-50%
  memcmp-aligned   5%-20%
|
|
According to glibc memset microbenchmark test results, for the LSX and LASX
versions, a few cases with length less than 8 experience performance
degradation; overall, the LASX version could reduce the runtime by about
15%-75% and the LSX version by about 15%-50%.
The unaligned version uses unaligned memory access to set data whose
length is less than 64 and to make the address 8-byte aligned. For this
part, the performance is better than the aligned version. Compared with
the generic version, the performance is close when the length is larger
than 128. When the length is 8-128, the unaligned version could reduce the
runtime by about 30%-70%, and the aligned version by about 20%-50%.
|
|
According to the glibc memrchr microbenchmark, this implementation could
reduce the runtime as follows:
  Name            Percent of runtime reduced
  memrchr-lasx    20%-83%
  memrchr-lsx     20%-64%
|
|
According to the glibc memchr microbenchmark, this implementation could
reduce the runtime as follows:
  Name             Percent of runtime reduced
  memchr-lasx      37%-83%
  memchr-lsx       30%-66%
  memchr-aligned   0%-15%
|
|
According to the glibc rawmemchr microbenchmark, a few cases tested with
char '\0' experience performance degradation because the lasx and lsx
versions don't handle '\0' separately. Overall, the rawmemchr-lasx
implementation could reduce the runtime by about 40%-80%, the rawmemchr-lsx
implementation by about 40%-66%, and the rawmemchr-aligned
implementation by about 20%-40%.
|
|
We are requiring Binutils >= 2.41, so explicit relocation syntax is
always supported by the assembler. Use it to reduce one instruction.
Signed-off-by: Xi Ruoyao <xry111@xry111.site>
|
|
We are requiring Binutils >= 2.41, so la.pcrel always works here.
Signed-off-by: Xi Ruoyao <xry111@xry111.site>
|
|
We are strictly requiring GAS >= 2.41 now, so we don't need to check
assembler capability anymore.
Signed-off-by: Xi Ruoyao <xry111@xry111.site>
|
|
This patch adds the new F_SEAL_EXEC constant from Linux 6.3 (see Linux
commit 6fd7353829c ("mm/memfd: add F_SEAL_EXEC")) to bits/fcntl-linux.h.
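As a hedged, self-contained sketch of how the new constant can be used
(requires Linux >= 6.3 and a glibc whose bits/fcntl-linux.h defines
F_SEAL_EXEC):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int
  main (void)
  {
    int fd = memfd_create ("sealed", MFD_CLOEXEC | MFD_ALLOW_SEALING);
    if (fd < 0)
      {
        perror ("memfd_create");
        return 1;
      }
    /* Once sealed, later attempts to change the file's execute
       permission bits (e.g. via fchmod) fail with EPERM.  */
    if (fcntl (fd, F_ADD_SEALS, F_SEAL_EXEC) != 0)
      perror ("fcntl (F_ADD_SEALS, F_SEAL_EXEC)");
    close (fd);
    return 0;
  }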
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Even though the alloca usage is relatively small and of fixed size, the
code can be written without using alloca. Convert to local variables.
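A hedged before/after sketch of the kind of conversion described above
(the 64-byte size is made up for illustration):

  #include <alloca.h>

  /* Before: a small, fixed-size allocation still routed through alloca.  */
  static void
  fill_before (void)
  {
    char *buf = alloca (64);
    /* ... use buf ...  */
    (void) buf;
  }

  /* After: the same storage as an ordinary local array.  */
  static void
  fill_after (void)
  {
    char buf[64];
    /* ... use buf ...  */
    (void) buf;
  }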
Checked on x86_64-linux-gnu.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Convert to scratch_buffers to avoid potential stack overflow.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
This patch adds a new macro, M68K_SCALE_AVAILABLE, similar to the gmp
scale_available_p (mpn/m68k/m68k-defs.m4), that expands to 1 if a
scale factor can be used in addressing modes. It is used
instead of __mc68020__ for some optimization decisions.
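A hedged approximation of what such a macro could look like; the actual
definition in glibc may test a different set of CPU macros:

  #if defined __mc68020__ || defined __mc68030__ \
      || defined __mc68040__ || defined __mc68060__
  # define M68K_SCALE_AVAILABLE 1
  #else
  # define M68K_SCALE_AVAILABLE 0
  #endif

  /* Example use: pick the scaled-index code path only when available.  */
  #if M68K_SCALE_AVAILABLE
  /* ... addressing-mode-specific optimization ...  */
  #endif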
Checked on a build for m68k-linux-gnu target mc68020 and mc68040.
|
|
GCC currently does not define __mc68020__ for -mcpu=68040 or higher,
which breaks the memcpy/memmove assumptions. Since this memory copy
optimization seems only intended for m68020, disable it for other
m680X0 variants.
Checked on a build for m68k-linux-gnu target mc68020 and mc68040.
|
|
Parts of elf/tst-rtld-list-diagnostics.py have been copied from
scripts/tst-ld-trace.py.
The abnf module is entirely optional and used to verify the
ABNF grammar as included in the manual.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
The missing @item makes it look like errno will be set to ESRCH
if a cross-session getpgid is not permitted.
Found by ulfvonbelow on irc.
|
|
Based on the glibc microbenchmark, only a few short inputs to the
strncmp-aligned and strncmp-lsx implementations experience performance
degradation. Overall, strncmp-aligned could reduce the runtime 0%-10%
for aligned comparison and 10%-25% for unaligned comparison, and
strncmp-lsx could reduce the runtime about 0%-60%.
|
|
Based on the glibc microbenchmark, the strcmp-aligned implementation could
reduce the runtime 0%-10% for aligned comparison and 10%-20% for unaligned
comparison, and the strcmp-lsx implementation could reduce the runtime
0%-50%.
|
|
Based on the glibc microbenchmark, strnlen-aligned implementation could
reduce the runtime more than 10%, strnlen-lsx implementation could reduce
the runtime about 50%-78%, strnlen-lasx implementation could reduce the
runtime about 50%-88%.
|
|
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-11-gfleury@disroot.org>
|
|
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-10-gfleury@disroot.org>
|
|
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-9-gfleury@disroot.org>
|
|
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-8-gfleury@disroot.org>
|
|
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-7-gfleury@disroot.org>
|
|
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-6-gfleury@disroot.org>
|
|
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-5-gfleury@disroot.org>
|
|
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-4-gfleury@disroot.org>
|
|
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-3-gfleury@disroot.org>
|
|
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-2-gfleury@disroot.org>
|
|
The path auxv[*].a_val could either be an integer or a string,
depending on the a_type value. Use a separate field, a_val_string, to
simplify mechanical parsing of the --list-diagnostics output.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Otherwise, the ld.so constructor runs for each audit namespace
and each dlmopen namespace.
|
|
On Skylake, it changes log1p bench performance by:
          Before     After      Improvement
  max     63.349     58.347         8%
  min      4.448      5.651       -30%
  mean    12.0674    10.336         14%
The minimum code path is
  if (hx < 0x3FDA827A)                         /* x < 0.41422 */
    {
      if (__glibc_unlikely (ax >= 0x3ff00000))   /* x <= -1.0 */
        {
          ...
        }
      if (__glibc_unlikely (ax < 0x3e200000))    /* |x| < 2**-29 */
        {
          math_force_eval (two54 + x);           /* raise inexact */
          if (ax < 0x3c900000)                   /* |x| < 2**-54 */
            {
              ...
            }
          else
            return x - x * x * 0.5;
FMA and non-FMA code sequences look similar. Non-FMA version is slightly
faster. Since log1p is called by asinh and atanh, it improves asinh
performance by:
          Before     After      Improvement
  max     75.645     63.135        16%
  min     10.074     10.071         0%
  mean    15.9483    14.9089        6%
and improves atanh performance by:
          Before     After      Improvement
  max     91.768     75.081        18%
  min     15.548     13.883        10%
  mean    18.3713    16.8011        8%
|
|
The db2 subdir was removed more than 20 years ago.
|
|
When building with fortify enabled, GCC < 12 issues a warning that the
fortified strncat wrapper might overflow the destination buffer (the
failure is tied to -Werror).
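A hedged sketch of the generic way to silence such a warning at a single
call site with GCC pragmas; the warning option named below
(-Wstringop-overflow) is an assumption, and the actual glibc change may
rely on its internal libc-diag.h helpers instead:

  #include <string.h>

  void
  append_suffix (char (*dst)[32], const char *suffix)
  {
  #pragma GCC diagnostic push
  #pragma GCC diagnostic ignored "-Wstringop-overflow"
    /* GCC < 12 can report that the fortified strncat might overflow *dst
       even though the remaining space is computed correctly.  */
    strncat (*dst, suffix, sizeof *dst - strlen (*dst) - 1);
  #pragma GCC diagnostic pop
  }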
Checked on ppc64 and x86_64.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
The static PIE configure check uses link tests. When bootstrapping
a cross-toolchain, the link tests fail due to missing crt-files /
libc.so. As we explicitly want to test an issue in binutils (ld),
we now also explicitly check for known linker versions.
See also commit 368b7c614b102122b86af3953daea2b30230d0a8
("S390: Use compile-only instead of also link-tests in configure").
|
|
From revision 03f3d275d0d6 in the gmp repository.
|
|
Thanks to Andreas Schwab for reporting.
Fixes: 652b9fdb77d9fd056d4dd26dad2c14142768ab49
Signed-off-by: Sam James <sam@gentoo.org>
|
|
memmove{aligned, unaligned, lsx, lasx}
These implementations improve the time to copy data in the glibc
microbenchmark as below:
memcpy-lasx reduces the runtime about 8%-76%
memcpy-lsx reduces the runtime about 8%-72%
memcpy-unaligned reduces the runtime of unaligned data copying up to 40%
memcpy-aligned reduces the runtime of unaligned data copying up to 25%
memmove-lasx reduces the runtime about 20%-73%
memmove-lsx reduces the runtime about 50%
memmove-unaligned reduces the runtime of unaligned data moving up to 40%
memmove-aligned reduces the runtime of unaligned data moving up to 25%
|
|
strchrnul{aligned, lsx, lasx}
These implementations improve the time to run strchr{nul}
microbenchmark in glibc as below:
strchr-lasx reduces the runtime about 50%-83%
strchr-lsx reduces the runtime about 30%-67%
strchr-aligned reduces the runtime about 10%-20%
strchrnul-lasx reduces the runtime about 50%-83%
strchrnul-lsx reduces the runtime about 36%-65%
strchrnul-aligned reduces the runtime about 6%-10%
|
|
SYS_modify_ldt requires CONFIG_MODIFY_LDT_SYSCALL to be set in the kernel, which
some distributions may disable for hardening. Check if that's the case (unset)
and mark the test as UNSUPPORTED if so.
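A hedged sketch of the probing pattern, using the glibc test harness
(support/check.h and support/test-driver.c); the exact probe in the real
test may differ:

  #include <errno.h>
  #include <stddef.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <support/check.h>      /* FAIL_UNSUPPORTED */

  static int
  do_test (void)
  {
    /* Reading zero bytes of the LDT is harmless; ENOSYS means the
       kernel was built without CONFIG_MODIFY_LDT_SYSCALL.  */
    if (syscall (SYS_modify_ldt, 0, NULL, 0) == -1 && errno == ENOSYS)
      FAIL_UNSUPPORTED ("modify_ldt is not supported by the kernel");
    /* ... the actual LDT test body would go here ...  */
    return 0;
  }

  #include <support/test-driver.c>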
Reviewed-by: DJ Delorie <dj@redhat.com>
Signed-off-by: Sam James <sam@gentoo.org>
|
|
All callers pass 1 or 0x11 anyway (same meaning according to man page),
but still.
Reviewed-by: DJ Delorie <dj@redhat.com>
Signed-off-by: Sam James <sam@gentoo.org>
|
|
Fixes: db25bc52026f ("hurd: Add prototype for and thus fix _hurdsig_abort_rpcs call")
|
|
This was actually not a problem since NULL was getting passed.
|