Age | Commit message | Author | Files | Lines |
|
This adds CFI directives which communicate that the stack ends
with this function.
Fixes bug 26853.
|
|
This test fails unless bug 26798 is fixed, because lazy binding is
likely to clobber some integer registers, while the variant PCS only
allows x16 and x17 to be clobbered at call time.
The test requires binutils 2.32.1 or newer for handling variant PCS
symbols. SVE registers are not covered by this test, to avoid the
complexity of handling multiple compile- and runtime feature support
cases.
|
|
The variant PCS support was ineffective: in the common case
linkmap->l_mach.plt == 0, and then the symbol table flags were ignored
and normal lazy binding was used instead of resolving the relocations
early. (This was a misunderstanding about how GOT[1] is set up by the
linker.)
In practice this mainly affects SVE calls when the vector length is
more than 128 bits; in that case the top bits of the argument
registers get clobbered during lazy binding.
Fixes bug 26798.
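For illustration, a minimal sketch of the binding decision described
above (simplified names, not the actual dynamic linker code):

#include <elf.h>
#include <stdbool.h>

#ifndef STO_AARCH64_VARIANT_PCS
# define STO_AARCH64_VARIANT_PCS 0x80
#endif

/* Return true if a PLT relocation for SYM must be resolved eagerly:
   a variant PCS symbol must not go through the lazy resolver, which
   only preserves the registers the base PCS requires.  */
static bool
needs_eager_binding (const Elf64_Sym *sym, bool lazy)
{
  if (!lazy)
    return true;
  return sym != NULL && (sym->st_other & STO_AARCH64_VARIANT_PCS) != 0;
}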
|
|
Add CPU detection of Neoverse N2 and Neoverse V1, and select __memcpy_simd as
the memcpy/memmove ifunc.
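As a rough sketch of the kind of MIDR_EL1 check such detection
performs (the macros and part numbers below are illustrative
assumptions, not the glibc cpu-features code):

#include <stdbool.h>
#include <stdint.h>

#define MIDR_IMPLEMENTOR(midr)  (((midr) >> 24) & 0xff)
#define MIDR_PARTNUM(midr)      (((midr) >> 4) & 0xfff)

/* Arm implementor is 0x41; 0xd40 and 0xd49 are assumed here to be the
   Neoverse V1 and Neoverse N2 part numbers.  */
static bool
is_neoverse_v1_or_n2 (uint64_t midr)
{
  return MIDR_IMPLEMENTOR (midr) == 0x41
         && (MIDR_PARTNUM (midr) == 0xd40
             || MIDR_PARTNUM (midr) == 0xd49);
}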
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
There are several compiler implementations that allow large stack
allocations to jump over the guard page at the end of the stack and
corrupt memory beyond that. See CVE-2017-1000364.
Compilers can emit code to probe the stack such that the guard page
cannot be skipped, but on aarch64 the probe interval is 64K by default
instead of the minimum supported page size (4K).
This patch enforces a guard of at least 64K on aarch64 unless the
guard is disabled by setting its size to 0. For backward-compatibility
reasons the increased guard is not reported, so it is only observable
by exhausting the address space or by parsing /proc/self/maps on Linux.
On other targets the patch has no effect. If the stack probe interval
is larger than a page size on a target, then ARCH_MIN_GUARD_SIZE can
be defined to get a large enough stack guard on libc-allocated stacks.
The patch does not affect threads with user-allocated stacks.
Fixes bug 26691.
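A minimal sketch of the clamping logic, with a hypothetical helper
standing in for the nptl code:

#include <stddef.h>

#ifndef ARCH_MIN_GUARD_SIZE
# define ARCH_MIN_GUARD_SIZE (64 * 1024) /* Matches the 64K probe interval.  */
#endif

/* Guard size actually reserved for a libc-allocated stack.  A
   requested size of 0 disables the guard entirely.  */
static size_t
effective_guard_size (size_t requested, size_t pagesize)
{
  if (requested == 0)
    return 0;
  if (requested < ARCH_MIN_GUARD_SIZE)
    requested = ARCH_MIN_GUARD_SIZE;
  /* Round up to a whole number of pages.  */
  return (requested + pagesize - 1) & ~(pagesize - 1);
}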
|
|
On some microarchitectures performance of the backwards memmove improves if
the stores use STR with decreasing addresses. So change the memmove loop
in memcpy_advsimd.S to use 2x STR rather than STP.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
For new j0 test.
|
|
Make glibc MTE-safe on systems where MTE is available. This allows
using heap tagging with an LD_PRELOADed malloc implementation that
enables MTE. We don't document this as a guaranteed contract yet, so
glibc may not be MTE-safe when HWCAP2_MTE is set (older glibcs
certainly aren't). This is mainly for testing and debugging.
The HWCAP flag is not exposed in public headers until Linux adds it
to its uapi. The HWCAP value reservation will be in Linux 5.9.
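For reference, a sketch of the runtime check; the HWCAP2_MTE value is
defined locally because, as noted above, it is not yet in public
headers:

#include <stdbool.h>
#include <sys/auxv.h>

#ifndef HWCAP2_MTE
# define HWCAP2_MTE (1UL << 18) /* Reserved bit, expected in Linux 5.9.  */
#endif

/* True if the kernel reports MTE support for this process.  */
static bool
mte_available (void)
{
  return (getauxval (AT_HWCAP2) & HWCAP2_MTE) != 0;
}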
|
|
Use PROT_READ and PROT_WRITE according to the load segment p_flags
when adding PROT_BTI.
This is before processing relocations which may drop PROT_BTI in
case of textrels. Executable stacks are not protected via PROT_BTI
either. PROT_BTI is a hardening measure for the case where memory
corruption has happened; its value is reduced if writable and
executable memory is available anyway, so missing it on such memory is
fine, but we should respect the p_flags and must not drop PROT_WRITE.
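A simplified sketch of the intended p_flags handling (illustrative
only, not the loader code):

#include <elf.h>
#include <sys/mman.h>

#ifndef PROT_BTI
# define PROT_BTI 0x10 /* AArch64 Linux value.  */
#endif

/* Translate a PT_LOAD segment's p_flags into protection bits, adding
   PROT_BTI on top of (not instead of) the segment flags.  */
static int
segment_prot_with_bti (Elf64_Word p_flags)
{
  int prot = PROT_BTI;
  if (p_flags & PF_R)
    prot |= PROT_READ;
  if (p_flags & PF_W)
    prot |= PROT_WRITE;
  if (p_flags & PF_X)
    prot |= PROT_EXEC;
  return prot;
}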
|
|
Optimize strlen using a mix of scalar and SIMD code. On modern
microarchitectures it is 2.6 times faster on large strings than the
existing strlen_asimd and 35% faster than the new MTE version of strlen.
On a random strlen benchmark using small sizes the speedup is 7% vs
strlen_asimd and 40% vs the MTE strlen. This fixes the main strlen
regressions on Cortex-A53 and other cores with a simple Neon unit.
Rename __strlen_generic to __strlen_mte, and select strlen_asimd when
MTE is not enabled (this is waiting on support for a HWCAP_MTE bit).
This fixes big-endian bug 25824. Passes GLIBC regression tests.
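Conceptually the selection looks like the sketch below; the MTE
capability bit is passed as a parameter because, as noted, the real
HWCAP value is not available yet:

#include <string.h>

extern __typeof (strlen) __strlen_asimd;
extern __typeof (strlen) __strlen_mte;

/* Pick the MTE-safe strlen only when the MTE capability bit is set;
   otherwise keep the faster Advanced SIMD version.  */
static __typeof (strlen) *
select_strlen (unsigned long hwcap2, unsigned long mte_bit)
{
  return (hwcap2 & mte_bit) ? __strlen_mte : __strlen_asimd;
}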
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
|
|
Rename IS_ARES to IS_NEOVERSE_N1 since that is a bit clearer.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
|
|
Add a new memcpy using 128-bit Q registers - this is faster on modern
cores and reduces codesize. Similar to the generic memcpy, small cases
include copies up to 32 bytes. 64-128 byte copies are split into two
cases to improve performance of 64-96 byte copies. Large copies align
the source rather than the destination.
bench-memcpy-random is ~9% faster than memcpy_falkor on Neoverse N1,
so make this memcpy the default on N1 (on Centriq it is 15% faster than
memcpy_falkor).
Passes GLIBC regression tests.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
|
|
Given almost all uses of ENTRY are for string/memory functions,
align ENTRY to a cacheline to simplify things.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
|
|
RETURN_ADDRESS is used at several places in glibc to mean a valid
code address of the call site, but with pac-ret it may contain a
pointer authentication code (PAC), so its definition is adjusted.
This is gcc PR target/94891: __builtin_return_address should not
expose signed pointers to user code where it can cause ABI issues.
In glibc RETURN_ADDRESS is only changed if it is built with pac-ret.
There is no detection for the specific gcc issue because it is
hard to test and the additional xpac does not cause problems.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Currently gcc -pg -mbranch-protection=pac-ret passes a signed return
address to _mcount, so _mcount now always has to strip the PAC from
frompc, since it comes from user code that may be built with pac-ret.
This is gcc PR target/94791: signed pointers should not escape and get
passed across extern call boundaries, since that is an ABI break, but
because existing gcc has this issue we work around it in glibc until
it is resolved. This is compatible with a fixed gcc and it is a nop
on systems without PAuth support. The bug was introduced in gcc-7 with
-msign-return-address=non-leaf|all support which in gcc-9 got renamed
to -mbranch-protection=pac-ret|pac-ret+leaf|standard.
strip_pac uses inline asm instead of __builtin_aarch64_xpaclri since
that is not a documented api and not available in all supported gccs.
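The inline asm amounts to roughly the following sketch (xpaclri is
encoded as hint 7, so it is a nop where PAuth is absent):

/* Strip a pointer authentication code from a return address.  XPACLRI
   only operates on x30, so the value is moved through that register.  */
static inline void *
strip_pac (void *ra)
{
  register void *x30 __asm__ ("x30") = ra;
  __asm__ ("hint 7 // xpaclri" : "+r" (x30));
  return x30;
}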
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Use return address signing in assembly files for functions that save
LR when pac-ret is enabled in the compiler.
The GNU property note for PAC-RET is not meaningful to the dynamic
linker so it is not strictly required, but it may be used to track
the security property of binaries. (The PAC-RET property is only set
if BTI is set too because BTI implies working GNU property support.)
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Return address signing requires unwinder support, which has been
present in libgcc since gcc-7; however, due to bugs the support may be
broken before gcc-10 (and there may similarly be issues in custom
unwinders), so pac-ret is not always safe to use. Assembly code in
glibc should therefore only use pac-ret if the compiler uses it too.
Unfortunately there is no predefined feature macro set by the compiler
for this, so pac-ret is inferred from the code generation.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
When glibc is built with branch protection (i.e. with a gcc configured
with --enable-standard-branch-protection), all glibc binaries should
be BTI compatible and marked as such.
It is easy to link BTI-incompatible objects by accident, and this is
currently silent, which is usually not what is expected, so it is
changed into a link error. (There is no linker flag for failing on
BTI-incompatible inputs, so all warnings are turned into fatal errors
outside the test system when building glibc with branch protection.)
Unfortunately, outlined atomic functions are not BTI compatible in
libgcc (PR libgcc/96001), so to build glibc with current gcc use
'CC=gcc -mno-outline-atomics'; this should be fixed in libgcc soon,
and then glibc can be built and tested without such workarounds.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Binaries can opt-in to using BTI via an ELF object file marking.
The dynamic linker then has to mprotect the executable segments
with PROT_BTI. For statically linked executables, and for the
dynamic linker itself, PROT_BTI protection is applied by the
operating system.
On AArch64 glibc uses PT_GNU_PROPERTY instead of PT_NOTE to check
the properties of a binary because PT_NOTE can be unreliable with
old linkers (old linkers just append the notes of input objects
together and add them to the output without checking them for
consistency which means multiple incompatible GNU property notes
can be present in PT_NOTE).
The BTI property is handled in the loader even if glibc is not built
with BTI support, so in theory user code can be BTI protected
independently of glibc. In practice, though, user binaries are not
marked with the BTI property if glibc has no support, because the
statically linked libc objects (crt files, libc_nonshared.a) are
unmarked.
This patch relies on a Linux userspace API that is not yet in a
released kernel but is present in v5.8-rc1, so it is scheduled for
Linux 5.8.
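To make the note handling concrete, a simplified sketch of scanning an
NT_GNU_PROPERTY_TYPE_0 descriptor for the BTI feature bit (alignment
and error handling reduced, not the actual dl-prop code):

#include <elf.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#ifndef GNU_PROPERTY_AARCH64_FEATURE_1_AND
# define GNU_PROPERTY_AARCH64_FEATURE_1_AND 0xc0000000
#endif
#ifndef GNU_PROPERTY_AARCH64_FEATURE_1_BTI
# define GNU_PROPERTY_AARCH64_FEATURE_1_BTI (1u << 0)
#endif

/* DESC/DESCSZ is the descriptor of a GNU property note found via
   PT_GNU_PROPERTY.  Each property is (pr_type, pr_datasz, pr_data)
   with pr_data padded to 8 bytes on ELF64.  */
static bool
gnu_property_has_bti (const unsigned char *desc, size_t descsz)
{
  size_t off = 0;
  while (off + 8 <= descsz)
    {
      uint32_t type, datasz;
      memcpy (&type, desc + off, 4);
      memcpy (&datasz, desc + off + 4, 4);
      off += 8;
      if (off + datasz > descsz)
        return false;
      if (type == GNU_PROPERTY_AARCH64_FEATURE_1_AND && datasz >= 4)
        {
          uint32_t features;
          memcpy (&features, desc + off, 4);
          return (features & GNU_PROPERTY_AARCH64_FEATURE_1_BTI) != 0;
        }
      off += (datasz + 7) & ~(size_t) 7;
    }
  return false;
}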
Co-authored-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Tailcalls must use x16 or x17 for the indirect branch instruction
to be compatible with code that uses BTI c at function entries.
(Other forms of indirect branches can only land on BTI j.)
Also added a BTI c at the ELF entry point of rtld; this is not
strictly necessary since the kernel does not use an indirect branch
to get there, but it seems safest once building glibc itself with
BTI is supported.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
To enable building glibc with branch protection, assembly code
needs BTI landing pads and ELF object file markings in the form
of a GNU property note.
The landing pads are unconditionally added to all functions that
may be indirectly called. When the code segment is not mapped
with PROT_BTI these instructions are nops. They are kept in the
code when BTI is not supported so that the layout of performance
critical code is unchanged across configurations.
The GNU property notes are only added when there is support for
BTI in the toolchain, because old binutils does not handle the
notes correctly. (It does not know how to merge them or to put them
in a PT_GNU_PROPERTY segment instead of PT_NOTE, and some versions
of binutils emit warnings about the unknown GNU property; in
such cases the produced libc binaries would not have valid
ELF markings, so BTI would not be enabled.)
Note: functions using ENTRY or ENTRY_ALIGN now start with an
additional BTI c, so alignment of the following code changes,
but ENTRY_ALIGN_AND_PAD was fixed so there is no change to the
existing code layout. Some string functions may need to be
tuned for optimal performance after this commit.
Co-authored-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
The compiler can add the required ELF markings based on CFLAGS,
but the assembler cannot, so using C code for empty files
creates less of a maintenance problem.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Check BTI support in the compiler and linker. The check also
requires READELF that understands the BTI GNU property note.
It is expected to succeed with gcc >=gcc-9 configured with
--enable-standard-branch-protection and binutils >=binutils-2.33.
Note: passing -mbranch-protection=bti in CFLAGS when building glibc
may not be enough to get a glibc that supports BTI, because the
crtbegin* and crtend* objects provided by the compiler need to be
BTI compatible too.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Add support for MTE to strncmp. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
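The core idea, modelled in C for illustration only (the real code is
assembly): loads are aligned down so a 16-byte access never crosses
into the next MTE granule.

#include <stdint.h>

/* Align S down to the 16-byte MTE granule that contains it.  A 16-byte
   load from the returned address never crosses a granule boundary, and
   the granule it reads contains at least one byte of the string, so
   its tag is valid to access.  */
static inline const unsigned char *
align_to_mte_granule (const char *s)
{
  return (const unsigned char *) ((uintptr_t) s & ~(uintptr_t) 15);
}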
Co-authored-by: Branislav Rankov <branislav.rankov@arm.com>
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
|
|
Add support for MTE to strcmp. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Branislav Rankov <branislav.rankov@arm.com>
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
|
|
Add support for MTE to strrchr. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
|
|
Add support for MTE to memrchr. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
|
|
Add support for MTE to memchr. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Gabor Kertesz <gabor.kertesz@arm.com>
|
|
Add support for MTE to strcpy. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
|
|
The -fno-math-errno flag is already added by default, and the minimum
GCC required to build glibc (6.2) makes -ffinite-math-only
superfluous.
Checked on aarch64-linux-gnu.
|
|
The define is already set in math-use-builtins-ceil.h; the patch
just removes the implementations (this was missed in c9feb1be93).
Checked on aarch64-linux-gnu.
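For context, a sketch of the math-use-builtins pattern with a
hypothetical wrapper (not the libm entry point):

/* On aarch64 the sysdep header sets USE_CEIL_BUILTIN to 1, so the
   generic C file reduces to a builtin call, which GCC expands to a
   single frintp instruction.  */
#define USE_CEIL_BUILTIN 1

extern double generic_ceil (double); /* bit-twiddling fallback (not shown) */

double
ceil_sketch (double x)
{
#if USE_CEIL_BUILTIN
  return __builtin_ceil (x);
#else
  return generic_ceil (x);
#endif
}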
|
|
Each symbol definition is moved to a separate file, covering all
symbol type definitions (float, double, long double, and float128).
This allows architectures to set their support without the
boilerplate of copying default values.
Checked with a build on the affected ABIs.
|
|
Introduce an Arm MTE compatible strlen implementation.
The existing implementation assumes that any access to the pages in
which the string resides is safe. This assumption is not true when
MTE is enabled. This patch updates the algorithm to ensure that
accesses remain within the bounds of an MTE tag (16-byte chunks) and
improves overall performance on modern cores. On cores with a less
efficient Advanced SIMD implementation, such as Cortex-A53, it can
be slower.
Benchmarked on Cortex-A72, Cortex-A53, Neoverse N1.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
|
|
Introduce an Arm MTE compatible strchr implementation.
The existing implementation assumes that any access to the pages in
which the string resides is safe. This assumption is not true when
MTE is enabled. This patch updates the algorithm to ensure that
accesses remain within the bounds of an MTE tag (16-byte chunks) and
improves overall performance.
Benchmarked on Cortex-A72, Cortex-A53, Neoverse N1.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
|
|
Introduce an Arm MTE compatible strchrnul implementation.
The existing implementation assumes that any access to the pages in
which the string resides is safe. This assumption is not true when
MTE is enabled. This patch updates the algorithm to ensure that
accesses remain within the bounds of an MTE tag (16-byte chunks) and
improves overall performance.
Benchmarked on Cortex-A72, Cortex-A53, Neoverse N1.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
|
|
Falkor's memcpy and memmove share some implementation details;
therefore, the two routines are moved to a single source file
for code reuse.
The two routines now share code for small and medium copies
(up to and including 128 bytes). Large copies in memcpy do not
handle overlap correctly; consequently, the loops for
moving/copying more than 128 bytes stay separate for memcpy
and memmove.
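The shared small/medium path works because everything is loaded before
anything is stored; in C the 17..32 byte case looks roughly like this
(illustrative, not the Falkor assembly):

#include <stddef.h>
#include <string.h>

/* Copy N bytes where 16 < N <= 32.  Both (possibly overlapping) halves
   are loaded before any store, so the same code serves memcpy and
   memmove.  */
static void
copy_17_32 (void *dst, const void *src, size_t n)
{
  unsigned char head[16], tail[16];
  memcpy (head, src, 16);
  memcpy (tail, (const unsigned char *) src + n - 16, 16);
  memcpy (dst, head, 16);
  memcpy ((unsigned char *) dst + n - 16, tail, 16);
}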
To increase code reuse a number of small modifications were made:
1. The old implementation of memcpy copied the first 16 bytes as
soon as the size of the data was determined to be greater than 32
bytes. For the memcpy code to also work when copying small/medium
overlapping data, the first load and store were moved to the large
copy case.
2. The medium memcpy case no longer assumes that 16 bytes were already
copied and uses 8 registers to copy up to 128 bytes.
3. The small case for memmove was enlarged to that of memcpy, i.e.
less than or equal to 32 bytes.
4. The medium case for memmove was enlarged to that of memcpy, i.e.
less than or equal to 128 bytes.
Other changes include:
1. Improve alignment of existing loop bodies.
2. 'Delouse' the memmove and memcpy input arguments: make sure that
the upper 32 bits of the input registers are zeroed if unused.
3. Do one more iteration in memmove loops and reduce the number of
copies made from the start/end of the buffer, depending on
the direction of the memmove loop.
Benchmarking:
Looking at the results from bench-memcpy-random.out, we can see that
now memmove_falkor is about 5% faster than memcpy_falkor_old, while
memmove_falkor_old was more than 15% slower. The memcpy implementation
remained largely unmodified, so there is no significant performance
change.
The reason for such a significant memmove performance gain is the
increase of the upper bound on the small copy case to 32 bytes and
the increase of the upper bound on the medium copy case to 128 bytes.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Introduce the sysdep header math-use-builtins.h to replace aarch64
implementations with the corresponding generic ones:
- newly introduced generic sqrt{,f}, fma{,f}
- existing floor{,f}, nearbyint{,f}, rint{,f}, round{,f}, trunc{,f}
- note that generic copysign was already enabled (via the generic
math-use-builtins.h), now through the sysdep header
Tested with build-many-glibcs for aarch64-linux-gnu.
This is a non-functional change; the aarch64 libm before/after was
byte-invariant, as compared below:
| cd /SCRATCH/vgupta/gnu/install-glibc-A-baseline
| for i in `find . -name libm-2.31.9000.so`; do
| echo $i; diff $i /SCRATCH/vgupta/gnu/install-glibc-C-reduce-scope/$i ;
| echo $?;
| done
| ./aarch64-linux-gnu/lib64/libm-2.31.9000.so
| 0
| ./arm-linux-gnueabi/lib/libm-2.31.9000.so
| 0
| ./x86_64-linux-gnu/lib64/libm-2.31.9000.so
| 0
| ./arm-linux-gnueabihf/lib/libm-2.31.9000.so
| 0
| ./riscv64-linux-gnu-rv64imac-lp64/lib64/lp64/libm-2.31.9000.so
| 0
| ./riscv64-linux-gnu-rv64imafdc-lp64/lib64/lp64/libm-2.31.9000.so
| 0
| ./powerpc-linux-gnu/lib/libm-2.31.9000.so
| 0
| ./microblaze-linux-gnu/lib/libm-2.31.9000.so
| 0
| ./nios2-linux-gnu/lib/libm-2.31.9000.so
| 0
| ./hppa-linux-gnu/lib/libm-2.31.9000.so
| 0
| ./s390x-linux-gnu/lib64/libm-2.31.9000.so
| 0
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
This patch fixes the optimized implementation of strcpy and strnlen
on a big-endian arm64 machine.
The optimized method uses NEON, which can process 128 bits with one
instruction. On a big-endian machine, the bit order should be reversed
for the whole 128-bit double word. But the instruction
rev64 datav.16b, datav.16b
reverses the 64 bits in each of the two halves rather than reversing
all 128 bits. There is no rev128 instruction to reverse the full 128
bits, but we can fix this by loading the data registers accordingly.
Fixes 0237b61526e7("aarch64: Optimized implementation of strcpy") and
2911cb68ed3d("aarch64: Optimized implementation of strnlen").
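To make the rev64 behaviour concrete, a plain C model of what the
instruction does to a 16-byte vector:

#include <stdint.h>

/* Model of "rev64 v.16b, v.16b": reverse the byte order within each
   64-bit half of the 128-bit register, not across the full register.  */
static void
rev64_16b (uint8_t v[16])
{
  for (int half = 0; half < 16; half += 8)
    for (int i = 0; i < 4; i++)
      {
        uint8_t tmp = v[half + i];
        v[half + i] = v[half + 7 - i];
        v[half + 7 - i] = tmp;
      }
}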
Signed-off-by: Lexi Shao <shaolexi@huawei.com>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
|
|
|
|
With the mathinline removal there is no need to keep building and
testing inline math tests.
The gen-libm-tests.py support for generating ULP_I_* is removed, and
all libm-test-ulps files are updated to no longer have the
i{float,double,ldouble} entries. The support for no-test-inline is
also removed from gen-auto-libm-tests, and the
auto-libm-test-out-* files were regenerated.
Checked on x86_64-linux-gnu and i686-linux-gnu.
|
|
Further optimize integer memcpy. Small cases now include copies up
to 32 bytes. 64-128 byte copies are split into two cases to improve
performance of 64-96 byte copies. Comments have been rewritten.
|
|
This supersedes the init_array sysdeps directory. It allows us to
check for ELF_INITFINI in both C and assembler code, and skip DT_INIT
and DT_FINI processing completely on newer architectures.
A new header file is needed because <dl-machine.h> is incompatible
with assembler code. <sysdep.h> is compatible with assembler code,
but it cannot be included in all assembler files because on some
architectures, it redefines register names, and some assembler files
conflict with that.
<elf-initfini.h> is replicated for legacy architectures which need
DT_INIT/DT_FINI support. New architectures follow the generic default
and disable it.
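A condensed sketch of how the macro is meant to be consumed
(hypothetical helper, not the actual dl-init code):

#include <stddef.h>

#ifndef ELF_INITFINI
# define ELF_INITFINI 1 /* Legacy default; new architectures define 0.  */
#endif

/* Run constructors: DT_INIT is only used on legacy architectures,
   DT_INIT_ARRAY is always processed.  */
static void
run_constructors (void (*dt_init) (void), void (**init_array) (void),
                  size_t count)
{
#if ELF_INITFINI
  if (dt_init != NULL)
    dt_init ();
#endif
  for (size_t i = 0; i < count; i++)
    init_array[i] ();
}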
|
|
All architectures using their own definition of struct
__pthread_rwlock_arch_t need to provide their own pthread-offsets.h.
|
|
This patch adds a new macro, libm_alias_finite, to define all _finite
symbols. It sets each _finite symbol as a compat symbol based on its
first version (obtained from the definition in the build-generated
first-versions.h).
The <fn>f128_finite symbols were introduced in GLIBC 2.26 and so need
special treatment in code that is shared between long double and
float128. This is done by adding a list, similar to the internal
symbol redefinition, in sysdeps/ieee754/float128/float128_private.h.
Alpha also needs some tricky changes to ensure we still emit 2 compat
symbols for sqrt(f).
Passes build-many-glibcs.
Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
|
|
|
|
Checked on aarch64-linux-gnu.
|
|
Rename the kunpeng ifunc to kunpeng920, and modify the corresponding
function files, including the IS_KUNPENG920 check.
Checked on aarch64-linux-gnu.
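For illustration, the kind of MIDR test such a check performs (the
implementor and part number values are assumptions of this sketch):

#include <stdbool.h>
#include <stdint.h>

/* Kunpeng 920: HiSilicon implementor ('H' == 0x48), TSV110 core with
   part number 0xd01 (values assumed for this sketch).  */
static bool
is_kunpeng920 (uint64_t midr)
{
  return ((midr >> 24) & 0xff) == 0x48      /* implementor */
         && ((midr >> 4) & 0xfff) == 0xd01; /* part number */
}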
|
|
Checked on aarch64-linux-gnu.
|
|
Due to a branch prediction issue on the Kunpeng processor, we found
that memset_generic has poor performance for medium-size settings, so
we restructured the logic and unrolled the loop in set_long by 4 to
solve the problem; even settings below 1K in size benefit.
Another change is that DC ZVA does not seem to help when setting zero,
so we discarded it and used set_long to set zero instead. Fewer
branches and predictions also give the zero case a slight improvement.
Checked on aarch64-linux-gnu.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Optimize the strlen implementation by using vector operations and
loop unrolling in the main loop. Compared to __strlen_generic, it
reduces the latency of bench-strlen cases by 7%-18% when the length
of src is greater than 128 bytes, with gains throughout the benchmark.
Checked on aarch64-linux-gnu.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|