Hurd with USE_OLD_TTY was the only remaining platform with speed_t not
containing a proper baud rate. From the looks of it, that code has
long since bitrotted.
Remove the vestiges of USE_OLD_TTY.
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
|
|
Linux has supported arbitrary speeds and split speeds in the kernel
since 2008 on all platforms except Alpha (fixed in 2020), but glibc
was never updated to match. This is further complicated by POSIX's use
of macros for the cf[gs]et[io]speed interfaces rather than plain
numbers, which is what they really ought to have been.
On most platforms, the glibc ABI includes the c_[io]speed fields in
struct termios, but they are incorrectly used. On MIPS and SPARC, they
are entirely missing.
For backwards compatibility, the kernel will still use the legacy
speed fields unless they are set to BOTHER, and will use the legacy
output speed as the input speed if the latter is 0 (== B0). However,
the specific encoding used is visible to user space applications,
including applications other than the one that set it.
- SPARC and MIPS get a new struct termios, and tc[gs]etattr() is
versioned accordingly. However, the new struct termios is set to be
a strict extension of the old one, which means that cf* interfaces
other than the speed-related ones do not need versioning.
- The Bxxx constants are redefined as equivalent to their integer
values and the legacy Bxxx constants are renamed __Bxxx.
- cf[gs]et[io]speed() and cfsetspeed() are versioned accordingly.
- tcgetattr() and cfset[io]speed() are adjusted to always keep the
c_[io]speed fields correct (unlike earlier versions), and to
canonicalize the representation so that the legacy fields are ALSO
configured whenever a valid legacy representation exists.
- tcsetattr(), too, canonicalizes the representation in this way
before passing it to the kernel, to maximize compatibility with
older applications/tools.
- The old IBAUD0 hack is removed; it is no longer necessary since
even the legacy c_cflag baud rate fields have had separate input
values for a long time.
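As a minimal illustration (a sketch, not code from the patch; the
helper name is made up and error handling is reduced), an application
can now request an arbitrary rate through the standard interfaces,
since the Bxxx constants equal their integer values:

  #include <termios.h>

  /* Sketch: request an arbitrary baud rate, e.g. 31250 (MIDI),
     using plain integers instead of Bxxx constants.  */
  static int
  set_speed (int fd, speed_t speed)
  {
    struct termios t;
    if (tcgetattr (fd, &t) != 0)
      return -1;
    cfsetospeed (&t, speed);
    cfsetispeed (&t, speed);
    return tcsetattr (fd, TCSANOW, &t);
  }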
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
The powerpc architecture alone emulates the termios ioctls using the
glibc termios structure. Export the real kernel ones as the termios2
interface; although the kernel doesn't call it termios2, it is exactly
the termios2 interface, and it avoids the namespace clash between the
emulated ioctls and the real kernel ioctls.
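For reference, on other architectures the termios2 interface amounts to
roughly the following layout (from the asm-generic <asm/termbits.h>;
the powerpc kernel's own termios differs in detail, and it is that real
kernel structure which is exported here under the termios2 name):

  struct termios2
  {
    tcflag_t c_iflag;   /* input mode flags */
    tcflag_t c_oflag;   /* output mode flags */
    tcflag_t c_cflag;   /* control mode flags */
    tcflag_t c_lflag;   /* local mode flags */
    cc_t c_line;        /* line discipline */
    cc_t c_cc[NCCS];    /* control characters */
    speed_t c_ispeed;   /* input speed */
    speed_t c_ospeed;   /* output speed */
  };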
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
In the kernel, these are in <linux/sockios.h>. The differences between
<linux/sockios.h> and the copied data in <bits/ioctls.h> are minor,
mainly some #ifdefs, so use <linux/sockios.h> directly; it is
hopefully clean enough these days for that.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Replace local_isatty() inlined in libio with a proper function
__isatty_nostatus(). This allows simpler system-specific
implementations that don't need to touch errno at all.
Note: I left the prototype in include/unistd.h (the internal header
file). It didn't make much sense to me to put it in a different header
(not-cancel.h), but perhaps someone can elucidate the need.
Add such an implementation for Linux, with a generic fallback.
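A minimal sketch of what a generic fallback can look like (an
assumption for illustration, not necessarily the committed code): run
the ordinary check but leave errno untouched.

  #include <errno.h>
  #include <unistd.h>

  /* Sketch: like isatty(), but guaranteed not to clobber errno.  */
  int
  __isatty_nostatus (int fd)
  {
    int saved_errno = errno;
    int result = isatty (fd);
    errno = saved_errno;
    return result;
  }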
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
There is a prototype for an internal __tcsetattr() function in
include/termios.h, but tcsetattr() without the __ prefix was still
defined as the actual function.
Make this match the comment and make __tcsetattr() an internal
interface. This will be required to version struct termios for Linux on
MIPS and SPARC.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
This reverts commit 3367d8e180848030d1646f088759f02b8dfe0d6f
Reason for revert: Power10 strcmp clobbers non-volatile vector
registers (Bug 33056)
Tested on ppc64le without regression.
|
|
This reverts commit b9182c793caa05df5d697427c0538936e6396d4b
Reason for revert: Power10 memchr clobbers v20 vector register
(Bug 33059)
This is not a security issue, unlike CVE-2025-5745 and
CVE-2025-5702.
Tested on ppc64le without regression.
|
|
(CVE-2025-5702)
This reverts commit 90bcc8721ef82b7378d2b080141228660e862d56
This change is in the chain of the final revert that fixes the CVE,
i.e. 3367d8e180848030d1646f088759f02b8dfe0d6f
Reason for revert: Power10 strcmp clobbers non-volatile vector
registers (Bug 33056)
Tested on ppc64le with no regressions.
|
|
This reverts commit 23f0d81608d0ca6379894ef81670cf30af7fd081
Reason for revert: Power10 strncmp clobbers non-volatile vector
registers (Bug 33060)
Tested on ppc64le with no regressions.
|
|
rtld.c has
extern const ElfW(Ehdr) __ehdr_start attribute_hidden;
...
_dl_rtld_map.l_map_start = (ElfW(Addr)) &__ehdr_start;
_dl_rtld_map.l_map_end = (ElfW(Addr)) _end;
As
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=120653
shows, the compiler may generate a run-time relocation on __ehdr_start with
movq .LC0(%rip), %xmm0
...
.section .data.rel.ro.local,"aw"
.align 8
.LC0:
.quad __ehdr_start
This won't work before run-time relocation is finished in rtld.c. Add
an optimization barrier to prevent run-time relocations against __ehdr_start
and _end.
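The barrier is the usual compiler-only kind (illustrative sketch, not
the literal patch): launder the address through an empty asm so GCC
cannot turn it into a load from a relocated constant.

  /* Sketch of an optimization barrier for the address computation.  */
  ElfW(Addr) ehdr_start = (ElfW(Addr)) &__ehdr_start;
  asm ("" : "+r" (ehdr_start));   /* hide the value from the optimizer */
  _dl_rtld_map.l_map_start = ehdr_start;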
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sam James <sam@gentoo.org>
|
|
Signed-off-by: gfleury <gfleury@disroot.org>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Message-ID: <20250613184440.1660335-1-gfleury@disroot.org>
|
|
The third argument to __riscv_hwprobe is the size in bytes of the
cpu bitmask pointed to by the fourth argument; however, in the access
attribute (read_only, 4, 3) it is used as an element count (i.e., the
number of unsigned longs that make up the bitmask), resulting in a
false compiler warning:
$ gcc -c hwprobe1.c
hwprobe1.c: In function 'main':
hwprobe1.c:15:11: warning: '__riscv_hwprobe' reading 1024 bytes from a region of size 128 [-Wstringop-overread]
15 | ret = __riscv_hwprobe (pairs, 1, sizeof(cpus), cpus, 0);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
hwprobe1.c:9:23: note: source object 'cpus' of size 128
9 | unsigned long int cpus[16];
| ^~~~
In file included from hwprobe1.c:1:
/usr/include/riscv64-linux-gnu/sys/hwprobe.h:66:12: note: in a call to function '__riscv_hwprobe' declared with attribute 'access (read_only, 4, 3)'
66 | extern int __riscv_hwprobe (struct riscv_hwprobe *__pairs, size_t __pair_count,
| ^~~~~~~~~~~~~~~
$
The documentation (https://docs.kernel.org/arch/riscv/hwprobe.html)
claims that the cpu bitmask has the type cpu_set_t *, which would be
consistent with other functions that take a cpu bitmask such as
sched_setaffinity and sched_getaffinity. It also uses the name
cpusetsize for the third argument, which is much more accurate than
cpu_count since it is a size in bytes and not a cpu count. The
(read_only, 4, 3) access attribute in the glibc prototype claims
that the cpu bitmask is only read; however, when flags is
RISCV_HWPROBE_WHICH_CPUS it is both read and written.
Therefore, in the glibc prototype the type of the fourth argument is
changed to cpu_set_t * to match the documentation, the name of the
third argument is changed to cpusetsize as in the documentation, and the
incorrect access attribute that applies to these arguments is removed.
Almost all existing callers pass a null pointer for the fourth
argument; however, a transparent union is introduced for compatibility
with callers that cast a pointer to the old argument type, and a
macro is introduced to let callers distinguish between the old and
new prototypes when needed.
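A usage sketch with the new prototype as described above (illustrative
only; the helper name is made up and error handling is omitted):

  #define _GNU_SOURCE 1
  #include <sched.h>           /* cpu_set_t, CPU_ZERO, CPU_SET */
  #include <sys/hwprobe.h>

  /* Sketch: probe a key restricted to CPU 0, passing the bitmask size
     in bytes as cpusetsize and a cpu_set_t * as the fourth argument.  */
  static int
  probe_on_cpu0 (struct riscv_hwprobe *pair)
  {
    cpu_set_t cpus;
    CPU_ZERO (&cpus);
    CPU_SET (0, &cpus);
    return __riscv_hwprobe (pair, 1, sizeof (cpus), &cpus, 0);
  }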
The access attributes were specified with __fortified_attr_access;
however, that macro is for fortified functions, while the regular
__attr_access macro is for non-fortified functions such as this one.
Using the incorrect macro results in no access checks at fortify level
3, because it is assumed that the fortified function will be doing the
checking. It is changed to use the correct macro so that the access
checks will work regardless of fortify level.
Also because __riscv_hwprobe is not a cancellation point, __THROW
is added, consistent with similar functions. (However, it is omitted
from the typedef because GCC does not accept it there.)
The __wur (warn_unused_result) attribute is helpful for functions that
cannot be used safely without checking the result; however, code such
as the following does not require the result to be checked and should
not produce a warning:
struct riscv_hwprobe pair = { RISCV_HWPROBE_KEY_IMA_EXT_0, 0 };
__riscv_hwprobe (&pair, 1, 0, NULL, 0);
if (pair.value & RISCV_HWPROBE_EXT_ZBB) ...
Therefore this attribute is omitted.
The comment claiming that the second argument to the ifunc selector
is a pointer to the vDSO function is corrected. It is a pointer to
the regular glibc function (which returns errors as positive values),
not the vDSO function (which returns errors as negative values).
Fixes commit 426d0e1aa8f17426d13707594111df712d2b8911 ("riscv: Add
Linux hwprobe syscall support").
Fixes: BZ #32932
Signed-off-by: Mark Harris <mark.hsj@gmail.com>
Reviewed-by: Palmer Dabbelt <palmer@dabbelt.com>
Acked-by: Palmer Dabbelt <palmer@dabbelt.com>
|
|
25d37948c9f3 ("malloc: Improve malloc initialization") moved calling malloc
initialization earlier, within _dl_sysdep_start's call to dl_main, before
__mach_init is called by _dl_init_first. But malloc initialization uses
getrandom, which needs to make RPCs.
This adds __getrandom_early_init on hurd to express that getrandom needs
__mach_init too. This also adds a guard to avoid making it create several task
and host ports.
Fixes: 25d37948c9f3 ("malloc: Improve malloc initialization")
Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
|
|
In init_cpu_features, replace GLRO(dl_x86_cpu_features) with
cpu_features to avoid an extra load.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
Early versions of GCC 12 didn't support -mtune=neoverse-v2, so use
-mtune=neoverse-v1 instead.
Reported-by: Yury Khrustalev <yury.khrustalev@arm.com>
|
|
Support for the SIOCGIFINDEX ioctl(2) Linux ABI (0x8933 command, called
SIOGIFINDEX in the API originally) was added with kernel version 2.1.14
for AF_INET6 sockets, followed by general support with version 2.1.22.
The Linux API was then updated by adding the current SIOCGIFINDEX name
with kernel version 2.1.68, back in Nov 1997.
All these kernel versions are well below our current default required
minimum of 3.2.0, let alone the higher version requirements of some
platforms.
Drop support for the absence of the SIOCGIFINDEX ioctl(2) in the API or
ABI, by removing arrangements for the ENOSYS error condition. Discard
the indirection from '__if_nameindex' to 'if_nameindex_netlink' and
adjust the implementation of '__if_nametoindex' accordingly for a better
code flow.
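For context, the ioctl(2)-based lookup that can now be relied on
unconditionally looks roughly like this (an illustrative sketch, not
the glibc source; the helper name is made up):

  #include <net/if.h>
  #include <string.h>
  #include <sys/ioctl.h>

  /* Sketch: map an interface name to its index via SIOCGIFINDEX; no
     ENOSYS fallback is needed anymore.  */
  static unsigned int
  name_to_index (int fd, const char *ifname)
  {
    struct ifreq ifr;
    memset (&ifr, 0, sizeof ifr);
    strncpy (ifr.ifr_name, ifname, IFNAMSIZ - 1);
    if (ioctl (fd, SIOCGIFINDEX, &ifr) < 0)
      return 0;
    return ifr.ifr_ifindex;
  }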
|
|
Add a new helper function __ifunc_hwcap() as a portable way to
access HWCAP elements via the parameters passed to an ifunc
resolver. It checks the _IFUNC_ARG_HWCAP bit in the first parameter
and the size of the buffer in the second parameter.
Note that 0 is returned when the requested element is not available
or does not correspond to a valid AT_HWCAP{,2,...} value.
Also add relevant tests.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Add basic support for hwcap3 and hwcap4 in the dynamic loader and
in ifunc resolvers.
Describe the new backward-compatible prototype for GNU indirect
function resolvers that uses a pointer to a uint64_t array instead
of a pointer to the __ifunc_arg_t struct.
This patch also adds the macro _IFUNC_HWCAP_MAX to specify the current
number of hwcap elements.
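A hedged sketch of the array-based resolver form being described (the
array layout is assumed to mirror __ifunc_arg_t: element 0 is the size
in bytes, then AT_HWCAP, AT_HWCAP2 and, with this change, the further
hwcap elements; function names are made up):

  #include <stdint.h>
  #include <asm/hwcap.h>       /* HWCAP2_SVE2 */
  #include <sys/ifunc.h>       /* _IFUNC_ARG_HWCAP, __ifunc_arg_t */

  extern void impl_sve2 (void);
  extern void impl_generic (void);

  static void *
  impl_resolver (uint64_t arg0, const uint64_t arg1[])
  {
    uint64_t hwcap2 = 0;
    /* Only dereference arg1 when the convention says it is valid and
       large enough to contain the AT_HWCAP2 element (assumed layout).  */
    if ((arg0 & _IFUNC_ARG_HWCAP) && arg1[0] >= 3 * sizeof (uint64_t))
      hwcap2 = arg1[2];
    return (hwcap2 & HWCAP2_SVE2) ? (void *) impl_sve2
                                  : (void *) impl_generic;
  }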
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
sparc start.S does not provide the final argument for
__libc_start_main, which is the highest stack address used to
update __libc_stack_end.
This fixes elf/tst-execstack-prog-static-tunable on sparc64.
On sparcv9 this does not happen because the kernel puts an
auxv value, which turns out to point to a value on the stack itself.
Checked on sparc64-linux-gnu.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
Before:
rt_sigaction(SIGBUS, {sa_handler=0x55abb9960139, sa_mask=[], sa_flags=SA_RESTORER|SA_RESETHAND|SA_SIGINFO|0xffffffff00000000, sa_restorer=0x7fb1b2a82050}, NULL, 8) = 0
After:
rt_sigaction(SIGBUS, {sa_handler=0x7f6a70dce139, sa_mask=[], sa_flags=SA_RESTORER|SA_RESETHAND|SA_SIGINFO, sa_restorer=0x7f6a70c28f60}, NULL, 8) = 0
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
The new float and double implementations do not require an
extra function call, and error handling uses the math_err functions,
which results in better performance on i386 as well.
With gcc-14 on an AMD Ryzen 9 5900X, master shows:
$ ./benchtests/bench-ilogb
"ilogb": {
"subnormal": {
"duration": 3.68863e+09,
"iterations": 1.72228e+08,
"max": 89.2995,
"min": 21.016,
"mean": 21.4171
},
"normal": {
"duration": 3.68878e+09,
"iterations": 1.72948e+08,
"max": 78.6065,
"min": 21.127,
"mean": 21.3288
}
}
$ ./benchtests/bench-ilogbf
"ilogbf": {
"subnormal": {
"duration": 3.68835e+09,
"iterations": 1.66716e+08,
"max": 46.953,
"min": 21.793,
"mean": 22.1236
},
"normal": {
"duration": 3.68784e+09,
"iterations": 1.66168e+08,
"max": 46.9715,
"min": 21.904,
"mean": 22.1935
}
}
While with this patch:
$ ./benchtests/bench-ilogb
"ilogb": {
"subnormal": {
"duration": 3.68134e+09,
"iterations": 4.17516e+08,
"max": 32.5045,
"min": 8.3245,
"mean": 8.81723
},
"normal": {
"duration": 3.6677e+09,
"iterations": 6.79468e+08,
"max": 50.9305,
"min": 5.3465,
"mean": 5.3979
}
}
$ ./benchtests/bench-ilogbf
"ilogbf": {
"subnormal": {
"duration": 3.67553e+09,
"iterations": 5.11032e+08,
"max": 35.927,
"min": 7.0485,
"mean": 7.19237
},
"normal": {
"duration": 3.66877e+09,
"iterations": 6.556e+08,
"max": 26.3625,
"min": 5.5315,
"mean": 5.59605
}
}
Checked on i686-linux-gnu.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
It removes the wrapper by moving the error/EDOM handling to an
out-of-line implementation (__math_invalidf_i/__math_invalidf_li).
Also, __glibc_unlikely is used on the error cases since it helps
code generation on recent gcc.
With gcc-14 on aarch64, the code now builds to:
0000000000000000 <__ilogbf>:
0: 1e260000 fmov w0, s0
4: d3577801 ubfx x1, x0, #23, #8
8: 340000e1 cbz w1, 24 <__ilogbf+0x24>
c: 5101fc20 sub w0, w1, #0x7f
10: 7103fc3f cmp w1, #0xff
14: 54000040 b.eq 1c <__ilogbf+0x1c> // b.none
18: d65f03c0 ret
1c: 12b00000 mov w0, #0x7fffffff // #2147483647
20: 14000000 b 0 <__math_invalidf_i>
24: 53175800 lsl w0, w0, #9
28: 340000a0 cbz w0, 3c <__ilogbf+0x3c>
2c: 5ac01000 clz w0, w0
30: 12800fc1 mov w1, #0xffffff81 // #-127
34: 4b000020 sub w0, w1, w0
38: d65f03c0 ret
3c: 320107e0 mov w0, #0x80000001 // #-2147483647
40: 14000000 b 0 <__math_invalidf_i>
Some ABIs require additional adjustments:
* i386 and m68k require the template version, since
both provide __ieee754_ilogb implementations.
* loongarch uses a custom implementation as well.
* powerpc64le also has a custom implementation for POWER9, which
is also used for the float and float128 versions. The generic
e_ilogb.c implementation is moved on powerpc to keep the
current code as-is.
Checked on aarch64-linux-gnu and x86_64-linux-gnu.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
The subnormal exponent calculation invokes UB by left shifting the
signed exponent to find the first leading bit.
The patch reimplements ilogb using the math_config.h macros and
uses the new stdbit.h function to simplify the subnormal handling.
On aarch64 it generates better code:
* master:
0000000000000000 <__ieee754_ilogbf>:
0: 1e260000 fmov w0, s0
4: 12007801 and w1, w0, #0x7fffffff
8: 72091c1f tst w0, #0x7f800000
c: 54000141 b.ne 34 <__ieee754_ilogbf+0x34> // b.any
10: 34000201 cbz w1, 50 <__ieee754_ilogbf+0x50>
14: 53185c21 lsl w1, w1, #8
18: 12800fa0 mov w0, #0xffffff82 // #-126
1c: d503201f nop
20: 531f7821 lsl w1, w1, #1
24: 51000400 sub w0, w0, #0x1
28: 7100003f cmp w1, #0x0
2c: 54ffffac b.gt 20 <__ieee754_ilogbf+0x20>
30: d65f03c0 ret
34: 13177c20 asr w0, w1, #23
38: 12b01002 mov w2, #0x7f7fffff // #2139095039
3c: 5101fc00 sub w0, w0, #0x7f
40: 6b02003f cmp w1, w2
44: 12b00001 mov w1, #0x7fffffff // #2147483647
48: 1a819000 csel w0, w0, w1, ls // ls = plast
4c: d65f03c0 ret
50: 320107e0 mov w0, #0x80000001 // #-2147483647
54: d65f03c0 ret
* patch:
0000000000000000 <__ieee754_ilogbf>:
0: 1e260001 fmov w1, s0
4: d3577820 ubfx x0, x1, #23, #8
8: 350000e0 cbnz w0, 24 <__ieee754_ilogbf+0x24>
c: 53175821 lsl w1, w1, #9
10: 34000141 cbz w1, 38 <__ieee754_ilogbf+0x38>
14: 5ac01021 clz w1, w1
18: 12800fc0 mov w0, #0xffffff81 // #-127
1c: 4b010000 sub w0, w0, w1
20: d65f03c0 ret
24: 7103fc1f cmp w0, #0xff
28: 5101fc00 sub w0, w0, #0x7f
2c: 12b00001 mov w1, #0x7fffffff // #2147483647
30: 1a811000 csel w0, w0, w1, ne // ne = any
34: d65f03c0 ret
38: 320107e0 mov w0, #0x80000001 // #-2147483647
3c: d65f03c0 ret
Other architectures with support for stdc_leading_zeros and/or
__builtin_clzll should have similar improvements.
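The same approach in C, as a hedged sketch (not the committed code;
FP_ILOGB0 is assumed to be -INT_MAX here, matching the disassembly
above, and the function name is made up):

  #include <limits.h>
  #include <stdbit.h>
  #include <stdint.h>
  #include <string.h>

  static int
  ilogbf_sketch (float x)
  {
    uint32_t u;
    memcpy (&u, &x, sizeof u);            /* asuint (x) in math_config.h */
    int e = (u >> 23) & 0xff;             /* biased exponent */
    if (e == 0)                           /* zero or subnormal */
      {
        uint32_t m = u << 9;              /* mantissa, left-aligned */
        if (m == 0)
          return -INT_MAX;                /* FP_ILOGB0 */
        return -127 - (int) stdc_leading_zeros (m);
      }
    return e == 0xff ? INT_MAX : e - 127; /* Inf/NaN, otherwise normal */
  }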
Checked on aarch64-linux-gnu and x86_64-linux-gnu.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
It removes the wrapper by moving the error/EDOM handling to an
out-of-line implementation (__math_invalid_i/__math_invalid_li).
Also, __glibc_unlikely is used on the error cases since it helps
code generation on recent gcc.
With gcc-14 on aarch64, the code now builds to:
0000000000000000 <__ilogb>:
0: 9e660000 fmov x0, d0
4: d374f801 ubfx x1, x0, #52, #11
8: 340000e1 cbz w1, 24 <__ilogb+0x24>
c: 510ffc20 sub w0, w1, #0x3ff
10: 711ffc3f cmp w1, #0x7ff
14: 54000040 b.eq 1c <__ilogb+0x1c> // b.none
18: d65f03c0 ret
1c: 12b00000 mov w0, #0x7fffffff // #2147483647
20: 14000000 b 0 <__math_invalid_i>
24: d374cc00 lsl x0, x0, #12
28: b40000a0 cbz x0, 3c <__ilogb+0x3c>
2c: dac01000 clz x0, x0
30: 12807fc1 mov w1, #0xfffffc01 // #-1023
34: 4b000020 sub w0, w1, w0
38: d65f03c0 ret
3c: 320107e0 mov w0, #0x80000001 // #-2147483647
40: 14000000 b 0 <__math_invalid_i>
Some ABIs require additional adjustments:
* i386 and m68k require the template version, since
both provide __ieee754_ilogb implementations.
* loongarch uses a custom implementation as well.
* powerpc64le also has a custom implementation for POWER9, which
is also used for the float and float128 versions. The generic
e_ilogb.c implementation is moved on powerpc to keep the
current code as-is.
Checked on aarch64-linux-gnu and x86_64-linux-gnu.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
The subnormal exponent calculation invokes UB by left shifting the
signed exponent to find the first leading bit. The implementation
also uses 32-bit operations, which generate suboptimal code on
64-bit architectures.
The patch reimplements ilogb using the math_config.h macros and
uses the new stdbit.h function to simplify the subnormal handling.
On aarch64 it generates better code:
* master:
0000000000000000 <__ieee754_ilogb>:
0: 9e660000 fmov x0, d0
4: d360fc02 lsr x2, x0, #32
8: d360f801 ubfx x1, x0, #32, #31
c: f26c285f tst x2, #0x7ff00000
10: 540001a1 b.ne 44 <__ieee754_ilogb+0x44> // b.any
14: 2a000022 orr w2, w1, w0
18: 34000322 cbz w2, 7c <__ieee754_ilogb+0x7c>
1c: 35000221 cbnz w1, 60 <__ieee754_ilogb+0x60>
20: 2a0003e1 mov w1, w0
24: 7100001f cmp w0, #0x0
28: 12808240 mov w0, #0xfffffbed // #-1043
2c: 540000ad b.le 40 <__ieee754_ilogb+0x40>
30: 531f7821 lsl w1, w1, #1
34: 51000400 sub w0, w0, #0x1
38: 7100003f cmp w1, #0x0
3c: 54ffffac b.gt 30 <__ieee754_ilogb+0x30>
40: d65f03c0 ret
44: 13147c20 asr w0, w1, #20
48: 12b00202 mov w2, #0x7fefffff // #2146435071
4c: 510ffc00 sub w0, w0, #0x3ff
50: 6b02003f cmp w1, w2
54: 12b00001 mov w1, #0x7fffffff // #2147483647
58: 1a819000 csel w0, w0, w1, ls // ls = plast
5c: d65f03c0 ret
60: 53155021 lsl w1, w1, #11
64: 12807fa0 mov w0, #0xfffffc02 // #-1022
68: 531f7821 lsl w1, w1, #1
6c: 51000400 sub w0, w0, #0x1
70: 7100003f cmp w1, #0x0
74: 54ffffac b.gt 68 <__ieee754_ilogb+0x68>
78: d65f03c0 ret
7c: 320107e0 mov w0, #0x80000001 // #-2147483647
80: d65f03c0 ret
* patch:
0000000000000000 <__ieee754_ilogb>:
0: 9e660001 fmov x1, d0
4: d374f820 ubfx x0, x1, #52, #11
8: 350000e0 cbnz w0, 24 <__ieee754_ilogb+0x24>
c: d374cc21 lsl x1, x1, #12
10: b4000141 cbz x1, 38 <__ieee754_ilogb+0x38>
14: dac01021 clz x1, x1
18: 12807fc0 mov w0, #0xfffffc01 // #-1023
1c: 4b010000 sub w0, w0, w1
20: d65f03c0 ret
24: 711ffc1f cmp w0, #0x7ff
28: 510ffc00 sub w0, w0, #0x3ff
2c: 12b00001 mov w1, #0x7fffffff // #2147483647
30: 1a811000 csel w0, w0, w1, ne // ne = any
34: d65f03c0 ret
38: 320107e0 mov w0, #0x80000001 // #-2147483647
3c: d65f03c0 ret
Other architectures with support for stdc_leading_zeros and/or
__builtin_clzll should have similar improvements.
Checked on aarch64-linux-gnu and x86_64-linux-gnu.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Linux 6.15 adds the new syscall open_tree_attr. Update
syscall-names.list and regenerate the arch-syscall.h headers with
build-many-glibcs.py update-syscalls.
Tested with build-many-glibcs.py.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
When using a -mcpu option in CFLAGS, GCC can report errors when building libmvec.
Fix this by overriding both -mcpu and -march with a generic variant with SVE added.
Also use a tune for a modern SVE core.
Reviewed-by: Yury Khrustalev <yury.khrustalev@arm.com>
|
|
Improve memory access and reformat the evaluation scheme to pack coefficients.
5% improvement in throughput microbenchmark on Neoverse V1.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
A corresponding macro has been added to Linux UAPI headers in 6.15.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
This is required after commit 03da41d47dc73674307e6ffc5b75e9043febc698
("Turn on -Wmissing-parameter-name by default if available").
Reviewed-by: Sam James <sam@gentoo.org>
|
|
Due to raising the minimum binutils version to >= 2.28, the cfi_escape
previously used in place of cfi_val_offset can now be replaced by the
real cfi_val_offset directive.
Commit 0fc76d876261ee8253fef198ffec48c832edd4ff
has already adjusted this for the 64-bit part of mcount.
This patch also adjusts it for the 31-bit part of mcount.
Checked with "objdump -WF" / "objdump -Wf" that the previous
cfi_escape and the new cfi_val_offset are equal.
|
|
Fix wcsncpy and wcpncpy typo in ifunc-impl-list.c.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
|
|
Fix typo atanpi2->atan2pi in math-vector.h.
|
|
The -fno-builtin options need to disable the long double builtins.
|
|
Now that we finally support modern GCC and binutils, it's time for a cleanup.
Remove the HAVE_AARCH64_SVE_ASM define and conditional compilation. Remove the
configure checks for SVE, ACLE and variant-PCS support.
Reviewed-by: Yury Khrustalev <yury.khrustalev@arm.com>
|
|
Now that we finally support modern GCC and binutils, it's time for a cleanup.
Use PAC and BTI instructions unconditionally and use proper assembler syntax.
Remove the PR target/94791 strip_pac workarounds for buggy GCCs. Remove the
PAC/BTI configure checks - always emit GNU property notes on assembly files.
Change cfi_window_save to the correct cfi_negate_ra_state unwind directive.
Reviewed-by: Matthieu Longo <matthieu.longo@arm.com>
|
|
Implement double and single precision variants of the C23 routine atan2pi
for both AdvSIMD and SVE.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Implement double and single precision variants of the C23 routine atanpi
for both AdvSIMD and SVE.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Implement double and single precision variants of the C23 routine asinpi
for both AdvSIMD and SVE.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Implement double and single precision variants of the C23 routine acospi
for both AdvSIMD and SVE.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Improve performance of Inverse trig functions by altering how coefficients are
loaded.
Performance improvement on Neoverse V1:
SVE acos 14%
AdvSIMD acos 6%
AdvSIMD asin 6%
SVE asin 5%
AdvSIMD asinf 2%
AdvSIMD atanf 22%
SVE atanf 20%
SVE atan 11%
AdvSIMD atan 5%
SVE atan2 7%
SVE atan2f 4%
AdvSIMD atan2f 3%
AdvSIMD atan2 2%
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Use __thread variables directly instead. The macros do not save any
typing. It seems unlikely that a future port will lack __thread
variable support.
Some of the __libc_tsd_* variables are referenced from assembler
files, so keep their names. Previously, <libc-tls.h> included
<tls.h>, which in turn included <errno.h>, so a few direct includes
of <errno.h> are now required.
Reviewed-by: Frédéric Bérat <fberat@redhat.com>
|
|
The -mabi=ibmlongdouble option was added in GCC 4.2 and thus can be
assumed to always exist.
|
|
Add a test that checks that ZA state is disabled after setjmp and sigsetjmp.
Update the existing SME test that uses setjmp.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Due to the nature of the ZA state, setjmp() should clear it in the
same manner as is already done by longjmp().
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
C23 adds various <math.h> function families originally defined in TS
18661-4. Add the rootn functions, which compute the Yth root of X for
integer Y (with a domain error if Y is 0, even if X is a NaN). The
integer exponent has type long long int in C23; it was intmax_t in TS
18661-4, and as with other interfaces changed after their initial
appearance in the TS, I don't think we need to support the original
version of the interface.
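A short usage sketch of the new interfaces as described (illustrative;
the relevant feature-test macro, here assumed to be _GNU_SOURCE, is
needed to expose the declarations):

  #define _GNU_SOURCE 1
  #include <math.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* rootn (x, n) computes the nth root of x, with n a long long int.  */
    double r  = rootn (2.0, 3);           /* cube root of 2 */
    float  rf = rootnf (32.0f, 5);        /* 2.0f */
    printf ("%g %g\n", r, (double) rf);
    /* rootn (x, 0) is a domain error, even if x is a NaN.  */
    return 0;
  }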
As with pown and compoundn, I strongly encourage searching for worst
cases for ulps error for these implementations (necessarily
non-exhaustively, given the size of the input space). I also expect a
custom implementation for a given format could be much faster as well
as more accurate, although the implementation is simpler than those
for pown and compoundn.
This completes adding to glibc those TS 18661-4 functions (ignoring
DFP) that are included in C23. See
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118592 regarding the C23
mathematical functions (not just the TS 18661-4 ones) missing built-in
functions in GCC, where such functions might usefully be added.
Tested for x86_64 and x86, and with build-many-glibcs.py.
|