path: root/sysdeps/aarch64
8 days  replace tgammaf by the CORE-MATH implementation  [Paul Zimmermann, 1 file, -4/+0]
The CORE-MATH implementation is correctly rounded (for any rounding mode). This can be checked by exhaustive tests in a few minutes, since there are fewer than 2^32 values to check against a reference such as GNU MPFR. This patch also adds some bench values for tgammaf.

Tested on x86_64 and x86 (cfarm26).

With the initial GNU libc code it gave on an Intel(R) Core(TM) i7-8700:
"tgammaf": { "": { "duration": 3.50188e+09, "iterations": 2e+07, "max": 602.891, "min": 65.1415, "mean": 175.094 } }

With the new code:
"tgammaf": { "": { "duration": 3.30825e+09, "iterations": 5e+07, "max": 211.592, "min": 32.0325, "mean": 66.1649 } }

With the initial GNU libc code it gave on cfarm26 (i686):
"tgammaf": { "": { "duration": 3.70505e+09, "iterations": 6e+06, "max": 2420.23, "min": 243.154, "mean": 617.509 } }

With the new code:
"tgammaf": { "": { "duration": 3.24497e+09, "iterations": 1.8e+07, "max": 1238.15, "min": 101.155, "mean": 180.276 } }

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>

Changes in v2:
- include <math.h> (fix the linknamespace failures)
- restored original benchtests/strcoll-inputs/filelist#en_US.UTF-8 file
- restored original wrapper code (math/w_tgammaf_compat.c), except for the handling of the sign
- removed the tgammaf/float entries in all libm-test-ulps files
- address other comments from Joseph Myers (https://sourceware.org/pipermail/libc-alpha/2024-July/158736.html)

Changes in v3:
- pass NULL argument for signgam from w_tgammaf_compat.c
- use of math_narrow_eval
- added more comments

Changes in v4:
- initialize local_signgam to 0 in math/w_tgamma_template.c
- replace sysdeps/ieee754/dbl-64/gamma_productf.c by a dummy file

Changes in v5:
- do not mention local_signgam any more in math/w_tgammaf_compat.c
- initialize local_signgam to 1 instead of 0 in w_tgamma_template.c and added comment

Changes in v6:
- pass NULL as 2nd argument of __ieee754_gammaf_r in w_tgammaf_compat.c, and check for NULL in e_gammaf_r.c

Changes in v7:
- added Signed-off-by line for Alexei Sibidanov (author of the code)

Changes in v8:
- added Signed-off-by line for Paul Zimmermann (submitter of the patch)

Changes in v9:
- address comments from review by Adhemerval Zanella

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
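A minimal sketch of such an exhaustive check, assuming GNU MPFR is available as the reference (illustrative only; this is not the CORE-MATH or glibc test harness), might look like:

/* Illustrative sketch: compare tgammaf against an MPFR reference for
   every one of the 2^32 float bit patterns.  Build with something like
   "gcc check.c -lmpfr -lgmp -lm".  Note: rounding the 64-bit MPFR result
   down to float can double-round in rare cases, so a real harness needs
   more care than this sketch takes.  */
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <mpfr.h>

int
main (void)
{
  mpfr_t ref;
  mpfr_init2 (ref, 64);
  uint32_t u = 0;
  do
    {
      float x, got, want;
      memcpy (&x, &u, sizeof x);
      got = tgammaf (x);
      mpfr_set_flt (ref, x, MPFR_RNDN);
      mpfr_gamma (ref, ref, MPFR_RNDN);
      want = mpfr_get_flt (ref, MPFR_RNDN);
      if (got != want && !(isnan (got) && isnan (want)))
        printf ("mismatch at %a: got %a, reference %a\n", x, got, want);
    }
  while (++u != 0);
  mpfr_clear (ref);
  return 0;
}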
2024-09-23  AArch64: Simplify rounding-multiply pattern in several AdvSIMD routines  [Joe Ramsay, 5 files, -38/+30]
This operation can be simplified to a multiply-round-convert sequence, which uses fewer instructions and constants.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
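A rough sketch of the two patterns with arm_neon.h intrinsics (a hedged illustration of the idea only, not the actual old or new glibc code):

#include <arm_neon.h>

/* Old-style rounding trick: add a large "shift" constant so the wanted
   integer lands in the low mantissa bits, then subtract it back out;
   this needs the extra constant and an extra dependent operation.  */
static inline float32x4_t
round_via_shift (float32x4_t x, float32x4_t shift)
{
  return vsubq_f32 (vaddq_f32 (x, shift), shift);
}

/* Multiply-round-convert: one rounding instruction (FRINTA) plus one
   convert (FCVTZS), no shift constant required.  */
static inline int32x4_t
multiply_round_convert (float32x4_t x, float32x4_t scale)
{
  float32x4_t n = vrndaq_f32 (vmulq_f32 (x, scale));
  return vcvtq_s32_f32 (n);
}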
2024-09-23  AArch64: Improve codegen in users of ADVSIMD expm1f helper  [Joe Ramsay, 4 files, -91/+58]
Rearrange operations so a MOV is not necessary in the reduction or around the special-case handler. Reduce memory access by using more indexed MLAs in the polynomial.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2024-09-23  AArch64: Improve codegen in users of AdvSIMD log1pf helper  [Joe Ramsay, 5 files, -139/+146]
log1pf is quite register-intensive - use fewer registers for the polynomial, and make various changes to shorten dependency chains in parent routines. There is now no spilling with GCC 14. Accuracy moves around a little - comments are adjusted accordingly, but this does not require regen-ulps.

Use the helper in log1pf as well, instead of having separate implementations. The more accurate polynomial means special-casing can be simplified, and the shorter dependency chain avoids the usual dance around v0, which is otherwise difficult.

There is a small duplication of vectors containing 1.0f (or 0x3f800000) - GCC is not currently able to efficiently handle values which fit in FMOV but not MOVI, and are reinterpreted to integer. There may be potential for more optimisation if this is fixed.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2024-09-23  AArch64: Improve codegen in SVE F32 logs  [Joe Ramsay, 3 files, -47/+69]
Reduce MOVPRFXs by using unpredicated (non-destructive) instructions where possible. Similar to the recent change to AdvSIMD F32 logs, adjust special-case arguments and bounds to allow for more optimal register usage. For all 3 routines one MOVPRFX remains in the reduction, which cannot be avoided as immediate AND and ASR are both destructive. Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2024-09-23  AArch64: Improve codegen in SVE expf & related routines  [Joe Ramsay, 5 files, -148/+136]
Reduce MOV and MOVPRFX by improving special-case handling. Use an inline helper to duplicate the entire computation between the special- and non-special-case branches, removing the contention for z0 between x and the return value.

Also rearrange some MLAs and MLSs - by making the multiplicand the destination we can avoid a MOVPRFX in several cases. Also change which constants go in the vector used for lanewise ops - the last lane is no longer wasted.

Spotted that the shift was incorrect in exp2f and exp10f, w.r.t. the comment that explains it. Fixed - worst-case ULP for exp2f moves around but it doesn't change significantly for either routine.

Worst-case error for coshf increases due to passing x to exp rather than abs(x) - updated the comment, but this does not require regen-ulps.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2024-09-19  AArch64: Add vector logp1 alias for log1p  [Joe Ramsay, 7 files, -0/+25]
This enables vectorisation of C23 logp1, which is an alias for log1p. There are no new tests or ulp entries because the new symbols are simply aliases. Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2024-09-10  AArch64: Remove memset-reg.h  [Wilco Dijkstra, 6 files, -35/+28]
Remove memset-reg.h by moving register definitions into the memset implementations. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2024-09-09  AArch64: Optimize memset  [Wilco Dijkstra, 1 file, -111/+84]
Improve small memsets by avoiding branches and use overlapping stores. Use DC ZVA for copies over 128 bytes. Remove unnecessary code for ZVA sizes other than 64 and 128. Performance of random memset benchmark improves by 24% on Neoverse N1. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
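The "overlapping stores" idea for small sizes can be sketched in C roughly as follows (hypothetical helper with made-up size thresholds; the real routine is hand-written assembly):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Fill 1..16 bytes with at most two or three possibly-overlapping
   stores, so there is no branch tree over the exact size.  */
static void
small_memset (void *dst, int c, size_t n)   /* assumes 1 <= n <= 16 */
{
  uint64_t v = 0x0101010101010101ULL * (uint8_t) c;
  char *p = dst;
  if (n >= 8)
    {
      memcpy (p, &v, 8);             /* first 8 bytes */
      memcpy (p + n - 8, &v, 8);     /* last 8 bytes, may overlap */
    }
  else if (n >= 4)
    {
      uint32_t w = (uint32_t) v;
      memcpy (p, &w, 4);
      memcpy (p + n - 4, &w, 4);
    }
  else
    {
      p[0] = c;
      p[n / 2] = c;                  /* covers n == 2 and n == 3 */
      p[n - 1] = c;
    }
}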
2024-09-09  aarch64: Avoid redundant MOVs in AdvSIMD F32 logs  [Joe Ramsay, 3 files, -45/+72]
Since the last operation is destructive, the first argument to the FMA also has to be the first argument to the special-case in order to avoid unnecessary MOVs. Reorder arguments and adjust special-case bounds to facilitate this. Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2024-08-07  aarch64: Regenerate ULPs  [Adhemerval Zanella, 1 file, -32/+32]
From new tests added by 07972839108495245d8b93ca546462b3f4dad47f.
2024-08-07  AArch64: Improve generic strlen  [Wilco Dijkstra, 1 file, -12/+27]
Improve performance by handling another 16 bytes before entering the loop. Use ADDHN in the loop to avoid SHRN+FMOV when it terminates. Change final size computation to avoid increasing latency. On Neoverse V1 performance of the random strlen benchmark improves by 4.6%. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2024-07-25  aarch64: Regenerate ULPs  [Adhemerval Zanella, 1 file, -4/+4]
From new tests added by 4dc22baa84bdb4111c0ac0db7139bf9ab953bf61.
2024-06-30  Aarch64: Add new memset for Qualcomm's oryon-1 core  [Andrew Pinski, 4 files, -0/+176]
Qualcomm's new core, oryon-1, has different characteristics for memset than the current versions of memset. For non-zero, larger sizes, using GPRs rather than the SIMD stores is ~30% faster. For even larger sizes, using the nontemporal stores is needed not to pollute the L1/L2 caches. For zero values, `dc zva` should be used. Since we know the size will always be 64 bytes, we don't need to figure out the size there.

I started with the emag memset and added back the `dc zva` code.

Changes since v1:
* v3: Fix comment formatting

Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2024-06-30  Aarch64: Add memcpy for qualcomm's oryon-1 core  [Andrew Pinski, 5 files, -0/+316]
Qualcomm's new core (oryon-1) has different performance characteristics than other cores. For memcpy, it is faster to use the GPRs to do the copy for large sizes (2x faster). For even larger sizes, it is better to use the nontemporal load/store instructions so we don't pollute the L1/L2 caches. For smaller sizes, the characteristics are very similar to those of other cores.

I used the thunderx memcpy as a starting point and expanded from there.

Changes since v1:
* v2: Fix ordering in Makefile.
* v3: Fix comment grammar about the ldnp/stnp instructions.

Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2024-06-18  aarch64: Update ulps  [Adhemerval Zanella, 1 file, -0/+60]
For the exp10m1, exp2m1, and log10p1 implementations.
2024-06-17  Convert to autoconf 2.72 (vanilla release, no distribution patches)  [Andreas K. Hüttel, 1 file, -78/+77]
As discussed at the patch review meeting Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org> Reviewed-by: Simon Chopin <simon.chopin@canonical.com>
2024-06-17  Implement C23 logp1  [Joseph Myers, 1 file, -0/+20]
C23 adds various <math.h> function families originally defined in TS 18661-4. Add the logp1 functions (aliases for log1p functions - the name is intended to be more consistent with the new log2p1 and log10p1, where clearly it would have been very confusing to name those functions log21p and log101p). As aliases rather than new functions, the content of this patch is somewhat different from those actually adding new functions.

Tests are shared with log1p, so this patch *does* mechanically update all affected libm-test-ulps files to expect the same errors for both functions.

The vector versions of log1p on aarch64 and x86_64 are *not* updated to have logp1 aliases (and thus there are no corresponding header, tests, abilist or ulps changes for vector functions either). It would be reasonable for such vector aliases and corresponding changes to other files to be made separately. For now, the log1p tests instead avoid testing logp1 in the vector case (a Makefile change is needed to avoid problems with grep, used in generating the .c files for vector function tests, matching more than one ALL_RM_TEST line in a file testing multiple functions with the same inputs, when it assumes that the .inc file only has a single such line).

Tested for x86_64 and x86, and with build-many-glibcs.py.
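The alias mechanism itself can be illustrated with a plain GCC attribute (hypothetical names; glibc uses its own alias macros rather than spelling the attribute out, and the placeholder body below is of course not the real log1p algorithm):

#include <math.h>

/* One shared implementation.  */
double
my_log1p_impl (double x)
{
  /* Naive placeholder body, only to make the example self-contained.  */
  return log (1.0 + x);
}

/* Two exported names resolving to the same code, so no separate tests
   or ulps entries are needed for the second name.  */
double my_log1p (double) __attribute__ ((alias ("my_log1p_impl")));
double my_logp1 (double) __attribute__ ((alias ("my_log1p_impl")));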
2024-05-23  aarch64: Remove duplicate memchr/strlen in libc.a (BZ 31777)  [Adhemerval Zanella, 2 files, -0/+6]
The generic version provides weak definitions of memchr/strlen, which are already provided by the ifunc resolvers. Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-05-21  aarch64/fpu: Add vector variants of pow  [Joe Ramsay, 20 files, -12/+2231]
Plus a small amount of moving includes around in order to be able to remove duplicate definition of asuint64. Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2024-05-20  aarch64: Update ulps  [Adhemerval Zanella, 1 file, -0/+20]
For the log2p1 implementation.
2024-05-16  aarch64/fpu: Add vector variants of cbrt  [Joe Ramsay, 13 files, -0/+521]
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2024-05-16  aarch64/fpu: Add vector variants of hypot  [Joe Ramsay, 13 files, -0/+324]
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2024-05-14  aarch64: Fix AdvSIMD libmvec routines for big-endian  [Joe Ramsay, 17 files, -85/+119]
Previously many routines used a plain pointer dereference (*) to load from vector types stored in the data table. This is emitted as ldr, which byte-swaps the entire vector register, and causes bugs for big-endian when not all lanes contain the same value. When a vector is to be used this way, it has been replaced with an array and the load with an explicit ld1 intrinsic, which byte-swaps only within lanes.

As well, many routines previously used non-standard GCC syntax for vector operations, such as indexing into vector types with [] and assembling vectors using {}. This syntax should not be mixed with ACLE, as the former does not respect endianness whereas the latter does. Such examples have been replaced with, for instance, vcombine_* and vgetq_lane* intrinsics. Helpers which only use the GCC syntax, such as the v_call helpers, do not need changing as they do not use intrinsics.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
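A hedged illustration of the difference (not the actual glibc table code), assuming arm_neon.h:

#include <arm_neon.h>

static const float table[4] = { 1.0f, 2.0f, 3.0f, 4.0f };

/* Loading via a plain dereference of a vector pointer is emitted as LDR,
   which byte-swaps the whole register on big-endian:
     float32x4_t v = *(const float32x4_t *) table;   // lane order wrong on BE
   An explicit ld1 intrinsic byte-swaps only within lanes:  */
static inline float32x4_t
load_lanes (void)
{
  return vld1q_f32 (table);
}

/* Likewise, ACLE lane accessors respect endianness, unlike GNU-style
   indexing such as v[0]:  */
static inline float
first_lane (float32x4_t v)
{
  return vgetq_lane_f32 (v, 0);
}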
2024-05-07  elf: Only process multiple tunable once (BZ 31686)  [Adhemerval Zanella, 1 file, -0/+4]
The 680c597e9c3 commit made the loader reject ill-formatted strings by first tracking all set tunables and then applying them. However, it does not take into consideration if the same tunable is set multiple times, where parse_tunables_string appends the found tunable without checking if it was already in the list. It leads to a stack-based buffer overflow if the tunable is specified more times than the total number of tunables. For instance:

GLIBC_TUNABLES=glibc.malloc.check=2:... (repeated more times than the total number of supported tunables)

Instead, use the index of the tunable list to get the expected tunable entry. Since the initial list is now zero-initialized, the compiler might emit an extra memset and this requires some minor adjustment on some ports.

Checked on x86_64-linux-gnu and aarch64-linux-gnu.

Reported-by: Yuto Maeda <maeda@cyberdefense.jp>
Reported-by: Yutaro Shimizu <shimizu@cyberdefense.jp>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2024-04-30  AArch64: Remove unused defines of CPU names  [Wilco Dijkstra, 1 file, -7/+0]
Remove unused defines of CPU names in cpu-features.h. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2024-04-08  aarch64: Enhanced CPU diagnostics for ld.so  [Florian Weimer, 1 file, -0/+84]
This prints some information from struct cpu_features, and the midr_el1 and dczid_el0 system register contents on every CPU. Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2024-04-04  aarch64: Remove ld.so __tls_get_addr plt usage  [Adhemerval Zanella, 1 file, -1/+2]
Use the hidden alias instead. Checked on aarch64-linux-gnu.
2024-04-04  aarch64/fpu: Add vector variants of erfc  [Joe Ramsay, 16 files, -1/+4892]
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2024-04-04  aarch64/fpu: Add vector variants of tanh  [Joe Ramsay, 13 files, -1/+374]
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2024-04-04  aarch64/fpu: Add vector variants of sinh  [Joe Ramsay, 15 files, -0/+567]
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2024-04-04  aarch64/fpu: Add vector variants of atanh  [Joe Ramsay, 13 files, -0/+283]
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2024-04-04  aarch64/fpu: Add vector variants of asinh  [Joe Ramsay, 13 files, -0/+484]
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2024-04-04  aarch64/fpu: Add vector variants of acosh  [Joe Ramsay, 18 files, -0/+648]
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2024-04-04  aarch64/fpu: Add vector variants of cosh  [Joe Ramsay, 17 files, -1/+643]
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2024-04-04  aarch64/fpu: Add vector variants of erf  [Joe Ramsay, 18 files, -1/+4526]
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2024-03-21  AArch64: Check kernel version for SVE ifuncs  [Wilco Dijkstra, 4 files, -2/+5]
Old Linux kernels disable SVE after every system call. Calling the SVE-optimized memcpy afterwards will then cause a trap to reenable SVE. As a result, applications with a high use of syscalls may run slower with the SVE memcpy. This affects kernels from 4.15.0 up to (but not including) 6.2.0, except for 5.14.0, which was patched. Avoid this by checking the kernel version and selecting the SVE ifunc on modern kernels.

Parse the kernel version reported by uname() into a 24-bit kernel.major.minor value without calling any library functions. If uname() is not supported or if the version format is not recognized, assume the kernel is modern.

Tested-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
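A sketch of this kind of parsing (illustrative only; function name and error handling are hypothetical, not the exact glibc code):

/* Parse a "major.minor.patch" release string into a 24-bit value using
   no library calls, e.g. "6.2.0" -> 0x060200.  A real implementation
   also needs to detect an unrecognized format and fall back to assuming
   a modern kernel, which this sketch omits.  */
static int
parse_kernel_version (const char *s)
{
  int version = 0;
  for (int i = 0; i < 3; i++)
    {
      int part = 0;
      while (*s >= '0' && *s <= '9')
        part = part * 10 + (*s++ - '0');
      version = (version << 8) | (part & 0xff);
      if (*s == '.')
        s++;
    }
  return version;
}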
2024-03-19  elf: Enable TLS descriptor tests on aarch64  [Adhemerval Zanella, 1 file, -0/+1]
The aarch64 uses 'trad' for traditional tls and 'desc' for tls descriptors, but unlike other targets it defaults to 'desc'. The gnutls2 configure check does not set aarch64 as an ABI that uses TLS descriptors, which then disables some tests. Also rename the internal machinery from gnu2 to tls descriptors.

Checked on aarch64-linux-gnu.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-03-14  aarch64: fix check for SVE support in assembler  [Szabolcs Nagy, 2 files, -4/+6]
Due to GCC bug 110901, -mcpu can override the -march setting when compiling asm code, and thus a compiler targeting a specific cpu can fail the configure check even when binutils gas supports SVE. The workaround is that an explicit .arch directive overrides both -mcpu and -march, and since that's what the actual SVE memcpy uses, the configure check should use that too even if the GCC issue is fixed independently.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2024-02-26  aarch64/fpu: Sync libmvec routines from 2.39 and before with AOR  [Joe Ramsay, 18 files, -105/+111]
This includes a fix for big-endian in AdvSIMD log, some cosmetic changes, and numerous small optimisations mainly around inlining and using indexed variants of MLA intrinsics. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2024-02-01  string: Use builtins for ffs and ffsll  [Adhemerval Zanella Netto, 1 file, -0/+2]
It allows removing a lot of arch-specific implementations. Checked on x86_64, aarch64, powerpc64.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2024-02-01  Refer to C23 in place of C2X in glibc  [Joseph Myers, 1 file, -1/+1]
WG14 decided to use the name C23 as the informal name of the next revision of the C standard (notwithstanding the publication date in 2024). Update references to C2X in glibc to use the C23 name. This is intended to update everything *except* where it involves renaming files (the changes involving renaming tests are intended to be done separately). In the case of the _ISOC2X_SOURCE feature test macro - the only user-visible interface involved - support for that macro is kept for backwards compatibility, while adding _ISOC23_SOURCE. Tested for x86_64.
2024-01-04  aarch64: Make cpu-features definitions not Linux-specific  [Sergey Bugaev, 2 files, -0/+110]
These describe generic AArch64 CPU features, and are not tied to a kernel-specific way of determining them. We can share them between the Linux and Hurd AArch64 ports. Signed-off-by: Sergey Bugaev <bugaevc@gmail.com> Message-ID: <20240103171502.1358371-13-bugaevc@gmail.com>
2024-01-02  aarch64: Add longjmp test for SME  [Szabolcs Nagy, 2 files, -0/+283]
Includes a test for setcontext too. The test directly checks after longjmp whether ZA got disabled and the ZA contents got saved following the lazy saving scheme. It does not use ACLE code to verify that gcc can interoperate with glibc.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2024-01-02  aarch64: Add longjmp support for SME  [Szabolcs Nagy, 1 file, -0/+22]
For the ZA lazy saving scheme to work, longjmp has to call __libc_arm_za_disable. In ld.so we assume ZA is not used so longjmp does not need special support there. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2024-01-02  aarch64: Add SME runtime support  [Szabolcs Nagy, 3 files, -3/+129]
The runtime support routines for the call ABI of the Scalable Matrix Extension (SME) are mostly in libgcc. Since libc.so cannot depend on libgcc_s.so, have an implementation of __arm_za_disable in libc for libc-internal use in longjmp and similar APIs.

__libc_arm_za_disable follows the same PCS rules as __arm_za_disable, but it's a hidden symbol so it does not need variant PCS marking. Using __libc_fatal instead of abort because it can print a message and works in ld.so too. But for now we don't need SME routines in ld.so.

To check the SME HWCAP in asm, we need the _dl_hwcap2 member offset in _rtld_global_ro in the shared libc.so, while in libc.a the _dl_hwcap2 object is accessed directly.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2024-01-01  Update copyright dates with scripts/update-copyrights  [Paul Eggert, 223 files, -223/+223]
2023-12-20  aarch64: Add SIMD attributes to math functions with vector versions  [Joe Ramsay, 2 files, -0/+113]
Added annotations for autovec by GCC and GFortran - this enables GCC >= 9 to autovectorise math calls at -Ofast. Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
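A hedged illustration of the kind of annotation involved (my_cos is a stand-in name; glibc's math.h wraps the real declarations in its own macros):

/* A declaration like this tells GCC the function has vector variants
   following the vector ABI, so a loop calling it can be auto-vectorised
   at -Ofast into calls to the vector version.  */
__attribute__ ((__simd__ ("notinbranch"))) double my_cos (double x);

void
apply (double *out, const double *in, int n)
{
  for (int i = 0; i < n; i++)
    out[i] = my_cos (in[i]);   /* may become a vector-variant call */
}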
2023-12-20  aarch64: Add half-width versions of AdvSIMD f32 libmvec routines  [Joe Ramsay, 18 files, -14/+108]
Compilers may emit calls to 'half-width' routines (two-lane single-precision variants). These have been added in the form of wrappers around the full-width versions, where the low half of the vector is simply duplicated. This will perform poorly when one lane triggers the special-case handler, as there will be a redundant call to the scalar version; however, this is expected to be rare at -Ofast.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
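A sketch of such a wrapper (hypothetical function names, assuming arm_neon.h):

#include <arm_neon.h>

/* Hypothetical full-width (four-lane) routine provided elsewhere.  */
float32x4_t full_width_sinf (float32x4_t x);

/* Half-width (two-lane) wrapper: duplicate the low half, call the
   full-width routine, return the low half of its result.  If one lane
   needs the special-case path, the scalar fallback also runs for the
   duplicated lanes, which is the redundancy noted above.  */
float32x2_t
half_width_sinf (float32x2_t x)
{
  float32x4_t wide = vcombine_f32 (x, x);
  return vget_low_f32 (full_width_sinf (wide));
}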
2023-12-05  aarch64: correct CFI in rawmemchr (bug 31113)  [Andreas Schwab, 1 file, -1/+1]
The .cfi_return_column directive changes the return column for the whole FDE range. But the actual intent is to tell the unwinder that the value in x30 (lr) now resides in x15 after the move, and that is expressed by the .cfi_register directive.