path: root/sysdeps
Age | Commit message | Author | Files | Lines
2020-04-17 | i686: Add INTERNAL_SYSCALL_NCS 6 argument support | Adhemerval Zanella | 1 | -30/+49
It is required for i686 BZ#12683 support when building with -Os or -fno-omit-frame-pointer on some gcc versions. It is not used in current code. Checked on i686-linux-gnu.
2020-04-15 | Linux: Remove <sys/sysctl.h> and the sysctl function | Florian Weimer | 10 | -124/+46
Linux 5.5 removed the system call in commit 61a47c1ad3a4dc6882f01ebdc88138ac62d0df03 ("Linux: Remove <sys/sysctl.h>"). Therefore, the compat function is just a stub that sets ENOSYS. Due to SHLIB_COMPAT, new ports will no longer add the sysctl function automatically. x32 already lacks the sysctl function, so an empty sysctl.c file is used to suppress it. Otherwise, a new compat symbol would be added. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
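For illustration, the compat stub described above amounts to a function that performs no system call and only reports ENOSYS. A minimal, self-contained sketch of the idea (the function name is invented for the example; it is not the actual glibc symbol):

    #include <errno.h>
    #include <stddef.h>

    /* Stand-in for the removed sysctl: always fail with ENOSYS.  */
    int
    compat_sysctl_stub (int *name, int nlen, void *oldval, size_t *oldlenp,
                        void *newval, size_t newlen)
    {
      (void) name; (void) nlen; (void) oldval;
      (void) oldlenp; (void) newval; (void) newlen;
      errno = ENOSYS;
      return -1;
    }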
2020-04-14 | linux: wait4: Fix incorrect return value comparison | Alistair Francis | 1 | -10/+9
Patch 600f00b "linux: Use long time_t for wait4/getrusage" introduced two bugs:
- The usage32 struct was set even if the wait4 syscall returned an error.
- For 32-bit systems, the usage struct was set even if it was specified as NULL.
This patch fixes both issues.
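A minimal, self-contained sketch of the corrected behaviour (struct and helper names are invented for the example; this is not the glibc source): the 64-bit rusage is copied back only when the syscall succeeded and the caller passed a non-NULL buffer.

    #include <stddef.h>

    struct rusage64_sketch { long long ru_utime_sec; long long ru_stime_sec; };
    struct rusage32_sketch { int ru_utime_sec; int ru_stime_sec; };

    static void
    rusage64_to_rusage32 (const struct rusage64_sketch *r64,
                          struct rusage32_sketch *r32)
    {
      r32->ru_utime_sec = (int) r64->ru_utime_sec;
      r32->ru_stime_sec = (int) r64->ru_stime_sec;
    }

    /* Finish a wait4-style call: both checks below (the error check and
       the NULL check) were the ones missing before the fix.  */
    int
    finish_wait4_sketch (int syscall_ret, const struct rusage64_sketch *r64,
                         struct rusage32_sketch *usage)
    {
      if (syscall_ret >= 0 && usage != NULL)
        rusage64_to_rusage32 (r64, usage);
      return syscall_ret;
    }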
2020-04-14 | hurd: add mach_print function | Samuel Thibault | 1 | -0/+1
* mach/Versions (GLIBC_2.32): Add mach_print.
* sysdeps/mach/hurd/i386/libc.abilist (GLIBC_2.32): Add mach_print.
2020-04-13 | x32: Properly pass long to syscall [BZ #25810] | H.J. Lu | 2 | -6/+25
X32 has 32-bit long and pointer with 64-bit off_t. Since the x32 psABI requires that pointers passed in registers must be zero-extended to 64 bits, x32 can share many syscall interfaces with LP64. When an LP64 syscall with long and unsigned long arguments is used for x32, these arguments must be properly extended to 64 bits. Otherwise, if the upper 32 bits of the register have an undefined value, such a syscall will be rejected by the kernel. Enforce zero-extension for pointer and array system call arguments. For integer types, extend to int64_t (the full register) using a regular cast, resulting in zero or sign extension based on the signedness of the original type. For
    void *mmap (void *addr, size_t length, int prot, int flags, int fd, off_t offset);
we now generate:
     0:  41 f7 c1 ff 0f 00 00   test   $0xfff,%r9d
     7:  75 1f                  jne    28 <__mmap64+0x28>
     9:  48 63 d2               movslq %edx,%rdx
     c:  89 f6                  mov    %esi,%esi
     e:  4d 63 c0               movslq %r8d,%r8
    11:  4c 63 d1               movslq %ecx,%r10
    14:  b8 09 00 00 40         mov    $0x40000009,%eax
    19:  0f 05                  syscall
That is:
1. addr is unchanged.
2. length is zero-extended to 64 bits.
3. prot is sign-extended to 64 bits.
4. flags is sign-extended to 64 bits.
5. fd is sign-extended to 64 bits.
6. offset is unchanged.
For int arguments, since the kernel uses only the lower 32 bits and ignores the upper 32 bits in 64-bit registers, these work correctly. Tested on x86-64 and x32. There are no code changes on x86-64.
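The extension rule can be illustrated with a small, self-contained example (the macro names are made up for the illustration and are not the glibc syscall macros): a plain cast to a 64-bit integer sign- or zero-extends according to the argument's own signedness, while pointers are always zero-extended through uintptr_t.

    #include <stdint.h>
    #include <stdio.h>

    #define WIDEN_INT(x)  ((int64_t) (x))                /* sign/zero per type */
    #define WIDEN_PTR(p)  ((uint64_t) (uintptr_t) (p))   /* always zero-extend */

    int
    main (void)
    {
      unsigned int length = 0x80000000u;  /* like mmap's size_t on x32 */
      int flags = -1;                     /* a negative int argument */
      char buf[16];

      printf ("%016llx\n", (unsigned long long) WIDEN_INT (length)); /* 0000000080000000 */
      printf ("%016llx\n", (unsigned long long) WIDEN_INT (flags));  /* ffffffffffffffff */
      printf ("%016llx\n", (unsigned long long) WIDEN_PTR (buf));    /* upper bits zero */
      return 0;
    }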
2020-04-09 | Update kernel version to 5.6 in tst-mman-consts.py. | Joseph Myers | 1 | -1/+1
This patch updates the kernel version in the test tst-mman-consts.py to 5.6. (There are no new constants covered by this test in 5.6 that need any other header changes.) Tested with build-many-glibcs.py.
2020-04-08 | Update mips libm-test-ulps | Adhemerval Zanella | 2 | -54/+56
2020-04-08 | Update alpha libm-test-ulps | Adhemerval Zanella | 1 | -27/+28
2020-04-08 | Update ia64 libm-test-ulps | Adhemerval Zanella | 1 | -15/+17
2020-04-08 | Update sparc libm-test-ulps | Adhemerval Zanella | 1 | -27/+28
2020-04-08 | Update arm libm-test-ulps | Adhemerval Zanella | 1 | -27/+28
2020-04-08 | Update aarch64 libm-test-ulps | Adhemerval Zanella | 1 | -15/+16
2020-04-07 | powerpc: Update ULPs and xfail more ibm128 outputs | Tulio Magno Quites Machado Filho | 1 | -16/+18
There are 2 new input values that need to be marked as xfail-rounding:ibm128-libgcc, as they are known to fail because of libgcc issues with different rounding modes. The other tests just need ULP increases.
2020-04-07 | i386: Remove build support for GCC older than GCC 6 | H.J. Lu | 3 | -52/+4
Since GCC 6.2 or later is required to build glibc, remove build support for GCC older than GCC 6. Tested with GCC 6.4 and GCC 9.3. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2020-04-06 | Update hppa libm-test-ulps | John David Anglin | 1 | -27/+27
2020-04-06 | y2038: linux: Provide __mq_timedreceive_time64 implementation | Lukasz Majewski | 1 | -4/+47
This patch provides a new explicit 64-bit function, __mq_timedreceive_time64, for receiving messages with an absolute timeout. Moreover, the 32-bit version, __mq_timedreceive, has been refactored to use __mq_timedreceive_time64 internally. __mq_timedreceive is now supposed to be used on systems still supporting 32-bit time (__TIMESIZE != 64), hence the necessary conversion from struct timespec to the 64-bit struct __timespec64. The new mq_timedreceive_time64 syscall, available from Linux 5.1+, is used when applicable. As this wrapper function is also used internally in glibc, e.g. to provide the mq_receive implementation, an explicit check for abs_timeout being NULL has been added due to the conversions between struct timespec and struct __timespec64. Before this change the Linux kernel handled this NULL pointer.
Build tests:
- ./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32-bit systems (qemu): https://github.com/lmajewski/meta-y2038 and run tests: https://github.com/lmajewski/y2038-tests/commits/master
Linux kernel, headers and minimal kernel version for the glibc build test matrix:
- Linux v5.1 (with mq_timedreceive_time64) and glibc built with v5.1 as the minimal kernel version (--enable-kernel="5.1.0"): the __ASSUME_TIME64_SYSCALLS flag is defined.
- Linux v5.1 and the default minimal kernel version: __ASSUME_TIME64_SYSCALLS is not defined, but the kernel supports the mq_timedreceive_time64 syscall.
- Linux v4.19 (no mq_timedreceive_time64 support) with the default minimal kernel version for contemporary glibc (3.2.0): this kernel doesn't support the mq_timedreceive_time64 syscall, so the fallback to mq_timedreceive is tested.
The above tests were performed with Y2038 redirection applied as well as without (so the __TIMESIZE != 64 execution path is checked as well). Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
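A hedged, self-contained sketch of the wrapper pattern described above (the struct and function names are illustrative and do not match the glibc internals): the 32-bit entry point widens the caller's timespec and forwards it, keeping an explicit NULL check for the "no timeout" case.

    #include <stddef.h>
    #include <sys/types.h>
    #include <time.h>

    struct timespec64_sketch { long long tv_sec; long long tv_nsec; };

    /* Stand-in for the 64-bit implementation; the real one would try the
       *_time64 syscall and fall back to the 32-bit syscall on old kernels.  */
    ssize_t
    mq_timedreceive_time64_sketch (int mqdes, char *ptr, size_t len,
                                   unsigned int *prio,
                                   const struct timespec64_sketch *abs_timeout)
    {
      (void) mqdes; (void) ptr; (void) len; (void) prio; (void) abs_timeout;
      return -1;
    }

    /* 32-bit entry point: convert struct timespec to the 64-bit layout,
       but only when the caller actually supplied a timeout.  */
    ssize_t
    mq_timedreceive_sketch (int mqdes, char *ptr, size_t len, unsigned int *prio,
                            const struct timespec *abs_timeout)
    {
      struct timespec64_sketch ts64;

      if (abs_timeout != NULL)
        {
          ts64.tv_sec = abs_timeout->tv_sec;
          ts64.tv_nsec = abs_timeout->tv_nsec;
        }
      return mq_timedreceive_time64_sketch (mqdes, ptr, len, prio,
                                            abs_timeout != NULL ? &ts64 : NULL);
    }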
2020-04-06 | y2038: linux: Provide __mq_timedsend_time64 implementation | Lukasz Majewski | 1 | -3/+46
This patch provides a new explicit 64-bit function, __mq_timedsend_time64, for sending messages with an absolute timeout. Moreover, the 32-bit version, __mq_timedsend, has been refactored to use __mq_timedsend_time64 internally. __mq_timedsend is now supposed to be used on systems still supporting 32-bit time (__TIMESIZE != 64), hence the necessary conversion from struct timespec to the 64-bit struct __timespec64. The new __mq_timedsend_time64 syscall, available from Linux 5.1+, is used when applicable. As this wrapper function is also used internally in glibc, e.g. to provide the mq_send implementation, an explicit check for abs_timeout being NULL has been added due to the conversions between struct timespec and struct __timespec64. Before this change the Linux kernel handled this NULL pointer.
Build tests:
- ./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32-bit systems (qemu): https://github.com/lmajewski/meta-y2038 and run tests: https://github.com/lmajewski/y2038-tests/commits/master
Linux kernel, headers and minimal kernel version for the glibc build test matrix:
- Linux v5.1 (with mq_timedsend_time64) and glibc built with v5.1 as the minimal kernel version (--enable-kernel="5.1.0"): the __ASSUME_TIME64_SYSCALLS flag is defined.
- Linux v5.1 and the default minimal kernel version: __ASSUME_TIME64_SYSCALLS is not defined, but the kernel supports the mq_timedsend_time64 syscall.
- Linux v4.19 (no mq_timedsend_time64 support) with the default minimal kernel version for contemporary glibc (3.2.0): this kernel doesn't support the mq_timedsend_time64 syscall, so the fallback to mq_timedsend is tested.
The above tests were performed with Y2038 redirection applied as well as without (so the __TIMESIZE != 64 execution path is checked as well). Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2020-04-06 | powerpc64le: enforce non-specific long double in .gnu.attributes section | Paul E. Murphy | 2 | -1/+56
We turn off this feature to avoid polluting our shared library with a specific value. However, static libgcc is not under our control, and has enabled this for ibm128 routines. This pollutes the resulting shared libraries with it. Attach a post-linking hook to replace this section with one crafted as hard-float + indeterminate ldbl. This allows IEEE ldbl users to avoid having to disable the gnu attributes feature, which should protect them from linking against ibm ldbl libraries that use it. Currently, this only replaces libc and libm, which support both ldbl formats and rely on application code to explicitly determine which is to be used. Strictly speaking, the section could be deleted with minimal lost value. However, correctly set attributes could prove useful for some future change, and similarly could missing attributes. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
2020-04-06 | powerpc64le: workaround ieee long double / _Float128 stdc++ bug | Paul E. Murphy | 1 | -0/+11
-mabi=ieeelongdouble triggers the stdc++ library's _Float128 support, which then breaks if <algorithm> is included. For now, explicitly disable _Float128 for such tests. I have opened GCC BZ 94080 to track this. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
2020-04-06 | powerpc64le: Enforce -mabi=ibmlongdouble when -mfloat128 used | Paul E. Murphy | 2 | -43/+53
I have observed a bug with GCC 7.4.0 whereby __mulkc3 calls are swapped with __multc3 depending on ABI selection. For the sake of being overly cautious, build all _Float128 files with ibm128 to work around these compilers. This has been noted in GCC BZ 84914 and will not be fixed for GCC 7. Likewise, non-math files built with _Float128 are assumed to have ibm long double. Explicitly preserve this assumption. Finally, add some bootstrapping code to avoid applying these options until IEEE long double is enabled, as they require GCC 7 and above. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
2020-04-06 | powerpc64le/multiarch: don't generate strong aliases for fmaf128-ppc64 | Paul E. Murphy | 1 | -0/+2
This prevents generating a second alias for __fmaieee128 when compiling with ldouble == ieee128 redirects.
2020-04-06 | ldbl-128ibm: simplify iscanonical.h | Paul E. Murphy | 1 | -8/+2
The test for enabling _Float128 or IEEE 128 long double can be greatly simplified knowing that there is no ibm128, thus we require no special cases, and everything is canonical. This reverts the changes to ldbl-128ibm iscanonical.h from commit 8dbfea3a2094798a52cebddde01d255483f49665 and extends the check for __NO_LONG_DOUBLE_MATH to include a check for float128 redirects to long double. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
2020-04-06 | i386: Disable check_consistency for GCC 5 and above [BZ #25788] | H.J. Lu | 1 | -2/+3
check_consistency should be disabled for GCC 5 and above since there is no fixed PIC register in GCC 5 and above. Check __GNUC_PREREQ (5,0) instead of OPTIMIZE_FOR_GCC_5, since OPTIMIZE_FOR_GCC_5 is false with -fno-omit-frame-pointer.
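A hedged sketch of the guard being described (illustrative, not the exact glibc source): the check is only kept for compilers older than GCC 5, which still reserve a fixed PIC register, independently of whether -fno-omit-frame-pointer is in effect.

    #include <features.h>              /* provides __GNUC_PREREQ on glibc */

    #if defined __GNUC__ && !__GNUC_PREREQ (5, 0)
    # define USE_CHECK_CONSISTENCY 1   /* pre-GCC-5: fixed PIC register exists */
    #else
    # define USE_CHECK_CONSISTENCY 0   /* GCC 5+: no fixed PIC register */
    #endif

    int
    use_check_consistency (void)
    {
      return USE_CHECK_CONSISTENCY;
    }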
2020-04-03 | Update syscall lists for Linux 5.6. | Joseph Myers | 24 | -2/+54
Linux 5.6 has new openat2 and pidfd_getfd syscalls. This patch adds them to syscall-names.list and regenerates the arch-syscall.h files. Tested with build-many-glibcs.py.
2020-04-03 | nptl: Remove x86_64 cancellation assembly implementations [BZ #25765] | Adhemerval Zanella | 4 | -153/+0
All cancellable syscalls are done by C implementations, so there is no need to use a specialized implementation to optimize register usage. It fixes BZ #25765. Checked on x86_64-linux-gnu.
2020-04-03 | aarch64: update bits/hwcap.h | Szabolcs Nagy | 1 | -0/+18
Up to date with Linux 5.6. dl-procinfo.c is not updated because HWCAP2 bits are not handled specially in glibc.
2020-04-03 | S390: Regenerate ULPs. | Stefan Liebler | 1 | -15/+19
Updates needed after the recent commit a9d42c09a327540a99f2eac25a98fd2ad6d0b540 ("math: Add inputs that yield larger errors for float type (x86_64)").
2020-04-02 | sysv/alpha: Use generic __timeval32 and helpers | Alistair Francis | 10 | -162/+67
Now that there is a generic __timeval32 and helpers, we can use them for Alpha instead of the Alpha-specific ones. Reviewed-by: Lukasz Majewski <lukma@denx.de> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2020-04-02 | linux: Use long time_t for wait4/getrusage | Alistair Francis | 4 | -3/+153
The Linux kernel expects rusage to use a 32-bit time_t, even on archs with a 64-bit time_t (like RV32). To address this, let's convert rusage to/from 32-bit and 64-bit to ensure the kernel always gets a 32-bit time_t. While we are converting these functions, let's also convert them to be the y2038-safe versions. This means there is a *64 function that is called by a backwards-compatible wrapper. Reviewed-by: Lukasz Majewski <lukma@denx.de> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2020-04-02 | linux: Use long time_t __getitimer/__setitimer | Alistair Francis | 4 | -2/+182
The Linux kernel expects itimerval to use a 32-bit time_t, even on archs with a 64-bit time_t (like RV32). To address this, let's convert itimerval to/from 32-bit and 64-bit to ensure the kernel always gets a 32-bit time_t. While we are converting these functions, let's also convert them to be the y2038-safe versions. This means there is a *64 function that is called by a backwards-compatible wrapper. Tested-by: Lukasz Majewski <lukma@denx.de> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2020-04-02 | sysv: Define __KERNEL_OLD_TIMEVAL_MATCHES_TIMEVAL64 | Alistair Francis | 5 | -0/+26
On y2038-safe 32-bit systems the Linux kernel expects itimerval and rusage to use a 32-bit time_t, even though the other time_t values are 64-bit. There are currently no plans to make 64-bit time_t versions of these structs. There are also other occurrences where the time passed to the kernel via timeval doesn't match the wordsize. To handle these cases, let's define a new macro, __KERNEL_OLD_TIMEVAL_MATCHES_TIMEVAL64. This macro specifies whether the kernel's old_timeval matches the new timeval64. It should be 1 for 64-bit architectures, except for Alpha's osf syscalls. The define should be 0 for 32-bit architectures and Alpha's osf syscalls. Reviewed-by: Lukasz Majewski <lukma@denx.de> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
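A hedged, self-contained illustration of how such a macro gates conversions (the structs and the default value below are assumed for the example; each port defines the real macro): when it is 1 the 64-bit timeval can be handed to the kernel unchanged, when it is 0 it must first be narrowed to the kernel's old layout.

    #include <stdio.h>

    #ifndef __KERNEL_OLD_TIMEVAL_MATCHES_TIMEVAL64
    # define __KERNEL_OLD_TIMEVAL_MATCHES_TIMEVAL64 0   /* assume a 32-bit arch */
    #endif

    struct timeval64_sketch { long long tv_sec; long long tv_usec; };
    struct kernel_old_timeval_sketch { long tv_sec; long tv_usec; };

    int
    main (void)
    {
      struct timeval64_sketch tv64 = { 1585699200LL, 0 };

    #if __KERNEL_OLD_TIMEVAL_MATCHES_TIMEVAL64
      /* 64-bit architectures (except alpha's osf syscalls): pass as-is.  */
      printf ("pass through tv_sec=%lld\n", tv64.tv_sec);
    #else
      /* 32-bit architectures and alpha's osf syscalls: narrow first.  */
      struct kernel_old_timeval_sketch ktv = { (long) tv64.tv_sec,
                                               (long) tv64.tv_usec };
      printf ("converted tv_sec=%ld\n", ktv.tv_sec);
    #endif
      return 0;
    }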
2020-03-31 | math: Add inputs that yield larger errors for float type (x86_64) | Paul Zimmermann | 2 | -59/+64
The corner cases included were generated using exhaustive search for all float/binary32 values on x86_64 (comparing to MPFR for correct rounding to nearest). For the j0/j1/y0 functions, only cases with ulp error <= 9 were included. Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2020-03-30 | Add new file missed in previous hppa commit. | John David Anglin | 1 | -0/+58
2020-03-30 | powerpc: Add support for fmaf128() in hardware | Raphael Moreira Zinsly | 5 | -1/+127
Adds a POWER9 version of fmaf128 that uses the xsmaddqp instruction. Co-authored-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
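For reference, a hedged sketch of what a fused multiply-add on _Float128 looks like from C (not the glibc implementation): with a POWER9-enabled compiler (e.g. gcc -mcpu=power9 -mfloat128) the builtin below is expected to be lowered to the xsmaddqp instruction mentioned above.

    /* x * y + z with a single rounding on the quad-precision value.  */
    _Float128
    fmaf128_sketch (_Float128 x, _Float128 y, _Float128 z)
    {
      return __builtin_fmaf128 (x, y, z);
    }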
2020-03-30 | Fix data race in setting function descriptors during lazy binding on hppa. | John David Anglin | 4 | -22/+142
This addresses an issue that is present mainly on SMP machines running threaded code. In a typical indirect call or PLT import stub, the target address is loaded first. Then the global pointer is loaded into the PIC register in the delay slot of a branch to the target address. During lazy binding, the target address is a trampoline which transfers to _dl_runtime_resolve(). _dl_runtime_resolve() uses the relocation offset stored in the global pointer and the linkage map stored in the trampoline to find the relocation. Then, the function descriptor is updated.

In a multi-threaded application, it is possible for the global pointer to be updated between the load of the target address and the global pointer. When this happens, the relocation offset has been replaced by the new global pointer. The function pointer has probably been updated as well, but there is no way to find the address of the function descriptor and to transfer to the target. So, _dl_runtime_resolve() typically crashes.

HP-UX addressed this problem by adding an extra pc-relative branch to the trampoline. The descriptor is initially set up to point to the branch. The branch then transfers to the trampoline. This allowed the trampoline code to figure out which descriptor was being used without any modification to user code. I didn't use this approach as it is more complex and changes function pointer canonicalization.

The order of loading the target address and global pointer in indirect calls was not consistent with the order used in import stubs. In particular, $$dyncall and some inline versions of it loaded the global pointer first. This was inconsistent with the global pointer being updated first in dl-machine.h. Assuming the accesses are ordered, we want elf_machine_fixup_plt() to store the global pointer first and calls to load it last. Then, the global pointer will be correct when the target function is entered. However, just to make things more fun, HP added support for out-of-order execution of accesses in PA 2.0. The accesses used by calls are weakly ordered. So, it's possible under some circumstances that a function might be entered with the wrong global pointer. However, HP uses weakly ordered accesses in 64-bit HP-UX, so I assume that loading the global pointer in the delay slot of the branch must work consistently.

The basic fix for the race is a combination of modifying user code to preserve the address of the function descriptor in register %r22 and setting the least-significant bit in the relocation offset. The latter was suggested by Carlos as a way to distinguish relocation offsets from global pointer values. Conventionally, %r22 is used as the address of the function descriptor in calls to $$dyncall. So, it wasn't hard to preserve the address in %r22.

I have updated gcc trunk and the gcc-9 branch to not clobber %r22 in $$dyncall and inline indirect calls. I have also modified the import stubs in binutils trunk and the 2.33 branch to preserve %r22. This required making the stubs one instruction longer, but we save one relocation. I also modified binutils to align the .plt section on an 8-byte boundary. This allows descriptors to be updated atomically with a floating-point store.

With these changes, _dl_runtime_resolve() can fall back to an alternate mechanism to find the relocation offset when it has been clobbered. There's just one additional instruction in the fast path. I tested the fallback function, _dl_fix_reloc_arg(), by changing the branch to always use the fallback. Old code still runs as it did before.
Fixes bug 23296. Reviewed-by: Carlos O'Donell <carlos@redhat.com>
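A hedged, self-contained sketch of the tagging idea only (illustrative C, not the hppa assembly or the real _dl_runtime_resolve): because genuine relocation offsets can be marked with their least-significant bit set, the resolver can tell an offset apart from a global-pointer value that clobbered it and switch to the fallback lookup.

    #include <stdint.h>

    /* Hypothetical stand-in for _dl_fix_reloc_arg: recover the offset via
       the function-descriptor address preserved in %r22 by user code.  */
    uint32_t
    recover_reloc_offset_sketch (void)
    {
      return 0;
    }

    uint32_t
    resolve_reloc_arg_sketch (uint32_t arg)
    {
      if (arg & 1)                      /* tagged: a real relocation offset */
        return arg & ~(uint32_t) 1;
      /* Untagged: the slot was overwritten with a global pointer, so take
         the slower fallback path.  */
      return recover_reloc_offset_sketch ();
    }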
2020-03-30 | sparc: Move __fenv_{ld,st}fsr to fenv-private.h | Adhemerval Zanella | 18 | -9/+25
These should not be exported in installed headers. Checked on sparc64-linux-gnu and sparcv9-linux-gnu.
2020-03-30 | x86: Remove feraiseexcept optimization | Adhemerval Zanella | 2 | -111/+0
Similar to the fenvinline.h removal, this kind of optimization is better implemented by the compiler. Also, newer code avoids setting exceptions directly (for instance, the code that makes the new logf, log2f and powf implementations support SVID compat). BZ#94194 [1] is the corresponding GCC bug for adding replacements for these on x86. Checked on x86_64-linux-gnu and i686-linux-gnu. [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94194
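With the inline version gone, callers simply use the standard <fenv.h> interface and rely on the compiler and libm; a minimal usage example:

    #include <fenv.h>
    #include <stdio.h>

    int
    main (void)
    {
      feclearexcept (FE_ALL_EXCEPT);
      feraiseexcept (FE_INVALID);     /* previously inlined on x86 via fenv.h */
      printf ("FE_INVALID raised: %d\n", fetestexcept (FE_INVALID) != 0);
      return 0;
    }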
2020-03-30 | math: Remove fenvinline.h | Adhemerval Zanella | 4 | -113/+5
Similar to the string2.h (18b10de7ce) and string3.h (09a596cc2c) removals, this patch removes fenvinline.h on all architectures. Currently only powerpc implements some optimizations. This kind of optimization is better implemented by the compiler (which handles the architecture ISA transparently). Also, for the specific optimized powerpc implementation, the code is becoming convoluted and these micro-optimizations are hardly widely used, let alone a possible hotspot in real-world cases (non-default rounding modes are used only in specific cases, and exception handling is most likely done only on error paths). The fact that only x86 implements a similar optimization (in fenv.h) also indicates that these should not be in libc. The math/test-fenv already covers all math/test-fenvinline tests, so it is safe to remove it. The powerpc fegetround optimization is moved to the internal fenv_libc.h. BZ#94193 [1] is the corresponding GCC bug for adding replacements for these on powerpc. Checked on x86_64-linux-gnu and powerpc64le-linux-gnu. [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94193
2020-03-27 | sysv/linux: Rename alpha functions to be alpha specific | Alistair Francis | 9 | -32/+32
These functions are alpha-specific; rename them to make that clear. Let's also rename the header file from tv32-compat.h to alpha-tv32-compat.h. This is to avoid conflicts with the one we will introduce later. Reviewed-by: Lukasz Majewski <lukma@denx.de> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2020-03-25 | powerpc64: apply -mabi=ibmlongdouble to special files | Paul E. Murphy | 3 | -2/+13
Some of these files depend on avoiding the use of the various register sets of POWER. When enabling the IEEE 128 long double, we must be sure to disable this ABI, as some compilers will refuse to compile if -mno-vsx and -mabi=ieeelongdouble are both present. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
2020-03-25 | powerpc64le: add -mno-gnu-attribute to *f128 objects and difftime | Paul E. Murphy | 2 | -9/+16
In practice, this flag should be applied globally, but it makes a good sanity check to ensure ibm128 and ieee128 long double files are not getting mismatched. _Float128 files use no long double and thus are always safe to build with this option. Similarly, when investigating the linker complaints, difftime makes trivial, self-contained usage of long double, so it is also explicitly marked as such. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
2020-03-25 | Makeconfig: sandwich gnulib-tests between libc/ld linking of tests | Paul E. Murphy | 2 | -30/+1
This better resembles the default linking process with the gnulibs, and also resolves the increasingly difficult-to-maintain f128-loader-link usage on powerpc64le, as some libgcc symbols are dependent on those found in the loader (ld).
2020-03-25 | powerpc64le: Ensure correct ldouble compiler flags are used | Gabriel F. T. Gomes | 1 | -0/+58
Ensure the correct ldouble abi flags are applied to ibm128 files and nldbl files. Remove the IEEE options if used, and apply the flags used to build ldouble files which are ibm128 abi. nldbl tests are a little tricky. To use the support, we must remove all ldouble abi flags, and ensure -mlong-double-64 is used. Co-authored-by: Rajalakshmi Srinivasaraghavan <raji@linux.vnet.ibm.com> Co-authored-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com> Co-authored-by: Paul E. Murphy <murphyp@linux.vnet.ibm.com>
2020-03-25 | ldbl-128ibm-compat: PLT redirects for using ldbl redirects internally | Paul E. Murphy | 11 | -3/+20
Tweak the PLT bypass magic when building glibc with long double redirects. This is made more difficult by the fact we only get one chance to redirect functions. This happens via the public headers. There are roughly three classes of redirect we need to attend to today:
1. Simple redirects, redirected via cdef macro overrides and the new libc_hidden_ldbl_proto macro.
2. Internal usage of internal API, e.g. __snprintf, which has no direct analogue. This is bypassed directly on a case-by-case basis.
3. Double redirects, e.g. sscanf and related. These require a heavier-handed approach of macro renaming to existing symbols.
Most simple redirects are handled via 1. Ideally, the libc_* macro would live in libc-symbols.h, but in practice the macros needed for it to do anything useful live in cdefs.h, so they are defined in the local override. Notably, the internal name of the asprintf generated for ieee ldbl redirects is renamed to work with internal prefixed usage. This resolves the local PLT usage introduced when building glibc with ldbl == ieee128 on ppc64le. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
2020-03-23 | posix: Fix system error return value [BZ #25715] | Adhemerval Zanella | 1 | -7/+11
It fixes 5fb7fc9635 when posix_spawn fails. Checked on x86_64-linux-gnu and i686-linux-gnu. Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2020-03-23 | y2038: fix: Add missing libc_hidden_def attribute for some syscall wrappers | Lukasz Majewski | 5 | -0/+10
During the conversion to support 64-bit time on some architectures with __WORDSIZE == 32 && __TIMESIZE != 64, the libc_hidden_def attribute for eligible functions was omitted by mistake. This patch fixes the issue and exports (and allows using) those functions when Y2038 support is enabled in glibc.
2020-03-19 | math: Remove inline math tests | Adhemerval Zanella | 25 | -15962/+1
With the mathinline removal there is no need to keep building and testing inline math tests. The gen-libm-tests.py support to generate ULP_I_* is removed, and all libm-test-ulps files are updated to no longer have the i{float,double,ldouble} entries. The support for no-test-inline is also removed from gen-auto-libm-tests, and the auto-libm-test-out-* files were regenerated. Checked on x86_64-linux-gnu and i686-linux-gnu.
2020-03-19 | m68k: Remove mathinline.h | Adhemerval Zanella | 15 | -378/+238
This is similar to the x86 (da75c1b180f9355a) and powerpc (32ea72999693b98e) mathinline.h removals. The required macros to build the fpu routines are moved to mathimpl.h, while the inline optimization macros for atan, tanh, rint, log1p, significand, trunc, floor, ceil, isinf, finite, scalbn, isnan, scalbln, nearbyint, lrint, and sincos are removed. The gcc bug https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94204 was created to track builtin support. Checked with a build against m68k-linux-gnu; the resulting binaries are similar with and without the patch.
2020-03-18 | x86: Remove ARCH_CET_LEGACY_BITMAP [BZ #25397] | H.J. Lu | 10 | -208/+165
Since the legacy bitmap doesn't cover jitted code generated by a legacy JIT engine, it isn't very useful. This patch removes ARCH_CET_LEGACY_BITMAP and treats indirect branch tracking similarly to shadow stack by removing legacy bitmap support. Tested on CET Linux/x86-64 and non-CET Linux/x86-64. Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2020-03-11 | [AArch64] Improve integer memcpy | Wilco Dijkstra | 1 | -96/+101
Further optimize integer memcpy. Small cases now include copies up to 32 bytes. 64-128 byte copies are split into two cases to improve performance of 64-96 byte copies. Comments have been rewritten.