path: root/sysdeps/x86
2017-10-22  x86-64: Use fxsave/xsave/xsavec in _dl_runtime_resolve [BZ #21265]  (H.J. Lu; 3 files, -22/+85)
In _dl_runtime_resolve, use fxsave/xsave/xsavec to preserve all vector, mask and bound registers. It simplifies _dl_runtime_resolve and supports different calling conventions. ld.so code size is reduced by more than 1 KB. However, using fxsave/xsave/xsavec takes a few more cycles than saving and restoring vector and bound registers individually.

Latency for _dl_runtime_resolve to look up the function, foo, from one shared library plus libc.so:

                                 Before    After    Change
  Westmere (SSE)/fxsave            345      866     151%
  IvyBridge (AVX)/xsave            420      643      53%
  Haswell (AVX)/xsave              713     1252      75%
  Skylake (AVX+MPX)/xsavec         559      719      28%
  Skylake (AVX512+MPX)/xsavec      145      272      87%
  Ryzen (AVX)/xsavec               280      553      97%

This is the worst case, where the portion of time spent saving and restoring registers is larger than in the majority of cases. With the smaller _dl_runtime_resolve code size, the overall performance impact is negligible. On IvyBridge, differences in build and test time of binutils with lazy-binding GCC and binutils are noise. On Westmere, differences in bootstrap and "make check" time of GCC 7 with lazy-binding GCC and binutils are also noise.

[BZ #21265] * sysdeps/x86/cpu-features-offsets.sym (XSAVE_STATE_SIZE_OFFSET): New. * sysdeps/x86/cpu-features.c: Include <libc-internal.h>. (get_common_indeces): Set xsave_state_size and bit_arch_XSAVEC_Usable if needed. (init_cpu_features): Remove bit_arch_Use_dl_runtime_resolve_slow and bit_arch_Use_dl_runtime_resolve_opt. * sysdeps/x86/cpu-features.h (bit_arch_Use_dl_runtime_resolve_opt): Removed. (bit_arch_Use_dl_runtime_resolve_slow): Likewise. (bit_arch_Prefer_No_AVX512): Updated. (bit_arch_MathVec_Prefer_No_AVX512): Likewise. (bit_arch_XSAVEC_Usable): New. (STATE_SAVE_OFFSET): Likewise. (STATE_SAVE_MASK): Likewise. [__ASSEMBLER__]: Include <cpu-features-offsets.h>. (cpu_features): Add xsave_state_size. (index_arch_Use_dl_runtime_resolve_opt): Removed. (index_arch_Use_dl_runtime_resolve_slow): Likewise. (index_arch_XSAVEC_Usable): New. * sysdeps/x86_64/dl-machine.h (elf_machine_runtime_setup): Replace _dl_runtime_resolve_sse, _dl_runtime_resolve_avx, _dl_runtime_resolve_avx_slow, _dl_runtime_resolve_avx_opt, _dl_runtime_resolve_avx512 and _dl_runtime_resolve_avx512_opt with _dl_runtime_resolve_fxsave, _dl_runtime_resolve_xsave and _dl_runtime_resolve_xsavec. * sysdeps/x86_64/dl-trampoline.S (DL_RUNTIME_UNALIGNED_VEC_SIZE): Removed. (DL_RUNTIME_RESOLVE_REALIGN_STACK): Check STATE_SAVE_ALIGNMENT instead of VEC_SIZE. (REGISTER_SAVE_BND0): Removed. (REGISTER_SAVE_BND1): Likewise. (REGISTER_SAVE_BND3): Likewise. (REGISTER_SAVE_RAX): Always defined to 0. (VMOV): Removed. (_dl_runtime_resolve_avx): Likewise. (_dl_runtime_resolve_avx_slow): Likewise. (_dl_runtime_resolve_avx_opt): Likewise. (_dl_runtime_resolve_avx512): Likewise. (_dl_runtime_resolve_avx512_opt): Likewise. (_dl_runtime_resolve_sse): Likewise. (_dl_runtime_resolve_sse_vex): Likewise. (USE_FXSAVE): New. (_dl_runtime_resolve_fxsave): Likewise. (USE_XSAVE): Likewise. (_dl_runtime_resolve_xsave): Likewise. (USE_XSAVEC): Likewise. (_dl_runtime_resolve_xsavec): Likewise. * sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve_avx512): Removed. (_dl_runtime_resolve_avx512_opt): Likewise. (_dl_runtime_resolve_avx): Likewise. (_dl_runtime_resolve_avx_opt): Likewise. (_dl_runtime_resolve_sse): Likewise. (_dl_runtime_resolve_sse_vex): Likewise. (_dl_runtime_resolve_fxsave): New. (_dl_runtime_resolve_xsave): Likewise. (_dl_runtime_resolve_xsavec): Likewise. (cherry picked from commit b52b0d793dcb226ecb0ecca1e672ca265973233c)
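For reference, a minimal stand-alone sketch of how the xsave/xsavec state sizes recorded in xsave_state_size can be queried (this uses GCC's <cpuid.h> and is not glibc's actual cpu-features.c code):

  #include <cpuid.h>
  #include <stdio.h>

  int
  main (void)
  {
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 0xd, sub-leaf 0: EBX = size of the standard (xsave)
       save area for the state components enabled in XCR0.  */
    if (!__get_cpuid_count (0xd, 0, &eax, &ebx, &ecx, &edx))
      return 1;
    printf ("xsave state size:  %u bytes\n", ebx);

    /* Sub-leaf 1: EAX bit 1 reports XSAVEC support; EBX reports the
       compacted save-area size.  */
    if (__get_cpuid_count (0xd, 1, &eax, &ebx, &ecx, &edx)
        && (eax & (1 << 1)) != 0)
      printf ("xsavec state size: %u bytes\n", ebx);
    return 0;
  }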
2017-08-06  x86-64: Use _dl_runtime_resolve_opt only with AVX512F [BZ #21871]  (H.J. Lu; 1 file, -2/+5)
On AVX machines with XGETBV (ECX == 1) like Skylake processors,

  (gdb) disass _dl_runtime_resolve_avx_opt
  Dump of assembler code for function _dl_runtime_resolve_avx_opt:
     0x0000000000015890 <+0>:   push   %rax
     0x0000000000015891 <+1>:   push   %rcx
     0x0000000000015892 <+2>:   push   %rdx
     0x0000000000015893 <+3>:   mov    $0x1,%ecx
     0x0000000000015898 <+8>:   xgetbv
     0x000000000001589b <+11>:  mov    %eax,%r11d
     0x000000000001589e <+14>:  pop    %rdx
     0x000000000001589f <+15>:  pop    %rcx
     0x00000000000158a0 <+16>:  pop    %rax
     0x00000000000158a1 <+17>:  and    $0x4,%r11d
     0x00000000000158a5 <+21>:  bnd je 0x16200 <_dl_runtime_resolve_sse_vex>
  End of assembler dump.

is slower than:

  (gdb) disass _dl_runtime_resolve_avx_slow
  Dump of assembler code for function _dl_runtime_resolve_avx_slow:
     0x0000000000015850 <+0>:   vorpd    %ymm0,%ymm1,%ymm8
     0x0000000000015854 <+4>:   vorpd    %ymm2,%ymm3,%ymm9
     0x0000000000015858 <+8>:   vorpd    %ymm4,%ymm5,%ymm10
     0x000000000001585c <+12>:  vorpd    %ymm6,%ymm7,%ymm11
     0x0000000000015860 <+16>:  vorpd    %ymm8,%ymm9,%ymm9
     0x0000000000015865 <+21>:  vorpd    %ymm10,%ymm11,%ymm10
     0x000000000001586a <+26>:  vpcmpeqd %xmm8,%xmm8,%xmm8
     0x000000000001586f <+31>:  vorpd    %ymm9,%ymm10,%ymm10
     0x0000000000015874 <+36>:  vptest   %ymm10,%ymm8
     0x0000000000015879 <+41>:  bnd jae  0x158b0 <_dl_runtime_resolve_avx>
     0x000000000001587c <+44>:  vzeroupper
     0x000000000001587f <+47>:  bnd jmpq 0x16200 <_dl_runtime_resolve_sse_vex>
  End of assembler dump.

since xgetbv takes many more cycles than single-cycle operations like vorpd/vpcmpeqd/vptest. _dl_runtime_resolve_opt should be used only with AVX512, where AVX512 instructions lead to lower CPU frequency on Skylake server.

[BZ #21871] * sysdeps/x86/cpu-features.c (init_cpu_features): Set bit_arch_Use_dl_runtime_resolve_opt only with AVX512F. (cherry picked from commit d2cf37c0a2a375cf2fde69f1afbcc49e45368fc4)
2017-04-28  x86: Use AVX2 memcpy/memset on Skylake server [BZ #21396]  (H.J. Lu; 2 files, -1/+8)
On Skylake server, AVX512 load/store instructions in memcpy/memset may lead to lower CPU turbo frequency in certain situations. Use of AVX2 in memcpy/memset has been observed to have improved overall performance in many workloads due to the higher frequency. Since AVX512ER is unique to Xeon Phi, this patch sets Prefer_No_AVX512 if AVX512ER isn't available so that AVX2 versions of memcpy/memset are used on Skylake server. [BZ #21396] * sysdeps/x86/cpu-features.c (init_cpu_features): Set Prefer_No_AVX512 if AVX512ER isn't available. * sysdeps/x86/cpu-features.h (bit_arch_Prefer_No_AVX512): New. (index_arch_Prefer_No_AVX512): Likewise. * sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Don't use AVX512 version if Prefer_No_AVX512 is set. * sysdeps/x86_64/multiarch/memcpy_chk.S (__memcpy_chk): Likewise. * sysdeps/x86_64/multiarch/memmove.S (__libc_memmove): Likewise. * sysdeps/x86_64/multiarch/memmove_chk.S (__memmove_chk): Likewise. * sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Likewise. * sysdeps/x86_64/multiarch/mempcpy_chk.S (__mempcpy_chk): Likewise. * sysdeps/x86_64/multiarch/memset.S (memset): Likewise. * sysdeps/x86_64/multiarch/memset_chk.S (__memset_chk): Likewise. (cherry picked from commit 4cb334c4d6249686653137ec273d081371b3672d)
2017-04-28  x86: Set Prefer_No_VZEROUPPER if AVX512ER is available  (H.J. Lu; 2 files, -2/+21)
AVX512ER won't be implemented in any Xeon processors and will be in all Xeon Phi processors. Don't check CPU model number when setting Prefer_No_VZEROUPPER for Xeon Phi. Instead, set Prefer_No_VZEROUPPER if AVX512ER is available. It works with current and future Xeon Phi and non-Xeon Phi processors. * sysdeps/x86/cpu-features.c (init_cpu_features): Set Prefer_No_VZEROUPPER if AVX512ER is available. * sysdeps/x86/cpu-features.h (bit_cpu_AVX512PF): New. (bit_cpu_AVX512ER): Likewise. (bit_cpu_AVX512CD): Likewise. (bit_cpu_AVX512BW): Likewise. (bit_cpu_AVX512VL): Likewise. (index_cpu_AVX512PF): Likewise. (index_cpu_AVX512ER): Likewise. (index_cpu_AVX512CD): Likewise. (index_cpu_AVX512BW): Likewise. (index_cpu_AVX512VL): Likewise. (reg_AVX512PF): Likewise. (reg_AVX512ER): Likewise. (reg_AVX512CD): Likewise. (reg_AVX512BW): Likewise. (reg_AVX512VL): Likewise. (cherry picked from commit 1c53cb49de6d82d9469ccbd5aa0c55924502bd8b)
2016-11-30  X86-64: Add _dl_runtime_resolve_avx[512]_{opt|slow} [BZ #20508]  (H.J. Lu; 2 files, -0/+20)
There is a transition penalty when SSE instructions are mixed with 256-bit AVX or 512-bit AVX512 load instructions. Since _dl_runtime_resolve_avx and _dl_runtime_profile_avx512 save/restore 256-bit YMM/512-bit ZMM registers, there is a transition penalty when SSE instructions are used with lazy binding on AVX and AVX512 processors. To avoid the SSE transition penalty, if only the lower 128 bits of the first 8 vector registers are non-zero, we can preserve the %xmm0 - %xmm7 registers with the zero upper bits. For AVX and AVX512 processors which support XGETBV with ECX == 1, we can use XGETBV with ECX == 1 to check if the upper 128 bits of YMM registers or the upper 256 bits of ZMM registers are zero. We can restore only the non-zero portion of vector registers with AVX/AVX512 load instructions which will zero-extend the upper bits of vector registers. This patch adds _dl_runtime_resolve_sse_vex which saves and restores XMM registers with 128-bit AVX store/load instructions. It is used to preserve YMM/ZMM registers when only the lower 128 bits are non-zero. _dl_runtime_resolve_avx_opt and _dl_runtime_resolve_avx512_opt are added and used on AVX/AVX512 processors supporting XGETBV with ECX == 1 so that we store and load only the non-zero portion of vector registers. This avoids the SSE transition penalty caused by _dl_runtime_resolve_avx and _dl_runtime_profile_avx512 when only the lower 128 bits of vector registers are used. _dl_runtime_resolve_avx_slow is added and used for AVX processors which don't support XGETBV with ECX == 1. Since there is no SSE transition penalty on AVX512 processors which don't support XGETBV with ECX == 1, _dl_runtime_resolve_avx512_slow isn't provided. [BZ #20495] [BZ #20508] * sysdeps/x86/cpu-features.c (init_cpu_features): For Intel processors, set Use_dl_runtime_resolve_slow and set Use_dl_runtime_resolve_opt if XGETBV supports ECX == 1. * sysdeps/x86/cpu-features.h (bit_arch_Use_dl_runtime_resolve_opt): New. (bit_arch_Use_dl_runtime_resolve_slow): Likewise. (index_arch_Use_dl_runtime_resolve_opt): Likewise. (index_arch_Use_dl_runtime_resolve_slow): Likewise. * sysdeps/x86_64/dl-machine.h (elf_machine_runtime_setup): Use _dl_runtime_resolve_avx512_opt and _dl_runtime_resolve_avx_opt if Use_dl_runtime_resolve_opt is set. Use _dl_runtime_resolve_slow if Use_dl_runtime_resolve_slow is set. * sysdeps/x86_64/dl-trampoline.S: Include <cpu-features.h>. (_dl_runtime_resolve_opt): New. Defined for AVX and AVX512. (_dl_runtime_resolve): Add one for _dl_runtime_resolve_sse_vex. * sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve_avx_slow): New. (_dl_runtime_resolve_opt): Likewise. (_dl_runtime_profile): Define only if _dl_runtime_profile is defined. (cherry picked from commit fb0f7a6755c1bfaec38f490fbfcaa39a66ee3604)
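The XGETBV-with-ECX == 1 test that the _opt variants rely on can be sketched as follows (a sketch assuming the CPU has already been verified via CPUID to support this XGETBV sub-function; the helper name is made up):

  #include <stdbool.h>

  /* XGETBV with ECX == 1 returns the XINUSE bitmap; bit 2 covers the
     AVX state, i.e. the upper 128 bits of the YMM registers.  */
  static bool
  ymm_upper_in_use (void)
  {
    unsigned int eax, edx;
    __asm__ ("xgetbv" : "=a" (eax), "=d" (edx) : "c" (1));
    return (eax & (1 << 2)) != 0;
  }

  /* When !ymm_upper_in_use (), preserving %xmm0 - %xmm7 with 128-bit
     loads and stores is enough.  */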
2016-07-01  Fixed wrong vector sincos/sincosf ABI [BZ #20024]  (Andrew Senkevich; 1 file, -0/+98)
The vector sincos/sincosf ABI was wrong; it is now compatible with the current vector function declaration "#pragma omp declare simd notinbranch", according to which vector sincos should take vectors of pointers for its second and third parameters. It is fixed by implementing it as a wrapper around the version that has the second and third parameters as pointers (see the sketch after this entry). [BZ #20024] * sysdeps/x86/fpu/test-math-vector-sincos.h: New. * sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core_sse4.S: Fixed ABI of this implementation of vector function. * sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core_avx2.S: Likewise. * sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S: Likewise. * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S: Likewise. * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S: Likewise. * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S: Likewise. * sysdeps/x86_64/fpu/svml_d_sincos2_core.S: Likewise. * sysdeps/x86_64/fpu/svml_d_sincos4_core.S: Likewise. * sysdeps/x86_64/fpu/svml_d_sincos4_core_avx.S: Likewise. * sysdeps/x86_64/fpu/svml_d_sincos8_core.S: Likewise. * sysdeps/x86_64/fpu/svml_s_sincosf16_core.S: Likewise. * sysdeps/x86_64/fpu/svml_s_sincosf4_core.S: Likewise. * sysdeps/x86_64/fpu/svml_s_sincosf8_core.S: Likewise. * sysdeps/x86_64/fpu/svml_s_sincosf8_core_avx.S: Likewise. * sysdeps/x86_64/fpu/test-double-vlen2-wrappers.c: Use another wrapper for testing vector sincos with fixed ABI. * sysdeps/x86_64/fpu/test-double-vlen4-avx2-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-double-vlen4-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-double-vlen8-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx.c: New test. * sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx2.c: Likewise. * sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx512.c: Likewise. * sysdeps/x86_64/fpu/test-double-libmvec-sincos.c: Likewise. * sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx.c: Likewise. * sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx2.c: Likewise. * sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx512.c: Likewise. * sysdeps/x86_64/fpu/test-float-libmvec-sincosf.c: Likewise. * sysdeps/x86_64/fpu/Makefile: Added new tests.
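The scalar-equivalent semantics of the fixed ABI look like this (illustrative C only; the real implementations are the assembly wrappers listed above, and the function name here is made up):

  #define _GNU_SOURCE
  #include <math.h>

  /* What a 4-lane vector sincos must behave like under the fixed ABI:
     the second and third parameters are vectors of pointers, one pair
     per lane.  */
  static void
  sincos4_equivalent (const double x[4], double *sinp[4], double *cosp[4])
  {
    for (int i = 0; i < 4; i++)
      sincos (x[i], sinp[i], cosp[i]);
  }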
2016-06-30  Check Prefer_ERMS in memmove/memcpy/mempcpy/memset  (H.J. Lu; 1 file, -0/+3)
Although the Enhanced REP MOVSB/STOSB (ERMS) implementations of memmove, memcpy, mempcpy and memset aren't used by the current processors, this patch adds a Prefer_ERMS check in memmove, memcpy, mempcpy and memset so that they can be used in the future. * sysdeps/x86/cpu-features.h (bit_arch_Prefer_ERMS): New. (index_arch_Prefer_ERMS): Likewise. * sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Return __memcpy_erms for Prefer_ERMS. * sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S (__memmove_erms): Enabled for libc.a. * sysdeps/x86_64/multiarch/memmove.S (__libc_memmove): Return __memmove_erms for Prefer_ERMS. * sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Return __mempcpy_erms for Prefer_ERMS. * sysdeps/x86_64/multiarch/memset.S (memset): Return __memset_erms for Prefer_ERMS.
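What an ERMS variant boils down to can be sketched in C with inline assembly (glibc's __memcpy_erms is written in assembly; this is only a rendering of the idea, with a made-up name):

  #include <stddef.h>

  static void *
  memcpy_erms_sketch (void *dst, const void *src, size_t n)
  {
    void *ret = dst;
    /* With ERMS, a bare rep movsb (copy %rcx bytes from (%rsi) to
       (%rdi)) is fast across most sizes and alignments.  */
    __asm__ volatile ("rep movsb"
                      : "+D" (dst), "+S" (src), "+c" (n)
                      :
                      : "memory");
    return ret;
  }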
2016-06-29  Avoid array-bounds warning for strncat on i586 (bug 20260)  (Andreas Schwab; 1 file, -2/+1)
2016-06-07  Check FMA after COMMON_CPUID_INDEX_80000001  (H.J. Lu; 1 file, -4/+9)
Since the FMA4 bit is in COMMON_CPUID_INDEX_80000001 and FMA4 requires AVX, determine if FMA4 is usable after COMMON_CPUID_INDEX_80000001 is available and if AVX is usable. [BZ #20195] * sysdeps/x86/cpu-features.c (get_common_indeces): Move FMA4 check to ... (init_cpu_features): Here.
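A stand-alone sketch of the check order described above, using GCC's <cpuid.h> (the OSXSAVE/XCR0 test that makes AVX truly usable is omitted here for brevity):

  #include <cpuid.h>
  #include <stdbool.h>

  static bool
  fma4_usable (void)
  {
    unsigned int eax, ebx, ecx, edx;

    /* AVX is reported in leaf 1, ECX bit 28.  */
    if (!__get_cpuid (1, &eax, &ebx, &ecx, &edx) || !(ecx & bit_AVX))
      return false;
    /* FMA4 is reported in leaf 0x80000001, ECX bit 16, so it can only
       be checked once that leaf is known to be available.  */
    if (!__get_cpuid (0x80000001, &eax, &ebx, &ecx, &edx))
      return false;
    return (ecx & bit_FMA4) != 0;
  }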
2016-05-27  Count number of logical processors sharing L2 cache  (H.J. Lu; 1 file, -34/+116)
For Intel processors, when there are both L2 and L3 caches, the SMT level type should be used to count the number of available logical processors sharing the L2 cache. If there is only an L2 cache, the core level type should be used to count the number of available logical processors sharing the L2 cache. The number of available logical processors sharing the L2 cache should be used for non-inclusive L2 and L3 caches. * sysdeps/x86/cacheinfo.c (init_cacheinfo): Count number of available logical processors with SMT level type sharing L2 cache for Intel processors.
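A sketch of the CPUID leaf 11 (0xb) walk this logic is based on, again using GCC's <cpuid.h>: each sub-leaf describes one topology level, with the level type in ECX bits 15:08 and the logical-processor count in EBX bits 15:00.

  #include <cpuid.h>
  #include <stdio.h>

  int
  main (void)
  {
    unsigned int eax, ebx, ecx, edx;

    for (unsigned int subleaf = 0; ; subleaf++)
      {
        if (!__get_cpuid_count (0xb, subleaf, &eax, &ebx, &ecx, &edx))
          break;
        unsigned int type = (ecx >> 8) & 0xff;  /* 1 = SMT, 2 = core.  */
        if (type == 0)                          /* End of valid levels.  */
          break;
        printf ("level %u: type %u, %u logical processors\n",
                subleaf, type, ebx & 0xffff);
      }
    return 0;
  }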
2016-05-20  Remove special L2 cache case for Knights Landing  (H.J. Lu; 1 file, -2/+0)
The L2 cache is shared by 2 cores on Knights Landing, which has 4 threads per core: <https://en.wikipedia.org/wiki/Xeon_Phi#Knights_Landing> So the L2 cache is shared by 8 threads on Knights Landing, as reported by CPUID. We should remove the special L2 cache case for Knights Landing. [BZ #18185] * sysdeps/x86/cacheinfo.c (init_cacheinfo): Don't limit threads sharing L2 cache to 2 for Knights Landing.
2016-05-19  Correct Intel processor level type mask from CPUID  (H.J. Lu; 1 file, -1/+1)
Intel CPUID with EAX == 11 returns:

  ECX  Bits 07 - 00: Level number.  Same value in ECX input.
       Bits 15 - 08: Level type.    <---- This is the level type.
       Bits 31 - 16: Reserved.

The Intel processor level type mask should therefore be 0xff00, not 0xff0.

[BZ #20119] * sysdeps/x86/cacheinfo.c (init_cacheinfo): Correct Intel processor level type mask for CPUID with EAX == 11.
2016-05-19  Check the HTT bit before counting logical threads  (H.J. Lu; 2 files, -76/+85)
Skip counting logical threads for Intel processors if the HTT bit is 0, which indicates there is only a single logical processor. * sysdeps/x86/cacheinfo.c (init_cacheinfo): Skip counting logical threads if the HTT bit is 0. * sysdeps/x86/cpu-features.h (bit_cpu_HTT): New. (index_cpu_HTT): Likewise. (reg_HTT): Likewise.
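The HTT check itself is a single bit, CPUID leaf 1 EDX bit 28; a minimal sketch (helper name made up):

  #include <cpuid.h>
  #include <stdbool.h>

  static bool
  has_htt (void)
  {
    unsigned int eax, ebx, ecx, edx;
    /* EDX bit 28 (HTT): the package may hold more than one logical
       processor.  */
    return __get_cpuid (1, &eax, &ebx, &ecx, &edx)
           && (edx & (1 << 28)) != 0;
  }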
2016-05-13  Support non-inclusive caches on Intel processors  (H.J. Lu; 1 file, -1/+11)
* sysdeps/x86/cacheinfo.c (init_cacheinfo): Check and support non-inclusive caches on Intel processors.
2016-05-11  Remove x86 ifunc-defines.sym and rtld-global-offsets.sym  (H.J. Lu; 4 files, -10/+18)
Merge x86 ifunc-defines.sym with x86 cpu-features-offsets.sym. Remove x86 ifunc-defines.sym and rtld-global-offsets.sym. No code changes on i686 and x86-64. * sysdeps/i386/i686/multiarch/Makefile (gen-as-const-headers): Remove ifunc-defines.sym. * sysdeps/x86_64/multiarch/Makefile (gen-as-const-headers): Likewise. * sysdeps/i386/i686/multiarch/ifunc-defines.sym: Removed. * sysdeps/x86/rtld-global-offsets.sym: Likewise. * sysdeps/x86_64/multiarch/ifunc-defines.sym: Likewise. * sysdeps/x86/Makefile (gen-as-const-headers): Remove rtld-global-offsets.sym. * sysdeps/x86_64/multiarch/ifunc-defines.sym: Merged with ... * sysdeps/x86/cpu-features-offsets.sym: This. * sysdeps/x86/cpu-features.h: Include <cpu-features-offsets.h> instead of <ifunc-defines.h> and <rtld-global-offsets.h>.
2016-05-08  Move sysdeps/x86_64/cacheinfo.c to sysdeps/x86  (H.J. Lu; 1 file, -0/+673)
Move sysdeps/x86_64/cacheinfo.c to sysdeps/x86. No code changes on x86 and x86_64. * sysdeps/i386/cacheinfo.c: Include <sysdeps/x86/cacheinfo.c> instead of <sysdeps/x86_64/cacheinfo.c>. * sysdeps/x86_64/cacheinfo.c: Moved to ... * sysdeps/x86/cacheinfo.c: Here.
2016-04-15  Detect Intel Goldmont and Airmont processors  (H.J. Lu; 1 file, -0/+8)
Updated from the model numbers of Goldmont and Airmont processors in the Intel 64 and IA-32 Architectures Software Developer's Manual, Volume 3, Revision 058. * sysdeps/x86/cpu-features.c (init_cpu_features): Detect Intel Goldmont and Airmont processors.
2016-04-01  Remove Fast_Copy_Backward from Intel Core processors  (H.J. Lu; 1 file, -5/+1)
Intel Core i3, i5 and i7 processors have fast unaligned copy, and copy backward is ignored. Remove Fast_Copy_Backward from Intel Core processors to avoid confusion. * sysdeps/x86/cpu-features.c (init_cpu_features): Don't set bit_arch_Fast_Copy_Backward for Intel Core processors.
2016-03-28  Initial Enhanced REP MOVSB/STOSB (ERMS) support  (H.J. Lu; 1 file, -0/+4)
The newer Intel processors support Enhanced REP MOVSB/STOSB (ERMS) which has a feature bit in CPUID. This patch adds the Enhanced REP MOVSB/STOSB (ERMS) bit to x86 cpu-features. * sysdeps/x86/cpu-features.h (bit_cpu_ERMS): New. (index_cpu_ERMS): Likewise. (reg_ERMS): Likewise.
2016-03-28  [x86] Add a feature bit: Fast_Unaligned_Copy  (H.J. Lu; 2 files, -1/+16)
On AMD processors, memcpy optimized with unaligned SSE load is slower than memcpy optimized with aligned SSSE3, while other string functions are faster with unaligned SSE load. A feature bit, Fast_Unaligned_Copy, is added to select memcpy optimized with unaligned SSE load. [BZ #19583] * sysdeps/x86/cpu-features.c (init_cpu_features): Set Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel processors. Set Fast_Copy_Backward for AMD Excavator processors. * sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy): New. (index_arch_Fast_Unaligned_Copy): Likewise. * sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
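How such a feature bit typically steers dispatch can be sketched with a hypothetical IFUNC resolver (all names below are made up; glibc's real resolvers test the bit with HAS_ARCH_FEATURE, and IFUNC requires an ELF/GNU toolchain):

  #include <stddef.h>
  #include <string.h>

  /* Stand-ins for the aligned-SSSE3 and unaligned-SSE variants.  */
  static void *
  memcpy_aligned (void *d, const void *s, size_t n) { return memcpy (d, s, n); }
  static void *
  memcpy_unaligned (void *d, const void *s, size_t n) { return memcpy (d, s, n); }

  static int
  fast_unaligned_copy_p (void)   /* Stand-in for the feature-bit test.  */
  {
    return 1;
  }

  /* Runs once at relocation time and picks the variant.  */
  static void *
  memcpy_resolver (void)
  {
    return fast_unaligned_copy_p () ? (void *) memcpy_unaligned
                                    : (void *) memcpy_aligned;
  }

  void *my_memcpy (void *, const void *, size_t)
       __attribute__ ((ifunc ("memcpy_resolver")));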
2016-03-22  Set index_arch_AVX_Fast_Unaligned_Load only for Intel processors  (H.J. Lu; 2 files, -74/+88)
Since only Intel processors with AVX2 have fast unaligned load, we should set index_arch_AVX_Fast_Unaligned_Load only for Intel processors. Move AVX, AVX2, AVX512, FMA and FMA4 detection into get_common_indeces and call get_common_indeces for other processors. Add CPU_FEATURES_CPU_P and CPU_FEATURES_ARCH_P to avoid loading GLRO(dl_x86_cpu_features) in cpu-features.c. [BZ #19583] * sysdeps/x86/cpu-features.c (get_common_indeces): Remove inline. Check family before setting family, model and extended_model. Set AVX, AVX2, AVX512, FMA and FMA4 usable bits here. (init_cpu_features): Replace HAS_CPU_FEATURE and HAS_ARCH_FEATURE with CPU_FEATURES_CPU_P and CPU_FEATURES_ARCH_P. Set index_arch_AVX_Fast_Unaligned_Load for Intel processors with usable AVX2. Call get_common_indeces for other processors with family == NULL. * sysdeps/x86/cpu-features.h (CPU_FEATURES_CPU_P): New macro. (CPU_FEATURES_ARCH_P): Likewise. (HAS_CPU_FEATURE): Use CPU_FEATURES_CPU_P. (HAS_ARCH_FEATURE): Use CPU_FEATURES_ARCH_P.
2016-03-10  Add _arch_/_cpu_ to index_*/bit_* in x86 cpu-features.h  (H.J. Lu; 2 files, -147/+155)
index_* and bit_* macros are used to access the cpuid and feature arrays of struct cpu_features. It is very easy to use bits and indices of the cpuid array on the feature array, especially in assembly code. For example, sysdeps/i386/i686/multiarch/bcopy.S has HAS_CPU_FEATURE (Fast_Rep_String), which should be HAS_ARCH_FEATURE (Fast_Rep_String). We change index_* and bit_* to index_cpu_*/index_arch_* and bit_cpu_*/bit_arch_* so that we can catch such errors at build time. [BZ #19762] * sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h (EXTRA_LD_ENVVARS): Add _arch_ to index_*/bit_*. * sysdeps/x86/cpu-features.c (init_cpu_features): Likewise. * sysdeps/x86/cpu-features.h (bit_*): Renamed to ... (bit_arch_*): This for feature array. (bit_*): Renamed to ... (bit_cpu_*): This for cpu array. (index_*): Renamed to ... (index_arch_*): This for feature array. (index_*): Renamed to ... (index_cpu_*): This for cpu array. [__ASSEMBLER__] (HAS_FEATURE): Add and use field. [__ASSEMBLER__] (HAS_CPU_FEATURE): Pass cpu to HAS_FEATURE. [__ASSEMBLER__] (HAS_ARCH_FEATURE): Pass arch to HAS_FEATURE. [!__ASSEMBLER__] (HAS_CPU_FEATURE): Replace index_##name and bit_##name with index_cpu_##name and bit_cpu_##name. [!__ASSEMBLER__] (HAS_ARCH_FEATURE): Replace index_##name and bit_##name with index_arch_##name and bit_arch_##name.
2016-03-08  Define _HAVE_STRING_ARCH_mempcpy to 1 for x86  (H.J. Lu; 1 file, -0/+3)
Since x86 has an optimized mempcpy and GCC can inline mempcpy on x86, define _HAVE_STRING_ARCH_mempcpy to 1 for x86. [BZ #19759] * sysdeps/x86/bits/string.h (_HAVE_STRING_ARCH_mempcpy): New.
2016-02-18  Add _STRING_INLINE_unaligned and string_private.h  (H.J. Lu; 2 files, -2/+22)
As discussed in https://sourceware.org/ml/libc-alpha/2015-10/msg00403.html the setting of _STRING_ARCH_unaligned currently controls the external GLIBC ABI as well as selecting the use of unaligned accesses within GLIBC. Since _STRING_ARCH_unaligned was recently changed for AArch64, this would potentially break the ABI in GLIBC 2.23, so split the uses and add _STRING_INLINE_unaligned to select the string ABI. This setting must be fixed for each target, while _STRING_ARCH_unaligned may be changed from release to release. _STRING_ARCH_unaligned is used unconditionally in glibc, but <bits/string.h>, which defines _STRING_ARCH_unaligned, isn't included with -Os. Since _STRING_ARCH_unaligned is internal to glibc and may change between glibc releases, it should be made private to glibc. _STRING_ARCH_unaligned should be defined in the new string_private.h header file, which is included unconditionally from the internal <string.h> for the glibc build. [BZ #19462] * bits/string.h (_STRING_ARCH_unaligned): Renamed to ... (_STRING_INLINE_unaligned): This. * include/string.h: Include <string_private.h>. * string/bits/string2.h: Replace _STRING_ARCH_unaligned with _STRING_INLINE_unaligned. * sysdeps/aarch64/bits/string.h (_STRING_ARCH_unaligned): Removed. (_STRING_INLINE_unaligned): New. * sysdeps/aarch64/string_private.h: New file. * sysdeps/generic/string_private.h: Likewise. * sysdeps/m68k/m680x0/m68020/string_private.h: Likewise. * sysdeps/s390/string_private.h: Likewise. * sysdeps/x86/string_private.h: Likewise. * sysdeps/m68k/m680x0/m68020/bits/string.h (_STRING_ARCH_unaligned): Renamed to ... (_STRING_INLINE_unaligned): This. * sysdeps/s390/bits/string.h (_STRING_ARCH_unaligned): Renamed to ... (_STRING_INLINE_unaligned): This. * sysdeps/sparc/bits/string.h (_STRING_ARCH_unaligned): Renamed to ... (_STRING_INLINE_unaligned): This. * sysdeps/x86/bits/string.h (_STRING_ARCH_unaligned): Renamed to ... (_STRING_INLINE_unaligned): This.
2016-01-14  Set index_Fast_Unaligned_Load for Excavator family CPUs  (Amit Pawar; 1 file, -0/+8)
GLIBC benchtest test cases show that SSE2_Unaligned-based implementations perform faster than SSE2-based implementations for the routines strcmp, strcat, strncat, stpcpy, stpncpy, strcpy, strncpy and strstr. The index_Fast_Unaligned_Load flag is set for Excavator (family 0x15) CPUs. This makes the SSE2_Unaligned-based implementations the default for these routines. [BZ #19467] * sysdeps/x86/cpu-features.c (init_cpu_features): Set index_Fast_Unaligned_Load flag for Excavator family CPUs.
2016-01-04  Update copyright dates with scripts/update-copyrights.  (Joseph Myers; 28 files, -28/+28)
2015-12-19  Added memset optimized with AVX512 for KNL hardware.  (Andrew Senkevich; 2 files, -0/+6)
It shows improvement up to 28% over AVX2 memset (performance results attached at <https://sourceware.org/ml/libc-alpha/2015-12/msg00052.html>). * sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S: New file. * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Added new file. * sysdeps/x86_64/multiarch/ifunc-impl-list.c: Added new tests. * sysdeps/x86_64/multiarch/memset.S: Added new IFUNC branch. * sysdeps/x86_64/multiarch/memset_chk.S: Likewise. * sysdeps/x86/cpu-features.h (bit_Prefer_No_VZEROUPPER, index_Prefer_No_VZEROUPPER): New. * sysdeps/x86/cpu-features.c (init_cpu_features): Set the Prefer_No_VZEROUPPER for Knights Landing.
2015-12-15  Add Prefer_MAP_32BIT_EXEC to map executable pages with MAP_32BIT [hjl/32bit/master]  (H.J. Lu; 1 file, -0/+3)
According to the Silvermont software optimization guide, for 64-bit applications branch prediction performance can be negatively impacted when the target of a branch is more than 4GB away from the branch. Add the Prefer_MAP_32BIT_EXEC bit so that mmap will try to map executable pages with MAP_32BIT first. NB: MAP_32BIT will map to the lower 2GB, not the lower 4GB, of the address space. Prefer_MAP_32BIT_EXEC reduces the bits available for address space layout randomization (ASLR); it is always disabled for SUID programs and can only be enabled by setting the environment variable LD_PREFER_MAP_32BIT_EXEC. On Fedora 23, this patch speeds up the GCC 5 testsuite by 3% on Silvermont. [BZ #19367] * sysdeps/unix/sysv/linux/wordsize-64/mmap.c: New file. * sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h: Likewise. * sysdeps/unix/sysv/linux/x86_64/64/mmap.c: Likewise. * sysdeps/x86/cpu-features.h (bit_Prefer_MAP_32BIT_EXEC): New. (index_Prefer_MAP_32BIT_EXEC): Likewise.
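The mmap policy can be sketched as follows (a simplified, hypothetical sketch; MAP_32BIT is Linux- and x86-64-specific, and the helper name is made up):

  #include <stddef.h>
  #include <sys/mman.h>

  static void *
  mmap_exec (size_t len)
  {
  #ifdef MAP_32BIT
    /* Try the low 2GB first so branch targets stay close.  */
    void *p = mmap (NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
    if (p != MAP_FAILED)
      return p;
  #endif
    /* Fall back to an ordinary mapping if the low range is full.  */
    return mmap (NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  }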
2015-12-15  Enable Silvermont optimizations for Knights Landing  (H.J. Lu; 1 file, -0/+3)
Knights Landing processor is based on Silvermont. This patch enables Silvermont optimizations for Knights Landing. * sysdeps/x86/cpu-features.c (init_cpu_features): Enable Silvermont optimizations for Knights Landing.
2015-12-07  Utilize x86_64 vector math functions w/o -fopenmp.  (Andrew Senkevich; 1 file, -0/+6)
This patch makes it possible to use the x86_64 vector math functions with GCC 6.* without OpenMP SIMD constructs. For additional details please visit <https://sourceware.org/glibc/wiki/libmvec#Example_2>. * sysdeps/x86/fpu/bits/math-vector.h: Without -fopenmp, declare vector math functions with the GCC 6.* __attribute__ ((__simd__)).
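The declaration style in question looks roughly like this (a sketch; the real header wraps it in macros and only emits it for GCC 6 or later):

  /* Tell GCC that vector variants of these scalar functions exist in
     libmvec, following the x86_64 vector ABI, without requiring
     "#pragma omp declare simd" and -fopenmp.  */
  __attribute__ ((__simd__ ("notinbranch"))) double cos (double);
  __attribute__ ((__simd__ ("notinbranch"))) double sin (double);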
2015-11-30  Update family and model detection for AMD CPUs  (H.J. Lu; 1 file, -12/+15)
AMD CPUs use a similar encoding scheme for extended family and model as Intel CPUs, as shown in <http://support.amd.com/TechDocs/25481.pdf>. This patch updates get_common_indeces to get the family and model for both Intel and AMD CPUs when family == 0x0f. [BZ #19214] * sysdeps/x86/cpu-features.c (get_common_indeces): Add an argument to return the extended model. Update family and model with extended family and model when family == 0x0f. (init_cpu_features): Updated.
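A sketch of the shared decoding (CPUID leaf 1 EAX layout: family in bits 11:8, model in bits 7:4, extended model in bits 19:16, extended family in bits 27:20; the helper name is made up):

  static void
  decode_signature (unsigned int eax, unsigned int *family,
                    unsigned int *model)
  {
    *family = (eax >> 8) & 0x0f;
    *model = (eax >> 4) & 0x0f;
    if (*family == 0x0f)
      {
        /* Both Intel and AMD extend family 0x0f the same way.  */
        *family += (eax >> 20) & 0xff;        /* Extended family.  */
        *model += ((eax >> 16) & 0x0f) << 4;  /* Extended model.  */
      }
  }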
2015-11-27  Better workaround for aliases of *_finite symbols in vector math library.  (Andrew Senkevich; 1 file, -29/+0)
The old workaround based on assembly aliases can lead to a link failure (bug 19058). This patch implements the workaround in another way to avoid that. [BZ #19058] * math/Makefile ($(inst_libdir)/libm.so): Added libmvec_nonshared.a to AS_NEEDED. * sysdeps/x86/fpu/bits/math-vector.h: Removed code with old workaround. * sysdeps/x86_64/fpu/Makefile (libmvec-support, libmvec-static-only-routines): Added new file. * sysdeps/x86_64/fpu/svml_finite_alias.S: New file.
2015-11-14  Run tst-prelink test for GLOB_DAT reloc  (H.J. Lu; 3 files, -46/+0)
Run the tst-prelink test on targets with GLOB_DAT relocation. * config.make.in (have-glob-dat-reloc): New. * configure.ac (libc_cv_has_glob_dat): New. Set to yes if target supports GLOB_DAT relocation. AC_SUBST. * configure: Regenerated. * elf/Makefile (tests): Add tst-prelink. (tests-special): Add $(objpfx)tst-prelink-cmp.out. (tst-prelink-ENV): New. ($(objpfx)tst-prelink-conflict.out): Likewise. ($(objpfx)tst-prelink-cmp.out): Likewise. * sysdeps/x86/tst-prelink.c: Moved to ... * elf/tst-prelink.c: Here. * sysdeps/x86/tst-prelink.exp: Moved to ... * elf/tst-prelink.exp: Here. * sysdeps/x86/Makefile (tests): Don't add tst-prelink. (tst-prelink-ENV): Removed. ($(objpfx)tst-prelink-conflict.out): Likewise. ($(objpfx)tst-prelink-cmp.out): Likewise. (tests-special): Don't add $(objpfx)tst-prelink-cmp.out.
2015-11-10  Add a test for prelink output  (H.J. Lu; 3 files, -0/+46)
This test applies to i386 and x86_64 which set R_386_GLOB_DAT and R_X86_64_GLOB_DAT to ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA. [BZ #19178] * sysdeps/x86/Makefile (tests): Add tst-prelink. (tst-prelink-ENV): New. ($(objpfx)tst-prelink-conflict.out): Likewise. ($(objpfx)tst-prelink-cmp.out): Likewise. (tests-special): Add $(objpfx)tst-prelink-cmp.out. * sysdeps/x86/tst-prelink.c: New file. * sysdeps/x86/tst-prelink.exp: Likewise.
2015-10-28  Handle more state in i386/x86_64 fesetenv (bug 16068).  (Joseph Myers; 3 files, -1/+347)
fenv_t should include architecture-specific floating-point modes and status flags. i386 and x86_64 fesetenv limit which bits they use from the x87 status and control words, when using saved state, and limit which parts of the state they set to fixed values, when using FE_DFL_ENV / FE_NOMASK_ENV. The following should be included but are excluded in at least some cases: status and masking for the "denormal operand" exception (which isn't part of FE_ALL_EXCEPT); precision control (explicitly mentioned in Annex F as something that counts as part of the floating-point environment); MXCSR FZ and DAZ bits (for FE_DFL_ENV and FE_NOMASK_ENV). This patch arranges for this extra state to be handled by fesetenv (and thereby by feupdateenv, which calls fesetenv). (Note that glibc functions using floating point are not generally expected to work correctly with non-default values of this state, especially precision control, but it is still logically part of the floating-point environment and should be handled as such by fesetenv. Changes to the state relating to subnormals ought generally to work with libm functions when the arguments aren't subnormal and neither are the expected results; that's a consequence of functions avoiding spurious internal underflows.) A question arising from this is whether FE_NOMASK_ENV should or should not mask the "denormal operand" exception. I decided it should mask that exception. This is the status quo - previously that exception could only be unmasked by direct manipulation of control registers (possibly via <fpu_control.h>). In addition, it means that use of FE_NOMASK_ENV leaves a floating-point environment the same as could be obtained by fesetenv (FE_DFL_ENV); feenableexcept (FE_ALL_EXCEPT);, rather than an environment in which an exception is unmasked that could only be masked again by using fesetenv with FE_DFL_ENV (or a previously saved environment) - this exception not being usable with other <fenv.h> functions because it's outside FE_ALL_EXCEPT. Tested for x86_64 and x86. [BZ #16068] * sysdeps/i386/fpu/fesetenv.c: Include <fpu_control.h>. (FE_ALL_EXCEPT_X86): New macro. (__fesetenv): Use FE_ALL_EXCEPT_X86 in most places instead of FE_ALL_EXCEPT. Ensure precision control is included in floating-point state. Ensure that FE_DFL_ENV and FE_NOMASK_ENV handle "denormal operand exception" and clear FZ and DAZ bits. * sysdeps/x86_64/fpu/fesetenv.c: Include <fpu_control.h>. (FE_ALL_EXCEPT_X86): New macro. (__fesetenv): Use FE_ALL_EXCEPT_X86 in most places instead of FE_ALL_EXCEPT. Ensure precision control is included in floating-point state. Ensure that FE_DFL_ENV and FE_NOMASK_ENV handle "denormal operand exception" and clear FZ and DAZ bits. * sysdeps/x86/fpu/test-fenv-sse-2.c: New file. * sysdeps/x86/fpu/test-fenv-x87.c: Likewise. * sysdeps/x86/fpu/Makefile [$(subdir) = math] (tests): Add test-fenv-x87 and test-fenv-sse-2. [$(subdir) = math] (CFLAGS-test-fenv-sse-2.c): New variable.
2015-10-28  Fix i386/x86_64 fesetenv SSE exception clearing (bug 19181).  (Joseph Myers; 2 files, -1/+47)
The i386 and x86_64 versions of fesetenv, when called with FE_DFL_ENV or FE_NOMASK_ENV as argument, do not clear SSE exceptions raised in MXCSR. These arguments should, like other fenv_t values, represent the whole of the floating-point state, so such exceptions should be cleared; this patch adds the required clearing. (Discovered while working on bug 16068.) Tested for x86_64 and x86. [BZ #19181] * sysdeps/i386/fpu/fesetenv.c (__fesetenv): Clear already-raised SSE exceptions when argument is FE_DFL_ENV or FE_NOMASK_ENV. * sysdeps/x86_64/fpu/fesetenv.c (__fesetenv): Likewise. * math/test-fenv-clear-main.c: New file. * math/test-fenv-clear.c: Likewise. * math/Makefile (tests): Add test-fenv-clear. * sysdeps/x86/fpu/test-fenv-clear-sse.c: New file. * sysdeps/x86/fpu/Makefile [$(subdir) = math] (tests): Add test-fenv-clear-sse. [$(subdir) = math] (CFLAGS-test-fenv-clear-sse.c): New variable.
2015-09-25  Fix pow missing underflows (bug 18825).  (Joseph Myers; 1 file, -0/+1)
Similar to various other bugs in this area, pow functions can fail to raise the underflow exception when the result is tiny and inexact but one or more low bits of the intermediate result that is scaled down (or, in the i386 case, converted from a wider evaluation format) are zero. This patch forces the exception in a similar way to previous fixes, thereby concluding the fixes for known bugs with missing underflow exceptions currently filed in Bugzilla. Tested for x86_64, x86, mips64 and powerpc. [BZ #18825] * sysdeps/i386/fpu/i386-math-asm.h (FLT_NARROW_EVAL_UFLOW_NONNAN): New macro. (DBL_NARROW_EVAL_UFLOW_NONNAN): Likewise. (LDBL_CHECK_FORCE_UFLOW_NONNAN): Likewise. * sysdeps/i386/fpu/e_pow.S: Use DEFINE_DBL_MIN. (__ieee754_pow): Use DBL_NARROW_EVAL_UFLOW_NONNAN instead of DBL_NARROW_EVAL, reloading the PIC register as needed. * sysdeps/i386/fpu/e_powf.S: Use DEFINE_FLT_MIN. (__ieee754_powf): Use FLT_NARROW_EVAL_UFLOW_NONNAN instead of FLT_NARROW_EVAL. Use separate return path for case when first argument is NaN. * sysdeps/i386/fpu/e_powl.S: Include <i386-math-asm.h>. Use DEFINE_LDBL_MIN. (__ieee754_powl): Use LDBL_CHECK_FORCE_UFLOW_NONNAN, reloading the PIC register. * sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Use math_check_force_underflow_nonneg. * sysdeps/ieee754/flt-32/e_powf.c (__ieee754_powf): Force underflow for subnormal result. * sysdeps/ieee754/ldbl-128/e_powl.c (__ieee754_powl): Likewise. * sysdeps/ieee754/ldbl-128ibm/e_powl.c (__ieee754_powl): Use math_check_force_underflow_nonneg. * sysdeps/x86/fpu/powl_helper.c (__powl_helper): Use math_check_force_underflow. * sysdeps/x86_64/fpu/x86_64-math-asm.h (LDBL_CHECK_FORCE_UFLOW_NONNAN): New macro. * sysdeps/x86_64/fpu/e_powl.S: Include <x86_64-math-asm.h>. Use DEFINE_LDBL_MIN. (__ieee754_powl): Use LDBL_CHECK_FORCE_UFLOW_NONNAN. * math/auto-libm-test-in: Add more tests of pow. * math/auto-libm-test-out: Regenerated.
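The underflow-forcing idiom looks roughly like this (a simplified sketch; glibc's math_check_force_underflow macros handle more cases, and the helper name is made up):

  #include <float.h>
  #include <math.h>

  static double
  check_force_underflow (double result)
  {
    if (result != 0 && fabs (result) < DBL_MIN)
      {
        /* Squaring a nonzero tiny value is inexact and far below
           DBL_MIN, so it raises the underflow exception even when the
           final rounded result alone would not.  */
        volatile double force = result * result;
        (void) force;
      }
    return result;
  }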
2015-09-04  Rename bits/linkmap.h to linkmap.h (bug 14912).  (Joseph Myers; 1 file, -0/+0)
It was noted in <https://sourceware.org/ml/libc-alpha/2012-09/msg00305.html> that the bits/*.h naming scheme should only be used for installed headers. This patch renames bits/linkmap.h to plain linkmap.h to follow that convention. Tested for x86_64 (testsuite, and that installed stripped shared libraries are unchanged by the patch). [BZ #14912] * bits/linkmap.h: Move to ... * sysdeps/generic/linkmap.h: ...here. * sysdeps/aarch64/bits/linkmap.h: Move to ... * sysdeps/aarch64/linkmap.h: ...here. * sysdeps/arm/bits/linkmap.h: Move to ... * sysdeps/arm/linkmap.h: ...here. * sysdeps/hppa/bits/linkmap.h: Move to ... * sysdeps/hppa/linkmap.h: ...here. * sysdeps/ia64/bits/linkmap.h: Move to ... * sysdeps/ia64/linkmap.h: ...here. * sysdeps/mips/bits/linkmap.h: Move to ... * sysdeps/mips/linkmap.h: ...here. * sysdeps/s390/bits/linkmap.h: Move to ... * sysdeps/s390/linkmap.h: ...here. * sysdeps/sh/bits/linkmap.h: Move to ... * sysdeps/sh/linkmap.h: ...here. * sysdeps/x86/bits/linkmap.h: Move to ... * sysdeps/x86/linkmap.h: ...here. * include/link.h: Include <linkmap.h> instead of <bits/linkmap.h>.
2015-08-27  Detect and select i586/i686 implementation at run-time [fedora/master]  (H.J. Lu; 3 files, -3/+38)
We detect i586 and i686 features at run-time by checking the CX8 and CMOV CPUID feature bits. We can use this information to select the best implementation in ix86 multiarch. HAS_I586/HAS_I686 is true if i586/i686 instructions are available on the processor. Due to the reordering and the other nifty extensions in i686, it is not really good to use heavily i586-optimized code on an i686. It's better to use i486 code if it isn't an i586. USE_I586/USE_I686 is true if the i586/i686 implementation should be used for the processor. USE_I586 is true only if i686 instructions aren't available. If i686 instructions are available, we always choose the i686 or i486 implementation, in that order, and we never choose the i586 implementation for i686-class processors. * sysdeps/i386/init-arch.h: New file. * sysdeps/i386/i586/init-arch.h: Likewise. * sysdeps/i386/i686/init-arch.h: Likewise. * sysdeps/x86/cpu-features.c (init_cpu_features): Set bit_I586 bit if CX8 is available. Set bit_I686 bit if CMOV is available. * sysdeps/x86/cpu-features.h (bit_I586): New. (bit_I686): Likewise. (bit_CX8): Likewise. (bit_CMOV): Likewise. (index_CX8): Likewise. (index_CMOV): Likewise. (index_I586): Likewise. (index_I686): Likewise. (reg_CX8): Likewise. (reg_CMOV): Likewise. (HAS_I586): Defined as HAS_ARCH_FEATURE (I586) if i586 isn't available at compile-time. (HAS_I686): Defined as HAS_ARCH_FEATURE (I686) if i686 isn't available at compile-time. * sysdeps/x86/init-arch.h (USE_I586): New macro. (USE_I686): Likewise.
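The two marker bits live in CPUID leaf 1 EDX; a stand-alone sketch using GCC's <cpuid.h>:

  #include <cpuid.h>
  #include <stdio.h>

  int
  main (void)
  {
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid (1, &eax, &ebx, &ecx, &edx))
      return 1;
    /* CX8 (cmpxchg8b), EDX bit 8, marks i586; CMOV, EDX bit 15,
       marks i686.  */
    printf ("i586 (CX8):  %u\n", (edx >> 8) & 1);
    printf ("i686 (CMOV): %u\n", (edx >> 15) & 1);
    return 0;
  }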
2015-08-26  Don't disable SSE in x86-64 ld.so  (H.J. Lu; 2 files, -114/+0)
Since x86-64 ld.so preserves vector registers now, we can use SSE in x86-64 ld.so. We should run tst-ld-sse-use.sh only on i386. * sysdeps/x86/Makefile [$(subdir) == elf] (CFLAGS-.os, tests-special, $(objpfx)tst-ld-sse-use.out): Moved to ... * sysdeps/i386/Makefile [$(subdir) == elf] (CFLAGS-.os, tests-special, $(objpfx)tst-ld-sse-use.out): Here. Update comments. * sysdeps/x86_64/Makefile [$(subdir) == elf] (CFLAGS-.os): Add -mno-mmx for $(all-rtld-routines). * sysdeps/x86/tst-ld-sse-use.sh: Moved to ... * sysdeps/i386/tst-ld-sse-use.sh: Here. Replace x86-64 with i386.
2015-08-20  Move x86_64 init-arch.h to sysdeps/x86/init-arch.h  (H.J. Lu; 1 file, -0/+22)
Move sysdeps/x86_64/multiarch/init-arch.h to sysdeps/x86/init-arch.h which can be used for both i386 and x86_64. * sysdeps/i386/i686/multiarch/init-arch.h: Removed. * sysdeps/unix/sysv/linux/x86/init-arch.h: Likewise. * sysdeps/x86_64/cacheinfo.c: Include <init-arch.h> instead of "multiarch/init-arch.h". * sysdeps/x86_64/multiarch/init-arch.h: Renamed to ... * sysdeps/x86/init-arch.h: This.
2015-08-20  nptl: Document crash due to incorrect use of locks  (Florian Weimer; 1 file, -1/+3)
2015-08-19  Also check __i586__/__i686__ for HAS_I586/HAS_I686  (H.J. Lu; 1 file, -8/+9)
* sysdeps/x86/cpu-features.h (HAS_I586): Defined to 1 if __i586__ is defined. (HAS_I686): Defined to 1 if __i686__ is defined.
2015-08-18  Define HAS_CPUID/HAS_I586/HAS_I686 from -march=  (H.J. Lu; 2 files, -2/+29)
cpuid, i586 and i686 instructions are available if the processor specified by -march= supports them. We can use this information to determine whether those instructions can be used safely. * sysdeps/x86/cpu-features.c (init_cpu_features): Check whether cpuid is available only if HAS_CPUID is 0. * sysdeps/x86/cpu-features.h (HAS_CPUID): New. (HAS_I586): Likewise. (HAS_I686): Likewise.
2015-08-13  Check if cpuid is available in init_cpu_features  (H.J. Lu; 1 file, -0/+12)
Since not all i486 processors support cpuid, we call __get_cpuid_max to check if cpuid is available before using it, when not compiling for i586, i686 or x86-64. * sysdeps/x86/cpu-features.c (init_cpu_features): Call __get_cpuid_max if not compiling for i586, i686 or x86-64.
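The guard is straightforward to use: __get_cpuid_max from GCC's <cpuid.h> probes for the cpuid instruction (on 32-bit, by toggling the EFLAGS ID bit) and returns the highest supported basic leaf, or 0 when cpuid is unavailable.

  #include <cpuid.h>
  #include <stdio.h>

  int
  main (void)
  {
    unsigned int max_leaf = __get_cpuid_max (0, NULL);

    if (max_leaf == 0)
      puts ("cpuid not available");
    else
      printf ("max basic cpuid leaf: 0x%x\n", max_leaf);
    return 0;
  }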
2015-08-13  Add _dl_x86_cpu_features to rtld_global  (H.J. Lu; 10 files, -0/+572)
This patch adds _dl_x86_cpu_features to rtld_global in x86 ld.so and initializes it early before __libc_start_main is called so that cpu_features is always available when it is used and we can avoid calling __init_cpu_features in IFUNC selectors. * sysdeps/i386/dl-machine.h: Include <cpu-features.c>. (dl_platform_init): Call init_cpu_features. * sysdeps/i386/dl-procinfo.c (_dl_x86_cpu_features): New. * sysdeps/i386/i686/cacheinfo.c (DISABLE_PREFERRED_MEMORY_INSTRUCTION): Removed. * sysdeps/i386/i686/multiarch/Makefile (aux): Remove init-arch. * sysdeps/i386/i686/multiarch/Versions: Removed. * sysdeps/i386/i686/multiarch/ifunc-defines.sym (KIND_OFFSET): Removed. * sysdeps/i386/ldsodefs.h: Include <cpu-features.h>. * sysdeps/unix/sysv/linux/x86/Makefile (libpthread-sysdep_routines): Remove init-arch. * sysdeps/unix/sysv/linux/x86_64/dl-procinfo.c: Include <sysdeps/x86_64/dl-procinfo.c> instead of <sysdeps/generic/dl-procinfo.c>. * sysdeps/x86/Makefile [$(subdir) == csu] (gen-as-const-headers): Add cpu-features-offsets.sym and rtld-global-offsets.sym. [$(subdir) == elf] (sysdep-dl-routines): Add dl-get-cpu-features. [$(subdir) == elf] (tests): Add tst-get-cpu-features. [$(subdir) == elf] (tests-static): Add tst-get-cpu-features-static. * sysdeps/x86/Versions: New file. * sysdeps/x86/cpu-features-offsets.sym: Likewise. * sysdeps/x86/cpu-features.c: Likewise. * sysdeps/x86/cpu-features.h: Likewise. * sysdeps/x86/dl-get-cpu-features.c: Likewise. * sysdeps/x86/libc-start.c: Likewise. * sysdeps/x86/rtld-global-offsets.sym: Likewise. * sysdeps/x86/tst-get-cpu-features-static.c: Likewise. * sysdeps/x86/tst-get-cpu-features.c: Likewise. * sysdeps/x86_64/dl-procinfo.c: Likewise. * sysdeps/x86_64/cacheinfo.c (__cpuid_count): Removed. Assume USE_MULTIARCH is defined and don't check it. (is_intel): Replace __cpu_features with GLRO(dl_x86_cpu_features). (is_amd): Likewise. (max_cpuid): Likewise. (intel_check_word): Likewise. (__cache_sysconf): Don't call __init_cpu_features. (__x86_preferred_memory_instruction): Removed. (init_cacheinfo): Don't call __init_cpu_features. Replace __cpu_features with GLRO(dl_x86_cpu_features). * sysdeps/x86_64/dl-machine.h: Include <cpu-features.c>. (dl_platform_init): Call init_cpu_features. * sysdeps/x86_64/ldsodefs.h: Include <cpu-features.h>. * sysdeps/x86_64/multiarch/Makefile (aux): Remove init-arch. * sysdeps/x86_64/multiarch/Versions: Removed. * sysdeps/x86_64/multiarch/cacheinfo.c: Likewise. * sysdeps/x86_64/multiarch/init-arch.c: Likewise. * sysdeps/x86_64/multiarch/ifunc-defines.sym (KIND_OFFSET): Removed. * sysdeps/x86_64/multiarch/init-arch.h: Rewrite.
2015-07-09  Preserve bound registers for pointer pass/return  (Igor Zamyatin; 1 file, -0/+2)
We need to save/restore bound registers and add a BND prefix before branches in _dl_runtime_profile so that bound registers for pointer pass and return are preserved when LD_AUDIT is used. [BZ #18134] * sysdeps/i386/configure.ac: Set HAVE_MPX_SUPPORT. * sysdeps/i386/configure: Regenerated. * sysdeps/i386/dl-trampoline.S (PRESERVE_BND_REGS_PREFIX): New. (_dl_runtime_profile): Save and restore Intel MPX return bound registers when calling _dl_call_pltexit. Add PRESERVE_BND_REGS_PREFIX before return. * sysdeps/i386/link-defines.sym (LRV_BND0_OFFSET): New. (LRV_BND1_OFFSET): Likewise. * sysdeps/x86/bits/link.h (La_i86_retval): Add lrv_bnd0 and lrv_bnd1. * sysdeps/x86_64/dl-trampoline.S (_dl_runtime_profile): Fix typo in bndmov encoding. * sysdeps/x86_64/dl-trampoline.h: Properly save and restore Intel MPX bound registers. Add PRESERVE_BND_REGS_PREFIX before branch instructions to preserve bounds.
2015-07-07  Do not create invalid pointers in C code of string functions.  (Torvald Riegel; 1 file, -7/+11)
Some of the x86 string functions create pointers based on input strings that may be outside of the input strings. When this happens in C code, the compiler can potentially detect this, leading to warnings in application code when those string functions are inlined. Perform those operations in the assembly code instead of the C code to fix this.
2015-06-18  Vector sincosf for x86_64 and tests.  (Andrew Senkevich; 1 file, -0/+2)
Here is an implementation of vectorized sincosf containing SSE, AVX, AVX2 and AVX512 versions according to the Vector ABI <https://groups.google.com/forum/#!topic/x86-64-abi/LmppCfN1rZ4>. * NEWS: Mention addition of x86_64 vector sincosf. * math/test-float-vlen16.h: Added wrapper for sincosf tests. * math/test-float-vlen4.h: Likewise. * math/test-float-vlen8.h: Likewise. * sysdeps/unix/sysv/linux/x86_64/libmvec.abilist: New symbols added. * sysdeps/x86/fpu/bits/math-vector.h: Added sincosf SIMD declaration. * sysdeps/x86_64/fpu/Makefile (libmvec-support): Added new files. * sysdeps/x86_64/fpu/Versions: New versions added. * sysdeps/x86_64/fpu/libm-test-ulps: Regenerated. * sysdeps/x86_64/fpu/multiarch/Makefile (libmvec-sysdep_routines): Added build of SSE, AVX2 and AVX512 IFUNC versions. * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core.S * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core.S * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core.S * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S * sysdeps/x86_64/fpu/svml_s_sincosf16_core.S * sysdeps/x86_64/fpu/svml_s_sincosf4_core.S * sysdeps/x86_64/fpu/svml_s_sincosf8_core.S * sysdeps/x86_64/fpu/svml_s_sincosf8_core_avx.S * sysdeps/x86_64/fpu/svml_s_sincosf_data.S: New file. * sysdeps/x86_64/fpu/svml_s_sincosf_data.h: New file. * sysdeps/x86_64/fpu/svml_s_wrapper_impl.h: Added 3 argument wrappers. * sysdeps/x86_64/fpu/test-float-vlen16.c: Vector sincosf tests. * sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-float-vlen4.c: Likewise. * sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-float-vlen8-avx2.c: Likewise. * sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-float-vlen8.c: Likewise.
2015-06-18  Vector sincos for x86_64 and tests.  (Andrew Senkevich; 1 file, -0/+2)
Here is an implementation of vectorized sincos containing SSE, AVX, AVX2 and AVX512 versions according to the Vector ABI <https://groups.google.com/forum/#!topic/x86-64-abi/LmppCfN1rZ4>. * NEWS: Mention addition of x86_64 vector sincos. * bits/libm-simd-decl-stubs.h: Added stubs for sincos. * math/math.h (__MATHDECL_VEC): New macro. * math/bits/mathcalls.h: Added sincos declaration with __MATHDECL_VEC. * math/gen-libm-have-vector-test.sh: Added generation of sincos wrapper declaration under condition. * math/test-vec-loop.h (TEST_VEC_LOOP): Refactored. * math/test-double-vlen2.h: Added wrapper for sincos tests, reflected TEST_VEC_LOOP change. * math/test-double-vlen4.h: Likewise. * math/test-double-vlen8.h: Likewise. * math/test-float-vlen16.h: Reflected TEST_VEC_LOOP change. * math/test-float-vlen4.h: Likewise. * math/test-float-vlen8.h: Likewise. * sysdeps/unix/sysv/linux/x86_64/libmvec.abilist: New symbols added. * sysdeps/x86/fpu/bits/math-vector.h: Added sincos SIMD declaration. * sysdeps/x86_64/fpu/Makefile (libmvec-support): Added new files. * sysdeps/x86_64/fpu/Versions: New versions added. * sysdeps/x86_64/fpu/libm-test-ulps: Regenerated. * sysdeps/x86_64/fpu/multiarch/Makefile (libmvec-sysdep_routines): Added build of SSE, AVX2 and AVX512 IFUNC versions. * sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core.S: New file. * sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core_sse4.S: New file. * sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core.S: New file. * sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core_avx2.S: New file. * sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core.S: New file. * sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S: New file. * sysdeps/x86_64/fpu/svml_d_sincos2_core.S: New file. * sysdeps/x86_64/fpu/svml_d_sincos4_core.S: New file. * sysdeps/x86_64/fpu/svml_d_sincos4_core_avx.S: New file. * sysdeps/x86_64/fpu/svml_d_sincos8_core.S: New file. * sysdeps/x86_64/fpu/svml_d_sincos_data.S: New file. * sysdeps/x86_64/fpu/svml_d_sincos_data.h: New file. * sysdeps/x86_64/fpu/svml_d_wrapper_impl.h: Added wrappers for sincos. * sysdeps/x86_64/fpu/test-double-vlen2-wrappers.c: Vector sincos tests. * sysdeps/x86_64/fpu/test-double-vlen2.c: Likewise. * sysdeps/x86_64/fpu/test-double-vlen4-avx2-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-double-vlen4-avx2.c: Likewise. * sysdeps/x86_64/fpu/test-double-vlen4-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-double-vlen4.c: Likewise. * sysdeps/x86_64/fpu/test-double-vlen8-wrappers.c: Likewise. * sysdeps/x86_64/fpu/test-double-vlen8.c: Likewise.