Age | Commit message | Author | Files | Lines
2016-06-06 | Count number of logical processors sharing L2 cache [hjl/erms/2.22] | H.J. Lu | 1 | -34/+116
2016-06-06 | Remove special L2 cache case for Knights Landing | H.J. Lu | 1 | -2/+0
2016-06-06 | Correct Intel processor level type mask from CPUID | H.J. Lu | 1 | -1/+1
2016-06-06 | Check the HTT bit before counting logical threads | H.J. Lu | 2 | -76/+85
2016-06-06 | Remove alignments on jump targets in memset | H.J. Lu | 1 | -32/+5
2016-06-06 | Call init_cpu_features only if SHARED is defined | H.J. Lu | 2 | -0/+8
2016-06-06 | Support non-inclusive caches on Intel processors | H.J. Lu | 1 | -1/+11
2016-06-06 | Remove x86 ifunc-defines.sym and rtld-global-offsets.sym | H.J. Lu | 8 | -51/+18
2016-06-06 | Move sysdeps/x86_64/cacheinfo.c to sysdeps/x86 | H.J. Lu | 2 | -1/+1
2016-06-06 | Detect Intel Goldmont and Airmont processors | H.J. Lu | 1 | -0/+8
2016-04-08 | X86-64: Add dummy memcopy.h and wordcopy.c | H.J. Lu | 2 | -0/+2
2016-04-08 | X86-64: Remove previous default/SSE2/AVX2 memcpy/memmove | H.J. Lu | 19 | -1492/+396
2016-04-08 | X86-64: Remove the previous SSE2/AVX2 memsets | H.J. Lu | 8 | -325/+62
2016-04-08 | X86-64: Use non-temporal store in memcpy on large data | H.J. Lu | 5 | -171/+234
2016-04-06 | X86-64: Prepare memmove-vec-unaligned-erms.S | H.J. Lu | 1 | -54/+84
2016-04-06 | X86-64: Prepare memset-vec-unaligned-erms.S | H.J. Lu | 1 | -13/+19
2016-04-05 | Force 32-bit displacement in memset-vec-unaligned-erms.S | H.J. Lu | 1 | -0/+13
2016-04-05 | Add a comment in memset-sse2-unaligned-erms.S | H.J. Lu | 1 | -0/+2
2016-04-05 | Don't put SSE2/AVX/AVX512 memmove/memset in ld.so | H.J. Lu | 6 | -32/+40
2016-04-05 | Fix memmove-vec-unaligned-erms.S | H.J. Lu | 1 | -24/+30
2016-04-05 | Use HAS_ARCH_FEATURE with Fast_Rep_String | H.J. Lu | 9 | -9/+9
2016-04-02 | Remove Fast_Copy_Backward from Intel Core processors | H.J. Lu | 1 | -5/+1
2016-04-02 | Add x86-64 memset with unaligned store and rep stosb | H.J. Lu | 6 | -4/+339
2016-04-02 | Add x86-64 memmove with unaligned load/store and rep movsb | H.J. Lu | 6 | -1/+606
2016-04-02 | Initial Enhanced REP MOVSB/STOSB (ERMS) support | H.J. Lu | 1 | -0/+4
2016-04-02 | Make __memcpy_avx512_no_vzeroupper an alias | H.J. Lu | 3 | -430/+404
2016-04-02 | Implement x86-64 multiarch mempcpy in memcpy | H.J. Lu | 9 | -57/+69
2016-04-02 | [x86] Add a feature bit: Fast_Unaligned_Copy | H.J. Lu | 3 | -1/+12
2016-04-02 | Don't set %rcx twice before "rep movsb" | H.J. Lu | 1 | -1/+0
2016-04-02 | Set index_arch_AVX_Fast_Unaligned_Load only for Intel processors | H.J. Lu | 2 | -70/+84
2016-04-02 | Update family and model detection for AMD CPUs | H.J. Lu | 1 | -12/+15
2016-04-02 | Add _arch_/_cpu_ to index_*/bit_* in x86 cpu-features.h | H.J. Lu | 3 | -137/+153
2016-04-02 | Or bit_Prefer_MAP_32BIT_EXEC in EXTRA_LD_ENVVARS | H.J. Lu | 1 | -1/+1
2016-04-02 | Group AVX512 functions in .text.avx512 section | H.J. Lu | 2 | -2/+2
2016-04-02 | x86-64: Fix memcpy IFUNC selection | H.J. Lu | 1 | -13/+14
2016-04-02 | Added memcpy/memmove family optimized with AVX512 for KNL hardware. | Andrew Senkevich | 11 | -19/+540
2016-04-02 | Added memset optimized with AVX512 for KNL hardware. | Andrew Senkevich | 7 | -2/+229
2016-04-02 | Add Prefer_MAP_32BIT_EXEC to map executable pages with MAP_32BIT | H.J. Lu | 4 | -0/+124
2016-04-02 | Enable Silvermont optimizations for Knights Landing | H.J. Lu | 1 | -0/+3
2016-02-23 | [x86_64] Set DL_RUNTIME_UNALIGNED_VEC_SIZE to 8 [hjl/plt/2.22] | H.J. Lu | 2 | -11/+15
2016-02-23 | Support x86-64 assembler without AVX512 | H.J. Lu | 1 | -16/+24
2015-08-14 | Remove incorrect register mov in floorf/nearbyint on x86_64 | Siddhesh Poyarekar | 2 | -2/+0
2015-08-05 | Don't run tst-getpid2 with LD_BIND_NOW=1 | H.J. Lu | 1 | -5/+0
2015-08-05 | Use SSE optimized strcmp in x86-64 ld.so | H.J. Lu | 1 | -253/+216
2015-08-05 | Remove x86-64 rtld-xxx.c and rtld-xxx.S | H.J. Lu | 6 | -464/+0
2015-08-05 | Replace %xmm8 with %xmm0 | H.J. Lu | 1 | -26/+26
2015-08-05 | Replace %xmm[8-12] with %xmm[0-4] | H.J. Lu | 1 | -47/+47
2015-08-05 | Don't disable SSE in x86-64 ld.so | H.J. Lu | 3 | -11/+14
2015-08-05 | Save and restore vector registers in x86-64 ld.so | H.J. Lu | 8 | -501/+472
2015-08-05 | Align stack when calling __errno_location | H.J. Lu | 3 | -0/+18
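
For context on the 2016-06-06 cache-info commits above ("Check the HTT bit before counting logical threads", "Count number of logical processors sharing L2 cache"): the raw inputs come from CPUID. Leaf 1 EDX bit 28 is the HTT flag, and leaf 4 (deterministic cache parameters) reports, per cache, the maximum number of logical processors sharing it in EAX bits 25:14, stored minus one. A minimal standalone sketch using GCC's <cpuid.h> follows; it is not glibc's actual cacheinfo.c logic, only the same CPUID fields read directly:

```c
#include <cpuid.h>
#include <stdio.h>

int main (void)
{
  unsigned int eax, ebx, ecx, edx;

  /* CPUID leaf 1: EDX bit 28 (HTT) indicates whether the logical
     processor count in EBX bits 23:16 is meaningful.  */
  if (!__get_cpuid (1, &eax, &ebx, &ecx, &edx))
    return 1;
  printf ("HTT: %u\n", (edx >> 28) & 1);

  /* CPUID leaf 4: enumerate subleaves until the cache type field
     (EAX bits 4:0) reads 0, meaning no more caches.  */
  for (unsigned int i = 0; ; i++)
    {
      if (!__get_cpuid_count (4, i, &eax, &ebx, &ecx, &edx))
        return 1;
      if ((eax & 0x1f) == 0)
        break;
      unsigned int level = (eax >> 5) & 0x7;          /* EAX bits 7:5 */
      unsigned int share = ((eax >> 14) & 0xfff) + 1; /* EAX bits 25:14 */
      printf ("L%u cache: up to %u logical processors share it\n",
              level, share);
    }
  return 0;
}
```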
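"X86-64: Use non-temporal store in memcpy on large data" (2016-04-08) is about cache pollution: past a threshold derived from the shared cache size, the copy switches to streaming stores so it does not evict the caller's working set. The real routine is the hand-written memmove-vec-unaligned-erms.S; this is only an intrinsics sketch of the store pattern, with alignment and overlap handling omitted and the threshold name invented:

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>

/* Hypothetical cutoff; glibc computes its threshold from the shared
   cache size discovered via CPUID, not from a fixed constant.  */
#define NT_THRESHOLD (4u * 1024 * 1024)

/* Sketch only: assumes dst is 16-byte aligned, n is a multiple of 16,
   and the buffers do not overlap.  */
static void
copy_large (void *dst, const void *src, size_t n)
{
  char *d = dst;
  const char *s = src;
  for (size_t i = 0; i < n; i += 16)
    {
      __m128i v = _mm_loadu_si128 ((const __m128i *) (s + i));
      _mm_stream_si128 ((__m128i *) (d + i), v); /* bypasses the cache */
    }
  _mm_sfence ();  /* order the streaming stores before later stores */
}
```

Below the threshold a normal cached copy stays preferable, since streaming stores forfeit the chance that the destination is reused from cache soon after.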
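The 2016-04-02 ERMS series ("Initial Enhanced REP MOVSB/STOSB (ERMS) support" plus the rep movsb/rep stosb memmove and memset variants) keys off a single CPUID feature bit: CPUID.(EAX=7,ECX=0):EBX bit 9. A hedged detection sketch with GCC's <cpuid.h>; glibc itself records the bit in its cpu-features table and consults it from the IFUNC selectors rather than calling a helper like this:

```c
#include <cpuid.h>
#include <stdio.h>

/* ERMS (Enhanced REP MOVSB/STOSB) is CPUID leaf 7, subleaf 0,
   EBX bit 9.  When set, rep movsb/rep stosb are fast on large
   blocks and the *_erms variants become attractive.  */
static int
has_erms (void)
{
  unsigned int eax, ebx, ecx, edx;
  if (!__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx))
    return 0;
  return (ebx >> 9) & 1;
}

int main (void)
{
  printf ("ERMS: %s\n", has_erms () ? "yes" : "no");
  return 0;
}
```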
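Finally, the Prefer_MAP_32BIT_EXEC pair of commits (2016-04-02) lets ld.so request executable mappings in the low 2 GiB of the address space, where branch predictors on some processors behave better; the EXTRA_LD_ENVVARS commit wires the feature to an LD_ environment variable so it stays opt-in. The underlying flag is plain Linux mmap; a minimal sketch, x86-64 Linux only:

```c
#define _GNU_SOURCE     /* for MAP_32BIT */
#include <stdio.h>
#include <sys/mman.h>

int main (void)
{
  /* MAP_32BIT (x86-64 Linux) places the mapping in the low 2 GiB,
     so branches into it can use short displacements.  */
  void *p = mmap (NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
  if (p == MAP_FAILED)
    {
      perror ("mmap");
      return 1;
    }
  printf ("mapped at %p\n", p);  /* address fits in 32 bits */
  munmap (p, 4096);
  return 0;
}
```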