author		Noah Goldstein <goldstein.w.n@gmail.com>	2021-09-20 16:20:15 -0500
committer	Noah Goldstein <goldstein.w.n@gmail.com>	2021-10-12 13:38:02 -0500
commit		e59ced238482fd71f3e493717f14f6507346741e (patch)
tree		374870a4236379305baae6fcdb99ebec65708ca3 /sysdeps/x86_64/memset.S
parent		1bd8b8d58fc9967cc073d2c13bfb6befefca2faa (diff)
x86: Optimize memset-vec-unaligned-erms.S
No bug.
The optimizations are:

1. Change the control flow for L(more_2x_vec) to fall through to the
   loop and jump for L(less_4x_vec) and L(less_8x_vec). This uses less
   code size and saves jumps for lengths > 4x VEC_SIZE.

2. For EVEX/AVX512, move L(less_vec) closer to the entry point.

3. Avoid complex address modes for lengths > 2x VEC_SIZE.

4. Slightly better alignment code for the loop, from the perspective of
   code size and uops.

5. Align branch targets so they make full use of their fetch block and,
   if possible, their cache line.

6. Try to reduce the total number of icache lines that will need to be
   pulled in for a given length.

7. Include a "local" version of the stosb target. For AVX2/EVEX/AVX512,
   jumping to the stosb target in the sse2 code section will almost
   certainly be a jump to a new page. The new version increases code
   size marginally by duplicating the target, but should get better
   iTLB behavior as a result.
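The fall-through change in (1) can be illustrated with a hedged sketch.
This is not the actual glibc code, just an illustration of the layout
idea: make the larger (loop) path the straight-line path, so the taken
branch is paid only by the smaller sizes. Label names and the %rdx
length register follow the memset convention, but the bodies are
placeholders.

```asm
/* Sketch only -- illustrative, not the real memset-vec-unaligned-erms.S.  */

	/* Before: the loop is out of line, so the large/common case
	   takes a jump to reach it.  */
	cmp	$(VEC_SIZE * 8), %rdx
	ja	L(loop)			/* taken for lengths > 8x VEC_SIZE */
	/* ... less_8x_vec code here, fallen into ... */

	/* After: invert the branch so the loop is the fall-through
	   path and only the smaller case pays a taken jump.  */
	cmp	$(VEC_SIZE * 8), %rdx
	jbe	L(less_8x_vec)		/* taken only for lengths <= 8x VEC_SIZE */
	/* ... loop setup and 4x VEC loop fall through here ... */
```

Besides removing a taken branch from the large-length path, the
fall-through layout lets the loop immediately follow the entry compare,
which also helps goals (5) and (6): fewer fetch blocks and icache lines
touched before the hot loop starts.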
test-memset, test-wmemset, and test-bzero are all passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Diffstat (limited to 'sysdeps/x86_64/memset.S')
-rw-r--r--	sysdeps/x86_64/memset.S	| 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/sysdeps/x86_64/memset.S b/sysdeps/x86_64/memset.S
index 7d4a327..0137eba 100644
--- a/sysdeps/x86_64/memset.S
+++ b/sysdeps/x86_64/memset.S
@@ -18,13 +18,15 @@
    <https://www.gnu.org/licenses/>.  */
 
 #include <sysdep.h>
 
+#define USE_WITH_SSE2	1
 #define VEC_SIZE	16
+#define MOV_SIZE	3
+#define RET_SIZE	1
+
 #define VEC(i)		xmm##i
-/* Don't use movups and movaps since it will get larger nop paddings for
-   alignment.  */
-#define VMOVU		movdqu
-#define VMOVA		movdqa
+#define VMOVU		movups
+#define VMOVA		movaps
 
 #define MEMSET_VDUP_TO_VEC0_AND_SET_RETURN(d, r) \
   movd d, %xmm0; \
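A note on why the diff swaps movdqu/movdqa for movups/movaps and adds
MOV_SIZE/RET_SIZE: the unaligned SSE loads encode to different lengths,
and the new alignment code sizes its padding from the instruction
lengths. The byte counts below are standard x86-64 encodings (the
operands are illustrative):

```asm
/* Encoding sizes for an unaligned 16-byte load/store form:
     movups	(%rdi), %xmm0	# 0F 10 07    -- 3 bytes (hence MOV_SIZE 3)
     movdqu	(%rdi), %xmm0	# F3 0F 6F 07 -- 4 bytes (extra F3 prefix)
   and for the return:
     ret			# C3          -- 1 byte  (hence RET_SIZE 1)  */
```

With the shorter movups encoding, the alignment directives emit smaller
nop padding, which is consistent with the commit's goals of reducing
code size and icache footprint. (The deleted comment made the opposite
trade-off for an older alignment scheme.)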