author    Wilco Dijkstra <wdijkstr@arm.com>  2020-08-28 17:51:40 +0100
committer Wilco Dijkstra <wdijkstr@arm.com>  2020-08-28 17:51:40 +0100
commit    bd394d131c10c9ec22c6424197b79410042eed99 (patch)
tree      a50e1b4a3bd0a4cbc610c1f0ef7d7d8119f4ed6a
parent    567b1705017a0876b1cf9661a20521ef1e4ddc54 (diff)
AArch64: Improve backwards memmove performance
On some microarchitectures, the backwards memmove performs better when the
stores use STR with decreasing addresses. So change the memmove loop in
memcpy_advsimd.S to use 2x STR rather than STP.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
 sysdeps/aarch64/multiarch/memcpy_advsimd.S | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/sysdeps/aarch64/multiarch/memcpy_advsimd.S b/sysdeps/aarch64/multiarch/memcpy_advsimd.S
index d4ba747..48bb6d7 100644
--- a/sysdeps/aarch64/multiarch/memcpy_advsimd.S
+++ b/sysdeps/aarch64/multiarch/memcpy_advsimd.S
@@ -223,12 +223,13 @@ L(copy_long_backwards):
 	b.ls	L(copy64_from_start)

 L(loop64_backwards):
-	stp	A_q, B_q, [dstend, -32]
+	str	B_q, [dstend, -16]
+	str	A_q, [dstend, -32]
 	ldp	A_q, B_q, [srcend, -96]
-	stp	C_q, D_q, [dstend, -64]
+	str	D_q, [dstend, -48]
+	str	C_q, [dstend, -64]!
 	ldp	C_q, D_q, [srcend, -128]
 	sub	srcend, srcend, 64
-	sub	dstend, dstend, 64
 	subs	count, count, 64
 	b.hi	L(loop64_backwards)