author    Sam Parker <sam.parker@arm.com>    2025-02-17 09:09:52 +0000
committer GitHub <noreply@github.com>        2025-02-17 09:09:52 +0000
commit    ea7897a617b897f87f148db48cda9fcc7c1c53dc (patch)
tree      bcb3212fceab2478f2b8ebda4e70adcadebeb2a5 /llvm/lib/Analysis/ValueTracking.cpp
parent    948a8477c6a966ee8509400d2857706e933f4149 (diff)
[WebAssembly] Enable interleaved memory accesses (#125696)
Enable the vectorizer to access interleaved memory. This means that, when it is judged profitable, the memory accesses can be vectorized instead of the vector value being built up by a sequence of load_lane instructions. This often increases the vectorization factor of the loop, leading to significantly better performance.

I ran a reasonably large collection of benchmarks; most are unaffected by this change, with performance deltas typically under 1%. But I see a 2.5% speedup for the total run time of TSVC, a 1% speedup for SPEC2017 x265, a 28% speedup for a ResNet workload, and a 95% speedup for libyuv. This is running V8 on an AArch64 box.
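As context for what "interleaved memory accesses" means here, the sketch below shows a hypothetical stride-2 loop of the kind this change targets: with interleaved-access vectorization enabled, the loop vectorizer can use wide SIMD loads and stores plus shuffles rather than assembling each vector lane by lane with load_lane. The function and its name are illustrative only and are not taken from this commit's diff.

// Hypothetical stride-2 (interleaved) access pattern, for illustration only.
// With interleaved-access vectorization enabled, loops like this can be
// vectorized with wide v128 loads/stores and shuffles on Wasm SIMD instead
// of building each vector with a sequence of load_lane instructions.
void scale_pairs(float *out, const float *in, float k, int n) {
  for (int i = 0; i < n; ++i) {
    out[2 * i]     = in[2 * i] * k;      // even elements (stride 2)
    out[2 * i + 1] = in[2 * i + 1] * k;  // odd elements  (stride 2)
  }
}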
Diffstat (limited to 'llvm/lib/Analysis/ValueTracking.cpp')
0 files changed, 0 insertions, 0 deletions