| author | Andrzej Warzynski <andrzej.warzynski@arm.com> | 2020-03-04 11:21:20 +0000 |
|---|---|---|
| committer | Andrzej Warzynski <andrzej.warzynski@arm.com> | 2020-03-12 13:55:56 +0000 |
| commit | 46b9f14d712d47fa0bebd72edd8cfe58cae11f53 | |
| tree | 5ea96471d038b128a2fb86f8ec2385966c106600 | |
| parent | a66dc755db4cd0af678b0dd7a84ca64fd66518f6 | |
[AArch64][SVE] Add intrinsics for non-temporal scatters/gathers
Summary:
This patch adds the following intrinsics for non-temporal gather loads
and scatter stores:
* aarch64_sve_ldnt1_gather_index
* aarch64_sve_stnt1_scatter_index
These intrinsics implement the "scalar + vector of indices" addressing
mode.
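To illustrate the "scalar + vector of indices" addressing mode, each lane loads from (or stores to) the scalar base plus its lane's index times the element size. A minimal Python sketch (purely illustrative, not part of the patch):

```python
def gather_addresses(base, indices, elt_size_bytes):
    """Compute the per-lane addresses for a "scalar + vector of
    indices" gather: base + index * element_size for each lane."""
    return [base + i * elt_size_bytes for i in indices]

# Gathering 8-byte elements at indices 0, 2, 5 from base 0x1000
# touches addresses 0x1000, 0x1010 and 0x1028.
```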
Unlike regular and first-faulting gathers/scatters, there is no
non-temporal instruction that takes indices and scales them itself.
Instead, the indices for non-temporal gathers/scatters are scaled
before the intrinsics are lowered to `ldnt1`/`stnt1` instructions.
The new ISD nodes, GLDNT1_INDEX and SSTNT1_INDEX, are used only as
placeholders so that we can easily identify the cases implemented in
this patch in performGatherLoadCombine and performScatterStoreCombine.
Once encountered, they are replaced with:
* GLDNT1_INDEX -> SPLAT_VECTOR + SHL + GLDNT1
* SSTNT1_INDEX -> SPLAT_VECTOR + SHL + SSTNT1
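The effect of this expansion can be modelled in Python (illustrative only; the actual lowering builds SelectionDAG nodes in C++): the shift amount log2(element size) is splatted across the vector and each index is shifted left, turning element indices into the byte offsets expected by the unscaled GLDNT1/SSTNT1 nodes.

```python
def scale_indices(indices, elt_size_bytes):
    """Model SPLAT_VECTOR + SHL: shift every index left by
    log2(element size), converting indices into byte offsets."""
    shift = elt_size_bytes.bit_length() - 1  # the splatted shift amount
    return [i << shift for i in indices]

# Indices [0, 1, 2, 3] of 4-byte elements become byte offsets [0, 4, 8, 12].
```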
The patterns for lowering ISD::SHL for scalable vectors (required by
this patch) were missing, so these are added too.
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D75601
