author    Philip Reames <listmail@philipreames.com>  2020-11-23 15:23:46 -0800
committer Philip Reames <listmail@philipreames.com>  2020-11-23 15:32:17 -0800
commit    b06a2ad94f45abc18970ecc3cec93d140d036d8f (patch)
tree      820322cc96403a9df35ee26b5c5c58febdcae8df /clang/lib/CodeGen/CodeGenModule.cpp
parent    f7d033f4d80f476246a70f165e7455639818f907 (diff)
[LoopVectorizer] Lower uniform loads as a single load (instead of relying on CSE)
A uniform load is one which loads from a uniform address across all lanes. As currently implemented, we cost model such loads as if we did a single scalar load plus a broadcast, but the actual lowering replicates the load once per lane.

This change tweaks the lowering to use the REPLICATE strategy by marking such loads (and the computation leading to their memory operand) as uniform after vectorization. This is a useful change in itself, but its real purpose is to pave the way for a following change which will generalize our uniformity logic.

In review discussion, an issue was raised with coupling the cost model to the lowering strategy for uniform inputs. That discussion remains unsettled and is pending a larger architectural discussion. We decided to move forward with this patch as is, and to revise as warranted once the bigger-picture design questions are settled.

Differential Revision: https://reviews.llvm.org/D91398
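For reference, a minimal sketch of a source loop containing a uniform load; the function and variable names (scale, dst, src, coeff, n) are hypothetical and only illustrate the pattern, they do not appear in this commit:

    /* A loop-invariant pointer dereference inside a vectorizable loop:
       every lane of a vector iteration reads the same address, so the
       load is "uniform" in the LoopVectorizer's sense. */
    void scale(float *dst, const float *src, const float *coeff, int n) {
      for (int i = 0; i < n; ++i)
        dst[i] = src[i] * *coeff;  /* *coeff is the uniform load */
    }

Under the strategy described above, the vectorizer should emit one scalar load of *coeff per vector iteration (broadcast to a vector where lanes need the value), roughly matching the cost model's scalar-load-plus-broadcast assumption, rather than one load per lane that previously had to be cleaned up by CSE.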
Diffstat (limited to 'clang/lib/CodeGen/CodeGenModule.cpp')
0 files changed, 0 insertions, 0 deletions