path: root/lldb/source/Plugins/ScriptInterpreter/Python/PythonDataObjects.cpp
authorStanislav Mekhanoshin <Stanislav.Mekhanoshin@amd.com>2022-01-27 16:27:43 -0800
committerStanislav Mekhanoshin <Stanislav.Mekhanoshin@amd.com>2022-02-01 11:43:17 -0800
commitc2b18a3cc5bd6cae49372c2367445b480989db0d (patch)
treeeb91d408da1135f9fdf09f2d13ff171707931681 /lldb/source/Plugins/ScriptInterpreter/Python/PythonDataObjects.cpp
parent8e75536e510460bedcfdafb38d58cdfb7bb66111 (diff)
[AMDGPU] Allow scalar loads after barrier
Currently we cannot convert a vector load into a scalar load if there is a dominating barrier or fence: such an instruction is treated as a clobbering memory access, to prevent reordering of memory operations. But while reordering is indeed not possible, the actual memory is not clobbered by a barrier or fence, so we can still use a scalar load for a uniform pointer. The solution is not to bail on the first clobbering access, but to traverse MemorySSA up to the root, skipping barriers and fences.

Differential Revision: https://reviews.llvm.org/D118419
Diffstat (limited to 'lldb/source/Plugins/ScriptInterpreter/Python/PythonDataObjects.cpp')
0 files changed, 0 insertions, 0 deletions