path: root/llvm/lib/CodeGen/TargetLoweringObjectFileImpl.cpp
author     Matt Arsenault <Matthew.Arsenault@amd.com>   2021-08-03 19:09:44 -0400
committer  Matt Arsenault <Matthew.Arsenault@amd.com>   2021-08-10 13:12:34 -0400
commit     d719f1c3cc9c6f44438b4bd847816d7462945269 (patch)
tree       6b5b87547e4d8487c873b074d97d8fa81831b62b /llvm/lib/CodeGen/TargetLoweringObjectFileImpl.cpp
parent     d84c4e385721ceb7fe3ef0bff88ed6a51a5337da (diff)
AMDGPU: Add alloc priority to global ranges
The requested register class priorities weren't respected globally. I'm not sure why this is a target option rather than the expected default behavior (recently added in 1a6dc92be7d68611077f0fb0b723b361817c950c). This avoids an allocation failure when many wide tuple spills are introduced. I think this is a workaround, since I would not expect the allocation priority to be required; it should only be a performance hint.

The allocator should be smarter about when only a subregister needs to be spilled and restored.

This does regress a couple of degenerate store stress lit tests, which shouldn't be too important.
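For reference, the allocation priority the message refers to is an ordinary TableGen property on a register class. The sketch below is hypothetical (the ExReg/ExWideTuple names are invented for illustration and are not the AMDGPU definitions); it only shows where such a hint is declared, not the change made by this commit:

    // Hypothetical example target fragment (not AMDGPU); assumes llvm-tblgen
    // include paths so that llvm/Target/Target.td resolves.
    include "llvm/Target/Target.td"

    class ExReg<string n> : Register<n> {
      let Namespace = "Example";
    }

    def EX0 : ExReg<"ex0">;
    def EX1 : ExReg<"ex1">;
    def EX2 : ExReg<"ex2">;
    def EX3 : ExReg<"ex3">;

    // A wide tuple class of the kind the commit message mentions. A higher
    // AllocationPriority asks the greedy allocator to dequeue these harder,
    // more constrained live ranges earlier; it is a hint, not a requirement.
    def ExWideTuple : RegisterClass<"Example", [v4i32], 128,
                                    (add EX0, EX1, EX2, EX3)> {
      let AllocationPriority = 7;
    }

Per the message above, before this change such a priority was not respected for global live ranges.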
Diffstat (limited to 'llvm/lib/CodeGen/TargetLoweringObjectFileImpl.cpp')
0 files changed, 0 insertions, 0 deletions