path: root/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
author    Simon Pilgrim <llvm-dev@redking.me.uk>    2025-10-28 13:05:52 +0000
committer GitHub <noreply@github.com>    2025-10-28 13:05:52 +0000
commit    e588c7fa713d8bdd5c424831ca42136b560ff66b (patch)
tree      4288ef4bb3d58a0d47acafddaf6bfac30c904ca8 /llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
parent    4678f16f6d9089ce7d8de8a7fa31b70190208ab2 (diff)
[X86] Attempt to fold trunc(srl(load(p),amt)) -> load(p+amt/8) (#165266)

As reported in #164853, we previously only reduced shifted loads for constant shift amounts, but we can do more with non-constant values if value tracking can confirm basic alignment. This patch detects when a truncated shifted load of a scalar integer shifts by a byte-aligned amount, and replaces the non-constant shift amount with a pointer offset instead.

I had hoped to make this a generic DAG fold, but reduceLoadWidth isn't ready to be converted to a KnownBits value tracking mechanism, and other targets don't have complex addressing math like X86.

Fixes #164853
Diffstat (limited to 'llvm/lib/Bitcode/Writer/BitcodeWriter.cpp')
0 files changed, 0 insertions, 0 deletions