path: root/llvm/lib/Bitcode
author	Alex Bradbury <asb@igalia.com>	2024-11-15 14:07:46 +0000
committer	GitHub <noreply@github.com>	2024-11-15 14:07:46 +0000
commit	7ff3a9acd84654c9ec2939f45ba27f162ae7fbc3 (patch)
tree	3f4a75b1619fd542cf4e8589ede57e4d7009ad03 /llvm/lib/Bitcode
parent	35710ab392b50c815765f03c12409147502dfb86 (diff)
[IR] Initial introduction of llvm.experimental.memset_pattern (#97583)
Supersedes the draft PR #94992, taking a different approach following feedback:

* Lower in PreISelIntrinsicLowering
* Don't require that the number of bytes to set is a compile-time constant
* Define llvm.memset_pattern rather than llvm.memset_pattern.inline

As discussed in the [RFC thread](https://discourse.llvm.org/t/rfc-introducing-an-llvm-memset-pattern-inline-intrinsic/79496), the intent is that the intrinsic will be lowered to loops, a sequence of stores, or libcalls depending on the expected cost and the availability of libcalls on the target. Right now, there's just a single lowering path that aims to handle all cases. My intent would be to follow up with additional PRs that add further optimisations where possible (e.g. when libcalls are available, or when arguments are known to be constant).
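To illustrate the semantics the intrinsic provides, a loop-based lowering stores `count` back-to-back copies of a fixed-size pattern at the destination. The following is a minimal C sketch of that behaviour, not the actual PreISelIntrinsicLowering code; the function name and parameters are illustrative:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative loop lowering: write `count` contiguous copies of a
 * `pattern_size`-byte pattern to `dst`. The caller must ensure `dst`
 * has room for `count * pattern_size` bytes. */
static void memset_pattern_loop(void *dst, const void *pattern,
                                size_t pattern_size, size_t count) {
    unsigned char *p = dst;
    for (size_t i = 0; i < count; i++) {
        memcpy(p, pattern, pattern_size); /* one "store" per iteration */
        p += pattern_size;
    }
}
```

On targets where a suitable libcall exists (e.g. `memset_pattern16` on Apple platforms), a later lowering could emit that call instead of the loop when it is expected to be cheaper.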
Diffstat (limited to 'llvm/lib/Bitcode')
0 files changed, 0 insertions, 0 deletions