author    Simon Tatham <simon.tatham@arm.com>  2024-03-28 08:57:27 +0000
committer GitHub <noreply@github.com>          2024-03-28 08:57:27 +0000
commit    88b10f3e3aa93232f1f530cf8dfe1227f5f74ae9 (patch)
tree      4c8c2c002035784045d42349689be88b2e29435f /clang/lib/Analysis/FlowSensitive/DataflowEnvironment.cpp
parent    2a2fd488b6bc1f3df7a8c103f53fec8bf849da4a (diff)
[MC][AArch64] Segregate constant pool caches by size. (#86832)
If you write a 32- and a 64-bit LDR instruction that both refer to the
same constant or symbol using the = syntax:
```
ldr w0, =something
ldr x1, =something
```
then the first call to `ConstantPool::addEntry` will insert the constant
into its cache of existing entries, and the second one will find the
cache entry and reuse it. This results in a 64-bit load from a 32-bit
constant, reading nonsense into the other half of the target register.
In this patch I've done the simplest fix: include the size of the
constant pool entry as part of the key used to index the cache. So now
32- and 64-bit constant loads will never share a constant pool entry.
There's scope for doing this better, in principle: you could imagine
merging the two slots with appropriate overlap, so that the 32-bit load
loads the LSW of the 64-bit value. But that's much more complicated: you
have to take endianness into account, and maybe also adjust the size of
an existing entry. This is the simplest fix that restores correctness.
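The fix can be sketched in miniature. This is a simplified, hypothetical model of the caching logic, not the actual LLVM `ConstantPool` API: the cache key is widened from the expression alone to the pair (expression, access size), so equal expressions loaded at different widths land in distinct pool slots.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Illustrative sketch only: before the fix, the cache was keyed by the
// constant/symbol alone, so `ldr w0, =something` and `ldr x1, =something`
// shared one entry and the 64-bit load read past a 32-bit slot. Keying
// by (expression, size) keeps entries of different widths separate.
class ConstantPoolSketch {
  // Key: (symbolic expression, entry size in bytes) -> pool offset.
  std::map<std::pair<std::string, unsigned>, unsigned> Cache;
  unsigned NextOffset = 0;

public:
  // Returns the pool offset for this (expr, size), reusing an existing
  // entry only when both the expression *and* the size match.
  unsigned addEntry(const std::string &Expr, unsigned Size) {
    auto It = Cache.find({Expr, Size});
    if (It != Cache.end())
      return It->second; // same expression, same width: safe to share
    unsigned Offset = NextOffset;
    NextOffset += Size; // allocate a fresh slot of the requested width
    Cache[{Expr, Size}] = Offset;
    return Offset;
  }
};
```

With this keying, the 32-bit and 64-bit loads from the example above get two distinct slots, while repeated loads of the same width still share one entry, which is exactly the behavior the patch describes.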
Diffstat (limited to 'clang/lib/Analysis/FlowSensitive/DataflowEnvironment.cpp')
0 files changed, 0 insertions, 0 deletions