Age | Commit message | Author | Files | Lines |
|
(#155041)
This PR simply moves the callsite anchors from the beginning of
callsites to their end.
Emitting the end of callsites is more sensible because it allows breaking the
basic block into subblocks that end with control-transfer instructions.
|
|
For context see main pull request: #147424.
Reviewers: MaskRay
Reviewed By: MaskRay
Pull Request: https://github.com/llvm/llvm-project/pull/149259
|
|
Reference:
https://sourceware.org/git/?p=gnu-gabi.git;a=blob;f=program-loading-and-dynamic-linking.txt;h=3357d865720285df2d29c4e8f92de49ddf1beb40;hb=refs/heads/master
|
|
|
|
This change was mistakenly dropped from the prior commit 6b623a6622707ea47d84ab0069f766215a6fec44.
|
|
(SHT_LLVM_BB_ADDR_MAP_V0). (#146186)
Version 2 was added more than two years ago
(https://github.com/llvm/llvm-project/commit/6015a045d768feab3bae9ad9c0c81e118df8b04a).
So it should be safe to deprecate older versions.
|
|
newly-introduced SHT_LLVM_BB_ADDR_MAP version. (#144426)
Recently, we have been looking at some optimizations targeting
individual calls. In particular, we plan to extend the address mapping
technique to map to individual callsites. For example, in this piece of
code for a basic block:
```
<BB>:
1200: lea 0x1(%rcx), %rdx
1204: callq foo
1209: cmpq 0x10, %rdx
120d: ja L1
```
We want to emit 0x9 as the call site offset for `callq foo` (the offset
from the block entry to right after the call), so that we can tell whether a
sampled address falls before or after the call.
This PR implements the decode/encode/emit capability. The Codegen change
will be implemented in a later PR.
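For illustration, here is a minimal sketch of how a decoder might use such callsite end offsets to classify a sampled address within a block; the struct and field names are illustrative stand-ins, not the actual LLVM API.
```
#include <cstdint>
#include <vector>

// Hypothetical, simplified view of one decoded block entry: the block's
// offset from the function entry, its size, and the offsets of the ends of
// its callsites, all relative to the block entry (as described above).
struct BlockEntry {
  uint64_t Offset = 0;
  uint64_t Size = 0;
  std::vector<uint64_t> CallsiteEndOffsets; // e.g. {0x9} for the block above
};

// Returns the index of the sub-block (split at callsite ends) that contains
// a sampled address, or -1 if the address is outside the block.
int findSubBlock(const BlockEntry &BB, uint64_t FuncAddr, uint64_t Sample) {
  uint64_t BlockStart = FuncAddr + BB.Offset;
  if (Sample < BlockStart || Sample >= BlockStart + BB.Size)
    return -1;
  uint64_t Rel = Sample - BlockStart;
  int Idx = 0;
  for (uint64_t End : BB.CallsiteEndOffsets) {
    if (Rel < End)
      return Idx; // before (or inside) this call
    ++Idx;
  }
  return Idx; // after the last callsite in the block
}
```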
|
|
section. (#142825)
Compression of SHT_LLVM_BB_ADDR_MAP with zstd can give a 3X compression
ratio, which is especially beneficial with PGO_analysis_map. To read the
data back, we must decompress it. Though we can use llvm-objcopy to do
this, it's much better to do this decompression internally in the
library API.
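For reference, a minimal sketch of what decompressing such a section involves when done by hand, assuming a 64-bit little-endian object and the standard SHF_COMPRESSED layout (an Elf64_Chdr header followed by the zstd frame); the library API described here handles this internally.
```
#include <cstdint>
#include <cstring>
#include <stdexcept>
#include <vector>
#include <zstd.h>

// Minimal stand-in for Elf64_Chdr, the header at the start of a
// SHF_COMPRESSED section. ELFCOMPRESS_ZSTD is 2 in the gABI.
struct Chdr64 {
  uint32_t ch_type;
  uint32_t ch_reserved;
  uint64_t ch_size;      // uncompressed size
  uint64_t ch_addralign;
};
constexpr uint32_t ELFCOMPRESS_ZSTD = 2;

std::vector<uint8_t> decompressSection(const uint8_t *Data, size_t Size) {
  if (Size < sizeof(Chdr64))
    throw std::runtime_error("section too small for compression header");
  Chdr64 Hdr;
  std::memcpy(&Hdr, Data, sizeof(Hdr));
  if (Hdr.ch_type != ELFCOMPRESS_ZSTD)
    throw std::runtime_error("not a zstd-compressed section");
  std::vector<uint8_t> Out(Hdr.ch_size);
  size_t Ret = ZSTD_decompress(Out.data(), Out.size(),
                               Data + sizeof(Hdr), Size - sizeof(Hdr));
  if (ZSTD_isError(Ret) || Ret != Hdr.ch_size)
    throw std::runtime_error("zstd decompression failed");
  return Out; // raw SHT_LLVM_BB_ADDR_MAP payload, ready for decoding
}
```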
|
|
try_emplace value-initializes values, so we do not need to pass
nullptr to try_emplace when the value types are raw pointers or
std::unique_ptr<T>.
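A small illustration of the pattern this cleans up (the container and key here are made up; any map with a C++17-style try_emplace behaves the same):
```
#include <map>
#include <memory>

struct Foo {};

void example(std::map<int, std::unique_ptr<Foo>> &M, int Key) {
  // Before: explicitly passing a null value.
  M.try_emplace(Key, nullptr);
  // After: the mapped value is value-initialized (i.e. null) anyway.
  M.try_emplace(Key);
}
```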
|
|
This patch adds support for properly decoding SHT_LLVM_BB_ADDR_MAP
sections in relocatable object files when the relocation format is CREL.
Reviewers: rlavaee, jh7370, red1bluelost, MaskRay
Reviewed By: MaskRay
Pull Request: https://github.com/llvm/llvm-project/pull/126446
|
|
This patch updates the getSectionAndRelocations function to also support
CREL relocation sections. Unit tests have been added. This patch also
updates consumers to say they explicitly do not support CREL format
relocations. Subsequent patches will make the consumers work with CREL
format relocations and also add in testing support.
Reviewers: red1bluelost, MaskRay, rlavaee
Reviewed By: MaskRay
Pull Request: https://github.com/llvm/llvm-project/pull/126445
|
|
Sometimes we want to use a `PgoAnalysisMap` feature that doesn't require
the BB entries info, e.g. only the `FuncEntryCount`, but the BB entries
are emitted by default, so I'm adding an option to skip them in this
case to save binary size (this can save ~90% of the section size). For
the implementation, it adds a new field (`OmitBBEntries`) to
`BBAddrMap::Features`, controlled by the switch
`--basic-block-address-map-skip-bb-entries`.
Note that this naturally preserves backwards compatibility, as the field
is zero for the old version, which matches the decoding in newer LLVM
versions.
|
|
`BBEntries` is defined outside of the loop and is used after move, which
is undefined behavior.
|
|
These symbols are implicitly imported from the LLVM shared library by
llvm-objdump on ELF-like platforms, but on Windows they need to be
explicitly exported when LLVM is built as a shared library.
I also add visibility macros for XCOFFObjectFile::getExceptionEntries
that can't automatically be added by clang tooling since it doesn't
store the source locations for explicit function template
instantiations.
|
|
This patch will make LLVM emit a new section .llvm_jump_table_sizes
containing tuples of (jump table address, entry count) in object files.
This section is useful for tools that need to statically reconstruct the
control flow of executables.
At the moment this is only enabled by default for the PS5 target.
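A minimal sketch of how a tool might consume such a section, assuming each tuple is a pair of 8-byte little-endian words on a 64-bit target; the exact on-disk encoding is defined by the emitting target, so treat this as illustrative only.
```
#include <cstdint>
#include <cstring>
#include <utility>
#include <vector>

// Parse .llvm_jump_table_sizes as (jump table address, entry count) pairs,
// assuming 8-byte little-endian words (an assumption, not a documented
// guarantee).
std::vector<std::pair<uint64_t, uint64_t>>
parseJumpTableSizes(const uint8_t *Data, size_t Size) {
  std::vector<std::pair<uint64_t, uint64_t>> Tables;
  for (size_t Off = 0; Off + 16 <= Size; Off += 16) {
    uint64_t Addr, Count;
    std::memcpy(&Addr, Data + Off, 8);
    std::memcpy(&Count, Data + Off + 8, 8);
    Tables.emplace_back(Addr, Count);
  }
  return Tables;
}
```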
|
|
Extract the llvm-readelf decoder to `decodeCrel` (#91280) and reuse it
for llvm-objdump.
Because the section representation of LLVMObject (`SectionRef`) is
64-bit, insufficient to hold all decoder states, `section_rel_begin` is
modified to decode CREL eagerly and hold the decoded relocations inside
ELFObjectFile<ELFT>.
The test is adapted from llvm/test/tools/llvm-readobj/ELF/crel.test.
Pull Request: https://github.com/llvm/llvm-project/pull/97382
|
|
CREL is a compact relocation format for the ELF object file format.
This patch adds integrated assembler support (using the RELA form)
available with `llvm-mc -filetype=obj -crel a.s -o a.o`.
A dependent patch will add `clang -c -Wa,--crel,--allow-experimental-crel`.
Also add llvm-readobj support (for both REL and RELA forms) to
facilitate testing the assembler. Additionally, yaml2obj gains support
for the RELA form to aid testing with llvm-readobj.
We temporarily assign the section type code 0x40000020 from the generic
range to `SHT_CREL`. We avoided using `SHT_LLVM_` or `SHT_GNU_` to
avoid code churn and maintain broader applicability for interested psABIs.
Similarly, `DT_CREL` is temporarily 0x40000026.
LLVM will change these temporary codes and break compatibility. This is
not an issue as long as all relocatable files using CREL are regenerated
(i.e. there are no prebuilt relocatable files).
Link: https://discourse.llvm.org/t/rfc-crel-a-compact-relocation-format-for-elf/77600
Pull Request: https://github.com/llvm/llvm-project/pull/91280
|
|
Validate `p_offset` in `dynamicEntries` before computing the entry offset.
Fixes: https://github.com/llvm/llvm-project/issues/85568.
|
|
Defines a subset of attributes and emits them to a section called
.hexagon.attributes.
The attributes currently recorded are those needed by llvm-objdump to
automatically determine target features, eliminating the need to pass
features manually.
|
|
together by decoupling the handling of the two features. (#74128)
Today `-split-machine-functions` and `-fbasic-block-sections={all,list}`
cannot be combined with `-basic-block-sections=labels` (the labels
option will be ignored).
The inconsistency comes from the way basic block address map -- the
underlying mechanism for basic block labels -- encodes basic block
addresses
(https://lists.llvm.org/pipermail/llvm-dev/2020-July/143512.html).
Specifically, basic block offsets are computed relative to the function
begin symbol. This relies on functions being contiguous, which is not the
case for MFS and basic block section binaries. This means Propeller
cannot use binary profiles collected from these binaries, which limits
the applicability of Propeller for iterative optimization.
To make the `SHT_LLVM_BB_ADDR_MAP` feature work with basic block section
binaries, we propose modifying the encoding of this section as follows.
First, let us review the current encoding, which emits the address of each
function and its number of basic blocks, followed by basic block entries
for each basic block.
| | |
|--|--|
| Address of the function | Function Address |
| Number of basic blocks in this function | NumBlocks |
| BB entry 1 | |
| BB entry 2 | |
| ... | |
| BB entry #NumBlocks | |
To make this work for basic block sections, we treat each basic block
section similarly to a function, except that basic block sections of the
same function must be encapsulated in the same structure so we can map
all of them to their single function.
We modify the encoding to first emit the number of basic block sections
(BB ranges) in the function. Then we emit the address map of each basic
block section as before: the base address of the section, its number of
blocks, and BB entries for its basic blocks. The first section in the BB
address map is always the function entry section.
| | |
|--|--|
| Number of sections for this function | NumBBRanges |
| Section 1 begin address | BaseAddress[1] |
| Number of basic blocks in section 1 | NumBlocks[1] |
| BB entries for Section 1 | |
| ... | ... |
| Section #NumBBRanges begin address | BaseAddress[NumBBRanges] |
| Number of basic blocks in section #NumBBRanges | NumBlocks[NumBBRanges] |
| BB entries for Section #NumBBRanges | |
The encoding of basic block entries remains as before with the minor
change that each basic block offset is now computed relative to the
begin symbol of its containing BB section.
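A rough sketch of the decoded shape this encoding implies; the names mirror the table above but are illustrative, not the exact LLVM data structures.
```
#include <cstdint>
#include <vector>

// One basic block entry; Offset is now relative to the begin symbol of the
// containing BB section (BB range), not the function begin symbol.
struct BBEntry {
  uint64_t Offset = 0;
  uint64_t Size = 0;
  uint32_t Metadata = 0;
};

// One BB range: a contiguous basic block section of the function.
struct BBRange {
  uint64_t BaseAddress = 0;     // begin address of this section
  std::vector<BBEntry> Blocks;  // NumBlocks entries
};

// Per-function record: NumBBRanges ranges, the first of which is always the
// function entry section.
struct FunctionAddrMap {
  std::vector<BBRange> Ranges;
};
```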
This patch adds a new boolean codegen option `-basic-block-address-map`.
Correspondingly, the front-end flag `-fbasic-block-address-map` and LLD
flag `--lto-basic-block-address-map` are introduced.
Analogously, we add a new TargetOption field `BBAddrMap`. This means BB
address maps are either generated for all functions in the compiling
unit, or for none (depending on `TargetOptions::BBAddrMap`).
This patch keeps the functionality of the old
`-fbasic-block-sections=labels` option but does not remove it. A
subsequent patch will remove the obsolete option.
We refactor the `BasicBlockSections` pass by separating the BB address
map and BB sections handling to their own functions (named
`handleBBAddrMap` and `handleBBSections`). `handleBBSections` renumbers
basic blocks and places them in their assigned sections.
`handleBBAddrMap` is invoked after `handleBBSections` (if requested) and
only renumbers the blocks.
- New tests added:
- Two tests basic-block-address-map-with-basic-block-sections.ll and
basic-block-address-map-with-mfs.ll to exercise the combination of
`-basic-block-address-map` with `-basic-block-sections=list` and
`-split-machine-functions`.
- A driver sanity test for the `-fbasic-block-address-map` option
(basic-block-address-map.c).
- An LLD test for testing the `--lto-basic-block-address-map` option.
This reuses the LLVM IR from `lld/test/ELF/lto/basic-block-sections.ll`.
- Renamed and modified the two existing codegen tests for basic block
address map (`basic-block-sections-labels-functions-sections.ll` and
`basic-block-sections-labels.ll`)
- Removed `SHT_LLVM_BB_ADDR_MAP_V0` tests. Full deprecation of
`SHT_LLVM_BB_ADDR_MAP_V0` and `SHT_LLVM_BB_ADDR_MAP` version less than 2
will happen in a separate PR in a few months.
|
|
BBAddrMap. (#77139)
We had specified that `readBBAddrMap` will always keep PGOAnalyses and
BBAddrMaps the same length on success.
https://github.com/llvm/llvm-project/blob/365fbbfbcfefb8766f7716109b9c3767b58e6058/llvm/include/llvm/Object/ELFObjectFile.h#L116-L117
It turns out that this is not currently the case when no analyses exist
in a function. No test had caught it.
We also should not append PGOBBEntries when there is no BBFreq or
BrProb.
This patch:
* adds tests that PGOAnalyses and BBAddrMaps have the same length even when
no analyses are enabled
* fixes decoding so that PGOAnalyses and BBAddrMaps have the same length
* updates tests to not emit unnecessary PGOBBEntries
* fixes decoding to not emit PGOBBEntries when they are unnecessary
|
|
with tests.
Reviewed in PR (#71750). A part of [RFC - PGO Accuracy Metrics: Emitting and Evaluating Branch
and Block
Analysis](https://discourse.llvm.org/t/rfc-pgo-accuracy-metrics-emitting-and-evaluating-branch-and-block-analysis/73902).
This PR adds the PGOAnalysisMap data structure and implements encoding and
decoding through Object and ObjectYAML along with associated tests. When
emitted into the bb-addr-map section, each function is followed by the associated
pgo-analysis-map for that function. The emission of each analysis in the map is
controlled by a bit in the bb-addr-map feature byte. All existing bb-addr-map
code can ignore the pgo-analysis-map if the caller does not request the data.
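A minimal sketch of the kind of feature-byte gating described here; the bit assignments below are made up for illustration and are not the actual encoding, which is owned by the SHT_LLVM_BB_ADDR_MAP version in use.
```
#include <cstdint>

// Hypothetical feature-bit layout; the real bit positions are defined by
// the bb-addr-map encoding, not by this sketch.
enum PGOFeature : uint8_t {
  FuncEntryCount = 1u << 0,
  BBFreq         = 1u << 1,
  BrProb         = 1u << 2,
};

// Decoders only read the analyses whose bits are set; everything else is
// simply absent from the per-function pgo-analysis-map.
bool hasFeature(uint8_t FeatureByte, PGOFeature F) {
  return (FeatureByte & F) != 0;
}
```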
|
|
Reapply llvm/llvm-project#72713 after fixing formatted printing of
`uint64_t` values as hex (see failing build here
https://lab.llvm.org/buildbot/#/builders/186/builds/13604).
This patch adds llvm-readobj support for:
- Dynamic `R_AARCH64_AUTH_*` relocations (including RELR compressed AUTH
relocations) as described here:
https://github.com/ARM-software/abi-aa/blob/main/pauthabielf64/pauthabielf64.rst#auth-variant-dynamic-relocations
- `.note.AARCH64-PAUTH-ABI-tag` section as defined here
https://github.com/ARM-software/abi-aa/blob/main/pauthabielf64/pauthabielf64.rst#elf-marking
|
|
Reverts llvm/llvm-project#72713
Buildbot tests fail on clang-armv7-global-isel builder
https://lab.llvm.org/buildbot/#/builders/186/builds/13604
|
|
This patch adds llvm-readobj support for:
- Dynamic R_AARCH64_AUTH_* relocations (including RELR compressed AUTH
relocations) as described here:
https://github.com/ARM-software/abi-aa/blob/main/pauthabielf64/pauthabielf64.rst#auth-variant-dynamic-relocations
- .note.AARCH64-PAUTH-ABI-tag section as defined here
https://github.com/ARM-software/abi-aa/blob/main/pauthabielf64/pauthabielf64.rst#elf-marking
|
|
BBAddrMap fields. (#72689)
The fields are still kept as public for now since our tooling accesses
them. Will change them to private visibility in a later patch.
|
|
clang -ffat-lto-objects can use this new ELF section type for the .llvm.lto
section for fat LTO support (D146776).
Original RFC: https://discourse.llvm.org/t/rfc-ffat-lto-objects-support/63977
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D153215
|
|
In preparation for moving the `#include "llvm/ADT/StringExtras.h"` in
`llvm/Support/Error.h` from the header to the source file, first add in
all the missing includes that were previously included transitively
through this header.
|
|
This patch encapsulates the encoding and decoding logic of basic block metadata into the Metadata struct, and also reduces the decoded size of the `SHT_LLVM_BB_ADDR_MAP` section.
The patch would have looked more readable if we could use designated initializers, but that is a C++20 feature.
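A simplified sketch of what such a metadata encode/decode pair looks like; the flag names are the kind of per-block properties the section records (return, tail call, EH pad, fall-through), but the real bit layout belongs to LLVM's BBAddrMap, so this is only illustrative.
```
#include <cstdint>

// Illustrative per-block metadata; the real bit layout is owned by the
// Metadata struct in LLVM, not by this sketch.
struct BlockMetadata {
  bool HasReturn = false;
  bool HasTailCall = false;
  bool IsEHPad = false;
  bool CanFallThrough = false;

  uint32_t encode() const {
    return (uint32_t)HasReturn | ((uint32_t)HasTailCall << 1) |
           ((uint32_t)IsEHPad << 2) | ((uint32_t)CanFallThrough << 3);
  }

  static BlockMetadata decode(uint32_t V) {
    BlockMetadata M;
    M.HasReturn = V & 1;
    M.HasTailCall = V & 2;
    M.IsEHPad = V & 4;
    M.CanFallThrough = V & 8;
    return M;
  }
};
```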
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D148360
|
|
Global variables are described in a metadata table called
SHT_AARCH64_MEMTAG_GLOBALS_DYNAMIC. It's basically a ULEB-encoded skip
list with some other fancy encoding tricks to make it smaller. You can
see the ABI at
https://github.com/ARM-software/abi-aa/blob/main/memtagabielf64/memtagabielf64.rst#83encoding-of-sht_aarch64_memtag_globals_dynamic
This extends readelf/readobj to understand these sections.
Reviewed By: pcc, MaskRay, jhenderson
Differential Revision: https://reviews.llvm.org/D145761
|
|
Currently, when using the LLVM tools (e.g. llvm-readobj, llvm-objdump) to
find information about basic block locations with the Propeller tooling
on relocatable object files, function addresses are not mapped properly,
which causes problems. In llvm-readobj this means that incorrect
function names will be pulled. In llvm-objdump this means that most BBs
won't show up in the output if --symbolize-operands is used. This patch
changes the behavior of decodeBBAddrMap to trace through relocations
to get correct function addresses when it is going through a relocatable
object file. This fixes the behavior in both tools and also other
consumers of decodeBBAddrMap. Some helper functions have been added,
and some refactoring done, to aid in grabbing BB address map sections
now that in some cases both relocation and BB address map sections need
to be obtained at the same time.
Regression tests moved around/added.
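A rough sketch of the relocation-tracing idea, assuming relocations whose resolved value (symbol address plus addend) gives the function address for a field in the BB address map section; the actual implementation resolves this through LLVM's ObjectFile APIs, so this is illustrative only.
```
#include <cstdint>
#include <map>

// Map from "offset within the BB address map section where a function
// address field lives" to the address computed from the relocation that
// patches it. In a relocatable object the raw field is typically zero, so
// the relocation is the only source of truth.
using RelocatedAddrMap = std::map<uint64_t, uint64_t>;

uint64_t resolveFunctionAddress(const RelocatedAddrMap &Relocs,
                                uint64_t FieldOffset, uint64_t RawValue) {
  auto It = Relocs.find(FieldOffset);
  if (It != Relocs.end())
    return It->second; // relocatable object: use the relocation target
  return RawValue;     // linked binary: the field already holds the address
}
```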
Differential Revision: https://reviews.llvm.org/D143841
|
|
Didn't fail locally for some reason with my gcc toolchain.
|
|
This refactoring will allow for this utility function to be used in
other places in the codebase outside of the llvm-readobj tool.
Reviewed By: jhenderson, rahmanl
Differential Revision: https://reviews.llvm.org/D144783
|
|
Let Propeller use specialized IDs for basic blocks, instead of MBB number.
This allows optimizations not just prior to asm-printer, but throughout the entire codegen.
This patch only implements the functionality under the new `LLVM_BB_ADDR_MAP` version, but the old version is still being used. A later patch will change the used version.
#### Background
Today Propeller uses machine basic block (MBB) numbers, which already exist, to map native assembly to machine IR. This is done as follows.
- Basic block addresses are captured and dumped into the `LLVM_BB_ADDR_MAP` section just before the AsmPrinter pass which writes out object files. This ensures that we have a mapping that is close to assembly.
- Profiling mapping works by taking a virtual address of an instruction and looking up the `LLVM_BB_ADDR_MAP` section to find the MBB number it corresponds to.
- While this works well today, we need to do better when we scale Propeller to target other Machine IR optimizations like spill code optimization. Register allocation happens earlier in the Machine IR pipeline and we need an annotation mechanism that is valid at that point.
- The current scheme will not work in this scenario because the MBB number of a particular basic block is not fixed and changes over the course of codegen (via renumbering, adding, and removing the basic blocks).
- In other words, the volatile MBB numbers do not provide a one-to-one correspondence throughout the lifetime of Machine IR. Profile annotation using MBB numbers is restricted to a fixed point; only valid at the exact point where it was dumped.
- Further, the object file can only be dumped before AsmPrinter and cannot be dumped at an arbitrary point in the Machine IR pass pipeline. Hence, MBB numbers are not suitable and we need something else.
#### Solution
We propose using fixed unique incremental MBB IDs for basic blocks instead of volatile MBB numbers. These IDs are assigned upon the creation of machine basic blocks. We modify `MachineFunction::CreateMachineBasicBlock` to assign the fixed ID to every newly created basic block. It assigns `MachineFunction::NextMBBID` to the MBB ID and then increments it, which ensures having unique IDs.
To ensure correct profile attribution, multiple equivalent compilations must generate the same Propeller IDs. This is guaranteed as long as the MachineFunction passes run in the same order. Since the `NextBBID` variable is scoped to `MachineFunction`, interleaving of codegen for different functions won't cause any inconsistencies.
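A minimal sketch of the assignment scheme described here, as a hypothetical, simplified stand-in for `MachineFunction::CreateMachineBasicBlock`, not the actual LLVM code:
```
#include <cstdint>
#include <memory>
#include <vector>

struct Block {
  unsigned ID;     // fixed Propeller ID, never changes after creation
  unsigned Number; // volatile MBB number, may change during codegen
};

class Function {
  unsigned NextBBID = 0; // per-function counter, so IDs are reproducible
  std::vector<std::unique_ptr<Block>> Blocks;

public:
  Block *createBlock() {
    auto BB = std::make_unique<Block>();
    BB->ID = NextBBID++;                  // fixed at creation time
    BB->Number = (unsigned)Blocks.size(); // may be renumbered later
    Blocks.push_back(std::move(BB));
    return Blocks.back().get();
  }
};
```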
The new encoding is generated under the new version number 2 and we keep backward-compatibility with older versions.
#### Impact on Size of the `LLVM_BB_ADDR_MAP` Section
Emitting the Propeller ID results in a 23% increase in the size of the `LLVM_BB_ADDR_MAP` section for the clang binary.
Reviewed By: tmsriram
Differential Revision: https://reviews.llvm.org/D100808
|
|
Use deduction guides instead of helper functions.
The only non-automatic changes have been:
1. ArrayRef(some_uint8_pointer, 0) needs to be changed into ArrayRef(some_uint8_pointer, (size_t)0) to avoid an ambiguous call with ArrayRef((uint8_t*), (uint8_t*))
2. CVSymbol sym(makeArrayRef(symStorage)); needed to be rewritten as CVSymbol sym{ArrayRef(symStorage)}; otherwise the compiler is confused and thinks we have a (bad) function prototype. There were a few similar situations across the codebase.
3. ADL doesn't seem to work the same for deduction-guides and functions, so at some point the llvm namespace must be explicitly stated.
4. The "reference mode" of makeArrayRef(ArrayRef<T> &) that acts as no-op is not supported (a constructor cannot achieve that).
Per reviewers' comments, some useless makeArrayRef calls have been removed in the process.
This is a follow-up to https://reviews.llvm.org/D140896 that introduced
the deduction guides.
Differential Revision: https://reviews.llvm.org/D140955
|
|
Add a file with Xtensa ELF relocations. Add Xtensa support to ELF.h,
ELFObject.h and ELFYAML.cpp. Add a simple test of the Xtensa ELF representation in YAML.
Differential Revision: https://reviews.llvm.org/D64827
|
|
MachineBasicBlock::Number."
This reverts commit 6015a045d768feab3bae9ad9c0c81e118df8b04a.
Differential Revision: https://reviews.llvm.org/D139952
|
|
Let Propeller use specialized IDs for basic blocks, instead of MBB number.
This allows optimizations not just prior to asm-printer, but throughout the entire codegen.
This patch only implements the functionality under the new `LLVM_BB_ADDR_MAP` version, but the old version is still being used. A later patch will change the used version.
#### Background
Today Propeller uses machine basic block (MBB) numbers, which already exist, to map native assembly to machine IR. This is done as follows.
- Basic block addresses are captured and dumped into the `LLVM_BB_ADDR_MAP` section just before the AsmPrinter pass which writes out object files. This ensures that we have a mapping that is close to assembly.
- Profiling mapping works by taking a virtual address of an instruction and looking up the `LLVM_BB_ADDR_MAP` section to find the MBB number it corresponds to.
- While this works well today, we need to do better when we scale Propeller to target other Machine IR optimizations like spill code optimization. Register allocation happens earlier in the Machine IR pipeline and we need an annotation mechanism that is valid at that point.
- The current scheme will not work in this scenario because the MBB number of a particular basic block is not fixed and changes over the course of codegen (via renumbering, adding, and removing the basic blocks).
- In other words, the volatile MBB numbers do not provide a one-to-one correspondence throughout the lifetime of Machine IR. Profile annotation using MBB numbers is restricted to a fixed point; only valid at the exact point where it was dumped.
- Further, the object file can only be dumped before AsmPrinter and cannot be dumped at an arbitrary point in the Machine IR pass pipeline. Hence, MBB numbers are not suitable and we need something else.
#### Solution
We propose using fixed unique incremental MBB IDs for basic blocks instead of volatile MBB numbers. These IDs are assigned upon the creation of machine basic blocks. We modify `MachineFunction::CreateMachineBasicBlock` to assign the fixed ID to every newly created basic block. It assigns `MachineFunction::NextMBBID` to the MBB ID and then increments it, which ensures having unique IDs.
To ensure correct profile attribution, multiple equivalent compilations must generate the same Propeller IDs. This is guaranteed as long as the MachineFunction passes run in the same order. Since the `NextBBID` variable is scoped to `MachineFunction`, interleaving of codegen for different functions won't cause any inconsistencies.
The new encoding is generated under the new version number 2 and we keep backward-compatibility with older versions.
#### Impact on Size of the `LLVM_BB_ADDR_MAP` Section
Emitting the Propeller ID results in a 23% increase in the size of the `LLVM_BB_ADDR_MAP` section for the clang binary.
Reviewed By: tmsriram
Differential Revision: https://reviews.llvm.org/D100808
|
|
Add ELFObjectFileBase::getLoongArchFeatures, and return the proper ELF
relative reloc type for LoongArch.
Reviewed By: MaskRay, SixWeining
Differential Revision: https://reviews.llvm.org/D138016
|
|
Currently we use the `.llvm.offloading` section to store device-side
objects inside the host, creating a fat binary. The contents of these
sections are currently determined by the name of the section, while they
should ideally be determined by its type. This patch adds the new
`SHT_LLVM_OFFLOADING` section type to the ELF section types, which
should make it easier to identify this specific data format.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D129052
|
|
the previous basic blocks.
This is a resurrection of D106421 with the change that it keeps backward-compatibility. This means decoding the previous version of `LLVM_BB_ADDR_MAP` will still work. This is required as the profile mapping tool (AutoFDO) is not released with LLVM. As suggested by @jhenderson we rename the original section type value to `SHT_LLVM_BB_ADDR_MAP_V0` and assign a new value to the `SHT_LLVM_BB_ADDR_MAP` section type. The new encoding adds a version byte to each function entry to specify the encoding version for that function. This patch also adds a feature byte to be used with more flexibility in the future. A use-case example for the feature field is encoding multi-section functions more concisely using a different format.
Conceptually, the new encoding emits basic block offsets and sizes as label differences between each two consecutive basic block begin and end label. When decoding, offsets must be aggregated along with basic block sizes to calculate the final offsets of basic blocks relative to the function address.
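A minimal sketch of that aggregation step, with a small ULEB128 reader for context; this is illustrative code, not the LLVM decoder, and per-block metadata is omitted for brevity.
```
#include <cstdint>
#include <vector>

// Read one ULEB128 value and advance the cursor.
uint64_t readULEB128(const uint8_t *&P) {
  uint64_t Value = 0;
  unsigned Shift = 0;
  uint8_t Byte;
  do {
    Byte = *P++;
    Value |= uint64_t(Byte & 0x7f) << Shift;
    Shift += 7;
  } while (Byte & 0x80);
  return Value;
}

// Each entry stores (offset delta from the end of the previous block, size).
// Aggregate the deltas and sizes to recover offsets relative to the function.
std::vector<uint64_t> decodeBlockOffsets(const uint8_t *P, unsigned NumBlocks) {
  std::vector<uint64_t> Offsets;
  uint64_t PrevEnd = 0;
  for (unsigned I = 0; I < NumBlocks; ++I) {
    uint64_t Delta = readULEB128(P);
    uint64_t Size = readULEB128(P);
    uint64_t Offset = PrevEnd + Delta; // offset of this block's begin label
    Offsets.push_back(Offset);
    PrevEnd = Offset + Size;
  }
  return Offsets;
}
```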
This encoding uses smaller values compared to the existing one (offsets relative to function symbol).
Smaller values tend to occupy fewer bytes in ULEB128 encoding. As a result, we get about 17% total reduction in the size of the bb-address-map section (from about 11MB to 9MB for the clang PGO binary).
The extra two bytes (version and feature fields) incur a small 3% size overhead to the `LLVM_BB_ADDR_MAP` section size.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D121346
|
|
section dumping
Fix #54456: `objcopy --only-keep-debug` produces a linked image with invalid
empty dynamic section. llvm-objdump -p currently reports an error which seems
excessive.
```
% llvm-readelf -l a.out
llvm-readelf: warning: 'a.out': no valid dynamic table was found
...
```
Follow the spirit of llvm-readelf -l (D64472) and report a warning instead.
This allows later files to be dumped despite warnings for an input file, and
improves objdump compatibility in that the exit code is now 0 instead of 1.
```
% llvm-objdump -p a.out # new behavior
...
Program Header:
llvm-objdump: warning: 'a.out': invalid empty dynamic section
% objdump -p a.out
...
Dynamic Section:
```
Reviewed By: jhenderson, raj.khem
Differential Revision: https://reviews.llvm.org/D122505
|
|
This patch adds necessary definitions for LoongArch ELF files, including
relocation types. Also adds initial support to ELFYaml, llvm-objdump,
and llvm-readobj in order to work with LoongArch ELFs.
Differential revision: https://reviews.llvm.org/D115859
|
|
Change getELFRelativeRelocationType() to return R_VE_RELATIVE
in preparation for lld support for VE.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D115592
|
|
The only binary-format-related field in the BBAddrMap structure is the function address (`Addr`), which would use uint64_t in the 64-bit format and uint32_t in the 32-bit format. This patch changes it to use uint64_t in both formats.
This allows non-templated use of the struct, at the expense of a marginal additional in-memory size overhead for the 32-bit format. The size of the BB address map section does not change.
Differential Revision: https://reviews.llvm.org/D112679
|
|
|
|
and DT_RISCV_VARIANT_CC
STO_RISCV_VARIANT_CC marks that a symbol uses a non-standard calling
convention or the vector calling convention.
See https://github.com/riscv/riscv-elf-psabi-doc/pull/190
Differential Revision: https://reviews.llvm.org/D107949
|
|
The MSP430 ABI supports build attributes for specifying
the ISA, code model, data model and enum size in ELF object files.
Differential Revision: https://reviews.llvm.org/D107969
|
|
|
|
This patch lets llvm-readelf dump the content of the BB address map
section in the following format:
```
Function {
At: <address>
BB entries [
{
Offset: <offset>
Size: <size>
Metadata: <metadata>
},
...
]
}
...
```
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D95511
|