path: root/llvm/lib/Object/ArchiveWriter.cpp
author    Amir Ayupov <aaupov@fb.com>  2025-06-25 22:20:37 -0700
committer GitHub <noreply@github.com>  2025-06-25 22:20:37 -0700
commit    49847148d4b1e4139833766aedb843a40a147039 (patch)
tree      8959d925d281a844ca338704ebaf0833c92087ed /llvm/lib/Object/ArchiveWriter.cpp
parent    4cb8308ee9cb88734d82462f82a05b2a47ed6d24 (diff)
[BOLT] Fix density for jump-through functions (#145619)
Address the issue that stems from how the density is computed.

Binary *function* density is the ratio of a function's total dynamic number of executed bytes over its static size in bytes. It measures the amount of dynamic profile information relative to the function's static size.

Binary *profile* density is the minimum *function* density among *well-profiled* functions, taken as the functions covering p99 samples, or, in other words, excluding functions in the tail 1% of samples. p99 is an arbitrary cutoff. Profile density measures the *minimum amount of profile information per function* needed to optimize the program well. The threshold for profile density is set empirically.

The dynamically executed bytes are taken directly from LBR fall-throughs, and for LBRs recorded in trampoline functions, such as

```
000000001a941ec0 <Sleef_expf8_u10>:
1a941ec0: jmpq    *0x37b911fa(%rip)   # <pnt_expf8_u10>
1a941ec6: nopw    %cs:(%rax,%rax)
```

the fall-through has zero length:

```
# Branch   Target    NextBranch  Count
T 1b171cf6 1a941ec0  1a941ec0    568562
```

But it's not correct to say this function has zero executed bytes; rather, the size of the next branch is simply not included in the fall-through. If such functions have a non-trivial sample count, they will fall within p99 samples and cause the profile density to be zero.

To solve this, we can either:
1. Include the fall-through end jump size in the executed bytes: logically sound but technically challenging, since the size needs to come from disassembly (expensive), and the threshold would need to be reevaluated under the updated definition of binary function density.
2. Exclude pass-through functions from the density computation: this follows the intent of profile density, which is to set the amount of profile information needed to optimize a function well. Single-instruction pass-through functions don't need samples many times their size to be optimized well.

Go with option 2 as a reasonable compromise.

Test Plan: added bolt/test/X86/zero-density.s
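The profile-density computation and the pass-through exclusion described above can be sketched as follows. This is a hypothetical, simplified illustration, not the actual BOLT implementation; the `FuncStats` struct and `profileDensity` function are invented here for clarity.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <limits>
#include <vector>

struct FuncStats {
  uint64_t SampleCount;   // dynamic samples attributed to the function
  uint64_t ExecutedBytes; // bytes covered by LBR fall-throughs
  uint64_t Size;          // static size in bytes
  bool IsPassThrough;     // single-instruction jump-through function
};

// Profile density: the minimum function density (executed bytes / size)
// among functions covering the top 99% of samples, skipping pass-through
// functions (option 2 above) so their zero fall-through length does not
// drive the result to zero.
double profileDensity(std::vector<FuncStats> Funcs) {
  // Process hottest functions first.
  std::sort(Funcs.begin(), Funcs.end(),
            [](const FuncStats &A, const FuncStats &B) {
              return A.SampleCount > B.SampleCount;
            });
  uint64_t Total = 0;
  for (const FuncStats &F : Funcs)
    Total += F.SampleCount;

  double Density = std::numeric_limits<double>::max();
  uint64_t Covered = 0;
  for (const FuncStats &F : Funcs) {
    if (Covered >= Total * 99 / 100)
      break; // remaining functions are in the tail 1% of samples
    Covered += F.SampleCount;
    if (F.IsPassThrough || F.Size == 0)
      continue; // excluded: zero executed bytes would zero the density
    Density = std::min(Density, double(F.ExecutedBytes) / double(F.Size));
  }
  return Density;
}
```

With a hot jump-through stub (zero executed bytes) and one normal function, the stub is skipped and the profile density reflects the well-profiled function instead of collapsing to zero.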
Diffstat (limited to 'llvm/lib/Object/ArchiveWriter.cpp')
0 files changed, 0 insertions, 0 deletions