author    Haochen Jiang <haochen.jiang@intel.com>  2024-05-29 11:13:55 +0800
committer Haochen Jiang <haochen.jiang@intel.com>  2024-05-29 11:13:55 +0800
commit    00ed5424b1d4dcccfa187f55205521826794898c (patch)
tree      49a7635126e1c69c127f1a32d35bc52f1bcf982d /gcc
parent    d9933e8c72364f001539247c6186ccfb3e7e95ba (diff)
Adjust generic loop alignment from 16:11:8 to 16 for Intel processors
Previously, generic tuning for Intel processors used 16:11:8 for loop
alignment, which could place the head of a small loop across a cache-line
boundary and cause random, commit-to-commit performance swings in
benchmarks with small loops. Always aligning loops to 16 bytes avoids
the issue.
gcc/ChangeLog:
* config/i386/x86-tune-costs.h (generic_cost): Change from
16:11:8 to 16.
Diffstat (limited to 'gcc')
 gcc/config/i386/x86-tune-costs.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/gcc/config/i386/x86-tune-costs.h b/gcc/config/i386/x86-tune-costs.h
index 65d7d1f..d3aaaa4 100644
--- a/gcc/config/i386/x86-tune-costs.h
+++ b/gcc/config/i386/x86-tune-costs.h
@@ -3758,7 +3758,7 @@ struct processor_costs generic_cost = {
   generic_memset,
   COSTS_N_INSNS (4),                    /* cond_taken_branch_cost.  */
   COSTS_N_INSNS (2),                    /* cond_not_taken_branch_cost.  */
-  "16:11:8",                            /* Loop alignment.  */
+  "16",                                 /* Loop alignment.  */
   "16:11:8",                            /* Jump alignment.  */
   "0:0:8",                              /* Label alignment.  */
   "16",                                 /* Func alignment.  */