author    Richard Biener <rguenther@suse.de>    2024-09-24 14:10:13 +0200
committer Richard Biener <rguenth@gcc.gnu.org>    2024-09-25 08:59:39 +0200
commit    cc141b56b367b3d81c1b590e22ae174f1e013009 (patch)
tree      f56ca48c9c1f43101af72c2f6e4a7cf38e9ce73b /gcc/c/c-parser.cc
parent    0b2d3bfa38ccce0dda46aba023f64440cc638496 (diff)
rtl-optimization/114855 - slow add_store_equivs in IRA
For the testcase in PR114855 at -O1, add_store_equivs shows up as the
main sink for bitmap_set_bit because it uses a bitmap to mark all
seen insns by UID, to make sure the forward walk in memref_used_between_p
will find the insn in question.  Given we do have a CFG here, the
function's operation is questionable; since memref_used_between_p
together with the walk over all insns is obviously quadratic in the
worst case, that whole thing should be re-done ... but, for the
testcase, using an sbitmap of size get_max_uid () + 1 gets
bitmap_set_bit off the profile and improves IRA time from 15.58s (8%)
to 3.46s (2%).
Now, given the quadratic behavior above, I wonder whether we should instead
gate add_store_equivs on optimize > 1 or flag_expensive_optimizations.
PR rtl-optimization/114855
* ira.cc (add_store_equivs): Use sbitmap for tracking
visited insns.