author     Vladimir Makarov <vmakarov@redhat.com>    2009-09-02 18:54:25 +0000
committer  Vladimir Makarov <vmakarov@gcc.gnu.org>  2009-09-02 18:54:25 +0000
commit     ce18efcb54ff5d3de8b035aa2cd34db4715b8bfd (patch)
tree       4fff239f690be40be173f6b04a6883338c271d49 /gcc/sched-deps.c
parent     f8563a3ba77cb002cf22d1c74715340a96fb404e (diff)
invoke.texi (-fsched-pressure): Document it.
2009-09-02 Vladimir Makarov <vmakarov@redhat.com>
* doc/invoke.texi (-fsched-pressure): Document it.
(-fsched-reg-pressure-heuristic): Remove it.
* reload.c (ira.h): Include.
(find_reloads): Add choosing reload on number of small spilled
classes.
* haifa-sched.c (ira.h): Include.
(sched_pressure_p, sched_regno_cover_class, curr_reg_pressure,
saved_reg_pressure, curr_reg_live, saved_reg_live,
region_ref_regs): New variables.
(sched_init_region_reg_pressure_info, mark_regno_birth_or_death,
initiate_reg_pressure_info, setup_ref_regs,
initiate_bb_reg_pressure_info, save_reg_pressure,
restore_reg_pressure, dying_use_p, print_curr_reg_pressure): New
functions.
(setup_insn_reg_pressure_info): New function.
(rank_for_schedule): Add pressure checking and insn issue time.
Remove comparison of insn reg weights.
(ready_sort): Set insn reg pressure info.
(update_register_pressure, setup_insn_max_reg_pressure,
update_reg_and_insn_max_reg_pressure,
sched_setup_bb_reg_pressure_info): New functions.
(schedule_insn): Add code for printing and updating reg pressure
info.
(find_set_reg_weight, find_insn_reg_weight): Remove.
(ok_for_early_queue_removal): Do nothing if pressure_only_p.
(debug_ready_list): Print reg pressure info.
(schedule_block): Ditto. Check insn issue time.
(sched_init): Set up sched_pressure_p. Allocate and set up some
reg pressure related info.
(sched_finish): Free some reg pressure related info.
(fix_tick_ready): Make insn always ready if pressure_p.
(init_h_i_d): Don't call find_insn_reg_weight.
(haifa_finish_h_i_d): Free insn reg pressure info.
* ira-int.h (ira_hard_regno_cover_class, ira_reg_class_nregs,
ira_memory_move_cost, ira_class_hard_regs,
ira_class_hard_regs_num, ira_no_alloc_regs,
ira_available_class_regs, ira_reg_class_cover_size,
ira_reg_class_cover, ira_class_translate): Move to ira.h.
* ira-lives.c (single_reg_class): Check mode to find how many
registers are necessary for operand.
(ira_implicitly_set_insn_hard_regs): New.
* common.opt (fsched-pressure): New options.
(fsched-reg-pressure-heuristic): Remove.
* ira.c (setup_eliminable_regset): Rename to
ira_setup_eliminable_regset. Make it external.
(expand_reg_info): Pass cover class to setup_reg_classes.
(ira): Call resize_reg_info instead of allocate_reg_info.
* sched-deps.c: Include ira.h.
(implicit_reg_pending_clobbers, implicit_reg_pending_uses): New.
(create_insn_reg_use, create_insn_reg_set, setup_insn_reg_uses,
reg_pressure_info, insn_use_p, mark_insn_pseudo_birth,
mark_insn_hard_regno_birth, mark_insn_reg_birth,
mark_pseudo_death, mark_hard_regno_death, mark_reg_death,
mark_insn_reg_store, mark_insn_reg_clobber,
setup_insn_reg_pressure_info): New.
(sched_analyze_1): Update implicit_reg_pending_uses.
(sched_analyze_insn): Find implicit sets, uses, clobbers of regs.
Use them to create dependencies. Set insn reg uses and pressure
info. Process reg_pending_uses in one place.
(free_deps): Free implicit sets.
(remove_from_deps): Remove implicit sets if necessary. Check
implicit sets when clearing reg_last_in_use.
(init_deps_global): Clear implicit_reg_pending_clobbers and
implicit_reg_pending_uses.
* ira.h (ira_hard_regno_cover_class, ira_reg_class_nregs,
ira_memory_move_cost, ira_class_hard_regs,
ira_class_hard_regs_num, ira_no_alloc_regs,
ira_available_class_regs, ira_reg_class_cover_size,
ira_reg_class_cover, ira_class_translate): Move from ira-int.h.
(ira_setup_eliminable_regset, ira_set_pseudo_classes,
ira_implicitly_set_insn_hard_regs): New prototypes.
* ira-costs.c (pseudo_classes_defined_p, allocno_p,
cost_elements_num): New variables.
(allocno_costs, total_costs): Rename to costs and
total_allocno_costs.
(COSTS_OF_ALLOCNO): Rename to COSTS.
(allocno_pref): Rename to pref.
(allocno_pref_buffer): Rename to pref_buffer.
(common_classes): Rename to regno_cover_class.
(COST_INDEX): New.
(record_reg_classes): Set allocno attributes only if allocno_p.
(record_address_regs): Ditto. Use COST_INDEX instead of
ALLOCNO_NUM.
(scan_one_insn): Use COST_INDEX and COSTS instead of ALLOCNO_NUM
and COSTS_OF_ALLOCNO.
(print_costs): Rename to print_allocno_costs.
(print_pseudo_costs): New.
(process_bb_node_for_costs): Split into 2 functions with new
function process_bb_for_costs. Pass BB to process_bb_for_costs.
(find_allocno_class_costs): Rename to find_costs_and_classes. Add
new parameter dump_file. Use cost_elements_num instead of
ira_allocnos_num. Make one iteration if preferred classes were
already calculated for scheduler. Make 2 versions of code
depending on allocno_p.
(setup_allocno_cover_class_and_costs): Check allocno_p. Use
regno_cover_class and COSTS instead of common_classes and
COSTS_OF_ALLOCNO.
(init_costs, finish_costs): New.
(ira_costs): Set up allocno_p and cost_elements_num. Call
init_costs and finish_costs.
(ira_set_pseudo_classes): New.
* rtl.h (allocate_reg_info): Remove.
(resize_reg_info): Change return type.
(reg_cover_class): New.
(setup_reg_classes): Add new parameter.
* sched-int.h (struct deps_reg): New member implicit_sets.
(sched_pressure_p, sched_regno_cover_class): New external
definitions.
(INCREASE_BITS): New macro.
(struct reg_pressure_data, struct reg_use_data): New.
(struct _haifa_insn_data): Remove reg_weight. Add members
reg_pressure, reg_use_list, reg_set_list, and
reg_pressure_excess_cost_change.
(struct deps): New member implicit_sets.
(pressure_p): New variable.
(COVER_CLASS_BITS, INCREASE_BITS): New macros.
(struct reg_pressure_data, struct reg_use_data): New.
(INSN_REG_WEIGHT): Remove.
(INSN_REG_PRESSURE, INSN_MAX_REG_PRESSURE, INSN_REG_USE_LIST,
INSN_REG_SET_LIST, INSN_REG_PRESSURE_EXCESS_COST_CHANGE): New
macros.
(sched_init_region_reg_pressure_info,
sched_setup_bb_reg_pressure_info): New prototypes.
* reginfo.c (struct reg_pref): New member coverclass.
(reg_cover_class): New function.
(reginfo_init, pass_reginfo_init): Move after free_reg_info.
(reg_info_size): New variable.
(allocate_reg_info): Make static. Setup reg_info_size.
(resize_reg_info): Use reg_info_size. Return flag of resizing.
(setup_reg_classes): Add a new parameter. Setup cover class too.
* Makefile.in (reload.o, haifa-sched.o, sched-deps.o): Add ira.h to the
dependencies.
* sched-rgn.c (deps_join): Set up implicit_sets.
(schedule_region): Set up region and basic blocks pressure
relative info.
* passes.c (init_optimization_passes): Move
pass_subregs_of_mode_init before pass_sched.
From-SVN: r151348
Diffstat (limited to 'gcc/sched-deps.c')
-rw-r--r--  gcc/sched-deps.c  659
1 files changed, 535 insertions, 124 deletions
diff --git a/gcc/sched-deps.c b/gcc/sched-deps.c
index 17df6a5..25f03d2 100644
--- a/gcc/sched-deps.c
+++ b/gcc/sched-deps.c
@@ -41,6 +41,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "sched-int.h"
 #include "params.h"
 #include "cselib.h"
+#include "ira.h"
 
 #ifdef INSN_SCHEDULING
@@ -396,6 +397,15 @@ static regset reg_pending_clobbers;
 static regset reg_pending_uses;
 static enum reg_pending_barrier_mode reg_pending_barrier;
 
+/* Hard registers implicitly clobbered or used (or may be implicitly
+   clobbered or used) by the currently analyzed insn.  For example,
+   insn in its constraint has one register class.  Even if there is
+   currently no hard register in the insn, the particular hard
+   register will be in the insn after reload pass because the
+   constraint requires it.  */
+static HARD_REG_SET implicit_reg_pending_clobbers;
+static HARD_REG_SET implicit_reg_pending_uses;
+
 /* To speed up the test for duplicate dependency links we keep a
    record of dependencies created by add_dependence when the average
    number of instructions in a basic block is very large.
@@ -417,8 +427,8 @@ static int cache_size;
 
 static int deps_may_trap_p (const_rtx);
 static void add_dependence_list (rtx, rtx, int, enum reg_note);
-static void add_dependence_list_and_free (struct deps *, rtx, 
-                                          rtx *, int, enum reg_note);
+static void add_dependence_list_and_free (struct deps *, rtx,
+                                          rtx *, int, enum reg_note);
 static void delete_all_dependences (rtx);
 static void fixup_sched_groups (rtx);
@@ -1367,7 +1377,7 @@ add_dependence_list (rtx insn, rtx list, int uncond, enum reg_note dep_type)
    is not readonly.  */
 static void
-add_dependence_list_and_free (struct deps *deps, rtx insn, rtx *listp, 
+add_dependence_list_and_free (struct deps *deps, rtx insn, rtx *listp,
                               int uncond, enum reg_note dep_type)
 {
   rtx list, next;
@@ -1625,7 +1635,7 @@ haifa_note_mem_dep (rtx mem, rtx pending_mem, rtx pending_insn, ds_t ds)
 {
   dep_def _dep, *dep = &_dep;
 
-  init_dep_1 (dep, pending_insn, cur_insn, ds_to_dt (ds), 
+  init_dep_1 (dep, pending_insn, cur_insn, ds_to_dt (ds),
               current_sched_info->flags & USE_DEPS_LIST ? ds : -1);
   maybe_add_or_update_dep_1 (dep, false, pending_mem, mem);
 }
@@ -1691,6 +1701,327 @@ ds_to_dt (ds_t ds)
       return REG_DEP_ANTI;
     }
 }
+
+
+
+/* Functions for computation of info needed for register pressure
+   sensitive insn scheduling.  */
+
+
+/* Allocate and return reg_use_data structure for REGNO and INSN.  */
+static struct reg_use_data *
+create_insn_reg_use (int regno, rtx insn)
+{
+  struct reg_use_data *use;
+
+  use = (struct reg_use_data *) xmalloc (sizeof (struct reg_use_data));
+  use->regno = regno;
+  use->insn = insn;
+  use->next_insn_use = INSN_REG_USE_LIST (insn);
+  INSN_REG_USE_LIST (insn) = use;
+  return use;
+}
+
+/* Allocate and return reg_set_data structure for REGNO and INSN.  */
+static struct reg_set_data *
+create_insn_reg_set (int regno, rtx insn)
+{
+  struct reg_set_data *set;
+
+  set = (struct reg_set_data *) xmalloc (sizeof (struct reg_set_data));
+  set->regno = regno;
+  set->insn = insn;
+  set->next_insn_set = INSN_REG_SET_LIST (insn);
+  INSN_REG_SET_LIST (insn) = set;
+  return set;
+}
+
+/* Set up insn register uses for INSN and dependency context DEPS.  */
+static void
+setup_insn_reg_uses (struct deps *deps, rtx insn)
+{
+  unsigned i;
+  reg_set_iterator rsi;
+  rtx list;
+  struct reg_use_data *use, *use2, *next;
+  struct deps_reg *reg_last;
+
+  EXECUTE_IF_SET_IN_REG_SET (reg_pending_uses, 0, i, rsi)
+    {
+      if (i < FIRST_PSEUDO_REGISTER
+          && TEST_HARD_REG_BIT (ira_no_alloc_regs, i))
+        continue;
+
+      if (find_regno_note (insn, REG_DEAD, i) == NULL_RTX
+          && ! REGNO_REG_SET_P (reg_pending_sets, i)
+          && ! REGNO_REG_SET_P (reg_pending_clobbers, i))
+        /* Ignore use which is not dying.  */
+        continue;
+
+      use = create_insn_reg_use (i, insn);
+      use->next_regno_use = use;
+      reg_last = &deps->reg_last[i];
+
+      /* Create the cycle list of uses.  */
+      for (list = reg_last->uses; list; list = XEXP (list, 1))
+        {
+          use2 = create_insn_reg_use (i, XEXP (list, 0));
+          next = use->next_regno_use;
+          use->next_regno_use = use2;
+          use2->next_regno_use = next;
+        }
+    }
+}
+
+/* Register pressure info for the currently processed insn.  */
+static struct reg_pressure_data reg_pressure_info[N_REG_CLASSES];
+
+/* Return TRUE if INSN has the use structure for REGNO.  */
+static bool
+insn_use_p (rtx insn, int regno)
+{
+  struct reg_use_data *use;
+
+  for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
+    if (use->regno == regno)
+      return true;
+  return false;
+}
+
+/* Update the register pressure info after birth of pseudo register
+   REGNO in INSN.  Arguments CLOBBER_P and UNUSED_P say correspondingly
+   that the register is in clobber or unused after the insn.  */
+static void
+mark_insn_pseudo_birth (rtx insn, int regno, bool clobber_p, bool unused_p)
+{
+  int incr, new_incr;
+  enum reg_class cl;
+
+  gcc_assert (regno >= FIRST_PSEUDO_REGISTER);
+  cl = sched_regno_cover_class[regno];
+  if (cl != NO_REGS)
+    {
+      incr = ira_reg_class_nregs[cl][PSEUDO_REGNO_MODE (regno)];
+      if (clobber_p)
+        {
+          new_incr = reg_pressure_info[cl].clobber_increase + incr;
+          reg_pressure_info[cl].clobber_increase = new_incr;
+        }
+      else if (unused_p)
+        {
+          new_incr = reg_pressure_info[cl].unused_set_increase + incr;
+          reg_pressure_info[cl].unused_set_increase = new_incr;
+        }
+      else
+        {
+          new_incr = reg_pressure_info[cl].set_increase + incr;
+          reg_pressure_info[cl].set_increase = new_incr;
+          if (! insn_use_p (insn, regno))
+            reg_pressure_info[cl].change += incr;
+          create_insn_reg_set (regno, insn);
+        }
+      gcc_assert (new_incr < (1 << INCREASE_BITS));
+    }
+}
+
+/* Like mark_insn_pseudo_regno_birth except that NREGS saying how many
+   hard registers involved in the birth.  */
+static void
+mark_insn_hard_regno_birth (rtx insn, int regno, int nregs,
+                            bool clobber_p, bool unused_p)
+{
+  enum reg_class cl;
+  int new_incr, last = regno + nregs;
+
+  while (regno < last)
+    {
+      gcc_assert (regno < FIRST_PSEUDO_REGISTER);
+      if (! TEST_HARD_REG_BIT (ira_no_alloc_regs, regno))
+        {
+          cl = sched_regno_cover_class[regno];
+          if (cl != NO_REGS)
+            {
+              if (clobber_p)
+                {
+                  new_incr = reg_pressure_info[cl].clobber_increase + 1;
+                  reg_pressure_info[cl].clobber_increase = new_incr;
+                }
+              else if (unused_p)
+                {
+                  new_incr = reg_pressure_info[cl].unused_set_increase + 1;
+                  reg_pressure_info[cl].unused_set_increase = new_incr;
+                }
+              else
+                {
+                  new_incr = reg_pressure_info[cl].set_increase + 1;
+                  reg_pressure_info[cl].set_increase = new_incr;
+                  if (! insn_use_p (insn, regno))
+                    reg_pressure_info[cl].change += 1;
+                  create_insn_reg_set (regno, insn);
+                }
+              gcc_assert (new_incr < (1 << INCREASE_BITS));
+            }
+        }
+      regno++;
+    }
+}
+
+/* Update the register pressure info after birth of pseudo or hard
+   register REG in INSN.  Arguments CLOBBER_P and UNUSED_P say
+   correspondingly that the register is in clobber or unused after the
+   insn.  */
+static void
+mark_insn_reg_birth (rtx insn, rtx reg, bool clobber_p, bool unused_p)
+{
+  int regno;
+
+  if (GET_CODE (reg) == SUBREG)
+    reg = SUBREG_REG (reg);
+
+  if (! REG_P (reg))
+    return;
+
+  regno = REGNO (reg);
+  if (regno < FIRST_PSEUDO_REGISTER)
+    mark_insn_hard_regno_birth (insn, regno,
+                                hard_regno_nregs[regno][GET_MODE (reg)],
+                                clobber_p, unused_p);
+  else
+    mark_insn_pseudo_birth (insn, regno, clobber_p, unused_p);
+}
+
+/* Update the register pressure info after death of pseudo register
+   REGNO.  */
+static void
+mark_pseudo_death (int regno)
+{
+  int incr;
+  enum reg_class cl;
+
+  gcc_assert (regno >= FIRST_PSEUDO_REGISTER);
+  cl = sched_regno_cover_class[regno];
+  if (cl != NO_REGS)
+    {
+      incr = ira_reg_class_nregs[cl][PSEUDO_REGNO_MODE (regno)];
+      reg_pressure_info[cl].change -= incr;
+    }
+}
+
+/* Like mark_pseudo_death except that NREGS saying how many hard
+   registers involved in the death.  */
+static void
+mark_hard_regno_death (int regno, int nregs)
+{
+  enum reg_class cl;
+  int last = regno + nregs;
+
+  while (regno < last)
+    {
+      gcc_assert (regno < FIRST_PSEUDO_REGISTER);
+      if (! TEST_HARD_REG_BIT (ira_no_alloc_regs, regno))
+        {
+          cl = sched_regno_cover_class[regno];
+          if (cl != NO_REGS)
+            reg_pressure_info[cl].change -= 1;
+        }
+      regno++;
+    }
+}
+
+/* Update the register pressure info after death of pseudo or hard
+   register REG.  */
+static void
+mark_reg_death (rtx reg)
+{
+  int regno;
+
+  if (GET_CODE (reg) == SUBREG)
+    reg = SUBREG_REG (reg);
+
+  if (! REG_P (reg))
+    return;
+
+  regno = REGNO (reg);
+  if (regno < FIRST_PSEUDO_REGISTER)
+    mark_hard_regno_death (regno, hard_regno_nregs[regno][GET_MODE (reg)]);
+  else
+    mark_pseudo_death (regno);
+}
+
+/* Process SETTER of REG.  DATA is an insn containing the setter.  */
+static void
+mark_insn_reg_store (rtx reg, const_rtx setter, void *data)
+{
+  if (setter != NULL_RTX && GET_CODE (setter) != SET)
+    return;
+  mark_insn_reg_birth
+    ((rtx) data, reg, false,
+     find_reg_note ((const_rtx) data, REG_UNUSED, reg) != NULL_RTX);
+}
+
+/* Like mark_insn_reg_store except notice just CLOBBERs; ignore SETs.  */
+static void
+mark_insn_reg_clobber (rtx reg, const_rtx setter, void *data)
+{
+  if (GET_CODE (setter) == CLOBBER)
+    mark_insn_reg_birth ((rtx) data, reg, true, false);
+}
+
+/* Set up reg pressure info related to INSN.  */
+static void
+setup_insn_reg_pressure_info (rtx insn)
+{
+  int i, len;
+  enum reg_class cl;
+  static struct reg_pressure_data *pressure_info;
+  rtx link;
+
+  gcc_assert (sched_pressure_p);
+
+  if (! INSN_P (insn))
+    return;
+
+  for (i = 0; i < ira_reg_class_cover_size; i++)
+    {
+      cl = ira_reg_class_cover[i];
+      reg_pressure_info[cl].clobber_increase = 0;
+      reg_pressure_info[cl].set_increase = 0;
+      reg_pressure_info[cl].unused_set_increase = 0;
+      reg_pressure_info[cl].change = 0;
+    }
+
+  note_stores (PATTERN (insn), mark_insn_reg_clobber, insn);
+
+  note_stores (PATTERN (insn), mark_insn_reg_store, insn);
+
+#ifdef AUTO_INC_DEC
+  for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
+    if (REG_NOTE_KIND (link) == REG_INC)
+      mark_insn_reg_store (XEXP (link, 0), NULL_RTX, insn);
+#endif
+
+  for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
+    if (REG_NOTE_KIND (link) == REG_DEAD)
+      mark_reg_death (XEXP (link, 0));
+
+  len = sizeof (struct reg_pressure_data) * ira_reg_class_cover_size;
+  pressure_info
+    = INSN_REG_PRESSURE (insn) = (struct reg_pressure_data *) xmalloc (len);
+  INSN_MAX_REG_PRESSURE (insn) = (int *) xmalloc (ira_reg_class_cover_size
+                                                  * sizeof (int));
+  for (i = 0; i < ira_reg_class_cover_size; i++)
+    {
+      cl = ira_reg_class_cover[i];
+      pressure_info[i].clobber_increase
+        = reg_pressure_info[cl].clobber_increase;
+      pressure_info[i].set_increase = reg_pressure_info[cl].set_increase;
+      pressure_info[i].unused_set_increase
+        = reg_pressure_info[cl].unused_set_increase;
+      pressure_info[i].change = reg_pressure_info[cl].change;
+    }
+}
+
+
 
 /* Internal variable for sched_analyze_[12] () functions.
@@ -1905,10 +2236,16 @@ sched_analyze_1 (struct deps *deps, rtx x, rtx insn)
           /* Treat all writes to a stack register as modifying the TOS.  */
           if (regno >= FIRST_STACK_REG && regno <= LAST_STACK_REG)
             {
+              int nregs;
+
               /* Avoid analyzing the same register twice.  */
               if (regno != FIRST_STACK_REG)
                 sched_analyze_reg (deps, FIRST_STACK_REG, mode, code, insn);
-              sched_analyze_reg (deps, FIRST_STACK_REG, mode, USE, insn);
+
+              nregs = hard_regno_nregs[FIRST_STACK_REG][mode];
+              while (--nregs >= 0)
+                SET_HARD_REG_BIT (implicit_reg_pending_uses,
+                                  FIRST_STACK_REG + nregs);
             }
 #endif
         }
@@ -2243,6 +2580,16 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
   unsigned i;
   reg_set_iterator rsi;
 
+  if (! reload_completed)
+    {
+      HARD_REG_SET temp;
+
+      extract_insn (insn);
+      preprocess_constraints ();
+      ira_implicitly_set_insn_hard_regs (&temp);
+      IOR_HARD_REG_SET (implicit_reg_pending_clobbers, temp);
+    }
+
   can_start_lhs_rhs_p = (NONJUMP_INSN_P (insn)
                          && code == SET);
@@ -2263,7 +2610,8 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
              and others know that a value is dead.  Depend on the last call
              instruction so that reg-stack won't get confused.  */
           if (code == CLOBBER)
-            add_dependence_list (insn, deps->last_function_call, 1, REG_DEP_OUTPUT);
+            add_dependence_list (insn, deps->last_function_call, 1,
+                                 REG_DEP_OUTPUT);
         }
       else if (code == PARALLEL)
         {
@@ -2326,6 +2674,8 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
             {
               struct deps_reg *reg_last = &deps->reg_last[i];
               add_dependence_list (insn, reg_last->sets, 0, REG_DEP_ANTI);
+              add_dependence_list (insn, reg_last->implicit_sets,
+                                   0, REG_DEP_ANTI);
               add_dependence_list (insn, reg_last->clobbers, 0, REG_DEP_ANTI);
@@ -2381,6 +2731,12 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
       || (NONJUMP_INSN_P (insn) && control_flow_insn_p (insn)))
     reg_pending_barrier = MOVE_BARRIER;
 
+  if (sched_pressure_p)
+    {
+      setup_insn_reg_uses (deps, insn);
+      setup_insn_reg_pressure_info (insn);
+    }
+
   /* Add register dependencies for insn.  */
   if (DEBUG_INSN_P (insn))
     {
@@ -2421,119 +2777,160 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
           if (prev && NONDEBUG_INSN_P (prev))
             add_dependence (insn, prev, REG_DEP_ANTI);
         }
-      /* If the current insn is conditional, we can't free any
-         of the lists.  */
-      else if (sched_has_condition_p (insn))
-        {
-          EXECUTE_IF_SET_IN_REG_SET (reg_pending_uses, 0, i, rsi)
-            {
-              struct deps_reg *reg_last = &deps->reg_last[i];
-              add_dependence_list (insn, reg_last->sets, 0, REG_DEP_TRUE);
-              add_dependence_list (insn, reg_last->clobbers, 0, REG_DEP_TRUE);
-
-              if (!deps->readonly)
-                {
-                  reg_last->uses = alloc_INSN_LIST (insn, reg_last->uses);
-                  reg_last->uses_length++;
-                }
-            }
-          EXECUTE_IF_SET_IN_REG_SET (reg_pending_clobbers, 0, i, rsi)
-            {
-              struct deps_reg *reg_last = &deps->reg_last[i];
-              add_dependence_list (insn, reg_last->sets, 0, REG_DEP_OUTPUT);
-              add_dependence_list (insn, reg_last->uses, 0, REG_DEP_ANTI);
-
-              if (!deps->readonly)
-                {
-                  reg_last->clobbers = alloc_INSN_LIST (insn, reg_last->clobbers);
-                  reg_last->clobbers_length++;
-                }
-            }
-          EXECUTE_IF_SET_IN_REG_SET (reg_pending_sets, 0, i, rsi)
-            {
-              struct deps_reg *reg_last = &deps->reg_last[i];
-              add_dependence_list (insn, reg_last->sets, 0, REG_DEP_OUTPUT);
-              add_dependence_list (insn, reg_last->clobbers, 0, REG_DEP_OUTPUT);
-              add_dependence_list (insn, reg_last->uses, 0, REG_DEP_ANTI);
-
-              if (!deps->readonly)
-                {
-                  reg_last->sets = alloc_INSN_LIST (insn, reg_last->sets);
-                  SET_REGNO_REG_SET (&deps->reg_conditional_sets, i);
-                }
-            }
-        }
       else
         {
           EXECUTE_IF_SET_IN_REG_SET (reg_pending_uses, 0, i, rsi)
-            {
-              struct deps_reg *reg_last = &deps->reg_last[i];
-              add_dependence_list (insn, reg_last->sets, 0, REG_DEP_TRUE);
-              add_dependence_list (insn, reg_last->clobbers, 0, REG_DEP_TRUE);
-
-              if (!deps->readonly)
-                {
-                  reg_last->uses_length++;
-                  reg_last->uses = alloc_INSN_LIST (insn, reg_last->uses);
-                }
-            }
-          EXECUTE_IF_SET_IN_REG_SET (reg_pending_clobbers, 0, i, rsi)
-            {
-              struct deps_reg *reg_last = &deps->reg_last[i];
-              if (reg_last->uses_length > MAX_PENDING_LIST_LENGTH
-                  || reg_last->clobbers_length > MAX_PENDING_LIST_LENGTH)
-                {
-                  add_dependence_list_and_free (deps, insn, &reg_last->sets, 0,
-                                                REG_DEP_OUTPUT);
-                  add_dependence_list_and_free (deps, insn, &reg_last->uses, 0,
-                                                REG_DEP_ANTI);
-                  add_dependence_list_and_free (deps, insn, &reg_last->clobbers, 0,
-                                                REG_DEP_OUTPUT);
-
-                  if (!deps->readonly)
-                    {
-                      reg_last->sets = alloc_INSN_LIST (insn, reg_last->sets);
-                      reg_last->clobbers_length = 0;
-                      reg_last->uses_length = 0;
-                    }
-                }
-              else
-                {
-                  add_dependence_list (insn, reg_last->sets, 0, REG_DEP_OUTPUT);
-                  add_dependence_list (insn, reg_last->uses, 0, REG_DEP_ANTI);
-                }
-
-              if (!deps->readonly)
-                {
-                  reg_last->clobbers_length++;
-                  reg_last->clobbers = alloc_INSN_LIST (insn, reg_last->clobbers);
-                }
-            }
-          EXECUTE_IF_SET_IN_REG_SET (reg_pending_sets, 0, i, rsi)
-            {
-              struct deps_reg *reg_last = &deps->reg_last[i];
-              add_dependence_list_and_free (deps, insn, &reg_last->sets, 0,
-                                            REG_DEP_OUTPUT);
-              add_dependence_list_and_free (deps, insn, &reg_last->clobbers, 0,
-                                            REG_DEP_OUTPUT);
-              add_dependence_list_and_free (deps, insn, &reg_last->uses, 0,
-                                            REG_DEP_ANTI);
+            {
+              struct deps_reg *reg_last = &deps->reg_last[i];
+              add_dependence_list (insn, reg_last->sets, 0, REG_DEP_TRUE);
+              add_dependence_list (insn, reg_last->implicit_sets, 0, REG_DEP_ANTI);
+              add_dependence_list (insn, reg_last->clobbers, 0, REG_DEP_TRUE);
+
+              if (!deps->readonly)
+                {
+                  reg_last->uses = alloc_INSN_LIST (insn, reg_last->uses);
+                  reg_last->uses_length++;
+                }
+            }
+
+          for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+            if (TEST_HARD_REG_BIT (implicit_reg_pending_uses, i))
+              {
+                struct deps_reg *reg_last = &deps->reg_last[i];
+                add_dependence_list (insn, reg_last->sets, 0, REG_DEP_TRUE);
+                add_dependence_list (insn, reg_last->implicit_sets, 0,
+                                     REG_DEP_ANTI);
+                add_dependence_list (insn, reg_last->clobbers, 0, REG_DEP_TRUE);
+
+                if (!deps->readonly)
+                  {
+                    reg_last->uses = alloc_INSN_LIST (insn, reg_last->uses);
+                    reg_last->uses_length++;
+                  }
+              }
 
-              if (!deps->readonly)
-                {
-                  reg_last->sets = alloc_INSN_LIST (insn, reg_last->sets);
-                  reg_last->uses_length = 0;
-                  reg_last->clobbers_length = 0;
-                  CLEAR_REGNO_REG_SET (&deps->reg_conditional_sets, i);
-                }
-            }
+          /* If the current insn is conditional, we can't free any
+             of the lists.  */
+          if (sched_has_condition_p (insn))
+            {
+              EXECUTE_IF_SET_IN_REG_SET (reg_pending_clobbers, 0, i, rsi)
+                {
+                  struct deps_reg *reg_last = &deps->reg_last[i];
+                  add_dependence_list (insn, reg_last->sets, 0, REG_DEP_OUTPUT);
+                  add_dependence_list (insn, reg_last->implicit_sets, 0,
+                                       REG_DEP_ANTI);
+                  add_dependence_list (insn, reg_last->uses, 0, REG_DEP_ANTI);
+
+                  if (!deps->readonly)
+                    {
+                      reg_last->clobbers
+                        = alloc_INSN_LIST (insn, reg_last->clobbers);
+                      reg_last->clobbers_length++;
+                    }
+                }
+              EXECUTE_IF_SET_IN_REG_SET (reg_pending_sets, 0, i, rsi)
+                {
+                  struct deps_reg *reg_last = &deps->reg_last[i];
+                  add_dependence_list (insn, reg_last->sets, 0, REG_DEP_OUTPUT);
+                  add_dependence_list (insn, reg_last->implicit_sets, 0,
+                                       REG_DEP_ANTI);
+                  add_dependence_list (insn, reg_last->clobbers, 0, REG_DEP_OUTPUT);
+                  add_dependence_list (insn, reg_last->uses, 0, REG_DEP_ANTI);
+
+                  if (!deps->readonly)
+                    {
+                      reg_last->sets = alloc_INSN_LIST (insn, reg_last->sets);
+                      SET_REGNO_REG_SET (&deps->reg_conditional_sets, i);
+                    }
+                }
+            }
+          else
+            {
+              EXECUTE_IF_SET_IN_REG_SET (reg_pending_clobbers, 0, i, rsi)
+                {
+                  struct deps_reg *reg_last = &deps->reg_last[i];
+                  if (reg_last->uses_length > MAX_PENDING_LIST_LENGTH
+                      || reg_last->clobbers_length > MAX_PENDING_LIST_LENGTH)
+                    {
+                      add_dependence_list_and_free (deps, insn, &reg_last->sets, 0,
+                                                    REG_DEP_OUTPUT);
+                      add_dependence_list_and_free (deps, insn,
+                                                    &reg_last->implicit_sets, 0,
+                                                    REG_DEP_ANTI);
+                      add_dependence_list_and_free (deps, insn, &reg_last->uses, 0,
                                                    REG_DEP_ANTI);
+                      add_dependence_list_and_free
+                        (deps, insn, &reg_last->clobbers, 0, REG_DEP_OUTPUT);
+
+                      if (!deps->readonly)
+                        {
+                          reg_last->sets = alloc_INSN_LIST (insn, reg_last->sets);
+                          reg_last->clobbers_length = 0;
+                          reg_last->uses_length = 0;
+                        }
+                    }
+                  else
+                    {
+                      add_dependence_list (insn, reg_last->sets, 0, REG_DEP_OUTPUT);
+                      add_dependence_list (insn, reg_last->implicit_sets, 0,
+                                           REG_DEP_ANTI);
+                      add_dependence_list (insn, reg_last->uses, 0, REG_DEP_ANTI);
+                    }
+
+                  if (!deps->readonly)
+                    {
+                      reg_last->clobbers_length++;
+                      reg_last->clobbers
+                        = alloc_INSN_LIST (insn, reg_last->clobbers);
+                    }
+                }
+              EXECUTE_IF_SET_IN_REG_SET (reg_pending_sets, 0, i, rsi)
+                {
+                  struct deps_reg *reg_last = &deps->reg_last[i];
+
+                  add_dependence_list_and_free (deps, insn, &reg_last->sets, 0,
+                                                REG_DEP_OUTPUT);
+                  add_dependence_list_and_free (deps, insn,
+                                                &reg_last->implicit_sets,
+                                                0, REG_DEP_ANTI);
+                  add_dependence_list_and_free (deps, insn, &reg_last->clobbers, 0,
+                                                REG_DEP_OUTPUT);
+                  add_dependence_list_and_free (deps, insn, &reg_last->uses, 0,
                                                REG_DEP_ANTI);
+
+                  if (!deps->readonly)
+                    {
+                      reg_last->sets = alloc_INSN_LIST (insn, reg_last->sets);
+                      reg_last->uses_length = 0;
+                      reg_last->clobbers_length = 0;
+                      CLEAR_REGNO_REG_SET (&deps->reg_conditional_sets, i);
+                    }
+                }
+            }
         }
 
+      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+        if (TEST_HARD_REG_BIT (implicit_reg_pending_clobbers, i))
+          {
+            struct deps_reg *reg_last = &deps->reg_last[i];
+            add_dependence_list (insn, reg_last->sets, 0, REG_DEP_ANTI);
+            add_dependence_list (insn, reg_last->clobbers, 0, REG_DEP_ANTI);
+            add_dependence_list (insn, reg_last->uses, 0, REG_DEP_ANTI);
+
+            if (!deps->readonly)
+              reg_last->implicit_sets
+                = alloc_INSN_LIST (insn, reg_last->implicit_sets);
+          }
+
       if (!deps->readonly)
         {
          IOR_REG_SET (&deps->reg_last_in_use, reg_pending_uses);
          IOR_REG_SET (&deps->reg_last_in_use, reg_pending_clobbers);
          IOR_REG_SET (&deps->reg_last_in_use, reg_pending_sets);
+         for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+           if (TEST_HARD_REG_BIT (implicit_reg_pending_uses, i)
+               || TEST_HARD_REG_BIT (implicit_reg_pending_clobbers, i))
+             SET_REGNO_REG_SET (&deps->reg_last_in_use, i);
 
          /* Set up the pending barrier found.  */
          deps->last_reg_pending_barrier = reg_pending_barrier;
@@ -2542,6 +2939,8 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
   CLEAR_REG_SET (reg_pending_uses);
   CLEAR_REG_SET (reg_pending_clobbers);
   CLEAR_REG_SET (reg_pending_sets);
+  CLEAR_HARD_REG_SET (implicit_reg_pending_clobbers);
+  CLEAR_HARD_REG_SET (implicit_reg_pending_uses);
 
   /* Add dependencies if a scheduling barrier was found.  */
   if (reg_pending_barrier)
@@ -2554,12 +2953,14 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
            {
              struct deps_reg *reg_last = &deps->reg_last[i];
              add_dependence_list (insn, reg_last->uses, 0, REG_DEP_ANTI);
-             add_dependence_list
-               (insn, reg_last->sets, 0,
-                reg_pending_barrier == TRUE_BARRIER ? REG_DEP_TRUE : REG_DEP_ANTI);
-             add_dependence_list
-               (insn, reg_last->clobbers, 0,
-                reg_pending_barrier == TRUE_BARRIER ? REG_DEP_TRUE : REG_DEP_ANTI);
+             add_dependence_list (insn, reg_last->sets, 0,
+                                  reg_pending_barrier == TRUE_BARRIER
+                                  ? REG_DEP_TRUE : REG_DEP_ANTI);
+             add_dependence_list (insn, reg_last->implicit_sets, 0,
+                                  REG_DEP_ANTI);
+             add_dependence_list (insn, reg_last->clobbers, 0,
                                  reg_pending_barrier == TRUE_BARRIER
                                  ? REG_DEP_TRUE : REG_DEP_ANTI);
            }
        }
       else
        {
@@ -2569,12 +2970,15 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
              struct deps_reg *reg_last = &deps->reg_last[i];
              add_dependence_list_and_free (deps, insn, &reg_last->uses, 0,
                                            REG_DEP_ANTI);
-             add_dependence_list_and_free
-               (deps, insn, &reg_last->sets, 0,
-                reg_pending_barrier == TRUE_BARRIER ? REG_DEP_TRUE : REG_DEP_ANTI);
-             add_dependence_list_and_free
-               (deps, insn, &reg_last->clobbers, 0,
-                reg_pending_barrier == TRUE_BARRIER ? REG_DEP_TRUE : REG_DEP_ANTI);
+             add_dependence_list_and_free (deps, insn, &reg_last->sets, 0,
                                           reg_pending_barrier == TRUE_BARRIER
                                           ? REG_DEP_TRUE : REG_DEP_ANTI);
+             add_dependence_list_and_free (deps, insn,
                                           &reg_last->implicit_sets, 0,
                                           REG_DEP_ANTI);
+             add_dependence_list_and_free (deps, insn, &reg_last->clobbers, 0,
                                           reg_pending_barrier == TRUE_BARRIER
                                           ? REG_DEP_TRUE : REG_DEP_ANTI);
 
              if (!deps->readonly)
                {
@@ -2750,7 +3154,7 @@ deps_analyze_insn (struct deps *deps, rtx insn)
              if (global_regs[i])
                {
                  SET_REGNO_REG_SET (reg_pending_sets, i);
-                 SET_REGNO_REG_SET (reg_pending_uses, i);
+                 SET_HARD_REG_BIT (implicit_reg_pending_uses, i);
                }
              /* Other call-clobbered hard regs may be clobbered.
                 Since we only have a choice between 'might be clobbered'
@@ -2763,7 +3167,7 @@ deps_analyze_insn (struct deps *deps, rtx insn)
                 by the function, but it is certain that the stack pointer
                 is among them, but be conservative.  */
              else if (fixed_regs[i])
-               SET_REGNO_REG_SET (reg_pending_uses, i);
+               SET_HARD_REG_BIT (implicit_reg_pending_uses, i);
              /* The frame pointer is normally not used by the function
                 itself, but by the debugger.  */
              /* ??? MIPS o32 is an exception.  It uses the frame pointer
@@ -2772,7 +3176,7 @@ deps_analyze_insn (struct deps *deps, rtx insn)
              else if (i == FRAME_POINTER_REGNUM
                       || (i == HARD_FRAME_POINTER_REGNUM
                           && (! reload_completed || frame_pointer_needed)))
-               SET_REGNO_REG_SET (reg_pending_uses, i);
+               SET_HARD_REG_BIT (implicit_reg_pending_uses, i);
            }
 
   /* For each insn which shouldn't cross a call, add a dependence
@@ -2988,6 +3392,8 @@ free_deps (struct deps *deps)
        free_INSN_LIST_list (&reg_last->uses);
       if (reg_last->sets)
        free_INSN_LIST_list (&reg_last->sets);
+      if (reg_last->implicit_sets)
+       free_INSN_LIST_list (&reg_last->implicit_sets);
       if (reg_last->clobbers)
        free_INSN_LIST_list (&reg_last->clobbers);
     }
@@ -3025,9 +3431,12 @@ remove_from_deps (struct deps *deps, rtx insn)
        remove_from_dependence_list (insn, &reg_last->uses);
       if (reg_last->sets)
        remove_from_dependence_list (insn, &reg_last->sets);
+      if (reg_last->implicit_sets)
+       remove_from_dependence_list (insn, &reg_last->implicit_sets);
       if (reg_last->clobbers)
        remove_from_dependence_list (insn, &reg_last->clobbers);
-      if (!reg_last->uses && !reg_last->sets && !reg_last->clobbers)
+      if (!reg_last->uses && !reg_last->sets && !reg_last->implicit_sets
+         && !reg_last->clobbers)
        CLEAR_REGNO_REG_SET (&deps->reg_last_in_use, i);
     }
 
@@ -3167,6 +3576,8 @@ sched_deps_finish (void)
 void
 init_deps_global (void)
 {
+  CLEAR_HARD_REG_SET (implicit_reg_pending_clobbers);
+  CLEAR_HARD_REG_SET (implicit_reg_pending_uses);
   reg_pending_sets = ALLOC_REG_SET (&reg_obstack);
   reg_pending_clobbers = ALLOC_REG_SET (&reg_obstack);
   reg_pending_uses = ALLOC_REG_SET (&reg_obstack);