30 Sep, 2019 (40 commits)
    • Hide regs_invalidated_by_call etc. · 7c395881
      The previous patches removed all target-independent uses of
      regs_invalidated_by_call, call_used_or_fixed_regs and
      call_used_or_fixed_reg_p.  This patch therefore restricts
      them to target-specific code (and reginfo.c, which sets them up).
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* hard-reg-set.h (regs_invalidated_by_call): Only define if
      	IN_TARGET_CODE.
      	(call_used_or_fixed_regs): Likewise.
      	(call_used_or_fixed_reg_p): Likewise.
      	* reginfo.c (regs_invalidated_by_call): New macro.
      
      From-SVN: r276338
      Richard Sandiford committed
    • Remove global call sets: shrink-wrap.c · b21a62b6
      This is a straight replacement of "registers we can clobber without
      saving them first".
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* shrink-wrap.c: Include function-abi.h.
      	(requires_stack_frame_p): Use crtl->abi to test whether the
      	current function can use a register without saving it first.
      
      From-SVN: r276337
      Richard Sandiford committed
    • Remove global call sets: sel-sched.c · 497b699b
      The main change here is to replace a crosses_call boolean with
      a bitmask of the ABIs used by the crossed calls.  For space reasons,
      I didn't also add a HARD_REG_SET that tracks the set of registers
      that are actually clobbered, which means that this is the one part
      of the series that doesn't benefit from -fipa-ra.  The existing
      FIXME suggests that the current structures aren't the preferred
      way of representing this anyhow, and the pass already makes
      conservative assumptions about call-crossing registers.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* sel-sched-ir.h (_def::crosses_call): Replace with...
      	(_def::crossed_call_abis): ..this new field.
      	(def_list_add): Take a mask of ABIs instead of a crosses_call
      	boolean.
      	* sel-sched-ir.c (def_list_add): Likewise.  Update initialization
      	of _def accordingly.
      	* sel-sched.c: Include function-abi.h.
      	(hard_regs_data::regs_for_call_clobbered): Delete.
      	(reg_rename::crosses_call): Replace with...
      	(reg_rename::crossed_call_abis): ...this new field.
      	(fur_static_params::crosses_call): Replace with...
      	(fur_static_params::crossed_call_abis): ...this new field.
      	(init_regs_for_mode): Don't initialize sel_hrd.regs_for_call_clobbered.
      	(init_hard_regs_data): Use crtl->abi to test which registers the
      	current function would need to save before it uses them.
      	(mark_unavailable_hard_regs): Update handling of call-clobbered
      	registers, using call_clobbers_in_region to find out which registers
      	might be call-clobbered (but without taking -fipa-ra into account
      	for now).  Remove separate handling of partially call-clobbered
      	registers.
      	(verify_target_availability): Use crossed_call_abis instead of
      	crosses_call.
      	(get_spec_check_type_for_insn, find_used_regs): Likewise.
      	(fur_orig_expr_found, fur_on_enter, fur_orig_expr_not_found): Likewise.
      
      From-SVN: r276336
      Richard Sandiford committed
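The crosses_call → crossed_call_abis change above can be sketched as a toy model (not GCC code; `def_info`, `note_crossed_call` and `crosses_any_call_p` are hypothetical names standing in for the real sel-sched-ir.h structures):

```cpp
#include <cassert>

// Toy model: instead of a single "crosses_call" boolean, keep a bitmask
// with one bit per ABI identifier for the calls that were crossed.
struct def_info
{
  unsigned int crossed_call_abis = 0;  // bit N set => crossed a call with ABI N
};

// Record that a definition crosses a call using the given ABI identifier.
inline void
note_crossed_call (def_info &def, unsigned int abi_id)
{
  def.crossed_call_abis |= 1U << abi_id;
}

// The old boolean query is recoverable as "is the mask nonzero?".
inline bool
crosses_any_call_p (const def_info &def)
{
  return def.crossed_call_abis != 0;
}
```

Since the boolean falls out as a nonzero test on the mask, the replacement loses no information while recording which ABIs were involved.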
    • Remove global call sets: sched-deps.c · 2e2c6df3
      This is a straight replacement of an existing "full or partial"
      call-clobber check.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* sched-deps.c (deps_analyze_insn): Use the ABI of the target
      	function to test whether a register is fully or partly clobbered.
      
      From-SVN: r276335
      Richard Sandiford committed
    • Remove global call sets: rtlanal.c · 52053c3b
      The reg_set_p part is simple, since the caller is asking about
      a specific REG rtx, with a known register number and mode.
      
      The find_all_hard_reg_sets part emphasises that the "implicit"
      behaviour was always a bit suspect, since it includes fully-clobbered
      registers but not partially-clobbered registers.  The only current
      user of this path is the c6x-specific scheduler predication code,
      and c6x doesn't have partly call-clobbered registers, so in practice
      it's fine.  I've added a comment to try to dissuade future users.
      (The !implicit path is OK and useful though.)
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* rtlanal.c: Include function-abi.h.
      	(reg_set_p): Use insn_callee_abi to get the ABI of the called
      	function and clobbers_reg_p to test whether the register
      	is call-clobbered.
      	(find_all_hard_reg_sets): When implicit is true, use insn_callee_abi
      	to get the ABI of the called function and full_reg_clobbers to
      	get the set of fully call-clobbered registers.  Warn about the
      	pitfalls of using this mode.
      
      From-SVN: r276334
      Richard Sandiford committed
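The pitfall described for the implicit path can be seen in a minimal sketch (`abi_model` is a hypothetical stand-in, not GCC's function_abi class):

```cpp
#include <cassert>
#include <cstdint>

// Toy model: an ABI keeps two register sets.  The "implicit" path
// described above consults only full_reg_clobbers, so values that live
// in partly call-clobbered registers are missed.
struct abi_model
{
  uint64_t full_reg_clobbers;     // fully clobbered by a call
  uint64_t partial_reg_clobbers;  // clobbered only in some modes

  uint64_t full_and_partial_reg_clobbers () const
  { return full_reg_clobbers | partial_reg_clobbers; }
};

// What the implicit path effectively asks.
inline bool
fully_clobbered_p (const abi_model &abi, unsigned int regno)
{
  return (abi.full_reg_clobbers >> regno) & 1;
}

// What a conservative caller on a target with partly call-clobbered
// registers would need to ask instead.
inline bool
possibly_clobbered_p (const abi_model &abi, unsigned int regno)
{
  return (abi.full_and_partial_reg_clobbers () >> regno) & 1;
}
```

On c6x neither query differs (no partly call-clobbered registers), which is why the patched code is fine in practice.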
    • Remove global call sets: reload.c · 12e20dde
      The inheritance code in find_equiv_reg can use clobbers_reg_p
      to test whether a call clobbers either of the equivalent registers.
      
      reload and find_reg use crtl->abi to test whether a register needs
      to be saved in the prologue before use.
      
      reload_as_needed can use full_and_partial_reg_clobbers and thus
      avoid needing to keep its own record of which registers are part
      call-clobbered.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* reload.c: Include function-abi.h.
      	(find_equiv_reg): Use clobbers_reg_p to test whether either
      	of the equivalent registers is clobbered by a call.
      	* reload1.c: Include function-abi.h.
      	(reg_reloaded_call_part_clobbered): Delete.
      	(reload): Use crtl->abi to test which registers would need
      	saving in the prologue before use.
      	(find_reg): Likewise.
      	(emit_reload_insns): Remove code for reg_reloaded_call_part_clobbered.
      	(reload_as_needed): Likewise.  Use full_and_partial_reg_clobbers
      	instead of call_used_or_fixed_regs | reg_reloaded_call_part_clobbered.
      
      From-SVN: r276333
      Richard Sandiford committed
    • Remove global call sets: regrename.c · 0ce77f46
      This patch makes regrename use a similar mask-and-clobber-set
      pair to IRA when tracking whether registers are clobbered by
      calls in a region.  Testing for a nonzero ABI mask is equivalent
      to testing for a register that crosses a call.
      
      Since AArch64 and c6x use regrename.h, they need to be updated
      to include function-abi.h first.  AIUI this is preferred over
      including function-abi.h in regrename.h.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* regrename.h (du_head::call_clobber_mask): New field.
      	(du_head::need_caller_save_reg): Replace with...
      	(du_head::call_abis): ...this new field.
      	* regrename.c: Include function-abi.h.
      	(call_clobbered_in_chain_p): New function.
      	(check_new_reg_p): Use crtl->abi when deciding whether a register
      	is free for use after RA.  Use call_clobbered_in_chain_p to test
      	whether a candidate register would be clobbered by a call.
      	(find_rename_reg): Don't add call-clobber conflicts here.
      	(rename_chains): Check call_abis instead of need_caller_save_reg.
      	(merge_chains): Update for changes to du_head.
      	(build_def_use): Use insn_callee_abi to get the ABI of the call insn
      	target.  Record the ABI identifier in call_abis and the set of
      	fully or partially clobbered registers in call_clobber_mask.
      	Add fully-clobbered registers to hard_conflicts here rather
      	than in find_rename_reg.
      	* config/aarch64/cortex-a57-fma-steering.c: Include function-abi.h.
      	(rename_single_chain): Check call_abis instead of need_caller_save_reg.
      	* config/aarch64/falkor-tag-collision-avoidance.c: Include
      	function-abi.h.
      	* config/c6x/c6x.c: Likewise.
      
      From-SVN: r276332
      Richard Sandiford committed
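The mask-and-clobber-set pair can be sketched as follows (toy stand-ins only; the real `call_abis` and `call_clobber_mask` fields live in regrename.h, and `hard_reg_set` here is a plain integer rather than GCC's HARD_REG_SET):

```cpp
#include <cassert>
#include <cstdint>

using hard_reg_set = uint64_t;  // stand-in for GCC's HARD_REG_SET

// Toy du_head: track both which ABIs are used by calls crossed by the
// chain and the union of registers those calls fully or partly clobber.
struct du_head
{
  unsigned int call_abis = 0;          // mask of ABI identifiers crossed
  hard_reg_set call_clobber_mask = 0;  // regs clobbered by those calls
};

// Record one crossed call with the given ABI and clobber set.
inline void
note_call (du_head &head, unsigned int abi_id, hard_reg_set clobbers)
{
  head.call_abis |= 1U << abi_id;
  head.call_clobber_mask |= clobbers;
}

// A candidate rename register is unusable if some crossed call clobbers it.
inline bool
call_clobbered_in_chain_p (const du_head &head, unsigned int regno)
{
  return (head.call_clobber_mask >> regno) & 1;
}
```

As the commit message says, a nonzero `call_abis` mask plays the role of the old need_caller_save_reg boolean.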
    • Remove global call sets: regcprop.c · 30503f4e
      This is a direct replacement of an existing test for fully and
      partially clobbered registers.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* regcprop.c (copyprop_hardreg_forward_1): Use the recorded
      	mode of the register when deciding whether it is no longer
      	available after a call.
      
      From-SVN: r276331
      Richard Sandiford committed
    • Remove global call sets: recog.c · 35b81ea3
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* recog.c: Include function-abi.h.
      	(peep2_find_free_register): Use crtl->abi when deciding whether
      	a register is free for use after RA.
      
      From-SVN: r276330
      Richard Sandiford committed
    • Remove global call sets: postreload-gcse.c · 7187286e
      This is another case in which we should conservatively treat
      partial kills as full kills.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* postreload-gcse.c: Include regs.h and function-abi.h.
      	(record_opr_changes): Use insn_callee_abi to get the ABI of the
      	call insn target.  Conservatively assume that partially-clobbered
      	registers are altered.
      
      From-SVN: r276329
      Richard Sandiford committed
    • Remove global call sets: postreload.c · 3df28f00
      The "|= fixed_regs" in reload_combine isn't necessary, since the
      set is only used to determine which values have changed (rather than,
      for example, which registers are available for use).
      
      In reload_cse_move2add we can be accurate about which registers
      are still available.  BLKmode indicates a continuation of the
      previous register, and since clobbers_reg_p handles multi-register
      values, it's enough to skip over BLKmode entries and just test the
      start register.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* postreload.c (reload_combine_recognize_pattern): Use crtl->abi
      	when deciding whether a register is free for use after RA.
      	(reload_combine): Remove unnecessary use of fixed_reg_set.
      	(reload_cse_move2add): Use insn_callee_abi to get the ABI of the
      	call insn target.  Use reg_mode when testing whether a register
      	is no longer available.
      
      From-SVN: r276328
      Richard Sandiford committed
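The BLKmode-skipping scan described for reload_cse_move2add can be sketched with toy types (not GCC's; the point is only that a continuation entry need not be tested separately from its start register):

```cpp
#include <cassert>
#include <cstddef>

// Toy modes: MODE_BLK marks the continuation of a multi-register value.
enum toy_mode { MODE_BLK, MODE_SI, MODE_DI };

struct reg_entry
{
  toy_mode mode;  // MODE_BLK => continuation of the previous register
};

// Count the distinct tracked values in a table of NREGS entries, testing
// only the start register of each value and skipping continuations.
inline int
count_tracked_values (const reg_entry *table, size_t nregs)
{
  int count = 0;
  for (size_t i = 0; i < nregs; ++i)
    if (table[i].mode != MODE_BLK)
      ++count;
  return count;
}
```

A per-call clobber test works the same way: since a clobber check on the start register covers the whole multi-register value, the BLKmode slots can simply be skipped.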
    • Remove global call sets: LRA · a1e6ee38
      lra_reg has an actual_call_used_reg_set field that is only used during
      inheritance.  This in turn required a special lra_create_live_ranges
      pass for flag_ipa_ra to set up this field.  This patch instead makes
      the inheritance code do its own live register tracking, using the
      same ABI-mask-and-clobber-set pair as for IRA.
      
      Tracking ABIs simplifies (and cheapens) the logic in lra-lives.c and
      means we no longer need a separate path for -fipa-ra.  It also means
      we can remove TARGET_RETURN_CALL_WITH_MAX_CLOBBERS.
      
      The patch also strengthens the sanity check in lra_assigns so that
      we check that reg_renumber is consistent with the whole conflict set,
      not just the call-clobbered registers.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* target.def (return_call_with_max_clobbers): Delete.
      	* doc/tm.texi.in (TARGET_RETURN_CALL_WITH_MAX_CLOBBERS): Delete.
      	* doc/tm.texi: Regenerate.
      	* config/aarch64/aarch64.c (aarch64_return_call_with_max_clobbers)
      	(TARGET_RETURN_CALL_WITH_MAX_CLOBBERS): Delete.
      	* lra-int.h (lra_reg::actual_call_used_reg_set): Delete.
      	(lra_reg::call_insn): Delete.
      	* lra.c: Include function-abi.h.
      	(initialize_lra_reg_info_element): Don't initialize the fields above.
      	(lra): Use crtl->abi to test whether the current function needs to
      	save a register in the prologue.  Remove special pre-inheritance
      	lra_create_live_ranges pass for flag_ipa_ra.
      	* lra-assigns.c: Include function-abi.h.
      	(find_hard_regno_for_1): Use crtl->abi to test whether the current
      	function needs to save a register in the prologue.
      	(lra_assign): Assert that registers aren't allocated to a
      	conflicting register, rather than checking only for overlaps
      	with call_used_or_fixed_regs.  Do this even for flag_ipa_ra,
      	and for registers that are not live across a call.
      	* lra-constraints.c (last_call_for_abi): New variable.
      	(full_and_partial_call_clobbers): Likewise.
      	(setup_next_usage_insn): Remove the register from
      	full_and_partial_call_clobbers.
      	(need_for_call_save_p): Use call_clobbered_in_region_p to test
      	whether the register needs a caller save.
      	(need_for_split_p): Use full_and_partial_reg_clobbers instead
      	of call_used_or_fixed_regs.
      	(inherit_in_ebb): Initialize and maintain last_call_for_abi and
      	full_and_partial_call_clobbers.
      	* lra-lives.c (check_pseudos_live_through_calls): Replace
      	last_call_used_reg_set and call_insn arguments with an abi argument.
      	Remove handling of lra_reg::call_insn.  Use function_abi::mode_clobbers
      	as the set of conflicting registers.
      	(calls_have_same_clobbers_p): Delete.
      	(process_bb_lives): Track the ABI of the last call instead of an
      	insn/HARD_REG_SET pair.  Update calls to
      	check_pseudos_live_through_calls.  Use eh_edge_abi to calculate
      	the set of registers that could be clobbered by an EH edge.
      	Include partially-clobbered as well as fully-clobbered registers.
      	(lra_create_live_ranges_1): Don't initialize lra_reg::call_insn.
      	* lra-remat.c: Include function-abi.h.
      	(call_used_regs_arr_len, call_used_regs_arr): Delete.
      	(set_bb_regs): Use insn_callee_abi to get the set of call-clobbered
      	registers and bitmap_view to combine them into dead_regs.
      	(call_used_input_regno_present_p): Take a function_abi argument
      	and use it to test whether a register is call-clobbered.
      	(calculate_gen_cands): Use insn_callee_abi to get the ABI of the
      	call insn target.  Update the call to call_used_input_regno_present_p.
      	(do_remat): Likewise.
      	(lra_remat): Remove the initialization of call_used_regs_arr_len
      	and call_used_regs_arr.
      
      From-SVN: r276327
      Richard Sandiford committed
    • Remove global call sets: loop-iv.c · 5c64181d
      Similar idea to the combine.c and gcse.c patches.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* loop-iv.c: Include regs.h and function-abi.h.
      	(simplify_using_initial_values): Use insn_callee_abi to get the
      	ABI of the call insn target.  Conservatively assume that
      	partially-clobbered registers are altered.
      
      From-SVN: r276326
      Richard Sandiford committed
    • Remove global call sets: IRA · 6c476222
      For -fipa-ra, IRA already keeps track of which specific registers
      are call-clobbered in a region, rather than using global information.
      The patch generalises this so that it tracks which ABIs are used
      by calls in the region.
      
      We can then use the new ABI descriptors to handle partially-clobbered
      registers in the same way as fully-clobbered registers, without having
      special code for targetm.hard_regno_call_part_clobbered.  This in turn
      makes -fipa-ra work for partially-clobbered registers too.
      
      A side-effect of allowing multiple ABIs is that we no longer have
      an obvious set of conflicting registers for the self-described
      "fragile hack" in ira-conflicts.c.  This code kicks in for
      user-defined registers that aren't live across a call at -O0,
      and it tries to avoid allocating a call-clobbered register to them.
      Here I've used the set of call-clobbered registers in the current
      function's ABI, applied on top of any registers that are clobbered by
      called functions.  This is enough to keep gcc.dg/debug/dwarf2/pr5948.c
      happy.
      
      The handling of GENERIC_STACK_CHECK in do_reload seemed to have
      a reversed condition:
      
            for (int i = 0; i < FIRST_PSEUDO_REGISTER; i++)
      	if (df_regs_ever_live_p (i)
      	    && !fixed_regs[i]
      	    && call_used_or_fixed_reg_p (i))
      	  size += UNITS_PER_WORD;
      
      The final part of the condition counts registers that don't need to be
      saved in the prologue, but I think the opposite was intended.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* function-abi.h (call_clobbers_in_region): Declare.
      	(call_clobbered_in_region_p): New function.
      	* function-abi.cc (call_clobbers_in_region): Likewise.
      	* ira-int.h: Include function-abi.h.
      	(ira_allocno::crossed_calls_abis): New field.
      	(ALLOCNO_CROSSED_CALLS_ABIS): New macro.
      	(ira_need_caller_save_regs): New function.
      	(ira_need_caller_save_p): Likewise.
      	* ira.c (setup_reg_renumber): Use ira_need_caller_save_p instead
      	of call_used_or_fixed_regs.
      	(do_reload): Use crtl->abi to test whether the current function
      	needs to save a register in the prologue.  Count registers that
      	need to be saved rather than registers that don't.
      	* ira-build.c (create_cap_allocno): Copy ALLOCNO_CROSSED_CALLS_ABIS.
      	Remove unnecessary | from ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS.
      	(propagate_allocno_info): Merge ALLOCNO_CROSSED_CALLS_ABIS too.
      	(propagate_some_info_from_allocno): Likewise.
      	(copy_info_to_removed_store_destinations): Likewise.
      	(ira_flattening): Say that ALLOCNO_CROSSED_CALLS_ABIS and
      	ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS are handled conservatively.
      	(ira_build): Use ira_need_caller_save_regs instead of
      	call_used_or_fixed_regs.
      	* ira-color.c (calculate_saved_nregs): Use crtl->abi to test
      	whether the current function would need to save a register
      	before using it.
      	(calculate_spill_cost): Likewise.
      	(allocno_reload_assign): Use ira_need_caller_save_regs and
      	ira_need_caller_save_p instead of call_used_or_fixed_regs.
      	* ira-conflicts.c (ira_build_conflicts): Use
      	ira_need_caller_save_regs rather than call_used_or_fixed_regs
      	as the set of call-clobbered registers.  Remove the
      	call_used_or_fixed_regs mask from the calculation of
      	temp_hard_reg_set and mask its use instead.  Remove special
      	handling of partially-clobbered registers.
      	* ira-costs.c (ira_tune_allocno_costs): Use ira_need_caller_save_p.
      	* ira-lives.c (process_bb_node_lives): Use mode_clobbers to
      	calculate the set of conflicting registers for calls that
      	can throw.  Record the ABIs of calls in ALLOCNO_CROSSED_CALLS_ABIS.
      	Use full_and_partial_reg_clobbers rather than full_reg_clobbers
      	for the calculation of ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS.
      	Use eh_edge_abi to calculate the set of registers that could
      	be clobbered by an EH edge.  Include partially-clobbered as
      	well as fully-clobbered registers.
      
      From-SVN: r276325
      Richard Sandiford committed
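A region query in the spirit of the new call_clobbered_in_region_p can be sketched like this (all names here are hypothetical stand-ins; the real interface is declared in function-abi.h):

```cpp
#include <cassert>
#include <cstdint>

// Toy ABI descriptor: the set of registers clobbered by calls that use
// this ABI.
struct abi_entry
{
  uint64_t clobbers;
};

// Given the mask of ABIs used by calls in a region, test whether any of
// those ABIs clobbers REGNO.  Walks only the set bits of the mask.
inline bool
region_clobbers_reg_p (unsigned int abi_mask, const abi_entry *abis,
                       unsigned int regno)
{
  for (unsigned int id = 0; abi_mask != 0; ++id, abi_mask >>= 1)
    if ((abi_mask & 1) && ((abis[id].clobbers >> regno) & 1))
      return true;
  return false;
}
```

Because the query is per-ABI rather than global, partially-clobbered registers fall out of the same mechanism as fully-clobbered ones, which is what lets -fipa-ra apply to both.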
    • Remove global call sets: haifa-sched.c · 7450506b
      The code patched here is counting how many registers the current
      function would need to save in the prologue before it uses them.
      The code is called per function, so using crtl is OK.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* haifa-sched.c: Include function-abi.h.
      	(alloc_global_sched_pressure_data): Use crtl->abi to check whether
      	the function would need to save a register before using it.
      
      From-SVN: r276324
      Richard Sandiford committed
    • Remove global call sets: gcse.c · a4dfaad2
      This is another case in which we can conservatively treat partial
      kills as full kills.  Again this is in principle a bug fix for
      TARGET_HARD_REGNO_CALL_PART_CLOBBERED targets, but in practice
      it probably doesn't make a difference.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* gcse.c: Include function-abi.h.
      	(compute_hash_table_work): Use insn_callee_abi to get the ABI of
      	the call insn target.  Invalidate partially call-clobbered
      	registers as well as fully call-clobbered ones.
      
      From-SVN: r276323
      Richard Sandiford committed
    • Remove global call sets: function.c · c1b58272
      Whatever the rights and wrongs of the way aggregate_value_p
      handles call-preserved registers, it's a de facto part of the ABI,
      so we shouldn't change it.  The patch simply extends the current
      approach to whatever call-preserved set the function happens to
      be using.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* function.c (aggregate_value_p): Work out which ABI the
      	function is using before testing which registers are at least
      	partly preserved by a call.
      
      From-SVN: r276322
      Richard Sandiford committed
    • Remove global call sets: early-remat.c · 18495696
      This pass previously excluded rematerialisation candidates if they
      clobbered a call-preserved register, on the basis that it then
      wouldn't be safe to add new instances of the candidate instruction
      after a call.  This patch instead makes the decision on a call-by-call
      basis.
      
      The second emit_remat_insns_for_block hunk probably isn't needed,
      but it seems safer and more consistent to have it, so that every call
      to emit_remat_insns is preceded by a check for invalid clobbers.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* early-remat.c: Include regs.h and function-abi.h.
      	(early_remat::maybe_add_candidate): Don't check for call-clobbered
      	registers here.
      	(early_remat::restrict_remat_for_unavail_regs): New function.
      	(early_remat::restrict_remat_for_call): Likewise.
      	(early_remat::process_block): Before calling emit_remat_insns
      	for a previous call in the block, invalidate any candidates
      	that would clobber call-preserved registers.
      	(early_remat::emit_remat_insns_for_block): Likewise for the
      	final call in a block.  Do the same thing for live-in registers
      	when calling emit_remat_insns at the head of a block.
      
      From-SVN: r276321
      Richard Sandiford committed
    • Remove global call sets: DF (entry/exit defs) · 559c1ae1
      The code patched here is seeing whether the current function
      needs to save at least part of a register before using it.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* df-scan.c (df_get_entry_block_def_set): Use crtl->abi to test
      	whether the current function needs to save at least part of a
      	register before using it.
      	(df_get_exit_block_use_set): Likewise for epilogue restores.
      
      From-SVN: r276320
      Richard Sandiford committed
    • Remove global call sets: DF (EH edges) · c9250371
      The DF dense_invalidated_by_call and sparse_invalidated_by_call
      sets are actually only used on EH edges, and so are more the set
      of registers that are invalidated by a taken EH edge.  Under the
      new order, that means that they describe eh_edge_abi.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* df-problems.c: Include regs.h and function-abi.h.
      	(df_rd_problem_data): Rename sparse_invalidated_by_call to
      	sparse_invalidated_by_eh and dense_invalidated_by_call to
      	dense_invalidated_by_eh.
      	(df_print_bb_index): Update accordingly.
      	(df_rd_alloc, df_rd_start_dump, df_rd_confluence_n): Likewise.
      	(df_lr_confluence_n): Use eh_edge_abi to get the set of registers
      	that are clobbered by an EH edge.  Clobber partially-clobbered
      	registers as well as fully-clobbered ones.
      	(df_md_confluence_n): Likewise.
      	(df_rd_local_compute): Likewise.  Update for changes to
      	df_rd_problem_data.
      	* df-scan.c (df_scan_start_dump): Use eh_edge_abi to get the set
      	of registers that are clobbered by an EH edge.  Include
      	partially-clobbered registers as well as fully-clobbered ones.
      
      From-SVN: r276319
      Richard Sandiford committed
    • Remove global call sets: cselib.c · 3bd29185
      cselib_invalidate_regno is a no-op if REG_VALUES (i) is null,
      so we can check that first.  Then, if we know what mode the register
      currently has, we can check whether it's clobbered in that mode.
      
      Using GET_MODE (values->elt->val_rtx) to get the mode of the last
      set is taken from cselib_reg_set_mode.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* cselib.c (cselib_process_insn): If we know what mode a
      	register was set in, check whether it is clobbered in that
      	mode by a call.  Only fall back to reg_raw_mode if that fails.
      
      From-SVN: r276318
      Richard Sandiford committed
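The check order described above can be sketched as a toy (all names hypothetical; GCC's real logic lives in cselib_process_insn and uses reg_raw_mode as the fallback):

```cpp
#include <cassert>

// Toy modes: MODE_V8SI stands in for a wide mode whose high part is
// clobbered on some ABIs; MODE_UNKNOWN means the last-set mode is unknown.
enum toy_mode { MODE_UNKNOWN, MODE_SI, MODE_V8SI };

struct abi_model
{
  // Pretend register 1 is only partly call-clobbered: a call clobbers it
  // in the wide mode but preserves narrow values.
  bool clobbers_reg_p (toy_mode mode, unsigned int regno) const
  {
    if (regno == 1)
      return mode == MODE_V8SI;
    return regno < 8;
  }
};

// Invalidate nothing when no values are recorded; otherwise prefer a
// mode-specific clobber test, falling back to the widest mode only when
// the last-set mode is unknown.
inline bool
must_invalidate_p (const abi_model &abi, const toy_mode *last_set_mode,
                   unsigned int regno, bool has_values)
{
  if (!has_values)
    return false;
  toy_mode mode = last_set_mode[regno];
  if (mode != MODE_UNKNOWN)
    return abi.clobbers_reg_p (mode, regno);
  return abi.clobbers_reg_p (MODE_V8SI, regno);
}
```

The payoff is that a narrow value in a partly call-clobbered register can survive a call instead of being invalidated unconditionally.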
    • Remove global call sets: cse.c · 311b62ce
      Like with the combine.c patch, this one keeps things simple by
      invalidating values in partially-clobbered registers, rather than
      trying to tell whether the value in a partially-clobbered register
      is actually clobbered or not.  Again, this is in principle a bug fix,
      but probably never matters in practice.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* cse.c: Include regs.h and function-abi.h.
      	(invalidate_for_call): Take the call insn as an argument.
      	Use insn_callee_abi to get the ABI of the call and invalidate
      	partially clobbered registers as well as fully clobbered ones.
      	(cse_insn): Update call accordingly.
      
      From-SVN: r276317
      Richard Sandiford committed
    • Remove global call sets: combine.c · 212b7076
      There shouldn't be many cases in which a useful hard register is
      live across a call before RA, so we might as well keep things simple
      and invalidate partially-clobbered registers here, in case the values
      they hold leak into the call-clobbered part.  In principle this is
      a bug fix for TARGET_HARD_REGNO_CALL_PART_CLOBBERED targets,
      but in practice it probably doesn't make a difference.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* combine.c: Include function-abi.h.
      	(record_dead_and_set_regs): Use insn_callee_abi to get the ABI
      	of the target of call insns.  Invalidate partially-clobbered
      	registers as well as fully-clobbered ones.
      
      From-SVN: r276316
      Richard Sandiford committed
    • Remove global call sets: cfgloopanal.c · 43b484fb
      ...or rather, make the use of the default ABI explicit.  That seems
      OK if not ideal for this heuristic.
      
      In practical terms, the code patched here is counting GENERAL_REGS,
      which are treated in the same way by all current ABI variants
      on AArch64.  It might give bad results if used for interrupt
      handlers though.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* cfgloopanal.c: Include regs.h and function-abi.h.
      	(init_set_costs): Use default_function_abi to test whether
      	a general register is call-clobbered.
      
      From-SVN: r276315
      Richard Sandiford committed
    • Remove global call sets: cfgcleanup.c · 01699686
      old_insns_match_p just tests whether two instructions are
      similar enough to merge.  With insn_callee_abi it makes more
      sense to compare the ABIs directly.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* cfgcleanup.c (old_insns_match_p): Compare the ABIs of calls
      	instead of the call-clobbered sets.
      
      From-SVN: r276314
      Richard Sandiford committed
    • Remove global call sets: caller-save.c · 7392e5d8
      All caller-save.c uses of "|= fixed_reg_set" added in a previous patch
      were redundant, since the sets are later ANDed with ~fixed_reg_set.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* caller-save.c (setup_save_areas): Remove redundant |s of
      	fixed_reg_set.
      	(save_call_clobbered_regs): Likewise.  Use the call ABI rather
      	than call_used_or_fixed_regs to decide whether a REG_RETURNED
      	value is useful.
      
      From-SVN: r276313
      Richard Sandiford committed
    • Pass an ABI to choose_hard_reg_mode · 737d6a1a
      choose_hard_reg_mode previously took a boolean saying whether the
      mode needed to be call-preserved.  This patch replaces it with an
      optional ABI pointer instead, so that the function can use that
      to test whether a value is call-saved.
      
      default_dwarf_frame_reg_mode uses eh_edge_abi because that's the
      ABI that matters for unwinding.  Targets need to override the hook
      if they want something different.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* rtl.h (predefined_function_abi): Declare.
      	(choose_hard_reg_mode): Take a pointer to a predefined_function_abi
      	instead of a boolean call_save flag.
      	* config/gcn/gcn.c (gcn_hard_regno_caller_save_mode): Update call
      	accordingly.
      	* config/i386/i386.h (HARD_REGNO_CALLER_SAVE_MODE): Likewise.
      	* config/ia64/ia64.h (HARD_REGNO_CALLER_SAVE_MODE): Likewise.
      	* config/mips/mips.c (mips_hard_regno_caller_save_mode): Likewise.
      	* config/msp430/msp430.h (HARD_REGNO_CALLER_SAVE_MODE): Likewise.
      	* config/rs6000/rs6000.h (HARD_REGNO_CALLER_SAVE_MODE): Likewise.
      	* config/sh/sh.c (sh_hard_regno_caller_save_mode): Likewise.
      	* reginfo.c (init_reg_modes_target): Likewise.
      	(choose_hard_reg_mode): Take a pointer to a predefined_function_abi
      	instead of a boolean call_save flag.
      	* targhooks.c: Include function-abi.h.
      	(default_dwarf_frame_reg_mode): Update call to choose_hard_reg_mode,
      	using eh_edge_abi to choose the mode.
      
      From-SVN: r276312
      Richard Sandiford committed
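The flag-to-ABI-pointer change can be illustrated with a simplified stand-in (not GCC's actual choose_hard_reg_mode signature; `abi_model` and `acceptable_reg_p` are invented for the sketch):

```cpp
#include <cassert>
#include <cstdint>

// Toy ABI descriptor exposing the call-clobber query the chooser needs.
struct abi_model
{
  uint64_t full_and_partial_reg_clobbers;
  bool clobbers_reg_p (unsigned int regno) const
  { return (full_and_partial_reg_clobbers >> regno) & 1; }
};

// Old style: acceptable_reg_p (regno, bool call_save) could only ask a
// yes/no question about one global notion of "call-preserved".
// New style: a null ABI pointer means call-preservation doesn't matter;
// otherwise the ABI itself decides which registers are preserved.
inline bool
acceptable_reg_p (unsigned int regno, const abi_model *abi)
{
  return abi == nullptr || !abi->clobbers_reg_p (regno);
}
```

Passing eh_edge_abi (as default_dwarf_frame_reg_mode does) then asks the question relative to the ABI that matters for unwinding, rather than a single global call-saved set.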
    • Pass an ABI identifier to hard_regno_call_part_clobbered · 6ee2cc70
      This patch replaces the rtx_insn argument to
      targetm.hard_regno_call_part_clobbered with an ABI identifier, since
      call insns are now just one possible way of getting an ABI handle.
      This in turn allows predefined_function_abi::initialize to do the
      right thing for non-default ABIs.
      
      The horrible ?: in need_for_call_save_p goes away in a later patch,
      with the series as a whole removing most direct calls to the hook in
      favour of function_abi operations.
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* target.def (hard_regno_call_part_clobbered): Take an ABI
      	identifier instead of an rtx_insn.
      	* doc/tm.texi: Regenerate.
      	* hooks.h (hook_bool_insn_uint_mode_false): Delete.
      	(hook_bool_uint_uint_mode_false): New function.
      	* hooks.c (hook_bool_insn_uint_mode_false): Delete.
      	(hook_bool_uint_uint_mode_false): New function.
      	* config/aarch64/aarch64.c (aarch64_hard_regno_call_part_clobbered):
      	Take an ABI identifier instead of an rtx_insn.
      	* config/avr/avr.c (avr_hard_regno_call_part_clobbered): Likewise.
      	* config/i386/i386.c (ix86_hard_regno_call_part_clobbered): Likewise.
      	* config/mips/mips.c (mips_hard_regno_call_part_clobbered): Likewise.
      	* config/pru/pru.c (pru_hard_regno_call_part_clobbered): Likewise.
      	* config/rs6000/rs6000.c (rs6000_hard_regno_call_part_clobbered):
      	Likewise.
      	* config/s390/s390.c (s390_hard_regno_call_part_clobbered): Likewise.
      	* cselib.c: Include function-abi.h.
      	(cselib_process_insn): Update call to
      	targetm.hard_regno_call_part_clobbered, using insn_callee_abi
      	to get the appropriate ABI identifier.
      	* function-abi.cc (predefined_function_abi::initialize): Update call
      	to targetm.hard_regno_call_part_clobbered.
      	* ira-conflicts.c (ira_build_conflicts): Likewise.
      	* ira-costs.c (ira_tune_allocno_costs): Likewise.
      	* lra-constraints.c: Include function-abi.h.
      	(need_for_call_save_p): Update call to
      	targetm.hard_regno_call_part_clobbered, using insn_callee_abi
      	to get the appropriate ABI identifier.
      	* lra-lives.c (check_pseudos_live_through_calls): Likewise.
      	* regcprop.c (copyprop_hardreg_forward_1): Update call
      	to targetm.hard_regno_call_part_clobbered.
      	* reginfo.c (choose_hard_reg_mode): Likewise.
      	* regrename.c (check_new_reg_p): Likewise.
      	* reload.c (find_equiv_reg): Likewise.
      	* reload1.c (emit_reload_insns): Likewise.
      	* sched-deps.c: Include function-abi.h.
      	(deps_analyze_insn): Update call to
      	targetm.hard_regno_call_part_clobbered, using insn_callee_abi
      	to get the appropriate ABI identifier.
      	* sel-sched.c (init_regs_for_mode, mark_unavailable_hard_regs): Update
      	call to targetm.hard_regno_call_part_clobbered.
      	* targhooks.c (default_dwarf_frame_reg_mode): Likewise.
      
      From-SVN: r276311
      Richard Sandiford committed
    • [x86] Robustify vzeroupper handling across calls · 2a2e3a0d
      One of the effects of the function_abi series is to make -fipa-ra
      work for partially call-clobbered registers.  E.g. if a call preserves
      only the low 32 bits of a register R, we handled the partial clobber
      separately from -fipa-ra, and so treated the upper bits of R as
      clobbered even if we knew that the target function doesn't touch R.
      
      "Fixing" this caused problems for the vzeroupper handling on x86.
      The pass that inserts the vzerouppers assumes that no 256-bit or 512-bit
      values are live across a call unless the call takes a 256-bit or 512-bit
      argument:
      
            /* Needed mode is set to AVX_U128_CLEAN if there are
      	 no 256bit or 512bit modes used in function arguments. */
      
      This implicitly relies on:
      
      /* Implement TARGET_HARD_REGNO_CALL_PART_CLOBBERED.  The only ABI that
         saves SSE registers across calls is Win64 (thus no need to check the
         current ABI here), and with AVX enabled Win64 only guarantees that
         the low 16 bytes are saved.  */
      
      static bool
      ix86_hard_regno_call_part_clobbered (rtx_insn *insn ATTRIBUTE_UNUSED,
      				     unsigned int regno, machine_mode mode)
      {
        return SSE_REGNO_P (regno) && GET_MODE_SIZE (mode) > 16;
      }
      
      The comment suggests that this code is only needed for Win64 and that
      not testing for Win64 is just a simplification.  But in practice it was
      needed for correctness on GNU/Linux and other targets too, since without
      it the RA would be able to keep 256-bit and 512-bit values in SSE
      registers across calls that are known not to clobber them.
      
      This patch conservatively treats calls as AVX_U128_ANY if the RA can see
      that some SSE registers are not touched by a call.  There are then no
      regressions if the ix86_hard_regno_call_part_clobbered check is disabled
      for GNU/Linux (not something we should do; this was just for testing).
      
      If in fact we want -fipa-ra to pretend that all functions clobber
      SSE registers above 128 bits, it'd certainly be possible to arrange
      that.  But IMO that would be an optimisation decision, whereas what
      the patch is fixing is a correctness decision.  So I think we should
      have this check even so.
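      The conservative rule the patch introduces can be sketched as follows (a simplified model of the decision; the real logic lives in ix86_avx_u128_mode_needed and inspects the call's ABI):

```cpp
#include <cassert>

enum avx_u128_state { AVX_U128_CLEAN, AVX_U128_ANY };

// Simplified decision: if -fipa-ra knows the callee leaves some SSE
// registers untouched, the upper halves of 256-bit/512-bit values
// might survive the call, so the call must be treated as AVX_U128_ANY
// rather than assuming a clean upper state.
avx_u128_state
toy_mode_needed_for_call (bool callee_preserves_some_sse,
                          bool has_256bit_or_512bit_args)
{
  if (callee_preserves_some_sse)
    return AVX_U128_ANY;
  return has_256bit_or_512bit_args ? AVX_U128_ANY : AVX_U128_CLEAN;
}
```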
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* config/i386/i386.c: Include function-abi.h.
      	(ix86_avx_u128_mode_needed): Treat function calls as AVX_U128_ANY
      	if they preserve some 256-bit or 512-bit SSE registers.
      
      From-SVN: r276310
      Richard Sandiford committed
    • Add a function for getting the ABI of a call insn target · 5a5a3bc5
      This patch replaces get_call_reg_set_usage with insn_callee_abi,
      which returns the ABI of the target of a call insn.  The ABI's
      full_reg_clobbers corresponds to regs_invalidated_by_call,
      whereas many callers instead passed call_used_or_fixed_regs, i.e.:
      
        (regs_invalidated_by_call | fixed_reg_set)
      
      The patch slavishly preserves the "| fixed_reg_set" for these callers;
      later patches will clean this up.
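      The relationship between the two register sets can be pictured with small bitsets (a toy 8-register machine, not GCC's HARD_REG_SET):

```cpp
#include <bitset>

// full_reg_clobbers corresponds to regs_invalidated_by_call; callers
// that previously passed call_used_or_fixed_regs effectively used the
// union below, which is what the "| fixed_reg_set" preserves for now.
std::bitset<8>
toy_call_used_or_fixed (std::bitset<8> regs_invalidated_by_call,
                        std::bitset<8> fixed_reg_set)
{
  return regs_invalidated_by_call | fixed_reg_set;
}
```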
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* target.def (insn_callee_abi): New hook.
      	(remove_extra_call_preserved_regs): Delete.
      	* doc/tm.texi.in (TARGET_INSN_CALLEE_ABI): New macro.
      	(TARGET_REMOVE_EXTRA_CALL_PRESERVED_REGS): Delete.
      	* doc/tm.texi: Regenerate.
      	* targhooks.h (default_remove_extra_call_preserved_regs): Delete.
      	* targhooks.c (default_remove_extra_call_preserved_regs): Delete.
      	* config/aarch64/aarch64.c (aarch64_simd_call_p): Constify the
      	insn argument.
      	(aarch64_remove_extra_call_preserved_regs): Delete.
      	(aarch64_insn_callee_abi): New function.
      	(TARGET_REMOVE_EXTRA_CALL_PRESERVED_REGS): Delete.
      	(TARGET_INSN_CALLEE_ABI): New macro.
      	* rtl.h (get_call_fndecl): Declare.
      	(cgraph_rtl_info): Fix formatting.  Tweak comment for
      	function_used_regs.  Remove function_used_regs_valid.
      	* rtlanal.c (get_call_fndecl): Moved from final.c.
      	* function-abi.h (insn_callee_abi): Declare.
      	(target_function_abi_info): Mention insn_callee_abi.
      	* function-abi.cc (fndecl_abi): Handle flag_ipa_ra in a similar
      	way to what get_call_reg_set_usage did.
      	(insn_callee_abi): New function.
      	* regs.h (get_call_reg_set_usage): Delete.
      	* final.c: Include function-abi.h.
      	(collect_fn_hard_reg_usage): Add fixed and stack registers to
      	function_used_regs before the main loop rather than afterwards.
      	Use insn_callee_abi instead of get_call_reg_set_usage.  Exit early
      	if function_used_regs ends up not being useful.
      	(get_call_fndecl): Move to rtlanal.c.
      	(get_call_cgraph_rtl_info, get_call_reg_set_usage): Delete.
      	* caller-save.c: Include function-abi.h.
      	(setup_save_areas, save_call_clobbered_regs): Use insn_callee_abi
      	instead of get_call_reg_set_usage.
      	* cfgcleanup.c: Include function-abi.h.
      	(old_insns_match_p): Use insn_callee_abi instead of
      	get_call_reg_set_usage.
      	* cgraph.h (cgraph_node::rtl_info): Take a const_tree instead of
      	a tree.
      	* cgraph.c (cgraph_node::rtl_info): Likewise.  Initialize
      	function_used_regs.
      	* df-scan.c: Include function-abi.h.
      	(df_get_call_refs): Use insn_callee_abi instead of
      	get_call_reg_set_usage.
      	* ira-lives.c: Include function-abi.h.
      	(process_bb_node_lives): Use insn_callee_abi instead of
      	get_call_reg_set_usage.
      	* lra-lives.c: Include function-abi.h.
      	(process_bb_lives): Use insn_callee_abi instead of
      	get_call_reg_set_usage.
      	* postreload.c: Include function-abi.h.
      	(reload_combine): Use insn_callee_abi instead of
      	get_call_reg_set_usage.
      	* regcprop.c: Include function-abi.h.
      	(copyprop_hardreg_forward_1): Use insn_callee_abi instead of
      	get_call_reg_set_usage.
      	* resource.c: Include function-abi.h.
      	(mark_set_resources, mark_target_live_regs): Use insn_callee_abi
      	instead of get_call_reg_set_usage.
      	* var-tracking.c: Include function-abi.h.
      	(dataflow_set_clear_at_call): Use insn_callee_abi instead of
      	get_call_reg_set_usage.
      
      From-SVN: r276309
      Richard Sandiford committed
    • Add a target hook for getting an ABI from a function type · 002ffd3c
      This patch adds a target hook that allows targets to return
      the ABI associated with a particular function type.  Generally,
      when multiple ABIs are in use, it must be possible to tell from
      a function type and its attributes which ABI it is using.
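      A minimal sketch of the idea, with invented attribute and ABI names (the real AArch64 hook recognises the aarch64_vector_pcs attribute on the function type):

```cpp
#include <string>
#include <vector>

enum toy_abi_id { TOY_ABI_DEFAULT, TOY_ABI_VECTOR_PCS };

// Stand-in for a function type: just its list of attribute names.
// The hook maps a type (via its attributes) to the ABI it uses.
toy_abi_id
toy_fntype_abi (const std::vector<std::string> &type_attributes)
{
  for (const std::string &attr : type_attributes)
    if (attr == "vector_pcs")   // invented attribute name
      return TOY_ABI_VECTOR_PCS;
  return TOY_ABI_DEFAULT;
}
```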
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* target.def (fntype_abi): New target hook.
      	* doc/tm.texi.in (TARGET_FNTYPE_ABI): Likewise.
      	* doc/tm.texi: Regenerate.
      	* target.h (predefined_function_abi): Declare.
      	* function-abi.cc (fntype_abi): Call targetm.calls.fntype_abi,
      	if defined.
      	* config/aarch64/aarch64.h (ARM_PCS_SIMD): New arm_pcs value.
      	* config/aarch64/aarch64.c: Include function-abi.h.
      	(aarch64_simd_abi, aarch64_fntype_abi): New functions.
      	(TARGET_FNTYPE_ABI): Define.
      
      From-SVN: r276308
      Richard Sandiford committed
    • Add function_abi.{h,cc} · bd785b44
      This patch adds new structures and functions for handling
      multiple ABIs in a translation unit.  The structures are:
      
      - predefined_function_abi: describes a static, predefined ABI
      - function_abi: describes either a predefined ABI or a local
        variant of one (e.g. taking -fipa-ra into account)
      
      The patch adds functions for getting the ABI from a given type
      or decl; a later patch will also add a function for getting the
      ABI of the target of a call insn.
      
      Although ABIs are about much more than call-clobber/saved choices,
      I wanted to keep the name general in case we add more ABI-related
      information in future.
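      The division of labour between the two structures can be sketched like this (a toy 8-register model; the real structures use HARD_REG_SET and carry more state):

```cpp
#include <bitset>

// A predefined ABI owns a static clobber set...
struct toy_predefined_function_abi
{
  unsigned int id;
  std::bitset<8> full_reg_clobbers;   // like regs_invalidated_by_call
};

// ...while function_abi can describe a local narrowing of one,
// e.g. when -fipa-ra proves that some registers are untouched.
struct toy_function_abi
{
  const toy_predefined_function_abi *base;
  std::bitset<8> clobbers;   // subset of base->full_reg_clobbers

  bool clobbers_reg_p (unsigned int regno) const
  { return clobbers.test (regno); }
};

// Example narrowing: the base ABI clobbers regs 0..3, but -fipa-ra
// proved regs 2 and 3 are untouched by this particular function.
inline bool
toy_narrowing_example ()
{
  static const toy_predefined_function_abi base
    = { 0, std::bitset<8> ("00001111") };
  toy_function_abi abi = { &base, std::bitset<8> ("00000011") };
  return abi.clobbers_reg_p (1) && !abi.clobbers_reg_p (3);
}
```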
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* Makefile.in (OBJS): Add function-abi.o.
      	(GTFILES): Add function-abi.h.
      	* function-abi.cc: New file.
      	* function-abi.h: Likewise.
      	* emit-rtl.h (rtl_data::abi): New field.
      	* function.c: Include function-abi.h.
      	(prepare_function_start): Initialize crtl->abi.
      	* read-rtl-function.c: Include regs.h and function-abi.h.
      	(read_rtl_function_body): Initialize crtl->abi.
      	(read_rtl_function_body_from_file_range): Likewise.
      	* reginfo.c: Include function-abi.h.
      	(init_reg_sets_1): Initialize default_function_abi.
      	(globalize_reg): Call add_full_reg_clobber for each predefined ABI
      	when making a register global.
      	* target-globals.h (this_target_function_abi_info): Declare.
      	(target_globals::function_abi_info): New field.
      	(restore_target_globals): Copy it.
      	* target-globals.c: Include function-abi.h.
      	(default_target_globals): Initialize the function_abi_info field.
      	(target_globals): Allocate it.
      	(save_target_globals): Free it.
      
      From-SVN: r276307
      Richard Sandiford committed
    • Fix compile-time warning when building the FRV backend by adding missing break… · 0c88d078
      Fix a compile-time warning when building the FRV backend by adding missing break statements to the switches in frv_register_move_cost.
      
      	PR target/85978
      	* config/frv/frv.c (frv_register_move_cost): Add break statements
      	to avoid falling through to the wrong cases.  Tidy code.
      
      From-SVN: r276306
      Nick Clifton committed
    • [AArch64] Strengthen aarch64_hard_regno_call_part_clobbered · 51051f47
      The aarch64_vector_pcs handling in aarch64_hard_regno_call_part_clobbered
      checks whether the mode might be bigger than 16 bytes, since on SVE
      targets the (non-SVE) vector PCS only guarantees that the low 16 bytes
      are preserved.  But for multi-register modes, we should instead test
      whether each single-register part might be bigger than 16 bytes.
      (The size is always divided evenly between registers.)
      
      The testcase uses XImode as an example where this helps.
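      The fix amounts to dividing the mode size by the number of registers before comparing against the 16-byte limit, as this sketch shows (sizes in bytes; XImode is 64 bytes spread over 4 registers):

```cpp
#include <cassert>

// Before: clobbered if the whole mode is bigger than 16 bytes.
// After:  clobbered only if each per-register part is bigger than
//         16 bytes.  (The size is always divided evenly between
//         registers.)
bool
toy_part_clobbered (unsigned int mode_size_bytes, unsigned int nregs)
{
  return mode_size_bytes / nregs > 16;
}
```

      Under the old test, an XImode value (64 bytes, 4 registers) looked part-clobbered even though each 16-byte part is preserved.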
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* config/aarch64/aarch64.c (aarch64_hard_regno_call_part_clobbered):
      	For multi-register modes, test how big each register part is.
      
      gcc/testsuite/
      	* gcc.target/aarch64/torture/simd-abi-8.c: New test.
      
      From-SVN: r276305
      Richard Sandiford committed
    • Remove the iq2000_select_section function from the iq2000 backend - it never provided… · 4baad986
      Remove the iq2000_select_section function from the iq2000 backend - it never provided any useful functionality.
      
      	PR target/59205
      	* config/iq2000/iq2000.c (iq2000_select_section): Delete.
      	(TARGET_ASM_SELECT_SECTION): Remove definition.
      	(TARGET_HAVE_SWITCHABLE_BSS_SECTIONS): Allow definition.
      
      From-SVN: r276304
      Nick Clifton committed
    • Introduce rtx_alloca, alloca_raw_REG and alloca_rtx_fmt_* · 20fa157e
      When passing short-lived fake rtxes to backends in order to test
      their capabilities, it can be beneficial to allocate these rtxes on
      the stack to reduce the load on the GC.
      
      Provide macro counterparts of some of the gen_* functions for that
      purpose.
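      The split between allocation and initialization behind the alloca_* macros can be sketched generically (invented toy names; the real init_raw_REG/alloca_raw_REG live in emit-rtl.c and rtl.h):

```cpp
#include <alloca.h>

struct toy_reg { unsigned int regno; unsigned int mode; };

// init_* does the initialization work shared by the heap (gen_*) and
// stack (alloca_*) paths.
static inline toy_reg *
toy_init_raw_reg (toy_reg *x, unsigned int mode, unsigned int regno)
{
  x->mode = mode;
  x->regno = regno;
  return x;
}

// The alloca_* counterpart of gen_*: storage comes from the caller's
// frame, so the object dies with the caller and never burdens the GC.
// It must be a macro so that alloca runs in the caller's frame.
#define toy_alloca_raw_reg(mode, regno) \
  toy_init_raw_reg ((toy_reg *) alloca (sizeof (toy_reg)), (mode), (regno))
```

      The macro form matters: wrapping alloca in a function would free the storage as soon as that function returned.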
      
      gcc/ChangeLog:
      
      2019-09-30  Ilya Leoshkevich  <iii@linux.ibm.com>
      
      	* emit-rtl.c (init_raw_REG): New function.
      	(gen_raw_REG): Use init_raw_REG.
      	* gengenrtl.c (gendef): Emit init_* functions and alloca_*
      	macros.
      	* rtl.c (rtx_alloc_stat_v): Use rtx_init.
      	* rtl.h (rtx_init): New function.
      	(rtx_alloca): New function.
      	(init_raw_REG): New function.
      	(alloca_raw_REG): New macro.
      
      From-SVN: r276303
      Ilya Leoshkevich committed
    • [C] Print ", ..." rather than ", ..." in diagnostics · 9343bf99
      pp_separate_with inserts a space after the separator, so there's
      no need to add whitespace before "..." as well.
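      The effect can be shown with a toy separator routine (an invented helper mimicking the behaviour; the real pp_separate_with lives in GCC's pretty-printer):

```cpp
#include <string>
#include <vector>

// Mimics pp_separate_with: the separator is always followed by a
// single space, so appending "..." directly yields ", ..." rather
// than the old ",  ..." with two spaces.
std::string
toy_join_params (const std::vector<std::string> &params, bool variadic)
{
  std::string out;
  for (size_t i = 0; i < params.size (); ++i)
    {
      if (i > 0)
        out += ", ";   // separator plus its single trailing space
      out += params[i];
    }
  if (variadic)
    out += ", ...";    // no extra leading space needed
  return out;
}
```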
      
      2019-09-30  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/c-family/
      	* c-pretty-print.c (pp_c_parameter_type_list): Avoid printing
      	two spaces between a comma and "...".
      
      gcc/testsuite/
      	* gcc.dg/Wincompatible-pointer-types-1.c (f1): Expect only one
      	space between the comma and "...".
      
      From-SVN: r276302
      Richard Sandiford committed
    • libgomp_g.h: Include stdint.h instead of gstdint.h. · d7f9ee98
      2019-09-30  Kwok Cheung Yeung  <kcy@codesourcery.com>
      
              * libgomp_g.h: Include stdint.h instead of gstdint.h.
      
      From-SVN: r276301
      Kwok Cheung Yeung committed
    • Add initial support for prefixed/PC-relative addressing. · 26ca7d1b
      2019-09-30  Michael Meissner  <meissner@linux.ibm.com>
      
      	* config/rs6000/predicates.md (pcrel_address): Delete predicate.
      	(pcrel_local_address): Replace pcrel_address predicate, use the
      	new function address_to_insn_form.
      	(pcrel_external_address): Replace with new implementation using
      	address_to_insn_form.
      	(prefixed_mem_operand): Delete predicate which is now unused.
      	(pcrel_external_mem_operand): Delete predicate which is now
      	unused.
      	* config/rs6000/rs6000-protos.h (enum insn_form): New
      	enumeration.
      	(enum non_prefixed): New enumeration.
      	(address_to_insn_form): New declaration.
      	(prefixed_load_p): New declaration.
      	(prefixed_store_p): New declaration.
      	(prefixed_paddi_p): New declaration.
      	(rs6000_asm_output_opcode): New declaration.
      	(rs6000_final_prescan_insn): Move declaration and update calling
      	signature.
      	(address_is_prefixed): New helper inline function.
      	* config/rs6000/rs6000.c (print_operand_address): Check for either
      	PC-relative local symbols or PC-relative external symbols.
      	(rs6000_emit_move): Support loading PC-relative addresses.
      	(mode_supports_prefixed_address_p): Delete, no longer used.
      	(rs6000_prefixed_address_mode_p): Delete, no longer used.
      	(address_to_insn_form): New function to decode an address format.
      	(reg_to_non_prefixed): New function to identify what the
      	non-prefixed memory instruction format is for a register.
      	(prefixed_load_p): New function to identify prefixed loads.
      	(prefixed_store_p): New function to identify prefixed stores.
      	(prefixed_paddi_p): New function to identify prefixed load
      	immediates.
      	(next_insn_prefixed_p): New static state variable.
      	(rs6000_final_prescan_insn): New function to determine if an insn
      	uses a prefixed instruction.
      	(rs6000_asm_output_opcode): New function to emit 'p' in front of a
      	prefixed instruction.
      	* config/rs6000/rs6000.h (FINAL_PRESCAN_INSN): New target hook.
      	(ASM_OUTPUT_OPCODE): New target hook.
      	* config/rs6000/rs6000.md (prefixed): New insn attribute for
      	prefixed instructions.
      	(prefixed_length): New insn attribute for the size of prefixed
      	instructions.
      	(non_prefixed_length): New insn attribute for the size of
      	non-prefixed instructions.
      	(pcrel_local_addr): New insn to load up a local PC-relative
      	address.
      	(pcrel_extern_addr): New insn to load up an external PC-relative
      	address.
      	(mov<mode>_64bit_dm): Split the alternatives for loading 0.0 to a
      	GPR and loading a 128-bit floating point type to a GPR.
      
      From-SVN: r276300
      Michael Meissner committed
    • gimple.c (gimple_get_lhs): For PHIs return the result. · 61362d9d
      2019-09-30  Richard Biener  <rguenther@suse.de>
      
      	* gimple.c (gimple_get_lhs): For PHIs return the result.
      	* tree-vectorizer.h (vectorizable_live_operation): Also get the
      	SLP instance as argument.
      	* tree-vect-loop.c (vect_analyze_loop_operations): Also handle
      	double-reduction PHIs with vectorizable_lc_phi.
      	(vect_analyze_loop_operations): Adjust.
      	(vect_create_epilog_for_reduction): Remove all code not dealing
      	with reduction LC PHI or epilogue generation.
      	(vectorizable_live_operation): Call vect_create_epilog_for_reduction
      	for live stmts of reductions.
      	* tree-vect-stmts.c (vectorizable_condition): When !for_reduction
      	do not handle defs that are not vect_internal_def.
      	(can_vectorize_live_stmts): Adjust.
      	(vect_analyze_stmt): When the vectorized stmt defined a value
      	used on backedges adjust the backedge uses of vectorized PHIs.
      
      From-SVN: r276299
      Richard Biener committed