Commit c3f14d55 by David Malcolm, committed by David Malcolm

re PR bootstrap/62304 (ICE in follow_jumps, find_dead_or_set_registers)

	PR bootstrap/62304

	* gcc/reorg.c (skip_consecutive_labels): Convert return type and
	param back from rtx_insn * to rtx.  Rename param from "label" to
	"label_or_return", reintroducing "label" as an rtx_insn * after
	we've ensured it's not a RETURN.
	(first_active_target_insn): Likewise for return type and param;
	add a checked cast to rtx_insn * once we've ensured "insn" is not
	a RETURN.
	(steal_delay_list_from_target): Convert param "pnew_thread" back
	from rtx_insn ** to rtx *.  Replace use of JUMP_LABEL_AS_INSN
	with JUMP_LABEL.
	(own_thread_p): Convert param "thread" back from an rtx_insn * to
	an rtx.  Introduce local rtx_insn * "thread_insn" with a checked
	cast once we've established we're not dealing with a RETURN,
	renaming subsequent uses of "thread" to "thread_insn".
	(fill_simple_delay_slots): Convert uses of JUMP_LABEL_AS_INSN back
	to JUMP_LABEL.
	(follow_jumps): Convert return type and param "label" from
	rtx_insn * back to rtx.  Move initialization of "value" to after
	the handling for ANY_RETURN_P, adding a checked cast there to
	rtx_insn *.  Convert local rtx_insn * "this_label" to an rtx and
	rename to "this_label_or_return", reintroducing "this_label" as
	an rtx_insn * once we've handled the case where it could be an
	ANY_RETURN_P.
	(fill_slots_from_thread): Rename param "thread" to
	"thread_or_return", converting from an rtx_insn * back to an rtx.
	Reintroduce name "thread" as an rtx_insn * local with a checked
	cast once we've handled the case of it being an ANY_RETURN_P.
	Convert local "new_thread" from an rtx_insn * back to an rtx.
	Add a checked cast when assigning to "trial" from "new_thread".
	Convert use of JUMP_LABEL_AS_INSN back to JUMP_LABEL.  Add a
	checked cast to rtx_insn * from "new_thread" when invoking
	get_label_before.
	(fill_eager_delay_slots): Convert locals "target_label",
	"insn_at_target" from rtx_insn * back to rtx.
	Convert uses of JUMP_LABEL_AS_INSN back to JUMP_LABEL.
	(relax_delay_slots): Convert locals "trial", "target_label" from
	rtx_insn * back to rtx.  Convert uses of JUMP_LABEL_AS_INSN back
	to JUMP_LABEL.  Add a checked cast to rtx_insn * on "trial" when
	invoking update_block.
	(dbr_schedule): Convert use of JUMP_LABEL_AS_INSN back to
	JUMP_LABEL; this removes all JUMP_LABEL_AS_INSN from reorg.c.
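
	All of the reorg.c changes above apply a single idiom, visible in
	the diff below: keep a value typed as plain rtx for as long as it
	may be a RETURN or SIMPLE_RETURN, and narrow it to rtx_insn * with
	a checked cast only once ANY_RETURN_P has been ruled out.  A
	minimal sketch of the idiom, assuming the usual rtl.h environment;
	"walk_past_return_check" is a hypothetical name, not code from
	this patch:

	    static rtx
	    walk_past_return_check (rtx label_or_return)
	    {
	      /* While the value may be RETURN/SIMPLE_RETURN, keep it
	         typed as rtx.  */
	      if (label_or_return == NULL_RTX
	          || ANY_RETURN_P (label_or_return))
	        return label_or_return;

	      /* Only now is it provably an instruction; as_a <> is a
	         checked cast asserting exactly that.  */
	      rtx_insn *label = as_a <rtx_insn *> (label_or_return);
	      return next_active_insn (label);
	    }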

	* resource.h (mark_target_live_regs): Undo erroneous conversion
	of second param of r214693, converting it back from rtx_insn * to
	rtx, since it could be a RETURN.
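
	The second parameter must remain rtx because jump targets arrive
	straight from JUMP_LABEL, which can yield ret_rtx or
	simple_return_rtx instead of an instruction.  A hypothetical call
	site (not taken from this patch) showing that shape:

	    struct resources res;
	    /* JUMP_LABEL (jump) may be ret_rtx or simple_return_rtx,
	       so the second argument cannot be typed rtx_insn *.  */
	    mark_target_live_regs (get_insns (), JUMP_LABEL (jump), &res);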

	* resource.c (find_dead_or_set_registers): Similarly, convert
	param "jump_target" back from an rtx_insn ** to an rtx *, as we
	could be writing back a RETURN.  Rename local rtx_insn * "next" to
	"next_insn", and introduce "lab_or_return" as a local rtx,
	handling the case where JUMP_LABEL (this_jump_insn) is a RETURN.
	(mark_target_live_regs): Undo erroneous conversion
	of second param of r214693, converting it back from rtx_insn * to
	rtx, since it could be a RETURN.  Rename it from "target" to
	"target_maybe_return", reintroducing the name "target" as a local
	rtx_insn * with a checked cast, after we've handled the case of
	ANY_RETURN_P.
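
	The heart of the resource.c fix is the same split, sketched here
	with the variable names the patch uses inside
	find_dead_or_set_registers (enclosing loop omitted):

	    rtx lab_or_return = JUMP_LABEL (this_jump_insn);
	    if (ANY_RETURN_P (lab_or_return))
	      next_insn = NULL;   /* A RETURN ends the scan.  */
	    else
	      next_insn = as_a <rtx_insn *> (lab_or_return);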

From-SVN: r214752
parent 124aeea1
--- a/gcc/reorg.c
+++ b/gcc/reorg.c
@@ -142,13 +142,15 @@ along with GCC; see the file COPYING3.  If not see
 /* Return the last label to mark the same position as LABEL.  Return LABEL
    itself if it is null or any return rtx.  */
 
-static rtx_insn *
-skip_consecutive_labels (rtx_insn *label)
+static rtx
+skip_consecutive_labels (rtx label_or_return)
 {
   rtx_insn *insn;
 
-  if (label && ANY_RETURN_P (label))
-    return label;
+  if (label_or_return && ANY_RETURN_P (label_or_return))
+    return label_or_return;
+
+  rtx_insn *label = as_a <rtx_insn *> (label_or_return);
 
   for (insn = label; insn != 0 && !INSN_P (insn); insn = NEXT_INSN (insn))
     if (LABEL_P (insn))
@@ -229,7 +231,7 @@ static rtx_insn_list *steal_delay_list_from_target (rtx_insn *, rtx,
                                                     struct resources *,
                                                     struct resources *,
                                                     int, int *, int *,
-                                                    rtx_insn **);
+                                                    rtx *);
 static rtx_insn_list *steal_delay_list_from_fallthrough (rtx_insn *, rtx,
                                                          rtx_sequence *,
                                                          rtx_insn_list *,
@@ -239,15 +241,14 @@ static rtx_insn_list *steal_delay_list_from_fallthrough (rtx_insn *, rtx,
                                                          int, int *, int *);
 static void try_merge_delay_insns (rtx, rtx_insn *);
 static rtx redundant_insn (rtx, rtx_insn *, rtx);
-static int own_thread_p (rtx_insn *, rtx, int);
+static int own_thread_p (rtx, rtx, int);
 static void update_block (rtx_insn *, rtx);
 static int reorg_redirect_jump (rtx_insn *, rtx);
 static void update_reg_dead_notes (rtx, rtx);
 static void fix_reg_dead_note (rtx, rtx);
 static void update_reg_unused_notes (rtx, rtx);
 static void fill_simple_delay_slots (int);
-static rtx_insn_list *fill_slots_from_thread (rtx_insn *, rtx,
-                                              rtx_insn *, rtx_insn *,
+static rtx_insn_list *fill_slots_from_thread (rtx_insn *, rtx, rtx, rtx,
                                               int, int, int, int,
                                               int *, rtx_insn_list *);
 static void fill_eager_delay_slots (void);
@@ -257,12 +258,12 @@ static void make_return_insns (rtx_insn *);
 /* A wrapper around next_active_insn which takes care to return ret_rtx
    unchanged.  */
 
-static rtx_insn *
-first_active_target_insn (rtx_insn *insn)
+static rtx
+first_active_target_insn (rtx insn)
 {
   if (ANY_RETURN_P (insn))
     return insn;
-  return next_active_insn (insn);
+  return next_active_insn (as_a <rtx_insn *> (insn));
 }
 
 /* Return true iff INSN is a simplejump, or any kind of return insn.  */
@@ -1089,7 +1090,7 @@ steal_delay_list_from_target (rtx_insn *insn, rtx condition, rtx_sequence *seq,
                               struct resources *needed,
                               struct resources *other_needed,
                               int slots_to_fill, int *pslots_filled,
-                              int *pannul_p, rtx_insn **pnew_thread)
+                              int *pannul_p, rtx *pnew_thread)
 {
   int slots_remaining = slots_to_fill - *pslots_filled;
   int total_slots_filled = *pslots_filled;
@@ -1202,7 +1203,7 @@ steal_delay_list_from_target (rtx_insn *insn, rtx condition, rtx_sequence *seq,
         update_block (seq->insn (i), insn);
 
   /* Show the place to which we will be branching.  */
-  *pnew_thread = first_active_target_insn (JUMP_LABEL_AS_INSN (seq->insn (0)));
+  *pnew_thread = first_active_target_insn (JUMP_LABEL (seq->insn (0)));
 
   /* Add any new insns to the delay list and update the count of the
      number of slots filled.  */
@@ -1715,7 +1716,7 @@ redundant_insn (rtx insn, rtx_insn *target, rtx delay_list)
    finding an active insn, we do not own this thread.  */
 
 static int
-own_thread_p (rtx_insn *thread, rtx label, int allow_fallthrough)
+own_thread_p (rtx thread, rtx label, int allow_fallthrough)
 {
   rtx_insn *active_insn;
   rtx_insn *insn;
@@ -1724,10 +1725,13 @@ own_thread_p (rtx_insn *thread, rtx label, int allow_fallthrough)
   if (thread == 0 || ANY_RETURN_P (thread))
     return 0;
 
-  /* Get the first active insn, or THREAD, if it is an active insn.  */
-  active_insn = next_active_insn (PREV_INSN (thread));
+  /* We have a non-NULL insn.  */
+  rtx_insn *thread_insn = as_a <rtx_insn *> (thread);
+
+  /* Get the first active insn, or THREAD_INSN, if it is an active insn.  */
+  active_insn = next_active_insn (PREV_INSN (thread_insn));
 
-  for (insn = thread; insn != active_insn; insn = NEXT_INSN (insn))
+  for (insn = thread_insn; insn != active_insn; insn = NEXT_INSN (insn))
     if (LABEL_P (insn)
         && (insn != label || LABEL_NUSES (insn) != 1))
       return 0;
@@ -1736,7 +1740,7 @@ own_thread_p (rtx_insn *thread, rtx label, int allow_fallthrough)
     return 1;
 
   /* Ensure that we reach a BARRIER before any insn or label.  */
-  for (insn = prev_nonnote_insn (thread);
+  for (insn = prev_nonnote_insn (thread_insn);
        insn == 0 || !BARRIER_P (insn);
        insn = prev_nonnote_insn (insn))
     if (insn == 0
@@ -2275,8 +2279,8 @@ fill_simple_delay_slots (int non_jumps_p)
             = fill_slots_from_thread (insn, const_true_rtx,
                                       next_active_insn (JUMP_LABEL (insn)),
                                       NULL, 1, 1,
-                                      own_thread_p (JUMP_LABEL_AS_INSN (insn),
-                                                    JUMP_LABEL_AS_INSN (insn), 0),
+                                      own_thread_p (JUMP_LABEL (insn),
+                                                    JUMP_LABEL (insn), 0),
                                       slots_to_fill, &slots_filled,
                                       delay_list);
@@ -2301,17 +2305,19 @@ fill_simple_delay_slots (int non_jumps_p)
    If the returned label is obtained by following a crossing jump,
    set *CROSSING to true, otherwise set it to false.  */
 
-static rtx_insn *
-follow_jumps (rtx_insn *label, rtx jump, bool *crossing)
+static rtx
+follow_jumps (rtx label, rtx jump, bool *crossing)
 {
   rtx_insn *insn;
   rtx_insn *next;
-  rtx_insn *value = label;
   int depth;
 
   *crossing = false;
   if (ANY_RETURN_P (label))
     return label;
+
+  rtx_insn *value = as_a <rtx_insn *> (label);
+
   for (depth = 0;
        (depth < 10
         && (insn = next_active_insn (value)) != 0
@@ -2323,15 +2329,17 @@ follow_jumps (rtx_insn *label, rtx jump, bool *crossing)
             && BARRIER_P (next));
        depth++)
     {
-      rtx_insn *this_label = JUMP_LABEL_AS_INSN (insn);
+      rtx this_label_or_return = JUMP_LABEL (insn);
 
       /* If we have found a cycle, make the insn jump to itself.  */
-      if (this_label == label)
+      if (this_label_or_return == label)
         return label;
 
       /* Cannot follow returns and cannot look through tablejumps.  */
-      if (ANY_RETURN_P (this_label))
-        return this_label;
+      if (ANY_RETURN_P (this_label_or_return))
+        return this_label_or_return;
+
+      rtx_insn *this_label = as_a <rtx_insn *> (this_label_or_return);
       if (NEXT_INSN (this_label)
           && JUMP_TABLE_DATA_P (NEXT_INSN (this_label)))
         break;
@@ -2372,13 +2380,13 @@ follow_jumps (rtx_insn *label, rtx jump, bool *crossing)
    slot.  We then adjust the jump to point after the insns we have taken.  */
 
 static rtx_insn_list *
-fill_slots_from_thread (rtx_insn *insn, rtx condition, rtx_insn *thread,
-                        rtx_insn *opposite_thread, int likely,
+fill_slots_from_thread (rtx_insn *insn, rtx condition, rtx thread_or_return,
+                        rtx opposite_thread, int likely,
                         int thread_if_true,
                         int own_thread, int slots_to_fill,
                         int *pslots_filled, rtx_insn_list *delay_list)
 {
-  rtx_insn *new_thread;
+  rtx new_thread;
   struct resources opposite_needed, set, needed;
   rtx_insn *trial;
   int lose = 0;
@@ -2393,9 +2401,11 @@ fill_slots_from_thread (rtx_insn *insn, rtx condition, rtx_insn *thread,
   /* If our thread is the end of subroutine, we can't get any delay
      insns from that.  */
-  if (thread == NULL_RTX || ANY_RETURN_P (thread))
+  if (thread_or_return == NULL_RTX || ANY_RETURN_P (thread_or_return))
     return delay_list;
 
+  rtx_insn *thread = as_a <rtx_insn *> (thread_or_return);
+
   /* If this is an unconditional branch, nothing is needed at the
      opposite thread.  Otherwise, compute what is needed there.  */
   if (condition == const_true_rtx)
@@ -2716,7 +2726,9 @@ fill_slots_from_thread (rtx_insn *insn, rtx condition, rtx_insn *thread,
           rtx dest;
           rtx src;
 
-          trial = new_thread;
+          /* We know "new_thread" is an insn due to NONJUMP_INSN_P (new_thread)
+             above.  */
+          trial = as_a <rtx_insn *> (new_thread);
           pat = PATTERN (trial);
 
           if (!NONJUMP_INSN_P (trial)
@@ -2797,7 +2809,7 @@ fill_slots_from_thread (rtx_insn *insn, rtx condition, rtx_insn *thread,
           && redirect_with_delay_list_safe_p (insn,
                                               JUMP_LABEL (new_thread),
                                               delay_list))
-        new_thread = follow_jumps (JUMP_LABEL_AS_INSN (new_thread), insn,
+        new_thread = follow_jumps (JUMP_LABEL (new_thread), insn,
                                    &crossing);
 
       if (ANY_RETURN_P (new_thread))
@@ -2805,7 +2817,8 @@ fill_slots_from_thread (rtx_insn *insn, rtx condition, rtx_insn *thread,
       else if (LABEL_P (new_thread))
         label = new_thread;
       else
-        label = get_label_before (new_thread, JUMP_LABEL (insn));
+        label = get_label_before (as_a <rtx_insn *> (new_thread),
                                  JUMP_LABEL (insn));
 
       if (label)
         {
@@ -2838,7 +2851,8 @@ fill_eager_delay_slots (void)
   for (i = 0; i < num_unfilled_slots; i++)
     {
       rtx condition;
-      rtx_insn *target_label, *insn_at_target, *fallthrough_insn;
+      rtx target_label, insn_at_target;
+      rtx_insn *fallthrough_insn;
       rtx_insn_list *delay_list = 0;
       int own_target;
       int own_fallthrough;
@@ -2867,7 +2881,7 @@ fill_eager_delay_slots (void)
         continue;
 
       slots_filled = 0;
-      target_label = JUMP_LABEL_AS_INSN (insn);
+      target_label = JUMP_LABEL (insn);
       condition = get_branch_condition (insn, target_label);
 
       if (condition == 0)
@@ -2911,7 +2925,7 @@ fill_eager_delay_slots (void)
              we might have found a redundant insn which we deleted
              from the thread that was filled.  So we have to recompute
              the next insn at the target.  */
-          target_label = JUMP_LABEL_AS_INSN (insn);
+          target_label = JUMP_LABEL (insn);
           insn_at_target = first_active_target_insn (target_label);
 
           delay_list
@@ -3153,7 +3167,9 @@ relax_delay_slots (rtx_insn *first)
 {
   rtx_insn *insn, *next;
   rtx_sequence *pat;
-  rtx_insn *trial, *delay_insn, *target_label;
+  rtx trial;
+  rtx_insn *delay_insn;
+  rtx target_label;
 
   /* Look at every JUMP_INSN and see if we can improve it.  */
   for (insn = first; insn; insn = next)
@@ -3168,7 +3184,7 @@ relax_delay_slots (rtx_insn *first)
          group of consecutive labels.  */
       if (JUMP_P (insn)
           && (condjump_p (insn) || condjump_in_parallel_p (insn))
-          && !ANY_RETURN_P (target_label = JUMP_LABEL_AS_INSN (insn)))
+          && !ANY_RETURN_P (target_label = JUMP_LABEL (insn)))
         {
           target_label
             = skip_consecutive_labels (follow_jumps (target_label, insn,
@@ -3243,7 +3259,7 @@ relax_delay_slots (rtx_insn *first)
           && 0 > mostly_true_jump (other))
         {
           rtx other_target = JUMP_LABEL (other);
-          target_label = JUMP_LABEL_AS_INSN (insn);
+          target_label = JUMP_LABEL (insn);
 
           if (invert_jump (other, target_label, 0))
             reorg_redirect_jump (insn, other_target);
@@ -3315,7 +3331,7 @@ relax_delay_slots (rtx_insn *first)
           || !(condjump_p (delay_insn) || condjump_in_parallel_p (delay_insn)))
         continue;
 
-      target_label = JUMP_LABEL_AS_INSN (delay_insn);
+      target_label = JUMP_LABEL (delay_insn);
       if (target_label && ANY_RETURN_P (target_label))
         continue;
@@ -3353,8 +3369,10 @@ relax_delay_slots (rtx_insn *first)
 
           if (tmp)
             {
-              /* Insert the special USE insn and update dataflow info.  */
-              update_block (trial, tmp);
+              /* Insert the special USE insn and update dataflow info.
+                 We know "trial" is an insn here as it is the output of
+                 next_real_insn () above.  */
+              update_block (as_a <rtx_insn *> (trial), tmp);
 
               /* Now emit a label before the special USE insn, and
                  redirect our jump to the new label.  */
@@ -3374,7 +3392,7 @@ relax_delay_slots (rtx_insn *first)
           && redundant_insn (XVECEXP (PATTERN (trial), 0, 1), insn, 0))
         {
           rtx_sequence *trial_seq = as_a <rtx_sequence *> (PATTERN (trial));
-          target_label = JUMP_LABEL_AS_INSN (trial_seq->insn (0));
+          target_label = JUMP_LABEL (trial_seq->insn (0));
           if (ANY_RETURN_P (target_label))
             target_label = find_end_label (target_label);
@@ -3716,7 +3734,7 @@ dbr_schedule (rtx_insn *first)
       if (JUMP_P (insn)
          && (condjump_p (insn) || condjump_in_parallel_p (insn))
          && !ANY_RETURN_P (JUMP_LABEL (insn))
-         && ((target = skip_consecutive_labels (JUMP_LABEL_AS_INSN (insn)))
+         && ((target = skip_consecutive_labels (JUMP_LABEL (insn)))
             != JUMP_LABEL (insn)))
        redirect_jump (insn, target, 1);
     }
--- a/gcc/resource.c
+++ b/gcc/resource.c
@@ -81,7 +81,7 @@ static void update_live_status (rtx, const_rtx, void *);
 static int find_basic_block (rtx, int);
 static rtx_insn *next_insn_no_annul (rtx_insn *);
 static rtx_insn *find_dead_or_set_registers (rtx_insn *, struct resources*,
-                                             rtx_insn **, int, struct resources,
+                                             rtx *, int, struct resources,
                                              struct resources);
 
 /* Utility function called from mark_target_live_regs via note_stores.
@@ -422,19 +422,20 @@ mark_referenced_resources (rtx x, struct resources *res,
 
 static rtx_insn *
 find_dead_or_set_registers (rtx_insn *target, struct resources *res,
-                            rtx_insn **jump_target, int jump_count,
+                            rtx *jump_target, int jump_count,
                             struct resources set, struct resources needed)
 {
   HARD_REG_SET scratch;
-  rtx_insn *insn, *next;
+  rtx_insn *insn;
+  rtx_insn *next_insn;
   rtx_insn *jump_insn = 0;
   int i;
 
-  for (insn = target; insn; insn = next)
+  for (insn = target; insn; insn = next_insn)
     {
       rtx_insn *this_jump_insn = insn;
 
-      next = NEXT_INSN (insn);
+      next_insn = NEXT_INSN (insn);
 
       /* If this instruction can throw an exception, then we don't
          know where we might end up next.  That means that we have to
@@ -497,14 +498,16 @@ find_dead_or_set_registers (rtx_insn *target, struct resources *res,
           if (any_uncondjump_p (this_jump_insn)
               || ANY_RETURN_P (PATTERN (this_jump_insn)))
             {
-              next = JUMP_LABEL_AS_INSN (this_jump_insn);
-              if (ANY_RETURN_P (next))
-                next = NULL;
+              rtx lab_or_return = JUMP_LABEL (this_jump_insn);
+              if (ANY_RETURN_P (lab_or_return))
+                next_insn = NULL;
+              else
+                next_insn = as_a <rtx_insn *> (lab_or_return);
               if (jump_insn == 0)
                 {
                   jump_insn = insn;
                   if (jump_target)
-                    *jump_target = JUMP_LABEL_AS_INSN (this_jump_insn);
+                    *jump_target = JUMP_LABEL (this_jump_insn);
                 }
             }
           else if (any_condjump_p (this_jump_insn))
@@ -572,7 +575,7 @@ find_dead_or_set_registers (rtx_insn *target, struct resources *res,
               find_dead_or_set_registers (JUMP_LABEL_AS_INSN (this_jump_insn),
                                           &target_res, 0, jump_count,
                                           target_set, needed);
-              find_dead_or_set_registers (next,
+              find_dead_or_set_registers (next_insn,
                                           &fallthrough_res, 0, jump_count,
                                           set, needed);
               IOR_HARD_REG_SET (fallthrough_res.regs, target_res.regs);
@@ -880,26 +883,30 @@ return_insn_p (const_rtx insn)
    init_resource_info () was invoked before we are called.  */
 
 void
-mark_target_live_regs (rtx_insn *insns, rtx_insn *target, struct resources *res)
+mark_target_live_regs (rtx_insn *insns, rtx target_maybe_return, struct resources *res)
 {
   int b = -1;
   unsigned int i;
   struct target_info *tinfo = NULL;
   rtx_insn *insn;
   rtx jump_insn = 0;
-  rtx_insn *jump_target;
+  rtx jump_target;
   HARD_REG_SET scratch;
   struct resources set, needed;
 
   /* Handle end of function.  */
-  if (target == 0 || ANY_RETURN_P (target))
+  if (target_maybe_return == 0 || ANY_RETURN_P (target_maybe_return))
     {
       *res = end_of_function_needs;
       return;
     }
 
+  /* We've handled the case of RETURN/SIMPLE_RETURN; we should now have an
+     instruction.  */
+  rtx_insn *target = as_a <rtx_insn *> (target_maybe_return);
+
   /* Handle return insn.  */
-  else if (return_insn_p (target))
+  if (return_insn_p (target))
     {
       *res = end_of_function_needs;
       mark_referenced_resources (target, res, false);
--- a/gcc/resource.h
+++ b/gcc/resource.h
@@ -44,7 +44,7 @@ enum mark_resource_type
   MARK_SRC_DEST_CALL = 1
 };
 
-extern void mark_target_live_regs (rtx_insn *, rtx_insn *, struct resources *);
+extern void mark_target_live_regs (rtx_insn *, rtx, struct resources *);
 
 extern void mark_set_resources (rtx, struct resources *, int,
                                 enum mark_resource_type);
 extern void mark_referenced_resources (rtx, struct resources *, bool);