Commit 7499cd25 by Jeff Law, committed by Jeff Law

tree-ssa-threadupdate.c: Replace 8 space sequences with tabs.

        * tree-ssa-threadupdate.c: Replace 8 space sequences with tabs.
        No functional changes.

From-SVN: r223734
parent 6acd8c92
2015-05-26  Jeff Law  <law@redhat.com>

	* tree-ssa-threadupdate.c: Replace 8 space sequences with tabs.
	No functional changes.

2015-05-26  Jan Hubicka  <hubicka@ucw.cz>

	* ipa-polymorphic-call.c

...
@@ -605,25 +605,25 @@ any_remaining_duplicated_blocks (vec<jump_thread_edge *> *path,

   For example, assume we have the following control flow and identified
   jump threading paths:

                A         B         C
                 \        |        /
               Ea \       |Eb     / Ec
                   \      |      /
                    v     v     v
                      J  <-- Joiner
                     / \
                Eoff/   \Eon
                   /     \
                  v       v
                Soff     Son  <--- Normal
                         /\
                      Ed/  \ Ee
                       /    \
                      v      v
                      D      E

            Jump threading paths: A -> J -> Son -> D (path 1)
                                  C -> J -> Son -> E (path 2)

   Note that the control flow could be more complicated:
   - Each jump threading path may have more than one incoming edge.  I.e. A and
@@ -639,22 +639,22 @@ any_remaining_duplicated_blocks (vec<jump_thread_edge *> *path,

   In the above example, after all jump threading is complete, we will
   end up with the following control flow:

                A          B            C
                |          |            |
              Ea|          |Eb          |Ec
                |          |            |
                v          v            v
               Ja          J           Jc
               / \        / \Eon'     / \
          Eona/   \   ---/---\--------   \Eonc
             /     \ /  /     \           \
            v       v  v       v           v
           Sona    Soff       Son         Sonc
             \                /\           /
              \___________   /  \   _____/
                          \ /     \/
                           vv      v
                            D      E

   The main issue to notice here is that when we are processing path 1
   (A->J->Son->D) we need to figure out the outgoing edge weights to
@@ -684,10 +684,10 @@ any_remaining_duplicated_blocks (vec<jump_thread_edge *> *path,

static bool
compute_path_counts (struct redirection_data *rd,
                     ssa_local_info_t *local_info,
                     gcov_type *path_in_count_ptr,
                     gcov_type *path_out_count_ptr,
                     int *path_in_freq_ptr)
{
  edge e = rd->incoming_edges->e;
  vec<jump_thread_edge *> *path = THREAD_PATH (e);
@@ -699,13 +699,13 @@ compute_path_counts (struct redirection_data *rd,

  /* Start by accumulating incoming edge counts to the path's first bb
     into a couple buckets:
       path_in_count: total count of incoming edges that flow into the
                 current path.
       nonpath_count: total count of incoming edges that are not
                 flowing along *any* path.  These are the counts
                 that will still flow along the original path after
                 all path duplication is done by potentially multiple
                 calls to this routine.
     (any other incoming edge counts are for a different jump threading
      path that will be handled by a later call to this routine.)

     To make this easier, start by recording all incoming edges that flow into
@@ -727,23 +727,23 @@ compute_path_counts (struct redirection_data *rd,

      vec<jump_thread_edge *> *ein_path = THREAD_PATH (ein);

      /* Simply check the incoming edge src against the set captured above.  */
      if (ein_path
          && bitmap_bit_p (in_edge_srcs, (*ein_path)[0]->e->src->index))
        {
          /* It is necessary but not sufficient that the last path edges
             are identical.  There may be different paths that share the
             same last path edge in the case where the last edge has a nocopy
             source block.  */
          gcc_assert (ein_path->last ()->e == elast);
          path_in_count += ein->count;
          path_in_freq += EDGE_FREQUENCY (ein);
        }
      else if (!ein_path)
        {
          /* Keep track of the incoming edges that are not on any jump-threading
             path.  These counts will still flow out of original path after all
             jump threading is complete.  */
          nonpath_count += ein->count;
        }
    }

  /* This is needed due to insane incoming frequencies.  */
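Aside (not part of this commit): the bucketing described in the comment above can be illustrated with a minimal standalone sketch. It does not use GCC internals; the simple_edge type and the on_this_path/on_any_path flags are hypothetical stand-ins for THREAD_PATH and the in_edge_srcs bitmap test.

    #include <stdio.h>

    /* Hypothetical stand-in for a CFG edge carrying a profile count.  The two
       flags replace GCC's THREAD_PATH lookup and in_edge_srcs bitmap test.  */
    struct simple_edge
    {
      long count;
      int on_this_path;  /* Edge enters the path currently being duplicated.  */
      int on_any_path;   /* Edge enters some (possibly different) threading path.  */
    };

    /* Accumulate incoming counts into the two buckets: counts entering the
       current path, and counts not on any threading path (these keep flowing
       along the original blocks).  Edges on other paths are left alone; a
       later invocation handles them, as the comment above describes.  */
    static void
    bucket_incoming_counts (const struct simple_edge *preds, int n,
                            long *path_in_count, long *nonpath_count)
    {
      *path_in_count = 0;
      *nonpath_count = 0;
      for (int i = 0; i < n; i++)
        {
          if (preds[i].on_this_path)
            *path_in_count += preds[i].count;
          else if (!preds[i].on_any_path)
            *nonpath_count += preds[i].count;
        }
    }

    int
    main (void)
    {
      /* Three incoming edges: on this path, on no path, on a different path.  */
      struct simple_edge preds[] = { { 100, 1, 1 }, { 40, 0, 0 }, { 25, 0, 1 } };
      long in, non;
      bucket_incoming_counts (preds, 3, &in, &non);
      printf ("path_in_count=%ld nonpath_count=%ld\n", in, non); /* 100 and 40.  */
      return 0;
    }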
@@ -786,31 +786,31 @@ compute_path_counts (struct redirection_data *rd,

      edge epath = (*path)[i]->e;
      gcov_type cur_count = epath->count;
      if ((*path)[i]->type == EDGE_COPY_SRC_JOINER_BLOCK)
        {
          has_joiner = true;
          cur_count = apply_probability (cur_count, onpath_scale);
        }

      /* In the joiner case we need to update nonpath_count for any edges
         coming into the path that will contribute to the count flowing
         into the path successor.  */
      if (has_joiner && epath != elast)
        {
          /* Look for other incoming edges after joiner.  */
          FOR_EACH_EDGE (ein, ei, epath->dest->preds)
            {
              if (ein != epath
                  /* Ignore in edges from blocks we have duplicated for a
                     threading path, which have duplicated edge counts until
                     they are redirected by an invocation of this routine.  */
                  && !bitmap_bit_p (local_info->duplicate_blocks,
                                    ein->src->index))
                nonpath_count += ein->count;
            }
        }
      if (cur_count < path_out_count)
        path_out_count = cur_count;
      if (epath->count < min_path_count)
        min_path_count = epath->count;
    }

  /* We computed path_out_count above assuming that this path targeted
@@ -850,7 +850,7 @@ compute_path_counts (struct redirection_data *rd,

   and the duplicate edge EDUP will have a count of PATH_OUT_COUNT.  */

static void
update_profile (edge epath, edge edup, gcov_type path_in_count,
                gcov_type path_out_count, int path_in_freq)
{
  /* First update the duplicated block's count / frequency.  */
@@ -899,22 +899,22 @@ recompute_probabilities (basic_block bb)

  FOR_EACH_EDGE (esucc, ei, bb->succs)
    {
      if (!bb->count)
        continue;

      /* Prevent overflow computation due to insane profiles.  */
      if (esucc->count < bb->count)
        esucc->probability = GCOV_COMPUTE_SCALE (esucc->count,
                                                 bb->count);
      else
        /* Can happen with missing/guessed probabilities, since we
           may determine that more is flowing along duplicated
           path than joiner succ probabilities allowed.
           Counts and freqs will be insane after jump threading,
           at least make sure probability is sane or we will
           get a flow verification error.
           Not much we can do to make counts/freqs sane without
           redoing the profile estimation.  */
        esucc->probability = REG_BR_PROB_BASE;
    }
}
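For reference only, the probability recomputation in the hunk above boils down to a scaled, capped ratio. The sketch below assumes REG_BR_PROB_BASE-style fixed-point probabilities (base 10000 here) and rounded division in place of GCOV_COMPUTE_SCALE; it approximates the logic rather than reproducing GCC's implementation.

    #include <stdio.h>

    #define PROB_BASE 10000   /* Plays the role of REG_BR_PROB_BASE.  */

    /* Recompute a successor edge probability from counts, capping at
       PROB_BASE when the edge count exceeds the block count, which can
       happen with guessed or stale profiles as the comment above notes.  */
    static int
    recompute_probability (long edge_count, long block_count)
    {
      if (block_count == 0)
        return 0;               /* The caller skips zero-count blocks.  */
      if (edge_count < block_count)
        return (int) ((edge_count * PROB_BASE + block_count / 2) / block_count);
      return PROB_BASE;         /* Clamp to a sane probability.  */
    }

    int
    main (void)
    {
      printf ("%d\n", recompute_probability (30, 100));   /* 3000 == 30%.  */
      printf ("%d\n", recompute_probability (150, 100));  /* Clamped to 10000.  */
      return 0;
    }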
@@ -927,8 +927,8 @@ recompute_probabilities (basic_block bb)

static void
update_joiner_offpath_counts (edge epath, basic_block dup_bb,
                              gcov_type path_in_count,
                              gcov_type path_out_count)
{
  /* Compute the count that currently flows off path from the joiner.
     In other words, the total count of joiner's out edges other than
@@ -943,7 +943,7 @@ update_joiner_offpath_counts (edge epath, basic_block dup_bb,

  FOR_EACH_EDGE (enonpath, ei, epath->src->succs)
    {
      if (enonpath == epath)
        continue;
      total_orig_off_path_count += enonpath->count;
    }
@@ -959,31 +959,31 @@ update_joiner_offpath_counts (edge epath, basic_block dup_bb,

    {
      /* Look for edges going off of the threading path.  */
      if (enonpath == epath)
        continue;

      /* Find the corresponding edge out of the duplicated joiner.  */
      edge enonpathdup = find_edge (dup_bb, enonpath->dest);
      gcc_assert (enonpathdup);

      /* We can't use the original probability of the joiner's out
         edges, since the probabilities of the original branch
         and the duplicated branches may vary after all threading is
         complete.  But apportion the duplicated joiner's off-path
         total edge count computed earlier (total_dup_off_path_count)
         among the duplicated off-path edges based on their original
         ratio to the full off-path count (total_orig_off_path_count).
         */
      int scale = GCOV_COMPUTE_SCALE (enonpath->count,
                                      total_orig_off_path_count);
      /* Give the duplicated offpath edge a portion of the duplicated
         total.  */
      enonpathdup->count = apply_scale (scale,
                                        total_dup_off_path_count);

      /* Now update the original offpath edge count, handling underflow
         due to rounding errors.  */
      enonpath->count -= enonpathdup->count;
      if (enonpath->count < 0)
        enonpath->count = 0;
    }
}
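The apportioning done in the loop above (give each duplicated off-path edge its original share of the off-path total, then subtract that from the original edge and clamp at zero) can be sketched independently. The sketch keeps the same base-10000 fixed-point style as the previous example; orig_counts and dup_counts are hypothetical arrays standing in for the two edge lists.

    #include <stdio.h>

    #define PROB_BASE 10000

    /* Distribute DUP_TOTAL among the duplicated off-path edges in proportion to
       each original edge's share of ORIG_TOTAL, then deduct the distributed
       amount from the original edge, clamping at zero to absorb rounding.  */
    static void
    apportion_offpath_counts (long *orig_counts, long *dup_counts, int n,
                              long orig_total, long dup_total)
    {
      for (int i = 0; i < n; i++)
        {
          long scale = orig_total
                       ? (orig_counts[i] * PROB_BASE + orig_total / 2) / orig_total
                       : PROB_BASE;
          dup_counts[i] = (scale * dup_total + PROB_BASE / 2) / PROB_BASE;
          orig_counts[i] -= dup_counts[i];
          if (orig_counts[i] < 0)     /* Underflow from rounding errors.  */
            orig_counts[i] = 0;
        }
    }

    int
    main (void)
    {
      long orig[] = { 60, 40 };   /* Off-path counts out of the original joiner.  */
      long dup[2];
      apportion_offpath_counts (orig, dup, 2, 100, 30);
      printf ("dup: %ld %ld  orig left: %ld %ld\n", dup[0], dup[1], orig[0], orig[1]);
      return 0;
    }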
@@ -1003,7 +1003,7 @@ estimated_freqs_path (struct redirection_data *rd)

  FOR_EACH_EDGE (ein, ei, e->dest->preds)
    {
      if (ein->count)
        return false;
      non_zero_freq |= ein->src->frequency != 0;
    }
@@ -1011,15 +1011,15 @@ estimated_freqs_path (struct redirection_data *rd)

    {
      edge epath = (*path)[i]->e;
      if (epath->src->count)
        return false;
      non_zero_freq |= epath->src->frequency != 0;

      edge esucc;
      FOR_EACH_EDGE (esucc, ei, epath->src->succs)
        {
          if (esucc->count)
            return false;
          non_zero_freq |= esucc->src->frequency != 0;
        }
    }
  return non_zero_freq;
}
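The predicate above reduces to: bail out as soon as any profile count is non-zero, and otherwise report whether any frequency was non-zero. A minimal sketch of that test, with hypothetical counts/freqs arrays standing in for the edge and block walks:

    #include <stdbool.h>
    #include <stdio.h>

    /* Return true when the path looks like it carries only estimated
       frequencies: every count is zero, yet some frequency is non-zero.
       Any non-zero count means real profile data is present.  */
    static bool
    estimated_freqs_only (const long *counts, const int *freqs, int n)
    {
      bool non_zero_freq = false;
      for (int i = 0; i < n; i++)
        {
          if (counts[i])
            return false;
          non_zero_freq |= freqs[i] != 0;
        }
      return non_zero_freq;
    }

    int
    main (void)
    {
      long counts[] = { 0, 0, 0 };
      int freqs[] = { 0, 120, 45 };
      printf ("%d\n", estimated_freqs_only (counts, freqs, 3)); /* 1: freqs only.  */
      return 0;
    }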
@@ -1045,10 +1045,10 @@ freqs_to_counts_path (struct redirection_data *rd)

  FOR_EACH_EDGE (ein, ei, e->dest->preds)
    {
      /* Scale up the frequency by REG_BR_PROB_BASE, to avoid rounding
         errors applying the probability when the frequencies are very
         small.  */
      ein->count = apply_probability (ein->src->frequency * REG_BR_PROB_BASE,
                                      ein->probability);
    }

  for (unsigned int i = 1; i < path->length (); i++)
@@ -1056,12 +1056,12 @@ freqs_to_counts_path (struct redirection_data *rd)

      edge epath = (*path)[i]->e;
      edge esucc;
      /* Scale up the frequency by REG_BR_PROB_BASE, to avoid rounding
         errors applying the edge probability when the frequencies are very
         small.  */
      epath->src->count = epath->src->frequency * REG_BR_PROB_BASE;
      FOR_EACH_EDGE (esucc, ei, epath->src->succs)
        esucc->count = apply_probability (esucc->src->count,
                                          esucc->probability);
    }
}
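The frequency-to-count conversion above scales each block frequency by the probability base before applying edge probabilities, so that small frequencies do not round to zero. A standalone sketch under the same base-10000 assumption (the frequency and probability values below are illustrative):

    #include <stdio.h>

    #define PROB_BASE 10000

    /* Turn an estimated block frequency into a synthetic count, then derive a
       successor edge count from the edge probability.  Scaling by PROB_BASE
       first keeps the later division from rounding small values to zero.  */
    static long
    freq_to_block_count (int frequency)
    {
      return (long) frequency * PROB_BASE;
    }

    static long
    edge_count_from_probability (long block_count, int probability)
    {
      return (block_count * probability + PROB_BASE / 2) / PROB_BASE;
    }

    int
    main (void)
    {
      long bc = freq_to_block_count (7);                        /* 7 -> 70000.  */
      printf ("%ld\n", edge_count_from_probability (bc, 3300)); /* ~33% -> 23100.  */
      return 0;
    }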
@@ -1088,7 +1088,7 @@ clear_counts_path (struct redirection_data *rd)

    {
      edge epath = (*path)[i]->e;
      FOR_EACH_EDGE (esucc, ei, epath->src->succs)
        esucc->count = 0;
      epath->src->count = 0;
    }

  /* Also need to clear the counts along duplicated path.  */
@@ -1096,9 +1096,9 @@ clear_counts_path (struct redirection_data *rd)

    {
      basic_block dup = rd->dup_blocks[i];
      if (!dup)
        continue;
      FOR_EACH_EDGE (esucc, ei, dup->succs)
        esucc->count = 0;
      dup->count = 0;
    }
}
@@ -1128,7 +1128,7 @@ ssa_fix_duplicate_block_edges (struct redirection_data *rd,

     to see if the paths through RD are using estimated frequencies because
     the routine had zero profile counts.  */
  bool do_freqs_to_counts = (profile_status_for_fn (cfun) != PROFILE_READ
                             || estimated_freqs_path (rd));
  if (do_freqs_to_counts)
    freqs_to_counts_path (rd);
@@ -1139,8 +1139,8 @@ ssa_fix_duplicate_block_edges (struct redirection_data *rd,

     non-joiner case the path_in_count and path_out_count should be the
     same.  */
  bool has_joiner = compute_path_counts (rd, local_info,
                                         &path_in_count, &path_out_count,
                                         &path_in_freq);

  int cur_path_freq = path_in_freq;
  for (unsigned int count = 0, i = 1; i < path->length (); i++)
@@ -1156,7 +1156,7 @@ ssa_fix_duplicate_block_edges (struct redirection_data *rd,

          edge victim;
          edge e2;

          gcc_assert (has_joiner);

          /* This updates the PHIs at the destination of the duplicate
             block.  Pass 0 instead of i if we are threading a path which
@@ -1221,7 +1221,7 @@ ssa_fix_duplicate_block_edges (struct redirection_data *rd,

          /* Next we need to update the counts of the original and duplicated
             edges from the joiner that go off path.  */
          update_joiner_offpath_counts (epath, e2->src, path_in_count,
                                        path_out_count);

          /* Finally, we need to set the probabilities on the duplicated
             edges out of the duplicated joiner (e2->src).  The probabilities
@@ -1255,7 +1255,7 @@ ssa_fix_duplicate_block_edges (struct redirection_data *rd,

                                   cur_path_freq);
        }
      else
        {
          /* No copy case.  In this case we don't have an equivalent block
             on the duplicated thread path to update, but we do need
             to remove the portion of the counts/freqs that were moved
@@ -1274,9 +1274,9 @@ ssa_fix_duplicate_block_edges (struct redirection_data *rd,

        }

      /* Increment the index into the duplicated path when we processed
         a duplicated block.  */
      if ((*path)[i]->type == EDGE_COPY_SRC_JOINER_BLOCK
          || (*path)[i]->type == EDGE_COPY_SRC_BLOCK)
        {
          count++;
        }
@@ -1320,7 +1320,7 @@ ssa_create_duplicates (struct redirection_data **slot,

            || (*path)[i]->type == EDGE_COPY_SRC_JOINER_BLOCK)
          {
            create_block_for_threading ((*path)[i]->e->src, rd, 1,
                                        &local_info->duplicate_blocks);
            break;
          }
      }
@@ -1330,7 +1330,7 @@ ssa_create_duplicates (struct redirection_data **slot,

  if (local_info->template_block == NULL)
    {
      create_block_for_threading ((*path)[1]->e->src, rd, 0,
                                  &local_info->duplicate_blocks);
      local_info->template_block = rd->dup_blocks[0];

      /* We do not create any outgoing edges for the template.  We will
@@ -1340,7 +1340,7 @@ ssa_create_duplicates (struct redirection_data **slot,

  else
    {
      create_block_for_threading (local_info->template_block, rd, 0,
                                  &local_info->duplicate_blocks);

      /* Go ahead and wire up outgoing edges and update PHIs for the duplicate
         block.  */
@@ -2156,20 +2156,20 @@ mark_threaded_blocks (bitmap threaded_blocks)

      vec<jump_thread_edge *> *path = paths[i];

      if ((*path)[1]->type == EDGE_COPY_SRC_JOINER_BLOCK)
        {
          /* Attach the path to the starting edge if none is yet recorded.  */
          if ((*path)[0]->e->aux == NULL)
            {
              (*path)[0]->e->aux = path;
            }
          else
            {
              paths.unordered_remove (i);
              if (dump_file && (dump_flags & TDF_DETAILS))
                dump_jump_thread_path (dump_file, *path, false);
              delete_jump_thread_path (path);
            }
        }
    }

  /* Second, look for paths that have any other jump thread attached to
     them, and either finish converting them or cancel them.  */
@@ -2194,7 +2194,7 @@ mark_threaded_blocks (bitmap threaded_blocks)

              e->aux = NULL;
              paths.unordered_remove (i);
              if (dump_file && (dump_flags & TDF_DETAILS))
                dump_jump_thread_path (dump_file, *path, false);
              delete_jump_thread_path (path);
            }
        }
@@ -2714,13 +2714,13 @@ thread_through_all_blocks (bool may_peel_loop_headers)

            }

          /* Our path is still valid, thread it.  */
          if (e->aux)
            {
              if (thread_block ((*path)[0]->e->dest, false))
                e->aux = NULL;
              else
                {
                  delete_jump_thread_path (path);
                  e->aux = NULL;
                  ei_next (&ei);
                }
...