Commit b5c4ff78 by Jeff Law; committed by Jeff Law

tree-ssa-threadedge.c (thread_through_normal_block): New argument VISITED.

       * tree-ssa-threadedge.c (thread_through_normal_block): New argument VISITED.
        Remove VISITED as a local variable.  When we have a threadable jump, verify
        the destination of the jump has not been visited.
        (thread_across_edge): Allocate VISITED bitmap once at function scope and
        use it throughout.  Make sure to set appropriate bits in VISITED for E (start
        of jump thread path).

        * tree-ssa-threadupdate.c (mark_threaded_blocks): Reject threading through
        a joiner if any edge on the path has a recorded jump thread.

From-SVN: r203895
parent ce0770fc
2013-10-21 Jeff Law <law@redhat.com>
* tree-ssa-threadedge.c (thread_through_normal_block): New argument VISITED.
Remove VISITED as a local variable. When we have a threadable jump, verify
the destination of the jump has not been visited.
(thread_across_edge): Allocate VISITED bitmap once at function scope and
use it throughout. Make sure to set appropriate bits in VISITED for E (start
of jump thread path).
* tree-ssa-threadupdate.c (mark_threaded_blocks): Reject threading through
a joiner if any edge on the path has a recorded jump thread.
2013-10-21 Ian Lance Taylor <iant@google.com>
* doc/invoke.texi (Optimize Options): For -fno-toplevel-reorder,
@@ -883,7 +883,8 @@ thread_through_normal_block (edge e,
 			     bool handle_dominating_asserts,
 			     vec<tree> *stack,
 			     tree (*simplify) (gimple, gimple),
-			     vec<jump_thread_edge *> *path)
+			     vec<jump_thread_edge *> *path,
+			     bitmap visited)
 {
   /* If E is a backedge, then we want to verify that the COND_EXPR,
      SWITCH_EXPR or GOTO_EXPR at the end of e->dest is not affected
@@ -922,11 +923,10 @@ thread_through_normal_block (edge e,
     {
       edge taken_edge = find_taken_edge (e->dest, cond);
       basic_block dest = (taken_edge ? taken_edge->dest : NULL);
-      bitmap visited;

       /* DEST could be NULL for a computed jump to an absolute
	 address.  */
-      if (dest == NULL || dest == e->dest)
+      if (dest == NULL || dest == e->dest || bitmap_bit_p (visited, dest->index))
	return false;

       jump_thread_edge *x
@@ -944,7 +944,6 @@ thread_through_normal_block (edge e,
 	{
 	  /* We don't want to thread back to a block we have already
	     visited.  This may be overly conservative.  */
-	  visited = BITMAP_ALLOC (NULL);
 	  bitmap_set_bit (visited, dest->index);
 	  bitmap_set_bit (visited, e->dest->index);
 	  thread_around_empty_blocks (taken_edge,
@@ -953,7 +952,6 @@ thread_through_normal_block (edge e,
				      simplify,
				      visited,
				      path);
-	  BITMAP_FREE (visited);
 	}
       return true;
     }
@@ -995,15 +993,21 @@ thread_across_edge (gimple dummy_cond,
		    vec<tree> *stack,
		    tree (*simplify) (gimple, gimple))
 {
+  bitmap visited = BITMAP_ALLOC (NULL);
+
   stmt_count = 0;

   vec<jump_thread_edge *> *path = new vec<jump_thread_edge *> ();
+  bitmap_clear (visited);
+  bitmap_set_bit (visited, e->src->index);
+  bitmap_set_bit (visited, e->dest->index);
   if (thread_through_normal_block (e, dummy_cond, handle_dominating_asserts,
-				   stack, simplify, path))
+				   stack, simplify, path, visited))
     {
       propagate_threaded_block_debug_into (path->last ()->e->dest,
					   e->dest);
       remove_temporary_equivalences (stack);
+      BITMAP_FREE (visited);
       register_jump_thread (path);
       return;
     }
@@ -1030,7 +1034,6 @@ thread_across_edge (gimple dummy_cond,
     edge taken_edge;
     edge_iterator ei;
     bool found;
-    bitmap visited = BITMAP_ALLOC (NULL);

     /* Look at each successor of E->dest to see if we can thread through it.  */
     FOR_EACH_EDGE (taken_edge, ei, e->dest->succs)
@@ -1244,7 +1244,7 @@ mark_threaded_blocks (bitmap threaded_blocks)
      When this occurs ignore the jump thread request with the joiner
      block.  It's totally subsumed by the simpler jump thread request.

-     This results in less block copying, simpler CFGs.  More improtantly,
+     This results in less block copying, simpler CFGs.  More importantly,
      when we duplicate the joiner block, B, in this case we will create
      a new threading opportunity that we wouldn't be able to optimize
      until the next jump threading iteration.
@@ -1263,20 +1263,30 @@ mark_threaded_blocks (bitmap threaded_blocks)
 	}
     }

-  /* Now iterate again, converting cases where we threaded through
-     a joiner block, but ignoring those where we have already
-     threaded through the joiner block.  */
+  /* Now iterate again, converting cases where we want to thread
+     through a joiner block, but only if no other edge on the path
+     already has a jump thread attached to it.  */
   for (i = 0; i < paths.length (); i++)
     {
       vec<jump_thread_edge *> *path = paths[i];

-      if ((*path)[1]->type == EDGE_COPY_SRC_JOINER_BLOCK
-	  && (*path)[0]->e->aux == NULL)
+      if ((*path)[1]->type == EDGE_COPY_SRC_JOINER_BLOCK)
 	{
-	  edge e = (*path)[0]->e;
-	  e->aux = path;
-	  bitmap_set_bit (tmp, e->dest->index);
+	  unsigned int j;
+
+	  for (j = 0; j < path->length (); j++)
+	    if ((*path)[j]->e->aux != NULL)
+	      break;
+
+	  /* If we iterated through the entire path without exiting the loop,
+	     then we are good to go, attach the path to the starting edge.  */
+	  if (j == path->length ())
+	    {
+	      edge e = (*path)[0]->e;
+	      e->aux = path;
+	      bitmap_set_bit (tmp, e->dest->index);
+	    }
 	}
     }