Commit 2090d6a0 by Jeff Law, committed by Jeff Law

tree-vrp.c (find_conditional_asserts): Update comments.


2006-02-07  Jeff Law  <law@redhat.com>

	* tree-vrp.c (find_conditional_asserts): Update comments.
	(simplify_stmt_for_jump_threading): New.
	(identify_jump_threads, finalize_jump_threads): New.
	(vrp_finalize): Call identify_jump_threads.
	(execute_vrp): Call finalize_jump_threads.
	* tree-ssa-dom.c (struct opt_stats_d): Remove num_iterations field.
	(vrp_element, vrp_data, vrp_element_p): Remove.
	(vrp_hash_elt, vrp_variables_stack): Remove.
	(vrp_hash, vrp_eq, record_range): Remove.
	(simplify_cond_and_lookup_avail_expr): Remove.
	(extract_range_from_cond): Remove.
	(thread_across_edge): Relocated into tree-ssa-threadedge.c.
	(simplify_stmt_for_jump_threading): New.
	(dom_thread_across_edge): New wrapper.
	(tree_ssa_dominator_optimize): No longer initialize or
	finalize any of the VRP datastructures.  Remove iteration
	step and simplify as a result of removal of iteration step.
	(pass_dominator): Perform a cfg cleanup after DOM.
	(dom_opt_finalize_block): Use the new common routines
	for threading jumps.  Simplify stack management slightly.
	No longer need to unwind VRP state.
	(record_equivalences_from_incoming_edge): No longer record
	VRP information.
	(eliminate_redundant_computations): No longer call
	simplify_cond_and_lookup_avail_expr.
	* tree-flow.h (potentially_threadable_block): Prototype.
	(thread_across_edge): Likewise.
	* Makefile.in (OBJS-common): Add tree-ssa-threadedge.o.
	(tree-ssa-threadedge.o): Add dependencies.
	* tree-ssa-threadedge.c: New file.
	* passes.c (init_optimization_passes): Merge PHIs before
	calling VRP.  Run VRP again late in the SSA optimization pipeline.


	* gcc.dg/tree-ssa/vrp01.c: Update dumpfile names now that we have
	multiple VRP passes.
	* gcc.dg/tree-ssa/vrp09.c: Likewise.
	* gcc.dg/tree-ssa/vrp18.c: Likewise.
	* gcc.dg/tree-ssa/pr21582.c: Likewise.
	* gcc.dg/tree-ssa/pr20657.c: Likewise.
	* gcc.dg/tree-ssa/pr21001.c: Likewise.
	* gcc.dg/tree-ssa/vrp02.c: Likewise.
	* gcc.dg/tree-ssa/vrp11.c: Likewise.
	* gcc.dg/tree-ssa/pr14341.c: Likewise.
	* gcc.dg/tree-ssa/vrp19.c: Likewise.
	* gcc.dg/tree-ssa/vrp20.c: Likewise.
	* gcc.dg/tree-ssa/vrp03.c: Likewise.
	* gcc.dg/tree-ssa/pr21086.c: Likewise.
	* gcc.dg/tree-ssa/pr21959.c: Likewise.
	* gcc.dg/tree-ssa/vrp21.c: Likewise.
	* gcc.dg/tree-ssa/vrp04.c: Likewise.
	* gcc.dg/tree-ssa/pr25485.c: Likewise.
	* gcc.dg/tree-ssa/pr22026.c: Likewise.
	* gcc.dg/tree-ssa/vrp22.c: Likewise.
	* gcc.dg/tree-ssa/vrp05.c: Likewise.
	* gcc.dg/tree-ssa/20030807-10.c: Likewise.
	* gcc.dg/tree-ssa/pr20701.c: Likewise.
	* gcc.dg/tree-ssa/vrp23.c: Likewise.
	* gcc.dg/tree-ssa/vrp06.c: Likewise.
	* gcc.dg/tree-ssa/pr22117.c: Likewise.
	* gcc.dg/tree-ssa/pr20702.c: Likewise.
	* gcc.dg/tree-ssa/vrp15.c: Likewise.
	* gcc.dg/tree-ssa/pr21090.c: Likewise.
	* gcc.dg/tree-ssa/pr21294.c: Likewise.
	* gcc.dg/tree-ssa/vrp24.c: Likewise.
	* gcc.dg/tree-ssa/vrp07.c: Likewise.
	* gcc.dg/tree-ssa/pr21563.c: Likewise.
	* gcc.dg/tree-ssa/pr25382.c: Likewise.
	* gcc.dg/tree-ssa/vrp16.c: Likewise.
	* gcc.dg/tree-ssa/vrp25.c: Likewise.
	* gcc.dg/tree-ssa/vrp08.c: Likewise.
	* gcc.dg/tree-ssa/20030807-6.c: Likewise.
	* gcc.dg/tree-ssa/vrp17.c: Likewise.
	* gcc.dg/tree-ssa/pr21458.c: Likewise.
	* g++.dg/tree-ssa/pr18178.C: Likewise.

From-SVN: r110705

--- Makefile.in
+++ Makefile.in
@@ -961,7 +961,7 @@ OBJS-common = \
 tree-ssa-dom.o domwalk.o tree-tailcall.o gimple-low.o tree-iterator.o \
 omp-low.o tree-phinodes.o tree-ssanames.o tree-sra.o tree-complex.o \
 tree-vect-generic.o tree-ssa-loop.o tree-ssa-loop-niter.o \
-tree-ssa-loop-manip.o tree-ssa-threadupdate.o \
+tree-ssa-loop-manip.o tree-ssa-threadupdate.o tree-ssa-threadedge.o \
 tree-vectorizer.o tree-vect-analyze.o tree-vect-transform.o \
 tree-vect-patterns.o \
 tree-ssa-loop-ivcanon.o tree-ssa-propagate.o tree-ssa-address.o \
@@ -1860,6 +1860,10 @@ tree-ssa-uncprop.o : tree-ssa-uncprop.c $(TREE_FLOW_H) $(CONFIG_H) \
   $(DIAGNOSTIC_H) $(FUNCTION_H) $(TIMEVAR_H) $(TM_H) coretypes.h \
   $(TREE_DUMP_H) $(BASIC_BLOCK_H) domwalk.h real.h tree-pass.h $(FLAGS_H) \
   langhooks.h tree-ssa-propagate.h
+tree-ssa-threadedge.o : tree-ssa-threadedge.c $(TREE_FLOW_H) $(CONFIG_H) \
+   $(SYSTEM_H) $(RTL_H) $(TREE_H) $(TM_P_H) $(EXPR_H) $(GGC_H) output.h \
+   $(DIAGNOSTIC_H) $(FUNCTION_H) $(TM_H) coretypes.h $(TREE_DUMP_H) \
+   $(BASIC_BLOCK_H) $(FLAGS_H) tree-pass.h $(CFGLOOP_H)
 tree-ssa-threadupdate.o : tree-ssa-threadupdate.c $(TREE_FLOW_H) $(CONFIG_H) \
   $(SYSTEM_H) $(RTL_H) $(TREE_H) $(TM_P_H) $(EXPR_H) $(GGC_H) output.h \
   $(DIAGNOSTIC_H) $(FUNCTION_H) $(TM_H) coretypes.h $(TREE_DUMP_H) \

--- passes.c
+++ passes.c
 /* Top level of GCC compilers (cc1, cc1plus, etc.)
    Copyright (C) 1987, 1988, 1989, 1992, 1993, 1994, 1995, 1996, 1997, 1998,
-   1999, 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
+   1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
 
 This file is part of GCC.
@@ -507,9 +507,9 @@ init_optimization_passes (void)
   NEXT_PASS (pass_dce);
   NEXT_PASS (pass_forwprop);
   NEXT_PASS (pass_copy_prop);
+  NEXT_PASS (pass_merge_phi);
   NEXT_PASS (pass_vrp);
   NEXT_PASS (pass_dce);
-  NEXT_PASS (pass_merge_phi);
   NEXT_PASS (pass_dominator);
   /* The only copy propagation opportunities left after DOM
@@ -560,6 +560,7 @@ init_optimization_passes (void)
   NEXT_PASS (pass_tree_loop);
   NEXT_PASS (pass_cse_reciprocals);
   NEXT_PASS (pass_reassoc);
+  NEXT_PASS (pass_vrp);
   NEXT_PASS (pass_dominator);
   /* The only copy propagation opportunities left after DOM

In the testsuite, every affected test makes the same mechanical update: the
dump produced by the first VRP pass is now named "vrp1" rather than "vrp",
so "-fdump-tree-vrp" (or "-fdump-tree-vrp-details") in dg-options becomes
"-fdump-tree-vrp1" (or "-fdump-tree-vrp1-details"), and the dump name passed
to scan-tree-dump, scan-tree-dump-times, and cleanup-tree-dump becomes
"vrp1".  The scan patterns and expected counts themselves are unchanged.
Each test's change follows this pattern:

-/* { dg-options "-O2 -fdump-tree-vrp" } */
+/* { dg-options "-O2 -fdump-tree-vrp1" } */
 ...
-/* { dg-final { scan-tree-dump-times "Folding predicate" 1 "vrp" } } */
-/* { dg-final { cleanup-tree-dump "vrp" } } */
+/* { dg-final { scan-tree-dump-times "Folding predicate" 1 "vrp1" } } */
+/* { dg-final { cleanup-tree-dump "vrp1" } } */

--- tree-flow.h
+++ tree-flow.h
@@ -749,6 +749,11 @@ tree expand_simple_operations (tree);
 void substitute_in_loop_info (struct loop *, tree, tree);
 edge single_dom_exit (struct loop *);
 
+/* In tree-ssa-threadedge.c */
+extern bool potentially_threadable_block (basic_block);
+extern void thread_across_edge (tree, edge, bool,
+				VEC(tree, heap) **, tree (*) (tree));
+
 /* In tree-ssa-loop-im.c */
 /* The possibilities of statement movement.  */
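
The last parameter of thread_across_edge is a per-pass statement simplifier.
As a rough, hedged sketch only (the bodies of the real callbacks, the
simplify_stmt_for_jump_threading functions added to tree-ssa-dom.c and
tree-vrp.c per the ChangeLog, are not shown in this excerpt), a caller is
expected to supply something shaped like the function below, returning an
SSA_NAME or a GIMPLE invariant when it can simplify the statement and
NULL_TREE otherwise:

/* Hypothetical sketch, not code from this patch: the kind of callback
   thread_across_edge expects.  DOM would consult its available-expression
   table here; VRP would instead consult its computed value ranges.  */
static tree
example_simplify_for_jump_threading (tree stmt)
{
  /* Return an SSA_NAME or a GIMPLE invariant STMT is known to compute
     along the path being threaded, or NULL_TREE if nothing is known.  */
  return lookup_avail_expr (stmt, /*insert=*/false);
}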

--- tree-ssa-dom.c
+++ tree-ssa-dom.c
 /* SSA Dominator optimizations for trees
-   Copyright (C) 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
+   Copyright (C) 2001, 2002, 2003, 2004, 2005, 2006
+   Free Software Foundation, Inc.
    Contributed by Diego Novillo <dnovillo@redhat.com>
 
 This file is part of GCC.
@@ -160,92 +161,10 @@ struct opt_stats_d
   long num_re;
   long num_const_prop;
   long num_copy_prop;
-  long num_iterations;
 };
 
 static struct opt_stats_d opt_stats;
/* Value range propagation record. Each time we encounter a conditional
of the form SSA_NAME COND CONST we create a new vrp_element to record
how the condition affects the possible values SSA_NAME may have.
Each record contains the condition tested (COND), and the range of
values the variable may legitimately have if COND is true. Note the
range of values may be a smaller range than COND specifies if we have
recorded other ranges for this variable. Each record also contains the
block in which the range was recorded for invalidation purposes.
Note that the current known range is computed lazily. This allows us
to avoid the overhead of computing ranges which are never queried.
When we encounter a conditional, we look for records which constrain
the SSA_NAME used in the condition. In some cases those records allow
us to determine the condition's result at compile time. In other cases
they may allow us to simplify the condition.
We also use value ranges to do things like transform signed div/mod
operations into unsigned div/mod or to simplify ABS_EXPRs.
Simple experiments have shown these optimizations to not be all that
useful on switch statements (much to my surprise). So switch statement
optimizations are not performed.
Note carefully we do not propagate information through each statement
in the block. i.e., if we know variable X has a value defined of
[0, 25] and we encounter Y = X + 1, we do not track a value range
for Y (which would be [1, 26] if we cared). Similarly we do not
constrain values as we encounter narrowing typecasts, etc. */
struct vrp_element
{
/* The highest and lowest values the variable in COND may contain when
COND is true. Note this may not necessarily be the same values
tested by COND if the same variable was used in earlier conditionals.
Note this is computed lazily and thus can be NULL indicating that
the values have not been computed yet. */
tree low;
tree high;
/* The actual conditional we recorded. This is needed since we compute
ranges lazily. */
tree cond;
/* The basic block where this record was created. We use this to determine
when to remove records. */
basic_block bb;
};
/* A hash table holding value range records (VRP_ELEMENTs) for a given
SSA_NAME. We used to use a varray indexed by SSA_NAME_VERSION, but
that gets awful wasteful, particularly since the density objects
with useful information is very low. */
static htab_t vrp_data;
typedef struct vrp_element *vrp_element_p;
DEF_VEC_P(vrp_element_p);
DEF_VEC_ALLOC_P(vrp_element_p,heap);
/* An entry in the VRP_DATA hash table. We record the variable and a
varray of VRP_ELEMENT records associated with that variable. */
struct vrp_hash_elt
{
tree var;
VEC(vrp_element_p,heap) *records;
};
/* Array of variables which have their values constrained by operations
in this basic block. We use this during finalization to know
which variables need their VRP data updated. */
/* Stack of SSA_NAMEs which had their values constrained by operations
in this basic block. During finalization of this block we use this
list to determine which variables need their VRP data updated.
A NULL entry marks the end of the SSA_NAMEs associated with this block. */
static VEC(tree,heap) *vrp_variables_stack;
 struct eq_expr_value
 {
   tree src;
@@ -257,8 +176,6 @@ static void optimize_stmt (struct dom_walk_data *,
			    basic_block bb,
			    block_stmt_iterator);
 static tree lookup_avail_expr (tree, bool);
-static hashval_t vrp_hash (const void *);
-static int vrp_eq (const void *, const void *);
 static hashval_t avail_expr_hash (const void *);
 static hashval_t real_avail_expr_hash (const void *);
 static int avail_expr_eq (const void *, const void *);
@@ -266,14 +183,11 @@ static void htab_statistics (FILE *, htab_t);
 static void record_cond (tree, tree);
 static void record_const_or_copy (tree, tree);
 static void record_equality (tree, tree);
-static tree simplify_cond_and_lookup_avail_expr (tree);
-static void record_range (tree, basic_block);
-static bool extract_range_from_cond (tree, tree *, tree *, int *);
 static void record_equivalences_from_phis (basic_block);
 static void record_equivalences_from_incoming_edge (basic_block);
 static bool eliminate_redundant_computations (tree);
 static void record_equivalences_from_stmt (tree, int, stmt_ann_t);
-static void thread_across_edge (struct dom_walk_data *, edge);
+static void dom_thread_across_edge (struct dom_walk_data *, edge);
 static void dom_opt_finalize_block (struct dom_walk_data *, basic_block);
 static void dom_opt_initialize_block (struct dom_walk_data *, basic_block);
 static void propagate_to_outgoing_edges (struct dom_walk_data *, basic_block);
@@ -328,18 +242,6 @@ free_all_edge_infos (void)
     }
 }
/* Free an instance of vrp_hash_elt. */
static void
vrp_free (void *data)
{
struct vrp_hash_elt *elt = (struct vrp_hash_elt *) data;
struct VEC(vrp_element_p,heap) **vrp_elt = &elt->records;
VEC_free (vrp_element_p, heap, *vrp_elt);
free (elt);
}
 /* Jump threading, redundancy elimination and const/copy propagation.
 
    This pass may expose new symbols that need to be renamed into SSA.  For
@@ -357,12 +259,9 @@ tree_ssa_dominator_optimize (void)
   /* Create our hash tables.  */
   avail_exprs = htab_create (1024, real_avail_expr_hash, avail_expr_eq, free);
-  vrp_data = htab_create (ceil_log2 (num_ssa_names), vrp_hash, vrp_eq,
-			  vrp_free);
   avail_exprs_stack = VEC_alloc (tree, heap, 20);
   const_and_copies_stack = VEC_alloc (tree, heap, 20);
   nonzero_vars_stack = VEC_alloc (tree, heap, 20);
-  vrp_variables_stack = VEC_alloc (tree, heap, 20);
   stmts_to_rescan = VEC_alloc (tree, heap, 20);
   nonzero_vars = BITMAP_ALLOC (NULL);
   need_eh_cleanup = BITMAP_ALLOC (NULL);
@@ -401,16 +300,6 @@ tree_ssa_dominator_optimize (void)
   cleanup_tree_cfg ();
   calculate_dominance_info (CDI_DOMINATORS);
/* If we prove certain blocks are unreachable, then we want to
repeat the dominator optimization process as PHI nodes may
have turned into copies which allows better propagation of
values. So we repeat until we do not identify any new unreachable
blocks. */
do
{
/* Optimize the dominator tree. */
cfg_altered = false;
   /* We need accurate information regarding back edges in the CFG
      for jump threading.  */
   mark_dfs_back_edges ();
@@ -424,11 +313,9 @@ tree_ssa_dominator_optimize (void)
     FOR_EACH_BB (bb)
       {
	for (bsi = bsi_start (bb); !bsi_end_p (bsi); bsi_next (&bsi))
-	  {
	    update_stmt_if_modified (bsi_stmt (bsi));
-	  }
       }
   }
   /* If we exposed any new variables, go ahead and put them into
      SSA form now, before we handle jump threading.  This simplifies
@@ -453,41 +340,8 @@ tree_ssa_dominator_optimize (void)
   if (cfg_altered)
     free_dominance_info (CDI_DOMINATORS);
/* Only iterate if we threaded jumps AND the CFG cleanup did
something interesting. Other cases generate far fewer
optimization opportunities and thus are not worth another
full DOM iteration. */
cfg_altered &= cleanup_tree_cfg ();
if (rediscover_loops_after_threading)
{
/* Rerun basic loop analysis to discover any newly
created loops and update the set of exit edges. */
rediscover_loops_after_threading = false;
flow_loops_find (&loops_info);
mark_loop_exit_edges (&loops_info);
flow_loops_free (&loops_info);
/* Remove any forwarder blocks inserted by loop
header canonicalization. */
cleanup_tree_cfg ();
}
calculate_dominance_info (CDI_DOMINATORS);
update_ssa (TODO_update_ssa);
/* Reinitialize the various tables. */
bitmap_clear (nonzero_vars);
htab_empty (avail_exprs);
htab_empty (vrp_data);
   /* Finally, remove everything except invariants in SSA_NAME_VALUE.
-     This must be done before we iterate as we might have a
-     reference to an SSA_NAME which was removed by the call to
-     update_ssa.
 
      Long term we will be able to let everything in SSA_NAME_VALUE
      persist.  However, for now, we know this is the safe thing to do.  */
   for (i = 0; i < num_ssa_names; i++)
@@ -503,21 +357,12 @@ tree_ssa_dominator_optimize (void)
       SSA_NAME_VALUE (name) = NULL;
     }
-      opt_stats.num_iterations++;
-    }
-  while (optimize > 1 && cfg_altered);
 
   /* Debugging dumps.  */
   if (dump_file && (dump_flags & TDF_STATS))
     dump_dominator_optimization_stats (dump_file);
 
-  /* We emptied the hash table earlier, now delete it completely.  */
+  /* Delete our main hashtable.  */
   htab_delete (avail_exprs);
-  htab_delete (vrp_data);
-
-  /* It is not necessary to clear CURRDEFS, REDIRECTION_EDGES, VRP_DATA,
-     CONST_AND_COPIES, and NONZERO_VARS as they all get cleared at the bottom
-     of the do-while loop above.  */
 
   /* And finalize the dominator walker.  */
   fini_walk_dominator_tree (&walk_data);
@@ -529,7 +374,6 @@ tree_ssa_dominator_optimize (void)
   VEC_free (tree, heap, avail_exprs_stack);
   VEC_free (tree, heap, const_and_copies_stack);
   VEC_free (tree, heap, nonzero_vars_stack);
-  VEC_free (tree, heap, vrp_variables_stack);
   VEC_free (tree, heap, stmts_to_rescan);
 }
@@ -554,6 +398,7 @@ struct tree_opt_pass pass_dominator =
   0, /* todo_flags_start */
   TODO_dump_func
     | TODO_update_ssa
+    | TODO_cleanup_cfg
     | TODO_verify_ssa, /* todo_flags_finish */
   0 /* letter */
 };
@@ -605,321 +450,6 @@ canonicalize_comparison (tree condstmt)
	    }
	}
     }
/* We are exiting E->src, see if E->dest ends with a conditional
jump which has a known value when reached via E.
Special care is necessary if E is a back edge in the CFG as we
will have already recorded equivalences for E->dest into our
various tables, including the result of the conditional at
the end of E->dest. Threading opportunities are severely
limited in that case to avoid short-circuiting the loop
incorrectly.
Note it is quite common for the first block inside a loop to
end with a conditional which is either always true or always
false when reached via the loop backedge. Thus we do not want
to blindly disable threading across a loop backedge. */
static void
thread_across_edge (struct dom_walk_data *walk_data, edge e)
{
block_stmt_iterator bsi;
tree stmt = NULL;
tree phi;
int stmt_count = 0;
int max_stmt_count;
/* If E->dest does not end with a conditional, then there is
nothing to do. */
bsi = bsi_last (e->dest);
if (bsi_end_p (bsi)
|| ! bsi_stmt (bsi)
|| (TREE_CODE (bsi_stmt (bsi)) != COND_EXPR
&& TREE_CODE (bsi_stmt (bsi)) != GOTO_EXPR
&& TREE_CODE (bsi_stmt (bsi)) != SWITCH_EXPR))
return;
/* The basic idea here is to use whatever knowledge we have
from our dominator walk to simplify statements in E->dest,
with the ultimate goal being to simplify the conditional
at the end of E->dest.
Note that we must undo any changes we make to the underlying
statements as the simplifications we are making are control
flow sensitive (ie, the simplifications are valid when we
traverse E, but may not be valid on other paths to E->dest. */
/* Each PHI creates a temporary equivalence, record them. Again
these are context sensitive equivalences and will be removed
by our caller. */
for (phi = phi_nodes (e->dest); phi; phi = PHI_CHAIN (phi))
{
tree src = PHI_ARG_DEF_FROM_EDGE (phi, e);
tree dst = PHI_RESULT (phi);
/* Do not include virtual PHIs in our statement count as
they never generate code. */
if (is_gimple_reg (dst))
stmt_count++;
/* If the desired argument is not the same as this PHI's result
and it is set by a PHI in E->dest, then we can not thread
through E->dest. */
if (src != dst
&& TREE_CODE (src) == SSA_NAME
&& TREE_CODE (SSA_NAME_DEF_STMT (src)) == PHI_NODE
&& bb_for_stmt (SSA_NAME_DEF_STMT (src)) == e->dest)
return;
record_const_or_copy (dst, src);
}
/* Try to simplify each statement in E->dest, ultimately leading to
a simplification of the COND_EXPR at the end of E->dest.
We might consider marking just those statements which ultimately
feed the COND_EXPR. It's not clear if the overhead of bookkeeping
would be recovered by trying to simplify fewer statements.
If we are able to simplify a statement into the form
SSA_NAME = (SSA_NAME | gimple invariant), then we can record
a context sensitive equivalency which may help us simplify
later statements in E->dest.
Failure to simplify into the form above merely means that the
statement provides no equivalences to help simplify later
statements. This does not prevent threading through E->dest. */
max_stmt_count = PARAM_VALUE (PARAM_MAX_JUMP_THREAD_DUPLICATION_STMTS);
for (bsi = bsi_start (e->dest); ! bsi_end_p (bsi); bsi_next (&bsi))
{
tree cached_lhs = NULL;
stmt = bsi_stmt (bsi);
/* Ignore empty statements and labels. */
if (IS_EMPTY_STMT (stmt) || TREE_CODE (stmt) == LABEL_EXPR)
continue;
/* If duplicating this block is going to cause too much code
expansion, then do not thread through this block. */
stmt_count++;
if (stmt_count > max_stmt_count)
return;
/* Safely handle threading across loop backedges. This is
over conservative, but still allows us to capture the
majority of the cases where we can thread across a loop
backedge. */
if ((e->flags & EDGE_DFS_BACK) != 0
&& TREE_CODE (stmt) != COND_EXPR
&& TREE_CODE (stmt) != SWITCH_EXPR)
return;
/* If the statement has volatile operands, then we assume we
can not thread through this block. This is overly
conservative in some ways. */
if (TREE_CODE (stmt) == ASM_EXPR && ASM_VOLATILE_P (stmt))
return;
/* If this is not a MODIFY_EXPR which sets an SSA_NAME to a new
value, then do not try to simplify this statement as it will
not simplify in any way that is helpful for jump threading. */
if (TREE_CODE (stmt) != MODIFY_EXPR
|| TREE_CODE (TREE_OPERAND (stmt, 0)) != SSA_NAME)
continue;
/* At this point we have a statement which assigns an RHS to an
SSA_VAR on the LHS. We want to try and simplify this statement
to expose more context sensitive equivalences which in turn may
allow us to simplify the condition at the end of the loop. */
if (TREE_CODE (TREE_OPERAND (stmt, 1)) == SSA_NAME)
cached_lhs = TREE_OPERAND (stmt, 1);
else
{
/* Copy the operands. */
tree *copy, pre_fold_expr;
ssa_op_iter iter;
use_operand_p use_p;
unsigned int num, i = 0;
num = NUM_SSA_OPERANDS (stmt, (SSA_OP_USE | SSA_OP_VUSE));
copy = XCNEWVEC (tree, num);
/* Make a copy of the uses & vuses into USES_COPY, then cprop into
the operands. */
FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, SSA_OP_USE | SSA_OP_VUSE)
{
tree tmp = NULL;
tree use = USE_FROM_PTR (use_p);
copy[i++] = use;
if (TREE_CODE (use) == SSA_NAME)
tmp = SSA_NAME_VALUE (use);
if (tmp && TREE_CODE (tmp) != VALUE_HANDLE)
SET_USE (use_p, tmp);
}
/* Try to fold/lookup the new expression. Inserting the
expression into the hash table is unlikely to help
Sadly, we have to handle conditional assignments specially
here, because fold expects all the operands of an expression
to be folded before the expression itself is folded, but we
can't just substitute the folded condition here. */
if (TREE_CODE (TREE_OPERAND (stmt, 1)) == COND_EXPR)
{
tree cond = COND_EXPR_COND (TREE_OPERAND (stmt, 1));
cond = fold (cond);
if (cond == boolean_true_node)
pre_fold_expr = COND_EXPR_THEN (TREE_OPERAND (stmt, 1));
else if (cond == boolean_false_node)
pre_fold_expr = COND_EXPR_ELSE (TREE_OPERAND (stmt, 1));
else
pre_fold_expr = TREE_OPERAND (stmt, 1);
}
else
pre_fold_expr = TREE_OPERAND (stmt, 1);
if (pre_fold_expr)
{
cached_lhs = fold (pre_fold_expr);
if (TREE_CODE (cached_lhs) != SSA_NAME
&& !is_gimple_min_invariant (cached_lhs))
cached_lhs = lookup_avail_expr (stmt, false);
}
/* Restore the statement's original uses/defs. */
i = 0;
FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, SSA_OP_USE | SSA_OP_VUSE)
SET_USE (use_p, copy[i++]);
free (copy);
}
/* Record the context sensitive equivalence if we were able
to simplify this statement. */
if (cached_lhs
&& (TREE_CODE (cached_lhs) == SSA_NAME
|| is_gimple_min_invariant (cached_lhs)))
record_const_or_copy (TREE_OPERAND (stmt, 0), cached_lhs);
}
/* If we stopped at a COND_EXPR or SWITCH_EXPR, see if we know which arm
will be taken. */
if (stmt
&& (TREE_CODE (stmt) == COND_EXPR
|| TREE_CODE (stmt) == GOTO_EXPR
|| TREE_CODE (stmt) == SWITCH_EXPR))
{
tree cond, cached_lhs;
/* Now temporarily cprop the operands and try to find the resulting
expression in the hash tables. */
if (TREE_CODE (stmt) == COND_EXPR)
{
canonicalize_comparison (stmt);
cond = COND_EXPR_COND (stmt);
}
else if (TREE_CODE (stmt) == GOTO_EXPR)
cond = GOTO_DESTINATION (stmt);
else
cond = SWITCH_COND (stmt);
if (COMPARISON_CLASS_P (cond))
{
tree dummy_cond, op0, op1;
enum tree_code cond_code;
op0 = TREE_OPERAND (cond, 0);
op1 = TREE_OPERAND (cond, 1);
cond_code = TREE_CODE (cond);
/* Get the current value of both operands. */
if (TREE_CODE (op0) == SSA_NAME)
{
tree tmp = SSA_NAME_VALUE (op0);
if (tmp && TREE_CODE (tmp) != VALUE_HANDLE)
op0 = tmp;
}
if (TREE_CODE (op1) == SSA_NAME)
{
tree tmp = SSA_NAME_VALUE (op1);
if (tmp && TREE_CODE (tmp) != VALUE_HANDLE)
op1 = tmp;
}
/* Stuff the operator and operands into our dummy conditional
expression, creating the dummy conditional if necessary. */
dummy_cond = (tree) walk_data->global_data;
if (! dummy_cond)
{
dummy_cond = build2 (cond_code, boolean_type_node, op0, op1);
dummy_cond = build3 (COND_EXPR, void_type_node,
dummy_cond, NULL_TREE, NULL_TREE);
walk_data->global_data = dummy_cond;
}
else
{
TREE_SET_CODE (COND_EXPR_COND (dummy_cond), cond_code);
TREE_OPERAND (COND_EXPR_COND (dummy_cond), 0) = op0;
TREE_OPERAND (COND_EXPR_COND (dummy_cond), 1) = op1;
}
/* We absolutely do not care about any type conversions
we only care about a zero/nonzero value. */
cached_lhs = fold (COND_EXPR_COND (dummy_cond));
while (TREE_CODE (cached_lhs) == NOP_EXPR
|| TREE_CODE (cached_lhs) == CONVERT_EXPR
|| TREE_CODE (cached_lhs) == NON_LVALUE_EXPR)
cached_lhs = TREE_OPERAND (cached_lhs, 0);
if (! is_gimple_min_invariant (cached_lhs))
{
cached_lhs = lookup_avail_expr (dummy_cond, false);
if (!cached_lhs || ! is_gimple_min_invariant (cached_lhs))
cached_lhs = simplify_cond_and_lookup_avail_expr (dummy_cond);
}
}
/* We can have conditionals which just test the state of a
variable rather than use a relational operator. These are
simpler to handle. */
else if (TREE_CODE (cond) == SSA_NAME)
{
cached_lhs = cond;
cached_lhs = SSA_NAME_VALUE (cached_lhs);
if (cached_lhs && ! is_gimple_min_invariant (cached_lhs))
cached_lhs = NULL;
}
else
cached_lhs = lookup_avail_expr (stmt, false);
if (cached_lhs)
{
edge taken_edge = find_taken_edge (e->dest, cached_lhs);
basic_block dest = (taken_edge ? taken_edge->dest : NULL);
if (dest == e->dest)
return;
/* If we have a known destination for the conditional, then
we can perform this optimization, which saves at least one
conditional jump each time it applies since we get to
bypass the conditional at our original destination. */
if (dest)
{
struct edge_info *edge_info;
if (e->aux)
edge_info = (struct edge_info *) e->aux;
else
edge_info = allocate_edge_info (e);
register_jump_thread (e, taken_edge);
}
}
}
}
/* Initialize local stacks for this optimizer and record equivalences
upon entry to BB. Equivalences can come from the edge traversed to
@@ -937,7 +467,6 @@ dom_opt_initialize_block (struct dom_walk_data *walk_data ATTRIBUTE_UNUSED,
VEC_safe_push (tree, heap, avail_exprs_stack, NULL_TREE);
VEC_safe_push (tree, heap, const_and_copies_stack, NULL_TREE);
VEC_safe_push (tree, heap, nonzero_vars_stack, NULL_TREE);
VEC_safe_push (tree, heap, vrp_variables_stack, NULL_TREE);
record_equivalences_from_incoming_edge (bb);
@@ -1049,6 +578,35 @@ restore_vars_to_original_value (void)
}
}
/* A trivial wrapper so that we can present the generic jump
threading code with a simple API for simplifying statements. */
static tree
simplify_stmt_for_jump_threading (tree stmt)
{
return lookup_avail_expr (stmt, false);
}
/* Wrapper for common code to attempt to thread an edge. For example,
it handles lazily building the dummy condition and the bookkeeping
when jump threading is successful. */
static void
dom_thread_across_edge (struct dom_walk_data *walk_data, edge e)
{
/* If we don't already have a dummy condition, build it now. */
if (! walk_data->global_data)
{
tree dummy_cond = build2 (NE_EXPR, boolean_type_node,
integer_zero_node, integer_zero_node);
dummy_cond = build3 (COND_EXPR, void_type_node, dummy_cond, NULL, NULL);
walk_data->global_data = dummy_cond;
}
thread_across_edge (walk_data->global_data, e, false,
&const_and_copies_stack,
simplify_stmt_for_jump_threading);
}
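/* Aside: the split above -- a generic threading engine that takes a
   pass-specific simplification callback -- is the key design change in this
   patch.  The following is a minimal, self-contained sketch of that callback
   pattern in plain C; all names (walk_and_simplify, dom_like_simplify, ...)
   are invented for illustration and are not GCC APIs.  */

#include <stdio.h>

/* A "statement" is just an integer id; a simplifier maps it to a known
   constant value, or returns -1 for "could not simplify".  */
typedef int (*simplify_fn) (int stmt);

/* Generic driver: like the generic threader, it knows nothing about the
   client pass and only calls the supplied callback.  */
static void
walk_and_simplify (const int *stmts, int n, simplify_fn simplify)
{
  for (int i = 0; i < n; i++)
    {
      int val = simplify (stmts[i]);
      if (val >= 0)
        printf ("stmt %d simplifies to %d\n", stmts[i], val);
      else
        printf ("stmt %d not simplified\n", stmts[i]);
    }
}

/* A DOM-like client callback: pretend even statement ids are known zero.  */
static int
dom_like_simplify (int stmt)
{
  return (stmt % 2 == 0) ? 0 : -1;
}

int
main (void)
{
  int stmts[] = { 1, 2, 3, 4 };
  walk_and_simplify (stmts, 4, dom_like_simplify);
  return 0;
}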
/* We have finished processing the dominator children of BB, perform
any finalization actions in preparation for leaving this node in
the dominator tree. */
@@ -1058,17 +616,16 @@ dom_opt_finalize_block (struct dom_walk_data *walk_data, basic_block bb)
{
tree last;
/* If we have an outgoing edge to a block with multiple incoming and
outgoing edges, then we may be able to thread the edge. ie, we
may be able to statically determine which of the outgoing edges
will be traversed when the incoming edge from BB is traversed. */
if (single_succ_p (bb)
&& (single_succ_edge (bb)->flags & EDGE_ABNORMAL) == 0
&& !single_pred_p (single_succ (bb))
&& !single_succ_p (single_succ (bb)))
&& potentially_threadable_block (single_succ (bb)))
{
thread_across_edge (walk_data, single_succ_edge (bb));
dom_thread_across_edge (walk_data, single_succ_edge (bb));
}
else if ((last = last_stmt (bb))
&& TREE_CODE (last) == COND_EXPR
@@ -1084,7 +641,7 @@ dom_opt_finalize_block (struct dom_walk_data *walk_data, basic_block bb)
/* Only try to thread the edge if it reaches a target block with
more than one predecessor and more than one successor. */
if (!single_pred_p (true_edge->dest) && !single_succ_p (true_edge->dest))
if (potentially_threadable_block (true_edge->dest))
{
struct edge_info *edge_info;
unsigned int i;
@@ -1121,21 +678,20 @@ dom_opt_finalize_block (struct dom_walk_data *walk_data, basic_block bb)
}
}
/* Now thread the edge. */
thread_across_edge (walk_data, true_edge);
dom_thread_across_edge (walk_data, true_edge);
/* And restore the various tables to their state before
we threaded this edge. */
remove_local_expressions_from_table ();
restore_vars_to_original_value ();
}
/* Similarly for the ELSE arm. */
if (!single_pred_p (false_edge->dest) && !single_succ_p (false_edge->dest))
if (potentially_threadable_block (false_edge->dest))
{
struct edge_info *edge_info;
unsigned int i;
VEC_safe_push (tree, heap, const_and_copies_stack, NULL_TREE);
edge_info = (struct edge_info *) false_edge->aux;
/* If we have info associated with this edge, record it into
@@ -1162,7 +718,8 @@ dom_opt_finalize_block (struct dom_walk_data *walk_data, basic_block bb)
}
}
thread_across_edge (walk_data, false_edge);
/* Now thread the edge. */
dom_thread_across_edge (walk_data, false_edge);
/* No need to remove local expressions from our tables
or restore vars to their original value as that will
@@ -1174,48 +731,6 @@ dom_opt_finalize_block (struct dom_walk_data *walk_data, basic_block bb)
restore_nonzero_vars_to_original_value ();
restore_vars_to_original_value ();
/* Remove VRP records associated with this basic block. They are no
longer valid.
To be efficient, we note which variables have had their values
constrained in this block. So walk over each variable in the
VRP_VARIABLEs array. */
while (VEC_length (tree, vrp_variables_stack) > 0)
{
tree var = VEC_pop (tree, vrp_variables_stack);
struct vrp_hash_elt vrp_hash_elt, *vrp_hash_elt_p;
void **slot;
/* Each variable has a stack of value range records. We want to
invalidate those associated with our basic block. So we walk
the array backwards popping off records associated with our
block. Once we hit a record not associated with our block
we are done. */
VEC(vrp_element_p,heap) **var_vrp_records;
if (var == NULL)
break;
vrp_hash_elt.var = var;
vrp_hash_elt.records = NULL;
slot = htab_find_slot (vrp_data, &vrp_hash_elt, NO_INSERT);
vrp_hash_elt_p = (struct vrp_hash_elt *) *slot;
var_vrp_records = &vrp_hash_elt_p->records;
while (VEC_length (vrp_element_p, *var_vrp_records) > 0)
{
struct vrp_element *element
= VEC_last (vrp_element_p, *var_vrp_records);
if (element->bb != bb)
break;
VEC_pop (vrp_element_p, *var_vrp_records);
}
}
/* If we queued any statements to rescan in this block, then
go ahead and rescan them now. */
while (VEC_length (tree, stmts_to_rescan) > 0)
@@ -1366,26 +881,12 @@ record_equivalences_from_incoming_edge (basic_block bb)
if (cond_equivalences)
{
bool recorded_range = false;
for (i = 0; i < edge_info->max_cond_equivalences; i += 2)
{
tree expr = cond_equivalences[i];
tree value = cond_equivalences[i + 1];
record_cond (expr, value);
/* For the first true equivalence, record range
information. We only do this for the first
true equivalence as it should dominate any
later true equivalences. */
if (! recorded_range
&& COMPARISON_CLASS_P (expr)
&& value == boolean_true_node
&& TREE_CONSTANT (TREE_OPERAND (expr, 1)))
{
record_range (expr, bb);
recorded_range = true;
}
}
}
}
@@ -1416,9 +917,6 @@ dump_dominator_optimization_stats (FILE *file)
fprintf (file, " Copies propagated: %6ld\n",
opt_stats.num_copy_prop);
fprintf (file, "\nTotal number of DOM iterations: %6ld\n",
opt_stats.num_iterations);
fprintf (file, "\nHash table statistics:\n");
fprintf (file, " avail_exprs: ");
@@ -1768,216 +1266,6 @@ simple_iv_increment_p (tree stmt)
return false;
}
/* STMT is a COND_EXPR for which we could not trivially determine its
result. This routine attempts to find equivalent forms of the
condition which we may be able to optimize better. It also
uses simple value range propagation to optimize conditionals. */
static tree
simplify_cond_and_lookup_avail_expr (tree stmt)
{
tree cond = COND_EXPR_COND (stmt);
if (COMPARISON_CLASS_P (cond))
{
tree op0 = TREE_OPERAND (cond, 0);
tree op1 = TREE_OPERAND (cond, 1);
if (TREE_CODE (op0) == SSA_NAME && is_gimple_min_invariant (op1))
{
int limit;
tree low, high, cond_low, cond_high;
int lowequal, highequal, swapped, no_overlap, subset, cond_inverted;
VEC(vrp_element_p,heap) **vrp_records;
struct vrp_element *element;
struct vrp_hash_elt vrp_hash_elt, *vrp_hash_elt_p;
void **slot;
/* Consult the value range records for this variable (if they exist)
to see if we can eliminate or simplify this conditional.
Note two tests are necessary to determine no records exist.
First we have to see if the virtual array exists, if it
exists, then we have to check its active size.
Also note the vast majority of conditionals are not testing
a variable which has had its range constrained by an earlier
conditional. So this filter avoids a lot of unnecessary work. */
vrp_hash_elt.var = op0;
vrp_hash_elt.records = NULL;
slot = htab_find_slot (vrp_data, &vrp_hash_elt, NO_INSERT);
if (slot == NULL)
return NULL;
vrp_hash_elt_p = (struct vrp_hash_elt *) *slot;
vrp_records = &vrp_hash_elt_p->records;
limit = VEC_length (vrp_element_p, *vrp_records);
/* If we have no value range records for this variable, or we are
unable to extract a range for this condition, then there is
nothing to do. */
if (limit == 0
|| ! extract_range_from_cond (cond, &cond_high,
&cond_low, &cond_inverted))
return NULL;
/* We really want to avoid unnecessary computations of range
info. So all ranges are computed lazily; this avoids a
lot of unnecessary work. i.e., we record the conditional,
but do not process how it constrains the variable's
potential values until we know that processing the condition
could be helpful.
However, we do not want to have to walk a potentially long
list of ranges, nor do we want to compute a variable's
range more than once for a given path.
Luckily, each time we encounter a conditional that can not
be otherwise optimized we will end up here and we will
compute the necessary range information for the variable
used in this condition.
Thus you can conclude that there will never be more than one
conditional associated with a variable which has not been
processed. So we never need to merge more than one new
conditional into the current range.
These properties also help us avoid unnecessary work. */
element = VEC_last (vrp_element_p, *vrp_records);
if (element->high && element->low)
{
/* The last element has been processed, so there is no range
merging to do, we can simply use the high/low values
recorded in the last element. */
low = element->low;
high = element->high;
}
else
{
tree tmp_high, tmp_low;
int dummy;
/* The last element has not been processed. Process it now.
record_range should ensure for cond inverted is not set.
This call can only fail if cond is x < min or x > max,
which fold should have optimized into false.
If that doesn't happen, just pretend all values are
in the range. */
if (! extract_range_from_cond (element->cond, &tmp_high,
&tmp_low, &dummy))
gcc_unreachable ();
else
gcc_assert (dummy == 0);
/* If this is the only element, then no merging is necessary,
the high/low values from extract_range_from_cond are all
we need. */
if (limit == 1)
{
low = tmp_low;
high = tmp_high;
}
else
{
/* Get the high/low value from the previous element. */
struct vrp_element *prev
= VEC_index (vrp_element_p, *vrp_records, limit - 2);
low = prev->low;
high = prev->high;
/* Merge in this element's range with the range from the
previous element.
The low value for the merged range is the maximum of
the previous low value and the low value of this record.
Similarly the high value for the merged range is the
minimum of the previous high value and the high value of
this record. */
low = (low && tree_int_cst_compare (low, tmp_low) == 1
? low : tmp_low);
high = (high && tree_int_cst_compare (high, tmp_high) == -1
? high : tmp_high);
}
/* And record the computed range. */
element->low = low;
element->high = high;
}
/* After we have constrained this variable's potential values,
we try to determine the result of the given conditional.
To simplify later tests, first determine if the current
low value is the same low value as the conditional.
Similarly for the current high value and the high value
for the conditional. */
lowequal = tree_int_cst_equal (low, cond_low);
highequal = tree_int_cst_equal (high, cond_high);
if (lowequal && highequal)
return (cond_inverted ? boolean_false_node : boolean_true_node);
/* To simplify the overlap/subset tests below we may want
to swap the two ranges so that the larger of the two
ranges occurs "first". */
swapped = 0;
if (tree_int_cst_compare (low, cond_low) == 1
|| (lowequal
&& tree_int_cst_compare (cond_high, high) == 1))
{
tree temp;
swapped = 1;
temp = low;
low = cond_low;
cond_low = temp;
temp = high;
high = cond_high;
cond_high = temp;
}
/* Now determine if there is no overlap in the ranges
or if the second range is a subset of the first range. */
no_overlap = tree_int_cst_lt (high, cond_low);
subset = tree_int_cst_compare (cond_high, high) != 1;
/* If there was no overlap in the ranges, then this conditional
always has a false value (unless we had to invert this
conditional, in which case it always has a true value). */
if (no_overlap)
return (cond_inverted ? boolean_true_node : boolean_false_node);
/* If the current range is a subset of the condition's range,
then this conditional always has a true value (unless we
had to invert this conditional, in which case it always
has a false value). */
if (subset && swapped)
return (cond_inverted ? boolean_false_node : boolean_true_node);
/* We were unable to determine the result of the conditional.
However, we may be able to simplify the conditional. First
merge the ranges in the same manner as range merging above. */
low = tree_int_cst_compare (low, cond_low) == 1 ? low : cond_low;
high = tree_int_cst_compare (high, cond_high) == -1 ? high : cond_high;
/* If the range has converged to a single point, then turn this
into an equality comparison. */
if (TREE_CODE (cond) != EQ_EXPR
&& TREE_CODE (cond) != NE_EXPR
&& tree_int_cst_equal (low, high))
{
TREE_SET_CODE (cond, EQ_EXPR);
TREE_OPERAND (cond, 1) = high;
}
}
}
return 0;
}
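/* Aside: the heart of the removed function is an interval decision -- given
   that a variable is known to lie in one range, is a condition describing
   another range always true, always false, or undecidable?  Below is a
   simplified, self-contained model of that decision in plain C (invented
   names, plain integers instead of GCC trees); it is a sketch of the idea,
   not the removed implementation.  */

#include <stdio.h>

/* Given that a variable lies in [lo, hi], decide a condition of the form
   "x in [clo, chi]" (inverted == 1 means the condition is the complement,
   e.g. a != test).  Returns 1 for always true, 0 for always false, and
   -1 for "cannot tell".  */
static int
decide_range_cond (long lo, long hi, long clo, long chi, int inverted)
{
  int result;

  if (lo >= clo && hi <= chi)
    result = 1;                 /* Known range is a subset: always true.  */
  else if (hi < clo || lo > chi)
    result = 0;                 /* Ranges are disjoint: always false.  */
  else
    return -1;                  /* Partial overlap: result unknown.  */

  return inverted ? !result : result;
}

int
main (void)
{
  /* x in [5, 9]; "x <= 10" (x in [-1000000, 10]) is always true.  */
  printf ("%d\n", decide_range_cond (5, 9, -1000000, 10, 0));  /* 1 */
  /* x in [5, 9]; "x == 3" is always false.  */
  printf ("%d\n", decide_range_cond (5, 9, 3, 3, 0));          /* 0 */
  /* x in [5, 9]; "x <= 7" cannot be decided.  */
  printf ("%d\n", decide_range_cond (5, 9, -1000000, 7, 0));   /* -1 */
  return 0;
}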
/* CONST_AND_COPIES is a table which maps an SSA_NAME to the current
known value for that SSA_NAME (or NULL if no value is known).
@@ -2265,11 +1553,6 @@ eliminate_redundant_computations (tree stmt)
/* Check if the expression has been computed before. */
cached_lhs = lookup_avail_expr (stmt, insert);
/* If this is a COND_EXPR and we did not find its expression in
the hash table, simplify the condition and try again. */
if (! cached_lhs && TREE_CODE (stmt) == COND_EXPR)
cached_lhs = simplify_cond_and_lookup_avail_expr (stmt);
opt_stats.num_exprs_considered++;
/* Get a pointer to the expression we are trying to optimize. */
@@ -2816,156 +2099,6 @@ lookup_avail_expr (tree stmt, bool insert)
return lhs;
}
/* Given a condition COND, record into HI_P, LO_P and INVERTED_P the
range of values that result in the conditional having a true value.
Return true if we are successful in extracting a range from COND and
false if we are unsuccessful. */
static bool
extract_range_from_cond (tree cond, tree *hi_p, tree *lo_p, int *inverted_p)
{
tree op1 = TREE_OPERAND (cond, 1);
tree high, low, type;
int inverted;
type = TREE_TYPE (op1);
/* Experiments have shown that it's rarely, if ever useful to
record ranges for enumerations. Presumably this is due to
the fact that they're rarely used directly. They are typically
cast into an integer type and used that way. */
if (TREE_CODE (type) != INTEGER_TYPE)
return 0;
switch (TREE_CODE (cond))
{
case EQ_EXPR:
high = low = op1;
inverted = 0;
break;
case NE_EXPR:
high = low = op1;
inverted = 1;
break;
case GE_EXPR:
low = op1;
/* Get the highest value of the type. If not a constant, use that
of its base type, if it has one. */
high = TYPE_MAX_VALUE (type);
if (TREE_CODE (high) != INTEGER_CST && TREE_TYPE (type))
high = TYPE_MAX_VALUE (TREE_TYPE (type));
inverted = 0;
break;
case GT_EXPR:
high = TYPE_MAX_VALUE (type);
if (TREE_CODE (high) != INTEGER_CST && TREE_TYPE (type))
high = TYPE_MAX_VALUE (TREE_TYPE (type));
if (!tree_int_cst_lt (op1, high))
return 0;
low = int_const_binop (PLUS_EXPR, op1, integer_one_node, 1);
inverted = 0;
break;
case LE_EXPR:
high = op1;
low = TYPE_MIN_VALUE (type);
if (TREE_CODE (low) != INTEGER_CST && TREE_TYPE (type))
low = TYPE_MIN_VALUE (TREE_TYPE (type));
inverted = 0;
break;
case LT_EXPR:
low = TYPE_MIN_VALUE (type);
if (TREE_CODE (low) != INTEGER_CST && TREE_TYPE (type))
low = TYPE_MIN_VALUE (TREE_TYPE (type));
if (!tree_int_cst_lt (low, op1))
return 0;
high = int_const_binop (MINUS_EXPR, op1, integer_one_node, 1);
inverted = 0;
break;
default:
return 0;
}
*hi_p = high;
*lo_p = low;
*inverted_p = inverted;
return 1;
}
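/* Aside: a self-contained sketch of the mapping performed above -- turning
   "x OP constant" into a [low, high] interval plus an inverted flag -- using
   plain C longs instead of GCC's INTEGER_CSTs.  All names are invented for
   illustration.  */

#include <limits.h>
#include <stdio.h>

enum cmp { CMP_EQ, CMP_NE, CMP_LT, CMP_LE, CMP_GT, CMP_GE };

/* Map "x OP c" to the range [*lo, *hi] of values for which it is true;
   *inverted is set for != since its true set is the complement of a single
   point.  Returns 0 if no sensible range exists.  */
static int
range_from_cmp (enum cmp op, long c, long *lo, long *hi, int *inverted)
{
  *inverted = 0;
  switch (op)
    {
    case CMP_EQ: *lo = c; *hi = c; return 1;
    case CMP_NE: *lo = c; *hi = c; *inverted = 1; return 1;
    case CMP_GE: *lo = c; *hi = LONG_MAX; return 1;
    case CMP_GT:
      if (c == LONG_MAX)
        return 0;               /* x > MAX is never true.  */
      *lo = c + 1; *hi = LONG_MAX; return 1;
    case CMP_LE: *lo = LONG_MIN; *hi = c; return 1;
    case CMP_LT:
      if (c == LONG_MIN)
        return 0;               /* x < MIN is never true.  */
      *lo = LONG_MIN; *hi = c - 1; return 1;
    }
  return 0;
}

int
main (void)
{
  long lo, hi;
  int inv;
  if (range_from_cmp (CMP_GT, 10, &lo, &hi, &inv))
    printf ("x > 10  =>  [%ld, %ld], inverted=%d\n", lo, hi, inv);
  return 0;
}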
/* Record a range created by COND for basic block BB. */
static void
record_range (tree cond, basic_block bb)
{
enum tree_code code = TREE_CODE (cond);
/* We explicitly ignore NE_EXPRs and all the unordered comparisons.
They rarely allow for meaningful range optimizations and significantly
complicate the implementation. */
if ((code == LT_EXPR || code == LE_EXPR || code == GT_EXPR
|| code == GE_EXPR || code == EQ_EXPR)
&& TREE_CODE (TREE_TYPE (TREE_OPERAND (cond, 1))) == INTEGER_TYPE)
{
struct vrp_hash_elt *vrp_hash_elt;
struct vrp_element *element;
VEC(vrp_element_p,heap) **vrp_records_p;
void **slot;
vrp_hash_elt = XNEW (struct vrp_hash_elt);
vrp_hash_elt->var = TREE_OPERAND (cond, 0);
vrp_hash_elt->records = NULL;
slot = htab_find_slot (vrp_data, vrp_hash_elt, INSERT);
if (*slot == NULL)
*slot = (void *) vrp_hash_elt;
else
vrp_free (vrp_hash_elt);
vrp_hash_elt = (struct vrp_hash_elt *) *slot;
vrp_records_p = &vrp_hash_elt->records;
element = GGC_NEW (struct vrp_element);
element->low = NULL;
element->high = NULL;
element->cond = cond;
element->bb = bb;
VEC_safe_push (vrp_element_p, heap, *vrp_records_p, element);
VEC_safe_push (tree, heap, vrp_variables_stack, TREE_OPERAND (cond, 0));
}
}
/* Hashing and equality functions for VRP_DATA.
Since this hash table is addressed by SSA_NAMEs, we can hash on
their version number and equality can be determined with a
pointer comparison. */
static hashval_t
vrp_hash (const void *p)
{
tree var = ((struct vrp_hash_elt *)p)->var;
return SSA_NAME_VERSION (var);
}
static int
vrp_eq (const void *p1, const void *p2)
{
tree var1 = ((struct vrp_hash_elt *)p1)->var;
tree var2 = ((struct vrp_hash_elt *)p2)->var;
return var1 == var2;
}
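/* Aside: the two functions above work because every SSA_NAME already carries
   a small unique integer (its version), so hashing is trivial and equality is
   pointer identity.  A self-contained model of that idea in plain C follows;
   the names are invented and this is not GCC's hash table API.  */

#include <stdio.h>

/* A fake "SSA name" carrying its own small unique id (the version).  */
struct fake_ssa_name { unsigned version; const char *ident; };
struct hash_elt { struct fake_ssa_name *var; };

static unsigned
elt_hash (const struct hash_elt *e)
{
  return e->var->version;   /* The version number is already a good hash.  */
}

static int
elt_eq (const struct hash_elt *a, const struct hash_elt *b)
{
  return a->var == b->var;  /* Names are unique objects: pointer compare.  */
}

int
main (void)
{
  struct fake_ssa_name x = { 7, "x_7" };
  struct hash_elt e1 = { &x }, e2 = { &x };
  printf ("hash=%u equal=%d\n", elt_hash (&e1), elt_eq (&e1, &e2));
  return 0;
}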
/* Hashing and equality functions for AVAIL_EXPRS. The table stores
MODIFY_EXPR statements. We compute a value number for expressions using
the code of the expression and the SSA numbers of its operands. */
/* SSA Jump Threading
Copyright (C) 2005, 2006 Free Software Foundation, Inc.
Contributed by Jeff Law <law@redhat.com>
This file is part of GCC.
GCC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GCC is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING. If not, write to
the Free Software Foundation, 51 Franklin Street, Fifth Floor,
Boston, MA 02110-1301, USA. */
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "tree.h"
#include "flags.h"
#include "rtl.h"
#include "tm_p.h"
#include "ggc.h"
#include "basic-block.h"
#include "cfgloop.h"
#include "output.h"
#include "expr.h"
#include "function.h"
#include "diagnostic.h"
#include "timevar.h"
#include "tree-dump.h"
#include "tree-flow.h"
#include "domwalk.h"
#include "real.h"
#include "tree-pass.h"
#include "tree-ssa-propagate.h"
#include "langhooks.h"
#include "params.h"
/* To avoid code explosion due to jump threading, we limit the
number of statements we are going to copy. This variable
holds the number of statements currently seen that we'll have
to copy as part of the jump threading process. */
static int stmt_count;
/* Return TRUE if we may be able to thread an incoming edge into
BB to an outgoing edge from BB. Return FALSE otherwise. */
bool
potentially_threadable_block (basic_block bb)
{
block_stmt_iterator bsi;
/* If BB has a single successor or a single predecessor, then
there is no threading opportunity. */
if (single_succ_p (bb) || single_pred_p (bb))
return false;
/* If BB does not end with a conditional, switch or computed goto,
then there is no threading opportunity. */
bsi = bsi_last (bb);
if (bsi_end_p (bsi)
|| ! bsi_stmt (bsi)
|| (TREE_CODE (bsi_stmt (bsi)) != COND_EXPR
&& TREE_CODE (bsi_stmt (bsi)) != GOTO_EXPR
&& TREE_CODE (bsi_stmt (bsi)) != SWITCH_EXPR))
return false;
return true;
}
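/* Aside: a self-contained model of the filter above in plain C -- a block is
   worth considering only if it joins several paths (multiple predecessors)
   and then branches again (multiple successors) on some conditional.  The
   toy_block structure and all names are invented for illustration.  */

#include <stdbool.h>
#include <stdio.h>

struct toy_block
{
  int num_preds;
  int num_succs;
  bool ends_in_conditional;   /* COND/SWITCH/computed-GOTO analogue.  */
};

static bool
toy_potentially_threadable (const struct toy_block *bb)
{
  /* A block with a single predecessor or single successor offers no
     choice of paths, so there is nothing to thread.  */
  if (bb->num_succs <= 1 || bb->num_preds <= 1)
    return false;
  return bb->ends_in_conditional;
}

int
main (void)
{
  struct toy_block join_and_branch = { 2, 2, true };
  struct toy_block straight_line  = { 1, 1, false };
  printf ("%d %d\n",
          toy_potentially_threadable (&join_and_branch),   /* 1 */
          toy_potentially_threadable (&straight_line));    /* 0 */
  return 0;
}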
/* Return the LHS of any ASSERT_EXPR where OP appears as the first
argument to the ASSERT_EXPR and in which the ASSERT_EXPR dominates
BB. If no such ASSERT_EXPR is found, return OP. */
static tree
lhs_of_dominating_assert (tree op, basic_block bb, tree stmt)
{
imm_use_iterator imm_iter;
use_operand_p imm_use;
FOR_EACH_IMM_USE_SAFE (imm_use, imm_iter, op)
{
tree use_stmt = USE_STMT (imm_use);
if (use_stmt != stmt
&& TREE_CODE (use_stmt) == MODIFY_EXPR
&& TREE_CODE (TREE_OPERAND (use_stmt, 1)) == ASSERT_EXPR
&& TREE_OPERAND (TREE_OPERAND (use_stmt, 1), 0) == op
&& dominated_by_p (CDI_DOMINATORS, bb, bb_for_stmt (use_stmt)))
op = TREE_OPERAND (use_stmt, 0);
}
return op;
}
/* We record temporary equivalences created by PHI nodes or
statements within the target block. Doing so allows us to
identify more jump threading opportunities, even in blocks
with side effects.
We keep track of those temporary equivalences in a stack
structure so that we can unwind them when we're done processing
a particular edge. This routine handles unwinding the data
structures. */
static void
remove_temporary_equivalences (VEC(tree, heap) **stack)
{
while (VEC_length (tree, *stack) > 0)
{
tree prev_value, dest;
dest = VEC_pop (tree, *stack);
/* A NULL value indicates we should stop unwinding, otherwise
pop off the next entry as they're recorded in pairs. */
if (dest == NULL)
break;
prev_value = VEC_pop (tree, *stack);
SSA_NAME_VALUE (dest) = prev_value;
}
}
/* Record a temporary equivalence, saving enough information so that
we can restore the state of recorded equivalences when we're
done processing the current edge. */
static void
record_temporary_equivalence (tree x, tree y, VEC(tree, heap) **stack)
{
tree prev_x = SSA_NAME_VALUE (x);
if (TREE_CODE (y) == SSA_NAME)
{
tree tmp = SSA_NAME_VALUE (y);
y = tmp ? tmp : y;
}
SSA_NAME_VALUE (x) = y;
VEC_reserve (tree, heap, *stack, 2);
VEC_quick_push (tree, *stack, prev_x);
VEC_quick_push (tree, *stack, x);
}
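/* Aside: a self-contained sketch of the unwind-stack discipline used by the
   two functions above -- recordings are pushed in pairs and a sentinel marks
   how far to pop -- written in plain C with invented names (a -1 sentinel
   stands in for the NULL marker).  */

#include <stdio.h>

#define NVARS 4
#define STACKSZ 32

static int value[NVARS];        /* Current (possibly temporary) values.  */
static int stack[STACKSZ];
static int sp;

static void
push_marker (void)
{
  stack[sp++] = -1;             /* Sentinel: unwinding stops here.  */
}

static void
record_temporary (int var, int val)
{
  stack[sp++] = value[var];     /* Save the previous value...  */
  stack[sp++] = var;            /* ...and which variable it belongs to.  */
  value[var] = val;
}

static void
unwind_to_marker (void)
{
  while (sp > 0)
    {
      int var = stack[--sp];
      if (var == -1)
        break;                  /* Hit the sentinel: stop unwinding.  */
      value[var] = stack[--sp];
    }
}

int
main (void)
{
  value[2] = 10;
  push_marker ();
  record_temporary (2, 99);     /* Temporarily pretend var 2 is 99.  */
  printf ("during: %d\n", value[2]);
  unwind_to_marker ();
  printf ("after:  %d\n", value[2]);   /* Back to 10.  */
  return 0;
}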
/* Record temporary equivalences created by PHIs at the target of the
edge E. Record unwind information for the equivalences onto STACK.
If a PHI which prevents threading is encountered, then return FALSE
indicating we should not thread this edge, else return TRUE. */
static bool
record_temporary_equivalences_from_phis (edge e, VEC(tree, heap) **stack)
{
tree phi;
/* Each PHI creates a temporary equivalence, record them.
These are context sensitive equivalences and will be removed
later. */
for (phi = phi_nodes (e->dest); phi; phi = PHI_CHAIN (phi))
{
tree src = PHI_ARG_DEF_FROM_EDGE (phi, e);
tree dst = PHI_RESULT (phi);
/* If the desired argument is not the same as this PHI's result
and it is set by a PHI in E->dest, then we can not thread
through E->dest. */
if (src != dst
&& TREE_CODE (src) == SSA_NAME
&& TREE_CODE (SSA_NAME_DEF_STMT (src)) == PHI_NODE
&& bb_for_stmt (SSA_NAME_DEF_STMT (src)) == e->dest)
return false;
/* We consider any non-virtual PHI as a statement since it
could result in a constant assignment or copy operation. */
if (is_gimple_reg (dst))
stmt_count++;
record_temporary_equivalence (dst, src, stack);
}
return true;
}
/* Try to simplify each statement in E->dest, ultimately leading to
a simplification of the COND_EXPR at the end of E->dest.
Record unwind information for temporary equivalences onto STACK.
Use SIMPLIFY (a pointer to a callback function) to further simplify
statements using pass specific information.
We might consider marking just those statements which ultimately
feed the COND_EXPR. It's not clear if the overhead of bookkeeping
would be recovered by trying to simplify fewer statements.
If we are able to simplify a statement into the form
SSA_NAME = (SSA_NAME | gimple invariant), then we can record
a context sensitive equivalency which may help us simplify
later statements in E->dest. */
static tree
record_temporary_equivalences_from_stmts_at_dest (edge e,
VEC(tree, heap) **stack,
tree (*simplify) (tree))
{
block_stmt_iterator bsi;
tree stmt = NULL;
int max_stmt_count;
max_stmt_count = PARAM_VALUE (PARAM_MAX_JUMP_THREAD_DUPLICATION_STMTS);
/* Walk through each statement in the block recording equivalences
we discover. Note any equivalences we discover are context
sensitive (ie, are dependent on traversing E) and must be unwound
when we're finished processing E. */
for (bsi = bsi_start (e->dest); ! bsi_end_p (bsi); bsi_next (&bsi))
{
tree cached_lhs = NULL;
stmt = bsi_stmt (bsi);
/* Ignore empty statements and labels. */
if (IS_EMPTY_STMT (stmt) || TREE_CODE (stmt) == LABEL_EXPR)
continue;
/* Safely handle threading across loop backedges. Only allowing
a conditional at the target of the backedge is over conservative,
but still allows us to capture the majority of the cases where
we can thread across a loop backedge. */
if ((e->flags & EDGE_DFS_BACK) != 0
&& TREE_CODE (stmt) != COND_EXPR
&& TREE_CODE (stmt) != SWITCH_EXPR)
return NULL;
/* If the statement has volatile operands, then we assume we
can not thread through this block. This is overly
conservative in some ways. */
if (TREE_CODE (stmt) == ASM_EXPR && ASM_VOLATILE_P (stmt))
return NULL;
/* If duplicating this block is going to cause too much code
expansion, then do not thread through this block. */
stmt_count++;
if (stmt_count > max_stmt_count)
return NULL;
/* If this is not a MODIFY_EXPR which sets an SSA_NAME to a new
value, then do not try to simplify this statement as it will
not simplify in any way that is helpful for jump threading. */
if (TREE_CODE (stmt) != MODIFY_EXPR
|| TREE_CODE (TREE_OPERAND (stmt, 0)) != SSA_NAME)
continue;
/* At this point we have a statement which assigns an RHS to an
SSA_VAR on the LHS. We want to try and simplify this statement
to expose more context sensitive equivalences which in turn may
allow us to simplify the condition at the end of the loop.
Handle simple copy operations as well as implied copies from
ASSERT_EXPRs. */
if (TREE_CODE (TREE_OPERAND (stmt, 1)) == SSA_NAME)
cached_lhs = TREE_OPERAND (stmt, 1);
else if (TREE_CODE (TREE_OPERAND (stmt, 1)) == ASSERT_EXPR)
cached_lhs = TREE_OPERAND (TREE_OPERAND (stmt, 1), 0);
else
{
/* A statement that is not a trivial copy or ASSERT_EXPR.
We're going to temporarily copy propagate the operands
and see if that allows us to simplify this statement. */
tree *copy, pre_fold_expr;
ssa_op_iter iter;
use_operand_p use_p;
unsigned int num, i = 0;
num = NUM_SSA_OPERANDS (stmt, (SSA_OP_USE | SSA_OP_VUSE));
copy = XCNEWVEC (tree, num);
/* Make a copy of the uses & vuses into USES_COPY, then cprop into
the operands. */
FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, SSA_OP_USE | SSA_OP_VUSE)
{
tree tmp = NULL;
tree use = USE_FROM_PTR (use_p);
copy[i++] = use;
if (TREE_CODE (use) == SSA_NAME)
tmp = SSA_NAME_VALUE (use);
if (tmp && TREE_CODE (tmp) != VALUE_HANDLE)
SET_USE (use_p, tmp);
}
/* Try to fold/lookup the new expression. Inserting the
expression into the hash table is unlikely to help.
Sadly, we have to handle conditional assignments specially
here, because fold expects all the operands of an expression
to be folded before the expression itself is folded, but we
can't just substitute the folded condition here. */
if (TREE_CODE (TREE_OPERAND (stmt, 1)) == COND_EXPR)
{
tree cond = COND_EXPR_COND (TREE_OPERAND (stmt, 1));
cond = fold (cond);
if (cond == boolean_true_node)
pre_fold_expr = COND_EXPR_THEN (TREE_OPERAND (stmt, 1));
else if (cond == boolean_false_node)
pre_fold_expr = COND_EXPR_ELSE (TREE_OPERAND (stmt, 1));
else
pre_fold_expr = TREE_OPERAND (stmt, 1);
}
else
pre_fold_expr = TREE_OPERAND (stmt, 1);
if (pre_fold_expr)
{
cached_lhs = fold (pre_fold_expr);
if (TREE_CODE (cached_lhs) != SSA_NAME
&& !is_gimple_min_invariant (cached_lhs))
cached_lhs = (*simplify) (stmt);
}
/* Restore the statement's original uses/defs. */
i = 0;
FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, SSA_OP_USE | SSA_OP_VUSE)
SET_USE (use_p, copy[i++]);
free (copy);
}
/* Record the context sensitive equivalence if we were able
to simplify this statement. */
if (cached_lhs
&& (TREE_CODE (cached_lhs) == SSA_NAME
|| is_gimple_min_invariant (cached_lhs)))
record_temporary_equivalence (TREE_OPERAND (stmt, 0),
cached_lhs,
stack);
}
return stmt;
}
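/* Aside: a self-contained sketch of the save/substitute/fold/restore pattern
   used above, in plain C with invented names: the statement's operands are
   temporarily overwritten with values known on the current path, the result
   is folded if possible, and the statement is then restored exactly as it
   was found.  */

#include <stdio.h>

#define SYMBOLIC (-1)   /* Operand value not known at compile time.  */

struct stmt { int op[2]; };

/* "Fold": if both operands are now constants, the sum is the result.  */
static int
try_fold_sum (const struct stmt *s)
{
  if (s->op[0] == SYMBOLIC || s->op[1] == SYMBOLIC)
    return SYMBOLIC;
  return s->op[0] + s->op[1];
}

static int
simplify_with_temporary_substitution (struct stmt *s, const int path_value[2])
{
  int saved[2], result;

  /* Save the original operands, then copy-propagate the path-specific
     values into the statement.  */
  for (int i = 0; i < 2; i++)
    {
      saved[i] = s->op[i];
      if (s->op[i] == SYMBOLIC && path_value[i] != SYMBOLIC)
        s->op[i] = path_value[i];
    }

  result = try_fold_sum (s);

  /* Restore the statement exactly as it was found.  */
  for (int i = 0; i < 2; i++)
    s->op[i] = saved[i];

  return result;
}

int
main (void)
{
  struct stmt s = { { SYMBOLIC, SYMBOLIC } };
  int on_this_path[2] = { 5, 7 };   /* Both operands known on this edge.  */
  printf ("folded: %d\n", simplify_with_temporary_substitution (&s, on_this_path));
  printf ("restored operand 0: %d\n", s.op[0]);   /* Back to SYMBOLIC.  */
  return 0;
}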
/* Simplify the control statement at the end of the block E->dest.
To avoid allocating memory unnecessarily, a scratch COND_EXPR
is available to use/clobber in DUMMY_COND.
Use SIMPLIFY (a pointer to a callback function) to further simplify
a condition using pass specific information.
Return the simplified condition or NULL if simplification could
not be performed. */
static tree
simplify_control_stmt_condition (edge e,
tree stmt,
tree dummy_cond,
tree (*simplify) (tree),
bool handle_dominating_asserts)
{
tree cond, cached_lhs;
if (TREE_CODE (stmt) == COND_EXPR)
cond = COND_EXPR_COND (stmt);
else if (TREE_CODE (stmt) == GOTO_EXPR)
cond = GOTO_DESTINATION (stmt);
else
cond = SWITCH_COND (stmt);
/* For comparisons, we have to update both operands, then try
to simplify the comparison. */
if (COMPARISON_CLASS_P (cond))
{
tree op0, op1;
enum tree_code cond_code;
op0 = TREE_OPERAND (cond, 0);
op1 = TREE_OPERAND (cond, 1);
cond_code = TREE_CODE (cond);
/* Get the current value of both operands. */
if (TREE_CODE (op0) == SSA_NAME)
{
tree tmp = SSA_NAME_VALUE (op0);
if (tmp && TREE_CODE (tmp) != VALUE_HANDLE)
op0 = tmp;
}
if (TREE_CODE (op1) == SSA_NAME)
{
tree tmp = SSA_NAME_VALUE (op1);
if (tmp && TREE_CODE (tmp) != VALUE_HANDLE)
op1 = tmp;
}
if (handle_dominating_asserts)
{
/* Now see if the operand was consumed by an ASSERT_EXPR
which dominates E->src. If so, we want to replace the
operand with the LHS of the ASSERT_EXPR. */
if (TREE_CODE (op0) == SSA_NAME)
op0 = lhs_of_dominating_assert (op0, e->src, stmt);
if (TREE_CODE (op1) == SSA_NAME)
op1 = lhs_of_dominating_assert (op1, e->src, stmt);
}
/* We may need to canonicalize the comparison. For
example, op0 might be a constant while op1 is an
SSA_NAME. Failure to canonicalize will cause us to
miss threading opportunities. */
if (cond_code != SSA_NAME
&& tree_swap_operands_p (op0, op1, false))
{
tree tmp;
cond_code = swap_tree_comparison (TREE_CODE (cond));
tmp = op0;
op0 = op1;
op1 = tmp;
}
/* Stuff the operator and operands into our dummy conditional
expression. */
TREE_SET_CODE (COND_EXPR_COND (dummy_cond), cond_code);
TREE_OPERAND (COND_EXPR_COND (dummy_cond), 0) = op0;
TREE_OPERAND (COND_EXPR_COND (dummy_cond), 1) = op1;
/* We absolutely do not care about any type conversions
we only care about a zero/nonzero value. */
cached_lhs = fold (COND_EXPR_COND (dummy_cond));
while (TREE_CODE (cached_lhs) == NOP_EXPR
|| TREE_CODE (cached_lhs) == CONVERT_EXPR
|| TREE_CODE (cached_lhs) == NON_LVALUE_EXPR)
cached_lhs = TREE_OPERAND (cached_lhs, 0);
/* If we have not simplified the condition down to an invariant,
then use the pass specific callback to simplify the condition. */
if (! is_gimple_min_invariant (cached_lhs))
cached_lhs = (*simplify) (dummy_cond);
}
/* We can have conditionals which just test the state of a variable
rather than use a relational operator. These are simpler to handle. */
else if (TREE_CODE (cond) == SSA_NAME)
{
cached_lhs = cond;
/* Get the variable's current value from the equivalency chains. */
while (cached_lhs
&& TREE_CODE (cached_lhs) == SSA_NAME
&& SSA_NAME_VALUE (cached_lhs))
cached_lhs = SSA_NAME_VALUE (cached_lhs);
/* If we're dominated by a suitable ASSERT_EXPR, then
update CACHED_LHS appropriately. */
if (handle_dominating_asserts && TREE_CODE (cached_lhs) == SSA_NAME)
cached_lhs = lhs_of_dominating_assert (cached_lhs, e->src, stmt);
/* If we haven't simplified to an invariant yet, then use the
pass specific callback to try and simplify it further. */
if (cached_lhs && ! is_gimple_min_invariant (cached_lhs))
cached_lhs = (*simplify) (stmt);
}
else
cached_lhs = NULL;
return cached_lhs;
}
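/* Aside: a self-contained sketch of the canonicalization step above in plain
   C with invented names -- when the constant ends up as the first operand,
   swap the operands and the comparison code (e.g. "5 < x" becomes "x > 5")
   so later lookups see one canonical form.  */

#include <stdio.h>

enum cmp { CMP_LT, CMP_LE, CMP_GT, CMP_GE, CMP_EQ, CMP_NE };

/* Swap the sense of a comparison when its operands are exchanged.  */
static enum cmp
swap_cmp (enum cmp op)
{
  switch (op)
    {
    case CMP_LT: return CMP_GT;
    case CMP_LE: return CMP_GE;
    case CMP_GT: return CMP_LT;
    case CMP_GE: return CMP_LE;
    default:     return op;   /* == and != are symmetric.  */
    }
}

/* A condition: comparison code plus a flag saying whether the constant is
   currently the first operand.  */
struct cond { enum cmp code; int const_is_first; };

static void
canonicalize (struct cond *c)
{
  if (c->const_is_first)
    {
      c->code = swap_cmp (c->code);   /* "5 < x" becomes "x > 5".  */
      c->const_is_first = 0;
    }
}

int
main (void)
{
  struct cond c = { CMP_LT, 1 };      /* Models "5 < x".  */
  canonicalize (&c);
  printf ("code is CMP_GT: %d, constant now second: %d\n",
          c.code == CMP_GT, !c.const_is_first);
  return 0;
}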
/* We are exiting E->src, see if E->dest ends with a conditional
jump which has a known value when reached via E.
Special care is necessary if E is a back edge in the CFG as we
may have already recorded equivalences for E->dest into our
various tables, including the result of the conditional at
the end of E->dest. Threading opportunities are severely
limited in that case to avoid short-circuiting the loop
incorrectly.
Note it is quite common for the first block inside a loop to
end with a conditional which is either always true or always
false when reached via the loop backedge. Thus we do not want
to blindly disable threading across a loop backedge. */
void
thread_across_edge (tree dummy_cond,
edge e,
bool handle_dominating_asserts,
VEC(tree, heap) **stack,
tree (*simplify) (tree))
{
tree stmt;
stmt_count = 0;
/* PHIs create temporary equivalences. */
if (!record_temporary_equivalences_from_phis (e, stack))
goto fail;
/* Now walk each statement recording any context sensitive
temporary equivalences we can detect. */
stmt = record_temporary_equivalences_from_stmts_at_dest (e, stack, simplify);
if (!stmt)
goto fail;
/* If we stopped at a COND_EXPR or SWITCH_EXPR, see if we know which arm
will be taken. */
if (TREE_CODE (stmt) == COND_EXPR
|| TREE_CODE (stmt) == GOTO_EXPR
|| TREE_CODE (stmt) == SWITCH_EXPR)
{
tree cond;
/* Extract and simplify the condition. */
cond = simplify_control_stmt_condition (e, stmt, dummy_cond, simplify, handle_dominating_asserts);
if (cond && is_gimple_min_invariant (cond))
{
edge taken_edge = find_taken_edge (e->dest, cond);
basic_block dest = (taken_edge ? taken_edge->dest : NULL);
if (dest == e->dest)
goto fail;
remove_temporary_equivalences (stack);
register_jump_thread (e, taken_edge);
return;
}
}
fail:
remove_temporary_equivalences (stack);
}
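/* Aside: a hypothetical C fragment of the sort this machinery targets (all
   names invented).  When example() is entered with x > 10, the value of a at
   the second test is already known on that path, so the jump from the first
   arm can be threaded directly to the use_a() branch, bypassing the second
   conditional at run time.  */

#include <stdio.h>

static void use_a (void)  { puts ("a was set"); }
static void skip_a (void) { puts ("a was not set"); }

void
example (int x)
{
  int a;

  if (x > 10)
    a = 1;
  else
    a = 0;

  if (a)       /* Known true when reached from the x > 10 arm.  */
    use_a ();
  else
    skip_a ();
}

int
main (void)
{
  example (42);
  example (3);
  return 0;
}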
@@ -2755,7 +2755,24 @@ find_conditional_asserts (basic_block bb)
/* Remove the COND_EXPR operands from the FOUND_IN_SUBGRAPH bitmap.
Otherwise, when we finish traversing each of the sub-graphs, we
won't know whether the variables were found in the sub-graphs or
if they had been found in a block upstream from BB. */
if they had been found in a block upstream from BB.
This is actually a bad idea in some cases, particularly jump
threading. Consider a CFG like the following:

    0
   /|
  1 |
   \|
    2
   / \
  3   4
Assume that one or more operands in the conditional at the
end of block 0 are used in a conditional in block 2, but not
anywhere in block 1. In this case we will not insert any
assert statements in block 1, which may cause us to miss
opportunities to optimize, particularly for jump threading. */
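/* Aside: a hypothetical source fragment shaped like the CFG in the comment
   above (all names invented).  The first test of x ends block 0; block 1 is
   the then-arm and never mentions x; block 2 tests x again.  Without an
   ASSERT_EXPR for x in block 1, the range implied by the first test is not
   visible when the second test is analyzed along the 0 -> 1 -> 2 path.  */

#include <stdio.h>

static void f (void) { puts ("block 1"); }
static void g (void) { puts ("block 3"); }
static void h (void) { puts ("block 4"); }

void
example (int x)
{
  if (x > 0)      /* block 0: tests x */
    f ();         /* block 1: does not mention x */

  if (x > 10)     /* block 2: tests x again */
    g ();         /* block 3 */
  else
    h ();         /* block 4 */
}

int
main (void)
{
  example (5);
  return 0;
}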
FOR_EACH_SSA_TREE_OPERAND (op, last, iter, SSA_OP_USE)
RESET_BIT (found_in_subgraph, SSA_NAME_VERSION (op));
@@ -4202,6 +4219,157 @@ simplify_stmt_using_ranges (tree stmt)
}
}
/* Stack of dest,src equivalency pairs that need to be restored after
each attempt to thread a block's incoming edge to an outgoing edge.
A NULL entry is used to mark the end of pairs which need to be
restored. */
static VEC(tree,heap) *stack;
/* A trivial wrapper so that we can present the generic jump
threading code with a simple API for simplifying statements. */
static tree
simplify_stmt_for_jump_threading (tree stmt)
{
/* We only use VRP information to simplify conditionals. This is
overly conservative, but it's unclear if doing more would be
worth the compile time cost. */
if (TREE_CODE (stmt) != COND_EXPR)
return NULL;
return vrp_evaluate_conditional (COND_EXPR_COND (stmt), true);
}
/* Blocks which have more than one predecessor and more than
one successor present jump threading opportunities. ie,
when the block is reached from a specific predecessor, we
may be able to determine which of the outgoing edges will
be traversed. When this optimization applies, we are able
to avoid conditionals at runtime and we may expose secondary
optimization opportunities.
This routine is effectively a driver for the generic jump
threading code. It basically just presents the generic code
with edges that may be suitable for jump threading.
Unlike DOM, we do not iterate VRP if jump threading was successful.
While iterating may expose new opportunities for VRP, it is expected
those opportunities would be very limited and the compile time cost
to expose those opportunities would be significant.
As jump threading opportunities are discovered, they are registered
for later realization. */
static void
identify_jump_threads (void)
{
basic_block bb;
tree dummy;
/* Ugh. When substituting values earlier in this pass we can
wipe the dominance information. So rebuild the dominator
information as we need it within the jump threading code. */
calculate_dominance_info (CDI_DOMINATORS);
/* We do not allow VRP information to be used for jump threading
across a back edge in the CFG. Otherwise it becomes too
difficult to avoid eliminating loop exit tests. Of course
EDGE_DFS_BACK is not accurate at this time so we have to
recompute it. */
mark_dfs_back_edges ();
/* Allocate our unwinder stack to unwind any temporary equivalences
that might be recorded. */
stack = VEC_alloc (tree, heap, 20);
/* To avoid lots of silly node creation, we create a single
conditional and just modify it in-place when attempting to
thread jumps. */
dummy = build2 (EQ_EXPR, boolean_type_node, NULL, NULL);
dummy = build3 (COND_EXPR, void_type_node, dummy, NULL, NULL);
/* Walk through all the blocks finding those which present a
potential jump threading opportunity. We could set this up
as a dominator walker and record data during the walk, but
I doubt it's worth the effort for the classes of jump
threading opportunities we are trying to identify at this
point in compilation. */
FOR_EACH_BB (bb)
{
tree last, cond;
/* If the generic jump threading code does not find this block
interesting, then there is nothing to do. */
if (! potentially_threadable_block (bb))
continue;
/* We only care about blocks ending in a COND_EXPR. While there
may be some value in handling SWITCH_EXPR here, I doubt it's
terribly important. */
last = bsi_stmt (bsi_last (bb));
if (TREE_CODE (last) != COND_EXPR)
continue;
/* We're basically looking for any kind of conditional with
integral type arguments. */
cond = COND_EXPR_COND (last);
if ((TREE_CODE (cond) == SSA_NAME
&& INTEGRAL_TYPE_P (TREE_TYPE (cond)))
|| (COMPARISON_CLASS_P (cond)
&& TREE_CODE (TREE_OPERAND (cond, 0)) == SSA_NAME
&& INTEGRAL_TYPE_P (TREE_TYPE (TREE_OPERAND (cond, 0)))
&& (TREE_CODE (TREE_OPERAND (cond, 1)) == SSA_NAME
|| is_gimple_min_invariant (TREE_OPERAND (cond, 1)))
&& INTEGRAL_TYPE_P (TREE_TYPE (TREE_OPERAND (cond, 1)))))
{
edge_iterator ei;
edge e;
/* We've got a block with multiple predecessors and multiple
successors which also ends in a suitable conditional. For
each predecessor, see if we can thread it to a specific
successor. */
FOR_EACH_EDGE (e, ei, bb->preds)
{
/* Do not thread across back edges or abnormal edges
in the CFG. */
if (e->flags & (EDGE_DFS_BACK | EDGE_COMPLEX))
continue;
thread_across_edge (dummy, e, true,
&stack,
simplify_stmt_for_jump_threading);
}
}
}
/* We do not actually update the CFG or SSA graphs at this point as
ASSERT_EXPRs are still in the IL and cfg cleanup code does not yet
handle ASSERT_EXPRs gracefully. */
}
/* We identified all the jump threading opportunities earlier, but could
not transform the CFG at that time. This routine transforms the
CFG and arranges for the dominator tree to be rebuilt if necessary.
Note the SSA graph update will occur during the normal TODO
processing by the pass manager. */
static void
finalize_jump_threads (void)
{
bool cfg_altered = false;
cfg_altered = thread_through_all_blocks ();
/* If we threaded jumps, then we need to recompute the dominance
information, to safely do that we must clean up the CFG first. */
if (cfg_altered)
{
free_dominance_info (CDI_DOMINATORS);
cleanup_tree_cfg ();
calculate_dominance_info (CDI_DOMINATORS);
}
VEC_free (tree, heap, stack);
}
/* Traverse all the blocks folding conditionals with known ranges. */
@@ -4246,6 +4414,10 @@ vrp_finalize (void)
substitute_and_fold (single_val_range, true);
/* We must identify jump threading opportunities before we release
the datastructures built by VRP. */
identify_jump_threads ();
/* Free allocated memory. */
for (i = 0; i < num_ssa_names; i++)
if (vr_value[i])
@@ -4323,7 +4495,12 @@ execute_vrp (void)
current_loops = NULL;
}
/* ASSERT_EXPRs must be removed before finalizing jump threads
as finalizing jump threads calls the CFG cleanup code which
does not properly handle ASSERT_EXPRs. */
remove_range_assertions ();
finalize_jump_threads ();
}
static bool