Commit 3c2a8ed0 by David Malcolm (committed by David Malcolm)

dump_printf: use %T and %G throughout

As promised at Cauldron, this patch uses the %T and %G format codes in
dump_printf and dump_printf_loc calls to eliminate calls to

  dump_generic_expr (MSG_*, arg, TDF_SLIM)  (via %T)

and

  dump_gimple_stmt (MSG_*, TDF_SLIM, stmt, 0)  (via %G)

throughout the middle-end, simplifying numerous dump callsites.

A few calls to these functions didn't match the above patterns; I didn't
touch these.  I wasn't able to use %E anywhere.
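
For example, a typical conversion has the following shape ("examining: " is
an illustrative message rather than one of the real callsites, which are in
the diff below); %T consumes a tree argument and %G a gimple * argument:

  Before:

    if (dump_enabled_p ())
      {
        dump_printf_loc (MSG_NOTE, vect_location, "examining: ");
        dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
      }

  After:

    if (dump_enabled_p ())
      dump_printf_loc (MSG_NOTE, vect_location, "examining: %G", stmt);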

gcc/ChangeLog:
	* tree-data-ref.c (runtime_alias_check_p): Use formatted printing
	with %T in place of calls to dump_generic_expr.
	(prune_runtime_alias_test_list): Likewise.
	(create_runtime_alias_checks): Likewise.
	* tree-vect-data-refs.c (vect_check_nonzero_value): Likewise.
	(vect_analyze_data_ref_dependence): Likewise.
	(vect_slp_analyze_data_ref_dependence): Likewise.
	(vect_record_base_alignment): Likewise.  Use %G in place of calls
	to dump_gimple_stmt.
	(vect_compute_data_ref_alignment): Likewise.
	(verify_data_ref_alignment): Likewise.
	(vect_find_same_alignment_drs): Likewise.
	(vect_analyze_group_access_1): Likewise.
	(vect_analyze_data_ref_accesses): Likewise.
	(dependence_distance_ge_vf): Likewise.
	(dump_lower_bound): Likewise.
	(vect_prune_runtime_alias_test_list): Likewise.
	(vect_find_stmt_data_reference): Likewise.
	(vect_analyze_data_refs): Likewise.
	(vect_create_addr_base_for_vector_ref): Likewise.
	(vect_create_data_ref_ptr): Likewise.
	* tree-vect-loop-manip.c (vect_set_loop_condition): Likewise.
	(vect_can_advance_ivs_p): Likewise.
	(vect_update_ivs_after_vectorizer): Likewise.
	(vect_gen_prolog_loop_niters): Likewise.
	(vect_prepare_for_masked_peels): Likewise.
	* tree-vect-loop.c (vect_determine_vf_for_stmt): Likewise.
	(vect_determine_vectorization_factor): Likewise.
	(vect_is_simple_iv_evolution): Likewise.
	(vect_analyze_scalar_cycles_1): Likewise.
	(vect_analyze_loop_operations): Likewise.
	(report_vect_op): Likewise.
	(vect_is_slp_reduction): Likewise.
	(check_reduction_path): Likewise.
	(vect_is_simple_reduction): Likewise.
	(vect_create_epilog_for_reduction): Likewise.
	(vect_finalize_reduction): Likewise.
	(vectorizable_induction): Likewise.
	(vect_transform_loop_stmt): Likewise.
	(vect_transform_loop): Likewise.
	(optimize_mask_stores): Likewise.
	* tree-vect-patterns.c (vect_pattern_detected): Likewise.
	(vect_split_statement): Likewise.
	(vect_recog_over_widening_pattern): Likewise.
	(vect_recog_average_pattern): Likewise.
	(vect_determine_min_output_precision_1): Likewise.
	(vect_determine_precisions_from_range): Likewise.
	(vect_determine_precisions_from_users): Likewise.
	(vect_mark_pattern_stmts): Likewise.
	(vect_pattern_recog_1): Likewise.
	* tree-vect-slp.c (vect_get_and_check_slp_defs): Likewise.
	(vect_record_max_nunits): Likewise.
	(vect_build_slp_tree_1): Likewise.
	(vect_build_slp_tree_2): Likewise.
	(vect_print_slp_tree): Likewise.
	(vect_analyze_slp_instance): Likewise.
	(vect_detect_hybrid_slp_stmts): Likewise.
	(vect_detect_hybrid_slp_1): Likewise.
	(vect_slp_analyze_operations): Likewise.
	(vect_slp_analyze_bb_1): Likewise.
	(vect_transform_slp_perm_load): Likewise.
	(vect_schedule_slp_instance): Likewise.
	* tree-vect-stmts.c (vect_mark_relevant): Likewise.
	(vect_mark_stmts_to_be_vectorized): Likewise.
	(vect_init_vector_1): Likewise.
	(vect_get_vec_def_for_operand): Likewise.
	(vect_finish_stmt_generation_1): Likewise.
	(vect_check_load_store_mask): Likewise.
	(vectorizable_call): Likewise.
	(vectorizable_conversion): Likewise.
	(vectorizable_operation): Likewise.
	(vectorizable_load): Likewise.
	(vect_analyze_stmt): Likewise.
	(vect_is_simple_use): Likewise.
	(vect_get_vector_types_for_stmt): Likewise.
	(vect_get_mask_type_for_stmt): Likewise.
	* tree-vectorizer.c (increase_alignment): Likewise.

From-SVN: r264424
--- a/gcc/tree-data-ref.c
+++ b/gcc/tree-data-ref.c
@@ -1322,13 +1322,9 @@ bool
 runtime_alias_check_p (ddr_p ddr, struct loop *loop, bool speed_p)
 {
   if (dump_enabled_p ())
-    {
-      dump_printf (MSG_NOTE, "consider run-time aliasing test between ");
-      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (DDR_A (ddr)));
-      dump_printf (MSG_NOTE, " and ");
-      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (DDR_B (ddr)));
-      dump_printf (MSG_NOTE, "\n");
-    }
+    dump_printf (MSG_NOTE,
+                 "consider run-time aliasing test between %T and %T\n",
+                 DR_REF (DDR_A (ddr)), DR_REF (DDR_B (ddr)));
 
   if (!speed_p)
     {
@@ -1469,17 +1465,9 @@ prune_runtime_alias_test_list (vec<dr_with_seg_len_pair_t> *alias_pairs,
       if (*dr_a1 == *dr_a2 && *dr_b1 == *dr_b2)
         {
           if (dump_enabled_p ())
-            {
-              dump_printf (MSG_NOTE, "found equal ranges ");
-              dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_a1->dr));
-              dump_printf (MSG_NOTE, ", ");
-              dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_b1->dr));
-              dump_printf (MSG_NOTE, " and ");
-              dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_a2->dr));
-              dump_printf (MSG_NOTE, ", ");
-              dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_b2->dr));
-              dump_printf (MSG_NOTE, "\n");
-            }
+            dump_printf (MSG_NOTE, "found equal ranges %T, %T and %T, %T\n",
+                         DR_REF (dr_a1->dr), DR_REF (dr_b1->dr),
+                         DR_REF (dr_a2->dr), DR_REF (dr_b2->dr));
           alias_pairs->ordered_remove (i--);
           continue;
         }
@@ -1576,17 +1564,9 @@ prune_runtime_alias_test_list (vec<dr_with_seg_len_pair_t> *alias_pairs,
               dr_a1->align = MIN (dr_a1->align, new_align);
             }
           if (dump_enabled_p ())
-            {
-              dump_printf (MSG_NOTE, "merging ranges for ");
-              dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_a1->dr));
-              dump_printf (MSG_NOTE, ", ");
-              dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_b1->dr));
-              dump_printf (MSG_NOTE, " and ");
-              dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_a2->dr));
-              dump_printf (MSG_NOTE, ", ");
-              dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_b2->dr));
-              dump_printf (MSG_NOTE, "\n");
-            }
+            dump_printf (MSG_NOTE, "merging ranges for %T, %T and %T, %T\n",
+                         DR_REF (dr_a1->dr), DR_REF (dr_b1->dr),
+                         DR_REF (dr_a2->dr), DR_REF (dr_b2->dr));
           alias_pairs->ordered_remove (i);
           i--;
         }
@@ -1925,13 +1905,9 @@ create_runtime_alias_checks (struct loop *loop,
       const dr_with_seg_len& dr_b = (*alias_pairs)[i].second;
 
       if (dump_enabled_p ())
-        {
-          dump_printf (MSG_NOTE, "create runtime check for data references ");
-          dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_a.dr));
-          dump_printf (MSG_NOTE, " and ");
-          dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_b.dr));
-          dump_printf (MSG_NOTE, "\n");
-        }
+        dump_printf (MSG_NOTE,
+                     "create runtime check for data references %T and %T\n",
+                     DR_REF (dr_a.dr), DR_REF (dr_b.dr));
 
       /* Create condition expression for each pair data references. */
       create_intersect_range_checks (loop, &part_cond_expr, dr_a, dr_b);
--- a/gcc/tree-vect-loop-manip.c
+++ b/gcc/tree-vect-loop-manip.c
@@ -943,10 +943,8 @@ vect_set_loop_condition (struct loop *loop, loop_vec_info loop_vinfo,
   gsi_remove (&loop_cond_gsi, true);
 
   if (dump_enabled_p ())
-    {
-      dump_printf_loc (MSG_NOTE, vect_location, "New loop exit condition: ");
-      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, cond_stmt, 0);
-    }
+    dump_printf_loc (MSG_NOTE, vect_location, "New loop exit condition: %G",
+                     cond_stmt);
 }
 
 /* Helper routine of slpeel_tree_duplicate_loop_to_edge_cfg.
@@ -1383,10 +1381,8 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
       gphi *phi = gsi.phi ();
       stmt_vec_info phi_info = loop_vinfo->lookup_stmt (phi);
       if (dump_enabled_p ())
-        {
-          dump_printf_loc (MSG_NOTE, vect_location, "Analyze phi: ");
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi_info->stmt, 0);
-        }
+        dump_printf_loc (MSG_NOTE, vect_location, "Analyze phi: %G",
+                         phi_info->stmt);
 
       /* Skip virtual phi's. The data dependences that are associated with
          virtual defs/uses (i.e., memory accesses) are analyzed elsewhere.
@@ -1506,11 +1502,8 @@ vect_update_ivs_after_vectorizer (loop_vec_info loop_vinfo,
       gphi *phi1 = gsi1.phi ();
       stmt_vec_info phi_info = loop_vinfo->lookup_stmt (phi);
       if (dump_enabled_p ())
-        {
-          dump_printf_loc (MSG_NOTE, vect_location,
-                           "vect_update_ivs_after_vectorizer: phi: ");
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0);
-        }
+        dump_printf_loc (MSG_NOTE, vect_location,
+                         "vect_update_ivs_after_vectorizer: phi: %G", phi);
 
       /* Skip reduction and virtual phis. */
       if (!iv_phi_p (phi_info))
@@ -1677,12 +1670,8 @@ vect_gen_prolog_loop_niters (loop_vec_info loop_vinfo,
     }
 
   if (dump_enabled_p ())
-    {
-      dump_printf_loc (MSG_NOTE, vect_location,
-                       "niters for prolog loop: ");
-      dump_generic_expr (MSG_NOTE, TDF_SLIM, iters);
-      dump_printf (MSG_NOTE, "\n");
-    }
+    dump_printf_loc (MSG_NOTE, vect_location,
+                     "niters for prolog loop: %T\n", iters);
 
   var = create_tmp_var (niters_type, "prolog_loop_niters");
   iters_name = force_gimple_operand (iters, &new_stmts, false, var);
@@ -1801,12 +1790,9 @@ vect_prepare_for_masked_peels (loop_vec_info loop_vinfo)
     }
 
   if (dump_enabled_p ())
-    {
-      dump_printf_loc (MSG_NOTE, vect_location,
-                       "misalignment for fully-masked loop: ");
-      dump_generic_expr (MSG_NOTE, TDF_SLIM, misalign_in_elems);
-      dump_printf (MSG_NOTE, "\n");
-    }
+    dump_printf_loc (MSG_NOTE, vect_location,
+                     "misalignment for fully-masked loop: %T\n",
+                     misalign_in_elems);
 
   LOOP_VINFO_MASK_SKIP_NITERS (loop_vinfo) = misalign_in_elems;
 }
--- a/gcc/tree-vect-patterns.c
+++ b/gcc/tree-vect-patterns.c
@@ -88,10 +88,7 @@ static void
 vect_pattern_detected (const char *name, gimple *stmt)
 {
   if (dump_enabled_p ())
-    {
-      dump_printf_loc (MSG_NOTE, vect_location, "%s: detected: ", name);
-      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
-    }
+    dump_printf_loc (MSG_NOTE, vect_location, "%s: detected: %G", name, stmt);
 }
 
 /* Associate pattern statement PATTERN_STMT with ORIG_STMT_INFO and
@@ -639,11 +636,8 @@ vect_split_statement (stmt_vec_info stmt2_info, tree new_rhs,
       vect_init_pattern_stmt (stmt1, orig_stmt2_info, vectype);
 
       if (dump_enabled_p ())
-        {
-          dump_printf_loc (MSG_NOTE, vect_location,
-                           "Splitting pattern statement: ");
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt2_info->stmt, 0);
-        }
+        dump_printf_loc (MSG_NOTE, vect_location,
+                         "Splitting pattern statement: %G", stmt2_info->stmt);
 
       /* Since STMT2_INFO is a pattern statement, we can change it
          in-situ without worrying about changing the code for the
@@ -652,10 +646,9 @@ vect_split_statement (stmt_vec_info stmt2_info, tree new_rhs,
       if (dump_enabled_p ())
         {
-          dump_printf_loc (MSG_NOTE, vect_location, "into: ");
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt1, 0);
-          dump_printf_loc (MSG_NOTE, vect_location, "and: ");
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt2_info->stmt, 0);
+          dump_printf_loc (MSG_NOTE, vect_location, "into: %G", stmt1);
+          dump_printf_loc (MSG_NOTE, vect_location, "and: %G",
+                           stmt2_info->stmt);
         }
 
       gimple_seq *def_seq = &STMT_VINFO_PATTERN_DEF_SEQ (orig_stmt2_info);
@@ -683,11 +676,8 @@ vect_split_statement (stmt_vec_info stmt2_info, tree new_rhs,
         return false;
 
       if (dump_enabled_p ())
-        {
-          dump_printf_loc (MSG_NOTE, vect_location,
-                           "Splitting statement: ");
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt2_info->stmt, 0);
-        }
+        dump_printf_loc (MSG_NOTE, vect_location,
+                         "Splitting statement: %G", stmt2_info->stmt);
 
       /* Add STMT1 as a singleton pattern definition sequence. */
       gimple_seq *def_seq = &STMT_VINFO_PATTERN_DEF_SEQ (stmt2_info);
@@ -702,10 +692,8 @@ vect_split_statement (stmt_vec_info stmt2_info, tree new_rhs,
       if (dump_enabled_p ())
         {
           dump_printf_loc (MSG_NOTE, vect_location,
-                           "into pattern statements: ");
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt1, 0);
-          dump_printf_loc (MSG_NOTE, vect_location, "and: ");
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, new_stmt2, 0);
+                           "into pattern statements: %G", stmt1);
+          dump_printf_loc (MSG_NOTE, vect_location, "and: %G", new_stmt2);
         }
 
       return true;
@@ -1662,13 +1650,8 @@ vect_recog_over_widening_pattern (stmt_vec_info last_stmt_info, tree *type_out)
     return NULL;
 
   if (dump_enabled_p ())
-    {
-      dump_printf_loc (MSG_NOTE, vect_location, "demoting ");
-      dump_generic_expr (MSG_NOTE, TDF_SLIM, type);
-      dump_printf (MSG_NOTE, " to ");
-      dump_generic_expr (MSG_NOTE, TDF_SLIM, new_type);
-      dump_printf (MSG_NOTE, "\n");
-    }
+    dump_printf_loc (MSG_NOTE, vect_location, "demoting %T to %T\n",
+                     type, new_type);
 
   /* Calculate the rhs operands for an operation on NEW_TYPE. */
   tree ops[3] = {};
@@ -1684,11 +1667,8 @@ vect_recog_over_widening_pattern (stmt_vec_info last_stmt_info, tree *type_out)
   gimple_set_location (pattern_stmt, gimple_location (last_stmt));
 
   if (dump_enabled_p ())
-    {
-      dump_printf_loc (MSG_NOTE, vect_location,
-                       "created pattern stmt: ");
-      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, pattern_stmt, 0);
-    }
+    dump_printf_loc (MSG_NOTE, vect_location,
+                     "created pattern stmt: %G", pattern_stmt);
 
   pattern_stmt = vect_convert_output (last_stmt_info, type,
                                       pattern_stmt, new_vectype);
@@ -1831,11 +1811,8 @@ vect_recog_average_pattern (stmt_vec_info last_stmt_info, tree *type_out)
   gimple_set_location (average_stmt, gimple_location (last_stmt));
 
   if (dump_enabled_p ())
-    {
-      dump_printf_loc (MSG_NOTE, vect_location,
-                       "created pattern stmt: ");
-      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, average_stmt, 0);
-    }
+    dump_printf_loc (MSG_NOTE, vect_location,
+                     "created pattern stmt: %G", average_stmt);
 
   return vect_convert_output (last_stmt_info, type, average_stmt, new_vectype);
 }
@@ -4411,12 +4388,9 @@ vect_determine_min_output_precision_1 (stmt_vec_info stmt_info, tree lhs)
     }
 
   if (dump_enabled_p ())
-    {
-      dump_printf_loc (MSG_NOTE, vect_location, "only the low %d bits of ",
-                       precision);
-      dump_generic_expr (MSG_NOTE, TDF_SLIM, lhs);
-      dump_printf (MSG_NOTE, " are significant\n");
-    }
+    dump_printf_loc (MSG_NOTE, vect_location,
+                     "only the low %d bits of %T are significant\n",
+                     precision, lhs);
   stmt_info->min_output_precision = precision;
   return true;
 }
@@ -4524,13 +4498,10 @@ vect_determine_precisions_from_range (stmt_vec_info stmt_info, gassign *stmt)
     return;
 
   if (dump_enabled_p ())
-    {
-      dump_printf_loc (MSG_NOTE, vect_location, "can narrow to %s:%d"
-                       " without loss of precision: ",
-                       sign == SIGNED ? "signed" : "unsigned",
-                       value_precision);
-      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
-    }
+    dump_printf_loc (MSG_NOTE, vect_location, "can narrow to %s:%d"
+                     " without loss of precision: %G",
+                     sign == SIGNED ? "signed" : "unsigned",
+                     value_precision, stmt);
 
   vect_set_operation_type (stmt_info, type, value_precision, sign);
   vect_set_min_input_precision (stmt_info, type, value_precision);
@@ -4599,13 +4570,10 @@ vect_determine_precisions_from_users (stmt_vec_info stmt_info, gassign *stmt)
   if (operation_precision < precision)
     {
       if (dump_enabled_p ())
-        {
-          dump_printf_loc (MSG_NOTE, vect_location, "can narrow to %s:%d"
-                           " without affecting users: ",
-                           TYPE_UNSIGNED (type) ? "unsigned" : "signed",
-                           operation_precision);
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
-        }
+        dump_printf_loc (MSG_NOTE, vect_location, "can narrow to %s:%d"
+                         " without affecting users: %G",
+                         TYPE_UNSIGNED (type) ? "unsigned" : "signed",
+                         operation_precision, stmt);
       vect_set_operation_type (stmt_info, type, operation_precision,
                                TYPE_SIGN (type));
     }
@@ -4727,11 +4695,8 @@ vect_mark_pattern_stmts (stmt_vec_info orig_stmt_info, gimple *pattern_stmt,
          sequence. */
       orig_pattern_stmt = orig_stmt_info->stmt;
       if (dump_enabled_p ())
-        {
-          dump_printf_loc (MSG_NOTE, vect_location,
-                           "replacing earlier pattern ");
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, orig_pattern_stmt, 0);
-        }
+        dump_printf_loc (MSG_NOTE, vect_location,
+                         "replacing earlier pattern %G", orig_pattern_stmt);
 
       /* To keep the book-keeping simple, just swap the lhs of the
          old and new statements, so that the old one has a valid but
@@ -4741,10 +4706,7 @@ vect_mark_pattern_stmts (stmt_vec_info orig_stmt_info, gimple *pattern_stmt,
       gimple_set_lhs (pattern_stmt, old_lhs);
 
       if (dump_enabled_p ())
-        {
-          dump_printf_loc (MSG_NOTE, vect_location, "with ");
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, pattern_stmt, 0);
-        }
+        dump_printf_loc (MSG_NOTE, vect_location, "with %G", pattern_stmt);
 
       /* Switch to the statement that ORIG replaces. */
       orig_stmt_info = STMT_VINFO_RELATED_STMT (orig_stmt_info);
@@ -4830,11 +4792,9 @@ vect_pattern_recog_1 (vect_recog_func *recog_func, stmt_vec_info stmt_info)
   /* Found a vectorizable pattern. */
   if (dump_enabled_p ())
-    {
-      dump_printf_loc (MSG_NOTE, vect_location,
-                       "%s pattern recognized: ", recog_func->name);
-      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, pattern_stmt, 0);
-    }
+    dump_printf_loc (MSG_NOTE, vect_location,
+                     "%s pattern recognized: %G",
+                     recog_func->name, pattern_stmt);
 
   /* Mark the stmts that are involved in the pattern. */
   vect_mark_pattern_stmts (stmt_info, pattern_stmt, pattern_vectype);
--- a/gcc/tree-vectorizer.c
+++ b/gcc/tree-vectorizer.c
@@ -1425,9 +1425,7 @@ increase_alignment (void)
       if (alignment && vect_can_force_dr_alignment_p (decl, alignment))
         {
          vnode->increase_alignment (alignment);
-          dump_printf (MSG_NOTE, "Increasing alignment of decl: ");
-          dump_generic_expr (MSG_NOTE, TDF_SLIM, decl);
-          dump_printf (MSG_NOTE, "\n");
+          dump_printf (MSG_NOTE, "Increasing alignment of decl: %T\n", decl);
         }
     }