Commit 49ab4621 by Richard Biener

tree-vect-data-refs.c (vect_enhance_data_refs_alignment): When all DRs have unknown misalignment, do not always peel when there is a store but apply the same costing model as if there were only loads.

2017-05-03  Richard Biener  <rguenther@suse.de>

	* tree-vect-data-refs.c (vect_enhance_data_refs_alignment):
	When all DRs have unknown misalignment, do not always peel
	when there is a store but apply the same costing model as if
	there were only loads.

	* gcc.dg/vect/costmodel/x86_64/costmodel-alignpeel.c: New testcase.

From-SVN: r247544
parent 8d5f521a
2017-05-03 Richard Biener <rguenther@suse.de>
* tree-vect-data-refs.c (vect_enhance_data_refs_alignment):
When all DRs have unknown misalignment, do not always peel
when there is a store but apply the same costing model as if
there were only loads.
2017-05-03 Richard Biener <rguenther@suse.de>
Revert
PR tree-optimization/80492
* tree-ssa-alias.c (decl_refs_may_alias_p): Handle
...
2017-05-03 Richard Biener <rguenther@suse.de>
* gcc.dg/vect/costmodel/x86_64/costmodel-alignpeel.c: New testcase.
2017-05-03 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/79472
...
/* { dg-do compile } */
void func(double * __restrict__ v1, double * v2, unsigned n)
{
for (unsigned i = 0; i < n; ++i)
v1[i] = v2[i];
}
/* { dg-final { scan-tree-dump-not "Alignment of access forced using peeling" "vect" } } */
@@ -1715,18 +1715,18 @@ vect_enhance_data_refs_alignment (loop_vec_info loop_vinfo)
 	  dr0 = first_store;
 	}
-      /* In case there are only loads with different unknown misalignments, use
-	 peeling only if it may help to align other accesses in the loop or
+      /* Use peeling only if it may help to align other accesses in the loop or
 	 if it may help improving load bandwith when we'd end up using
 	 unaligned loads.  */
       tree dr0_vt = STMT_VINFO_VECTYPE (vinfo_for_stmt (DR_STMT (dr0)));
-      if (!first_store
-	  && !STMT_VINFO_SAME_ALIGN_REFS (
-		  vinfo_for_stmt (DR_STMT (dr0))).length ()
+      if (STMT_VINFO_SAME_ALIGN_REFS
+	    (vinfo_for_stmt (DR_STMT (dr0))).length () == 0
 	  && (vect_supportable_dr_alignment (dr0, false)
 	      != dr_unaligned_supported
-	      || (builtin_vectorization_cost (vector_load, dr0_vt, 0)
-		  == builtin_vectorization_cost (unaligned_load, dr0_vt, -1))))
+	      || (DR_IS_READ (dr0)
+		  && (builtin_vectorization_cost (vector_load, dr0_vt, 0)
+		      == builtin_vectorization_cost (unaligned_load,
+						     dr0_vt, -1)))))
 	do_peeling = false;
     }
...
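
The substance of the hunk above is that the old !first_store guard is gone and the load-cost comparison is now guarded by DR_IS_READ (dr0), so a loop containing a store is run through the same costing decision as a load-only loop instead of keeping the peeling decision in force unconditionally. As a rough illustration only, here is a minimal, standalone sketch of that decision. Every name in it (peel_candidate, peeling_for_alignment_worthwhile_p, the support and cost fields) is a hypothetical stand-in for the GCC entities named in the diff, not the actual vectorizer implementation.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for vect_supportable_dr_alignment's result.  */
enum misalign_support
{
  OTHER_SCHEME,         /* misalignment handled some other way */
  UNALIGNED_SUPPORTED   /* plain unaligned vector access available
                           (dr_unaligned_supported in the patch) */
};

/* Hypothetical summary of the peeling candidate dr0.  */
struct peel_candidate
{
  bool is_read;                  /* DR_IS_READ (dr0) */
  unsigned same_align_refs;      /* STMT_VINFO_SAME_ALIGN_REFS (...).length () */
  enum misalign_support support; /* vect_supportable_dr_alignment (dr0, false) */
  int aligned_load_cost;         /* builtin_vectorization_cost (vector_load, ...) */
  int unaligned_load_cost;       /* builtin_vectorization_cost (unaligned_load, ...) */
};

/* Mirror of the patched condition: skip peeling when it would not align
   any other access and either plain unaligned accesses are not the scheme
   in use anyway, or, for a load, an unaligned load is costed the same as
   an aligned one.  After the patch this is evaluated whether dr0 is a load
   or a store; before it, any store in the loop bypassed the check.  */
static bool
peeling_for_alignment_worthwhile_p (const struct peel_candidate *c)
{
  if (c->same_align_refs == 0
      && (c->support != UNALIGNED_SUPPORTED
          || (c->is_read
              && c->aligned_load_cost == c->unaligned_load_cost)))
    return false;
  return true;
}

int
main (void)
{
  /* A lone load with unknown misalignment on a target where an unaligned
     load costs the same as an aligned one: peeling buys nothing.  */
  struct peel_candidate lone_load = { true, 0, UNALIGNED_SUPPORTED, 1, 1 };

  /* A candidate whose alignment is shared by two other references:
     peeling it aligns those as well, so it stays worthwhile.  */
  struct peel_candidate helps_others = { true, 2, UNALIGNED_SUPPORTED, 1, 2 };

  printf ("lone load:    peel = %d\n", peeling_for_alignment_worthwhile_p (&lone_load));
  printf ("helps others: peel = %d\n", peeling_for_alignment_worthwhile_p (&helps_others));
  return 0;
}

The new gcc.dg/vect/costmodel/x86_64 testcase above is meant to exercise this path: it contains a store with unknown misalignment, which the old !first_store guard would have exempted from this check, and the dg-final directive verifies that the vect dump no longer reports "Alignment of access forced using peeling" for the loop.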