Commit b9c25734 by Richard Sandiford, committed by Richard Sandiford

poly_int: ao_ref and vn_reference_op_t

This patch changes the offset, size and max_size fields
of ao_ref from HOST_WIDE_INT to poly_int64 and propagates
the change through the code that references it.  This includes
changing the off field of vn_reference_op_struct in the same way.
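
The new poly_int64 values have the form c0 + c1 * X, where X stands for a
runtime quantity such as the number of extra vector chunks on SVE-style
targets.  Two such values are not always comparable at compile time, so
poly-int.h replaces plain <, <= and == with "known_*" predicates (true for
every possible X) and "maybe_*" predicates (true for at least one X), and
much of this patch is a mechanical conversion to those predicates.  The
stand-alone sketch below models the semantics for a two-coefficient value;
it is an illustration only, not GCC's actual poly-int.h implementation, and
the helper names mirror but simplify the real ones.

    #include <cstdint>
    #include <iostream>

    /* Toy model: value = coeffs[0] + coeffs[1] * X, with X >= 0.  */
    struct toy_poly_int64
    {
      int64_t coeffs[2];
    };

    /* Holds for every X >= 0.  */
    static bool known_lt (toy_poly_int64 a, toy_poly_int64 b)
    {
      return a.coeffs[0] < b.coeffs[0] && a.coeffs[1] <= b.coeffs[1];
    }

    /* Holds for at least one X >= 0.  */
    static bool maybe_lt (toy_poly_int64 a, toy_poly_int64 b)
    {
      return a.coeffs[0] < b.coeffs[0] || a.coeffs[1] < b.coeffs[1];
    }

    static bool known_eq (toy_poly_int64 a, toy_poly_int64 b)
    {
      return a.coeffs[0] == b.coeffs[0] && a.coeffs[1] == b.coeffs[1];
    }

    /* ao_ref keeps -1 as the "unconstrained extent" marker; this screens the
       marker out, like known_size_p/ao_ref::max_size_known_p below.  */
    static bool known_size_p (toy_poly_int64 a)
    {
      return !known_eq (a, {-1, 0});
    }

    int main ()
    {
      toy_poly_int64 four_bytes = {4, 0};     /* compile-time constant 4 */
      toy_poly_int64 vector_bytes = {0, 16};  /* 16 * X, variable */
      std::cout << maybe_lt (four_bytes, vector_bytes)   /* 1: true once X >= 1 */
                << known_lt (four_bytes, vector_bytes)   /* 0: fails at X = 0 */
                << known_size_p (vector_bytes) << '\n';  /* 1 */
      return 0;
    }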

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* inchash.h (inchash::hash::add_poly_int): New function.
	* tree-ssa-alias.h (ao_ref::offset, ao_ref::size, ao_ref::max_size):
	Use poly_int64 rather than HOST_WIDE_INT.
	(ao_ref::max_size_known_p): New function.
	* tree-ssa-sccvn.h (vn_reference_op_struct::off): Use poly_int64_pod
	rather than HOST_WIDE_INT.
	* tree-ssa-alias.c (ao_ref_base): Apply get_ref_base_and_extent
	to temporaries until its interface is adjusted to match.
	(ao_ref_init_from_ptr_and_size): Handle polynomial offsets and sizes.
	(aliasing_component_refs_p, decl_refs_may_alias_p)
	(indirect_ref_may_alias_decl_p, indirect_refs_may_alias_p): Take
	the offsets and max_sizes as poly_int64s instead of HOST_WIDE_INTs.
	(refs_may_alias_p_1, stmt_kills_ref_p): Adjust for changes to
	ao_ref fields.
	* alias.c (ao_ref_from_mem): Likewise.
	* tree-ssa-dce.c (mark_aliased_reaching_defs_necessary_1): Likewise.
	* tree-ssa-dse.c (valid_ao_ref_for_dse, normalize_ref)
	(clear_bytes_written_by, setup_live_bytes_from_ref, compute_trims)
	(maybe_trim_complex_store, maybe_trim_constructor_store)
	(live_bytes_read, dse_classify_store): Likewise.
	* tree-ssa-sccvn.c (vn_reference_compute_hash, vn_reference_eq)
	(copy_reference_ops_from_ref, ao_ref_init_from_vn_reference)
	(fully_constant_vn_reference_p, valueize_refs_1): Likewise.
	(vn_reference_lookup_3): Likewise.
	* tree-ssa-uninit.c (warn_uninitialized_vars): Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255872
parent 5ffca72c

gcc/ChangeLog
@@ -2,6 +2,36 @@
 	    Alan Hayward  <alan.hayward@arm.com>
 	    David Sherwood  <david.sherwood@arm.com>
 
+	* inchash.h (inchash::hash::add_poly_int): New function.
+	* tree-ssa-alias.h (ao_ref::offset, ao_ref::size, ao_ref::max_size):
+	Use poly_int64 rather than HOST_WIDE_INT.
+	(ao_ref::max_size_known_p): New function.
+	* tree-ssa-sccvn.h (vn_reference_op_struct::off): Use poly_int64_pod
+	rather than HOST_WIDE_INT.
+	* tree-ssa-alias.c (ao_ref_base): Apply get_ref_base_and_extent
+	to temporaries until its interface is adjusted to match.
+	(ao_ref_init_from_ptr_and_size): Handle polynomial offsets and sizes.
+	(aliasing_component_refs_p, decl_refs_may_alias_p)
+	(indirect_ref_may_alias_decl_p, indirect_refs_may_alias_p): Take
+	the offsets and max_sizes as poly_int64s instead of HOST_WIDE_INTs.
+	(refs_may_alias_p_1, stmt_kills_ref_p): Adjust for changes to
+	ao_ref fields.
+	* alias.c (ao_ref_from_mem): Likewise.
+	* tree-ssa-dce.c (mark_aliased_reaching_defs_necessary_1): Likewise.
+	* tree-ssa-dse.c (valid_ao_ref_for_dse, normalize_ref)
+	(clear_bytes_written_by, setup_live_bytes_from_ref, compute_trims)
+	(maybe_trim_complex_store, maybe_trim_constructor_store)
+	(live_bytes_read, dse_classify_store): Likewise.
+	* tree-ssa-sccvn.c (vn_reference_compute_hash, vn_reference_eq)
+	(copy_reference_ops_from_ref, ao_ref_init_from_vn_reference)
+	(fully_constant_vn_reference_p, valueize_refs_1): Likewise.
+	(vn_reference_lookup_3): Likewise.
+	* tree-ssa-uninit.c (warn_uninitialized_vars): Likewise.
+
+2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
+	    Alan Hayward  <alan.hayward@arm.com>
+	    David Sherwood  <david.sherwood@arm.com>
+
 	* tree-ssa-alias.c (indirect_ref_may_alias_decl_p)
 	(indirect_refs_may_alias_p): Use ranges_may_overlap_p
 	instead of ranges_overlap_p.

gcc/alias.c
@@ -331,9 +331,9 @@ ao_ref_from_mem (ao_ref *ref, const_rtx mem)
   /* If MEM_OFFSET/MEM_SIZE get us outside of ref->offset/ref->max_size
      drop ref->ref.  */
   if (MEM_OFFSET (mem) < 0
-      || (ref->max_size != -1
-	  && ((MEM_OFFSET (mem) + MEM_SIZE (mem)) * BITS_PER_UNIT
-	      > ref->max_size)))
+      || (ref->max_size_known_p ()
+	  && maybe_gt ((MEM_OFFSET (mem) + MEM_SIZE (mem)) * BITS_PER_UNIT,
+		       ref->max_size)))
     ref->ref = NULL_TREE;
 
   /* Refine size and offset we got from analyzing MEM_EXPR by using
@@ -344,19 +344,18 @@ ao_ref_from_mem (ao_ref *ref, const_rtx mem)
 
   /* The MEM may extend into adjacent fields, so adjust max_size if
      necessary.  */
-  if (ref->max_size != -1
-      && ref->size > ref->max_size)
-    ref->max_size = ref->size;
+  if (ref->max_size_known_p ())
+    ref->max_size = upper_bound (ref->max_size, ref->size);
 
-  /* If MEM_OFFSET and MEM_SIZE get us outside of the base object of
+  /* If MEM_OFFSET and MEM_SIZE might get us outside of the base object of
      the MEM_EXPR punt.  This happens for STRICT_ALIGNMENT targets a lot.  */
   if (MEM_EXPR (mem) != get_spill_slot_decl (false)
-      && (ref->offset < 0
+      && (maybe_lt (ref->offset, 0)
	  || (DECL_P (ref->base)
	      && (DECL_SIZE (ref->base) == NULL_TREE
-		  || TREE_CODE (DECL_SIZE (ref->base)) != INTEGER_CST
-		  || wi::ltu_p (wi::to_offset (DECL_SIZE (ref->base)),
-				ref->offset + ref->size)))))
+		  || !poly_int_tree_p (DECL_SIZE (ref->base))
+		  || maybe_lt (wi::to_poly_offset (DECL_SIZE (ref->base)),
+			       ref->offset + ref->size)))))
     return false;
 
   return true;

gcc/inchash.h
@@ -57,6 +57,14 @@ class hash
     val = iterative_hash_hashval_t (v, val);
   }
 
+  /* Add polynomial value V, treating each element as an unsigned int.  */
+  template<unsigned int N, typename T>
+  void add_poly_int (const poly_int_pod<N, T> &v)
+  {
+    for (unsigned int i = 0; i < N; ++i)
+      add_int (v.coeffs[i]);
+  }
+
   /* Add HOST_WIDE_INT value V.  */
   void add_hwi (HOST_WIDE_INT v)
   {
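
add_poly_int simply feeds every coefficient through the existing add_int
mixer, so a polynomial offset contributes all of its components to the hash.
Below is a self-contained sketch of the same idea; the mixer and the helper
name are hypothetical stand-ins, not GCC's iterative_hash_hashval_t.

    #include <cstdint>

    /* Hypothetical stand-in for GCC's hash mixer; only the per-coefficient
       folding is the point being illustrated.  */
    static uint32_t mix (uint32_t v, uint32_t seed)
    {
      seed ^= v + 0x9e3779b9u + (seed << 6) + (seed >> 2);
      return seed;
    }

    /* Shape of inchash::hash::add_poly_int: fold each coefficient in turn,
       so equal polynomial offsets hash equally whether or not they happen
       to be compile-time constants.  */
    template<unsigned int N>
    uint32_t hash_poly_coeffs (const int64_t (&coeffs)[N], uint32_t seed)
    {
      for (unsigned int i = 0; i < N; ++i)
        seed = mix (static_cast<uint32_t> (coeffs[i]), seed);
      return seed;
    }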

gcc/tree-ssa-alias.h
@@ -80,11 +80,11 @@ struct ao_ref
      the following fields are not yet computed.  */
   tree base;
   /* The offset relative to the base.  */
-  HOST_WIDE_INT offset;
+  poly_int64 offset;
   /* The size of the access.  */
-  HOST_WIDE_INT size;
+  poly_int64 size;
   /* The maximum possible extent of the access or -1 if unconstrained.  */
-  HOST_WIDE_INT max_size;
+  poly_int64 max_size;
 
   /* The alias set of the access or -1 if not yet computed.  */
   alias_set_type ref_alias_set;
@@ -94,8 +94,18 @@ struct ao_ref
 
   /* Whether the memory is considered a volatile access.  */
   bool volatile_p;
+
+  bool max_size_known_p () const;
 };
 
+/* Return true if the maximum size is known, rather than the special -1
+   marker.  */
+
+inline bool
+ao_ref::max_size_known_p () const
+{
+  return known_size_p (max_size);
+}
+
 /* In tree-ssa-alias.c */
 extern void ao_ref_init (ao_ref *, tree);
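
Because poly_int64 deliberately does not provide the ordinary comparison
operators, the widespread `ref->max_size != -1` sentinel test is replaced by
the new accessor; known_size_p keeps the -1 convention but answers it through
the maybe/known machinery.  The before/after idiom, with the "after" lines
taken from the ao_ref_from_mem hunk above:

    /* Before: -1 marked an unconstrained access.  */
    if (ref->max_size != -1
        && ref->size > ref->max_size)
      ref->max_size = ref->size;

    /* After: the sentinel test is wrapped, and the adjustment uses
       upper_bound, which yields a value known to be at least as large as
       both operands even when size and max_size are not ordered at
       compile time.  */
    if (ref->max_size_known_p ())
      ref->max_size = upper_bound (ref->max_size, ref->size);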

gcc/tree-ssa-dce.c
@@ -488,13 +488,9 @@ mark_aliased_reaching_defs_necessary_1 (ao_ref *ref, tree vdef, void *data)
 	{
 	  /* For a must-alias check we need to be able to constrain
 	     the accesses properly.  */
-	  if (size != -1 && size == max_size
-	      && ref->max_size != -1)
-	    {
-	      if (offset <= ref->offset
-		  && offset + size >= ref->offset + ref->max_size)
-		return true;
-	    }
+	  if (size == max_size
+	      && known_subrange_p (ref->offset, ref->max_size, offset, size))
+	    return true;
 	  /* Or they need to be exactly the same.  */
 	  else if (ref->ref
 		   /* Make sure there is no induction variable involved
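
known_subrange_p replaces the hand-written pair of comparisons above: it asks
whether the first range is contained in the second for every possible runtime
size (and rejects the unknown-size markers).  For the purely constant case the
old and new tests amount to the sketch below; the function name is
illustrative, not GCC's.

    #include <cstdint>

    /* Is [pos1, pos1 + size1) contained in [pos2, pos2 + size2)?  This is the
       constant-only shape of the check; known_subrange_p is the polynomial
       generalisation.  */
    static bool subrange_sketch (int64_t pos1, int64_t size1,
                                 int64_t pos2, int64_t size2)
    {
      return size1 >= 0 && size2 >= 0
             && pos2 <= pos1
             && pos1 + size1 <= pos2 + size2;
    }

In mark_aliased_reaching_defs_necessary_1 the first range is the reference
being tested and the second is the store, matching the old explicit
comparisons.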

gcc/tree-ssa-dse.c
@@ -128,13 +128,12 @@ static bool
 valid_ao_ref_for_dse (ao_ref *ref)
 {
   return (ao_ref_base (ref)
-	  && ref->max_size != -1
-	  && ref->size != 0
-	  && ref->max_size == ref->size
-	  && ref->offset >= 0
-	  && (ref->offset % BITS_PER_UNIT) == 0
-	  && (ref->size % BITS_PER_UNIT) == 0
-	  && (ref->size != -1));
+	  && known_size_p (ref->max_size)
+	  && maybe_ne (ref->size, 0)
+	  && known_eq (ref->max_size, ref->size)
+	  && known_ge (ref->offset, 0)
+	  && multiple_p (ref->offset, BITS_PER_UNIT)
+	  && multiple_p (ref->size, BITS_PER_UNIT));
 }
 
 /* Try to normalize COPY (an ao_ref) relative to REF.  Essentially when we are
@@ -144,25 +143,31 @@ valid_ao_ref_for_dse (ao_ref *ref)
 static bool
 normalize_ref (ao_ref *copy, ao_ref *ref)
 {
+  if (!ordered_p (copy->offset, ref->offset))
+    return false;
+
   /* If COPY starts before REF, then reset the beginning of
      COPY to match REF and decrease the size of COPY by the
      number of bytes removed from COPY.  */
-  if (copy->offset < ref->offset)
+  if (maybe_lt (copy->offset, ref->offset))
     {
-      HOST_WIDE_INT diff = ref->offset - copy->offset;
-      if (copy->size <= diff)
+      poly_int64 diff = ref->offset - copy->offset;
+      if (maybe_le (copy->size, diff))
	return false;
       copy->size -= diff;
       copy->offset = ref->offset;
     }
 
-  HOST_WIDE_INT diff = copy->offset - ref->offset;
-  if (ref->size <= diff)
+  poly_int64 diff = copy->offset - ref->offset;
+  if (maybe_le (ref->size, diff))
     return false;
 
   /* If COPY extends beyond REF, chop off its size appropriately.  */
-  HOST_WIDE_INT limit = ref->size - diff;
-  if (copy->size > limit)
+  poly_int64 limit = ref->size - diff;
+
+  if (!ordered_p (limit, copy->size))
+    return false;
+
+  if (maybe_gt (copy->size, limit))
     copy->size = limit;
   return true;
 }
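
The early ordered_p exit is new behaviour forced by polynomial offsets: two
offsets such as 4 and 2 + 2X compare differently depending on the runtime
value of X, so neither maybe_lt branch in the hunk above would give a
meaningful answer.  A toy version of the predicate (illustration only, not
GCC's definition):

    #include <cstdint>

    /* Toy 2-coefficient value c0 + c1 * X with X >= 0 (illustration).  */
    struct toy_poly { int64_t c0, c1; };

    /* <= for every X >= 0.  */
    static bool known_le (toy_poly a, toy_poly b)
    {
      return a.c0 <= b.c0 && a.c1 <= b.c1;
    }

    /* ordered_p: the two values are comparable at compile time, i.e. one of
       them is <= the other for all X.  For 4 and 2 + 2X neither holds, and
       normalize_ref now gives up instead of guessing a direction.  */
    static bool ordered_p (toy_poly a, toy_poly b)
    {
      return known_le (a, b) || known_le (b, a);
    }
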
@@ -183,15 +188,15 @@ clear_bytes_written_by (sbitmap live_bytes, gimple *stmt, ao_ref *ref)
 
   /* Verify we have the same base memory address, the write
      has a known size and overlaps with REF.  */
+  HOST_WIDE_INT start, size;
   if (valid_ao_ref_for_dse (&write)
       && operand_equal_p (write.base, ref->base, OEP_ADDRESS_OF)
-      && write.size == write.max_size
-      && normalize_ref (&write, ref))
-    {
-      HOST_WIDE_INT start = write.offset - ref->offset;
-      bitmap_clear_range (live_bytes, start / BITS_PER_UNIT,
-			  write.size / BITS_PER_UNIT);
-    }
+      && known_eq (write.size, write.max_size)
+      && normalize_ref (&write, ref)
+      && (write.offset - ref->offset).is_constant (&start)
+      && write.size.is_constant (&size))
+    bitmap_clear_range (live_bytes, start / BITS_PER_UNIT,
+			size / BITS_PER_UNIT);
 }
 
 /* REF is a memory write.  Extract relevant information from it and
@@ -201,12 +206,14 @@ clear_bytes_written_by (sbitmap live_bytes, gimple *stmt, ao_ref *ref)
 static bool
 setup_live_bytes_from_ref (ao_ref *ref, sbitmap live_bytes)
 {
+  HOST_WIDE_INT const_size;
   if (valid_ao_ref_for_dse (ref)
-      && (ref->size / BITS_PER_UNIT
+      && ref->size.is_constant (&const_size)
+      && (const_size / BITS_PER_UNIT
	  <= PARAM_VALUE (PARAM_DSE_MAX_OBJECT_SIZE)))
     {
       bitmap_clear (live_bytes);
-      bitmap_set_range (live_bytes, 0, ref->size / BITS_PER_UNIT);
+      bitmap_set_range (live_bytes, 0, const_size / BITS_PER_UNIT);
       return true;
     }
   return false;
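
The DSE live-byte bitmaps can only track byte counts that are genuine
compile-time constants, so several of these functions now go through
is_constant, which extracts the constant into a HOST_WIDE_INT or fails.
A minimal sketch of the pattern with a toy type; the names and the size limit
are placeholders, not GCC's:

    #include <cstdint>

    struct toy_poly { int64_t c0, c1; };   /* c0 + c1 * X (illustration) */

    /* Mirrors poly_int64::is_constant (&out): succeeds only when the value
       does not depend on the runtime size X.  */
    static bool is_constant (toy_poly v, int64_t *out)
    {
      if (v.c1 != 0)
        return false;
      *out = v.c0;
      return true;
    }

    /* Shape of the guard in setup_live_bytes_from_ref: take the bitmap path
       only for constant sizes below some limit, otherwise bail out.  */
    static bool constant_size_within_limit (toy_poly size_in_bits, int64_t limit)
    {
      int64_t const_size;
      return is_constant (size_in_bits, &const_size)
             && const_size / 8 <= limit;
    }
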
@@ -231,9 +238,15 @@ compute_trims (ao_ref *ref, sbitmap live, int *trim_head, int *trim_tail,
      the REF to compute the trims.  */
 
   /* Now identify how much, if any of the tail we can chop off.  */
-  int last_orig = (ref->size / BITS_PER_UNIT) - 1;
-  int last_live = bitmap_last_set_bit (live);
-  *trim_tail = (last_orig - last_live) & ~0x1;
+  HOST_WIDE_INT const_size;
+  if (ref->size.is_constant (&const_size))
+    {
+      int last_orig = (const_size / BITS_PER_UNIT) - 1;
+      int last_live = bitmap_last_set_bit (live);
+      *trim_tail = (last_orig - last_live) & ~0x1;
+    }
+  else
+    *trim_tail = 0;
 
   /* Identify how much, if any of the head we can chop off.  */
   int first_orig = 0;
@@ -267,7 +280,7 @@ maybe_trim_complex_store (ao_ref *ref, sbitmap live, gimple *stmt)
      least half the size of the object to ensure we're trimming
      the entire real or imaginary half.  By writing things this
      way we avoid more O(n) bitmap operations.  */
-  if (trim_tail * 2 >= ref->size / BITS_PER_UNIT)
+  if (known_ge (trim_tail * 2 * BITS_PER_UNIT, ref->size))
     {
       /* TREE_REALPART is live */
       tree x = TREE_REALPART (gimple_assign_rhs1 (stmt));
@@ -276,7 +289,7 @@ maybe_trim_complex_store (ao_ref *ref, sbitmap live, gimple *stmt)
       gimple_assign_set_lhs (stmt, y);
       gimple_assign_set_rhs1 (stmt, x);
     }
-  else if (trim_head * 2 >= ref->size / BITS_PER_UNIT)
+  else if (known_ge (trim_head * 2 * BITS_PER_UNIT, ref->size))
     {
       /* TREE_IMAGPART is live */
       tree x = TREE_IMAGPART (gimple_assign_rhs1 (stmt));
@@ -326,7 +339,8 @@ maybe_trim_constructor_store (ao_ref *ref, sbitmap live, gimple *stmt)
	return;
 
       /* The number of bytes for the new constructor.  */
-      int count = (ref->size / BITS_PER_UNIT) - head_trim - tail_trim;
+      poly_int64 ref_bytes = exact_div (ref->size, BITS_PER_UNIT);
+      poly_int64 count = ref_bytes - head_trim - tail_trim;
 
       /* And the new type for the CONSTRUCTOR.  Essentially it's just
	 a char array large enough to cover the non-trimmed parts of
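
exact_div divides a poly_int by a value it is known to be a multiple of,
which the multiple_p check in valid_ao_ref_for_dse is there to guarantee, so
the result (the constructor's byte count) can itself remain polynomial.  Its
constant-case behaviour is just an asserting division; the sketch below is
illustrative only.

    #include <cassert>
    #include <cstdint>

    /* Constant-case illustration of exact_div: divide, asserting that the
       division is exact.  GCC's version applies the same idea
       coefficient-wise to poly_int values.  */
    static int64_t exact_div_sketch (int64_t value, int64_t divisor)
    {
      assert (value % divisor == 0);
      return value / divisor;
    }
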
@@ -483,15 +497,15 @@ live_bytes_read (ao_ref use_ref, ao_ref *ref, sbitmap live)
 {
   /* We have already verified that USE_REF and REF hit the same object.
      Now verify that there's actually an overlap between USE_REF and REF.  */
-  if (normalize_ref (&use_ref, ref))
+  HOST_WIDE_INT start, size;
+  if (normalize_ref (&use_ref, ref)
+      && (use_ref.offset - ref->offset).is_constant (&start)
+      && use_ref.size.is_constant (&size))
     {
-      HOST_WIDE_INT start = use_ref.offset - ref->offset;
-      HOST_WIDE_INT size = use_ref.size;
-
       /* If USE_REF covers all of REF, then it will hit one or more
	 live bytes.  This avoids useless iteration over the bitmap
	 below.  */
-      if (start == 0 && size == ref->size)
+      if (start == 0 && known_eq (size, ref->size))
	return true;
 
       /* Now check if any of the remaining bits in use_ref are set in LIVE.  */
@@ -593,7 +607,7 @@ dse_classify_store (ao_ref *ref, gimple *stmt, gimple **use_stmt,
	      ao_ref_init (&use_ref, gimple_assign_rhs1 (use_stmt));
	      if (valid_ao_ref_for_dse (&use_ref)
		  && use_ref.base == ref->base
-		  && use_ref.size == use_ref.max_size
+		  && known_eq (use_ref.size, use_ref.max_size)
		  && !live_bytes_read (use_ref, ref, live_bytes))
		{
		  /* If this statement has a VDEF, then it is the

gcc/tree-ssa-sccvn.h
@@ -93,7 +93,7 @@ typedef struct vn_reference_op_struct
   /* For storing TYPE_ALIGN for array ref element size computation.  */
   unsigned align : 6;
   /* Constant offset this op adds or -1 if it is variable.  */
-  HOST_WIDE_INT off;
+  poly_int64_pod off;
   tree type;
   tree op0;
   tree op1;

gcc/tree-ssa-uninit.c
@@ -294,15 +294,15 @@ warn_uninitialized_vars (bool warn_possibly_uninitialized)
 
	  /* Do not warn if the access is fully outside of the
	     variable.  */
+	  poly_int64 decl_size;
	  if (DECL_P (base)
-	      && ref.size != -1
-	      && ((ref.max_size == ref.size
-		   && ref.offset + ref.size <= 0)
-		  || (ref.offset >= 0
+	      && known_size_p (ref.size)
+	      && ((known_eq (ref.max_size, ref.size)
+		   && known_le (ref.offset + ref.size, 0))
+		  || (known_ge (ref.offset, 0)
		      && DECL_SIZE (base)
-		      && TREE_CODE (DECL_SIZE (base)) == INTEGER_CST
-		      && compare_tree_int (DECL_SIZE (base),
-					   ref.offset) <= 0)))
+		      && poly_int_tree_p (DECL_SIZE (base), &decl_size)
+		      && known_le (decl_size, ref.offset))))
	    continue;
 
	  /* Do not warn if the access is then used for a BIT_INSERT_EXPR.  */