Commit ae927046 by Richard Sandiford, committed by Richard Sandiford

[39/77] Two changes to the get_best_mode interface

get_best_mode always returns a scalar_int_mode on success,
so this patch makes that explicit in the type system.  Also,
the "largest_mode" argument is used simply to provide a maximum
size, and in practice that size is always a compile-time constant,
even when the concept of variable-sized modes is added later.
The patch therefore passes the size directly.
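The shape of the new calling convention can be sketched with a minimal, self-contained stand-in (the enum, its size-valued enumerators, and the mode list here are illustrative placeholders, not GCC's real definitions):

```cpp
#include <cassert>
#include <climits>
#include <initializer_list>

/* Illustrative stand-in for GCC's scalar integer modes; each enumerator
   carries its size in bits for convenience.  */
enum scalar_int_mode { QImode = 8, HImode = 16, SImode = 32, DImode = 64 };

/* New-style interface: report success through the return value and the
   chosen mode through *BEST_MODE.  The size limit is passed directly in
   bits (LARGEST_MODE_BITSIZE), with INT_MAX meaning "no limit", instead
   of the old "largest mode" argument that was only used for its size.  */
static bool
get_best_mode (int bitsize, unsigned int largest_mode_bitsize,
               scalar_int_mode *best_mode)
{
  /* Pick the narrowest mode that covers BITSIZE bits within the limit.  */
  for (scalar_int_mode mode : { QImode, HImode, SImode, DImode })
    if ((int) mode >= bitsize
        && (unsigned int) mode <= largest_mode_bitsize)
      {
        *best_mode = mode;
        return true;
      }
  return false;  /* No VOIDmode sentinel: failure is explicit.  */
}
```

Callers then write `if (get_best_mode (..., &mode))` instead of comparing the result against VOIDmode, so the type system records that a successful result is always a scalar_int_mode.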

2017-08-30  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* machmode.h (bit_field_mode_iterator::next_mode): Take a pointer
	to a scalar_int_mode instead of a machine_mode.
	(bit_field_mode_iterator::m_mode): Change type to opt_scalar_int_mode.
	(get_best_mode): Return a boolean and use a pointer argument to store
	the selected mode.  Replace the limit mode parameter with a bit limit.
	* expmed.c (adjust_bit_field_mem_for_reg): Use scalar_int_mode
	for the values returned by bit_field_mode_iterator::next_mode.
	(store_bit_field): Update call to get_best_mode.
	(store_fixed_bit_field): Likewise.
	(extract_fixed_bit_field): Likewise.
	* expr.c (optimize_bitfield_assignment_op): Likewise.
	* fold-const.c (optimize_bit_field_compare): Likewise.
	(fold_truth_andor_1): Likewise.
	* stor-layout.c (bit_field_mode_iterator::next_mode): As above.
	Update for new type of m_mode.
	(get_best_mode): As above.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r251491
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -2,6 +2,27 @@
 	    Alan Hayward  <alan.hayward@arm.com>
 	    David Sherwood  <david.sherwood@arm.com>
 
+	* machmode.h (bit_field_mode_iterator::next_mode): Take a pointer
+	to a scalar_int_mode instead of a machine_mode.
+	(bit_field_mode_iterator::m_mode): Change type to opt_scalar_int_mode.
+	(get_best_mode): Return a boolean and use a pointer argument to store
+	the selected mode.  Replace the limit mode parameter with a bit limit.
+	* expmed.c (adjust_bit_field_mem_for_reg): Use scalar_int_mode
+	for the values returned by bit_field_mode_iterator::next_mode.
+	(store_bit_field): Update call to get_best_mode.
+	(store_fixed_bit_field): Likewise.
+	(extract_fixed_bit_field): Likewise.
+	* expr.c (optimize_bitfield_assignment_op): Likewise.
+	* fold-const.c (optimize_bit_field_compare): Likewise.
+	(fold_truth_andor_1): Likewise.
+	* stor-layout.c (bit_field_mode_iterator::next_mode): As above.
+	Update for new type of m_mode.
+	(get_best_mode): As above.
+
+2017-08-30  Richard Sandiford  <richard.sandiford@linaro.org>
+	    Alan Hayward  <alan.hayward@arm.com>
+	    David Sherwood  <david.sherwood@arm.com>
+
 	* expmed.c (strict_volatile_bitfield_p): Change the type of fieldmode
 	to scalar_int_mode.  Remove check for SCALAR_INT_MODE_P.
 	(store_bit_field): Check is_a <scalar_int_mode> before calling
--- a/gcc/expmed.c
+++ b/gcc/expmed.c
@@ -461,7 +461,7 @@ adjust_bit_field_mem_for_reg (enum extraction_pattern pattern,
   bit_field_mode_iterator iter (bitsize, bitnum, bitregion_start,
 				bitregion_end, MEM_ALIGN (op0),
 				MEM_VOLATILE_P (op0));
-  machine_mode best_mode;
+  scalar_int_mode best_mode;
   if (iter.next_mode (&best_mode))
     {
       /* We can use a memory in BEST_MODE.  See whether this is true for
@@ -479,7 +479,7 @@ adjust_bit_field_mem_for_reg (enum extraction_pattern pattern,
 				    fieldmode))
 	limit_mode = insn.field_mode;
 
-      machine_mode wider_mode;
+      scalar_int_mode wider_mode;
       while (iter.next_mode (&wider_mode)
 	     && GET_MODE_SIZE (wider_mode) <= GET_MODE_SIZE (limit_mode))
 	best_mode = wider_mode;
@@ -1095,7 +1095,8 @@ store_bit_field (rtx str_rtx, unsigned HOST_WIDE_INT bitsize,
 	 bit region.  */
       if (MEM_P (str_rtx) && bitregion_start > 0)
 	{
-	  machine_mode bestmode;
+	  scalar_int_mode best_mode;
+	  machine_mode addr_mode = VOIDmode;
 	  HOST_WIDE_INT offset, size;
 
 	  gcc_assert ((bitregion_start % BITS_PER_UNIT) == 0);
@@ -1105,11 +1106,13 @@ store_bit_field (rtx str_rtx, unsigned HOST_WIDE_INT bitsize,
 	  size = (bitnum + bitsize + BITS_PER_UNIT - 1) / BITS_PER_UNIT;
 	  bitregion_end -= bitregion_start;
 	  bitregion_start = 0;
-	  bestmode = get_best_mode (bitsize, bitnum,
-				    bitregion_start, bitregion_end,
-				    MEM_ALIGN (str_rtx), VOIDmode,
-				    MEM_VOLATILE_P (str_rtx));
-	  str_rtx = adjust_bitfield_address_size (str_rtx, bestmode, offset, size);
+	  if (get_best_mode (bitsize, bitnum,
+			     bitregion_start, bitregion_end,
+			     MEM_ALIGN (str_rtx), INT_MAX,
+			     MEM_VOLATILE_P (str_rtx), &best_mode))
+	    addr_mode = best_mode;
+	  str_rtx = adjust_bitfield_address_size (str_rtx, addr_mode,
+						  offset, size);
 	}
 
   if (!store_bit_field_1 (str_rtx, bitsize, bitnum,
@@ -1143,10 +1146,10 @@ store_fixed_bit_field (rtx op0, unsigned HOST_WIDE_INT bitsize,
       if (GET_MODE_BITSIZE (mode) == 0
 	  || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
 	mode = word_mode;
-      mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
-			    MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
-
-      if (mode == VOIDmode)
+      scalar_int_mode best_mode;
+      if (!get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
+			  MEM_ALIGN (op0), GET_MODE_BITSIZE (mode),
+			  MEM_VOLATILE_P (op0), &best_mode))
 	{
 	  /* The only way this should occur is if the field spans word
 	     boundaries.  */
@@ -1155,7 +1158,7 @@ store_fixed_bit_field (rtx op0, unsigned HOST_WIDE_INT bitsize,
 	  return;
 	}
 
-      op0 = narrow_bit_field_mem (op0, mode, bitsize, bitnum, &bitnum);
+      op0 = narrow_bit_field_mem (op0, best_mode, bitsize, bitnum, &bitnum);
     }
 
   store_fixed_bit_field_1 (op0, bitsize, bitnum, value, reverse);
@@ -1998,11 +2001,9 @@ extract_fixed_bit_field (machine_mode tmode, rtx op0,
     {
       if (MEM_P (op0))
 	{
-	  machine_mode mode
-	    = get_best_mode (bitsize, bitnum, 0, 0, MEM_ALIGN (op0), word_mode,
-			     MEM_VOLATILE_P (op0));
-	  if (mode == VOIDmode)
+	  scalar_int_mode mode;
+	  if (!get_best_mode (bitsize, bitnum, 0, 0, MEM_ALIGN (op0),
+			      BITS_PER_WORD, MEM_VOLATILE_P (op0), &mode))
 	    /* The only way this should occur is if the field spans word
 	       boundaries.  */
 	    return extract_split_bit_field (op0, bitsize, bitnum, unsignedp,
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -4682,13 +4682,14 @@ optimize_bitfield_assignment_op (unsigned HOST_WIDE_INT bitsize,
       unsigned HOST_WIDE_INT offset1;
 
       if (str_bitsize == 0 || str_bitsize > BITS_PER_WORD)
-	str_mode = word_mode;
-      str_mode = get_best_mode (bitsize, bitpos,
-				bitregion_start, bitregion_end,
-				MEM_ALIGN (str_rtx), str_mode, 0);
-      if (str_mode == VOIDmode)
+	str_bitsize = BITS_PER_WORD;
+
+      scalar_int_mode best_mode;
+      if (!get_best_mode (bitsize, bitpos, bitregion_start, bitregion_end,
+			  MEM_ALIGN (str_rtx), str_bitsize, false, &best_mode))
 	return false;
-      str_bitsize = GET_MODE_BITSIZE (str_mode);
+      str_mode = best_mode;
+      str_bitsize = GET_MODE_BITSIZE (best_mode);
 
       offset1 = bitpos;
       bitpos %= str_bitsize;
--- a/gcc/fold-const.c
+++ b/gcc/fold-const.c
@@ -3934,7 +3934,8 @@ optimize_bit_field_compare (location_t loc, enum tree_code code,
   tree type = TREE_TYPE (lhs);
   tree unsigned_type;
   int const_p = TREE_CODE (rhs) == INTEGER_CST;
-  machine_mode lmode, rmode, nmode;
+  machine_mode lmode, rmode;
+  scalar_int_mode nmode;
   int lunsignedp, runsignedp;
   int lreversep, rreversep;
   int lvolatilep = 0, rvolatilep = 0;
@@ -3981,12 +3982,11 @@ optimize_bit_field_compare (location_t loc, enum tree_code code,
   /* See if we can find a mode to refer to this field.  We should be able to,
      but fail if we can't.  */
-  nmode = get_best_mode (lbitsize, lbitpos, bitstart, bitend,
-			 const_p ? TYPE_ALIGN (TREE_TYPE (linner))
-			 : MIN (TYPE_ALIGN (TREE_TYPE (linner)),
-				TYPE_ALIGN (TREE_TYPE (rinner))),
-			 word_mode, false);
-  if (nmode == VOIDmode)
+  if (!get_best_mode (lbitsize, lbitpos, bitstart, bitend,
+		      const_p ? TYPE_ALIGN (TREE_TYPE (linner))
+		      : MIN (TYPE_ALIGN (TREE_TYPE (linner)),
+			     TYPE_ALIGN (TREE_TYPE (rinner))),
+		      BITS_PER_WORD, false, &nmode))
     return 0;
 
   /* Set signed and unsigned types of the precision of this mode for the
@@ -5591,7 +5591,7 @@ fold_truth_andor_1 (location_t loc, enum tree_code code, tree truth_type,
   int ll_unsignedp, lr_unsignedp, rl_unsignedp, rr_unsignedp;
   int ll_reversep, lr_reversep, rl_reversep, rr_reversep;
   machine_mode ll_mode, lr_mode, rl_mode, rr_mode;
-  machine_mode lnmode, rnmode;
+  scalar_int_mode lnmode, rnmode;
   tree ll_mask, lr_mask, rl_mask, rr_mask;
   tree ll_and_mask, lr_and_mask, rl_and_mask, rr_and_mask;
   tree l_const, r_const;
@@ -5777,10 +5777,9 @@ fold_truth_andor_1 (location_t loc, enum tree_code code, tree truth_type,
      to be relative to a field of that size.  */
   first_bit = MIN (ll_bitpos, rl_bitpos);
   end_bit = MAX (ll_bitpos + ll_bitsize, rl_bitpos + rl_bitsize);
-  lnmode = get_best_mode (end_bit - first_bit, first_bit, 0, 0,
-			  TYPE_ALIGN (TREE_TYPE (ll_inner)), word_mode,
-			  volatilep);
-  if (lnmode == VOIDmode)
+  if (!get_best_mode (end_bit - first_bit, first_bit, 0, 0,
+		      TYPE_ALIGN (TREE_TYPE (ll_inner)), BITS_PER_WORD,
+		      volatilep, &lnmode))
     return 0;
 
   lnbitsize = GET_MODE_BITSIZE (lnmode);
@@ -5842,10 +5841,9 @@ fold_truth_andor_1 (location_t loc, enum tree_code code, tree truth_type,
       first_bit = MIN (lr_bitpos, rr_bitpos);
       end_bit = MAX (lr_bitpos + lr_bitsize, rr_bitpos + rr_bitsize);
-      rnmode = get_best_mode (end_bit - first_bit, first_bit, 0, 0,
-			      TYPE_ALIGN (TREE_TYPE (lr_inner)), word_mode,
-			      volatilep);
-      if (rnmode == VOIDmode)
+      if (!get_best_mode (end_bit - first_bit, first_bit, 0, 0,
+			  TYPE_ALIGN (TREE_TYPE (lr_inner)), BITS_PER_WORD,
+			  volatilep, &rnmode))
 	return 0;
 
       rnbitsize = GET_MODE_BITSIZE (rnmode);
--- a/gcc/machmode.h
+++ b/gcc/machmode.h
@@ -617,11 +617,11 @@ public:
   bit_field_mode_iterator (HOST_WIDE_INT, HOST_WIDE_INT,
 			   HOST_WIDE_INT, HOST_WIDE_INT,
 			   unsigned int, bool);
-  bool next_mode (machine_mode *);
+  bool next_mode (scalar_int_mode *);
   bool prefer_smaller_modes ();
 
 private:
-  machine_mode m_mode;
+  opt_scalar_int_mode m_mode;
   /* We use signed values here because the bit position can be negative
      for invalid input such as gcc.dg/pr48335-8.c.  */
   HOST_WIDE_INT m_bitsize;
@@ -635,11 +635,9 @@ private:
 
 /* Find the best mode to use to access a bit field.  */
 
-extern machine_mode get_best_mode (int, int,
-				   unsigned HOST_WIDE_INT,
-				   unsigned HOST_WIDE_INT,
-				   unsigned int,
-				   machine_mode, bool);
+extern bool get_best_mode (int, int, unsigned HOST_WIDE_INT,
+			   unsigned HOST_WIDE_INT, unsigned int,
+			   unsigned HOST_WIDE_INT, bool, scalar_int_mode *);
 
 /* Determine alignment, 1<=result<=BIGGEST_ALIGNMENT.  */
--- a/gcc/stor-layout.c
+++ b/gcc/stor-layout.c
@@ -2748,15 +2748,15 @@ bit_field_mode_iterator
    available, storing it in *OUT_MODE if so.  */
 
 bool
-bit_field_mode_iterator::next_mode (machine_mode *out_mode)
+bit_field_mode_iterator::next_mode (scalar_int_mode *out_mode)
 {
-  for (; m_mode != VOIDmode;
-       m_mode = GET_MODE_WIDER_MODE (m_mode).else_void ())
+  scalar_int_mode mode;
+  for (; m_mode.exists (&mode); m_mode = GET_MODE_WIDER_MODE (mode))
     {
-      unsigned int unit = GET_MODE_BITSIZE (m_mode);
+      unsigned int unit = GET_MODE_BITSIZE (mode);
 
       /* Skip modes that don't have full precision.  */
-      if (unit != GET_MODE_PRECISION (m_mode))
+      if (unit != GET_MODE_PRECISION (mode))
 	continue;
 
       /* Stop if the mode is too wide to handle efficiently.  */
@@ -2783,12 +2783,12 @@ bit_field_mode_iterator::next_mode (machine_mode *out_mode)
 	break;
 
       /* Stop if the mode requires too much alignment.  */
-      if (GET_MODE_ALIGNMENT (m_mode) > m_align
-	  && SLOW_UNALIGNED_ACCESS (m_mode, m_align))
+      if (GET_MODE_ALIGNMENT (mode) > m_align
+	  && SLOW_UNALIGNED_ACCESS (mode, m_align))
 	break;
 
-      *out_mode = m_mode;
-      m_mode = GET_MODE_WIDER_MODE (m_mode).else_void ();
+      *out_mode = mode;
+      m_mode = GET_MODE_WIDER_MODE (mode);
       m_count++;
       return true;
     }
@@ -2815,12 +2815,14 @@ bit_field_mode_iterator::prefer_smaller_modes ()
    memory access to that range.  Otherwise, we are allowed to touch
    any adjacent non bit-fields.
 
-   The underlying object is known to be aligned to a boundary of ALIGN bits.
-   If LARGEST_MODE is not VOIDmode, it means that we should not use a mode
-   larger than LARGEST_MODE (usually SImode).
+   The chosen mode must have no more than LARGEST_MODE_BITSIZE bits.
+   INT_MAX is a suitable value for LARGEST_MODE_BITSIZE if the caller
+   doesn't want to apply a specific limit.
 
    If no mode meets all these conditions, we return VOIDmode.
 
+   The underlying object is known to be aligned to a boundary of ALIGN bits.
+
    If VOLATILEP is false and SLOW_BYTE_ACCESS is false, we return the
    smallest mode meeting these conditions.
@@ -2831,17 +2833,18 @@ bit_field_mode_iterator::prefer_smaller_modes ()
    If VOLATILEP is true the narrow_volatile_bitfields target hook is used to
    decide which of the above modes should be used.  */
 
-machine_mode
+bool
 get_best_mode (int bitsize, int bitpos,
 	       unsigned HOST_WIDE_INT bitregion_start,
 	       unsigned HOST_WIDE_INT bitregion_end,
 	       unsigned int align,
-	       machine_mode largest_mode, bool volatilep)
+	       unsigned HOST_WIDE_INT largest_mode_bitsize, bool volatilep,
+	       scalar_int_mode *best_mode)
 {
   bit_field_mode_iterator iter (bitsize, bitpos, bitregion_start,
 				bitregion_end, align, volatilep);
-  machine_mode widest_mode = VOIDmode;
-  machine_mode mode;
+  scalar_int_mode mode;
+  bool found = false;
   while (iter.next_mode (&mode)
 	 /* ??? For historical reasons, reject modes that would normally
 	    receive greater alignment, even if unaligned accesses are
@@ -2900,14 +2903,15 @@ get_best_mode (int bitsize, int bitpos,
 	    so that the final bitfield reference still has a MEM_EXPR
 	    and MEM_OFFSET.  */
 	 && GET_MODE_ALIGNMENT (mode) <= align
-	 && (largest_mode == VOIDmode
-	     || GET_MODE_SIZE (mode) <= GET_MODE_SIZE (largest_mode)))
+	 && GET_MODE_BITSIZE (mode) <= largest_mode_bitsize)
     {
-      widest_mode = mode;
+      *best_mode = mode;
+      found = true;
       if (iter.prefer_smaller_modes ())
 	break;
     }
-  return widest_mode;
+
+  return found;
 }
 
 /* Gets minimal and maximal values for MODE (signed or unsigned depending on
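The m_mode change in next_mode follows GCC's opt_mode idiom: an optional wrapper whose exists() both tests for and extracts a value, replacing the old VOIDmode sentinel. A minimal stand-in of that loop structure (the enum, the wrapper, and wider_mode below are illustrative, not GCC's real opt_scalar_int_mode or GET_MODE_WIDER_MODE):

```cpp
#include <cassert>

/* Illustrative mode enumeration; not GCC's real definitions.  */
enum scalar_int_mode { QImode, HImode, SImode, DImode, NUM_MODES };

/* Minimal stand-in for opt_scalar_int_mode: holds a mode or nothing.  */
class opt_scalar_int_mode
{
  bool m_set;
  scalar_int_mode m_value;
public:
  opt_scalar_int_mode () : m_set (false), m_value (QImode) {}
  opt_scalar_int_mode (scalar_int_mode m) : m_set (true), m_value (m) {}
  /* Test for a value and extract it in one step, as next_mode does.  */
  bool exists (scalar_int_mode *out) const
  {
    if (m_set)
      *out = m_value;
    return m_set;
  }
};

/* Stand-in for GET_MODE_WIDER_MODE: the next wider mode, if any.  */
static opt_scalar_int_mode
wider_mode (scalar_int_mode m)
{
  if (m + 1 < NUM_MODES)
    return scalar_int_mode (m + 1);
  return opt_scalar_int_mode ();
}

/* The iteration pattern from bit_field_mode_iterator::next_mode:
   the loop ends when the optional is empty instead of when a
   VOIDmode sentinel is reached.  */
static int
count_modes_from (scalar_int_mode start)
{
  opt_scalar_int_mode m_mode = start;
  scalar_int_mode mode;
  int count = 0;
  for (; m_mode.exists (&mode); m_mode = wider_mode (mode))
    count++;
  return count;
}
```

The design choice mirrors the patch: once the sentinel is gone, every place that holds "a mode or nothing" is typed as an optional, and every place that holds a definite mode is typed as scalar_int_mode.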