Commit 77e994c9 by Richard Sandiford

[61/77] Use scalar_int_mode in the AArch64 port

This patch makes the AArch64 port use scalar_int_mode in various places.
Other ports won't need this kind of change; we only need it for AArch64
because of the variable-sized SVE modes.

The only change in functionality is in the rtx_costs handling
of CONST_INT.  If the caller doesn't supply a mode, we now pass
word_mode rather than VOIDmode to aarch64_internal_mov_immediate.
aarch64_movw_imm will therefore not now truncate large constants
in this situation.

2017-09-05  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* config/aarch64/aarch64-protos.h (aarch64_is_extend_from_extract):
	Take a scalar_int_mode instead of a machine_mode.
	(aarch64_mask_and_shift_for_ubfiz_p): Likewise.
	(aarch64_output_scalar_simd_mov_immediate): Likewise.
	(aarch64_simd_scalar_immediate_valid_for_move): Likewise.
	(aarch64_simd_attr_length_rglist): Delete.
	* config/aarch64/aarch64.c (aarch64_is_extend_from_extract): Take
	a scalar_int_mode instead of a machine_mode.
	(aarch64_add_offset): Likewise.
	(aarch64_internal_mov_immediate): Likewise.
	(aarch64_add_constant_internal): Likewise.
	(aarch64_add_constant): Likewise.
	(aarch64_movw_imm): Likewise.
	(aarch64_rtx_arith_op_extract_p): Likewise.
	(aarch64_mask_and_shift_for_ubfiz_p): Likewise.
	(aarch64_simd_scalar_immediate_valid_for_move): Likewise.
	Remove assert that the mode isn't a vector.
	(aarch64_output_scalar_simd_mov_immediate): Likewise.
	(aarch64_expand_mov_immediate): Update calls after above changes.
	(aarch64_output_casesi): Use as_a <scalar_int_mode>.
	(aarch64_and_bitmask_imm): Check for scalar integer modes.
	(aarch64_move_imm): Likewise.
	(aarch64_can_const_movi_rtx_p): Likewise.
	(aarch64_strip_extend): Likewise.
	(aarch64_extr_rtx_p): Likewise.
	(aarch64_rtx_costs): Likewise, using word_mode as the mode of
	a CONST_INT when the mode parameter is VOIDmode.
	(aarch64_float_const_rtx_p): Use scalar_int_mode for a temporary.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r251735
@@ -332,20 +332,19 @@ bool aarch64_function_arg_regno_p (unsigned);
 bool aarch64_fusion_enabled_p (enum aarch64_fusion_pairs);
 bool aarch64_gen_movmemqi (rtx *);
 bool aarch64_gimple_fold_builtin (gimple_stmt_iterator *);
-bool aarch64_is_extend_from_extract (machine_mode, rtx, rtx);
+bool aarch64_is_extend_from_extract (scalar_int_mode, rtx, rtx);
 bool aarch64_is_long_call_p (rtx);
 bool aarch64_is_noplt_call_p (rtx);
 bool aarch64_label_mentioned_p (rtx);
 void aarch64_declare_function_name (FILE *, const char*, tree);
 bool aarch64_legitimate_pic_operand_p (rtx);
-bool aarch64_mask_and_shift_for_ubfiz_p (machine_mode, rtx, rtx);
+bool aarch64_mask_and_shift_for_ubfiz_p (scalar_int_mode, rtx, rtx);
 bool aarch64_zero_extend_const_eq (machine_mode, rtx, machine_mode, rtx);
 bool aarch64_move_imm (HOST_WIDE_INT, machine_mode);
 bool aarch64_mov_operand_p (rtx, machine_mode);
-int aarch64_simd_attr_length_rglist (machine_mode);
 rtx aarch64_reverse_mask (machine_mode);
 bool aarch64_offset_7bit_signed_scaled_p (machine_mode, HOST_WIDE_INT);
-char *aarch64_output_scalar_simd_mov_immediate (rtx, machine_mode);
+char *aarch64_output_scalar_simd_mov_immediate (rtx, scalar_int_mode);
 char *aarch64_output_simd_mov_immediate (rtx, machine_mode, unsigned);
 bool aarch64_pad_reg_upward (machine_mode, const_tree, bool);
 bool aarch64_regno_ok_for_base_p (int, bool);
@@ -354,7 +353,7 @@ bool aarch64_reinterpret_float_as_int (rtx value, unsigned HOST_WIDE_INT *fail);
 bool aarch64_simd_check_vect_par_cnst_half (rtx op, machine_mode mode,
					     bool high);
 bool aarch64_simd_imm_zero_p (rtx, machine_mode);
-bool aarch64_simd_scalar_immediate_valid_for_move (rtx, machine_mode);
+bool aarch64_simd_scalar_immediate_valid_for_move (rtx, scalar_int_mode);
 bool aarch64_simd_shift_imm_p (rtx, machine_mode, bool);
 bool aarch64_simd_valid_immediate (rtx, machine_mode, bool,
				    struct simd_immediate_info *);