Commit 9d4ac06e by Richard Sandiford, committed by Richard Sandiford

Add an "else" argument to IFN_COND_* functions

As suggested by Richard B, this patch changes the IFN_COND_*
functions so that they take the else value of the ?: operation
as a final argument, rather than always using argument 1.

All current callers will still use the equivalent of argument 1,
so this patch makes the SVE code assert that for now.  Later patches
add the general case.

2018-05-25  Richard Sandiford  <richard.sandiford@linaro.org>

gcc/
	* doc/md.texi: Update the documentation of the cond_* optabs
	to mention the new final operand.  Fix GET_MODE_NUNITS call.
	Describe the scalar case too.
	* internal-fn.def (IFN_EXTRACT_LAST): Change type to fold_left.
	* internal-fn.c (expand_cond_unary_optab_fn): Expect 3 operands
	instead of 2.
	(expand_cond_binary_optab_fn): Expect 4 operands instead of 3.
	(get_conditional_internal_fn): Update comment.
	* tree-vect-loop.c (vectorizable_reduction): Pass the original
	accumulator value as a final argument to conditional functions.
	* config/aarch64/aarch64-sve.md (cond_<optab><mode>): Turn into
	a define_expand and add an "else" operand.  Assert for now that
	the else operand is equal to operand 2.  Use SVE_INT_BINARY and
	SVE_COND_FP_BINARY instead of SVE_COND_INT_OP and SVE_COND_FP_OP.
	(*cond_<optab><mode>): New patterns.
	* config/aarch64/iterators.md (UNSPEC_COND_SMAX, UNSPEC_COND_UMAX)
	(UNSPEC_COND_SMIN, UNSPEC_COND_UMIN, UNSPEC_COND_AND, UNSPEC_COND_ORR)
	(UNSPEC_COND_EOR): Delete.
	(optab): Remove associated mappings.
	(SVE_INT_BINARY): New code iterator.
	(sve_int_op): Remove int attribute and add "minus" to the code
	attribute.
	(SVE_COND_INT_OP): Delete.
	(SVE_COND_FP_OP): Rename to...
	(SVE_COND_FP_BINARY): ...this.

From-SVN: r260707
parent b883fc9b
--- a/gcc/config/aarch64/aarch64-sve.md
+++ b/gcc/config/aarch64/aarch64-sve.md
@@ -1757,14 +1757,31 @@
   "<maxmin_uns_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
 )
 
+;; Predicated integer operations with select.
+(define_expand "cond_<optab><mode>"
+  [(set (match_operand:SVE_I 0 "register_operand")
+	(unspec:SVE_I
+	  [(match_operand:<VPRED> 1 "register_operand")
+	   (SVE_INT_BINARY:SVE_I
+	     (match_operand:SVE_I 2 "register_operand")
+	     (match_operand:SVE_I 3 "register_operand"))
+	   (match_operand:SVE_I 4 "register_operand")]
+	  UNSPEC_SEL))]
+  "TARGET_SVE"
+{
+  gcc_assert (rtx_equal_p (operands[2], operands[4]));
+})
+
 ;; Predicated integer operations.
-(define_insn "cond_<optab><mode>"
+(define_insn "*cond_<optab><mode>"
   [(set (match_operand:SVE_I 0 "register_operand" "=w")
 	(unspec:SVE_I
 	  [(match_operand:<VPRED> 1 "register_operand" "Upl")
-	   (match_operand:SVE_I 2 "register_operand" "0")
-	   (match_operand:SVE_I 3 "register_operand" "w")]
-	  SVE_COND_INT_OP))]
+	   (SVE_INT_BINARY:SVE_I
+	     (match_operand:SVE_I 2 "register_operand" "0")
+	     (match_operand:SVE_I 3 "register_operand" "w"))
+	   (match_dup 2)]
+	  UNSPEC_SEL))]
   "TARGET_SVE"
   "<sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
 )
@@ -2536,14 +2553,35 @@
   }
 )
 
+;; Predicated floating-point operations with select.
+(define_expand "cond_<optab><mode>"
+  [(set (match_operand:SVE_F 0 "register_operand")
+	(unspec:SVE_F
+	  [(match_operand:<VPRED> 1 "register_operand")
+	   (unspec:SVE_F
+	     [(match_dup 1)
+	      (match_operand:SVE_F 2 "register_operand")
+	      (match_operand:SVE_F 3 "register_operand")]
+	     SVE_COND_FP_BINARY)
+	   (match_operand:SVE_F 4 "register_operand")]
+	  UNSPEC_SEL))]
+  "TARGET_SVE"
+{
+  gcc_assert (rtx_equal_p (operands[2], operands[4]));
+})
+
 ;; Predicated floating-point operations.
-(define_insn "cond_<optab><mode>"
+(define_insn "*cond_<optab><mode>"
   [(set (match_operand:SVE_F 0 "register_operand" "=w")
 	(unspec:SVE_F
 	  [(match_operand:<VPRED> 1 "register_operand" "Upl")
-	   (match_operand:SVE_F 2 "register_operand" "0")
-	   (match_operand:SVE_F 3 "register_operand" "w")]
-	  SVE_COND_FP_OP))]
+	   (unspec:SVE_F
+	     [(match_dup 1)
+	      (match_operand:SVE_F 2 "register_operand" "0")
+	      (match_operand:SVE_F 3 "register_operand" "w")]
+	     SVE_COND_FP_BINARY)
+	   (match_dup 2)]
+	  UNSPEC_SEL))]
   "TARGET_SVE"
   "<sve_fp_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
 )
--- a/gcc/config/aarch64/iterators.md
+++ b/gcc/config/aarch64/iterators.md
@@ -464,13 +464,6 @@
     UNSPEC_UMUL_HIGHPART ; Used in aarch64-sve.md.
     UNSPEC_COND_ADD	; Used in aarch64-sve.md.
     UNSPEC_COND_SUB	; Used in aarch64-sve.md.
-    UNSPEC_COND_SMAX	; Used in aarch64-sve.md.
-    UNSPEC_COND_UMAX	; Used in aarch64-sve.md.
-    UNSPEC_COND_SMIN	; Used in aarch64-sve.md.
-    UNSPEC_COND_UMIN	; Used in aarch64-sve.md.
-    UNSPEC_COND_AND	; Used in aarch64-sve.md.
-    UNSPEC_COND_ORR	; Used in aarch64-sve.md.
-    UNSPEC_COND_EOR	; Used in aarch64-sve.md.
     UNSPEC_COND_LT	; Used in aarch64-sve.md.
     UNSPEC_COND_LE	; Used in aarch64-sve.md.
     UNSPEC_COND_EQ	; Used in aarch64-sve.md.
@@ -1207,6 +1200,9 @@
 ;; SVE floating-point unary operations.
 (define_code_iterator SVE_FP_UNARY [neg abs sqrt])
 
+(define_code_iterator SVE_INT_BINARY [plus minus smax umax smin umin
+				      and ior xor])
+
 ;; SVE integer comparisons.
 (define_code_iterator SVE_INT_CMP [lt le eq ne ge gt ltu leu geu gtu])
@@ -1377,6 +1373,7 @@
 ;; The integer SVE instruction that implements an rtx code.
 (define_code_attr sve_int_op [(plus "add")
+			      (minus "sub")
 			      (neg "neg")
 			      (smin "smin")
 			      (smax "smax")
@@ -1532,14 +1529,7 @@
 (define_int_iterator MUL_HIGHPART [UNSPEC_SMUL_HIGHPART UNSPEC_UMUL_HIGHPART])
 
-(define_int_iterator SVE_COND_INT_OP [UNSPEC_COND_ADD UNSPEC_COND_SUB
-				      UNSPEC_COND_SMAX UNSPEC_COND_UMAX
-				      UNSPEC_COND_SMIN UNSPEC_COND_UMIN
-				      UNSPEC_COND_AND
-				      UNSPEC_COND_ORR
-				      UNSPEC_COND_EOR])
-
-(define_int_iterator SVE_COND_FP_OP [UNSPEC_COND_ADD UNSPEC_COND_SUB])
+(define_int_iterator SVE_COND_FP_BINARY [UNSPEC_COND_ADD UNSPEC_COND_SUB])
 
 (define_int_iterator SVE_COND_FP_CMP [UNSPEC_COND_LT UNSPEC_COND_LE
 				      UNSPEC_COND_EQ UNSPEC_COND_NE
@@ -1569,14 +1559,7 @@
 			    (UNSPEC_IORV "ior")
 			    (UNSPEC_XORV "xor")
 			    (UNSPEC_COND_ADD "add")
-			    (UNSPEC_COND_SUB "sub")
-			    (UNSPEC_COND_SMAX "smax")
-			    (UNSPEC_COND_UMAX "umax")
-			    (UNSPEC_COND_SMIN "smin")
-			    (UNSPEC_COND_UMIN "umin")
-			    (UNSPEC_COND_AND "and")
-			    (UNSPEC_COND_ORR "ior")
-			    (UNSPEC_COND_EOR "xor")])
+			    (UNSPEC_COND_SUB "sub")])
 
 (define_int_attr maxmin_uns [(UNSPEC_UMAXV "umax")
 			     (UNSPEC_UMINV "umin")
@@ -1787,15 +1770,5 @@
 			     (UNSPEC_COND_GE "ge")
 			     (UNSPEC_COND_GT "gt")])
 
-(define_int_attr sve_int_op [(UNSPEC_COND_ADD "add")
-			     (UNSPEC_COND_SUB "sub")
-			     (UNSPEC_COND_SMAX "smax")
-			     (UNSPEC_COND_UMAX "umax")
-			     (UNSPEC_COND_SMIN "smin")
-			     (UNSPEC_COND_UMIN "umin")
-			     (UNSPEC_COND_AND "and")
-			     (UNSPEC_COND_ORR "orr")
-			     (UNSPEC_COND_EOR "eor")])
-
 (define_int_attr sve_fp_op [(UNSPEC_COND_ADD "fadd")
 			    (UNSPEC_COND_SUB "fsub")])
--- a/gcc/doc/md.texi
+++ b/gcc/doc/md.texi
@@ -6349,13 +6349,21 @@ operand 0, otherwise (operand 2 + operand 3) is moved.
 @itemx @samp{cond_smax@var{mode}}
 @itemx @samp{cond_umin@var{mode}}
 @itemx @samp{cond_umax@var{mode}}
-Perform an elementwise operation on vector operands 2 and 3,
-under the control of the vector mask in operand 1, and store the result
-in operand 0.  This is equivalent to:
+When operand 1 is true, perform an operation on operands 2 and 3 and
+store the result in operand 0, otherwise store operand 4 in operand 0.
+The operation works elementwise if the operands are vectors.
+
+The scalar case is equivalent to:
 
 @smallexample
-for (i = 0; i < GET_MODE_NUNITS (@var{n}); i++)
-  op0[i] = op1[i] ? op2[i] @var{op} op3[i] : op2[i];
+op0 = op1 ? op2 @var{op} op3 : op4;
+@end smallexample
+
+while the vector case is equivalent to:
+
+@smallexample
+for (i = 0; i < GET_MODE_NUNITS (@var{m}); i++)
+  op0[i] = op1[i] ? op2[i] @var{op} op3[i] : op4[i];
 @end smallexample
 
 where, for example, @var{op} is @code{+} for @samp{cond_add@var{mode}}.
@@ -6364,8 +6372,9 @@ When defined for floating-point modes, the contents of @samp{op3[i]}
 are not interpreted if @var{op1[i]} is false, just like they would not
 be in a normal C @samp{?:} condition.
 
-Operands 0, 2 and 3 all have mode @var{m}, while operand 1 has the mode
-returned by @code{TARGET_VECTORIZE_GET_MASK_MODE}.
+Operands 0, 2, 3 and 4 all have mode @var{m}.  Operand 1 is a scalar
+integer if @var{m} is scalar, otherwise it has the mode returned by
+@code{TARGET_VECTORIZE_GET_MASK_MODE}.
 
 @cindex @code{neg@var{mode}cc} instruction pattern
 @item @samp{neg@var{mode}cc}
--- a/gcc/internal-fn.c
+++ b/gcc/internal-fn.c
@@ -2988,10 +2988,10 @@ expand_while_optab_fn (internal_fn, gcall *stmt, convert_optab optab)
   expand_direct_optab_fn (FN, STMT, OPTAB, 3)
 
 #define expand_cond_unary_optab_fn(FN, STMT, OPTAB) \
-  expand_direct_optab_fn (FN, STMT, OPTAB, 2)
+  expand_direct_optab_fn (FN, STMT, OPTAB, 3)
 
 #define expand_cond_binary_optab_fn(FN, STMT, OPTAB) \
-  expand_direct_optab_fn (FN, STMT, OPTAB, 3)
+  expand_direct_optab_fn (FN, STMT, OPTAB, 4)
 
 #define expand_fold_extract_optab_fn(FN, STMT, OPTAB) \
   expand_direct_optab_fn (FN, STMT, OPTAB, 3)
@@ -3219,12 +3219,19 @@ static void (*const internal_fn_expanders[]) (internal_fn, gcall *) = {
   0
 };
 
-/* Return a function that performs the conditional form of CODE, i.e.:
-
-     LHS = RHS1 ? RHS2 CODE RHS3 : RHS2
-
-   (operating elementwise if the operands are vectors).  Return IFN_LAST
-   if no such function exists.  */
+/* Return a function that only performs CODE when a certain condition is met
+   and that uses a given fallback value otherwise.  For example, if CODE is
+   a binary operation associated with conditional function FN:
+
+     LHS = FN (COND, A, B, ELSE)
+
+   is equivalent to the C expression:
+
+     LHS = COND ? A CODE B : ELSE;
+
+   operating elementwise if the operands are vectors.
+
+   Return IFN_LAST if no such function exists.  */
 
 internal_fn
 get_conditional_internal_fn (tree_code code)
--- a/gcc/internal-fn.def
+++ b/gcc/internal-fn.def
@@ -173,7 +173,7 @@ DEF_INTERNAL_OPTAB_FN (REDUC_XOR, ECF_CONST | ECF_NOTHROW,
 
 /* Extract the last active element from a vector.  */
 DEF_INTERNAL_OPTAB_FN (EXTRACT_LAST, ECF_CONST | ECF_NOTHROW,
-		       extract_last, cond_unary)
+		       extract_last, fold_left)
 
 /* Same, but return the first argument if no elements are active.  */
 DEF_INTERNAL_OPTAB_FN (FOLD_EXTRACT_LAST, ECF_CONST | ECF_NOTHROW,
--- a/gcc/tree-vect-loop.c
+++ b/gcc/tree-vect-loop.c
@@ -7222,8 +7222,9 @@ vectorizable_reduction (gimple *stmt, gimple_stmt_iterator *gsi,
 	    }
 	  tree mask = vect_get_loop_mask (gsi, masks, vec_num * ncopies,
 					  vectype_in, i * ncopies + j);
-	  gcall *call = gimple_build_call_internal (cond_fn, 3, mask,
-						    vop[0], vop[1]);
+	  gcall *call = gimple_build_call_internal (cond_fn, 4, mask,
+						    vop[0], vop[1],
+						    vop[0]);
 	  new_temp = make_ssa_name (vec_dest, call);
 	  gimple_call_set_lhs (call, new_temp);
 	  gimple_call_set_nothrow (call, true);