Commit 22756ccf authored by James Greenhalgh, committed by James Greenhalgh

[AArch64_be] Don't fold reduction intrinsics.

gcc/

	* config/aarch64/aarch64-builtins.c
	(aarch64_gimple_fold_builtin): Don't fold reduction operations for
	BYTES_BIG_ENDIAN.

From-SVN: r213379
parent 988fa693
2014-07-31  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/aarch64/aarch64-builtins.c
	(aarch64_gimple_fold_builtin): Don't fold reduction operations for
	BYTES_BIG_ENDIAN.

2014-07-31  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/aarch64/aarch64.c (aarch64_simd_vect_par_cnst_half): Vary
	the generated mask based on BYTES_BIG_ENDIAN.
	(aarch64_simd_check_vect_par_cnst_half): New.
@@ -1383,6 +1383,20 @@ aarch64_gimple_fold_builtin (gimple_stmt_iterator *gsi)
   tree call = gimple_call_fn (stmt);
   tree fndecl;
   gimple new_stmt = NULL;
+
+  /* The operations folded below are reduction operations.  These are
+     defined to leave their result in the 0'th element (from the perspective
+     of GCC).  The architectural instruction we are folding will leave the
+     result in the 0'th element (from the perspective of the architecture).
+     For big-endian systems, these perspectives are not aligned.
+
+     It is therefore wrong to perform this fold on big-endian.  There
+     are some tricks we could play with shuffling, but the mid-end is
+     inconsistent in the way it treats reduction operations, so we will
+     end up in difficulty.  Until we fix the ambiguity - just bail out.  */
+  if (BYTES_BIG_ENDIAN)
+    return false;
+
   if (call)
     {
       fndecl = gimple_call_fndecl (stmt);