Commit a277dd9b, authored and committed by Sandra Loosemore

arm.c (neon_vdup_constant): Expand into canonical RTL instead of an unspec.

2010-07-02  Sandra Loosemore  <sandra@codesourcery.com>

	gcc/
	* config/arm/arm.c (neon_vdup_constant): Expand into canonical RTL
	instead of an unspec.
	(neon_expand_vector_init): Likewise.
	* config/arm/neon.md (UNSPEC_VCOMBINE): Delete.
	(UNSPEC_VDUP_LANE): Delete.
	(UNSPEC_VDUP_N): Delete.
	(UNSPEC_VGET_HIGH): Delete.
	(UNSPEC_VGET_LANE): Delete.
	(UNSPEC_VGET_LOW): Delete.
	(UNSPEC_VMVN): Delete.
	(UNSPEC_VSET_LANE): Delete.
	(V_double_vector_mode): New.
	(vec_set<mode>_internal): Make code emitted match that for the
	corresponding intrinsics.
	(vec_setv2di_internal): Likewise.
	(neon_vget_lanedi): Rewrite to expand into emit_move_insn.
	(neon_vget_lanev2di): Rewrite to expand into vec_extractv2di.
	(neon_vset_lane<mode>): Combine double and quad patterns and
	expand into vec_set<mode>_internal instead of UNSPEC_VSET_LANE.
	(neon_vset_lanedi): Rewrite to expand into emit_move_insn.
	(neon_vdup_n<mode>): Rewrite RTL without unspec.
	(neon_vdup_ndi): Rewrite as define_expand and use emit_move_insn.
	(neon_vdup_nv2di): Rewrite RTL without unspec and merge with
	neon_vdup_lanev2di, adjusting the pattern from the latter
	to be predicable for consistency.
	(neon_vdup_lane<mode>_internal): New.
	(neon_vdup_lane<mode>): Turn into a define_expand and rewrite
	to avoid using an unspec.
	(neon_vdup_lanedi): Rewrite RTL pattern to avoid unspec.
	(neon_vdup_lanev2di): Turn into a define_expand.
	(neon_vcombine): Rewrite pattern to eliminate UNSPEC_VCOMBINE.
	(neon_vget_high<mode>): Replace with....
	(neon_vget_highv16qi): New pattern using canonical RTL.
	(neon_vget_highv8hi): Likewise.
	(neon_vget_highv4si): Likewise.
	(neon_vget_highv4sf): Likewise.
	(neon_vget_highv2di): Likewise.
	(neon_vget_low<mode>): Replace with....
	(neon_vget_lowv16qi): New pattern using canonical RTL.
	(neon_vget_lowv8hi): Likewise.
	(neon_vget_lowv4si): Likewise.
	(neon_vget_lowv4sf): Likewise.
	(neon_vget_lowv2di): Likewise.

	* config/arm/neon.ml (Vget_lane): Add No_op attribute to suppress
	test for this emitting vmov.
	(Vset_lane): Likewise.
	(Vdup_n): Likewise.
	(Vmov_n): Likewise.

	* doc/arm-neon-intrinsics.texi: Regenerated.
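
	For illustration only (not part of the commit), the shape of the
	rewrite can be sketched in RTL; register numbers here are invented.
	Where the old neon.md patterns hid the operation behind an unspec,
	the new patterns use the canonical vec_duplicate and vec_select
	RTX codes, which passes such as combine and CSE understand, while
	an unspec is a black box to them:

```lisp
;; Old form: vdup.n behind an unspec, opaque to the optimizers.
(set (reg:V2SI 80)
     (unspec:V2SI [(reg:SI 81)] UNSPEC_VDUP_N))

;; New canonical form: the middle end can simplify and CSE this.
(set (reg:V2SI 80)
     (vec_duplicate:V2SI (reg:SI 81)))

;; Likewise vget_high/vget_low become plain vec_selects, e.g. a sketch
;; of neon_vget_highv4si (upper two lanes of a V4SI):
(set (reg:V2SI 80)
     (vec_select:V2SI (reg:V4SI 82)
                      (parallel [(const_int 2) (const_int 3)])))
```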

	gcc/testsuite/
	* gcc.target/arm/neon/vdup_ns64.c: Regenerated.
	* gcc.target/arm/neon/vdup_nu64.c: Regenerated.
	* gcc.target/arm/neon/vdupQ_ns64.c: Regenerated.
	* gcc.target/arm/neon/vdupQ_nu64.c: Regenerated.
	* gcc.target/arm/neon/vmov_ns64.c: Regenerated.
	* gcc.target/arm/neon/vmov_nu64.c: Regenerated.
	* gcc.target/arm/neon/vmovQ_ns64.c: Regenerated.
	* gcc.target/arm/neon/vmovQ_nu64.c: Regenerated.
	* gcc.target/arm/neon/vget_lanes64.c: Regenerated.
	* gcc.target/arm/neon/vget_laneu64.c: Regenerated.
	* gcc.target/arm/neon/vset_lanes64.c: Regenerated.
	* gcc.target/arm/neon/vset_laneu64.c: Regenerated.
	* gcc.target/arm/neon-vdup_ns64.c: New.
	* gcc.target/arm/neon-vdup_nu64.c: New.
	* gcc.target/arm/neon-vdupQ_ns64.c: New.
	* gcc.target/arm/neon-vdupQ_nu64.c: New.
	* gcc.target/arm/neon-vdupQ_lanes64.c: New.
	* gcc.target/arm/neon-vdupQ_laneu64.c: New.
	* gcc.target/arm/neon-vmov_ns64.c: New.
	* gcc.target/arm/neon-vmov_nu64.c: New.
	* gcc.target/arm/neon-vmovQ_ns64.c: New.
	* gcc.target/arm/neon-vmovQ_nu64.c: New.
	* gcc.target/arm/neon-vget_lanes64.c: New.
	* gcc.target/arm/neon-vget_laneu64.c: New.
	* gcc.target/arm/neon-vset_lanes64.c: New.
	* gcc.target/arm/neon-vset_laneu64.c: New.

From-SVN: r161720
parent 8c98c2a6
2010-07-02 Sandra Loosemore <sandra@codesourcery.com>
* config/arm/neon.md (vec_extractv2di): Correct error in register
numbering to reconcile with neon_vget_lanev2di.
......
@@ -8250,8 +8250,7 @@ neon_vdup_constant (rtx vals)
      load.  */
   x = copy_to_mode_reg (inner_mode, XVECEXP (vals, 0, 0));
-  return gen_rtx_UNSPEC (mode, gen_rtvec (1, x),
-                         UNSPEC_VDUP_N);
+  return gen_rtx_VEC_DUPLICATE (mode, x);
 }
/* Generate code to load VALS, which is a PARALLEL containing only
@@ -8347,8 +8346,7 @@ neon_expand_vector_init (rtx target, rtx vals)
     {
       x = copy_to_mode_reg (inner_mode, XVECEXP (vals, 0, 0));
       emit_insn (gen_rtx_SET (VOIDmode, target,
-                              gen_rtx_UNSPEC (mode, gen_rtvec (1, x),
-                                              UNSPEC_VDUP_N)));
+                              gen_rtx_VEC_DUPLICATE (mode, x)));
       return;
     }
@@ -8357,7 +8355,7 @@ neon_expand_vector_init (rtx target, rtx vals)
   if (n_var == 1)
     {
       rtx copy = copy_rtx (vals);
-      rtvec ops;
+      rtx index = GEN_INT (one_var);

       /* Load constant part of vector, substitute neighboring value for
          varying element.  */
@@ -8366,9 +8364,38 @@ neon_expand_vector_init (rtx target, rtx vals)
       /* Insert variable.  */
       x = copy_to_mode_reg (inner_mode, XVECEXP (vals, 0, one_var));
-      ops = gen_rtvec (3, x, target, GEN_INT (one_var));
-      emit_insn (gen_rtx_SET (VOIDmode, target,
-                              gen_rtx_UNSPEC (mode, ops, UNSPEC_VSET_LANE)));
+      switch (mode)
+        {
+        case V8QImode:
+          emit_insn (gen_neon_vset_lanev8qi (target, x, target, index));
+          break;
+        case V16QImode:
+          emit_insn (gen_neon_vset_lanev16qi (target, x, target, index));
+          break;
+        case V4HImode:
+          emit_insn (gen_neon_vset_lanev4hi (target, x, target, index));
+          break;
+        case V8HImode:
+          emit_insn (gen_neon_vset_lanev8hi (target, x, target, index));
+          break;
+        case V2SImode:
+          emit_insn (gen_neon_vset_lanev2si (target, x, target, index));
+          break;
+        case V4SImode:
+          emit_insn (gen_neon_vset_lanev4si (target, x, target, index));
+          break;
+        case V2SFmode:
+          emit_insn (gen_neon_vset_lanev2sf (target, x, target, index));
+          break;
+        case V4SFmode:
+          emit_insn (gen_neon_vset_lanev4sf (target, x, target, index));
+          break;
+        case V2DImode:
+          emit_insn (gen_neon_vset_lanev2di (target, x, target, index));
+          break;
+        default:
+          gcc_unreachable ();
+        }
       return;
     }
......
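As an aside (not part of the diff), the per-mode gen_neon_vset_lane* expanders the switch dispatches to now emit the standard vec_merge/vec_duplicate idiom through vec_set<mode>_internal instead of UNSPEC_VSET_LANE. A sketch of the resulting RTL for inserting a scalar into lane 0 of a V2SI vector, with invented register numbers:

```lisp
;; Insert (reg:SI 81) into lane 0 of (reg:V2SI 80).  The third operand
;; of vec_merge is a lane mask: bit n set means lane n is taken from
;; the duplicated scalar, the other lanes from the old vector value.
(set (reg:V2SI 80)
     (vec_merge:V2SI (vec_duplicate:V2SI (reg:SI 81))
                     (reg:V2SI 80)
                     (const_int 1)))
```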
@@ -967,7 +967,8 @@ let ops =
     Use_operands [| Corereg; Dreg; Immed |],
     "vget_lane", get_lane, pf_su_8_32;
   Vget_lane,
-    [InfoWord;
+    [No_op;
+     InfoWord;
      Disassembles_as [Use_operands [| Corereg; Corereg; Dreg |]];
      Instruction_name ["vmov"]; Const_valuator (fun _ -> 0)],
     Use_operands [| Corereg; Dreg; Immed |],
@@ -989,7 +990,8 @@ let ops =
      Instruction_name ["vmov"]],
     Use_operands [| Dreg; Corereg; Dreg; Immed |], "vset_lane",
     set_lane, pf_su_8_32;
-  Vset_lane, [Disassembles_as [Use_operands [| Dreg; Corereg; Corereg |]];
+  Vset_lane, [No_op;
+              Disassembles_as [Use_operands [| Dreg; Corereg; Corereg |]];
               Instruction_name ["vmov"]; Const_valuator (fun _ -> 0)],
     Use_operands [| Dreg; Corereg; Dreg; Immed |], "vset_lane",
     set_lane_notype, [S64; U64];
@@ -1017,7 +1019,8 @@ let ops =
     Use_operands [| Dreg; Corereg |], "vdup_n", bits_1,
     pf_su_8_32;
   Vdup_n,
-    [Instruction_name ["vmov"];
+    [No_op;
+     Instruction_name ["vmov"];
      Disassembles_as [Use_operands [| Dreg; Corereg; Corereg |]]],
     Use_operands [| Dreg; Corereg |], "vdup_n", notype_1,
     [S64; U64];
@@ -1028,7 +1031,8 @@ let ops =
     Use_operands [| Qreg; Corereg |], "vdupQ_n", bits_1,
     pf_su_8_32;
   Vdup_n,
-    [Instruction_name ["vmov"];
+    [No_op;
+     Instruction_name ["vmov"];
      Disassembles_as [Use_operands [| Dreg; Corereg; Corereg |];
                       Use_operands [| Dreg; Corereg; Corereg |]]],
     Use_operands [| Qreg; Corereg |], "vdupQ_n", notype_1,
@@ -1043,7 +1047,8 @@ let ops =
     Use_operands [| Dreg; Corereg |],
     "vmov_n", bits_1, pf_su_8_32;
   Vmov_n,
-    [Builtin_name "vdup_n";
+    [No_op;
+     Builtin_name "vdup_n";
      Instruction_name ["vmov"];
      Disassembles_as [Use_operands [| Dreg; Corereg; Corereg |]]],
     Use_operands [| Dreg; Corereg |],
@@ -1056,7 +1061,8 @@ let ops =
     Use_operands [| Qreg; Corereg |],
     "vmovQ_n", bits_1, pf_su_8_32;
   Vmov_n,
-    [Builtin_name "vdupQ_n";
+    [No_op;
+     Builtin_name "vdupQ_n";
      Instruction_name ["vmov"];
      Disassembles_as [Use_operands [| Dreg; Corereg; Corereg |];
                       Use_operands [| Dreg; Corereg; Corereg |]]],
......
@@ -4750,13 +4750,11 @@
 @itemize @bullet
 @item uint64_t vget_lane_u64 (uint64x1_t, const int)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{r0}, @var{r0}, @var{d0}}
 @end itemize

 @itemize @bullet
 @item int64_t vget_lane_s64 (int64x1_t, const int)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{r0}, @var{r0}, @var{d0}}
 @end itemize
@@ -4886,13 +4884,11 @@
 @itemize @bullet
 @item uint64x1_t vset_lane_u64 (uint64_t, uint64x1_t, const int)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{r0}, @var{r0}}
 @end itemize

 @itemize @bullet
 @item int64x1_t vset_lane_s64 (int64_t, int64x1_t, const int)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{r0}, @var{r0}}
 @end itemize
@@ -5081,13 +5077,11 @@
 @itemize @bullet
 @item uint64x1_t vdup_n_u64 (uint64_t)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{r0}, @var{r0}}
 @end itemize

 @itemize @bullet
 @item int64x1_t vdup_n_s64 (int64_t)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{r0}, @var{r0}}
 @end itemize
@@ -5147,13 +5141,11 @@
 @itemize @bullet
 @item uint64x2_t vdupq_n_u64 (uint64_t)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{r0}, @var{r0}}
 @end itemize

 @itemize @bullet
 @item int64x2_t vdupq_n_s64 (int64_t)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{r0}, @var{r0}}
 @end itemize
@@ -5213,13 +5205,11 @@
 @itemize @bullet
 @item uint64x1_t vmov_n_u64 (uint64_t)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{r0}, @var{r0}}
 @end itemize

 @itemize @bullet
 @item int64x1_t vmov_n_s64 (int64_t)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{r0}, @var{r0}}
 @end itemize
@@ -5279,13 +5269,11 @@
 @itemize @bullet
 @item uint64x2_t vmovq_n_u64 (uint64_t)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{r0}, @var{r0}}
 @end itemize

 @itemize @bullet
 @item int64x2_t vmovq_n_s64 (int64_t)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{r0}, @var{r0}}
 @end itemize
@@ -5572,32 +5560,30 @@
 @itemize @bullet
-@item uint64x1_t vget_low_u64 (uint64x2_t)
+@item float32x2_t vget_low_f32 (float32x4_t)
 @*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{d0}}
 @end itemize

 @itemize @bullet
-@item int64x1_t vget_low_s64 (int64x2_t)
+@item poly16x4_t vget_low_p16 (poly16x8_t)
 @*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{d0}}
 @end itemize

 @itemize @bullet
-@item float32x2_t vget_low_f32 (float32x4_t)
+@item poly8x8_t vget_low_p8 (poly8x16_t)
 @*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{d0}}
 @end itemize

 @itemize @bullet
-@item poly16x4_t vget_low_p16 (poly16x8_t)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{d0}}
+@item uint64x1_t vget_low_u64 (uint64x2_t)
 @end itemize

 @itemize @bullet
-@item poly8x8_t vget_low_p8 (poly8x16_t)
-@*@emph{Form of expected instruction(s):} @code{vmov @var{d0}, @var{d0}}
+@item int64x1_t vget_low_s64 (int64x2_t)
 @end itemize
......
......
/* Test the `vdupq_lanes64' ARM Neon intrinsic. */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
int64x2_t out_int64x2_t = {0, 0};
int64_t arg0_int64_t = (int64_t) 0xdeadbeef;
out_int64x2_t = vdupq_lane_s64 ((int64x1_t)arg0_int64_t, 0);
if (vgetq_lane_s64 (out_int64x2_t, 0) != arg0_int64_t)
abort();
if (vgetq_lane_s64 (out_int64x2_t, 1) != arg0_int64_t)
abort();
return 0;
}
/* Test the `vdupq_laneu64' ARM Neon intrinsic. */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
uint64x2_t out_uint64x2_t = {0, 0};
uint64_t arg0_uint64_t = (uint64_t) 0xdeadbeef;
out_uint64x2_t = vdupq_lane_u64 ((uint64x1_t)arg0_uint64_t, 0);
if (vgetq_lane_u64 (out_uint64x2_t, 0) != arg0_uint64_t)
abort();
if (vgetq_lane_u64 (out_uint64x2_t, 1) != arg0_uint64_t)
abort();
return 0;
}
/* Test the `vdupq_ns64' ARM Neon intrinsic. */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
int64x2_t out_int64x2_t = {0, 0};
int64_t arg0_int64_t = (int64_t) 0xdeadbeef;
out_int64x2_t = vdupq_n_s64 (arg0_int64_t);
if (vgetq_lane_s64 (out_int64x2_t, 0) != arg0_int64_t)
abort();
if (vgetq_lane_s64 (out_int64x2_t, 1) != arg0_int64_t)
abort();
return 0;
}
/* Test the `vdupq_nu64' ARM Neon intrinsic. */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
uint64x2_t out_uint64x2_t = {0, 0};
uint64_t arg0_uint64_t = (uint64_t) 0xdeadbeef;
out_uint64x2_t = vdupq_n_u64 (arg0_uint64_t);
if (vgetq_lane_u64 (out_uint64x2_t, 0) != arg0_uint64_t)
abort();
if (vgetq_lane_u64 (out_uint64x2_t, 1) != arg0_uint64_t)
abort();
return 0;
}
/* Test the `vdup_ns64' ARM Neon intrinsic. */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
int64x1_t out_int64x1_t = 0;
int64_t arg0_int64_t = (int64_t) 0xdeadbeef;
out_int64x1_t = vdup_n_s64 (arg0_int64_t);
if ((int64_t)out_int64x1_t != arg0_int64_t)
abort();
return 0;
}
/* Test the `vdup_nu64' ARM Neon intrinsic. */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
uint64x1_t out_uint64x1_t = 0;
uint64_t arg0_uint64_t = (uint64_t) 0xdeadbeef;
out_uint64x1_t = vdup_n_u64 (arg0_uint64_t);
if ((uint64_t)out_uint64x1_t != arg0_uint64_t)
abort();
return 0;
}
/* Test the `vget_lane_s64' ARM Neon intrinsic. */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
int64_t out_int64_t = 0;
int64x1_t arg0_int64x1_t = (int64x1_t) 0xdeadbeefbadf00dLL;
out_int64_t = vget_lane_s64 (arg0_int64x1_t, 0);
if (out_int64_t != (int64_t)arg0_int64x1_t)
abort();
return 0;
}
/* Test the `vget_lane_u64' ARM Neon intrinsic. */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
uint64_t out_uint64_t = 0;
uint64x1_t arg0_uint64x1_t = (uint64x1_t) 0xdeadbeefbadf00dLL;
out_uint64_t = vget_lane_u64 (arg0_uint64x1_t, 0);
if (out_uint64_t != (uint64_t)arg0_uint64x1_t)
abort();
return 0;
}
/* Test the `vmovq_ns64' ARM Neon intrinsic. */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
int64x2_t out_int64x2_t = {0, 0};
int64_t arg0_int64_t = (int64_t) 0xdeadbeef;
out_int64x2_t = vmovq_n_s64 (arg0_int64_t);
if (vgetq_lane_s64 (out_int64x2_t, 0) != arg0_int64_t)
abort();
if (vgetq_lane_s64 (out_int64x2_t, 1) != arg0_int64_t)
abort();
return 0;
}
/* Test the `vmovq_nu64' ARM Neon intrinsic. */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
uint64x2_t out_uint64x2_t = {0, 0};
uint64_t arg0_uint64_t = (uint64_t) 0xdeadbeef;
out_uint64x2_t = vmovq_n_u64 (arg0_uint64_t);
if (vgetq_lane_u64 (out_uint64x2_t, 0) != arg0_uint64_t)
abort();
if (vgetq_lane_u64 (out_uint64x2_t, 1) != arg0_uint64_t)
abort();
return 0;
}
/* Test the `vmov_ns64' ARM Neon intrinsic. */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
int64x1_t out_int64x1_t = 0;
int64_t arg0_int64_t = (int64_t) 0xdeadbeef;
out_int64x1_t = vmov_n_s64 (arg0_int64_t);
if ((int64_t)out_int64x1_t != arg0_int64_t)
abort();
return 0;
}
/* Test the `vmov_nu64' ARM Neon intrinsic. */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
uint64x1_t out_uint64x1_t = 0;
uint64_t arg0_uint64_t = (uint64_t) 0xdeadbeef;
out_uint64x1_t = vmov_n_u64 (arg0_uint64_t);
if ((uint64_t)out_uint64x1_t != arg0_uint64_t)
abort();
return 0;
}
/* Test the `vset_lane_s64' ARM Neon intrinsic. */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
int64x1_t out_int64x1_t = 0;
int64_t arg0_int64_t = 0xf00f00f00LL;
int64x1_t arg1_int64x1_t = (int64x1_t) 0xdeadbeefbadf00dLL;
out_int64x1_t = vset_lane_s64 (arg0_int64_t, arg1_int64x1_t, 0);
if ((int64_t)out_int64x1_t != arg0_int64_t)
abort();
return 0;
}
/* Test the `vset_lane_u64' ARM Neon intrinsic.  */
/* { dg-do run } */
/* { dg-require-effective-target arm_neon_hw } */
/* { dg-options "-O0" } */
/* { dg-add-options arm_neon } */
#include "arm_neon.h"
#include <stdlib.h>
int main (void)
{
uint64x1_t out_uint64x1_t = 0;
uint64_t arg0_uint64_t = 0xf00f00f00LL;
uint64x1_t arg1_uint64x1_t = (uint64x1_t) 0xdeadbeefbadf00dLL;
out_uint64x1_t = vset_lane_u64 (arg0_uint64_t, arg1_uint64x1_t, 0);
if ((uint64_t)out_uint64x1_t != arg0_uint64_t)
abort();
return 0;
}
@@ -16,6 +16,4 @@ void test_vdupQ_ns64 (void)
   out_int64x2_t = vdupq_n_s64 (arg0_int64_t);
 }

-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
 /* { dg-final { cleanup-saved-temps } } */
@@ -16,6 +16,4 @@ void test_vdupQ_nu64 (void)
   out_uint64x2_t = vdupq_n_u64 (arg0_uint64_t);
 }

-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
 /* { dg-final { cleanup-saved-temps } } */
@@ -16,5 +16,4 @@ void test_vdup_ns64 (void)
   out_int64x1_t = vdup_n_s64 (arg0_int64_t);
 }

-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
 /* { dg-final { cleanup-saved-temps } } */
@@ -16,5 +16,4 @@ void test_vdup_nu64 (void)
   out_uint64x1_t = vdup_n_u64 (arg0_uint64_t);
 }

-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
 /* { dg-final { cleanup-saved-temps } } */
@@ -16,5 +16,4 @@ void test_vget_lanes64 (void)
   out_int64_t = vget_lane_s64 (arg0_int64x1_t, 0);
 }

-/* { dg-final { scan-assembler "vmov\[ \]+\[rR\]\[0-9\]+, \[rR\]\[0-9\]+, \[dD\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
 /* { dg-final { cleanup-saved-temps } } */
@@ -16,5 +16,4 @@ void test_vget_laneu64 (void)
   out_uint64_t = vget_lane_u64 (arg0_uint64x1_t, 0);
 }

-/* { dg-final { scan-assembler "vmov\[ \]+\[rR\]\[0-9\]+, \[rR\]\[0-9\]+, \[dD\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
 /* { dg-final { cleanup-saved-temps } } */
@@ -16,6 +16,4 @@ void test_vmovQ_ns64 (void)
   out_int64x2_t = vmovq_n_s64 (arg0_int64_t);
 }

-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
 /* { dg-final { cleanup-saved-temps } } */
@@ -16,6 +16,4 @@ void test_vmovQ_nu64 (void)
   out_uint64x2_t = vmovq_n_u64 (arg0_uint64_t);
 }

-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
 /* { dg-final { cleanup-saved-temps } } */
@@ -16,5 +16,4 @@ void test_vmov_ns64 (void)
   out_int64x1_t = vmov_n_s64 (arg0_int64_t);
 }

-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
 /* { dg-final { cleanup-saved-temps } } */
@@ -16,5 +16,4 @@ void test_vmov_nu64 (void)
   out_uint64x1_t = vmov_n_u64 (arg0_uint64_t);
 }

-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
 /* { dg-final { cleanup-saved-temps } } */
@@ -17,5 +17,4 @@ void test_vset_lanes64 (void)
   out_int64x1_t = vset_lane_s64 (arg0_int64_t, arg1_int64x1_t, 0);
 }

-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
 /* { dg-final { cleanup-saved-temps } } */
@@ -17,5 +17,4 @@ void test_vset_laneu64 (void)
   out_uint64x1_t = vset_lane_u64 (arg0_uint64_t, arg1_uint64x1_t, 0);
 }

-/* { dg-final { scan-assembler "vmov\[ \]+\[dD\]\[0-9\]+, \[rR\]\[0-9\]+, \[rR\]\[0-9\]+!?\(\[ \]+@\[a-zA-Z0-9 \]+\)?\n" } } */
 /* { dg-final { cleanup-saved-temps } } */