Commit 6b3a1ce9 by Matthew Wahab

re PR target/65697 (__atomic memory barriers not strong enough for __sync builtins)

2015-06-29  Matthew Wahab  <matthew.wahab@arm.com>

	PR target/65697
	* config/arm/arm.c (arm_split_atomic_op): For ARMv8, replace an
	initial acquire barrier with a final barrier.

From-SVN: r225132
parent e85f8bb8
@@ -27679,6 +27679,8 @@ arm_split_atomic_op (enum rtx_code code, rtx old_out, rtx new_out, rtx mem,
   rtx_code_label *label;
   rtx x;
 
+  bool is_armv8_sync = arm_arch8 && is_mm_sync (model);
+
   bool use_acquire = TARGET_HAVE_LDACQ
 		     && !(is_mm_relaxed (model) || is_mm_consume (model)
 			  || is_mm_release (model));
@@ -27687,6 +27689,11 @@ arm_split_atomic_op (enum rtx_code code, rtx old_out, rtx new_out, rtx mem,
 		     && !(is_mm_relaxed (model) || is_mm_consume (model)
 			  || is_mm_acquire (model));
 
+  /* For ARMv8, a load-acquire is too weak for __sync memory orders.  Instead,
+     a full barrier is emitted after the store-release.  */
+  if (is_armv8_sync)
+    use_acquire = false;
+
   /* Checks whether a barrier is needed and emits one accordingly.  */
   if (!(use_acquire || use_release))
     arm_pre_atomic_barrier (model);
@@ -27757,7 +27764,8 @@ arm_split_atomic_op (enum rtx_code code, rtx old_out, rtx new_out, rtx mem,
   emit_unlikely_jump (gen_cbranchsi4 (x, cond, const0_rtx, label));
 
   /* Checks whether a barrier is needed and emits one accordingly.  */
-  if (!(use_acquire || use_release))
+  if (is_armv8_sync
+      || !(use_acquire || use_release))
     arm_post_atomic_barrier (model);
 }
 