Commit 30fa8e9c by Jakub Jelinek (committed by Jakub Jelinek)

re PR tree-optimization/83170 (ICE: Segmentation fault - during GIMPLE pass: store-merging)

	PR tree-optimization/83170
	PR tree-optimization/83241
	* gimple-ssa-store-merging.c
	(imm_store_chain_info::try_coalesce_bswap): Update vuse field from
	gimple_vuse (ins_stmt) in case it has changed.
	(imm_store_chain_info::output_merged_store): Likewise.

	* gcc.dg/store_merging_17.c: New test.

From-SVN: r255356
parent edb48cdb
gcc/ChangeLog:

2017-12-02  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/83170
	PR tree-optimization/83241
	* gimple-ssa-store-merging.c
	(imm_store_chain_info::try_coalesce_bswap): Update vuse field from
	gimple_vuse (ins_stmt) in case it has changed.
	(imm_store_chain_info::output_merged_store): Likewise.
	* tree-chkp.c (chkp_compute_bounds_for_assignment): Handle
	POINTER_DIFF_EXPR.
gcc/gimple-ssa-store-merging.c:

@@ -2384,6 +2384,9 @@ imm_store_chain_info::try_coalesce_bswap (merged_store_group *merged_store,
       this_n.type = type;
       if (!this_n.base_addr)
 	this_n.range = try_size / BITS_PER_UNIT;
+      else
+	/* Update vuse in case it has changed by output_merged_stores.  */
+	this_n.vuse = gimple_vuse (info->ins_stmt);
       unsigned int bitpos = info->bitpos - infof->bitpos;
       if (!do_shift_rotate (LSHIFT_EXPR, &this_n,
 			    BYTES_BIG_ENDIAN
@@ -3341,10 +3344,16 @@ imm_store_chain_info::output_merged_store (merged_store_group *group)
 	     we've checked the aliasing already in try_coalesce_bswap and
 	     we want to sink the need load into seq.  So need to use new_vuse
 	     on the load.  */
-	  if (n->base_addr && n->vuse == NULL)
+	  if (n->base_addr)
 	    {
-	      n->vuse = new_vuse;
-	      ins_stmt = NULL;
+	      if (n->vuse == NULL)
+		{
+		  n->vuse = new_vuse;
+		  ins_stmt = NULL;
+		}
+	      else
+		/* Update vuse in case it has changed by output_merged_stores.  */
+		n->vuse = gimple_vuse (ins_stmt);
 	    }
 	  bswap_res = bswap_replace (gsi_start (seq), ins_stmt, fndecl,
 				     bswap_type, load_type, n, bswap);
gcc/testsuite/ChangeLog:

2017-12-02  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/83170
	PR tree-optimization/83241
	* gcc.dg/store_merging_17.c: New test.
	* gcc.target/i386/mpx/pointer-diff-1.c: New test.

	PR c++/81212
gcc/testsuite/gcc.dg/store_merging_17.c (new file):

/* PR tree-optimization/83241 */
/* { dg-do compile { target store_merge } } */
/* { dg-options "-O2" } */

struct S { int a; short b[32]; } e;
struct T { volatile int c; int d; } f;

void
foo ()
{
  struct T g = f;
  e.b[0] = 6;
  e.b[1] = 6;
  e.b[4] = g.d;
  e.b[5] = g.d >> 16;
  e.a = 1;
}
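For context, the stores to e.b[4] and e.b[5] above write g.d as two adjacent 16-bit halves, which is the kind of pattern the try_coalesce_bswap path coalesces into a single wider (possibly byte-swapped) store, and why it records the vuse of the original load. A minimal standalone sketch of that idiom, using a hypothetical function name that is not part of the commit, might look like:

/* Hypothetical illustration only: a 32-bit value written as two
   adjacent 16-bit halves.  Store merging can recognize such a pair as
   one wider store of VAL (or a byte-swapped VAL, depending on
   endianness) sourced from a single load.  */
void
store_halves (short *dst, int val)
{
  dst[0] = val;        /* low 16 bits */
  dst[1] = val >> 16;  /* high 16 bits */
}

As the new comments in the patch note, emitting one merged chain (output_merged_stores) can rewrite virtual operands, so a vuse recorded earlier for such a coalesced load has to be re-read via gimple_vuse (ins_stmt) before it is reused; that refresh is what this commit adds for the segmentation fault in the PR.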