Commit ee03e71d by Richard Biener

lto-streamer.h (enum LTO_tags): Add LTO_tree_scc.

2013-06-17  Richard Biener  <rguenther@suse.de>

	* lto-streamer.h (enum LTO_tags): Add LTO_tree_scc.
	(lto_input_scc): Declare.
	(lto_input_tree_1): Likewise.
	(struct lto_stats_d): Add num_tree_bodies_output and
	num_pickle_refs_output.
	* lto-streamer-in.c (lto_read_body): Use streamer_tree_cache_get_tree.
	(lto_read_tree_1): Split out from ...
	(lto_read_tree): ... this.
	(lto_input_scc): New function.
	(lto_input_tree_1): Split out from ...
	(lto_input_tree): ... this.  Handle LTO_tree_scc.
	(lto_data_in_create): Create the streamer cache without hashes.
	* lto-streamer-out.c (create_output_block): Create the streamer
	cache with hashes when not doing WPA.
	(lto_write_tree_1): Split out from ...
	(lto_write_tree): ... this.
	(get_symbol_initial_value): New function.
	(lto_output_tree_1): Split out from ...
	(lto_output_tree): ... this.  Write trees as series of SCCs
	using a DFS walk via DFS_write_tree.
	(struct sccs, struct scc_entry): New types.
	(next_dfs_num, sccstack, sccstate, sccstate_obstack): New globals.
	(DFS_write_tree_body): New function.
	(DFS_write_tree): Likewise.
	(hash_tree): Likewise.
	(scc_entry_compare): Likewise.
	(hash_scc): Likewise.
	(tree_is_indexable): DEBUG_EXPR_DECLs are local entities.
	* tree-streamer-in.c (lto_input_ts_list_tree_pointers): Stream
	TREE_CHAIN as regular reference.
	(streamer_read_integer_cst): Remove.
	(streamer_get_pickled_tree): Adjust.
	* tree-streamer-out.c (streamer_write_chain): Disable streaming
	of DECL_EXTERNALs in BLOCK_VARS for now.
	(write_ts_list_tree_pointers): Stream TREE_CHAIN as regular
	reference.
	* tree-streamer.c (streamer_tree_cache_add_to_node_array):
	Add hash value argument and record that if hashes are recorded
	in the cache.
	(streamer_tree_cache_insert_1): Adjust.
	(streamer_tree_cache_insert): Likewise.
	(streamer_tree_cache_insert_at): Rename to ...
	(streamer_tree_cache_replace_tree): ... this and adjust.
	(streamer_tree_cache_append): Adjust.
	(record_common_node): Likewise.
	(streamer_tree_cache_create): Add argument whether to
	record hash values together with trees.
	(streamer_tree_cache_delete): Adjust.
	* tree-streamer.h (struct streamer_tree_cache_d): Add
	vector of hashes.
	(streamer_read_integer_cst): Remove.
	(streamer_tree_cache_insert): Adjust.
	(streamer_tree_cache_append): Likewise.
	(streamer_tree_cache_insert_at): Rename to ...
	(streamer_tree_cache_replace_tree): ... this and adjust.
	(streamer_tree_cache_create): Add argument whether to record hashes.
	(streamer_tree_cache_get): Rename to ...
	(streamer_tree_cache_get_tree): ... this.
	(streamer_tree_cache_get_hash): New function.
	* tree.c (cache_integer_cst): New function.
	* tree.h (cache_integer_cst): Declare.
	(ANON_AGGRNAME_FORMAT, ANON_AGGRNAME_P): Move here from cp/cp-tree.h.
	* lto-symtab.c (lto_varpool_replace_node): Only release
	DECL_INITIAL of non-prevailing decls.
	* varpool.c (varpool_remove_initializer): Do not release
	DECL_INITIAL when we are still in CGRAPH_LTO_STREAMING.

	cp/
	* cp-tree.h (ANON_AGGRNAME_FORMAT, ANON_AGGRNAME_P): Move to tree.h.

	lto/
	* Make-lang.in (lto.o): Add $(DATA_STREAMER_H) dependency.
	* lto.c: Include data-streamer.h.
	(lto_read_in_decl_state): Use streamer_tree_cache_get_tree.
	(gimple_type_leader_entry_s, gimple_type_leader,
	gimple_lookup_type_leader): Remove.
	(gtc_visit): Simplify.
	(gimple_types_compatible_p): Likewise.
	(gimple_register_type_1): Likewise.  Merge into ...
	(gimple_register_type): ... this.  Keep it as legacy for
	statistics purposes for now.
	(fixup_integer_cst): Remove.
	(LTO_FIXUP_TREE, lto_fixup_types, lto_ft_*): Simplify and
	rename to ...
	(MAYBE_REMEMBER_WITH_VARS, maybe_remember_with_vars,
	maybe_remember_with_vars_*): ... these.
	(uniquify_nodes): Remove.
	(lto_fixup_prevailing_type): New function.
	(struct tree_scc, struct tree_scc_hasher): New type and hasher.
	(tree_scc_hash, tree_scc_hash_obstack): New globals.
	(num_merged_types, num_prevailing_types, num_not_merged_types,
	num_not_merged_types_in_same_scc, total_scc_size, num_sccs_read,
	total_scc_size_merged, num_sccs_merged, num_scc_compares,
	num_scc_compare_collisions): New global counters.
	(compare_tree_sccs_1): New function.
	(compare_tree_sccs): Likewise.
	(unify_scc): Likewise.
	(lto_read_decls): Stream in tree SCCs and unify them on the
	way in.  Finalize prevailing SCC tree members.
	(read_cgraph_and_symbols): Do not initialize or free gimple_type_leader.
	Allocate and free tree_scc_hash_obstack and tree_scc_hash, do not bother
	to ggc-collect during merging.
	(print_lto_report_1): Adjust for new merging code.

From-SVN: r200151
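The core mechanism of this change is a Tarjan-style depth-first walk over
the tree graph: DFS_write_tree tracks a dfsnum/low pair per node, pops a
strongly connected component (SCC) off sccstack whenever low == dfsnum,
hashes it independently of visit order, and streams it as a single
LTO_tree_scc record that the reader can unify wholesale.  The following
standalone C program is a minimal sketch of that SCC-popping discipline,
not code from this commit; the graph and all names are invented.

/* Sketch: pop a complete SCC whenever low == dfsnum, as
   DFS_write_tree does with sccstack.  GCC would stream each
   popped group as one LTO_tree_scc record.  */
#include <stdio.h>

#define N 5
static const int adj[N][N] = {
  {0,1,0,0,0},   /* 0 -> 1 */
  {0,0,1,0,0},   /* 1 -> 2 */
  {0,1,0,1,0},   /* 2 -> 1 and 2 -> 3; nodes 1,2 form a cycle */
  {0,0,0,0,1},   /* 3 -> 4 */
  {0,0,0,0,0}
};

static int dfsnum[N], low[N], on_stack[N], next_dfs = 1;
static int stack[N], sp;

static void
dfs (int v)
{
  dfsnum[v] = low[v] = next_dfs++;
  stack[sp++] = v;
  on_stack[v] = 1;
  for (int w = 0; w < N; w++)
    if (adj[v][w])
      {
        if (!dfsnum[w])
          {
            dfs (w);               /* Tree edge: recurse.  */
            if (low[w] < low[v])
              low[v] = low[w];
          }
        else if (on_stack[w] && dfsnum[w] < low[v])
          low[v] = dfsnum[w];      /* Back edge into the open SCC.  */
      }
  if (low[v] == dfsnum[v])
    {
      /* V is the root of an SCC: pop all of its members.  */
      printf ("SCC:");
      int w;
      do
        {
          w = stack[--sp];
          on_stack[w] = 0;
          printf (" %d", w);
        }
      while (w != v);
      printf ("\n");
    }
}

int
main (void)
{
  for (int v = 0; v < N; v++)
    if (!dfsnum[v])
      dfs (v);
  /* Prints: SCC: 4 / SCC: 3 / SCC: 2 1 / SCC: 0 -- acyclic nodes are
     singleton SCCs, the 1<->2 cycle comes out as one unit, and a
     referenced SCC is always emitted before its referrers.  */
  return 0;
}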
cp/cp-tree.h
@@ -4345,8 +4345,6 @@ extern GTY(()) vec<tree, va_gc> *local_classes;
#define VFIELD_NAME "_vptr."
#define VFIELD_NAME_FORMAT "_vptr.%s"
#define ANON_AGGRNAME_FORMAT "._%d"
#else /* NO_DOT_IN_LABEL */
#ifndef NO_DOLLAR_IN_LABEL
@@ -4357,7 +4355,6 @@ extern GTY(()) vec<tree, va_gc> *local_classes;
#define VFIELD_BASE "$vf"
#define VFIELD_NAME "_vptr$"
#define VFIELD_NAME_FORMAT "_vptr$%s"
#define ANON_AGGRNAME_FORMAT "$_%d"
#else /* NO_DOLLAR_IN_LABEL */
@@ -4376,12 +4373,6 @@ extern GTY(()) vec<tree, va_gc> *local_classes;
sizeof (VFIELD_NAME) - 1))
#define VFIELD_NAME_FORMAT "__vptr_%s"
#define ANON_AGGRNAME_PREFIX "__anon_"
#define ANON_AGGRNAME_P(ID_NODE) \
(!strncmp (IDENTIFIER_POINTER (ID_NODE), ANON_AGGRNAME_PREFIX, \
sizeof (ANON_AGGRNAME_PREFIX) - 1))
#define ANON_AGGRNAME_FORMAT "__anon_%d"
#endif /* NO_DOLLAR_IN_LABEL */
#endif /* NO_DOT_IN_LABEL */
@@ -4418,11 +4409,6 @@ extern GTY(()) vec<tree, va_gc> *local_classes;
#define VFIELD_NAME_P(ID_NODE) \
(!strncmp (IDENTIFIER_POINTER (ID_NODE), VFIELD_NAME, sizeof(VFIELD_NAME)-1))
/* For anonymous aggregate types, we need some sort of name to
hold on to. In practice, this should not appear, but it should
not be harmful if it does. */
#define ANON_AGGRNAME_P(ID_NODE) (IDENTIFIER_POINTER (ID_NODE)[0] == JOINER \
&& IDENTIFIER_POINTER (ID_NODE)[1] == '_')
#endif /* !defined(NO_DOLLAR_IN_LABEL) || !defined(NO_DOT_IN_LABEL) */
lto-streamer-in.c
@@ -1016,7 +1016,7 @@ lto_read_body (struct lto_file_decl_data *file_data, tree fn_decl,
unsigned i;
for (i = len; i-- > from;)
{
tree t = cache->nodes[i];
tree t = streamer_tree_cache_get_tree (cache, i);
if (t == NULL_TREE)
continue;
@@ -1056,12 +1056,43 @@ lto_input_function_body (struct lto_file_decl_data *file_data,
}
/* Read the physical representation of a tree node EXPR from
input block IB using the per-file context in DATA_IN. */
static void
lto_read_tree_1 (struct lto_input_block *ib, struct data_in *data_in, tree expr)
{
/* Read all the bitfield values in EXPR. Note that for LTO, we
only write language-independent bitfields, so no more unpacking is
needed. */
streamer_read_tree_bitfields (ib, data_in, expr);
/* Read all the pointer fields in EXPR. */
streamer_read_tree_body (ib, data_in, expr);
/* Read any LTO-specific data not read by the tree streamer. */
if (DECL_P (expr)
&& TREE_CODE (expr) != FUNCTION_DECL
&& TREE_CODE (expr) != TRANSLATION_UNIT_DECL)
DECL_INITIAL (expr) = stream_read_tree (ib, data_in);
/* We should never try to instantiate an MD or NORMAL builtin here. */
if (TREE_CODE (expr) == FUNCTION_DECL)
gcc_assert (!streamer_handle_as_builtin_p (expr));
#ifdef LTO_STREAMER_DEBUG
/* Remove the mapping to RESULT's original address set by
streamer_alloc_tree. */
lto_orig_address_remove (expr);
#endif
}
/* Read the physical representation of a tree node with tag TAG from
input block IB using the per-file context in DATA_IN. */
static tree
lto_read_tree (struct lto_input_block *ib, struct data_in *data_in,
enum LTO_tags tag)
enum LTO_tags tag, hashval_t hash)
{
/* Instantiate a new tree node. */
tree result = streamer_alloc_tree (ib, data_in, tag);
@@ -1069,35 +1100,70 @@ lto_read_tree (struct lto_input_block *ib, struct data_in *data_in,
/* Enter RESULT in the reader cache. This will make RESULT
available so that circular references in the rest of the tree
structure can be resolved in subsequent calls to stream_read_tree. */
streamer_tree_cache_append (data_in->reader_cache, result);
streamer_tree_cache_append (data_in->reader_cache, result, hash);
/* Read all the bitfield values in RESULT. Note that for LTO, we
only write language-independent bitfields, so no more unpacking is
needed. */
streamer_read_tree_bitfields (ib, data_in, result);
lto_read_tree_1 (ib, data_in, result);
/* Read all the pointer fields in RESULT. */
streamer_read_tree_body (ib, data_in, result);
/* end_marker = */ streamer_read_uchar (ib);
/* Read any LTO-specific data not read by the tree streamer. */
if (DECL_P (result)
&& TREE_CODE (result) != FUNCTION_DECL
&& TREE_CODE (result) != TRANSLATION_UNIT_DECL)
DECL_INITIAL (result) = stream_read_tree (ib, data_in);
return result;
}
/* We should never try to instantiate an MD or NORMAL builtin here. */
if (TREE_CODE (result) == FUNCTION_DECL)
gcc_assert (!streamer_handle_as_builtin_p (result));
/* end_marker = */ streamer_read_uchar (ib);
/* Populate the reader cache with trees materialized from the SCC
following in the IB, DATA_IN stream. */
#ifdef LTO_STREAMER_DEBUG
/* Remove the mapping to RESULT's original address set by
streamer_alloc_tree. */
lto_orig_address_remove (result);
#endif
hashval_t
lto_input_scc (struct lto_input_block *ib, struct data_in *data_in,
unsigned *len, unsigned *entry_len)
{
/* A blob of unnamed tree nodes, fill the cache from it and
recurse. */
unsigned size = streamer_read_uhwi (ib);
hashval_t scc_hash = streamer_read_uhwi (ib);
unsigned scc_entry_len = 1;
return result;
if (size == 1)
{
enum LTO_tags tag = streamer_read_record_start (ib);
lto_input_tree_1 (ib, data_in, tag, scc_hash);
}
else
{
unsigned int first = data_in->reader_cache->nodes.length ();
tree result;
scc_entry_len = streamer_read_uhwi (ib);
/* Materialize size trees by reading their headers. */
for (unsigned i = 0; i < size; ++i)
{
enum LTO_tags tag = streamer_read_record_start (ib);
if (tag == LTO_null
|| (tag >= LTO_field_decl_ref && tag <= LTO_global_decl_ref)
|| tag == LTO_tree_pickle_reference
|| tag == LTO_builtin_decl
|| tag == LTO_integer_cst
|| tag == LTO_tree_scc)
gcc_unreachable ();
result = streamer_alloc_tree (ib, data_in, tag);
streamer_tree_cache_append (data_in->reader_cache, result, 0);
}
/* Read the tree bitpacks and references. */
for (unsigned i = 0; i < size; ++i)
{
result = streamer_tree_cache_get_tree (data_in->reader_cache,
first + i);
lto_read_tree_1 (ib, data_in, result);
/* end_marker = */ streamer_read_uchar (ib);
}
}
*len = size;
*entry_len = scc_entry_len;
return scc_hash;
}
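To see why lto_input_scc materializes an SCC in two passes -- first
allocating every node from its header, then filling in the bodies --
note that back and cross references inside the SCC must resolve against
nodes that already sit in the reader cache.  A minimal sketch of the
idea, not GCC code; the node layout and names are invented.

/* Sketch: two-pass materialization of a cyclic structure.  Pass 1
   allocates shells (streamer_alloc_tree's role); pass 2 fills in
   pointer fields, so references into the cycle resolve.  */
#include <stdio.h>
#include <stdlib.h>

struct node { int id; struct node *ref; };

int
main (void)
{
  /* "Serialized" SCC: node i references node ref_of[i] (a 3-cycle).  */
  const int ref_of[3] = { 1, 2, 0 };
  struct node *cache[3];

  /* Pass 1: allocate shells and enter them in the cache.  */
  for (int i = 0; i < 3; i++)
    {
      cache[i] = malloc (sizeof (struct node));
      cache[i]->id = i;
    }

  /* Pass 2: read bodies; every reference hits the cache.  */
  for (int i = 0; i < 3; i++)
    cache[i]->ref = cache[ref_of[i]];

  printf ("%d -> %d\n", cache[0]->id, cache[0]->ref->id); /* 0 -> 1 */
  for (int i = 0; i < 3; i++)
    free (cache[i]);
  return 0;
}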
@@ -1106,12 +1172,11 @@ lto_read_tree (struct lto_input_block *ib, struct data_in *data_in,
to previously read nodes. */
tree
lto_input_tree (struct lto_input_block *ib, struct data_in *data_in)
lto_input_tree_1 (struct lto_input_block *ib, struct data_in *data_in,
enum LTO_tags tag, hashval_t hash)
{
enum LTO_tags tag;
tree result;
tag = streamer_read_record_start (ib);
gcc_assert ((unsigned) tag < (unsigned) LTO_NUM_TAGS);
if (tag == LTO_null)
@@ -1137,19 +1202,39 @@ lto_input_tree (struct lto_input_block *ib, struct data_in *data_in)
}
else if (tag == LTO_integer_cst)
{
/* For shared integer constants we only need the type and its hi/low
words. */
result = streamer_read_integer_cst (ib, data_in);
/* For shared integer constants in singletons we can use the existing
tree integer constant merging code. */
tree type = stream_read_tree (ib, data_in);
unsigned HOST_WIDE_INT low = streamer_read_uhwi (ib);
HOST_WIDE_INT high = streamer_read_hwi (ib);
result = build_int_cst_wide (type, low, high);
streamer_tree_cache_append (data_in->reader_cache, result, hash);
}
else if (tag == LTO_tree_scc)
{
unsigned len, entry_len;
/* Input and skip the SCC. */
lto_input_scc (ib, data_in, &len, &entry_len);
/* Recurse. */
return lto_input_tree (ib, data_in);
}
else
{
/* Otherwise, materialize a new node from IB. */
result = lto_read_tree (ib, data_in, tag);
result = lto_read_tree (ib, data_in, tag, hash);
}
return result;
}
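The LTO_integer_cst path above relies on INTEGER_CST sharing: rebuilding
a constant from its type and hi/low words through build_int_cst_wide
returns the canonical node for that (type, value) pair, so singleton
constants merge for free.  A toy hash-consing sketch of that property,
not GCC code; all names are invented and the table is a linear scan
without bounds checking, for brevity.

/* Sketch: building the same (type, value) constant twice must
   return the same node.  */
#include <stdio.h>

struct int_cst { int type_id; long value; };

static struct int_cst pool[64];
static int npool;

static struct int_cst *
build_int_cst_toy (int type_id, long value)
{
  for (int i = 0; i < npool; i++)
    if (pool[i].type_id == type_id && pool[i].value == value)
      return &pool[i];            /* Reuse the shared node.  */
  pool[npool].type_id = type_id;
  pool[npool].value = value;
  return &pool[npool++];
}

int
main (void)
{
  struct int_cst *a = build_int_cst_toy (1, 42);
  struct int_cst *b = build_int_cst_toy (1, 42);
  printf ("%s\n", a == b ? "shared" : "distinct"); /* prints: shared */
  return 0;
}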
tree
lto_input_tree (struct lto_input_block *ib, struct data_in *data_in)
{
return lto_input_tree_1 (ib, data_in, streamer_read_record_start (ib), 0);
}
/* Input toplevel asms. */
@@ -1220,7 +1305,7 @@ lto_data_in_create (struct lto_file_decl_data *file_data, const char *strings,
data_in->strings = strings;
data_in->strings_len = len;
data_in->globals_resolution = resolutions;
data_in->reader_cache = streamer_tree_cache_create ();
data_in->reader_cache = streamer_tree_cache_create (false);
return data_in;
}
lto-streamer-out.c
@@ -71,7 +71,7 @@ create_output_block (enum lto_section_type section_type)
ob->decl_state = lto_get_out_decl_state ();
ob->main_stream = XCNEW (struct lto_output_stream);
ob->string_stream = XCNEW (struct lto_output_stream);
ob->writer_cache = streamer_tree_cache_create ();
ob->writer_cache = streamer_tree_cache_create (!flag_wpa);
if (section_type == LTO_section_function_body)
ob->cfg_stream = XCNEW (struct lto_output_stream);
@@ -129,6 +129,8 @@ tree_is_indexable (tree t)
else if (TREE_CODE (t) == VAR_DECL && decl_function_context (t)
&& !TREE_STATIC (t))
return false;
else if (TREE_CODE (t) == DEBUG_EXPR_DECL)
return false;
/* Variably modified types need to be streamed alongside function
bodies because they can refer to local entities. Together with
them we have to localize their members as well.
@@ -293,27 +295,48 @@ lto_is_streamable (tree expr)
}
/* For EXPR lookup and return what we want to stream to OB as DECL_INITIAL. */
static tree
get_symbol_initial_value (struct output_block *ob, tree expr)
{
gcc_checking_assert (DECL_P (expr)
&& TREE_CODE (expr) != FUNCTION_DECL
&& TREE_CODE (expr) != TRANSLATION_UNIT_DECL);
/* Handle DECL_INITIAL for symbols. */
tree initial = DECL_INITIAL (expr);
if (TREE_CODE (expr) == VAR_DECL
&& (TREE_STATIC (expr) || DECL_EXTERNAL (expr))
&& !DECL_IN_CONSTANT_POOL (expr)
&& initial)
{
lto_symtab_encoder_t encoder;
struct varpool_node *vnode;
encoder = ob->decl_state->symtab_node_encoder;
vnode = varpool_get_node (expr);
if (!vnode
|| !lto_symtab_encoder_encode_initializer_p (encoder,
vnode))
initial = error_mark_node;
}
return initial;
}
/* Write a physical representation of tree node EXPR to output block
OB. If REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. IX is the index into the streamer cache
where EXPR is stored. */
static void
lto_write_tree (struct output_block *ob, tree expr, bool ref_p)
lto_write_tree_1 (struct output_block *ob, tree expr, bool ref_p)
{
struct bitpack_d bp;
if (!lto_is_streamable (expr))
internal_error ("tree code %qs is not supported in LTO streams",
tree_code_name[TREE_CODE (expr)]);
/* Write the header, containing everything needed to materialize
EXPR on the reading side. */
streamer_write_tree_header (ob, expr);
/* Pack all the non-pointer fields in EXPR into a bitpack and write
the resulting bitpack. */
bp = bitpack_create (ob->main_stream);
bitpack_d bp = bitpack_create (ob->main_stream);
streamer_pack_tree_bitfields (ob, &bp, expr);
streamer_write_bitpack (&bp);
@@ -326,30 +349,939 @@ lto_write_tree (struct output_block *ob, tree expr, bool ref_p)
&& TREE_CODE (expr) != TRANSLATION_UNIT_DECL)
{
/* Handle DECL_INITIAL for symbols. */
tree initial = DECL_INITIAL (expr);
if (TREE_CODE (expr) == VAR_DECL
&& (TREE_STATIC (expr) || DECL_EXTERNAL (expr))
&& !DECL_IN_CONSTANT_POOL (expr)
&& initial)
{
lto_symtab_encoder_t encoder;
struct varpool_node *vnode;
encoder = ob->decl_state->symtab_node_encoder;
vnode = varpool_get_node (expr);
if (!vnode
|| !lto_symtab_encoder_encode_initializer_p (encoder,
vnode))
initial = error_mark_node;
}
tree initial = get_symbol_initial_value (ob, expr);
stream_write_tree (ob, initial, ref_p);
}
}
/* Write a physical representation of tree node EXPR to output block
OB. If REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. IX is the index into the streamer cache
where EXPR is stored. */
static void
lto_write_tree (struct output_block *ob, tree expr, bool ref_p)
{
if (!lto_is_streamable (expr))
internal_error ("tree code %qs is not supported in LTO streams",
tree_code_name[TREE_CODE (expr)]);
/* Write the header, containing everything needed to materialize
EXPR on the reading side. */
streamer_write_tree_header (ob, expr);
lto_write_tree_1 (ob, expr, ref_p);
/* Mark the end of EXPR. */
streamer_write_zero (ob);
}
/* Emit the physical representation of tree node EXPR to output block
OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. REF_P is used for streaming siblings of EXPR. */
static void
lto_output_tree_1 (struct output_block *ob, tree expr, hashval_t hash,
bool ref_p, bool this_ref_p)
{
unsigned ix;
gcc_checking_assert (expr != NULL_TREE
&& !(this_ref_p && tree_is_indexable (expr)));
bool exists_p = streamer_tree_cache_insert (ob->writer_cache,
expr, hash, &ix);
gcc_assert (!exists_p);
if (streamer_handle_as_builtin_p (expr))
{
/* MD and NORMAL builtins do not need to be written out
completely as they are always instantiated by the
compiler on startup. The only builtins that need to
be written out are BUILT_IN_FRONTEND. For all other
builtins, we simply write the class and code. */
streamer_write_builtin (ob, expr);
}
else if (TREE_CODE (expr) == INTEGER_CST
&& !TREE_OVERFLOW (expr))
{
/* Shared INTEGER_CST nodes are special because they need their
original type to be materialized by the reader (to implement
TYPE_CACHED_VALUES). */
streamer_write_integer_cst (ob, expr, ref_p);
}
else
{
/* This is the first time we see EXPR, write its fields
to OB. */
lto_write_tree (ob, expr, ref_p);
}
}
struct sccs
{
unsigned int dfsnum;
unsigned int low;
};
struct scc_entry
{
tree t;
hashval_t hash;
};
static unsigned int next_dfs_num;
static vec<scc_entry> sccstack;
static struct pointer_map_t *sccstate;
static struct obstack sccstate_obstack;
static void
DFS_write_tree (struct output_block *ob, sccs *from_state,
tree expr, bool ref_p, bool this_ref_p);
/* Handle the tree EXPR in the DFS walk with SCC state EXPR_STATE and
DFS recurse for all tree edges originating from it. */
static void
DFS_write_tree_body (struct output_block *ob,
tree expr, sccs *expr_state, bool ref_p)
{
#define DFS_follow_tree_edge(DEST) \
DFS_write_tree (ob, expr_state, DEST, ref_p, ref_p)
enum tree_code code;
code = TREE_CODE (expr);
if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
{
if (TREE_CODE (expr) != IDENTIFIER_NODE)
DFS_follow_tree_edge (TREE_TYPE (expr));
}
if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
{
for (unsigned i = 0; i < VECTOR_CST_NELTS (expr); ++i)
DFS_follow_tree_edge (VECTOR_CST_ELT (expr, i));
}
if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
{
DFS_follow_tree_edge (TREE_REALPART (expr));
DFS_follow_tree_edge (TREE_IMAGPART (expr));
}
if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
{
/* Drop names that were created for anonymous entities. */
if (DECL_NAME (expr)
&& TREE_CODE (DECL_NAME (expr)) == IDENTIFIER_NODE
&& ANON_AGGRNAME_P (DECL_NAME (expr)))
;
else
DFS_follow_tree_edge (DECL_NAME (expr));
DFS_follow_tree_edge (DECL_CONTEXT (expr));
}
if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
{
DFS_follow_tree_edge (DECL_SIZE (expr));
DFS_follow_tree_edge (DECL_SIZE_UNIT (expr));
/* Note, DECL_INITIAL is not handled here. Since DECL_INITIAL needs
special handling in LTO, it must be handled by streamer hooks. */
DFS_follow_tree_edge (DECL_ATTRIBUTES (expr));
/* Do not follow DECL_ABSTRACT_ORIGIN. We cannot handle debug information
for early inlining so drop it on the floor instead of ICEing in
dwarf2out.c. */
if ((TREE_CODE (expr) == VAR_DECL
|| TREE_CODE (expr) == PARM_DECL)
&& DECL_HAS_VALUE_EXPR_P (expr))
DFS_follow_tree_edge (DECL_VALUE_EXPR (expr));
if (TREE_CODE (expr) == VAR_DECL)
DFS_follow_tree_edge (DECL_DEBUG_EXPR (expr));
}
if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
{
if (TREE_CODE (expr) == FUNCTION_DECL)
{
for (tree t = DECL_ARGUMENTS (expr); t; t = TREE_CHAIN (t))
DFS_follow_tree_edge (t);
DFS_follow_tree_edge (DECL_RESULT (expr));
}
else if (TREE_CODE (expr) == TYPE_DECL)
DFS_follow_tree_edge (DECL_ORIGINAL_TYPE (expr));
DFS_follow_tree_edge (DECL_VINDEX (expr));
}
if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
{
/* Make sure we don't inadvertently set the assembler name. */
if (DECL_ASSEMBLER_NAME_SET_P (expr))
DFS_follow_tree_edge (DECL_ASSEMBLER_NAME (expr));
DFS_follow_tree_edge (DECL_SECTION_NAME (expr));
DFS_follow_tree_edge (DECL_COMDAT_GROUP (expr));
}
if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
{
DFS_follow_tree_edge (DECL_FIELD_OFFSET (expr));
DFS_follow_tree_edge (DECL_BIT_FIELD_TYPE (expr));
DFS_follow_tree_edge (DECL_BIT_FIELD_REPRESENTATIVE (expr));
DFS_follow_tree_edge (DECL_FIELD_BIT_OFFSET (expr));
DFS_follow_tree_edge (DECL_FCONTEXT (expr));
}
if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
{
DFS_follow_tree_edge (DECL_FUNCTION_PERSONALITY (expr));
DFS_follow_tree_edge (DECL_FUNCTION_SPECIFIC_TARGET (expr));
DFS_follow_tree_edge (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (expr));
}
if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
{
DFS_follow_tree_edge (TYPE_SIZE (expr));
DFS_follow_tree_edge (TYPE_SIZE_UNIT (expr));
DFS_follow_tree_edge (TYPE_ATTRIBUTES (expr));
DFS_follow_tree_edge (TYPE_NAME (expr));
/* Do not follow TYPE_POINTER_TO or TYPE_REFERENCE_TO. They will be
reconstructed during fixup. */
/* Do not follow TYPE_NEXT_VARIANT, we reconstruct the variant lists
during fixup. */
DFS_follow_tree_edge (TYPE_MAIN_VARIANT (expr));
DFS_follow_tree_edge (TYPE_CONTEXT (expr));
/* TYPE_CANONICAL is re-computed during type merging, so no need
to follow it here. */
DFS_follow_tree_edge (TYPE_STUB_DECL (expr));
}
if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
{
if (TREE_CODE (expr) == ENUMERAL_TYPE)
DFS_follow_tree_edge (TYPE_VALUES (expr));
else if (TREE_CODE (expr) == ARRAY_TYPE)
DFS_follow_tree_edge (TYPE_DOMAIN (expr));
else if (RECORD_OR_UNION_TYPE_P (expr))
for (tree t = TYPE_FIELDS (expr); t; t = TREE_CHAIN (t))
DFS_follow_tree_edge (t);
else if (TREE_CODE (expr) == FUNCTION_TYPE
|| TREE_CODE (expr) == METHOD_TYPE)
DFS_follow_tree_edge (TYPE_ARG_TYPES (expr));
if (!POINTER_TYPE_P (expr))
DFS_follow_tree_edge (TYPE_MINVAL (expr));
DFS_follow_tree_edge (TYPE_MAXVAL (expr));
if (RECORD_OR_UNION_TYPE_P (expr))
DFS_follow_tree_edge (TYPE_BINFO (expr));
}
if (CODE_CONTAINS_STRUCT (code, TS_LIST))
{
DFS_follow_tree_edge (TREE_PURPOSE (expr));
DFS_follow_tree_edge (TREE_VALUE (expr));
DFS_follow_tree_edge (TREE_CHAIN (expr));
}
if (CODE_CONTAINS_STRUCT (code, TS_VEC))
{
for (int i = 0; i < TREE_VEC_LENGTH (expr); i++)
DFS_follow_tree_edge (TREE_VEC_ELT (expr, i));
}
if (CODE_CONTAINS_STRUCT (code, TS_EXP))
{
for (int i = 0; i < TREE_OPERAND_LENGTH (expr); i++)
DFS_follow_tree_edge (TREE_OPERAND (expr, i));
DFS_follow_tree_edge (TREE_BLOCK (expr));
}
if (CODE_CONTAINS_STRUCT (code, TS_BLOCK))
{
for (tree t = BLOCK_VARS (expr); t; t = TREE_CHAIN (t))
/* ??? FIXME. See also streamer_write_chain. */
if (!(VAR_OR_FUNCTION_DECL_P (t)
&& DECL_EXTERNAL (t)))
DFS_follow_tree_edge (t);
DFS_follow_tree_edge (BLOCK_SUPERCONTEXT (expr));
/* Follow BLOCK_ABSTRACT_ORIGIN for the limited cases we can
handle - those that represent inlined function scopes.
For the rest, drop them on the floor instead of ICEing
in dwarf2out.c. */
if (inlined_function_outer_scope_p (expr))
{
tree ultimate_origin = block_ultimate_origin (expr);
DFS_follow_tree_edge (ultimate_origin);
}
/* Do not follow BLOCK_NONLOCALIZED_VARS. We cannot handle debug
information for early inlined BLOCKs so drop it on the floor instead
of ICEing in dwarf2out.c. */
/* BLOCK_FRAGMENT_ORIGIN and BLOCK_FRAGMENT_CHAIN is not live at LTO
streaming time. */
/* Do not output BLOCK_SUBBLOCKS. Instead on streaming-in this
list is re-constructed from BLOCK_SUPERCONTEXT. */
}
if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
{
unsigned i;
tree t;
/* Note that the number of BINFO slots has already been emitted in
EXPR's header (see streamer_write_tree_header) because this length
is needed to build the empty BINFO node on the reader side. */
FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (expr), i, t)
DFS_follow_tree_edge (t);
DFS_follow_tree_edge (BINFO_OFFSET (expr));
DFS_follow_tree_edge (BINFO_VTABLE (expr));
DFS_follow_tree_edge (BINFO_VPTR_FIELD (expr));
/* The number of BINFO_BASE_ACCESSES has already been emitted in
EXPR's bitfield section. */
FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (expr), i, t)
DFS_follow_tree_edge (t);
DFS_follow_tree_edge (BINFO_INHERITANCE_CHAIN (expr));
DFS_follow_tree_edge (BINFO_SUBVTT_INDEX (expr));
DFS_follow_tree_edge (BINFO_VPTR_INDEX (expr));
}
if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
{
unsigned i;
tree index, value;
FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (expr), i, index, value)
{
DFS_follow_tree_edge (index);
DFS_follow_tree_edge (value);
}
}
#undef DFS_follow_tree_edge
}
/* Return a hash value for the tree T. */
static hashval_t
hash_tree (struct streamer_tree_cache_d *cache, tree t)
{
#define visit(SIBLING) \
do { \
unsigned ix; \
if (SIBLING && streamer_tree_cache_lookup (cache, SIBLING, &ix)) \
v = iterative_hash_hashval_t (streamer_tree_cache_get_hash (cache, ix), v); \
} while (0)
/* Hash TS_BASE. */
enum tree_code code = TREE_CODE (t);
hashval_t v = iterative_hash_host_wide_int (code, 0);
if (!TYPE_P (t))
{
v = iterative_hash_host_wide_int (TREE_SIDE_EFFECTS (t)
| (TREE_CONSTANT (t) << 1)
| (TREE_READONLY (t) << 2)
| (TREE_PUBLIC (t) << 3), v);
}
v = iterative_hash_host_wide_int (TREE_ADDRESSABLE (t)
| (TREE_THIS_VOLATILE (t) << 1), v);
if (DECL_P (t))
v = iterative_hash_host_wide_int (DECL_UNSIGNED (t), v);
else if (TYPE_P (t))
v = iterative_hash_host_wide_int (TYPE_UNSIGNED (t), v);
if (TYPE_P (t))
v = iterative_hash_host_wide_int (TYPE_ARTIFICIAL (t), v);
else
v = iterative_hash_host_wide_int (TREE_NO_WARNING (t), v);
v = iterative_hash_host_wide_int (TREE_NOTHROW (t)
| (TREE_STATIC (t) << 1)
| (TREE_PROTECTED (t) << 2)
| (TREE_DEPRECATED (t) << 3), v);
if (code != TREE_BINFO)
v = iterative_hash_host_wide_int (TREE_PRIVATE (t), v);
if (TYPE_P (t))
v = iterative_hash_host_wide_int (TYPE_SATURATING (t)
| (TYPE_ADDR_SPACE (t) << 1), v);
else if (code == SSA_NAME)
v = iterative_hash_host_wide_int (SSA_NAME_IS_DEFAULT_DEF (t), v);
if (CODE_CONTAINS_STRUCT (code, TS_INT_CST))
{
v = iterative_hash_host_wide_int (TREE_INT_CST_LOW (t), v);
v = iterative_hash_host_wide_int (TREE_INT_CST_HIGH (t), v);
}
if (CODE_CONTAINS_STRUCT (code, TS_REAL_CST))
{
REAL_VALUE_TYPE r = TREE_REAL_CST (t);
v = iterative_hash_host_wide_int (r.cl, v);
v = iterative_hash_host_wide_int (r.decimal
| (r.sign << 1)
| (r.signalling << 2)
| (r.canonical << 3), v);
v = iterative_hash_host_wide_int (r.uexp, v);
for (unsigned i = 0; i < SIGSZ; ++i)
v = iterative_hash_host_wide_int (r.sig[i], v);
}
if (CODE_CONTAINS_STRUCT (code, TS_FIXED_CST))
{
FIXED_VALUE_TYPE f = TREE_FIXED_CST (t);
v = iterative_hash_host_wide_int (f.mode, v);
v = iterative_hash_host_wide_int (f.data.low, v);
v = iterative_hash_host_wide_int (f.data.high, v);
}
if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
{
v = iterative_hash_host_wide_int (DECL_MODE (t), v);
v = iterative_hash_host_wide_int (DECL_NONLOCAL (t)
| (DECL_VIRTUAL_P (t) << 1)
| (DECL_IGNORED_P (t) << 2)
| (DECL_ABSTRACT (t) << 3)
| (DECL_ARTIFICIAL (t) << 4)
| (DECL_USER_ALIGN (t) << 5)
| (DECL_PRESERVE_P (t) << 6)
| (DECL_EXTERNAL (t) << 7)
| (DECL_GIMPLE_REG_P (t) << 8), v);
v = iterative_hash_host_wide_int (DECL_ALIGN (t), v);
if (code == LABEL_DECL)
{
v = iterative_hash_host_wide_int (DECL_ERROR_ISSUED (t), v);
v = iterative_hash_host_wide_int (EH_LANDING_PAD_NR (t), v);
v = iterative_hash_host_wide_int (LABEL_DECL_UID (t), v);
}
else if (code == FIELD_DECL)
{
v = iterative_hash_host_wide_int (DECL_PACKED (t)
| (DECL_NONADDRESSABLE_P (t) << 1),
v);
v = iterative_hash_host_wide_int (DECL_OFFSET_ALIGN (t), v);
}
else if (code == VAR_DECL)
{
v = iterative_hash_host_wide_int (DECL_HAS_DEBUG_EXPR_P (t)
| (DECL_NONLOCAL_FRAME (t) << 1),
v);
}
if (code == RESULT_DECL
|| code == PARM_DECL
|| code == VAR_DECL)
{
v = iterative_hash_host_wide_int (DECL_BY_REFERENCE (t), v);
if (code == VAR_DECL
|| code == PARM_DECL)
v = iterative_hash_host_wide_int (DECL_HAS_VALUE_EXPR_P (t), v);
}
}
if (CODE_CONTAINS_STRUCT (code, TS_DECL_WRTL))
v = iterative_hash_host_wide_int (DECL_REGISTER (t), v);
if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
{
v = iterative_hash_host_wide_int (DECL_DEFER_OUTPUT (t)
| (DECL_COMMON (t) << 1)
| (DECL_DLLIMPORT_P (t) << 2)
| (DECL_WEAK (t) << 3)
| (DECL_SEEN_IN_BIND_EXPR_P (t) << 4)
| (DECL_COMDAT (t) << 5)
| (DECL_VISIBILITY_SPECIFIED (t) << 6),
v);
v = iterative_hash_host_wide_int (DECL_VISIBILITY (t), v);
if (code == VAR_DECL)
{
v = iterative_hash_host_wide_int (DECL_HARD_REGISTER (t)
| (DECL_IN_TEXT_SECTION (t) << 1)
| (DECL_IN_CONSTANT_POOL (t) << 2),
v);
v = iterative_hash_host_wide_int (DECL_TLS_MODEL (t), v);
}
if (VAR_OR_FUNCTION_DECL_P (t))
v = iterative_hash_host_wide_int (DECL_INIT_PRIORITY (t), v);
}
if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
{
v = iterative_hash_host_wide_int (DECL_BUILT_IN_CLASS (t), v);
v = iterative_hash_host_wide_int (DECL_STATIC_CONSTRUCTOR (t)
| (DECL_STATIC_DESTRUCTOR (t) << 1)
| (DECL_UNINLINABLE (t) << 2)
| (DECL_POSSIBLY_INLINED (t) << 3)
| (DECL_IS_NOVOPS (t) << 4)
| (DECL_IS_RETURNS_TWICE (t) << 5)
| (DECL_IS_MALLOC (t) << 6)
| (DECL_IS_OPERATOR_NEW (t) << 7)
| (DECL_DECLARED_INLINE_P (t) << 8)
| (DECL_STATIC_CHAIN (t) << 9)
| (DECL_NO_INLINE_WARNING_P (t) << 10)
| (DECL_NO_INSTRUMENT_FUNCTION_ENTRY_EXIT (t) << 11)
| (DECL_NO_LIMIT_STACK (t) << 12)
| (DECL_DISREGARD_INLINE_LIMITS (t) << 13)
| (DECL_PURE_P (t) << 14)
| (DECL_LOOPING_CONST_OR_PURE_P (t) << 15), v);
if (DECL_BUILT_IN_CLASS (t) != NOT_BUILT_IN)
v = iterative_hash_host_wide_int (DECL_FUNCTION_CODE (t), v);
if (DECL_STATIC_DESTRUCTOR (t))
v = iterative_hash_host_wide_int (DECL_FINI_PRIORITY (t), v);
}
if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
{
v = iterative_hash_host_wide_int (TYPE_MODE (t), v);
v = iterative_hash_host_wide_int (TYPE_STRING_FLAG (t)
| (TYPE_NO_FORCE_BLK (t) << 1)
| (TYPE_NEEDS_CONSTRUCTING (t) << 2)
| (TYPE_PACKED (t) << 3)
| (TYPE_RESTRICT (t) << 4)
| (TYPE_USER_ALIGN (t) << 5)
| (TYPE_READONLY (t) << 6), v);
if (RECORD_OR_UNION_TYPE_P (t))
v = iterative_hash_host_wide_int (TYPE_TRANSPARENT_AGGR (t), v);
else if (code == ARRAY_TYPE)
v = iterative_hash_host_wide_int (TYPE_NONALIASED_COMPONENT (t), v);
v = iterative_hash_host_wide_int (TYPE_PRECISION (t), v);
v = iterative_hash_host_wide_int (TYPE_ALIGN (t), v);
v = iterative_hash_host_wide_int ((TYPE_ALIAS_SET (t) == 0
|| (!in_lto_p
&& get_alias_set (t) == 0))
? 0 : -1, v);
}
if (CODE_CONTAINS_STRUCT (code, TS_TRANSLATION_UNIT_DECL))
v = iterative_hash (TRANSLATION_UNIT_LANGUAGE (t),
strlen (TRANSLATION_UNIT_LANGUAGE (t)), v);
if (CODE_CONTAINS_STRUCT (code, TS_TARGET_OPTION))
v = iterative_hash (t, sizeof (struct cl_target_option), v);
if (CODE_CONTAINS_STRUCT (code, TS_OPTIMIZATION))
v = iterative_hash (t, sizeof (struct cl_optimization), v);
if (CODE_CONTAINS_STRUCT (code, TS_IDENTIFIER))
v = iterative_hash_host_wide_int (IDENTIFIER_HASH_VALUE (t), v);
if (CODE_CONTAINS_STRUCT (code, TS_STRING))
v = iterative_hash (TREE_STRING_POINTER (t), TREE_STRING_LENGTH (t), v);
if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
{
if (POINTER_TYPE_P (t))
{
/* For pointers factor in the pointed-to type recursively as
we cannot recurse through only pointers.
??? We can generalize this by keeping track of the
in-SCC edges for each tree (or arbitrarily the first
such edge) and hashing that in during a second stage
(instead of the quadratic mixing of the SCC we do now). */
hashval_t x;
unsigned ix;
if (streamer_tree_cache_lookup (cache, TREE_TYPE (t), &ix))
x = streamer_tree_cache_get_hash (cache, ix);
else
x = hash_tree (cache, TREE_TYPE (t));
v = iterative_hash_hashval_t (x, v);
}
else if (code != IDENTIFIER_NODE)
visit (TREE_TYPE (t));
}
if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
for (unsigned i = 0; i < VECTOR_CST_NELTS (t); ++i)
visit (VECTOR_CST_ELT (t, i));
if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
{
visit (TREE_REALPART (t));
visit (TREE_IMAGPART (t));
}
if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
{
/* Drop names that were created for anonymous entities. */
if (DECL_NAME (t)
&& TREE_CODE (DECL_NAME (t)) == IDENTIFIER_NODE
&& ANON_AGGRNAME_P (DECL_NAME (t)))
;
else
visit (DECL_NAME (t));
if (DECL_FILE_SCOPE_P (t))
;
else
visit (DECL_CONTEXT (t));
}
if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
{
visit (DECL_SIZE (t));
visit (DECL_SIZE_UNIT (t));
visit (DECL_ATTRIBUTES (t));
if ((code == VAR_DECL
|| code == PARM_DECL)
&& DECL_HAS_VALUE_EXPR_P (t))
visit (DECL_VALUE_EXPR (t));
if (code == VAR_DECL
&& DECL_HAS_DEBUG_EXPR_P (t))
visit (DECL_DEBUG_EXPR (t));
/* ??? Hash DECL_INITIAL as streamed. Needs the output-block to
be able to call get_symbol_initial_value. */
}
if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
{
if (code == FUNCTION_DECL)
{
for (tree a = DECL_ARGUMENTS (t); a; a = DECL_CHAIN (a))
visit (a);
visit (DECL_RESULT (t));
}
else if (code == TYPE_DECL)
visit (DECL_ORIGINAL_TYPE (t));
visit (DECL_VINDEX (t));
}
if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
{
if (DECL_ASSEMBLER_NAME_SET_P (t))
visit (DECL_ASSEMBLER_NAME (t));
visit (DECL_SECTION_NAME (t));
visit (DECL_COMDAT_GROUP (t));
}
if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
{
visit (DECL_FIELD_OFFSET (t));
visit (DECL_BIT_FIELD_TYPE (t));
visit (DECL_BIT_FIELD_REPRESENTATIVE (t));
visit (DECL_FIELD_BIT_OFFSET (t));
visit (DECL_FCONTEXT (t));
}
if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
{
visit (DECL_FUNCTION_PERSONALITY (t));
visit (DECL_FUNCTION_SPECIFIC_TARGET (t));
visit (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t));
}
if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
{
visit (TYPE_SIZE (t));
visit (TYPE_SIZE_UNIT (t));
visit (TYPE_ATTRIBUTES (t));
visit (TYPE_NAME (t));
visit (TYPE_MAIN_VARIANT (t));
if (TYPE_FILE_SCOPE_P (t))
;
else
visit (TYPE_CONTEXT (t));
visit (TYPE_STUB_DECL (t));
}
if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
{
if (code == ENUMERAL_TYPE)
visit (TYPE_VALUES (t));
else if (code == ARRAY_TYPE)
visit (TYPE_DOMAIN (t));
else if (RECORD_OR_UNION_TYPE_P (t))
for (tree f = TYPE_FIELDS (t); f; f = TREE_CHAIN (f))
visit (f);
else if (code == FUNCTION_TYPE
|| code == METHOD_TYPE)
visit (TYPE_ARG_TYPES (t));
if (!POINTER_TYPE_P (t))
visit (TYPE_MINVAL (t));
visit (TYPE_MAXVAL (t));
if (RECORD_OR_UNION_TYPE_P (t))
visit (TYPE_BINFO (t));
}
if (CODE_CONTAINS_STRUCT (code, TS_LIST))
{
visit (TREE_PURPOSE (t));
visit (TREE_VALUE (t));
visit (TREE_CHAIN (t));
}
if (CODE_CONTAINS_STRUCT (code, TS_VEC))
for (int i = 0; i < TREE_VEC_LENGTH (t); ++i)
visit (TREE_VEC_ELT (t, i));
if (CODE_CONTAINS_STRUCT (code, TS_EXP))
{
v = iterative_hash_host_wide_int (TREE_OPERAND_LENGTH (t), v);
for (int i = 0; i < TREE_OPERAND_LENGTH (t); ++i)
visit (TREE_OPERAND (t, i));
}
if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
{
unsigned i;
tree b;
FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (t), i, b)
visit (b);
visit (BINFO_OFFSET (t));
visit (BINFO_VTABLE (t));
visit (BINFO_VPTR_FIELD (t));
FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (t), i, b)
visit (b);
visit (BINFO_INHERITANCE_CHAIN (t));
visit (BINFO_SUBVTT_INDEX (t));
visit (BINFO_VPTR_INDEX (t));
}
if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
{
unsigned i;
tree index, value;
v = iterative_hash_host_wide_int (CONSTRUCTOR_NELTS (t), v);
FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (t), i, index, value)
{
visit (index);
visit (value);
}
}
return v;
#undef visit
}
/* Compare two SCC entries by their hash value for qsorting them. */
static int
scc_entry_compare (const void *p1_, const void *p2_)
{
const scc_entry *p1 = (const scc_entry *) p1_;
const scc_entry *p2 = (const scc_entry *) p2_;
if (p1->hash < p2->hash)
return -1;
else if (p1->hash > p2->hash)
return 1;
return 0;
}
/* Return a hash value for the SCC on the SCC stack from FIRST with
size SIZE. */
static hashval_t
hash_scc (struct streamer_tree_cache_d *cache, unsigned first, unsigned size)
{
/* Compute hash values for the SCC members. */
for (unsigned i = 0; i < size; ++i)
sccstack[first+i].hash = hash_tree (cache, sccstack[first+i].t);
if (size == 1)
return sccstack[first].hash;
/* Sort the SCC as (tree, hash) pairs so that when we mix in
all members of the SCC the hash value becomes independent of
the order in which we visited the SCC. Disregard hashes equal to
the hash of the tree we mix into because we cannot guarantee
a stable sort for those across different TUs. */
qsort (&sccstack[first], size, sizeof (scc_entry), scc_entry_compare);
hashval_t *tem = XALLOCAVEC (hashval_t, size);
for (unsigned i = 0; i < size; ++i)
{
hashval_t hash = sccstack[first+i].hash;
hashval_t orig_hash = hash;
unsigned j;
/* Skip same hashes. */
for (j = i + 1;
j < size && sccstack[first+j].hash == orig_hash; ++j)
;
for (; j < size; ++j)
hash = iterative_hash_hashval_t (sccstack[first+j].hash, hash);
for (j = 0; sccstack[first+j].hash != orig_hash; ++j)
hash = iterative_hash_hashval_t (sccstack[first+j].hash, hash);
tem[i] = hash;
}
hashval_t scc_hash = 0;
for (unsigned i = 0; i < size; ++i)
{
sccstack[first+i].hash = tem[i];
scc_hash = iterative_hash_hashval_t (tem[i], scc_hash);
}
return scc_hash;
}
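hash_scc sorts the member hashes first so that two translation units
that visited the same SCC in different DFS orders still arrive at the
same SCC hash.  A minimal sketch of the sort-then-mix idea, not GCC
code (the real function additionally breaks ties among equal member
hashes with the quadratic mixing above); the hash mixer and all names
are invented.

/* Sketch: sorting before mixing makes the result order-independent.  */
#include <stdio.h>
#include <stdlib.h>

typedef unsigned int hashval_t;

static hashval_t
mix (hashval_t acc, hashval_t val)  /* stand-in for iterative_hash_hashval_t */
{
  return (val ^ acc) * 0x9e3779b9u;
}

static int
cmp (const void *a, const void *b)  /* like scc_entry_compare */
{
  hashval_t x = *(const hashval_t *) a, y = *(const hashval_t *) b;
  return x < y ? -1 : x > y;
}

static hashval_t
scc_hash_toy (hashval_t *h, size_t n)
{
  qsort (h, n, sizeof *h, cmp);     /* canonical order first */
  hashval_t v = 0;
  for (size_t i = 0; i < n; i++)
    v = mix (v, h[i]);
  return v;
}

int
main (void)
{
  hashval_t a[3] = { 7, 3, 5 };     /* one TU's DFS visit order */
  hashval_t b[3] = { 5, 7, 3 };     /* another TU's visit order */
  /* Both print the same value.  */
  printf ("%u %u\n", scc_hash_toy (a, 3), scc_hash_toy (b, 3));
  return 0;
}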
/* DFS walk EXPR and stream SCCs of tree bodies if they are not
already in the streamer cache. Main routine called for
each visit of EXPR. */
static void
DFS_write_tree (struct output_block *ob, sccs *from_state,
tree expr, bool ref_p, bool this_ref_p)
{
unsigned ix;
sccs **slot;
/* Handle special cases. */
if (expr == NULL_TREE)
return;
/* Do not DFS walk into indexable trees. */
if (this_ref_p && tree_is_indexable (expr))
return;
/* Check if we already streamed EXPR. */
if (streamer_tree_cache_lookup (ob->writer_cache, expr, &ix))
return;
slot = (sccs **)pointer_map_insert (sccstate, expr);
sccs *cstate = *slot;
if (!cstate)
{
scc_entry e = { expr, 0 };
/* Not yet visited. DFS recurse and push it onto the stack. */
*slot = cstate = XOBNEW (&sccstate_obstack, struct sccs);
sccstack.safe_push (e);
cstate->dfsnum = next_dfs_num++;
cstate->low = cstate->dfsnum;
if (streamer_handle_as_builtin_p (expr))
;
else if (TREE_CODE (expr) == INTEGER_CST
&& !TREE_OVERFLOW (expr))
DFS_write_tree (ob, cstate, TREE_TYPE (expr), ref_p, ref_p);
else
{
DFS_write_tree_body (ob, expr, cstate, ref_p);
/* Walk any LTO-specific edges. */
if (DECL_P (expr)
&& TREE_CODE (expr) != FUNCTION_DECL
&& TREE_CODE (expr) != TRANSLATION_UNIT_DECL)
{
/* Handle DECL_INITIAL for symbols. */
tree initial = get_symbol_initial_value (ob, expr);
DFS_write_tree (ob, cstate, initial, ref_p, ref_p);
}
}
/* See if we found an SCC. */
if (cstate->low == cstate->dfsnum)
{
unsigned first, size;
tree x;
/* Pop the SCC and compute its size. */
first = sccstack.length ();
do
{
x = sccstack[--first].t;
}
while (x != expr);
size = sccstack.length () - first;
/* No need to compute hashes for LTRANS units, we don't perform
any merging there. */
hashval_t scc_hash = 0;
unsigned scc_entry_len = 0;
if (!flag_wpa)
{
scc_hash = hash_scc (ob->writer_cache, first, size);
/* Put the entries with the least number of collisions first. */
unsigned entry_start = 0;
scc_entry_len = size + 1;
for (unsigned i = 0; i < size;)
{
unsigned from = i;
for (i = i + 1; i < size
&& (sccstack[first + i].hash
== sccstack[first + from].hash); ++i)
;
if (i - from < scc_entry_len)
{
scc_entry_len = i - from;
entry_start = from;
}
}
for (unsigned i = 0; i < scc_entry_len; ++i)
{
scc_entry tem = sccstack[first + i];
sccstack[first + i] = sccstack[first + entry_start + i];
sccstack[first + entry_start + i] = tem;
}
}
/* Write LTO_tree_scc. */
streamer_write_record_start (ob, LTO_tree_scc);
streamer_write_uhwi (ob, size);
streamer_write_uhwi (ob, scc_hash);
/* Write size-1 SCCs without wrapping them inside SCC bundles.
All INTEGER_CSTs need to be handled this way as we need
their type to materialize them. Also builtins are handled
this way.
??? We still wrap these in LTO_tree_scc so at the
input side we can properly identify the tree we want
to ultimately return. */
size_t old_len = ob->writer_cache->nodes.length ();
if (size == 1)
lto_output_tree_1 (ob, expr, scc_hash, ref_p, this_ref_p);
else
{
/* Write the size of the SCC entry candidates. */
streamer_write_uhwi (ob, scc_entry_len);
/* Write all headers and populate the streamer cache. */
for (unsigned i = 0; i < size; ++i)
{
hashval_t hash = sccstack[first+i].hash;
tree t = sccstack[first+i].t;
bool exists_p = streamer_tree_cache_insert (ob->writer_cache,
t, hash, &ix);
gcc_assert (!exists_p);
if (!lto_is_streamable (t))
internal_error ("tree code %qs is not supported "
"in LTO streams",
tree_code_name[TREE_CODE (t)]);
gcc_checking_assert (!streamer_handle_as_builtin_p (t));
/* Write the header, containing everything needed to
materialize EXPR on the reading side. */
streamer_write_tree_header (ob, t);
}
/* Write the bitpacks and tree references. */
for (unsigned i = 0; i < size; ++i)
{
lto_write_tree_1 (ob, sccstack[first+i].t, ref_p);
/* Mark the end of the tree. */
streamer_write_zero (ob);
}
}
gcc_assert (old_len + size == ob->writer_cache->nodes.length ());
/* Finally truncate the vector. */
sccstack.truncate (first);
if (from_state)
from_state->low = MIN (from_state->low, cstate->low);
return;
}
if (from_state)
from_state->low = MIN (from_state->low, cstate->low);
}
gcc_checking_assert (from_state);
if (cstate->dfsnum < from_state->dfsnum)
from_state->low = MIN (cstate->dfsnum, from_state->low);
}
/* Emit the physical representation of tree node EXPR to output block
OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
@@ -374,16 +1306,7 @@ lto_output_tree (struct output_block *ob, tree expr,
return;
}
/* Shared INTEGER_CST nodes are special because they need their original type
to be materialized by the reader (to implement TYPE_CACHED_VALUES). */
if (TREE_CODE (expr) == INTEGER_CST
&& !TREE_OVERFLOW (expr))
{
streamer_write_integer_cst (ob, expr, ref_p);
return;
}
existed_p = streamer_tree_cache_insert (ob->writer_cache, expr, &ix);
existed_p = streamer_tree_cache_lookup (ob->writer_cache, expr, &ix);
if (existed_p)
{
/* If a node has already been streamed out, make sure that
@@ -393,21 +1316,42 @@ lto_output_tree (struct output_block *ob, tree expr,
streamer_write_uhwi (ob, ix);
streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
lto_tree_code_to_tag (TREE_CODE (expr)));
}
else if (streamer_handle_as_builtin_p (expr))
{
/* MD and NORMAL builtins do not need to be written out
completely as they are always instantiated by the
compiler on startup. The only builtins that need to
be written out are BUILT_IN_FRONTEND. For all other
builtins, we simply write the class and code. */
streamer_write_builtin (ob, expr);
lto_stats.num_pickle_refs_output++;
}
else
{
/* This is the first time we see EXPR, write its fields
to OB. */
lto_write_tree (ob, expr, ref_p);
/* This is the first time we see EXPR, write all reachable
trees to OB. */
static bool in_dfs_walk;
/* Protect against recursion, which would mean a disconnect between
the tree edges we walk during the DFS walk and the edges
we stream out. */
gcc_assert (!in_dfs_walk);
/* Start the DFS walk. */
/* Save ob state ... */
/* let's see ... */
in_dfs_walk = true;
sccstate = pointer_map_create ();
gcc_obstack_init (&sccstate_obstack);
next_dfs_num = 1;
DFS_write_tree (ob, NULL, expr, ref_p, this_ref_p);
sccstack.release ();
pointer_map_destroy (sccstate);
obstack_free (&sccstate_obstack, NULL);
in_dfs_walk = false;
/* Finally append a reference to the tree we were writing.
??? If expr ended up as a singleton we could have
inlined it here and avoided outputting a reference. */
existed_p = streamer_tree_cache_lookup (ob->writer_cache, expr, &ix);
gcc_assert (existed_p);
streamer_write_record_start (ob, LTO_tree_pickle_reference);
streamer_write_uhwi (ob, ix);
streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
lto_tree_code_to_tag (TREE_CODE (expr)));
lto_stats.num_pickle_refs_output++;
}
}
lto-streamer.c
@@ -227,6 +227,13 @@ print_lto_report (const char *s)
HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
lto_stats.num_output_symtab_nodes);
fprintf (stderr, "[%s] # of output tree pickle references: "
HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
lto_stats.num_pickle_refs_output);
fprintf (stderr, "[%s] # of output tree bodies: "
HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
lto_stats.num_tree_bodies_output);
fprintf (stderr, "[%s] # callgraph partitions: "
HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
lto_stats.num_cgraph_partitions);
lto-streamer.h
@@ -199,6 +199,9 @@ enum LTO_tags
/* EH try/catch node. */
LTO_eh_catch,
/* Special for global streamer. A blob of unnamed tree nodes. */
LTO_tree_scc,
/* References to indexable tree nodes. These objects are stored in
tables that are written separately from the function bodies that
reference them. This way they can be instantiated even when the
@@ -421,6 +424,8 @@ struct lto_stats_d
unsigned HOST_WIDE_INT num_compressed_il_bytes;
unsigned HOST_WIDE_INT num_input_il_bytes;
unsigned HOST_WIDE_INT num_uncompressed_il_bytes;
unsigned HOST_WIDE_INT num_tree_bodies_output;
unsigned HOST_WIDE_INT num_pickle_refs_output;
};
/* Entry of LTO symtab encoder. */
@@ -854,6 +859,10 @@ tree lto_input_tree_ref (struct lto_input_block *, struct data_in *,
struct function *, enum LTO_tags);
void lto_tag_check_set (enum LTO_tags, int, ...);
void lto_init_eh (void);
hashval_t lto_input_scc (struct lto_input_block *, struct data_in *,
unsigned *, unsigned *);
tree lto_input_tree_1 (struct lto_input_block *, struct data_in *,
enum LTO_tags, hashval_t hash);
tree lto_input_tree (struct lto_input_block *, struct data_in *);
lto-symtab.c
@@ -97,7 +97,7 @@ lto_varpool_replace_node (struct varpool_node *vnode,
ipa_clone_referring ((symtab_node)prevailing_node, &vnode->symbol.ref_list);
/* Be sure we can garbage collect the initializer. */
if (DECL_INITIAL (vnode->symbol.decl))
if (DECL_INITIAL (vnode->symbol.decl)
&& vnode->symbol.decl != prevailing_node->symbol.decl)
DECL_INITIAL (vnode->symbol.decl) = error_mark_node;
/* Finally remove the replaced node. */
varpool_remove_node (vnode);
lto/Make-lang.in
@@ -85,7 +85,7 @@ lto/lto.o: lto/lto.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(OPTS_H) \
langhooks.h $(VEC_H) $(BITMAP_H) pointer-set.h $(IPA_PROP_H) \
$(COMMON_H) debug.h $(GIMPLE_H) $(LTO_H) $(LTO_TREE_H) \
$(LTO_TAGS_H) $(LTO_STREAMER_H) $(SPLAY_TREE_H) gt-lto-lto.h \
$(TREE_STREAMER_H) lto/lto-partition.h
$(TREE_STREAMER_H) $(DATA_STREAMER_H) lto/lto-partition.h
lto/lto-partition.o: lto/lto-partition.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
toplev.h $(TREE_H) $(TM_H) \
$(CGRAPH_H) $(TIMEVAR_H) \
lto/lto.c
@@ -45,6 +45,7 @@ along with GCC; see the file COPYING3. If not see
#include "tree-streamer.h"
#include "splay-tree.h"
#include "lto-partition.h"
#include "data-streamer.h"
static GTY(()) tree first_personality_decl;
@@ -254,7 +255,7 @@ lto_read_in_decl_state (struct data_in *data_in, const uint32_t *data,
uint32_t i, j;
ix = *data++;
decl = streamer_tree_cache_get (data_in->reader_cache, ix);
decl = streamer_tree_cache_get_tree (data_in->reader_cache, ix);
if (TREE_CODE (decl) != FUNCTION_DECL)
{
gcc_assert (decl == void_type_node);
@@ -268,7 +269,7 @@ lto_read_in_decl_state (struct data_in *data_in, const uint32_t *data,
tree *decls = ggc_alloc_vec_tree (size);
for (j = 0; j < size; j++)
decls[j] = streamer_tree_cache_get (data_in->reader_cache, data[j]);
decls[j] = streamer_tree_cache_get_tree (data_in->reader_cache, data[j]);
state->streams[i].size = size;
state->streams[i].trees = decls;
@@ -280,6 +281,9 @@ lto_read_in_decl_state (struct data_in *data_in, const uint32_t *data,
/* ??? Old hashing and merging code follows, we keep it for statistics
purposes for now. */
/* Global type table. FIXME, it should be possible to re-use some
of the type hashing routines in tree.c (type_hash_canon, type_hash_lookup,
etc), but those assume that types were built with the various
@@ -366,33 +370,6 @@ struct sccs
static unsigned int next_dfs_num;
static unsigned int gtc_next_dfs_num;
/* GIMPLE type merging cache. A direct-mapped cache based on TYPE_UID. */
typedef struct GTY(()) gimple_type_leader_entry_s {
tree type;
tree leader;
} gimple_type_leader_entry;
#define GIMPLE_TYPE_LEADER_SIZE 16381
static GTY((length("GIMPLE_TYPE_LEADER_SIZE")))
gimple_type_leader_entry *gimple_type_leader;
/* Lookup an existing leader for T and return it or NULL_TREE, if
there is none in the cache. */
static inline tree
gimple_lookup_type_leader (tree t)
{
gimple_type_leader_entry *leader;
leader = &gimple_type_leader[TYPE_UID (t) % GIMPLE_TYPE_LEADER_SIZE];
if (leader->type != t)
return NULL_TREE;
return leader->leader;
}
/* Return true if T1 and T2 have the same name. If FOR_COMPLETION_P is
true then if any type has no name return false, otherwise return
true if both types have no names. */
@@ -450,7 +427,6 @@ gtc_visit (tree t1, tree t2,
struct sccs *cstate = NULL;
type_pair_t p;
void **slot;
tree leader1, leader2;
/* Check first for the obvious case of pointer identity. */
if (t1 == t2)
@@ -507,15 +483,6 @@
/* For other types fall through to more complex checks. */
}
/* If the types have been previously registered and found equal
they still are. */
leader1 = gimple_lookup_type_leader (t1);
leader2 = gimple_lookup_type_leader (t2);
if (leader1 == t2
|| t1 == leader2
|| (leader1 && leader1 == leader2))
return true;
/* If the hash values of t1 and t2 are different the types can't
possibly be the same. This helps keeping the type-pair hashtable
small, only tracking comparisons for hash collisions. */
@@ -879,7 +846,6 @@ gimple_types_compatible_p (tree t1, tree t2)
struct obstack sccstate_obstack;
type_pair_t p = NULL;
bool res;
tree leader1, leader2;
/* Before starting to set up the SCC machinery handle simple cases. */
@@ -938,15 +904,6 @@ gimple_types_compatible_p (tree t1, tree t2)
/* For other types fall through to more complex checks. */
}
/* If the types have been previously registered and found equal
they still are. */
leader1 = gimple_lookup_type_leader (t1);
leader2 = gimple_lookup_type_leader (t2);
if (leader1 == t2
|| t1 == leader2
|| (leader1 && leader1 == leader2))
return true;
/* If the hash values of t1 and t2 are different the types can't
possibly be the same. This helps keeping the type-pair hashtable
small, only tracking comparisons for hash collisions. */
@@ -1335,68 +1292,25 @@ gimple_type_eq (const void *p1, const void *p2)
CONST_CAST_TREE (t2));
}
/* Worker for gimple_register_type.
Register type T in the global type table gimple_types.
When REGISTERING_MV is false first recurse for the main variant of T. */
/* Register type T in the global type table gimple_types. */
static tree
gimple_register_type_1 (tree t, bool registering_mv)
gimple_register_type (tree t)
{
void **slot;
gimple_type_leader_entry *leader;
/* If we registered this type before return the cached result. */
leader = &gimple_type_leader[TYPE_UID (t) % GIMPLE_TYPE_LEADER_SIZE];
if (leader->type == t)
return leader->leader;
/* Always register the main variant first. This is important so we
pick up the non-typedef variants as canonical, otherwise we'll end
up taking typedef ids for structure tags during comparison.
It also makes sure that main variants will be merged to main variants.
As we are operating on a possibly partially fixed up type graph
do not bother to recurse more than once, otherwise we may end up
walking in circles.
If we are registering a main variant it will either remain its
own main variant or it will be merged to something else in which
case we do not care for the main variant leader. */
if (!registering_mv
&& TYPE_MAIN_VARIANT (t) != t)
gimple_register_type_1 (TYPE_MAIN_VARIANT (t), true);
/* See if we already have an equivalent type registered. */
slot = htab_find_slot (gimple_types, t, INSERT);
if (*slot
&& *(tree *)slot != t)
{
tree new_type = (tree) *((tree *) slot);
leader->type = t;
leader->leader = new_type;
return new_type;
}
return (tree) *((tree *) slot);
/* If not, insert it to the cache and the hash. */
leader->type = t;
leader->leader = t;
*slot = (void *) t;
return t;
}
/* Register type T in the global type table gimple_types.
If another type T', compatible with T, already existed in
gimple_types then return T', otherwise return T. This is used by
LTO to merge identical types read from different TUs. */
static tree
gimple_register_type (tree t)
{
gcc_assert (TYPE_P (t));
return gimple_register_type_1 (t, false);
}
#define GIMPLE_REGISTER_TYPE(tt) \
(TREE_VISITED (tt) ? gimple_register_type (tt) : tt)
/* End of old merging code. */
@@ -1413,220 +1327,173 @@ remember_with_vars (tree t)
*(tree *) htab_find_slot (tree_with_vars, t, INSERT) = t;
}
#define LTO_FIXUP_TREE(tt) \
#define MAYBE_REMEMBER_WITH_VARS(tt) \
do \
{ \
if (tt) \
{ \
if (TYPE_P (tt)) \
(tt) = GIMPLE_REGISTER_TYPE (tt); \
if (VAR_OR_FUNCTION_DECL_P (tt) && TREE_PUBLIC (tt)) \
remember_with_vars (t); \
if (TREE_CODE (tt) == INTEGER_CST) \
(tt) = fixup_integer_cst (tt); \
} \
} while (0)
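With the diff's +/- markers lost in this view, the hunk above interleaves both macro bodies: the GIMPLE_REGISTER_TYPE and fixup_integer_cst arms belong to the removed LTO_FIXUP_TREE, while the renamed MAYBE_REMEMBER_WITH_VARS keeps only the public-decl recording. Reconstructed for clarity (a sketch of the post-patch macro, not a verbatim quote):

#define MAYBE_REMEMBER_WITH_VARS(tt) \
  do \
    { \
      if (tt) \
        { \
          if (VAR_OR_FUNCTION_DECL_P (tt) && TREE_PUBLIC (tt)) \
            remember_with_vars (t); \
        } \
    } while (0)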
static void lto_fixup_types (tree);
/* Return integer_cst T with updated type. */
static tree
fixup_integer_cst (tree t)
{
tree type = GIMPLE_REGISTER_TYPE (TREE_TYPE (t));
if (type == TREE_TYPE (t))
return t;
/* If overflow was set, streamer_read_integer_cst
produced a local copy of T. */
if (TREE_OVERFLOW (t))
{
TREE_TYPE (t) = type;
return t;
}
else
/* Otherwise produce new shared node for the new type. */
return build_int_cst_wide (type, TREE_INT_CST_LOW (t),
TREE_INT_CST_HIGH (t));
}
/* Fix up fields of a tree_typed T. */
static void
lto_ft_typed (tree t)
maybe_remember_with_vars_typed (tree t)
{
LTO_FIXUP_TREE (TREE_TYPE (t));
MAYBE_REMEMBER_WITH_VARS (TREE_TYPE (t));
}
/* Fix up fields of a tree_common T. */
static void
lto_ft_common (tree t)
maybe_remember_with_vars_common (tree t)
{
lto_ft_typed (t);
LTO_FIXUP_TREE (TREE_CHAIN (t));
maybe_remember_with_vars_typed (t);
MAYBE_REMEMBER_WITH_VARS (TREE_CHAIN (t));
}
/* Fix up fields of a decl_minimal T. */
static void
lto_ft_decl_minimal (tree t)
maybe_remember_with_vars_decl_minimal (tree t)
{
lto_ft_common (t);
LTO_FIXUP_TREE (DECL_NAME (t));
LTO_FIXUP_TREE (DECL_CONTEXT (t));
maybe_remember_with_vars_common (t);
MAYBE_REMEMBER_WITH_VARS (DECL_NAME (t));
MAYBE_REMEMBER_WITH_VARS (DECL_CONTEXT (t));
}
/* Fix up fields of a decl_common T. */
static void
lto_ft_decl_common (tree t)
maybe_remember_with_vars_decl_common (tree t)
{
lto_ft_decl_minimal (t);
LTO_FIXUP_TREE (DECL_SIZE (t));
LTO_FIXUP_TREE (DECL_SIZE_UNIT (t));
LTO_FIXUP_TREE (DECL_INITIAL (t));
LTO_FIXUP_TREE (DECL_ATTRIBUTES (t));
LTO_FIXUP_TREE (DECL_ABSTRACT_ORIGIN (t));
maybe_remember_with_vars_decl_minimal (t);
MAYBE_REMEMBER_WITH_VARS (DECL_SIZE (t));
MAYBE_REMEMBER_WITH_VARS (DECL_SIZE_UNIT (t));
MAYBE_REMEMBER_WITH_VARS (DECL_INITIAL (t));
MAYBE_REMEMBER_WITH_VARS (DECL_ATTRIBUTES (t));
MAYBE_REMEMBER_WITH_VARS (DECL_ABSTRACT_ORIGIN (t));
}
/* Fix up fields of a decl_with_vis T. */
static void
lto_ft_decl_with_vis (tree t)
maybe_remember_with_vars_decl_with_vis (tree t)
{
lto_ft_decl_common (t);
maybe_remember_with_vars_decl_common (t);
/* Accessor macro has side-effects, use field-name here. */
LTO_FIXUP_TREE (t->decl_with_vis.assembler_name);
LTO_FIXUP_TREE (DECL_SECTION_NAME (t));
MAYBE_REMEMBER_WITH_VARS (t->decl_with_vis.assembler_name);
MAYBE_REMEMBER_WITH_VARS (DECL_SECTION_NAME (t));
}
/* Fix up fields of a decl_non_common T. */
static void
lto_ft_decl_non_common (tree t)
maybe_remember_with_vars_decl_non_common (tree t)
{
lto_ft_decl_with_vis (t);
LTO_FIXUP_TREE (DECL_ARGUMENT_FLD (t));
LTO_FIXUP_TREE (DECL_RESULT_FLD (t));
LTO_FIXUP_TREE (DECL_VINDEX (t));
/* The C frontends may create exact duplicates for DECL_ORIGINAL_TYPE
like for 'typedef enum foo foo'. We have no way of avoiding merging
them, and dwarf2out.c cannot deal with this,
so fix this up by clearing DECL_ORIGINAL_TYPE in this case. */
if (TREE_CODE (t) == TYPE_DECL
&& DECL_ORIGINAL_TYPE (t) == TREE_TYPE (t))
DECL_ORIGINAL_TYPE (t) = NULL_TREE;
maybe_remember_with_vars_decl_with_vis (t);
MAYBE_REMEMBER_WITH_VARS (DECL_ARGUMENT_FLD (t));
MAYBE_REMEMBER_WITH_VARS (DECL_RESULT_FLD (t));
MAYBE_REMEMBER_WITH_VARS (DECL_VINDEX (t));
}
/* Fix up fields of a decl_non_common T. */
static void
lto_ft_function (tree t)
maybe_remember_with_vars_function (tree t)
{
lto_ft_decl_non_common (t);
LTO_FIXUP_TREE (DECL_FUNCTION_PERSONALITY (t));
maybe_remember_with_vars_decl_non_common (t);
MAYBE_REMEMBER_WITH_VARS (DECL_FUNCTION_PERSONALITY (t));
}
/* Fix up fields of a field_decl T. */
static void
lto_ft_field_decl (tree t)
maybe_remember_with_vars_field_decl (tree t)
{
lto_ft_decl_common (t);
LTO_FIXUP_TREE (DECL_FIELD_OFFSET (t));
LTO_FIXUP_TREE (DECL_BIT_FIELD_TYPE (t));
LTO_FIXUP_TREE (DECL_QUALIFIER (t));
LTO_FIXUP_TREE (DECL_FIELD_BIT_OFFSET (t));
LTO_FIXUP_TREE (DECL_FCONTEXT (t));
maybe_remember_with_vars_decl_common (t);
MAYBE_REMEMBER_WITH_VARS (DECL_FIELD_OFFSET (t));
MAYBE_REMEMBER_WITH_VARS (DECL_BIT_FIELD_TYPE (t));
MAYBE_REMEMBER_WITH_VARS (DECL_QUALIFIER (t));
MAYBE_REMEMBER_WITH_VARS (DECL_FIELD_BIT_OFFSET (t));
MAYBE_REMEMBER_WITH_VARS (DECL_FCONTEXT (t));
}
/* Fix up fields of a type T. */
static void
lto_ft_type (tree t)
maybe_remember_with_vars_type (tree t)
{
lto_ft_common (t);
LTO_FIXUP_TREE (TYPE_CACHED_VALUES (t));
LTO_FIXUP_TREE (TYPE_SIZE (t));
LTO_FIXUP_TREE (TYPE_SIZE_UNIT (t));
LTO_FIXUP_TREE (TYPE_ATTRIBUTES (t));
LTO_FIXUP_TREE (TYPE_NAME (t));
maybe_remember_with_vars_common (t);
MAYBE_REMEMBER_WITH_VARS (TYPE_CACHED_VALUES (t));
MAYBE_REMEMBER_WITH_VARS (TYPE_SIZE (t));
MAYBE_REMEMBER_WITH_VARS (TYPE_SIZE_UNIT (t));
MAYBE_REMEMBER_WITH_VARS (TYPE_ATTRIBUTES (t));
MAYBE_REMEMBER_WITH_VARS (TYPE_NAME (t));
/* Accessors are for derived node types only. */
if (!POINTER_TYPE_P (t))
LTO_FIXUP_TREE (TYPE_MINVAL (t));
LTO_FIXUP_TREE (TYPE_MAXVAL (t));
MAYBE_REMEMBER_WITH_VARS (TYPE_MINVAL (t));
MAYBE_REMEMBER_WITH_VARS (TYPE_MAXVAL (t));
/* Accessor is for derived node types only. */
LTO_FIXUP_TREE (t->type_non_common.binfo);
MAYBE_REMEMBER_WITH_VARS (t->type_non_common.binfo);
LTO_FIXUP_TREE (TYPE_CONTEXT (t));
MAYBE_REMEMBER_WITH_VARS (TYPE_CONTEXT (t));
}
/* Fix up fields of a BINFO T. */
static void
lto_ft_binfo (tree t)
maybe_remember_with_vars_binfo (tree t)
{
unsigned HOST_WIDE_INT i, n;
tree base, saved_base;
lto_ft_common (t);
LTO_FIXUP_TREE (BINFO_VTABLE (t));
LTO_FIXUP_TREE (BINFO_OFFSET (t));
LTO_FIXUP_TREE (BINFO_VIRTUALS (t));
LTO_FIXUP_TREE (BINFO_VPTR_FIELD (t));
maybe_remember_with_vars_common (t);
MAYBE_REMEMBER_WITH_VARS (BINFO_VTABLE (t));
MAYBE_REMEMBER_WITH_VARS (BINFO_OFFSET (t));
MAYBE_REMEMBER_WITH_VARS (BINFO_VIRTUALS (t));
MAYBE_REMEMBER_WITH_VARS (BINFO_VPTR_FIELD (t));
n = vec_safe_length (BINFO_BASE_ACCESSES (t));
for (i = 0; i < n; i++)
{
saved_base = base = BINFO_BASE_ACCESS (t, i);
LTO_FIXUP_TREE (base);
if (base != saved_base)
(*BINFO_BASE_ACCESSES (t))[i] = base;
}
LTO_FIXUP_TREE (BINFO_INHERITANCE_CHAIN (t));
LTO_FIXUP_TREE (BINFO_SUBVTT_INDEX (t));
LTO_FIXUP_TREE (BINFO_VPTR_INDEX (t));
MAYBE_REMEMBER_WITH_VARS (BINFO_BASE_ACCESS (t, i));
MAYBE_REMEMBER_WITH_VARS (BINFO_INHERITANCE_CHAIN (t));
MAYBE_REMEMBER_WITH_VARS (BINFO_SUBVTT_INDEX (t));
MAYBE_REMEMBER_WITH_VARS (BINFO_VPTR_INDEX (t));
n = BINFO_N_BASE_BINFOS (t);
for (i = 0; i < n; i++)
{
saved_base = base = BINFO_BASE_BINFO (t, i);
LTO_FIXUP_TREE (base);
if (base != saved_base)
(*BINFO_BASE_BINFOS (t))[i] = base;
}
MAYBE_REMEMBER_WITH_VARS (BINFO_BASE_BINFO (t, i));
}
/* Fix up fields of a CONSTRUCTOR T. */
static void
lto_ft_constructor (tree t)
maybe_remember_with_vars_constructor (tree t)
{
unsigned HOST_WIDE_INT idx;
constructor_elt *ce;
lto_ft_typed (t);
maybe_remember_with_vars_typed (t);
for (idx = 0; vec_safe_iterate (CONSTRUCTOR_ELTS (t), idx, &ce); idx++)
{
LTO_FIXUP_TREE (ce->index);
LTO_FIXUP_TREE (ce->value);
MAYBE_REMEMBER_WITH_VARS (ce->index);
MAYBE_REMEMBER_WITH_VARS (ce->value);
}
}
/* Fix up fields of an expression tree T. */
static void
lto_ft_expr (tree t)
maybe_remember_with_vars_expr (tree t)
{
int i;
lto_ft_typed (t);
maybe_remember_with_vars_typed (t);
for (i = TREE_OPERAND_LENGTH (t) - 1; i >= 0; --i)
LTO_FIXUP_TREE (TREE_OPERAND (t, i));
MAYBE_REMEMBER_WITH_VARS (TREE_OPERAND (t, i));
}
/* Given a tree T fixup fields of T by replacing types with their merged
@@ -1635,7 +1502,7 @@ lto_ft_expr (tree t)
for instance integer or string constants. */
static void
lto_fixup_types (tree t)
maybe_remember_with_vars (tree t)
{
switch (TREE_CODE (t))
{
@@ -1643,13 +1510,13 @@ lto_fixup_types (tree t)
break;
case TREE_LIST:
LTO_FIXUP_TREE (TREE_VALUE (t));
LTO_FIXUP_TREE (TREE_PURPOSE (t));
LTO_FIXUP_TREE (TREE_CHAIN (t));
MAYBE_REMEMBER_WITH_VARS (TREE_VALUE (t));
MAYBE_REMEMBER_WITH_VARS (TREE_PURPOSE (t));
MAYBE_REMEMBER_WITH_VARS (TREE_CHAIN (t));
break;
case FIELD_DECL:
lto_ft_field_decl (t);
maybe_remember_with_vars_field_decl (t);
break;
case LABEL_DECL:
@@ -1657,27 +1524,27 @@ lto_fixup_types (tree t)
case PARM_DECL:
case RESULT_DECL:
case IMPORTED_DECL:
lto_ft_decl_common (t);
maybe_remember_with_vars_decl_common (t);
break;
case VAR_DECL:
lto_ft_decl_with_vis (t);
maybe_remember_with_vars_decl_with_vis (t);
break;
case TYPE_DECL:
lto_ft_decl_non_common (t);
maybe_remember_with_vars_decl_non_common (t);
break;
case FUNCTION_DECL:
lto_ft_function (t);
maybe_remember_with_vars_function (t);
break;
case TREE_BINFO:
lto_ft_binfo (t);
maybe_remember_with_vars_binfo (t);
break;
case PLACEHOLDER_EXPR:
lto_ft_common (t);
maybe_remember_with_vars_common (t);
break;
case BLOCK:
@@ -1686,21 +1553,19 @@ lto_fixup_types (tree t)
case TARGET_OPTION_NODE:
break;
case CONSTRUCTOR:
maybe_remember_with_vars_constructor (t);
break;
default:
if (TYPE_P (t))
lto_ft_type (t);
else if (TREE_CODE (t) == CONSTRUCTOR)
lto_ft_constructor (t);
maybe_remember_with_vars_type (t);
else if (CONSTANT_CLASS_P (t))
LTO_FIXUP_TREE (TREE_TYPE (t));
MAYBE_REMEMBER_WITH_VARS (TREE_TYPE (t));
else if (EXPR_P (t))
{
lto_ft_expr (t);
}
maybe_remember_with_vars_expr (t);
else
{
remember_with_vars (t);
}
remember_with_vars (t);
}
}
@@ -1786,221 +1651,721 @@ lto_register_function_decl_in_symtab (struct data_in *data_in, tree decl)
}
}
static unsigned long num_merged_types = 0;
/* Given a streamer cache structure DATA_IN (holding a sequence of trees
for one compilation unit) go over all trees starting at index FROM until the
end of the sequence and replace fields of those trees, and the trees
themselves with their canonical variants as per gimple_register_type. */
/* For the type T re-materialize it in the type variant list and
the pointer/reference-to chains. */
static void
uniquify_nodes (struct data_in *data_in, unsigned from)
lto_fixup_prevailing_type (tree t)
{
struct streamer_tree_cache_d *cache = data_in->reader_cache;
unsigned len = cache->nodes.length ();
unsigned i;
/* The following re-creates proper variant lists while fixing up
the variant leaders. We do not stream TYPE_NEXT_VARIANT so the
variant list state before fixup is broken. */
/* Go backwards because children streamed for the first time come
as part of their parents, and hence are created after them. */
/* If we are not our own variant leader link us into our new leaders
variant list. */
if (TYPE_MAIN_VARIANT (t) != t)
{
tree mv = TYPE_MAIN_VARIANT (t);
TYPE_NEXT_VARIANT (t) = TYPE_NEXT_VARIANT (mv);
TYPE_NEXT_VARIANT (mv) = t;
}
/* First register all the types in the cache. This makes sure to
have the original structure in the type cycles when registering
them and computing hashes. */
for (i = len; i-- > from;)
/* The following reconstructs the pointer chains
of the new pointed-to type if we are a main variant. We do
not stream those so they are broken before fixup. */
if (TREE_CODE (t) == POINTER_TYPE
&& TYPE_MAIN_VARIANT (t) == t)
{
tree t = cache->nodes[i];
if (t && TYPE_P (t))
TYPE_NEXT_PTR_TO (t) = TYPE_POINTER_TO (TREE_TYPE (t));
TYPE_POINTER_TO (TREE_TYPE (t)) = t;
}
else if (TREE_CODE (t) == REFERENCE_TYPE
&& TYPE_MAIN_VARIANT (t) == t)
{
TYPE_NEXT_REF_TO (t) = TYPE_REFERENCE_TO (TREE_TYPE (t));
TYPE_REFERENCE_TO (TREE_TYPE (t)) = t;
}
}
/* We keep prevailing tree SCCs in a hashtable with manual collision
handling (in case all hashes compare the same) and keep the colliding
entries in the tree_scc->next chain. */
struct tree_scc
{
tree_scc *next;
/* Hash of the whole SCC. */
hashval_t hash;
/* Number of trees in the SCC. */
unsigned len;
/* Number of possible entries into the SCC (tree nodes [0..entry_len-1]
which share the same individual tree hash). */
unsigned entry_len;
/* The members of the SCC.
We only need to remember the first entry node candidate for prevailing
SCCs (but of course have access to all entries for SCCs we are
processing).
??? For prevailing SCCs we really only need hash and the first
entry candidate, but that's too awkward to implement. */
tree entries[1];
};
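The entries[1] member is the pre-C99 trailing-array idiom: a tree_scc holding LEN members is over-allocated by LEN - 1 extra tree slots. A minimal sketch of the sizing arithmetic, matching the alloca call in unify_scc below:

/* entries[0] is part of sizeof (tree_scc), so add LEN - 1 slots.  */
tree_scc *scc
  = (tree_scc *) alloca (sizeof (tree_scc) + (len - 1) * sizeof (tree));
scc->len = len;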
struct tree_scc_hasher : typed_noop_remove <tree_scc>
{
typedef tree_scc value_type;
typedef tree_scc compare_type;
static inline hashval_t hash (const value_type *);
static inline bool equal (const value_type *, const compare_type *);
};
hashval_t
tree_scc_hasher::hash (const value_type *scc)
{
return scc->hash;
}
bool
tree_scc_hasher::equal (const value_type *scc1, const compare_type *scc2)
{
if (scc1->hash != scc2->hash
|| scc1->len != scc2->len
|| scc1->entry_len != scc2->entry_len)
return false;
return true;
}
static hash_table <tree_scc_hasher> tree_scc_hash;
static struct obstack tree_scc_hash_obstack;
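The hasher only checks the (hash, len, entry_len) triple, so structurally different SCCs can land in the same slot; those are kept on the next chain and told apart by full comparison. A condensed sketch of the probe/insert protocol that unify_scc implements below (statistics, flag handling and the obstack copy elided; pscc_copy stands for the permanent copy made on insertion):

/* Probe by SCC hash, walk colliding candidates, chain on failure.  */
tree_scc **slot = tree_scc_hash.find_slot_with_hash (scc, scc_hash, INSERT);
for (tree_scc *pscc = *slot; pscc; pscc = pscc->next)
  if (compare_tree_sccs (pscc, scc, map))
    return true;          /* SCC merged with prevailing PSCC.  */
pscc_copy->next = *slot;  /* No match: record SCC as a new candidate.  */
*slot = pscc_copy;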
static unsigned long num_merged_types;
static unsigned long num_prevailing_types;
static unsigned long num_not_merged_types;
static unsigned long num_not_merged_types_in_same_scc;
static unsigned long num_not_merged_types_trees;
static unsigned long num_not_merged_types_in_same_scc_trees;
static unsigned long num_type_scc_trees;
static unsigned long total_scc_size;
static unsigned long num_sccs_read;
static unsigned long total_scc_size_merged;
static unsigned long num_sccs_merged;
static unsigned long num_scc_compares;
static unsigned long num_scc_compare_collisions;
/* Compare the two entries T1 and T2 of two SCCs that are possibly equal,
recursing through in-SCC tree edges. Returns true if the SCCs entered
through T1 and T2 are equal and fills in *MAP with the pairs of
SCC entries we visited, starting with (*MAP)[0] = T1 and (*MAP)[1] = T2. */
static bool
compare_tree_sccs_1 (tree t1, tree t2, tree **map)
{
enum tree_code code;
/* Mark already visited nodes. */
TREE_ASM_WRITTEN (t2) = 1;
/* Push the pair onto map. */
(*map)[0] = t1;
(*map)[1] = t2;
*map = *map + 2;
/* Compare value-fields. */
#define compare_values(X) \
do { \
if (X(t1) != X(t2)) \
return false; \
} while (0)
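For instance, compare_values (TREE_CODE) expands by literal substitution to:

do {
  if (TREE_CODE (t1) != TREE_CODE (t2))
    return false;
} while (0);

which is what lets each flag below be checked in a single line.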
compare_values (TREE_CODE);
code = TREE_CODE (t1);
if (!TYPE_P (t1))
{
compare_values (TREE_SIDE_EFFECTS);
compare_values (TREE_CONSTANT);
compare_values (TREE_READONLY);
compare_values (TREE_PUBLIC);
}
compare_values (TREE_ADDRESSABLE);
compare_values (TREE_THIS_VOLATILE);
if (DECL_P (t1))
compare_values (DECL_UNSIGNED);
else if (TYPE_P (t1))
compare_values (TYPE_UNSIGNED);
if (TYPE_P (t1))
compare_values (TYPE_ARTIFICIAL);
else
compare_values (TREE_NO_WARNING);
compare_values (TREE_NOTHROW);
compare_values (TREE_STATIC);
if (code != TREE_BINFO)
compare_values (TREE_PRIVATE);
compare_values (TREE_PROTECTED);
compare_values (TREE_DEPRECATED);
if (TYPE_P (t1))
{
compare_values (TYPE_SATURATING);
compare_values (TYPE_ADDR_SPACE);
}
else if (code == SSA_NAME)
compare_values (SSA_NAME_IS_DEFAULT_DEF);
if (CODE_CONTAINS_STRUCT (code, TS_INT_CST))
{
compare_values (TREE_INT_CST_LOW);
compare_values (TREE_INT_CST_HIGH);
}
if (CODE_CONTAINS_STRUCT (code, TS_REAL_CST))
{
/* ??? No suitable compare routine available. */
REAL_VALUE_TYPE r1 = TREE_REAL_CST (t1);
REAL_VALUE_TYPE r2 = TREE_REAL_CST (t2);
if (r1.cl != r2.cl
|| r1.decimal != r2.decimal
|| r1.sign != r2.sign
|| r1.signalling != r2.signalling
|| r1.canonical != r2.canonical
|| r1.uexp != r2.uexp)
return false;
for (unsigned i = 0; i < SIGSZ; ++i)
if (r1.sig[i] != r2.sig[i])
return false;
}
if (CODE_CONTAINS_STRUCT (code, TS_FIXED_CST))
if (!fixed_compare (EQ_EXPR,
TREE_FIXED_CST_PTR (t1), TREE_FIXED_CST_PTR (t2)))
return false;
/* We don't want to compare locations, so there is nothing to compare
for TS_DECL_MINIMAL. */
if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
{
compare_values (DECL_MODE);
compare_values (DECL_NONLOCAL);
compare_values (DECL_VIRTUAL_P);
compare_values (DECL_IGNORED_P);
compare_values (DECL_ABSTRACT);
compare_values (DECL_ARTIFICIAL);
compare_values (DECL_USER_ALIGN);
compare_values (DECL_PRESERVE_P);
compare_values (DECL_EXTERNAL);
compare_values (DECL_GIMPLE_REG_P);
compare_values (DECL_ALIGN);
if (code == LABEL_DECL)
{
tree newt = gimple_register_type (t);
/* Mark non-prevailing types so we fix them up. No need
to reset that flag afterwards - nothing that refers
to those types is left and they are collected. */
if (newt != t)
{
num_merged_types++;
TREE_VISITED (t) = 1;
}
compare_values (DECL_ERROR_ISSUED);
compare_values (EH_LANDING_PAD_NR);
compare_values (LABEL_DECL_UID);
}
else if (code == FIELD_DECL)
{
compare_values (DECL_PACKED);
compare_values (DECL_NONADDRESSABLE_P);
compare_values (DECL_OFFSET_ALIGN);
}
else if (code == VAR_DECL)
{
compare_values (DECL_HAS_DEBUG_EXPR_P);
compare_values (DECL_NONLOCAL_FRAME);
}
if (code == RESULT_DECL
|| code == PARM_DECL
|| code == VAR_DECL)
{
compare_values (DECL_BY_REFERENCE);
if (code == VAR_DECL
|| code == PARM_DECL)
compare_values (DECL_HAS_VALUE_EXPR_P);
}
}
if (CODE_CONTAINS_STRUCT (code, TS_DECL_WRTL))
compare_values (DECL_REGISTER);
if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
{
compare_values (DECL_DEFER_OUTPUT);
compare_values (DECL_COMMON);
compare_values (DECL_DLLIMPORT_P);
compare_values (DECL_WEAK);
compare_values (DECL_SEEN_IN_BIND_EXPR_P);
compare_values (DECL_COMDAT);
compare_values (DECL_VISIBILITY);
compare_values (DECL_VISIBILITY_SPECIFIED);
if (code == VAR_DECL)
{
compare_values (DECL_HARD_REGISTER);
compare_values (DECL_IN_TEXT_SECTION);
compare_values (DECL_IN_CONSTANT_POOL);
compare_values (DECL_TLS_MODEL);
}
if (VAR_OR_FUNCTION_DECL_P (t1))
compare_values (DECL_INIT_PRIORITY);
}
if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
{
compare_values (DECL_BUILT_IN_CLASS);
compare_values (DECL_STATIC_CONSTRUCTOR);
compare_values (DECL_STATIC_DESTRUCTOR);
compare_values (DECL_UNINLINABLE);
compare_values (DECL_POSSIBLY_INLINED);
compare_values (DECL_IS_NOVOPS);
compare_values (DECL_IS_RETURNS_TWICE);
compare_values (DECL_IS_MALLOC);
compare_values (DECL_IS_OPERATOR_NEW);
compare_values (DECL_DECLARED_INLINE_P);
compare_values (DECL_STATIC_CHAIN);
compare_values (DECL_NO_INLINE_WARNING_P);
compare_values (DECL_NO_INSTRUMENT_FUNCTION_ENTRY_EXIT);
compare_values (DECL_NO_LIMIT_STACK);
compare_values (DECL_DISREGARD_INLINE_LIMITS);
compare_values (DECL_PURE_P);
compare_values (DECL_LOOPING_CONST_OR_PURE_P);
if (DECL_BUILT_IN_CLASS (t1) != NOT_BUILT_IN)
compare_values (DECL_FUNCTION_CODE);
if (DECL_STATIC_DESTRUCTOR (t1))
compare_values (DECL_FINI_PRIORITY);
}
if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
{
compare_values (TYPE_MODE);
compare_values (TYPE_STRING_FLAG);
compare_values (TYPE_NO_FORCE_BLK);
compare_values (TYPE_NEEDS_CONSTRUCTING);
if (RECORD_OR_UNION_TYPE_P (t1))
compare_values (TYPE_TRANSPARENT_AGGR);
else if (code == ARRAY_TYPE)
compare_values (TYPE_NONALIASED_COMPONENT);
compare_values (TYPE_PACKED);
compare_values (TYPE_RESTRICT);
compare_values (TYPE_USER_ALIGN);
compare_values (TYPE_READONLY);
compare_values (TYPE_PRECISION);
compare_values (TYPE_ALIGN);
compare_values (TYPE_ALIAS_SET);
}
/* We don't want to compare locations, so there is nothing to compare
for TS_EXP. */
/* BLOCKs are function local and we don't merge anything there, so
simply refuse to merge. */
if (CODE_CONTAINS_STRUCT (code, TS_BLOCK))
return false;
if (CODE_CONTAINS_STRUCT (code, TS_TRANSLATION_UNIT_DECL))
if (strcmp (TRANSLATION_UNIT_LANGUAGE (t1),
TRANSLATION_UNIT_LANGUAGE (t2)) != 0)
return false;
if (CODE_CONTAINS_STRUCT (code, TS_TARGET_OPTION))
if (memcmp (TREE_TARGET_OPTION (t1), TREE_TARGET_OPTION (t2),
sizeof (struct cl_target_option)) != 0)
return false;
if (CODE_CONTAINS_STRUCT (code, TS_OPTIMIZATION))
if (memcmp (TREE_OPTIMIZATION (t1), TREE_OPTIMIZATION (t2),
sizeof (struct cl_optimization)) != 0)
return false;
if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
if (vec_safe_length (BINFO_BASE_ACCESSES (t1))
!= vec_safe_length (BINFO_BASE_ACCESSES (t2)))
return false;
if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
compare_values (CONSTRUCTOR_NELTS);
if (CODE_CONTAINS_STRUCT (code, TS_IDENTIFIER))
if (IDENTIFIER_LENGTH (t1) != IDENTIFIER_LENGTH (t2)
|| memcmp (IDENTIFIER_POINTER (t1), IDENTIFIER_POINTER (t2),
IDENTIFIER_LENGTH (t1)) != 0)
return false;
if (CODE_CONTAINS_STRUCT (code, TS_STRING))
if (TREE_STRING_LENGTH (t1) != TREE_STRING_LENGTH (t2)
|| memcmp (TREE_STRING_POINTER (t1), TREE_STRING_POINTER (t2),
TREE_STRING_LENGTH (t1)) != 0)
return false;
#undef compare_values
/* Compare pointer fields. */
/* Recurse. Search & Replaced from DFS_write_tree_body.
Folding the early checks into the compare_tree_edges recursion
macro makes debugging way quicker as you are able to break on
compare_tree_sccs_1 and simply finish until a call returns false
to spot the SCC members with the difference. */
#define compare_tree_edges(E1, E2) \
do { \
tree t1_ = (E1), t2_ = (E2); \
if (t1_ != t2_ \
&& (!t1_ || !t2_ \
|| !TREE_VISITED (t2_) \
|| (!TREE_ASM_WRITTEN (t2_) \
&& !compare_tree_sccs_1 (t1_, t2_, map)))) \
return false; \
/* Only non-NULL trees outside of the SCC may compare equal. */ \
gcc_checking_assert (t1_ != t2_ || (!t2_ || !TREE_VISITED (t2_))); \
} while (0)
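Spelled out for one edge, compare_tree_edges (TREE_TYPE (t1), TREE_TYPE (t2)) reduces (checking assert elided) to:

tree t1_ = TREE_TYPE (t1), t2_ = TREE_TYPE (t2);
if (t1_ != t2_
    && (!t1_ || !t2_
        || !TREE_VISITED (t2_)       /* edge leaves the SCC: must be identical */
        || (!TREE_ASM_WRITTEN (t2_)  /* in-SCC node not yet paired up */
            && !compare_tree_sccs_1 (t1_, t2_, map))))
  return false;

so cross-SCC edges must be pointer-identical, while in-SCC edges recurse at most once per node.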
if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
{
if (code != IDENTIFIER_NODE)
compare_tree_edges (TREE_TYPE (t1), TREE_TYPE (t2));
}
/* Second fixup all trees in the new cache entries. */
for (i = len; i-- > from;)
if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
{
tree t = cache->nodes[i];
tree oldt = t;
if (!t)
continue;
unsigned i;
/* Note that the number of elements for EXPR has already been emitted
in EXPR's header (see streamer_write_tree_header). */
for (i = 0; i < VECTOR_CST_NELTS (t1); ++i)
compare_tree_edges (VECTOR_CST_ELT (t1, i), VECTOR_CST_ELT (t2, i));
}
/* First fixup the fields of T. */
lto_fixup_types (t);
if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
{
compare_tree_edges (TREE_REALPART (t1), TREE_REALPART (t2));
compare_tree_edges (TREE_IMAGPART (t1), TREE_IMAGPART (t2));
}
if (!TYPE_P (t))
continue;
if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
{
compare_tree_edges (DECL_NAME (t1), DECL_NAME (t2));
/* ??? Global decls from different TUs have non-matching
TRANSLATION_UNIT_DECLs. Only consider a small set of
decls equivalent; we should not end up merging others. */
if ((code == TYPE_DECL
|| code == NAMESPACE_DECL
|| code == IMPORTED_DECL
|| code == CONST_DECL
|| (VAR_OR_FUNCTION_DECL_P (t1)
&& (TREE_PUBLIC (t1) || DECL_EXTERNAL (t1))))
&& DECL_FILE_SCOPE_P (t1) && DECL_FILE_SCOPE_P (t2))
;
else
compare_tree_edges (DECL_CONTEXT (t1), DECL_CONTEXT (t2));
}
/* Now try to find a canonical variant of T itself. */
t = GIMPLE_REGISTER_TYPE (t);
if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
{
compare_tree_edges (DECL_SIZE (t1), DECL_SIZE (t2));
compare_tree_edges (DECL_SIZE_UNIT (t1), DECL_SIZE_UNIT (t2));
compare_tree_edges (DECL_ATTRIBUTES (t1), DECL_ATTRIBUTES (t2));
if ((code == VAR_DECL
|| code == PARM_DECL)
&& DECL_HAS_VALUE_EXPR_P (t1))
compare_tree_edges (DECL_VALUE_EXPR (t1), DECL_VALUE_EXPR (t2));
if (code == VAR_DECL
&& DECL_HAS_DEBUG_EXPR_P (t1))
compare_tree_edges (DECL_DEBUG_EXPR (t1), DECL_DEBUG_EXPR (t2));
/* LTO specific edges. */
if (code != FUNCTION_DECL
&& code != TRANSLATION_UNIT_DECL)
compare_tree_edges (DECL_INITIAL (t1), DECL_INITIAL (t2));
}
if (t == oldt)
if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
{
if (code == FUNCTION_DECL)
{
/* The following re-creates proper variant lists while fixing up
the variant leaders. We do not stream TYPE_NEXT_VARIANT so the
variant list state before fixup is broken. */
tree tem, mv;
tree a1, a2;
for (a1 = DECL_ARGUMENTS (t1), a2 = DECL_ARGUMENTS (t2);
a1 || a2;
a1 = TREE_CHAIN (a1), a2 = TREE_CHAIN (a2))
compare_tree_edges (a1, a2);
compare_tree_edges (DECL_RESULT (t1), DECL_RESULT (t2));
}
else if (code == TYPE_DECL)
compare_tree_edges (DECL_ORIGINAL_TYPE (t1), DECL_ORIGINAL_TYPE (t2));
compare_tree_edges (DECL_VINDEX (t1), DECL_VINDEX (t2));
}
if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
{
/* Make sure we don't inadvertently set the assembler name. */
if (DECL_ASSEMBLER_NAME_SET_P (t1))
compare_tree_edges (DECL_ASSEMBLER_NAME (t1),
DECL_ASSEMBLER_NAME (t2));
compare_tree_edges (DECL_SECTION_NAME (t1), DECL_SECTION_NAME (t2));
compare_tree_edges (DECL_COMDAT_GROUP (t1), DECL_COMDAT_GROUP (t2));
}
if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
{
compare_tree_edges (DECL_FIELD_OFFSET (t1), DECL_FIELD_OFFSET (t2));
compare_tree_edges (DECL_BIT_FIELD_TYPE (t1), DECL_BIT_FIELD_TYPE (t2));
compare_tree_edges (DECL_BIT_FIELD_REPRESENTATIVE (t1),
DECL_BIT_FIELD_REPRESENTATIVE (t2));
compare_tree_edges (DECL_FIELD_BIT_OFFSET (t1),
DECL_FIELD_BIT_OFFSET (t2));
compare_tree_edges (DECL_FCONTEXT (t1), DECL_FCONTEXT (t2));
}
if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
{
compare_tree_edges (DECL_FUNCTION_PERSONALITY (t1),
DECL_FUNCTION_PERSONALITY (t2));
compare_tree_edges (DECL_FUNCTION_SPECIFIC_TARGET (t1),
DECL_FUNCTION_SPECIFIC_TARGET (t2));
compare_tree_edges (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t1),
DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t2));
}
if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
{
compare_tree_edges (TYPE_SIZE (t1), TYPE_SIZE (t2));
compare_tree_edges (TYPE_SIZE_UNIT (t1), TYPE_SIZE_UNIT (t2));
compare_tree_edges (TYPE_ATTRIBUTES (t1), TYPE_ATTRIBUTES (t2));
compare_tree_edges (TYPE_NAME (t1), TYPE_NAME (t2));
/* Do not compare TYPE_POINTER_TO or TYPE_REFERENCE_TO. They will be
reconstructed during fixup. */
/* Do not compare TYPE_NEXT_VARIANT, we reconstruct the variant lists
during fixup. */
compare_tree_edges (TYPE_MAIN_VARIANT (t1), TYPE_MAIN_VARIANT (t2));
/* ??? Global types from different TUs have non-matching
TRANSLATION_UNIT_DECLs. Still merge them if they are otherwise
equal. */
if (TYPE_FILE_SCOPE_P (t1) && TYPE_FILE_SCOPE_P (t2))
;
else
compare_tree_edges (TYPE_CONTEXT (t1), TYPE_CONTEXT (t2));
/* TYPE_CANONICAL is re-computed during type merging, so do not
compare it here. */
compare_tree_edges (TYPE_STUB_DECL (t1), TYPE_STUB_DECL (t2));
}
#ifdef ENABLE_CHECKING
/* Remove us from our main variant list if we are not the
variant leader. */
if (TYPE_MAIN_VARIANT (t) != t)
{
tem = TYPE_MAIN_VARIANT (t);
while (tem && TYPE_NEXT_VARIANT (tem) != t)
tem = TYPE_NEXT_VARIANT (tem);
gcc_assert (!tem && !TYPE_NEXT_VARIANT (t));
}
#endif
if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
{
if (code == ENUMERAL_TYPE)
compare_tree_edges (TYPE_VALUES (t1), TYPE_VALUES (t2));
else if (code == ARRAY_TYPE)
compare_tree_edges (TYPE_DOMAIN (t1), TYPE_DOMAIN (t2));
else if (RECORD_OR_UNION_TYPE_P (t1))
{
tree f1, f2;
for (f1 = TYPE_FIELDS (t1), f2 = TYPE_FIELDS (t2);
f1 || f2;
f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
compare_tree_edges (f1, f2);
compare_tree_edges (TYPE_BINFO (t1), TYPE_BINFO (t2));
}
else if (code == FUNCTION_TYPE
|| code == METHOD_TYPE)
compare_tree_edges (TYPE_ARG_TYPES (t1), TYPE_ARG_TYPES (t2));
if (!POINTER_TYPE_P (t1))
compare_tree_edges (TYPE_MINVAL (t1), TYPE_MINVAL (t2));
compare_tree_edges (TYPE_MAXVAL (t1), TYPE_MAXVAL (t2));
}
/* Query our new main variant. */
mv = GIMPLE_REGISTER_TYPE (TYPE_MAIN_VARIANT (t));
if (CODE_CONTAINS_STRUCT (code, TS_LIST))
{
compare_tree_edges (TREE_PURPOSE (t1), TREE_PURPOSE (t2));
compare_tree_edges (TREE_VALUE (t1), TREE_VALUE (t2));
compare_tree_edges (TREE_CHAIN (t1), TREE_CHAIN (t2));
}
/* If we were the variant leader and we get replaced ourselves drop
all variants from our list. */
if (TYPE_MAIN_VARIANT (t) == t
&& mv != t)
{
tem = t;
while (tem)
{
tree tem2 = TYPE_NEXT_VARIANT (tem);
TYPE_NEXT_VARIANT (tem) = NULL_TREE;
tem = tem2;
}
}
if (CODE_CONTAINS_STRUCT (code, TS_VEC))
for (int i = 0; i < TREE_VEC_LENGTH (t1); i++)
compare_tree_edges (TREE_VEC_ELT (t1, i), TREE_VEC_ELT (t2, i));
/* If we are not our own variant leader link us into our new leaders
variant list. */
if (mv != t)
{
TYPE_NEXT_VARIANT (t) = TYPE_NEXT_VARIANT (mv);
TYPE_NEXT_VARIANT (mv) = t;
if (RECORD_OR_UNION_TYPE_P (t))
TYPE_BINFO (t) = TYPE_BINFO (mv);
/* Preserve the invariant that type variants share their
TYPE_FIELDS. */
if (RECORD_OR_UNION_TYPE_P (t)
&& TYPE_FIELDS (mv) != TYPE_FIELDS (t))
{
tree f1, f2;
for (f1 = TYPE_FIELDS (mv), f2 = TYPE_FIELDS (t);
f1 && f2; f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
{
unsigned ix;
gcc_assert (f1 != f2
&& DECL_NAME (f1) == DECL_NAME (f2));
if (!streamer_tree_cache_lookup (cache, f2, &ix))
gcc_unreachable ();
/* If we're going to replace an element which we'd
still visit in the next iterations, we wouldn't
handle it, so do it here. We do have to handle it
even though the field_decl itself will be removed,
as it could refer to e.g. integer_cst which we
wouldn't reach via any other way, hence they
(and their type) would stay uncollected. */
/* ??? We should rather make sure to replace all
references to f2 with f1. That means handling
COMPONENT_REFs and CONSTRUCTOR elements in
lto_fixup_types and special-case the field-decl
operand handling. */
/* ??? Not sure the above is all relevant in this
path canonicalizing TYPE_FIELDS to that of the
main variant. */
if (ix < i)
lto_fixup_types (f2);
streamer_tree_cache_insert_at (cache, f1, ix);
}
TYPE_FIELDS (t) = TYPE_FIELDS (mv);
}
}
if (CODE_CONTAINS_STRUCT (code, TS_EXP))
{
for (int i = 0; i < TREE_OPERAND_LENGTH (t1); i++)
compare_tree_edges (TREE_OPERAND (t1, i),
TREE_OPERAND (t2, i));
/* Finally adjust our main variant and fix it up. */
TYPE_MAIN_VARIANT (t) = mv;
/* BLOCKs are function local and we don't merge anything there. */
if (TREE_BLOCK (t1) || TREE_BLOCK (t2))
return false;
}
/* The following reconstructs the pointer chains
of the new pointed-to type if we are a main variant. We do
not stream those so they are broken before fixup. */
if (TREE_CODE (t) == POINTER_TYPE
&& TYPE_MAIN_VARIANT (t) == t)
{
TYPE_NEXT_PTR_TO (t) = TYPE_POINTER_TO (TREE_TYPE (t));
TYPE_POINTER_TO (TREE_TYPE (t)) = t;
}
else if (TREE_CODE (t) == REFERENCE_TYPE
&& TYPE_MAIN_VARIANT (t) == t)
{
TYPE_NEXT_REF_TO (t) = TYPE_REFERENCE_TO (TREE_TYPE (t));
TYPE_REFERENCE_TO (TREE_TYPE (t)) = t;
}
if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
{
unsigned i;
tree t;
/* Lengths have already been compared above. */
FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (t1), i, t)
compare_tree_edges (t, BINFO_BASE_BINFO (t2, i));
FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (t1), i, t)
compare_tree_edges (t, BINFO_BASE_ACCESS (t2, i));
compare_tree_edges (BINFO_OFFSET (t1), BINFO_OFFSET (t2));
compare_tree_edges (BINFO_VTABLE (t1), BINFO_VTABLE (t2));
compare_tree_edges (BINFO_VPTR_FIELD (t1), BINFO_VPTR_FIELD (t2));
compare_tree_edges (BINFO_INHERITANCE_CHAIN (t1),
BINFO_INHERITANCE_CHAIN (t2));
compare_tree_edges (BINFO_SUBVTT_INDEX (t1),
BINFO_SUBVTT_INDEX (t2));
compare_tree_edges (BINFO_VPTR_INDEX (t1),
BINFO_VPTR_INDEX (t2));
}
if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
{
unsigned i;
tree index, value;
/* Lengths have already been compared above. */
FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (t1), i, index, value)
{
compare_tree_edges (index, CONSTRUCTOR_ELT (t2, i)->index);
compare_tree_edges (value, CONSTRUCTOR_ELT (t2, i)->value);
}
}
else
#undef compare_tree_edges
return true;
}
/* Compare the tree scc SCC to the prevailing candidate PSCC, filling
out MAP if they are equal. */
static bool
compare_tree_sccs (tree_scc *pscc, tree_scc *scc,
tree *map)
{
/* Assume SCC entry hashes are sorted by their cardinality, which
means we can simply take the first n-tuple of equal hashes
(recorded as entry_len) and do n SCC entry candidate
comparisons. */
for (unsigned i = 0; i < pscc->entry_len; ++i)
{
tree *mapp = map;
num_scc_compare_collisions++;
if (compare_tree_sccs_1 (pscc->entries[0], scc->entries[i], &mapp))
{
if (RECORD_OR_UNION_TYPE_P (t))
{
tree f1, f2;
if (TYPE_FIELDS (t) != TYPE_FIELDS (oldt))
for (f1 = TYPE_FIELDS (t), f2 = TYPE_FIELDS (oldt);
f1 && f2; f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
{
unsigned ix;
gcc_assert (f1 != f2 && DECL_NAME (f1) == DECL_NAME (f2));
if (!streamer_tree_cache_lookup (cache, f2, &ix))
gcc_unreachable ();
/* If we're going to replace an element which we'd
still visit in the next iterations, we wouldn't
handle it, so do it here. We do have to handle it
even though the field_decl itself will be removed,
as it could refer to e.g. integer_cst which we
wouldn't reach via any other way, hence they
(and their type) would stay uncollected. */
/* ??? We should rather make sure to replace all
references to f2 with f1. That means handling
COMPONENT_REFs and CONSTRUCTOR elements in
lto_fixup_types and special-case the field-decl
operand handling. */
if (ix < i)
lto_fixup_types (f2);
streamer_tree_cache_insert_at (cache, f1, ix);
}
}
/* Equal - no need to reset TREE_VISITED or TREE_ASM_WRITTEN
on the scc as all trees will be freed. */
return true;
}
/* Reset TREE_ASM_WRITTEN on scc for the next compare or in case
the SCC prevails. */
for (unsigned j = 0; j < scc->len; ++j)
TREE_ASM_WRITTEN (scc->entries[j]) = 0;
}
return false;
}
/* Try to unify the SCC with nodes FROM to FROM + LEN in CACHE and
hash value SCC_HASH with an already recorded SCC. Return true if
that was successful, otherwise return false. */
/* If we found a tree that is equal to oldt replace it in the
cache, so that further users (in the various LTO sections)
make use of it. */
streamer_tree_cache_insert_at (cache, t, i);
static bool
unify_scc (struct streamer_tree_cache_d *cache, unsigned from,
unsigned len, unsigned scc_entry_len, hashval_t scc_hash)
{
bool unified_p = false;
tree_scc *scc
= (tree_scc *) alloca (sizeof (tree_scc) + (len - 1) * sizeof (tree));
scc->next = NULL;
scc->hash = scc_hash;
scc->len = len;
scc->entry_len = scc_entry_len;
for (unsigned i = 0; i < len; ++i)
{
tree t = streamer_tree_cache_get_tree (cache, from + i);
scc->entries[i] = t;
/* Do not merge SCCs with local entities inside them. Also do
not merge TRANSLATION_UNIT_DECLs. */
if (TREE_CODE (t) == TRANSLATION_UNIT_DECL
|| (VAR_OR_FUNCTION_DECL_P (t)
&& !(TREE_PUBLIC (t) || DECL_EXTERNAL (t)))
|| TREE_CODE (t) == LABEL_DECL)
{
/* Avoid doing any work for these cases and do not bother to
record the SCCs for further merging. */
return false;
}
}
/* Finally compute the canonical type of all TREE_TYPEs and register
VAR_DECL and FUNCTION_DECL nodes in the symbol table.
From this point there are no longer any types with
TYPE_STRUCTURAL_EQUALITY_P and its type-based alias problems.
This step requires the TYPE_POINTER_TO lists being present, so
make sure it is done last. */
for (i = len; i-- > from;)
/* Look for the list of candidate SCCs to compare against. */
tree_scc **slot;
slot = tree_scc_hash.find_slot_with_hash (scc, scc_hash, INSERT);
if (*slot)
{
tree t = cache->nodes[i];
if (t == NULL_TREE)
continue;
/* Try unifying against each candidate. */
num_scc_compares++;
/* Set TREE_VISITED on the scc so we can easily identify tree nodes
outside of the scc when following tree edges. Make sure
that TREE_ASM_WRITTEN is unset so we can use it as 2nd bit
to track whether we visited the SCC member during the compare.
We cannot use TREE_VISITED on the pscc members as the extended
scc and pscc can overlap. */
for (unsigned i = 0; i < scc->len; ++i)
{
TREE_VISITED (scc->entries[i]) = 1;
gcc_assert (!TREE_ASM_WRITTEN (scc->entries[i]));
}
tree *map = XALLOCAVEC (tree, 2 * len);
for (tree_scc *pscc = *slot; pscc; pscc = pscc->next)
{
if (!compare_tree_sccs (pscc, scc, map))
continue;
/* Found an equal SCC. */
unified_p = true;
num_scc_compare_collisions--;
num_sccs_merged++;
total_scc_size_merged += len;
/* Fixup the streamer cache with the prevailing nodes according
to the tree node mapping computed by compare_tree_sccs. */
for (unsigned i = 0; i < len; ++i)
{
tree t = map[2*i+1];
enum tree_code code = TREE_CODE (t);
unsigned ix;
bool r;
/* IDENTIFIER_NODEs should be singletons and are merged by the
streamer. The others should be singletons, too, and we
should not merge them in any way. */
gcc_assert (code != TRANSLATION_UNIT_DECL
&& code != IDENTIFIER_NODE
&& !streamer_handle_as_builtin_p (t));
r = streamer_tree_cache_lookup (cache, t, &ix);
gcc_assert (r && ix >= from);
streamer_tree_cache_replace_tree (cache, map[2 * i], ix);
if (TYPE_P (t))
num_merged_types++;
}
/* Free the tree nodes from the read SCC. */
for (unsigned i = 0; i < len; ++i)
ggc_free (scc->entries[i]);
break;
}
/* Reset TREE_VISITED if we didn't unify the SCC with another. */
if (!unified_p)
for (unsigned i = 0; i < scc->len; ++i)
TREE_VISITED (scc->entries[i]) = 0;
}
if (TREE_CODE (t) == VAR_DECL)
lto_register_var_decl_in_symtab (data_in, t);
else if (TREE_CODE (t) == FUNCTION_DECL && !DECL_BUILT_IN (t))
lto_register_function_decl_in_symtab (data_in, t);
else if (!flag_wpa
&& TREE_CODE (t) == TYPE_DECL)
debug_hooks->type_decl (t, !DECL_FILE_SCOPE_P (t));
else if (TYPE_P (t) && !TYPE_CANONICAL (t))
TYPE_CANONICAL (t) = gimple_register_canonical_type (t);
/* If we didn't unify it to any candidate duplicate the relevant
pieces to permanent storage and link it into the chain. */
if (!unified_p)
{
tree_scc *pscc
= XOBNEWVAR (&tree_scc_hash_obstack, tree_scc, sizeof (tree_scc));
memcpy (pscc, scc, sizeof (tree_scc));
pscc->next = (*slot);
*slot = pscc;
}
return unified_p;
}
@@ -2036,9 +2401,123 @@ lto_read_decls (struct lto_file_decl_data *decl_data, const void *data,
{
tree t;
unsigned from = data_in->reader_cache->nodes.length ();
t = stream_read_tree (&ib_main, data_in);
gcc_assert (t && ib_main.p <= ib_main.len);
uniquify_nodes (data_in, from);
/* Read and uniquify SCCs as in the input stream. */
enum LTO_tags tag = streamer_read_record_start (&ib_main);
if (tag == LTO_tree_scc)
{
unsigned len_;
unsigned scc_entry_len;
hashval_t scc_hash = lto_input_scc (&ib_main, data_in, &len_,
&scc_entry_len);
unsigned len = data_in->reader_cache->nodes.length () - from;
gcc_assert (len == len_);
total_scc_size += len;
num_sccs_read++;
/* We have the special case of size-1 SCCs that are pre-merged
by means of identifier and string sharing, for example.
??? Maybe we should avoid streaming those as SCCs. */
tree first = streamer_tree_cache_get_tree (data_in->reader_cache,
from);
if (len == 1
&& (TREE_CODE (first) == IDENTIFIER_NODE
|| TREE_CODE (first) == INTEGER_CST
|| TREE_CODE (first) == TRANSLATION_UNIT_DECL
|| streamer_handle_as_builtin_p (first)))
continue;
/* Try to unify the SCC with already existing ones. */
if (!flag_ltrans
&& unify_scc (data_in->reader_cache, from,
len, scc_entry_len, scc_hash))
continue;
/* Do remaining fixup tasks for prevailing nodes. */
bool seen_type = false;
bool not_merged_type_same_scc = false;
bool not_merged_type_not_same_scc = false;
for (unsigned i = 0; i < len; ++i)
{
tree t = streamer_tree_cache_get_tree (data_in->reader_cache,
from + i);
/* For statistics, see if the old code would have merged
the type. */
if (TYPE_P (t)
&& (flag_lto_report || (flag_wpa && flag_lto_report_wpa)))
{
tree newt = gimple_register_type (t);
if (newt != t)
{
num_not_merged_types++;
unsigned j;
/* Check if we can never merge the types because
they are in the same SCC and thus the old
code was broken. */
for (j = 0; j < len; ++j)
if (i != j
&& streamer_tree_cache_get_tree
(data_in->reader_cache, from + j) == newt)
{
num_not_merged_types_in_same_scc++;
not_merged_type_same_scc = true;
break;
}
if (j == len)
not_merged_type_not_same_scc = true;
}
}
/* Reconstruct the type variant and pointer-to/reference-to
chains. */
if (TYPE_P (t))
{
seen_type = true;
num_prevailing_types++;
lto_fixup_prevailing_type (t);
}
/* Compute the canonical type of all types.
??? Should be able to assert that !TYPE_CANONICAL. */
if (TYPE_P (t) && !TYPE_CANONICAL (t))
TYPE_CANONICAL (t) = gimple_register_canonical_type (t);
/* Link shared INTEGER_CSTs into TYPE_CACHED_VALUEs of its
type which is also member of this SCC. */
if (TREE_CODE (t) == INTEGER_CST
&& !TREE_OVERFLOW (t))
cache_integer_cst (t);
/* Register TYPE_DECLs with the debuginfo machinery. */
if (!flag_wpa
&& TREE_CODE (t) == TYPE_DECL)
debug_hooks->type_decl (t, !DECL_FILE_SCOPE_P (t));
if (!flag_ltrans)
{
/* Register variables and functions with the
symbol table. */
if (TREE_CODE (t) == VAR_DECL)
lto_register_var_decl_in_symtab (data_in, t);
else if (TREE_CODE (t) == FUNCTION_DECL
&& !DECL_BUILT_IN (t))
lto_register_function_decl_in_symtab (data_in, t);
/* Scan the tree for references to global functions or
variables and record those for later fixup. */
maybe_remember_with_vars (t);
}
}
if (not_merged_type_same_scc)
{
num_not_merged_types_in_same_scc_trees += len;
num_not_merged_types_trees += len;
}
else if (not_merged_type_not_same_scc)
num_not_merged_types_trees += len;
if (seen_type)
num_type_scc_trees += len;
}
else
{
/* Pickle stray references. */
t = lto_input_tree_1 (&ib_main, data_in, tag, 0);
gcc_assert (t && data_in->reader_cache->nodes.length () == from);
}
}
/* Read in lto_in_decl_state objects. */
@@ -2898,9 +3377,9 @@ read_cgraph_and_symbols (unsigned nfiles, const char **fnames)
type_hash_cache = htab_create_ggc (512, tree_int_map_hash,
tree_int_map_eq, NULL);
type_pair_cache = XCNEWVEC (struct type_pair_d, GIMPLE_TYPE_PAIR_SIZE);
gimple_type_leader = ggc_alloc_cleared_vec_gimple_type_leader_entry_s
(GIMPLE_TYPE_LEADER_SIZE);
gimple_types = htab_create_ggc (16381, gimple_type_hash, gimple_type_eq, 0);
gcc_obstack_init (&tree_scc_hash_obstack);
tree_scc_hash.create (4096);
if (!quiet_flag)
fprintf (stderr, "Reading object files:");
@@ -2933,7 +3412,6 @@ read_cgraph_and_symbols (unsigned nfiles, const char **fnames)
lto_obj_file_close (current_lto_file);
free (current_lto_file);
current_lto_file = NULL;
ggc_collect ();
}
lto_flatten_files (decl_data, count, last_file_ix);
@@ -2955,7 +3433,8 @@ read_cgraph_and_symbols (unsigned nfiles, const char **fnames)
type_hash_cache = NULL;
free (type_pair_cache);
type_pair_cache = NULL;
gimple_type_leader = NULL;
tree_scc_hash.dispose ();
obstack_free (&tree_scc_hash_obstack, NULL);
free_gimple_type_tables ();
ggc_collect ();
@@ -3121,27 +3600,49 @@ print_lto_report_1 (void)
const char *pfx = (flag_lto) ? "LTO" : (flag_wpa) ? "WPA" : "LTRANS";
fprintf (stderr, "%s statistics\n", pfx);
if (gimple_types)
fprintf (stderr, "[%s] GIMPLE type table: size %ld, %ld elements, "
"%ld searches, %ld collisions (ratio: %f)\n", pfx,
(long) htab_size (gimple_types),
(long) htab_elements (gimple_types),
(long) gimple_types->searches,
(long) gimple_types->collisions,
htab_collisions (gimple_types));
else
fprintf (stderr, "[%s] GIMPLE type table is empty\n", pfx);
if (type_hash_cache)
fprintf (stderr, "[%s] GIMPLE type hash cache table: size %ld, %ld elements, "
"%ld searches, %ld collisions (ratio: %f)\n", pfx,
(long) htab_size (type_hash_cache),
(long) htab_elements (type_hash_cache),
(long) type_hash_cache->searches,
(long) type_hash_cache->collisions,
htab_collisions (type_hash_cache));
else
fprintf (stderr, "[%s] GIMPLE type hash cache table is empty\n", pfx);
fprintf (stderr, "[%s] Merged %lu types\n", pfx, num_merged_types);
fprintf (stderr, "[%s] read %lu SCCs of average size %f\n",
pfx, num_sccs_read, total_scc_size / (double)num_sccs_read);
fprintf (stderr, "[%s] %lu tree bodies read in total\n", pfx, total_scc_size);
if (flag_wpa && tree_scc_hash.is_created ())
{
fprintf (stderr, "[%s] tree SCC table: size %ld, %ld elements, "
"collision ratio: %f\n", pfx,
(long) tree_scc_hash.size (),
(long) tree_scc_hash.elements (),
tree_scc_hash.collisions ());
hash_table<tree_scc_hasher>::iterator hiter;
tree_scc *scc, *max_scc = NULL;
unsigned max_length = 0;
FOR_EACH_HASH_TABLE_ELEMENT (tree_scc_hash, scc, tree_scc *, hiter)
{
unsigned length = 0;
tree_scc *s = scc;
for (; s; s = s->next)
length++;
if (length > max_length)
{
max_length = length;
max_scc = scc;
}
}
fprintf (stderr, "[%s] tree SCC max chain length %u (size %u)\n",
pfx, max_length, max_scc->len);
fprintf (stderr, "[%s] Compared %lu SCCs, %lu collisions (%f)\n", pfx,
num_scc_compares, num_scc_compare_collisions,
num_scc_compare_collisions / (double) num_scc_compares);
fprintf (stderr, "[%s] Merged %lu SCCs\n", pfx, num_sccs_merged);
fprintf (stderr, "[%s] Merged %lu tree bodies\n", pfx,
total_scc_size_merged);
fprintf (stderr, "[%s] Merged %lu types\n", pfx, num_merged_types);
fprintf (stderr, "[%s] %lu types prevailed (%lu associated trees)\n",
pfx, num_prevailing_types, num_type_scc_trees);
fprintf (stderr, "[%s] Old merging code merges an additional %lu types"
" of which %lu are in the same SCC with their "
"prevailing variant (%lu and %lu associated trees)\n",
pfx, num_not_merged_types, num_not_merged_types_in_same_scc,
num_not_merged_types_trees,
num_not_merged_types_in_same_scc_trees);
}
print_gimple_types_stats (pfx);
print_lto_report (pfx);
@@ -122,10 +122,10 @@ unpack_ts_base_value_fields (struct bitpack_d *bp, tree expr)
TYPE_ARTIFICIAL (expr) = (unsigned) bp_unpack_value (bp, 1);
else
TREE_NO_WARNING (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_USED (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_NOTHROW (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_STATIC (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_PRIVATE (expr) = (unsigned) bp_unpack_value (bp, 1);
if (TREE_CODE (expr) != TREE_BINFO)
TREE_PRIVATE (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_PROTECTED (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_DEPRECATED (expr) = (unsigned) bp_unpack_value (bp, 1);
if (TYPE_P (expr))
@@ -351,8 +351,6 @@ unpack_ts_type_common_value_fields (struct bitpack_d *bp, tree expr)
TYPE_NONALIASED_COMPONENT (expr) = (unsigned) bp_unpack_value (bp, 1);
TYPE_PACKED (expr) = (unsigned) bp_unpack_value (bp, 1);
TYPE_RESTRICT (expr) = (unsigned) bp_unpack_value (bp, 1);
TYPE_CONTAINS_PLACEHOLDER_INTERNAL (expr)
= (unsigned) bp_unpack_value (bp, 2);
TYPE_USER_ALIGN (expr) = (unsigned) bp_unpack_value (bp, 1);
TYPE_READONLY (expr) = (unsigned) bp_unpack_value (bp, 1);
TYPE_PRECISION (expr) = bp_unpack_var_len_unsigned (bp);
@@ -811,7 +809,7 @@ lto_input_ts_list_tree_pointers (struct lto_input_block *ib,
{
TREE_PURPOSE (expr) = stream_read_tree (ib, data_in);
TREE_VALUE (expr) = stream_read_tree (ib, data_in);
TREE_CHAIN (expr) = streamer_read_chain (ib, data_in);
TREE_CHAIN (expr) = stream_read_tree (ib, data_in);
}
@@ -1021,19 +1019,6 @@ streamer_read_tree_body (struct lto_input_block *ib, struct data_in *data_in,
}
/* Read an INTEGER_CST node from input block IB using the per-file
context in DATA_IN. */
tree
streamer_read_integer_cst (struct lto_input_block *ib, struct data_in *data_in)
{
tree type = stream_read_tree (ib, data_in);
unsigned HOST_WIDE_INT low = streamer_read_uhwi (ib);
HOST_WIDE_INT high = streamer_read_hwi (ib);
return build_int_cst_wide (type, low, high);
}
/* Read an index IX from input block IB and return the tree node at
DATA_IN->FILE_DATA->GLOBALS_INDEX[IX]. */
@@ -1047,7 +1032,7 @@ streamer_get_pickled_tree (struct lto_input_block *ib, struct data_in *data_in)
ix = streamer_read_uhwi (ib);
expected_tag = streamer_read_enum (ib, LTO_tags, LTO_NUM_TAGS);
result = streamer_tree_cache_get (data_in->reader_cache, ix);
result = streamer_tree_cache_get_tree (data_in->reader_cache, ix);
gcc_assert (result
&& TREE_CODE (result) == lto_tag_to_tree_code (expected_tag));
@@ -1091,7 +1076,7 @@ streamer_get_builtin_tree (struct lto_input_block *ib, struct data_in *data_in)
if (asmname)
set_builtin_user_assembler_name (result, asmname);
streamer_tree_cache_append (data_in->reader_cache, result);
streamer_tree_cache_append (data_in->reader_cache, result, 0);
return result;
}
@@ -95,10 +95,10 @@ pack_ts_base_value_fields (struct bitpack_d *bp, tree expr)
bp_pack_value (bp, TYPE_ARTIFICIAL (expr), 1);
else
bp_pack_value (bp, TREE_NO_WARNING (expr), 1);
bp_pack_value (bp, TREE_USED (expr), 1);
bp_pack_value (bp, TREE_NOTHROW (expr), 1);
bp_pack_value (bp, TREE_STATIC (expr), 1);
bp_pack_value (bp, TREE_PRIVATE (expr), 1);
if (TREE_CODE (expr) != TREE_BINFO)
bp_pack_value (bp, TREE_PRIVATE (expr), 1);
bp_pack_value (bp, TREE_PROTECTED (expr), 1);
bp_pack_value (bp, TREE_DEPRECATED (expr), 1);
if (TYPE_P (expr))
@@ -298,12 +298,15 @@ pack_ts_type_common_value_fields (struct bitpack_d *bp, tree expr)
bp_pack_value (bp, TYPE_NONALIASED_COMPONENT (expr), 1);
bp_pack_value (bp, TYPE_PACKED (expr), 1);
bp_pack_value (bp, TYPE_RESTRICT (expr), 1);
bp_pack_value (bp, TYPE_CONTAINS_PLACEHOLDER_INTERNAL (expr), 2);
bp_pack_value (bp, TYPE_USER_ALIGN (expr), 1);
bp_pack_value (bp, TYPE_READONLY (expr), 1);
bp_pack_var_len_unsigned (bp, TYPE_PRECISION (expr));
bp_pack_var_len_unsigned (bp, TYPE_ALIGN (expr));
bp_pack_var_len_int (bp, TYPE_ALIAS_SET (expr) == 0 ? 0 : -1);
/* Make sure to preserve whether the frontend would assign
alias-set zero to this type. */
bp_pack_var_len_int (bp, (TYPE_ALIAS_SET (expr) == 0
|| (!in_lto_p
&& get_alias_set (expr) == 0)) ? 0 : -1);
}
@@ -491,9 +494,10 @@ streamer_write_chain (struct output_block *ob, tree t, bool ref_p)
to the global decls section as we do not want to have them
enter decl merging. This is, of course, only for the call
for streaming BLOCK_VARS, but other callers are safe. */
/* ??? FIXME wrt SCC streaming. Drop these for now. */
if (VAR_OR_FUNCTION_DECL_P (t)
&& DECL_EXTERNAL (t))
stream_write_tree_shallow_non_ref (ob, t, ref_p);
; /* stream_write_tree_shallow_non_ref (ob, t, ref_p); */
else
stream_write_tree (ob, t, ref_p);
@@ -553,7 +557,13 @@ static void
write_ts_decl_minimal_tree_pointers (struct output_block *ob, tree expr,
bool ref_p)
{
stream_write_tree (ob, DECL_NAME (expr), ref_p);
/* Drop names that were created for anonymous entities. */
if (DECL_NAME (expr)
&& TREE_CODE (DECL_NAME (expr)) == IDENTIFIER_NODE
&& ANON_AGGRNAME_P (DECL_NAME (expr)))
stream_write_tree (ob, NULL_TREE, ref_p);
else
stream_write_tree (ob, DECL_NAME (expr), ref_p);
stream_write_tree (ob, DECL_CONTEXT (expr), ref_p);
}
@@ -716,7 +726,7 @@ write_ts_list_tree_pointers (struct output_block *ob, tree expr, bool ref_p)
{
stream_write_tree (ob, TREE_PURPOSE (expr), ref_p);
stream_write_tree (ob, TREE_VALUE (expr), ref_p);
streamer_write_chain (ob, TREE_CHAIN (expr), ref_p);
stream_write_tree (ob, TREE_CHAIN (expr), ref_p);
}
@@ -842,6 +852,8 @@ streamer_write_tree_body (struct output_block *ob, tree expr, bool ref_p)
{
enum tree_code code;
lto_stats.num_tree_bodies_output++;
code = TREE_CODE (expr);
if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
@@ -92,16 +92,24 @@ streamer_check_handled_ts_structures (void)
static void
streamer_tree_cache_add_to_node_array (struct streamer_tree_cache_d *cache,
unsigned ix, tree t)
unsigned ix, tree t, hashval_t hash)
{
/* Make sure we're either replacing an old element or
appending consecutively. */
gcc_assert (ix <= cache->nodes.length ());
if (ix == cache->nodes.length ())
cache->nodes.safe_push (t);
{
cache->nodes.safe_push (t);
if (cache->hashes.exists ())
cache->hashes.safe_push (hash);
}
else
cache->nodes[ix] = t;
{
cache->nodes[ix] = t;
if (cache->hashes.exists ())
cache->hashes[ix] = hash;
}
}
@@ -117,7 +125,7 @@ streamer_tree_cache_add_to_node_array (struct streamer_tree_cache_d *cache,
static bool
streamer_tree_cache_insert_1 (struct streamer_tree_cache_d *cache,
tree t, unsigned *ix_p,
tree t, hashval_t hash, unsigned *ix_p,
bool insert_at_next_slot_p)
{
void **slot;
@@ -136,7 +144,7 @@ streamer_tree_cache_insert_1 (struct streamer_tree_cache_d *cache,
ix = *ix_p;
*slot = (void *)(size_t) (ix + 1);
streamer_tree_cache_add_to_node_array (cache, ix, t);
streamer_tree_cache_add_to_node_array (cache, ix, t, hash);
/* Indicate that the item was not present in the cache. */
existed_p = false;
@@ -151,7 +159,7 @@ streamer_tree_cache_insert_1 (struct streamer_tree_cache_d *cache,
location, and ENTRY->TO does not match *IX_P, add T to
the requested location slot. */
ix = *ix_p;
streamer_tree_cache_add_to_node_array (cache, ix, t);
streamer_tree_cache_add_to_node_array (cache, ix, t, hash);
*slot = (void *)(size_t) (ix + 1);
}
@@ -174,30 +182,33 @@ streamer_tree_cache_insert_1 (struct streamer_tree_cache_d *cache,
bool
streamer_tree_cache_insert (struct streamer_tree_cache_d *cache, tree t,
unsigned *ix_p)
hashval_t hash, unsigned *ix_p)
{
return streamer_tree_cache_insert_1 (cache, t, ix_p, true);
return streamer_tree_cache_insert_1 (cache, t, hash, ix_p, true);
}
/* Insert tree node T in CACHE at slot IX. If T already
existed in the cache return true. Otherwise, return false. */
/* Replace the tree node with T in CACHE at slot IX. */
bool
streamer_tree_cache_insert_at (struct streamer_tree_cache_d *cache,
tree t, unsigned ix)
void
streamer_tree_cache_replace_tree (struct streamer_tree_cache_d *cache,
tree t, unsigned ix)
{
return streamer_tree_cache_insert_1 (cache, t, &ix, false);
hashval_t hash = 0;
if (cache->hashes.exists ())
hash = streamer_tree_cache_get_hash (cache, ix);
streamer_tree_cache_insert_1 (cache, t, hash, &ix, false);
}
/* Appends tree node T to CACHE, even if T already existed in it. */
void
streamer_tree_cache_append (struct streamer_tree_cache_d *cache, tree t)
streamer_tree_cache_append (struct streamer_tree_cache_d *cache,
tree t, hashval_t hash)
{
unsigned ix = cache->nodes.length ();
streamer_tree_cache_insert_1 (cache, t, &ix, false);
streamer_tree_cache_insert_1 (cache, t, hash, &ix, false);
}
/* Return true if tree node T exists in CACHE, otherwise false. If IX_P is
@@ -257,7 +268,10 @@ record_common_node (struct streamer_tree_cache_d *cache, tree node)
if (!node)
node = error_mark_node;
streamer_tree_cache_append (cache, node);
/* ??? FIXME, devise a better hash value. But the hash needs to be equal
for all frontend and lto1 invocations. So just use the position
in the cache as hash value. */
streamer_tree_cache_append (cache, node, cache->nodes.length ());
if (POINTER_TYPE_P (node)
|| TREE_CODE (node) == COMPLEX_TYPE
@@ -305,13 +319,16 @@ preload_common_nodes (struct streamer_tree_cache_d *cache)
/* Create a cache of pickled nodes. */
struct streamer_tree_cache_d *
streamer_tree_cache_create (void)
streamer_tree_cache_create (bool with_hashes)
{
struct streamer_tree_cache_d *cache;
cache = XCNEW (struct streamer_tree_cache_d);
cache->node_map = pointer_map_create ();
cache->nodes.create (165);
if (with_hashes)
cache->hashes.create (165);
/* Load all the well-known tree nodes that are always created by
the compiler on startup. This prevents writing them out
@@ -332,5 +349,6 @@ streamer_tree_cache_delete (struct streamer_tree_cache_d *c)
pointer_map_destroy (c->node_map);
c->nodes.release ();
c->hashes.release ();
free (c);
}
@@ -43,6 +43,7 @@ along with GCC; see the file COPYING3. If not see
T. The reconstructed T is inserted in some array so that when
the reference index for T is found in the input stream, it can be
used to look up into the array to get the reconstructed T. */
struct streamer_tree_cache_d
{
/* The mapping between tree nodes and slots into the nodes array. */
@@ -50,6 +51,8 @@ struct streamer_tree_cache_d
/* The nodes pickled so far. */
vec<tree> nodes;
/* The node hashes (if available). */
vec<hashval_t> hashes;
};
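When hashes are requested at creation time the two vectors grow in lock step, so nodes[ix] and hashes[ix] always describe the same pickled tree. A sketch of the invariant as a hypothetical assertion (not part of the patch):

/* With hashes enabled, both arrays have equal length at all times.  */
gcc_checking_assert (!cache->hashes.exists ()
                     || cache->hashes.length () == cache->nodes.length ());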
/* Return true if tree node EXPR should be streamed as a builtin. For
@@ -71,7 +74,6 @@ tree streamer_alloc_tree (struct lto_input_block *, struct data_in *,
void streamer_read_tree_body (struct lto_input_block *, struct data_in *, tree);
tree streamer_get_pickled_tree (struct lto_input_block *, struct data_in *);
tree streamer_get_builtin_tree (struct lto_input_block *, struct data_in *);
tree streamer_read_integer_cst (struct lto_input_block *, struct data_in *);
struct bitpack_d streamer_read_tree_bitfields (struct lto_input_block *,
struct data_in *, tree);
@@ -89,22 +91,31 @@ void streamer_write_builtin (struct output_block *, tree);
/* In tree-streamer.c. */
void streamer_check_handled_ts_structures (void);
bool streamer_tree_cache_insert (struct streamer_tree_cache_d *, tree,
-				 unsigned *);
-bool streamer_tree_cache_insert_at (struct streamer_tree_cache_d *, tree,
-				    unsigned);
-void streamer_tree_cache_append (struct streamer_tree_cache_d *, tree);
+				 hashval_t, unsigned *);
+void streamer_tree_cache_replace_tree (struct streamer_tree_cache_d *, tree,
+				       unsigned);
+void streamer_tree_cache_append (struct streamer_tree_cache_d *, tree,
+				 hashval_t);
bool streamer_tree_cache_lookup (struct streamer_tree_cache_d *, tree,
unsigned *);
-struct streamer_tree_cache_d *streamer_tree_cache_create (void);
+struct streamer_tree_cache_d *streamer_tree_cache_create (bool);
void streamer_tree_cache_delete (struct streamer_tree_cache_d *);
/* Return the tree node at slot IX in CACHE. */
static inline tree
-streamer_tree_cache_get (struct streamer_tree_cache_d *cache, unsigned ix)
+streamer_tree_cache_get_tree (struct streamer_tree_cache_d *cache, unsigned ix)
{
return cache->nodes[ix];
}
/* Return the tree hash value at slot IX in CACHE. */
static inline hashval_t
streamer_tree_cache_get_hash (struct streamer_tree_cache_d *cache, unsigned ix)
{
return cache->hashes[ix];
}
#endif /* GCC_TREE_STREAMER_H */
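Reading both accessors together, a hedged lookup sketch (fetch_cached is a hypothetical wrapper); it only consults the hash when the cache was created with hashes enabled.

/* Illustration only: fetch a pickled node and, if recorded, its hash.  */
static inline tree
fetch_cached (struct streamer_tree_cache_d *cache, unsigned ix,
	      hashval_t *hash_out)
{
  if (cache->hashes.exists ())
    *hash_out = streamer_tree_cache_get_hash (cache, ix);
  return streamer_tree_cache_get_tree (cache, ix);
}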
@@ -1266,6 +1266,99 @@ build_int_cst_wide (tree type, unsigned HOST_WIDE_INT low, HOST_WIDE_INT hi)
return t;
}
void
cache_integer_cst (tree t)
{
tree type = TREE_TYPE (t);
HOST_WIDE_INT hi = TREE_INT_CST_HIGH (t);
unsigned HOST_WIDE_INT low = TREE_INT_CST_LOW (t);
int ix = -1;
int limit = 0;
gcc_assert (!TREE_OVERFLOW (t));
switch (TREE_CODE (type))
{
case NULLPTR_TYPE:
gcc_assert (hi == 0 && low == 0);
/* Fallthru. */
case POINTER_TYPE:
case REFERENCE_TYPE:
/* Cache NULL pointer. */
if (!hi && !low)
{
limit = 1;
ix = 0;
}
break;
case BOOLEAN_TYPE:
/* Cache false or true. */
limit = 2;
if (!hi && low < 2)
ix = low;
break;
case INTEGER_TYPE:
case OFFSET_TYPE:
if (TYPE_UNSIGNED (type))
{
/* Cache 0..N */
limit = INTEGER_SHARE_LIMIT;
if (!hi && low < (unsigned HOST_WIDE_INT)INTEGER_SHARE_LIMIT)
ix = low;
}
else
{
/* Cache -1..N */
limit = INTEGER_SHARE_LIMIT + 1;
if (!hi && low < (unsigned HOST_WIDE_INT)INTEGER_SHARE_LIMIT)
ix = low + 1;
else if (hi == -1 && low == -(unsigned HOST_WIDE_INT)1)
ix = 0;
}
break;
case ENUMERAL_TYPE:
break;
default:
gcc_unreachable ();
}
if (ix >= 0)
{
/* Look for it in the type's vector of small shared ints. */
if (!TYPE_CACHED_VALUES_P (type))
{
TYPE_CACHED_VALUES_P (type) = 1;
TYPE_CACHED_VALUES (type) = make_tree_vec (limit);
}
gcc_assert (TREE_VEC_ELT (TYPE_CACHED_VALUES (type), ix) == NULL_TREE);
TREE_VEC_ELT (TYPE_CACHED_VALUES (type), ix) = t;
}
else
{
/* Use the cache of larger shared ints. */
void **slot;
slot = htab_find_slot (int_cst_hash_table, t, INSERT);
/* If there is already an entry for the number, verify it's the
same. */
if (*slot)
{
gcc_assert (TREE_INT_CST_LOW ((tree)*slot) == low
&& TREE_INT_CST_HIGH ((tree)*slot) == hi);
return;
}
/* Otherwise insert this one into the hash table. */
*slot = t;
}
}
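To make the slot arithmetic above concrete: for a signed INTEGER_TYPE, -1 lands in slot 0 and a small non-negative value v in slot v + 1. A standalone restatement of just that mapping (small_int_cache_slot is an illustrative name, not GCC API):

/* Illustration only: mirror of the signed-type index computation.  */
static int
small_int_cache_slot (HOST_WIDE_INT hi, unsigned HOST_WIDE_INT low)
{
  if (hi == 0 && low < (unsigned HOST_WIDE_INT) INTEGER_SHARE_LIMIT)
    return (int) low + 1;	/* 0..N-1 map to slots 1..N.  */
  if (hi == -1 && low == -(unsigned HOST_WIDE_INT) 1)
    return 0;			/* -1 maps to slot 0.  */
  return -1;			/* Not cached here; use int_cst_hash_table.  */
}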
/* Builds an integer constant in TYPE such that the lowest BITS bits are ones
and the rest are zeros. */
......
@@ -5637,6 +5637,7 @@ extern const_tree strip_invariant_refs (const_tree);
extern tree lhd_gcc_personality (void);
extern void assign_assembler_name_if_neeeded (tree);
extern void warn_deprecated_use (tree, tree);
extern void cache_integer_cst (tree);
/* In cgraph.c */
@@ -6582,4 +6583,27 @@ builtin_decl_implicit_p (enum built_in_function fncode)
&& builtin_info.implicit_p[uns_fncode]);
}
/* For anonymous aggregate types, we need some sort of name to
hold on to. In practice, this should not appear, but it should
not be harmful if it does. */
#ifndef NO_DOT_IN_LABEL
#define ANON_AGGRNAME_FORMAT "._%d"
#define ANON_AGGRNAME_P(ID_NODE) (IDENTIFIER_POINTER (ID_NODE)[0] == '.' \
&& IDENTIFIER_POINTER (ID_NODE)[1] == '_')
#else /* NO_DOT_IN_LABEL */
#ifndef NO_DOLLAR_IN_LABEL
#define ANON_AGGRNAME_FORMAT "$_%d"
#define ANON_AGGRNAME_P(ID_NODE) (IDENTIFIER_POINTER (ID_NODE)[0] == '$' \
&& IDENTIFIER_POINTER (ID_NODE)[1] == '_')
#else /* NO_DOLLAR_IN_LABEL */
#define ANON_AGGRNAME_PREFIX "__anon_"
#define ANON_AGGRNAME_P(ID_NODE) \
(!strncmp (IDENTIFIER_POINTER (ID_NODE), ANON_AGGRNAME_PREFIX, \
sizeof (ANON_AGGRNAME_PREFIX) - 1))
#define ANON_AGGRNAME_FORMAT "__anon_%d"
#endif /* NO_DOLLAR_IN_LABEL */
#endif /* NO_DOT_IN_LABEL */
#endif /* GCC_TREE_H */
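A hedged round-trip sketch for the moved macros (anon_aggrname_sketch is a made-up name): an identifier built with ANON_AGGRNAME_FORMAT must satisfy ANON_AGGRNAME_P, whichever label variant was configured.

/* Illustration only: format/predicate round trip.  */
static void
anon_aggrname_sketch (void)
{
  char buf[32];
  snprintf (buf, sizeof (buf), ANON_AGGRNAME_FORMAT, 42);
  gcc_assert (ANON_AGGRNAME_P (get_identifier (buf)));
}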
@@ -84,7 +84,12 @@ varpool_remove_initializer (struct varpool_node *node)
/* Keep vtables for BINFO folding. */
&& !DECL_VIRTUAL_P (node->symbol.decl)
/* FIXME: http://gcc.gnu.org/PR55395 */
-      && debug_info_level == DINFO_LEVEL_NONE)
+      && debug_info_level == DINFO_LEVEL_NONE
+      /* When doing declaration merging we have duplicate
+	 entries for a given decl.  Do not attempt to remove
+	 the bodies, or we will end up removing the
+	 wrong one.  */
+      && cgraph_state != CGRAPH_LTO_STREAMING)
DECL_INITIAL (node->symbol.decl) = error_mark_node;
}
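The guard matters because, while LTO streaming is still in progress, merged duplicates of a decl share state and scrubbing DECL_INITIAL could hit the wrong copy. A hedged restatement of the condition (can_remove_initializer_now is an illustrative name):

/* Illustration only: initializers may be scrubbed only once decl
   merging is no longer in flight.  */
static bool
can_remove_initializer_now (void)
{
  return cgraph_state != CGRAPH_LTO_STREAMING;
}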
......