lvzhengyang / riscv-gcc-1

Commit 0cf094c0
authored Jun 25, 2015 by Eric Botcazou
committed by Eric Botcazou, Jun 25, 2015
* lto-streamer-out.c (DFS::hash_scc): Fix typos & formatting glitches.
From-SVN: r224942
parent f43d102e
Showing 2 changed files with 47 additions and 45 deletions

gcc/ChangeLog           +4  -0
gcc/lto-streamer-out.c  +43 -45
gcc/ChangeLog

2015-06-25  Eric Botcazou  <ebotcazou@adacore.com>

	* lto-streamer-out.c (DFS::hash_scc): Fix typos & formatting glitches.

2015-06-25  Richard Sandiford  <richard.sandiford@arm.com>

	* match.pd: Add patterns for vec_conds between 1 and 0.

...
gcc/lto-streamer-out.c

...
@@ -1364,54 +1364,51 @@ DFS::scc_entry_compare (const void *p1_, const void *p2_)
       return 0;
     }
 
-/* Return a hash value for the SCC on the SCC stack from FIRST with
-   size SIZE.  */
+/* Return a hash value for the SCC on the SCC stack from FIRST with SIZE.  */
 
 hashval_t
 DFS::hash_scc (struct output_block *ob,
                unsigned first, unsigned size)
 {
   unsigned int last_classes = 0, iterations = 0;
 
   /* Compute hash values for the SCC members.  */
   for (unsigned i = 0; i < size; ++i)
-    sccstack[first+i].hash = hash_tree (ob->writer_cache, NULL,
-                                        sccstack[first+i].t);
+    sccstack[first+i].hash
+      = hash_tree (ob->writer_cache, NULL, sccstack[first+i].t);
 
   if (size == 1)
     return sccstack[first].hash;
 
   /* We aim to get unique hash for every tree within SCC and compute hash value
-     of the whole SCC by combing all values together in an stable (entry point
+     of the whole SCC by combining all values together in a stable (entry-point
      independent) order.  This guarantees that the same SCC regions within
      different translation units will get the same hash values and therefore
      will be merged at WPA time.
 
-     Often the hashes are already unique.  In that case we compute scc hash
+     Often the hashes are already unique.  In that case we compute the SCC hash
      by combining individual hash values in an increasing order.
 
-     If thre are duplicates we seek at least one tree with unique hash (and
-     pick one with minimal hash and this property).  Then we obtain stable
-     order by DFS walk starting from this unique tree and then use index
+     If there are duplicates, we seek at least one tree with unique hash (and
+     pick one with minimal hash and this property).  Then we obtain a stable
+     order by DFS walk starting from this unique tree and then use the index
      within this order to make individual hash values unique.
 
     If there is no tree with unique hash, we iteratively propagate the hash
     values across the internal edges of SCC.  This usually quickly leads
     to unique hashes.  Consider, for example, an SCC containing two pointers
-     that are identical except for type they point and assume that these
-     types are also part of the SCC.
-     The propagation will add the points-to type information into their hash
-     values.  */
+     that are identical except for the types they point to and assume that
+     these types are also part of the SCC.  The propagation will add the
+     points-to type information into their hash values.  */
   do
     {
-      /* Sort the SCC so we can easily see check for uniqueness.  */
+      /* Sort the SCC so we can easily check for uniqueness.  */
       qsort (&sccstack[first], size, sizeof (scc_entry), scc_entry_compare);
 
       unsigned int classes = 1;
       int firstunique = -1;
 
-      /* Find tree with lowest unique hash (if it exists) and compute
-         number of equivalence classes.  */
+      /* Find the tree with lowest unique hash (if it exists) and compute
+         the number of equivalence classes.  */
       if (sccstack[first].hash != sccstack[first+1].hash)
        firstunique = 0;
       for (unsigned i = 1; i < size; ++i)
...
@@ -1424,7 +1421,7 @@ DFS::hash_scc (struct output_block *ob,
           firstunique = i;
        }
 
-      /* If we found tree with unique hash; stop the iteration.  */
+      /* If we found a tree with unique hash, stop the iteration.  */
       if (firstunique != -1
          /* Also terminate if we run out of iterations or if the number of
             equivalence classes is no longer increasing.
...
@@ -1436,13 +1433,13 @@ DFS::hash_scc (struct output_block *ob,
          hashval_t scc_hash;
 
          /* If some hashes are not unique (CLASSES != SIZE), use the DFS walk
-            starting from FIRSTUNIQUE to obstain stable order.  */
+            starting from FIRSTUNIQUE to obtain a stable order.  */
          if (classes != size && firstunique != -1)
            {
              hash_map <tree, hashval_t> map(size*2);
 
              /* Store hash values into a map, so we can associate them with
-                reordered SCC.  */
+                the reordered SCC.  */
              for (unsigned i = 0; i < size; ++i)
                map.put (sccstack[first+i].t, sccstack[first+i].hash);
...
@@ -1455,8 +1452,8 @@ DFS::hash_scc (struct output_block *ob,
              /* Update hash values of individual members by hashing in the
                 index within the stable order.  This ensures uniqueness.
-                Also compute the scc_hash by mixing in all hash values in the
-                stable order we obtained.  */
+                Also compute the SCC hash by mixing in all hash values in
+                the stable order we obtained.  */
              sccstack[first].hash = *map.get (sccstack[first].t);
              scc_hash = sccstack[first].hash;
              for (unsigned i = 1; i < size; ++i)
...
@@ -1464,31 +1461,33 @@ DFS::hash_scc (struct output_block *ob,
                  sccstack[first+i].hash
                    = iterative_hash_hashval_t (i,
                                                *map.get (sccstack[first+i].t));
-                 scc_hash = iterative_hash_hashval_t (scc_hash,
-                                                      sccstack[first+i].hash);
+                 scc_hash
+                   = iterative_hash_hashval_t (scc_hash, sccstack[first+i].hash);
                }
            }
-         /* If we got unique hash values for each tree, then sort already
-            ensured entry point independent order.  Only compute the final
-            scc hash.
+         /* If we got a unique hash value for each tree, then sort already
+            ensured entry-point independent order.  Only compute the final
+            SCC hash.
 
             If we failed to find the unique entry point, we go by the same
-            route. We will eventually introduce unwanted hash conflicts.  */
+            route.  We will eventually introduce unwanted hash conflicts.  */
         else
           {
             scc_hash = sccstack[first].hash;
             for (unsigned i = 1; i < size; ++i)
-              scc_hash = iterative_hash_hashval_t (scc_hash,
-                                                   sccstack[first+i].hash);
+              scc_hash
+                = iterative_hash_hashval_t (scc_hash, sccstack[first+i].hash);
 
-             /* We can not 100% guarantee that the hash will not conflict in
-                in a way so the unique hash is not found.  This however
-                should be extremely rare situation.  ICE for now so possible
-                issues are found and evaulated.  */
+             /* We cannot 100% guarantee that the hash won't conflict so as
+                to make it impossible to find a unique hash.  This however
+                should be an extremely rare case.  ICE for now so possible
+                issues are found and evaluated.  */
             gcc_checking_assert (classes == size);
           }
 
-         /* To avoid conflicts across SCCs iteratively hash the whole SCC
-            hash into the hash of each of the elements.  */
+         /* To avoid conflicts across SCCs, iteratively hash the whole SCC
+            hash into the hash of each element.  */
         for (unsigned i = 0; i < size; ++i)
           sccstack[first+i].hash
             = iterative_hash_hashval_t (sccstack[first+i].hash, scc_hash);
...
@@ -1500,15 +1499,14 @@ DFS::hash_scc (struct output_block *ob,
       /* We failed to identify the entry point; propagate hash values across
         the edges.  */
-      {
-       hash_map <tree, hashval_t> map(size*2);
-       for (unsigned i = 0; i < size; ++i)
-         map.put (sccstack[first+i].t, sccstack[first+i].hash);
-       for (unsigned i = 0; i < size; i++)
-         sccstack[first+i].hash = hash_tree (ob->writer_cache, &map,
-                                             sccstack[first+i].t);
-      }
+      hash_map <tree, hashval_t> map(size*2);
+      for (unsigned i = 0; i < size; ++i)
+       map.put (sccstack[first+i].t, sccstack[first+i].hash);
+      for (unsigned i = 0; i < size; i++)
+       sccstack[first+i].hash
+         = hash_tree (ob->writer_cache, &map, sccstack[first+i].t);
     }
   while (true);
 }
...
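For readers skimming the diff, the comments in DFS::hash_scc describe a combining scheme: hash every tree in the SCC, derive an entry-point-independent order (sorting the hashes, and mixing each element's position in that order into its hash when duplicates remain), fold all member hashes into one whole-SCC hash, and finally mix that SCC hash back into every member. The short standalone sketch below illustrates only that combining step under simplified assumptions; it is not the GCC code, it omits the duplicate-handling DFS walk and the iterative edge propagation, and mix_hash is a hypothetical stand-in for GCC's iterative_hash_hashval_t.

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    /* Hypothetical stand-in for GCC's iterative_hash_hashval_t:
       fold VAL into SEED and return the new hash.  */
    static uint32_t
    mix_hash (uint32_t val, uint32_t seed)
    {
      return seed ^ (val + 0x9e3779b9u + (seed << 6) + (seed >> 2));
    }

    /* Combine per-element hashes of one SCC in an entry-point-independent
       way, mirroring the strategy sketched in the hash_scc comments.  */
    static uint32_t
    combine_scc_hashes (std::vector<uint32_t> &hashes)
    {
      /* Step 1: sort to get a stable order that does not depend on which
         element of the SCC we happened to enter first.  */
      std::sort (hashes.begin (), hashes.end ());

      /* Step 2: make the individual hashes unique by mixing in the index
         within the stable order, accumulating the whole-SCC hash.  */
      uint32_t scc_hash = hashes[0];
      for (size_t i = 1; i < hashes.size (); ++i)
        {
          hashes[i] = mix_hash ((uint32_t) i, hashes[i]);
          scc_hash = mix_hash (scc_hash, hashes[i]);
        }

      /* Step 3: to avoid conflicts across SCCs, hash the whole-SCC hash
         back into the hash of each element.  */
      for (size_t i = 0; i < hashes.size (); ++i)
        hashes[i] = mix_hash (hashes[i], scc_hash);

      return scc_hash;
    }

    int
    main ()
    {
      std::vector<uint32_t> h = { 0xdeadbeefu, 0x12345678u, 0x12345678u };
      printf ("scc hash: %08x\n", combine_scc_hashes (h));
      return 0;
    }

Run as a plain C++ program, this prints a single combined hash and leaves each entry of h unique even though two inputs collide, which is the property the real code relies on when merging identical SCCs at WPA time.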