lvzhengyang / riscv-gcc-1 / Commits / eab5c70a

Commit eab5c70a authored Mar 14, 2000 by Bernd Schmidt, committed by Bernd Schmidt, Mar 14, 2000
Add cselib; use it in loop and reload_cse_regs
From-SVN: r32538
parent 4bdf01f2
Showing 10 changed files with 1630 additions and 753 deletions
gcc/ChangeLog       +64    -0
gcc/Makefile.in     +5     -4
gcc/alias.c         +66    -23
gcc/cselib.h        +66    -0
gcc/loop.c          +57    -5
gcc/reload1.c       +129   -721
gcc/rtl.def         +3     -0
gcc/rtl.h           +5     -0
gcc/simplify-rtx.c  +1230  -0
gcc/varray.h        +5     -0
gcc/ChangeLog  View file @ eab5c70a

2000-03-14  Bernd Schmidt  <bernds@cygnus.co.uk>

	* cselib.h: New file.
	* alias.c: Include "cselib.h".
	(fixed_scalar_and_varying_struct_p): Accept the addresses of the
	MEMs as two new arguments.
	(get_addr): New static function.
	(find_base_term): Handle VALUEs.
	(memrefs_conflict_p): Likewise.
	(true_dependence): Call get_addr on the addresses.
	Call fixed_scalar_and_varying_struct_p with addresses that have
	been passed through get_addr and canon_rtx.
	(write_dependence_p): Move DIFFERENT_ALIAS_SETS_P test for
	consistency with true_dependence.  Call get_addr on the
	addresses; don't call canon_rtx on the MEMs.
	* loop.c: Include "cselib.h".
	(load_mems): Process extended basic block that enters the loop
	with cselib.  Use that information to change initialization of
	the shadow register so that a constant equivalence is seen by
	later passes.
	* reload1.c: Include "cselib.h".
	(reload_cse_invalidate_regno): Delete function.
	(reload_cse_mem_conflict_p): Likewise.
	(reload_cse_invalidate_mem): Likewise.
	(reload_cse_invalidate_rtx): Likewise.
	(reload_cse_regno_equal_p): Likewise.
	(reload_cse_check_clobber): Likewise.
	(reload_cse_record_set): Likewise.
	(reg_values): Delete static variable.
	(invalidate_regno_rtx): Likewise.
	(reload_cse_delete_noop_set): New static function.
	(reload_cse_simplify): New static function, broken out of
	reload_cse_regs_1.
	(reload_cse_noop_set_p): Delete unused argument INSN.  Just call
	rtx_equal_for_cselib_p on set source and destination.
	(reload_cse_regs_1): Break out some code into reload_cse_simplify
	and reload_cse_delete_noop_set.  Delete code to keep track of
	values; use cselib functions instead.  Delete code to push/pop
	obstacks.
	(reload_cse_simplify_set): Use cselib to find equivalent values.
	Delete code to push/pop obstacks.
	(reload_cse_simplify_operands): Likewise.
	* rtl.def (VALUE): New rtx code.
	* rtl.h (union rtunion_def): New elt rt_cselib.
	(X0CSELIB, CSELIB_VAL_PTR): New macros.
	* simplify-rtx.c: Include "ggc.h", "obstack.h", "cselib.h".
	(new_elt_list, new_elt_loc_list, unchain_one_value, clear_table,
	unchain_one_elt_list, unchain_one_elt_loc_list,
	check_useless_values, discard_useless_locs,
	discard_useless_values, entry_and_rtx_equal_p, hash_rtx,
	new_cselib_val, add_mem_for_addr, get_value_hash,
	cselib_lookup_mem, cselib_subst_to_values,
	cselib_invalidate_regno, cselib_mem_conflict_p,
	cselib_invalidate_mem, cselib_invalidate_rtx, cselib_record_set,
	cselib_record_sets): New static functions.
	(cselib_lookup, cselib_update_varray_sizes, cselib_init,
	cselib_finish, cselib_process_insn, rtx_equal_for_cselib_p,
	references_value_p): New functions.
	(MAX_USELESS_VALUES, REG_VALUES): New macros.
	(table, cselib_current_insn, next_unknown_value, cselib_nregs,
	n_useless_values, reg_values, callmem, cselib_obstack,
	cselib_startobj, empty_vals, empty_elt_lists,
	empty_elt_loc_lists): New static variables.
	* varray.h (union varray_data_tag): New elt te.
	(VARRAY_ELT_LIST_INIT, VARRAY_ELT_LIST): New macros.
	* Makefile.in (reload1.o, loop.o, simplify-rtx.o, alias.o):
	Update dependencies.

2000-03-14  Nick Clifton  <nickc@cygnus.com>

	* gcc.c (do_spec_1): Catch the case where %* is used in a ...

...
gcc/Makefile.in  View file @ eab5c70a

...
@@ -1556,7 +1556,7 @@ jump.o : jump.c $(CONFIG_H) system.h $(RTL_H) flags.h hard-reg-set.h $(REGS_H) \
 simplify-rtx.o : simplify-rtx.c $(CONFIG_H) system.h $(RTL_H) $(REGS_H) \
    hard-reg-set.h flags.h real.h insn-config.h $(RECOG_H) $(EXPR_H) toplev.h \
-   output.h function.h
+   output.h function.h cselib.h ggc.h $(srcdir)/../include/obstack.h
 cse.o : cse.c $(CONFIG_H) system.h $(RTL_H) $(REGS_H) hard-reg-set.h flags.h \
    real.h insn-config.h $(RECOG_H) $(EXPR_H) toplev.h output.h function.h ggc.h
 gcse.o : gcse.c $(CONFIG_H) system.h $(RTL_H) $(REGS_H) hard-reg-set.h \
...
@@ -1574,7 +1574,7 @@ profile.o : profile.c $(CONFIG_H) system.h $(RTL_H) flags.h insn-flags.h \
    ggc.h
 loop.o : loop.c $(CONFIG_H) system.h $(RTL_H) flags.h $(LOOP_H) insn-config.h \
    insn-flags.h $(REGS_H) hard-reg-set.h $(RECOG_H) $(EXPR_H) real.h \
-   $(BASIC_BLOCK_H) function.h toplev.h varray.h except.h
+   $(BASIC_BLOCK_H) function.h toplev.h varray.h except.h cselib.h
 unroll.o : unroll.c $(CONFIG_H) system.h $(RTL_H) insn-config.h function.h \
    $(INTEGRATE_H) $(REGS_H) $(RECOG_H) flags.h $(EXPR_H) $(LOOP_H) toplev.h \
    varray.h
...
@@ -1600,7 +1600,7 @@ reload.o : reload.c $(CONFIG_H) system.h $(RTL_H) flags.h output.h $(EXPR_H) \
    function.h real.h toplev.h
 reload1.o : reload1.c $(CONFIG_H) system.h $(RTL_H) real.h flags.h $(EXPR_H) \
    reload.h $(REGS_H) hard-reg-set.h insn-config.h insn-flags.h insn-codes.h \
-   $(BASIC_BLOCK_H) $(RECOG_H) output.h function.h toplev.h
+   $(BASIC_BLOCK_H) $(RECOG_H) output.h function.h toplev.h cselib.h
 caller-save.o : caller-save.c $(CONFIG_H) system.h $(RTL_H) flags.h \
    $(REGS_H) hard-reg-set.h insn-config.h $(BASIC_BLOCK_H) function.h \
    $(RECOG_H) reload.h $(EXPR_H) toplev.h
...
@@ -1608,7 +1608,8 @@ reorg.o : reorg.c $(CONFIG_H) system.h $(RTL_H) conditions.h hard-reg-set.h \
    $(BASIC_BLOCK_H) $(REGS_H) insn-config.h insn-attr.h insn-flags.h \
    $(RECOG_H) function.h flags.h output.h $(EXPR_H) toplev.h
 alias.o : alias.c $(CONFIG_H) system.h $(RTL_H) flags.h hard-reg-set.h \
-   $(REGS_H) toplev.h output.h $(EXPR_H) insn-flags.h ggc.h function.h
+   $(REGS_H) toplev.h output.h $(EXPR_H) insn-flags.h ggc.h function.h \
+   cselib.h
 regmove.o : regmove.c $(CONFIG_H) system.h $(RTL_H) insn-config.h \
    $(RECOG_H) output.h reload.h $(REGS_H) hard-reg-set.h flags.h function.h \
    $(EXPR_H) insn-flags.h $(BASIC_BLOCK_H) toplev.h
...
gcc/alias.c  View file @ eab5c70a

...
@@ -32,6 +32,7 @@ Boston, MA 02111-1307, USA.  */
 #include "flags.h"
 #include "output.h"
 #include "toplev.h"
+#include "cselib.h"
 #include "splay-tree.h"
 #include "ggc.h"
...
@@ -81,6 +82,7 @@ typedef struct alias_set_entry
 static rtx canon_rtx			PARAMS ((rtx));
 static int rtx_equal_for_memref_p	PARAMS ((rtx, rtx));
 static rtx find_symbolic_term		PARAMS ((rtx));
+static rtx get_addr			PARAMS ((rtx));
 static int memrefs_conflict_p		PARAMS ((int, rtx, int, rtx,
						 HOST_WIDE_INT));
 static void record_set			PARAMS ((rtx, rtx, void *));
...
@@ -91,7 +93,8 @@ static rtx find_base_value PARAMS ((rtx));
 static int mems_in_disjoint_alias_sets_p PARAMS ((rtx, rtx));
 static int insert_subset_children	PARAMS ((splay_tree_node, void *));
 static alias_set_entry get_alias_set_entry PARAMS ((int));
-static rtx fixed_scalar_and_varying_struct_p PARAMS ((rtx, rtx, int (*)(rtx)));
+static rtx fixed_scalar_and_varying_struct_p PARAMS ((rtx, rtx, rtx, rtx,
						      int (*)(rtx)));
 static int aliases_everything_p		PARAMS ((rtx));
 static int write_dependence_p		PARAMS ((rtx, rtx, int));
 static int nonlocal_reference_p		PARAMS ((rtx));
...
@@ -737,6 +740,9 @@ static rtx
 find_base_term (x)
      register rtx x;
 {
+  cselib_val *val;
+  struct elt_loc_list *l;
+
   switch (GET_CODE (x))
     {
     case REG:
...
@@ -751,6 +757,13 @@ find_base_term (x)
     case POST_DEC:
       return find_base_term (XEXP (x, 0));

+    case VALUE:
+      val = CSELIB_VAL_PTR (x);
+      for (l = val->locs; l; l = l->next)
+	if ((x = find_base_term (l->loc)) != 0)
+	  return x;
+      return 0;
+
     case CONST:
       x = XEXP (x, 0);
       if (GET_CODE (x) != PLUS && GET_CODE (x) != MINUS)
...
@@ -905,6 +918,30 @@ base_alias_check (x, y, x_mode, y_mode)
   return ! (GET_MODE (x_base) == VOIDmode && GET_MODE (y_base) == VOIDmode);
 }

+/* Convert the address X into something we can use.  This is done by returning
+   it unchanged unless it is a value; in the latter case we call cselib to get
+   a more useful rtx.  */
+
+static rtx
+get_addr (x)
+     rtx x;
+{
+  cselib_val *v;
+  struct elt_loc_list *l;
+
+  if (GET_CODE (x) != VALUE)
+    return x;
+  v = CSELIB_VAL_PTR (x);
+  for (l = v->locs; l; l = l->next)
+    if (CONSTANT_P (l->loc))
+      return l->loc;
+  for (l = v->locs; l; l = l->next)
+    if (GET_CODE (l->loc) != REG && GET_CODE (l->loc) != MEM)
+      return l->loc;
+  if (v->locs)
+    return v->locs->loc;
+  return x;
+}
+
 /* Return the address of the (N_REFS + 1)th memory reference to ADDR
    where SIZE is the size in bytes of the memory reference.  If ADDR
    is not modified by the memory reference then ADDR is returned.  */
...
@@ -961,13 +998,16 @@ addr_side_effect_eval (addr, size, n_refs)
    Nice to notice that varying addresses cannot conflict with fp if no
    local variables had their addresses taken, but that's too hard now.  */

 static int
 memrefs_conflict_p (xsize, x, ysize, y, c)
      register rtx x, y;
      int xsize, ysize;
      HOST_WIDE_INT c;
 {
+  if (GET_CODE (x) == VALUE)
+    x = get_addr (x);
+  if (GET_CODE (y) == VALUE)
+    y = get_addr (y);
   if (GET_CODE (x) == HIGH)
     x = XEXP (x, 0);
   else if (GET_CODE (x) == LO_SUM)
...
@@ -1185,17 +1225,15 @@ read_dependence (mem, x)
    MEM2 if vice versa.  Otherwise, returns NULL_RTX.  If a non-NULL
    value is returned MEM1 and MEM2 can never alias.  VARIES_P is used
    to decide whether or not an address may vary; it should return
-   nonzero whenever variation is possible.  */
+   nonzero whenever variation is possible.
+   MEM1_ADDR and MEM2_ADDR are the addresses of MEM1 and MEM2.  */

 static rtx
-fixed_scalar_and_varying_struct_p (mem1, mem2, varies_p)
-     rtx mem1;
-     rtx mem2;
+fixed_scalar_and_varying_struct_p (mem1, mem2, mem1_addr, mem2_addr, varies_p)
+     rtx mem1, mem2;
+     rtx mem1_addr, mem2_addr;
      int (*varies_p) PARAMS ((rtx));
 {
-  rtx mem1_addr = XEXP (mem1, 0);
-  rtx mem2_addr = XEXP (mem2, 0);
-
   if (MEM_SCALAR_P (mem1) && MEM_IN_STRUCT_P (mem2)
       && !varies_p (mem1_addr) && varies_p (mem2_addr))
     /* MEM1 is a scalar at a fixed address; MEM2 is a struct at a
...
@@ -1260,11 +1298,14 @@ true_dependence (mem, mem_mode, x, varies)
   if (mem_mode == VOIDmode)
     mem_mode = GET_MODE (mem);

-  if (! base_alias_check (XEXP (x, 0), XEXP (mem, 0), GET_MODE (x), mem_mode))
+  x_addr = get_addr (XEXP (x, 0));
+  mem_addr = get_addr (XEXP (mem, 0));
+
+  if (! base_alias_check (x_addr, mem_addr, GET_MODE (x), mem_mode))
     return 0;

-  x_addr = canon_rtx (XEXP (x, 0));
-  mem_addr = canon_rtx (XEXP (mem, 0));
+  x_addr = canon_rtx (x_addr);
+  mem_addr = canon_rtx (mem_addr);

   if (! memrefs_conflict_p (GET_MODE_SIZE (mem_mode), mem_addr,
			     SIZE_FOR_MODE (x), x_addr, 0))
...
@@ -1283,7 +1324,8 @@ true_dependence (mem, mem_mode, x, varies)
   if (mem_mode == BLKmode || GET_MODE (x) == BLKmode)
     return 1;

-  return ! fixed_scalar_and_varying_struct_p (mem, x, varies);
+  return ! fixed_scalar_and_varying_struct_p (mem, x, mem_addr, x_addr,
+					      varies);
 }

 /* Returns non-zero if a write to X might alias a previous read from
...
@@ -1301,31 +1343,32 @@ write_dependence_p (mem, x, writep)
   if (MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
     return 1;

+  if (DIFFERENT_ALIAS_SETS_P (x, mem))
+    return 0;
+
   /* If MEM is an unchanging read, then it can't possibly conflict with
      the store to X, because there is at most one store to MEM, and it must
      have occurred somewhere before MEM.  */
   if (! writep && RTX_UNCHANGING_P (mem))
     return 0;

-  if (! base_alias_check (XEXP (x, 0), XEXP (mem, 0), GET_MODE (x),
-			  GET_MODE (mem)))
-    return 0;
-
-  x = canon_rtx (x);
-  mem = canon_rtx (mem);
-
-  if (DIFFERENT_ALIAS_SETS_P (x, mem))
+  x_addr = get_addr (XEXP (x, 0));
+  mem_addr = get_addr (XEXP (mem, 0));
+
+  if (! base_alias_check (x_addr, mem_addr, GET_MODE (x), GET_MODE (mem)))
     return 0;

-  x_addr = XEXP (x, 0);
-  mem_addr = XEXP (mem, 0);
+  x_addr = canon_rtx (x_addr);
+  mem_addr = canon_rtx (mem_addr);

   if (! memrefs_conflict_p (SIZE_FOR_MODE (mem), mem_addr,
			     SIZE_FOR_MODE (x), x_addr, 0))
     return 0;

   fixed_scalar
-    = fixed_scalar_and_varying_struct_p (mem, x, rtx_addr_varies_p);
+    = fixed_scalar_and_varying_struct_p (mem, x, mem_addr, x_addr,
+					 rtx_addr_varies_p);

   return (!(fixed_scalar == mem && !aliases_everything_p (x))
	   && !(fixed_scalar == x && !aliases_everything_p (mem)));
...
gcc/cselib.h  0 → 100644  View file @ eab5c70a

/* Common subexpression elimination for GNU compiler.
   Copyright (C) 1987, 88, 89, 92-7, 1998, 1999 Free Software Foundation, Inc.

This file is part of GNU CC.

GNU CC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.

GNU CC is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with GNU CC; see the file COPYING.  If not, write to
the Free Software Foundation, 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA.  */

/* Describe a value.  */
typedef struct cselib_val_struct
{
  /* The hash value.  */
  unsigned int value;
  union
  {
    /* A VALUE rtx that points back to this structure.  */
    rtx val_rtx;
    /* Used to keep a list of free cselib_val structures.  */
    struct cselib_val_struct *next_free;
  } u;

  /* All rtl expressions that hold this value at the current time during a
     scan.  */
  struct elt_loc_list *locs;
  /* If this value is used as an address, points to a list of values that
     use it as an address in a MEM.  */
  struct elt_list *addr_list;
} cselib_val;

/* A list of rtl expressions that hold the same value.  */
struct elt_loc_list
{
  /* Next element in the list.  */
  struct elt_loc_list *next;
  /* An rtl expression that holds the value.  */
  rtx loc;
  /* The insn that made the equivalence.  */
  rtx setting_insn;
};

/* A list of cselib_val structures.  */
struct elt_list
{
  struct elt_list *next;
  cselib_val *elt;
};

extern cselib_val *cselib_lookup	PARAMS ((rtx, enum machine_mode, int));
extern void cselib_update_varray_sizes	PARAMS ((void));
extern void cselib_init			PARAMS ((void));
extern void cselib_finish		PARAMS ((void));
extern void cselib_process_insn		PARAMS ((rtx));
extern int rtx_equal_for_cselib_p	PARAMS ((rtx, rtx));
extern int references_value_p		PARAMS ((rtx, int));
gcc/loop.c  View file @ eab5c70a

...
@@ -51,6 +51,7 @@ Boston, MA 02111-1307, USA.  */
 #include "flags.h"
 #include "real.h"
 #include "loop.h"
+#include "cselib.h"
 #include "except.h"
 #include "toplev.h"
...
@@ -9773,6 +9774,19 @@ load_mems (loop)
   if (loop_mems_idx == 0)
     return;

+  /* Find start of the extended basic block that enters the loop.  */
+  for (p = loop->start;
+       PREV_INSN (p) && GET_CODE (p) != CODE_LABEL;
+       p = PREV_INSN (p))
+    ;
+
+  cselib_init ();
+
+  /* Build table of mems that get set to constant values before the
+     loop.  */
+  for (; p != loop->start; p = NEXT_INSN (p))
+    cselib_process_insn (p);
+
   /* Check to see if it's possible that some instructions in the
      loop are never executed.  */
   for (p = next_insn_in_loop (loop, loop->scan_start);
...
@@ -9924,13 +9938,49 @@ load_mems (loop)
	loop_mems[i].optimize = 0;
       else
	{
-	  int j;
-	  rtx set;
-
	  /* Load the memory immediately before LOOP->START, which is
	     the NOTE_LOOP_BEG.  */
-	  set = gen_move_insn (reg, mem);
-	  emit_insn_before (set, loop->start);
+	  cselib_val *e = cselib_lookup (mem, VOIDmode, 0);
+	  rtx set;
+	  rtx best = mem;
+	  int j;
+	  struct elt_loc_list *const_equiv = 0;
+
+	  if (e)
+	    {
+	      struct elt_loc_list *equiv;
+	      struct elt_loc_list *best_equiv = 0;
+	      for (equiv = e->locs; equiv; equiv = equiv->next)
+		{
+		  if (CONSTANT_P (equiv->loc))
+		    const_equiv = equiv;
+		  else if (GET_CODE (equiv->loc) == REG)
+		    best_equiv = equiv;
+		}
+	      /* Use the constant equivalence if that is cheap enough.  */
+	      if (! best_equiv)
+		best_equiv = const_equiv;
+	      else if (const_equiv
+		       && (rtx_cost (const_equiv->loc, SET)
+			   <= rtx_cost (best_equiv->loc, SET)))
+		{
+		  best_equiv = const_equiv;
+		  const_equiv = 0;
+		}
+
+	      /* If best_equiv is nonzero, we know that MEM is set to a
+		 constant or register before the loop.  We will use this
+		 knowledge to initialize the shadow register with that
+		 constant or reg rather than by loading from MEM.  */
+	      if (best_equiv)
+		best = copy_rtx (best_equiv->loc);
+	    }
+	  set = gen_move_insn (reg, best);
+	  set = emit_insn_before (set, loop->start);
+	  if (const_equiv)
+	    REG_NOTES (set) = gen_rtx_EXPR_LIST (REG_EQUAL,
+						 copy_rtx (const_equiv->loc),
+						 REG_NOTES (set));

	  if (written)
	    {
...
@@ -9992,6 +10042,8 @@ load_mems (loop)
	      JUMP_LABEL (p) = label;
	    }
	}
+
+  cselib_finish ();
 }

 /* For communication between note_reg_stored and its caller.  */
...
gcc/reload1.c
View file @
eab5c70a
...
@@ -39,6 +39,7 @@ Boston, MA 02111-1307, USA. */
...
@@ -39,6 +39,7 @@ Boston, MA 02111-1307, USA. */
#include "reload.h"
#include "reload.h"
#include "recog.h"
#include "recog.h"
#include "output.h"
#include "output.h"
#include "cselib.h"
#include "real.h"
#include "real.h"
#include "toplev.h"
#include "toplev.h"
...
@@ -430,16 +431,9 @@ static void delete_address_reloads_1 PARAMS ((rtx, rtx, rtx));
...
@@ -430,16 +431,9 @@ static void delete_address_reloads_1 PARAMS ((rtx, rtx, rtx));
static
rtx
inc_for_reload
PARAMS
((
rtx
,
rtx
,
rtx
,
int
));
static
rtx
inc_for_reload
PARAMS
((
rtx
,
rtx
,
rtx
,
int
));
static
int
constraint_accepts_reg_p
PARAMS
((
const
char
*
,
rtx
));
static
int
constraint_accepts_reg_p
PARAMS
((
const
char
*
,
rtx
));
static
void
reload_cse_regs_1
PARAMS
((
rtx
));
static
void
reload_cse_regs_1
PARAMS
((
rtx
));
static
void
reload_cse_invalidate_regno
PARAMS
((
int
,
enum
machine_mode
,
int
));
static
int
reload_cse_noop_set_p
PARAMS
((
rtx
));
static
int
reload_cse_mem_conflict_p
PARAMS
((
rtx
,
rtx
));
static
void
reload_cse_invalidate_mem
PARAMS
((
rtx
));
static
void
reload_cse_invalidate_rtx
PARAMS
((
rtx
,
rtx
,
void
*
));
static
int
reload_cse_regno_equal_p
PARAMS
((
int
,
rtx
,
enum
machine_mode
));
static
int
reload_cse_noop_set_p
PARAMS
((
rtx
,
rtx
));
static
int
reload_cse_simplify_set
PARAMS
((
rtx
,
rtx
));
static
int
reload_cse_simplify_set
PARAMS
((
rtx
,
rtx
));
static
int
reload_cse_simplify_operands
PARAMS
((
rtx
));
static
int
reload_cse_simplify_operands
PARAMS
((
rtx
));
static
void
reload_cse_check_clobber
PARAMS
((
rtx
,
rtx
,
void
*
));
static
void
reload_cse_record_set
PARAMS
((
rtx
,
rtx
));
static
void
reload_combine
PARAMS
((
void
));
static
void
reload_combine
PARAMS
((
void
));
static
void
reload_combine_note_use
PARAMS
((
rtx
*
,
rtx
));
static
void
reload_combine_note_use
PARAMS
((
rtx
*
,
rtx
));
static
void
reload_combine_note_store
PARAMS
((
rtx
,
rtx
,
void
*
));
static
void
reload_combine_note_store
PARAMS
((
rtx
,
rtx
,
void
*
));
...
@@ -7842,354 +7836,57 @@ count_occurrences (x, find)
...
@@ -7842,354 +7836,57 @@ count_occurrences (x, find)
return
count
;
return
count
;
}
}
/* This array holds values which are equivalent to a hard register
/* INSN is a no-op; delete it.
during reload_cse_regs. Each array element is an EXPR_LIST of
If this sets the return value of the function, we must keep a USE around,
values. Each time a hard register is set, we set the corresponding
in case this is in a different basic block than the final USE. Otherwise,
array element to the value. Each time a hard register is copied
we could loose important register lifeness information on
into memory, we add the memory location to the corresponding array
SMALL_REGISTER_CLASSES machines, where return registers might be used as
element. We don't store values or memory addresses with side
spills: subsequent passes assume that spill registers are dead at the end
effects in this array.
of a basic block.
VALUE must be the return value in such a case, NULL otherwise. */
If the value is a CONST_INT, then the mode of the containing
EXPR_LIST is the mode in which that CONST_INT was referenced.
We sometimes clobber a specific entry in a list. In that case, we
just set XEXP (list-entry, 0) to 0. */
static
rtx
*
reg_values
;
/* This is a preallocated REG rtx which we use as a temporary in
reload_cse_invalidate_regno, so that we don't need to allocate a
new one each time through a loop in that function. */
static
rtx
invalidate_regno_rtx
;
/* Invalidate any entries in reg_values which depend on REGNO,
including those for REGNO itself. This is called if REGNO is
changing. If CLOBBER is true, then always forget anything we
currently know about REGNO. MODE is the mode of the assignment to
REGNO, which is used to determine how many hard registers are being
changed. If MODE is VOIDmode, then only REGNO is being changed;
this is used when invalidating call clobbered registers across a
call. */
static
void
static
void
reload_cse_invalidate_regno
(
regno
,
mode
,
clobber
)
reload_cse_delete_noop_set
(
insn
,
value
)
int
regno
;
rtx
insn
,
value
;
enum
machine_mode
mode
;
int
clobber
;
{
{
int
endregno
;
if
(
value
)
register
int
i
;
/* Our callers don't always go through true_regnum; we may see a
pseudo-register here from a CLOBBER or the like. We probably
won't ever see a pseudo-register that has a real register number,
for we check anyhow for safety. */
if
(
regno
>=
FIRST_PSEUDO_REGISTER
)
regno
=
reg_renumber
[
regno
];
if
(
regno
<
0
)
return
;
if
(
mode
==
VOIDmode
)
endregno
=
regno
+
1
;
else
endregno
=
regno
+
HARD_REGNO_NREGS
(
regno
,
mode
);
if
(
clobber
)
for
(
i
=
regno
;
i
<
endregno
;
i
++
)
reg_values
[
i
]
=
0
;
for
(
i
=
0
;
i
<
FIRST_PSEUDO_REGISTER
;
i
++
)
{
rtx
x
;
for
(
x
=
reg_values
[
i
];
x
;
x
=
XEXP
(
x
,
1
))
{
if
(
XEXP
(
x
,
0
)
!=
0
&&
refers_to_regno_p
(
regno
,
endregno
,
XEXP
(
x
,
0
),
NULL_PTR
))
{
/* If this is the only entry on the list, clear
reg_values[i]. Otherwise, just clear this entry on
the list. */
if
(
XEXP
(
x
,
1
)
==
0
&&
x
==
reg_values
[
i
])
{
{
reg_values
[
i
]
=
0
;
PATTERN
(
insn
)
=
gen_rtx_USE
(
VOIDmode
,
value
);
break
;
INSN_CODE
(
insn
)
=
-
1
;
}
REG_NOTES
(
insn
)
=
NULL_RTX
;
XEXP
(
x
,
0
)
=
0
;
}
}
}
}
else
/* We must look at earlier registers, in case REGNO is part of a
multi word value but is not the first register. If an earlier
register has a value in a mode which overlaps REGNO, then we must
invalidate that earlier register. Note that we do not need to
check REGNO or later registers (we must not check REGNO itself,
because we would incorrectly conclude that there was a conflict). */
for
(
i
=
0
;
i
<
regno
;
i
++
)
{
rtx
x
;
for
(
x
=
reg_values
[
i
];
x
;
x
=
XEXP
(
x
,
1
))
{
if
(
XEXP
(
x
,
0
)
!=
0
)
{
PUT_MODE
(
invalidate_regno_rtx
,
GET_MODE
(
x
));
REGNO
(
invalidate_regno_rtx
)
=
i
;
if
(
refers_to_regno_p
(
regno
,
endregno
,
invalidate_regno_rtx
,
NULL_PTR
))
{
{
reload_cse_invalidate_regno
(
i
,
VOIDmode
,
1
);
PUT_CODE
(
insn
,
NOTE
);
break
;
NOTE_LINE_NUMBER
(
insn
)
=
NOTE_INSN_DELETED
;
}
NOTE_SOURCE_FILE
(
insn
)
=
0
;
}
}
}
}
}
}
/* See whether a single set SET is a noop.  */
static int
reload_cse_noop_set_p (set)
     rtx set;
{
  return rtx_equal_for_cselib_p (SET_DEST (set), SET_SRC (set));
}

/* The memory at address MEM_BASE is being changed.
   Return whether this change will invalidate VAL.  */
static int
reload_cse_mem_conflict_p (mem_base, val)
     rtx mem_base;
     rtx val;
{
  enum rtx_code code;
  const char *fmt;
  int i;

  code = GET_CODE (val);
  switch (code)
    {
      /* Get rid of a few simple cases quickly.  */
    case REG:
    case PC:
    case CC0:
    case SCRATCH:
    case CONST:
    case CONST_INT:
    case CONST_DOUBLE:
    case SYMBOL_REF:
    case LABEL_REF:
      return 0;

    case MEM:
      if (GET_MODE (mem_base) == BLKmode
	  || GET_MODE (val) == BLKmode)
	return 1;
      if (anti_dependence (val, mem_base))
	return 1;
      /* The address may contain nested MEMs.  */
      break;

    default:
      break;
    }

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      if (fmt[i] == 'e')
	{
	  if (reload_cse_mem_conflict_p (mem_base, XEXP (val, i)))
	    return 1;
	}
      else if (fmt[i] == 'E')
	{
	  int j;

	  for (j = 0; j < XVECLEN (val, i); j++)
	    if (reload_cse_mem_conflict_p (mem_base, XVECEXP (val, i, j)))
	      return 1;
	}
    }

  return 0;
}

/* Invalidate any entries in reg_values which are changed because of a
   store to MEM_RTX.  If this is called because of a non-const call
   instruction, MEM_RTX is (mem:BLK const0_rtx).  */
static void
reload_cse_invalidate_mem (mem_rtx)
     rtx mem_rtx;
{
  register int i;

  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
    {
      rtx x;

      for (x = reg_values[i]; x; x = XEXP (x, 1))
	{
	  if (XEXP (x, 0) != 0
	      && reload_cse_mem_conflict_p (mem_rtx, XEXP (x, 0)))
	    {
	      /* If this is the only entry on the list, clear
		 reg_values[i].  Otherwise, just clear this entry on
		 the list.  */
	      if (XEXP (x, 1) == 0 && x == reg_values[i])
		{
		  reg_values[i] = 0;
		  break;
		}
	      XEXP (x, 0) = 0;
	    }
	}
    }
}

/* Invalidate DEST, which is being assigned to or clobbered.  The
   second parameter exists so that this function can be passed to
   note_stores; it is ignored.  */
static void
reload_cse_invalidate_rtx (dest, ignore, data)
     rtx dest;
     rtx ignore ATTRIBUTE_UNUSED;
     void *data ATTRIBUTE_UNUSED;
{
  while (GET_CODE (dest) == STRICT_LOW_PART
	 || GET_CODE (dest) == SIGN_EXTRACT
	 || GET_CODE (dest) == ZERO_EXTRACT
	 || GET_CODE (dest) == SUBREG)
    dest = XEXP (dest, 0);

  if (GET_CODE (dest) == REG)
    reload_cse_invalidate_regno (REGNO (dest), GET_MODE (dest), 1);
  else if (GET_CODE (dest) == MEM)
    reload_cse_invalidate_mem (dest);
}
/* Do a very simple CSE pass over the hard registers.

   This function detects no-op moves where we happened to assign two
   different pseudo-registers to the same hard register, and then
   copied one to the other.  Reload will generate a useless
   instruction copying a register to itself.

   This function also detects cases where we load a value from memory
   into two different registers, and (if memory is more expensive than
   registers) changes it to simply copy the first register into the
   second register.

   Another optimization is performed that scans the operands of each
   instruction to see whether the value is already available in a
   hard register.  It then replaces the operand with the hard register
   if possible, much like an optional reload would.  */

static void
reload_cse_regs_1 (first)
     rtx first;
{
  char *firstobj;
  rtx callmem;
  register int i;
  rtx insn;

  init_alias_analysis ();

  reg_values = (rtx *) alloca (FIRST_PSEUDO_REGISTER * sizeof (rtx));
  bzero ((char *) reg_values, FIRST_PSEUDO_REGISTER * sizeof (rtx));

  /* Create our EXPR_LIST structures on reload_obstack, so that we can
     free them when we are done.  */
  push_obstacks (&reload_obstack, &reload_obstack);
  firstobj = (char *) obstack_alloc (&reload_obstack, 0);

  /* We pass this to reload_cse_invalidate_mem to invalidate all of
     memory for a non-const call instruction.  */
  callmem = gen_rtx_MEM (BLKmode, const0_rtx);

  /* This is used in reload_cse_invalidate_regno to avoid consing a
     new REG in a loop in that function.  */
  invalidate_regno_rtx = gen_rtx_REG (VOIDmode, 0);

  for (insn = first; insn; insn = NEXT_INSN (insn))
    {
      rtx body;

      if (GET_CODE (insn) == CODE_LABEL)
	{
	  /* Forget all the register values at a code label.  We don't
	     try to do anything clever around jumps.  */
	  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
	    reg_values[i] = 0;
	  continue;
	}

#ifdef NON_SAVING_SETJMP
      if (NON_SAVING_SETJMP && GET_CODE (insn) == NOTE
	  && NOTE_LINE_NUMBER (insn) == NOTE_INSN_SETJMP)
	{
	  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
	    reg_values[i] = 0;
	  continue;
	}
#endif

      if (GET_RTX_CLASS (GET_CODE (insn)) != 'i')
	continue;

      /* If this is a call instruction, forget anything stored in a
	 call clobbered register, or, if this is not a const call, in
	 memory.  */
      if (GET_CODE (insn) == CALL_INSN)
	{
	  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
	    if (call_used_regs[i])
	      reload_cse_invalidate_regno (i, VOIDmode, 1);

	  if (! CONST_CALL_P (insn))
	    reload_cse_invalidate_mem (callmem);
	}

      /* Forget all the register values at a volatile asm.  */
      if (GET_CODE (insn) == INSN
	  && GET_CODE (PATTERN (insn)) == ASM_OPERANDS
	  && MEM_VOLATILE_P (PATTERN (insn)))
	for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
	  reg_values[i] = 0;

      body = PATTERN (insn);
      if (GET_CODE (body) == SET)
	{
	  int count = 0;

	  if (reload_cse_noop_set_p (body, insn))
	    {
	      /* If this sets the return value of the function, we must keep
		 a USE around, in case this is in a different basic block
		 than the final USE.  Otherwise, we could lose important
		 register liveness information on SMALL_REGISTER_CLASSES
		 machines, where return registers might be used as spills:
		 subsequent passes assume that spill registers are dead at
		 the end of a basic block.  */
	      if (REG_FUNCTION_VALUE_P (SET_DEST (body)))
		{
		  pop_obstacks ();
		  PATTERN (insn) = gen_rtx_USE (VOIDmode, SET_DEST (body));
		  INSN_CODE (insn) = -1;
		  REG_NOTES (insn) = NULL_RTX;
		  push_obstacks (&reload_obstack, &reload_obstack);
		}
	      else
		{
		  PUT_CODE (insn, NOTE);
		  NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
		  NOTE_SOURCE_FILE (insn) = 0;
		}

	      /* We're done with this insn.  */
	      continue;
	    }

	  /* It's not a no-op, but we can try to simplify it.  */
	  ...
@@ -8199,11 +7896,10 @@ reload_cse_regs_1 (first)
	    apply_change_group ();
	  else
	    reload_cse_simplify_operands (insn);

	  reload_cse_record_set (body, body);
	}
      else if (GET_CODE (body) == PARALLEL)
	{
	  int count = 0;
	  rtx value = NULL_RTX;
	  ...
@@ -8214,7 +7910,7 @@ reload_cse_regs_1 (first)
	      rtx part = XVECEXP (body, 0, i);

	      if (GET_CODE (part) == SET)
		{
		  if (! reload_cse_noop_set_p (part, insn))
		    break;
		  if (REG_FUNCTION_VALUE_P (SET_DEST (part)))
		    {
		      ...
@@ -8226,25 +7922,12 @@ reload_cse_regs_1 (first)
	      else if (GET_CODE (part) != CLOBBER)
		break;
	    }

	  if (i < 0)
	    {
	      if (value)
		{
		  pop_obstacks ();
		  PATTERN (insn) = gen_rtx_USE (VOIDmode, value);
		  INSN_CODE (insn) = -1;
		  REG_NOTES (insn) = NULL_RTX;
		  push_obstacks (&reload_obstack, &reload_obstack);
		}
	      else
		{
		  PUT_CODE (insn, NOTE);
		  NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
		  NOTE_SOURCE_FILE (insn) = 0;
		}

	      /* We're done with this insn.  */
	      continue;
	    }

	  /* It's not a no-op, but we can try to simplify it.  */
	  ...
@@ -8256,55 +7939,46 @@ reload_cse_regs_1 (first)
	    apply_change_group ();
	  else
	    reload_cse_simplify_operands (insn);

	  /* Look through the PARALLEL and record the values being
	     set, if possible.  Also handle any CLOBBERs.  */
	  for (i = XVECLEN (body, 0) - 1; i >= 0; --i)
	    {
	      rtx x = XVECEXP (body, 0, i);

	      if (GET_CODE (x) == SET)
		reload_cse_record_set (x, body);
	      else
		note_stores (x, reload_cse_invalidate_rtx, NULL);
	    }
	}
      else
	note_stores (body, reload_cse_invalidate_rtx, NULL);

#ifdef AUTO_INC_DEC
      /* Clobber any registers which appear in REG_INC notes.  We
	 could keep track of the changes to their values, but it is
	 unlikely to help.  */
      {
	rtx x;

	for (x = REG_NOTES (insn); x; x = XEXP (x, 1))
	  if (REG_NOTE_KIND (x) == REG_INC)
	    reload_cse_invalidate_rtx (XEXP (x, 0), NULL_RTX, NULL);
      }
#endif

      /* Look for any CLOBBERs in CALL_INSN_FUNCTION_USAGE, but only
	 after we have processed the insn.  */
      if (GET_CODE (insn) == CALL_INSN)
	{
	  rtx x;

	  for (x = CALL_INSN_FUNCTION_USAGE (insn); x; x = XEXP (x, 1))
	    if (GET_CODE (XEXP (x, 0)) == CLOBBER)
	      reload_cse_invalidate_rtx (XEXP (XEXP (x, 0), 0), NULL_RTX,
					 NULL);
	}
    }

  /* Clean up.  */
  end_alias_analysis ();

  /* Free all the temporary structures we created, and go back to the
     regular obstacks.  */
  obstack_free (&reload_obstack, firstobj);
  pop_obstacks ();
}

/* Try to simplify INSN.  */
static void
reload_cse_simplify (insn)
     rtx insn;
{
  rtx body = PATTERN (insn);

  if (GET_CODE (body) == SET)
    {
      int count = 0;

      if (reload_cse_noop_set_p (body))
	{
	  rtx value = SET_DEST (body);
	  if (! REG_FUNCTION_VALUE_P (SET_DEST (body)))
	    value = 0;
	  reload_cse_delete_noop_set (insn, value);
	  return;
	}

      /* It's not a no-op, but we can try to simplify it.  */
      ...
	apply_change_group ();
      else
	reload_cse_simplify_operands (insn);
    }
  else if (GET_CODE (body) == PARALLEL)
    {
      int i;
      int count = 0;
      rtx value = NULL_RTX;
      ...
	  rtx part = XVECEXP (body, 0, i);

	  if (GET_CODE (part) == SET)
	    {
	      if (! reload_cse_noop_set_p (part))
		break;
	      if (REG_FUNCTION_VALUE_P (SET_DEST (part)))
		{
		  ...
	  else if (GET_CODE (part) != CLOBBER)
	    break;
	}

      if (i < 0)
	{
	  reload_cse_delete_noop_set (insn, value);
	  /* We're done with this insn.  */
	  return;
	}

      /* It's not a no-op, but we can try to simplify it.  */
      ...
	apply_change_group ();
      else
	reload_cse_simplify_operands (insn);
    }
}

/* Do a very simple CSE pass over the hard registers.

   This function detects no-op moves where we happened to assign two
   different pseudo-registers to the same hard register, and then
   copied one to the other.  Reload will generate a useless
   instruction copying a register to itself.

   This function also detects cases where we load a value from memory
   into two different registers, and (if memory is more expensive than
   registers) changes it to simply copy the first register into the
   second register.

   Another optimization is performed that scans the operands of each
   instruction to see whether the value is already available in a
   hard register.  It then replaces the operand with the hard register
   if possible, much like an optional reload would.  */

static void
reload_cse_regs_1 (first)
     rtx first;
{
  rtx insn;

  cselib_init ();
  init_alias_analysis ();

  for (insn = first; insn; insn = NEXT_INSN (insn))
    {
      if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
	reload_cse_simplify (insn);

      cselib_process_insn (insn);
    }

  /* Clean up.  */
  end_alias_analysis ();
  cselib_finish ();
}

/* Call cse / combine like post-reload optimization phases.
   ...
@@ -8320,127 +7994,6 @@ reload_cse_regs (first)
  reload_cse_regs_1 (first);
}
/* Return whether the values known for REGNO are equal to VAL.  MODE
   is the mode of the object that VAL is being copied to; this matters
   if VAL is a CONST_INT.  */

static int
reload_cse_regno_equal_p (regno, val, mode)
     int regno;
     rtx val;
     enum machine_mode mode;
{
  rtx x;

  if (val == 0)
    return 0;

  for (x = reg_values[regno]; x; x = XEXP (x, 1))
    if (XEXP (x, 0) != 0
	&& rtx_equal_p (XEXP (x, 0), val)
	&& (! flag_float_store
	    || GET_CODE (XEXP (x, 0)) != MEM
	    || GET_MODE_CLASS (GET_MODE (x)) != MODE_FLOAT)
	&& (GET_CODE (val) != CONST_INT
	    || mode == GET_MODE (x)
	    || (GET_MODE_SIZE (mode) < GET_MODE_SIZE (GET_MODE (x))
		/* On a big endian machine if the value spans more than
		   one register then this register holds the high part of
		   it and we can't use it.

		   ??? We should also compare with the high part of the
		   value.  */
		&& ! (WORDS_BIG_ENDIAN
		      && HARD_REGNO_NREGS (regno, GET_MODE (x)) > 1)
		&& TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (mode),
					  GET_MODE_BITSIZE (GET_MODE (x))))))
      return 1;

  return 0;
}

/* See whether a single set is a noop.  SET is the set instruction we
   should check, and INSN is the instruction from which it came.  */

static int
reload_cse_noop_set_p (set, insn)
     rtx set;
     rtx insn ATTRIBUTE_UNUSED;
{
  rtx src, dest;
  enum machine_mode dest_mode;
  int dreg, sreg;
  int ret;

  src = SET_SRC (set);
  dest = SET_DEST (set);
  dest_mode = GET_MODE (dest);

  if (side_effects_p (src))
    return 0;

  dreg = true_regnum (dest);
  sreg = true_regnum (src);

  /* Check for setting a register to itself.  In this case, we don't
     have to worry about REG_DEAD notes.  */
  if (dreg >= 0 && dreg == sreg)
    return 1;

  ret = 0;
  if (dreg >= 0)
    {
      /* Check for setting a register to itself.  */
      if (dreg == sreg)
	ret = 1;

      /* Check for setting a register to a value which we already know
	 is in the register.  */
      else if (reload_cse_regno_equal_p (dreg, src, dest_mode))
	ret = 1;

      /* Check for setting a register DREG to another register SREG
	 where SREG is equal to a value which is already in DREG.  */
      else if (sreg >= 0)
	{
	  rtx x;

	  for (x = reg_values[sreg]; x; x = XEXP (x, 1))
	    {
	      rtx tmp;

	      if (XEXP (x, 0) == 0)
		continue;

	      if (dest_mode == GET_MODE (x))
		tmp = XEXP (x, 0);
	      else if (GET_MODE_BITSIZE (dest_mode)
		       < GET_MODE_BITSIZE (GET_MODE (x)))
		tmp = gen_lowpart_common (dest_mode, XEXP (x, 0));
	      else
		continue;

	      if (tmp && reload_cse_regno_equal_p (dreg, tmp, dest_mode))
		{
		  ret = 1;
		  break;
		}
	    }
	}
    }
  else if (GET_CODE (dest) == MEM)
    {
      /* Check for storing a register to memory when we know that the
	 register is equivalent to the memory location.  */
      if (sreg >= 0
	  && reload_cse_regno_equal_p (sreg, dest, dest_mode)
	  && ! side_effects_p (dest))
	ret = 1;
    }

  return ret;
}
/* Try to simplify a single SET instruction.  SET is the set pattern.
   INSN is the instruction it came from.
   This function only handles one case: if we set a register to a value
   ...
@@ -8452,11 +8005,13 @@ reload_cse_simplify_set (set, insn)
     rtx set;
     rtx insn;
{
  int dreg;
  rtx src;
  enum machine_mode dest_mode;
  enum reg_class dclass;
  register int i;

  dreg = true_regnum (SET_DEST (set));
  if (dreg < 0)
  ...
@@ -8469,39 +8024,40 @@ reload_cse_simplify_set (set, insn)
  dclass = REGNO_REG_CLASS (dreg);

  /* If memory loads are cheaper than register copies, don't change them.  */
  if (GET_CODE (src) == MEM
      && MEMORY_MOVE_COST (GET_MODE (src), dclass, 1) < 2)
    return 0;

  /* If the constant is cheaper than a register, don't change it.  */
  if (CONSTANT_P (src)
      && rtx_cost (src, SET) < 2)
    return 0;

  dest_mode = GET_MODE (SET_DEST (set));
  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
    {
      if (i != dreg
	  && REGISTER_MOVE_COST (REGNO_REG_CLASS (i), dclass) == 2
	  && reload_cse_regno_equal_p (i, src, dest_mode))
	{
	  int validated;

	  /* Pop back to the real obstacks while changing the insn.  */
	  pop_obstacks ();

	  validated = validate_change (insn, &SET_SRC (set),
				       gen_rtx_REG (dest_mode, i), 1);

	  /* Go back to the obstack we are using for temporary
	     storage.  */
	  push_obstacks (&reload_obstack, &reload_obstack);

	  if (validated)
	    return 1;
	}
    }
  return 0;
}
{
  int did_change = 0;
  int dreg;
  rtx src;
  enum reg_class dclass;
  int old_cost;
  cselib_val *val;
  struct elt_loc_list *l;

  dreg = true_regnum (SET_DEST (set));
  if (dreg < 0)
  ...
  dclass = REGNO_REG_CLASS (dreg);

  /* If memory loads are cheaper than register copies, don't change them.  */
  if (GET_CODE (src) == MEM)
    old_cost = MEMORY_MOVE_COST (GET_MODE (src), dclass, 1);
  else if (CONSTANT_P (src))
    old_cost = rtx_cost (src, SET);
  else if (GET_CODE (src) == REG)
    old_cost = REGISTER_MOVE_COST (REGNO_REG_CLASS (REGNO (src)), dclass);
  else
    /* ???   */
    old_cost = rtx_cost (src, SET);

  val = cselib_lookup (src, VOIDmode, 0);
  if (! val)
    return 0;

  for (l = val->locs; l; l = l->next)
    {
      int this_cost;

      if (CONSTANT_P (l->loc) && ! references_value_p (l->loc, 0))
	this_cost = rtx_cost (l->loc, SET);
      else if (GET_CODE (l->loc) == REG)
	this_cost = REGISTER_MOVE_COST (REGNO_REG_CLASS (REGNO (l->loc)),
					dclass);
      else
	continue;

      /* If equal costs, prefer registers over anything else.  That tends to
	 lead to smaller instructions on some machines.  */
      if ((this_cost < old_cost
	   || (this_cost == old_cost
	       && GET_CODE (l->loc) == REG
	       && GET_CODE (SET_SRC (set)) != REG))
	  && validate_change (insn, &SET_SRC (set), copy_rtx (l->loc), 1))
	old_cost = this_cost, did_change = 1;
    }

  return did_change;
}
/* Try to replace operands in INSN with equivalent values that are already
   ...
@@ -8521,6 +8077,9 @@ reload_cse_simplify_operands (insn)
{
  int i, j;

  /* For each operand, all registers that are equivalent to it.  */
  HARD_REG_SET equiv_regs[MAX_RECOG_OPERANDS];

  const char *constraints[MAX_RECOG_OPERANDS];

  /* Vector recording how bad an alternative is.  */
  ...
@@ -8551,6 +8110,28 @@ reload_cse_simplify_operands (insn)
  bzero ((char *) alternative_reject,
	 recog_data.n_alternatives * sizeof (int));
  bzero ((char *) alternative_nregs,
	 recog_data.n_alternatives * sizeof (int));

  /* For each operand, find out which regs are equivalent.  */
  for (i = 0; i < recog_data.n_operands; i++)
    {
      cselib_val *v;
      struct elt_loc_list *l;

      CLEAR_HARD_REG_SET (equiv_regs[i]);

      /* cselib blows up on CODE_LABELs.  Trying to fix that doesn't seem
	 right, so avoid the problem here.  */
      if (GET_CODE (recog_data.operand[i]) == CODE_LABEL)
	continue;

      v = cselib_lookup (recog_data.operand[i], recog_data.operand_mode[i], 0);
      if (! v)
	continue;

      for (l = v->locs; l; l = l->next)
	if (GET_CODE (l->loc) == REG)
	  SET_HARD_REG_BIT (equiv_regs[i], REGNO (l->loc));
    }

  for (i = 0; i < recog_data.n_operands; i++)
    {
      enum machine_mode mode;
      ...
@@ -8590,7 +8171,7 @@ reload_cse_simplify_operands (insn)
	{
	  int class = (int) NO_REGS;

	  if (! reload_cse_regno_equal_p (regno, recog_data.operand[i], mode))
	    continue;
	  if (! TEST_HARD_REG_BIT (equiv_regs[i], regno))
	    continue;

	  REGNO (reg) = regno;
	  ...
@@ -8696,9 +8277,6 @@ reload_cse_simplify_operands (insn)
     alternative.  */
  j = alternative_order[0];

  /* Pop back to the real obstacks while changing the insn.  */
  pop_obstacks ();

  for (i = 0; i < recog_data.n_operands; i++)
    {
      enum machine_mode mode = recog_data.operand_mode[i];
      ...
@@ -8721,179 +8299,9 @@ reload_cse_simplify_operands (insn)
			       gen_rtx_REG (mode, op_alt_regno[op][j]), 1);
    }

  /* Go back to the obstack we are using for temporary
     storage.  */
  push_obstacks (&reload_obstack, &reload_obstack);

  return apply_change_group ();
}
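The equiv_regs[] precomputation above reduces each per-register test to a single bit lookup: the registers among a value's known locations are gathered once into a set, then membership is tested per candidate. A minimal model of that pattern, using a plain integer bit set in place of HARD_REG_SET (all names here are illustrative, not GCC's):

```c
#include <assert.h>
#include <stdint.h>

/* Tiny stand-in for HARD_REG_SET: one bit per hard register.  */
typedef uint32_t hard_reg_set;

/* Stand-in for an elt_loc_list entry: a location that may be a register.  */
struct loc { int is_reg; int regno; struct loc *next; };

/* Collect the registers among a value's known locations into a bit set,
   mirroring the equiv_regs[] loop in reload_cse_simplify_operands.  */
static hard_reg_set
collect_equiv_regs (const struct loc *locs)
{
  hard_reg_set set = 0;
  for (const struct loc *l = locs; l; l = l->next)
    if (l->is_reg)
      set |= (hard_reg_set) 1 << l->regno;
  return set;
}

static int
test_reg_bit (hard_reg_set s, int regno)
{
  return (s >> regno) & 1;
}
```

The win is the same as in the pass: the location list is walked once per operand, not once per (operand, candidate register) pair.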
/* These two variables are used to pass information from
   reload_cse_record_set to reload_cse_check_clobber.  */

static int reload_cse_check_clobbered;
static rtx reload_cse_check_src;

/* See if DEST overlaps with RELOAD_CSE_CHECK_SRC.  If it does, set
   RELOAD_CSE_CHECK_CLOBBERED.  This is called via note_stores.  The
   second argument, which is passed by note_stores, is ignored.  */
static void
reload_cse_check_clobber (dest, ignore, data)
     rtx dest;
     rtx ignore ATTRIBUTE_UNUSED;
     void *data ATTRIBUTE_UNUSED;
{
  if (reg_overlap_mentioned_p (dest, reload_cse_check_src))
    reload_cse_check_clobbered = 1;
}

/* Record the result of a SET instruction.  SET is the set pattern.
   BODY is the pattern of the insn that it came from.  */
static void
reload_cse_record_set (set, body)
     rtx set;
     rtx body;
{
  rtx dest, src, x;
  int dreg, sreg;
  enum machine_mode dest_mode;

  dest = SET_DEST (set);
  src = SET_SRC (set);
  dreg = true_regnum (dest);
  sreg = true_regnum (src);
  dest_mode = GET_MODE (dest);

  /* Some machines don't define AUTO_INC_DEC, but they still use push
     instructions.  We need to catch that case here in order to
     invalidate the stack pointer correctly.  Note that invalidating
     the stack pointer is different from invalidating DEST.  */
  x = dest;
  while (GET_CODE (x) == SUBREG
	 || GET_CODE (x) == ZERO_EXTRACT
	 || GET_CODE (x) == SIGN_EXTRACT
	 || GET_CODE (x) == STRICT_LOW_PART)
    x = XEXP (x, 0);
  if (push_operand (x, GET_MODE (x)))
    {
      reload_cse_invalidate_rtx (stack_pointer_rtx, NULL_RTX, NULL);
      reload_cse_invalidate_rtx (dest, NULL_RTX, NULL);
      return;
    }

  /* We can only handle an assignment to a register, or a store of a
     register to a memory location.  For other cases, we just clobber
     the destination.  We also have to just clobber if there are side
     effects in SRC or DEST.  */
  if ((dreg < 0 && GET_CODE (dest) != MEM)
      || side_effects_p (src)
      || side_effects_p (dest))
    {
      reload_cse_invalidate_rtx (dest, NULL_RTX, NULL);
      return;
    }

#ifdef HAVE_cc0
  /* We don't try to handle values involving CC, because it's a pain
     to keep track of when they have to be invalidated.  */
  if (reg_mentioned_p (cc0_rtx, src)
      || reg_mentioned_p (cc0_rtx, dest))
    {
      reload_cse_invalidate_rtx (dest, NULL_RTX, NULL);
      return;
    }
#endif

  /* If BODY is a PARALLEL, then we need to see whether the source of
     SET is clobbered by some other instruction in the PARALLEL.  */
  if (GET_CODE (body) == PARALLEL)
    {
      int i;

      for (i = XVECLEN (body, 0) - 1; i >= 0; --i)
	{
	  rtx x;

	  x = XVECEXP (body, 0, i);
	  if (x == set)
	    continue;

	  reload_cse_check_clobbered = 0;
	  reload_cse_check_src = src;
	  note_stores (x, reload_cse_check_clobber, NULL);
	  if (reload_cse_check_clobbered)
	    {
	      reload_cse_invalidate_rtx (dest, NULL_RTX, NULL);
	      return;
	    }
	}
    }

  if (dreg >= 0)
    {
      int i;

      /* This is an assignment to a register.  Update the value we
	 have stored for the register.  */
      if (sreg >= 0)
	{
	  rtx x;

	  /* This is a copy from one register to another.  Any values
	     which were valid for SREG are now valid for DREG.  If the
	     mode changes, we use gen_lowpart_common to extract only
	     the part of the value that is copied.  */
	  reg_values[dreg] = 0;
	  for (x = reg_values[sreg]; x; x = XEXP (x, 1))
	    {
	      rtx tmp;

	      if (XEXP (x, 0) == 0)
		continue;
	      if (dest_mode == GET_MODE (XEXP (x, 0)))
		tmp = XEXP (x, 0);
	      else if (GET_MODE_BITSIZE (dest_mode)
		       > GET_MODE_BITSIZE (GET_MODE (XEXP (x, 0))))
		continue;
	      else
		tmp = gen_lowpart_common (dest_mode, XEXP (x, 0));
	      if (tmp)
		reg_values[dreg] = gen_rtx_EXPR_LIST (dest_mode, tmp,
						      reg_values[dreg]);
	    }
	}
      else
	reg_values[dreg] = gen_rtx_EXPR_LIST (dest_mode, src, NULL_RTX);

      /* We've changed DREG, so invalidate any values held by other
	 registers that depend upon it.  */
      reload_cse_invalidate_regno (dreg, dest_mode, 0);

      /* If this assignment changes more than one hard register,
	 forget anything we know about the others.  */
      for (i = 1; i < HARD_REGNO_NREGS (dreg, dest_mode); i++)
	reg_values[dreg + i] = 0;
    }
  else if (GET_CODE (dest) == MEM)
    {
      /* Invalidate conflicting memory locations.  */
      reload_cse_invalidate_mem (dest);

      /* If we're storing a register to memory, add DEST to the list
	 in REG_VALUES.  */
      if (sreg >= 0 && ! side_effects_p (dest))
	reg_values[sreg] = gen_rtx_EXPR_LIST (dest_mode, dest,
					      reg_values[sreg]);
    }
  else
    {
      /* We should have bailed out earlier.  */
      abort ();
    }
}
/* If reload couldn't use reg+reg+offset addressing, try to use reg+reg
   addressing now.
   This code might also be useful when reload gave up on reg+reg addressing
   ...
gcc/rtl.def
...
@@ -530,6 +530,9 @@ DEF_RTL_EXPR(CONST, "const", "e", 'o')
   by a SET whose first operand is (PC).  */
DEF_RTL_EXPR(PC, "pc", "", 'o')

/* Used in the cselib routines to describe a value.  */
DEF_RTL_EXPR(VALUE, "value", "0", 'o')

/* A register.  The "operand" is the register number, accessed with
   the REGNO macro.  If this number is less than FIRST_PSEUDO_REGISTER
   then a hardware register is being referred to.  The second operand
   ...
gcc/rtl.h
...
@@ -92,6 +92,7 @@ typedef union rtunion_def
  struct rtvec_def *rtvec;
  enum machine_mode rttype;
  addr_diff_vec_flags rt_addr_diff_vec_flags;
  struct cselib_val_struct *rt_cselib;
  struct bitmap_head_def *rtbit;
  union tree_node *rttree;
  struct basic_block_def *bb;
...
@@ -330,6 +331,7 @@ extern void rtvec_check_failed_bounds PARAMS ((rtvec, int,
#define X0TREE(RTX, N)     (RTL_CHECK1(RTX, N, '0').rttree)
#define X0BBDEF(RTX, N)    (RTL_CHECK1(RTX, N, '0').bb)
#define X0ADVFLAGS(RTX, N) (RTL_CHECK1(RTX, N, '0').rt_addr_diff_vec_flags)
#define X0CSELIB(RTX, N)   (RTL_CHECK1(RTX, N, '0').rt_cselib)
#define XCWINT(RTX, N, C)  (RTL_CHECKC1(RTX, N, C).rtwint)
#define XCINT(RTX, N, C)   (RTL_CHECKC1(RTX, N, C).rtint)
...
@@ -341,6 +343,7 @@ extern void rtvec_check_failed_bounds PARAMS ((rtvec, int,
#define XCTREE(RTX, N, C)     (RTL_CHECKC1(RTX, N, C).rttree)
#define XCBBDEF(RTX, N, C)    (RTL_CHECKC1(RTX, N, C).bb)
#define XCADVFLAGS(RTX, N, C) (RTL_CHECKC1(RTX, N, C).rt_addr_diff_vec_flags)
#define XCCSELIB(RTX, N, C)   (RTL_CHECKC1(RTX, N, C).rt_cselib)
#define XCVECEXP(RTX, N, M, C) RTVEC_ELT (XCVEC (RTX, N, C), M)
#define XCVECLEN(RTX, N, C)    GET_NUM_ELEM (XCVEC (RTX, N, C))
...
@@ -470,6 +473,8 @@ extern void rtvec_check_failed_bounds PARAMS ((rtvec, int,
#define ADDR_DIFF_VEC_FLAGS(RTX) X0ADVFLAGS(RTX, 4)

#define CSELIB_VAL_PTR(RTX) X0CSELIB(RTX, 0)

/* Don't forget to change reg_note_name in rtl.c.  */
enum reg_note { REG_DEAD = 1, REG_INC = 2, REG_EQUIV = 3, REG_WAS_0 = 4,
		REG_EQUAL = 5, REG_RETVAL = 6, REG_LIBCALL = 7,
		...
gcc/simplify-rtx.c
...
@@ -37,6 +37,10 @@ Boston, MA 02111-1307, USA.  */
#include "expr.h"
#include "toplev.h"
#include "output.h"
#include "ggc.h"
#include "obstack.h"
#include "hashtab.h"
#include "cselib.h"

/* Simplification and canonicalization of RTL.  */
...
@@ -1957,3 +1961,1229 @@ simplify_rtx (x)
      return NULL;
    }
}

static int entry_and_rtx_equal_p	PARAMS ((const void *, const void *));
static unsigned int get_value_hash	PARAMS ((const void *));
static struct elt_list *new_elt_list	PARAMS ((struct elt_list *,
						 cselib_val *));
static struct elt_loc_list *new_elt_loc_list PARAMS ((struct elt_loc_list *,
						      rtx));
static void unchain_one_value		PARAMS ((cselib_val *));
static void unchain_one_elt_list	PARAMS ((struct elt_list **));
static void unchain_one_elt_loc_list	PARAMS ((struct elt_loc_list **));
static void clear_table			PARAMS ((void));
static int check_value_useless		PARAMS ((cselib_val *));
static int discard_useless_locs		PARAMS ((void **, void *));
static int discard_useless_values	PARAMS ((void **, void *));
static void remove_useless_values	PARAMS ((void));
static unsigned int hash_rtx		PARAMS ((rtx, enum machine_mode, int));
static cselib_val *new_cselib_val	PARAMS ((unsigned int,
						 enum machine_mode));
static void add_mem_for_addr		PARAMS ((cselib_val *, cselib_val *,
						 rtx));
static cselib_val *cselib_lookup_mem	PARAMS ((rtx, int));
static rtx cselib_subst_to_values	PARAMS ((rtx));
static void cselib_invalidate_regno	PARAMS ((int, enum machine_mode));
static int cselib_mem_conflict_p	PARAMS ((rtx, rtx));
static int cselib_invalidate_mem_1	PARAMS ((void **, void *));
static void cselib_invalidate_mem	PARAMS ((rtx));
static void cselib_invalidate_rtx	PARAMS ((rtx, rtx, void *));
static void cselib_record_set		PARAMS ((rtx, cselib_val *,
						 cselib_val *));
static void cselib_record_sets		PARAMS ((rtx));

/* There are three ways in which cselib can look up an rtx:
   - for a REG, the reg_values table (which is indexed by regno) is used
   - for a MEM, we recursively look up its address and then follow the
     addr_list of that value
   - for everything else, we compute a hash value and go through the hash
     table.  Since different rtx's can still have the same hash value,
     this involves walking the table entries for a given value and comparing
     the locations of the entries with the rtx we are looking up.  */
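The lookup scheme described in the comment above can be sketched with a toy value-numbering table: registers get a direct array index, everything else goes through a deliberately collision-prone hash followed by a real equality check. This is a minimal model under simplified assumptions; `struct expr`, `lookup`, and the other names are illustrative, not cselib's actual types:

```c
#include <assert.h>
#include <stddef.h>

/* Toy expression: either a register (regno in v) or a constant (value in v). */
enum kind { EXPR_REG, EXPR_CONST };
struct expr { enum kind kind; int v; };

#define NREGS 8
#define NBUCKETS 4

struct entry { struct expr e; int value_num; struct entry *next; };

static int reg_value_num[NREGS];        /* REG: direct index by regno.       */
static struct entry *buckets[NBUCKETS]; /* everything else: hash buckets.    */
static struct entry pool[32];
static int pool_used;
static int next_value_num = 1;

static unsigned
hash_expr (const struct expr *e)
{
  /* Deliberately weak hash: collisions are expected, which is exactly why
     lookup below still compares entries for real equality.  */
  return (unsigned) e->v % NBUCKETS;
}

/* Return the value number for E, creating one on first sight.  */
static int
lookup (const struct expr *e)
{
  if (e->kind == EXPR_REG)
    {
      if (! reg_value_num[e->v])
	reg_value_num[e->v] = next_value_num++;
      return reg_value_num[e->v];
    }

  unsigned h = hash_expr (e);
  for (struct entry *p = buckets[h]; p; p = p->next)
    if (p->e.kind == e->kind && p->e.v == e->v)
      return p->value_num;             /* same hash AND really equal */

  struct entry *n = &pool[pool_used++];
  n->e = *e;
  n->value_num = next_value_num++;
  n->next = buckets[h];
  buckets[h] = n;
  return n->value_num;
}
```

With four buckets, the constants 7 and 11 land in the same bucket, yet the equality walk keeps their value numbers distinct.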
/* A table that enables us to look up elts by their value.  */
static htab_t hash_table;

/* This is a global so we don't have to pass this through every function.
   It is used in new_elt_loc_list to set SETTING_INSN.  */
static rtx cselib_current_insn;

/* Every new unknown value gets a unique number.  */
static unsigned int next_unknown_value;

/* The number of registers we had when the varrays were last resized.  */
static int cselib_nregs;

/* Count values without known locations.  Whenever this grows too big, we
   remove these useless values from the table.  */
static int n_useless_values;

/* Number of useless values before we remove them from the hash table.  */
#define MAX_USELESS_VALUES 32

/* This table maps from register number to values.  It does not contain
   pointers to cselib_val structures, but rather elt_lists.  The purpose is
   to be able to refer to the same register in different modes.  */
static varray_type reg_values;
#define REG_VALUES(I) VARRAY_ELT_LIST (reg_values, (I))

/* We pass this to cselib_invalidate_mem to invalidate all of
   memory for a non-const call instruction.  */
static rtx callmem;

/* Memory for our structures is allocated from this obstack.  */
static struct obstack cselib_obstack;

/* Used to quickly free all memory.  */
static char *cselib_startobj;

/* Caches for unused structures.  */
static cselib_val *empty_vals;
static struct elt_list *empty_elt_lists;
static struct elt_loc_list *empty_elt_loc_lists;

/* Allocate a struct elt_list and fill in its two elements with the
   arguments.  */
static struct elt_list *
new_elt_list (next, elt)
     struct elt_list *next;
     cselib_val *elt;
{
  struct elt_list *el = empty_elt_lists;

  if (el)
    empty_elt_lists = el->next;
  else
    el = (struct elt_list *) obstack_alloc (&cselib_obstack,
					    sizeof (struct elt_list));
  el->next = next;
  el->elt = elt;
  return el;
}
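new_elt_list above shows cselib's allocation pattern: nodes come from an obstack arena, and freed nodes are cached on a free list (empty_elt_lists) for reuse instead of being returned to the arena. A minimal stand-alone model of that pattern, with a fixed array standing in for the obstack (node names are illustrative, not cselib's):

```c
#include <assert.h>
#include <stddef.h>

/* Node type standing in for cselib's elt_list.  */
struct node { struct node *next; int payload; };

static struct node arena[16];   /* stands in for the obstack           */
static size_t arena_used;
static struct node *free_list;  /* stands in for empty_elt_lists cache */

static struct node *
node_alloc (int payload)
{
  struct node *n = free_list;
  if (n)
    free_list = n->next;        /* reuse a previously freed node       */
  else
    n = &arena[arena_used++];   /* otherwise carve one from the arena  */
  n->next = NULL;
  n->payload = payload;
  return n;
}

static void
node_free (struct node *n)
{
  n->next = free_list;          /* push back onto the cache            */
  free_list = n;
}
```

Because frees only push onto the cache, the arena never shrinks; clear_table below resets everything at once by freeing the whole obstack and zeroing the caches.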
/* Allocate a struct elt_loc_list and fill in its two elements with the
   arguments.  */
static struct elt_loc_list *
new_elt_loc_list (next, loc)
     struct elt_loc_list *next;
     rtx loc;
{
  struct elt_loc_list *el = empty_elt_loc_lists;

  if (el)
    empty_elt_loc_lists = el->next;
  else
    el = (struct elt_loc_list *) obstack_alloc (&cselib_obstack,
						sizeof (struct elt_loc_list));
  el->next = next;
  el->loc = loc;
  el->setting_insn = cselib_current_insn;
  return el;
}

/* The elt_list at *PL is no longer needed.  Unchain it and free its
   storage.  */
static void
unchain_one_elt_list (pl)
     struct elt_list **pl;
{
  struct elt_list *l = *pl;

  *pl = l->next;
  l->next = empty_elt_lists;
  empty_elt_lists = l;
}

/* Likewise for elt_loc_lists.  */
static void
unchain_one_elt_loc_list (pl)
     struct elt_loc_list **pl;
{
  struct elt_loc_list *l = *pl;

  *pl = l->next;
  l->next = empty_elt_loc_lists;
  empty_elt_loc_lists = l;
}

/* Likewise for cselib_vals.  This also frees the addr_list associated with
   V.  */
static void
unchain_one_value (v)
     cselib_val *v;
{
  while (v->addr_list)
    unchain_one_elt_list (&v->addr_list);

  v->u.next_free = empty_vals;
  empty_vals = v;
}

/* Remove all entries from the hash table.  Also used during
   initialization.  */
static void
clear_table ()
{
  int i;

  for (i = 0; i < cselib_nregs; i++)
    REG_VALUES (i) = 0;

  htab_empty (hash_table);
  obstack_free (&cselib_obstack, cselib_startobj);

  empty_vals = 0;
  empty_elt_lists = 0;
  empty_elt_loc_lists = 0;
  n_useless_values = 0;
  next_unknown_value = 0;
}
/* The equality test for our hash table.  The first argument ENTRY is a table
   element (i.e. a cselib_val), while the second arg X is an rtx.  */

static int
entry_and_rtx_equal_p (entry, x_arg)
     const void *entry, *x_arg;
{
  struct elt_loc_list *l;
  cselib_val *v = (cselib_val *) entry;
  rtx x = (rtx) x_arg;

  /* We don't guarantee that distinct rtx's have different hash values,
     so we need to do a comparison.  */
  for (l = v->locs; l; l = l->next)
    if (rtx_equal_for_cselib_p (l->loc, x))
      return 1;

  return 0;
}

/* The hash function for our hash table.  The value is always computed with
   hash_rtx when adding an element; this function just extracts the hash
   value from a cselib_val structure.  */

static unsigned int
get_value_hash (entry)
     const void *entry;
{
  cselib_val *v = (cselib_val *) entry;
  return v->value;
}

/* If there are no more locations that hold a value, the value has become
   useless.  See whether that is the case for V.  Return 1 if this has
   just become useless.  */

static int
check_value_useless (v)
     cselib_val *v;
{
  if (v->locs != 0)
    return 0;

  if (v->value == 0)
    return 0;

  /* This is a marker to indicate that the value will be reclaimed.  */
  v->value = 0;
  n_useless_values++;
  return 1;
}

/* Return true if X contains a VALUE rtx.  If ONLY_USELESS is set, we
   only return true for values which point to a cselib_val whose value
   element has been set to zero, which implies the cselib_val will be
   removed.  */

int
references_value_p (x, only_useless)
     rtx x;
     int only_useless;
{
  enum rtx_code code = GET_CODE (x);
  const char *fmt = GET_RTX_FORMAT (code);
  int i;

  if (GET_CODE (x) == VALUE
      && (! only_useless || CSELIB_VAL_PTR (x)->value == 0))
    return 1;

  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      if (fmt[i] == 'e')
	{
	  if (references_value_p (XEXP (x, i), only_useless))
	    return 1;
	}
      else if (fmt[i] == 'E')
	{
	  int j;

	  for (j = 0; j < XVECLEN (x, i); j++)
	    if (references_value_p (XVECEXP (x, i, j), only_useless))
	      return 1;
	}
    }

  return 0;
}

/* Set by discard_useless_locs if it deleted the last location of any
   value.  */
static int values_became_useless;

/* For all locations found in X, delete locations that reference useless
   values (i.e. values without any location).  Called through
   htab_traverse.  */

static int
discard_useless_locs (x, info)
     void **x;
     void *info ATTRIBUTE_UNUSED;
{
  cselib_val *v = (cselib_val *) *x;
  struct elt_loc_list **p = &v->locs;

  while (*p)
    {
      if (references_value_p ((*p)->loc, 1))
	unchain_one_elt_loc_list (p);
      else
	p = &(*p)->next;
    }

  if (check_value_useless (v))
    values_became_useless = 1;

  return 1;
}

/* If X is a value with no locations, remove it from the hashtable.  */

static int
discard_useless_values (x, info)
     void **x;
     void *info ATTRIBUTE_UNUSED;
{
  cselib_val *v = (cselib_val *) *x;

  if (v->value == 0)
    {
      htab_clear_slot (hash_table, x);
      unchain_one_value (v);
      n_useless_values--;
    }

  return 1;
}

/* Clean out useless values (i.e. those which no longer have locations
   associated with them) from the hash table.  */

static void
remove_useless_values ()
{
  /* First pass: eliminate locations that reference the value.  That in
     turn can make more values useless.  */
  do
    {
      values_became_useless = 0;
      htab_traverse (hash_table, discard_useless_locs, 0);
    }
  while (values_became_useless);

  /* Second pass: actually remove the values.  */
  htab_traverse (hash_table, discard_useless_values, 0);

  if (n_useless_values != 0)
    abort ();
}
/* Return nonzero if we can prove that X and Y contain the same value, taking
   our gathered information into account.  */

int
rtx_equal_for_cselib_p (x, y)
     rtx x, y;
{
  enum rtx_code code;
  const char *fmt;
  int i;

  if (GET_CODE (x) == REG || GET_CODE (x) == MEM)
    {
      cselib_val *e = cselib_lookup (x, VOIDmode, 0);

      if (e)
	x = e->u.val_rtx;
    }

  if (GET_CODE (y) == REG || GET_CODE (y) == MEM)
    {
      cselib_val *e = cselib_lookup (y, VOIDmode, 0);

      if (e)
	y = e->u.val_rtx;
    }

  if (x == y)
    return 1;

  if (GET_CODE (x) == VALUE && GET_CODE (y) == VALUE)
    return CSELIB_VAL_PTR (x) == CSELIB_VAL_PTR (y);

  if (GET_CODE (x) == VALUE)
    {
      cselib_val *e = CSELIB_VAL_PTR (x);
      struct elt_loc_list *l;

      for (l = e->locs; l; l = l->next)
	{
	  rtx t = l->loc;

	  /* Avoid infinite recursion.  */
	  if (GET_CODE (t) == REG || GET_CODE (t) == MEM)
	    continue;

	  if (rtx_equal_for_cselib_p (t, y))
	    return 1;
	}

      return 0;
    }

  if (GET_CODE (y) == VALUE)
    {
      cselib_val *e = CSELIB_VAL_PTR (y);
      struct elt_loc_list *l;

      for (l = e->locs; l; l = l->next)
	{
	  rtx t = l->loc;

	  if (GET_CODE (t) == REG || GET_CODE (t) == MEM)
	    continue;

	  if (rtx_equal_for_cselib_p (x, t))
	    return 1;
	}

      return 0;
    }

  if (GET_CODE (x) != GET_CODE (y) || GET_MODE (x) != GET_MODE (y))
    return 0;

  /* This won't be handled correctly by the code below.  */
  if (GET_CODE (x) == LABEL_REF)
    return XEXP (x, 0) == XEXP (y, 0);

  code = GET_CODE (x);
  fmt = GET_RTX_FORMAT (code);

  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      int j;

      switch (fmt[i])
	{
	case 'w':
	  if (XWINT (x, i) != XWINT (y, i))
	    return 0;
	  break;

	case 'n':
	case 'i':
	  if (XINT (x, i) != XINT (y, i))
	    return 0;
	  break;

	case 'V':
	case 'E':
	  /* Two vectors must have the same length.  */
	  if (XVECLEN (x, i) != XVECLEN (y, i))
	    return 0;

	  /* And the corresponding elements must match.  */
	  for (j = 0; j < XVECLEN (x, i); j++)
	    if (! rtx_equal_for_cselib_p (XVECEXP (x, i, j),
					  XVECEXP (y, i, j)))
	      return 0;
	  break;

	case 'e':
	  if (! rtx_equal_for_cselib_p (XEXP (x, i), XEXP (y, i)))
	    return 0;
	  break;

	case 'S':
	case 's':
	  if (strcmp (XSTR (x, i), XSTR (y, i)))
	    return 0;
	  break;

	case 'u':
	  /* These are just backpointers, so they don't matter.  */
	  break;

	case '0':
	case 't':
	  break;

	  /* It is believed that rtx's at this level will never
	     contain anything but integers and other rtx's,
	     except for within LABEL_REFs and SYMBOL_REFs.  */
	default:
	  abort ();
	}
    }
  return 1;
}
/* Hash an rtx.  Return 0 if we couldn't hash the rtx.
   For registers and memory locations, we look up their cselib_val structure
   and return its VALUE element.
   Possible reasons for return 0 are: the object is volatile, or we couldn't
   find a register or memory location in the table and CREATE is zero.  If
   CREATE is nonzero, table elts are created for regs and mem.
   MODE is used in hashing for CONST_INTs only;
   otherwise the mode of X is used.  */

static unsigned int
hash_rtx (x, mode, create)
     rtx x;
     enum machine_mode mode;
     int create;
{
  cselib_val *e;
  int i, j;
  unsigned int hash = 0;
  enum rtx_code code = GET_CODE (x);
  const char *fmt = GET_RTX_FORMAT (code);

  /* repeat is used to turn tail-recursion into iteration.  */
 repeat:
  code = GET_CODE (x);
  switch (code)
    {
    case MEM:
    case REG:
      e = cselib_lookup (x, GET_MODE (x), create);
      if (! e)
	return 0;
      return e->value;

    case CONST_INT:
      {
	unsigned HOST_WIDE_INT tem = INTVAL (x);
	hash += ((unsigned) CONST_INT << 7) + (unsigned) mode + tem;
	return hash ? hash : CONST_INT;
      }

    case CONST_DOUBLE:
      /* This is like the general case, except that it only counts
	 the integers representing the constant.  */
      hash += (unsigned) code + (unsigned) GET_MODE (x);
      if (GET_MODE (x) != VOIDmode)
	for (i = 2; i < GET_RTX_LENGTH (CONST_DOUBLE); i++)
	  {
	    unsigned HOST_WIDE_INT tem = XWINT (x, i);
	    hash += tem;
	  }
      else
	hash += ((unsigned) CONST_DOUBLE_LOW (x)
		 + (unsigned) CONST_DOUBLE_HIGH (x));
      return hash ? hash : CONST_DOUBLE;

      /* Assume there is only one rtx object for any given label.  */
    case LABEL_REF:
      hash += ((unsigned) LABEL_REF << 7) + (unsigned long) XEXP (x, 0);
      return hash ? hash : LABEL_REF;

    case SYMBOL_REF:
      hash += ((unsigned) SYMBOL_REF << 7) + (unsigned long) XSTR (x, 0);
      return hash ? hash : SYMBOL_REF;

    case PRE_DEC:
    case PRE_INC:
    case POST_DEC:
    case POST_INC:
    case PC:
    case CC0:
    case CALL:
    case UNSPEC_VOLATILE:
      return 0;

    case ASM_OPERANDS:
      if (MEM_VOLATILE_P (x))
	return 0;
      break;

    default:
      break;
    }

  i = GET_RTX_LENGTH (code) - 1;
  hash += (unsigned) code + (unsigned) GET_MODE (x);
  fmt = GET_RTX_FORMAT (code);
  for (; i >= 0; i--)
    {
      if (fmt[i] == 'e')
	{
	  unsigned int tem_hash;
	  rtx tem = XEXP (x, i);

	  /* If we are about to do the last recursive call
	     needed at this level, change it into iteration.
	     This function is called enough to be worth it.  */
	  if (i == 0)
	    {
	      x = tem;
	      goto repeat;
	    }

	  tem_hash = hash_rtx (tem, 0, create);
	  if (tem_hash == 0)
	    return 0;
	  hash += tem_hash;
	}
      else if (fmt[i] == 'E')
	for (j = 0; j < XVECLEN (x, i); j++)
	  {
	    unsigned int tem_hash = hash_rtx (XVECEXP (x, i, j), 0, create);

	    if (tem_hash == 0)
	      return 0;
	    hash += tem_hash;
	  }
      else if (fmt[i] == 's')
	{
	  unsigned char *p = (unsigned char *) XSTR (x, i);

	  if (p)
	    while (*p)
	      hash += *p++;
	}
      else if (fmt[i] == 'i')
	{
	  unsigned int tem = XINT (x, i);
	  hash += tem;
	}
      else if (fmt[i] == '0' || fmt[i] == 't')
	/* unused */;
      else
	abort ();
    }

  return hash ? hash : 1 + GET_CODE (x);
}
/* Create a new value structure for VALUE and initialize it.  The mode of the
   value is MODE.  */

static cselib_val *
new_cselib_val (value, mode)
     unsigned int value;
     enum machine_mode mode;
{
  cselib_val *e = empty_vals;

  if (e)
    empty_vals = e->u.next_free;
  else
    e = (cselib_val *) obstack_alloc (&cselib_obstack, sizeof (cselib_val));

  if (value == 0)
    abort ();

  e->value = value;
  e->u.val_rtx = gen_rtx_VALUE (mode);
  CSELIB_VAL_PTR (e->u.val_rtx) = e;

  e->addr_list = 0;
  e->locs = 0;
  return e;
}

/* ADDR_ELT is a value that is used as address.  MEM_ELT is the value that
   contains the data at this address.  X is a MEM that represents the
   value.  Update the two value structures to represent this situation.  */

static void
add_mem_for_addr (addr_elt, mem_elt, x)
     cselib_val *addr_elt, *mem_elt;
     rtx x;
{
  rtx new;
  struct elt_loc_list *l;

  /* Avoid duplicates.  */
  for (l = mem_elt->locs; l; l = l->next)
    if (GET_CODE (l->loc) == MEM
	&& CSELIB_VAL_PTR (XEXP (l->loc, 0)) == addr_elt)
      return;

  new = gen_rtx_MEM (GET_MODE (x), addr_elt->u.val_rtx);
  addr_elt->addr_list = new_elt_list (addr_elt->addr_list, mem_elt);

  RTX_UNCHANGING_P (new) = RTX_UNCHANGING_P (x);
  MEM_COPY_ATTRIBUTES (new, x);

  mem_elt->locs = new_elt_loc_list (mem_elt->locs, new);
}

/* Subroutine of cselib_lookup.  Return a value for X, which is a MEM rtx.
   If CREATE, make a new one if we haven't seen it before.  */

static cselib_val *
cselib_lookup_mem (x, create)
     rtx x;
     int create;
{
  void **slot;
  cselib_val *addr;
  cselib_val *mem_elt;
  struct elt_list *l;

  if (MEM_VOLATILE_P (x) || GET_MODE (x) == BLKmode)
    return 0;
  if (FLOAT_MODE_P (GET_MODE (x)) && flag_float_store)
    return 0;

  /* Look up the value for the address.  */
  addr = cselib_lookup (XEXP (x, 0), GET_MODE (x), create);
  if (! addr)
    return 0;

  /* Find a value that describes a value of our mode at that address.  */
  for (l = addr->addr_list; l; l = l->next)
    if (GET_MODE (l->elt->u.val_rtx) == GET_MODE (x))
      return l->elt;

  if (! create)
    return 0;

  mem_elt = new_cselib_val (++next_unknown_value, GET_MODE (x));
  add_mem_for_addr (addr, mem_elt, x);
  slot = htab_find_slot_with_hash (hash_table, x, mem_elt->value, 1);
  *slot = mem_elt;
  return mem_elt;
}
/* Walk rtx X and replace all occurrences of REG and MEM subexpressions
   with VALUE expressions.  This way, it becomes independent of changes
   to registers and memory.
   X isn't actually modified; if modifications are needed, new rtl is
   allocated.  However, the return value can share rtl with X.  */

static rtx
cselib_subst_to_values (x)
     rtx x;
{
  enum rtx_code code = GET_CODE (x);
  const char *fmt = GET_RTX_FORMAT (code);
  cselib_val *e;
  struct elt_list *l;
  rtx copy = x;
  int i;

  switch (code)
    {
    case REG:
      i = REGNO (x);
      for (l = REG_VALUES (i); l; l = l->next)
	if (GET_MODE (l->elt->u.val_rtx) == GET_MODE (x))
	  return l->elt->u.val_rtx;
      abort ();

    case MEM:
      e = cselib_lookup_mem (x, 0);
      if (! e)
	abort ();
      return e->u.val_rtx;

      /* CONST_DOUBLEs must be special-cased here so that we won't try to
	 look up the CONST_DOUBLE_MEM inside.  */
    case CONST_DOUBLE:
    case CONST_INT:
      return x;

    default:
      break;
    }

  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      if (fmt[i] == 'e')
	{
	  rtx t = cselib_subst_to_values (XEXP (x, i));

	  if (t != XEXP (x, i) && x == copy)
	    copy = shallow_copy_rtx (x);

	  XEXP (copy, i) = t;
	}
      else if (fmt[i] == 'E')
	{
	  int j, k;

	  for (j = 0; j < XVECLEN (x, i); j++)
	    {
	      rtx t = cselib_subst_to_values (XVECEXP (x, i, j));

	      if (t != XVECEXP (x, i, j) && XVEC (x, i) == XVEC (copy, i))
		{
		  if (x == copy)
		    copy = shallow_copy_rtx (x);

		  XVEC (copy, i) = rtvec_alloc (XVECLEN (x, i));
		  for (k = 0; k < j; k++)
		    XVECEXP (copy, i, k) = XVECEXP (x, i, k);
		}

	      XVECEXP (copy, i, j) = t;
	    }
	}
    }

  return copy;
}
/* Look up the rtl expression X in our tables and return the value it has.
   If CREATE is zero, we return NULL if we don't know the value.  Otherwise,
   we create a new one if possible, using mode MODE if X doesn't have a mode
   (i.e. because it's a constant).  */

cselib_val *
cselib_lookup (x, mode, create)
     rtx x;
     enum machine_mode mode;
     int create;
{
  void **slot;
  cselib_val *e;
  unsigned int hashval;

  if (GET_MODE (x) != VOIDmode)
    mode = GET_MODE (x);

  if (GET_CODE (x) == VALUE)
    return CSELIB_VAL_PTR (x);

  if (GET_CODE (x) == REG)
    {
      struct elt_list *l;
      int i = REGNO (x);

      for (l = REG_VALUES (i); l; l = l->next)
	if (mode == GET_MODE (l->elt->u.val_rtx))
	  return l->elt;

      if (! create)
	return 0;

      e = new_cselib_val (++next_unknown_value, GET_MODE (x));
      e->locs = new_elt_loc_list (e->locs, x);
      REG_VALUES (i) = new_elt_list (REG_VALUES (i), e);
      slot = htab_find_slot_with_hash (hash_table, x, e->value, 1);
      *slot = e;
      return e;
    }

  if (GET_CODE (x) == MEM)
    return cselib_lookup_mem (x, create);

  hashval = hash_rtx (x, mode, create);
  /* Can't even create if hashing is not possible.  */
  if (! hashval)
    return 0;

  slot = htab_find_slot_with_hash (hash_table, x, hashval, create);
  if (slot == 0)
    return 0;

  e = (cselib_val *) *slot;
  if (e)
    return e;

  e = new_cselib_val (hashval, mode);
  e->locs = new_elt_loc_list (e->locs, cselib_subst_to_values (x));
  *slot = (void *) e;
  return e;
}
/* Invalidate any entries in reg_values that overlap REGNO.  This is called
   if REGNO is changing.  MODE is the mode of the assignment to REGNO, which
   is used to determine how many hard registers are being changed.  If MODE
   is VOIDmode, then only REGNO is being changed; this is used when
   invalidating call clobbered registers across a call.  */

static void
cselib_invalidate_regno (regno, mode)
     int regno;
     enum machine_mode mode;
{
  int endregno;
  int i;

  /* If we see pseudos after reload, something is _wrong_.  */
  if (reload_completed && regno >= FIRST_PSEUDO_REGISTER
      && reg_renumber[regno] >= 0)
    abort ();

  /* Determine the range of registers that must be invalidated.  For
     pseudos, only REGNO is affected.  For hard regs, we must take MODE
     into account, and we must also invalidate lower register numbers
     if they contain values that overlap REGNO.  */
  endregno = regno + 1;
  if (regno < FIRST_PSEUDO_REGISTER && mode != VOIDmode)
    endregno = regno + HARD_REGNO_NREGS (regno, mode);

  for (i = 0; i < endregno; i++)
    {
      struct elt_list **l = &REG_VALUES (i);

      /* Go through all known values for this reg; if it overlaps the range
	 we're invalidating, remove the value.  */
      while (*l)
	{
	  cselib_val *v = (*l)->elt;
	  struct elt_loc_list **p;
	  int this_last = i;

	  if (i < FIRST_PSEUDO_REGISTER)
	    this_last += HARD_REGNO_NREGS (i, GET_MODE (v->u.val_rtx)) - 1;

	  if (this_last < regno)
	    {
	      l = &(*l)->next;
	      continue;
	    }

	  /* We have an overlap.  */
	  unchain_one_elt_list (l);

	  /* Now, we clear the mapping from value to reg.  It must exist, so
	     this code will crash intentionally if it doesn't.  */
	  for (p = &v->locs; ; p = &(*p)->next)
	    {
	      rtx x = (*p)->loc;

	      if (GET_CODE (x) == REG && REGNO (x) == i)
		{
		  unchain_one_elt_loc_list (p);
		  break;
		}
	    }
	  check_value_useless (v);
	}
    }
}
/* The memory at address MEM_BASE is being changed.
   Return whether this change will invalidate VAL.  */

static int
cselib_mem_conflict_p (mem_base, val)
     rtx mem_base;
     rtx val;
{
  enum rtx_code code;
  const char *fmt;
  int i;

  code = GET_CODE (val);
  switch (code)
    {
      /* Get rid of a few simple cases quickly.  */
    case REG:
    case PC:
    case CC0:
    case SCRATCH:
    case CONST:
    case CONST_INT:
    case CONST_DOUBLE:
    case SYMBOL_REF:
    case LABEL_REF:
      return 0;

    case MEM:
      if (GET_MODE (mem_base) == BLKmode
	  || GET_MODE (val) == BLKmode)
	return 1;
      if (anti_dependence (val, mem_base))
	return 1;
      /* The address may contain nested MEMs.  */
      break;

    default:
      break;
    }

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      if (fmt[i] == 'e')
	{
	  if (cselib_mem_conflict_p (mem_base, XEXP (val, i)))
	    return 1;
	}
      else if (fmt[i] == 'E')
	{
	  int j;

	  for (j = 0; j < XVECLEN (val, i); j++)
	    if (cselib_mem_conflict_p (mem_base, XVECEXP (val, i, j)))
	      return 1;
	}
    }

  return 0;
}
/* For the value found in SLOT, walk its locations to determine if any overlap
   INFO (which is a MEM rtx).  */

static int
cselib_invalidate_mem_1 (slot, info)
     void **slot;
     void *info;
{
  cselib_val *v = (cselib_val *) *slot;
  rtx mem_rtx = (rtx) info;
  struct elt_loc_list **p = &v->locs;

  while (*p)
    {
      cselib_val *addr;
      struct elt_list **mem_chain;
      rtx x = (*p)->loc;

      /* MEMs may occur in locations only at the top level; below
	 that every MEM or REG is substituted by its VALUE.  */
      if (GET_CODE (x) != MEM
	  || ! cselib_mem_conflict_p (mem_rtx, x))
	{
	  p = &(*p)->next;
	  continue;
	}

      /* This one overlaps.  */
      /* We must have a mapping from this MEM's address to the
	 value (E).  Remove that, too.  */
      addr = cselib_lookup (XEXP (x, 0), VOIDmode, 0);
      mem_chain = &addr->addr_list;
      for (;;)
	{
	  if ((*mem_chain)->elt == v)
	    {
	      unchain_one_elt_list (mem_chain);
	      break;
	    }
	  mem_chain = &(*mem_chain)->next;
	}
      unchain_one_elt_loc_list (p);
    }

  check_value_useless (v);
  return 1;
}

/* Invalidate any locations in the table which are changed because of a
   store to MEM_RTX.  If this is called because of a non-const call
   instruction, MEM_RTX is (mem:BLK const0_rtx).  */

static void
cselib_invalidate_mem (mem_rtx)
     rtx mem_rtx;
{
  htab_traverse (hash_table, cselib_invalidate_mem_1, mem_rtx);
}
/* Invalidate DEST, which is being assigned to or clobbered.  The second and
   the third parameter exist so that this function can be passed to
   note_stores; they are ignored.  */

static void
cselib_invalidate_rtx (dest, ignore, data)
     rtx dest;
     rtx ignore ATTRIBUTE_UNUSED;
     void *data ATTRIBUTE_UNUSED;
{
  while (GET_CODE (dest) == STRICT_LOW_PART
	 || GET_CODE (dest) == SIGN_EXTRACT
	 || GET_CODE (dest) == ZERO_EXTRACT
	 || GET_CODE (dest) == SUBREG)
    dest = XEXP (dest, 0);

  if (GET_CODE (dest) == REG)
    cselib_invalidate_regno (REGNO (dest), GET_MODE (dest));
  else if (GET_CODE (dest) == MEM)
    cselib_invalidate_mem (dest);

  /* Some machines don't define AUTO_INC_DEC, but they still use push
     instructions.  We need to catch that case here in order to
     invalidate the stack pointer correctly.  Note that invalidating
     the stack pointer is different from invalidating DEST.  */
  if (push_operand (dest, GET_MODE (dest)))
    cselib_invalidate_rtx (stack_pointer_rtx, NULL_RTX, NULL);
}
/* Record the result of a SET instruction.  DEST is being set; the source
   contains the value described by SRC_ELT.  If DEST is a MEM, DEST_ADDR_ELT
   describes its address.  */

static void
cselib_record_set (dest, src_elt, dest_addr_elt)
     rtx dest;
     cselib_val *src_elt, *dest_addr_elt;
{
  int dreg = GET_CODE (dest) == REG ? REGNO (dest) : -1;

  if (src_elt == 0 || side_effects_p (dest))
    return;

  if (dreg >= 0)
    {
      REG_VALUES (dreg) = new_elt_list (REG_VALUES (dreg), src_elt);
      src_elt->locs = new_elt_loc_list (src_elt->locs, dest);
    }
  else if (GET_CODE (dest) == MEM && dest_addr_elt != 0)
    add_mem_for_addr (dest_addr_elt, src_elt, dest);
}

/* Describe a single set that is part of an insn.  */
struct set
{
  rtx src;
  rtx dest;
  cselib_val *src_elt;
  cselib_val *dest_addr_elt;
};
/* There is no good way to determine how many elements there can be
   in a PARALLEL.  Since it's fairly cheap, use a really large number.  */
#define MAX_SETS (FIRST_PSEUDO_REGISTER * 2)

/* Record the effects of any sets in INSN.  */

static void
cselib_record_sets (insn)
     rtx insn;
{
  int n_sets = 0;
  int i;
  struct set sets[MAX_SETS];
  rtx body = PATTERN (insn);

  /* Find all sets.  */
  if (GET_CODE (body) == SET)
    {
      sets[0].src = SET_SRC (body);
      sets[0].dest = SET_DEST (body);
      n_sets = 1;
    }
  else if (GET_CODE (body) == PARALLEL)
    {
      /* Look through the PARALLEL and record the values being
	 set, if possible.  Also handle any CLOBBERs.  */
      for (i = XVECLEN (body, 0) - 1; i >= 0; --i)
	{
	  rtx x = XVECEXP (body, 0, i);

	  if (GET_CODE (x) == SET)
	    {
	      sets[n_sets].src = SET_SRC (x);
	      sets[n_sets].dest = SET_DEST (x);
	      n_sets++;
	    }
	}
    }

  /* Look up the values that are read.  Do this before invalidating the
     locations that are written.  */
  for (i = 0; i < n_sets; i++)
    {
      sets[i].src_elt = cselib_lookup (sets[i].src,
				       GET_MODE (sets[i].dest), 1);
      if (GET_CODE (sets[i].dest) == MEM)
	sets[i].dest_addr_elt = cselib_lookup (XEXP (sets[i].dest, 0),
					       Pmode, 1);
      else
	sets[i].dest_addr_elt = 0;
    }

  /* Invalidate all locations written by this insn.  Note that the elts we
     looked up in the previous loop aren't affected, just some of their
     locations may go away.  */
  note_stores (body, cselib_invalidate_rtx, NULL);

  /* Now enter the equivalences in our tables.  */
  for (i = 0; i < n_sets; i++)
    cselib_record_set (sets[i].dest, sets[i].src_elt, sets[i].dest_addr_elt);
}
/* Record the effects of INSN.  */

void
cselib_process_insn (insn)
     rtx insn;
{
  int i;

  cselib_current_insn = insn;

  /* Forget everything at a CODE_LABEL, a volatile asm, or a setjmp.  */
  if (GET_CODE (insn) == CODE_LABEL
      || (GET_CODE (insn) == NOTE
	  && NOTE_LINE_NUMBER (insn) == NOTE_INSN_SETJMP)
      || (GET_CODE (insn) == INSN
	  && GET_CODE (PATTERN (insn)) == ASM_OPERANDS
	  && MEM_VOLATILE_P (PATTERN (insn))))
    {
      clear_table ();
      return;
    }

  if (GET_RTX_CLASS (GET_CODE (insn)) != 'i')
    {
      cselib_current_insn = 0;
      return;
    }

  /* If this is a call instruction, forget anything stored in a
     call clobbered register, or, if this is not a const call, in
     memory.  */
  if (GET_CODE (insn) == CALL_INSN)
    {
      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
	if (call_used_regs[i])
	  cselib_invalidate_regno (i, VOIDmode);

      if (! CONST_CALL_P (insn))
	cselib_invalidate_mem (callmem);
    }

  cselib_record_sets (insn);

#ifdef AUTO_INC_DEC
  /* Clobber any registers which appear in REG_INC notes.  We
     could keep track of the changes to their values, but it is
     unlikely to help.  */
  {
    rtx x;

    for (x = REG_NOTES (insn); x; x = XEXP (x, 1))
      if (REG_NOTE_KIND (x) == REG_INC)
	cselib_invalidate_rtx (XEXP (x, 0), NULL_RTX, NULL);
  }
#endif

  /* Look for any CLOBBERs in CALL_INSN_FUNCTION_USAGE, but only
     after we have processed the insn.  */
  if (GET_CODE (insn) == CALL_INSN)
    {
      rtx x;

      for (x = CALL_INSN_FUNCTION_USAGE (insn); x; x = XEXP (x, 1))
	if (GET_CODE (XEXP (x, 0)) == CLOBBER)
	  cselib_invalidate_rtx (XEXP (XEXP (x, 0), 0), NULL_RTX, NULL);
    }

  cselib_current_insn = 0;

  if (n_useless_values > MAX_USELESS_VALUES)
    remove_useless_values ();
}
/* Make sure our varrays are big enough.  Not called from any cselib routines;
   it must be called by the user if it allocated new registers.  */

void
cselib_update_varray_sizes ()
{
  int nregs = max_reg_num ();

  if (nregs == cselib_nregs)
    return;

  cselib_nregs = nregs;
  VARRAY_GROW (reg_values, nregs);
}

/* Initialize cselib for one pass.  The caller must also call
   init_alias_analysis.  */

void
cselib_init ()
{
  /* These are only created once.  */
  if (! callmem)
    {
      extern struct obstack permanent_obstack;

      gcc_obstack_init (&cselib_obstack);
      cselib_startobj = obstack_alloc (&cselib_obstack, 0);

      push_obstacks (&permanent_obstack, &permanent_obstack);
      callmem = gen_rtx_MEM (BLKmode, const0_rtx);
      pop_obstacks ();
      ggc_add_rtx_root (&callmem, 1);
    }

  cselib_nregs = max_reg_num ();
  VARRAY_ELT_LIST_INIT (reg_values, cselib_nregs, "reg_values");
  hash_table = htab_create (31, get_value_hash, entry_and_rtx_equal_p, NULL);
  clear_table ();
}

/* Called when the current user is done with cselib.  */

void
cselib_finish ()
{
  clear_table ();
  htab_delete (hash_table);
}
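For orientation, the callers added by this commit (load_mems in loop.c and reload_cse_regs in reload1.c) drive this module in a simple init/scan/finish pattern. The following is a non-compilable sketch against the GCC-internal API, not standalone code; `first_insn` is a placeholder for whatever insn stream the pass walks:

```c
/* Sketch only: assumes GCC-internal types (rtx, NEXT_INSN) and the
   entry points declared in the new cselib.h.  */
cselib_init ();

for (insn = first_insn; insn; insn = NEXT_INSN (insn))
  {
    cselib_process_insn (insn);
    /* ...query cselib_lookup / rtx_equal_for_cselib_p here to find
       insns whose operands hold known-equal values...  */
  }

cselib_finish ();
```

Note that cselib_process_insn must see every insn in order, since any skipped store would leave stale equivalences in the table.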
gcc/varray.h
@@ -76,6 +76,7 @@ typedef union varray_data_tag {
   struct reg_info_def		*reg[1];
   struct const_equiv_data	const_equiv[1];
   struct basic_block_def	*bb[1];
+  struct elt_list		*te[1];
 } varray_data;

 /* Virtual array of pointers header.  */
...
@@ -152,6 +153,9 @@ extern varray_type varray_init PARAMS ((size_t, size_t, const char *));
 #define VARRAY_BB_INIT(va, num, name) \
   va = varray_init (num, sizeof (struct basic_block_def *), name)

+#define VARRAY_ELT_LIST_INIT(va, num, name) \
+  va = varray_init (num, sizeof (struct elt_list *), name)
+
 /* Free up memory allocated by the virtual array, but do not free any of the
    elements involved.  */
 #define VARRAY_FREE(vp) \
...
@@ -219,6 +223,7 @@ extern void varray_check_failed PARAMS ((varray_type, size_t,
 #define VARRAY_REG(VA, N)	  VARRAY_CHECK (VA, N, reg)
 #define VARRAY_CONST_EQUIV(VA, N) VARRAY_CHECK (VA, N, const_equiv)
 #define VARRAY_BB(VA, N)	  VARRAY_CHECK (VA, N, bb)
+#define VARRAY_ELT_LIST(VA, N)	  VARRAY_CHECK (VA, N, te)

 /* Push a new element on the end of VA, extending it if necessary.  */
 #define VARRAY_PUSH_CHAR(VA, X)	  VARRAY_PUSH (VA, c, X)