Project: wenyuanbo/tic

Commit 3db8880d (Unverified)
Authored Apr 17, 2020 by Zhi; committed by GitHub on Apr 17, 2020
fix fuse over functions that are handled by external codegen (#5365)
Parent: c936a81d
Showing 3 changed files with 10 additions and 8 deletions.
src/relay/backend/vm/compiler.cc  +7 -7
src/relay/transforms/fuse_ops.cc  +3 -0
tests/python/relay/test_pass_partition_graph.py  +0 -1
src/relay/backend/vm/compiler.cc

@@ -924,13 +924,6 @@ IRModule VMCompiler::OptimizeModule(const IRModule& mod, const TargetsMap& targe
   pass_seqs.push_back(transform::LambdaLift());
   pass_seqs.push_back(transform::InlinePrimitives());
-  // Manifest the allocations.
-  pass_seqs.push_back(transform::ManifestAlloc(this->target_host_));
-  // Compute away possibly introduced constant computation.
-  pass_seqs.push_back(transform::FoldConstant());
-  // Fuse the shape functions.
-  pass_seqs.push_back(transform::FuseOps());
-
   // Inline the functions that are lifted to the module scope. We perform this
   // pass after all other optimization passes but before the memory allocation
   // pass. This is because memory allocation pass will insert `invoke_tvm_op`

@@ -938,6 +931,13 @@ IRModule VMCompiler::OptimizeModule(const IRModule& mod, const TargetsMap& targe
   // external codegen.
   pass_seqs.push_back(transform::Inline());
+  // Manifest the allocations.
+  pass_seqs.push_back(transform::ManifestAlloc(this->target_host_));
+  // Compute away possibly introduced constant computation.
+  pass_seqs.push_back(transform::FoldConstant());
+  // Fuse the shape functions.
+  pass_seqs.push_back(transform::FuseOps());
+
   // Manifest the allocations needed for the shape functions.
   pass_seqs.push_back(transform::ManifestAlloc(this->target_host_));
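The net effect of the two hunks is a reordering: the ManifestAlloc / FoldConstant / FuseOps block now runs after transform::Inline(), so fusion only sees the module once functions lifted to module scope for external codegen have been inlined back. Below is a rough Python-level sketch of that ordering on a trivial stand-in module; it only illustrates the pass sequence and is not the VM compiler's actual code path.

    import tvm
    from tvm import relay

    # A trivial module standing in for the (already partitioned) input module.
    x = relay.var("x", shape=(4,), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([x], relay.add(x, x)))

    # Illustrative pipeline mirroring the new ordering in VMCompiler::OptimizeModule:
    # inline module-scope functions first, then fold constants and fuse operators.
    seq = tvm.transform.Sequential([
        relay.transform.Inline(),
        relay.transform.FoldConstant(),
        relay.transform.FuseOps(),
    ])
    with tvm.transform.PassContext(opt_level=3):
        mod = seq(mod)
    print(mod)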
src/relay/transforms/fuse_ops.cc

@@ -199,6 +199,9 @@ class IndexedForwardGraph::Creator : private ExprVisitor {
   // Post order tree
   void VisitExpr_(const FunctionNode* op) final {
+    // Skip the function that should be handled by external codegen.
+    if (op->GetAttr<String>(attr::kCompiler).defined()) return;
+
     for (auto param : op->params) {
       this->Update(param, nullptr, kOpaque);
     }
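The attribute checked here is the per-function "Compiler" attribute (attr::kCompiler) that graph partitioning attaches to subgraphs destined for an external codegen; with this change the fusion graph builder treats such functions as opaque instead of fusing into them. A minimal sketch of what such a function looks like from Python; the codegen name "ccompiler" is only a placeholder.

    import tvm
    from tvm import relay

    x = relay.var("x", shape=(10,), dtype="float32")
    y = relay.var("y", shape=(10,), dtype="float32")
    f = relay.Function([x, y], relay.add(x, y))
    # Mark the function for an external codegen; FuseOps must now skip it.
    f = f.with_attr("Compiler", "ccompiler")
    print(f.attrs["Compiler"])  # ccompiler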
tests/python/relay/test_pass_partition_graph.py

@@ -457,7 +457,6 @@ def test_extern_dnnl_mobilenet():
    mod, params = relay.testing.mobilenet.get_workload(batch_size=1, dtype='float32')
    mod["main"] = bind_params_by_name(mod["main"], params)
    mod = transform.AnnotateTarget(["dnnl"])(mod)
    mod = transform.MergeCompilerRegions()(mod)
    mod = transform.PartitionGraph()(mod)
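The test exercises the standard bring-your-own-codegen flow on mobilenet: annotate the operators DNNL supports, merge adjacent annotated regions, and partition them into "Compiler"-attributed functions, which the fixed FuseOps pass now leaves untouched. A smaller, self-contained sketch of the same flow on a toy graph follows; it assumes the DNNL operator annotations bundled with TVM are registered, and the chosen shapes are arbitrary (building the partitioned module additionally requires the DNNL codegen to be enabled).

    import tvm
    from tvm import relay
    from tvm.relay import transform

    x = relay.var("x", shape=(1, 32, 14, 14), dtype="float32")
    w = relay.var("w", shape=(32, 32, 3, 3), dtype="float32")
    out = relay.nn.relu(relay.nn.conv2d(x, w, padding=(1, 1)))
    mod = tvm.IRModule.from_expr(relay.Function([x, w], out))

    mod = transform.AnnotateTarget(["dnnl"])(mod)   # mark DNNL-supported ops
    mod = transform.MergeCompilerRegions()(mod)     # merge adjacent DNNL regions
    mod = transform.PartitionGraph()(mod)           # outline Compiler="dnnl" functions
    print(mod)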