wenyuanbo / tic / Commits
Commit b7b00611
authored Sep 26, 2017 by Yuwei HU
committed by Tianqi Chen on May 29, 2018
update nnvm.runtime to tvm.contrib.graph_runtime (#41)
parent 007a06db
Showing 4 changed files with 13 additions and 22 deletions (+13 -22)

  nnvm/docs/api/python/index.rst             +0  -1
  nnvm/docs/api/python/runtime.rst           +0  -8
  nnvm/python/nnvm/compiler/build_module.py  +2  -2
  nnvm/tutorials/mobilenet_inference_gpu.py  +11 -11
nnvm/docs/api/python/index.rst

@@ -10,7 +10,6 @@ For user
    compiler
    frontend
-   runtime
    symbol
    graph
    top
nnvm/docs/api/python/runtime.rst (deleted, 100644 → 0)

-nnvm.runtime
--------------
-.. automodule:: nnvm.runtime
-.. autofunction:: nnvm.runtime.create
-.. autoclass:: nnvm.runtime.Module
-   :members:
nnvm/python/nnvm/compiler/build_module.py

@@ -4,9 +4,9 @@ from __future__ import absolute_import as _abs
 import logging
 import tvm
+from tvm.contrib import graph_runtime
 from . import graph_attr, graph_util
 from .. import graph as _graph
-from .. import runtime

 OPT_PASS_LEVEL = {
     "SimplifyInference": 2,
@@ -220,7 +220,7 @@ def _run_graph(graph, params):
     _, oshape = graph_util.infer_shape(graph, **shape)
     _, odtype = graph_util.infer_dtype(graph, **dtype)
     graph, libmod, _ = build(graph, target, shape, dtype)
-    m = runtime.create(graph, libmod, ctx)
+    m = graph_runtime.create(graph, libmod, ctx)
     set_input, run, get_output = m["set_input"], m["run"], m["get_output"]
     for k, v in params.items():
         set_input(k, tvm.nd.array(v))
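The two-line change above swaps `runtime.create` for `tvm.contrib.graph_runtime.create`; in both cases the returned module exposes its packed functions by string name, which is why the code fetches `m["set_input"]`, `m["run"]`, and `m["get_output"]`. For readers unfamiliar with that idiom, here is a minimal pure-Python stand-in of the lookup pattern — a hypothetical sketch, not TVM code:

```python
class FakeGraphModule:
    """Hypothetical stand-in for a graph_runtime module: packed
    functions are fetched by string name, as in m["set_input"]."""

    def __init__(self):
        self._inputs = {}
        self._output = None

    def _set_input(self, key, value):
        self._inputs[key] = value

    def _run(self):
        # Stand-in "inference": double every element of the input.
        self._output = [v * 2 for v in self._inputs["data"]]

    def _get_output(self, index, out):
        # Fill a caller-provided buffer, mirroring get_output(0, tvm.nd.empty(...)).
        out[:] = self._output
        return out

    def __getitem__(self, name):
        # String-keyed function lookup, the pattern used by the diff above.
        return {"set_input": self._set_input,
                "run": self._run,
                "get_output": self._get_output}[name]

m = FakeGraphModule()
set_input, run, get_output = m["set_input"], m["run"], m["get_output"]
set_input("data", [1.0, 2.0, 3.0])
run()
out = get_output(0, [0.0, 0.0, 0.0])
print(out)  # [2.0, 4.0, 6.0]
```

The real `graph_runtime.create(graph, libmod, ctx)` call returns a module backed by compiled code, but the by-name function access shown here is the same.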
nnvm/tutorials/mobilenet_inference_gpu.py
@@ -15,9 +15,9 @@ import nnvm.testing
 ######################################################################
 # Register the NVCC Compiler Option
 # ---------------------------------
-# NNVM optimizes the graph and relies on TVM to generate fast
-# GPU code, to get the maximum performance, we need to enable
-# nvcc's compiler hook. This gives better performance than nvrtc mode.
+# NNVM optimizes the graph and relies on TVM to generate fast GPU code.
+# To get the maximum performance, we need to enable nvcc's compiler hook.
+# This gives better performance than nvrtc mode.
 @tvm.register_func
 def tvm_callback_cuda_compile(code):
@@ -28,7 +28,7 @@ def tvm_callback_cuda_compile(code):
 # Prepare the Benchmark
 # ---------------------
 # We construct a standard imagenet inference benchmark.
-# We use nnvm's testing utility to produce the model description and random parameters that so the example does not
+# We use nnvm's testing utility to produce the model description and random parameters so that the example does not
 # depend on a specific front-end framework.
 #
 # .. note::
@@ -46,17 +46,17 @@ net, params = nnvm.testing.mobilenet.get_workload(
     batch_size=1, image_shape=image_shape)

 ######################################################################
-# Compile The Graph
+# Compile the Graph
 # -----------------
 # NNVM needs two things to compile a deep learning model:
 #
-# - net which is the graph representation of the computation
-# - params a dictionary of str to parameters.
+# - net: the graph representation of the computation
+# - params: a dictionary of str to parameters
 #
 # To compile the graph, we call the build function with the graph
 # configuration and parameters.
-# When parameters are provided, NNVM will pre-compute certain part of the graph if possible,
-# the new parameter set returned as the third return value.
+# When parameters are provided, NNVM will pre-compute certain part of the graph if possible (e.g. simplify batch normalization to scale shift),
+# and return the updated parameters.
 graph, lib, params = nnvm.compiler.build(
     net, target, shape={"data": data_shape}, params=params)
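The "simplify batch normalization to scale shift" example in the rewritten comment refers to a pre-computation on constant parameters: since gamma, beta, mean, and var are fixed at compile time, `gamma * (x - mean) / sqrt(var + eps) + beta` collapses to a single affine transform `scale * x + shift`. A small self-contained check of that algebra, using made-up scalar values (not NNVM code):

```python
import math

def batchnorm(x, gamma, beta, mean, var, eps=1e-5):
    # Textbook batch normalization applied to a single scalar.
    return gamma * (x - mean) / math.sqrt(var + eps) + beta

def fold_to_scale_shift(gamma, beta, mean, var, eps=1e-5):
    # Pre-compute once when gamma/beta/mean/var are fixed parameters;
    # this is the kind of folding the compiler can do ahead of time.
    scale = gamma / math.sqrt(var + eps)
    shift = beta - mean * scale
    return scale, shift

gamma, beta, mean, var = 1.5, 0.2, 0.4, 2.0
scale, shift = fold_to_scale_shift(gamma, beta, mean, var)
for x in (-1.0, 0.0, 3.0):
    assert abs(batchnorm(x, gamma, beta, mean, var) - (scale * x + shift)) < 1e-9
```

This is why `nnvm.compiler.build` can return an updated parameter set: folded constants like `scale` and `shift` replace the original batch-norm parameters.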
@@ -65,7 +65,7 @@ graph, lib, params = nnvm.compiler.build(
 # Run the Compiled Module
 # -----------------------
 #
-# To deploy the module, we call :any:`tvm.contrib.graph_runtime.create` passing in the graph the lib and context.
+# To deploy the module, we call :any:`tvm.contrib.graph_runtime.create` passing in the graph, the lib, and context.
 # Thanks to TVM, we can deploy the compiled module to many platforms and languages.
 # The deployment module is designed to contain minimum dependencies.
 # This example runs on the same machine.
@@ -79,5 +79,5 @@ module.set_input(**params)
 module.run()
 # get output
 out = module.get_output(0, tvm.nd.empty(out_shape))
-# Convert to numpy
+# convert to numpy
 out.asnumpy()