Commit 5195679c by tqchen Committed by Tianqi Chen

[DOCS] Improve docs naming, fix docs warnings

parent bb2b8620
...@@ -3,9 +3,9 @@ This folder contains various extension projects using TVM,
they also serve as examples on how to use TVM in your own project.
If you are interested in writing optimized kernels with TVM, check out [TOPI: TVM Operator Inventory](../topi).
If you are interested in end-to-end deep learning model compilation, check out [NNVM Compiler](https://github.com/dmlc/nnvm).
- [extension](extension) How to extend the TVM C++ API along with the Python API.
- [ios_rpc](ios_rpc) iOS RPC server.
- [android_rpc](android_rpc) Android RPC server.
- [benchmark](benchmark) Example end-to-end compilation benchmarks
- [howto_deploy](howto_deploy) Tutorial on how to deploy TVM with minimum code dependency.
...@@ -20,3 +20,4 @@ Python API
contrib
dev
topi
nnvm/index
NNVM API
========
This document contains the Python API of the NNVM compiler toolchain.
For user
.. toctree::
:maxdepth: 2
......
TOPI
----
.. automodule:: topi
List of operators
~~~~~~~~~~~~~~~~~
.. autosummary::
...@@ -52,8 +50,8 @@ Index
topi.broadcast_minimum
List of schedules
~~~~~~~~~~~~~~~~~
.. autosummary::
topi.generic.schedule_conv2d_nchw
......
Links to C++/JS API References
==============================
This page contains links to API references that are built with different doc build systems.
......
TVM Design and Developer Guide
==============================
Building a compiler stack for deep learning systems involves many systems-level design decisions.
In this part of the documentation, we share the rationale for the specific choices made when designing TVM.
.. toctree::
:maxdepth: 2
runtime
nnvm_json_spec
nnvm_overview
How to Deploy Compiled Modules
==============================
We provide an example on how to deploy TVM modules in [apps/howto_deploy](https://github.com/dmlc/tvm/tree/master/apps/howto_deploy)
To run the example, you can use the following command
...@@ -59,3 +59,124 @@ deploy_lib.so, deploy_graph.json, deploy_param.params will go to android target.
Refer [here](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/README.md#build-and-installation) to build a CPU/OpenCL flavor of the TVM runtime for the Android target.
Refer to this [java](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/app/src/main/java/ml/dmlc/tvm/android/demo/MainActivity.java) sample source for how to load a model and execute it via the Android Java TVM API.
Deploy NNVM Modules
-------------------
NNVM compiled modules are fully embedded in the TVM runtime as long as the ```GRAPH_RUNTIME``` option
is enabled in the TVM runtime. Check out the [TVM documentation](http://docs.tvmlang.org/) for
how to deploy the TVM runtime to your system.
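As a quick sanity check (this assumes the Makefile-based TVM build, and that `USE_GRAPH_RUNTIME` is the corresponding flag name in your `make/config.mk` template), you can confirm the option before deploying:

```bash
# confirm the graph runtime was enabled when TVM was configured
grep GRAPH_RUNTIME /path/to/tvm/config.mk
# expected: USE_GRAPH_RUNTIME = 1
```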
In a nutshell, we will need three items to deploy a compiled module.
Check out our tutorials on getting started with the NNVM compiler for more details.
- The graph JSON data, which contains the execution graph.
- The TVM module library of compiled functions.
- The parameter blobs for the stored parameters.
We can then use TVM's runtime API to deploy the compiled module.
Here is an example in Python.
```python
import tvm

# tvm module for compiled functions.
loaded_lib = tvm.module.load("deploy.so")
# json graph (`temp` is the temporary directory where the artifacts were saved
# during compilation; point these paths at wherever you stored them)
loaded_json = open(temp.relpath("deploy.json")).read()
# parameters in binary
loaded_params = bytearray(open(temp.relpath("deploy.params"), "rb").read())

# create the graph runtime module on the target device
fcreate = tvm.get_global_func("tvm.graph_runtime.create")
ctx = tvm.gpu(0)
gmodule = fcreate(loaded_json, loaded_lib, ctx.device_type, ctx.device_id)
set_input, get_output, run = gmodule["set_input"], gmodule["get_output"], gmodule["run"]

# `x_np` is the input numpy array and `shape` the expected output shape,
# both assumed to be prepared by the caller
set_input("x", tvm.nd.array(x_np))
gmodule["load_params"](loaded_params)
run()
out = tvm.nd.empty(shape)
get_output(0, out)
print(out.asnumpy())
```
An example in c++.
```cpp
#include <dlpack/dlpack.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/registry.h>
#include <tvm/runtime/packed_func.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <algorithm>
int main()
{
// tvm module for compiled functions
tvm::runtime::Module mod_syslib = tvm::runtime::Module::LoadFromFile("deploy.so");
// json graph
std::ifstream json_in("deploy.json", std::ios::in);
std::string json_data((std::istreambuf_iterator<char>(json_in)), std::istreambuf_iterator<char>());
json_in.close();
// parameters in binary
std::ifstream params_in("deploy.params", std::ios::binary);
std::string params_data((std::istreambuf_iterator<char>(params_in)), std::istreambuf_iterator<char>());
params_in.close();
// parameters need to be TVMByteArray type to indicate the binary data
TVMByteArray params_arr;
params_arr.data = params_data.c_str();
params_arr.size = params_data.length();
int dtype_code = kDLFloat;
int dtype_bits = 32;
int dtype_lanes = 1;
int device_type = kDLCPU;
int device_id = 0;
// get global function module for graph runtime
tvm::runtime::Module mod = (*tvm::runtime::Registry::Get("tvm.graph_runtime.create"))(json_data, mod_syslib, device_type, device_id);
DLTensor* x;
int in_ndim = 4;
int64_t in_shape[4] = {1, 3, 224, 224};
TVMArrayAlloc(in_shape, in_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &x);
// load image data saved in binary
std::ifstream data_fin("cat.bin", std::ios::binary);
data_fin.read(static_cast<char*>(x->data), 3 * 224 * 224 * 4);
// get the function from the module(set input data)
tvm::runtime::PackedFunc set_input = mod.GetFunction("set_input");
set_input("data", x);
// get the function from the module (load parameters)
tvm::runtime::PackedFunc load_params = mod.GetFunction("load_params");
load_params(params_arr);
// get the function from the module(run it)
tvm::runtime::PackedFunc run = mod.GetFunction("run");
run();
DLTensor* y;
int out_ndim = 1;
int64_t out_shape[1] = {1000, };
TVMArrayAlloc(out_shape, out_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &y);
// get the function from the module(get output data)
tvm::runtime::PackedFunc get_output = mod.GetFunction("get_output");
get_output(0, y);
// get the maximum position in output vector
auto y_iter = static_cast<float*>(y->data);
auto max_iter = std::max_element(y_iter, y_iter + 1000);
auto max_index = std::distance(y_iter, max_iter);
std::cout << "The maximum position in output vector is: " << max_index << std::endl;
TVMArrayFree(x);
TVMArrayFree(y);
return 0;
}
```
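One possible way to compile and run the C++ example above (the file name `deploy_example.cc` and the paths are illustrative; it assumes `libtvm_runtime.so` has already been built under the TVM tree):

```bash
TVM_ROOT=/path/to/tvm
g++ -std=c++11 -O2 -o deploy_example deploy_example.cc \
    -I${TVM_ROOT}/include -I${TVM_ROOT}/dlpack/include -I${TVM_ROOT}/dmlc-core/include \
    -L${TVM_ROOT}/lib -ltvm_runtime -ldl -lpthread
# make sure the runtime library can be found at run time
LD_LIBRARY_PATH=${TVM_ROOT}/lib ./deploy_example
```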
...@@ -22,37 +22,20 @@ git submodule update
## Build the Shared Library
Our goal is to build the shared libraries:
- On Linux/OSX the target libraries are `libtvm.so, libtvm_topi.so`
- On Windows the target libraries are `libtvm.dll, libtvm_topi.dll`
The minimal building requirement is
- A recent c++ compiler supporting C++ 11 (g++-4.8 or higher)
You can edit `make/config.mk` to change the compile options, and then build by
`make`. If everything goes well, we can go to the specific language installation section.
### Building on Windows
TVM support build via MSVC using cmake. The minimum required VS version is **Visual Studio Community 2015 Update 3**. In order to generate the VS solution file using cmake,
make sure you have a recent version of cmake added to your path and then from the tvm directory:
```bash
mkdir build
cd build
cmake -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DCMAKE_CONFIGURATION_TYPES="Release" ..
```
This will generate the VS project using the MSVC 14 64 bit generator. Open the .sln file in the build directory and build with Visual Studio.
### Customized Building
Install prerequisites first:
```bash
sudo apt-get update
sudo apt-get install -y python python-dev python-setuptools gcc libtinfo-dev zlib1g-dev
```
The minimal building requirement is
- A recent C++ compiler supporting C++11 (g++-4.8 or higher)
- We highly recommend building with LLVM to enable all the features.
- It is possible to build without the LLVM dependency if we only want to use CUDA/OpenCL.
The configuration of TVM can be modified by ```config.mk```
- First copy ```make/config.mk``` to the project root, on which
any local modification will be ignored by git, then modify the corresponding flags (a short sketch follows this list).
...@@ -62,8 +45,36 @@ The configuration of tvm can be modified by ```config.mk```
[LLVM Download Page](http://releases.llvm.org/download.html).
- Unzip to a certain location, modify ```config.mk``` to add ```LLVM_CONFIG=/path/to/your/llvm/bin/llvm-config```
- You can also use [LLVM Nightly Ubuntu Build](https://apt.llvm.org/)
- Note that the apt package appends the version number to ```llvm-config```.
For example, set ```LLVM_CONFIG=llvm-config-4.0``` if you installed the 4.0 package.
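A minimal sketch of that customization (appending the flag with `echo` is just one option; editing the copied file directly works equally well):

```bash
# copy the template to the project root; local edits there are ignored by git
cp make/config.mk .
# point TVM at a local LLVM, e.g. the apt package llvm-config-4.0
echo "LLVM_CONFIG = llvm-config-4.0" >> config.mk
```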
We can then build TVM by `make`.
After TVM is built, we can proceed to build NNVM using the following script.
```bash
cd nnvm
make -j4
```
This will create `libnnvm_compiler.so` under the `nnvm/lib` folder.
If everything goes well, we can go to the specific language installation section.
### Building on Windows
TVM supports building via MSVC using cmake. The minimum required VS version is **Visual Studio Community 2015 Update 3**.
In order to generate the VS solution file using cmake,
make sure you have a recent version of cmake added to your path and then from the tvm directory:
```bash
mkdir build
cd build
cmake -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DCMAKE_CONFIGURATION_TYPES="Release" ..
```
This will generate the VS project using the MSVC 14 64 bit generator.
Open the .sln file in the build directory and build with Visual Studio.
In order to build with LLVM on Windows, you will need to build LLVM from source.
You then need to build NNVM by running the same steps under the nnvm folder.
## Python Package Installation
...@@ -77,7 +88,7 @@ There are several ways to install the package:
The changes will be immediately reflected once you pull the code and rebuild the project (no need to call ```setup``` again)
```bash
export PYTHONPATH=/path/to/tvm/python:/path/to/tvm/topi/python:/path/to/tvm/nnvm/python:${PYTHONPATH}
```
2. Install tvm python bindings by `setup.py`:
...@@ -89,4 +100,5 @@ There are several ways to install the package:
# providing --user flag may trigger error during installation in such case.
cd python; python setup.py install --user; cd ..
cd topi/python; python setup.py install --user; cd ../..
cd nnvm/python; python setup.py install --user; cd ../..
```
### NNPACK for Multi-Core CPU Support in TVM
[NNPACK](https://github.com/Maratyszcza/NNPACK) is an acceleration package
for neural network computations, which can run on x86-64, ARMv7, or ARM64 architecture CPUs.
Using NNPACK, higher-level libraries like _MXNet_ can speed up
...@@ -118,27 +119,4 @@ after configuration use `make` to build TVM
```bash
make
make install
```
#### Python Package Installation
The python package for [tvm](https://github.com/dmlc/tvm) depends of [topi](https://github.com/dmlc/tvm/tree/master/topi).
The tvm python package is located at `tvm/python` and topi python package is located in `tvm/topi/python` folder.
There are several ways to install the package, in all these cases the TVM library and TOPI must be present in the python env:
1. Set the environment variable PYTHONPATH to tell python where to find the libraries. For example, assume we cloned tvm on the home directory ~. then we can added the following line in ~/.bashrc. It is recommended for developers who may change the codes. The changes will be immediately reflected once you pulled the code and rebuild the project (no need to call setup again)
```bash
export PYTHONPATH=/path/to/tvm/python:/path/to/tvm/topi/python:${PYTHONPATH}
```
2. Install tvm and topi python bindings by setup.py:
```bash
# install tvm package for the current user
cd topi/python
python setup.py install --user;
cd ../../python
python setup.py install --user;
```
TVM Documentation
=================
Get Started
-----------
Contents
--------
.. toctree::
:maxdepth: 1
self
how_to/install
tutorials/index
faq
how_to/deploy
how_to/integrate
how_to/contribute
faq
API Reference
-------------
.. toctree::
:maxdepth: 2
api/python/index
dev/index
api_links
Developer Guide
---------------
.. toctree::
:maxdepth: 2
dev/index
nnvm_top
Index
-----
.. toctree::
:maxdepth: 1
genindex
NNVM Core Tensor Operators
==========================
This page contains the list of core tensor operator primitives pre-defined in NNVM.
The core tensor operator primitives (``nnvm.top``) cover typical workloads in deep learning.
......
doxygen
_build
gen_modules
tutorials
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
rm -rf gen_modules
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/rabit.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/rabit.qhc"
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/rabit"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/rabit"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
The documentation of NNVM is generated with recommonmark and Sphinx.
- pip install sphinx>=1.5.5 sphinx-gallery sphinx_rtd_theme matplotlib Image recommonmark
- Build tvm first in the root folder.
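A possible local build flow, pieced together from the requirements above and the CI script in this repository (run from the nnvm source root; the PYTHONPATH entries assume tvm was built in the submodule folder):

```bash
pip install "sphinx>=1.5.5" sphinx-gallery sphinx_rtd_theme matplotlib Image recommonmark
cd docs
PYTHONPATH=../python:../tvm/python:../tvm/topi/python make html
# the generated pages land in docs/_build/html
```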
# -*- coding: utf-8 -*-
#
# documentation build configuration file, created by
# sphinx-quickstart on Thu Jul 23 19:40:08 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os, subprocess
import shlex
import recommonmark
import sphinx_gallery
from tvm.contrib import rpc, graph_runtime
from recommonmark.parser import CommonMarkParser
from recommonmark.transform import AutoStructify
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
sys.path.insert(0, os.path.join(curr_path, '../python/'))
# -- General configuration ------------------------------------------------
# General information about the project.
project = u'nnvm'
author = u'%s developers' % project
copyright = u'2017, %s' % author
github_doc_root = 'https://github.com/dmlc/nnvm/tree/master/docs/'
# add markdown parser
CommonMarkParser.github_doc_root = github_doc_root
source_parsers = {
'.md': CommonMarkParser
}
os.environ['NNVM_BUILD_DOC'] = '1'
# Version information.
import nnvm
version = nnvm.__version__
release = nnvm.__version__
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.autosummary',
'sphinx.ext.intersphinx',
'sphinx.ext.napoleon',
'sphinx.ext.mathjax',
'sphinx_gallery.gen_gallery',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = ['.rst', '.md']
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# generate autosummary even if no references
autosummary_generate = True
# The master toctree document.
master_doc = 'index'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme is set by the make target
html_theme = os.environ.get('NNVM_THEME', 'rtd')
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
# only import rtd theme and set it if want to build docs locally
if not on_rtd and html_theme == 'rtd':
import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# Output file base name for HTML help builder.
htmlhelp_basename = project + 'doc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, '%s.tex' % project, project,
author, 'manual'),
]
# hook for doxygen
def run_doxygen(folder):
"""Run the doxygen make command in the designated folder."""
try:
#retcode = subprocess.call("cd %s; make doc" % folder, shell=True)
retcode = subprocess.call("rm -rf _build/html/doxygen", shell=True)
retcode = subprocess.call("mkdir -p _build/html", shell=True)
retcode = subprocess.call("cp -rf doxygen/html _build/html/doxygen", shell=True)
if retcode < 0:
sys.stderr.write("doxygen terminated by signal %s" % (-retcode))
except OSError as e:
sys.stderr.write("doxygen execution failed: %s" % e)
intersphinx_mapping = {
'python': ('https://docs.python.org/{.major}'.format(sys.version_info), None),
'numpy': ('http://docs.scipy.org/doc/numpy/', None),
'scipy': ('http://docs.scipy.org/doc/scipy/reference', None),
'matplotlib': ('http://matplotlib.org/', None),
'tvm': ('http://docs.tvmlang.org/', None),
}
from sphinx_gallery.sorting import ExplicitOrder
examples_dirs = ['../tutorials/']
gallery_dirs = ['tutorials']
subsection_order = ExplicitOrder([])
def generate_doxygen_xml(app):
"""Run the doxygen make commands if we're on the ReadTheDocs server"""
run_doxygen('..')
def setup(app):
# Add hook for building doxygen xml when needed
# no c++ API for now
app.connect("builder-inited", generate_doxygen_xml)
app.add_config_value('recommonmark_config', {
'url_resolver': lambda url: github_doc_root + url,
'auto_doc_ref': True
}, True)
app.add_transform(AutoStructify)
sphinx_gallery_conf = {
'backreferences_dir': 'gen_modules/backreferences',
'doc_module': ('tvm', 'nnvm', 'numpy'),
'reference_url': {
'nnvm': None,
'tvm': 'http://docs.tvmlang.org',
'numpy': 'http://docs.scipy.org/doc/numpy-1.9.1'},
'examples_dirs': examples_dirs,
'gallery_dirs': gallery_dirs,
'subsection_order': subsection_order,
'find_mayavi_figures': False,
'filename_pattern': '.py',
'expected_failing_examples': []
}
Design Note
===========
In this part of the documentation, we share the rationale for the specific choices made when designing NNVM.
.. toctree::
:maxdepth: 2
overview
# Contribute to NNVM
NNVM has been developed by community members.
Everyone is more than welcome to contribute.
It is a way to make the project better and more accessible to more users.
NNVM compiler relies on TVM to deploy to different hardware backends.
You can improve the compiler performance by contributing to [TVM](https://github.com/dmlc/tvm)
- Please update [NEWS.md](https://github.com/dmlc/nnvm/blob/master/NEWS.md) to
add a note on your changes to the API or newly added documents.
## Guidelines
* [Submit Pull Request](#submit-pull-request)
* [Git Workflow Howtos](#git-workflow-howtos)
- [How to resolve conflict with master](#how-to-resolve-conflict-with-master)
- [How to combine multiple commits into one](#how-to-combine-multiple-commits-into-one)
- [What is the consequence of force push](#what-is-the-consequence-of-force-push)
* [Document](#document)
* [Testcases](#testcases)
* [Examples](#examples)
* [Core Library](#core-library)
* [Python Package](#python-package)
## Submit Pull Request
* Before submitting, please rebase your code on the most recent version of master. You can do it by
```bash
git remote add upstream [url to nnvm repo]
git fetch upstream
git rebase upstream/master
```
* If you have multiple small commits,
it might be good to merge them together (use git rebase then squash) into more meaningful groups.
* Send the pull request!
- Fix the problems reported by automatic checks
- If you are contributing a new module or new function, add a test.
## Git Workflow Howtos
### How to resolve conflict with master
- First rebase to most recent master
```bash
# The first two steps can be skipped after you do it once.
git remote add upstream [url to nnvm repo]
git fetch upstream
git rebase upstream/master
```
- Git may show some conflicts it cannot merge, say ```conflicted.py```.
- Manually modify the file to resolve the conflict.
- After you have resolved the conflict, mark it as resolved by
```bash
git add conflicted.py
```
- Then you can continue the rebase by
```bash
git rebase --continue
```
- Finally push to your fork; you may need to force push here.
```bash
git push --force
```
### How to combine multiple commits into one
Sometimes we want to combine multiple commits, especially when later commits are only fixes to previous ones,
to create a PR with a set of meaningful commits. You can do it with the following steps.
- Before doing so, configure the default editor of git if you haven't done so before.
```bash
git config core.editor the-editor-you-like
```
- Assuming we want to merge the last 3 commits, type the following commands
```bash
git rebase -i HEAD~3
```
- It will pop up a text editor. Set the first commit as ```pick```, and change later ones to ```squash```.
- After you save the file, another text editor will pop up asking you to modify the combined commit message.
- Push the changes to your fork; you need to force push.
```bash
git push --force
```
### Reset to the most recent master
You can always use git reset to reset your version to the most recent master.
Note that all your ***local changes will get lost***.
So only do it when you do not have local changes or when your pull request has just been merged.
```bash
git reset --hard [hash tag of master]
git push --force
```
### What is the consequence of force push
The previous two tips require a force push because we altered the history of the commits.
It is fine to force push to your own fork, as long as the changed commits are only yours.
## Testcases
- All the testcases are in tests
## Core Library
- Follow the Google C++ style for C++ code.
- We use doxygen to document all the interface code.
- You can reproduce the linter checks by typing ```make lint```
## Python Package
- Always add docstring to the new functions in numpydoc format.
- You can reproduce the linter checks by typing ```make lint```
Deploy Compiled Modules
=======================
NNVM compiled modules are fully embedded in the TVM runtime as long as the ```GRAPH_RUNTIME``` option
is enabled in the TVM runtime. Check out the [TVM documentation](http://docs.tvmlang.org/) for
how to deploy the TVM runtime to your system.
In a nutshell, we will need three items to deploy a compiled module.
Check out our tutorials on getting started with the NNVM compiler for more details.
- The graph JSON data, which contains the execution graph.
- The TVM module library of compiled functions.
- The parameter blobs for the stored parameters.
We can then use TVM's runtime API to deploy the compiled module.
Here is an example in Python.
```python
import tvm

# tvm module for compiled functions.
loaded_lib = tvm.module.load("deploy.so")
# json graph (`temp` is the temporary directory where the artifacts were saved
# during compilation; point these paths at wherever you stored them)
loaded_json = open(temp.relpath("deploy.json")).read()
# parameters in binary
loaded_params = bytearray(open(temp.relpath("deploy.params"), "rb").read())

# create the graph runtime module on the target device
fcreate = tvm.get_global_func("tvm.graph_runtime.create")
ctx = tvm.gpu(0)
gmodule = fcreate(loaded_json, loaded_lib, ctx.device_type, ctx.device_id)
set_input, get_output, run = gmodule["set_input"], gmodule["get_output"], gmodule["run"]

# `x_np` is the input numpy array and `shape` the expected output shape,
# both assumed to be prepared by the caller
set_input("x", tvm.nd.array(x_np))
gmodule["load_params"](loaded_params)
run()
out = tvm.nd.empty(shape)
get_output(0, out)
print(out.asnumpy())
```
An example in c++.
```cpp
#include <dlpack/dlpack.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/registry.h>
#include <tvm/runtime/packed_func.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <algorithm>
int main()
{
// tvm module for compiled functions
tvm::runtime::Module mod_syslib = tvm::runtime::Module::LoadFromFile("deploy.so");
// json graph
std::ifstream json_in("deploy.json", std::ios::in);
std::string json_data((std::istreambuf_iterator<char>(json_in)), std::istreambuf_iterator<char>());
json_in.close();
// parameters in binary
std::ifstream params_in("deploy.params", std::ios::binary);
std::string params_data((std::istreambuf_iterator<char>(params_in)), std::istreambuf_iterator<char>());
params_in.close();
// parameters need to be TVMByteArray type to indicate the binary data
TVMByteArray params_arr;
params_arr.data = params_data.c_str();
params_arr.size = params_data.length();
int dtype_code = kDLFloat;
int dtype_bits = 32;
int dtype_lanes = 1;
int device_type = kDLCPU;
int device_id = 0;
// get global function module for graph runtime
tvm::runtime::Module mod = (*tvm::runtime::Registry::Get("tvm.graph_runtime.create"))(json_data, mod_syslib, device_type, device_id);
DLTensor* x;
int in_ndim = 4;
int64_t in_shape[4] = {1, 3, 224, 224};
TVMArrayAlloc(in_shape, in_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &x);
// load image data saved in binary
std::ifstream data_fin("cat.bin", std::ios::binary);
data_fin.read(static_cast<char*>(x->data), 3 * 224 * 224 * 4);
// get the function from the module(set input data)
tvm::runtime::PackedFunc set_input = mod.GetFunction("set_input");
set_input("data", x);
// get the function from the module (load parameters)
tvm::runtime::PackedFunc load_params = mod.GetFunction("load_params");
load_params(params_arr);
// get the function from the module(run it)
tvm::runtime::PackedFunc run = mod.GetFunction("run");
run();
DLTensor* y;
int out_ndim = 1;
int64_t out_shape[1] = {1000, };
TVMArrayAlloc(out_shape, out_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &y);
// get the function from the module(get output data)
tvm::runtime::PackedFunc get_output = mod.GetFunction("get_output");
get_output(0, y);
// get the maximum position in output vector
auto y_iter = static_cast<float*>(y->data);
auto max_iter = std::max_element(y_iter, y_iter + 1000);
auto max_index = std::distance(y_iter, max_iter);
std::cout << "The maximum position in output vector is: " << max_index << std::endl;
TVMArrayFree(x);
TVMArrayFree(y);
return 0;
}
```
Installation Guide
==================
This page gives instructions on how to build and install the nnvm compiler package from
scratch on various systems. It consists of two steps:
1. First build the shared library from the C++ codes (`libnnvm_compiler.so` for linux/osx and `libnnvm_compiler.dll` for windows).
2. Setup for the language packages (e.g. Python Package).
To get started, clone the nnvm repo from GitHub. It is important to clone the submodules along with it, using the ```--recursive``` option.
```bash
git clone --recursive https://github.com/dmlc/nnvm
```
For Windows users who use GitHub tools, you can open the Git shell and type the following command.
```bash
git submodule init
git submodule update --recursive
```
The NNVM compiler depends on TVM and TOPI, so make sure you install them by following the [TVM documentation](http://docs.tvmlang.org/).
Note that it is necessary to build TVM with LLVM support to take full advantage of the NNVM compiler.
## Contents
- [Build the Shared Library](#build-the-shared-library)
- [Python Package Installation](#python-package-installation)
- [Solution to Installation Error](#solution-to-installation-error)
## Build the Shared Library
Our goal is to build the shared library:
- On Linux/OSX the target library is `libnnvm_compiler.so`
- On Windows the target library is `libnnvm_compiler.dll`
The minimal building requirement is
- A recent C++ compiler supporting C++11 (g++-4.8 or higher)
You can edit `make/config.mk` to change the compile options, and then build by
`make`, as sketched below. If everything goes well, we can go to the specific language installation section.
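For example, a minimal build could look like the following (the `-j` value is illustrative):

```bash
# edit make/config.mk first if you need non-default options, then build
make -j4
```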
### Building on Windows
NNVM supports building via MSVC using cmake. The minimum required VS version is **Visual Studio Community 2015 Update 3**.
In order to generate the VS solution file using cmake, make sure you have a recent version of cmake added to your path.
The NNVM compiler depends on TVM; please follow the [TVM documentation](http://docs.tvmlang.org/how_to/install.html#building-on-windows)
to build the TVM Windows library. You can build TVM in the submodule folder under nnvm.
After TVM is built, we can then start to build NNVM using the following commands.
```bash
mkdir build
cd build
cmake -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DCMAKE_CONFIGURATION_TYPES="Release" ..
```
This will generate the VS project using the MSVC 14 64 bit generator. Open the .sln file in the build directory and build with Visual Studio.
## Python Package Installation
The Python package is located at `python`.
There are several ways to install the package; in all these cases the TVM library must be present in the Python environment:
1. Set the environment variable `PYTHONPATH` to tell Python where to find
the library. For example, assume we cloned `nnvm` in the home directory
`~`; then we can add the following line in `~/.bashrc`.
It is ***recommended for developers*** who may change the code.
The changes will be immediately reflected once you pull the code and rebuild the project (no need to call ```setup``` again)
```bash
export PYTHONPATH=/path/to/nnvm/python:${PYTHONPATH}
```
2. Install nnvm python bindings by `setup.py`:
```bash
# install nnvm package for the current user
# NOTE: if you installed python via homebrew, --user is not needed during installation
# it will be automatically installed to your user directory.
# providing --user flag may trigger error during installation in such case.
cd python; python setup.py install --user; cd ..
```
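Either way, a quick sanity check (assuming TVM and its Python package are already installed as described above) is to import the bindings:

```bash
python -c "import tvm; import nnvm; print(nnvm.__version__)"
```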
## Solution to Installation Error
If you encounter a problem during the installation process, you can try updating the submodules to the latest commit set.
To update the submodules to the latest commit set, type the following command.
```bash
git submodule update --init --recursive
```
*WARNING: The default commit set in the submodules is the recommended setting. Using the latest commit set may lead to compilation errors or other issues.*
NNVM Documentation
==================
This is a document about NNVM and NNVM compiler.
Contents
--------
.. toctree::
:maxdepth: 1
self
how_to/install
tutorials/index
top
json_spec
how_to/contribute
how_to/deploy
api/python/index
dev/index
NNVM Examples
=============
This folder contains example snippets of running NNVM compilation.
- See also [Tutorials](../tutorials) for tutorials with detailed explanations.
FROM nvidia/cuda:8.0-cudnn7-devel
# Base scripts
RUN apt-get update --fix-missing
COPY install/ubuntu_install_core.sh /install/ubuntu_install_core.sh
RUN bash /install/ubuntu_install_core.sh
COPY install/ubuntu_install_python.sh /install/ubuntu_install_python.sh
RUN bash /install/ubuntu_install_python.sh
COPY install/ubuntu_install_llvm.sh /install/ubuntu_install_llvm.sh
RUN bash /install/ubuntu_install_llvm.sh
COPY install/ubuntu_install_opencl.sh /install/ubuntu_install_opencl.sh
RUN bash /install/ubuntu_install_opencl.sh
COPY install/ubuntu_install_python_package.sh /install/ubuntu_install_python_package.sh
RUN bash /install/ubuntu_install_python_package.sh
COPY install/ubuntu_install_sphinx.sh /install/ubuntu_install_sphinx.sh
RUN bash /install/ubuntu_install_sphinx.sh
# Fix recommonmark to latest version
RUN git clone https://github.com/rtfd/recommonmark
RUN cd recommonmark; python setup.py install
# Enable doxygen for c++ doc build
RUN apt-get update && apt-get install -y doxygen graphviz libprotobuf-dev protobuf-compiler
COPY install/ubuntu_install_java.sh /install/ubuntu_install_java.sh
RUN bash /install/ubuntu_install_java.sh
COPY install/ubuntu_install_nodejs.sh /install/ubuntu_install_nodejs.sh
RUN bash /install/ubuntu_install_nodejs.sh
COPY install/ubuntu_install_rocm.sh /install/ubuntu_install_rocm.sh
RUN bash /install/ubuntu_install_rocm.sh
COPY install/ubuntu_install_opengl.sh /install/ubuntu_install_opengl.sh
RUN bash /install/ubuntu_install_opengl.sh
COPY install/ubuntu_install_vulkan.sh /install/ubuntu_install_vulkan.sh
RUN bash /install/ubuntu_install_vulkan.sh
# DL Frameworks
COPY install/ubuntu_install_mxnet.sh /install/ubuntu_install_mxnet.sh
RUN bash /install/ubuntu_install_mxnet.sh
COPY install/ubuntu_install_onnx.sh /install/ubuntu_install_onnx.sh
RUN bash /install/ubuntu_install_onnx.sh
COPY install/ubuntu_install_coreml.sh /install/ubuntu_install_coreml.sh
RUN bash /install/ubuntu_install_coreml.sh
COPY install/ubuntu_install_keras.sh /install/ubuntu_install_keras.sh
RUN bash /install/ubuntu_install_keras.sh
COPY install/ubuntu_install_darknet.sh /install/ubuntu_install_darknet.sh
RUN bash /install/ubuntu_install_darknet.sh
RUN pip install Pillow
# Environment variables
ENV PATH=/usr/local/nvidia/bin:${PATH}
ENV PATH=/usr/local/cuda/bin:${PATH}
ENV CPLUS_INCLUDE_PATH=/usr/local/cuda/include:${CPLUS_INCLUDE_PATH}
ENV C_INCLUDE_PATH=/usr/local/cuda/include:${C_INCLUDE_PATH}
ENV LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/nvidia/lib64:${LIBRARY_PATH}
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}
ENV LD_LIBRARY_PATH=/opt/rocm/lib:${LD_LIBRARY_PATH}
ENV PATH=/node_modules/.bin:${PATH}
ENV VULKAN_SDK=/usr/local/VulkanSDK/1.0.65.0/x86_64
ENV PATH=${PATH}:${VULKAN_SDK}/bin
ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${VULKAN_SDK}/lib
ENV VK_LAYER_PATH=${VULKAN_SDK}/etc/explicit_layer.d
# For lint test
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y python-pip sudo
RUN apt-get install -y doxygen graphviz
RUN pip install cpplint pylint
# CI Build Scripts
This directory contains the files and setup instructions to run all tests.
## Run locally
To run locally, we need to first install
[docker](https://docs.docker.com/engine/installation/) and
[nvidia-docker](https://github.com/NVIDIA/nvidia-docker/wiki).
Then we can run the tasks defined in the [Jenkinsfile](../../Jenkinsfile) by
using [`ci_build.sh`](./ci_build.sh). For example
- lint the Python code
```bash
./ci_build.sh lint make pylint
```
- build the code with CUDA support
```bash
./ci_build.sh gpu tests/scripts/task_build.sh
```
- run the Python unit tests
```bash
./ci_build.sh gpu tests/scripts/task_python_test.sh
```
- build the documents. The results will be available at `docs/_build/html`
```bash
tests/ci_build/ci_build.sh gpu tests/scripts/task_python_docs.sh
```
#!/usr/bin/env bash
#
# Execute command within a docker container
#
# Usage: ci_build.sh <CONTAINER_TYPE> [--dockerfile <DOCKERFILE_PATH>] [-it]
# <COMMAND>
#
# CONTAINER_TYPE: Type of the docker container used to run the build: e.g.,
# (cpu | gpu)
#
# DOCKERFILE_PATH: (Optional) Path to the Dockerfile used for docker build. If
# this optional value is not supplied (via the --dockerfile
# flag), will use Dockerfile.CONTAINER_TYPE in default
#
# COMMAND: Command to be executed in the docker container
#
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Get the command line arguments.
CONTAINER_TYPE=$( echo "$1" | tr '[:upper:]' '[:lower:]' )
shift 1
# Dockerfile to be used in docker build
DOCKERFILE_PATH="${SCRIPT_DIR}/Dockerfile.${CONTAINER_TYPE}"
DOCKER_CONTEXT_PATH="${SCRIPT_DIR}"
if [[ "$1" == "--dockerfile" ]]; then
DOCKERFILE_PATH="$2"
DOCKER_CONTEXT_PATH=$(dirname "${DOCKERFILE_PATH}")
echo "Using custom Dockerfile path: ${DOCKERFILE_PATH}"
echo "Using custom docker build context path: ${DOCKER_CONTEXT_PATH}"
shift 2
fi
if [[ "$1" == "-it" ]]; then
CI_DOCKER_EXTRA_PARAMS+=('-it')
shift 1
fi
if [[ ! -f "${DOCKERFILE_PATH}" ]]; then
echo "Invalid Dockerfile path: \"${DOCKERFILE_PATH}\""
exit 1
fi
COMMAND=("$@")
# Validate command line arguments.
if [ "$#" -lt 1 ] || [ ! -e "${SCRIPT_DIR}/Dockerfile.${CONTAINER_TYPE}" ]; then
supported_container_types=$( ls -1 ${SCRIPT_DIR}/Dockerfile.* | \
sed -n 's/.*Dockerfile\.\([^\/]*\)/\1/p' | tr '\n' ' ' )
echo "Usage: $(basename $0) CONTAINER_TYPE COMMAND"
echo " CONTAINER_TYPE can be one of [${supported_container_types}]"
echo " COMMAND is a command (with arguments) to run inside"
echo " the container."
exit 1
fi
# Use nvidia-docker if the container is GPU.
if [[ "${CONTAINER_TYPE}" == *"gpu"* ]]; then
DOCKER_BINARY="nvidia-docker"
else
DOCKER_BINARY="docker"
fi
# Helper function to traverse directories up until given file is found.
function upsearch () {
test / == "$PWD" && return || \
test -e "$1" && echo "$PWD" && return || \
cd .. && upsearch "$1"
}
# Set up WORKSPACE and BUILD_TAG. Jenkins will set them for you or we pick
# reasonable defaults if you run it outside of Jenkins.
WORKSPACE="${WORKSPACE:-${SCRIPT_DIR}/../../}"
BUILD_TAG="${BUILD_TAG:-nnvm-ci}"
# Determine the docker image name
DOCKER_IMG_NAME="${BUILD_TAG}.${CONTAINER_TYPE}"
# Under Jenkins matrix build, the build tag may contain characters such as
# commas (,) and equal signs (=), which are not valid inside docker image names.
DOCKER_IMG_NAME=$(echo "${DOCKER_IMG_NAME}" | sed -e 's/=/_/g' -e 's/,/-/g')
# Convert to all lower-case, as per requirement of Docker image names
DOCKER_IMG_NAME=$(echo "${DOCKER_IMG_NAME}" | tr '[:upper:]' '[:lower:]')
# Print arguments.
echo "WORKSPACE: ${WORKSPACE}"
echo "CI_DOCKER_EXTRA_PARAMS: ${CI_DOCKER_EXTRA_PARAMS[@]}"
echo "COMMAND: ${COMMAND[@]}"
echo "CONTAINER_TYPE: ${CONTAINER_TYPE}"
echo "BUILD_TAG: ${BUILD_TAG}"
echo "DOCKER CONTAINER NAME: ${DOCKER_IMG_NAME}"
echo ""
# Build the docker container.
echo "Building container (${DOCKER_IMG_NAME})..."
docker build -t ${DOCKER_IMG_NAME} \
-f "${DOCKERFILE_PATH}" "${DOCKER_CONTEXT_PATH}"
# Check docker build status
if [[ $? != "0" ]]; then
echo "ERROR: docker build failed."
exit 1
fi
# Run the command inside the container.
echo "Running '${COMMAND[@]}' inside ${DOCKER_IMG_NAME}..."
# By default we cleanup - remove the container once it finish running (--rm)
# and share the PID namespace (--pid=host) so the process inside does not have
# pid 1 and SIGKILL is propagated to the process inside (jenkins can kill it).
echo ${DOCKER_BINARY}
${DOCKER_BINARY} run --rm --pid=host \
-v ${WORKSPACE}:/workspace \
-w /workspace \
-e "CI_BUILD_HOME=/workspace" \
-e "CI_BUILD_USER=$(id -u -n)" \
-e "CI_BUILD_UID=$(id -u)" \
-e "CI_BUILD_GROUP=$(id -g -n)" \
-e "CI_BUILD_GID=$(id -g)" \
${CI_DOCKER_EXTRA_PARAMS[@]} \
${DOCKER_IMG_NAME} \
bash tests/ci_build/with_the_same_user \
${COMMAND[@]}
# install libraries for building c++ core on ubuntu
apt-get install -y --no-install-recommends --force-yes \
git make libgtest-dev cmake wget unzip libtinfo-dev libz-dev\
libcurl4-openssl-dev libopenblas-dev g++ sudo
cd /usr/src/gtest && cmake CMakeLists.txt && make && cp *.a /usr/lib
# install the necessary dependencies: cffi, opencv
wget 'https://github.com/siju-samuel/darknet/blob/master/lib/libdarknet.so?raw=true' -O libdarknet.so
pip2 install opencv-python cffi
pip3 install opencv-python cffi
echo deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-4.0 main\
>> /etc/apt/sources.list.d/llvm.list
echo deb-src http://apt.llvm.org/xenial/ llvm-toolchain-xenial-4.0 main\
>> /etc/apt/sources.list.d/llvm.list
echo deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-5.0 main\
>> /etc/apt/sources.list.d/llvm.list
echo deb-src http://apt.llvm.org/xenial/ llvm-toolchain-xenial-5.0 main\
>> /etc/apt/sources.list.d/llvm.list
echo deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial main\
>> /etc/apt/sources.list.d/llvm.list
echo deb-src http://apt.llvm.org/xenial/ llvm-toolchain-xenial main\
>> /etc/apt/sources.list.d/llvm.list
wget -O - http://apt.llvm.org/llvm-snapshot.gpg.key|sudo apt-key add -
apt-get update && apt-get install -y --force-yes llvm-4.0 llvm-5.0 llvm-6.0
pip2 install onnx>=1.1.0
pip3 install onnx>=1.1.0
pip2 install http://download.pytorch.org/whl/cu75/torch-0.2.0.post3-cp27-cp27mu-manylinux1_x86_64.whl
pip2 install torchvision
pip3 install http://download.pytorch.org/whl/cu75/torch-0.2.0.post3-cp35-cp35m-manylinux1_x86_64.whl
pip3 install torchvision
# Install OpenCL runtime in nvidia docker.
apt-get install -y --no-install-recommends --force-yes \
ocl-icd-libopencl1 \
clinfo && \
rm -rf /var/lib/apt/lists/*
mkdir -p /etc/OpenCL/vendors && \
echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf && \
echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf
# install python and pip, don't modify this, modify install_python_package.sh
apt-get update && apt-get install -y python-pip python-dev python3-dev
# the version of pip shipped with ubuntu may be too old, install a recent version here
cd /tmp && wget https://bootstrap.pypa.io/get-pip.py && python3 get-pip.py && python2 get-pip.py
# install libraries for python package on ubuntu
pip2 install nose pylint numpy nose-timer cython decorator scipy tornado
pip3 install nose pylint numpy nose-timer cython decorator scipy tornado
pip install sphinx==1.6.2 sphinx-gallery sphinx_rtd_theme matplotlib Image commonmark>=0.7.3 docutils>=0.11
#!/usr/bin/env bash
# This script is a wrapper creating the same user inside the container as the one
# running ci_build.sh outside the container. It also sets the home directory
# for the user inside the container to match the same absolute path as the workspace
# outside of the container. Do not run this manually; it is
# intended to be called by ci_build.sh only.
set -e
COMMAND=("$@")
if ! touch /this_is_writable_file_system; then
echo "You can't write to your filesystem!"
echo "If you are in Docker you should check you do not have too many images" \
"with too many files in them. Docker has some issue with it."
exit 1
else
rm /this_is_writable_file_system
fi
getent group "${CI_BUILD_GID}" || addgroup --gid "${CI_BUILD_GID}" "${CI_BUILD_GROUP}"
getent passwd "${CI_BUILD_UID}" || adduser --gid "${CI_BUILD_GID}" --uid "${CI_BUILD_UID}" \
--gecos "${CI_BUILD_USER} (generated by with_the_same_user script)" \
--disabled-password --home "${CI_BUILD_HOME}" --quiet "${CI_BUILD_USER}"
usermod -a -G sudo "${CI_BUILD_USER}"
echo "${CI_BUILD_USER} ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-nopasswd-sudo
HOME=${CI_BUILD_HOME}\
sudo -u "#${CI_BUILD_UID}" --preserve-env\
PATH=${PATH}\
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}\
HOME=${CI_BUILD_HOME}\
${COMMAND[@]}
...@@ -16,11 +16,12 @@ def verify_onnx_forward_impl(graph_file, data_shape, out_shape):
def get_tvm_output(model, x, target, ctx, dtype='float32'):
new_sym, params = nnvm.frontend.from_onnx(model)
input_name = model.graph.input[0].name
shape_dict = {input_name: x.shape}
graph, lib, params = nnvm.compiler.build(new_sym, target, shape_dict, params=params)
m = graph_runtime.create(graph, lib, ctx)
# set inputs
m.set_input(input_name, tvm.nd.array(x.astype(dtype)))
m.set_input(**params)
m.run()
# get outputs
......
...@@ -9,7 +9,8 @@ def compare_graph(onnx_file, nnvm_sym, ishape):
onnx_sym, params = nnvm.frontend.from_onnx(onnx_model)
g1 = nnvm.graph.create(onnx_sym)
g2 = nnvm.graph.create(nnvm_sym)
input_name = onnx_model.graph.input[0].name
ishapes = {input_name: ishape}
graph_attr.set_shape_inputs(g1, ishapes)
graph_attr.set_shape_inputs(g2, ishapes)
g1 = g1.apply("InferShape").apply("SimplifyInference")
......
#!/bin/bash
export PYTHONPATH=python:tvm/python:tvm/topi/python
echo "Running ONNX frontend test..."
python -m nose -v tests/python/frontend/onnx || exit -1
echo "Running MXNet frontend test..."
python -m nose -v tests/python/frontend/mxnet || exit -1
echo "Running Keras frontend test..."
python -m nose -v tests/python/frontend/keras || exit -1
#!/bin/bash
echo "Check codestyle of c++ code..."
make cpplint || exit -1
echo "Check codestyle of python code..."
make pylint || exit -1
echo "Check documentations of c++ code..."
make doc 2>log.txt
(cat log.txt| grep -v ENABLE_PREPROCESSING |grep -v "unsupported tag") > logclean.txt
echo "---------Error Log----------"
cat logclean.txt
echo "----------------------------"
(cat logclean.txt|grep warning) && exit -1
(cat logclean.txt|grep error) && exit -1
rm logclean.txt
rm log.txt
#!/bin/bash
mkdir -p docs/_build/html
# C++ doc
make doc
rm -rf python/nnvm/*.pyc python/nnvm/*/*.pyc
cd docs
PYTHONPATH=../python:../tvm/python:../tvm/topi/python make html || exit -1
cd _build/html
tar czf docs.tgz *
mv docs.tgz ../../../
#!/bin/bash
if [ ${TASK} == "lint" ]; then
make lint || exit -1
echo "Check documentations of c++ code..."
make doc 2>log.txt
(cat log.txt| grep -v ENABLE_PREPROCESSING |grep -v "unsupported tag") > logclean.txt
echo "---------Error Log----------"
cat logclean.txt
echo "----------------------------"
(cat logclean.txt|grep warning) && exit -1
(cat logclean.txt|grep error) && exit -1
exit 0
fi
if [ ! ${TRAVIS_OS_NAME} == "osx" ]; then
# use g++-4.8 for linux
if [ ${CXX} == "g++" ]; then
export CXX=g++-4.8
fi
fi
if [ ${TASK} == "cpp_test" ]; then
make -f dmlc-core/scripts/packages.mk gtest
echo "GTEST_PATH="${CACHE_PREFIX} >> config.mk
make test || exit -1
for test in tests/cpp/*_test; do
./$test || exit -1
done
exit 0
fi
# run two test one for cython, one for ctypes
if [ ${TASK} == "python_test" ]; then
make clean
make -j all || exit -1
if [ ${TRAVIS_OS_NAME} == "osx" ]; then
python -m nose tests/python/unittest/ || exit -1
python3 -m nose tests/python/unittest/ || exit -1
else
nosetests tests/python/unittest/ || exit -1
nosetests3 tests/python/unittest/ || exit -1
fi
make cython || exit -1
make cython3 || exit -1
if [ ${TRAVIS_OS_NAME} == "osx" ]; then
python -m nose tests/python/unittest/ || exit -1
python3 -m nose tests/python/unittest/ || exit -1
else
nosetests tests/python/unittest/ || exit -1
nosetests3 tests/python/unittest/ || exit -1
fi
exit 0
fi
#!/bin/bash
if [ ${TRAVIS_OS_NAME} == "osx" ]; then
brew update
brew install python3
if [ ${TASK} == "python_test" ]; then
python -m pip install --user nose numpy cython
python3 -m pip install --user nose numpy cython
fi
fi
if [ ${TASK} == "lint" ]; then
pip install --user cpplint 'pylint==1.4.4' 'astroid==1.3.6'
fi
...@@ -100,4 +100,5 @@ def find_lib_path(name=None, search_path=None, optional=False): ...@@ -100,4 +100,5 @@ def find_lib_path(name=None, search_path=None, optional=False):
# current version # current version
__version__ = "0.3.0" # We use the version of the incoming release for code that is under development
__version__ = "0.4.0"
...@@ -28,13 +28,26 @@ RUN cd recommonmark; python setup.py install ...@@ -28,13 +28,26 @@ RUN cd recommonmark; python setup.py install
# Enable doxygen for c++ doc build # Enable doxygen for c++ doc build
RUN apt-get update && apt-get install -y doxygen graphviz libprotobuf-dev protobuf-compiler RUN apt-get update && apt-get install -y doxygen graphviz libprotobuf-dev protobuf-compiler
COPY install/ubuntu_install_java.sh /install/ubuntu_install_java.sh
RUN bash /install/ubuntu_install_java.sh
COPY install/ubuntu_install_nodejs.sh /install/ubuntu_install_nodejs.sh
RUN bash /install/ubuntu_install_nodejs.sh
COPY install/ubuntu_install_rocm.sh /install/ubuntu_install_rocm.sh
RUN bash /install/ubuntu_install_rocm.sh
COPY install/ubuntu_install_opengl.sh /install/ubuntu_install_opengl.sh
RUN bash /install/ubuntu_install_opengl.sh
COPY install/ubuntu_install_vulkan.sh /install/ubuntu_install_vulkan.sh
RUN bash /install/ubuntu_install_vulkan.sh
# DL Frameworks # DL Frameworks
COPY install/ubuntu_install_mxnet.sh /install/ubuntu_install_mxnet.sh COPY install/ubuntu_install_mxnet.sh /install/ubuntu_install_mxnet.sh
RUN bash /install/ubuntu_install_mxnet.sh RUN bash /install/ubuntu_install_mxnet.sh
COPY install/ubuntu_install_onnx.sh /install/ubuntu_install_onnx.sh
RUN bash /install/ubuntu_install_onnx.sh
COPY install/ubuntu_install_coreml.sh /install/ubuntu_install_coreml.sh COPY install/ubuntu_install_coreml.sh /install/ubuntu_install_coreml.sh
RUN bash /install/ubuntu_install_coreml.sh RUN bash /install/ubuntu_install_coreml.sh
...@@ -44,6 +57,9 @@ RUN bash /install/ubuntu_install_keras.sh ...@@ -44,6 +57,9 @@ RUN bash /install/ubuntu_install_keras.sh
COPY install/ubuntu_install_darknet.sh /install/ubuntu_install_darknet.sh COPY install/ubuntu_install_darknet.sh /install/ubuntu_install_darknet.sh
RUN bash /install/ubuntu_install_darknet.sh RUN bash /install/ubuntu_install_darknet.sh
COPY install/ubuntu_install_onnx.sh /install/ubuntu_install_onnx.sh
RUN bash /install/ubuntu_install_onnx.sh
RUN pip install Pillow RUN pip install Pillow
# Environment variables # Environment variables
...@@ -53,3 +69,10 @@ ENV CPLUS_INCLUDE_PATH=/usr/local/cuda/include:${CPLUS_INCLUDE_PATH} ...@@ -53,3 +69,10 @@ ENV CPLUS_INCLUDE_PATH=/usr/local/cuda/include:${CPLUS_INCLUDE_PATH}
ENV C_INCLUDE_PATH=/usr/local/cuda/include:${C_INCLUDE_PATH} ENV C_INCLUDE_PATH=/usr/local/cuda/include:${C_INCLUDE_PATH}
ENV LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/nvidia/lib64:${LIBRARY_PATH} ENV LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/nvidia/lib64:${LIBRARY_PATH}
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH} ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}
ENV LD_LIBRARY_PATH=/opt/rocm/lib:${LD_LIBRARY_PATH}
ENV PATH=/node_modules/.bin:${PATH}
ENV VULKAN_SDK=/usr/local/VulkanSDK/1.0.65.0/x86_64
ENV PATH=${PATH}:${VULKAN_SDK}/bin
ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${VULKAN_SDK}/lib
ENV VK_LAYER_PATH=${VULKAN_SDK}/etc/explicit_layer.d
# fix to certain version for now
pip2 install "onnx>=1.1.0" pip2 install "onnx>=1.1.0"
pip3 install "onnx>=1.1.0" pip3 install "onnx>=1.1.0"
......
...@@ -11,10 +11,10 @@ python -m nose -v nnvm/tests/python/compiler || exit -1 ...@@ -11,10 +11,10 @@ python -m nose -v nnvm/tests/python/compiler || exit -1
python3 -m nose -v nnvm/tests/python/compiler || exit -1 python3 -m nose -v nnvm/tests/python/compiler || exit -1
echo "Running ONNX frontend test..." echo "Running ONNX frontend test..."
python -m nose -v tests/python/frontend/onnx || exit -1 python -m nose -v nnvm/tests/python/frontend/onnx || exit -1
echo "Running MXNet frontend test..." echo "Running MXNet frontend test..."
python -m nose -v tests/python/frontend/mxnet || exit -1 python -m nose -v nnvm/tests/python/frontend/mxnet || exit -1
echo "Running Keras frontend test..." echo "Running Keras frontend test..."
python -m nose -v tests/python/frontend/keras || exit -1 python -m nose -v nnvm/tests/python/frontend/keras || exit -1
...@@ -238,8 +238,8 @@ print('%g secs/op' % cost) ...@@ -238,8 +238,8 @@ print('%g secs/op' % cost)
# The target_host should be 'llvm -target=aarch64-linux-gnu'. # The target_host should be 'llvm -target=aarch64-linux-gnu'.
# But here we set 'llvm' to enable this tutorial to run locally. # But here we set 'llvm' to enable this tutorial to run locally.
# #
# Also we need to build the runtime with the flag `USE_OPENCL=1`. # Also we need to build the runtime with the flag `USE_OPENCL=1` to
# build kernel (different from cpu, we need bind axis for OpenCL) # build the kernel (unlike the CPU case, we need to bind axes for OpenCL)
# #
# The following functions shows how we can deploy CL # The following functions shows how we can deploy CL
def deploy_cl(): def deploy_cl():
......
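As a reminder of what "bind axes" means here, below is a minimal sketch of an OpenCL build where the loop axes are bound to GPU block/thread indices before calling tvm.build. It assumes a TVM runtime compiled with `USE_OPENCL=1`; the vector size and split factor are illustrative.

.. code-block:: python

    import tvm

    n = 1024
    A = tvm.placeholder((n,), name='A')
    B = tvm.compute((n,), lambda i: A[i] + 1.0, name='B')
    s = tvm.create_schedule(B.op)

    # unlike the CPU path, an OpenCL kernel needs its axes bound to
    # block/thread indices before it can be built
    bx, tx = s[B].split(B.op.axis[0], factor=64)
    s[B].bind(bx, tvm.thread_axis("blockIdx.x"))
    s[B].bind(tx, tvm.thread_axis("threadIdx.x"))

    f = tvm.build(s, [A, B], target="opencl", target_host="llvm", name="vadd")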
Run and Deploy
--------------
Lanuage and Schedules Tensor Expression and Schedules
--------------------- -------------------------------
...@@ -101,7 +101,7 @@ port = 9090 ...@@ -101,7 +101,7 @@ port = 9090
if not use_mali: if not use_mali:
# run server locally # run server locally
host = 'localhost' host = 'localhost'
port = 9092 port = 9095
server = rpc.Server(host=host, port=port, use_popen=True) server = rpc.Server(host=host, port=port, use_popen=True)
###################################################################### ######################################################################
...@@ -182,12 +182,14 @@ out_shape = (batch_size, num_classes) ...@@ -182,12 +182,14 @@ out_shape = (batch_size, num_classes)
if use_mali: if use_mali:
target_host = "llvm -target=aarch64-linux-gnu -mattr=+neon" target_host = "llvm -target=aarch64-linux-gnu -mattr=+neon"
target = tvm.target.mali()
else: else:
target_host = "llvm" target_host = "llvm"
target = tvm.target.cuda()
# set target as `tvm.target.mali` instead of 'opencl' to enable # set target as `tvm.target.mali` instead of 'opencl' to enable
# target-specified optimization # target-specified optimization
graph, lib, params = nnvm.compiler.build(net, target=tvm.target.mali(), graph, lib, params = nnvm.compiler.build(net, target=target,
shape={"data": data_shape}, params=params, target_host=target_host) shape={"data": data_shape}, params=params, target_host=target_host)
# After `nnvm.compiler.build`, you will get three return values: graph, # After `nnvm.compiler.build`, you will get three return values: graph,
...@@ -212,7 +214,7 @@ remote = rpc.connect(host, port) ...@@ -212,7 +214,7 @@ remote = rpc.connect(host, port)
remote.upload(lib_fname) remote.upload(lib_fname)
rlib = remote.load_module('net.tar') rlib = remote.load_module('net.tar')
ctx = remote.cl(0) ctx = remote.cl(0) if use_mali else remote.gpu(0)
# upload the parameter # upload the parameter
rparams = {k: tvm.nd.array(v, ctx) for k, v in params.items()} rparams = {k: tvm.nd.array(v, ctx) for k, v in params.items()}
......
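Putting the pieces of the change above together, here is a minimal sketch of the combined target/context selection; the ResNet workload, shapes, and the `use_mali` flag are illustrative.

.. code-block:: python

    import tvm
    import nnvm.compiler
    import nnvm.testing

    use_mali = False                       # set True when deploying to a Mali board
    batch_size = 1
    data_shape = (batch_size, 3, 224, 224)
    net, params = nnvm.testing.resnet.get_workload(
        batch_size=batch_size, image_shape=data_shape[1:])

    if use_mali:
        # cross compile the host side for the ARM board, use the Mali target
        target_host = "llvm -target=aarch64-linux-gnu -mattr=+neon"
        target = tvm.target.mali()
    else:
        # local run: plain llvm host, CUDA device target
        target_host = "llvm"
        target = tvm.target.cuda()

    graph, lib, params = nnvm.compiler.build(
        net, target=target, shape={"data": data_shape},
        params=params, target_host=target_host)

    # after uploading the library over RPC you would then pick the device context:
    #   ctx = remote.cl(0) if use_mali else remote.gpu(0)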
...@@ -80,7 +80,8 @@ from tvm.contrib import graph_runtime as runtime ...@@ -80,7 +80,8 @@ from tvm.contrib import graph_runtime as runtime
# #
# Loading runtime library /home/YOURNAME/code/tvm/lib/libtvm_runtime.so... exec only # Loading runtime library /home/YOURNAME/code/tvm/lib/libtvm_runtime.so... exec only
# INFO:root:RPCServer: bind to 0.0.0.0:9090 # INFO:root:RPCServer: bind to 0.0.0.0:9090
#
###################################################################### ######################################################################
# For demonstration, we simply start an RPC server on the same machine, # For demonstration, we simply start an RPC server on the same machine,
# if :code:`use_rasp` is False. If you have set up the remote # if :code:`use_rasp` is False. If you have set up the remote
......
""" """
Tutorial for running Yolo-V2 in Darknet Models Compile YOLO-V2 in DarkNet Models
===================== =================================
**Author**: `Siju Samuel <https://siju-samuel.github.io/>`_ **Author**: `Siju Samuel <https://siju-samuel.github.io/>`_
This article is an introductory tutorial to deploy darknet models with NNVM. This article is an introductory tutorial to deploy darknet models with NNVM.
All the required models and libraries will be downloaded from the internet by the script.
All the required models and libraries will be downloaded from the internet
by the script.
This script runs the YOLO-V2 Model with the bounding boxes This script runs the YOLO-V2 Model with the bounding boxes
Darknet parsing has a dependency on the CFFI and CV2 libraries. Darknet parsing has a dependency on the CFFI and CV2 libraries.
Please install CFFI and CV2 before executing this script Please install CFFI and CV2 before executing this script
pip install cffi .. code-block:: bash
pip install opencv-python pip install cffi
pip install opencv-python
""" """
from ctypes import * from ctypes import *
import math import math
...@@ -40,12 +35,11 @@ else: ...@@ -40,12 +35,11 @@ else:
###################################################################### ######################################################################
# Set the parameters here. # Set the parameters here.
# Supported models alexnet, resnet50, resnet152, extraction, yolo # Supported models alexnet, resnet50, resnet152, extraction, yolo
###################################################################### #
model_name = 'yolo' model_name = 'yolo'
test_image = 'dog.jpg' test_image = 'dog.jpg'
target = 'llvm' target = 'llvm'
ctx = tvm.cpu(0) ctx = tvm.cpu(0)
######################################################################
def dlProgress(count, block_size, total_size): def dlProgress(count, block_size, total_size):
"""Show the download progress.""" """Show the download progress."""
...@@ -105,8 +99,8 @@ def download(url, path, overwrite=False, sizecompare=False): ...@@ -105,8 +99,8 @@ def download(url, path, overwrite=False, sizecompare=False):
###################################################################### ######################################################################
# Prepare cfg and weights file # Prepare cfg and weights file
# ----------------------------
# Pretrained model available https://pjreddie.com/darknet/imagenet/ # Pretrained model available https://pjreddie.com/darknet/imagenet/
# --------------------------------------------------------------------
# Download cfg and weights file first time. # Download cfg and weights file first time.
cfg_name = model_name + '.cfg' cfg_name = model_name + '.cfg'
...@@ -142,7 +136,7 @@ sym, params = nnvm.frontend.darknet.from_darknet(net, dtype) ...@@ -142,7 +136,7 @@ sym, params = nnvm.frontend.darknet.from_darknet(net, dtype)
###################################################################### ######################################################################
# Compile the model on NNVM # Compile the model on NNVM
# -------------------------------------------------------------------- # -------------------------
# compile the model # compile the model
data = np.empty([batch_size, net.c ,net.h, net.w], dtype); data = np.empty([batch_size, net.c ,net.h, net.w], dtype);
shape = {'data': data.shape} shape = {'data': data.shape}
...@@ -151,8 +145,8 @@ with nnvm.compiler.build_config(opt_level=2): ...@@ -151,8 +145,8 @@ with nnvm.compiler.build_config(opt_level=2):
graph, lib, params = nnvm.compiler.build(sym, target, shape, dtype, params) graph, lib, params = nnvm.compiler.build(sym, target, shape, dtype, params)
##################################################################### #####################################################################
# Save the json # Save the JSON
# -------------------------------------------------------------------- # -------------
def save_lib(): def save_lib():
#Save the graph, params and .so to the current directory #Save the graph, params and .so to the current directory
print("Saving the compiled output...") print("Saving the compiled output...")
...@@ -178,8 +172,8 @@ download(img_url, test_image) ...@@ -178,8 +172,8 @@ download(img_url, test_image)
data = nnvm.testing.darknet.load_image(test_image, net.w, net.h) data = nnvm.testing.darknet.load_image(test_image, net.w, net.h)
###################################################################### ######################################################################
# Execute on TVM # Execute on TVM Runtime
# -------------------------------------------------------------------- # ----------------------
# The process is no different from other examples. # The process is no different from other examples.
from tvm.contrib import graph_runtime from tvm.contrib import graph_runtime
......
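For the "Save the JSON" step above, a minimal sketch of persisting the three build outputs; the tiny relu-only network and the file names are only there so the snippet is self-contained.

.. code-block:: python

    import nnvm.compiler
    import nnvm.symbol as sym

    data = sym.Variable("data")
    net = sym.relu(data)
    graph, lib, params = nnvm.compiler.build(
        net, "llvm", shape={"data": (1, 3, 224, 224)})

    # save the compiled module, the graph definition, and the parameters
    lib.export_library("deploy.so")
    with open("deploy.json", "w") as f:
        f.write(graph.json())
    with open("deploy.params", "wb") as f:
        f.write(nnvm.compiler.save_param_dict(params))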
""" """
Quick Start - End-to-End Tutorial for NNVM/TVM Pipeline for OpenGL and WebGL Deploy Deep Learning Models to OpenGL and WebGL
============================================================================ ===============================================
**Author**: `Zhixun Tan <https://github.com/phisiart>`_ **Author**: `Zhixun Tan <https://github.com/phisiart>`_
This example shows how to build a neural network with NNVM python frontend and This example shows how to build a neural network with NNVM python frontend and
generate runtime library for WebGL running in a browser with TVM. (Thanks to generate runtime library for WebGL running in a browser with TVM.
Tianqi's `tutorial for cuda <http://nnvm.tvmlang.org/tutorials/get_started.html>`_ and To run this notebook, you need to install tvm and nnvm.
Ziheng's `tutorial for Raspberry Pi <http://nnvm.tvmlang.org/tutorials/deploy_model_on_rasp.html>`_)
To run this notebook, you need to install tvm and nnvm following
`these instructions <https://github.com/dmlc/nnvm/blob/master/docs/how_to/install.md>`_.
Notice that you need to build tvm with OpenGL. Notice that you need to build tvm with OpenGL.
""" """
...@@ -50,7 +47,7 @@ run_deploy_local = False ...@@ -50,7 +47,7 @@ run_deploy_local = False
run_deploy_rpc = False run_deploy_rpc = False
# To run the WebGL deploy demo, set this flag to True. # To run the WebGL deploy demo, set this flag to True.
run_deploy_web = True run_deploy_web = False
###################################################################### ######################################################################
# Download a Pre-trained Resnet18 Model # Download a Pre-trained Resnet18 Model
...@@ -432,7 +429,7 @@ def deploy_web(): ...@@ -432,7 +429,7 @@ def deploy_web():
from tvm.contrib import emscripten from tvm.contrib import emscripten
curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__))) curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(os.getcwd())))
working_dir = os.getcwd() working_dir = os.getcwd()
output_dir = os.path.join(working_dir, "resnet") output_dir = os.path.join(working_dir, "resnet")
if not os.path.exists(output_dir): if not os.path.exists(output_dir):
......
...@@ -66,7 +66,9 @@ x = np.array(img_y)[np.newaxis, np.newaxis, :, :] ...@@ -66,7 +66,9 @@ x = np.array(img_y)[np.newaxis, np.newaxis, :, :]
# We should be familiar with the process right now. # We should be familiar with the process right now.
import nnvm.compiler import nnvm.compiler
target = 'cuda' target = 'cuda'
shape_dict = {'input_0': x.shape} # assume first input name is data
input_name = sym.list_input_names()[0]
shape_dict = {input_name: x.shape}
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params) graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)
###################################################################### ######################################################################
...@@ -78,7 +80,7 @@ ctx = tvm.gpu(0) ...@@ -78,7 +80,7 @@ ctx = tvm.gpu(0)
dtype = 'float32' dtype = 'float32'
m = graph_runtime.create(graph, lib, ctx) m = graph_runtime.create(graph, lib, ctx)
# set inputs # set inputs
m.set_input('input_0', tvm.nd.array(x.astype(dtype))) m.set_input(input_name, tvm.nd.array(x.astype(dtype)))
m.set_input(**params) m.set_input(**params)
# execute # execute
m.run() m.run()
......
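The change above looks the input name up from the frontend symbol instead of hard-coding 'input_0'; a minimal sketch of that lookup on a toy symbol:

.. code-block:: python

    import nnvm.symbol as sym

    data = sym.Variable("data")
    net = sym.relu(data)
    # the first entry of list_input_names() is the data input; frontends may
    # name it 'input_0', 'data', or something model specific
    input_name = net.list_input_names()[0]
    print(input_name)   # -> 'data'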
""" """
Using external libraries with NNVM Using External Libraries in NNVM
===================== ================================
**Author**: `Masahiro Masuda <https://github.com/masahi>`_ **Author**: `Masahiro Masuda <https://github.com/masahi>`_
This is a short tutorial on how to use external libraries such as cuDNN, or cuBLAS with NNVM. This is a short tutorial on how to use external libraries such as cuDNN, or cuBLAS with NNVM.
...@@ -24,7 +24,7 @@ from nnvm.testing import utils ...@@ -24,7 +24,7 @@ from nnvm.testing import utils
###################################################################### ######################################################################
# Create a simple network # Create a simple network
# --------------------------------------------- # -----------------------
# Let's create a very simple network for demonstration. # Let's create a very simple network for demonstration.
# It consists of convolution, batch normalization, and ReLU activation. # It consists of convolution, batch normalization, and ReLU activation.
...@@ -40,7 +40,7 @@ net, params = utils.create_workload(simple_net, batch_size, data_shape[1:]) ...@@ -40,7 +40,7 @@ net, params = utils.create_workload(simple_net, batch_size, data_shape[1:])
###################################################################### ######################################################################
# Build and run with cuda backend # Build and run with cuda backend
# --------------------------------------------- # -------------------------------
# We build and run this network with cuda backend, as usual. # We build and run this network with cuda backend, as usual.
# By setting the logging level to DEBUG, the result of NNVM graph compilation will be dumped as pseudo code. # By setting the logging level to DEBUG, the result of NNVM graph compilation will be dumped as pseudo code.
import logging import logging
...@@ -151,7 +151,7 @@ out_cuda = out.asnumpy() ...@@ -151,7 +151,7 @@ out_cuda = out.asnumpy()
###################################################################### ######################################################################
# Use cuDNN for a convolutional layer # Use cuDNN for a convolutional layer
# --------------------------------------------- # -----------------------------------
# We can use cuDNN to replace convolution kernels with cuDNN ones. # We can use cuDNN to replace convolution kernels with cuDNN ones.
# To do that, all we need to do is to append the option " -libs=cudnn" to the target string. # To do that, all we need to do is to append the option " -libs=cudnn" to the target string.
net, params = utils.create_workload(simple_net, batch_size, data_shape[1:]) net, params = utils.create_workload(simple_net, batch_size, data_shape[1:])
...@@ -192,14 +192,14 @@ out_cudnn = out.asnumpy() ...@@ -192,14 +192,14 @@ out_cudnn = out.asnumpy()
###################################################################### ######################################################################
# Verify the result # Verify the result
# --------------------------------------------- # -----------------
# We can check that the results of two runs match. # We can check that the results of two runs match.
np.testing.assert_allclose(out_cuda, out_cudnn, rtol=1e-5) np.testing.assert_allclose(out_cuda, out_cudnn, rtol=1e-5)
##################################################################### #####################################################################
# Conclusion # Conclusion
# --------------------------------------------- # ----------
# This tutorial covered the usage of cuDNN with NNVM. # This tutorial covered the usage of cuDNN with NNVM.
# We also have support for cuBLAS. If cuBLAS is enabled, it will be used inside a fully connected layer (nnvm.symbol.dense). # We also have support for cuBLAS. If cuBLAS is enabled, it will be used inside a fully connected layer (nnvm.symbol.dense).
# To use cuBLAS, set a target string as "cuda -libs=cublas". # To use cuBLAS, set a target string as "cuda -libs=cublas".
......
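A minimal sketch of the target-string switch the tutorial describes; the toy convolution network and its shapes are illustrative.

.. code-block:: python

    import nnvm.compiler
    import nnvm.symbol as sym
    from nnvm.testing import utils

    data = sym.Variable("data")
    simple_net = sym.conv2d(data=data, kernel_size=(3, 3), channels=16,
                            padding=(1, 1), use_bias=True)
    simple_net = sym.relu(simple_net)

    batch_size = 1
    data_shape = (batch_size, 3, 224, 224)
    net, params = utils.create_workload(simple_net, batch_size, data_shape[1:])

    # "-libs=cudnn" offloads convolutions to cuDNN;
    # "cuda -libs=cublas" would offload dense layers to cuBLAS instead
    target = "cuda -libs=cudnn"
    graph, lib, params = nnvm.compiler.build(
        net, target, shape={"data": data_shape}, params=params)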
""" """
Quick Start - End-to-End Tutorial for NNVM/TVM Pipeline Quick Start Tutorial for Compiling Deep Learning Models
======================================================= =======================================================
**Author**: `Yao Wang <https://github.com/kevinthesun>`_ **Author**: `Yao Wang <https://github.com/kevinthesun>`_
This example shows how to build a neural network with NNVM python frontend and This example shows how to build a neural network with NNVM python frontend and
generate runtime library for Nvidia GPU and Raspberry Pi with TVM. (Thanks to generate runtime library for Nvidia GPU and Raspberry Pi with TVM.
Tianqi's `tutorial for cuda <http://nnvm.tvmlang.org/tutorials/get_started.html>`_ and To run this notebook, you need to install tvm and nnvm.
Ziheng's `tutorial for Raspberry Pi <http://nnvm.tvmlang.org/tutorials/deploy_model_on_rasp.html>`_)
To run this notebook, you need to install tvm and nnvm following
`these instructions <https://github.com/dmlc/nnvm/blob/master/docs/how_to/install.md>`_.
Notice that you need to build tvm with cuda and llvm. Notice that you need to build tvm with cuda and llvm.
""" """
###################################################################### ######################################################################
# Overview for Supported Hardware Backend of TVM # Overview for Supported Hardware Backend of TVM
# ----------------------------- # ----------------------------------------------
# The image below shows hardware backend currently supported by TVM: # The image below shows hardware backend currently supported by TVM:
# #
# .. image:: https://github.com/dmlc/web-data/raw/master/tvm/tutorial/tvm_support_list.png # .. image:: https://github.com/dmlc/web-data/raw/master/tvm/tutorial/tvm_support_list.png
...@@ -52,7 +49,7 @@ print(net.debug_str()) ...@@ -52,7 +49,7 @@ print(net.debug_str())
###################################################################### ######################################################################
# Compilation # Compilation
# ---------------------------- # -----------
# Next step is to compile the model using the NNVM/TVM pipeline. # Next step is to compile the model using the NNVM/TVM pipeline.
# Users can specify the optimization level of the compilation. # Users can specify the optimization level of the compilation.
# Currently this value can be 0 to 2, which corresponds to # Currently this value can be 0 to 2, which corresponds to
...@@ -120,7 +117,7 @@ print(out.asnumpy()[0][0:10]) ...@@ -120,7 +117,7 @@ print(out.asnumpy()[0][0:10])
###################################################################### ######################################################################
# Compile and Deploy the Model to Raspberry Pi Remotely with RPC # Compile and Deploy the Model to Raspberry Pi Remotely with RPC
# ------------------------------ # --------------------------------------------------------------
# Following the steps above, we can also compile the model for Raspberry Pi. # Following the steps above, we can also compile the model for Raspberry Pi.
# TVM provides rpc module to help with remote deploying. # TVM provides rpc module to help with remote deploying.
# #
...@@ -145,7 +142,7 @@ port = 9090 ...@@ -145,7 +142,7 @@ port = 9090
if not use_rasp: if not use_rasp:
# run server locally # run server locally
host = 'localhost' host = 'localhost'
port = 9090 port = 9099
server = rpc.Server(host=host, port=port, use_popen=True) server = rpc.Server(host=host, port=port, use_popen=True)
# compile and save model library # compile and save model library
...@@ -190,4 +187,3 @@ print(out.asnumpy()[0][0:10]) ...@@ -190,4 +187,3 @@ print(out.asnumpy()[0][0:10])
if not use_rasp: if not use_rasp:
# terminate the local server # terminate the local server
server.terminate() server.terminate()
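The port change above just avoids clashing with other tutorials that start a local RPC server on 9090. A minimal sketch of the local-server pattern, with an illustrative port:

.. code-block:: python

    from tvm.contrib import rpc

    # for demonstration, run the RPC server on the local machine;
    # on a real Raspberry Pi you would start it on the board instead
    host = 'localhost'
    port = 9099          # any free port works; 9090 is used elsewhere
    server = rpc.Server(host=host, port=port, use_popen=True)

    remote = rpc.connect(host, port)
    # ... upload the compiled library, load it, and run, as in the tutorial ...
    server.terminate()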
Optimize Operators Optimize Tensor Operators
------------------ -------------------------
...@@ -174,7 +174,7 @@ print(tvm.lower(s, [A, B, C], simple_mode=True)) ...@@ -174,7 +174,7 @@ print(tvm.lower(s, [A, B, C], simple_mode=True))
################################################################################################### ###################################################################################################
# Loop Permutation # Loop Permutation
# ------------- # ----------------
# If we look at the above IR, we can see the inner loop row data is vectorized and # If we look at the above IR, we can see the inner loop row data is vectorized and
# B is transformed into PackedB. The traversal of PackedB is sequential now. # B is transformed into PackedB. The traversal of PackedB is sequential now.
# So we will look at the access pattern of A. In current schedule, A is accessed column by column # So we will look at the access pattern of A. In current schedule, A is accessed column by column
...@@ -262,7 +262,7 @@ print(tvm.lower(s, [A, B, C], simple_mode=True)) ...@@ -262,7 +262,7 @@ print(tvm.lower(s, [A, B, C], simple_mode=True))
################################################################################################ ################################################################################################
# Write cache for blocks # Write cache for blocks
# -------- # ----------------------
# After blocking, the program will write result to C block by block, the access pattern # After blocking, the program will write result to C block by block, the access pattern
# is not sequential. So we can use a sequential cache array to hold the block results and # is not sequential. So we can use a sequential cache array to hold the block results and
# write to C when all the block results are ready. # write to C when all the block results are ready.
...@@ -308,7 +308,7 @@ print(tvm.lower(s, [A, B, C], simple_mode=True)) ...@@ -308,7 +308,7 @@ print(tvm.lower(s, [A, B, C], simple_mode=True))
################################################################################################### ###################################################################################################
# Parallel # Parallel
# ------------- # --------
# Furthermore, we can also utilize multi-core processors to do the thread-level parallelization. # Furthermore, we can also utilize multi-core processors to do the thread-level parallelization.
s = tvm.create_schedule(C.op) s = tvm.create_schedule(C.op)
......
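A compressed sketch of the techniques touched above (blocking with a write cache, loop permutation, and parallelization); the matrix size, block size, and split factor are illustrative.

.. code-block:: python

    import tvm

    N, bn = 1024, 32
    A = tvm.placeholder((N, N), name='A')
    B = tvm.placeholder((N, N), name='B')
    k = tvm.reduce_axis((0, N), 'k')
    C = tvm.compute((N, N),
                    lambda x, y: tvm.sum(A[x, k] * B[k, y], axis=k),
                    name='C')

    s = tvm.create_schedule(C.op)
    # write cache: accumulate each block in CC, then copy it into C
    CC = s.cache_write(C, 'global')
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
    s[CC].compute_at(s[C], yo)
    xc, yc = s[CC].op.axis
    kaxis, = s[CC].op.reduce_axis
    ko, ki = s[CC].split(kaxis, factor=4)
    # loop permutation: keep the innermost traversal sequential
    s[CC].reorder(ko, xc, ki, yc)
    # parallelize the outermost blocked axis across cores
    s[C].parallel(xo)
    print(tvm.lower(s, [A, B, C], simple_mode=True))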