Commit 84824ae3 by Tianqi Chen Committed by GitHub

[DOCS] Improve documents on deployment (#1412)

* [DOCS] Improve documents on deployment

* minor updates
parent 22a21995
.. _deploy-and-integration:
Deploy and Integration
======================
...@@ -6,7 +8,42 @@ as well as how to integrate it with your project.
.. image:: http://www.tvm.ai/images/release/tvm_flexible.png
Unlike traditional deep learning frameworks, the TVM stack is divided into two major components:
- TVM compiler, which does all the compilation and optimizations
- TVM runtime, which runs on the target devices.
In order to integrate the compiled module, we **do not** need to build the entire TVM stack on the target device. We only need to build the TVM compiler stack on the desktop and use it to cross-compile modules that are deployed on the target device.
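As a rough illustration of the desktop side, here is a minimal sketch of cross-compiling a trivial function and exporting it as a shared library for the device. The target string and the `aarch64-linux-gnu-g++` cross compiler are assumptions about your board and toolchain, and the API names follow recent TVM releases.

.. code:: python

    import tvm
    from tvm import te
    from tvm.contrib import cc

    # A trivial compute definition standing in for a real workload.
    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
    s = te.create_schedule(B.op)

    # Cross-compile on the desktop for an assumed 64-bit ARM Linux target.
    target = "llvm -mtriple=aarch64-linux-gnu"
    fadd = tvm.build(s, [A, B], target=target, name="add_one")

    # Export a shared library that only needs the TVM runtime on the device.
    # The cross compiler below is an assumption about the installed toolchain.
    fadd.export_library("add_one.so", cc.cross_compiler("aarch64-linux-gnu-g++"))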
We only need to use a light-weight runtime API that can be integrated into various platforms.
For example, you can run the following commands to build the runtime library
on a Linux-based embedded system such as a Raspberry Pi:
.. code:: bash

    git clone --recursive https://github.com/dmlc/tvm
    cd tvm
    mkdir build
    cp cmake/config.cmake build
    cd build
    cmake ..
    make runtime
Note that we type `make runtime` to build only the runtime library.
If you want to include additional runtimes such as OpenCL,
you can modify `config.cmake` to enable these options (for example, set `USE_OPENCL` to `ON`).
After you get the TVM runtime library, you can link your application against it and load the compiled modules on the target device.
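For example, loading such a module through the Python runtime API on the device might look roughly like the sketch below. It assumes `add_one.so` from the earlier sketch has been copied to the device; the API names follow recent TVM releases (older versions expose `tvm.module.load` instead).

.. code:: python

    import numpy as np
    import tvm

    # Load the cross-compiled module with the lightweight runtime.
    mod = tvm.runtime.load_module("add_one.so")
    add_one = mod["add_one"]

    # Run it on the device CPU.
    dev = tvm.cpu(0)
    a = tvm.nd.array(np.arange(10, dtype="float32"), dev)
    b = tvm.nd.array(np.zeros(10, dtype="float32"), dev)
    add_one(a, b)
    print(b.numpy())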
The easiest and recommended way to test, tune, and benchmark TVM kernels on
embedded devices is through TVM's RPC API.
Here are the links to the related tutorials; a minimal RPC sketch follows the list.
- :ref:`tutorial-cross-compilation-and-rpc`
- :ref:`tutorial-deploy-model-on-mali-gpu`
- :ref:`tutorial-deploy-model-on-rasp`
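For reference, a round trip through the RPC API looks roughly like the sketch below. The device address and port are placeholders, it assumes an RPC server is already running on the device (as set up in the cross-compilation tutorial), and the API names follow recent TVM releases.

.. code:: python

    import numpy as np
    import tvm
    from tvm import rpc

    # Connect to the RPC server running on the device (placeholder address/port).
    remote = rpc.connect("192.168.1.10", 9090)

    # Upload the cross-compiled library and load it on the remote side.
    remote.upload("add_one.so")
    rmod = remote.load_module("add_one.so")
    add_one = rmod["add_one"]

    # Allocate arrays on the remote device and run the kernel there.
    dev = remote.cpu(0)
    a = tvm.nd.array(np.arange(10, dtype="float32"), dev)
    b = tvm.nd.array(np.zeros(10, dtype="float32"), dev)
    add_one(a, b)
    print(b.numpy())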
After you have finished tuning and benchmarking, you might need to deploy the model on the
target device without relying on RPC. See the following resources on how to do so.
.. toctree::
   :maxdepth: 2
...
...@@ -2,6 +2,9 @@ Installation
============
To install TVM, please read :ref:`install-from-source`.
If you are interested in deploying to mobile/embedded devices,
you do not need to install the entire TVM stack on your device.
Instead, you only need the runtime; please read :ref:`deploy-and-integration`.
If you would like to quickly try out TVM or run demos and tutorials, check out :ref:`docker-images`.
.. toctree::
...
""" """
.. _tutorial-cross-compilation-and-rpc:
Cross Compilation and RPC
=========================
**Author**: `Ziheng Jiang <https://github.com/ZihengJiang/>`_
...
""" """
.. _tutorial-deploy-model-on-mali-gpu:
Deploy the Pretrained Model on ARM Mali GPU
===========================================
**Author**: `Lianmin Zheng <https://lmzheng.net/>`_, `Ziheng Jiang <https://ziheng.org/>`_
This is an example of using NNVM to compile a ResNet model and
...
""" """
.. _tutorial-deploy-model-on-rasp:
Deploy the Pretrained Model on Raspberry Pi
===========================================
**Author**: `Ziheng Jiang <https://ziheng.org/>`_
...
""" """
.. _tutorial-nnvm-quick-start:
Quick Start Tutorial for Compiling Deep Learning Models
=======================================================
**Author**: `Yao Wang <https://github.com/kevinthesun>`_
...