Unverified Commit 1f6c498b by Tianqi Chen Committed by GitHub

[DOCS] Migrate some markdowns to rst, fix sphinx3 warnings (#5416)

* [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings

* Add note block
parent d81a4fa1
@@ -23,28 +23,3 @@ tvm.runtime
   :imported-members:
   :exclude-members: NDArray
   :autosummary:

.. autoclass:: tvm.runtime.PackedFunc
   :members:
   :inherited-members:

.. autofunction:: tvm.register_func
.. autofunction:: tvm.get_global_func

.. autoclass:: tvm.runtime.Module
   :members:

.. autofunction:: tvm.runtime.load_module
.. autofunction:: tvm.runtime.system_lib
.. autofunction:: tvm.runtime.enabled

.. autoclass:: tvm.runtime.Object
   :members:

.. autofunction:: tvm.register_object
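As a quick illustration of the global-function API listed above, a minimal sketch (the registered name ``demo.add_one`` is made up):

.. code:: python

   import tvm

   # Register a Python function as a global PackedFunc.
   @tvm.register_func("demo.add_one")
   def add_one(x):
       return x + 1

   # Look it up again and call it through the PackedFunc interface.
   f = tvm.get_global_func("demo.add_one")
   assert f(10) == 11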
<!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!--- or more contributor license agreements. See the NOTICE file -->
<!--- distributed with this work for additional information -->
<!--- regarding copyright ownership. The ASF licenses this file -->
<!--- to you under the Apache License, Version 2.0 (the -->
<!--- "License"); you may not use this file except in compliance -->
<!--- with the License. You may obtain a copy of the License at -->
<!--- http://www.apache.org/licenses/LICENSE-2.0 -->
<!--- Unless required by applicable law or agreed to in writing, -->
<!--- software distributed under the License is distributed on an -->
<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
<!--- KIND, either express or implied. See the License for the -->
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->
# Deploy to Android
## Build model for Android Target
Relay compilation of a model for the Android target can follow the same approach as android_rpc.
The code below saves the compilation outputs that are required on the Android target.
```python
# lib, graph and params come from the Relay compilation step
# (see the android_rpc app).
from tvm import relay
from tvm.contrib import ndk

lib.export_library("deploy_lib.so", ndk.create_shared)
with open("deploy_graph.json", "w") as fo:
    fo.write(graph.json())
with open("deploy_param.params", "wb") as fo:
    fo.write(relay.save_param_dict(params))
```
deploy_lib.so, deploy_graph.json, and deploy_param.params are then copied to the Android target.
## TVM Runtime for Android Target
Refer to [the build guide](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/README.md#build-and-installation) for how to build the CPU/OpenCL flavor of the TVM runtime for the Android target.
See this [Java](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java) sample source for how to load the model and execute it through the Android Java TVM API.
.. Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
.. http://www.apache.org/licenses/LICENSE-2.0
.. Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
Deploy to Android
=================
Build model for Android Target
------------------------------
Relay compilation of a model for the Android target can follow the same approach as android_rpc.
The code below saves the compilation outputs that are required on the Android target.
.. code:: python

   # lib, graph and params come from the Relay compilation step.
   from tvm import relay
   from tvm.contrib import ndk

   lib.export_library("deploy_lib.so", ndk.create_shared)
   with open("deploy_graph.json", "w") as fo:
       fo.write(graph.json())
   with open("deploy_param.params", "wb") as fo:
       fo.write(relay.save_param_dict(params))
deploy_lib.so, deploy_graph.json, and deploy_param.params are then copied to the Android target.
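For context, a rough sketch of the compilation step that produces ``graph``, ``lib`` and ``params`` above; the frontend import and the exact target string are assumptions, not fixed API:

.. code:: python

   from tvm import relay
   from tvm.contrib import ndk

   # `mod` and `params` are assumed to come from a frontend importer,
   # e.g. relay.frontend.from_onnx(...).
   target = "llvm -mtriple=arm64-linux-android"
   graph, lib, params = relay.build(mod, target=target, params=params)
   lib.export_library("deploy_lib.so", ndk.create_shared)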
TVM Runtime for Android Target
------------------------------
Refer to `the build guide <https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/README.md#build-and-installation>`_ for how to build the CPU/OpenCL flavor of the TVM runtime for the Android target.
See this `Java <https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java>`_ sample source for how to load the model and execute it through the Android Java TVM API.
<!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!--- or more contributor license agreements. See the NOTICE file -->
<!--- distributed with this work for additional information -->
<!--- regarding copyright ownership. The ASF licenses this file -->
<!--- to you under the Apache License, Version 2.0 (the -->
<!--- "License"); you may not use this file except in compliance -->
<!--- with the License. You may obtain a copy of the License at -->
<!--- http://www.apache.org/licenses/LICENSE-2.0 -->
<!--- Unless required by applicable law or agreed to in writing, -->
<!--- software distributed under the License is distributed on an -->
<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
<!--- KIND, either express or implied. See the License for the -->
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->
Deploy TVM Module using C++ API
===============================
We provide an example of how to deploy TVM modules in [apps/howto_deploy](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy).
To run the example, you can use the following command:
```bash
cd apps/howto_deploy
./run_example.sh
```
Get TVM Runtime Library
-----------------------
The only thing we need to do is link to a TVM runtime on your target platform.
TVM provides a minimum runtime, which costs around 300K to 600K depending on how many modules we use.
In most cases, we can use `libtvm_runtime.so` that comes with the build.
If you somehow find it hard to build `libtvm_runtime`, check out [tvm_runtime_pack.cc](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/tvm_runtime_pack.cc).
It is an all-in-one example file that gives you the TVM runtime.
You can compile this file with your build system and include it in your project.
You can also check out [apps](https://github.com/apache/incubator-tvm/tree/master/apps/) for example applications built with TVM on iOS, Android, and other platforms.
Dynamic Library vs. System Module
---------------------------------
TVM provides two ways to use the compiled library.
Check out [prepare_test_libs.py](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/prepare_test_libs.py)
for how to generate the library and [cpp_deploy.cc](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/cpp_deploy.cc) for how to use it.

- Store the library as a shared library and dynamically load it into your project.
- Bundle the compiled library into your project in system module mode.

Dynamic loading is more flexible and can load new modules on the fly. The system module is a more `static` approach. We can use the system module in places where dynamic library loading is banned.
.. Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
.. http://www.apache.org/licenses/LICENSE-2.0
.. Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
Deploy TVM Module using C++ API
===============================
We provide an example of how to deploy TVM modules in `apps/howto_deploy <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy>`_.
To run the example, you can use the following command:
.. code:: bash

   cd apps/howto_deploy
   ./run_example.sh
Get TVM Runtime Library
-----------------------
The only thing we need to do is link to a TVM runtime on your target platform.
TVM provides a minimum runtime, which costs around 300K to 600K depending on how many modules we use.
In most cases, we can use ``libtvm_runtime.so`` that comes with the build.
If you somehow find it hard to build ``libtvm_runtime``, check out
`tvm_runtime_pack.cc <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/tvm_runtime_pack.cc>`_.
It is an all-in-one example file that gives you the TVM runtime.
You can compile this file with your build system and include it in your project.
You can also check out `apps <https://github.com/apache/incubator-tvm/tree/master/apps/>`_ for example applications built with TVM on iOS, Android, and other platforms.
Dynamic Library vs. System Module
---------------------------------
TVM provides two ways to use the compiled library; a sketch of how both variants are generated follows below.
Check out `prepare_test_libs.py <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/prepare_test_libs.py>`_
for how to generate the library and `cpp_deploy.cc <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/cpp_deploy.cc>`_ for how to use it.

- Store the library as a shared library and dynamically load it into your project.
- Bundle the compiled library into your project in system module mode.

Dynamic loading is more flexible and can load new modules on the fly. The system module is a more ``static`` approach. We can use the system module in places where dynamic library loading is banned.
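As a condensed sketch of what prepare_test_libs.py does (the file names here are illustrative), both variants can be generated from Python:

.. code:: python

   import tvm
   from tvm import te

   n = te.var("n")
   A = te.placeholder((n,), name="A")
   B = te.compute(A.shape, lambda i: A[i] + 1.0, name="B")
   s = te.create_schedule(B.op)

   # Variant 1: a shared library, dynamically loaded at runtime.
   fadd_dylib = tvm.build(s, [A, B], "llvm", name="addone")
   fadd_dylib.export_library("test_addone_dll.so")

   # Variant 2: an object file built with --system-lib, to be linked
   # statically into the executable as a system module.
   fadd_syslib = tvm.build(s, [A, B], "llvm --system-lib", name="addonesys")
   fadd_syslib.save("test_addone_sys.o")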
<!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!--- or more contributor license agreements. See the NOTICE file -->
<!--- distributed with this work for additional information -->
<!--- regarding copyright ownership. The ASF licenses this file -->
<!--- to you under the Apache License, Version 2.0 (the -->
<!--- "License"); you may not use this file except in compliance -->
<!--- with the License. You may obtain a copy of the License at -->
<!--- http://www.apache.org/licenses/LICENSE-2.0 -->
<!--- Unless required by applicable law or agreed to in writing, -->
<!--- software distributed under the License is distributed on an -->
<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
<!--- KIND, either express or implied. See the License for the -->
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->
Integrate TVM into Your Project
===============================
TVM's runtime is designed to be lightweight and portable.
There are several ways you can integrate TVM into your project.
This article introduces possible ways to integrate TVM
as a JIT compiler to generate functions on your system.
## DLPack Support
TVM's generated function follows the PackedFunc convention.
It is a function that can take positional arguments of
standard types such as float, integer, and string.
The PackedFunc takes a DLTensor pointer following the [DLPack](https://github.com/dmlc/dlpack) convention.
So the only thing you need to do is create a corresponding DLTensor object.
## Integrate User Defined C++ Array
The only thing we have to do in C++ is convert your array to a DLTensor and pass its address as
`DLTensor*` to the generated function.
## Integrate User Defined Python Array
Assume you have a python object `MyArray`. There are three things you need to do:

- Add a `_tvm_tcode` field to your array which returns `tvm.TypeCode.ARRAY_HANDLE`
- Support a `_tvm_handle` property in your object, which returns the address of the DLTensor as a python integer
- Register the class with `tvm.register_extension`
```python
# Example code
import tvm

class MyArray(object):
    _tvm_tcode = tvm.TypeCode.ARRAY_HANDLE

    @property
    def _tvm_handle(self):
        dltensor_addr = self.get_dltensor_addr()
        return dltensor_addr

# You can put the registration step in a separate file mypkg.tvm.py
# and import it only if you want the optional dependency.
tvm.register_extension(MyArray)
```
.. Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
.. http://www.apache.org/licenses/LICENSE-2.0
.. Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
Integrate TVM into Your Project
===============================
TVM's runtime is designed to be lightweight and portable.
There are several ways you can integrate TVM into your project.
This article introduces possible ways to integrate TVM
as a JIT compiler to generate functions on your system.
DLPack Support
--------------
TVM's generated function follows the PackedFunc convention.
It is a function that can take positional arguments of
standard types such as float, integer, and string.
The PackedFunc takes a DLTensor pointer following the `DLPack <https://github.com/dmlc/dlpack>`_ convention.
So the only thing you need to do is create a corresponding DLTensor object.
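From Python, a minimal sketch: a NumPy array can be wrapped into a TVM NDArray, which carries the underlying DLTensor that a PackedFunc consumes:

.. code:: python

   import numpy as np
   import tvm

   # Wrap a NumPy array; `x` now owns a DLTensor and can be passed
   # to any PackedFunc expecting an array argument.
   x = tvm.nd.array(np.zeros((3, 4), dtype="float32"))
   # The data can be read back on the Python side:
   y = x.asnumpy()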
Integrate User Defined C++ Array
--------------------------------
The only thing we have to do in C++ is convert your array to a DLTensor and pass its address as
``DLTensor*`` to the generated function.
Integrate User Defined Python Array
-----------------------------------
Assume you have a python object ``MyArray``. There are three things you need to do:

- Add a ``_tvm_tcode`` field to your array which returns ``tvm.TypeCode.ARRAY_HANDLE``
- Support a ``_tvm_handle`` property in your object, which returns the address of the DLTensor as a python integer
- Register the class with ``tvm.register_extension``
.. code:: python

   # Example code
   import tvm

   class MyArray(object):
       _tvm_tcode = tvm.TypeCode.ARRAY_HANDLE

       @property
       def _tvm_handle(self):
           dltensor_addr = self.get_dltensor_addr()
           return dltensor_addr

   # You can put the registration step in a separate file mypkg.tvm.py
   # and import it only if you want the optional dependency.
   tvm.register_extension(MyArray)
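Once registered, instances can be passed directly to generated functions; a hypothetical usage sketch (``my_func`` is made up):

.. code:: python

   arr = MyArray()
   # Any PackedFunc expecting a DLTensor argument will accept `arr`,
   # extracting the handle via the `_tvm_handle` property, e.g.:
   # f = tvm.get_global_func("my_func")
   # f(arr)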
.. Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
.. http://www.apache.org/licenses/LICENSE-2.0
.. Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
NNPACK Contrib Installation
===========================
`NNPACK <https://github.com/Maratyszcza/NNPACK>`_ is an acceleration package
for neural network computations, which can run on x86-64, ARMv7, or ARM64 architecture CPUs.
Using NNPACK, higher-level libraries like MXNet can speed up
the execution on multi-core CPU computers, including laptops and mobile devices.
.. note::

   As TVM already has natively tuned schedules, NNPACK is here mainly for reference and comparison purposes.
   For regular use, prefer the natively tuned TVM implementation.
TVM supports NNPACK for forward propagation (inference only) in convolution, max-pooling, and fully-connected layers.
In this document, we give a high-level overview of how to use NNPACK with TVM.
Conditions
----------
The underlying implementation of NNPACK utilizes several acceleration methods,
including `fft <https://arxiv.org/abs/1312.5851>`_ and `winograd <https://arxiv.org/abs/1509.09308>`_.
These algorithms work better on some special ``batch size``, ``kernel size``, and ``stride`` settings than on others,
so depending on the context, not all convolution, max-pooling, or fully-connected layers can be powered by NNPACK.
When the favorable conditions for running NNPACK are not met, the natively tuned TVM schedules should be used instead.
NNPACK only supports Linux and OS X systems. Windows is not supported at present.
Build/Install NNPACK
--------------------
If the trained model meets some conditions for using NNPACK,
you can build TVM with NNPACK support.
Follow these simple steps:
Build the NNPACK shared library with the following commands. TVM will link NNPACK dynamically.
Note: The following NNPACK installation instructions have been tested on Ubuntu 16.04.
Build Ninja
~~~~~~~~~~~
NNPACK needs a recent version of `Ninja <https://ninja-build.org/>`_, so we need to install ninja from source.
.. code:: bash

   git clone git://github.com/ninja-build/ninja.git
   cd ninja
   ./configure.py --bootstrap
Set the environment variable PATH to tell bash where to find the ninja executable. For example, assume we cloned ninja in the home directory ~; then we can add the following line in ~/.bashrc:
.. code:: bash

   export PATH="${PATH}:~/ninja"
Build NNPACK
~~~~~~~~~~~~
The new CMake version of NNPACK downloads `PeachPy <https://github.com/Maratyszcza/PeachPy>`_ and other dependencies on its own.

Note: at least on OS X, running ``ninja install`` below will overwrite googletest libraries installed in ``/usr/local/lib``. If you build googletest again to replace the NNPACK copy, be sure to pass ``-DBUILD_SHARED_LIBS=ON`` to ``cmake``.
.. code:: bash

   git clone --recursive https://github.com/Maratyszcza/NNPACK.git
   cd NNPACK
   # Add the PIC option in CFLAGS and CXXFLAGS to build the NNPACK shared library
   sed -i "s|gnu99|gnu99 -fPIC|g" CMakeLists.txt
   sed -i "s|gnu++11|gnu++11 -fPIC|g" CMakeLists.txt
   mkdir build
   cd build
   # Generate ninja build rules and add the shared library in the configuration
   cmake -G Ninja -D BUILD_SHARED_LIBS=ON ..
   ninja
   sudo ninja install

   # Add the NNPACK lib folder to your ldconfig (requires root)
   sudo sh -c 'echo "/usr/local/lib" > /etc/ld.so.conf.d/nnpack.conf'
   sudo ldconfig
Build TVM with NNPACK support
-----------------------------
.. code:: bash

   git clone --recursive https://github.com/apache/incubator-tvm tvm

- Set ``set(USE_NNPACK ON)`` in config.cmake.
- Set ``NNPACK_PATH`` to $(YOUR_NNPACK_INSTALL_PATH).
After configuration, use ``make`` to build TVM:
.. code:: bash

   make
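After the build, a minimal sketch of exercising NNPACK through the contrib wrapper; it assumes TVM was built with ``USE_NNPACK ON``, and the shapes follow the contrib tests and are illustrative:

.. code:: python

   import tvm
   from tvm import te
   from tvm.contrib import nnpack

   # A fully-connected inference layer backed by NNPACK.
   data = te.placeholder((128,), name="data")
   weight = te.placeholder((64, 128), name="weight")
   out = nnpack.fully_connected_inference(data, weight)
   s = te.create_schedule(out.op)
   f = tvm.build(s, [data, weight, out], "llvm")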
@@ -42,7 +42,7 @@ cd docs
 make clean
 TVM_TUTORIAL_EXEC_PATTERN=none make html 2>/tmp/$$.log.txt
-grep -v -E "__mro__|RemovedInSphinx|UserWarning|FutureWarning|Keras" < /tmp/$$.log.txt > /tmp/$$.logclean.txt || true
+grep -v -E "__mro__|RemovedIn|UserWarning|FutureWarning|Keras" < /tmp/$$.log.txt > /tmp/$$.logclean.txt || true
 echo "---------Sphinx Log----------"
 cat /tmp/$$.logclean.txt
 echo "-----------------------------"