Unverified Commit 1f6c498b by Tianqi Chen Committed by GitHub

[DOCS] Migrate some markdowns to rst, fix sphinx3 warnings (#5416)

* [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings

* Add note block
parent d81a4fa1
...@@ -23,28 +23,3 @@ tvm.runtime
   :imported-members:
   :exclude-members: NDArray
   :autosummary:
.. autoclass:: tvm.runtime.PackedFunc
   :members:
   :inherited-members:
.. autofunction:: tvm.register_func
.. autofunction:: tvm.get_global_func
.. autoclass:: tvm.runtime.Module
   :members:
.. autofunction:: tvm.runtime.load_module
.. autofunction:: tvm.runtime.system_lib
.. autofunction:: tvm.runtime.enabled
.. autoclass:: tvm.runtime.Object
   :members:
.. autofunction:: tvm.register_object
<!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!--- or more contributor license agreements. See the NOTICE file -->
<!--- distributed with this work for additional information -->
<!--- regarding copyright ownership. The ASF licenses this file -->
<!--- to you under the Apache License, Version 2.0 (the -->
<!--- "License"); you may not use this file except in compliance -->
<!--- with the License. You may obtain a copy of the License at -->
<!--- http://www.apache.org/licenses/LICENSE-2.0 -->
<!--- Unless required by applicable law or agreed to in writing, -->
<!--- software distributed under the License is distributed on an -->
<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
<!--- KIND, either express or implied. See the License for the -->
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->
# Deploy to Android
## Build model for Android Target
Relay compilation of a model for an Android target can follow the same approach as android_rpc.
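As context for the save step below, here is a minimal sketch of how `graph`, `lib`, and `params` can be produced. The frontend importer and the exact target string are assumptions that depend on your model and device ABI:

```python
import tvm
from tvm import relay

# `mod` and `params` are assumed to come from a frontend importer,
# e.g. relay.frontend.from_mxnet (not shown here).
target = "llvm -target=arm64-linux-android"  # adjust to your device ABI
graph, lib, params = relay.build(mod, target=target, params=params)
```

The following code then saves the compilation output, which is required on the Android target: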
```python
from tvm.contrib import ndk

lib.export_library("deploy_lib.so", ndk.create_shared)
with open("deploy_graph.json", "w") as fo:
    fo.write(graph)  # relay.build returns the graph as a JSON string
with open("deploy_param.params", "wb") as fo:
    fo.write(relay.save_param_dict(params))
```
`deploy_lib.so`, `deploy_graph.json`, and `deploy_param.params` will be copied to the Android target.
## TVM Runtime for Android Target
Refer to [the build and installation guide](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/README.md#build-and-installation) to build the CPU/OpenCL flavor of the TVM runtime for the Android target.
For loading and executing a model via the Android Java TVM API, see this [Java sample source](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java).
.. Licensed to the Apache Software Foundation (ASF) under one
   or more contributor license agreements. See the NOTICE file
   distributed with this work for additional information
   regarding copyright ownership. The ASF licenses this file
   to you under the Apache License, Version 2.0 (the
   "License"); you may not use this file except in compliance
   with the License. You may obtain a copy of the License at

..   http://www.apache.org/licenses/LICENSE-2.0

.. Unless required by applicable law or agreed to in writing,
   software distributed under the License is distributed on an
   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
   KIND, either express or implied. See the License for the
   specific language governing permissions and limitations
   under the License.
Deploy to Android
=================
Build model for Android Target
------------------------------
Relay compilation of a model for an Android target can follow the same approach as android_rpc.
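As context for the save step below, here is a minimal sketch of how ``graph``, ``lib``, and ``params`` can be produced. The frontend importer and the exact target string are assumptions that depend on your model and device ABI:

.. code:: python

   import tvm
   from tvm import relay

   # `mod` and `params` are assumed to come from a frontend importer,
   # e.g. relay.frontend.from_mxnet (not shown here).
   target = "llvm -target=arm64-linux-android"  # adjust to your device ABI
   graph, lib, params = relay.build(mod, target=target, params=params)

The following code then saves the compilation output, which is required on the Android target: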
.. code:: python

   from tvm.contrib import ndk

   lib.export_library("deploy_lib.so", ndk.create_shared)
   with open("deploy_graph.json", "w") as fo:
       fo.write(graph)  # relay.build returns the graph as a JSON string
   with open("deploy_param.params", "wb") as fo:
       fo.write(relay.save_param_dict(params))
``deploy_lib.so``, ``deploy_graph.json``, and ``deploy_param.params`` will be copied to the Android target.
TVM Runtime for Android Target
------------------------------
Refer to `the build and installation guide <https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/README.md#build-and-installation>`_ to build the CPU/OpenCL flavor of the TVM runtime for the Android target.
For loading and executing a model via the Android Java TVM API, see this `Java sample source <https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java>`_.
<!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!--- or more contributor license agreements. See the NOTICE file -->
<!--- distributed with this work for additional information -->
<!--- regarding copyright ownership. The ASF licenses this file -->
<!--- to you under the Apache License, Version 2.0 (the -->
<!--- "License"); you may not use this file except in compliance -->
<!--- with the License. You may obtain a copy of the License at -->
<!--- http://www.apache.org/licenses/LICENSE-2.0 -->
<!--- Unless required by applicable law or agreed to in writing, -->
<!--- software distributed under the License is distributed on an -->
<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
<!--- KIND, either express or implied. See the License for the -->
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->
Deploy TVM Module using C++ API
===============================
We provide an example of how to deploy TVM modules in [apps/howto_deploy](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy).
To run the example, you can use the following command:
```bash
cd apps/howto_deploy
./run_example.sh
```
Get TVM Runtime Library
-----------------------
The only thing we need to do is link against a TVM runtime on your target platform.
TVM provides a minimal runtime, which costs around 300K to 600K depending on how many modules we use.
In most cases, we can use `libtvm_runtime.so` that comes with the build.
If you find it hard to build `libtvm_runtime`, check out [tvm_runtime_pack.cc](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/tvm_runtime_pack.cc).
It is an all-in-one example file that gives you the TVM runtime.
You can compile this file with your build system and include it in your project.
You can also check out [apps](https://github.com/apache/incubator-tvm/tree/master/apps/) for example applications built with TVM on iOS, Android, and others.
Dynamic Library vs. System Module
---------------------------------
TVM provides two ways to use the compiled library.
You can check out [prepare_test_libs.py](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/prepare_test_libs.py)
for how to generate the library and [cpp_deploy.cc](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/cpp_deploy.cc) for how to use them.

- Store the library as a shared library and dynamically load it into your project.
- Bundle the compiled library into your project in system module mode.

Dynamic loading is more flexible and can load new modules on the fly. The system module is a more `static` approach, which can be used in places where dynamic library loading is banned. A sketch of generating both flavors is shown below.
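Loosely modeled on prepare_test_libs.py, the following minimal sketch generates both flavors of the library; the kernel, function names, and output paths here are illustrative:

```python
import os
import tvm
from tvm import te

# A trivial add-one kernel, just to have something to compile.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute(A.shape, lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)

os.makedirs("lib", exist_ok=True)

# Shared-library mode: a .so that the deploying process loads at runtime.
fadd_dylib = tvm.build(s, [A, B], "llvm", name="addone")
fadd_dylib.export_library(os.path.join("lib", "test_addone_dll.so"))

# System-module mode: an object file that is linked into the executable
# and looked up through the system-lib mechanism at runtime.
fadd_syslib = tvm.build(s, [A, B], "llvm --system-lib", name="addonesys")
fadd_syslib.save(os.path.join("lib", "test_addone_sys.o"))
```

The `.so` is then loaded dynamically by the deploying application, while the `.o` is linked directly into the final executable.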
.. Licensed to the Apache Software Foundation (ASF) under one
   or more contributor license agreements. See the NOTICE file
   distributed with this work for additional information
   regarding copyright ownership. The ASF licenses this file
   to you under the Apache License, Version 2.0 (the
   "License"); you may not use this file except in compliance
   with the License. You may obtain a copy of the License at

..   http://www.apache.org/licenses/LICENSE-2.0

.. Unless required by applicable law or agreed to in writing,
   software distributed under the License is distributed on an
   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
   KIND, either express or implied. See the License for the
   specific language governing permissions and limitations
   under the License.
Deploy TVM Module using C++ API
===============================
We provide an example of how to deploy TVM modules in `apps/howto_deploy <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy>`_.
To run the example, you can use the following command:
.. code:: bash

   cd apps/howto_deploy
   ./run_example.sh
Get TVM Runtime Library
-----------------------
The only thing we need to do is link against a TVM runtime on your target platform.
TVM provides a minimal runtime, which costs around 300K to 600K depending on how many modules we use.
In most cases, we can use ``libtvm_runtime.so`` that comes with the build.
If you find it hard to build ``libtvm_runtime``, check out
`tvm_runtime_pack.cc <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/tvm_runtime_pack.cc>`_.
It is an all-in-one example file that gives you the TVM runtime.
You can compile this file with your build system and include it in your project.
You can also check out `apps <https://github.com/apache/incubator-tvm/tree/master/apps/>`_ for example applications built with TVM on iOS, Android, and others.
Dynamic Library vs. System Module
---------------------------------
TVM provides two ways to use the compiled library.
You can check out `prepare_test_libs.py <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/prepare_test_libs.py>`_
for how to generate the library and `cpp_deploy.cc <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/cpp_deploy.cc>`_ for how to use them.

- Store the library as a shared library and dynamically load it into your project.
- Bundle the compiled library into your project in system module mode.

Dynamic loading is more flexible and can load new modules on the fly. The system module is a more ``static`` approach, which can be used in places where dynamic library loading is banned. A sketch of generating both flavors is shown below.
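Loosely modeled on prepare_test_libs.py, the following minimal sketch generates both flavors of the library; the kernel, function names, and output paths here are illustrative:

.. code:: python

   import os
   import tvm
   from tvm import te

   # A trivial add-one kernel, just to have something to compile.
   n = te.var("n")
   A = te.placeholder((n,), name="A")
   B = te.compute(A.shape, lambda i: A[i] + 1.0, name="B")
   s = te.create_schedule(B.op)

   os.makedirs("lib", exist_ok=True)

   # Shared-library mode: a .so that the deploying process loads at runtime.
   fadd_dylib = tvm.build(s, [A, B], "llvm", name="addone")
   fadd_dylib.export_library(os.path.join("lib", "test_addone_dll.so"))

   # System-module mode: an object file that is linked into the executable
   # and looked up through the system-lib mechanism at runtime.
   fadd_syslib = tvm.build(s, [A, B], "llvm --system-lib", name="addonesys")
   fadd_syslib.save(os.path.join("lib", "test_addone_sys.o"))

The ``.so`` is then loaded dynamically by the deploying application, while the ``.o`` is linked directly into the final executable.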
<!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!--- or more contributor license agreements. See the NOTICE file -->
<!--- distributed with this work for additional information -->
<!--- regarding copyright ownership. The ASF licenses this file -->
<!--- to you under the Apache License, Version 2.0 (the -->
<!--- "License"); you may not use this file except in compliance -->
<!--- with the License. You may obtain a copy of the License at -->
<!--- http://www.apache.org/licenses/LICENSE-2.0 -->
<!--- Unless required by applicable law or agreed to in writing, -->
<!--- software distributed under the License is distributed on an -->
<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
<!--- KIND, either express or implied. See the License for the -->
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->
Integrate TVM into Your Project
===============================
TVM's runtime is designed to be lightweight and portable.
There are several ways you can integrate TVM into your project.
This article introduces possible ways to integrate TVM
as a JIT compiler to generate functions on your system.
## DLPack Support
TVM's generated function follows the PackedFunc convention.
It is a function that can take positional arguments of standard types such as float, integer, and string.
The PackedFunc takes a DLTensor pointer following the [DLPack](https://github.com/dmlc/dlpack) convention,
so the only thing you need to do is create a corresponding DLTensor object.
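From Python, for instance, `tvm.nd.array` already produces an NDArray backed by a DLTensor, so it can be passed directly to a generated function. A minimal sketch, where `fadd` stands for any function compiled by `tvm.build` (not shown here):

```python
import numpy as np
import tvm

# tvm.nd.array wraps the buffer in a DLTensor-backed NDArray,
# which is exactly what a generated PackedFunc expects.
a = tvm.nd.array(np.random.uniform(size=1024).astype("float32"))
b = tvm.nd.array(np.zeros(1024, dtype="float32"))
fadd(a, b)  # `fadd` is assumed to be a PackedFunc compiled earlier
```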
## Integrate User Defined C++ Array
The only thing we have to do in C++ is convert your array to a DLTensor and pass its address as a
`DLTensor*` to the generated function.
## Integrate User Defined Python Array
Assume you have a python object `MyArray`. There are three things you need to do:

- Add a `_tvm_tcode` field to your array, which returns `tvm.TypeCode.ARRAY_HANDLE`
- Support a `_tvm_handle` property in your object, which returns the address of the DLTensor as a python integer
- Register the class with `tvm.register_extension`
```python
# Example code
import tvm

class MyArray(object):
    _tvm_tcode = tvm.TypeCode.ARRAY_HANDLE

    @property
    def _tvm_handle(self):
        dltensor_addr = self.get_dltensor_addr()
        return dltensor_addr

# You can put the registration step in a separate file, mypkg.tvm.py,
# and import it only when you want the optional dependency.
tvm.register_extension(MyArray)
```
.. Licensed to the Apache Software Foundation (ASF) under one
   or more contributor license agreements. See the NOTICE file
   distributed with this work for additional information
   regarding copyright ownership. The ASF licenses this file
   to you under the Apache License, Version 2.0 (the
   "License"); you may not use this file except in compliance
   with the License. You may obtain a copy of the License at

..   http://www.apache.org/licenses/LICENSE-2.0

.. Unless required by applicable law or agreed to in writing,
   software distributed under the License is distributed on an
   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
   KIND, either express or implied. See the License for the
   specific language governing permissions and limitations
   under the License.
Integrate TVM into Your Project
===============================
TVM's runtime is designed to be lightweight and portable.
There are several ways you can integrate TVM into your project.
This article introduces possible ways to integrate TVM
as a JIT compiler to generate functions on your system.
DLPack Support
--------------
TVM's generated function follows the PackedFunc convention.
It is a function that can take positional arguments of standard types such as float, integer, and string.
The PackedFunc takes a DLTensor pointer following the `DLPack <https://github.com/dmlc/dlpack>`_ convention,
so the only thing you need to do is create a corresponding DLTensor object.
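From Python, for instance, ``tvm.nd.array`` already produces an NDArray backed by a DLTensor, so it can be passed directly to a generated function. A minimal sketch, where ``fadd`` stands for any function compiled by ``tvm.build`` (not shown here):

.. code:: python

   import numpy as np
   import tvm

   # tvm.nd.array wraps the buffer in a DLTensor-backed NDArray,
   # which is exactly what a generated PackedFunc expects.
   a = tvm.nd.array(np.random.uniform(size=1024).astype("float32"))
   b = tvm.nd.array(np.zeros(1024, dtype="float32"))
   fadd(a, b)  # `fadd` is assumed to be a PackedFunc compiled earlier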
Integrate User Defined C++ Array
--------------------------------
The only thing we have to do in C++ is convert your array to a DLTensor and pass its address as a
``DLTensor*`` to the generated function.
Integrate User Defined Python Array
-----------------------------------
Assume you have a python object ``MyArray``. There are three things you need to do:

- Add a ``_tvm_tcode`` field to your array, which returns ``tvm.TypeCode.ARRAY_HANDLE``
- Support a ``_tvm_handle`` property in your object, which returns the address of the DLTensor as a python integer
- Register the class with ``tvm.register_extension``
.. code:: python

   # Example code
   import tvm

   class MyArray(object):
       _tvm_tcode = tvm.TypeCode.ARRAY_HANDLE

       @property
       def _tvm_handle(self):
           dltensor_addr = self.get_dltensor_addr()
           return dltensor_addr

   # You can put the registration step in a separate file, mypkg.tvm.py,
   # and import it only when you want the optional dependency.
   tvm.register_extension(MyArray)
.. Licensed to the Apache Software Foundation (ASF) under one
   or more contributor license agreements. See the NOTICE file
   distributed with this work for additional information
   regarding copyright ownership. The ASF licenses this file
   to you under the Apache License, Version 2.0 (the
   "License"); you may not use this file except in compliance
   with the License. You may obtain a copy of the License at

..   http://www.apache.org/licenses/LICENSE-2.0

.. Unless required by applicable law or agreed to in writing,
   software distributed under the License is distributed on an
   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
   KIND, either express or implied. See the License for the
   specific language governing permissions and limitations
   under the License.

NNPACK Contrib Installation
===========================

`NNPACK <https://github.com/Maratyszcza/NNPACK>`_ is an acceleration package
for neural network computations, which can run on x86-64, ARMv7, or ARM64 architecture CPUs.
Using NNPACK, higher-level libraries like MXNet can speed up
the execution on multi-core CPU computers, including laptops and mobile devices.

.. note::

   As TVM already has natively tuned schedules, NNPACK is here mainly for reference and comparison purposes.
   For regular use, prefer the natively tuned TVM implementation.

TVM supports NNPACK for forward propagation (inference only) in convolution, max-pooling, and fully-connected layers.
In this document, we give a high-level overview of how to use NNPACK with TVM.

Conditions
----------
The underlying implementation of NNPACK utilizes several acceleration methods,
including `fft <https://arxiv.org/abs/1312.5851>`_ and `winograd <https://arxiv.org/abs/1509.09308>`_.
These algorithms work better on some special ``batch size``, ``kernel size``, and ``stride`` settings than on others,
so depending on the context, not all convolution, max-pooling, or fully-connected layers can be powered by NNPACK
when its favorable conditions are not met.
NNPACK only supports Linux and OS X systems. Windows is not supported at present.

Build/Install NNPACK
--------------------
If the trained model meets some conditions of using NNPACK,
you can build TVM with NNPACK support.
Follow these simple steps:

Build the NNPACK shared library with the following commands. TVM will link NNPACK dynamically.

Note: The following NNPACK installation instructions have been tested on Ubuntu 16.04.

Build Ninja
~~~~~~~~~~~
NNPACK needs a recent version of Ninja, so we need to install ninja from source.

.. code:: bash

   git clone git://github.com/ninja-build/ninja.git
   cd ninja
   ./configure.py --bootstrap

Set the environment variable PATH to tell bash where to find the ninja executable. For example, assuming we cloned ninja into the home directory ``~``, we can add the following line to ``~/.bashrc``.

.. code:: bash

   export PATH="${PATH}:~/ninja"

Build NNPACK
~~~~~~~~~~~~
The new CMake version of NNPACK downloads `PeachPy <https://github.com/Maratyszcza/PeachPy>`_ and other dependencies on its own.

Note: at least on OS X, running ``ninja install`` below will overwrite googletest libraries installed in ``/usr/local/lib``. If you build googletest again to replace the nnpack copy, be sure to pass ``-DBUILD_SHARED_LIBS=ON`` to ``cmake``.

.. code:: bash

   git clone --recursive https://github.com/Maratyszcza/NNPACK.git
   cd NNPACK
   # Add the PIC option to CFLAGS and CXXFLAGS to build the NNPACK shared library
   sed -i "s|gnu99|gnu99 -fPIC|g" CMakeLists.txt
   sed -i "s|gnu++11|gnu++11 -fPIC|g" CMakeLists.txt
   mkdir build
   cd build
   # Generate ninja build rules and enable the shared library in the configuration
   cmake -G Ninja -D BUILD_SHARED_LIBS=ON ..
   ninja
   sudo ninja install

   # Add the NNPACK lib folder to your ldconfig
   echo "/usr/local/lib" > /etc/ld.so.conf.d/nnpack.conf
   sudo ldconfig

Build TVM with NNPACK support
-----------------------------

.. code:: bash

   git clone --recursive https://github.com/apache/incubator-tvm tvm

- Set ``set(USE_NNPACK ON)`` in config.cmake.
- Set ``NNPACK_PATH`` to the $(YOUR_NNPACK_INSTALL_PATH).

After configuration, use ``make`` to build TVM:

.. code:: bash

   make
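As a quick sanity check that the NNPACK contrib was actually compiled in, you can look up one of its registered global functions. This is a minimal sketch; the function name below reflects how the NNPACK contrib operators are registered, which you should verify against your TVM build:

.. code:: python

   import tvm

   # With USE_NNPACK ON, the NNPACK operators are registered as global
   # packed functions; allow_missing=True returns None instead of raising.
   func = tvm.get_global_func(
       "tvm.contrib.nnpack.fully_connected_inference", allow_missing=True
   )
   print("NNPACK enabled:", func is not None)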
...@@ -42,7 +42,7 @@ cd docs
make clean
TVM_TUTORIAL_EXEC_PATTERN=none make html 2>/tmp/$$.log.txt
-grep -v -E "__mro__|RemovedInSphinx|UserWarning|FutureWarning|Keras" < /tmp/$$.log.txt > /tmp/$$.logclean.txt || true
+grep -v -E "__mro__|RemovedIn|UserWarning|FutureWarning|Keras" < /tmp/$$.log.txt > /tmp/$$.logclean.txt || true
echo "---------Sphinx Log----------"
cat /tmp/$$.logclean.txt
echo "-----------------------------"