Unverified commit 86b844b9 by Tianqi Chen, committed by GitHub

[DOCS] Update link loc (#4257)

parent aae5cde8
Thanks for contributing to TVM! Please refer to the guidelines at https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-mentioning them in the pull request thread.
@@ -112,7 +112,7 @@ We do encourage everyone to work on anything they are interested in.
- [Lianmin Zheng](https://github.com/merrymercy): @merrymercy
## List of Contributors
- [Full List of Contributors](https://github.com/apache/incubator-tvm/graphs/contributors)
- To contributors: please add your name to the list.
- [Qiao Zhang](https://github.com/zhangqiaorjc)
- [Haolong Zhang](https://github.com/haolongzhangm)
...
@@ -34,7 +34,7 @@ Alternatively, you may use the Docker image we provide, which contains the required
### Build APK
Before you build the Android application, please refer to the [TVM4J Installation Guide](https://github.com/apache/incubator-tvm/blob/master/jvm/README.md) and install tvm4j-core to your local Maven repository. You can find the tvm4j dependency declaration in `app/build.gradle`. Modify it if necessary.
```
dependencies {
```
@@ -124,7 +124,7 @@ If everything goes well, you will find compile tools in `/opt/android-toolchain-
Follow the instructions [here](http://docs.tvm.ai/deploy/android.html) to get a compiled model for the Android target.
Copy the compiled model files deploy_lib.so, deploy_graph.json and deploy_param.params to apps/android_deploy/app/src/main/assets/ and modify the TVM flavor settings in [MainActivity.java](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/ml/dmlc/tvm/android/demo/MainActivity.java#L81):
`CPU Version flavor`
...
@@ -28,7 +28,7 @@ You will need JDK, [Android NDK](https://developer.android.com/ndk) and an Android device
We use [Gradle](https://gradle.org) to build. Please follow [the installation instructions](https://gradle.org/install) for your operating system.
Before you build the Android application, please refer to the [TVM4J Installation Guide](https://github.com/apache/incubator-tvm/blob/master/jvm/README.md) and install tvm4j-core to your local Maven repository. You can find the tvm4j dependency declaration in `app/build.gradle`. Modify it if necessary.
```
dependencies {
```
@@ -146,7 +146,7 @@ android 1 1 0
Then check out [android\_rpc/tests/android\_rpc\_test.py](https://github.com/apache/incubator-tvm/blob/master/apps/android_rpc/tests/android_rpc_test.py) and run,
```bash
# Specify the RPC tracker
```
@@ -157,7 +157,7 @@ export TVM_NDK_CC=/opt/android-toolchain-arm64/bin/aarch64-linux-android-g++
```bash
python android_rpc_test.py
```
This will compile TVM IR to shared libraries (CPU, OpenCL and Vulkan) and run vector addition on your Android device. By default it executes on the CPU target; to verify the compiled TVM IR shared libraries on the OpenCL target set `test_opencl = True`, and on the Vulkan target set `test_vulkan = True`, in [tests/android_rpc_test.py](https://github.com/apache/incubator-tvm/blob/master/apps/android_rpc/tests/android_rpc_test.py).
On my test device, it gives the following results.
...
@@ -20,7 +20,7 @@
## Results
See the results on the wiki page https://github.com/apache/incubator-tvm/wiki/Benchmark
## How to Reproduce
@@ -78,7 +78,7 @@ python3 -m tvm.exec.rpc_tracker
`python3 -m tvm.exec.rpc_server --tracker=10.77.1.123:9190 --key=rk3399`, where 10.77.1.123 is the IP address of the tracker.
* For Android device
  * Build and install the TVM RPC APK on your device ([help](https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc)).
    Make sure you can pass the Android RPC test. Then you already know how to register.
3. Verify the device registration
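For instance, you can confirm the registration from Python — a minimal sketch, assuming the tracker at `10.77.1.123:9190` and a device registered under the key `rk3399`:
```python
from tvm import rpc

tracker = rpc.connect_tracker("10.77.1.123", 9190)
print(tracker.text_summary())        # lists registered devices per key
remote = tracker.request("rk3399")   # request a session on a free device
```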
...
@@ -39,7 +39,7 @@ Check out the `/tvm/install/ubuntu_install_sgx.sh` for the commands to get these
If using Docker, start by running
```
git clone --recursive https://github.com/apache/incubator-tvm.git tvm
docker run --rm -it -v $(pwd)/tvm:/mnt tvmai/ci-cpu /bin/bash
```
then, in the container
...
@@ -43,6 +43,6 @@ requirements:
    - {{ pin_compatible('cudnn', lower_bound='7.6.0', max_pin='x') }}  # [cuda]
about:
  home: https://github.com/apache/incubator-tvm
  license: Apache2
  summary: a low level domain specific language for compiling tensor computation pipelines
\ No newline at end of file
@@ -58,7 +58,7 @@ test:
    - python -m pytest -v tests/python/integration
about:
  home: https://github.com/apache/incubator-tvm
  license: Apache-2.0
  license_family: Apache
  summary: a low level domain specific language for compiling tensor computation pipelines
@@ -56,7 +56,7 @@ RUN git clone https://github.com/KhronosGroup/OpenCL-Headers /usr/local/OpenCL-H
# Build TVM
RUN cd /usr && \
    git clone --depth=1 --recursive https://github.com/apache/incubator-tvm tvm && \
    cd /usr/tvm && \
    mkdir -p build && \
    cd build && \
...
@@ -62,7 +62,7 @@ RUN echo "Cloning TVM source & submodules"
ENV TVM_PAR_DIR="/usr"
RUN mkdir -p ${TVM_PAR_DIR} && \
    cd ${TVM_PAR_DIR} && \
    git clone --depth=1 --recursive https://github.com/apache/incubator-tvm tvm
#RUN git submodule update --init --recursive
...
@@ -21,7 +21,7 @@ set -u
set -o pipefail
cd /usr
git clone --depth=1 --recursive https://github.com/apache/incubator-tvm tvm
cd /usr/tvm
# checkout a hash-tag
git checkout 4b13bf668edc7099b38d463e5db94ebc96c80470
...
@@ -21,7 +21,7 @@ set -u
set -o pipefail
cd /usr
git clone --depth=1 --recursive https://github.com/apache/incubator-tvm tvm
cd /usr/tvm
# checkout a hash-tag
git checkout 4b13bf668edc7099b38d463e5db94ebc96c80470
...
@@ -20,7 +20,7 @@
TVM Community Guideline
=======================
TVM adopts the Apache-style model and is governed by merit. We believe that it is important to create an inclusive community where everyone can use, contribute to, and influence the direction of the project. See `CONTRIBUTORS.md <https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md>`_ for the current list of contributors.
...
@@ -68,7 +68,7 @@ Be careful to leave blank lines between sections of your documents.
In the above case, there has to be a blank line before `Parameters`, `Returns` and `Examples`
in order for the doc to be built correctly. To add a new function to the doc,
we need to add the `sphinx.autodoc <http://www.sphinx-doc.org/en/master/ext/autodoc.html>`_
rules to `docs/api/python <https://github.com/apache/incubator-tvm/tree/master/docs/api/python>`_.
You can refer to the existing files under this folder on how to add the functions, and to the example below.
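For example, a numpydoc-style docstring with the required blank lines (a hypothetical ``myadd``, purely illustrative):

.. code:: python

   def myadd(a, b):
       """Elementwise add of two tensors.

       Parameters
       ----------
       a : tvm.Tensor
           The first input.
       b : tvm.Tensor
           The second input.

       Returns
       -------
       c : tvm.Tensor
           The elementwise sum.

       Examples
       --------
       .. code:: python

           c = myadd(a, b)
       """
       return a + b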
@@ -96,7 +96,7 @@ to add comments about code logic to improve readability.
Write Tutorials
---------------
We use `sphinx-gallery <https://sphinx-gallery.github.io/>`_ to build python tutorials.
The source code under `tutorials <https://github.com/apache/incubator-tvm/tree/master/tutorials>`_ is quite self-explanatory.
One thing worth noting is that the comment blocks are written in reStructuredText instead of markdown, so be aware of the syntax.
The tutorial code will run on our build server to generate the document page.
...
@@ -38,5 +38,5 @@ deploy_lib.so, deploy_graph.json, deploy_param.params will go to android target.
## TVM Runtime for Android Target
Refer [here](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/README.md#build-and-installation) to build the CPU/OpenCL flavor of the TVM runtime for the Android target.
See this [Java sample source](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/ml/dmlc/tvm/android/demo/MainActivity.java) for how to load and execute a model from the Android Java TVM API.
@@ -18,7 +18,7 @@
Deploy TVM Module using C++ API
===============================
We provide an example of how to deploy TVM modules in [apps/howto_deploy](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy)
To run the example, you can use the following command
@@ -34,17 +34,17 @@ The only thing we need is to link to a TVM runtime in your target platform.
TVM provides a minimum runtime, which costs around 300K to 600K depending on how many modules we use.
In most cases, we can use ```libtvm_runtime.so``` that comes with the build.
If somehow you find it hard to build ```libtvm_runtime```, check out [tvm_runtime_pack.cc](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/tvm_runtime_pack.cc).
It is an all-in-one example file that gives you the TVM runtime.
You can compile this file with your build system and include it in your project.
You can also check out [apps](https://github.com/apache/incubator-tvm/tree/master/apps/) for example applications built with TVM on iOS, Android and others.
Dynamic Library vs. System Module
---------------------------------
TVM provides two ways to use the compiled library.
You can check out [prepare_test_libs.py](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/prepare_test_libs.py)
on how to generate the library and [cpp_deploy.cc](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/cpp_deploy.cc) on how to use them; a short sketch follows the list below.
- Store the library as a shared library and dynamically load it into your project.
- Bundle the compiled library into your project in system module mode.
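A minimal sketch of producing both forms (the file names and the toy add-one kernel are illustrative, closely following what `prepare_test_libs.py` does):
```python
import tvm

n = 1024
A = tvm.placeholder((n,), name="A")
B = tvm.compute((n,), lambda i: A[i] + 1.0, name="B")
s = tvm.create_schedule(B.op)

# 1) shared library: dynamically loaded at runtime
tvm.build(s, [A, B], "llvm", name="addone").export_library("lib/test_addone_dll.so")

# 2) system module: object file bundled into your executable
tvm.build(s, [A, B], "llvm --system-lib", name="addone").save("lib/test_addone_sys.o")
```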
...
@@ -38,7 +38,7 @@ on a Linux based embedded system such as Raspberry Pi:
.. code:: bash

   git clone --recursive https://github.com/apache/incubator-tvm tvm
   cd tvm
   mkdir build
   cp cmake/config.cmake build
...
@@ -144,7 +144,7 @@ This process needs a few additional options, as given below, for the NNVM build.
Module export requires an additional option so the library is saved rather than compiled: ```lib.export_library(path, fcompile=False)```
The output of the above API is a tar-compressed file containing the object file ```(lib.o)``` and a cpp source file ```(devc.cc)``` which embeds the device blob. These two files should be compiled along with other files or objects while building the c++ application.
Please refer to the [Makefile](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/Makefile#L32) for a reference.
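A minimal sketch of the export step (the path is illustrative; `lib` is the module built for the system-lib target):
```python
# save without invoking a compiler: yields a tar containing lib.o and devc.cc
lib.export_library("deploy.tar", fcompile=False)
```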
The C++ code that loads this system module requires the change below.
...
@@ -19,7 +19,7 @@
InferBound Pass
*******************************************
The InferBound pass is run after normalize, and before ScheduleOps (`build_module.py <https://github.com/apache/incubator-tvm/blob/master/python/tvm/build_module.py>`_). The main job of InferBound is to create the bounds map, which specifies a Range for each IterVar in the program. These bounds are then passed to ScheduleOps, where they are used to set the extents of For loops, see `MakeLoopNest <https://github.com/apache/incubator-tvm/blob/master/src/op/op_util.cc>`_, and to set the sizes of allocated buffers (`BuildRealize <https://github.com/apache/incubator-tvm/blob/master/src/op/compute_op.cc>`_), among other uses.
The output of InferBound is a map from IterVar to Range:
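In the Python API, this map can be inspected directly — a minimal sketch using the pre-0.7 top-level API (the toy compute is illustrative):

.. code:: python

   import tvm

   n = 16
   A = tvm.placeholder((n,), name="A")
   B = tvm.compute((n,), lambda i: A[i] * 2.0, name="B")
   s = tvm.create_schedule(B.op).normalize()

   # InferBound returns a Map<IterVar, Range>
   bounds = tvm.schedule.InferBound(s)
   for iv, rng in bounds.items():
       print(iv, rng)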
@@ -50,9 +50,9 @@ Therefore, let's review the Range and IterVar classes:
   };
}
Note that IterVarNode also contains a Range ``dom``. This ``dom`` may or may not have a meaningful value, depending on when the IterVar was created. For example, when ``tvm.compute`` is called, an `IterVar is created <https://github.com/apache/incubator-tvm/blob/master/src/op/compute_op.cc>`_ for each axis and reduce axis, with ``dom`` equal to the shape supplied in the call to ``tvm.compute``.
On the other hand, when ``tvm.split`` is called, `IterVars are created <https://github.com/apache/incubator-tvm/blob/master/src/schedule/schedule_lang.cc>`_ for the inner and outer axes, but these IterVars are not given a meaningful ``dom`` value.
In any case, the ``dom`` member of an IterVar is never modified during InferBound. However, keep in mind that the ``dom`` member of an IterVar is sometimes used as the default value for the Ranges InferBound computes.
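For instance, in a small sketch (the shape is illustrative, and the exact printed form may vary by version):

.. code:: python

   import tvm

   A = tvm.placeholder((32,), name="A")
   B = tvm.compute((32,), lambda i: A[i] + 1.0, name="B")
   s = tvm.create_schedule(B.op)

   # created by tvm.compute: dom matches the supplied shape
   print(B.op.axis[0].dom)   # range(min=0, ext=32)

   # created by split: no meaningful dom is attached
   xo, xi = s[B].split(B.op.axis[0], factor=8)
   print(xo.dom, xi.dom)     # typically None None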
@@ -114,7 +114,7 @@ Tensors haven't been mentioned yet, but in the context of TVM, a Tensor represen
   int value_index;
};
In the Operation class declaration above, we can see that each operation also has a list of InputTensors. Thus the stages of the schedule form a DAG, where each stage is a node in the graph. There is an edge in the graph from Stage A to Stage B, if the operation of Stage B has an input tensor whose source operation is the op of Stage A. Put simply, there is an edge from A to B, if B consumes a tensor produced by A. See the diagram below. This graph is created at the beginning of InferBound, by a call to `CreateReadGraph <https://github.com/apache/incubator-tvm/blob/master/src/schedule/bound.cc>`_.
.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/docs/inferbound/stage_graph.png
   :align: center
...
@@ -19,7 +19,7 @@
# NNVM Design Overview
NNVM is a reusable graph IR stack for deep learning systems. It provides useful APIs to construct, represent and transform computation graphs to obtain most of the high-level optimizations needed in deep learning.
As a part of the TVM stack for deep learning, NNVM also provides a shared compiler for deep learning frameworks to optimize, compile and deploy to different hardware backends via [TVM](https://github.com/apache/incubator-tvm)
## Key Requirements and Design Choices
...
@@ -399,8 +399,8 @@ information about the pass manager interface can be found in :ref:`relay-pass-in
Relay's standard passes are listed in `include/tvm/relay/transform.h`_ and implemented
in `src/relay/pass/`_.
.. _include/tvm/relay/transform.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/relay/transform.h
.. _src/relay/pass: https://github.com/apache/incubator-tvm/tree/master/src/relay/pass
.. _src/relay/pass/fold_constant.cc: https://github.com/apache/incubator-tvm/blob/master/src/relay/pass/fold_constant.cc
@@ -631,14 +631,14 @@ For more pass infra related examples in Python and C++, please refer to
.. _Relay module: https://docs.tvm.ai/langref/relay_expr.html#module-and-global-functions
.. _include/tvm/relay/transform.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/relay/transform.h
.. _src/relay/pass/pass_manager.cc: https://github.com/apache/incubator-tvm/blob/master/src/relay/pass/pass_manager.cc
.. _src/relay/pass/fold_constant.cc: https://github.com/apache/incubator-tvm/blob/master/src/relay/pass/fold_constant.cc
.. _python/tvm/relay/transform.py: https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/transform.py
.. _tests/python/relay/test_pass_manager.py: https://github.com/apache/incubator-tvm/blob/master/tests/python/relay/test_pass_manager.py
.. _tests/cpp/relay_transform_sequential.cc: https://github.com/apache/incubator-tvm/blob/master/tests/cpp/relay_transform_sequential.cc
@@ -43,7 +43,7 @@ PackedFunc
`PackedFunc`_ is a simple but elegant solution
we found to solve the challenges listed. The following code block provides an example in C++.
.. _PackedFunc: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/packed_func.h
.. code:: c
@@ -129,9 +129,9 @@ which allows us to embed the PackedFunc into any languages. Besides python, so f
`java`_ and `javascript`_.
This philosophy of embedded API is very much like Lua, except that we don't have a new language but use C++.
.. _minimum C API: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/c_runtime_api.h
.. _java: https://github.com/apache/incubator-tvm/tree/master/jvm
.. _javascript: https://github.com/apache/incubator-tvm/tree/master/web
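On the Python side, registering and retrieving a global PackedFunc is a one-liner each — a minimal sketch (the registry name is illustrative):

.. code:: python

   import tvm

   # expose a Python callback as a global PackedFunc
   @tvm.register_func("demo.addone")
   def addone(x):
       return x + 1

   # fetch it back through the same global registry and call it
   f = tvm.get_global_func("demo.addone")
   assert f(10) == 11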
One fun fact about PackedFunc is that we use it for both the compiler and the deployment stack.
@@ -139,7 +139,7 @@ One fun fact about PackedFunc is that we use it for both compiler and deployment
- All of TVM's compiler pass functions are exposed to the frontend as PackedFunc, see `here`_
- The compiled module also returns the compiled function as PackedFunc
.. _here: https://github.com/apache/incubator-tvm/tree/master/src/api
To keep the runtime minimal, we isolated the IR Node support from the deployment runtime. The resulting runtime takes around 200K - 600K depending on how many runtime driver modules (e.g., CUDA) get included.
@@ -160,7 +160,7 @@ TVM defines the compiled object as `Module`_.
The user can get the compiled function from the Module as a PackedFunc.
The generated compiled code can dynamically get functions from the Module at runtime. It caches the function handle in the first call and reuses it in subsequent calls. We use this to link device code and to call back into any PackedFunc (e.g., Python) from generated code.
.. _Module: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/module.h
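A minimal Python sketch of this flow (toy kernel; names illustrative):

.. code:: python

   import numpy as np
   import tvm

   n = 1024
   A = tvm.placeholder((n,), name="A")
   B = tvm.compute((n,), lambda i: A[i] + 1.0, name="B")
   s = tvm.create_schedule(B.op)

   mod = tvm.build(s, [A, B], "llvm", name="addone")
   f = mod["addone"]                 # a PackedFunc looked up from the Module

   a = tvm.nd.array(np.random.rand(n).astype("float32"))
   b = tvm.nd.array(np.zeros(n, dtype="float32"))
   f(a, b)                           # run the compiled kernel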
The ModuleNode is an abstract class that can be implemented by each type of device.
So far we support modules for CUDA, Metal, OpenCL and loading dynamic shared libraries. This abstraction makes introduction
@@ -276,17 +276,17 @@ Each argument in PackedFunc contains a union value `TVMValue`_
and a type code. This design allows the dynamically typed language to convert to the corresponding type directly, and the statically typed language to
do runtime type checking during conversion.
.. _TVMValue: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/c_runtime_api.h#L122
The relevant files are
- `packed_func.h`_ for the C++ API
- `c_runtime_api.cc`_ for the C API and how to provide callbacks.
.. _packed_func.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/packed_func.h
.. _c_runtime_api.cc: https://github.com/apache/incubator-tvm/blob/master/src/runtime/c_runtime_api.cc#L262
To support extension types, we used a registry system to register type-related information, like support of any
in C++; see `Extension types`_ for more details.
.. _Extension types: https://github.com/apache/incubator-tvm/tree/master/apps/extension
@@ -45,5 +45,5 @@ TVM's relation to libDNN cuDNN
TVM can incorporate these libraries as external calls. One goal of TVM is to be able to
generate high-performing kernels. We will evolve TVM in an incremental manner as
we learn from the techniques of manual kernel crafting and add these as primitives in the DSL.
See also [TVM Operator Inventory](https://github.com/apache/incubator-tvm/tree/master/topi) for
recipes of operators in TVM.
@@ -57,7 +57,7 @@ Export
The TensorFlow frontend expects a frozen protobuf (.pb) or saved model as input. It currently does not support checkpoint (.ckpt). The graphdef needed by the TensorFlow frontend can be extracted from the active session, or by using the `TFParser`_ helper class.
.. _TFParser: https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/tensorflow_parser.py
The model should be exported with a number of transformations to prepare the model for inference. It is also important to set ```add_shapes=True```, as this will embed the output shapes of each node into the graph. Here is one function to export a model as a protobuf given a session:
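The exporter itself is elided in this diff; a sketch of such a function under the TF 1.x API (the function name and output path are illustrative) might look like:

.. code:: python

   import tensorflow as tf

   def export_pb(session, output_names, path="model.pb"):
       """Freeze the session graph and write it as a protobuf."""
       graph_def = session.graph.as_graph_def(add_shapes=True)
       # fold variables into constants so the graph is self-contained
       graph_def = tf.graph_util.convert_variables_to_constants(
           session, graph_def, output_names)
       graph_def = tf.graph_util.remove_training_nodes(graph_def)
       with tf.gfile.GFile(path, "wb") as f:
           f.write(graph_def.SerializeToString())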
@@ -97,7 +97,7 @@ Import the Model
Explicit Shape:
~~~~~~~~~~~~~~~
To ensure shapes can be known throughout the entire graph, pass the ```shape``` argument to ```from_tensorflow```. This dictionary maps input names to input shapes. Please refer to these `test cases <https://github.com/apache/incubator-tvm/blob/master/nnvm/tests/python/frontend/tensorflow/test_forward.py#L36>`_ for examples.
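For example (a sketch; the input name and shape are illustrative, and ``graph_def`` is the frozen graph from the export step):

.. code:: python

   from tvm import relay

   shape_dict = {"input": (1, 224, 224, 3)}   # input name -> shape
   mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict)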
Data Layout
~~~~~~~~~~~
...
@@ -29,7 +29,7 @@ First, clone TVM repo to get the auxiliary scripts
.. code:: bash

   git clone --recursive https://github.com/apache/incubator-tvm
We can then use the following command to launch a `tvmai/demo-cpu` image.
@@ -69,5 +69,5 @@ with ``localhost`` when pasting it into browser.
Docker Source
-------------
Check out `<https://github.com/apache/incubator-tvm/tree/master/docker>`_ if you are interested in
building your own docker images.
@@ -29,7 +29,7 @@ To get started, clone TVM repo from github. It is important to clone the submodu
.. code:: bash

   git clone --recursive https://github.com/apache/incubator-tvm
For Windows users who use GitHub tools, you can open the git shell, and type the following command.
...
@@ -85,7 +85,7 @@ sudo ldconfig
## Build TVM with NNPACK support
```bash
git clone --recursive https://github.com/apache/incubator-tvm
```
* Set `set(USE_NNPACK ON)` in config.cmake.
...
@@ -267,7 +267,7 @@ Operators
An operator is a primitive operation, such as :code:`add` or :code:`conv2d`, not defined in the Relay
language. Operators are declared in the global operator
registry in C++. Many common operators are backed by TVM's
Tensor Operator Inventory (`TOPI <https://github.com/apache/incubator-tvm/tree/master/topi>`__).
To register an operator a user must provide an implementation
of the operator, its type, and any other desired metadata.
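Once registered, an operator is invoked from the frontend like any other Relay expression — a minimal sketch:

.. code:: python

   from tvm import relay

   x = relay.var("x", shape=(2, 2))
   y = relay.var("y", shape=(2, 2))
   # `add` resolves to the operator registered in the global registry
   f = relay.Function([x, y], relay.add(x, y))
   print(f)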
...
@@ -103,7 +103,7 @@ Because the direct board-to-computer connection prevents the board from directly
mkdir <mountpoint>
sshfs xilinx@192.168.2.99:/home/xilinx <mountpoint>
cd <mountpoint>
git clone --recursive https://github.com/apache/incubator-tvm
# When finished, you can leave the mountpoint and unmount the directory
cd ~
sudo umount <mountpoint>
@@ -375,7 +375,7 @@ Once the compilation completes, the generated bitstream can be found under `<tvm
### Chisel-based Custom VTA Bitstream Compilation for DE10-Nano
Similar to the HLS-based design, high-level hardware parameters in the Chisel-based design are listed in the VTA configuration file [Configs.scala](https://github.com/apache/incubator-tvm/blob/master/vta/hardware/chisel/src/main/scala/core/Configs.scala), and they can be customized by the user.
For Intel FPGA, bitstream generation is driven by a top-level `Makefile` under `<tvmroot>/vta/hardware/intel`.
...
@@ -175,4 +175,4 @@ Server server = new Server(proxyHost, proxyPort, "key");
server.start();
```
You can also use `StandaloneServerProcessor` and `ConnectProxyServerProcessor` to build your own RPC server. Refer to [Android RPC Server](https://github.com/apache/incubator-tvm/blob/master/apps/android_rpc/app/src/main/java/ml/dmlc/tvm/tvmrpc/RPCProcessor.java) for more details.
\ No newline at end of file
@@ -7,7 +7,7 @@
  <artifactId>tvm4j-parent</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>TVM4J Package - Parent</name>
  <url>https://github.com/apache/incubator-tvm/tree/master/jvm</url>
  <description>TVM4J Package</description>
  <organization>
    <name>Distributed (Deep) Machine Learning Community</name>
@@ -22,7 +22,7 @@
  <scm>
    <connection>scm:git:git@github.com:dmlc/tvm.git</connection>
    <developerConnection>scm:git:git@github.com:dmlc/tvm.git</developerConnection>
    <url>https://github.com/apache/incubator-tvm</url>
  </scm>
  <properties>
...
@@ -53,7 +53,7 @@ from tvm.contrib.download import download_testdata
#
# .. code-block:: bash
#
#   git clone --recursive https://github.com/apache/incubator-tvm tvm
#   cd tvm
#   cp cmake/config.cmake .
#   sed -i "s/USE_OPENCL OFF/USE_OPENCL ON/" config.cmake
...
@@ -52,7 +52,7 @@ from tvm.contrib.download import download_testdata
#
# .. code-block:: bash
#
#   git clone --recursive https://github.com/apache/incubator-tvm tvm
#   cd tvm
#   make runtime -j4
#
...
@@ -31,7 +31,7 @@ the best knob values for all required operators. When the tvm compiler compiles
these operators, it will query this log file to get the best knob values.
We also released pre-tuned parameters for some ARM devices. You can go to
`ARM CPU Benchmark <https://github.com/apache/incubator-tvm/wiki/Benchmark#arm-cpu>`_
to see the results.
"""
@@ -157,7 +157,7 @@ def get_network(name, batch_size):
# (replace :code:`[HOST_IP]` with the IP address of your host machine)
#
# * For Android:
#   Follow this `readme page <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc>`_ to
#   install the TVM RPC APK on the Android device. Make sure you can pass the Android RPC test.
#   Then you have already registered your device. During tuning, you have to go to developer options
#   and enable "Keep screen awake during charging" and charge your phone to make it stable.
...
@@ -31,7 +31,7 @@ the best knob values for all required operators. When the tvm compiler compiles
these operators, it will query this log file to get the best knob values.
We also released pre-tuned parameters for some NVIDIA GPUs. You can go to
`NVIDIA GPU Benchmark <https://github.com/apache/incubator-tvm/wiki/Benchmark#nvidia-gpu>`_
to see the results.
"""
...
@@ -31,7 +31,7 @@ the best knob values for all required operators. When the tvm compiler compiles
these operators, it will query this log file to get the best knob values.
We also released pre-tuned parameters for some ARM devices. You can go to
`Mobile GPU Benchmark <https://github.com/apache/incubator-tvm/wiki/Benchmark#mobile-gpu>`_
to see the results.
"""
@@ -157,7 +157,7 @@ def get_network(name, batch_size):
# (replace :code:`[HOST_IP]` with the IP address of your host machine)
#
# * For Android:
#   Follow this `readme page <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc>`_ to
#   install the TVM RPC APK on the Android device. Make sure you can pass the Android RPC test.
#   Then you have already registered your device. During tuning, you have to go to developer options
#   and enable "Keep screen awake during charging" and charge your phone to make it stable.
...
@@ -156,7 +156,7 @@ setup(name='tvm',
      package_dir={'tvm': 'tvm'},
      package_data={'tvm': get_package_data_files()},
      distclass=BinaryDistribution,
      url='https://github.com/apache/incubator-tvm',
      ext_modules=config_cython(),
      **setup_kwargs)
...
@@ -54,7 +54,7 @@ def find_lib_path(name=None, search_path=None, optional=False):
    """
    use_runtime = os.environ.get("TVM_USE_RUNTIME_LIB", False)
    # See https://github.com/apache/incubator-tvm/issues/281 for some background.
    # NB: This will either be the source directory (if TVM is run
    # inplace) or the install directory (if TVM is installed).
...
@@ -20,8 +20,8 @@ name = "tvm-frontend"
version = "0.1.0"
license = "Apache-2.0"
description = "Rust frontend support for TVM"
repository = "https://github.com/apache/incubator-tvm"
homepage = "https://github.com/apache/incubator-tvm"
readme = "README.md"
keywords = ["rust", "tvm", "nnvm"]
categories = ["api-bindings", "science"]
...
@@ -17,7 +17,7 @@
# TVM Runtime Frontend Support
This crate provides an idiomatic Rust API for the [TVM](https://github.com/apache/incubator-tvm) runtime frontend. Currently this requires **Nightly Rust** and is tested on `rustc 1.32.0-nightly`.
## What Does This Crate Offer?
...
@@ -50,7 +50,7 @@ use tvm_common::ffi;
use crate::{function, TVMArgValue};
/// Device type can be created from a supported device name. See the supported devices
/// in [TVM](https://github.com/apache/incubator-tvm).
///
/// ## Example
///
...
@@ -17,7 +17,7 @@
 * under the License.
 */
//! [TVM](https://github.com/apache/incubator-tvm) is a compiler stack for deep learning systems.
//!
//! This crate provides an idiomatic Rust API for the TVM runtime frontend.
//!
...
@@ -20,7 +20,7 @@ name = "tvm-macros"
version = "0.1.0"
license = "Apache-2.0"
description = "Proc macros used by the TVM crates."
repository = "https://github.com/apache/incubator-tvm"
readme = "README.md"
keywords = ["tvm"]
authors = ["TVM Contributors"]
...
@@ -20,7 +20,7 @@ name = "tvm-runtime"
version = "0.1.0"
license = "Apache-2.0"
description = "A static TVM runtime"
repository = "https://github.com/apache/incubator-tvm"
readme = "README.md"
keywords = ["tvm", "nnvm"]
categories = ["api-bindings", "science"]
...
@@ -296,7 +296,7 @@ pub(crate) fn sgx_join_threads() {
    ocall_packed!("__sgx_thread_group_join__", 0);
}
// @see https://github.com/apache/incubator-tvm/issues/988 for information on why this function is used.
#[no_mangle]
pub extern "C" fn TVMBackendParallelBarrier(_task_id: usize, penv: *const TVMParallelGroupEnv) {
    let barrier: &Arc<Barrier> = unsafe { &*((*penv).sync_handle as *const Arc<Barrier>) };
...
@@ -142,7 +142,7 @@ def test_conv2d_run():
                    x, w, (1, 1), "SAME"))
    # CUDA is disabled for 'direct' schedule:
    # https://github.com/apache/incubator-tvm/pull/3070#issuecomment-486597553
    # group conv2d
    dshape = (1, 32, 18, 18)
    kshape = (32, 4, 3, 3)
...
@@ -18,7 +18,7 @@
# NOTE: We name this test file to start with test_graph_tuner
# to make it execute after zero_rank tensor test cases. This
# helps avoid topi arithmetic operator overloading issue:
# https://github.com/apache/incubator-tvm/issues/3240.
# TODO: restore the file name after this issue is resolved.
import os
import copy
...
@@ -18,7 +18,7 @@
# NOTE: We name this test file to start with test_graph_tuner
# to make it execute after zero_rank tensor test cases. This
# helps avoid topi arithmetic operator overloading issue:
# https://github.com/apache/incubator-tvm/issues/3240
# TODO: restore the file name after this issue is resolved.
import tvm
...
@@ -115,7 +115,7 @@ setup(name='topi',
          "decorator",
      ],
      packages=find_packages(),
      url='https://github.com/apache/incubator-tvm',
      **setup_kwargs)
...
@@ -110,7 +110,7 @@ def _declaration_conv(cfg, data, kernel, strides, padding, dilation, layout, out_dtype):
kh, kw, _, _ = get_const_tuple(kernel.shape)
if layout == 'HWCN':
return nn.conv2d_hwcn(data, kernel, strides, padding, dilation, out_dtype)
-# FIXME - https://github.com/dmlc/tvm/issues/4122
+# FIXME - https://github.com/apache/incubator-tvm/issues/4122
# _declaration_conv_nhwc_pack expects kernel layout to be HWOI. However, the tests use HWIO
# layout. Commenting until we have clarity about the nhwc_pack implementation from the author.
# elif layout == 'NHWC' and kh == 1 and kw == 1 and kernel.dtype == "int8":
...
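To make the layout mismatch in that FIXME concrete, here is a small NumPy sketch of the difference between HWIO and HWOI kernel layouts; the shape is illustrative only and not taken from the diff.

```python
import numpy as np

# Illustrative 1x1 int8 kernel in HWIO layout: (height, width, in_ch, out_ch).
kernel_hwio = np.random.randint(-128, 127, size=(1, 1, 64, 128)).astype("int8")

# Swapping the last two axes yields the HWOI layout that
# _declaration_conv_nhwc_pack reportedly expects.
kernel_hwoi = kernel_hwio.transpose(0, 1, 3, 2)
print(kernel_hwoi.shape)  # (1, 1, 128, 64)
```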
@@ -251,7 +251,7 @@ def _schedule_conv_nhwc_pack_int8(s, cfg, data, conv_out, last):
packing of weight to make the address access be friendly to int8
intrinsic
"""
-# FIXME - https://github.com/dmlc/tvm/issues/3598
+# FIXME - https://github.com/apache/incubator-tvm/issues/3598
# pylint: disable=unreachable
return s
...
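The docstring above only gestures at what the weight packing does. As a rough, hypothetical illustration (the real x86 schedule's packing is not shown in this diff), blocking the output-channel axis so that a vector of int8 lanes sits contiguously looks like this:

```python
import numpy as np

# Hypothetical OIHW kernel; block the O axis in chunks of 16 so that the 16
# output-channel lanes an int8 dot-product intrinsic consumes are contiguous.
kernel = np.zeros((32, 4, 3, 3), dtype="int8")               # OIHW
packed = kernel.reshape(2, 16, 4, 3, 3).transpose(0, 2, 3, 4, 1)
print(packed.shape)                                          # (2, 4, 3, 3, 16)
```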
@@ -31,7 +31,7 @@ the best knob values for all required operators. When the TVM compiler compiles
these operators, it will query this log file to get the best knob values.
We also released pre-tuned parameters for some arm devices. You can go to
-`ARM CPU Benchmark <https://github.com/dmlc/tvm/wiki/Benchmark#arm-cpu>`_
+`ARM CPU Benchmark <https://github.com/apache/incubator-tvm/wiki/Benchmark#arm-cpu>`_
to see the results.
"""
...
@@ -149,7 +149,7 @@ def get_network(name, batch_size):
# (replace :code:`[HOST_IP]` with the IP address of your host machine)
#
# * For Android:
-# Follow this `readme page <https://github.com/dmlc/tvm/tree/master/apps/android_rpc>`_ to
+# Follow this `readme page <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc>`_ to
# install the TVM RPC APK on the android device. Make sure you can pass the android rpc test.
# Then you have already registered your device. During tuning, you have to go to developer option
# and enable "Keep screen awake during charging" and charge your phone to make it stable.
...
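After the APK-side registration described above, one quick way to confirm the device shows up is to query the tracker from Python. The host and port below are placeholders; 9190 is only the value commonly used in these tutorials.

```python
from tvm import rpc

# Must match the host/port the tracker was started with.
tracker = rpc.connect_tracker("0.0.0.0", 9190)
print(tracker.text_summary())  # lists registered devices by key, e.g. "android"
```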
@@ -31,7 +31,7 @@ the best knob values for all required operators. When the TVM compiler compiles
these operators, it will query this log file to get the best knob values.
We also released pre-tuned parameters for some NVIDIA GPUs. You can go to
-`NVIDIA GPU Benchmark <https://github.com/dmlc/tvm/wiki/Benchmark#nvidia-gpu>`_
+`NVIDIA GPU Benchmark <https://github.com/apache/incubator-tvm/wiki/Benchmark#nvidia-gpu>`_
to see the results.
"""
...
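Relatedly, the pre-tuned parameters mentioned above are distributed as "tophub" packages. A sketch of explicitly entering that context is below; as far as I can tell, relay.build applies it on its own when no tuning log is given, so this is illustrative rather than required.

```python
import tvm
from tvm import autotvm

target = tvm.target.cuda()
# Downloads (and caches) the community-published pre-tuned schedules for the
# target, then makes them the active dispatch context.
with autotvm.tophub.context(target):
    pass  # compile the model here, e.g. relay.build(mod, target=target)
```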
@@ -31,7 +31,7 @@ the best knob values for all required operators. When the TVM compiler compiles
these operators, it will query this log file to get the best knob values.
We also released pre-tuned parameters for some arm devices. You can go to
-`Mobile GPU Benchmark <https://github.com/dmlc/tvm/wiki/Benchmark#mobile-gpu>`_
+`Mobile GPU Benchmark <https://github.com/apache/incubator-tvm/wiki/Benchmark#mobile-gpu>`_
to see the results.
"""
...
@@ -150,7 +150,7 @@ def get_network(name, batch_size):
# (replace :code:`[HOST_IP]` with the IP address of your host machine)
#
# * For Android:
-# Follow this `readme page <https://github.com/dmlc/tvm/tree/master/apps/android_rpc>`_ to
+# Follow this `readme page <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc>`_ to
# install TVM RPC APK on the android device. Make sure you can pass the android RPC test.
# Then you have already registered your device. During tuning, you have to go to developer option
# and enable "Keep screen awake during charging" and charge your phone to make it stable.
...
@@ -49,7 +49,7 @@ and the Firefly-RK3399 for an OpenCL example.
#
# .. code-block:: bash
#
-# git clone --recursive https://github.com/dmlc/tvm
+# git clone --recursive https://github.com/apache/incubator-tvm
# cd tvm
# make runtime -j2
#
...
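Once the runtime from the snippet above is built and an RPC server (e.g. `python -m tvm.exec.rpc_server`) is running on the board, using it from the host looks roughly like this; the IP address, port, and library name are all placeholders.

```python
from tvm import rpc

# Placeholder address of the device running the RPC server.
remote = rpc.connect("192.168.0.100", 9090)

# Upload a library that was cross-compiled on the host for the board,
# then load it on the device ("net.tar" is a placeholder name).
remote.upload("net.tar")
rlib = remote.load_module("net.tar")
```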
@@ -46,7 +46,7 @@ from tvm.contrib.download import download_testdata
#
# .. code-block:: bash
#
-# git clone --recursive https://github.com/dmlc/tvm
+# git clone --recursive https://github.com/apache/incubator-tvm
# cd tvm
# docker build -t tvm.demo_android -f docker/Dockerfile.demo_android ./docker
# docker run --pid=host -h tvm -v $PWD:/workspace \
...
@@ -105,7 +105,7 @@ from tvm.contrib.download import download_testdata
# ---------------------------------------
# Now we can register our Android device to the tracker.
#
-# Follow this `readme page <https://github.com/dmlc/tvm/tree/master/apps/android_rpc>`_ to
+# Follow this `readme page <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc>`_ to
# install TVM RPC APK on the android device.
#
# Here is an example of config.mk. I enabled OpenCL and Vulkan.
...
@@ -138,7 +138,7 @@ from tvm.contrib.download import download_testdata
#
# .. note::
#
-# At this time, don't forget to `create a standalone toolchain <https://github.com/dmlc/tvm/tree/master/apps/android_rpc#architecture-and-android-standalone-toolchain>`_ .
+# At this time, don't forget to `create a standalone toolchain <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc#architecture-and-android-standalone-toolchain>`_ .
#
# for example
#
...
@@ -52,7 +52,7 @@ from tvm.contrib.download import download_testdata
#
# .. code-block:: bash
#
-# git clone --recursive https://github.com/dmlc/tvm
+# git clone --recursive https://github.com/apache/incubator-tvm
# cd tvm
# mkdir build
# cp cmake/config.cmake build
...
@@ -62,7 +62,7 @@ https://www.veripool.org/projects/verilator/wiki/Installing
## Setup in TVM
1. Install `verilator` and `sbt` as described above
-2. Get tvm `git clone https://github.com/dmlc/tvm.git`
+2. Get tvm `git clone https://github.com/apache/incubator-tvm.git`
3. Build [tvm](https://docs.tvm.ai/install/from_source.html#build-the-shared-library)
## How to run VTA TSIM examples
...
@@ -82,7 +82,7 @@ This will create ```build/libtvm_web_runtime.bc``` and ```build/libtvm_web_runtime.js```.
The general idea is to use TVM as normal and set the target to ```llvm -target=asmjs-unknown-emscripten -system-lib```.
-The following code snippet from [tests/web/prepare_test_libs.py](https://github.com/dmlc/tvm/tree/master/tests/web/prepare_test_libs.py) demonstrates
+The following code snippet from [tests/web/prepare_test_libs.py](https://github.com/apache/incubator-tvm/tree/master/tests/web/prepare_test_libs.py) demonstrates
the compilation process.
```python
...
```
@@ -114,7 +114,7 @@ The result js library is a library that contains both TVM runtime and the compiled function.
## Run the Generated Library
-The following code snippet from [tests/web/test_module_load.js](https://github.com/dmlc/tvm/tree/master/tests/web/test_module_load.js) demonstrates
+The following code snippet from [tests/web/test_module_load.js](https://github.com/apache/incubator-tvm/tree/master/tests/web/test_module_load.js) demonstrates
how to run the compiled library.
```js
...
```
...
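The elided snippet is not reproduced here, but the core of the flow the text describes, building with the emscripten system-lib target, can be sketched. Everything except the target string (which is quoted above) is an assumption rather than the actual contents of prepare_test_libs.py.

```python
import tvm

# A one-operator module built with the target string quoted above.
n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.compute(A.shape, lambda i: A[i] + 1.0, name="B")
s = tvm.create_schedule(B.op)

fadd = tvm.build(s, [A, B],
                 target="llvm -target=asmjs-unknown-emscripten -system-lib",
                 name="add_one")
# Save LLVM bitcode for emscripten to link into the final .js runtime.
fadd.save("add_one.bc")
```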