Unverified Commit b776ff39 by Tianqi Chen Committed by GitHub

[DOCS] Point docs to the ASF site. (#5178)

* [DOCS] Point docs to the ASF site.

We have migrated the main docs to the ASF site,
which will be periodically updated using the docs generated by the CI.
This points the docs to the ASF version.

* [CI] Improve the docs generation script
parent 02121383
Thanks for contributing to TVM! Please refer to the guidelines at https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-mentioning them in the pull request thread.
Thanks for contributing to TVM! Please refer to the guidelines at https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-mentioning them in the pull request thread.
@@ -20,7 +20,7 @@ TVM Contributors
TVM adopts the Apache way and is governed by merit. We believe it is important to create an inclusive community where everyone can use,
contribute to, and influence the direction of the project. We actively invite contributors who have earned the merit to be part of the development community.
See the [community structure document](http://docs.tvm.ai/contribute/community.html) for the explanation of community structure and contribution guidelines.
See the [community structure document](https://tvm.apache.org/docs/contribute/community.html) for the explanation of community structure and contribution guidelines.
## Mentors
@@ -17,7 +17,7 @@
<img src=https://raw.githubusercontent.com/apache/incubator-tvm-site/master/images/logo/tvm-logo-small.png width=128/> Open Deep Learning Compiler Stack
==============================================
[Documentation](https://docs.tvm.ai) |
[Documentation](https://tvm.apache.org/docs) |
[Contributors](CONTRIBUTORS.md) |
[Community](https://tvm.apache.org/community) |
[Release Notes](NEWS.md)
@@ -36,7 +36,7 @@ License
Contribute to TVM
-----------------
TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community.
Check out the [Contributor Guide](https://docs.tvm.ai/contribute/)
Check out the [Contributor Guide](https://tvm.apache.org/docs/contribute/)
Acknowledgement
---------------
@@ -122,7 +122,7 @@ If everything goes well, you will find compile tools in `/opt/android-toolchain-
### Place compiled model on Android application assets folder
Follow the instructions [here](http://docs.tvm.ai/deploy/android.html) to get a compiled model for the Android target.
Follow the instructions [here](https://tvm.apache.org/docs/deploy/android.html) to get a compiled model for the Android target.
Copy the compiled model files deploy_lib.so, deploy_graph.json, and deploy_param.params to apps/android_deploy/app/src/main/assets/ and apply the TVM flavor changes in [java](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java#L81)
@@ -35,13 +35,13 @@ In general, the performance should also be good.
It is recommended that you run tuning by yourself if you have your customized network or devices.
Please follow the tutorial for
[NVIDIA GPU](https://docs.tvm.ai/tutorials/autotvm/tune_conv2d_cuda.html),
[ARM CPU](https://docs.tvm.ai/tutorials/autotvm/tune_relay_arm.html),
[Mobile GPU](https://docs.tvm.ai/tutorials/autotvm/tune_relay_mobile_gpu.html).
[NVIDIA GPU](https://tvm.apache.org/docs/tutorials/autotvm/tune_conv2d_cuda.html),
[ARM CPU](https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_arm.html),
[Mobile GPU](https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_mobile_gpu.html).
### NVIDIA GPU
Build TVM with LLVM and CUDA enabled. [Help](https://docs.tvm.ai/install/from_source.html)
Build TVM with LLVM and CUDA enabled. [Help](https://tvm.apache.org/docs/install/from_source.html)
```bash
python3 gpu_imagenet_bench.py --model 1080ti
@@ -58,7 +58,7 @@ You need to use it for reproducing benchmark results.
**Note**: We use llvm-4.0 in our tuning environment. A mismatch of the LLVM version between tuning and deployment can influence the performance, so you have to use the same version for reproduction.
0. Build TVM with LLVM enabled. [Help](https://docs.tvm.ai/install/from_source.html)
0. Build TVM with LLVM enabled. [Help](https://tvm.apache.org/docs/install/from_source.html)
1. Start an RPC Tracker on the host machine
```bash
@@ -67,7 +67,7 @@ python3 -m tvm.exec.rpc_tracker
2. Register devices to the tracker
* For Linux device
* Build tvm runtime on your device [Help](https://docs.tvm.ai/tutorials/frontend/deploy_model_on_rasp.html#build-tvm-runtime-on-device)
* Build tvm runtime on your device [Help](https://tvm.apache.org/docs/tutorials/frontend/deploy_model_on_rasp.html#build-tvm-runtime-on-device)
* Register your device to tracker by
```bash
python3 -m tvm.exec.rpc_server --tracker=[HOST_IP]:9190 --key=[DEVICE_KEY]
@@ -123,7 +123,7 @@ python3 -m tvm.exec.rpc_tracker
### AMD GPU
Build TVM with LLVM and ROCm enabled. [Help](https://docs.tvm.ai/install/from_source.html)
Build TVM with LLVM and ROCm enabled. [Help](https://tvm.apache.org/docs/install/from_source.html)
```bash
python3 gpu_imagenet_bench.py --model gfx900 --target rocm
```
@@ -26,4 +26,4 @@ Type the following command to run the sample code under the current folder (need
./run_example.sh
```
Check out [How to Deploy TVM Modules](http://docs.tvm.ai/deploy/cpp_deploy.html) for more information.
Check out [How to Deploy TVM Modules](https://tvm.apache.org/docs/deploy/cpp_deploy.html) for more information.
@@ -2,7 +2,7 @@ TVM Documentations
==================
This folder contains the source of the TVM documentation.
- A hosted version of the docs is at http://docs.tvm.ai
- A hosted version of the docs is at https://tvm.apache.org/docs
- pip install sphinx>=1.5.5 sphinx-gallery sphinx_rtd_theme matplotlib Image recommonmark "Pillow<7"
- Build tvm first in the root folder.
- To build locally, you need to enable USE_CUDA, USE_OPENCL, LLVM_CONFIG in config.mk and then type "make html" in this folder.
@@ -26,8 +26,8 @@ We present three installation guides, each extending on the previous one:
## VTA Simulator Installation
You need [TVM installed](https://docs.tvm.ai/install/index.html) on your machine.
For a quick and easy start, use the pre-built [TVM Docker image](https://docs.tvm.ai/install/docker.html).
You need [TVM installed](https://tvm.apache.org/docs/install/index.html) on your machine.
For a quick and easy start, check out the [Docker Guide](https://tvm.apache.org/docs/install/docker.html).
You'll need to set the following paths to use VTA:
```bash
@@ -60,7 +60,7 @@ python <tvm root>/vta/tests/python/integration/test_benchmark_topi_conv2d.py
> Note: You'll notice that for every convolution layer, the throughput gets reported in GOPS. These numbers are actually the computational throughput that the simulator achieves, by evaluating the convolutions in software.
You are invited to try out our [VTA programming tutorials](https://docs.tvm.ai/vta/tutorials/index.html).
You are invited to try out our [VTA programming tutorials](https://tvm.apache.org/docs/vta/tutorials/index.html).
### Advanced Configuration (optional)
@@ -193,7 +193,7 @@ python <tvm root>/vta/tests/python/integration/test_benchmark_topi_conv2d.py
The performance metrics measured on the Pynq board will be reported for each convolutional layer.
You can also try out our [VTA programming tutorials](https://docs.tvm.ai/vta/tutorials/index.html).
You can also try out our [VTA programming tutorials](https://tvm.apache.org/docs/vta/tutorials/index.html).
## VTA Custom Test Setup for Intel FPGA
@@ -30,7 +30,7 @@ This folder contains the Java interface for TVM runtime. It brings TVM runtime t
- JDK 1.6+. Oracle JDK and OpenJDK are well tested.
- Maven 3 for build.
- LLVM (TVM4J needs LLVM support. Please refer to [build-the-shared-library](https://docs.tvm.ai/install/from_source.html#build-the-shared-library) for how to enable LLVM support.)
- LLVM (TVM4J needs LLVM support. Please refer to [build-the-shared-library](https://tvm.apache.org/docs/install/from_source.html#build-the-shared-library) for how to enable LLVM support.)
### Modules
@@ -45,7 +45,7 @@ TVM4J contains three modules:
### Build
First, please refer to the [Installation Guide](http://docs.tvm.ai/install/) and build the runtime shared library from the C++ code (libtvm\_runtime.so for Linux and libtvm\_runtime.dylib for OSX).
First, please refer to the [Installation Guide](https://tvm.apache.org/docs/install/) and build the runtime shared library from the C++ code (libtvm\_runtime.so for Linux and libtvm\_runtime.dylib for OSX).
Then you can compile tvm4j by
@@ -109,7 +109,7 @@ and the model correctly predicts the input image as **tiger cat**.
## Installations
Please follow the TVM [installation guide](https://docs.tvm.ai/install/index.html), `export TVM_HOME=/path/to/tvm`, and add `libtvm_runtime` to your `LD_LIBRARY_PATH`.
Please follow the TVM [installation guide](https://tvm.apache.org/docs/install/index.html), `export TVM_HOME=/path/to/tvm`, and add `libtvm_runtime` to your `LD_LIBRARY_PATH`.
*Note:* To run the end-to-end examples and tests, `tvm` and `topi` need to be added to your `PYTHONPATH`; this is automatic when TVM is installed via an Anaconda environment.
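The environment setup described above can be sketched as a short shell snippet. Note that `/path/to/tvm` is a placeholder from the text, and the `build` and `topi/python` subdirectories are assumptions based on the standard TVM source layout:

```shell
# Hedged sketch of the environment setup described above.
# /path/to/tvm is a placeholder; adjust it to your actual checkout.
export TVM_HOME=/path/to/tvm
# Make libtvm_runtime discoverable (build directory assumed to be $TVM_HOME/build).
export LD_LIBRARY_PATH=$TVM_HOME/build:$LD_LIBRARY_PATH
# Put tvm and topi on PYTHONPATH for the end-to-end examples and tests.
export PYTHONPATH=$TVM_HOME/python:$TVM_HOME/topi/python:$PYTHONPATH
```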
@@ -22,7 +22,7 @@ This end-to-end example shows how to:
* use the provided Rust frontend API to test for an input image
To run the example with pretrained resnet weights, first `tvm` and `mxnet` must be installed for the python build. To install mxnet for cpu, run `pip install mxnet`
and to install `tvm` with `llvm` follow the [TVM installation guide](https://docs.tvm.ai/install/index.html).
and to install `tvm` with `llvm` follow the [TVM installation guide](https://tvm.apache.org/docs/install/index.html).
* **Build the example**: `cargo build`
@@ -20,8 +20,8 @@ set -e
set -u
# cleanup old states
rm -rf docs/_build/html
mkdir -p docs/_build/html
rm -rf docs/_build/html/*
rm -rf docs/gen_modules
rm -rf docs/doxygen
@@ -31,24 +31,30 @@ rm -rf docs/vta/tutorials
# cleanup stale log files
find . -type f -path "*.log" | xargs rm -f
find . -type f -path "*.pyc" | xargs rm -f
cd docs
PYTHONPATH=`pwd`/../python make html
cd ..
rm -rf _docs
mv docs/_build/html _docs
# C++ doc
make doc
rm -f docs/doxygen/html/*.map docs/doxygen/html/*.md5
mv docs/doxygen/html docs/_build/html/doxygen
mv docs/doxygen/html _docs/doxygen
# JS doc
jsdoc -c web/.jsdoc_conf.json web/tvm_runtime.js web/README.md
mv out docs/_build/html/jsdoc
mv out _docs/jsdoc
# Java doc
make javadoc
mv jvm/core/target/site/apidocs docs/_build/html/javadoc
mv jvm/core/target/site/apidocs _docs/javadoc
find . -type f -path "*.pyc" | xargs rm -f
cd docs
PYTHONPATH=`pwd`/../python make html
cd _build/html
tar czf docs.tgz *
mv docs.tgz ../../../
echo "Start creating the docs tarball.."
# make the tarball
tar -C _docs -czvf docs.tgz .
echo "Finish creating the docs tarball"
du -h docs.tgz
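The new packaging step above (`tar -C _docs -czvf docs.tgz .`) archives the *contents* of `_docs` rather than the directory itself, so files land at the tarball root. A minimal standalone demo of that `tar -C` pattern, using a temporary directory rather than the real docs build:

```shell
# Demo of the "tar -C <dir> ... ." packaging pattern in a temp dir.
demo=$(mktemp -d)
mkdir -p "$demo/_docs"
echo "<html></html>" > "$demo/_docs/index.html"
# Archive the contents of _docs, so index.html sits at the tarball root.
tar -C "$demo/_docs" -czf "$demo/docs.tgz" .
# List the archive contents: entries are relative to _docs, e.g. ./index.html
tar -tzf "$demo/docs.tgz"
```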
@@ -86,7 +86,7 @@ def matmul_nn(A, B, L, dtype='float16', layout='NN'):
# (2) The warp tile size is not 16x16x16 on CUDA9, or not one of {16x16x16, 32x8x16, 8x32x16} on CUDA version >= 10.0.
#
# In this schedule, storage_align is used to reduce bank conflicts of shared memory. Please refer to this
# `doc <https://docs.tvm.ai/api/python/schedule.html#tvm.te.schedule.Stage.storage_align>`_
# `doc <https://tvm.apache.org/docs/api/python/schedule.html#tvm.te.schedule.Stage.storage_align>`_
# for the usage of the storage_align primitive. In short, we need to add an offset to some shared memory buffer
# to reduce bank conflicts.
# According to the `wmma doc <https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#wmma-description>`_,
@@ -30,4 +30,4 @@ The key features of VTA include:
- Customized and extensible TVM compiler back-end.
- Flexible RPC support to ease deployment, and program FPGAs with the convenience of Python.
Learn more about VTA [here](https://docs.tvm.ai/vta/index.html).
\ No newline at end of file
Learn more about VTA [here](https://tvm.apache.org/docs/vta/index.html).
\ No newline at end of file