Unverified commit b776ff39 by Tianqi Chen, committed by GitHub

[DOCS] Point docs to the ASF site. (#5178)

* [DOCS] Point docs to the ASF site.

We have migrated the main docs to the ASF site,
which will be periodically updated using the docs generated by the CI.
This points the docs to the ASF version.

* [CI] Improve the docs generation script
parent 02121383
-Thanks for contributing to TVM! Please refer to guideline https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @ them in the pull request thread.
+Thanks for contributing to TVM! Please refer to guideline https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @ them in the pull request thread.
@@ -20,7 +20,7 @@ TVM Contributors
TVM adopts the Apache way and governs by merit. We believe that it is important to create an inclusive community where everyone can use,
contribute to, and influence the direction of the project. We actively invite contributors who have earned the merit to be part of the development community.
-See the [community structure document](http://docs.tvm.ai/contribute/community.html) for the explanation of community structure and contribution guidelines.
+See the [community structure document](https://tvm.apache.org/docs/contribute/community.html) for the explanation of community structure and contribution guidelines.
## Mentors
......
@@ -31,7 +31,7 @@ to that issue so it can get added.
### Relay in Production
Relay is a functional, differentiable programming language designed to be an expressive intermediate representation for machine learning systems. Relay supports algebraic data types, closures, control flow, and recursion, allowing it to directly represent more complex models than computation graph-based IRs (e.g., NNVM) can. In TVM v0.6, Relay is in a stable phase and is ready for production. (A minimal sketch follows this list.)
-* Algebraic Data Types (ADT) support (#2442, #2575). ADT provides an expressive, efficient, and safe way to realize recursive computation (e.g., RNN). Refer to https://docs.tvm.ai/langref/relay_adt.html for more information.
+* Algebraic Data Types (ADT) support (#2442, #2575). ADT provides an expressive, efficient, and safe way to realize recursive computation (e.g., RNN). Refer to https://tvm.apache.org/docs/langref/relay_adt.html for more information.
* Pass manager for Relay (#2546, #3226, #3234, #3191)
* Most frameworks have been supported in Relay, including ONNX, Keras, TensorFlow, Caffe2, CoreML, NNVMv1, and MXNet (#2246).
* Explicitly manifest memory and tensor allocations in Relay. (#3560)
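For readers new to Relay, here is a minimal illustrative sketch (not part of the release notes) of the control-flow expressiveness described above, written with the Relay Python API; the variable names are arbitrary:

```python
import tvm
from tvm import relay

# A conditional is a first-class Relay expression, something a pure
# dataflow-graph IR such as NNVM cannot represent directly.
cond = relay.var("c", shape=(), dtype="bool")
x = relay.var("x", shape=(2, 2), dtype="float32")
body = relay.If(cond, relay.add(x, x), relay.multiply(x, x))
func = relay.Function([cond, x], body)

mod = tvm.IRModule.from_expr(func)
print(mod)  # prints the Relay text form of the module
```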
@@ -75,7 +75,7 @@ Relay is designed to natively support first-order and higher-order differentiation...
Low-bit inference is becoming more and more popular, as it benefits both performance and storage usage. TVM now supports two types of quantization: (1) automatic quantization, which takes a floating-point model, performs per-layer calibration, and generates a low-bit model; (2) importing pre-quantized models from TensorFlow and MXNet, with a new dialect, QNN, introduced to handle further lowering to normal operators. (A sketch of the automatic flow follows this list.)
* Automatic Quantization
  - Low-bit automatic quantization supported. (#2116). The workflow includes annotation, calibration and transformation.
  - Refactor quantization codebase and fix model accuracy. (#3543)
  - KL-divergence-based per-layer calibration. (#3538)
  - Add option to select which convolution layers are quantized. (#3173)
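As an illustration of that flow, here is a hedged sketch (the tiny conv model and its values are stand-ins, not from the release notes):

```python
import numpy as np
import tvm
from tvm import relay

# Stand-in model: a single conv2d whose weight is bound as a parameter.
x = relay.var("x", shape=(1, 3, 32, 32), dtype="float32")
w = relay.var("w", shape=(8, 3, 3, 3), dtype="float32")
func = relay.Function([x, w], relay.nn.conv2d(x, w, padding=(1, 1)))
mod = tvm.IRModule.from_expr(func)
params = {"w": tvm.nd.array(np.random.uniform(-1, 1, (8, 3, 3, 3)).astype("float32"))}

# qconfig exposes the knobs mentioned above; skip_conv_layers=[] forces even
# the first conv layer to be quantized (cf. #3173), and global_scale picks a
# simple calibration instead of the KL-divergence-based one.
with relay.quantize.qconfig(global_scale=8.0, skip_conv_layers=[]):
    qmod = relay.quantize.quantize(mod, params=params)
print(qmod)  # annotated, calibrated, and realized low-bit module
```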
@@ -164,14 +164,14 @@ Rust language support in TVM includes two parts. 1. The frontend wraps the current...
* Vision operator `roi_align` (#2618).
* `where` operator for MXNet (#2647).
* Deformable conv2d (#2908)
* Faster-RCNN Proposal OP (#2725)
* ROI Pool operator (#2811)
* Gluoncv SSD support on CPU (#2353)
* shape, reverse, and sign op (#2749, #2800, #2775)
* tile and repeat op (#2720)
* logical operators (#2743, #2453)
* stack op (#2729)
* NCHWc upsampling (#2806)
* clip and wrap mode support in take (#2858)
* AlterLayout support for `intel_graphics` conv2d, depthwise conv2d (#2729, #2806)
* Add foldr1 operator (#2928)
@@ -215,7 +215,7 @@ Rust language support in TVM includes two parts. 1. The frontend wraps the current...
### Frontend and User Interface
* Frontend darknet (#2773)
* Support tf.gather (#2935)
* Support tf.where (#2936)
* Adding ADD operator to tflite frontend for compiling the MobileNetV2 (#2919)
* Support SpaceToBatchND/BatchToSpaceND in Tensorflow frontend (#2943)
@@ -281,7 +281,7 @@ Rust language support in TVM includes two parts. 1. The frontend wraps the current...
### Runtime and Backend Support
* Make external library extend TVM's NDArray more easily (#2613).
* Improvements for NNPACK integration, including CI tests and winograd (#2846, #2868, #2856, #2721)
* Improvements for OpenCL runtime (#2741, #2737)
* GraphRuntime: Enable sharing parameters of a model among multiple threads (#3384)
* Android runtime argsort support (#3472)
@@ -343,7 +343,7 @@ Rust language support in TVM includes two parts. 1. The frontend wraps the current...
* Higher-order reverse-mode automatic differentiation that works with control flow (#2496)
* Integer arithmetic analyzers, including modular set analysis, const integer bound analysis, and rewrite simplifier (#2904, #2851, #2768, #2722, #2668, #2860)
* Improve operator fusion for TupleGetItem in Relay (#2914, #2929)
* Compute FLOP of autotvm template for int8 models (#2776)
* Common subexpression elimination pass in Relay (#2639)
* Improve quantization in Relay (#2723)
* Refactor `build_func` in measure module of autotvm to better support cross compiler (#2927)
@@ -437,12 +437,12 @@ Rust language support in TVM includes two parts. 1. The frontend wraps the current...
* Relay now supports saving and loading parameter dictionaries. (#2620)
* Add `max_num_threads` to Hybrid Script, which allows users to get the max number of threads for GPU targets (#2672).
* Improvements for tensorflow frontend (#2830, #2757, #2586), including decompiling tf control flow (#2830)
* Improvements for mxnet frontend (#2844, #2777, #2772, #2706, #2704, #2709, #2739)
* Improvements for keras frontend (#2842, #2854)
* Improvements for DarkNet frontend (#2673)
* Improvements for ONNX frontend (#2843, #2840)
* Better profile result dump in Chrome Tracing format (#2922, #2863)
* Unified error handling in NNVM and Relay frontends (#2828)
* Improve NNVM to Relay conversion (#2734)
* Remove `input_0d_mismatch` special handling for TF frontend (#3087)
* Bumped ONNX version from 1.1.0 to 1.4.1 (#3286)
@@ -509,7 +509,7 @@ Rust language support in TVM includes two parts. 1. The frontend wraps the current...
* Documentation on operators (#2761)
* Add gradient operator tutorial docs (#2751)
* Add compiler pass tutorial docs (#2746)
* Add Android Tutorial (#2977)
* Developer documentation for InferBound pass (#3126)
* Add missing targets to `target_name` documentation (#3128)
* Various documentation improvements (#3133)
@@ -540,10 +540,10 @@ Rust language support in TVM includes two parts. 1. The frontend wraps the current...
### Build and Test
* Increase the robustness of CI tests (#2841, #2798, #2793, #2788, #2781, #2727, #2710, #2711, #2923)
* Improve conda build (#2742)
* Add caffe2 nnvm frontend to CI (#3018)
* Use bridge network and expose port on macOS when launching docker image (#3086)
* Run DarkNet tests (#2673)
* Add file type check (#3116)
* Always run cpptest during build to ensure library correctness (#3147)
* Handle more file types in ASF header (#3235)
@@ -641,41 +641,41 @@ Rust language support in TVM includes two parts. 1. The frontend wraps the current...
* [Tensor Expression] Fix missing reduction init predicates. (#2495)
* [Relay] Fix missing argument for NCHWc in Relay. (#2627)
* [TOPI] Fix `Nms_ir` data race. (#2600)
* Fix `compute_inline` with multiple outputs (#2934)
* [TEXPR][PASS] Fix thread all-reduce to avoid write-after-read hazard (#2937)
* [FRONTEND][TENSORFLOW] Bug fix for tensorflow official slim models. (#2864)
* [FRONTEND][ONNX] Some bug fixes and Shape operator fixed for relay. (#2850)
* Turn on `USE_SORT` by default (#2916)
* [DOCKER] Upgrade ci-cpu to latest v0.50 (#2901)
* [TESTS] Import script robustness (set -u) (#2896)
* [Relay] Fix name of bias in testing.mlp (#2892)
* [TESTS] Improve script robustness (#2893)
* Add dense schedules to `__init__` for cpu (#2855)
* [Apps] [howto_deploy] Fix cxx-flags order and build directory (#2888)
* [Relay] Add TVM_DLL for ANF/GNF conversion (#2883)
* [Relay] Fix Relay ARM CPU depthwise spatial pack schedule alter op layout issue. (#2861)
* Fix setting up hints for getaddrinfo (#2872)
* Add missing sgx includes (#2878)
* Fix error reporting for missing axis (#2835)
* Fix an OrderedDict initialization bug. (#2862)
* Fix Xcode 10 metal compile error (#2836)
* tvmrpc: Fix includes (#2825)
* Fix `init_proj.py`: Team ID expected (#2824)
* [DOCKER] Fix git clone failure. (#2816)
* Upgrade java style-check due to CVE-2019-9658 (#2817)
* [Relay][Quantization] Fix duplicated simulated quantization (#2803)
* [Bugfix] Repeat and tile bug fixed, relay tests added (#2804)
* Fix caffe2 relay frontend (#2733)
* Fix a bug in the nnvm-to-relay converter. (#2756)
* Ensure loop count is a constant before trying to unroll. (#2797)
* xcode.py: Decode bytes before output (#2833)
* [WIN] Fix a bug in `find_llvm` when specifying llvm-config (#2758)
* [DLPACK] Fix flaky ctypes support (#2759)
* [Bugfix][Relay][Frontend] Fix bug in mxnet converter for `slice_like` (#2744)
* [DOCS] Fix tutorial (#2724)
* [TOPI][Relay] Fix default `out_dtype` for `conv2d_NCHWc` and Relay (#2702)
* [Relay] Fix checkwellform (#2705)
* Fix prelu so it can be used on 2D input, and add one test (#2875)
* [CODEGEN][OPENCL] Fix compile error about ternary expression. (#2821)
* Fix Placeholder issue (#2834)
* Fix makedirs() condition in contrib (#2942)
......
@@ -17,7 +17,7 @@
<img src=https://raw.githubusercontent.com/apache/incubator-tvm-site/master/images/logo/tvm-logo-small.png width=128/> Open Deep Learning Compiler Stack
==============================================
-[Documentation](https://docs.tvm.ai) |
+[Documentation](https://tvm.apache.org/docs) |
[Contributors](CONTRIBUTORS.md) |
[Community](https://tvm.apache.org/community) |
[Release Notes](NEWS.md)
@@ -36,7 +36,7 @@ License
Contribute to TVM
-----------------
TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community.
-Check out the [Contributor Guide](https://docs.tvm.ai/contribute/)
+Check out the [Contributor Guide](https://tvm.apache.org/docs/contribute/)
Acknowledgement
---------------
......
@@ -122,7 +122,7 @@ If everything goes well, you will find compile tools in `/opt/android-toolchain-
### Place compiled model on Android application assets folder
-Follow the instructions [here](http://docs.tvm.ai/deploy/android.html) to get a compiled model for the Android target.
+Follow the instructions [here](https://tvm.apache.org/docs/deploy/android.html) to get a compiled model for the Android target.
Copy the compiled model files deploy_lib.so, deploy_graph.json, and deploy_param.params to apps/android_deploy/app/src/main/assets/ and apply the TVM flavor changes in [MainActivity.java](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java#L81).
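For concreteness, a sketch of that copy step, run from the directory containing the compiled artifacts (the destination path is the one named above):

```bash
# copy the compiled model into the Android app's assets folder
cp deploy_lib.so deploy_graph.json deploy_param.params \
   apps/android_deploy/app/src/main/assets/
```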
......
@@ -35,13 +35,13 @@ In general, the performance should also be good.
It is recommended that you run tuning by yourself if you have your customized network or devices.
Please follow the tutorial for
-[NVIDIA GPU](https://docs.tvm.ai/tutorials/autotvm/tune_conv2d_cuda.html),
-[ARM CPU](https://docs.tvm.ai/tutorials/autotvm/tune_relay_arm.html),
-[Mobile GPU](https://docs.tvm.ai/tutorials/autotvm/tune_relay_mobile_gpu.html).
+[NVIDIA GPU](https://tvm.apache.org/docs/tutorials/autotvm/tune_conv2d_cuda.html),
+[ARM CPU](https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_arm.html),
+[Mobile GPU](https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_mobile_gpu.html).

### NVIDIA GPU

-Build TVM with LLVM and CUDA enabled. [Help](https://docs.tvm.ai/install/from_source.html)
+Build TVM with LLVM and CUDA enabled. [Help](https://tvm.apache.org/docs/install/from_source.html)
```bash
python3 gpu_imagenet_bench.py --model 1080ti
```
@@ -58,7 +58,7 @@ You need to use it for reproducing benchmark results.
**Note**: We use llvm-4.0 in our tuning environment. A mismatch of the LLVM version between tuning and deployment can influence performance, so you have to use the same version for reproduction.

-0. Build TVM with LLVM enabled. [Help](https://docs.tvm.ai/install/from_source.html)
+0. Build TVM with LLVM enabled. [Help](https://tvm.apache.org/docs/install/from_source.html)
1. Start an RPC Tracker on the host machine
```bash
python3 -m tvm.exec.rpc_tracker
```
@@ -67,7 +67,7 @@ python3 -m tvm.exec.rpc_tracker
2. Register devices to the tracker
* For Linux device
-  * Build the TVM runtime on your device. [Help](https://docs.tvm.ai/tutorials/frontend/deploy_model_on_rasp.html#build-tvm-runtime-on-device)
+  * Build the TVM runtime on your device. [Help](https://tvm.apache.org/docs/tutorials/frontend/deploy_model_on_rasp.html#build-tvm-runtime-on-device)
  * Register your device to the tracker by
```bash
python3 -m tvm.exec.rpc_server --tracker=[HOST_IP]:9190 --key=[DEVICE_KEY]
```
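Once devices are registered, a tuning or benchmark script on the host can request one through the tracker. A minimal sketch, assuming a device registered with key `rpi3b` (the host and key are placeholders, as above):

```python
from tvm import rpc

# Connect to the tracker started above ([HOST_IP]:9190) and request a
# session on any free device registered under the given key.
tracker = rpc.connect_tracker("HOST_IP", 9190)
remote = tracker.request("rpi3b")
print(remote.cpu(0))  # a handle to the remote device context
```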
@@ -123,7 +123,7 @@ python3 -m tvm.exec.rpc_tracker
### AMD GPU

-Build TVM with LLVM and ROCm enabled. [Help](https://docs.tvm.ai/install/from_source.html)
+Build TVM with LLVM and ROCm enabled. [Help](https://tvm.apache.org/docs/install/from_source.html)
```bash
python3 gpu_imagenet_bench.py --model gfx900 --target rocm
```
@@ -26,4 +26,4 @@ Type the following command to run the sample code under the current folder (need
```bash
./run_example.sh
```
-Check out [How to Deploy TVM Modules](http://docs.tvm.ai/deploy/cpp_deploy.html) for more information.
+Check out [How to Deploy TVM Modules](https://tvm.apache.org/docs/deploy/cpp_deploy.html) for more information.
@@ -2,7 +2,7 @@ TVM Documentations
==================
This folder contains the source of TVM documents.
-- A hosted version of the docs is at http://docs.tvm.ai
+- A hosted version of the docs is at https://tvm.apache.org/docs
- pip install "sphinx>=1.5.5" sphinx-gallery sphinx_rtd_theme matplotlib Image recommonmark "Pillow<7"
- Build tvm first in the root folder.
- To build locally, you need to enable USE_CUDA, USE_OPENCL, LLVM_CONFIG in config.mk and then type "make html" in this folder.
......
@@ -26,8 +26,8 @@ We present three installation guides, each extending on the previous one:
## VTA Simulator Installation

-You need [TVM installed](https://docs.tvm.ai/install/index.html) on your machine.
+You need [TVM installed](https://tvm.apache.org/docs/install/index.html) on your machine.
-For a quick and easy start, use the pre-built [TVM Docker image](https://docs.tvm.ai/install/docker.html).
+For a quick and easy start, check out the [Docker Guide](https://tvm.apache.org/docs/install/docker.html).

You'll need to set the following paths to use VTA:
@@ -60,7 +60,7 @@ python <tvm root>/vta/tests/python/integration/test_benchmark_topi_conv2d.py
> Note: You'll notice that for every convolution layer, the throughput gets reported in GOPS. These numbers are actually the computational throughput that the simulator achieves, by evaluating the convolutions in software.

-You are invited to try out our [VTA programming tutorials](https://docs.tvm.ai/vta/tutorials/index.html).
+You are invited to try out our [VTA programming tutorials](https://tvm.apache.org/docs/vta/tutorials/index.html).

### Advanced Configuration (optional)
@@ -193,7 +193,7 @@ python <tvm root>/vta/tests/python/integration/test_benchmark_topi_conv2d.py
The performance metrics measured on the Pynq board will be reported for each convolutional layer.

-You can also try out our [VTA programming tutorials](https://docs.tvm.ai/vta/tutorials/index.html).
+You can also try out our [VTA programming tutorials](https://tvm.apache.org/docs/vta/tutorials/index.html).

## VTA Custom Test Setup for Intel FPGA
......
@@ -30,7 +30,7 @@ This folder contains the Java interface for TVM runtime. It brings TVM runtime to
- JDK 1.6+. Oracle JDK and OpenJDK are well tested.
- Maven 3 for build.
-- LLVM (TVM4J needs LLVM support. Please refer to [build-the-shared-library](https://docs.tvm.ai/install/from_source.html#build-the-shared-library) for how to enable LLVM support.)
+- LLVM (TVM4J needs LLVM support. Please refer to [build-the-shared-library](https://tvm.apache.org/docs/install/from_source.html#build-the-shared-library) for how to enable LLVM support.)

### Modules
@@ -45,7 +45,7 @@ TVM4J contains three modules:
### Build

-First please refer to the [Installation Guide](http://docs.tvm.ai/install/) and build the runtime shared library from the C++ code (libtvm_runtime.so for Linux and libtvm_runtime.dylib for OSX).
+First please refer to the [Installation Guide](https://tvm.apache.org/docs/install/) and build the runtime shared library from the C++ code (libtvm_runtime.so for Linux and libtvm_runtime.dylib for OSX).
Then you can compile tvm4j by
......
@@ -109,7 +109,7 @@ and the model correctly predicts the input image as **tiger cat**.
## Installations

-Please follow TVM [installations](https://docs.tvm.ai/install/index.html), `export TVM_HOME=/path/to/tvm` and add `libtvm_runtime` to your `LD_LIBRARY_PATH`.
+Please follow TVM [installations](https://tvm.apache.org/docs/install/index.html), `export TVM_HOME=/path/to/tvm` and add `libtvm_runtime` to your `LD_LIBRARY_PATH`.
*Note:* To run the end-to-end examples and tests, `tvm` and `topi` need to be added to your `PYTHONPATH`; this happens automatically when TVM is installed in an Anaconda environment.
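A minimal sketch of that setup, assuming TVM was built in `$TVM_HOME/build` (adjust the paths to your checkout):

```bash
export TVM_HOME=/path/to/tvm
# libtvm_runtime is assumed to have been built into $TVM_HOME/build
export LD_LIBRARY_PATH=$TVM_HOME/build:$LD_LIBRARY_PATH
# make `tvm` and `topi` importable for the end-to-end examples and tests
export PYTHONPATH=$TVM_HOME/python:$TVM_HOME/topi/python:$PYTHONPATH
```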
......
@@ -22,7 +22,7 @@ This end-to-end example shows how to:
* use the provided Rust frontend API to test for an input image

To run the example with pretrained resnet weights, first `tvm` and `mxnet` must be installed for the python build. To install mxnet for cpu, run `pip install mxnet`,
-and to install `tvm` with `llvm` follow the [TVM installation guide](https://docs.tvm.ai/install/index.html).
+and to install `tvm` with `llvm` follow the [TVM installation guide](https://tvm.apache.org/docs/install/index.html).

* **Build the example**: `cargo build`
......
@@ -20,8 +20,8 @@ set -e
set -u

# cleanup old states
-rm -rf docs/_build/html
mkdir -p docs/_build/html
+rm -rf docs/_build/html/*
rm -rf docs/gen_modules
rm -rf docs/doxygen
@@ -31,24 +31,30 @@ rm -rf docs/vta/tutorials

# cleanup stale log files
find . -type f -path "*.log" | xargs rm -f
+find . -type f -path "*.pyc" | xargs rm -f
+
+cd docs
+PYTHONPATH=`pwd`/../python make html
+cd ..
+rm -rf _docs
+mv docs/_build/html _docs

# C++ doc
make doc
rm -f docs/doxygen/html/*.map docs/doxygen/html/*.md5
-mv docs/doxygen/html docs/_build/html/doxygen
+mv docs/doxygen/html _docs/doxygen

# JS doc
jsdoc -c web/.jsdoc_conf.json web/tvm_runtime.js web/README.md
-mv out docs/_build/html/jsdoc
+mv out _docs/jsdoc

# Java doc
make javadoc
-mv jvm/core/target/site/apidocs docs/_build/html/javadoc
+mv jvm/core/target/site/apidocs _docs/javadoc

-find . -type f -path "*.pyc" | xargs rm -f
+echo "Start creating the docs tarball.."
# make the tarball
-cd docs
-PYTHONPATH=`pwd`/../python make html
-cd _build/html
-tar czf docs.tgz *
-mv docs.tgz ../../../
+tar -C _docs -czvf docs.tgz .
+echo "Finish creating the docs tarball"
+du -h docs.tgz
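As a usage note, a downstream job (such as the periodic ASF-site update mentioned in the commit message) could then unpack the tarball produced above; the destination directory here is illustrative:

```bash
# unpack the docs tarball produced by the script into a staging directory
mkdir -p /tmp/tvm-docs
tar -xzf docs.tgz -C /tmp/tvm-docs
```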
@@ -86,7 +86,7 @@ def matmul_nn(A, B, L, dtype='float16', layout='NN'):
# (2) The warp tile size is not 16x16x16 on CUDA9, or not one of {16x16x16, 32x8x16, 8x32x16} on CUDA version >= 10.0.
#
# In this schedule, storage_align is used to reduce bank conflicts of shared memory. Please refer to this
-# `doc <https://docs.tvm.ai/api/python/schedule.html#tvm.te.schedule.Stage.storage_align>`_
+# `doc <https://tvm.apache.org/docs/api/python/schedule.html#tvm.te.schedule.Stage.storage_align>`_
# for the usage of storage_align primitive. In short, we need to add an offset to some shared memory buffer
# to reduce bank conflicts.
# According to the `wmma doc <https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#wmma-description>`_,
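A standalone sketch of the `storage_align` primitive referenced here (the shapes and alignment factors are illustrative, not the tutorial's values):

```python
import tvm
from tvm import te

# Cache a tensor in shared memory, then pad each row so consecutive rows
# start in different shared-memory banks, reducing bank conflicts.
A = te.placeholder((64, 64), name="A", dtype="float32")
B = te.compute((64, 64), lambda i, j: A[i, j] + 1.0, name="B")
s = te.create_schedule(B.op)
AS = s.cache_read(A, "shared", [B])
# storage_align(axis, factor, offset): rows along axis 0 are padded to a
# multiple of 32 elements plus an offset of 8.
s[AS].storage_align(AS.op.axis[0], 32, 8)
print(tvm.lower(s, [A, B], simple_mode=True))
```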
......
@@ -30,4 +30,4 @@ The key features of VTA include:
- Customized and extensible TVM compiler back-end.
- Flexible RPC support to ease deployment, and program FPGAs with the convenience of Python.
-Learn more about VTA [here](https://docs.tvm.ai/vta/index.html).
+Learn more about VTA [here](https://tvm.apache.org/docs/vta/index.html).
\ No newline at end of file