Commit ce18b565 by Tianqi Chen, committed by GitHub

[DOC] Document update (#329)

parent 07e56b9a
@@ -4,23 +4,25 @@
[Installation](docs/how_to/install.md) |
[Documentation](http://docs.tvmlang.org) |
[Tutorials](http://tutorials.tvmlang.org) |
[Operator Inventory](topi) |
[FAQ](docs/faq.md) |
[Contributors](CONTRIBUTORS.md) |
[Release Notes](NEWS.md)
TVM: Tensor IR Stack for Deep Learning Systems
==============================================
TVM is a Tensor intermediate representation (IR) stack for deep learning systems. It is designed to close the gap between
productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends.
TVM works with deep learning frameworks to provide end-to-end compilation to different backends.
License
-------
© Contributors, 2017. Licensed under an [Apache-2.0](https://github.com/dmlc/tvm/blob/master/LICENSE) license.
Contribute to TVM
-----------------
TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community.
- [Contributor Guide](docs/how_to/contribute.md)
- Please add your name to [CONTRIBUTORS.md](CONTRIBUTORS.md)
- Please also update [NEWS.md](NEWS.md) on changes and improvements to the API and code.
## Documentation
The current documentation can be built locally via Sphinx. See the [docs](docs) folder for details.
@@ -13,8 +13,16 @@ Index
topi.tanh
topi.log
topi.sqrt
topi.sigmoid
topi.broadcast_to
topi.max
topi.sum
topi.min
topi.nn.relu
topi.nn.dilate
topi.nn.scale_shift
topi.nn.conv2d_nchw
topi.nn.conv2d_hwcn
topi.nn.depthwise_conv2d
@@ -22,7 +30,11 @@ Index
.. autosummary::
topi.cuda.schedule_conv2d_nchw
topi.cuda.schedule_conv2d_hwcn
topi.cuda.schedule_depthwise_conv2d
topi.cuda.schedule_reduce
topi.cuda.schedule_broadcast_to
topi
@@ -31,15 +43,27 @@ topi
.. autofunction:: topi.tanh
.. autofunction:: topi.log
.. autofunction:: topi.sqrt
.. autofunction:: topi.sigmoid
.. autofunction:: topi.broadcast_to
.. autofunction:: topi.max
.. autofunction:: topi.sum
.. autofunction:: topi.min
topi.nn
~~~~~~~
.. autofunction:: topi.nn.relu
.. autofunction:: topi.nn.dilate
.. autofunction:: topi.nn.scale_shift
.. autofunction:: topi.nn.conv2d_nchw
.. autofunction:: topi.nn.conv2d_hwcn
.. autofunction:: topi.nn.depthwise_conv2d
topi.cuda
~~~~~~~~~
.. automodule:: topi.cuda
.. autofunction:: topi.cuda.schedule_conv2d_nchw
.. autofunction:: topi.cuda.schedule_conv2d_hwcn
.. autofunction:: topi.cuda.schedule_depthwise_conv2d
.. autofunction:: topi.cuda.schedule_reduce
.. autofunction:: topi.cuda.schedule_broadcast_to
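For orientation, here is a minimal sketch of how a TOPI compute declaration and a schedule fit together. It uses `topi.nn.relu` from the list above with a generic CPU schedule; the shapes and the `llvm` host target are assumptions for the example, and on a CUDA target one would instead take the schedule from the corresponding `topi.cuda.schedule_*` function and build with `target="cuda"`.

```python
import tvm
import topi

# Declare the computation with a TOPI sugar instead of writing tvm.compute by hand.
n, c, h, w = 1, 3, 224, 224                      # assumed example shape
data = tvm.placeholder((n, c, h, w), name="data")
out = topi.nn.relu(data)                         # compute declaration only, no schedule yet

# Attach a schedule; a plain CPU schedule is used here for illustration.
s = tvm.create_schedule(out.op)
f = tvm.build(s, [data, out], target="llvm")     # assumed LLVM host target
```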
@@ -6,17 +6,27 @@ How to Install
--------------
See [Installation](https://github.com/dmlc/tvm/blob/master/docs/how_to/install.md)
TVM's relation to Other IR/DSL Projects
---------------------------------------
There are usually two levels of IR abstraction in deep learning systems.
NNVM, TensorFlow's XLA and Intel's ngraph use a computation graph representation.
This representation is high level, and can be helpful for performing generic optimizations
such as memory reuse, layout transformation and automatic differentiation.

TVM adopts a low-level representation that explicitly expresses the choice of memory
layout, parallelization pattern, locality and hardware primitives.
This level of IR is closer to directly targeting hardware.
The low-level IR adopts ideas from existing image processing languages like Halide and darkroom,
and from loop transformation tools like loopy and polyhedra-based analysis.
We specifically focus on expressing deep learning workloads (e.g. recurrence),
optimization for different hardware backends and embedding with frameworks to provide
an end-to-end compilation stack.
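As a concrete illustration of what "explicitly expressing the choice of memory layout, parallelization pattern and locality" looks like, here is a minimal sketch in TVM's Python DSL; the shape, the split factor and the `llvm` target are assumptions made for the example.

```python
import tvm

# Declare an index-based tensor computation.
n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.compute((n,), lambda i: A[i] * 2.0, name="B")

# Scheduling is separate from the computation: loop transformation,
# parallelization and vectorization are explicit, user-controlled choices.
s = tvm.create_schedule(B.op)
outer, inner = s[B].split(B.op.axis[0], factor=32)  # assumed tiling factor
s[B].parallel(outer)
s[B].vectorize(inner)

# Lower to inspect the transformed loop nest, or build for a target.
print(tvm.lower(s, [A, B], simple_mode=True))
fadd = tvm.build(s, [A, B], target="llvm")           # assumed host target
```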
TVM's relation to libDNN and cuDNN
----------------------------------
TVM can incorporate these libraries as external calls. One goal of TVM is to be able to
generate high-performing kernels. We will evolve TVM in an incremental manner as
we learn from the techniques of manual kernel crafting and add these as primitives in the DSL.
See also [TVM Operator Inventory](https://github.com/dmlc/tvm/tree/master/topi) for
recipes of operators in TVM.
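For illustration, the external-call mechanism looks roughly like the sketch below. It follows the `tvm.extern` / `tvm.call_packed` pattern; the packed-function name `tvm.contrib.cblas.matmul` and its trailing transpose flags are assumptions and may not match the contrib modules available at this commit.

```python
import tvm

# Declare a matmul whose body is an external library call rather than
# TVM-generated code.  The registered packed-function name is an assumption.
n = 1024
A = tvm.placeholder((n, n), name="A")
B = tvm.placeholder((n, n), name="B")
C = tvm.extern((n, n), [A, B],
               lambda ins, outs: tvm.call_packed(
                   "tvm.contrib.cblas.matmul",
                   ins[0], ins[1], outs[0], False, False),  # assumed transpose flags
               name="C")

s = tvm.create_schedule(C.op)
f = tvm.build(s, [A, B, C], target="llvm")  # assumed host target
```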
@@ -8,7 +8,7 @@ scratch on various systems. It consists of two steps:
To get started, clone the tvm repo from GitHub. It is important to clone the submodules along, with the ```--recursive``` option.
```bash
git clone --recursive https://github.com/dmlc/tvm
```
For Windows users who use GitHub tools, you can open the Git shell and type the following command.
```bash
...
# TOPI: TVM Operator Inventory
TOPI is the operator collection library for TVM, intended to share the effort of crafting
and optimizing TVM-generated kernels. The goals:
- Provide sugars for operator declaration
- Give common primitives for fused op creation.
@@ -21,6 +21,7 @@ optimizing tvm generated kernels. The goal:
- Some kernels have requirements on shape and data layout; assert them (see the sketch after this list).
- Data layout aware: if not specified in an argument or in the function, assume NCHW by default.
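A minimal sketch of the convention described above; the operator name and body are hypothetical and exist only to illustrate the "assert the shape/layout requirement, default to NCHW" guideline.

```python
import tvm

def my_op_nchw(data, layout="NCHW"):
    """Hypothetical TOPI-style operator that documents and asserts its layout requirement."""
    # Follow the TOPI convention: assume NCHW unless the caller says otherwise,
    # and assert requirements on shape and data layout instead of silently mis-reading them.
    assert layout == "NCHW", "only NCHW layout is supported"
    assert len(data.shape) == 4, "expect a 4-D NCHW input"
    return tvm.compute(data.shape, lambda n, c, h, w: data[n, c, h, w] * 2.0, name="out")
```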
## Testcase
- Add testcases to test out the schedule and dataflow in the TOPI workflow (see the example after this list).
- Only do correctness testing without attaching compiler flags, and only run it once.
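A sketch of what such a correctness-only testcase might look like, checking a TOPI operator against a numpy reference with a plain schedule; the specific operator (`topi.nn.relu`), the shape and the `llvm` target are assumptions for the example.

```python
import numpy as np
import tvm
import topi

def test_relu():
    # Declare the op and a plain schedule; no tuned compiler flags, single run only.
    A = tvm.placeholder((1, 16, 8, 8), name="A")
    B = topi.nn.relu(A)
    s = tvm.create_schedule(B.op)
    f = tvm.build(s, [A, B], target="llvm")

    # Correctness check against a numpy reference.
    ctx = tvm.cpu(0)
    a_np = np.random.uniform(-1, 1, size=(1, 16, 8, 8)).astype("float32")
    a = tvm.nd.array(a_np, ctx)
    b = tvm.nd.array(np.zeros_like(a_np), ctx)
    f(a, b)
    np.testing.assert_allclose(b.asnumpy(), np.maximum(a_np, 0), rtol=1e-5)
```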
...
# pylint: disable=redefined-builtin, wildcard-import
"""TVM Operator Inventory.
TOPI is the operator collection library for TVM, providing sugars
for constructing compute declarations as well as optimized schedules.

Some of the schedule functions may have been specially optimized for a
specific workload.
""" """
from __future__ import absolute_import as _abs from __future__ import absolute_import as _abs
...