wenyuanbo / tic · Commits · ce18b565

Commit ce18b565, authored Aug 14, 2017 by Tianqi Chen; committed by GitHub on Aug 14, 2017.
[DOC] Document update (#329)
Parent: 07e56b9a
Showing 6 changed files with 65 additions and 25 deletions:

- README.md (+11 -9)
- docs/api/python/topi.rst (+26 -2)
- docs/faq.md (+18 -8)
- docs/how_to/install.md (+1 -1)
- topi/README.md (+4 -3)
- topi/python/topi/__init__.py (+5 -2)
README.md
@@ -4,23 +4,25 @@
[Installation](docs/how_to/install.md) | [Documentation](http://docs.tvmlang.org) | [Tutorials](http://tutorials.tvmlang.org) | [Operator Inventory](topi) | [FAQ](docs/faq.md) | [Contributors](CONTRIBUTORS.md) | [Release Notes](NEWS.md)
-TVM
-===
+TVM: Tensor IR Stack for Deep Learning Systems
+==============================================

-TVM is a low-level domain-specific language (DSL) for compiling tensor computation pipelines.
-It is designed to compile multi-dimensional tensor algebra pipelines, which are crucial to deep learning frameworks.
+TVM is a Tensor intermediate representation (IR) stack for deep learning systems. It is designed to close the gap between
+productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends.
+TVM works with deep learning frameworks to provide end-to-end compilation to different backends.
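To make the new description concrete, here is a minimal sketch of the end-to-end flow TVM provides: declare a tensor computation, schedule it, and compile it for a backend. It uses the 2017-era Python API (`tvm.placeholder`, `tvm.create_schedule`, `tvm.build`); treat the exact names as assumptions of this writeup, not part of the commit:

```python
import tvm

# Declare the computation: B[i] = A[i] + 1 over a symbolic length n.
n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.compute((n,), lambda i: A[i] + 1.0, name="B")

# Pick a schedule and compile to a concrete backend (LLVM for CPU here);
# the same declaration can target CUDA, OpenCL, etc.
s = tvm.create_schedule(B.op)
fadd = tvm.build(s, [A, B], target="llvm")
```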
License
-------
© Contributors, 2017. Licensed under an [Apache-2.0](https://github.com/dmlc/tvm/blob/master/LICENSE) license.
Contribute to TVM
-----------------
Your help is very valuable in making the package better for everyone.
TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community.

- [Contributor Guide](docs/how_to/contribute.md)
- Please add your name to [CONTRIBUTORS.md](CONTRIBUTORS.md)
- Please also update [NEWS.md](NEWS.md) with changes and improvements to the API and code.

## Documentation
The current documentation can be built locally via Sphinx. See the [docs](docs) folder for details.
docs/api/python/topi.rst
@@ -13,8 +13,16 @@ Index
   topi.tanh
   topi.log
   topi.sqrt
+  topi.sigmoid
+  topi.broadcast_to
+  topi.max
+  topi.sum
+  topi.min
+  topi.nn.relu
+  topi.nn.dilate
+  topi.nn.scale_shift
   topi.nn.conv2d_nchw
   topi.nn.conv2d_hwcn
   topi.nn.depthwise_conv2d
@@ -22,7 +30,11 @@ Index
.. autosummary::

   topi.cuda.schedule_depthwise_conv2d_map
   topi.cuda.schedule_conv2d_nchw
+  topi.cuda.schedule_conv2d_hwcn
+  topi.cuda.schedule_depthwise_conv2d
+  topi.cuda.schedule_reduce
+  topi.cuda.schedule_broadcast_to
@@ -31,15 +43,27 @@ topi
.. autofunction:: topi.tanh
.. autofunction:: topi.log
.. autofunction:: topi.sqrt
+.. autofunction:: topi.sigmoid
+.. autofunction:: topi.broadcast_to
+.. autofunction:: topi.max
+.. autofunction:: topi.sum
+.. autofunction:: topi.min

topi.nn
~~~~~~~
+.. autofunction:: topi.nn.relu
+.. autofunction:: topi.nn.dilate
+.. autofunction:: topi.nn.scale_shift
.. autofunction:: topi.nn.conv2d_nchw
.. autofunction:: topi.nn.conv2d_hwcn
.. autofunction:: topi.nn.depthwise_conv2d

topi.cuda
~~~~~~~~~
.. automodule:: topi.cuda

.. autofunction:: topi.cuda.schedule_depthwise_conv2d_map
.. autofunction:: topi.cuda.schedule_conv2d_nchw
+.. autofunction:: topi.cuda.schedule_conv2d_hwcn
+.. autofunction:: topi.cuda.schedule_depthwise_conv2d
+.. autofunction:: topi.cuda.schedule_reduce
+.. autofunction:: topi.cuda.schedule_broadcast_to
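As a usage sketch for the operators and schedules listed above, here is one of the newly documented ops with a default schedule; the signatures are assumed from the 2017-era API:

```python
import tvm
import topi

# Declare an activation with a TOPI operator "sugar".
A = tvm.placeholder((64, 128), name="A")
B = topi.nn.relu(A)

# A default CPU schedule; the topi.cuda.schedule_* functions documented
# above supply hand-tuned GPU schedules for their respective patterns.
s = tvm.create_schedule(B.op)
f = tvm.build(s, [A, B], target="llvm")
```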
docs/faq.md
@@ -6,17 +6,27 @@ How to Install
--------------
See [Installation](https://github.com/dmlc/tvm/blob/master/docs/how_to/install.md)
TVM's relation to XLA
---------------------
They have different abstraction levels.
XLA is a higher-level tensor algebra DSL in which the system defines the codegen and loop transformation
rules for each kernel. TVM is a low-level, array-index-based DSL that gives the loop transformation
primitives to the user. In terms of design philosophy, TVM aims to be directly usable by developers
and to provide general support for different frameworks via DLPack.
See also [this issue](https://github.com/dmlc/tvm/issues/151).
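A sketch of what "loop transformation primitives to the user" looks like in TVM's schedule language (2017-era API names assumed):

```python
import tvm

n = 1024
A = tvm.placeholder((n,), name="A")
B = tvm.compute((n,), lambda i: A[i] * 2.0, name="B")

s = tvm.create_schedule(B.op)
# The user applies the transformation explicitly: split the loop over i
# into an outer loop and an inner loop of length 32.
xo, xi = s[B].split(B.op.axis[0], factor=32)

# Print the transformed loop nest as pseudo-IR.
print(tvm.lower(s, [A, B], simple_mode=True))
```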
TVM's relation to Other IR/DSL Projects
---------------------------------------
There are usually two levels of abstraction of IR in deep learning systems.
NNVM, TensorFlow's XLA and Intel's ngraph use a computation graph representation.
This representation is high level, and can be helpful for performing generic optimizations
such as memory reuse, layout transformation and automatic differentiation.

TVM adopts a low-level representation that explicitly expresses the choice of memory
layout, parallelization pattern, locality, hardware primitives etc.
This level of IR is closer to directly targeting hardware.
The low-level IR adopts ideas from existing image processing languages like Halide and darkroom,
and from loop transformation tools like loopy and polyhedra-based analysis.
We specifically focus on expressing deep learning workloads (e.g. recurrence),
optimization for different hardware backends, and embedding with frameworks to provide an
end-to-end compilation stack.
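For instance, the explicit choice of parallelization pattern is written directly in the schedule; a sketch assuming the era's `thread_axis` binding API:

```python
import tvm

n = 4096
A = tvm.placeholder((n,), name="A")
B = tvm.compute((n,), lambda i: A[i] + 1.0, name="B")

s = tvm.create_schedule(B.op)
bx, tx = s[B].split(B.op.axis[0], factor=64)
# Explicitly express the parallelization pattern: outer loop over CUDA
# blocks, inner loop over CUDA threads.
s[B].bind(bx, tvm.thread_axis("blockIdx.x"))
s[B].bind(tx, tvm.thread_axis("threadIdx.x"))
f = tvm.build(s, [A, B], target="cuda")
```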
TVM's relation to libDNN and cuDNN
----------------------------------
TVM can incorporate these libraries as external calls. One goal of TVM is to be able to
generate high-performing kernels. We will evolve TVM in an incremental manner as
we learn from the techniques of manual kernel crafting, and add these as primitives in the DSL.
See also [TVM Operator Inventory](https://github.com/dmlc/tvm/tree/master/topi) for
recipes of operators in TVM.
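A sketch of what "incorporate these libraries as external calls" can look like via `tvm.extern`; the packed-function name `mylib.matmul` is hypothetical, standing in for a cuDNN/libDNN-style routine:

```python
import tvm

n = tvm.var("n")
A = tvm.placeholder((n, n), name="A")
B = tvm.placeholder((n, n), name="B")

# The stage's body is an opaque external call instead of generated code;
# "mylib.matmul" is a hypothetical registered packed function.
C = tvm.extern((n, n), [A, B],
               lambda ins, outs: tvm.call_packed(
                   "mylib.matmul", ins[0], ins[1], outs[0]),
               name="C")

s = tvm.create_schedule(C.op)
```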
docs/how_to/install.md
@@ -8,7 +8,7 @@ scratch on various systems. It consists of two steps:
To get started, clone the tvm repo from GitHub. It is important to clone the submodules along, using the ```--recursive``` option.

```bash
-git clone --recursive ssh://git@github.com/dmlc/tvm
+git clone --recursive https://github.com/dmlc/tvm
```
For Windows users who use GitHub tools, you can open the Git shell and type the following command.
topi/README.md
-# TVM Operator Inventory
+# TOPI: TVM Operator Inventory

-topi is the operator collection library for TVM, intended for sharing the effort of crafting and
-optimizing tvm-generated kernels. The goal:
+TOPI is the operator collection library for TVM, intended for sharing the effort of crafting
+and optimizing tvm-generated kernels. The goal:
- Provide sugars for operator declaration
- Give common primitives for fused op creation (see the sketch below).
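A sketch of fused op creation, assuming only core scheduling primitives (`compute_inline`) plus `topi.nn.relu`:

```python
import tvm
import topi

# Two stages, a bias-add followed by relu, fused into a single kernel.
A = tvm.placeholder((64, 128), name="A")
bias = tvm.placeholder((128,), name="bias")
add = tvm.compute((64, 128), lambda i, j: A[i, j] + bias[j], name="add")
out = topi.nn.relu(add)

s = tvm.create_schedule(out.op)
s[add].compute_inline()  # inline the bias-add into the relu stage
f = tvm.build(s, [A, bias, out], target="llvm")
```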
@@ -21,6 +21,7 @@ optimizing tvm generated kernels. The goal:
- Some kernels have requirements on shape and data layout; assert them.
- Data-layout aware: if not specified in an argument or in the function, assume NCHW by default.

## Testcase
- Add testcases to test out the schedule and dataflow in the TOPI workflow
- Only do correctness testing, without attaching compiler flags, and only run it once.
topi/python/topi/__init__.py
# pylint: disable=redefined-builtin, wildcard-import
"""TVM Operator Inventory.

-TOPI is the operator collection library for TVM, intended for sharing the effort of crafting and
-optimizing tvm-generated kernels.
+TOPI is the operator collection library for TVM, providing sugars
+for constructing compute declarations as well as optimized schedules.
+
+Some of the schedule functions may have been specially optimized for a
+specific workload.
"""
from __future__ import absolute_import as _abs