Commit a43e326f (unverified)
Authored Feb 17, 2020 by Baden Hughes; committed Feb 16, 2020 by GitHub
Update faq.md (#4893)
Various minor editorial updates: style, grammar, typos.
Parent: 95de08ba
Showing 1 changed file with 9 additions and 9 deletions.
docs/faq.md
@@ -26,24 +26,24 @@ See [Installation](http://docs.tvm.ai/install/)
TVM's relation to Other IR/DSL Projects
---------------------------------------
There are usually two levels of IR abstraction in deep learning systems.
TensorFlow's XLA and Intel's ngraph both use a computation graph representation.
This representation is high level, and is helpful for performing generic optimizations
such as memory reuse, layout transformation, and automatic differentiation.
TVM adopts a low-level representation that explicitly expresses the choice of memory
layout, parallelization pattern, locality, and hardware primitives.
This level of IR is closer to the target hardware.
The low-level IR adopts ideas from existing image processing languages like Halide and darkroom,
and from loop transformation tools like loopy and polyhedra-based analysis.
We specifically focus on expressing deep learning workloads (e.g. recurrence),
on optimization for different hardware backends, and on embedding with frameworks to provide
an end-to-end compilation stack.
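
As a concrete illustration of these explicit scheduling choices, here is a minimal sketch using TVM's tensor expression API (the `tvm.te` module path assumes a relatively recent TVM build; older releases expose the same functions directly under `tvm`). It declares a vector addition and then records tiling, parallelization, and vectorization decisions in the low-level IR:

```python
import tvm
from tvm import te

# Declare the computation: C[i] = A[i] + B[i].
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# The schedule makes loop structure and parallelism explicit.
s = te.create_schedule(C.op)
outer, inner = s[C].split(C.op.axis[0], factor=64)  # loop tiling
s[C].parallel(outer)                                # parallelize the outer loop
s[C].vectorize(inner)                               # vectorize the inner loop

# Inspect the low-level IR that captures these choices.
print(tvm.lower(s, [A, B, C], simple_mode=True))
```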
TVM's relation to libDNN, cuDNN
-------------------------------
TVM can incorporate these libraries as external calls. One goal of TVM is to be able to
generate high-performing kernels. We will evolve TVM in an incremental manner as
we learn from the techniques of manual kernel crafting, and add these as primitives in the DSL.
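
For instance, here is a hedged sketch of dispatching a matrix multiply to an external BLAS library through TVM's contrib wrappers; it assumes TVM was built with BLAS support enabled:

```python
import tvm
from tvm import te
from tvm.contrib import cblas  # assumes TVM was built with a BLAS backend

n = 1024
A = te.placeholder((n, n), name="A")
B = te.placeholder((n, n), name="B")

# The matmul is lowered to an external call into cblas instead of
# a kernel generated by TVM itself.
C = cblas.matmul(A, B)

s = te.create_schedule(C.op)
f = tvm.build(s, [A, B, C], target="llvm")
```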
See also [TVM Operator Inventory](https://github.com/apache/incubator-tvm/tree/master/topi) for
recipes of operators in TVM.