Commit bc819820 by Tianqi Chen, committed via GitHub

[DOCS] Jenkins deployment of docs, add FAQ (#157)

parent 80f36b34
......@@ -153,6 +153,18 @@ stage('Integration Test') {
timeout(time: max_time, unit: 'MINUTES') {
sh "${docker_run} gpu ./tests/scripts/task_python_docs.sh"
}
pack_lib('mydocs', 'docs.tgz')
}
}
}
}
stage('Deploy') {
  node('docker && doc') {
    ws('workspace/tvm/deploy-docs') {
      // Only docs built from the master branch are published
      if (env.BRANCH_NAME == "master") {
        unpack_lib('mydocs', 'docs.tgz')
        sh "tar xf docs.tgz -C /var/docs"
      }
    }
  }
......
[![Build Status](https://travis-ci.com/dmlc/tvm.svg?token=mkPkB7rD7Myq9dXZa9GV&branch=master)](https://travis-ci.com/dmlc/tvm)
[![GitHub license](http://dmlc.github.io/img/apache2.svg)](./LICENSE)
[![Build Status](http://mode-gpu.cs.washington.edu:8080/buildStatus/icon?job=dmlc/tvm/master)](http://mode-gpu.cs.washington.edu:8080/job/dmlc/job/tvm/job/master/)
[Installation](docs/how_to/install.md) |
[Documentation](docs) |
[Tutorials](tutorials) |
[Documentation](http://docs.tvmlang.org) |
[Tutorials](http://tutorials.tvmlang.org) |
[FAQ](docs/faq.md) |
[Contributors](CONTRIBUTORS.md) |
[Release Notes](NEWS.md)
......
The documentation of TVM is generated with recommonmark and Sphinx.
- A hosted version of the docs is available at http://docs.tvmlang.org
- pip install sphinx>=1.5.5 sphinx-gallery sphinx_rtd_theme matplotlib Image recommonmark
- Build tvm first in the root folder.
- To build locally, enable USE_CUDA, USE_OPENCL and LLVM_CONFIG in config.mk, then run "make html" in this folder.
Frequently Asked Questions
==========================
This document contains frequently asked questions.
How to Install
--------------
See [Installation](how_to/install.md)
TVM's relation to XLA
---------------------
They have different abstraction levels.
XLA is a higher-level tensor algebra DSL; the system defines the codegen and loop transformation
rules for each kernel. TVM is a lower-level, array-index-based DSL that gives the loop transformation
primitives to the user. In terms of design philosophy, TVM aims to be directly usable by developers
and to provide general support for different frameworks via DLPack.
See also [This Issue](https://github.com/dmlc/tvm/issues/151).
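
To make the contrast concrete, here is a minimal sketch in TVM's Python API of what handing loop transformation primitives to the user looks like. The particular schedule (a `split` on a vector add) is only an illustration and is not part of this commit:

```python
import tvm

# Declare the computation: C[i] = A[i] + B[i]
n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.placeholder((n,), name="B")
C = tvm.compute((n,), lambda i: A[i] + B[i], name="C")

# Loop transformations are explicit scheduling decisions made by the user,
# not rules fixed by the system.
s = tvm.create_schedule(C.op)
xo, xi = s[C].split(C.op.axis[0], factor=64)   # split the loop over i
fadd = tvm.build(s, [A, B, C], target="llvm")  # codegen for the chosen schedule
```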
TVM's relation to libDNN and cuDNN
----------------------------------
TVM can incorporate these libraries as external calls. One goal of TVM is to be able to
generate high-performing kernels. We will evolve TVM in an incremental manner as
we learn from the techniques of manual kernel crafting, and add these as primitives in the DSL.
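
As a rough sketch of what "external calls" can look like in TVM's Python API: the `tvm.extern` / `tvm.call_packed` pattern is shown below with a BLAS matmul packed function as an assumed example; the exact contrib routine is illustrative and not something added by this commit.

```python
import tvm

n = 1024
A = tvm.placeholder((n, n), name="A")
B = tvm.placeholder((n, n), name="B")

# Declare a tensor whose contents are produced by an external library call
# (a packed function wrapping a BLAS matmul) instead of TVM-generated code.
C = tvm.extern(
    (n, n), [A, B],
    lambda ins, outs: tvm.call_packed(
        "tvm.contrib.cblas.matmul", ins[0], ins[1], outs[0], False, False),
    name="C")

s = tvm.create_schedule(C.op)
f = tvm.build(s, [A, B, C], target="llvm")  # assumes TVM built with CBLAS support
```

The same mechanism applies to vendor libraries such as cuDNN: a library call becomes one more option for producing a tensor, while TVM retains the ability to generate the kernel itself.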
......@@ -12,6 +12,7 @@ Contents
self
how_to/install
tutorials/index
faq
api/python/index
how_to/contribute
genindex
......@@ -5,6 +5,5 @@ RUN apt-get update
COPY install/ubuntu_*.sh /install/
RUN bash /install/ubuntu_install_core.sh
RUN apt-get update && apt-get install -y python-pip
RUN bash /install/ubuntu_install_python.sh
RUN bash /install/ubuntu_install_iverilog.sh
FROM nvidia/cuda:cudnn
FROM nvidia/cuda:7.5-cudnn5-devel
# Base scripts
RUN apt-get update
......
# install libraries for python package on ubuntu
apt-get update && apt-get install -y python-dev python3-dev
apt-get update && apt-get install -y python-pip python-dev python3-dev
# the version of pip shipped with ubuntu may be too old, so install a recent version here
cd /tmp && wget https://bootstrap.pypa.io/get-pip.py && python3 get-pip.py && python2 get-pip.py
......
......@@ -59,13 +59,6 @@ def test_log_llvm():
    np.testing.assert_allclose(
        b.asnumpy(), np.log(a.asnumpy()), rtol=1e-5)

from tvm.contrib import nvcc_compiler

@tvm.register_func
def tvm_callback_cuda_compile(code):
    print(code)
    ptx = nvcc_compiler.compile_source(code, target="ptx", options=["-arch=sm_35"])
    return ptx

def test_add():
    # graph
......
import tvm
from tvm.contrib import nvcc_compiler
from tvm.contrib import metal_compiler
import numpy as np
import time

#@tvm.register_func
def tvm_callback_metal_compile(code):
    lib = metal_compiler.compile_source(code)
    return lib

def test_gemm():
    # graph
    nn = 1024
......
#!/bin/bash
# Build the Sphinx docs and package them as docs.tgz for the Jenkins Deploy stage
cd docs
PYTHONPATH=../python make html
cd _build/html
tar czf docs.tgz *
mv docs.tgz ../../../