Commit f927e1f3 by Tianqi Chen, committed by GitHub

[DOCKER] Add docker demo image (#1404)

parent b30ae8ac
# For CPU
# CI docker CPU env
FROM ubuntu:16.04
RUN apt-get update --fix-missing
......
# CI docker GPU env
FROM nvidia/cuda:8.0-cudnn7-devel
# Base scripts
......
# CI docker i386 env
FROM ioft/i386-ubuntu:16.04
RUN apt-get update --fix-missing
......
# For lint test
# CI docker lint env
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y sudo wget
......
# Minimum docker image for demo purposes
# prebuilt-image: tvmai/demo_cpu
FROM ubuntu:16.04
RUN apt-get update --fix-missing
COPY install/ubuntu_install_core.sh /install/ubuntu_install_core.sh
RUN bash /install/ubuntu_install_core.sh
# Python: basic dependencies
RUN apt-get update && apt-get install -y python3-dev python3-pip
RUN pip3 install numpy nose-timer cython decorator scipy
# LLVM
RUN echo deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-6.0 main \
>> /etc/apt/sources.list.d/llvm.list && \
wget -O - http://apt.llvm.org/llvm-snapshot.gpg.key|sudo apt-key add - && \
apt-get update && apt-get install -y --force-yes llvm-6.0
# Jupyter notebook.
RUN pip3 install matplotlib Image Pillow jupyter[notebook]
# Deep learning frameworks
RUN pip3 install mxnet tensorflow keras
# Build TVM
COPY install/install_tvm_cpu.sh /install/install_tvm_cpu.sh
RUN bash /install/install_tvm_cpu.sh
# Environment variables
ENV PYTHONPATH=/usr/tvm/python:/usr/tvm/topi/python:/usr/tvm/nnvm/python/:/usr/tvm/vta/python:${PYTHONPATH}
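As a sketch of how such an image could be produced locally (the Dockerfile name below is an assumption; use whichever demo Dockerfile this directory actually provides), a plain docker build would look like:

```bash
# Hypothetical file name -- adjust to the demo Dockerfile shipped in docker/.
docker build -t tvmai/demo_cpu -f docker/Dockerfile.demo_cpu docker/
```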
# TVM Docker
This directory contains TVM's docker infrastructure.
We use docker to provide build environments for CI and images for demos.
To run locally, we need to first install
[docker](https://docs.docker.com/engine/installation/) and, for GPU images,
[nvidia-docker](https://github.com/NVIDIA/nvidia-docker/).
## Start Docker Bash Session
You can use the following helper script to start an
interactive bash session with a given image_name.
```bash
/path/to/tvm/docker/bash.sh image_name
```
The script does the following things:

- Mounts the current directory to /workspace and sets it as the home directory
- Switches to the same user that invoked bash.sh
- Uses the host-side network

The helper script is useful for building demo sessions.
## Prebuilt Docker Images
We provide several pre-built images for doing quick exploration with TVM installed.
For example, you can run the following command to get the ```tvmai/demo_cpu``` image.
```bash
/path/to/tvm/docker/bash.sh tvmai/demo_cpu
```
Then, inside the docker container, you can type the following command to start the jupyter notebook:
```bash
jupyter notebook
```
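Because the container shares the host network (see the bash.sh behavior above), the notebook server is reachable directly from a browser on the host. If you want to bind it explicitly, jupyter's standard flags can be passed, for example:

```bash
# Optional: pick the interface/port explicitly (standard jupyter flags).
jupyter notebook --ip=0.0.0.0 --port=8888
```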
Check out https://hub.docker.com/r/tvmai/ to get the full list of available prebuilt images.
## Use Local Build Script
Each dockerfile defines a different environment.
We also provide a script to build docker images locally.
We use [`build.sh`](./build.sh) to build the images and run commands inside them.
To build and run docker images, we can run the following command
at the root of the project.
```bash
......
```

Here are some common use examples to perform CI tasks.
- lint the python codes

  ```bash
  ./docker/build.sh ci_lint make pylint
  ```

- build codes with CUDA support

  ```bash
  ./docker/build.sh ci_gpu make -j$(nproc)
  ```

- run the python unittests

  ```bash
  ./docker/build.sh ci_gpu tests/scripts/task_python_unittest.sh
  ```

- build the documents. The results will be available at `docs/_build/html`

  ```bash
  ./docker/build.sh ci_gpu make -C docs html
  ```
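The general pattern is `./docker/build.sh <container_type> <command>`, where the container type is expected to match one of the CI Dockerfiles in this directory (an assumption based on the examples above), for example:

```bash
# Run the build inside the CPU CI environment.
./docker/build.sh ci_cpu make -j$(nproc)
```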
#!/usr/bin/env bash
#
# Start a bash, mount /workspace to be current directory.
#
# Usage: docker/bash.sh <CONTAINER_NAME>
#
if [ "$#" -lt 1 ]; then
echo "Usage: docker/bash.sh <CONTAINER_NAME>"
exit -1
fi
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
WORKSPACE="$(pwd)"
DOCKER_IMAGE_NAME=$1
# Use nvidia-docker if the container is GPU.
if [[ "${DOCKER_IMAGE_NAME}" == *"gpu"* ]]; then
DOCKER_BINARY="nvidia-docker"
else
DOCKER_BINARY="docker"
fi
# Print arguments.
echo "WORKSPACE: ${WORKSPACE}"
echo "DOCKER CONTAINER NAME: ${DOCKER_IMG_NAME}"
echo ""
# By default we cleanup - remove the container once it finish running (--rm)
# and share the PID namespace (--pid=host) so the process inside does not have
# pid 1 and SIGKILL is propagated to the process inside (jenkins can kill it).
echo ${DOCKER_BINARY}
${DOCKER_BINARY} run --rm -it --pid=host --net=host \
-v ${WORKSPACE}:/workspace \
-v ${SCRIPT_DIR}:/docker \
-w /workspace \
-e "CI_BUILD_HOME=/workspace" \
-e "CI_BUILD_USER=$(id -u -n)" \
-e "CI_BUILD_UID=$(id -u)" \
-e "CI_BUILD_GROUP=$(id -g -n)" \
-e "CI_BUILD_GID=$(id -g)" \
${DOCKER_IMAGE_NAME} \
bash /docker/with_the_same_user \
bash
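As a usage sketch, the image name decides which runtime is used: anything containing "gpu" goes through nvidia-docker, everything else through plain docker (the GPU image tag below is illustrative only):

```bash
# Plain docker is used for this CPU image.
./docker/bash.sh tvmai/demo_cpu
# A tag containing "gpu" (illustrative name) would be routed through nvidia-docker.
./docker/bash.sh tvmai/demo_gpu
```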
......
if [[ "$1" == "-it" ]]; then
    shift 1
fi

if [[ "$1" == "--net=host" ]]; then
    CI_DOCKER_EXTRA_PARAMS+=('--net=host')
    shift 1
fi

if [[ ! -f "${DOCKERFILE_PATH}" ]]; then
    echo "Invalid Dockerfile path: \"${DOCKERFILE_PATH}\""
    exit 1
......
# Clone TVM with submodules and build it with LLVM, RPC, sort, graph runtime and BLAS support enabled.
cd /usr
git clone https://github.com/dmlc/tvm --recursive
cd /usr/tvm
echo set\(USE_LLVM llvm-config-6.0\) >> config.cmake
echo set\(USE_RPC ON\) >> config.cmake
echo set\(USE_SORT ON\) >> config.cmake
echo set\(USE_GRAPH_RUNTIME ON\) >> config.cmake
echo set\(USE_BLAS openblas\) >> config.cmake
mkdir -p build
cd build
cmake ..
make -j10
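After this script finishes, the demo Dockerfile's `ENV PYTHONPATH` line above makes the freshly built packages importable; outside that image you would export the same paths yourself. A minimal sanity check, assuming the `/usr/tvm` clone location used by this script:

```bash
export PYTHONPATH=/usr/tvm/python:/usr/tvm/topi/python:/usr/tvm/nnvm/python:${PYTHONPATH}
python3 -c "import tvm; import nnvm; print(tvm.__version__)"
```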
#!/usr/bin/env bash
# This script is a wrapper creating the same user inside the container as the one
# running docker/build.sh outside the container. It also sets the home directory
# for the user inside the container to match the same absolute path as the workspace
# outside of the container. Do not run this manually. It does not make sense. It is
# intended to be called by docker/build.sh only.
......
HOME=${CI_BUILD_HOME} \
sudo -u "#${CI_BUILD_UID}" --preserve-env \
  PATH=${PATH} \
  LD_LIBRARY_PATH=${LD_LIBRARY_PATH} \
  PYTHONPATH=${PYTHONPATH} \
  HOME=${CI_BUILD_HOME} \
  ${COMMAND[@]}
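The elided part of the wrapper (not shown in this diff) is what creates the matching group and user inside the container before the `sudo` exec above. Purely as an illustration of that idea (this is not the elided code), it might look roughly like:

```bash
# Illustrative sketch only -- the actual wrapper in the repo may differ.
getent group "${CI_BUILD_GID}" >/dev/null || \
    groupadd -o -g "${CI_BUILD_GID}" "${CI_BUILD_GROUP}"
getent passwd "${CI_BUILD_UID}" >/dev/null || \
    useradd -o -m -g "${CI_BUILD_GID}" -u "${CI_BUILD_UID}" "${CI_BUILD_USER}"
```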
.. _docker-images:
Docker Images
=============
We provide several prebuilt docker images to quickly try out TVM.
These images are also helpful for running demos and tutorials of TVM.
You can get the docker images via the following steps.
We need `docker <https://docs.docker.com/engine/installation/>`_ and
`nvidia-docker <https://github.com/NVIDIA/nvidia-docker/>`_ if we want to use CUDA.
First, clone the tvm repo to get the auxiliary scripts

.. code:: bash

    git clone --recursive https://github.com/dmlc/tvm

We can then use the following command to launch the ``tvmai/demo_cpu`` image.

.. code:: bash

    /path/to/tvm/docker/bash.sh tvmai/demo_cpu
.. note::

    You can find all the prebuilt images at https://hub.docker.com/r/tvmai/
This auxiliary script does the following things:

- Mounts the current directory to /workspace
- Switches to the same user that invoked bash.sh (so you can read/write the host system)
- Uses the host-side network (so you can use jupyter notebook)
Then you can start a jupyter notebook by typing

.. code:: bash

    jupyter notebook
Docker Source
-------------
Check out https://github.com/dmlc/tvm/tree/master/docker if you are interested in
building your own docker images.
.. _install-from-source:
Install from Source
===================
This page gives instructions on how to build and install the tvm package from
scratch on various systems. It consists of two steps:
1. First build the shared library from the C++ code (`libtvm.so` for linux/osx and `libtvm.dll` for windows).
2. Setup for the language packages (e.g. Python Package).
To get started, clone the tvm repo from github. It is important to clone the submodules along, using the ``--recursive`` option.

.. code:: bash

    git clone --recursive https://github.com/dmlc/tvm
For windows users who use github tools, you can open the git shell and type the following commands.

.. code:: bash

    git submodule init
    git submodule update
Build the Shared Library
------------------------
Our goal is to build the shared libraries:

- On Linux the target libraries are `libtvm.so` and `libtvm_topi.so`
- On macOS the target libraries are `libtvm.dylib` and `libtvm_topi.dylib`
- On Windows the target libraries are `libtvm.dll` and `libtvm_topi.dll`

.. code:: bash

    sudo apt-get update
    sudo apt-get install -y python python-dev python-setuptools gcc libtinfo-dev zlib1g-dev
The minimal building requirements are:

- A recent C++ compiler supporting C++11 (g++-4.8 or higher)
- CMake 3.5 or higher
- We highly recommend building with LLVM to enable all the features.
- It is possible to build without the LLVM dependency if we only want to use CUDA/OpenCL.
We use cmake to build the library.
The configuration of tvm can be modified by `config.cmake`.
- First, check the cmake on your system. If you do not have cmake,
  you can obtain the latest version from the `official website <https://cmake.org/download/>`_
- Next, create a build directory and copy ``cmake/config.cmake`` into it.

  .. code:: bash

      mkdir build
      cp cmake/config.cmake build

- Edit ``build/config.cmake`` to customize the compilation options
- On macOS, for some versions of XCode, you need to add ``-lc++abi`` in the LDFLAGS or you'll get link errors.
- TVM optionally depends on LLVM. LLVM is required for the CPU code generator.

  - LLVM 4.0 or higher is needed to build with LLVM. Note that the version of LLVM from the default apt sources may be lower than 4.0.
  - Since LLVM takes a long time to build from source, you can download a pre-built version from the
    `LLVM Download Page <http://releases.llvm.org/download.html>`_.

    - Unzip it to a certain location and modify ``build/config.cmake`` to add ``set(USE_LLVM /path/to/your/llvm/bin/llvm-config)``
    - You can also directly set ``set(USE_LLVM ON)`` and let cmake search for a usable version of LLVM.

  - You can also use the `LLVM Nightly Ubuntu Build <https://apt.llvm.org/>`_.

    - Note that the apt package appends the version number to ``llvm-config``.
      For example, set ``set(USE_LLVM llvm-config-4.0)`` if you installed the 4.0 package.
- We can then build tvm and the related libraries.

  .. code:: bash

      cd build
      cmake ..
      make -j4
If everything goes well, we can go to the specific language installation section.
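As a concrete example, a non-interactive Linux build with LLVM enabled can be sketched as follows (this mirrors the ``install_tvm_cpu.sh`` script added in this change; the ``llvm-config-6.0`` name assumes an LLVM 6.0 apt package is installed):

.. code:: bash

    git clone --recursive https://github.com/dmlc/tvm
    cd tvm
    mkdir -p build
    cp cmake/config.cmake build
    # enable LLVM; adjust llvm-config-6.0 to match your installed LLVM version
    echo 'set(USE_LLVM llvm-config-6.0)' >> build/config.cmake
    cd build
    cmake ..
    make -j4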
Building on Windows
~~~~~~~~~~~~~~~~~~~
TVM supports building via MSVC using cmake. The minimum required VS version is **Visual Studio Community 2015 Update 3**.
In order to generate the VS solution file using cmake,
make sure you have a recent version of cmake added to your path and then from the tvm directory:
.. code:: bash

    mkdir build
    cd build
    cmake -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DCMAKE_CONFIGURATION_TYPES="Release" ..
This will generate the VS project using the MSVC 14 64 bit generator.
Open the .sln file in the build directory and build with Visual Studio.
In order to build with LLVM in windows, you will need to build LLVM from source.
You also need to build nnvm by running the same steps under the nnvm folder.
Building ROCm support
~~~~~~~~~~~~~~~~~~~~~
Currently, ROCm is supported only on linux, so all the instructions are written with linux in mind.
- Set ``set(USE_ROCM ON)`` and set ROCM_PATH to the correct path.
- You need to first install the HIP runtime from ROCm. Make sure the installation system has ROCm installed.
- Install the latest stable version of LLVM (v6.0.1) and LLD, and make sure ``ld.lld`` is available on the command line.
Python Package Installation
---------------------------
The python package is located in the `python` directory of the repository.
There are several ways to install the package:
1. Set the environment variable `PYTHONPATH` to tell python where to find
   the library. For example, assume we cloned `tvm` in the home directory
   `~`; then we can add the following line to `~/.bashrc`.
   This is **recommended for developers** who may change the code.
   The changes will be reflected immediately once you pull the code and rebuild the project (no need to call ``setup`` again).

   .. code:: bash

       export PYTHONPATH=/path/to/tvm/python:/path/to/tvm/topi/python:/path/to/tvm/nnvm/python:${PYTHONPATH}
2. Install tvm python bindings by `setup.py`:
   .. code:: bash

       # install tvm package for the current user
       # NOTE: if you installed python via homebrew, --user is not needed during installation;
       #       it will be automatically installed to your user directory, and
       #       providing the --user flag may trigger an error during installation in that case.
       export MACOSX_DEPLOYMENT_TARGET=10.9  # required on mac to avoid symbol conflicts with libstdc++
       cd python; python setup.py install --user; cd ..
       cd topi/python; python setup.py install --user; cd ../..
       cd nnvm/python; python setup.py install --user; cd ../..
Install Contrib Libraries
-------------------------
.. toctree::
   :maxdepth: 1

   nnpack
Installation
============
To install TVM, please read :ref:`install-from-source`.
If you would like to quickly try out TVM or run demos/tutorials, check out :ref:`docker-images`.
.. toctree::
   :maxdepth: 2

   nnpack
   from_source
   docker