YuxuanGuo / tvm08dev · Commits

Commit ff5c1a8e, authored Jan 04, 2021 by guoyuxuan
Parent: 14597ce4

    add git ignore

Showing 4 changed files with 765 additions and 1 deletion:

  .gitignore              +235  -0
  tvm/.gitignore            +1  -1
  tvm/cmake/config.cmake  +268  -0
  tvm/config.cmake        +261  -0
.gitignore (new file, 0 → 100644)
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
*.S
# C extensions
*.so
*.ll
.npm
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
.conda/
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
docs/gen_modules
# PyBuilder
/target/
# IPython Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# dotenv
.env
# virtualenv
venv/
ENV/
# Spyder project settings
.spyderproject
# Rope project settings
.ropeproject
*~
*.pyc
*~
config.mk
# config.cmake
Win32
*.dir
perf
*.wasm
.emscripten
## IOS
DerivedData/
## Java
*.class
jvm/*/target/
jvm/*/*/target/
*.worksheet
*.idea
*.iml
*.classpath
*.project
*.settings
*/node_modules/
## Various settings
*.pbxuser
!default.pbxuser
*.mode1v3
!default.mode1v3
*.mode2v3
!default.mode2v3
*.perspectivev3
!default.perspectivev3
xcuserdata/
.pkl_memoize_*
.emscripten*
.m2
# Compiled Dynamic libraries
*.so
*.dylib
*.dll
# Compiled Object files
*.slo
*.lo
*.o
*.obj
# Precompiled Headers
*.gch
*.pch
# Compiled Static libraries
*.lai
*.la
*.a
*.lib
# Executables
*.exe
*.out
*.app
## Other
*.moved-aside
*.xccheckout
*.xcscmblueprint
.DS_Store
tags
cscope*
*.lock
# vim temporary files
*.swp
*.swo
# TVM generated code
perf
.bash_history
*.json
*.params
*.onnx
*.h5
synset.txt
cat.jpg
cat.png
docs.tgz
cat.png
*.mlmodel
tvm_u.*
tvm_t.*
# Mac OS X
.DS_Store
# Jetbrain
.idea
.ipython
.jupyter
.nv
.pylint.d
.python_history
.pytest_cache
.local
cmake-build-debug
# Visual Studio
.vs
# Visual Studio Code
.vscode
# tmp file
.nfs*
# keys
*.pem
*.p12
*.pfx
*.cer
*.crt
*.der
# patch sentinel
patched.txt
# Python type checking
.mypy_cache/
.pyre/
# pipenv files
Pipfile
Pipfile.lock
# conda package artifacts
conda/Dockerfile.cuda*
conda/pkg
.node_repl_history
# nix files
.envrc
*.nix
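One subtlety in the list above is the negation pairs such as `*.pbxuser` / `!default.pbxuser`: the wildcard ignores every `.pbxuser` file except the explicitly re-included default. Such rules can be checked with `git check-ignore` in a throwaway repository; a minimal sketch (file names here are made up for illustration):

```shell
# Demonstrate the *.pbxuser / !default.pbxuser pair from the list above
# in a scratch repository; the file names are hypothetical.
cd "$(mktemp -d)"
git init -q .
printf '*.pbxuser\n!default.pbxuser\n' > .gitignore
git check-ignore -v project.pbxuser          # matched by *.pbxuser
git check-ignore -q default.pbxuser || echo 'default.pbxuser is NOT ignored'
```

`check-ignore -v` also prints which .gitignore line matched, which is handy when debugging a list this long.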
tvm/.gitignore

@@ -93,7 +93,7 @@ ENV/
 *.pyc
 *~
 config.mk
-config.cmake
+# config.cmake
 Win32
 *.dir
 perf
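The only change in this hunk comments out the `config.cmake` rule, so a `config.cmake` placed in the tree (as this commit does with `tvm/config.cmake`) is no longer ignored and can be committed. The effect can be sketched in a scratch repository:

```shell
# Before the change config.cmake is ignored; after commenting the rule
# out, git will track it. Scratch repository for illustration only.
cd "$(mktemp -d)"
git init -q .
touch config.cmake
echo 'config.cmake' > .gitignore
git check-ignore -q config.cmake && echo 'ignored before the change'
echo '# config.cmake' > .gitignore
git check-ignore -q config.cmake || echo 'tracked after the change'
```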
tvm/cmake/config.cmake (new file, 0 → 100644)
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

#--------------------------------------------------------------------
# Template custom cmake configuration for compiling
#
# This file is used to override the build options in build.
# If you want to change the configuration, please use the following
# steps. Assume you are in the root directory. First copy this
# file so that any local changes will be ignored by git:
#
# $ mkdir build
# $ cp cmake/config.cmake build
#
# Next modify the relevant entries, and then configure:
#
# $ cd build
# $ cmake ..
#
# Then build in parallel with 8 threads:
#
# $ make -j8
#--------------------------------------------------------------------

#---------------------------------------------
# Backend runtimes.
#---------------------------------------------

# Whether to enable CUDA during compilation
#
# Possible values:
# - ON: enable CUDA with cmake's auto search
# - OFF: disable CUDA
# - /path/to/cuda: use specific path to the CUDA toolkit
set(USE_CUDA OFF)

# Whether to enable the ROCm runtime
#
# Possible values:
# - ON: enable ROCm with cmake's auto search
# - OFF: disable ROCm
# - /path/to/rocm: use specific path to rocm
set(USE_ROCM OFF)

# Whether to enable the SDAccel runtime
set(USE_SDACCEL OFF)

# Whether to enable the Intel FPGA SDK for OpenCL (AOCL) runtime
set(USE_AOCL OFF)

# Whether to enable the OpenCL runtime
#
# Possible values:
# - ON: enable OpenCL with cmake's auto search
# - OFF: disable OpenCL
# - /path/to/opencl-sdk: use specific path to the OpenCL SDK
set(USE_OPENCL OFF)

# Whether to enable the Metal runtime
set(USE_METAL OFF)

# Whether to enable the Vulkan runtime
#
# Possible values:
# - ON: enable Vulkan with cmake's auto search
# - OFF: disable Vulkan
# - /path/to/vulkan-sdk: use specific path to the Vulkan SDK
set(USE_VULKAN OFF)

# Whether to enable the OpenGL runtime
set(USE_OPENGL OFF)

# Whether to enable the microTVM runtime
set(USE_MICRO OFF)

# Whether to enable the RPC runtime
set(USE_RPC ON)

# Whether to build the C++ RPC server binary
set(USE_CPP_RPC OFF)

# Whether to embed StackVM into the runtime
set(USE_STACKVM_RUNTIME OFF)

# Whether to enable the tiny embedded graph runtime
set(USE_GRAPH_RUNTIME ON)

# Whether to enable additional graph debug functions
set(USE_GRAPH_RUNTIME_DEBUG OFF)

# Whether to enable additional VM profiler functions
set(USE_VM_PROFILER OFF)

# Whether to enable the uTVM standalone runtime
set(USE_MICRO_STANDALONE_RUNTIME OFF)

# Whether to build with LLVM support
# Requires LLVM version >= 4.0
#
# Possible values:
# - ON: enable LLVM with cmake's find search
# - OFF: disable LLVM
# - /path/to/llvm-config: select a specific LLVM when multiple llvm-dev versions are available
set(USE_LLVM OFF)

#---------------------------------------------
# Contrib libraries
#---------------------------------------------

# Whether to build with the BYODT software-emulated posit custom datatype
#
# Possible values:
# - ON: enable BYODT posit; requires setting UNIVERSAL_PATH
# - OFF: disable BYODT posit
#
# set(UNIVERSAL_PATH /path/to/stillwater-universal) for ON
set(USE_BYODT_POSIT OFF)

# Whether to use BLAS; choices: openblas, atlas, apple, none
set(USE_BLAS none)

# Whether to use MKL
# Possible values:
# - ON: enable MKL
# - /path/to/mkl: MKL root path
# - OFF: disable MKL
# set(USE_MKL /opt/intel/mkl) for UNIX
# set(USE_MKL ../IntelSWTools/compilers_and_libraries_2018/windows/mkl) for WIN32
# set(USE_MKL <path to venv or site-packages directory>) if using `pip install mkl`
set(USE_MKL OFF)

# Whether to use the MKL-DNN library; choices: ON, OFF, path to the mkldnn library
set(USE_MKLDNN OFF)

# Whether to use an OpenMP thread pool; choices: gnu, intel, none
# Note: "gnu" uses the gomp library, "intel" uses the iomp5 library
set(USE_OPENMP none)

# Whether to use contrib.random in the runtime
set(USE_RANDOM ON)

# Whether to use NNPACK
set(USE_NNPACK OFF)

# Possible values:
# - ON: enable TFLite with cmake's find search
# - OFF: disable TFLite
# - /path/to/libtensorflow-lite.a: use specific path to the TensorFlow Lite library
set(USE_TFLITE OFF)

# /path/to/tensorflow: TensorFlow root path when using the TFLite library
set(USE_TENSORFLOW_PATH none)

# Required for full builds with TFLite; not needed for runtime with TFLite.
# /path/to/flatbuffers: FlatBuffers root path when using the TFLite library
set(USE_FLATBUFFERS_PATH none)

# Possible values:
# - OFF: disable TFLite support for the Edge TPU
# - /path/to/edgetpu: use specific path to the Edge TPU library
set(USE_EDGETPU OFF)

# Whether to use cuDNN
set(USE_CUDNN OFF)

# Whether to use cuBLAS
set(USE_CUBLAS OFF)

# Whether to use MIOpen
set(USE_MIOPEN OFF)

# Whether to use MPS
set(USE_MPS OFF)

# Whether to use rocBLAS
set(USE_ROCBLAS OFF)

# Whether to use contrib sort
set(USE_SORT ON)

# Whether to use MKL-DNN (DNNL) codegen
set(USE_DNNL_CODEGEN OFF)

# Whether to use Arm Compute Library (ACL) codegen
# We provide 2 separate flags since we cannot build the ACL runtime on x86.
# This is useful for cases where you want to cross-compile a relay graph
# on x86 and then run on AArch64.
#
# An example of how to use this can be found here: docs/deploy/arm_compute_lib.rst.
#
# USE_ARM_COMPUTE_LIB - Support for compiling a relay graph, offloading supported
#                       operators to Arm Compute Library. OFF/ON
# USE_ARM_COMPUTE_LIB_GRAPH_RUNTIME - Run Arm Compute Library annotated functions via the ACL
#                                     runtime. OFF/ON/"path/to/ACL"
set(USE_ARM_COMPUTE_LIB OFF)
set(USE_ARM_COMPUTE_LIB_GRAPH_RUNTIME OFF)

# Whether to build with Arm Ethos-N support
# Possible values:
# - OFF: disable Arm Ethos-N support
# - path/to/arm-ethos-N-stack: use a specific version of the Ethos-N driver stack
set(USE_ETHOSN OFF)

# If USE_ETHOSN is enabled, use ETHOSN_HW (ON) if Ethos-N hardware is available on this machine;
# otherwise use ETHOSN_HW (OFF) to use the software test infrastructure
set(USE_ETHOSN_HW OFF)

# Whether to build with TensorRT codegen or runtime
# Examples are available here: docs/deploy/tensorrt.rst.
#
# USE_TENSORRT_CODEGEN - Support for compiling a relay graph where supported operators are
#                        offloaded to TensorRT. OFF/ON
# USE_TENSORRT_RUNTIME - Support for running TensorRT-compiled modules; requires presence of the
#                        TensorRT library. OFF/ON/"path/to/TensorRT"
set(USE_TENSORRT_CODEGEN OFF)
set(USE_TENSORRT_RUNTIME OFF)

# Build the ANTLR parser for the Relay text format
# Possible values:
# - ON: enable ANTLR by searching default locations (cmake find_program for antlr4 and /usr/local for the jar)
# - OFF: disable ANTLR
# - /path/to/antlr-*-complete.jar: path to a specific ANTLR jar file
set(USE_ANTLR OFF)

# Whether to use Relay debug mode
set(USE_RELAY_DEBUG OFF)

# Whether to build the fast VTA simulator driver
set(USE_VTA_FSIM OFF)

# Whether to build the cycle-accurate VTA simulator driver
set(USE_VTA_TSIM OFF)

# Whether to build the VTA FPGA driver (device side only)
set(USE_VTA_FPGA OFF)

# Whether to use Thrust
set(USE_THRUST OFF)

# Whether to build the TensorFlow TVMDSOOp module
set(USE_TF_TVMDSOOP OFF)

# Whether to use STL's std::unordered_map or TVM's POD-compatible Map
set(USE_FALLBACK_STL_MAP OFF)

# Whether to use the Hexagon device
set(USE_HEXAGON_DEVICE OFF)
set(USE_HEXAGON_SDK /path/to/sdk)

# Whether to use ONNX codegen
set(USE_TARGET_ONNX OFF)

# Whether to compile the standalone C runtime
set(USE_STANDALONE_CRT ON)
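Since every option in this template is a single-line `set(NAME VALUE)`, the active switches of a copied config are easy to audit with a one-liner. A minimal sketch, assuming the `build/config.cmake` location described in the file's own header:

```shell
# List the options switched ON in a config.cmake, assuming the
# single-line set(NAME VALUE) layout used by this template.
config=build/config.cmake   # path per the header instructions
sed -n 's/^set(\([A-Z0-9_]*\) ON)$/\1/p' "$config"
```

Against the template defaults above, this would print USE_RPC, USE_GRAPH_RUNTIME, USE_RANDOM, USE_SORT, and USE_STANDALONE_CRT.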
tvm/config.cmake (new file, 0 → 100644)
set(CMAKE_BUILD_TYPE "Debug")
set(CMAKE_CXX_FLAGS "-D_GLIBCXX_USE_CXX11_ABI=0")

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

#--------------------------------------------------------------------
# Template custom cmake configuration for compiling
#
# This file is used to override the build options in build.
# If you want to change the configuration, please use the following
# steps. Assume you are in the root directory. First copy this
# file so that any local changes will be ignored by git:
#
# $ mkdir build
# $ cp cmake/config.cmake build
#
# Next modify the relevant entries, and then configure:
#
# $ cd build
# $ cmake ..
#
# Then build in parallel with 8 threads:
#
# $ make -j8
#--------------------------------------------------------------------

#---------------------------------------------
# Backend runtimes.
#---------------------------------------------

# Whether to enable CUDA during compilation
#
# Possible values:
# - ON: enable CUDA with cmake's auto search
# - OFF: disable CUDA
# - /path/to/cuda: use specific path to the CUDA toolkit
set(USE_CUDA ON)

# Whether to enable the ROCm runtime
#
# Possible values:
# - ON: enable ROCm with cmake's auto search
# - OFF: disable ROCm
# - /path/to/rocm: use specific path to rocm
set(USE_ROCM OFF)

# Whether to enable the SDAccel runtime
set(USE_SDACCEL OFF)

# Whether to enable the Intel FPGA SDK for OpenCL (AOCL) runtime
set(USE_AOCL OFF)

# Whether to enable the OpenCL runtime
#
# Possible values:
# - ON: enable OpenCL with cmake's auto search
# - OFF: disable OpenCL
# - /path/to/opencl-sdk: use specific path to the OpenCL SDK
set(USE_OPENCL OFF)

# Whether to enable the Metal runtime
set(USE_METAL OFF)

# Whether to enable the Vulkan runtime
#
# Possible values:
# - ON: enable Vulkan with cmake's auto search
# - OFF: disable Vulkan
# - /path/to/vulkan-sdk: use specific path to the Vulkan SDK
set(USE_VULKAN OFF)

# Whether to enable the OpenGL runtime
set(USE_OPENGL OFF)

# Whether to enable the microTVM runtime
set(USE_MICRO OFF)

# Whether to enable the RPC runtime
set(USE_RPC ON)

# Whether to build the C++ RPC server binary
set(USE_CPP_RPC OFF)

# Whether to embed StackVM into the runtime
set(USE_STACKVM_RUNTIME OFF)

# Whether to enable the tiny embedded graph runtime
set(USE_GRAPH_RUNTIME ON)

# Whether to enable additional graph debug functions
set(USE_GRAPH_RUNTIME_DEBUG OFF)

# Whether to enable additional VM profiler functions
set(USE_VM_PROFILER OFF)

# Whether to enable the uTVM standalone runtime
set(USE_MICRO_STANDALONE_RUNTIME OFF)

# Whether to build with LLVM support
# Requires LLVM version >= 4.0
#
# Possible values:
# - ON: enable LLVM with cmake's find search
# - OFF: disable LLVM
# - /path/to/llvm-config: select a specific LLVM when multiple llvm-dev versions are available
set(USE_LLVM ON)

#---------------------------------------------
# Contrib libraries
#---------------------------------------------

# Whether to build with the BYODT software-emulated posit custom datatype
#
# Possible values:
# - ON: enable BYODT posit; requires setting UNIVERSAL_PATH
# - OFF: disable BYODT posit
#
# set(UNIVERSAL_PATH /path/to/stillwater-universal) for ON
set(USE_BYODT_POSIT OFF)

# Whether to use BLAS; choices: openblas, atlas, apple, none
set(USE_BLAS none)

# Whether to use MKL
# Possible values:
# - ON: enable MKL
# - /path/to/mkl: MKL root path
# - OFF: disable MKL
# set(USE_MKL /opt/intel/mkl) for UNIX
# set(USE_MKL ../IntelSWTools/compilers_and_libraries_2018/windows/mkl) for WIN32
# set(USE_MKL <path to venv or site-packages directory>) if using `pip install mkl`
set(USE_MKL OFF)

# Whether to use the MKL-DNN library; choices: ON, OFF, path to the mkldnn library
set(USE_MKLDNN OFF)

# Whether to use an OpenMP thread pool; choices: gnu, intel, none
# Note: "gnu" uses the gomp library, "intel" uses the iomp5 library
set(USE_OPENMP none)

# Whether to use contrib.random in the runtime
set(USE_RANDOM ON)

# Whether to use NNPACK
set(USE_NNPACK OFF)

# Possible values:
# - ON: enable TFLite with cmake's find search
# - OFF: disable TFLite
# - /path/to/libtensorflow-lite.a: use specific path to the TensorFlow Lite library
set(USE_TFLITE OFF)

# /path/to/tensorflow: TensorFlow root path when using the TFLite library
set(USE_TENSORFLOW_PATH none)

# Required for full builds with TFLite; not needed for runtime with TFLite.
# /path/to/flatbuffers: FlatBuffers root path when using the TFLite library
set(USE_FLATBUFFERS_PATH none)

# Possible values:
# - OFF: disable TFLite support for the Edge TPU
# - /path/to/edgetpu: use specific path to the Edge TPU library
set(USE_EDGETPU OFF)

# Whether to use cuDNN
set(USE_CUDNN ON)

# Whether to use cuBLAS
set(USE_CUBLAS OFF)

# Whether to use MIOpen
set(USE_MIOPEN OFF)

# Whether to use MPS
set(USE_MPS OFF)

# Whether to use rocBLAS
set(USE_ROCBLAS OFF)

# Whether to use contrib sort
set(USE_SORT ON)

# Whether to use MKL-DNN (DNNL) codegen
set(USE_DNNL_CODEGEN OFF)

# Whether to use Arm Compute Library (ACL) codegen
# We provide 2 separate flags since we cannot build the ACL runtime on x86.
# This is useful for cases where you want to cross-compile a relay graph
# on x86 and then run on AArch64.
#
# An example of how to use this can be found here: docs/deploy/arm_compute_lib.rst.
#
# USE_ARM_COMPUTE_LIB - Support for compiling a relay graph, offloading supported
#                       operators to Arm Compute Library. OFF/ON
# USE_ARM_COMPUTE_LIB_GRAPH_RUNTIME - Run Arm Compute Library annotated functions via the ACL
#                                     runtime. OFF/ON/"path/to/ACL"
set(USE_ARM_COMPUTE_LIB OFF)
set(USE_ARM_COMPUTE_LIB_GRAPH_RUNTIME OFF)

# Whether to build with Arm Ethos-N support
# Possible values:
# - OFF: disable Arm Ethos-N support
# - path/to/arm-ethos-N-stack: use a specific version of the Ethos-N driver stack
set(USE_ETHOSN OFF)

# If USE_ETHOSN is enabled, use ETHOSN_HW (ON) if Ethos-N hardware is available on this machine;
# otherwise use ETHOSN_HW (OFF) to use the software test infrastructure
set(USE_ETHOSN_HW OFF)

# Build the ANTLR parser for the Relay text format
# Possible values:
# - ON: enable ANTLR by searching default locations (cmake find_program for antlr4 and /usr/local for the jar)
# - OFF: disable ANTLR
# - /path/to/antlr-*-complete.jar: path to a specific ANTLR jar file
set(USE_ANTLR ON)

# Whether to use Relay debug mode
set(USE_RELAY_DEBUG ON)

# Whether to build the fast VTA simulator driver
set(USE_VTA_FSIM OFF)

# Whether to build the cycle-accurate VTA simulator driver
set(USE_VTA_TSIM OFF)

# Whether to build the VTA FPGA driver (device side only)
set(USE_VTA_FPGA OFF)

# Whether to use Thrust
set(USE_THRUST OFF)

# Whether to build the TensorFlow TVMDSOOp module
set(USE_TF_TVMDSOOP OFF)

# Whether to use STL's std::unordered_map or TVM's POD-compatible Map
set(USE_FALLBACK_STL_MAP OFF)

# Whether to use the Hexagon device
set(USE_HEXAGON_DEVICE OFF)
set(USE_HEXAGON_SDK /path/to/sdk)

# Whether to use ONNX codegen
set(USE_TARGET_ONNX OFF)

# Whether to compile the standalone C runtime
set(USE_STANDALONE_CRT ON)
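Compared with the template in `tvm/cmake/config.cmake`, this checked-in config switches to a Debug build with the pre-C++11 ABI and turns on CUDA, cuDNN, LLVM, ANTLR, and Relay debug mode. One way to surface such deltas, assuming the single-line `set(NAME VALUE)` layout and the paths used in this commit, is a small sketch:

```shell
# Compare two config.cmake files option by option, assuming the
# single-line set(NAME VALUE) layout; paths as in this commit.
extract() { sed -n 's/^set(\([A-Z0-9_]*\) \(.*\))$/\1 = \2/p' "$1" | sort; }
extract tvm/cmake/config.cmake > template.txt
extract tvm/config.cmake > custom.txt
diff template.txt custom.txt   # '<' lines are template values, '>' are overrides
```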