Unverified Commit 7cb85d81 by Tianqi Chen Committed by GitHub

[VERSION] Update to 0.5.dev (#1623)

* [VERSION] Update to 0.5.dev

* Update the docs to include all intrins
parent 5d533ec9
......@@ -9,6 +9,69 @@ Refer to the Roadmap issue for the complete list of on-going version features.
If you check in something that is not reflected in Roadmap issue, please reply
to that issue so it can get added.
## 0.4
This release features several major improvements. The high-level graph optimizer is now part of the TVM repo. Some of the highlights: initial support of AutoTVM for automated optimization; the customized accelerator backend VTA.
- Tensor operator primitives
  - Introduce an attrs field to operator primitives (e.g. compute) to store additional metadata; the attrs can be used as hints for scheduling
- Enable embedding of asm micro-kernels
- Hybrid python programming model
  - Python AST based IR builder interface
  - Support for GPU programs
- AutoTVM: automated tuning and scheduling
- basic autotvm infra
- GPU IR verifier
- basic autotuning tutorial
- topi integration
- ARM support
- winograd support
- initial support of ARM autotuning records
- TOPI Vision
  - Generic GPU sort support (useful for vision)
- SSD operator support
- TOPI numpy consistency
  - Rename all binary operators for numpy consistency: broadcast_add -> add, broadcast_sub -> subtract, broadcast_mul -> multiply, broadcast_div -> divide
- New operators: slice, LRN, equal, not_equal, less, greater
- tutorials on topi
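The broadcast_* renames above amount to a simple mapping; as a sketch (TOPI itself is not imported here, and `new_name` is a hypothetical helper for illustration only):

```python
# Old broadcast_* operator names mapped to their numpy-consistent
# replacements, exactly as listed in the release notes.
RENAMES = {
    "broadcast_add": "add",
    "broadcast_sub": "subtract",
    "broadcast_mul": "multiply",
    "broadcast_div": "divide",
}

def new_name(old):
    """Return the numpy-consistent name for an old broadcast_* operator;
    names that were not renamed pass through unchanged."""
    return RENAMES.get(old, old)
```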
- Initial low-bit operator support
- Optimized popcount generation on ARM
- general bit-serial convolution and GEMM
- optimized low bit kernels
- parallel optimization
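The bit-serial idea behind these kernels can be sketched in plain Python. This is a toy dot product over 2-bit unsigned values, not the actual TOPI implementation (which is vectorized and uses hardware popcount); it only shows the bit-plane decomposition:

```python
def popcount(x):
    # Number of set bits; maps to a single instruction on ARM/x86,
    # which is why optimized popcount generation matters here.
    return bin(x).count("1")

def bitserial_dot(a, b, bits=2):
    """Dot product of two lists of `bits`-bit unsigned ints, computed
    one bit-plane pair at a time with AND + popcount + shift."""
    total = 0
    for i in range(bits):
        # Pack bit-plane i of vector a into a single integer.
        plane_a = sum(((x >> i) & 1) << k for k, x in enumerate(a))
        for j in range(bits):
            plane_b = sum(((y >> j) & 1) << k for k, y in enumerate(b))
            # Each matching bit pair contributes 2^(i+j) to the product.
            total += popcount(plane_a & plane_b) << (i + j)
    return total
```

The same decomposition extends from dot products to convolution and GEMM, which is what the general bit-serial kernels implement.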
- New TOPI backend optimization for Intel graphics
- Adapt AVX schedules for SSE target
- VTA: customized accelerator backend
- custom hardware backend example
- tutorials on how to use customized accelerator
- Initial experimental support for HLS backend
- Bugfix in SPIRV code generator for vulkan
- libdevice support, enable NVPTX backend
- Introduce NDArrayContainer for managed NDArray
- RPC and Device API
- Support communication between big/small endian machines.
  - RPC and device API protocol upgrade to support big/little endian communication. This is a non-backward compatible change; the latest version of the TVM runtime must be used with the new RPC.
- graduate rpc from contrib, tvm.contrib.rpc->tvm.rpc
  - Support tracker in Android RPC; add fault tolerance for AutoTVM
- BIG.LITTLE aware threadpool
- tvm4j graph runtime that runs end to end workload in java
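The kind of big/little-endian interop the upgraded protocol handles can be illustrated with Python's struct module. This is a generic sketch of explicit byte-order serialization, not the actual TVM RPC wire format:

```python
import struct

def encode_u32(value, big_endian=False):
    """Serialize a 32-bit unsigned int with an explicit byte order,
    so both ends agree regardless of host endianness."""
    return struct.pack(">I" if big_endian else "<I", value)

def decode_u32(data, big_endian=False):
    """Deserialize a 32-bit unsigned int with the same explicit order."""
    return struct.unpack(">I" if big_endian else "<I", data)[0]
```

Fixing the byte order in the format string (rather than using the host's native order) is what lets a little-endian host talk to a big-endian device and vice versa.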
- DLPack support
- Support from_dlpack and to_dlpack
- Enables bridges to pytorch
- Enable link of stackvm in runtime
- Tensorflow graphdef frontend
- Keras frontend
  - Improved to support reuse of layers; added activations
- ONNX
- gather, LRN
- CoreML frontend
- Support C-RNN and activation functions
- Fix grads for sum and expand_like
- Enhanced operator fusion for multiple elemwise branches
- Separate nnvm fusion and compilation pass
- Unified build system to cmake, customizable cmake path for vulkan, rocm, cuda
## 0.3
This release features numerous improvements in TOPI and backends. We make the first step toward object detection support in TOPI, featuring operators necessary for YOLO and SSD. TOPI now supports a numpy-style API and operator overloading. RPC is significantly improved to support resource allocation and using a pool of devices. We are adding two new backends: WebGL for running GPUs in the browser, and Vulkan for running on the next-generation graphics API.
......
{% set version = "0.4.dev" %}
{% set version = "0.5.dev" %}
package:
name: nnvm
......
{% set version = "0.4.dev" %}
{% set version = "0.5.dev" %}
package:
name: topi
......
{% set version = "0.4.dev" %}
{% set version = "0.5.dev" %}
package:
name: tvm-libs
......
{% set version = "0.4.dev" %}
{% set version = "0.5.dev" %}
package:
name: tvm
......
......@@ -6,7 +6,10 @@ tvm.intrin
tvm.call_packed
tvm.call_pure_intrin
tvm.call_intrin
tvm.call_pure_extern
tvm.call_extern
tvm.call_llvm_intrin
tvm.register_intrin_rule
tvm.exp
tvm.log
......@@ -18,7 +21,10 @@ tvm.intrin
.. autofunction:: tvm.call_packed
.. autofunction:: tvm.call_pure_intrin
.. autofunction:: tvm.call_intrin
.. autofunction:: tvm.call_pure_extern
.. autofunction:: tvm.call_extern
.. autofunction:: tvm.call_llvm_intrin
.. autofunction:: tvm.register_intrin_rule
.. autofunction:: tvm.exp
.. autofunction:: tvm.log
......
......@@ -43,7 +43,7 @@
#endif
// TVM version
#define TVM_VERSION "0.4.0"
#define TVM_VERSION "0.5.dev"
// TVM Runtime is DLPack compatible.
......
......@@ -100,5 +100,7 @@ def find_lib_path(name=None, search_path=None, optional=False):
# current version
# We use the version of the incoming release for code that is under development
__version__ = "0.4.0"
# We use the version of the incoming release for code
# that is under development.
# The following line is set by tvm/python/update_version.py
__version__ = "0.5.dev"
"""
This is the global script that sets the version information of TVM.
This script runs and updates all the locations related to versions.
List of affected files:
- tvm-root/python/tvm/_ffi/libinfo.py
- tvm-root/include/tvm/runtime/c_runtime_api.h
- tvm-root/web/tvm_runtime.js
- tvm-root/conda/tvm/meta.yaml
- tvm-root/conda/topi/meta.yaml
- tvm-root/conda/nnvm/meta.yaml
- tvm-root/conda/tvm-libs/meta.yaml
"""
import os
import re
# current version
# We use the version of the incoming release for code
# that is under development
__version__ = "0.5.dev"
# Implementations
def update(file_name, pattern, repl):
    updated_lines = []
    hit_counter = 0
    need_update = False
    for line in open(file_name):
        result = re.findall(pattern, line)
        if result:
            assert len(result) == 1
            hit_counter += 1
            if result[0] != repl:
                line = re.sub(pattern, repl, line)
                need_update = True
                print("%s: %s->%s" % (file_name, result[0], repl))
            else:
                print("%s: version is already %s" % (file_name, repl))
        updated_lines.append(line)
    if hit_counter != 1:
        raise RuntimeError("Cannot find version in %s" % file_name)
    if need_update:
        with open(file_name, "w") as output_file:
            for line in updated_lines:
                output_file.write(line)


def main():
    curr_dir = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
    proj_root = os.path.abspath(os.path.join(curr_dir, ".."))
    # python path
    update(os.path.join(proj_root, "python", "tvm", "_ffi", "libinfo.py"),
           r"(?<=__version__ = \")[.0-9a-z]+", __version__)
    # C++ header
    update(os.path.join(proj_root, "include", "tvm", "runtime", "c_runtime_api.h"),
           r"(?<=TVM_VERSION \")[.0-9a-z]+", __version__)
    # conda
    for path in ["tvm", "topi", "nnvm", "tvm-libs"]:
        update(os.path.join(proj_root, "conda", path, "meta.yaml"),
               r"(?<=version = \")[.0-9a-z]+", __version__)
    # web
    update(os.path.join(proj_root, "web", "tvm_runtime.js"),
           r"(?<=@version )[.0-9a-z]+", __version__)


if __name__ == "__main__":
    main()
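The lookbehind patterns the script relies on can be exercised on a sample line in isolation. Each pattern matches only the version string that follows a fixed prefix, without consuming the prefix, so `re.sub` replaces just the version:

```python
import re

# Same style of pattern as used in update(): a fixed-width lookbehind
# anchors the match right after the prefix, leaving it untouched.
pattern = r"(?<=__version__ = \")[.0-9a-z]+"
line = '__version__ = "0.4.0"\n'

# findall sees exactly one version string on the line.
found = re.findall(pattern, line)

# sub rewrites only the version, preserving the surrounding syntax.
new_line = re.sub(pattern, "0.5.dev", line)
```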
......@@ -2,7 +2,7 @@
* TVM Javascript web runtime library.
*
* @projectname tvm
* @version 0.1
* @version 0.5.dev
*/
/* eslint no-unused-vars: "off" */
/* eslint no-unexpected-multiline: "off" */
......