- 23 Apr, 2020 1 commit
This PR removes ir_pass (the old-style pass functions) in favor of analysis/transform (the new-style pass manager).
Tianqi Chen committed
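A minimal sketch of the style difference, assuming a post-refactor TVM Python API (the tiny workload below is only illustrative): the removed ir_pass functions operated on bare statements, while the new passes are tvm.transform.Pass objects applied to an IRModule.

```python
# Sketch only: assumes a TVM build with the post-refactor Python API.
import tvm
from tvm import te

A = te.placeholder((8,), name="A")
B = te.compute((8,), lambda i: A[i] * 2.0, name="B")
mod = tvm.lower(te.create_schedule(B.op), [A, B])  # IRModule of PrimFuncs

# Old style (removed):  stmt = tvm.ir_pass.Simplify(stmt)
# New style: a pass object from tvm.tir.transform applied to the whole module.
mod = tvm.tir.transform.Simplify()(mod)
```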
-
- 21 Apr, 2020 2 commits
Tianqi Chen committed
-
The legacy Simplify/CanonicalSimplify are now thin wrappers around the Analyzer. This PR removes these functions and migrates every place that requires simplification to create an Analyzer explicitly. The new API encourages more Analyzer sharing and can potentially enable context-aware, analyzer-based simplification.
Tianqi Chen committed
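For illustration, a minimal sketch of the Analyzer-based flow that replaces the legacy helpers (assuming the tvm.arith Python API; the expressions and the Range-based bind are illustrative and may differ slightly across versions):

```python
# Sketch only: tvm.arith.Analyzer replaces the legacy Simplify/CanonicalSimplify helpers.
import tvm
from tvm import te

ana = tvm.arith.Analyzer()               # one Analyzer instance can be shared
n = te.var("n")
print(ana.simplify(n * 4 + 4 - n * 4))   # expected: 4

# Context can be attached to the shared Analyzer, enabling context-aware simplification.
ana.bind(n, tvm.ir.Range(0, 8))          # constrain n to [0, 8)
print(ana.simplify(n < 8))               # expected: True under the bound
```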
-
- 20 Apr, 2020 1 commit
* [TIR][REFACTOR] RewriteForTensorCore -> te/schedule. RewriteForTensorCore depends on schedule information, which makes it differ from a typical pass (which should get all of its information from the input TIR). As a result, we refactor it as a SchedulePostProc step for now. We should revisit it later as we introduce more support for tensor core patterns in the TIR.
* Fix VTA to fit the new IR pattern
Tianqi Chen committed
-
- 19 Apr, 2020 1 commit
* [TIR][REFACTOR] Remove te::Tensor dependencies from TIR passes. te::Tensor is a useful object for tensor expressions, but it brings an unnecessary reverse dependency into TIR nodes such as Provide and Realize. This PR is a first step toward removing this dependency. We will use Buffer in all the places where te::Tensor was used. The rough correspondences are:
- Provide -> BufferStore
- Realize -> BufferRealize
- HalideCall -> BufferLoad
After this change, we can now use an IRModule of PrimFuncs cleanly to represent TIR at any point of the optimizations. Buffer serves as the abstraction in the TIR data model for intermediate storage and its constraints. We still keep Realize/HalideCall and Provide as TIR nodes for now to keep the change minimal. Right after ScheduleOps, we call SchedulePostProcToPrimFunc to canonicalize the temporary IR generated by TE (which contains these nodes) into TIR. The TIR optimizations are now mostly migrated to the pass manager. Followup PRs are needed to migrate the remaining few passes.
* Fix dev tutorial
Tianqi Chen committed
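As a rough illustration of the node correspondence (assuming the tvm.tir Python constructors; the buffer here is made up):

```python
# Sketch only: Buffer-based nodes replace the te::Tensor-based Provide/Realize/HalideCall.
import tvm
from tvm import tir

A = tir.decl_buffer((16,), "float32", name="A")
load = tir.BufferLoad(A, [0])                # roughly replaces HalideCall
store = tir.BufferStore(A, load + 1.0, [0])  # roughly replaces Provide
print(store)
```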
-
- 18 Apr, 2020 1 commit
- Migrate BoundCheckers and Simplify
- Migrate RewriteUnsafeSelect and RemoveNoOp
- Migrate UnrollLoop and StorageRewrite
- Migrate InjectDoubleBuffer and InjectVirtualThread
- Migrate LoopPartition and Vectorize
- Migrate CoProcSync, LiftAttrScope, InjectCopyIntrin
We still keep the ir_pass registrations for now. A separate PR is needed to refactor the parts before StorageFlatten.
Tianqi Chen committed
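A minimal sketch of driving several of the migrated passes through the pass manager (pass names assumed from tvm.tir.transform; the workload is illustrative):

```python
# Sketch only: composing migrated TIR passes with tvm.transform.Sequential.
import tvm
from tvm import te

A = te.placeholder((32,), name="A")
B = te.compute((32,), lambda i: A[i] + 1.0, name="B")
mod = tvm.lower(te.create_schedule(B.op), [A, B])

seq = tvm.transform.Sequential(
    [
        tvm.tir.transform.Simplify(),
        tvm.tir.transform.RemoveNoOp(),
        tvm.tir.transform.RewriteUnsafeSelect(),
    ]
)
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
```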
-
- 14 Apr, 2020 1 commit
Previously, MakePackedAPI was in the target-independent stage, but it nevertheless requires the device_type information that is only bound at a later, target-dependent stage. The previous placement was due to a limitation of LoweredFunc, which could not carry buffer_map info (so functions had to be lowered right away). This is no longer the case after the unified IR refactor. This PR migrates MakePackedAPI to a target-dependent stage and removes the unnecessary BindDevice pass.
Tianqi Chen committed
-
- 07 Apr, 2020 1 commit
Tianqi Chen committed
-
- 05 Apr, 2020 1 commit
* [REFACTOR][TIR] Migrate all low-level passes to the Pass Manager. This PR migrates tvm.lower to return an IRModule of PrimFuncs instead of LoweredFuncs.
* Remove LoweredFunc.
Tianqi Chen committed
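For illustration, a minimal sketch of the resulting API, assuming a post-refactor TVM build (the kernel name is made up):

```python
# Sketch only: tvm.lower now returns an IRModule containing tir.PrimFuncs.
import tvm
from tvm import te

A = te.placeholder((16,), name="A")
B = te.compute((16,), lambda i: A[i] + 1.0, name="B")
mod = tvm.lower(te.create_schedule(B.op), [A, B], name="add_one")

assert isinstance(mod, tvm.IRModule)
assert isinstance(mod["add_one"], tvm.tir.PrimFunc)  # no LoweredFunc anymore
```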
-
- 03 Apr, 2020 1 commit
* [REFACTOR][TIR] Migrate most of the low-level build to use the Pass Manager:
- SplitHostDevice
- ThreadSync
- BindDevice
- LowerThreadAllreduce
- Provide a temporary fix for printing an IRModule with PrimFuncs before the formal text printer.
* Address comments, fix tests.
* Fix relay tests
* Explicit move
Tianqi Chen committed
-
- 02 Apr, 2020 2 commits
- Migrate LowerTVMBuiltin
- Migrate InferFragment and LowerThreadAllreduce
- Migrate ThreadSync
- Refactor target::Build to take IRModule directly.
- Remove unused legacy functions.
Tianqi Chen committed
-
Haozheng Fan committed
-
- 27 Feb, 2020 1 commit
* [REFACTOR][PY][API-CHANGE] Remove legacy python files. Use the te namespace for most of the tensor expression primitives:
- tvm.create_schedule -> tvm.te.create_schedule
- tvm.placeholder -> tvm.te.placeholder
- tvm.compute -> tvm.te.compute
* Remove top-level exposures.
Tianqi Chen committed
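A minimal sketch of the renamed entry points (the final line assumes an LLVM-enabled TVM build):

```python
# Sketch only: the tensor-expression primitives now live under tvm.te.
import tvm
from tvm import te

A = te.placeholder((128,), name="A")          # was tvm.placeholder
B = te.compute((128,), lambda i: A[i] + 1.0)  # was tvm.compute
s = te.create_schedule(B.op)                  # was tvm.create_schedule
lib = tvm.build(s, [A, B], target="llvm")     # requires an LLVM-enabled build
```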
-
- 18 Feb, 2020 1 commit
- Move the related files to tvm.te
- Move build_module.py to tvm.driver
Tianqi Chen committed
-
- 14 Feb, 2020 1 commit
- Move related files into the corresponding locations to match the C++ layout
- Keep the top-level TVM API backward compatible to minimize changes in topi
tqchen committed
-
- 13 Feb, 2020 1 commit
Move the related target modules into tvm.target. API change:
- tvm.target.current_target -> tvm.target.Target.current
- tvm.datatype -> tvm.target.datatype
tqchen committed
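For illustration, a minimal sketch of the relocated target API (attribute names assumed from recent releases):

```python
# Sketch only: tvm.target.Target.current replaces tvm.target.current_target.
import tvm

with tvm.target.Target("llvm"):
    tgt = tvm.target.Target.current()  # was tvm.target.current_target()
    print(tgt.kind.name)               # expected: "llvm" (attribute name may vary by version)
```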
-
- 12 Feb, 2020 1 commit
* [REFACTOR][PY][API-CHANGE] Establish tvm.ir and migrate the corresponding relay files into the new folder. API change:
- relay.Module -> tvm.IRModule
* Update with ADT
* Migrate transform
* Address comments
* Migrate module
* Migrate json_compact
* Migrate attrs
* Move LoweredFunc to stmt temporarily
* Temporarily migrate container
* Finish migrating container
Tianqi Chen committed
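A minimal sketch of the renamed module type (assuming the relay front end is available; the function is illustrative):

```python
# Sketch only: relay.Module is now tvm.IRModule.
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")
func = relay.Function([x], relay.add(x, x))
mod = tvm.IRModule.from_expr(func)  # was relay.Module.from_expr(func)
print(mod["main"])
```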
-
- 09 Feb, 2020 1 commit
Tianqi Chen committed
-
- 07 Feb, 2020 1 commit
* [REFACTOR][PY-API] Polish the tvm.runtime and tvm.runtime.module APIs. This PR updates tvm.runtime to use the new FFI style.
- Remove the top-level tvm.module to avoid confusion between runtime.Module and IRModule
- API changes with respect to runtime.Module:
  - tvm.module.load -> tvm.runtime.load_module
  - tvm.module.enabled -> tvm.runtime.enabled
  - tvm.module.system_lib -> tvm.runtime.system_lib
- Remove the dependency on api_internal from runtime.
* Update module.load to the latest API
Tianqi Chen committed
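For illustration, a minimal sketch of the renamed helpers (the library path in the comment is illustrative):

```python
# Sketch only: runtime module helpers moved from tvm.module to tvm.runtime.
import tvm

print(tvm.runtime.enabled("llvm"))    # was tvm.module.enabled("llvm")
syslib = tvm.runtime.system_lib()     # was tvm.module.system_lib()
# lib = tvm.runtime.load_module("deploy_lib.so")  # was tvm.module.load("deploy_lib.so")
```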
-
- 05 Feb, 2020 2 commits
* [REFACTOR][PY] Establish tvm.runtime. This PR establishes the tvm.runtime namespace that contains the core runtime data structures. The top-level APIs are kept intact for now via re-exporting; we will follow up later to clean up some of them.
* Fix ndarray name
Tianqi Chen committed
-
* [REFACTOR][PY] tvm._ffi
- Remove "from __future__ import absolute_import" in the related files, as it is no longer needed when the code only runs on Python 3
- Remove the reverse dependency of _ctypes/_cython on object_generic
- function.py -> packed_func.py
- Function -> PackedFunc
- All registry-related logic goes to tvm._ffi.registry
- Use absolute references for FFI-related calls:
  - tvm._ffi.register_object
  - tvm._ffi.register_func
  - tvm._ffi.get_global_func
* Move get_global_func to the FFI side
Tianqi Chen committed
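A minimal sketch of the consolidated registry entry points ("demo.add_one" is an illustrative name, not an existing TVM global function):

```python
# Sketch only: registry helpers are accessed through tvm._ffi.
import tvm

@tvm._ffi.register_func("demo.add_one")
def _add_one(x):
    return x + 1

f = tvm._ffi.get_global_func("demo.add_one")
print(f(41))  # expected: 42
```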
-
- 11 Jan, 2020 1 commit
* Replace TensorObj and TensorValue with NDArray
* NodeBase to Object in Python
* Rebase
Zhi committed
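For illustration, a minimal sketch of the unified runtime types assumed here (IntImm is used only as an example of an Object):

```python
# Sketch only: runtime tensor values are plain tvm.nd.NDArray, and IR nodes derive from tvm.runtime.Object.
import numpy as np
import tvm

arr = tvm.nd.array(np.arange(4, dtype="float32"))
assert isinstance(arr, tvm.nd.NDArray)
assert isinstance(tvm.tir.IntImm("int32", 1), tvm.runtime.Object)
```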
-
- 15 Nov, 2019 1 commit
Zhao Wu committed
-
- 09 Nov, 2019 1 commit
* Add Auto TensorCore unit test
* Rebase to tvm master branch & add auto tensor core
* Code refine
* Add tensor core switch by pragma
* Add pragma in tensor core example code
* Get real tile size to replace hard-coded 16
* Support more than 2 dimensions (e.g. batch matmul) for buffer bind scope
* Support batch matmul
* Move CUDA env check to tensor_core.cc
* Code refine for tensor_core.cc
* Refine comments
* Some refinements of code and comments
* Update TensorCore UT to pass the CPU test
* Remove redundant code
* matmul's storage align for different layouts
* Add support for different positions of type cast
* Add formal tutorial for auto tensorcore codegen
* Move tensorcore check up to tutorial code
* Code and doc refine
* Comment out tune_and_evaluate in tutorial
* Fix cpplint error
Minmin Sun (孙敏敏) committed
-
- 24 Oct, 2019 2 commits
* Add tensor core support
* Avoid memory bank conflict
* Fix thread sync & better performance
* Better performance
* Add schedule test for conv2d
* Extend into BatchMatMul
* Support configuring fragment shape and layout using intrinsics
* Add TensorCore tutorial
* Add int support and fix lint
* Address comments
* Add 32*16*8 TensorCore test
* Fix wmma include logic
Siyuan Feng committed
-
* Fix example code in the comment of tvm.build_module.build()
* Update build_module.py
optima2005 committed
-
- 11 Oct, 2019 1 commit
* [tvm][any] Broadcast with values other than 1
* Add test for incompatible runtime values
* Remove hybrid script compact buffer binding
* Retrigger CI
Zhi committed
-
- 24 May, 2019 1 commit
Tianqi Chen committed
-
- 21 May, 2019 1 commit
Logan Weber committed
-
- 08 Apr, 2019 1 commit
* [HEADER] ASF header dir=include
* [HEADER] ASF Header dir=src
* [HEADER] ASF Header -dir=python
* [HEADER] ASF header dir=topi
* [HEADER] ASF Header dir=nnvm
* [HEADER] ASF Header -dir=tutorials
* [HEADER] ASF Header dir=tests
* [HEADER] ASF Header -dir=docker
* fix whitespace
* [HEADER] ASF Header -dir=jvm
* [HEADER] ASF Header -dir=web
* [HEADER] ASF Header --dir=apps
* [HEADER] ASF Header --dir=vta
* [HEADER] ASF Header -dir=go
* temp
* [HEADER] ASF Header --dir=rust
* [HEADER] Add ASF Header --dir=cmake
* [HEADER] ASF Header --dir=docs
* [HEADER] Header for Jenkinsfile
* [HEADER] ASF Header to toml and md
* [HEADER] ASF Header to gradle
* Finalize rat cleanup
* Fix permission
* Fix java test
* temporary remove nnvm onnx test
Tianqi Chen committed
-
- 13 Feb, 2019 1 commit
* A preliminary version is done?
* We no longer need the redundant hybrid/api.py
* Support assert stmt
* Cast supported
* intrin -> runtime; util is mainly in charge of compilation time
* Assert statement
* Fix python lint
* Fix cpp lint
* On the way to module
* Rollback .cc
* Fix typo, no direct expose then
* @vinx13 ceil is added, I guess?
* WIP...
* Temp commit
* Fix import
* A preliminary version is done?
* On the way to building the hybrid module
* Nearly fixed...
* The dumped python is equivalent to the original python
* On the way to bootstrap
* CPU bootstrap done
* Bootstrap!
* Fix lint
* Fix doc
* Resolve some review concerns
* Support load/save
* Fix lint
* Thanks to xqdan, who fixed my typo
* Fix build, make dump non-optional
* Add vthread
* Jesus, why did I add this?
Jian Weng committed
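A minimal sketch of the hybrid-script flow described above (the decorator location is assumed to be tvm.te.hybrid in current releases; the kernel is illustrative):

```python
# Sketch only: a hybrid-script kernel used alongside regular tensor expressions.
import tvm
from tvm import te

@te.hybrid.script
def add_one(a):
    b = output_tensor(a.shape, a.dtype)  # output_tensor is injected by the hybrid runtime
    for i in range(a.shape[0]):
        b[i] = a[i] + 1.0
    return b

A = te.placeholder((8,), name="A", dtype="float32")
B = add_one(A)              # builds a hybrid op
s = te.create_schedule(B.op)
```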
-
- 12 Feb, 2019 1 commit
Marina Kolpakova committed
-
- 08 Jan, 2019 1 commit
Li-Wen Chang committed
-
- 06 Dec, 2018 1 commit
xqdan committed
-
- 30 Nov, 2018 1 commit
A pass that instruments checks before memory accesses (load/store). This allows invalid memory accesses to be caught. The patch is related to this issue: https://discuss.tvm.ai/t/array-bounds-checking/944
Denis Khalikov committed
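For illustration, a minimal sketch of enabling the instrumentation in a recent TVM build (the config key "tir.instrument_bound_checkers" is assumed; older releases exposed the switch through build_config, and the final line assumes an LLVM-enabled build):

```python
# Sketch only: enable bound-check instrumentation when building a kernel.
import tvm
from tvm import te

A = te.placeholder((16,), name="A")
B = te.compute((16,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)

with tvm.transform.PassContext(config={"tir.instrument_bound_checkers": True}):
    lib = tvm.build(s, [A, B], target="llvm")  # out-of-bounds accesses now trap at run time
```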
-
- 29 Nov, 2018 1 commit
tvm.Schedule ==> tvm.schedule.Schedule
Liang ZOU committed
-
- 19 Nov, 2018 1 commit
[HYBRID FRONTEND] Modify the hybrid script to the new interface; hybrid op supported; enable compilation_database in CMakeLists.txt (#1757)
Jian Weng committed
-
- 29 Sep, 2018 1 commit
Zhi committed
-
- 22 Sep, 2018 1 commit
Zhi committed
-
- 05 Sep, 2018 1 commit
Jian Weng committed
-