- 06 Jan, 2020 5 commits
* [CONV] Asymmetric padding * fix lint error * update for legalize, rocm and cudnn * add more test cases * change more symmetric padding * change conv2d winograd tests according to the original cases * remove 'alter_op_layout.h' header in bitserial.cc
optima2005 committed
* Support empty tensor * Fix schedule * Refactor * Minor fix * Fix pylint * Merge cpp and python is_empty_shape
Yao Wang committed
Previously we supported only a limited case of function type deduction, and in many places we had to supply the type twice during set_body_typed (once in the template parameter and again in the lambda signature). This PR improves the deduction by enabling automatic function signature deduction.

```
TVM_REGISTER_GLOBAL("sub")
.set_body_typed([](int x, int y) -> int { return x - y; });
```

Unfortunately, because of a template conflict, we can no longer support the original case where both the type signature and the lambda are supplied through set_body_typed. This PR refactors the existing registrations to the new style (see the sketch after this entry).
Tianqi Chen committed
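A small before/after sketch of the set_body_typed change above. The new style is the one shown in the commit message; the old explicit-template spelling is a reconstruction of the "type supplied twice" pattern the PR removes and may not match the exact prior syntax.

```
// Sketch: registering a packed function with automatic signature deduction.
// The old form is reconstructed from the description above (type given twice).
#include <tvm/runtime/registry.h>

// Old style (signature in the template parameter and again in the lambda):
//   TVM_REGISTER_GLOBAL("sub")
//   .set_body_typed<int(int, int)>([](int x, int y) -> int { return x - y; });

// New style (signature deduced from the lambda):
TVM_REGISTER_GLOBAL("sub")
.set_body_typed([](int x, int y) -> int { return x - y; });
```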
Tianqi Chen committed
Kevin Yuan committed
- 05 Jan, 2020 2 commits
Zhi committed
Rationale: the printer is common infrastructure shared across all nodes.
Tianqi Chen committed
- 04 Jan, 2020 4 commits
TVM_REGISTER_API is an alias of TVM_REGISTER_GLOBAL. In the spirit of simplifying redirections, this PR removes the original TVM_REGISTER_API macro and directly uses TVM_REGISTER_GLOBAL (a before/after sketch follows this entry). This kind of refactor also makes it easier for IDE navigation tools such as the FFI navigator to provide a better code-reading experience. Also moves EnvFunc's definition to node.
Tianqi Chen committed
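A hedged before/after for the macro rename described above; "myapp.echo" and the lambda body are hypothetical, used only to show the spelling change.

```
// Hypothetical registration showing the rename; only the macro name changes.
#include <tvm/runtime/registry.h>

// Before (removed alias):
//   TVM_REGISTER_API("myapp.echo").set_body_typed([](int x) { return x; });

// After:
TVM_REGISTER_GLOBAL("myapp.echo")
.set_body_typed([](int x) { return x; });
```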
This PR moves a few base types from relay to the ir sub-folder. These types will serve as a common type system across the stack. Notably, we want to be able to use the same FuncType for all function signatures (a rough usage sketch follows this entry). I tried to make a minimal move that brings in just the dependencies needed for FuncType. We can discuss what additional things we want to move as a follow-up. Note that, because TensorType depends on the low-level Expr, we will need to split type.h into two files and introduce a tensor_type.h (or leave them in relay for now).
Tianqi Chen committed
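As a rough illustration of using the shared FuncType for a function signature, a sketch assuming the relay-style constructors (TensorType(shape, dtype) and FuncType(arg_types, ret_type, type_params, type_constraints)); the header path and namespaces are assumptions, not the exact layout introduced by this PR.

```
// Sketch: the signature fn(int32 scalar, int32 scalar) -> int32 scalar
// expressed as a FuncType. Constructor shapes are assumed from relay.
#include <tvm/relay/type.h>

tvm::relay::Type MakeBinaryIntSignature() {
  using namespace tvm::relay;
  Type scalar_i32 = TensorType({}, tvm::DataType::Int(32));
  return FuncType({scalar_i32, scalar_i32}, scalar_i32,
                  /*type_params=*/{}, /*type_constraints=*/{});
}
```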
Currently, we use a tvm::Var to represent a placeholder for shapes in generic types. This is not necessary for GlobalTypeVar (we never parameterize by a shape var), and is a bit awkward for TypeVar. As we move to a unified type system, we want to break the dependency of the base TypeVar (which is shared across the languages) on the expression. Note that it is fine for TensorType to depend on Expr. One alternative way to embed the Var would be to introduce a TypeVarExpr, which can wrap a TypeVar as an Expr. However, that alternative won't be natural until we migrate the types to the global scope. Luckily, we have not yet started to depend heavily on shape parameterization. This PR removes the tvm::Var from the type vars. We will follow up with another PR to migrate the types to a base location; after that, we should be able to use the more elegant approach via TypeVarExpr.
Tianqi Chen committed
* Add more shape funcs * Fix test * Enhance test_any_concat * Fix pylint * Minor fix test * Fix pylint * Minor refactor * Add test any for elemwise
Yao Wang committed
- 03 Jan, 2020 9 commits
* [relay] test tensor_array in vm * add tensor_array scatter test
Zhi committed
Zhi committed
Tianqi Chen committed
Liangfu Chen committed
Animesh Jain committed
* Use memory efficient calibrate * Fixed indexing * add cpp kl stub * ported KL cpp from mxnet * Fixed std::distance arguments order * remove python implementation * fix lint and indent * fix indent * refactoring * fix lint * fix for i386
masahi committed
Tianqi Chen committed
* add onnx resize converter * update frontends * updating topi * adding onnx resize tests * fixed NHWC test by casting size dtype to int32 * fix tests * fix lint * update existing test cases * fix tensorflow frontend * fix lint * remove NHWC stuff * update topi resize test for half_pixel * update doc * fix doc * remove onnx resize bits
masahi committed
* CombineContextCall * Migrate BoundChecker * Migrate CoprocSync * Migrate detect_device * Migrate loop_partition * Migrate infer_fragment * Migrate inject_copy_intrin * Migrate inject double buffer * Migrate lower_intrin and simplify * Migrate storage flatten * Migrate inject prefetch * Migrate inject_virtual_thread * migrate inline * Migrate lift attr scope * Migrate custom datatypes * migrate lower_thread_all_reduce * Migrate lower_tvm_builtin * migrate lower_warp memory * Migrate make_api.cc * Migrate remap_thread_axis * Migrate remove_no_op * migrate rewrite_unsafe_select * Migrate skip_assert simple_passes * Migrate split_host_device * Migrate ssa * Migrate storage_access * Migrate storage_rewrite * Migrate tensor_core * Migrate unroll_loop * Migrate vectorize * Migrate verify compact_buffer gpu_code * Migrate verify_memory * Migrate storage_sync * Remove unused refs to mutator * Migrate hybrid_op * Migrate tensorize * Migrate schedule ops * Migrate schedule_dataflow_rewrite * Migrate auto_inline_elemwise * Remove unnecessary ref to visitor * remove unnecessary ref * Migrate bound_deducer * Migrate domain_touched * Migrate autotvm feature touch extractor * Add annotations
Tianqi Chen committed
- 02 Jan, 2020 2 commits
Tianqi Chen committed
IRMutator and IRVisitor were the main data structures for low-level IR visiting. As the project evolved, we started to introduce more powerful variants such as StmtFunctor and ExprFunctor. This PR brings in new classes that let us migrate the visitors and mutators to be sub-classes of these functors (a usage sketch follows this entry). List of changes: - Create separate ExprMutator and StmtMutator classes, following the convention used in relay. - Introduce copy-on-write to StmtMutator, which can later benefit statement mutation if we use move semantics and keep a single copy of the stmt. - Move two generic visit/mutate utilities to use the new classes. We will send follow-up PRs to migrate the existing passes that use the legacy visitors to the new ones.
Tianqi Chen committed
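A minimal sketch of what a pass looks like against the new mutator classes, following the relay-style convention described above: subclass StmtMutator and override the VisitStmt_ overload for the node of interest. The include path, namespace, and node/field names are assumptions based on later TVM layouts, not the exact names at this commit.

```
// Sketch: a tiny pass that drops assert statements, keeping their bodies,
// written as a StmtMutator sub-class (names and paths assumed).
#include <tvm/tir/stmt_functor.h>

namespace tvm {
namespace tir {

class AssertRemover : public StmtMutator {
 public:
  Stmt VisitStmt_(const AssertStmtNode* op) final {
    // Drop the assert itself and keep mutating inside its body.
    return VisitStmt(op->body);
  }
};

}  // namespace tir
}  // namespace tvm
```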
- 01 Jan, 2020 2 commits
* [FRONTEND][TF] Add conv3d * fix high rtol
optima2005 committed
Zhi committed
- 31 Dec, 2019 3 commits
Zhi committed
* [REFACTOR][OBJECT] Consolidate NodePtr/Ref/Hash/Equal and macros to Object. Historically, we have had classes like NodePtr/Ref/HashEqual. After the unified object protocol, these names are just aliases of their Object counterparts. Moreover, helper macros for defining these objects were scattered across the codebase. This PR consolidates the terminology into the corresponding names in the Object system so we have a clean and consistent API moving forward (see the sketch after this entry). * Update include/tvm/attrs.h Co-Authored-By: Wei Chen <ipondering.weic@gmail.com> * fix compilation Co-authored-by: Wei Chen <ipondering.weic@gmail.com>
Tianqi Chen committed
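To illustrate the consolidated terminology, a hypothetical node defined directly with the Object-system macros; the class, its field, and its type key are made up for illustration, while the macro names are the Object-system ones the PR standardizes on.

```
// Hypothetical example object defined via the consolidated Object machinery
// instead of the old NodePtr/NodeRef helpers.
#include <tvm/runtime/object.h>

class MyCounterNode : public tvm::runtime::Object {
 public:
  int64_t count{0};

  static constexpr const char* _type_key = "example.MyCounter";
  TVM_DECLARE_FINAL_OBJECT_INFO(MyCounterNode, tvm::runtime::Object);
};

class MyCounter : public tvm::runtime::ObjectRef {
 public:
  TVM_DEFINE_OBJECT_REF_METHODS(MyCounter, tvm::runtime::ObjectRef, MyCounterNode);
};
```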
* Refactor to use IsOp utility * retrigger CI
Zhi committed
- 30 Dec, 2019 3 commits
Animesh Jain committed
* [REFACTOR][RUNTIME] Move NDArray to the Object System. Previously NDArray had its own object reference-counting mechanism. This PR migrates NDArray to the unified object protocol. The calling convention of NDArray remains intact: NDArray still has its own type_code and its handle is still DLTensor compatible. To do so, this PR adds a minimal amount of runtime type detection in TVMArgValue and RetValue, only when the corresponding type is a base type (ObjectRef) that could also refer to an NDArray. This means that even if we return a base reference object (ObjectRef) that refers to an NDArray, the type_code will still be translated correctly as kNDArrayContainer. If we assign a non-base type (say, Expr) that we know at compile time is not compatible with NDArray, no runtime type detection is performed. This PR also adopts the object protocol for NDArray sub-classing and removes the legacy NDArray subclass protocol; examples in apps/extension are updated to reflect that. Making NDArray an Object brings all the benefits of the object system. For example, we can now use the Array container to store NDArrays (see the sketch after this entry). * Address review comments
Tianqi Chen committed
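A small sketch of the benefit noted above: once NDArray is an ObjectRef it can be held in the generic Array container. The header paths, the Empty() signature, and the DLContext spelling are assumptions for that era of the codebase.

```
// Sketch: storing NDArrays in the generic Array container (paths/signatures assumed).
#include <tvm/node/container.h>   // tvm::Array (path assumed)
#include <tvm/runtime/ndarray.h>

void CollectNDArrays() {
  using tvm::runtime::NDArray;
  // Allocate a small 2x2 float32 tensor on CPU (Empty() signature assumed).
  NDArray a = NDArray::Empty({2, 2}, DLDataType{kDLFloat, 32, 1}, DLContext{kDLCPU, 0});
  tvm::Array<NDArray> arrays;
  arrays.push_back(a);  // possible now that NDArray participates in the object protocol
}
```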
Zhi committed
- 29 Dec, 2019 2 commits
* cublaslt added * fix lint * address comments * address more comments * Trigger CI * Trigger CI
Leyuan Wang committed
* [GraphRuntime] Support parameter out in the graph runtime debug * Dummy commit to trigger build
Neo Chien committed
- 28 Dec, 2019 2 commits
* [FRONTEND][TF] conv3d_transpose: support 'SAME' padding with kernels larger than 1x1 * revised as per review comments * add more fallback workarounds to make all tests pass
optima2005 committed
zhuochen committed
- 27 Dec, 2019 4 commits
Wang Yucheng committed
Zhao Wu (Chinese Name: 吴钊) committed
* [TOPI] add 3D upsampling Op. * fix lint issues * change align_corners to coordinate_transformation_mode * fix resize3d half_pixel * make a simple function and clean up trilinear_resize3d_python * fix doc
optima2005 committed
Animesh Jain committed
- 26 Dec, 2019 2 commits
Animesh Jain committed
deepIgnorance committed