- 10 Jan, 2020 4 commits
Zhao Wu committed
Liangfu Chen committed
Josh Fromm committed
Xingyu Zhou committed
- 09 Jan, 2020 7 commits
* [REFACTOR][IR] tvm::Expr -> PrimExpr (Primitive Expr). As part of the unified IR, we will need to unify relay::Expr and the current tvm::Expr under the same base type. From the technical point of view, tvm::Expr is a "primitive" expression that only contains POD types and handles and does not do life-cycle management. This PR renames Expr -> PrimExpr to clarify that. We will send a subsequent PR to introduce the base expr class.
* Remove legacy VarExpr and ExprHash/Equal
Tianqi Chen committed
Trevor Morris committed
* [Relay][Frontend][Tflite] Add parser support for unary elemwise ops
* Add generic method to convert unary functions: abs, exp, ceil, floor, log, sin, cos, sqrt, rsqrt, neg
* Add relevant tests
* Delete excessive underscores as requested in PR review
* Change parameter name as suggested in PR review
Ina Dobreva committed
* [REFACTOR] relay::Module Def -> TypeDef. The term Def was not very clear about what the object of interest is (it could be a function def or a type def). This changes the term to TypeDef to be more explicit.
* Update include/tvm/relay/module.h
Co-authored-by: Wei Chen <ipondering.weic@gmail.com>
Tianqi Chen committed
* Added 1D pooling to Topi
* Added 1D pooling relay op and tests
* Added onnx parsing and tests for maxpool1d and averagepool1d
* Formatting
* Moved partial import
* Fixed typo
Josh Fromm committed
* [AutoTVM] Use vm compile in extracting task from relay
* Restructure vm compiler to reduce task extraction time
* Fix
* Update doc
* Lint
Haichen Shen committed
* [RUNTIME] Fix windows build after the latest dso module change. Switch to shared_ptr to get around a problem in latest MSVC.
* [CI] Add github action for win mac build.
Tianqi Chen committed
- 08 Jan, 2020 4 commits
Co-authored-by: Tianqi Chen <tqchen@users.noreply.github.com>
optima2005 committed
* [REFACTOR][IR] Variable -> VarNode
* [REFACTOR][IR] Add/Sub/Mul/Div -> AddNode/SubNode etc.
* [REFACTOR][IR] Min/Max/FloorDiv/FloorMod -> MinNode/MaxNode etc.
* [REFACTOR][IR] EQ/NE/LT/LE/GT/GE/Select -> EQNode/NENode etc.
* [REFACTOR][IR] Add Node suffix to Select/Call/Load/Ramp/Shuffle/Let
* [REFACTOR][IR] Add Node suffix to IntImm/UIntImm/FloatImm/StringImm
* [REFACTOR][IR] Add Node suffix to Any, AttrStmt, AssertStmt
* [REFACTOR][IR] Add Node suffix to Store/Provide/Allocate/Free
* [REFACTOR][IR] Add Node suffix to ProducerConsumer
* Fix lint
* Style updates, test fixes
Tianqi Chen committed
Zhi committed
Haichen Shen committed
- 07 Jan, 2020 4 commits
* [RUNTIME][DSO] Improve TVMBackendPackedCFunc to allow return values. Previously the signature of LibraryModule's PackedFunc did not support return values. This wasn't a limitation for our current use case, but could become one as we start to generate more interesting functions. This feature also starts to get interesting as we move towards a unified object protocol and start to pass objects around. This PR enhances the function signature to allow return values. We also created two macros, TVM_DLL_EXPORT_PACKED_FUNC and TVM_DLL_EXPORT_TYPED_FUNC, to allow manual creation of functions that can be loaded by a LibraryModule. Examples are added in apps/dso_plugin_module. The change to TVMBackendPackedCFunc is backward compatible, as previous functions will simply ignore the return value field.
* Address review comments
Tianqi Chen committed
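The calling convention described in the commit above can be illustrated with a minimal sketch: arguments arrive as type-erased slots, and the callee writes its result into a caller-provided return slot. The names here (`SimplePackedCFunc`, `add_one`) are illustrative stand-ins, not TVM's actual `TVMBackendPackedCFunc` API.

```cpp
#include <cassert>

// Simplified packed-function signature with an out-parameter for the
// return value, mimicking the idea of a return slot added to the C ABI.
typedef int (*SimplePackedCFunc)(void** args, int num_args,
                                 void* ret_value /* out */);

// A function exported under this convention: reads one int argument and
// writes x + 1 into the return slot. Returns 0 on success.
static int add_one(void** args, int num_args, void* ret_value) {
  if (num_args != 1) return -1;  // error code
  int x = *static_cast<int*>(args[0]);
  *static_cast<int*>(ret_value) = x + 1;
  return 0;
}

// Caller side: pack the argument, provide a return slot, invoke.
int call_add_one(int x) {
  int ret = 0;
  void* args[1] = {&x};
  SimplePackedCFunc f = add_one;
  int code = f(args, 1, &ret);
  assert(code == 0);
  return ret;
}
```

Because older functions simply never touch the return slot, adding it to the signature is backward compatible, as the commit notes.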
Animesh Jain committed
Tianqi Chen committed
Tianqi Chen committed
- 06 Jan, 2020 11 commits
Tianqi Chen committed
* Make Relay Keras frontend support networks created using the Tensorflow (1.13) Keras implementation (tf.Keras)
* Modify Keras frontend tests to run from a class rather than a function-based script
* Adjust Keras frontend tests to run with both 'Keras' and 'tf.Keras'
* Change "TestKeras.test_forward_merge" to validate instances by class name rather than instance type
Leandro Nunes committed
* Fix typos in the Docker image versions that we are currently running as part of CI
* Add a version comment in the same pattern for the ci_lint image
Leandro Nunes committed
* As a result of backwards-incompatible changes released in pillow 7.0, torchvision crashes if you just "pip install pillow", as we do in a few places.
* This patch pins pillow<7 in the Dockerfiles and in supporting material such as tutorials and documentation.
Leandro Nunes committed
* Improve commentary for operator fusion
* Attempt to clarify what the well-formed checker is doing
Ramana Radhakrishnan committed
* [REFACTOR][IR] Introduce SeqStmt to replace Block. ir::Block was used to represent a sequence of Stmts in the original low-level IR. The nested ir::Block structure is not really friendly for recursive visits, especially when the statements are unrolled. This PR introduces a SeqStmt that directly stores a sequence of statements in an Array container. The new SeqStmt will be used as a replacement for the original Block structure.
* [REFACTOR] Migrate use of Block to SeqStmt
* [REFACTOR] Remove Block
* Add more comments per yizhi's comment
Tianqi Chen committed
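The nested-vs-flat distinction motivating this commit can be sketched in a few lines. The tiny `Stmt`/`Block`/`SeqStmt` structs below are illustrative only, not TVM's real IR nodes: a right-nested `Block` chain forces recursion (or a loop) just to enumerate a statement sequence, while a flat `SeqStmt` holds the same statements side by side in one container.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

struct Stmt { std::string name; };

// Old style: a right-nested chain, e.g. Block(a, Block(b, Block(c, null))).
struct Block {
  std::shared_ptr<Stmt> first;
  std::shared_ptr<Block> rest;  // nullptr terminates the chain
};

// New style: all statements in one array-like container.
struct SeqStmt { std::vector<Stmt> seq; };

// Flatten a nested Block chain into a SeqStmt in a single pass.
SeqStmt Flatten(const Block* b) {
  SeqStmt out;
  for (; b != nullptr; b = b->rest.get()) {
    if (b->first) out.seq.push_back(*b->first);
  }
  return out;
}
```

A visitor over `SeqStmt` is a plain loop over `seq`, which is the "friendly for recursive visits" property the commit message is after.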
* [CONV] Asymmetric padding
* Fix lint error
* Update for legalize, rocm and cudnn
* Add more test cases
* Change more symmetric padding
* Change conv2d winograd tests according to original cases
* Remove 'alter_op_layout.h' header in bitserial.cc
optima2005 committed
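Asymmetric padding means the amount of padding before and after a spatial axis can differ (e.g. pad_top != pad_bottom). The output-extent arithmetic along one axis is then, as a hedged sketch (this helper is illustrative, not TVM's topi implementation):

```cpp
#include <cassert>

// Output extent along one spatial axis of a convolution with separate
// pad_before / pad_after amounts:
//   out = (in + pad_before + pad_after - kernel) / stride + 1
int ConvOutSize(int in, int kernel, int stride, int pad_before, int pad_after) {
  return (in + pad_before + pad_after - kernel) / stride + 1;
}
```

With pad_before == pad_after this reduces to the usual symmetric formula, which is why symmetric padding is just the special case.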
* Support empty tensor
* Fix schedule
* Refactor
* Minor fix
* Fix pylint
* Merge cpp and python is_empty_shape
Yao Wang committed
Previously we supported a limited case of function type deduction, and in many places we had to supply the type twice during set_body_typed (once in the template parameter, once in the lambda signature). This PR improves the deduction by enabling automatic function signature deduction.

```
TVM_REGISTER_GLOBAL("sub")
.set_body_typed([](int x, int y) -> int { return x - y; });
```

Unfortunately, because of a template conflict, we cannot support the original case where both the type signature and the lambda are supplied through set_body_typed. This PR refactors the existing registrations to the new style.
Tianqi Chen committed
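The deduction trick behind this commit can be shown generically: recover a lambda's argument and return types from its `operator()` so callers never repeat the signature. This is a self-contained illustration of the technique, not TVM's actual set_body_typed machinery; `FunctionInfo` and `register_typed` are made-up names.

```cpp
#include <cassert>
#include <functional>

// Primary template: forward to the type of the callable's operator().
template <typename T>
struct FunctionInfo : FunctionInfo<decltype(&T::operator())> {};

// Specialization that peels R and Args... off a const member call operator
// (which is what a non-mutable lambda's operator() is).
template <typename C, typename R, typename... Args>
struct FunctionInfo<R (C::*)(Args...) const> {
  using Signature = std::function<R(Args...)>;
};

// Wrap any callable into std::function without the caller spelling out
// <R(Args...)> explicitly -- the signature is deduced.
template <typename F>
typename FunctionInfo<F>::Signature register_typed(F f) {
  return typename FunctionInfo<F>::Signature(f);
}
```

The "template conflict" the commit mentions is the flip side of this: once the signature is deduced from the lambda, also accepting an explicit template signature for the same call creates ambiguous overloads.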
Tianqi Chen committed
Kevin Yuan committed
- 05 Jan, 2020 2 commits
Zhi committed
Rationale: the printer is common infrastructure shared across all nodes.
Tianqi Chen committed
- 04 Jan, 2020 4 commits
TVM_REGISTER_API is an alias of TVM_REGISTER_GLOBAL. In the spirit of simplifying redirections, this PR removes the original TVM_REGISTER_API macro and directly uses TVM_REGISTER_GLOBAL. This type of refactor will also help IDE navigation tools such as FFI navigator provide a better code reading experience. Also moves EnvFunc's definition to node.
Tianqi Chen committed
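The single-macro global registry pattern this commit converges on can be sketched as follows. One macro adds a named function to a process-wide table at static-initialization time; `REGISTER_GLOBAL`, `GlobalRegistry`, and `Registrar` here are illustrative names, not TVM's real implementation.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

using GlobalFunc = std::function<int(int)>;

// Meyers singleton holding the process-wide name -> function table.
std::map<std::string, GlobalFunc>& GlobalRegistry() {
  static std::map<std::string, GlobalFunc> registry;
  return registry;
}

// Constructing a Registrar inserts into the table; instances are created
// as file-scope statics so registration happens before main().
struct Registrar {
  Registrar(const std::string& name, GlobalFunc f) {
    GlobalRegistry()[name] = std::move(f);
  }
};

// Unique variable name per registration via __COUNTER__.
#define REGISTER_GLOBAL_CAT(a, b) a##b
#define REGISTER_GLOBAL_ID(ctr) REGISTER_GLOBAL_CAT(_reg_, ctr)
#define REGISTER_GLOBAL(name, fn) \
  static Registrar REGISTER_GLOBAL_ID(__COUNTER__)(name, fn)

REGISTER_GLOBAL("square", [](int x) { return x * x; });
```

Having exactly one such macro (rather than an alias layer) is what lets tools like FFI navigator find every registration site with a single textual pattern.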
This PR moves a few base types from relay to the ir sub-folder. These types will serve as a common type system across the stack. Notably, we want to be able to use the same FuncType for all function signatures. I tried to make a minimal move that brings the necessary dependencies for FuncType. We can discuss what additional things we want to move as a follow-up. Notably, because TensorType will have a dependency on the low-level Expr, we will need to break type.h into two files and introduce a tensor_type.h (or leave them in relay for now).
Tianqi Chen committed
Currently, we use a tvm::Var to represent a placeholder for shapes in generic types. This is not necessary for GlobalTypeVar (as we never parameterize by shape var), and is a bit twisted for TypeVar. As we move to a unified type system, we want to break the dependency of the base TypeVar (which is shared across the languages) on the expression. Note that it is fine for TensorType to depend on Expr. One alternative solution to embed the Var would be to introduce a TypeVarExpr, which can wrap a TypeVar as an Expr. However, this alternative won't be natural until we migrate the types to the global scope. Luckily, we have not yet started to depend on the shape parameterization heavily. This PR removes the tvm::Var from the type vars. We will follow up with another PR to migrate the types to a base location. After that, we should be able to use the more elegant approach via TypeVarExpr.
Tianqi Chen committed
* Add more shape funcs
* Fix test
* Enhance test_any_concat
* Fix pylint
* Minor fix test
* Minor refactor
* Add test any for elemwise
Yao Wang committed
- 03 Jan, 2020 4 commits
* [relay] Test tensor_array in vm
* Add tensor_array scatter test
Zhi committed
Zhi committed
Tianqi Chen committed
Liangfu Chen committed