1. 11 Jan, 2020 1 commit
    • [Relay][TOPI] Fix meaning of conv2d_transpose output_padding parameter (#4318) · dcf7fbf1
      * Add output_padding to generic
      
      * Add output_padding to the reference impl
      
      * Add output_padding to arm_cpu
      
      * Add output_padding to the test
      
      * Add output_padding for cuda
      
      * Add output_padding for x86
      
      * Make use of the new output_padding argument in Relay
      
      * Adjust conv2d_transpose Relay test
      
      * Fix lint errors
      
      * Fix the VTA declaration of conv2d_transpose
      
      * Support output padding in conv2d_transpose
      
      * Note that some output_padding values will break the IR pass
      
      * Fix new conv2d_transpose test
      
      * Update tophub
      
      * Fix conv1d output_padding too.
      
      * Fix the conv1d_transpose reference function.
      
      * Fix the cuda impl
      
      * Fix the TOPI test for conv1d
      
      * Update the versions in tophub.py
      
      Co-authored-by: Thierry Moreau <tmoreau@octoml.ai>
      abergeron committed
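      For illustration, a minimal sketch of the new parameter at the Relay level (shapes and values here are invented, not taken from the PR):
      
      ```
      import tvm
      from tvm import relay
      
      data = relay.var("data", shape=(1, 3, 8, 8))
      weight = relay.var("weight")  # leave the kernel shape to type inference
      # With strides=(2, 2), several input sizes map to the same output size;
      # output_padding disambiguates by adding rows/columns to one side of the output.
      out = relay.nn.conv2d_transpose(
          data, weight,
          channels=8, kernel_size=(3, 3),
          strides=(2, 2), padding=(1, 1),
          output_padding=(1, 1),
      )
      print(relay.Function([data, weight], out))
      ```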
  2. 10 Jan, 2020 3 commits
  3. 09 Jan, 2020 4 commits
  4. 08 Jan, 2020 1 commit
    • [REFACTOR][IR] Add Node suffix to low-level IR nodes (#4649) · f4c5f93b
      * [REFACTOR][IR] Variable -> VarNode
      
      * [REFACTOR][IR] Add/Sub/Mul/Div -> AddNode/SubNode etc.
      
      * [REFACTOR][IR] Min/Max/FloorDiv/FloorMod -> MinNode/MaxNode etc.
      
      * [REFACTOR][IR] EQ/NE/LT/LE/GT/GE/Select -> EQNode/NENode etc.
      
      * [REFACTOR][IR] Add Node suffix to Select/Call/Load/Ramp/Shuffle/Let
      
      * [REFACTOR][IR] Add node suffix to IntImm/UIntImm/FloatImm/StringImm
      
      * [REFACTOR][IR] Add Node suffix to Any, AttrStmt, AssertStmt
      
      * [REFACTOR][IR] Add Node suffix to Store/Provide/Allocate/Free
      
      * [REFACTOR][IR] Add Node suffix to ProducerConsumer
      
      * Fix lint
      
      * Style updates, test fixes
      Tianqi Chen committed
  5. 07 Jan, 2020 3 commits
  6. 06 Jan, 2020 4 commits
    • [FRONTEND][Keras] Add support for tf.Keras networks in Relay Keras frontend (#4630) · dc30880b
      * Make the Relay Keras frontend support networks created using the
        TensorFlow (1.13) Keras implementation (tf.Keras)
      * Modify Keras frontend tests to run from a class rather than a
        function-based script
      * Adjust Keras frontend tests to run with both 'Keras' and 'tf.Keras'
      * Change "TestKeras.test_forward_merge" to validate instances by
        class name rather than instance type
      Leandro Nunes committed
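      A minimal sketch of the newly supported path (the model, layer, and input name below are made up for illustration; `from_keras` is the existing entry point that this change extends to tf.Keras models):
      
      ```
      import tensorflow as tf
      from tvm import relay
      
      # A model built with tf.keras rather than the external Keras package.
      model = tf.keras.Sequential([
          tf.keras.layers.InputLayer(input_shape=(32,), name="data"),
          tf.keras.layers.Dense(8, activation="relu"),
      ])
      # The key of the shape dict must match the model's input layer name.
      mod, params = relay.frontend.from_keras(model, shape={"data": (1, 32)})
      ```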
    • [REFACTOR][IR] Introduce SeqStmt to replace ir::Block (#4627) · 3595cbe0
      * [REFACTOR][IR] Introduce SeqStmt to replace Block
      
      ir::Block was used to represent a sequence of Stmts in the original low-level IR.
      The nested ir::Block structure is not really friendly for recursive visits,
      especially when the statements are unrolled.
      
      This PR introduces a SeqStmt that directly stores a sequence of statements in an Array container.
      The new SeqStmt will be used as a replacement for the original Block structure.
      
      * [REFACTOR] Migrate use of Block to SeqStmt.
      
      * [REFACTOR] Remove Block
      
      * Add more comments per yizhi's comment
      Tianqi Chen committed
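      A minimal sketch of the flattened structure, shown with the present-day tvm.tir names for these nodes (that namespace is newer than this PR):
      
      ```
      from tvm import tir
      
      stmts = [tir.Evaluate(tir.IntImm("int32", i)) for i in range(3)]
      # SeqStmt keeps its children in a single Array, so a recursive visitor
      # sees a flat list instead of a right-nested chain of two-armed Blocks.
      seq = tir.SeqStmt(stmts)
      print(len(seq.seq))  # 3
      ```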
    • [CONV] Asymmetric padding (#4511) · 34b98eb7
      * [CONV] Asymmetric padding
      
      * Fix lint error
      
      * Update for legalize, rocm and cudnn
      
      * Add more test cases
      
      * Change more symmetric padding
      
      * Change conv2d winograd tests according to the original cases
      
      * Remove the 'alter_op_layout.h' header in bitserial.cc
      optima2005 committed
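      The user-visible effect, sketched with invented shapes: padding may now be a 4-tuple interpreted as (top, left, bottom, right):
      
      ```
      from tvm import relay
      
      data = relay.var("data", shape=(1, 16, 32, 32))
      weight = relay.var("weight")
      # Asymmetric: pad_top=0, pad_left=1, pad_bottom=2, pad_right=3.
      out = relay.nn.conv2d(data, weight, channels=32, kernel_size=(3, 3),
                            padding=(0, 1, 2, 3))
      ```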
    • [REFACTOR] Automatically deduce function type signature in Registry.set_body_typed (#4623) · d5d63a44
      Previously we supported only a limited case of function type deduction, and in many places
      we had to supply the type twice during set_body_typed (once in the template parameter, again in the lambda signature).
      
      This PR improves the deduction function by enabling automatic function signature deduction.
      
      ```
      TVM_REGISTER_GLOBAL("sub")
      .set_body_typed([](int x, int y) -> int { return x - y; });
      ```
      
      Unfortunately, because of a template conflict, we cannot support the original case
      where both the type signature and the lambda are supplied through set_body_typed.
      
      This PR refactors the existing registrations to the new style.
      Tianqi Chen committed
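      Once registered this way, the global is reachable from Python as well; a quick sketch (assuming the "sub" registration above is compiled in):
      
      ```
      import tvm
      
      sub = tvm.get_global_func("sub")  # the global registered from C++
      assert sub(5, 3) == 2
      ```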
  7. 05 Jan, 2020 1 commit
  8. 04 Jan, 2020 3 commits
    • [REFACTOR] Unified IR base types. (#4616) · 1ecd3ee2
      This PR moves a few base types from relay to the ir sub-folder.
      These types will serve as a common type system across the stack.
      
      Notably, we want to be able to use the same FuncType for all function signatures.
      I tried to make a minimal move that brings in the necessary dependencies for FuncType.
      We can discuss what additional things we want to move as a follow-up.
      
      In particular, because TensorType will have a dependency on the low-level Expr,
      we will need to break type.h into two files and introduce a
      tensor_type.h (or leave them in relay for now).
      Tianqi Chen committed
    • [REFACTOR][TYPE] Remove un-necessary var sub-field in GlobalTypeVar and TypeVar (#4615) · 24e6fcb6
      Currently, we use a tvm::Var to represent a placeholder for shapes in generic types.
      This is not necessary for GlobalTypeVar (as we never parameterize it by a shape var),
      and is a bit twisted for TypeVar.
      
      As we move to a unified type system, we want to break the dependency
      of the base TypeVar (which is shared across the languages) on the expression.
      Note that it is fine for TensorType to depend on Expr.
      
      One alternative solution for embedding the Var would be to introduce a TypeVarExpr,
      which can wrap a TypeVar as an Expr. However, this alternative won't feel
      natural until we migrate the types to the global scope.
      
      Luckily, we have not yet started to depend heavily on shape parameterization.
      
      This PR removes the tvm::Var from the typevars. We will follow up with another
      PR to migrate the types to a base location. After that, we should be able to
      use the more elegant approach via TypeVarExpr.
      Tianqi Chen committed
    • [Relay][Pass]Improve memory_allocation pass to support multiple i/o dynamic kernels (#4595) · 9c638f06
      * Add more shape funcs
      
      * Fix test
      
      * Enhance test_any_concat
      
      * Fix pylint
      
      * Minor fix test
      
      * Fix pylint
      
      * Minor refactor
      
      * Add test any for elemwise
      Yao Wang committed
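      For context, a sketch of the kind of multi-input dynamic-shape graph the pass now has to allocate for (ops and shapes are illustrative, not the tests from this PR):
      
      ```
      from tvm import relay
      
      # The number of rows is unknown until runtime; relay.Any() marks the axis.
      x = relay.var("x", shape=(relay.Any(), 2))
      y = relay.var("y", shape=(relay.Any(), 2))
      z = relay.concatenate([x, y], axis=0)
      func = relay.Function([x, y], z)
      ```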
  9. 03 Jan, 2020 7 commits
  10. 02 Jan, 2020 2 commits
    • [IR] Unify approach to Visitor/Mutator under Functor (#4606) · 983eba88
      IRMutator and IRVisitor were the main data structures for low-level IR visiting.
      As the project evolved, we started to introduce more powerful variants such as StmtFunctor and ExprFunctor.
      This PR brings in new classes that allow us to migrate the visitor and mutator to be sub-classes of these functors.
      
      List of changes:
      
      - Create separate classes for ExprMutator and StmtMutator, following the convention used in relay.
      - Introduce copy-on-write to StmtMutator, which can later benefit statement mutation
        if we use move semantics and keep a single copy of the stmt.
      - Move two generic visit/mutate utilities to use the new classes.
      
      We will send follow-up PRs to migrate the existing passes that use the legacy visitors
      to the new ones.
      Tianqi Chen committed
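      These functors underlie the visitor entry points later exposed to Python; a sketch using the present-day tvm.tir.stmt_functor helpers (also newer than this PR):
      
      ```
      from tvm import tir
      
      stmt = tir.SeqStmt([tir.Evaluate(tir.IntImm("int32", i)) for i in range(2)])
      seen = []
      # post_order_visit walks the tree with a functor-style callback.
      tir.stmt_functor.post_order_visit(stmt, lambda n: seen.append(type(n).__name__))
      print(seen)  # children first, the enclosing SeqStmt last
      ```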
  11. 01 Jan, 2020 1 commit
  12. 31 Dec, 2019 2 commits
    • Sort VM stats by time (#4601) · 6b11ffb9
      Zhi committed
    • [REFACTOR][OBJECT] Consolidate NodePtr/Ref/Hash/Equal to Object (#4603) · a8c36921
      * [REFACTOR][OBJECT] Consolidate NodePtr/Ref/Hash/Equal and macros to Object.
      
      Historically, we have had classes like NodePtr/Ref/HashEqual.
      After the unified object protocol, these names are just aliases of their Object counterparts.
      Moreover, there are helper macros scattered across the codebase for defining these objects.
      
      This PR consolidates the terminology into the corresponding Object-system names
      so that we have a clean and consistent API moving forward.
      
      * Update include/tvm/attrs.h
      
      Co-Authored-By: Wei Chen <ipondering.weic@gmail.com>
      
      * fix compilation
      
      Co-authored-by: Wei Chen <ipondering.weic@gmail.com>
      Tianqi Chen committed
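      The practical upshot, sketched with the present-day tvm.runtime names: every node reference now shares the single Object base:
      
      ```
      import tvm
      
      arr = tvm.runtime.convert([1, 2, 3])        # a container Array reference
      assert isinstance(arr, tvm.runtime.Object)  # one common base for all refs
      ```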
  13. 30 Dec, 2019 2 commits
    • [REFACTOR][RUNTIME] Update NDArray use the Unified Object System (#4581) · 55bd786f
      * [REFACTOR][RUNTIME] Move NDArray to Object System.
      
      Previously, NDArray had its own object reference counting mechanism.
      This PR migrates NDArray to the unified object protocol.
      
      The calling convention of NDArray remained intact.
      That means NDArray still has its own type_code and
      its handle is still DLTensor compatible.
      
      In order to do so, this PR adds a minimal amount of runtime type
      detection in TVMArgValue and RetValue, performed only when the corresponding
      type is a base type (ObjectRef) that could also refer to an NDArray.
      
      This means that even if we return a base reference object (ObjectRef)
      that refers to an NDArray, the type_code will still be translated
      correctly as kNDArrayContainer.
      If we assign a non-base type (say, Expr) that we know is not compatible
      with NDArray at compile time, no runtime type detection will be performed.
      
      This PR also adopts the object protocol for NDArray sub-classing and
      removes the legacy NDArray subclass protocol.
      The examples in apps/extension have been updated to reflect this.
      
      Making NDArray an Object brings all the benefits of the object system.
      For example, we can now use the Array container to store NDArrays.
      
      * Address review comments
      Tianqi Chen committed
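      A sketch of the benefit mentioned above, using present-day APIs: NDArrays can now sit inside the Array object container:
      
      ```
      import numpy as np
      import tvm
      
      a = tvm.nd.array(np.zeros((2, 2), dtype="float32"))
      b = tvm.nd.array(np.ones((2, 2), dtype="float32"))
      arr = tvm.runtime.convert([a, b])  # an object Array holding NDArrays
      print(arr[1].numpy())
      ```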
  14. 29 Dec, 2019 1 commit
  15. 28 Dec, 2019 1 commit
  16. 27 Dec, 2019 2 commits
  17. 26 Dec, 2019 2 commits