1. 15 Jan, 2020 4 commits
  2. 14 Jan, 2020 6 commits
  3. 11 Jan, 2020 7 commits
    • [Relay/Topi][Op] Conv1D (#4639) · 35099e6a
      * added conv1d operators to topi.
      
      * Started to add python testing.
      
      * Added python conv1d implementation for testing.
      
      * Wrote test but need to add cuda schedule :(
      
      * Cuda schedules working for both conv1d layouts.
      
      * All topi tests passing.
      
      * Formatting topi.
      
      * Removed pad_method option as it's probably overkill.
      
      * Added relay op definition of conv1d.
      
      * End2end conv1d working with onnx.
      
      * Lint fixes.
      
      * Formatting fixes.
      
      * Rebase fix.
      
      * Switched to array based attributes for consistency across convs.
      
      * Improved onnx parsing and testing for convolutions.
      
      * lint fix
      
      * Tiny tweak.
      
      * Bug fix
      
      * Rebase fix.
      
      * Add group ignore to onnx conv1d frontend.
      
      * Unified MakeConv and fixed documentation.
      
      * improved autopadding
      
      * Addressed feedback and simplified onnx frontend.
      
      * Format fix.
      
      * Basic X86 NCW schedule working.
      
      * Added nwc schedule.
      
      * fixed name
      
      * Added more tests and basic x86 schedules.
      
      * Format fix.
      
      * Added non power of two shape tests.
      Josh Fromm committed
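      A minimal usage sketch of the new operator (relay API names as added by this PR; tvm.IRModule assumes a build after the module unification, earlier builds use relay.Module):

      ```python
      # Declare a 1D convolution over an NCW tensor and type-check it.
      # Shapes and attribute values here are illustrative.
      import tvm
      from tvm import relay

      data = relay.var("data", shape=(1, 3, 32), dtype="float32")     # NCW
      weight = relay.var("weight", shape=(8, 3, 3), dtype="float32")  # OIW
      out = relay.nn.conv1d(data, weight, strides=1, padding=1,
                            channels=8, kernel_size=3)
      func = relay.Function([data, weight], out)
      mod = relay.transform.InferType()(tvm.IRModule.from_expr(func))
      print(mod["main"])
      ```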
    • [REFACTOR][IR] Unified IR Primitive Op and Registry (#4687) · d8f06020
      This PR migrates relay's Op into the ir folder.
      Op and its registry provide a useful mechanism to
      store any attribute metadata of an operator, including
      function signatures, lowering rules, side effects, etc.
      
      These features are not only useful for Relay, but also needed in the low-level IR.
      At the moment, intrinsic functions in the low-level IR are simply
      represented by a string. This means we cannot type-check the low-level IR
      when a type does not meet the constraints, nor can we obtain further
      information such as the side effects and read/write relations of these
      intrinsics with respect to their arguments.
      
      Op will be used as the way to handle primitive ops (in DL terminology),
      i.e., builtin intrinsics (in compiler terminology).
      We will perform follow-up refactors to make low-level CallNode
      take Op as the function argument.
      Tianqi Chen committed
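      A short sketch of what the registry provides, as visible from Python (the attribute key is one example of registered metadata):

      ```python
      # Look up a registered Op and read its stored metadata.
      from tvm import relay

      op = relay.op.get("nn.conv1d")     # Op object from the registry
      print(op.name, op.num_inputs)      # signature-level metadata
      print(op.get_attr("TOpPattern"))   # a registered per-op attribute
      ```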
    • [REFACTOR][IR] Allow Module to store BaseFunc. (#4678) · 3d52a99c
      Under the unified IR, we will allow a single IRModule
      to store different function variants, such as relay::Function,
      ExternFunc, and low-level function.
      
      This PR changes relay::Function -> BaseFunc in the module file
      to support multiple function variants.
      Tianqi Chen committed
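      A sketch of the resulting flexibility (assumes a build where tvm.IRModule has replaced relay.Module): a relay::Function is just one BaseFunc variant the module can hold.

      ```python
      import tvm
      from tvm import relay

      x = relay.var("x", shape=(4,), dtype="float32")
      fn = relay.Function([x], x + x)    # a relay::Function, one BaseFunc variant
      mod = tvm.IRModule.from_expr(fn)   # stored in the module as a BaseFunc
      print(mod["main"])
      ```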
    • [TOPI][RELAY][OP] add op crop_and_resize (#4417) · 56416ed0
      * [TOPI][RELAY][OP] add op crop_and_resize
      
      * fix pylint
      
      * incorporate comments
      
      * fix ci
      Yong Wu committed
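      A hedged usage sketch (argument order per the relay.image namespace; the per-box corner convention is an assumption to verify against the op docs):

      ```python
      from tvm import relay

      images = relay.var("images", shape=(2, 128, 128, 3), dtype="float32")  # NHWC
      boxes = relay.var("boxes", shape=(4, 4), dtype="float32")   # per-box corners
      box_ix = relay.var("box_ix", shape=(4,), dtype="int32")     # image index per box
      crops = relay.image.crop_and_resize(images, boxes, box_ix, crop_size=(32, 32))
      ```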
    • [REFACTOR][IR] Initialize Unified IR Expr Data Structure (#4673) · 12e51e6c
      This PR moves a few base types from relay and low-level Expr into the ir sub-folder.
      These classes will serve as a common type system across the stack.
      
      Rationale:
      
      - PrimExpr for low-level expressions
      - RelayExpr for advanced features, including Function definition.
      - Introduce BaseFunc to host all functions, including future PrimFunc (low-level expr functions, subject to discussion).
      
      This is the minimum change we can make to unify the classes into a common hierarchy.
      The main data structures that are variant-specific will still be kept in the sub-namespaces.
      We only include the classes that are needed to allow a common Module class.
      - BaseFunc
      - GlobalVar
      - Type definition part of ADT
      
      We only need the BaseFunc and its checked_type to decide the calling convention
      across the function variants.
      Tianqi Chen committed
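      An illustrative sketch of the hierarchy from Python (class paths per a post-refactor build; treat the exact namespaces as assumptions):

      ```python
      import tvm
      from tvm import relay

      prim = tvm.tir.IntImm("int32", 1)                   # a low-level PrimExpr
      rexp = relay.var("x", shape=(1,), dtype="float32")  # a RelayExpr
      print(isinstance(prim, tvm.ir.PrimExpr))            # True
      print(isinstance(rexp, tvm.ir.RelayExpr))           # True
      ```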
    • [REFACTOR] Replace TensorObj and TensorValue with NDArray (#4643) · 86092de0
      * replace TensorObj and TensorValue with NDArray
      
      * NodeBase to Object in Python
      
      * rebase
      Zhi committed
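      A sketch of the single container that remains after this change; NDArray round-trips to NumPy by copy:

      ```python
      import numpy as np
      import tvm

      arr = tvm.nd.array(np.arange(6, dtype="float32").reshape(2, 3))
      print(arr.shape, arr.dtype)   # (2, 3) float32
      back = arr.asnumpy()          # renamed .numpy() in later TVM releases
      ```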
    • [Bugfix] fskip of EliminateCommonSubexpr cannot always return false (#4620) · 4073125d
      * 'fskip' no longer always returns false

      fskip unconditionally returned false at the end of the PackedFunc, which discarded the "return true" set earlier in the 'cast' case
      
      * Update build_module.cc
      yuliujq committed
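      A Python analogue of the control-flow bug (the real callback is a C++ PackedFunc in build_module.cc; the expr accessors here are made up for illustration):

      ```python
      def fskip_buggy(expr):
          result = False
          if expr.op_name == "cast" and expr.dtype == "int32":
              result = True   # meant to skip CSE for int32 casts
          result = False      # bug: unconditionally discards the True above
          return result

      def fskip_fixed(expr):
          if expr.op_name == "cast" and expr.dtype == "int32":
              return True     # fix: return as soon as the case matches
          return False
      ```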
  4. 10 Jan, 2020 1 commit
  5. 09 Jan, 2020 6 commits
  6. 08 Jan, 2020 1 commit
    • [REFACTOR][IR] Add Node suffix to low-level IR nodes (#4649) · f4c5f93b
      * [REFACTOR][IR] Variable -> VarNode
      
      * [REFACTOR][IR] Add/Sub/Mul/Div -> AddNode/SubNode etc.
      
      * [REFACTOR][IR] Min/Max/FloorDiv/FloorMod -> MinNode/MaxNode etc.
      
      * [REFACTOR][IR] EQ/NE/LT/LE/GT/GE/Select -> EQNode/NENode etc.
      
      * [REFACTOR][IR] Add Node suffix to Select/Call/Load/Ramp/Shuffle/Let
      
      * [REFACTOR][IR] Add node suffix to IntImm/UIntImm/FloatImm/StringImm
      
      * [REFACTOR][IR] Add Node suffix to Any, AttrStmt, AssertStmt
      
      * [REFACTOR][IR] Add Node suffix to Store/Provide/Allocate/Free
      
      * [REFACTOR][IR] Add Node suffix to ProducerConsumer
      
      * Fix lint
      
      * style updates, test fixes
      Tianqi Chen committed
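      A sketch of the convention after the rename: the C++ class carries the Node suffix while the Python constructors keep the short names (namespaces per a post-refactor build; treat as assumptions):

      ```python
      import tvm

      one = tvm.tir.IntImm("int32", 1)
      add = tvm.tir.Add(one, one)
      print(type(add).__name__)   # "Add" in Python, backed by AddNode in C++
      ```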
  7. 07 Jan, 2020 3 commits
  8. 06 Jan, 2020 6 commits
  9. 05 Jan, 2020 1 commit
  10. 04 Jan, 2020 3 commits
    • [REFACTOR] TVM_REGISTER_API -> TVM_REGISTER_GLOBAL (#4621) · 81523604
      TVM_REGISTER_API is an alias of TVM_REGISTER_GLOBAL.
      In the spirit of simplifying redirections, this PR removes
      the original TVM_REGISTER_API macro and directly uses TVM_REGISTER_GLOBAL.
      
      This type of refactor will also help IDE navigation tools
      such as FFI navigator provide a better code-reading experience.
      
      Move EnvFunc's definition to node.
      Tianqi Chen committed
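      A sketch of the Python mirror of the same global registry (the function name here is made up):

      ```python
      import tvm

      @tvm.register_func("demo.add_one")   # hypothetical global name
      def add_one(x):
          return x + 1

      f = tvm.get_global_func("demo.add_one")
      print(f(41))   # 42
      ```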
    • [REFACTOR] Unified IR base types. (#4616) · 1ecd3ee2
      This PR moves a few base types from relay to the ir sub-folder.
      These types will serve as a common type system across the stack.
      
      Notably, we want to be able to use the same FuncType for all function signatures.
      I tried to make a minimum move to bring the necessary dependencies for a FuncType.
      We can discuss what additional things we want to move as a follow-up.
      
      Notably, because TensorType will have a dependency on the low-level Expr,
      we will need to split type.h into two files and introduce a
      tensor_type.h (or leave them in relay for now).
      Tianqi Chen committed
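      A sketch of the shared signature type (Python constructor per the relay namespace; the optional defaults are an assumption):

      ```python
      from tvm import relay

      t = relay.TensorType((1, 3), "float32")
      ftype = relay.FuncType([t, t], t)   # (t, t) -> t, usable across IR variants
      print(ftype)
      ```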
    • [REFACTOR][TYPE] Remove un-necessary var sub-field in GlobalTypeVar and TypeVar (#4615) · 24e6fcb6
      Currently, we use a tvm::Var to represent a placeholder for shapes in generic types.
      This is not necessary for GlobalTypeVar (as we never parameterize by shape var),
      and is a bit twisted for TypeVar.
      
      As we move to a unified type system, we want to break the dependency
      of the base TypeVar (which is shared across the languages) on the expression.
      Note that it is fine for TensorType to depend on Expr.
      
      One alternative solution for embedding the Var would be to introduce a TypeVarExpr,
      which can wrap a TypeVar as an Expr. However, this alternative won't be
      natural until we migrate the types to the global scope.
      
      Luckily, we have not yet started to depend heavily on shape parameterization.
      
      This PR removes the tvm::Var from the typevars. We will follow up with another
      PR to migrate the types to a base location. After that, we should be able to
      use the more elegant approach via TypeVarExpr.
      Tianqi Chen committed
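      A sketch of the slimmed-down type vars (constructor defaults are an assumption):

      ```python
      from tvm import relay

      tv = relay.TypeVar("t")             # a name hint plus a kind; no embedded tvm::Var
      gtv = relay.GlobalTypeVar("MyADT")  # never parameterized by a shape var
      print(tv, gtv)
      ```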
  11. 03 Jan, 2020 2 commits