1. 04 Feb, 2020 1 commit
  2. 03 Feb, 2020 1 commit
    • [TIR] Create a StringImm reference type (#4806) · c39ab93d
      This is motivated by the need to send an
      array of strings across the Python/C++
      boundary. Arrays only support ObjectRef types
      and so can't carry StringImmNodes. This commit
      creates a string reference type, StringImm,
      which can be used with tvm::Array (see the
      sketch below).
      
      Change-Id: I598a44536c156b97dbfe3e9518e0a1f705da850c
      mbarrett97 committed
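      A minimal sketch of the reference-type pattern this commit follows
      (simplified and approximate; the real definitions live in the tir
      headers):

        // The node carries the data and derives from Object:
        class StringImmNode : public PrimExprNode {
         public:
          std::string value;  // the string payload
          static constexpr const char* _type_key = "StringImm";
        };

        // The reference wrapper derives from ObjectRef (via PrimExpr),
        // which is what tvm::Array can store:
        class StringImm : public PrimExpr {
         public:
          explicit StringImm(std::string value);  // allocates a StringImmNode
          TVM_DEFINE_OBJECT_REF_METHODS(StringImm, PrimExpr, StringImmNode);
        };

        // An array of strings can now cross the Python/C++ boundary:
        Array<StringImm> names = {StringImm("input"), StringImm("output")};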
  3. 28 Jan, 2020 1 commit
  4. 22 Jan, 2020 1 commit
  5. 21 Jan, 2020 2 commits
    • [REFACTOR] top->te (#4759) · 55d81925
      Bring up the te namespace -- the tensor expression language DSL.
      Tianqi Chen committed
    • [REFACTOR] Establish printer in the source folder (#4752) · e4d817d4
      * [REFACTOR] Establish printer in the source folder.
      
      As we move towards the unified IR, we will eventually want to build a
      unified printer for both Relay and TIR.
      
      This PR isolates the printer component into a separate folder in src as
      a first step.
      
      - Refactored the Doc DSL using Object and cleaned up its APIs.
      - Isolated the metadata into a header.
      - Moved the printer into relay_text_printer and added comments about
        further TODOs.
      
      * Rename NodePrinter -> ReprPrinter to distinguish it from other printers
      Tianqi Chen committed
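      For readers unfamiliar with the approach, a minimal illustration of a
      Doc-style pretty-printing DSL in the spirit of the refactored printer
      (illustrative only, not the exact TVM API):

        #include <string>
        #include <vector>

        // A Doc accumulates fragments; concatenation is cheap and the
        // final string is rendered once at the end.
        class Doc {
         public:
          static Doc Text(std::string s) {
            Doc d;
            d.parts_.push_back(std::move(s));
            return d;
          }
          Doc& operator<<(const Doc& other) {
            parts_.insert(parts_.end(), other.parts_.begin(), other.parts_.end());
            return *this;
          }
          std::string str() const {
            std::string out;
            for (const std::string& p : parts_) out += p;
            return out;
          }

         private:
          std::vector<std::string> parts_;
        };

        // Usage: Doc doc; doc << Doc::Text("fn ") << Doc::Text(name);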
  6. 20 Jan, 2020 1 commit
  7. 19 Jan, 2020 2 commits
    • [REFACTOR][CODEGEN] codegen->target, build_module->driver (#4742) · 33b0831c
      This PR moves the codegen-related code into the target folder,
      as it is target-specific functionality.
      
      We also adopt the term "compiler driver", as used in common compiler
      infrastructure such as Rust, GHC, and Clang.
      As a result, build_module is moved into the driver folder.
      Tianqi Chen committed
    • [REFACTOR] Establish tir (#4740) · cf59b206
      TIR is the new namespace for the low-level IR
      used for tensor-level optimizations and loop transformations.
      
      This PR establishes the namespace and files.
      
      - lowered_func.h, buffer.h, data_layout.h -> tir/buffer.h, tir/data_layout.h, tir/lowered_func.h
      - ir.h -> tir/expr.h, tir/stmt.h
      - ir_functor_ext.h -> tir/expr_functor.h, tir/stmt_functor.h
      Tianqi Chen committed
  8. 18 Jan, 2020 2 commits
    • [runtime][refactor] Unify vm and interpreter objects (#4693) · acbf8851
      * unify vm and interpreter objects
      
      * move closure back to vm
      
      * adt/closure back to vm.adt/vm.closure
      
      * closure base
      Zhi committed
    • [CodeGen][CUDA] Improve CUDA vectorizer (#4736) · 2630ffcb
      - Fixes issues to enable the fp16 vectorizer. Correct packing and
        unpacking CUDA code is now emitted. Enabled more unit tests.
      
      - Do not emit code that reads the first lane from an undefined
        variable, such as:
      
        int _3;
        _3 = _3 & ~(0x000000ff << 0) | ...
      
        and instead emit:
      
        _3 = (((0x000000ff & (_1 >> 0))+(0x000000ff & (_2 >> 0))) << 0);
      
        Note that nvcc 10.2 is forgiving and emits the same code for both
        cases; a warning appears in test_codegen_cuda.py.
      
      Signed-off-by: Wei Pan <weip@nvidia.com>
      wpan11nv committed
  9. 17 Jan, 2020 2 commits
  10. 16 Jan, 2020 6 commits
  11. 15 Jan, 2020 3 commits
  12. 14 Jan, 2020 5 commits
  13. 11 Jan, 2020 6 commits
    • [Relay/Topi][Op] Conv1D (#4639) · 35099e6a
      * added conv1d operators to topi.
      
      * Started to add python testing.
      
      * Added python conv1d implementation for testing.
      
      * Wrote test but need to add cuda schedule :(
      
      * Cuda schedules working for both conv1d layouts.
      
      * All topi tests passing.
      
      * Formatting topi.
      
      * Removed pad_method option as it's probably overkill.
      
      * Added relay op definition of conv1d.
      
      * End2end conv1d working with onnx.
      
      * Lint fixes.
      
      * Formatting fixes.
      
      * Rebase fix.
      
      * Switched to array based attributes for consistency across convs.
      
      * Improved onnx parsing and testing for convolutions.
      
      * lint fix
      
      * Tiny tweak.
      
      * Bug fix
      
      * Rebase fix.
      
      * Add group ignore to onnx conv1d frontend.
      
      * Unified MakeConv and fixed documentation.
      
      * improved autopadding
      
      * Addressed feedback and simplified onnx frontend.
      
      * Format fix.
      
      * Basic X86 NCW schedule working.
      
      * Added nwc schedule.
      
      * fixed name
      
      * Added more tests and basic x86 schedules.
      
      * Format fix.
      
      * Added non power of two shape tests.
      Josh Fromm committed
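      As a reference for what the new op computes, a plain C++ sketch of
      conv1d in NCW layout (reference semantics only, with no padding; this
      is not the TOPI implementation and the names are illustrative):

        #include <vector>

        using Tensor3 = std::vector<std::vector<std::vector<float>>>;

        // data:   [batch][in_channels][width]
        // weight: [out_channels][in_channels][kernel]
        // out:    [batch][out_channels][out_width], where
        //   out_width = (width - (kernel - 1) * dilation - 1) / stride + 1
        void Conv1DNCW(const Tensor3& data, const Tensor3& weight,
                       Tensor3* out, int stride, int dilation) {
          for (size_t n = 0; n < out->size(); ++n)
            for (size_t f = 0; f < (*out)[n].size(); ++f)
              for (size_t x = 0; x < (*out)[n][f].size(); ++x) {
                float acc = 0.0f;
                for (size_t c = 0; c < data[n].size(); ++c)
                  for (size_t k = 0; k < weight[f][c].size(); ++k)
                    acc += data[n][c][x * stride + k * dilation] *
                           weight[f][c][k];
                (*out)[n][f][x] = acc;
              }
        }

      The NWC layout differs only in dimension order ([batch][width][channels]).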
    • [REFACTOR][IR] Unified IR Primitive Op and Registry (#4687) · d8f06020
      This PR migrates relay's Op into the ir folder.
      Op and its registry provide a useful mechanism to
      store arbitrary attribute metadata of an operator, including
      function signatures, lowering rules, side effects, etc.
      
      These features are not only useful for Relay, but also needed in the low-level IR.
      At the moment, intrinsic functions in the low-level IR are simply
      represented by a string. This means we cannot type-check the low-level IR
      when a type does not meet a constraint, nor can we obtain further
      information such as the side effects and read/write relations of these
      intrinsics with respect to their arguments.
      
      Op will be used as the way to handle primitive ops (in DL terminology),
      i.e. builtin intrinsics (in compiler terminology).
      We will perform follow-up refactors to make the low-level CallNode
      take Op as the function argument.
      Tianqi Chen committed
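      The registry pattern this builds on, roughly as it appears in relay's
      existing op registrations (a simplified excerpt; treat the attribute
      names as approximate):

        RELAY_REGISTER_OP("add")
            .set_num_inputs(2)
            .add_argument("lhs", "Tensor", "The left hand side tensor.")
            .add_argument("rhs", "Tensor", "The right hand side tensor.")
            .set_support_level(1);

      With Op in the unified ir folder, low-level intrinsics can be
      registered the same way instead of being bare strings.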
    • [REFACTOR][IR] Allow Module to store BaseFunc. (#4678) · 3d52a99c
      Under the unified IR, we will allow a single IRModule
      to store different function variants, such as relay::Function,
      ExternFunc, and low-level functions.
      
      This PR changes relay::Function -> BaseFunc in the module file
      to support multiple function variants.
      Tianqi Chen committed
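      Schematically, what the change allows (a sketch; names approximate):

        // Both variants derive from BaseFunc, so one module can hold them:
        IRModule mod;
        mod->Add(GlobalVar("dense"), relay_fn);    // relay::Function : BaseFunc
        mod->Add(GlobalVar("plugin"), extern_fn);  // ExternFunc : BaseFunc
        BaseFunc f = mod->Lookup("dense");         // a common handle type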
    • [TOPI][RELAY][OP] add op crop_and_resize (#4417) · 56416ed0
      * [TOPI][RELAY][OP] add op crop_and_resize
      
      * fix pylint
      
      * incorporate comments
      
      * fix ci
      Yong Wu committed
    • [REFACTOR][IR] Initialize Unified IR Expr Data Structure (#4673) · 12e51e6c
      This PR moves a few base types from relay and low-level Expr into the ir sub-folder.
      These classes will serve as a common type system across the stack.
      
      Rationale:
      
      - PrimExpr for low-level expressions
      - RelayExpr for advanced features, including Function definition.
      - Introduce BaseFunc to host all functions, including the future PrimFunc (low-level expr functions, subject to discussion).
      
      This is the minimum change we can make to unify the classes into a common hierarchy.
      The main data structures that are variant-specific will still be kept in the sub-namespaces.
      We only include the classes that are needed to allow a common Module class.
      - BaseFunc
      - GlobalVar
      - Type definition part of ADT
      
      We will only need BaseFunc and its checked_type to decide the calling
      convention across the function variants.
      Tianqi Chen committed
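      The resulting hierarchy, schematically (a sketch of the node classes;
      see the ir sub-folder for the real definitions):

        // Common base for all expressions across the stack:
        class BaseExprNode : public Object {};

        // Low-level expressions with POD types (used by TIR):
        class PrimExprNode : public BaseExprNode {};

        // Expressions with advanced features such as functions (used by Relay):
        class RelayExprNode : public BaseExprNode {};

        // Common base for all function variants, so a single Module can
        // store them and decide calling conventions via checked_type:
        class BaseFuncNode : public RelayExprNode {};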
    • [REFACTOR] Replace TensorObj and TensorValue with NDArray (#4643) · 86092de0
      * replace TensorObj and TensorValue with NDArray
      
      * NodeBase to Object in Python
      
      * rebase
      Zhi committed
  14. 10 Jan, 2020 1 commit
  15. 09 Jan, 2020 4 commits
  16. 08 Jan, 2020 1 commit
    • [REFACTOR][IR] Add Node suffix to low-level IR nodes (#4649) · f4c5f93b
      * [REFACTOR][IR] Variable -> VarNode
      
      * [REFACTOR][IR] Add/Sub/Mul/Div -> AddNode/SubNode etc.
      
      * [REFACTOR][IR] Min/Max/FloorDiv/FloorMod -> MinNode/MaxNode etc.
      
      * [REFACTOR][IR] EQ/NE/LT/LE/GT/GE/Select -> EQNode/NENode etc.
      
      * [REFACTOR][IR] Add Node suffix to Select/Call/Load/Ramp/Shuffle/Let
      
      * [REFACTOR][IR] Add Node suffix to IntImm/UIntImm/FloatImm/StringImm
      
      * [REFACTOR][IR] Add Node suffix to Any, AttrStmt, AssertStmt
      
      * [REFACTOR][IR] Add Node suffix to Store/Provide/Allocate/Free
      
      * [REFACTOR][IR] Add Node suffix to ProducerConsumer
      
      * Fix lint
      
      * style updates, test fixes
      Tianqi Chen committed
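      The convention after this refactor, schematically: the *Node class owns
      the data, and the suffix-free name is the ObjectRef handle (a sketch;
      the macro is TVM's standard object-reference helper):

        // The node holds the payload and is managed by reference counting:
        class AddNode : public PrimExprNode {
         public:
          PrimExpr a;  // left operand
          PrimExpr b;  // right operand
        };

        // The suffix-free name is a cheap reference to the node:
        class Add : public PrimExpr {
         public:
          TVM_DEFINE_OBJECT_REF_METHODS(Add, PrimExpr, AddNode);
        };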
  17. 07 Jan, 2020 1 commit
    • [RUNTIME][DSO] Improve TVMBackendPackedCFunc to allow return val (#4637) · 77c47748
      * [RUNTIME][DSO] Improve TVMBackendPackedCFunc to allow return value.
      
      Previously, the signature of LibraryModule's PackedFunc did not support
      return values. This wasn't a limitation for our current use case, but
      could become one as we start to generate more interesting functions.
      
      This feature also starts to become interesting as we move towards a
      unified object protocol and start to pass objects around.
      This PR enhances the function signature to allow return values.
      
      We also created two macros, TVM_DLL_EXPORT_PACKED_FUNC and
      TVM_DLL_EXPORT_TYPED_FUNC, to allow manual creation of functions that
      can be loaded by a LibraryModule.
      
      Examples are added in apps/dso_plugin_module.
      The change to TVMBackendPackedCFunc is backward compatible,
      as previous functions simply ignore the return value field.
      
      * address review comments
      Tianqi Chen committed
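      In the spirit of the examples under apps/dso_plugin_module, exporting a
      typed function that a LibraryModule can load looks roughly like this
      (a sketch; the symbol name and function are illustrative):

        #include <tvm/runtime/packed_func.h>

        int AddOne(int x) { return x + 1; }

        // Exposes AddOne under the symbol "add_one"; callers loading this
        // library as a module can invoke it and receive the return value.
        TVM_DLL_EXPORT_TYPED_FUNC(add_one, AddOne);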