1. 23 Apr, 2020 3 commits
  2. 22 Apr, 2020 3 commits
  3. 21 Apr, 2020 6 commits
  4. 20 Apr, 2020 2 commits
  5. 19 Apr, 2020 3 commits
    • [TIR][REFACTOR] Remove te::Tensor dependencies from TIR passes. (#5372) · c3511c5e
      * [TIR][REFACTOR] Remove te::Tensor dependencies from TIR passes.
      
      te::Tensor is a useful object for tensor expressions, but it introduces an
      unnecessary reverse dependency in TIR nodes such as Provide and Realize.
      
      This PR is a first step toward removing this dependency. We will use Buffer in all
      the places where te::Tensor was used. The rough correspondences are:
      
      - Provide -> BufferStore
      - Realize -> BufferRealize
      - HalideCall -> BufferLoad
      
      After this change, we can now use an IRModule of PrimFuncs cleanly to represent TIR
      at any point of the optimizations. Buffer will serve as the abstraction in the TIR
      data model for representing intermediate storage and its constraints.
      
      We still keep Realize/HalideCall and Provide as TIR nodes for now to keep the change minimal.
      Right after ScheduleOps, we call SchedulePostProcToPrimFunc to canonicalize the temporary IR
      generated by TE (which contains these nodes) into TIR.
      
      The TIR optimizations have now mostly been migrated to the pass manager.
      Follow-up PRs are needed to migrate the remaining few passes.
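      The Provide -> BufferStore correspondence above can be illustrated with a toy
      rewrite in plain Python. These classes and the canonicalize helper are
      hypothetical stand-ins for illustration only, not TVM's actual C++ node
      classes or its SchedulePostProcToPrimFunc pass:

      ```python
      # Toy IR nodes sketching the te::Tensor -> Buffer migration described
      # above. Hypothetical stand-ins, not TVM's actual classes.
      from dataclasses import dataclass

      @dataclass
      class Provide:          # old-style node: writes keyed by a tensor
          tensor: str
          indices: tuple
          value: object

      @dataclass
      class BufferStore:      # new-style node: writes through an explicit Buffer
          buffer: str
          indices: tuple
          value: object

      def canonicalize(node):
          """Mimic the post-processing step: lower TE-era nodes to TIR."""
          if isinstance(node, Provide):
              return BufferStore(node.tensor, node.indices, node.value)
          return node  # already-canonical nodes pass through unchanged

      store = canonicalize(Provide("A", (0, 1), 42))
      ```
      
      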
      
      * Fix dev tutorial
      Tianqi Chen committed
    • [TIR] Fix lower_warp_memory when there are >1 warp buffers (#5368) · a2d6fe65
      * fix recursion in lower_warp_memory
      
      * post-order mutation
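      The post-order mutation mentioned above is a general technique: rewrite a
      node's children first, then the node itself, so a rewrite rule never
      re-triggers on its own output and recurses endlessly. A generic Python
      sketch of the idea (not the actual lower_warp_memory code; the node shape
      and rule are invented for illustration):

      ```python
      # Post-order IR mutation: children are rewritten before the parent,
      # so a rule applied at a node cannot re-fire on the rewritten result.
      def mutate(node, rule):
          if isinstance(node, list):                  # treat lists as n-ary nodes
              node = [mutate(c, rule) for c in node]  # 1) recurse into children
          return rule(node)                           # 2) then rewrite this node

      # Example rule: collapse singleton lists into their sole element.
      collapse = lambda n: n[0] if isinstance(n, list) and len(n) == 1 else n
      result = mutate([[1], [[2]]], collapse)
      ```
      
      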
      Tang, Shizhi committed
  6. 18 Apr, 2020 2 commits
  7. 17 Apr, 2020 4 commits
  8. 16 Apr, 2020 2 commits
  9. 15 Apr, 2020 6 commits
  10. 14 Apr, 2020 4 commits
  11. 13 Apr, 2020 4 commits
    • [BYOC] Enhance partitioning and external codegen (#5310) · 5958d60d
      * Remove duplicated output args
      
      * address comment
      
      * fix codegen c
      
      * improve comment
      
      * VisitExprDefault_
      
      * deduce type
      Zhi committed
    • [RUNTIME][IR] Allow non-nullable ObjectRef, introduce Optional<T>. (#5314) · fc75de9d
      * [RUNTIME] Allow non-nullable ObjectRef, introduce Optional<T>.
      
      We use ObjectRef and its sub-classes extensively throughout our codebase.
      Each of ObjectRef's sub-classes is nullable, which means it can hold nullptr
      as its value.
      
      While in some places we do need nullptr as an alternative value, the implicit
      support for nullptr in every ObjectRef places an additional burden on developers,
      who must explicitly check defined() in many places across the codebase.
      
      Moreover, it is unclear from the API's point of view whether a nullable
      object or a non-null version is intended (in many cases we want the latter).
      
      Borrowing existing wisdom from languages like Rust, we propose to
      introduce non-nullable ObjectRefs and an Optional<T> container that
      represents the nullable variant.
      
      To keep backward compatibility, we will start by allowing most ObjectRefs to be
      nullable. However, we should start using Optional<T> as the type in places where
      we know nullability is a requirement. Gradually, we will move most ObjectRefs
      to be non-nullable and use Optional<T> in the nullable cases.
      
      Such explicitness in typing can help reduce potential problems
      in our codebase overall.
      
      Changes in this PR:
      - Introduce _type_is_nullable attribute to ObjectRef
      - Introduce Optional<T>
      - Change String to be non-nullable.
      - Change the API of function->GetAttr to return Optional<T>
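      The nullable/non-nullable split above can be sketched in Python. The actual
      Optional<T> is a C++ template in TVM's runtime; the classes and method names
      below are illustrative stand-ins that only model the API intent (a reference
      that rejects null at construction, and an explicitly nullable wrapper the
      caller must check):

      ```python
      # Toy model of the nullability split described above; not TVM's C++ code.
      class NonNullableRef:
          """Rejects null at construction, so users never need a null check."""
          def __init__(self, value):
              if value is None:
                  raise ValueError("non-nullable ObjectRef cannot hold nullptr")
              self.value = value

      class Optional:
          """Explicitly nullable variant; callers must check before use."""
          def __init__(self, value=None):
              self.value = value
          def defined(self):
              return self.value is not None
          def value_or(self, default):
              return self.value if self.defined() else default

      attr = Optional()  # e.g. an attribute lookup that found nothing
      ```
      
      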
      
      * Address review comments
      
      * Upgrade all compiler flags to c++14
      
      * Update as per review comment
      Tianqi Chen committed
    • [PYTORCH]Reduce_ops support added (#5308) · 6805d543
      * [PYTORCH]Reduce_ops support added
      
      * Review comments updated
      
      * typo bug in qnn test
      Samuel committed
    • [Torch] Support Python list, more realistic recurrent networks (#5306) · 0145cd50
      * use funcs from prelude, pass around convert_map
      
      * get relay input type from user ishape
      
      * handle tuple unpack
      
      * experimenting with static tensor array
      
      * use prelude concat instead of cons + rev
      
      * minor clean up
      
      * fix layer norm conversion bug, unwrap tensor array
      
      * add infer shape on tensor array
      
      * pass around prelude for now
      
      * compile worked but runtime error
      
      * fix tensor array wrapping
      
      * begin list dynamic test
      
      * is_list_dynamic first version
      
      * finish dynamic list test
      
      * a few fix
      
      * use shape_of function if Any is found
      
      * improve size conversion
      
      * working on adding free vars to loop block
      
      * fixed inlined inner loop issue
      
      * clean up free var handling
      
      * add support for tensor array concat
      
      * adding ta concat on last axis
      
      * fix concat, but got runtime error
      
      * disable concat on axis -1 for now
      
      * add lstm tests
      
      * revert unrelated change
      
      * fix stacked bidir test
      
      * minor fix to test
      
      * relax tol a bit, revert dnnl change to avoid conflict
      
      * simplify infer type, use input tensor shape rather than concat shape
      
      * more shape fix
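      The "use shape_of function if Any is found" step above amounts to a dispatch:
      if every dimension is known at conversion time, use the static shape;
      otherwise fall back to a runtime shape query. A toy Python sketch of that
      decision, with hypothetical names standing in for Relay's Any type and
      shape_of operator:

      ```python
      # Toy static-vs-dynamic shape dispatch; names are hypothetical stand-ins,
      # not the actual Relay frontend code.
      ANY = "Any"  # stand-in for a dynamic (unknown-at-compile-time) dimension

      def runtime_shape_of(tensor):
          return list(tensor["runtime_shape"])   # stand-in for a runtime shape op

      def get_shape(tensor):
          static = tensor["static_shape"]
          if any(dim == ANY for dim in static):  # any dynamic dim present?
              return runtime_shape_of(tensor)    # fall back to runtime query
          return list(static)                    # all dims known statically

      t_static = {"static_shape": (2, 3), "runtime_shape": (2, 3)}
      t_dyn = {"static_shape": (ANY, 3), "runtime_shape": (5, 3)}
      ```
      
      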
      masahi committed
  12. 12 Apr, 2020 1 commit