1. 15 Apr, 2020 3 commits
  2. 14 Apr, 2020 6 commits
  3. 13 Apr, 2020 7 commits
    • [BYOC] Enhance partitioning and external codegen (#5310) · 5958d60d
      * Remove duplicated output args
      
      * address comment
      
      * fix codegen c
      
      * improve comment
      
      * VisitExprDefault_
      
      * deduce type
      Zhi committed
    • [RUNTIME][IR] Allow non-nullable ObjectRef, introduce Optional<T>. (#5314) · fc75de9d
      * [RUNTIME] Allow non-nullable ObjectRef, introduce Optional<T>.
      
      We use ObjectRef and its sub-classes extensively throughout our codebase.
      Each of ObjectRef's sub-classes is nullable, which means it can hold nullptr
      as its value.
      
      While in some places we do need nullptr as an alternative value, the implicit
      support for nullptr in every ObjectRef places an additional burden on developers,
      who must explicitly check defined() in many places across the codebase.
      
      Moreover, it is unclear from the API's point of view whether we intend a
      nullable object or a non-null version (in many cases we want the latter).
      
      Borrowing existing wisdom from languages like Rust, we propose to
      introduce a non-nullable ObjectRef and an Optional<T> container that
      represents the nullable variant.
      
      To keep backward compatibility, we will start by allowing most ObjectRefs to be
      nullable. However, we should start to use Optional<T> as the type in places where
      we know nullability is a requirement. Gradually, we will move most ObjectRefs
      to be non-nullable and use Optional<T> in the nullable cases.
      
      Such explicitness in typing can help reduce the potential problems
      in our codebase overall.
      
      Changes in this PR:
      - Introduce _type_is_nullable attribute to ObjectRef
      - Introduce Optional<T>
      - Change String to be non-nullable.
      - Change the API of function->GetAttr to return Optional<T>
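      
      As a minimal sketch only (not part of this commit; the header path and the
      NullOpt sentinel are assumptions based on the description above), the new
      container could be used like this on the C++ side:
      
      #include <tvm/runtime/container.h>
      
      using tvm::runtime::NullOpt;
      using tvm::runtime::Optional;
      using tvm::runtime::String;
      
      void Example(const String& name) {
        // Optional<T> makes nullability explicit: NullOpt is the "no value" state.
        Optional<String> opt = NullOpt;
        opt = name;                  // now holds a non-null String
        if (opt) {                   // explicit presence check instead of obj.defined()
          String s = opt.value();    // extract the contained, guaranteed non-null value
        }
        // APIs such as function->GetAttr now return Optional<T>, so callers are
        // forced to handle the missing-attribute case explicitly.
      }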
      
      * Address review comments
      
      * Upgrade all compiler flags to c++14
      
      * Update as per review comment
      Tianqi Chen committed
    • [Topi] Tensorcore support for Conv3D (#5284) · 3df8d560
      * one weird trick.
      
      * Added schedule knob for different workloads.
      
      * Initial conv3d tensorcore working.
      
      * Added conv3d tensorcore strategy.
      
      * Added layout conversion to tensorcore friendly format for conv2d and conv3d.
      
      * Add target name check.
      
      * Fixed bad names and depthwise check.
      
      * Removed duplicated attribute assignment.
      Josh Fromm committed
    • [RELAY][OP] fix typo (#5315) · 0d48361a
      Signed-off-by: windclarion <windclarion@gmail.com>
      windclarion committed
    • [PYTORCH]Reduce_ops support added (#5308) · 6805d543
      * [PYTORCH]Reduce_ops support added
      
      * Review comments updated
      
      * typo bug in qnn test
      Samuel committed
    • [Torch] Support Python list, more realistic recurrent networks (#5306) · 0145cd50
      * use funcs from prelude, pass around convert_map
      
      * get relay input type from user ishape
      
      * handle tuple unpack
      
      * experimenting with static tensor array
      
      * use prelude concat instead of cons + rev
      
      * minor clean up
      
      * fix layer norm conversion bug, unwrap tensor array
      
      * add infer shape on tensor array
      
      * pass around prelude for now
      
      * compile worked but runtime error
      
      * fix tensor array wrapping
      
      * begin list dynamic test
      
      * is_list_dynamic first version
      
      * finish dynamic list test
      
      * a few fixes
      
      * use shape_of function if Any is found
      
      * improve size conversion
      
      * working on adding free vars to loop block
      
      * fixed inlined inner loop issue
      
      * clean up free var handling
      
      * add support for tensor array concat
      
      * adding ta concat on last axis
      
      * fix concat, but got runtime error
      
      * disable concat on axis -1 for now
      
      * add lstm tests
      
      * revert unrelated change
      
      * fix stacked bidir test
      
      * minor fix to test
      
      * relax tol a bit, revert dnnl change to avoid conflict
      
      * simplify infer type, use input tensor shape rather than concat shape
      
      * more shape fix
      masahi committed
  4. 12 Apr, 2020 5 commits
  5. 11 Apr, 2020 5 commits
    • [PYTORCH]Abs, Arange, Softplus ops (#5295) · 5b37d4c1
      * [PYTORCH]Abs, Arange, Softplus ops
      
      * Review comments updated
      Samuel committed
    • [LLVM] Fix generation of LLVM intrinsics (#5282) · 403929f9
      * [LLVM] Fix generation of LLVM intrinsics
      
      The type list in the call to llvm::Intrinsic::getDeclaration is not
      the intrinsic's signature; it is the list of overloaded types. Without
      this fix, the updated unit test would cause the following error:
      
      TVMError: LLVM module verification failed with the following errors:
      Intrinsic name not mangled correctly for type arguments! Should be:
      llvm.ctlz.i32
      i32 (i32, i1)* @llvm.ctlz.i32.i1
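      
      As an illustration only (a sketch against LLVM's C++ API, not code from this
      commit): the ctlz case above needs just the overloaded operand type, not the
      full signature.
      
      #include <llvm/IR/Function.h>
      #include <llvm/IR/Intrinsics.h>
      #include <llvm/IR/Module.h>
      #include <llvm/IR/Type.h>
      
      llvm::Function* GetCtlzI32(llvm::Module* module, llvm::LLVMContext& ctx) {
        // llvm.ctlz is overloaded only on its operand type, so the type list
        // passed to getDeclaration must be {i32} -- not the full signature
        // i32 (i32, i1). Passing the full signature produces the badly mangled
        // name @llvm.ctlz.i32.i1 shown in the error above.
        return llvm::Intrinsic::getDeclaration(
            module, llvm::Intrinsic::ctlz, {llvm::Type::getInt32Ty(ctx)});
      }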
      
      Special handling for llvm.prefetch; signature matching only for overloaded intrinsics
      
      The prefetch intrinsic returns void in LLVM, while it returns i32 in TVM.
      This case needs to be handled specially, because rule-based intrinsic
      translation would cause an invalid LLVM type to be created.
      
      Do the signature matching only for overloaded intrinsics. It is not needed
      for non-overloaded ones, so this saves a bit of compile time.
      
      * Include intrinsic name in the error message
      
      * Fix number of arguments for llvm.fmuladd and llvm.pow
      Krzysztof Parzyszek committed
    • [BYOC] Add example of Composite + Annotate for DNNL fused op (#5272) · 3616ebee
      * merge change from dev branch
      
      * fix string issue
      
      * bring comanic's change back
      masahi committed
    • [Frontend][TensorFlow]Improve TensorFlow Static Shape Tensor Array (#5243) · 4b27cd14
      * Support TF Frontend Static TensorArray
      
      * Fix pylint
      
      * Fix lint
      
      * Move get_tensor_array_shape into prelude
      
      * Fix lint
      
      * Fix common
      Yao Wang committed
    • [RUNTIME] Introduce RValue reference(move) support to TypedPackedFunc (#5271) · b72dd9d9
      * [RUNTIME] Introduce RValue reference(move) support to TypedPackedFunc
      
      This PR introduces r-value reference support in the PackedFunc calling convention to address the above issue.
      Specifically, when an argument is an r-value reference, we assign a different type code (`kObjectRValueRefArg`)
      and pass `Object**` (the address of the Object pointer) through the values array instead.
      The callee can choose to move out this Object pointer and set the caller's original Object pointer to nullptr.
      
      We also add experimental move support on the Python side (marked as `_move` to indicate its experimental nature).
      This enhancement will enable copy-on-write optimizations throughout the TVM stack.
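      
      A minimal sketch of what this enables on the C++ side (the lambda body and
      header path are illustrative assumptions, not code from this commit):
      
      #include <tvm/runtime/packed_func.h>
      
      #include <utility>
      
      using tvm::runtime::ObjectRef;
      using tvm::runtime::TypedPackedFunc;
      
      void Example(ObjectRef obj) {
        TypedPackedFunc<void(ObjectRef)> consume([](ObjectRef arg) {
          // If the caller moved its reference in, `arg` can be the sole owner here,
          // allowing copy-on-write style mutation without an actual copy.
        });
        // Passing an r-value tags the argument with kObjectRValueRefArg and hands
        // over the Object pointer instead of bumping the reference count.
        consume(std::move(obj));
      }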
      
      * Address review comments
      
      * fix compilation
      Tianqi Chen committed
  6. 10 Apr, 2020 14 commits