1. 13 Apr, 2020 2 commits
    • [PYTORCH]Reduce_ops support added (#5308) · 6805d543
      * [PYTORCH]Reduce_ops support added
      
      * Review comments updated
      
      * typo bug in qnn test
      Samuel committed
    • [Torch] Support Python list, more realistic recurrent networks (#5306) · 0145cd50
      * use funcs from prelude, pass around convert_map
      
      * get relay input type from user ishape
      
      * handle tuple unpack
      
      * experimenting with static tensor array
      
      * use prelude concat instead of cons + rev
      
      * minor clean up
      
      * fix layer norm conversion bug, unwrap tensor array
      
      * add infer shape on tensor array
      
      * pass around prelude for now
      
      * compile worked but runtime error
      
      * fix tensor array wrapping
      
      * begin list dynamic test
      
      * is_list_dynamic first version
      
      * finish dynamic list test
      
      * a few fixes
      
      * use shape_of function if Any is found
      
      * improve size conversion
      
      * working on adding free vars to loop block
      
      * fixed inlined inner loop issue
      
      * clean up free var handling
      
      * add support for tensor array concat
      
      * adding ta concat on last axis
      
      * fix concat, but got runtime error
      
      * disable concat on axis -1 for now
      
      * add lstm tests
      
      * revert unrelated change
      
      * fix stacked bidir test
      
      * minor fix to test
      
      * relax tol a bit, revert dnnl change to avoid conflict
      
      * simplify infer type, use input tensor shape rather than concat shape
      
      * more shape fix
      masahi committed
  2. 12 Apr, 2020 5 commits
  3. 11 Apr, 2020 5 commits
    • [PYTORCH]Abs, Arange, Softplus ops (#5295) · 5b37d4c1
      * [PYTORCH]Abs, Arange, Softplus ops
      
      * Review comments updated
      Samuel committed
    • [LLVM] Fix generation of LLVM intrinsics (#5282) · 403929f9
      * [LLVM] Fix generation of LLVM intrinsics
      
      The type list in the call to llvm::Intrinsic::getDeclaration is not
      the intrinsic's signature; it is the list of overloaded types. Without
      this fix, the updated unit test would cause the following error:
      
      TVMError: LLVM module verification failed with the following errors:
      Intrinsic name not mangled correctly for type arguments! Should be:
      llvm.ctlz.i32
      i32 (i32, i1)* @llvm.ctlz.i32.i1
      
      Special handling for llvm.prefetch; signature matching for overloaded intrinsics only
      
      The prefetch intrinsic returns void in LLVM, while it returns i32 in TVM.
      This case needs to be handled specially, because rule-based intrinsic
      translation would cause invalid LLVM type to be created.
      
      Do the signature matching only for overloaded intrinsics. It's not needed
      for non-overloaded ones, so this can save a bit of compile-time.
      
      * Include intrinsic name in the error message
      
      * Fix number of arguments for llvm.fmuladd and llvm.pow
      Krzysztof Parzyszek committed
    • [BYOC] Add example of Composite + Annotate for DNNL fused op (#5272) · 3616ebee
      * merge change from dev branch
      
      * fix string issue
      
      * bring comanic's change back
      masahi committed
    • [Frontend][TensorFlow]Improve TensorFlow Static Shape Tensor Array (#5243) · 4b27cd14
      * Support TF Frontend Static TensorArray
      
      * Fix pylint
      
      * Fix lint
      
      * Move get_tensor_array_shape into prelude
      
      * Fix lint
      
      * Fix common
      Yao Wang committed
    • [RUNTIME] Introduce RValue reference(move) support to TypedPackedFunc (#5271) · b72dd9d9
      * [RUNTIME] Introduce RValue reference(move) support to TypedPackedFunc
      
      This PR introduces RValue reference support to the PackedFunc calling convention.
      Specifically, when an argument is an r-value reference, we assign a different type code (`kObjectRValueRefArg`)
      and pass `Object**` (the address of the Object pointer) through the values array.
      The callee can choose to move out this Object pointer and set the caller's original Object pointer to nullptr.
      
      We also add experimental move support on the Python side (marked as `_move` to indicate its dev nature).
      This enhancement will enable copy-on-write optimizations throughout the TVM stack.
      
      * Address review comments
      
      * fix compilation
      Tianqi Chen committed
  4. 10 Apr, 2020 18 commits
  5. 09 Apr, 2020 1 commit
  6. 08 Apr, 2020 6 commits
  7. 07 Apr, 2020 3 commits
    • [RUNTIME] Implement TVMDSOOp(TensorFlow custom op) for TVM runtime (#4459) · 53a4ad35
      * Add implementation of TVMDSOOp
      
      * feat: Update cmake script to work with c++11 and in-repo build
      
      * feat: Use libtvm as oplib dependency
      
      * fix: Add missing link dependency to libtvm
      
      * feat: Update tf tvmdso op by review comments
      
      * fix: Update with pr comments
      
      * fix: Fix lint
      
      * feat: Add test script and fix gpu shape
      
      * feat: Add test script and fix gpu shape
      
      * fix: Conditional build tftvm op for gpu
      
      * fix: Conditional build tftvm op for gpu
      
      * fix: Fix pylint of tf_op module.py
      
      * fix: Fix pylint of tf_op module.py
      
      * feat: Conditional enable gpu test for tftvm op
      
      * feat: Conditional enable gpu test for tftvm op
      
      * feat: Add tf_tvmdsoop test script as an app test
      
      * fix: Fix gpu/cpu enabled check on tvm in test script
      
      * fix: Make tf tvmdso op test script runnable with pytest
      
      * remove unused test script test_tfop_module.py
      
      * fix: Remove pushd & popd in tfdsoop test script
      
      * fix: Upgrade tftvmop to use python3 to find TensorFlow
      
      * fix: Upgrade tftvmop to use python3 to find TensorFlow
      
      * fix: Change target_link_options to target_link_libraries
      
      * fix: Add tftvmop build script's c++ option
      
      * fix: Add tvm library path to tf op test library path
      
      * fix: Debug ci build for tftvm dso op
      
      * fix: Fix cmake error and skip tfop test
      
      * fix: Fix typo and indentation issues
      
      * feat: Use TF list input op def
      
      * fix: Fix style and unexpected changes
      
      Co-authored-by: baoxinqi <baoxinqi@4paradigm.com>
      Co-authored-by: Chen Dihao <chendihao@4paradigm.com>
      Co-authored-by: wrongtest <wrongtest@4paradigm.com>
      tobe committed
    • [LLVM] Do not use x86_vcvtph2ps_256 intrinsic with LLVM 11+ (#5267) · 4e007632
      This intrinsic was removed in LLVM 11.
      Krzysztof Parzyszek committed