1. 02 Apr, 2020 1 commit
  2. 28 Mar, 2020 1 commit
    • Adding support for QNN subtract op (#5153) · dada6761
      * Adding support for QNN subtract op
      
      * Fixing typo.
      
      * Fixing typo.
      
      * Fixing lint.
      
      * Addressing review comments.
      
      * Renaming variables as per convention and renamed QnnBinaryOpTypes -> QnnBinaryOpType
      
      * Renaming QnnBinaryOpType to QnnBinaryOpTensorType, which now takes the index you want to extract, to make the code more readable.
      
      * Fixing lint.
      
      * Moving common code to macro.
      
      * Fixing alignment.
      
      * Fixing typo.
      
      * Fixing lint.
      
      * Renaming method to pass CI.
      shoubhik committed
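      A minimal sketch of the arithmetic a QNN subtract implements, assuming standard affine quantization (the names and values below are illustrative, not TVM's lowering code):
      
      ```
      #include <algorithm>
      #include <cmath>
      #include <cstdint>
      #include <cstdio>
      
      // Dequantize both inputs, subtract in real space, requantize the result.
      // QNN lowers this to integer arithmetic, but the math is equivalent.
      int8_t QuantizedSubtract(int8_t qa, float scale_a, int32_t zp_a,
                               int8_t qb, float scale_b, int32_t zp_b,
                               float scale_out, int32_t zp_out) {
        float real = scale_a * (qa - zp_a) - scale_b * (qb - zp_b);
        int32_t q = zp_out + static_cast<int32_t>(std::lround(real / scale_out));
        return static_cast<int8_t>(std::min<int32_t>(127, std::max<int32_t>(-128, q)));
      }
      
      int main() {
        // 0.5 - 0.25 = 0.25; all tensors use scale 0.05 and zero point 0.
        printf("%d\n", QuantizedSubtract(10, 0.05f, 0, 5, 0.05f, 0, 0.05f, 0));  // 5
        return 0;
      }
      ```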
  3. 23 Mar, 2020 1 commit
  4. 19 Mar, 2020 1 commit
  5. 12 Mar, 2020 1 commit
  6. 11 Mar, 2020 1 commit
  7. 15 Feb, 2020 1 commit
  8. 05 Feb, 2020 1 commit
  9. 04 Feb, 2020 1 commit
  10. 03 Feb, 2020 1 commit
  11. 24 Jan, 2020 1 commit
  12. 20 Jan, 2020 1 commit
  13. 19 Jan, 2020 1 commit
    • [REFACTOR] Establish tir (#4740) · cf59b206
      TIR is the new namespace for low-level IR
      for tensor-level optimizations and loop transformations.
      
      This PR establishes the namespace and files.
      
      - lowered_func.h, buffer.h, data_layout.h -> tir/buffer.h, tir/data_layout.h, tir/lowered_func.h
      - ir.h -> tir/expr.h, tir/stmt.h
      - ir_functor_ext.h -> tir/expr_functor.h, tir/stmt_functor.h
      Tianqi Chen committed
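      For downstream code, the file moves above translate into new include paths roughly as follows (a sketch based on the commit's file list):
      
      ```
      #include <tvm/tir/expr.h>    // expressions, previously in ir.h
      #include <tvm/tir/stmt.h>    // statements, previously in ir.h
      #include <tvm/tir/buffer.h>  // previously buffer.h
      ```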
  14. 17 Jan, 2020 2 commits
  15. 15 Jan, 2020 1 commit
    • [REFACTOR][IR] Unify IntImm and UIntImm (#4706) · ce807fe8
      * [REFACTOR][IR] Unify IntImm and UIntImm
      
      This PR unifies UIntImm and IntImm to simplify the codebase.
      Unsigned integer constants will also be stored as IntImm.
      
      For a uint constant that does not fit into int64 (a rare case), we introduced
      an intrinsic, tvm_big_uint_imm, to construct such integers from their
      lower and higher 32 bits.
      
      * [REFACTOR][IR] Remove UIntImm to use IntImm
      
      * rename big->large
      Tianqi Chen committed
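      The lower/upper 32-bit split is ordinary bit arithmetic; a standalone illustration of how a too-large uint splits into and reassembles from two halves (not the tvm_big_uint_imm implementation itself):
      
      ```
      #include <cstdint>
      #include <cstdio>
      
      int main() {
        // A uint64 constant whose top bit is set, so it does not fit in int64.
        uint64_t big = 0xF000000012345678ULL;
        uint32_t low = static_cast<uint32_t>(big & 0xFFFFFFFFu);
        uint32_t high = static_cast<uint32_t>(big >> 32);
        // The intrinsic would receive the two halves and reconstruct the value.
        uint64_t rebuilt = (static_cast<uint64_t>(high) << 32) | low;
        printf("%d\n", big == rebuilt);  // 1
        return 0;
      }
      ```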
  16. 14 Jan, 2020 1 commit
  17. 09 Jan, 2020 1 commit
    • [REFACTOR][IR] tvm::Expr -> PrimExpr(Primitive Expr) (#4669) · d6a23cf5
      * [REFACTOR][IR] tvm::Expr -> PrimExpr(Primitive Expr)
      
      As part of unified IR, we will need to unify relay::Expr
      and the current tvm::Expr under the same base type.
      
      From the technical point of view, tvm::Expr is a "primitive"
      expression that only contains POD types and handles and does
      not do life-cycle management.
      
      This PR renames Expr->PrimExpr to clarify that.
      We will send a subsequent PR to introduce the base expr class.
      
      * Remove legacy VarExpr and ExprHash/Equal
      Tianqi Chen committed
  18. 07 Jan, 2020 1 commit
  19. 06 Jan, 2020 1 commit
    • [REFACTOR] Automatically deduce function type signature in Registry.set_body_typed (#4623) · d5d63a44
      Previously we supported only a limited case of function type deduction, and in many places
      we had to supply the type twice during set_body_typed (once in the template parameter, again in the lambda signature).
      
      This PR improves the deduction logic by enabling automatic function signature deduction.
      
      ```
      TVM_REGISTER_GLOBAL("sub")
      .set_body_typed([](int x, int y) -> int { return x - y; });
      ```
      
      Unfortunately, because of a template conflict, we cannot support the original case
      where both the type signature and the lambda are supplied to set_body_typed.
      
      This PR refactors the existing registrations to the new style.
      Tianqi Chen committed
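      For contrast with the quoted snippet, the pre-#4623 style described above had to spell the signature twice, roughly like this (a sketch, not the exact historical code):
      
      ```
      // Before: the type appears as a template argument and again in the lambda.
      TVM_REGISTER_GLOBAL("sub")
      .set_body_typed<int(int, int)>([](int x, int y) -> int { return x - y; });
      
      // After: the signature is deduced from the lambda alone.
      TVM_REGISTER_GLOBAL("sub")
      .set_body_typed([](int x, int y) -> int { return x - y; });
      ```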
  20. 04 Jan, 2020 1 commit
    • [REFACTOR] TVM_REGISTER_API -> TVM_REGISTER_GLOBAL (#4621) · 81523604
      TVM_REGISTER_API is an alias of TVM_REGISTER_GLOBAL.
      In the spirit of simplifying redirections, this PR removes
      the original TVM_REGISTER_API macro and uses TVM_REGISTER_GLOBAL directly.
      
      This type of refactor also simplifies IDE navigation tools
      such as FFI Navigator, providing a better code-reading experience.
      
      Move EnvFunc's definition to node.
      Tianqi Chen committed
  21. 03 Jan, 2020 1 commit
  22. 31 Dec, 2019 1 commit
    • [REFACTOR][OBJECT] Consolidate NodePtr/Ref/Hash/Equal to Object (#4603) · a8c36921
      * [REFACTOR][OBJECT] Consolidate NodePtr/Ref/Hash/Equal and macros to Object.
      
      Historically, we have had classes like NodePtr/Ref/Hash/Equal.
      After the unified object protocol, these names are just aliases of their Object counterparts.
      Moreover, helper macros for defining these objects were scattered all over the place.
      
      This PR consolidates the terminology into the corresponding names
      in the Object system, so we have a clean and consistent API moving forward.
      
      * Update include/tvm/attrs.h
      
      Co-Authored-By: Wei Chen <ipondering.weic@gmail.com>
      
      * fix compilation
      
      Co-authored-by: Wei Chen <ipondering.weic@gmail.com>
      Tianqi Chen committed
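      A hedged sketch of what the consolidation means for user code, assuming the standard TVM object header (the alias set follows the Node-to-Object mapping named in the commit):
      
      ```
      #include <tvm/runtime/object.h>
      
      using tvm::runtime::ObjectRef;    // replaces NodeRef
      using tvm::runtime::ObjectPtr;    // replaces NodePtr
      using tvm::runtime::ObjectHash;   // replaces NodeHash
      using tvm::runtime::ObjectEqual;  // replaces NodeEqual
      ```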
  23. 22 Dec, 2019 1 commit
    • [REFACTOR][DTYPE] Isolate dtype to runtime (#4560) · 7fa8aab5
      dtype.h -> runtime/data_type.h
      
      Changes:
      - Rename all old references of tvm::Type to DataType
      - ExprNode.type -> ExprNode.dtype
      - Expr.type() -> Expr.dtype()
      - Move Expr-related functions to expr_operator.
        - DataType::min() -> min_value(DataType)
        - DataType::max() -> max_value(DataType)
      - Move type constructor Int, UInt, Float, Handle, Bool into DataType.
        - Int(bits) -> DataType::Int(bits)
        - UInt(bits) -> DataType::UInt(bits)
      Tianqi Chen committed
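      The renamed constructors in use, as a short sketch (only the TVM data type header is assumed):
      
      ```
      #include <tvm/runtime/data_type.h>
      
      void Example() {
        tvm::runtime::DataType i32 = tvm::runtime::DataType::Int(32);    // was Int(32)
        tvm::runtime::DataType u8 = tvm::runtime::DataType::UInt(8);     // was UInt(8)
        tvm::runtime::DataType f32 = tvm::runtime::DataType::Float(32);  // was Float(32)
      }
      ```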
  24. 24 Nov, 2019 1 commit
  25. 21 Nov, 2019 1 commit
  26. 16 Nov, 2019 1 commit
    • Retain qnn input kernel scales (#4292) · 3ba9dd09
      * Add qnn conv2d attributes for input_tensor_scale and
      kernel_tensor_scale.
      
      The lowering in the tflite frontend loses the input_tensor_scale
      and the kernel_tensor_scale by multiplying them together and folding
      the product into the Requantize operation. This means that any graph
      partitioning passes, or other passes that need to access this information,
      no longer have it available in the qnn dialect.
      
      regards
      Ramana
      
      * Store input tensor scale and Weight tensor scale for Dense as well
      
      As with conv2d, the tflite frontend drops the input tensor
      scale and the weight tensor scale from the relay op. Store
      them as separate fields there.
      
      * Fix unintentional tab
      
      * Rename input_tensor_scale to input_scale and kernel_tensor_scale
      to kernel_scale for conv2d.
      
      * input_tensor_scale -> input_scale weight_tensor_scale->weight_scale
      
      * Rework dense testcase
      
      And use input_scale and kernel_scale
      
      * Be consistent in use of input_scale and kernel_scale values
      
      * Fixup qnn conv2d tests for input_scale and kernel_scale
      
      * Make pydoc identical between conv2d and dense for weight_tensor
      
      * Fix up conv2d parameters to be in the same order between C++ and python
      
      * Fix ordering of parameters for dense.
      
      * Add input_scale and output_scale to try and satisfy the CI gods
      
      * Delete input_scale and kernel_scale.
      
      nn.conv2d does not contain input_scale and kernel_scale. We need
      to delete them when lowering qnn.conv2d to nn.conv2d.
      
      * Add input_scale and kernel_scale for qnn.conv2d
      Ramana Radhakrishnan committed
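      The information loss described in the first bullet is easy to see numerically: the folded Requantize scale keeps only the product input_scale * kernel_scale, and distinct pairs can share a product (illustrative numbers, plain C++ rather than TVM code):
      
      ```
      #include <cstdio>
      
      int main() {
        // Two different (input_scale, kernel_scale) pairs...
        float in_a = 0.5f, kernel_a = 0.02f;
        float in_b = 0.1f, kernel_b = 0.10f;
        // ...fold to the same product, so the original pair cannot be
        // recovered from the lowered graph.
        printf("%f %f\n", in_a * kernel_a, in_b * kernel_b);  // 0.010000 0.010000
        return 0;
      }
      ```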
  27. 14 Nov, 2019 2 commits
  28. 30 Oct, 2019 1 commit
    • Improve the lowering of Qnn Dense (#4213) · 2be444f9
      * [QNN] Improving Dense lowering.
      
      * - Moving get_shape method to util
      - Finalizing the test cases and the code structure for optimized dense computation.
      
      * - Fixing cpplint.
      
      * - Addressing review comments.
      
      * - Renaming the variables correctly.
      
      * - Renaming the variables correctly.
      shoubhik committed
  29. 29 Oct, 2019 1 commit
  30. 21 Oct, 2019 2 commits
    • Add support for quantized multiply to Relay (#4141) · e5835425
      This patch adds a multiply operator for quantized tensors.
      The details of the quantized multiplication are outlined
      in the code.
      
      This builds on pull request 3927 and includes the changes
      Animesh mentions in the comments on that request.
      
      Change-Id: I555715b53d0266a91d5c03dc3dfe8fc31e7ce4e1
      ekalda committed
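      The arithmetic parallels the quantized subtract sketch earlier in this log, except the product of the two real values yields a combined scale (standard affine-quantization math, illustrative rather than the patch's code):
      
      ```
      #include <algorithm>
      #include <cmath>
      #include <cstdint>
      #include <cstdio>
      
      int8_t QuantizedMultiply(int8_t qa, float scale_a, int32_t zp_a,
                               int8_t qb, float scale_b, int32_t zp_b,
                               float scale_out, int32_t zp_out) {
        // real_a * real_b = (scale_a * scale_b) * (qa - zp_a) * (qb - zp_b)
        float real = scale_a * scale_b * (qa - zp_a) * (qb - zp_b);
        int32_t q = zp_out + static_cast<int32_t>(std::lround(real / scale_out));
        return static_cast<int8_t>(std::min<int32_t>(127, std::max<int32_t>(-128, q)));
      }
      
      int main() {
        // 0.5 * 0.5 = 0.25; all tensors use scale 0.05 and zero point 0.
        printf("%d\n", QuantizedMultiply(10, 0.05f, 0, 10, 0.05f, 0, 0.05f, 0));  // 5
        return 0;
      }
      ```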
    • [REFACTOR][NODE][RUNTIME] Move Node to the new Object protocol. (#4161) · 7895adb2
      * [REFACTOR][NODE][RUNTIME] Move Node to the new Object protocol.
      
      This PR removes the original node system and makes Node a subclass of Object.
      This is a major refactor towards a better unified runtime object system.
      
      List of changes in the refactor:
      
      - We now hide the data_ field; use Downcast explicitly to get a sub-class object.
      - Removed the node system FFI in python.
      - Removed the node C API, instead use PackedFunc for list and get attrs.
      - Change relay::Op::set_attr_type_key(attr_key_name) to relay::Op::set_attr_type<AttrType>().
        - This change was necessary because of the new Object registration mechanism.
        - Subsequent changes to the op registrations
        - The change revealed a few pre-existing problems that are now fixed.
      - Patched up a few missing node type registrations.
        - Now we will raise an error if we encounter an object whose type is not registered.
      - The original node.h and container.h are kept in the same location.
      - Calling convention: kObjectHandle now equals the old kNodeHandle; kNodeHandle is removed.
      - IRFunctor now dispatches on ObjectRef.
      - Update to the new type-checking API: is_type and derived_from are replaced by IsInstance.
      - Removed the .hash member function; instead, use C++-convention hasher functors.
      
      * Address review comments
      Tianqi Chen committed
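      The new checking and downcast style named in the bullets, sketched against a hypothetical FooNode/Foo object pair (any registered TVM object follows this pattern):
      
      ```
      #include <tvm/runtime/object.h>
      
      // FooNode and Foo are hypothetical stand-ins for a registered object type.
      bool HandleIfFoo(const tvm::runtime::ObjectRef& ref) {
        if (!ref->IsInstance<FooNode>()) return false;  // replaces is_type / derived_from
        Foo foo = tvm::runtime::Downcast<Foo>(ref);     // data_ is hidden; downcast explicitly
        (void)foo;  // ... use foo ...
        return true;
      }
      ```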
  31. 16 Oct, 2019 2 commits
  32. 08 Oct, 2019 1 commit
  33. 06 Oct, 2019 1 commit
  34. 02 Oct, 2019 1 commit
  35. 26 Sep, 2019 1 commit
  36. 23 Sep, 2019 1 commit