1. 25 Apr, 2020 1 commit
    • [RELAY] Move frontend utils (#5345) · 3f47b327
      * [RELAY] Move frontend utils
      
      The util file currently under frontend is used from
      outside of frontend (in qnn/op/legalizations), which suggests
      the file should be pushed up to a higher level.
      
      The benefit of this change is that importing qnn no longer
      also imports all the frontends.
      
      * Inline get_scalar_from_constant
      
      Change-Id: I1cc64e9ecb0eadb6ac0f7b62e6ea174644af4ad4
      
      * Remove util.py from Relay
      
      Change-Id: If9cd7cf3fc0bd1861a3a9b5604f338e084d8db96
      
      * Shorten functions
      
      Change-Id: Ieb537d82e6ee52421ff05a90cd00a03679ffebf2
      
      * Line length
      
      Change-Id: I1d216b7e73a060c4f118f5da50ce58b18eba907f
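      
      As a small illustration, a sketch of what the inlined helper does (the body
      below is an assumption inferred from the helper's name and typical relay
      usage, not the exact code from the PR):
      
        from tvm import relay
        
        def get_scalar_from_constant(expr):
            # Assumed behavior: extract the single scalar held by a
            # relay Constant node and return it as a Python number.
            value = expr.data.asnumpy()
            assert value.size == 1, "expected a scalar constant"
            return value.item()
        
        # e.g. get_scalar_from_constant(relay.const(0.5)) -> 0.5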
      mbaret committed
  2. 28 Mar, 2020 1 commit
    • Adding support for QNN subtract op (#5153) · dada6761
      * Adding support for QNN subtract op
      
      * Fixing typo.
      
      * Fixing typo.
      
      * Fixing lint.
      
      * Addressing review comments.
      
      * Renaming variables as per convention and renaming QnnBinaryOpTypes -> QnnBinaryOpType
      
      * Renaming QnnBinaryOpType to QnnBinaryOpTensorType, which now takes the index of the tensor type to extract, making the code more readable.
      
      * Fixing lint.
      
      * Moving common code to macro.
      
      * Fixing alignment.
      
      * Fixing typo.
      
      * Fixing lint.
      
      * Renaming method to pass CI.
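      
      As a rough illustration of the new op's semantics, a NumPy reference sketch
      (assumed semantics: dequantize, subtract, requantize; the actual lowering in
      the PR stays in the integer domain):
      
        import numpy as np
        
        def qnn_subtract_ref(a_q, a_scale, a_zp, b_q, b_scale, b_zp,
                             out_scale, out_zp):
            # Dequantize both inputs, subtract in float, requantize.
            real = (a_scale * (a_q.astype(np.int32) - a_zp)
                    - b_scale * (b_q.astype(np.int32) - b_zp))
            q = np.round(real / out_scale) + out_zp
            return np.clip(q, 0, 255).astype(np.uint8)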
      shoubhik committed
  3. 12 Feb, 2020 1 commit
    • [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding files (#4862) · a5661611
      * [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding relay files.
      
      This PR establishes tvm.ir and migrates the corresponding relay
      files into the new folder.
      
      API Change:
      - relay.Module -> tvm.IRModule
      
      * Update with ADT
      
      * Migrate transform
      
      * address comments
      
      * Migrate module
      
      * Migrate json_compact
      
      * Migrate attrs
      
      * Move LoweredFunc to stmt temporarily
      
      * Temporarily migrate container
      
      * Finish migrating container
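      
      A small before/after sketch of the API change (assumes a TVM build that
      includes this PR):
      
        import tvm
        from tvm import relay
        
        x = relay.var("x", shape=(1, 4))
        func = relay.Function([x], relay.nn.relu(x))
        
        # before this PR: mod = relay.Module.from_expr(func)
        mod = tvm.IRModule.from_expr(func)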
      Tianqi Chen committed
  4. 05 Feb, 2020 1 commit
    • [REFACTOR][PY] tvm._ffi (#4813) · f9b46c43
      * [REFACTOR][PY] tvm._ffi
      
      - Remove "from __future__ import absolute_import" in the related files, as it is no longer needed now that the code only runs in Python 3
      - Remove the reverse dependency of _ctypes/_cython on object_generic.
      - function.py -> packed_func.py
      - Function -> PackedFunc
      - all registry-related logic goes to tvm._ffi.registry
      - Use absolute references for FFI-related calls.
        - tvm._ffi.register_object
        - tvm._ffi.register_func
        - tvm._ffi.get_global_func
      
      * Move get global func to the ffi side
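      
      A short usage sketch of the absolute-reference style (the registered name
      "demo.add" is illustrative):
      
        import tvm
        
        @tvm._ffi.register_func("demo.add")
        def add(a, b):
            # registered as a global PackedFunc under "demo.add"
            return a + b
        
        f = tvm._ffi.get_global_func("demo.add")
        assert f(1, 2) == 3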
      Tianqi Chen committed
  5. 16 Nov, 2019 1 commit
    • Retain qnn input kernel scales (#4292) · 3ba9dd09
      * Add qnn conv2d attributes for input_tensor_scale and
      kernel_tensor_scale.
      
      The lowering in the tflite frontend loses the input_tensor_scale
      and the kernel_tensor_scale by multiplying them together and folding
      the product into the Requantize operation. This means that graph
      partitioning passes, or any other passes that need this information,
      no longer have it available in the qnn dialect.
      
      * Store input tensor scale and weight tensor scale for Dense as well
      
      As with conv2d, the tflite frontend drops the input tensor
      scale and the weight tensor scale from the relay op. Store
      them as separate fields there.
      
      * Fix unintentional tab
      
      * Rename input_tensor_scale to input_scale and kernel_tensor_scale
      to kernel_scale for conv2d.
      
      * input_tensor_scale -> input_scale, weight_tensor_scale -> weight_scale
      
      * Rework dense testcase
      
      And use input_scale and kernel_scale
      
      * Be consistent in use of input_scale and kernel_scale values
      
      * Fixup qnn conv2d tests for input_scale and kernel_scale
      
      * Make pydoc identical between conv2d and dense for weight_tensor
      
      * Fix up conv2d parameters to be in the same order between C++ and python
      
      * Fix ordering of parameters for dense.
      
      * Add input_scale and output_scale to try and satisfy ci gods
      
      * Delete input_scale and kernel_scale.
      
      nn.conv2d does not contain input_scale and kernel_scale. We need
      to delete them when lowering to nn.conv2d.
      
      * Add input_scale and kernel_scale for qnn.conv2d
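      
      A hedged sketch of a qnn.conv2d call carrying the retained scales (argument
      names follow the commit message; the exact signature of relay.qnn.op.conv2d
      has changed across TVM releases):
      
        from tvm import relay
        
        data = relay.var("data", shape=(1, 3, 32, 32), dtype="uint8")
        weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="uint8")
        
        # input_scale / kernel_scale now live on the op itself instead of
        # being folded away into the later Requantize.
        conv = relay.qnn.op.conv2d(
            data, weight,
            input_zero_point=0, kernel_zero_point=0,
            input_scale=0.5, kernel_scale=0.25,
            kernel_size=(3, 3), channels=16)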
      Ramana Radhakrishnan committed
  6. 30 Oct, 2019 1 commit
    • Improve the lowering of Qnn Dense (#4213) · 2be444f9
      * [QNN] Improving Dense lowering.
      
      * - Moving get_shape method to util
      - Finalizing the test cases and the code structure for optimized dense computation.
      
      * - Fixing cpplint.
      
      * - Addressing review comments.
      
      * - Renaming the variables correctly.
      
      * - Renaming the variables correctly.
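      
      For context, the standard zero-point expansion behind an optimized quantized
      dense lowering, as a NumPy sketch (the PR's exact term grouping may differ):
      
        import numpy as np
        
        def qnn_dense_ref(x_q, w_q, x_zp, w_zp):
            # (x - x_zp) @ (w - w_zp)^T expands into four integer terms:
            #   x @ w^T - x_zp*sum_k(w) - w_zp*sum_k(x) + K*x_zp*w_zp
            x = x_q.astype(np.int32)
            w = w_q.astype(np.int32)
            k = x.shape[1]
            return (x @ w.T
                    - x_zp * w.sum(axis=1)[np.newaxis, :]
                    - w_zp * x.sum(axis=1)[:, np.newaxis]
                    + k * x_zp * w_zp)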
      shoubhik committed
  7. 21 Oct, 2019 1 commit
    • Add support for quantized multiply to Relay (#4141) · e5835425
      This patch adds a multiply operator for quantized tensors.
      The details of the quantized multiplication are outlined
      in the code.
      
      This builds on pull request 3927 and includes the changes
      Animesh mentions in the comments on that request.
      
      Change-Id: I555715b53d0266a91d5c03dc3dfe8fc31e7ce4e1
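      
      A NumPy sketch of the underlying arithmetic (assumed semantics, following the
      standard definition rather than the PR's exact code): the integer product of
      the zero-point-shifted inputs carries scale a_scale * b_scale and is then
      requantized to the output parameters.
      
        import numpy as np
        
        def qnn_mul_ref(a_q, a_scale, a_zp, b_q, b_scale, b_zp,
                        out_scale, out_zp):
            prod = (a_q.astype(np.int32) - a_zp) * (b_q.astype(np.int32) - b_zp)
            q = np.round(prod * (a_scale * b_scale) / out_scale) + out_zp
            return np.clip(q, 0, 255).astype(np.uint8)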
      ekalda committed
  8. 22 Sep, 2019 1 commit
    • Qnn fully connected (#3910) · 43f54a58
      * Qnn Dense layer.
      
      * Reformatting code.
      
      * Reformatting code and making the test case more readable.
      
      * Fixing lint issues.
      
      * Fixing test method names to pass the nose-related configuration.
      
      * Aligning the code for code style.
      shoubhik committed
  9. 16 Aug, 2019 1 commit
    • QNN quantize and dequantize operators. (#3745) · d3eb9cb8
      * QNN quantize and dequantize operators.
      
      * addressing review comments.
      
      * addressing review comments.
      
      * Adding new line at the end of the file.
      
      * Adhering to styling guidelines.
      
      * Adding name to contributors.
      
      * Fixing lint issue.
      
      * Fixing file name.
      
      * Removing unnecessary code.
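      
      The affine quantization scheme the two operators implement, as a NumPy sketch
      (qmin/qmax shown for uint8; taken from the standard definition, not from the
      PR itself):
      
        import numpy as np
        
        def quantize_ref(x, scale, zero_point, qmin=0, qmax=255):
            # q = clip(round(x / scale) + zero_point, qmin, qmax)
            q = np.round(x / scale) + zero_point
            return np.clip(q, qmin, qmax).astype(np.uint8)
        
        def dequantize_ref(q, scale, zero_point):
            # x ~= scale * (q - zero_point)
            return scale * (q.astype(np.int32) - zero_point)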
      shoubhik committed
  10. 08 Aug, 2019 1 commit
    • [QNN] Requantize operator (#3531) · a78adbd5
      * [Relay] [Quantization] WIP - Common files for the quantization work.
      
      * [Relay] [Quantization] WIP - Prototyping requantize op.
      
      * Requantize operator implementation.
      
      Requantize converts one quantized tensor representation to another quantized
      representation. The PR has the following implementation features:
      
      - Requantize operator defined in the qnn namespace - relay.qnn.requantize
      - Lowering of requantize to existing Relay operators
      - Integer fixed point implementation of requantize
          - Two rounding modes - FE_UPWARDS (round towards infinity) and
          FE_AWAY_FROM_ZERO (std::round behavior)
      - A floating point implementation as well, which can act as a reference or
      can be used for devices where FP32 computation is not a concern
      - Unit test cases
      
      Relevant Issue - https://github.com/dmlc/tvm/issues/2351
      
      Credit to TFLite and GemmLowp for providing reference implementations.
      
      * Typo and lint fixes.
      
      * Doc fix.
      
      * Uncommenting the lint script (fixing mistake).
      
      * Modifying the unit tests.
      
      * Moving C++ files into src/relay/qnn
      
      * Moving python files to python/tvm/relay/qnn. Some minor fixes.
      
      * Moving the attrs.h inside the include directory.
      
      * Pushing files that I forgot earlier. Changing util location.
      
      * Incorporating comments. API change. Lint fixes.
      
      * Modifying the GetFixedPointMultiplierShift API as per comments.
      
      * Forgot the dialect change.
      
      * Changing rewrite to qnn_lower.
      
      * Renaming Quantize to Qnn for clarity.
      
      * Remove use_int_domain.
      
      * Incorporating review comments.
      
      * Adding API doc for QNN dialect.
      
      * Move the qnn_lower pass to transform namespace.
      
      * Moving from expr to module. Adding namespace in C++.
      
      * Minor sentence rewrites. Added qnn namespace.
      
      * Added the API doc.
      
      * Changing default out_dtype to int8. Adding a test with in/out_dtype as uint8.
      
      * Style fixes. Better error messages.
      
      * Adding documentation.
      
      * More documentation fixes.
      
      * Adding out dtype check for requantize.
      
      * Adding corner case for FP32 to fixed point conversion.
      
      * Adding extra line.
      
      * Documentation fix.
      
      * Adding static inline.
      
      * Incorporating jackwish comment. Removed idtype from requantize lowering.
      
      * Removing Quantize/Dequantize code. Restricting Requantize to (u)int8/int32.
      
      * Style fixes.
      
      * Fix the docs.
      
      * Move to Legalize API.
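      
      Two small sketches of the ideas above (assumptions: the float path follows
      the usual requantize definition, and the helper mirrors what a
      GetFixedPointMultiplierShift-style API computes, not the PR's exact code):
      
        import math
        import numpy as np
        
        def requantize_ref(q_in, in_scale, in_zp, out_scale, out_zp,
                           qmin=-128, qmax=127):
            # Floating point reference: rescale from the input to the output
            # quantization parameters. Note np.round rounds halves to even,
            # which matches neither FE_UPWARDS nor FE_AWAY_FROM_ZERO exactly.
            real = in_scale * (q_in.astype(np.int32) - in_zp)
            q = np.round(real / out_scale) + out_zp
            return np.clip(q, qmin, qmax).astype(np.int8)
        
        def fixed_point_multiplier_shift(ratio):
            # Express ratio as (multiplier / 2**31) * 2**shift, with the
            # multiplier a 32-bit integer, for the integer-only path.
            frac, shift = math.frexp(ratio)
            multiplier = int(round(frac * (1 << 31)))
            if multiplier == (1 << 31):  # rounding overflowed the mantissa
                multiplier //= 2
                shift += 1
            return multiplier, shift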
      Animesh Jain committed