  28 Mar, 2020 4 commits
    • [NODE][IR] Introduce StructuralHash for the Unified IR. (#5160) · 497d01d3
      * [NODE][IR] Introduce StructuralHash for the Unified IR.
      
      This PR introduces a new way to handle structural hash for the unified IR.
      
      - Each object can now register an optional SEqualHash function, which
        describes how to reduce its structural information to a sequence of hash values.
      - Optionally, the object can choose to allow labeling of vars (e.g. function
        parameters) by calling DefHash, as shown in the sketch after this list.
      - We implemented a non-recursive structural hasher that maintains its own stack
        to traverse the IR.
      
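      A minimal usage sketch (assuming the Python bindings tvm.ir.structural_hash
      and Relay's operator sugar; the variable names are illustrative):
      
        import tvm
        from tvm import relay
        
        x = relay.var("x", shape=(3,))
        y = relay.var("y", shape=(3,))
        f1 = relay.Function([x, y], x + y)
        
        a = relay.var("a", shape=(3,))
        b = relay.var("b", shape=(3,))
        f2 = relay.Function([a, b], a + b)
        
        # DefHash labels each parameter by definition order rather than by name
        # or identity, so the alpha-equivalent functions hash to the same value.
        assert tvm.ir.structural_hash(f1) == tvm.ir.structural_hash(f2)
      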
      This PR also improves on the hash properties of the previous Relay hash utility.
      In particular, the graph-node mode hashes a DAG differently from a tree
      by attaching a unique occurrence index to each graph node, as the sketch
      below demonstrates.
      
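      The DAG-versus-tree distinction can be seen directly (again assuming the
      tvm.ir Python bindings; this is an illustration, not a test from the PR):
      
        import tvm
        from tvm import relay
        
        x = relay.var("x", shape=(3,))
        shared = x + x
        dag = shared * shared       # one add node referenced twice (a DAG)
        tree = (x + x) * (x + x)   # two distinct, identical add nodes (a tree)
        
        # Each graph node receives an occurrence index, so shared and duplicated
        # subexpressions reduce to different hash sequences.
        assert tvm.ir.structural_hash(dag) != tvm.ir.structural_hash(tree)
      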
      In all of the test cases so far, structural_hash is consistent with
      structural_equal (see the sketch after this list):
      - if structural_equal(x, y), then structural_hash(x) == structural_hash(y)
      - if structural_hash(x) == structural_hash(y), then structural_equal(x, y)
        is highly likely; no hash collision was found in our test cases.
      
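      The forward direction can also be spot-checked on TIR, since the infra
      targets the unified IR (a sketch assuming the map_free_vars option on
      both Python bindings):
      
        import tvm
        from tvm import tir
        
        x = tir.Var("x", "int32")
        y = tir.Var("y", "int32")
        e1, e2 = x + 1, y + 1
        
        # Free vars only correspond when mapping is explicitly requested.
        if tvm.ir.structural_equal(e1, e2, map_free_vars=True):
            assert (tvm.ir.structural_hash(e1, map_free_vars=True)
                    == tvm.ir.structural_hash(e2, map_free_vars=True))
      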
      Ideally we should work on automatically generating these functions in the future.
      
      * Fix cases for EnvFunc and Array dims
      
      * fix testcase
      
      * Update src/node/structural_hash.cc
      
      Co-authored-by: 雾雨魔理沙 <lolisa@marisa.moe>
      Tianqi Chen committed
    • [NODE][IR] Introduce StructuralEqual Infra for the unified IR. (#5154) · 997a14ed
      * [NODE][IR] Introduce StructuralEqual Infra for the Unified IR.
      
      This PR introduces a new way to handle structural equality
      for both TIR and Relay nodes in an extensible way.
      
      - Each object can now register an optional SEqualReduce function, which
        describes how to reduce structural equality with another instance
        to the equality of their children.
      - Optionally, the object can choose to allow remapping of vars (e.g. function
        parameters) by calling DefEqual, as shown in the sketch after this list.
      - We implemented a non-recursive structural equality checker that traverses
        the objects with an explicit stack and performs the equality checking.
      
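      A minimal sketch of the parameter remapping (assuming the
      tvm.ir.structural_equal Python binding; variable names are illustrative):
      
        import tvm
        from tvm import relay
        
        a = relay.var("a", shape=(2,))
        b = relay.var("b", shape=(2,))
        f = relay.Function([a], a + a)
        g = relay.Function([b], b + b)
        
        # DefEqual maps a -> b at the parameter definition site, so the
        # alpha-equivalent functions compare equal despite distinct var objects.
        assert tvm.ir.structural_equal(f, g)
      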
      This PR also fixes a few potential problems in the previous Relay AlphaEqual.
      
      - In particular, the new structural equality relation is commutative.
      - It can be dangerous to use the same_as relation as a quick equality check,
        as the following case demonstrates; %x and %y are vars shared between the
        two functions (see the sketch below).
      
      - function0: fn (%x, %y) { %x + %y }
      - function1: fn (%y, %x) { %x + %y }
      
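      A sketch of that counterexample (assuming the tvm.ir Python bindings; it
      mirrors the two functions above):
      
        import tvm
        from tvm import relay
        
        x = relay.var("x", shape=(2,))
        y = relay.var("y", shape=(2,))
        f0 = relay.Function([x, y], x + y)   # fn (%x, %y) { %x + %y }
        f1 = relay.Function([y, x], x + y)   # fn (%y, %x) { %x + %y }
        
        # A same_as shortcut would map %x to itself and report equality.
        # structural_equal instead remaps params by definition order
        # (%x -> %y, %y -> %x), so the bodies no longer match.
        assert not tvm.ir.structural_equal(f0, f1)
      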
      The new structural equal is intended to supersede AlphaEqual and AttrsEqual.
      
      Follow-up PRs should be performed to redirect the existing usages, and then
      remove the corresponding implementations.
      
      * Update the rule to distinguish between graph node and non-graph nodes.
      
      * Refactor the test cases to use structural equal.
      
      * address comments
      
      * Mark more relay::Expr as graph nodes, and fix a test case issue (a bug that was not caught by the previous alpha equal)
      
      * Remove unrelated comment
      
      * Fix file comment
      
      * Address review comment
      
      * Relax condition to fit flaky case
      Tianqi Chen committed
    • [Relay][Frontend][Pytorch] Fixed ConvTranspose2D parsing (#5157) · 9c806621
      * Fixed conv transpose parsing.
      
      * small format change.
      
      * Change test module names.
      
      * Simplified test syntax.
      Josh Fromm committed
    • Adding support for QNN subtract op (#5153) · dada6761
      * Adding support for QNN subtract op
      
      * Fixing typo.
      
      * Fixing typo.
      
      * Fixing lint.
      
      * Addressing review comments.
      
      * Renaming variables as per convention, and renaming QnnBinaryOpTypes -> QnnBinaryOpType
      
      * Renaming QnnBinaryOpType to QnnBinaryOpTensorType, which now takes the index you want to extract, to make the code more readable.
      
      * Fixing lint.
      
      * Moving common code to macro.
      
      * Fixing alignment.
      
      * Fixing typo.
      
      * Fixing lint.
      
      * Renaming method to pass CI.
      shoubhik committed
  27 Mar, 2020 2 commits
    • [TOPI][Tensor Core] Conv2d and Dense ops support on Tensor Core (#5099) · f4286cc7
      * [TOPI][Tensor Core] Optimization of CNNs on Tensor Core #6004
      
      * update conv2d test
      
      * fix pylint issues in dense_tensorcore.py
      
      * modify
      
      * modify conv2d
      
      * modify the unclear comment, add shape assertion in conv2d compute, combine general gemm intrinsic
      
      * add shape assertion in conv2d compute, combine general gemm intrinsic
      
      Co-authored-by: libaihong <libaihong@inspur.com>
      Co-authored-by: libaihong <61525430+libaihong@users.noreply.github.com>
      Shawn-Inspur committed
    • [External Codegen] Fix annotate pass static variable (#5023) · 949dca4d
      'fannotate' in the annotate_target pass was declared static. This meant
      that if the pass was used to annotate more than one codegen, its value was
      not updated when the target changed, resulting in incorrect annotation
      (see the sketch below).
      
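      A toy Python model of the failure mode (not TVM code; _fannotate_cache,
      annotate_buggy, and registry are illustrative names):
      
        # Buggy version: the lookup is cached on first use, mimicking a C++
        # function-local static, so later targets reuse the first function.
        _fannotate_cache = None
        
        def annotate_buggy(expr, target, registry):
            global _fannotate_cache
            if _fannotate_cache is None:
                _fannotate_cache = registry[target]
            return _fannotate_cache(expr)
        
        def annotate_fixed(expr, target, registry):
            # Fix: resolve the annotation function on every call, so each
            # target gets its own function.
            return registry[target](expr)
      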
      Change-Id: Ib4f3af5cfbef44f29771818219755198ac313a0e
      mbaret committed