1. 28 Mar, 2020 4 commits
    • [NODE][IR] Introduce StructuralHash for the Unified IR. (#5160) · 497d01d3
      * [NODE][IR] Introduce StructuralHash for the Unified IR.
      
      This PR introduces a new way to handle structural hash for the unified IR.
      
      - Each object can now register an optional SEqualHash function, which
        describes how to reduce its structural hash to a sequence of hash values.
      - Optionally, the object can choose to allow labeling of vars (e.g. function parameters)
        by calling DefHash
      - We implemented a non-recursive structural hasher that maintains its own stack
        to traverse the IR.
      
      This PR also improves on the hash value property of the previous Relay hash utility.
      In particular, the graph-node mode hashes a DAG differently from a tree
      by attaching a unique occurrence index to each graph node.
      
      In all of the test cases so far, structural_hash is consistent with structural_equal
      (see the sketch below):
      - if structural_equal(x, y) then structural_hash(x) == structural_hash(y)
      - if structural_hash(x) == structural_hash(y) then structural_equal(x, y) is highly likely
        - no hash collision was found in our test cases.
      
      Ideally we should work on automatically generating these functions in the future.
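
      The consistency property can be checked from Python; a minimal sketch, assuming the
      tvm.ir.structural_hash / tvm.ir.structural_equal bindings exposed by this PR and its
      companion #5154:

          import tvm
          from tvm import te

          # Two expressions that differ only in the names of their free vars.
          x = te.var("x")
          y = te.var("y")
          a = x + 1
          b = y + 1

          # map_free_vars=True lets free vars be labeled by occurrence order,
          # mirroring the DefHash/DefEqual treatment of bound parameters.
          assert tvm.ir.structural_equal(a, b, map_free_vars=True)
          assert tvm.ir.structural_hash(a, map_free_vars=True) == \
                 tvm.ir.structural_hash(b, map_free_vars=True)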
      
      * Fix cases for EnvFunc and Array dims
      
      * fix testcase
      
      * Update src/node/structural_hash.cc
      
      Co-authored-by: 雾雨魔理沙 <lolisa@marisa.moe>
      Tianqi Chen committed
    • [NODE][IR] Introduce StructuralEqual Infra for the unified IR. (#5154) · 997a14ed
      * [NODE][IR] Introduce StructuralEqual Infra for the Unified IR.
      
      This PR introduces a new, extensible way to handle structural equality
      for both TIR and Relay nodes.
      
      - Each object can now register an optional SEqualReduce function, which
        describes how to reduce the structural equality check against another instance
        to equality checks of its children.
      - Optionally, the object can choose to allow remapping of vars (e.g. function parameters)
        by calling DefEqual
      - We implemented a non-recursive structural equality checker that
        iteratively traverses the objects and performs the structural equality checking.
      
      This PR also fixes a few potential problems in the previous Relay AlphaEqual.

      - In particular, the new structural equality relation is commutative.
      - It can be dangerous to use the same_as relation as a quick equality check,
        as demonstrated by the following case, where (%x, %y) are vars shared between
        the two functions (see the sketch below).
      
      - function0: fn (%x, %y) { %x + %y }
      - function1: fn (%y, %x) { %x + %y }
      
      The new structural equal is intended to supersede AlphaEqual and AttrsEqual.
      
      Follow-up PRs should redirect the existing usages and remove
      the corresponding implementations.
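
      A minimal sketch of the shared-var case above, assuming the tvm.ir.structural_equal
      binding introduced by this PR (names and shapes are illustrative):

          import tvm
          from tvm import relay

          # %x and %y are the *same* Var objects shared by both functions, so a
          # same_as-based comparison of the parameter lists would look "equal".
          x = relay.var("x", shape=(), dtype="float32")
          y = relay.var("y", shape=(), dtype="float32")
          body = x + y

          f0 = relay.Function([x, y], body)  # fn (%x, %y) { %x + %y }
          f1 = relay.Function([y, x], body)  # fn (%y, %x) { %x + %y }

          # With DefEqual-style var remapping, the bodies resolve to
          # param0 + param1 vs. param1 + param0, so the functions differ.
          assert not tvm.ir.structural_equal(f0, f1)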
      
      * Update the rule to distinguish between graph nodes and non-graph nodes.
      
      * Refactor the test cases to use structural equal.
      
      * address comments
      
      * Mark more relay::Expr as graph nodes; fix a test case issue (a bug that was not caught by the previous alpha equal)
      
      * Remove unrelated comment
      
      * Fix file comment
      
      * Address review comment
      
      * Relax condition to fit flaky case
      Tianqi Chen committed
    • [Relay][Frontend][Pytorch] Fixed ConvTranspose2D parsing (#5157) · 9c806621
      * Fixed conv transpose parsing.
      
      * small format change.
      
      * Change test module names.
      
      * Simplified test syntax.
      Josh Fromm committed
    • Adding support for QNN subtract op (#5153) · dada6761
      * Adding support for QNN subtract op
      
      * Fixing typo.
      
      * Fixing typo.
      
      * Fixing lint.
      
      * Addressing review comments.
      
      * Renaming variables as per convention and renaming QnnBinaryOpTypes -> QnnBinaryOpType
      
      * Renaming QnnBinaryOpType to QnnBinaryOpTensorType, which now takes the index of the tensor you want to extract, to make the code more readable.
      
      * Fixing lint.
      
      * Moving common code to macro.
      
      * Fixing alignment.
      
      * Fixing typo.
      
      * Fixing lint.
      
      * Renaming method to pass CI.
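
      For reference, a hypothetical usage sketch from the Relay Python API, assuming
      relay.qnn.op.subtract mirrors the existing relay.qnn.op.add signature (all
      quantization parameters below are illustrative values):

          import tvm
          from tvm import relay

          a = relay.var("a", shape=(1, 4), dtype="uint8")
          b = relay.var("b", shape=(1, 4), dtype="uint8")

          # Scales and zero points are passed as Relay constants.
          out = relay.qnn.op.subtract(
              lhs=a, rhs=b,
              lhs_scale=relay.const(0.125, "float32"),
              lhs_zero_point=relay.const(0, "int32"),
              rhs_scale=relay.const(0.125, "float32"),
              rhs_zero_point=relay.const(0, "int32"),
              output_scale=relay.const(0.125, "float32"),
              output_zero_point=relay.const(0, "int32"),
          )
          func = relay.Function([a, b], out)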
      shoubhik committed
  2. 27 Mar, 2020 3 commits
  3. 26 Mar, 2020 6 commits
  4. 25 Mar, 2020 2 commits
  5. 24 Mar, 2020 8 commits
  6. 23 Mar, 2020 9 commits
  7. 22 Mar, 2020 5 commits
  8. 21 Mar, 2020 3 commits