1. 30 Mar, 2020 1 commit
  2. 29 Mar, 2020 2 commits
  3. 28 Mar, 2020 2 commits
    • [NODE][IR] Introduce StructuralHash for the Unified IR. (#5160) · 497d01d3
      * [NODE][IR] Introduce StructuralHash for the Unified IR.
      
      This PR introduces a new way to handle structural hash for the unified IR.
      
      - Each object can now register an optional SEqualHash function, which
        describes how to reduce its structural hash to a sequence of hash values.
      - Optionally, the object can choose to allow labeling of vars (e.g. function parameters)
        by calling DefHash.
      - We implemented a non-recursive structural hasher that maintains its own stack
        to traverse the IR.
      
      This PR also improves the hash value properties over the previous relay hash utility.
      In particular, the graph node mode hashes a DAG differently from a tree
      by attaching a unique occurrence index to each graph node.
      
      In all of the test cases so far, structural_hash is consistent with structural_equal:
      - if structural_equal(x, y) then structural_hash(x) == structural_hash(y)
      - if structural_hash(x) == structural_hash(y) then it is highly likely that structural_equal(x, y)
        - no hash collision has been found in our test cases.
      
      Ideally we should work on automatically generating these functions in the future.
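
      As a minimal sketch of the consistency property listed above, the check below assumes
      the Python bindings tvm.ir.structural_equal and tvm.ir.structural_hash; the toy
      functions are illustrative and not taken from the PR's test suite.

        import tvm
        from tvm import relay

        # Two alpha-equivalent functions built from distinct var objects.
        x = relay.var("x", shape=(10,))
        f0 = relay.Function([x], relay.add(x, x))
        y = relay.var("y", shape=(10,))
        f1 = relay.Function([y], relay.add(y, y))

        # Structurally equal objects should hash to the same value.
        assert tvm.ir.structural_equal(f0, f1)
        assert tvm.ir.structural_hash(f0) == tvm.ir.structural_hash(f1)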
      
      * Fix cases for EnvFunc and Array dims
      
      * fix testcase
      
      * Update src/node/structural_hash.cc
      
      Co-authored-by: 雾雨魔理沙 <lolisa@marisa.moe>
      Tianqi Chen committed
    • [NODE][IR] Introduce StructuralEqual Infra for the unified IR. (#5154) · 997a14ed
      * [NODE][IR] Introduce StructuralEqual Infra for the Unified IR.
      
      This PR introduces a new way to handle structural equality
      for both TIR and relay nodes in an extensible way.
      
      - Each object can now register an optional SEqualReduce function, which
        describes how to reduce its structural equality with another instance
        into the equality of their children.
      - Optionally, the object can choose to allow remapping of vars (e.g. function parameters)
        by calling DefEqual.
      - We implemented a non-recursive structural equality checker that
        iteratively traverses the objects and performs the structural equality checking.
      
      This PR also fixes a few potential problems in the previous relay AlphaEqual.
      
      - In particular, the new structural equality relation will be commutative.
      - It can be dangerous to use the same_as relation as a quick equality check,
        as demonstrated by the following case, where (%x, %y) are vars shared between the two functions:
      
      - function0: fn (%x, %y) { %x + %y }
      - function1: fn (%y, %x) { %x + %y }
      
      The new structural equal is intended to supersede AlphaEqual and AttrsEqual.
      
      Follow-up PRs will redirect the existing usages and remove
      the corresponding implementations.
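
      A small sketch of the shared-var case above, assuming the tvm.ir.structural_equal
      binding; the functions are illustrative rather than taken from the PR.

        import tvm
        from tvm import relay

        x = relay.var("x", shape=(10,))
        y = relay.var("y", shape=(10,))
        body = relay.add(x, y)              # the very same body object is shared below

        f0 = relay.Function([x, y], body)   # fn (%x, %y) { %x + %y }
        f1 = relay.Function([y, x], body)   # fn (%y, %x) { %x + %y }

        # body.same_as(body) is trivially True, yet once the parameters are
        # remapped positionally the two functions are not structurally equal.
        assert not tvm.ir.structural_equal(f0, f1)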
      
      * Update the rule to distinguish between graph node and non-graph nodes.
      
      * Refactor the test cases to use structural equal.
      
      * address comments
      
      * Mark more relay::Expr as graph nodes, fix a test case issue (a bug that was not caught by the previous alpha equal)
      
      * Remove unrelated comment
      
      * Fix file comment
      
      * Address review comment
      
      * Relax condition to fit flaky case
      Tianqi Chen committed
  4. 24 Mar, 2020 2 commits
    • [REFACTOR][TIR] Introduce PrimFuncPass. (#5139) · 0a0e58bf
      * [REFACTOR][TIR] Introduce PrimFuncPass.
      
      - Introduce PrimFuncPass
      - Convert one pass to the unified Pass API.
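
      As a hedged sketch, a PrimFunc-level pass can be written from Python roughly as below,
      assuming the tvm.tir.transform.prim_func_pass decorator that accompanies this
      infrastructure; the identity pass is purely illustrative.

        import tvm

        @tvm.tir.transform.prim_func_pass(opt_level=0)
        def inspect_pass(func, mod, ctx):
            # A no-op PrimFuncPass: receives each tir.PrimFunc and returns it unchanged.
            print(func)
            return func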
      
      * Address comments
      
      * Fix comments
      Tianqi Chen committed
    • [Relay] GradientCell Relay Pass (#5039) · e6dd8e1e
      * save
      
      * gradient.rly
      
      * fix
      
      * NOT WORKING: gradient cell pass
      
      * test gradient pass
      
      * fixed basic call ops
      
      * more tests
      
      * fix bug
      
      * transform calls to ones, ones_like, zeros, zeros_like
      
      * maintenance stuff
      
      * fix linting
      
      * linting
      
      * linting
      
      * throw default
      
      * remove unrelated changes
      
      * import gradient.rly in pass
      
      * comment
      
      * linting
      
      * remove changes to test files
      
      * move gradient_cell.cc to transforms
      
      * revert change
      
      * update files with new commits
      
      * type
      
      * wrapper function to main outermost function type
      
      * fix linting
      
      * fix unsigned and signed int comparison
      
      * review
      
      * GetConstructor definition in module and change op comparison
      
      * update node instantiations
      
      * increase code readability
      
      Co-authored-by: Marisa Kirisame <lolisa@marisa.moe>
      Andrew Liu committed
  5. 23 Mar, 2020 3 commits
  6. 20 Mar, 2020 2 commits
  7. 18 Mar, 2020 2 commits
  8. 17 Mar, 2020 1 commit
    • [Relay, TF Frontend] Dilation2D operator support (#5033) · 646cfc63
      * update docs for dilation 2d
      
      * dilation2d compute
      
      * dilation2d register
      
      * dilation2d rel compute
      
      * dilation2d strategy
      
      * dilation2d attrs
      
      * dilation2d generic schedule
      
      * dilation2d tf frontend support
      
      * dilation2d tf frontend test case
      
      * dilation2d test cases
      
      * pylint fixes
      
      * add exception for cuda target
      
      * Update docstring
      
      * Update docstring
      
      * change rates to dilations
      
      * removed unused param
      
      * merge master
      
      * Update nn.py
      
      * Update nn.py
      Mahesh Ambule committed
  9. 15 Mar, 2020 2 commits
  10. 14 Mar, 2020 1 commit
    • [TIR] Introduce tir::PrimFunc (#5070) · e0316415
      This PR introduces tir::PrimFunc which will be used as the TIR function
      container in the unified IR.
      
      Also streamlined the function attributes a bit further.
      - All common attributes are under tvm::attr
      - TIR specific attributes are under tvm::tir::attr and come with a tir prefix
      - Use stl_style for attributes for now
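
      A small sketch of constructing a tir::PrimFunc from Python, assuming the
      tvm.tir.PrimFunc binding and the function attr interface from earlier commits;
      the trivial body is illustrative only.

        import tvm
        from tvm import tir

        x = tir.Var("x", "int32")
        func = tir.PrimFunc(params=[x], body=tir.Evaluate(x))
        # Attach a common attribute following the conventions described above.
        func = func.with_attr("global_symbol", "identity")
        print(func)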
      Tianqi Chen committed
  11. 12 Mar, 2020 2 commits
    • [Bugfix][IR][ATTRS] Fix AttrEqual for Array and StrMap, double (#5054) · 29a3d3a6
      - Use fuzzy comparison for double.
      - Removed the hack for BatchNormAttrs and DictAttr.
      
      Also removed a warning from the text printer.
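
      For illustration, a generic sketch of what a "fuzzy" floating-point comparison means;
      the tolerance and exact rule here are assumptions, not TVM's actual implementation.

        def almost_equal(a, b, rtol=1e-9, atol=0.0):
            # Relative/absolute tolerance check instead of bit-exact equality.
            return abs(a - b) <= atol + rtol * max(abs(a), abs(b))

        assert almost_equal(0.1 + 0.2, 0.3)       # bit-exact == would fail here
        assert not almost_equal(1.0, 1.0 + 1e-3)  # clearly different values stay unequal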
      Tianqi Chen committed
    • [REFACTOR] Streamline Function Attr interface. (#5045) · ec86d7f1
      * [REFACTOR] Streamline Function Attr interface.
      
      There have been quite a few recent changes that depend heavily on
      the function attr interface. This PR streamlines that interface by introducing
      two APIs that cover most of the usages.
      
      - GetAttr, which gets a typed object for a given key
        - HasNonzeroAttr is a helper that calls GetAttr to quickly check an attribute
      - WithAttr, which creates a new function object with the given attr
        - The API comes with a copy-on-write optimization to avoid multiple copies
        - We deliberately pick the prefix With (instead of Set) to indicate this
          function does not mutate the original input.
      
      On the python side:
      - We allow read access via func.attrs (which is a DictAttrs)
      - func.with_attrs creates a new instance with updated attrs.
      
      We also get rid of the small wrapper functions and make sure the API is centered around
      the GetAttr and HasNonzeroAttr interface.
      
      This PR also changes the function construction to follow the new convention.
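
      A sketch of the python-side usage described above, assuming the with_attr helper on
      function objects; the attribute key "Primitive" is chosen only as an example.

        import tvm
        from tvm import relay

        x = relay.var("x", shape=(1, 16))
        f = relay.Function([x], relay.nn.relu(x))

        # Returns a new function carrying the attribute; the original f is not mutated.
        f2 = f.with_attr("Primitive", 1)
        print(f2.attrs["Primitive"])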
      
      * Address review comments
      
      * Address review comments
      
      * Fix doxygen path
      Tianqi Chen committed
  12. 11 Mar, 2020 4 commits
  13. 10 Mar, 2020 3 commits
  14. 06 Mar, 2020 1 commit
    • [topi][relay] add operation tan to TVM (#4938) · d992468d
      * Add relay operation relay.op.tan.
      
      * Update tan implementation in TVM.
      
      * Update tests.
      
      * Add shape function for tan.
      
      * Add missing main test to python/frontend/tensorflow/test_forward.
      
      * Revert, back to sin/cos.
      
      * Revert "Revert, back to sin/cos."
      
      This reverts commit 4da5b503b921585ba9d80944b29136142b575c40.
      
      * Fix implementation of tan in cuda. Do not support tan for float16.
      
      Simplify topi/tests/python/test_topi_math. Add testing for tan with float32 and float64.
      
      Try again to implement tan as sin/cos in llvm.
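
      A sketch of the sin/cos formulation at the TE level, mirroring the llvm lowering
      mentioned above; it assumes the tvm.te API and is not the actual topi implementation.

        import numpy as np
        import tvm
        from tvm import te

        n = 1024
        A = te.placeholder((n,), name="A", dtype="float32")
        # tan(x) expressed as sin(x) / cos(x)
        B = te.compute((n,), lambda i: te.sin(A[i]) / te.cos(A[i]), name="tan_A")
        s = te.create_schedule(B.op)
        f = tvm.build(s, [A, B], target="llvm")

        a = tvm.nd.array(np.random.uniform(-1.0, 1.0, n).astype("float32"))
        b = tvm.nd.array(np.zeros(n, dtype="float32"))
        f(a, b)
        np.testing.assert_allclose(b.asnumpy(), np.tan(a.asnumpy()), rtol=1e-5, atol=1e-6)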
      Yao Wang committed
  15. 05 Mar, 2020 1 commit
  16. 03 Mar, 2020 1 commit
  17. 02 Mar, 2020 1 commit
  18. 01 Mar, 2020 1 commit
  19. 29 Feb, 2020 1 commit
    • Added CopyFromBytes and CopyToBytes convenience methods to NDArray. Fixed typos. (#4970) · 474c70d7
      * Added CopyFromBytes and CopyToBytes convenience methods.  Fixed typos.
      
      * Removed unneeded argument check
      
      * Use TVMArrayCopyFrom/ToBytes methods
      
      * Moved CopyFrom/ToBytes to ndarray.cc
      
      * CopyToBytes impl was using CopyFromBytes.  Fixed
      
      * changed inline to TVM_DLL
      
      * Used impl from TVMArrayCopyTo/FromBytes into NDArray CopyTo/FromBytes
      
      * Move implementation of all CopyFrom/ToBytes into a common impl
      
      * make arg const
      
      * simplify method impl
      jmorrill committed
  20. 27 Feb, 2020 1 commit
  21. 26 Feb, 2020 1 commit
  22. 24 Feb, 2020 1 commit
    • [Relay][AutoTVM] Relay op strategy (#4644) · 623dd208
      * relay op strategy
      
      fix lint
      
      bitpack strategy
      
      bitserial_dense (#6)
      
      * update strategy
      
      * address comments
      
      fix a few topi test
      
      Dense strategy (#5)
      
      * dense
      
      * add bifrost; remove comments
      
      * address comment
      
      Refactor x86 conv2d_NCHWc (#4)
      
      * Refactor x86 conv2d
      
      * Add x86 depthwise_conv2d_NCHWc
      
      * Add back topi x86 conv2d_nchw
      
      * Merge x86 conv2d_nchw and conv2d_NCHWc
      
      * Minor fix for x86 conv2d
      
      fix more strategy
      
      Add x86 conv2d_NCHWc_int8 strategy (#8)
      
      * Add x86 conv2d_NCHWc_int8 strategy
      
      * Remove contrib_conv2d_nchwc_int8
      
      * Fix generic conv2d_NCHWc for int8
      
      * Fix topi arm_cpu conv2d_NCHWc_int8
      
      update x86 conv2d
      
      enable specifying relay ops to be tuned for autotvm
      
      add cuda conv2d strategy
      
      add conv2d strategy for rocm
      
      add conv2d strategy for hls
      
      add conv2d strategy for arm cpu
      
      add conv2d strategy for mali
      
      add conv2d strategy for bifrost
      
      add conv2d strategy for intel graphics
      
      clean up and fix lint
      
      remove template keys from autotvm
      
      remove 2 in the func name
      
      address comments
      
      fix
      
      * fix bugs
      
      * lint
      
      * address comments
      
      * add name to op implement
      
      * Modify topi tests (#9)
      
      * Add pooling, reorg, softmax and vision
      
      * Add lrn
      
      * fix topi test
      
      * fix more topi test
      
      * lint
      
      * address comments
      
      * x
      
      * fix more tests & bugs
      
      * Modify more tests (#10)
      
      * Modify tests for bitserial_conv2d, bitserial_dense, bitserial_conv2d_rasp and bnn
      
      * Minor fix
      
      * More minor fix
      
      * fix more test
      
      * try to update vta using strategy
      
      * fix cpptest
      
      * x
      
      * fix rebase err
      
      * Fix two tests (#11)
      
      * change autotvm log format
      
      * lint
      
      * minor fix
      
      * try fix vta test
      
      * fix rebase err
      
      * tweak
      
      * tmp hack for vta pass
      
      * fix tutorial
      
      * fix
      
      * fix more tutorials
      
      * fix vta tutorial
      
      * minor
      
      * address comments
      
      * fix
      
      * address comments
      
      * fix cpptest
      
      * fix docs
      
      * change data structure name and api
      
      * address comments
      
      * lint
      
      * fix rebase err
      
      * updates
      
      * fix winograd test
      
      * fix doc
      
      * rebase
      
      * upgrade tophub version number
      
      * fix bug
      
      * re-enable vta tsim test after tophub is upgraded
      
      * fix vta test to use the correct args so the config can be found in tophub
      
      Co-authored-by: Yao Wang <kevinthesunwy@gmail.com>
      Haichen Shen committed
  23. 21 Feb, 2020 1 commit
    • [CODEGEN] Support cuda tensorcore subbyte int data type in auto tensorcore (#4546) · f23ac969
      * support cuda tensorcore subbyte int data type in auto tensorcore
      
      * add license
      
      * pass cpplint
      
      * fix code review comments
      
      * merge the int4/int1 codegen tutorial into the existing auto tensorcore tutorial
      
      * using master's new API
      
      * disable tuning when cuda is not enabled
      
      * address cr comment
      
      * do not run the tuning
      
      * fix test failure
      
      * fix cpplint error
      
      * fix bool type reduction bug
      
      * 1. fix an index bug 2. fix returned bytes value of int1/int4/uint4
      
      * fix typo
      Orion34C committed
  24. 12 Feb, 2020 1 commit
    • [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding files (#4862) · a5661611
      * [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding relay files.
      
      This PR establishes tvm.ir and migrates the corresponding relay
      files into the new folder.
      
      API Change:
      - relay.Module -> tvm.IRModule
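
      A small sketch of the renamed entry point; the function body is an arbitrary example.

        import tvm
        from tvm import relay

        x = relay.var("x", shape=(3,))
        f = relay.Function([x], relay.abs(x))
        mod = tvm.IRModule.from_expr(f)   # previously constructed via relay.Module
        print(mod["main"])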
      
      * Update with ADT
      
      * Migrate transform
      
      * address comments
      
      * Migrate module
      
      * Migrate json_compact
      
      * Migrate attrs
      
      * Move LoweredFunc to stmt temporarily
      
      * temp migrate container
      
      * Finish migrate container
      Tianqi Chen committed
  25. 11 Feb, 2020 1 commit
  26. 10 Feb, 2020 1 commit
    • [Relay] Added Merge Composite pass (#4771) · 0dbe70c1
      * [Relay] Added MergeComposite pass
      
      This pass allows patterns to be wrapped in a
      function marked with a 'Composite' attribute
      carrying the composite function name. It is
      intended to be used with external codegen for
      cases where an external operator maps to multiple
      Relay operators. In that case, the mapping can be
      expressed as a pattern and assigned a name.
      
      For more information on this pass and its motivation,
      see the RFC:
      https://discuss.tvm.ai/t/rfc-external-codegen-defining-composite-relay-operators/5470
      
      Change-Id: Icb1b803a9f0ac57c529143200228f3bb5793afc0
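
      A sketch of the pattern-table idea, assuming the relay.transform.MergeComposite pass
      with patterns expressed as small Relay expressions as in this PR; the composite name
      "my_codegen.conv2d_bias" and the shapes are made up, and the pattern-table format may
      differ in other TVM versions.

        import tvm
        from tvm import relay

        def conv2d_bias_pattern():
            x = relay.var("x")
            w = relay.var("w")
            b = relay.var("b")
            return relay.nn.bias_add(relay.nn.conv2d(x, w), b)

        # Earlier entries take priority when several patterns could match.
        pattern_table = [("my_codegen.conv2d_bias", conv2d_bias_pattern())]

        data = relay.var("data", shape=(1, 3, 224, 224))
        weight = relay.var("weight", shape=(16, 3, 3, 3))
        bias = relay.var("bias", shape=(16,))
        out = relay.nn.bias_add(relay.nn.conv2d(data, weight), bias)
        mod = tvm.IRModule.from_expr(relay.Function([data, weight, bias], out))

        # conv2d + bias_add is now wrapped in a function tagged Composite="my_codegen.conv2d_bias".
        mod = relay.transform.MergeComposite(pattern_table)(mod)
        print(mod["main"])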
      
      * [Relay] Merge composite tests
      
      Added tests for the merge_composite pass.
      
      Change-Id: I1728b4a05b0c1c36140a40f1afe028fde62185dd
      
      * Merge composite additional test
      
      Change-Id: I9bc7d6053c575e9468ac5abc31214c6ad8507e46
      
      * Support priority order in merge_composite
      
      The order in which the patterns were matched
      was previously non-deterministic, as an unordered_map
      was used to store the pattern table. This change uses
      arrays instead so that a distinct priority order of
      matching can be defined. Additional tests have also
      been added to verify this behaviour.
      
      Change-Id: Ief347df4262639138d5d9d7c8cee7ef233af7b56
      
      * Improved merge composite docs
      
      Change-Id: Ie3a72045ecc3f13ad3c302fbdf192b7296a306a8
      
      * Removed unused variable
      
      Change-Id: I7814d5fde368ffaf1b3d6d806060c774c7720364
      
      * Remove unnecessary op check
      
      Change-Id: I38e78d2acd5b86cb8e837be72ff9d72cd10bcf33
      
      * Improve styling on composite function creation
      
      Change-Id: I37add1c3134e0b5d5085fe1eb9daf8e06890fa8c
      
      * Comment reword
      
      Change-Id: Ie05872dcbbe0c3e1190b0597083b9a64e6b66c66
      
      * Stylistic changes to avoid std::move
      
      Change-Id: I43a93995bbf10530399900c992aa99dd4ae4575f
      
      * Relax a check in ExtractPattern
      
      Change-Id: I0faef77a66c55f83f09e6e47c561ffaea63dedfa
      
      * Remove new line
      
      Change-Id: Ifdd02c12087a7e1a0a9b54825669bc0de8f13c3d
      
      * Removed MatchPattern from MergeComposite
      
      This is not necessary now that ExtractPattern
      can fulfill the same purpose.
      
      Change-Id: I14dc020afa8e50f2df4c0a2efb88a011987f8196
      
      * Removed a new line
      
      Change-Id: I8b50f0c9069aa1bcaccbe68eb421031f01a64842
      
      * Improved docs for merge composite
      
      Change-Id: Ib1959a35c856e7ea5639de2e4ef314a54f44caf5
      
      * Fixed free vars in test
      
      Change-Id: I2b7f273db275964ec0e9820560663f0808adee79
      
      * Handle case where root arg might not be a call
      
      Change-Id: I4eeea3ce723d3ba337d110dcc690377daebe8626
      
      * Removed blank line
      
      Change-Id: I07f5392c0e95cfe3cfa5c333703cc6f82d6034fb
      
      * Change to CHECK_EQ
      
      Change-Id: I5c5d62d3cd57f72508b30b926f72091ae6f0d1cc
      
      * Revised a conditional
      
      Change-Id: I23a7897ca15a7cd076db5039dc653a4b8c27e803
      
      * Improved doc styling
      
      Change-Id: I377f0a1c1ac70f3b8d7584b0c49bddc8c6c134ef
      
      * Fail extraction if vars conflict
      
      Change-Id: I78e36d805e8ed6b55e61d490212a967c857554a4
      
      * Added further merge composite tests
      
      Change-Id: Ib1d800409fca4c1834c7fe0cab5a26ab99a26820
      
      Co-authored-by: lhutton1 <35535092+lhutton1@users.noreply.github.com>
      mbarrett97 committed