  1. 01 Mar, 2020 1 commit
  2. 28 Feb, 2020 1 commit
  3. 27 Feb, 2020 3 commits
    • [DOCS] Sphinx -- Introduce alias detection. (#4954) · 1dbdcfb5
      * [DOCS] Sphinx -- Introduce alias detection.
      
      Background: some of our namespaces import functions from another
      namespace. For example, tvm.te imports most of the operators from tvm.tir.
      
      Previously we manually excluded these aliases from the docs.
      However, that means we cannot link to them by their alias names.
      
      This PR adds a Sphinx callback plugin to detect such aliases and create a rubric block
      at the bottom of the alias's docstring: `Alias of the original class`.
      It is done in a way that lets us refer to the generated docs.
      
      We also fixed a few docs errors.
      
      * Fix most of the issues
      Tianqi Chen committed
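      
      A minimal sketch of the alias-detection idea, assuming it is wired through Sphinx's `autodoc-process-docstring` event; the actual plugin added by the PR may hook Sphinx differently, and the `detect_alias` name is hypothetical.
      
      ```python
      # conf.py -- minimal sketch (assumption: the real plugin may differ)
      def detect_alias(app, what, name, obj, options, lines):
          """Append a rubric when the documented name is a re-export."""
          canonical_module = getattr(obj, "__module__", None)
          canonical_name = getattr(obj, "__name__", None)
          if canonical_module and canonical_name and not name.startswith(canonical_module):
              # The documented path (e.g. tvm.te.exp) differs from where the
              # object actually lives (e.g. tvm.tir): mark it as an alias.
              lines.append("")
              lines.append(".. rubric:: Alias of :py:obj:`%s.%s`" % (canonical_module, canonical_name))
      
      def setup(app):
          app.connect("autodoc-process-docstring", detect_alias)
      ```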
    • [REFACTOR][PY][API-CHANGE] Remove legacy python files. (#4943) · 9816efc2
      * [REFACTOR][PY][API-CHANGE] Remove legacy python files.
      
      Remove legacy python files.
      Use the te namespace for most of the tensor expression primitives.
      
      - tvm.create_schedule -> tvm.te.create_schedule
      - tvm.placeholder -> tvm.te.placeholder
      - tvm.compute -> tvm.te.compute
      
      * Remove top-level exposures.
      Tianqi Chen committed
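      
      A quick before/after sketch of the renames listed above (the toy shapes are illustrative only):
      
      ```python
      import tvm
      from tvm import te
      
      # After this change, TE primitives live under tvm.te:
      A = te.placeholder((1024,), name="A")                     # was tvm.placeholder
      B = te.compute((1024,), lambda i: A[i] + 1.0, name="B")   # was tvm.compute
      s = te.create_schedule(B.op)                              # was tvm.create_schedule
      ```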
    • [TUTORIAL] Fix tedd tutorial after strategy change (#4947) · c9be16bd
      * [TUTORIAL] Fix tedd tutorial after strategy change
      
      * Remove scale, remove link to external gdoc
      Tianqi Chen committed
  4. 26 Feb, 2020 3 commits
    • [Tutorial] Add a tutorial for PyTorch (#4936) · 61bea507
      * Add a tutorial for PyTorch
      
      * Fix sphinx formatting, add version support
      
      * Remove space
      
      * Remove version check
      
      * Some refactoring
      
      * Use no grad
      
      * Rename input
      
      * Update cat img source
      Alex Wong committed
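      
      A hedged sketch of the flow the new tutorial covers; the input name "input0" and the toy model choice are assumptions, and the frontend argument name can vary across TVM versions.
      
      ```python
      import torch
      import torchvision
      from tvm import relay
      
      # Trace a torchvision model under no_grad, as the commit log notes.
      model = torchvision.models.resnet18(pretrained=True).eval()
      example = torch.randn(1, 3, 224, 224)
      with torch.no_grad():
          scripted = torch.jit.trace(model, example)
      
      # Hand the TorchScript module to the Relay PyTorch frontend.
      shape_list = [("input0", (1, 3, 224, 224))]   # assumed input name
      mod, params = relay.frontend.from_pytorch(scripted, shape_list)
      ```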
    • [DOCS] Fix Sphinx warning: more than one target found for cross-reference (#4925) · b598c545
      * [DOCS] Fix Sphinx warnings: "more than one target found for cross-reference"
      
      * Fix the warning: undefined label
      Neo Chien committed
    • Tensor Expression Debug Display (TEDD) (#4651) · b0b1e7da
      * Initial TEDD for publishing.
      
      * 1. Fix lint issues. 2. Print intrin.body instead of intrin.name in Schedule Tree.  3. Add examples to top level APIs' comments.  4. Top level APIs don't print Dot string by default, unless outputdotstring is True.
      
      * Fix more lint issues.
      
      * Update top level API argument names and use raw strings to avoid Python lint warnings in the tests.
      
      * Disable TEDD verification, but keep TE construction.
      
      * Stop importing tedd to avoid failure.
      
      * Separate data extraction and visualization. 1. Add API tedd.dump_json(schedule) to dump a json string for the schedule data for visualization.  2. Update tests.  3. Add a tutorial.  4. Add range information to IterVars.
      
      * Update TEDD about InferBound failure.  1. TEDD doesn't call inferbound for DFG. 2. Update tutorial about the InferBound failure.
      
      * 1. Import IPython only if SVG is requested.  This is required to fix a tutorial publishing failure.  2. Fix the test for the IPython availability check.
      yongfeng-nv committed
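      
      A hedged usage sketch of the visualization APIs this PR introduces; function and keyword names follow the tutorial added here (e.g. `dot_file_path`) and may differ in later releases.
      
      ```python
      from tvm import te
      from tvm.contrib import tedd
      
      # Build a small schedule to visualize.
      A = te.placeholder((1024, 1024), name="A")
      B = te.placeholder((1024, 1024), name="B")
      k = te.reduce_axis((0, 1024), name="k")
      C = te.compute((1024, 1024),
                     lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
      s = te.create_schedule(C.op)
      
      # Dump Graphviz dot files for the dataflow graph and the schedule tree.
      tedd.viz_dataflow_graph(s, dot_file_path="dfg.dot")
      tedd.viz_schedule_tree(s, dot_file_path="scheduletree.dot")
      ```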
  5. 24 Feb, 2020 1 commit
    • [Relay][AutoTVM] Relay op strategy (#4644) · 623dd208
      * relay op strategy
      
      fix lint
      
      bitpack strategy
      
      bitserial_dense (#6)
      
      * update strategy
      
      * address comments
      
      fix a few topi tests
      
      Dense strategy (#5)
      
      * dense
      
      * add bifrost; remove comments
      
      * address comment
      
      Refactor x86 conv2d_NCHWc (#4)
      
      * Refactor x86 conv2d
      
      * Add x86 depthwise_conv2d_NCHWc
      
      * Add back topi x86 conv2d_nchw
      
      * Merge x86 conv2d_nchw and conv2d_NCHWc
      
      * Minor fix for x86 conv2d
      
      fix more strategy
      
      Add x86 conv2d_NCHWc_int8 strategy (#8)
      
      * Add x86 conv2d_NCHWc_int8 strategy
      
      * Remove contrib_conv2d_nchwc_int8
      
      * Fix generic conv2d_NCHWc for int8
      
      * Fix topi arm_cpu conv2d_NCHWc_int8
      
      update x86 conv2d
      
      enable specifying relay ops to be tuned for autotvm
      
      add cuda conv2d strategy
      
      add conv2d strategy for rocm
      
      add conv2d strategy for hls
      
      add conv2d strategy for arm cpu
      
      add conv2d strategy for mali
      
      add conv2d strategy for bifrost
      
      add conv2d strategy for intel graphics
      
      clean up and fix lint
      
      remove template keys from autotvm
      
      remove 2 in the func name
      
      address comments
      
      fix
      
      * fix bugs
      
      * lint
      
      * address comments
      
      * add name to op implement
      
      * Modify topi tests (#9)
      
      * Add pooling, reorg, softmax and vision
      
      * Add lrn
      
      * fix topi tests
      
      * fix more topi tests
      
      * lint
      
      * address comments
      
      * x
      
      * fix more tests & bugs
      
      * Modify more tests (#10)
      
      * Modify tests for bitserial_conv2d, bitserial_dense, bitserial_conv2d_rasp and bnn
      
      * Minor fix
      
      * More minor fix
      
      * fix more tests
      
      * try to update vta using strategy
      
      * fix cpptest
      
      * x
      
      * fix rebase err
      
      * Fix two tests (#11)
      
      * change autotvm log format
      
      * lint
      
      * minor fix
      
      * try fix vta test
      
      * fix rebase err
      
      * tweak
      
      * tmp hack for vta pass
      
      * fix tutorial
      
      * fix
      
      * fix more tutorials
      
      * fix vta tutorial
      
      * minor
      
      * address comments
      
      * fix
      
      * address comments
      
      * fix cpptest
      
      * fix docs
      
      * change data structure name and api
      
      * address comments
      
      * lint
      
      * fix rebase err
      
      * updates
      
      * fix winograd test
      
      * fix doc
      
      * rebase
      
      * upgrade tophub version number
      
      * fix bug
      
      * re-enable vta tsim test after tophub is upgraded
      
      * fix vta test to use the correct args so the config can be found in tophub
      
      Co-authored-by: Yao Wang <kevinthesunwy@gmail.com>
      Haichen Shen committed
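      
      A hedged sketch of the registration pattern this PR introduces: an `OpStrategy` collects (compute, schedule) implementations per target. The target key "mydevice" is hypothetical, and module paths follow the code as of this PR (including the standalone `topi` package) and may have moved since.
      
      ```python
      import topi
      from tvm.relay.op import op as _op
      from tvm.relay.op import strategy as _strategy
      
      @_strategy.conv2d_strategy.register("mydevice")   # hypothetical target key
      def conv2d_strategy_mydevice(attrs, inputs, out_type, target):
          """Pick conv2d implementations for the made-up 'mydevice' target."""
          strategy = _op.OpStrategy()
          strategy.add_implementation(
              _strategy.wrap_compute_conv2d(topi.nn.conv2d_nchw),
              _strategy.wrap_topi_schedule(topi.generic.schedule_conv2d_nchw),
              name="conv2d_nchw.generic",
          )
          return strategy
      ```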
  6. 21 Feb, 2020 1 commit
    • [CODEGEN] Support cuda tensorcore subbyte int data type in auto tensorcore (#4546) · f23ac969
      * support cuda tensorcore subbyte int data type in auto tensorcore
      
      * add license
      
      * pass cpplint
      
      * fix code review comments
      
      * merge the int4/int1 codegen tutorial into the existing auto tensorcore tutorial
      
      * using master's new API
      
      * disable tuning when cuda is not enabled
      
      * address cr comment
      
      * do not run the tuning
      
      * fix test failure
      
      * fix cpplint error
      
      * fix bool type reduction bug
      
      * 1. fix an index bug 2. fix the returned bytes value of int1/int4/uint4
      
      * fix typo
      Orion34C committed
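      
      A hedged sketch of what "subbyte int data type" means at the TE level: placeholders and computes can carry int4/int1 element types that this codegen change understands; the int32 accumulation here is illustrative only.
      
      ```python
      from tvm import te
      
      m, n, k = 16, 16, 32
      A = te.placeholder((m, k), name="A", dtype="int4")   # sub-byte operand
      B = te.placeholder((n, k), name="B", dtype="int4")   # sub-byte operand
      rk = te.reduce_axis((0, k), name="rk")
      C = te.compute(
          (m, n),
          lambda i, j: te.sum(A[i, rk].astype("int32") * B[j, rk].astype("int32"), axis=rk),
          name="C",
      )
      ```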
  7. 20 Feb, 2020 1 commit
  8. 17 Feb, 2020 1 commit
  9. 14 Feb, 2020 1 commit
  10. 12 Feb, 2020 1 commit
    • [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding files (#4862) · a5661611
      * [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding relay files.
      
      This PR establishes tvm.ir and migrates the corresponding relay
      files into the new folder.
      
      API Change:
      - relay.Module -> tvm.IRModule
      
      * Update with ADT
      
      * Migrate transform
      
      * address comments
      
      * Migrate module
      
      * Migrate json_compact
      
      * Migrate attrs
      
      * Move LoweredFunc to stmt temporarily
      
      * temp migrate container
      
      * Finish migrate container
      Tianqi Chen committed
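      
      A short before/after sketch of the API change called out above:
      
      ```python
      import tvm
      from tvm import relay
      
      x = relay.var("x", shape=(1, 16))
      f = relay.Function([x], relay.nn.relu(x))
      
      # Before: mod = relay.Module.from_expr(f)
      mod = tvm.IRModule.from_expr(f)   # after the refactor
      print(mod)
      ```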
  11. 07 Feb, 2020 2 commits
  12. 27 Jan, 2020 1 commit
  13. 11 Jan, 2020 1 commit
  14. 28 Dec, 2019 1 commit
  15. 26 Dec, 2019 1 commit
  16. 23 Dec, 2019 1 commit
  17. 15 Dec, 2019 1 commit
  18. 26 Nov, 2019 1 commit
  19. 22 Nov, 2019 1 commit
  20. 18 Nov, 2019 1 commit
  21. 14 Nov, 2019 1 commit
  22. 12 Nov, 2019 1 commit
    • Fix the TF tutorial to run against TF2.0 and TF1.x (#4104) · e541c758
      * WIP Run the TF tutorial on TF2
      
      * Remove debugger statement.
      
      * Complete the support for TF2.0's `resize`.
      
      TF2.0 adds a `half_pixel_centers` attribute to the `resize` function in
      the image API. This commit completes the hooks in Relay's TF frontend.
      
      As of this commit, there are no new tests yet. Also, this commit
      addresses solely the `resize` change. Other commits address other
      changes in TF2.0.
      
      * Support TF2.0 in the tutorial by using the compat API.
      
      This looks cleaner than trying to detect the TF version.
      
      * Use the TF compat API, so as to support TF2.0.
      
      This is a direct change, relying on the compat API provided by the TF
      team.
      
      This code will last as long as the compat API exists, so "proper"
      support for TF1.x and 2.x will require more work in the future.
      
      * Partial support for EXPLICIT padding introduced in TF2.0.
      
      Explicit padding is a special case in TF2.0 (see reference linked
      below). Some models are serialized with that mode, and break TF support
      in TVM.
      
      Support is *partial*, as EXPLICIT falls back to setting padding on the
      Relay op, which only supports 2 values. At some point, padding may need
      to be extended to support 4 values, but that is out of scope of this
      commit.
      
      Reference on EXPLICIT padding: https://github.com/tensorflow/tensorflow/commit/ec81825aaf7e848d9f8ddffdf1e0d20aebe9172c#diff-1d1c0bb0a880f85b6164f71dbb2f446e
      
      * Guard on checking for optional TF2.0 attribute.
      
      * Do not expect Relay to implement TF-specific attributes.
      
      The `half_pixel_centers` attribute is a new feature in TF2.0. Earlier
      commits of mine mistakenly introduced it in the Relay API. This is
      probably not what Relay is expected to support, and the semantics of
      `half_pixel_centers` are unclear (to me, at least) at this point.
      
      * Remove unclear comment.
      
      CR https://github.com/dmlc/tvm/pull/4104#discussion_r338705742
      
      Addresses #4104
      
      * Changes after review.
      
      Complying without understanding the rationale for now.
      
      * Fix arguments that were set mistakenly.
      
      An argument was ignored for the wrong operation.
      Eric Platon committed
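      
      A hedged sketch of the compat pattern the tutorial adopts, so the same TF1-style calls run under both TF1.x and TF2.0:
      
      ```python
      import tensorflow as tf
      
      # Alias the TF1-style namespace; fall back to the top-level module on TF1.x.
      try:
          tf_compat_v1 = tf.compat.v1
      except ImportError:
          tf_compat_v1 = tf
      
      # A TF1-style call made through the alias works on both versions.
      graph_def = tf_compat_v1.GraphDef()
      ```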
  23. 11 Nov, 2019 1 commit
  24. 09 Nov, 2019 1 commit
    • Auto TensorCore CodeGen (#4234) · d64bf6b5
      * Add Auto TensorCore Unit Test
      
      * Rebase to tvm master branch & Add auto tensor core
      
      * Code Refine
      
      * Add tensor core switch by pragma
      
      * Add pragma in tensor core example code
      
      * Get real tile size to replace hard coded 16
      
      * support more than 2 dimensions (e.g. batchmatmul) for buffer bind scope
      
      * support batch matmul
      
      * Move cuda env check to tensor_core.cc
      
      * Code refine for tensor_core.cc
      
      * Refine comments
      
      * Some refinements of code and comment
      
      * Update TensorCore UT to pass the CPU test
      
      * remove redundant code
      
      * matmul's storage align for different layout
      
      * Add support for different positions of type cast
      
      * Add formal tutorial for auto tensorcore codegen
      
      * move tensorcore check up to tutorial code
      
      * code and doc refine
      
      * comment out tune_and_evaluate in tutorial
      
      * fix cpplint error
      Minmin Sun (孙敏敏) committed
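      
      A hedged sketch of the "switch by pragma" idea: the `s[...].pragma(...)` scheduling call is standard TE, but the pragma key consumed by the auto tensorcore pass ("tensor_core" below) is an assumption.
      
      ```python
      from tvm import te
      
      n = 1024
      A = te.placeholder((n, n), name="A", dtype="float16")
      B = te.placeholder((n, n), name="B", dtype="float16")
      k = te.reduce_axis((0, n), name="k")
      C = te.compute(
          (n, n),
          lambda i, j: te.sum(A[i, k].astype("float32") * B[k, j].astype("float32"), axis=k),
          name="C",
      )
      s = te.create_schedule(C.op)
      ko, ki = s[C].split(s[C].op.reduce_axis[0], factor=16)
      s[C].pragma(ko, "tensor_core")   # assumed pragma key enabling the rewrite
      ```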
  25. 06 Nov, 2019 1 commit
  26. 31 Oct, 2019 1 commit
  27. 24 Oct, 2019 1 commit
    • TensorCore Support using Intrinsic (#4136) · 324a9607
      * add tensor core support
      
      * avoid memory bank conflict
      
      * fix thread sync & better performance
      
      * better performance
      
      * add schedule test for conv2d
      
      * extend into BatchMatMul
      
      * support configuring fragment shape and layout using intrinsics
      
      * add TensorCore tutorial
      
      * add int support and fix lint
      
      * address comment
      
      * add 32*16*8 TensorCore test
      
      * fix wmma include logic
      Siyuan Feng committed
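      
      A hedged sketch of the scheduling side of intrinsic-based tensor core support: stage data through the wmma memory scopes, then (not shown, for brevity) tensorize those stages with wmma load/store/mma intrinsics declared via `te.decl_tensor_intrin`.
      
      ```python
      from tvm import te
      
      n = 256
      A = te.placeholder((n, n), name="A", dtype="float16")
      B = te.placeholder((n, n), name="B", dtype="float16")
      k = te.reduce_axis((0, n), name="k")
      C = te.compute(
          (n, n),
          lambda i, j: te.sum(A[i, k].astype("float32") * B[k, j].astype("float32"), axis=k),
          name="C",
      )
      s = te.create_schedule(C.op)
      
      # Tensor core fragments live in dedicated memory scopes.
      AF = s.cache_read(A, "wmma.matrix_a", [C])
      BF = s.cache_read(B, "wmma.matrix_b", [C])
      CF = s.cache_write(C, "wmma.accumulator")
      ```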
  28. 11 Oct, 2019 1 commit
    • Tutorial: update Building a Graph Convolutional Network tutorial (#4060) · ef666539
      * update build_gcn.py tutorial
      
      Updates:
      - support bias in GCN layer
      - download pretrained gcn model
      - verify model accuracy
      - use time_evaluator to measure runtime
      
      * fix adding bias in gcn layer
      
      * remove printing output
      
      * fix small bug
      
      * add DGL-PyTorch comparison into the build_gcn tutorial
      
      * add accuracy testing
      
      * adjust import order
      
      * handle different dgl versions
      
      * update number for dgl version checking
      Chien-Yu Lin committed
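      
      A hedged sketch of the `time_evaluator` measurement the tutorial adds; the toy Relay function stands in for the compiled GCN, and API names follow the `graph_runtime` module of that era (newer releases use `tvm.contrib.graph_executor`).
      
      ```python
      import numpy as np
      import tvm
      from tvm import relay
      from tvm.contrib import graph_runtime
      
      # A toy function standing in for the GCN model (assumption, not the tutorial's model).
      x = relay.var("x", shape=(1, 16))
      w = relay.var("w", shape=(16, 16))
      func = relay.Function([x, w], relay.nn.dense(x, w))
      mod = tvm.IRModule.from_expr(func)
      
      graph, lib, params = relay.build(mod, target="llvm")
      ctx = tvm.cpu()
      m = graph_runtime.create(graph, lib, ctx)
      m.set_input("x", np.random.rand(1, 16).astype("float32"))
      m.set_input("w", np.random.rand(16, 16).astype("float32"))
      
      # time_evaluator runs the whole graph repeatedly and reports timing results.
      ftimer = m.module.time_evaluator("run", ctx, number=10, repeat=3)
      print("mean runtime: %.3f ms" % (np.mean(ftimer().results) * 1000))
      ```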
  29. 08 Oct, 2019 1 commit
  30. 07 Oct, 2019 1 commit
  31. 29 Sep, 2019 1 commit
  32. 28 Sep, 2019 1 commit
  33. 17 Sep, 2019 1 commit
  34. 07 Sep, 2019 1 commit
  35. 05 Sep, 2019 1 commit