1. 26 Feb, 2020 1 commit
  2. 24 Feb, 2020 1 commit
    • [Relay][AutoTVM] Relay op strategy (#4644) · 623dd208
      * relay op strategy
      
      fix lint
      
      bitpack strategy
      
      bitserial_dense (#6)
      
      * update strategy
      
      * address comments
      
      fix a few topi tests
      
      Dense strategy (#5)
      
      * dense
      
      * add bifrost; remove comments
      
      * address comment
      
      Refactor x86 conv2d_NCHWc (#4)
      
      * Refactor x86 conv2d
      
      * Add x86 depthwise_conv2d_NCHWc
      
      * Add back topi x86 conv2d_nchw
      
      * Merge x86 conv2d_nchw and conv2d_NCHWc
      
      * Minor fix for x86 conv2d
      
      fix more strategies
      
      Add x86 conv2d_NCHWc_int8 strategy (#8)
      
      * Add x86 conv2d_NCHWc_int8 strategy
      
      * Remove contrib_conv2d_nchwc_int8
      
      * Fix generic conv2d_NCHWc for int8
      
      * Fix topi arm_cpu conv2d_NCHWc_int8
      
      update x86 conv2d
      
      enable specifying which relay ops to tune with autotvm
      
      add cuda conv2d strategy
      
      add conv2d strategy for rocm
      
      add conv2d strategy for hls
      
      add conv2d strategy for arm cpu
      
      add conv2d strategy for mali
      
      add conv2d strategy for bifrost
      
      add conv2d strategy for intel graphics
      
      clean up and fix lint
      
      remove template keys from autotvm
      
      remove the '2' from the func name
      
      address comments
      
      fix
      
      * fix bugs
      
      * lint
      
      * address comments
      
      * add name to op implement
      
      * Modify topi tests (#9)
      
      * Add pooling, reorg, softmax and vision
      
      * Add lrn
      
      * fix topi test
      
      * fix more topi tests
      
      * lint
      
      * address comments
      
      * x
      
      * fix more tests & bugs
      
      * Modify more tests (#10)
      
      * Modify tests for bitserial_conv2d, bitserial_dense, bitserial_conv2d_rasp and bnn
      
      * Minor fix
      
      * More minor fix
      
      * fix more tests
      
      * try to update vta using strategy
      
      * fix cpptest
      
      * x
      
      * fix rebase err
      
      * Fix two tests (#11)
      
      * change autotvm log format
      
      * lint
      
      * minor fix
      
      * try fix vta test
      
      * fix rebase err
      
      * tweak
      
      * tmp hack for vta pass
      
      * fix tutorial
      
      * fix
      
      * fix more tutorials
      
      * fix vta tutorial
      
      * minor
      
      * address comments
      
      * fix
      
      * address comments
      
      * fix cpptest
      
      * fix docs
      
      * change data structure name and api
      
      * address comments
      
      * lint
      
      * fix rebase err
      
      * updates
      
      * fix winograd test
      
      * fix doc
      
      * rebase
      
      * upgrade tophub version number
      
      * fix bug
      
      * re-enable vta tsim test after tophub is upgraded
      
      * fix vta test to use the correct args so the config can be found in tophub
      
      Co-authored-by: Yao Wang <kevinthesunwy@gmail.com>
      Haichen Shen committed
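      A minimal sketch of what an op strategy registration looks like after this
      change, following the tvm.relay.op.strategy convention it introduces; the
      generic conv2d case below is illustrative rather than the exact code from
      the patch (mainline TVM already registers per-target strategies).

        import topi
        from tvm.relay.op import op as _op
        from tvm.relay.op.strategy.generic import (
            wrap_compute_conv2d, wrap_topi_schedule)

        def conv2d_strategy_example(attrs, inputs, out_type, target):
            # An OpStrategy collects (compute, schedule) implementations;
            # an op may carry several, and AutoTVM selects among them.
            strategy = _op.OpStrategy()
            if attrs.data_layout == "NCHW":
                strategy.add_implementation(
                    wrap_compute_conv2d(topi.nn.conv2d_nchw),
                    wrap_topi_schedule(topi.generic.schedule_conv2d_nchw),
                    name="conv2d_nchw.generic",
                )
            return strategy

      With strategies in place, AutoTVM task extraction can be limited to chosen
      relay ops, roughly: autotvm.task.extract_from_program(mod["main"],
      target=target, params=params, ops=(relay.op.get("nn.conv2d"),)).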
  3. 20 Feb, 2020 1 commit
  4. 18 Feb, 2020 2 commits
  5. 14 Feb, 2020 1 commit
  6. 13 Feb, 2020 1 commit
  7. 12 Feb, 2020 1 commit
    • [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding files (#4862) · a5661611
      * [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding relay files.
      
      This PR establishes tvm.ir and migrates the corresponding relay
      files into the new folder.
      
      API Change:
      - relay.Module -> tvm.IRModule
      
      * Update with ADT
      
      * Migrate transform
      
      * address comments
      
      * Migrate module
      
      * Migrate json_compact
      
      * Migrate attrs
      
      * Move LoweredFunc to stmt temporarily
      
      * temp migrate container
      
      * Finish migrate container
      Tianqi Chen committed
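      A small sketch of the renamed API; the toy function is only a placeholder,
      the point is the relay.Module -> tvm.IRModule spelling.

        import tvm
        from tvm import relay

        x = relay.var("x", shape=(1, 10))
        func = relay.Function([x], relay.nn.relu(x))

        # before this change: mod = relay.Module.from_expr(func)
        mod = tvm.IRModule.from_expr(func)
        print(mod)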
  8. 09 Feb, 2020 1 commit
  9. 07 Feb, 2020 2 commits
  10. 04 Feb, 2020 2 commits
  11. 23 Jan, 2020 1 commit
    • [VTA] Support networks that have no unique operator as the start/stop name for graph pack. (#4703) · b9328d02
      * [VTA] Support networks that have no unique operator as the start/stop
      name for graph pack.
      
      [Issue]
        Currently VTA uses the 'start' and 'stop' operator names to define the
        graph pack start and end points, but this does not work for networks
        that have no two unique operators to serve as the start and stop points.

      [Solution]
        This solution adds two additional parameters, start_name_idx and
        stop_name_idx, to make the VTA pack logic work with such networks.
        For example, for the following network, which has no unique operator,
      
        %0 = nn.add
        %1 = nn.conv2d
        %2 = nn.batch_norm
        %3 = nn.leaky_relu
        %4 = nn.add
        %5 = nn.conv2d
        %6 = nn.batch_norm
        %7 = nn.leaky_relu
        %8 = nn.add
      
        with this solution we can use the following parameters to make VTA
        work on it.
      
        relay_prog = graph_pack(
                      # ....
                      start_name="nn.add",
                      stop_name="nn.add",
                      start_name_idx=0,
                      stop_name_idx=4)
      
        To apply this to a new network, print the network to get the index
        information, as shown below.
      
        print(mod.astext(show_meta_data=False))
        relay_prog = graph_pack(mod
                                ...
                                start_name="nn.add",
                                stop_name="nn.add",
                                start_name_idx=0,
                                stop_name_idx=4)
      
      * address review comments and fix index count bug
      
      issue:
      when printing the mod, the output contains not only Call nodes but also
      other node types such as Var, so logic is needed to count all of them
      except meta nodes.

      solution:
      add the related logic
      
      * address review comments.
      
      * address review comments
      
      * add more detail comments.
      Hua Jiang committed
  12. 19 Jan, 2020 1 commit
  13. 17 Jan, 2020 1 commit
    • [VTA][TSIM] Enable TSIM CI Testing (#4407) · 2738eddf
      * Update task_python_vta.sh
      
      * install sbt=1.1.1 with apt-get
      
      * update verilator_opt
      
      * install verilator with major version 4.0
      
      * disable multi-threading for now
      
      * bug fix for correcting uop fetch address in LoadUop module
      
      * bug fix for correcting uop fetch address in LoadUop module
      
      * adjustment to read from dram_offset
      
      * enable USE_THREADS with verilator 4.x
      
      * DEBUG: try avoid core dump with verilator 4.x
      
      * bug fix in LoadUop module
      
      * log mega cycles in tsim
      
      * download cat.png to avoid fetching in each run
      
      * bug fix in LoadUop module
      
      * solve dram_even/sram_even issue
      
      * bug fix
      
      * introduce scalalint in ci
      
      * speedup tsim in ci
      
      * bug fix
      
      * lint scala code before building
      
      * disable multi-threading
      
      * split fsim/tsim script
      
      * update Jenkins settings
      
      * duplicate task_python_vta_fsim.sh as task_python_vta.sh for now
      
      Co-authored-by: Thierry Moreau <tmoreau@octoml.ai>
      Liangfu Chen committed
  14. 16 Jan, 2020 1 commit
  15. 15 Jan, 2020 1 commit
  16. 14 Jan, 2020 1 commit
  17. 11 Jan, 2020 1 commit
    • [Relay][TOPI] Fix meaning of conv2d_transpose output_padding parameter (#4318) · dcf7fbf1
      * Add output_padding to generic
      
      * Add output_padding to the reference impl
      
      * Add output_padding to arm_cpu
      
      * Add output_padding to the test
      
      * Add output_padding for cuda
      
      * Add output_padding for x86
      
      * Make use of the new output_padding argument in Relay
      
      * Adjust conv2d_transpose Relay test
      
      * Fix lint errors
      
      * Fix the VTA declaration of conv2d_transpose
      
      * support for output padding in conv2d transpose
      
      * some output_padding values break the IR pass
      
      * Fix new conv2d_transpose test
      
      * Update tophub
      
      * Fix conv1d output_padding too.
      
      * Fix the conv1d_transpose reference function.
      
      * Fix the cuda impl
      
      * fix the topi test for conv1d
      
      * Update the versions in tophub.py
      
      Co-authored-by: Thierry Moreau <tmoreau@octoml.ai>
      abergeron committed
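      A short sketch of the shape rule this series pins down: output_padding only
      enlarges the transposed-convolution output (to disambiguate sizes under
      striding); it is not padding applied to the result. The helper uses the
      standard formula and is an illustration, not code from the patch.

        def conv2d_transpose_out_dim(in_dim, kernel, stride, pad, output_padding):
            # standard transposed-convolution output size
            return (in_dim - 1) * stride - 2 * pad + kernel + output_padding

        print(conv2d_transpose_out_dim(16, 3, 2, 1, 0))  # 31
        print(conv2d_transpose_out_dim(16, 3, 2, 1, 1))  # 32: output_padding recovers the even size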
  18. 06 Jan, 2020 3 commits
  19. 03 Jan, 2020 1 commit
  20. 23 Dec, 2019 1 commit
  21. 21 Dec, 2019 1 commit
  22. 16 Dec, 2019 1 commit
  23. 11 Dec, 2019 1 commit
  24. 09 Dec, 2019 1 commit
  25. 28 Nov, 2019 1 commit
  26. 27 Nov, 2019 2 commits
  27. 26 Nov, 2019 1 commit
  28. 24 Nov, 2019 3 commits
  29. 22 Nov, 2019 1 commit
  30. 18 Nov, 2019 1 commit
  31. 15 Nov, 2019 1 commit
  32. 14 Nov, 2019 1 commit