- 20 Mar, 2020 1 commit
* Temporary fix to the stack overflow issue in autotvm task extraction
* fix lint
* fix graph tuner test
Haichen Shen committed

- 18 Mar, 2020 1 commit
Zhi committed

- 13 Mar, 2020 1 commit
* consider layout_transform in has_multiple_inputs
* refactor code
* remove debug info
* remove subclass assignment
* refactoring a little bit
* remove default value
* remove trailing whitespace
* modify test for has_multiple_inputs
Co-authored-by: Ubuntu <ubuntu@ip-172-31-40-194.us-west-2.compute.internal>
zhen-jia committed

- 12 Mar, 2020 1 commit
* init
* fix template
* tweak naming
Haichen Shen committed

- 27 Feb, 2020 1 commit
* [REFACTOR][PY][API-CHANGE] Remove legacy python files. Use the te namespace for most of the tensor expression primitives:
  - tvm.create_schedule -> tvm.te.create_schedule
  - tvm.placeholder -> tvm.te.placeholder
  - tvm.compute -> tvm.te.compute
* Remove top-level exposures.
Tianqi Chen committed

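The listed renames are mechanical, so moving a codebase to the te namespace can be scripted. A minimal sketch in plain Python; the `TE_RENAMES` table covers only the renames named in this commit message, and `migrate_line` is our illustrative helper, not a TVM utility:

```python
# Old top-level names -> new tvm.te locations, as listed in the commit message.
TE_RENAMES = {
    "tvm.create_schedule": "tvm.te.create_schedule",
    "tvm.placeholder": "tvm.te.placeholder",
    "tvm.compute": "tvm.te.compute",
}

def migrate_line(line: str) -> str:
    """Rewrite one source line from the legacy top-level API to tvm.te."""
    for old, new in TE_RENAMES.items():
        line = line.replace(old, new)
    return line
```

Note that already-migrated lines pass through unchanged, since `"tvm.te.compute"` does not contain the substring `"tvm.compute"`.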
- 24 Feb, 2020 1 commit
* relay op strategy fix lint bitpack strategy bitserial_dense (#6)
* update strategy
* address comments fix a few topi test Dense strategy (#5)
* dense
* add bifrost; remove comments
* address comment Refactor x86 conv2d_NCHWc (#4)
* Refactor x86 conv2d
* Add x86 depthwise_conv2d_NCHWc
* Add back topi x86 conv2d_nchw
* Merge x86 conv2d_nchw and conv2d_NCHWc
* Minor fix for x86 conv2d fix more strategy Add x86 conv2d_NCHWc_int8 strategy (#8)
* Add x86 conv2d_NCHWc_int8 strategy
* Remove contrib_conv2d_nchwc_int8
* Fix generic conv2d_NCHWc for int8
* Fix topi arm_cpu conv2d_NCHWc_int8 update x86 conv2d enable specify relay ops to be tuned for autotvm add cuda conv2d strategy add conv2d strategy for rocm add conv2d strategy for hls add conv2d strategy for arm cpu add conv2d strategy for mali add conv2d strategy for bifrost add conv2d strategy for intel graphics clean up and fix lint remove template keys from autotvm remove 2 in the func name address comments fix
* fix bugs
* lint
* address comments
* add name to op implement
* Modify topi tests (#9)
* Add pooling, reorg, softmax and vision
* Add lrn
* fix topi test
* fix more topi test
* lint
* address comments
* x
* fix more tests & bugs
* Modify more tests (#10)
* Modify tests for bitserial_conv2d, bitserial_dense, bitserial_conv2d_rasp and bnn
* Minor fix
* More minor fix
* fix more test
* try to update vta using strategy
* fix cpptest
* x
* fix rebase err
* Fix two tests (#11)
* change autotvm log format
* lint
* minor fix
* try fix vta test
* fix rebase err
* tweak
* tmp hack for vta pass
* fix tutorial
* fix
* fix more tutorials
* fix vta tutorial
* minor
* address comments
* fix
* address comments
* fix cpptest
* fix docs
* change data structure name and api
* address comments
* lint
* fix rebase err
* updates
* fix winograd test
* fix doc
* rebase
* upgrade tophub version number
* fix bug
* re-enable vta tsim test after tophub is upgraded
* fix vta test to use the correct args so the config can be found in tophub
Co-authored-by: Yao Wang <kevinthesunwy@gmail.com>
Haichen Shen committed

- 20 Feb, 2020 1 commit
* Fix Python docstrings
* More fixes
* Fix lint
Cody Yu committed

- 18 Feb, 2020 2 commits
Tianqi Chen committed

- Move the related files to tvm.te
- Move build_module.py to tvm.driver
Tianqi Chen committed

- 15 Feb, 2020 1 commit
* Support range in index based tuners
* Address comments
* Remove __*state__
* trigger CI
Cody Yu committed

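As a rough illustration of what range support in an index-based tuner means, here is a toy grid-search-style tuner that visits only configuration indices in `[begin, end)`. All names here are hypothetical; the real AutoTVM tuner API is more involved:

```python
class RangeGridTuner:
    """Toy index-based tuner that visits only config indices in [begin, end)."""

    def __init__(self, space_size, begin=0, end=None):
        # Clamp the requested range to the actual size of the config space.
        self.end = space_size if end is None else min(end, space_size)
        self.index = begin

    def has_next(self):
        return self.index < self.end

    def next_batch(self, batch_size):
        # Hand out the next batch of indices, stopping at the range end.
        batch = list(range(self.index, min(self.index + batch_size, self.end)))
        self.index += len(batch)
        return batch
```

This lets separate workers tune disjoint slices of one search space by giving each a different `[begin, end)` window.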
- 14 Feb, 2020 1 commit
- Move related files into the corresponding location as in C++
- Keep the top-level TVM API backward compatible to make minimum changes in topi
tqchen committed

- 13 Feb, 2020 2 commits
Add tuneable conv3d_ndhwc schedule
Alex Gladkov committed

Move the related target modules into tvm.target. API change:
- tvm.target.current_target -> tvm.target.Target.current
- tvm.datatype -> tvm.target.datatype
tqchen committed

- 12 Feb, 2020 1 commit
* [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding relay files. This PR establishes tvm.ir and migrates the corresponding relay files into the new folder. API Change:
  - relay.Module -> tvm.IRModule
* Update with ADT
* Migrate transform
* address comments
* Migrate module
* Migrate json_compact
* Migrate attrs
* Move LoweredFunc to stmt temporarily
* temp migrate container
* Finish migrate container
Tianqi Chen committed

- 09 Feb, 2020 1 commit
Tianqi Chen committed

- 04 Feb, 2020 1 commit
* [AutoTVM] Minor bug fixes in AutoTVM for QNN graphs.
* Bring back strided_slice.
* Replace tvm.nd change.
Animesh Jain committed

- 29 Jan, 2020 1 commit
- Do not use numpy.prod, which silently wraps on 64-bit integer overflow; the wrapped value yields an incorrect number of points in the search space.
wpan11nv committed

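The bug is easy to reproduce: `numpy.prod` on a 64-bit integer array wraps silently, while Python's arbitrary-precision integers give the exact count. A minimal illustration (the dimension sizes are contrived to force the overflow):

```python
import math
import numpy as np

dims = [2**40, 2**40]        # a contrived, very large search space
exact = math.prod(dims)      # Python ints never overflow: exactly 2**80
wrapped = int(np.prod(np.array(dims, dtype=np.int64)))  # wraps modulo 2**64

print(exact)    # 1208925819614629174706176
print(wrapped)  # 0, since 2**80 is a multiple of 2**64
```

Counting the space size with plain Python integers (or checking the product against an explicit bound) avoids the silent wraparound.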
- 28 Jan, 2020 1 commit
Cody Yu committed

- 15 Jan, 2020 2 commits
This reverts commit dcf7fbf1.
Haichen Shen committed

* [REFACTOR][IR] Unify IntImm and UIntImm. This PR unifies UIntImm and IntImm to simplify the codebase. Unsigned integer constants will also be stored as IntImm. For uint constants that do not fit into int64 (a rare case), we introduce an intrinsic tvm_big_uint_imm to construct such integers from their lower and higher 32 bits.
* [REFACTOR][IR] Remove UIntImm to use IntImm
* rename big->large
Tianqi Chen committed

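The commit message describes encoding an oversized unsigned constant by its lower and higher 32 bits. In plain Python, the split and reconstruction look like this (the helper names are ours, not TVM's):

```python
MASK32 = 0xFFFFFFFF

def split_u64(value):
    """Split an unsigned 64-bit value into (high 32 bits, low 32 bits)."""
    return (value >> 32) & MASK32, value & MASK32

def join_u64(high, low):
    """Reconstruct the value, mirroring what a two-halves encoding carries."""
    return (high << 32) | low

v = 0xDEADBEEF_CAFEBABE
hi, lo = split_u64(v)
assert join_u64(hi, lo) == v  # round-trips losslessly
```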
- 11 Jan, 2020 1 commit
* Add output_padding to generic
* Add output_padding to the reference impl
* Add output_padding to arm_cpu
* Add output_padding to the test
* Add output_padding for cuda
* Add output_padding for x86
* Make use of the new output_padding argument in Relay
* Adjust conv2d_transpose Relay test
* Fix lint errors
* Fix the VTA declaration of conv2d_transpose
* support for output padding in conv2d transpose
* some output padding will break IR pass
* Fix new conv2d_transpose test
* Update tophub
* Fix conv1d output_padding too.
* Fix the conv1d_transpose reference function.
* Fix the cuda impl
* fix the topi test for conv1d
* Update the versions in tophub.py
Co-authored-by: Thierry Moreau <tmoreau@octoml.ai>
abergeron committed

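For context, `output_padding` disambiguates the transposed-convolution output size: several input sizes convolve down to the same output size, so the inverse is not unique. A sketch using the standard per-dimension formula found in common frameworks (our illustration, not TVM's exact code):

```python
def conv_transpose_out_size(in_size, kernel, stride, padding, output_padding=0):
    """Standard transposed-convolution output size along one dimension."""
    return (in_size - 1) * stride - 2 * padding + kernel + output_padding

# With kernel 3, stride 2, padding 1, inputs of size 7 and 8 both
# convolve down to 4; output_padding selects which one to invert to:
print(conv_transpose_out_size(4, kernel=3, stride=2, padding=1))                    # 7
print(conv_transpose_out_size(4, kernel=3, stride=2, padding=1, output_padding=1))  # 8
```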
- 10 Jan, 2020 1 commit
Xingyu Zhou committed

- 09 Jan, 2020 1 commit
* [AutoTVM] Use vm compile in extracting task from relay
* update
* restructure vm compiler to reduce task extraction time
* x
* fix
* update doc
* update doc
* lint
Haichen Shen committed

- 27 Dec, 2019 1 commit
Wang Yucheng committed

- 26 Dec, 2019 1 commit
* [AutoTVM][TOPI] NHWC conv2d templates (spatial pack) for ARM. As some frontends (tflite for example) are using NHWC as the default layout, we are enabling NHWC schedule templates in TOPI and AutoTVM.
* some comments fix
黎明灰烬 committed

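NHWC and NCHW differ only in axis order, so converting a tensor between the two layouts is a transpose. An illustrative NumPy snippet (not the TOPI schedule itself):

```python
import numpy as np

# batch, channel, height, width
x_nchw = np.zeros((1, 3, 224, 224), dtype="float32")

# NCHW -> NHWC: move the channel axis to the end
x_nhwc = x_nchw.transpose(0, 2, 3, 1)
print(x_nhwc.shape)  # (1, 224, 224, 3)

# NHWC -> NCHW: move the channel axis back
x_back = x_nhwc.transpose(0, 3, 1, 2)
print(x_back.shape)  # (1, 3, 224, 224)
```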
- 22 Dec, 2019 1 commit
* [TEST] Remove nnvm related code in topi and test script
* Remove docs dep
Tianqi Chen committed

- 18 Dec, 2019 1 commit
Alex Gladkov committed

- 16 Dec, 2019 1 commit
Cody Yu committed

- 26 Nov, 2019 1 commit
* select model with the most tuned schedules
* change detect empty map method
* modify model description for load_reference_log
Xingyu Zhou committed

- 21 Nov, 2019 1 commit
Previously, we relied on later phases to error out (often for using too much shared memory). This enables, for ROCm, the IR-level checks that already exist for CUDA and OpenCL.
Thomas Viehmann committed

- 19 Nov, 2019 1 commit
Yizhi Liu committed

- 16 Nov, 2019 1 commit
* AutoTVM: selecting tuning templates when extracting task. Make the procedure of trying new templates easier. Test: tests/python/relay/test_autotvm_task_extraction.py
* Use dict to match key for topi ops
* fix lint issue
* be more pythonic :)
黎明灰烬 committed

- 15 Nov, 2019 1 commit
Alex Gladkov committed

- 11 Nov, 2019 1 commit
* Add shape functions
* Fix get_const_tuple
* Fix cpplint
* Fix pylint
* Fix pylint
* rebase and fix
* Check Any for infer type
* Fix expand_dim shape func for zero rank input
* Fix pooling infer type
* Address comment
* Register layout transform attr
Yao Wang committed

- 07 Nov, 2019 1 commit
* Batch matmul tuning running but with errors.
* Default x86 schedule as good as before.
* Code Cleanup
* Remove unused argument.
* improved template documentation.
* Silly lint fix
* Removed leftover comment.
* Moved cfg declaration to schedule for batch_matmul
* Moved x86 dense cfg declaration to schedule.
* lint fix
* Removed duplicate cfg declaration in dense.
* Reverted changes to dense.
Josh Fromm committed

- 29 Oct, 2019 1 commit
* Optimize task extraction speed
* correct pylint errors
* Delete unused function
* remove unnecessary argument
* resolve code review comments
* correct cpp lint errors
* remove one more graph_json return value
* fix test bugs
LiangHao committed

- 24 Oct, 2019 1 commit
* support conv2d HWCN in AutoTVM and Relay
* fix lint
* fix comments and unit tests
Cody Hao Yu committed

- 22 Oct, 2019 1 commit
Cody Hao Yu committed

- 03 Oct, 2019 1 commit
Jon Soifer committed

- 01 Oct, 2019 1 commit
Cody Hao Yu committed