- 28 Feb, 2020 1 commit
* The initial import of the refactored implementation; all tests pass
* enable mobilenet v2 test
* minor cleanup
* reorg
* fix lint
* use input names that come with the torch IR
* fix typo
* introduce parse_operators
* fix lint
* add _ prefix
masahi committed
- 27 Feb, 2020 4 commits
* [DOCS] Sphinx -- Introduce alias detection.

  Background: some of our namespaces import functions from another namespace. For example, tvm.te imports most of the operators from tvm.tir. Previously we manually excluded these aliases from the docs, which meant we could not link them by the alias name. This PR adds a Sphinx callback plugin to detect such aliases and create a rubric block at the bottom of the current docstring, `Alias of the original class`, done in a way that lets us refer to the generated docs. We also fixed a few docs errors.
* Fix most of the issues
Tianqi Chen committed
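A minimal sketch of how such an alias-detection hook can be wired up (the `autodoc-process-docstring` event and its signature are real Sphinx API; the alias test and rubric text below are simplified assumptions, not the exact plugin from the commit):

    # conf.py sketch: flag objects documented outside their defining module
    def process_docstring(app, what, name, obj, options, lines):
        canonical = getattr(obj, "__module__", None)
        if canonical and not name.startswith(canonical):
            # documented under an aliasing namespace; append the rubric
            lines.append("")
            lines.append(".. rubric:: Alias of :py:obj:`%s.%s`"
                         % (canonical, getattr(obj, "__name__", name)))

    def setup(app):
        app.connect("autodoc-process-docstring", process_docstring)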
* move contrib * lint * address comment * address comment
Cody Yu committed
* make_loss test case * mxnet frontend make_loss support * added comment for make_loss * pylint fix * Update mxnet.py
maheshambule committed
* [REFACTOR][PY][API-CHANGE] Remove legacy python files.

  Remove legacy python files. Use the te namespace for most of the tensor expression primitives:
  - tvm.create_schedule -> tvm.te.create_schedule
  - tvm.placeholder -> tvm.te.placeholder
  - tvm.compute -> tvm.te.compute
* Remove top-level exposures.
Tianqi Chen committed
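The new spellings in practice, as a small runnable sketch:

    import tvm
    from tvm import te

    # was: tvm.placeholder / tvm.compute / tvm.create_schedule
    A = te.placeholder((128,), name="A")
    B = te.compute((128,), lambda i: A[i] + 1.0, name="B")
    s = te.create_schedule(B.op)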
- 26 Feb, 2020 2 commits
* [Frontend][TFLite] Add parser support for the square operator
* Add parser implementation
* Add relevant tests
* Note: 'square' is a unary elementwise operator, but it is added separately in the parser since there is no Relay 'square' op and we instead have to use 'multiply'
* Change the relay operation from 'multiply' to 'power'
* Remove a redundant line as requested
Ina Dobreva committed
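A sketch of the lowering described above (the converter name is hypothetical; relay.power and relay.const are real APIs):

    from tvm import relay

    def _convert_square(in_expr, dtype="float32"):
        # square(x) is lowered as power(x, 2) since Relay has no square op
        return relay.power(in_expr, relay.const(2.0, dtype))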
* call graph for relay * CallGraphEntryNode->CallGraphEntry, __getitem__->print_var * fix typos
Zhi committed
- 25 Feb, 2020 3 commits
* Add a PyTorch to Relay parser
* Add alexnet, googlenet, mnasnet, shufflenet wip
* Fix lint
* Remove fix for shufflenet
* Lower check
* Pull changes from neo-ai/tvm changes
* Remove commented out section
* Use infer_shape everywhere
* Change back to using trace instead of path in from_pytorch
* Parse state_dict to add param names
* Umbrella single_op under test_forwards
* Remove print and cleanup call
* Check if update to test broke CI
* Retrigger CI
* Add back in updated tests
* Try splitting up tests
* First pass at flexible typing, implemented for ones
* Add int32 for all ops
* Remove print statements
* Fix lint
* Broad except
* Add other tensor types
* Temporarily use old tests
* Retrigger CI
* Lower type names
* Use numpy to convert in dense op
* Fix lint
* Remove print
* Need to clean up, but verify int32 works for add
* Rough tests for different types; a lot of types are not supported on CPU
* Probably doesn't build; need to save work as I have to switch branches (constantly)
* Parse param type
* Remove print stmt in parser
* Clean up some code
* Working on float32 for bn
* Add resnet18 double type
* Fix lint
* Temporarily move PT tests first
* Temporarily add back refactored tests to fix mem issue
* Add more type tests and temporarily remove some tests
* Comment out tests, hopefully CI prints a trace
* Get stack trace
* Remove operator dict key, rename op_name to node_id, remove dead code
* Make relay map a list
* Remove some hacky string stuff
* Move to PyTorch 1.4
* Remove input_type as param
* Remove _get_fill_value, fix full ops
* Remove unused code and combine ops for identity and none
* Remove fn_param
* Clean up main loop
* Remove useless if/else for outputs
* Remove ir_names, only used once
* Remove some string hacking
* Remove string parsing to get output name
* Fix bug with output sizes of nodes
* Use attributeNames in parse ops
* Remove continue and add_op in parse_op
* Do this everywhere; use assert instead of explicitly type casting
* Remove unnecessary swap
* Slight refactor for elemwise input parse
* Use a copy of graph everywhere
* Rename nid_to_node_name
* Refactor parse import prereqs
* Clean up input node kind check
* Clean up conditionals
* Clean up add_op
* Clean up type for ones and zeros op
* Fix lint
* Add torch install to CI
* Actually use torch
* Try moving import torch to only where it's needed
* Import torch for CI
* Use take op for select
* Temporarily add ignore for jit inline pass for CI
* Use CompleteTensorType, might be a PT 1.2 only thing
* Use different types in elemwise op
* Use float16 ones
* Fix float16 test
* Remove the temp docker changes
* Remove temp test
* Temporarily comment out original tests
* Remove file
* Empty cache after each test
* Add some prints and lower input sizes
* Try using no grad
* Trying to globally set grad off
* Use no grad for torchvision
* Remove xfail tests
* Remove VGG and AlexNet due to some issues
* Combine pooling tests
* Remove extra test file
* Remove single op, remove larger pooling tests
* Remove maxpool3
* Remove debug prints
* Remove inference call and add no_grad in measure latency
* Use standard string start char
* Remove redundant infer_shape in slice
* Convert most checks to just expr
* Remove extra paren
* More refactoring of isinstance
* Add helper for creating typed constants
* Assert instead of return when no matching type
* Remove network variants
* Add no_grad when forward, remove detach, fix lint
* Change isinstance to expr in transpose
* Use opnotimplemented, refactor
* Fix full ops, remove duplicate tests
* Never use shape field unless we know the type
* Remove comma, retrigger CI
* Add paren, retrigger CI
* Use inline if-else for flags
* Throw exception instead of assert
* Remove version check for CI
* Check version when doing inline pass
* Fix lint
* Lower more input sizes
* Add new line; conv2d only accepts weight as expr
* Use tvm.runtime.ndarray
* Remove change to torch version install
* Try no grad for mobilenet
* Fix lint
* Fix lint again
* Revert to last passing
* Delete test files
* Ignore lint
* Revert back
* Comment out mobilenet
* Clean up compare compiled and baseline outputs
* Use IRModule
* Add todos
* Refactor use_bias
* Add todo for fix conv op channels
* Change input to data type
* Remove todo
* Handle channel multiplier > 1
Alex Wong committed
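Typical usage of the resulting frontend, as a sketch against a 2020-era TVM build (the input name "input0" follows the Torch IR naming mentioned in the refactor above):

    import torch
    import torchvision
    from tvm import relay

    model = torchvision.models.resnet18(pretrained=True).eval()
    inp = torch.randn(1, 3, 224, 224)
    scripted = torch.jit.trace(model, inp)  # the parser consumes a TorchScript trace
    mod, params = relay.frontend.from_pytorch(
        scripted, [("input0", (1, 3, 224, 224))])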
* Use the OpenCV resize method for preprocessing of images in darknet * Use the OpenCV resize method for preprocessing of images in darknet * Fix pylint issues
vizero1 committed
GaussianDropout & GaussianNoise are active only during training, so they can be skipped during inference.
Samuel committed
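A minimal sketch of such a converter, assuming the Keras frontend's per-layer convention (the function name is hypothetical):

    def _convert_gaussian_dropout(inexpr, keras_layer, etab):
        # GaussianDropout / GaussianNoise only perturb activations during
        # training; at inference the layer is an identity, so pass through.
        return inexpr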
- 24 Feb, 2020 1 commit
* relay op strategy
  fix lint
  bitpack strategy
  bitserial_dense (#6)
* update strategy
* address comments
  fix a few topi tests
  Dense strategy (#5)
* dense
* add bifrost; remove comments
* address comment
  Refactor x86 conv2d_NCHWc (#4)
* Refactor x86 conv2d
* Add x86 depthwise_conv2d_NCHWc
* Add back topi x86 conv2d_nchw
* Merge x86 conv2d_nchw and conv2d_NCHWc
* Minor fix for x86 conv2d
  fix more strategy
  Add x86 conv2d_NCHWc_int8 strategy (#8)
* Add x86 conv2d_NCHWc_int8 strategy
* Remove contrib_conv2d_nchwc_int8
* Fix generic conv2d_NCHWc for int8
* Fix topi arm_cpu conv2d_NCHWc_int8
  update x86 conv2d
  enable specifying relay ops to be tuned for autotvm
  add cuda conv2d strategy
  add conv2d strategy for rocm
  add conv2d strategy for hls
  add conv2d strategy for arm cpu
  add conv2d strategy for mali
  add conv2d strategy for bifrost
  add conv2d strategy for intel graphics
  clean up and fix lint
  remove template keys from autotvm
  remove 2 in the func name
  address comments
  fix
* fix bugs
* lint
* address comments
* add name to op implementation
* Modify topi tests (#9)
* Add pooling, reorg, softmax and vision
* Add lrn
* fix topi test
* fix more topi tests
* lint
* address comments
* x
* fix more tests & bugs
* Modify more tests (#10)
* Modify tests for bitserial_conv2d, bitserial_dense, bitserial_conv2d_rasp and bnn
* Minor fix
* More minor fixes
* fix more tests
* try to update vta using strategy
* fix cpptest
* x
* fix rebase err
* Fix two tests (#11)
* change autotvm log format
* lint
* minor fix
* try fix vta test
* fix rebase err
* tweak
* tmp hack for vta pass
* fix tutorial
* fix
* fix more tutorials
* fix vta tutorial
* minor
* address comments
* fix
* address comments
* fix cpptest
* fix docs
* change data structure name and api
* address comments
* lint
* fix rebase err
* updates
* fix winograd test
* fix doc
* rebase
* upgrade tophub version number
* fix bug
* re-enable vta tsim test after tophub is upgraded
* fix vta test to use the correct args so the config can be found in tophub

Co-authored-by: Yao Wang <kevinthesunwy@gmail.com>
Haichen Shen committed
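A sketch of the mechanism this refactor introduces: an OpStrategy bundles a compute with a schedule so Relay can choose among implementations per target. The wrapper names below exist in tvm.relay.op.strategy.generic; the target key "mycpu" is hypothetical, and the import paths follow newer builds where topi lives under tvm:

    from tvm import topi
    from tvm.relay.op import op as _op
    from tvm.relay.op.strategy.generic import (
        conv2d_strategy, wrap_compute_conv2d, wrap_topi_schedule)

    @conv2d_strategy.register("mycpu")  # hypothetical target key
    def conv2d_strategy_mycpu(attrs, inputs, out_type, target):
        # pair a generic NCHW conv2d compute with its generic schedule
        strategy = _op.OpStrategy()
        strategy.add_implementation(
            wrap_compute_conv2d(topi.nn.conv2d_nchw),
            wrap_topi_schedule(topi.generic.schedule_conv2d_nchw),
            name="conv2d_nchw.generic")
        return strategy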
- 21 Feb, 2020 1 commit
* add TFLite version check for 'ceil' and 'cos' * fix name check of test_op for positive inputs * add error message for operator not found in the installed fbs schema
Ina Dobreva committed
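The version-gate pattern described above, roughly (a sketch; the test body and its name are placeholders):

    from packaging import version as package_version
    import tensorflow as tf

    def test_ceil_and_cos():
        ...  # build the TFLite graph and compare against TVM output

    # 'ceil' and 'cos' need a sufficiently new TFLite converter
    if package_version.parse(tf.__version__) >= package_version.parse("1.14.0"):
        test_ceil_and_cos()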
- 20 Feb, 2020 1 commit
* Fix Python docstrings * More fixes * Fix lint
Cody Yu committed
- 19 Feb, 2020 1 commit
hcyang committed
- 18 Feb, 2020 4 commits
Tianqi Chen committed
* [Relay] Expose FunctionGetAttr to Python * add test Co-authored-by: Jon Soifer <jonso@microsoft.com>
Jon Soifer committed
* Basic test working * Almost all tests working. * all tests passing. * Fixed lint. * Improved Style.
Josh Fromm committed
- Move the related files to tvm.te
- Move build_module.py to tvm.driver
Tianqi Chen committed
- 15 Feb, 2020 2 commits
- 14 Feb, 2020 2 commits
- 13 Feb, 2020 2 commits
* [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

  This adds support for the custom operator TFLite_Detection_PostProcess, which is commonly used in object detection networks such as SSD MobileNet. It only adds support for when use_regular_nms = False.
  Change-Id: I819b253c0eb6f0fa55da65d2634e09359b888828
* Added a test for the tflite custom op
  Change-Id: Ie5baa092deae9a8bcffd2ebd9f6d346b90e58afd
* Removed trailing comma
  Change-Id: Ib08f02b5f1a59a883048bfb36e4321152cd2e7f2
* Added spaces around divide
  Change-Id: If1171fc03d211a809cedeb800804394972af4060
* Formatted comment
  Change-Id: I3ce7e69b8d2c73aec57369c1c64ea1eec07f087b
* Reduced line length in test
  Change-Id: I49eaafc3369070f8f3e85fbb965ad20972096c68
* Set random seed for test
  Change-Id: I542a787d11422ea83c52147b2cb1144fcef0dd77
* Fixes to style
  Change-Id: I2971b8ecebe08c882b2481a99f67cfbe515e0b1f
* Assert for incorrect number of inputs
  Change-Id: I393f3b3b62be73e427498d98456fb1d5a214e0af
* Change comparison to pass linting
  The linter was updated, so I needed to fix a small style issue as a result.
  Change-Id: Ia3c954565a00de92e7fb1912eae9ed9875d60c7c
mbarrett97 committed
Move the related target modules into tvm.target.

API changes:
- tvm.target.current_target -> tvm.target.Target.current
- tvm.datatype -> tvm.target.datatype
tqchen committed
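The renamed entry point in use (a sketch; tvm.target.create was the constructor at the time, newer builds spell it tvm.target.Target):

    import tvm

    # was: tvm.target.current_target()
    with tvm.target.create("llvm"):
        target = tvm.target.Target.current()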
- 12 Feb, 2020 2 commits
tqchen committed
* [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding relay files.

  This PR establishes tvm.ir and migrates the corresponding relay files into the new folder.
  API change: relay.Module -> tvm.IRModule
* Update with ADT
* Migrate transform
* address comments
* Migrate module
* Migrate json_compact
* Migrate attrs
* Move LoweredFunc to stmt temporarily
* temp migrate container
* Finish migrating container
Tianqi Chen committed
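The renamed module class in use, as a small sketch:

    import tvm
    from tvm import relay

    # was: relay.Module.from_expr(...)
    x = relay.var("x", shape=(2, 2))
    mod = tvm.IRModule.from_expr(relay.Function([x], x + x))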
- 11 Feb, 2020 4 commits
* Fix onnx import bugs: fix incorrect handling of ONNX attributes of string type; merge padding of Conv to symmetric form
* Only merge symmetric padding for conv2d
kice committed
Zhi committed
masahi committed
* [TFLite] Using a real image for QNN testing. * Setting seed for SSD mobilenet for fixed input. * Support quantized Pad op. * Remove unnecessary line. * Address Ina's comments.
Animesh Jain committed
- 10 Feb, 2020 2 commits
* [Relay] Added MergeComposite pass

  This pass allows for patterns to be wrapped in a function marked with 'Composite' and a composite function name. This is intended to be used with the external codegen for the cases where an external operator maps to multiple Relay operators. In that case, the mapping can be expressed as a pattern and assigned a name. For more information on this pass and its motivation, see the RFC: https://discuss.tvm.ai/t/rfc-external-codegen-defining-composite-relay-operators/5470
  Change-Id: Icb1b803a9f0ac57c529143200228f3bb5793afc0
* [Relay] Merge composite tests
  Added tests for the merge_composite pass.
  Change-Id: I1728b4a05b0c1c36140a40f1afe028fde62185dd
* Merge composite additional test
  Change-Id: I9bc7d6053c575e9468ac5abc31214c6ad8507e46
* Support priority order in merge_composite
  The order in which the patterns were matched was previously random, as an unordered_map was used to store the pattern table. This uses arrays instead so that a distinct priority order of matching can be defined. Additional tests have also been added to verify this behaviour.
  Change-Id: Ief347df4262639138d5d9d7c8cee7ef233af7b56
* Improved merge composite docs
  Change-Id: Ie3a72045ecc3f13ad3c302fbdf192b7296a306a8
* Removed unused variable
  Change-Id: I7814d5fde368ffaf1b3d6d806060c774c7720364
* Remove unnecessary op check
  Change-Id: I38e78d2acd5b86cb8e837be72ff9d72cd10bcf33
* Improve styling on composite function creation
  Change-Id: I37add1c3134e0b5d5085fe1eb9daf8e06890fa8c
* Comment reword
  Change-Id: Ie05872dcbbe0c3e1190b0597083b9a64e6b66c66
* Stylistic changes to avoid std::move
  Change-Id: I43a93995bbf10530399900c992aa99dd4ae4575f
* Relax a check in ExtractPattern
  Change-Id: I0faef77a66c55f83f09e6e47c561ffaea63dedfa
* Remove new line
  Change-Id: Ifdd02c12087a7e1a0a9b54825669bc0de8f13c3d
* Removed MatchPattern from MergeComposite
  This is not necessary now that ExtractPattern can fulfill the same purpose.
  Change-Id: I14dc020afa8e50f2df4c0a2efb88a011987f8196
* Removed a new line
  Change-Id: I8b50f0c9069aa1bcaccbe68eb421031f01a64842
* Improved docs for merge composite
  Change-Id: Ib1959a35c856e7ea5639de2e4ef314a54f44caf5
* Fixed free vars in test
  Change-Id: I2b7f273db275964ec0e9820560663f0808adee79
* Handle case where root arg might not be a call
  Change-Id: I4eeea3ce723d3ba337d110dcc690377daebe8626
* Removed blank line
  Change-Id: I07f5392c0e95cfe3cfa5c333703cc6f82d6034fb
* Change to CHECK_EQ
  Change-Id: I5c5d62d3cd57f72508b30b926f72091ae6f0d1cc
* Revised a conditional
  Change-Id: I23a7897ca15a7cd076db5039dc653a4b8c27e803
* Improved doc styling
  Change-Id: I377f0a1c1ac70f3b8d7584b0c49bddc8c6c134ef
* Fail extraction if vars conflict
  Change-Id: I78e36d805e8ed6b55e61d490212a967c857554a4
* Added further merge composite tests
  Change-Id: Ib1d800409fca4c1834c7fe0cab5a26ab99a26820

Co-authored-by: lhutton1 <35535092+lhutton1@users.noreply.github.com>
mbarrett97 committed
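A sketch of how a pattern table for this pass can look (the composite name is illustrative; at the time the table was a list of (name, pattern-expression) pairs):

    from tvm import relay

    def make_conv_bias_relu_pattern():
        # conv2d -> bias_add -> relu, to be fused into one composite function
        x = relay.var("x")
        w = relay.var("w")
        b = relay.var("b")
        return relay.nn.relu(relay.nn.bias_add(relay.nn.conv2d(x, w), b))

    pattern_table = [("my_codegen.conv2d_bias_relu", make_conv_bias_relu_pattern())]
    # mod = relay.transform.MergeComposite(pattern_table)(mod)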
Wang Yucheng committed
- 09 Feb, 2020 1 commit
Tianqi Chen committed
- 07 Feb, 2020 4 commits
* [REFACTOR][PY-API] Polish tvm.runtime, tvm.runtime.module API update

  This PR updates tvm.runtime to use the new FFI style.
  - Remove top-level tvm.module to avoid confusion between runtime.Module and IRModule
  - API changes with respect to runtime.Module:
    - tvm.module.load -> tvm.runtime.load_module
    - tvm.module.enabled -> tvm.runtime.enabled
    - tvm.module.system_lib -> tvm.runtime.system_lib
  - Remove dep on api_internal from runtime.
* Update module.load in the latest API
Tianqi Chen committed
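The renamed runtime entry points in use (a sketch; the module path is illustrative):

    import tvm

    # was: tvm.module.enabled / tvm.module.load
    print(tvm.runtime.enabled("llvm"))
    # lib = tvm.runtime.load_module("deploy.so")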
Wang Yucheng committed
* [Relay][Frontend][TFLite] Add support for quantized LOGISTIC * add qnn implementation * add qnn test case for qnn logistic * Helper functions for quantize and dequantize.
Ina Dobreva committed
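A sketch of the usual dequantize -> float op -> quantize lowering for quantized LOGISTIC (the converter name and scale handling are assumptions; TFLite pins LOGISTIC's output quantization to scale 1/256 with zero point 0 for uint8):

    from tvm import relay

    def _convert_qnn_logistic(x, in_scale, in_zero_point):
        # compute sigmoid in float, then requantize to the fixed output params
        deq = relay.qnn.op.dequantize(
            x, relay.const(in_scale), relay.const(in_zero_point))
        sig = relay.sigmoid(deq)
        return relay.qnn.op.quantize(
            sig, relay.const(1.0 / 256.0), relay.const(0), out_dtype="uint8")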
* Initial version working and passing tests. * WIP on supporting other activations. * add support for multiple activation functions in lstm * All tests working and code cleaned up. * Undo import swap to avoid conflict with masahi. * Added new tests and related bug fixes. Co-authored-by: Matthew Brookhart <mbrookhart@octoml.ai>
Josh Fromm committed
- 06 Feb, 2020 1 commit
* Add bitwise ops to topi * Add the bitwise ops to relay.
abergeron committed
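The new ops from the Relay side, as a small sketch:

    from tvm import relay

    a = relay.var("a", shape=(4,), dtype="int32")
    b = relay.var("b", shape=(4,), dtype="int32")
    c = relay.bitwise_and(a, b)  # likewise bitwise_or, bitwise_xor, bitwise_not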
- 05 Feb, 2020 2 commits
* enforce 4-way padding
* add util with get_pad_tuple
* delete unnecessary arguments
* fix lint
* add container.Array case
* fix cudnn conv2d asymmetric padding logic
* rename get_pad_tuple to get_pad_tuple2d
* revert change for topi/python/topi/nn/conv2d.py
* add get_pad_tuple2d for several contrib conv2d ops
* add get_pad_tuple2d for all conv2d ops
Xingyu Zhou committed
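The padding helper in use (a sketch; the import path follows newer builds where topi lives under tvm, it was topi.nn.util in 2020-era ones):

    from tvm.topi.nn.utils import get_pad_tuple

    # 2-way padding expanded to the 4-way (top, left, bottom, right) form
    pad_top, pad_left, pad_down, pad_right = get_pad_tuple((1, 1), (3, 3))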
* [Relay][Frontend][TFLite] Add parser support for logical operators
* Add parser support for logical_and, logical_or
* Add boolean dtype as a valid tensor type
* BOOLEAN dtype is supported only from tf 1.15, so logical ops work only in that and newer versions
* Logical_not is omitted since tflite can't convert it --> throws errors for addv2
* Add TFLite version check in tests for logical ops
* Check is added because of the lack of boolean dtype support in older versions
Ina Dobreva committed
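The resulting Relay ops, as a small sketch:

    from tvm import relay

    a = relay.var("a", shape=(4,), dtype="bool")
    b = relay.var("b", shape=(4,), dtype="bool")
    c = relay.logical_and(a, b)  # logical_or is analogous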