- 27 Apr, 2020 1 commit
Nikolay Nez committed
- 25 Apr, 2020 2 commits
- 24 Apr, 2020 1 commit
* [PYTORCH] Where, addcdiv, addcmul op support
* Review comments fixed
Samuel committed
- 20 Apr, 2020 1 commit
Samuel committed
- 17 Apr, 2020 2 commits
- 16 Apr, 2020 1 commit
Samuel committed
- 15 Apr, 2020 1 commit
* [PYTORCH] take, topk op support
* CI failure fix
Samuel committed
- 14 Apr, 2020 1 commit
* [RELAY][PYTORCH] isnan, isinf, isfinite, ceil, clamp, round ops
* Review comments
Samuel committed
- 13 Apr, 2020 2 commits
* [PYTORCH] Reduce_ops support added
* Review comments updated
* Fix typo bug in qnn test
Samuel committed
* Use funcs from prelude, pass around convert_map
* Get relay input type from user ishape
* Handle tuple unpack
* Experimenting with static tensor array
* Use prelude concat instead of cons + rev
* Minor clean up
* Fix layer norm conversion bug, unwrap tensor array
* Add infer shape on tensor array
* Pass around prelude for now
* Compile worked but runtime error
* Fix tensor array wrapping
* Begin list dynamic test
* is_list_dynamic first version
* Finish dynamic list test
* A few fixes
* Use shape_of function if Any is found
* Improve size conversion
* Working on adding free vars to loop block
* Fixed inlined inner loop issue
* Clean up free var handling
* Add support for tensor array concat
* Adding ta concat on last axis
* Fix concat, but got runtime error
* Disable concat on axis -1 for now
* Add LSTM tests
* Revert unrelated change
* Fix stacked bidir test
* Minor fix to test
* Relax tol a bit, revert dnnl change to avoid conflict
* Simplify infer type, use input tensor shape rather than concat shape
* More shape fixes
masahi committed
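The tensor array work above is what lets the converter handle TorchScript graphs that build lists of tensors dynamically, as the bidirectional LSTM tests do. A minimal sketch of the kind of module involved, assuming only standard PyTorch (the module is illustrative, not code from the commit):

```python
import torch
from typing import List

class StackInLoop(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Build a tensor list whose length depends on the input shape;
        # scripting keeps the list ops in the graph, which the frontend
        # can then lower onto a Relay tensor array from the prelude.
        outs: List[torch.Tensor] = []
        for i in range(x.size(0)):
            outs.append(x[i] * 2.0)
        return torch.stack(outs)

scripted = torch.jit.script(StackInLoop())
```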
- 11 Apr, 2020 1 commit
* [PYTORCH] Abs, Arange, Softplus ops
* Review comments updated
Samuel committed
- 10 Apr, 2020 1 commit
Samuel committed
- 08 Apr, 2020 1 commit
Samuel committed
- 07 Apr, 2020 1 commit
Samuel committed
- 06 Apr, 2020 1 commit
Samuel committed
- 02 Apr, 2020 2 commits
* [PYTORCH] AvgPool3d, MaxPool3d and Squeeze op support
* Testcases added
* Review comments
Samuel committed
* [Frontend][Torch] Simplify operator input handling
* [Frontend][Torch] Allow user-supplied input names to override graph inputs
* Fix pylint issues
* Updates from code review feedback
* Fix tutorial to use shape list input
* Disable intermittent test failure in topi vision test
Jeremy Johnson committed
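The "shape list" mentioned above is how callers name and shape the converter's graph inputs, and user-supplied names take precedence over the ones recorded in the TorchScript graph. A minimal usage sketch, assuming a torchvision model; the input name "input0" is an arbitrary choice:

```python
import torch
import torchvision
from tvm import relay

# Trace a model to TorchScript first
model = torchvision.models.resnet18(pretrained=True).eval()
inp = torch.randn(1, 3, 224, 224)
scripted_model = torch.jit.trace(model, inp).eval()

# One (name, shape) pair per graph input; the name overrides
# whatever the TorchScript graph calls that input
shape_list = [("input0", (1, 3, 224, 224))]
mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
```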
- 01 Apr, 2020 2 commits
- 31 Mar, 2020 1 commit
* [Torch] Add support for split
* Fix
* Fix test class
Wang Yucheng committed
- 28 Mar, 2020 1 commit
* Fixed conv transpose parsing
* Small format change
* Change test module names
* Simplified test syntax
Josh Fromm committed
- 24 Mar, 2020 2 commits
* [Torch] Add support for max_pool1d
* Add test
* Fix line-too-long
* Remove wrapper class
Wang Yucheng committed
* Fix conv2d conversion for group conv
* Add more comments for clarification
masahi committed
- 19 Mar, 2020 1 commit
* Fix minor lint issue
* Add conv3d and adaptive avg pool3d conversion with test
* Fix max pool handling
* Add batch norm 3d test
* Add resnet 3d test
* Add more conv3d tests
* Clean up batch norm test
* Add note on disabling inception v3 test
* Add more tests
* Add more tests
* Fix names
masahi committed
- 10 Mar, 2020 2 commits
* Add support for prim::If and prim::Loop with test cases
* Rebase and fix tests
* Add some comments
* Simplifying, fix float cast
* parse -> convert
* Recursively retrieve ops in get_all_op_names
* Use multiple return values from block correctly, simplify loop convert
* Choose dtype properly for zeros and ones
* Simplifying, replace convert_inputs with _get_relay_input_vars
* Fix for while loop with non-input-dependent init cond
* Add assert on loop var update
* Move the condition around
* Better testing for seg models
* Rebase fix, disable inception v3 in quant test as it is too slow to load with torch-1.4 + torchvision 0.5
* Simplify and add more comparison op converters
masahi committed
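prim::If and prim::Loop nodes only appear when a model is scripted; tracing unrolls control flow instead. A small sketch of a module whose graph contains a prim::Loop (the module is illustrative, not taken from the commit):

```python
import torch

class SumRows(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.zeros_like(x[0])
        # A loop whose trip count depends on the input shape;
        # torch.jit.script keeps it as a prim::Loop node
        for i in range(x.size(0)):
            out = out + x[i]
        return out

scripted = torch.jit.script(SumRows())
print(scripted.graph)  # the printed graph contains a prim::Loop
```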
- 09 Mar, 2020 1 commit
This reverts commit fc7f0783.
Animesh Jain committed
- 07 Mar, 2020 1 commit
* Fix unordered dictionary problem for Python version 3.5
* Modify style
* The default value of stride in torch.nn.functional.avg_pool is None
* Delete previous modifications
* Add testcase for nn.functional.avg_pool2d
pyjhzwh committed
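The stride fix matters because torch.nn.functional.avg_pool2d treats an omitted stride as "use the kernel size", and the converter has to reproduce that default. A small illustration (shapes chosen arbitrarily):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)

# With stride omitted (None), PyTorch defaults it to kernel_size,
# so these two calls produce identical results
a = F.avg_pool2d(x, kernel_size=2)
b = F.avg_pool2d(x, kernel_size=2, stride=2)
assert torch.equal(a, b)
```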
- 04 Mar, 2020 1 commit
* qnn support initial import
* Fix upsampling num input
* Imagenet tests added
* Add quantized module tests
* Quantized module tests working
* Imagenet test working
* Fix lint
* Remove top-level torch import to fix CI error
* Disable lint warning on outside-toplevel import
* Revert parse -> convert change
* Add comments to qnn translation
* Address comments, add sample outputs
* Add more comments
* Refactor bias add and requantize step
Animesh Jain committed
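The quantized module tests exercise models produced by PyTorch's eager-mode quantization, which are then traced and converted like any float model. A rough sketch of that flow; the tiny model, backend choice, and calibration input are placeholders, not code from the commit:

```python
import torch
from torch import nn, quantization

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = quantization.QuantStub()      # float -> quantized boundary
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = quantization.DeQuantStub()  # quantized -> float boundary

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = TinyNet().eval()
model.qconfig = quantization.get_default_qconfig("fbgemm")
quantization.prepare(model, inplace=True)
model(torch.randn(1, 3, 32, 32))           # calibrate the observers
quantization.convert(model, inplace=True)  # swap in quantized modules

# The quantized model is traced and handed to the frontend as usual
scripted_model = torch.jit.trace(model, torch.randn(1, 3, 32, 32))
```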
- 01 Mar, 2020 1 commit
* Add custom conversion map
* Add roi align test using custom convert map
* Refactor test
* Add support for upsampling op and test on segmentation models
* Remove redundant no_grad
* Add upsampling test case
* Make the default custom map None, instead of empty dict
* Updated tests, remove packaging and drop PT 1.2 support
* Add better support for aten::to and tests
* Add a note on dilation in x86
masahi committed
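The custom conversion map is a hook for operators the frontend does not handle, or that a user wants to handle differently. A sketch of the interface, reusing scripted_model and shape_list from the earlier example; the op name and Relay mapping below are illustrative, and the (inputs, input_types) signature is an assumption matching how the frontend calls its own converters:

```python
from tvm import relay

# Hypothetical converter: `inputs` holds the node's already-converted
# Relay expressions, `input_types` their source dtypes
def leaky_relu_convert(inputs, input_types):
    return relay.nn.leaky_relu(inputs[0], alpha=0.01)

custom_map = {"aten::leaky_relu": leaky_relu_convert}
mod, params = relay.frontend.from_pytorch(
    scripted_model, shape_list, custom_convert_map=custom_map
)
```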
- 28 Feb, 2020 1 commit
* The initial import of refactored implementation, all tests passed
* Enable mobilenet v2 test
* Minor cleanup
* Reorg
* Fix lint
* Use input names that come with torch IR
* Fix typo
* Introduce parse_operators
* Fix lint
* Add _ prefix
masahi committed
- 27 Feb, 2020 1 commit
* [REFACTOR][PY][API-CHANGE] Remove legacy python files and use the te namespace for most of the tensor expression primitives:
  - tvm.create_schedule -> tvm.te.create_schedule
  - tvm.placeholder -> tvm.te.placeholder
  - tvm.compute -> tvm.te.compute
* Remove top-level exposures
Tianqi Chen committed
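Concretely, the rename moves the tensor expression primitives under the tvm.te namespace. A small sketch of the new spellings listed above:

```python
import tvm
from tvm import te

# Previously tvm.placeholder / tvm.compute / tvm.create_schedule
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")
s = te.create_schedule(B.op)
print(tvm.lower(s, [A, B], simple_mode=True))
```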
- 25 Feb, 2020 1 commit
* Add a PyTorch to Relay parser
* Add alexnet, googlenet, mnasnet, shufflenet wip
* Fix lint
* Remove fix for shufflenet
* Lower check
* Pull changes from neo-ai/tvm changes
* Remove commented out section
* Use infer_shape everywhere
* Change back to using trace instead of path in from_pytorch
* Parse state_dict to add param names
* Umbrella single_op under test_forwards
* Remove print and cleanup call
* Check if update to test broke CI
* Retrigger CI
* Add back in updated tests
* Try splitting up tests
* First pass at flexible typing, implemented for ones
* Add int32 for all ops
* Remove print statements
* Fix lint
* Broad except
* Add other tensor types
* Temporarily use old tests
* Retrigger CI
* Lower type names
* Use numpy to convert in dense op
* Fix lint
* Remove print
* Need to cleanup but verify int32 works for add
* Rough tests for different types, a lot of types are not supported on CPU
* Probably doesn't build, need to save work as I have to switch branches (constantly)
* Parse param type
* Remove print stmt in parser
* Clean up some code
* Working on float32 for bn
* Add resnet18 double type
* Fix lint
* Temporarily move PT tests first
* Temporarily add back refactored tests to fix mem issue
* Add more type test and temp remove some tests
* Comment out tests, hopefully CI prints a trace
* Get stack trace
* Remove operator dict key, rename op_name to node_id, remove dead code
* Make relay map a list
* Remove some hacky string stuff
* Move to PyTorch 1.4
* Remove input_type as param
* Remove _get_fill_value, fix full ops
* Remove unused code and combine ops for identity and none
* Remove fn_param
* Clean up main loop
* Remove useless if/else for outputs
* Remove ir_names, only used once
* Remove some string hacking
* Remove string parsing to get output name
* Fix bug with output sizes of nodes
* Use attributeNames in parse ops
* Remove continue and add_op in parse_op
* Do this everywhere, use assert instead of explicitly type casting
* Remove unnecessary swap
* Slight refactor for elemwise input parse
* Use a copy of graph everywhere
* Rename nid_to_node_name
* Refactor parse import prereqs
* Clean up input node kind check
* Clean up conditionals
* Clean up add_op
* Cleanup type for ones and zeros op
* Fix lint
* Add torch install to CI
* Actually use torch
* Try moving import torch to only where it's needed
* Import torch for CI
* Use take op for select
* Temporarily add ignore for jit inline pass for CI
* Use CompleteTensorType, might be a PT 1.2 only thing
* Use different types in elemwise op
* Use float16 ones
* Fix float16 test
* Remove the temp docker changes
* Remove temp test
* Temporarily comment out original tests
* Remove file
* Empty cache after each test
* Add some prints and lower input sizes
* Try using no grad
* Trying to globally set grad off
* Use no grad for torchvision
* Remove xfail tests
* Remove VGG and AlexNet due to some issues
* Combine pooling tests
* Remove extra test file
* Remove single op, remove larger pooling tests
* Remove maxpool3
* Remove debug prints
* Remove inference call and add no_grad in measure latency
* Use standard string start char
* Remove redundant infer_shape in slice
* Convert most to checks to just expr
* Remove extra paren
* More refactor of isinstance
* Add helper for creating typed constants
* Assert instead of return when no matching type
* Remove network variants
* Add no_grad when forward, remove detach, fix lint
* Change isinstance to expr in transpose
* Use opnotimplemented, refactor
* Fix full ops, remove duplicate tests
* Never use shape field unless we know the type
* Remove comma, retrigger CI
* Add paren, retrigger CI
* Use inline if-else for flags
* Throw exception instead of assert
* Remove version check for CI
* Check version when doing inline pass
* Fix lint
* Lower more input sizes
* Add new line, conv2d only accepts weight as expr
* Use tvm.runtime.ndarray
* Remove change to torch version install
* Try no grad for mobilenet
* Fix lint
* Fix lint again
* Revert to last passing
* Delete test files
* Ignore lint
* Revert back
* Comment out mobilenet
* Clean up compare compiled and baseline outputs
* Use IRModule
* Add todos
* Refactor use_bias
* Add todo for fix conv op channels
* Change input to data type
* Remove todo
* Handle channel multiplier > 1
Alex Wong committed