[Relay] Add a PyTorch to Relay Parser (#4497) · 87c20bb2
* Add a PyTorch to Relay parser
* Add alexnet, googlenet, mnasnet, shufflenet wip
* Fix lint
* Remove fix for shufflenet
* Lower check
* Pull in changes from neo-ai/tvm
* Remove commented-out section
* Use infer_shape everywhere
* Change back to using trace instead of path in from_pytorch
* Parse state_dict to add param names
* Umbrella single_op under test_forwards
* Remove print and cleanup call
* Check if update to test broke CI
* Retrigger CI
* Add back in updated tests
* Try splitting up tests
* First pass at flexible typing, implemented for ones
* Add int32 for all ops
* Remove print statements
* Fix lint
* Broad except
* Add other tensor types
* Temporarily use old tests
* Retrigger CI
* Lower type names
* Use numpy to convert in dense op
* Fix lint
* Remove print
* Needs cleanup, but verify int32 works for add
* Rough tests for different types; many types are not supported on CPU
* Probably doesn't build, need to save work as I have to switch branches (constantly)
* Parse param type
* Remove print stmt in parser
* Clean up some code
* Working on float32 for bn
* Add resnet18 double type
* Fix lint
* Temporarily move PT tests first
* Temporarily add back refactored tests to fix mem issue
* Add more type tests and temporarily remove some tests
* Comment out tests, hopefully CI prints a trace
* Get stack trace
* Remove operator dict key, rename op_name to node_id, remove dead code
* Make relay map a list
* Remove some hacky string stuff
* Move to PyTorch 1.4
* Remove input_type as param
* Remove _get_fill_value, fix full ops
* Remove unused code and combine ops for identity and none
* Remove fn_param
* Clean up main loop
* Remove useless if/else for outputs
* Remove ir_names, only used once
* Remove some string hacking
* Remove string parsing to get output name
* Fix bug with output sizes of nodes
* Use attributeNames in parse ops
* Remove continue and add_op in parse_op
* Do this everywhere, use assert instead of explicitly casting types
* Remove unnecessary swap
* Slight refactor for elemwise input parse
* Use a copy of graph everywhere
* Rename nid_to_node_name
* Refactor parse import prereqs
* Clean up input node kind check
* Clean up conditionals
* Clean up add_op
* Clean up type for ones and zeros ops
* Fix lint
* Add torch install to CI
* Actually use torch
* Try moving import torch to only where it's needed
* Import torch for CI
* Use take op for select
* Temporarily add ignore for jit inline pass for CI
* Use CompleteTensorType, might be a PT 1.2-only thing
* Use different types in elemwise op
* Use float16 ones
* Fix float16 test
* Remove the temp docker changes
* Remove temp test
* Temporarily comment out original tests
* Remove file
* Empty cache after each test
* Add some prints and lower input sizes
* Try using no_grad
* Try globally setting grad off
* Use no_grad for torchvision
* Remove xfail tests
* Remove VGG and AlexNet due to some issues
* Combine pooling tests
* Remove extra test file
* Remove single op, remove larger pooling tests
* Remove maxpool3
* Remove debug prints
* Remove inference call and add no_grad in measure latency
* Use standard string start char
* Remove redundant infer_shape in slice
* Convert most checks to just expr
* Remove extra paren
* More refactoring of isinstance checks
* Add helper for creating typed constants
* Assert instead of return when no matching type
* Remove network variants
* Add no_grad when forward, remove detach, fix lint
* Change isinstance to expr in transpose
* Use opnotimplemented, refactor
* Fix full ops, remove duplicate tests
* Never use shape field unless we know the type
* Remove comma, retrigger CI
* Add paren, retrigger CI
* Use inline if-else for flags
* Throw exception instead of assert
* Remove version check for CI
* Check version when doing inline pass
* Fix lint
* Lower more input sizes
* Add new line, conv2d only accepts weight as expr
* Use tvm.runtime.ndarray
* Remove change to torch version install
* Try no_grad for mobilenet
* Fix lint
* Fix lint again
* Revert to last passing
* Delete test files
* Ignore lint
* Revert back
* Comment out mobilenet
* Clean up compare compiled and baseline outputs
* Use IRModule
* Add todos
* Refactor use_bias
* Add todo to fix conv op channels
* Change input to data type
* Remove todo
* Handle channel multiplier > 1
Alex Wong committed
Contents of `python/tvm/relay/` in this commit (directories first; the "Last commit" / "Last update" columns never loaded in the scraped page and are omitted):

* backend/
* frontend/
* grammar/
* op/
* qnn/
* quantize/
* std/
* testing/
* __init__.py
* _analysis.py
* _base.py
* _build_module.py
* _expr.py
* _make.py
* _parser.py
* _transform.py
* adt.py
* analysis.py
* annotation.py
* base.py
* build_module.py
* contrib.py
* debug.py
* expr.py
* expr_functor.py
* feature.py
* image.py
* loops.py
* memory_alloc.py
* nn.py
* param_dict.py
* parser.py
* prelude.py
* scope_builder.py
* transform.py
* ty.py
* type_functor.py
* util.py
* vision.py