  9. 13 Apr, 2020 2 commits
    • [PYTORCH]Reduce_ops support added (#5308) · 6805d543
      * [PYTORCH]Reduce_ops support added
      
      * Review comments updated
      
      * typo bug in qnn test
      Samuel committed
    • [Torch] Support Python list, more realistic recurrent networks (#5306) · 0145cd50
      * use funcs from prelude, pass around convert_map
      
      * get relay input type from user ishape
      
      * handle tuple unpack
      
      * experimenting with static tensor array
      
      * use prelude concat instead of cons + rev
      
      * minor clean up
      
      * fix layer norm conversion bug, unwrap tensor array
      
      * add infer shape on tensor array
      
      * pass around prelude for now
      
      * compile worked but runtime error
      
      * fix tensor array wrapping
      
      * begin list dynamic test
      
      * is_list_dynamic first version
      
      * finish dynamic list test
      
      * a few fixes
      
      * use shape_of function if Any is found
      
      * improve size conversion
      
      * working on adding free vars to loop block
      
      * fixed inlined inner loop issue
      
      * clean up free var handling
      
      * add support for tensor array concat
      
      * adding ta concat on last axis
      
      * fix concat, but got runtime error
      
      * disable concat on axis -1 for now
      
      * add lstm tests
      
      * revert unrelated change
      
      * fix stacked bidir test
      
      * minor fix to test
      
      * relax tol a bit, revert dnnl change to avoid conflict
      
      * simplify infer type, use input tensor shape rather than concat shape
      
      * more shape fix
      masahi committed
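The dynamic-list work above hinges on lowering Python list operations to purely functional tensor-array operations from Relay's prelude (append via cons/concat, then a final concatenate). A minimal pure-Python stand-in for that idea, with hypothetical helper names that are not the actual TVM implementation:

```python
# Toy sketch of the tensor-array lowering described above: list
# append and final concatenation are modeled as purely functional
# operations (each returns a new array), mirroring how the frontend
# maps Python lists onto Relay's prelude tensor array instead of
# mutating state. All names here are hypothetical.

def ta_empty():
    """An empty tensor array."""
    return ()

def ta_write(ta, value):
    """Append by building a new array, like prelude cons/concat."""
    return ta + (value,)

def ta_concat(ta):
    """Concatenate the stored 1-D 'tensors' (lists) along axis 0."""
    out = []
    for t in ta:
        out.extend(t)
    return out

# A loop collecting per-step outputs, then concatenated: the same
# shape of computation an RNN conversion produces.
ta = ta_empty()
for step in range(3):
    ta = ta_write(ta, [step, step + 1])
result = ta_concat(ta)  # [0, 1, 1, 2, 2, 3]
```

The functional formulation is what lets a static compiler reason about lists whose length is only known at run time.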
  21. 10 Mar, 2020 2 commits
    • [Torch] Add initial control flow support (#4964) · 06e9542e
      * Add support for prim::If and prim::Loop with test cases
      
      * rebase and fix tests
      
      * add some comments
      
      * simplifying, fix float cast
      
      * parse -> convert
      
      * recursively retrieve ops in get_all_op_names
      
      * use multiple return values from block correctly, simplify loop convert
      
      * choose dtype properly for zeros and ones
      
      * simplifying, replace convert_inputs with _get_relay_input_vars
      
      * fix for while loop with non input dependent init cond
      
      * add assert on loop var update
      
      * move the condition around
      
      * better testing for seg models
      
      * rebase fix, disable inception v3 in quant test as it is too slow to load with torch-1.4 + torchvision 0.5
      
      * simplify and add more comparison op converter
      masahi committed
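Converting `prim::Loop` as these commits describe means rewriting an imperative loop into a function over explicit loop variables, since a functional IR like Relay has no mutable loop state. A hedged sketch of that transformation with a hypothetical `while_loop` helper (not TVM's actual implementation):

```python
# Sketch of how an imperative prim::Loop can be expressed as a
# condition/body pair over explicit loop variables, the form a
# functional IR uses. Hypothetical helper, not TVM's real code.

def while_loop(cond, body, init_vals):
    """Apply `body` to the loop variables until `cond` is false."""
    vals = init_vals
    while cond(*vals):
        vals = body(*vals)
    return vals

# Example: sum 0..4 with two loop variables (i, acc).
i, acc = while_loop(
    cond=lambda i, acc: i < 5,
    body=lambda i, acc: (i + 1, acc + i),
    init_vals=(0, 0),
)
```

Making every value the loop updates an explicit loop variable is also why the commit above adds an assert on loop-variable updates: any state the body touches must flow through `init_vals`.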
  24. 04 Mar, 2020 1 commit
    • [Torch, QNN] Add support for quantized models via QNN (#4977) · fc7f0783
      * qnn support initial import
      
      * fix upsampling num input
      
      * imagenet tests added
      
      * add quantized module tests
      
      * quantized module tests working
      
      * imagenet test working
      
      * fix lint
      
      * remove top level torch import to fix ci error
      
      * disable lint warning on outside toplevel import
      
      * revert parse -> convert change
      
      * add comments to qnn translation
      
      * address comments, add sample outputs
      
      * add more comments
      
      * refactor bias add and requantize step
      Animesh Jain committed
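The "requantize step" refactored in the last commit is the rescaling that quantized model import needs: a quantized conv/dense accumulates in int32, and the result must be mapped into the output tensor's 8-bit quantization parameters. The formula below is the standard affine-quantization math, written as a plain-Python sketch rather than TVM's exact QNN code:

```python
# Hedged sketch of requantization: rescale an int32 accumulator
# into the output's quint8 quantization parameters. Standard
# affine-quantization arithmetic; names are illustrative only.

def requantize(acc_int32, in_scale, out_scale, out_zero_point):
    """Map an int32 accumulator value to a quint8 output value."""
    real = acc_int32 * in_scale                  # back to the real domain
    q = round(real / out_scale) + out_zero_point # into output quant params
    return max(0, min(255, q))                   # clamp to uint8 range

q = requantize(acc_int32=1234, in_scale=0.002, out_scale=0.05,
               out_zero_point=128)  # 177
```

In practice `in_scale` is the product of the input and weight scales, which is why bias add (also in int32, at that same scale) is folded in before this step.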
  28. 25 Feb, 2020 1 commit
    • [Relay] Add a PyTorch to Relay Parser (#4497) · 87c20bb2
      * Add a PyTorch to Relay parser
      
      * Add alexnet, googlenet, mnasnet, shufflenet wip
      
      * Fix lint
      
      * Remove fix for shufflenet
      
      * Lower check
      
      * Pull changes from neo-ai/tvm changes
      
      * Remove commented out section
      
      * Use infer_shape everywhere
      
      * Change back to using trace instead of path in from_pytorch
      
      * Parse state_dict to add param names
      
      * Umbrella single_op under test_forwards
      
      * Remove print and cleanup call
      
      * Check if update to test broke CI
      
      * Retrigger CI
      
      * Add back in updated tests
      
      * Try splitting up tests
      
      * First pass at flexible typing, implemented for ones
      
      * Add int32 for all ops
      
      * Remove print statements
      
      * Fix lint
      
      * Broad except
      
      * Add other tensor types
      
      * Temporarily use old tests
      
      * Retrigger CI
      
      * Lower type names
      
      * Use numpy to convert in dense op
      
      * Fix lint
      
      * Remove print
      
      * Need to cleanup but verify int32 works for add
      
      * Rough tests for different types, a lot of types are not supported on CPU
      
      * Probably doesn't build, need to save work as I have to switch branches (constantly)
      
      * Parse param type
      
      * Remove print stmt in parser
      
      * Clean up some code
      
      * Working on float32 for bn
      
      * Add resnet18 double type
      
      * Fix lint
      
      * Temporarily move PT tests first
      
      * Temporarily add back refactored tests to fix mem issue
      
      * Add more type test and temp remove some tests
      
      * Comment out tests, hopefully CI prints a trace
      
      * Get stack trace
      
      * Remove operator dict key, rename op_name to node_id, remove dead code
      
      * Make relay map a list
      
      * Remove some hacky string stuff
      
      * Move to PyTorch 1.4
      
      * Remove input_type as param
      
      * Remove _get_fill_value, fix full ops
      
      * Remove unused code and combine ops for identity and none
      
      * Remove fn_param
      
      * Clean up main loop
      
      * Remove useless if/else for outputs
      
      * Remove ir_names, only used once
      
      * Remove some string hacking
      
      * Remove string parsing to get output name
      
      * Fix bug with output sizes of nodes
      
      * Use attributeNames in parse ops
      
      * Remove continue and add_op in parse_op
      
      * Do this everywhere, use assert instead of explicitly type casting
      
      * Remove unnecessary swap
      
      * Slight refactor for elemwise input parse
      
      * Use a copy of graph everywhere
      
      * Rename nid_to_node_name
      
      * Refactor parse import prereqs
      
      * Clean up input node kind check
      
      * Clean up conditionals
      
      * Clean up add_op
      
      * Cleanup type for ones and zeros op
      
      * Fix lint
      
      * Add torch install to CI
      
      * Actually use torch
      
      * Try moving import torch to only where it's needed
      
      * Import torch for CI
      
      * Use take op for select
      
      * Temporarily add ignore for jit inline pass for CI
      
      * Use CompleteTensorType, might be a PT 1.2 only thing
      
      * Use different types in elemwise op
      
      * Use float16 ones
      
      * Fix float16 test
      
      * Remove the temp docker changes
      
      * Remove temp test
      
      * Temporarily comment out original tests
      
      * Remove file
      
      * Empty cache after each test
      
      * Add some prints and lower input sizes
      
      * Try using no grad
      
      * Trying to globally set grad off
      
      * Use no grad for torchvision
      
      * Remove xfail tests
      
      * Remove VGG and AlexNet due to some issues
      
      * Combine pooling tests
      
      * Remove extra test file
      
      * Remove single op, remove larger pooling tests
      
      * Remove maxpool3
      
      * Remove debug prints
      
      * Remove inference call and add no_grad in measure latency
      
      * Use standard string start char
      
      * Remove redundant infer_shape in slice
      
      * Convert most to checks to just expr
      
      * Remove extra paren
      
      * More refactor of isinstance
      
      * Add helper for creating typed constants
      
      * Assert instead of return when no matching type
      
      * Remove network variants
      
      * Add no_grad when forward, remove detach, fix lint
      
      * Change isinstance to expr in transpose
      
      * Use opnotimplemented, refactor
      
      * Fix full ops, remove duplicate tests
      
      * Never use shape field unless we know the type
      
      * Remove comma, retrigger CI
      
      * Add paren, retrigger CI
      
      * Use inline if-else for flags
      
      * Throw exception instead of assert
      
      * Remove version check for CI
      
      * Check version when doing inline pass
      
      * Fix lint
      
      * Lower more input sizes
      
      * Add new line, conv2d only accepts weight as expr
      
      * Use tvm.runtime.ndarray
      
      * Remove change to torch version install
      
      * Try no grad for mobilenet
      
      * Fix lint
      
      * Fix lint again
      
      * Revert to last passing
      
      * Delete test files
      
      * Ignore lint
      
      * Revert back
      
      * Comment out mobilenet
      
      * Clean up compare compiled and baseline outputs
      
      * Use IRModule
      
      * Add todos
      
      * Refactor use_bias
      
      * Add todo for fix conv op channels
      
      * Change input to data type
      
      * Remove todo
      
      * Handle channel multiplier > 1
      Alex Wong committed
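The parser introduced here follows a converter-dispatch pattern: each TorchScript node kind (e.g. `aten::add`) maps to a function that emits the corresponding Relay expression, and the main loop walks the graph nodes in order. A simplified stand-in, where the graph, node format, and converter signatures are illustrative rather than the real `from_pytorch` internals:

```python
# Minimal sketch of the converter-dispatch pattern a graph parser
# like this uses. Node kinds map to converter functions; the main
# loop threads converted values through an environment. All data
# shapes here are simplified stand-ins, not TorchScript's API.

convert_map = {
    "aten::add": lambda a, b: a + b,
    "aten::mul": lambda a, b: a * b,
}

def convert_graph(nodes, env):
    """Walk nodes in order, converting each via convert_map.

    nodes: list of (output_name, kind, input_names)
    env:   name -> already-converted value (seeded with graph inputs)
    """
    for out_name, kind, input_names in nodes:
        if kind not in convert_map:
            raise NotImplementedError(kind)  # unsupported op
        args = [env[name] for name in input_names]
        env[out_name] = convert_map[kind](*args)
    return env

# (x + y) * x with x=3, y=4
env = convert_graph(
    nodes=[("t1", "aten::add", ["x", "y"]),
           ("out", "aten::mul", ["t1", "x"])],
    env={"x": 3, "y": 4},
)
```

This also explains several commits above: `get_all_op_names` exists to check every node kind against the map up front, and raising for unsupported kinds (the "Use opnotimplemented" commit) beats silently skipping a node.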