04 Mar, 2020 3 commits
    • [Torch, QNN] Add support for quantized models via QNN (#4977) · fc7f0783
      * qnn support initial import
      
      * fix upsampling num input
      
      * imagenet tests added
      
      * add quantized module tests
      
      * quantized module tests working
      
      * imagenet test working
      
      * fix lint
      
      * remove top level torch import to fix ci error
      
      * disable lint warning on outside toplevel import
      
      * revert parse -> convert change
      
      * add comments to qnn translation
      
      * address comments, add sample outputs
      
      * add more comments
      
      * refactor bias add and requantize step
      Animesh Jain committed
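
      A minimal sketch of the quantized flow this entry adds, assuming the era's from_pytorch signature (traced module plus a list of (name, shape) pairs) and torchvision's prequantized ResNet-18; argument formats vary across TVM versions:

        import torch
        import torchvision
        from tvm import relay

        # Start from a model quantized ahead of time with PyTorch's own tooling.
        model = torchvision.models.quantization.resnet18(
            pretrained=True, quantize=True).eval()

        inp = torch.rand(1, 3, 224, 224)
        script_module = torch.jit.trace(model, inp).eval()

        # The frontend maps PyTorch's quantized ops onto Relay QNN ops.
        mod, params = relay.frontend.from_pytorch(
            script_module, [("input", (1, 3, 224, 224))])
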
    • Tighten split's extent (#4931) · 585f9ce6
      * Set the split node's range to the minimum of the extent and the split factor (or split nparts), but only when PassDownDomain is called with allow_missing == false, i.e. by InferBound. Add a helper PassUpThreadBinding() that builds a map indicating whether an IterVar has at least one leaf IterVar derived from it that is bound to a thread. Add two unit tests.
      
      * Enhance LoopVectorizer to handle vectorizing by an extent of 0. Found at least one such case in topi/tests/python/test_topi_transform.py::test_tile.
      
      * Revert changes to vectorize_loop.cc; when the parent's extent is zero, set the split's range to the factor or nparts.
      
      * Update with comments.
      
      * Refactor the ext tightening predicate.
      
      * Fix reference types.
      
      * Integrate tvm.te changes.
      
      * Trivial comment change to trigger CI.
      
      * Trivial comment correction to trigger testing.
      Lianmin Zheng committed
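
      A small sketch of the case this entry tightens (shapes hypothetical): splitting an extent-10 axis by factor 32 used to leave an inner loop of extent 32 behind a guard, which InferBound can now clamp:

        import tvm
        from tvm import te

        A = te.placeholder((10,), name="A")
        B = te.compute((10,), lambda i: A[i] + 1, name="B")

        s = te.create_schedule(B.op)
        # The factor (32) exceeds the axis extent (10).
        xo, xi = s[B].split(B.op.axis[0], factor=32)

        # After this change, InferBound clamps the inner extent to
        # min(10, 32) instead of keeping a full 32-iteration loop.
        print(tvm.lower(s, [A, B], simple_mode=True))
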
    • [Torch] fix unordered dictionary problem for python version under 3.6 (#4982) · 5a0f39b5
      * fix unordered dictionary problem for Python versions below 3.6
      
      * modify style
      pyjhzwh committed
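
      For context, the underlying Python behavior (keys hypothetical): plain dicts only keep insertion order from CPython 3.6 onward, so frontends targeting older interpreters must use OrderedDict:

        from collections import OrderedDict

        # A plain dict preserves insertion order only from CPython 3.6 on
        # (guaranteed by the language from 3.7); OrderedDict works everywhere.
        outputs = OrderedDict()
        outputs["conv1"] = "first node"
        outputs["bn1"] = "second node"

        assert list(outputs) == ["conv1", "bn1"]
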
25 Feb, 2020 4 commits
    • [Fix] remove unnecessary splitting in the cached chunk (#4935) · 545f6ea3
      * remove unnecessary splitting in the cached chunk
      
      * remove unnecessary splitting in the cached chunk
      Yida Wang committed
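
      The message is terse; as a generic illustration (names and factors hypothetical), once a cached stage is anchored under the consumer's outer loop with compute_at, its extent is already bounded to one chunk, so splitting it again is unnecessary:

        import tvm
        from tvm import te

        A = te.placeholder((1024,), name="A")
        B = te.compute((1024,), lambda i: A[i] * 2, name="B")

        s = te.create_schedule(B.op)
        BB = s.cache_write(B, "global")  # the cached chunk

        xo, xi = s[B].split(B.op.axis[0], factor=64)
        # compute_at already bounds BB to one 64-element chunk per iteration
        # of xo; splitting BB's own axis again would be redundant.
        s[BB].compute_at(s[B], xo)

        print(tvm.lower(s, [A, B], simple_mode=True))
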
    • [LLVM] Fix build breaks from StringRef changes (#4923) · 588523dd
      - llvm::StringRef to std::string conversion is explicit now.
      
      Signed-off-by: Wei Pan <wpan11nv@nvidia.com>
      wpan11nv committed
    • [Relay][External Codegen] Support data types for CSourceModuleCodegen args and output (#4934) · a2429c1f
      * Support int args and no extra buffers
      
      * Fixes
      
      * remove testing code
      
      * fix style
      
      * more style
      
      * use const args
      
      * style
      
      Co-authored-by: Jon Soifer <jonso@microsoft.com>
      Jon Soifer committed
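
      A rough sketch of what this enables, marking a Relay function with int32 arguments for the built-in "ccompiler" C-source codegen; the attribute names follow TVM's external-codegen tests of this era and are an assumption here:

        import tvm
        from tvm import relay

        x = relay.var("x", shape=(8, 8), dtype="int32")
        y = relay.var("y", shape=(8, 8), dtype="int32")
        f = relay.Function([x, y], relay.add(x, y))

        # Mark the function for the external C-source codegen.
        f = f.with_attr("Primitive", tvm.tir.IntImm("int32", 1))
        f = f.with_attr("Compiler", "ccompiler")
        f = f.with_attr("global_symbol", "ccompiler_0")

        a = relay.var("a", shape=(8, 8), dtype="int32")
        b = relay.var("b", shape=(8, 8), dtype="int32")
        mod = tvm.IRModule()
        mod["main"] = relay.Function([a, b], relay.Call(f, [a, b]))
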
    • [Relay] Add a PyTorch to Relay Parser (#4497) · 87c20bb2
      * Add a PyTorch to Relay parser
      
      * Add alexnet, googlenet, mnasnet, shufflenet wip
      
      * Fix lint
      
      * Remove fix for shufflenet
      
      * Lower check
      
      * Pull changes from neo-ai/tvm
      
      * Remove commented out section
      
      * Use infer_shape everywhere
      
      * Change back to using trace instead of path in from_pytorch
      
      * Parse state_dict to add param names
      
      * Umbrella single_op under test_forwards
      
      * Remove print and cleanup call
      
      * Check if update to test broke CI
      
      * Retrigger CI
      
      * Add back in updated tests
      
      * Try splitting up tests
      
      * First pass at flexible typing, implemented for ones
      
      * Add int32 for all ops
      
      * Remove print statements
      
      * Fix lint
      
      * Broad except
      
      * Add other tensor types
      
      * Temporarily use old tests
      
      * Retrigger CI
      
      * Lower type names
      
      * Use numpy to convert in dense op
      
      * Fix lint
      
      * Remove print
      
      * Need to cleanup but verify int32 works for add
      
      * Rough tests for different types, a lot of types are not supported on CPU
      
      * Probably doesn't build, need to save work as I have to switch branches (constantly)
      
      * Parse param type
      
      * Remove print stmt in parser
      
      * Clean up some code
      
      * Working on float32 for bn
      
      * Add resnet18 double type
      
      * Fix lint
      
      * Temporarily move PT tests first
      
      * Temporarily add back refactored tests to fix mem issue
      
      * Add more type test and temp remove some tests
      
      * Comment out tests, hopefully CI prints a trace
      
      * Get stack trace
      
      * Remove operator dict key, rename op_name to node_id, remove dead code
      
      * Make relay map a list
      
      * Remove some hacky string stuff
      
      * Move to PyTorch 1.4
      
      * Remove input_type as param
      
      * Remove _get_fill_value, fix full ops
      
      * Remove unused code and combine ops for identity and none
      
      * Remove fn_param
      
      * Clean up main loop
      
      * Remove useless if/else for outputs
      
      * Remove ir_names, only used once
      
      * Remove some string hacking
      
      * Remove string parsing to get output name
      
      * Fix bug with output sizes of nodes
      
      * Use attributeNames in parse ops
      
      * Remove continue and add_op in parse_op
      
      * Do this everywhere, use assert instead of explicitly type casting
      
      * Remove unnecessary swap
      
      * Slight refactor for elemwise input parse
      
      * Use a copy of graph everywhere
      
      * Rename nid_to_node_name
      
      * Refactor parse import prereqs
      
      * Clean up input node kind check
      
      * Clean up conditionals
      
      * Clean up add_op
      
      * Cleanup type for ones and zeros op
      
      * Fix lint
      
      * Add torch install to CI
      
      * Actually use torch
      
      * Try moving import torch to only where it's needed
      
      * Import torch for CI
      
      * Use take op for select
      
      * Temporarily add ignore for jit inline pass for CI
      
      * Use CompleteTensorType, might be a PT 1.2 only thing
      
      * Use different types in elemwise op
      
      * Use float16 ones
      
      * Fix float16 test
      
      * Remove the temp docker changes
      
      * Remove temp test
      
      * Temporarily comment out original tests
      
      * Remove file
      
      * Empty cache after each test
      
      * Add some prints and lower input sizes
      
      * Try using no grad
      
      * Trying to globally set grad off
      
      * Use no grad for torchvision
      
      * Remove xfail tests
      
      * Remove VGG and AlexNet due to some issues
      
      * Combine pooling tests
      
      * Remove extra test file
      
      * Remove single op, remove larger pooling tests
      
      * Remove maxpool3
      
      * Remove debug prints
      
      * Remove inference call and add no_grad in measure latency
      
      * Use standard string start char
      
      * Remove redundant infer_shape in slice
      
      * Convert most to checks to just expr
      
      * Remove extra paren
      
      * More refactor of isinstance
      
      * Add helper for creating typed constants
      
      * Assert instead of return when no matching type
      
      * Remove network variants
      
      * Add no_grad when forward, remove detach, fix lint
      
      * Change isinstance to expr in transpose
      
      * Use opnotimplemented, refactor
      
      * Fix full ops, remove duplicate tests
      
      * Never use shape field unless we know the type
      
      * Remove comma, retrigger CI
      
      * Add paren, retrigger CI
      
      * Use inline if-else for flags
      
      * Throw exception instead of assert
      
      * Remove version check for CI
      
      * Check version when doing inline pass
      
      * Fix lint
      
      * Lower more input sizes
      
      * Add new line, conv2d only accepts weight as expr
      
      * Use tvm.runtime.ndarray
      
      * Remove change to torch version install
      
      * Try no grad for mobilenet
      
      * Fix lint
      
      * Fix lint again
      
      * Revert to last passing
      
      * Delete test files
      
      * Ignore lint
      
      * Revert back
      
      * Comment out mobilenet
      
      * Clean up compare compiled and baseline outputs
      
      * Use IRModule
      
      * Add todos
      
      * Refactor use_bias
      
      * Add todo for fix conv op channels
      
      * Change input to data type
      
      * Remove todo
      
      * Handle channel multiplier > 1
      Alex Wong committed
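
      A minimal usage sketch of the new parser, assuming the era's from_pytorch signature of a traced module plus a list of (name, shape) input pairs:

        import torch
        import torchvision
        from tvm import relay

        model = torchvision.models.resnet18(pretrained=True).eval()
        inp = torch.rand(1, 3, 224, 224)

        # The frontend consumes a trace; no_grad mirrors how the tests trace.
        with torch.no_grad():
            trace = torch.jit.trace(model, inp).eval()

        mod, params = relay.frontend.from_pytorch(
            trace, [("input0", (1, 3, 224, 224))])
        print(mod["main"])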