  1. 27 Feb, 2020 2 commits
    • [DOCS] Sphinx -- Introduce alias detection. (#4954) · 1dbdcfb5
      * [DOCS] Sphinx -- Introduce alias detection.
      
      Background: some of our namespaces import functions from another
      namespace. For example, tvm.te imports most of the operators from tvm.tir.
      
      Previously we manually excluded these aliases from the docs.
      However, that meant we could not link them by their alias names.
      
      This PR adds a Sphinx callback plugin to detect such aliases and create a
      rubric block at the bottom of each alias's docstring reading
      `Alias of the original class`. This is done so that we can still refer to
      the generated docs by the alias name; a sketch of the mechanism follows below.
      
      We also fix a few documentation errors.
      
      * Fix most of the issues
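      
      A minimal sketch of what such an autodoc callback can look like (the names
      and wiring here are illustrative assumptions, not TVM's actual plugin):
      
      ```python
      # conf.py -- hypothetical alias-detection hook for Sphinx autodoc.
      def process_docstring(app, what, name, obj, options, lines):
          # A re-exported (aliased) object reports a defining module that
          # differs from the namespace it is documented under.
          real = getattr(obj, "__module__", None)
          doc_module = name.rsplit(".", 1)[0]
          if real and real != doc_module:
              alias_of = "%s.%s" % (real, getattr(obj, "__name__", name))
              lines.append("")
              lines.append(".. rubric:: Alias of :py:obj:`%s`" % alias_of)
      
      def setup(app):
          # autodoc fires this event for every docstring it processes.
          app.connect("autodoc-process-docstring", process_docstring)
      ```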
      Tianqi Chen committed
    • [REFACTOR][PY][API-CHANGE] Remove legacy python files. (#4943) · 9816efc2
      * [REFACTOR][PY][API-CHANGE] Remove legacy python files.
      
      Remove legacy python files.
      Use the te namespace for most of the tensor expression primitives.
      
      - tvm.create_schedule -> tvm.te.create_schedule
      - tvm.placeholder -> tvm.te.placeholder
      - tvm.compute -> tvm.te.compute
      
      * Remove top-level exposures.
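      
      A minimal sketch of the renames above (a hedged example assuming the
      tvm.te namespace introduced by this refactor):
      
      ```python
      from tvm import te
      
      # The tensor-expression helpers now live under tvm.te:
      #   tvm.placeholder     -> te.placeholder
      #   tvm.compute         -> te.compute
      #   tvm.create_schedule -> te.create_schedule
      A = te.placeholder((128,), name="A")
      B = te.compute((128,), lambda i: A[i] + 1.0, name="B")
      s = te.create_schedule(B.op)
      ```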
      Tianqi Chen committed
  2. 25 Feb, 2020 1 commit
    • [Relay] Add a PyTorch to Relay Parser (#4497) · 87c20bb2
      * Add a PyTorch to Relay parser
      
      * Add alexnet, googlenet, mnasnet, shufflenet wip
      
      * Fix lint
      
      * Remove fix for shufflenet
      
      * Lower check
      
      * Pull in changes from neo-ai/tvm
      
      * Remove commented out section
      
      * Use infer_shape everywhere
      
      * Change back to using trace instead of path in from_pytorch
      
      * Parse state_dict to add param names
      
      * Umbrella single_op under test_forwards
      
      * Remove print and cleanup call
      
      * Check if update to test broke CI
      
      * Retrigger CI
      
      * Add back in updated tests
      
      * Try splitting up tests
      
      * First pass at flexible typing, implemented for ones
      
      * Add int32 for all ops
      
      * Remove print statements
      
      * Fix lint
      
      * Broad except
      
      * Add other tensor types
      
      * Temporarily use old tests
      
      * Retrigger CI
      
      * Lower type names
      
      * Use numpy to convert in dense op
      
      * Fix lint
      
      * Remove print
      
      * Need to clean up, but verify int32 works for add
      
      * Rough tests for different types; a lot of types are not supported on CPU
      
      * Probably doesn't build, need to save work as I have to switch branches (constantly)
      
      * Parse param type
      
      * Remove print stmt in parser
      
      * Clean up some code
      
      * Working on float32 for bn
      
      * Add resnet18 double type
      
      * Fix lint
      
      * Temporarily move PT tests first
      
      * Temporarily add back refactored tests to fix mem issue
      
      * Add more type test and temp remove some tests
      
      * Comment out tests, hopefully CI prints a trace
      
      * Get stack trace
      
      * Remove operator dict key, rename op_name to node_id, remove dead code
      
      * Make relay map a list
      
      * Remove some hacky string stuff
      
      * Move to PyTorch 1.4
      
      * Remove input_type as param
      
      * Remove _get_fill_value, fix full ops
      
      * Remove unused code and combine ops for identity and none
      
      * Remove fn_param
      
      * Clean up main loop
      
      * Remove useless if/else for outputs
      
      * Remove ir_names, only used once
      
      * Remove some string hacking
      
      * Remove string parsing to get output name
      
      * Fix bug with output sizes of nodes
      
      * Use attributeNames in parse ops
      
      * Remove continue and add_op in parse_op
      
      * Do this everywhere, use assert instead of explicitly type casting
      
      * Remove unnecessary swap
      
      * Slight refactor for elemwise input parse
      
      * Use a copy of graph everywhere
      
      * Rename nid_to_node_name
      
      * Refactor parse import prereqs
      
      * Clean up input node kind check
      
      * Clean up conditionals
      
      * Clean up add_op
      
      * Cleanup type for ones and zeros op
      
      * Fix lint
      
      * Add torch install to CI
      
      * Actually use torch
      
      * Try moving import torch to only where it's needed
      
      * Import torch for CI
      
      * Use take op for select
      
      * Temporarily add ignore for jit inline pass for CI
      
      * Use CompleteTensorType, might be a PT 1.2 only thing
      
      * Use different types in elemwise op
      
      * Use float16 ones
      
      * Fix float16 test
      
      * Remove the temp docker changes
      
      * Remove temp test
      
      * Temporarily comment out original tests
      
      * Remove file
      
      * Empty cache after each test
      
      * Add some prints and lower input sizes
      
      * Try using no grad
      
      * Trying to globally set grad off
      
      * Use no grad for torchvision
      
      * Remove xfail tests
      
      * Remove VGG and AlexNet due to some issues
      
      * Combine pooling tests
      
      * Remove extra test file
      
      * Remove single op, remove larger pooling tests
      
      * Remove maxpool3
      
      * Remove debug prints
      
      * Remove inference call and add no_grad in measure latency
      
      * Use standard string start char
      
      * Remove redundant infer_shape in slice
      
      * Convert most checks to just expr
      
      * Remove extra paren
      
      * More refactor of isinstance
      
      * Add helper for creating typed constants
      
      * Assert instead of return when no matching type
      
      * Remove network variants
      
      * Add no_grad when forwarding, remove detach, fix lint
      
      * Change isinstance to expr in transpose
      
      * Use OpNotImplemented, refactor
      
      * Fix full ops, remove duplicate tests
      
      * Never use shape field unless we know the type
      
      * Remove comma, retrigger CI
      
      * Add paren, retrigger CI
      
      * Use inline if-else for flags
      
      * Throw exception instead of assert
      
      * Remove version check for CI
      
      * Check version when doing inline pass
      
      * Fix lint
      
      * Lower more input sizes
      
      * Add new line, conv2d only accepts weight as expr
      
      * Use tvm.runtime.ndarray
      
      * Remove change to torch version install
      
      * Try no grad for mobilenet
      
      * Fix lint
      
      * Fix lint again
      
      * Revert to last passing
      
      * Delete test files
      
      * Ignore lint
      
      * Revert back
      
      * Comment out mobilenet
      
      * Clean up compare compiled and baseline outputs
      
      * Use IRModule
      
      * Add todos
      
      * Refactor use_bias
      
      * Add todo for fix conv op channels
      
      * Change input to data type
      
      * Remove todo
      
      * Handle channel multiplier > 1
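      
      A hedged usage sketch of the resulting frontend entry point (the exact
      signature evolved after this PR; the input name "input0" is an assumption):
      
      ```python
      import torch
      import torchvision
      from tvm import relay
      
      # The PR switched from_pytorch to take a trace rather than a path, and
      # its tests run the forward pass under no_grad.
      model = torchvision.models.resnet18().eval()
      inp = torch.randn(1, 3, 224, 224)
      with torch.no_grad():
          trace = torch.jit.trace(model, inp)
      
      # The second argument maps input names to shapes.
      mod, params = relay.frontend.from_pytorch(trace, [("input0", (1, 3, 224, 224))])
      ```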
      Alex Wong committed
  3. 07 Feb, 2020 1 commit
    • [REFACTOR][PY][API-Change] Polish tvm.runtime, tvm.runtime.module API update (#4837) · e0122c0e
      * [REFACTOR][PY-API] Polish tvm.runtime, tvm.runtime.module API update
      
      This PR updates the tvm.runtime to use the new FFI style.
      
      - Remove top-level tvm.module to avoid confusion between runtime.Module and IRModule
      - API changes w.r.t. runtime.Module
        - tvm.module.load -> tvm.runtime.load_module
        - tvm.module.enabled -> tvm.runtime.enabled
        - tvm.module.system_lib -> tvm.runtime.system_lib
      - Remove dep on api_internal from runtime.
      
      * Update module.load to the latest API
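      
      A minimal sketch of the renamed runtime API (the library file name is a
      hypothetical placeholder):
      
      ```python
      import tvm.runtime
      
      mod = tvm.runtime.load_module("compiled_lib.so")  # was tvm.module.load
      has_llvm = tvm.runtime.enabled("llvm")            # was tvm.module.enabled
      syslib = tvm.runtime.system_lib()                 # was tvm.module.system_lib
      ```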
      Tianqi Chen committed
  4. 16 Jan, 2020 1 commit
    • [Arith] add SizeVar representing non-neg valued variable in a tensor shape (#4684) · 3a672e3e
      * [arith] add ShapeVar representing non-neg valued variable in a tensor shape
      
      * bound remover; deal with div in int_set differently
      
      * fix bound_remover
      
      * migrate unittest to use shape_var
      
      * use tvm.shape_var in integration & relay tests
      
      * add test case; fix Var register
      
      * fix lint
      
      * fix lint again
      
      * add default ShapeVar visitor in Relay
      
      * fix override
      
      * fix ShapeVar visit bug
      
      * revert IntervalSet for shape_var
      
      * remove bound_remover
      
      * remove is_var; use constructor for shapevar/var instead
      
      * ShapeVar -> SizeVar; add constructor comments
      
      * shape_var -> size_var in doc
      
      * tindex -> size
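      
      A minimal sketch of SizeVar in use (via the te.size_var constructor; the
      shapes are illustrative):
      
      ```python
      from tvm import te
      
      # A SizeVar is a Var that the arithmetic analyzer may assume to be
      # non-negative, tightening bounds when simplifying shape expressions.
      n = te.size_var("n")               # non-negative, unlike a plain te.var
      A = te.placeholder((n,), name="A")
      B = te.compute((n,), lambda i: A[i] * 2.0, name="B")
      ```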
      Yizhi Liu committed
  5. 24 Sep, 2019 1 commit
    • [Relay] Add new IR pass CombineParallelDense (#3862) · ed9fdfb0
      * Refactor to create abstract ParallelOpCombiner
      
      * First draft of CombineParallelDense
      
      * Begin to work on tests
      
      * Test
      
      * Refactor to move out more common code
      
      * Clean up
      
      * Fix
      
      * Remove statics
      
      * fix wording
      
      * Start to add combine_parallel_op_batch
      
      * Resolve PR comments
      
      * Resolve PR comments
      
      * dummy change to retrigger CI
      
      * Change special case from bias_add to add
      
      * Revert special case change
      
      * Ignore units check
      
      * dummy change to retrigger CI
      
      * dummy change to re-trigger CI
      
      * Improve docs
      
      * Update docs
      
      * Update docs
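      
      A hedged usage sketch of the new pass (API spellings follow recent TVM;
      shapes are illustrative):
      
      ```python
      import tvm
      from tvm import relay
      
      # Two dense ops that share the same input can be combined into one.
      x = relay.var("x", shape=(1, 16))
      w1 = relay.var("w1", shape=(32, 16))
      w2 = relay.var("w2", shape=(32, 16))
      out = relay.Tuple([relay.nn.dense(x, w1), relay.nn.dense(x, w2)])
      mod = tvm.IRModule.from_expr(relay.Function([x, w1, w2], out))
      mod = relay.transform.CombineParallelDense(min_num_branches=2)(mod)
      ```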
      Jon Soifer committed
  6. 22 Aug, 2019 1 commit
    • [TOPI][Relay][TensorFlow] Add OneHot operator (#3781) · 554df211
      * Add one-hot to Relay
      
      * topi implementation
      
      * Working
      
      * add topi test
      
      * Add TF test
      
      * Fix check
      
      * fix linting issues
      
      * fix documentation
      
      * Fix documentation
      
      * Add support for on_value, off_value, axis, dtype
      
      * Add full support for axis
      
      * Fix compute and update test_forward
      
      * Move on_value and off_value to inputs
      
      * Add topi test
      
      * Update tests
      
      * Update docs
      
      * Fix style
      
      * re-enable tests
      
      * Add one_hot to mxnet converter
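      
      A minimal sketch of the resulting Relay API (per the commit, on_value and
      off_value are inputs rather than attributes; the shapes are illustrative):
      
      ```python
      from tvm import relay
      
      indices = relay.var("indices", shape=(3,), dtype="int32")
      on_value = relay.const(1.0)
      off_value = relay.const(0.0)
      # depth, axis and dtype remain attributes; on/off values are expressions.
      out = relay.one_hot(indices, on_value, off_value,
                          depth=4, axis=-1, dtype="float32")
      ```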
      Jon Soifer committed
  7. 23 Jul, 2019 1 commit
    • {relay,topi}.reinterpret support (#3599) · 2ed31b24
      = Motivation
      
      It's useful to expose the tvm::reinterpret functionality to Relay/TOPI users,
      as this allows them to build (fused) operators that leverage bitwise
      reinterpretation of their operands. An example is approximate transcendental
      functions, which can be implemented similarly to:
      
      ```python
          def C(x):
              return relay.expr.const(x, "float32")
      
          def approx_exp(x):
              x = relay.minimum(relay.maximum(x, C(-88.0)), C(88.0))
              x = C(127.0) + x * C(1.44269504)
              xf = relay.floor(x)
              i = relay.cast(xf, "int32")
              x = x - xf
              Y = C(0.99992522) + x * (C(0.69583354) + x * (C(0.22606716) + x * C(0.078024523)))
              exponent = relay.left_shift(i, relay.expr.const(23, "int32"))
              exponent = relay.reinterpret(exponent, "float32")
              return exponent * Y
      
          def approx_sigmoid(x):
              # <2.0e-5 absolute error over [-5, 5]
              y = approx_exp(x)
              return y / (y + C(1.0))
      
          def approx_tanh(x):
              # <4.0e-5 absolute error over [-5, 5]
              x = x * C(2.0)
              y = approx_exp(x)
              return (y - C(1.0)) / (y + C(1.0))
      ```
      
      See the unit tests for implementations of these approximate transcendentals.
      Andrew Tulloch committed
  8. 28 Jun, 2019 1 commit
    • [RELAY] [OP] [MXNet Frontend] Add sequence_mask (#3437) · 8ef22176
      * Add sequence_mask
      
      use exactly the same arguments as mxnet
      
      fix
      
      * fix lint
      
      * fix lint
      
      * add mxnet conversion + relay
      
      * update
      
      * update doc
      
      * fix pylint
      
      * fix doc
      
      * address comment
      
      * try to address comments
      
      * try to enable shape check for valid_length
      
      * fix
      
      * try to fix
      
      * fix bug
      
      * try to fix
      
      * address comment
      
      * address comment
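      
      A minimal sketch of the resulting Relay API (shapes are illustrative; the
      arguments mirror MXNet's SequenceMask):
      
      ```python
      from tvm import relay
      
      # With axis=0, data is laid out (timesteps, batch, ...); valid_length
      # gives the number of valid timesteps per batch element.
      data = relay.var("data", shape=(10, 4, 16))
      valid_length = relay.var("valid_length", shape=(4,), dtype="int32")
      out = relay.sequence_mask(data, valid_length, mask_value=0.0, axis=0)
      ```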
      Xingjian Shi committed