- 22 Dec, 2019 1 commit
dtype.h -> runtime/data_type.h. Changes:
- Rename all old references of tvm::Type to DataType
- ExprNode.type -> ExprNode.dtype
- Expr.type() -> Expr.dtype()
- Move Expr-related functions to expr_operator
- DataType::min() -> min_value(DataType)
- DataType::max() -> max_value(DataType)
- Move the type constructors Int, UInt, Float, Handle, Bool into DataType:
  Int(bits) -> DataType::Int(bits), UInt(bits) -> DataType::UInt(bits)
Tianqi Chen committed
- 20 Dec, 2019 1 commit
Zhi committed
- 18 Dec, 2019 2 commits
Alex Gladkov committed
Zhi committed
- 12 Dec, 2019 2 commits
* fix mul rewrite
* register Realize Rewrite for global avg pool and add test
* remove unnecessary check
* improve the test case
masahi committed
* [TOPI] implement pool3d op
* use PoolInferCorrectLayout for both 2d and 3d pooling
* unify MakeMaxPool and MakeAvgPool
optima2005 committed
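For the pool3d addition above, a minimal Relay-style sketch of 3-D max pooling over an NCDHW tensor. The relay.nn.max_pool3d wrapper and the exact keyword names are assumptions (the entry itself adds the TOPI compute), so treat this as illustration only:

```python
import tvm
from tvm import relay

# 5-D input in NCDHW layout: (batch, channel, depth, height, width).
x = relay.var("x", shape=(1, 16, 32, 32, 32), dtype="float32")

# Assumed wrapper mirroring relay.nn.max_pool2d, extended to three spatial dims.
y = relay.nn.max_pool3d(x, pool_size=(2, 2, 2), strides=(2, 2, 2), padding=(0, 0, 0))

mod = tvm.IRModule.from_expr(relay.Function([x], y))
print(relay.transform.InferType()(mod))  # expected output shape: (1, 16, 16, 16, 16)
```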
- 10 Dec, 2019 1 commit
* [Relay][Fix] Fix alter op layout when calling a global var * add test case
Haichen Shen committed
- 08 Dec, 2019 1 commit
Zhi committed
- 05 Dec, 2019 1 commit
* [relay][op] Add shape func to tile
* retrigger ci
* check dynamic axes
* retrigger ci
Zhi committed
- 04 Dec, 2019 1 commit
* implement conv3d op
* add back conv2d_output_shape that was missed by mistake
* fix typo and docs, add topi test
* rebase to master and merge 2d/3d unification
* use cudnn.conv_forward
optima2005 committed
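A hedged sketch of the 3-D convolution this entry implements. relay.nn.conv3d is used here purely for illustration and may have landed separately; the PR itself adds the TOPI compute and routes CUDA targets through cudnn.conv_forward:

```python
import tvm
from tvm import relay

# NCDHW data and OIDHW weight.
data = relay.var("data", shape=(1, 16, 8, 32, 32), dtype="float32")
weight = relay.var("weight", shape=(32, 16, 3, 3, 3), dtype="float32")

# Assumed Relay-level wrapper; argument names follow the conv2d convention.
out = relay.nn.conv3d(data, weight, strides=(1, 1, 1), padding=(1, 1, 1),
                      channels=32, kernel_size=(3, 3, 3))

mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
print(relay.transform.InferType()(mod))
```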
- 03 Dec, 2019 1 commit
abergeron committed
- 01 Dec, 2019 1 commit
* Fix lambda lift
* clean up
* lint
* fix
* remove unused import
Haichen Shen committed
- 24 Nov, 2019 1 commit
* [LINT] Improve the check tool to handle ASF copyright message.
* [LINT] Remove unnecessary copyright message as per ASF requirement.
* Fix codegen hybrid
* [LINT] Broaden license checks to include html, xml
* [LINT] Fix rest of the files
* Fix notice
* [LINT] Improve check file type error message
Tianqi Chen committed
- 22 Nov, 2019 1 commit
* [VM] add a few more API to vm
* [VM][Fix] fix vm convert args
* [VM] a few fixes
* rename fields
* update
* update vm profiler
* x
* add doc
* lint
* fix test
* address comments
Haichen Shen committed
- 21 Nov, 2019 2 commits
Animesh Jain committed
Zhi committed
- 19 Nov, 2019 2 commits
* [Relay][Quantize] Integrate data-aware calibration into quantization
* Update _calibrate.py
* trigger ci
* Address comments
* address comments
Wuwei Lin committed
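A sketch of how the data-aware calibration above might be driven from Python; the calibrate_mode="kl_divergence" option and the dataset argument of relay.quantize.quantize are assumptions about the interface and could differ in name or behavior:

```python
import numpy as np
import tvm
from tvm import relay

# A tiny stand-in model; real use would import one via a Relay frontend.
data = relay.var("data", shape=(1, 3, 32, 32), dtype="float32")
weight = relay.var("weight", shape=(8, 3, 3, 3), dtype="float32")
net = relay.nn.conv2d(data, weight, kernel_size=(3, 3), channels=8, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], net))
params = {"weight": tvm.nd.array(np.random.uniform(-1, 1, (8, 3, 3, 3)).astype("float32"))}

def calib_dataset(num_batches=8):
    # Representative inputs used to pick per-layer scales from data statistics.
    for _ in range(num_batches):
        yield {"data": np.random.uniform(-1, 1, (1, 3, 32, 32)).astype("float32")}

with relay.quantize.qconfig(calibrate_mode="kl_divergence", weight_scale="max"):
    qmod = relay.quantize.quantize(mod, params, dataset=calib_dataset())
print(qmod)
```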
Animesh Jain committed
- 18 Nov, 2019 1 commit
* Add tf FloorMod
* Add floor_div/mod into topi and relay
* Add to rst
* Fix test
Yao Wang committed
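The floor_div/floor_mod ops added here follow Python's floor semantics rather than C's truncation; a minimal sketch (module paths assume a reasonably recent TVM):

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")
y = relay.var("y", shape=(4,), dtype="float32")

# Floor semantics: floor_divide(-5, 3) == -2 and floor_mod(-5, 3) == 1,
# i.e. the remainder takes the sign of the divisor, unlike a truncating mod.
out = relay.Tuple([relay.floor_divide(x, y), relay.floor_mod(x, y)])
print(tvm.IRModule.from_expr(relay.Function([x, y], out)))
```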
- 16 Nov, 2019 1 commit
* Add qnn conv2d attributes for input_tensor_scale and kernel_tensor_scale.
  The lowering in the tflite frontend loses the input_tensor_scale and the kernel_tensor_scale by multiplying them and folding them into the Requantize operation. This means that any graph partitioning passes, or other passes that need this information, no longer have it available in the qnn dialect.
* Store input tensor scale and weight tensor scale for Dense as well. As for conv2d, the tflite frontend drops the input tensor scale and the weight tensor scale from the relay op; store them as separate fields there.
* Fix unintentional tab
* Rename input_tensor_scale to input_scale and kernel_tensor_scale to kernel_scale for conv2d.
* input_tensor_scale -> input_scale, weight_tensor_scale -> weight_scale
* Rework dense testcase and use input_scale and kernel_scale
* Be consistent in use of input_scale and kernel_scale values
* Fixup qnn conv2d tests for input_scale and kernel_scale
* Make pydoc identical between conv2d and dense for weight_tensor
* Fix up conv2d parameters to be in the same order between C++ and python
* Fix ordering of parameters for dense
* Add input_scale and output_scale to try and satisfy ci gods
* Delete input_scale and kernel_scale. nn.conv2d does not contain input_scale and kernel_scale, so they need to be dropped when lowering to nn.conv2d.
* Add input_scale and kernel_scale for qnn.conv2d
Ramana Radhakrishnan committed
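A minimal sketch of a qnn.conv2d call that carries the preserved scales; passing the scales and zero points as relay constants follows the current qnn API and is an assumption about the exact form used at the time:

```python
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 3, 224, 224), dtype="uint8")
kernel = relay.var("kernel", shape=(16, 3, 3, 3), dtype="uint8")

# input_scale and kernel_scale stay on the op, so partitioning or other passes
# can still read the quantization parameters instead of losing them inside a
# folded Requantize.
conv = relay.qnn.op.conv2d(
    data, kernel,
    input_zero_point=relay.const(128, "int32"),
    kernel_zero_point=relay.const(127, "int32"),
    input_scale=relay.const(0.078, "float32"),
    kernel_scale=relay.const(0.004, "float32"),
    kernel_size=(3, 3),
    channels=16,
    padding=(1, 1),
)
print(tvm.IRModule.from_expr(relay.Function([data, kernel], conv)))
```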
- 15 Nov, 2019 2 commits
[Relay][VM][Interpreter] Enable first-class constructors in VM and interpreter via eta expansion (#4218)
* Fix constructor pretty printing
* Make Module::HasDef name consistent with API
* Add VM constructor compilation via eta expansion
* Lint
* Fix CI
* Fix failing test
* Address comment
* Retrigger CI
* Retrigger CI
Logan Weber committed
* [Relay][Pass] Add pass to remove unused functions in relay module
* Add tests
* Fix lint
* Fix visit order
* Add pass argument
* Fix
Wei Chen committed
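A small sketch of the dead-function pruning pass; the entry_functions keyword is assumed to be the "pass argument" mentioned in the log:

```python
import tvm
from tvm import relay

# A module with a reachable function ("main") and an unreachable one ("dead").
x = relay.var("x", shape=(4,), dtype="float32")
y = relay.var("y", shape=(4,), dtype="float32")
mod = tvm.IRModule({
    relay.GlobalVar("main"): relay.Function([x], relay.add(x, x)),
    relay.GlobalVar("dead"): relay.Function([y], relay.multiply(y, y)),
})

# Everything not reachable from the listed entry points is dropped.
mod = relay.transform.RemoveUnusedFunctions(entry_functions=["main"])(mod)
print(mod)  # "dead" should no longer appear
```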
- 14 Nov, 2019 2 commits
Animesh Jain committed
Animesh Jain committed
- 11 Nov, 2019 3 commits
* Add shape functions
* Fix get_const_tuple
* Fix cpplint
* Fix pylint
* Fix pylint
* rebase and fix
* Check Any for infer type
* Fix expand_dim shape func for zero rank input
* Fix pooling infer type
* Address comment
* Register layout transform attr
Yao Wang committed
Previously runtime::Module was supported using shared_ptr. This PR refactors the codebase to use the Object protocol. It will open the door to easier interoperation between Object containers and Module in the future.
Tianqi Chen committed
* Add pass manager tutorial
* fix some examples
* retrigger ci
* Update tutorials/dev/relay_pass_infra.py (Co-Authored-By: 雾雨魔理沙 <lolisa@marisa.moe>)
* Add ToANormalForm link
Zhi committed
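In the spirit of that tutorial, a compact example of composing Relay passes with the pass infra; the tvm.transform.Sequential / PassContext names reflect the present-day pass infrastructure and may differ slightly from the 2019 tutorial code:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")
expr = relay.add(relay.multiply(x, relay.const(1.0)), relay.const(0.0))
mod = tvm.IRModule.from_expr(relay.Function([x], expr))

# Compose several passes; opt_level in the PassContext gates which ones run.
seq = tvm.transform.Sequential([
    relay.transform.SimplifyInference(),
    relay.transform.FoldConstant(),
    relay.transform.EliminateCommonSubexpr(),
])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
print(mod)
```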
- 04 Nov, 2019 1 commit
XFPlus committed
- 01 Nov, 2019 3 commits
* [NODE][REFACTOR] Rename IRFunctor -> NodeFunctor, use function pointers for dispatching.
  Previously we used std::function for functor dispatching. It introduces additional overhead and problems during dll destruction (of the std::function). This PR changes the std::function to function pointers. This adds a few restrictions around set_dispatch that we can work around, but improves general efficiency by removing one level of indirection in the std::function. We also no longer need special macros to register functions to the Functor.
Tianqi Chen committed
Jared Roesch committed
* [Relay][Pass] Avoid FoldConstant folding some ops * rename
Wuwei Lin committed
- 30 Oct, 2019 2 commits
* Add support for Any op
* Support ONNX frontend
* Add doc
* Add to relay docs
* Dummy change to retrigger CI
Jon Soifer committed
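A brief sketch of what the Any op enables at the Relay level: a dimension left symbolic at compile time (relay.Any), which frontends such as the ONNX importer can now emit for dynamic shapes:

```python
import tvm
from tvm import relay

# The batch dimension is unknown at compile time.
x = relay.var("x", shape=(relay.Any(), 3, 224, 224), dtype="float32")
y = relay.nn.relu(x)

mod = relay.transform.InferType()(tvm.IRModule.from_expr(relay.Function([x], y)))
print(mod)  # the inferred type keeps the symbolic Any dimension
```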
* [QNN] Improving Dense lowering.
* Moving get_shape method to util; finalizing the test cases and the code structure for optimized dense computation.
* Fixing cpplint.
* Addressing review comments.
* Renaming the variables correctly.
* Renaming the variables correctly.
shoubhik committed
- 29 Oct, 2019 2 commits
- 28 Oct, 2019 2 commits
* add scale2 for upsample
* update unit test for upsampling
* support latest upsample op for multiple frontends
* fix lint
* fix lint
* fix lint
* fix lint
* update scale description and rebase
Xingyu Zhou committed
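A sketch of the split scale parameters ("scale2"); the scale_h/scale_w keyword names are assumed from how relay.nn.upsampling exposes them today:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 8, 16, 16), dtype="float32")

# Height and width can now be scaled independently instead of sharing one scale.
y = relay.nn.upsampling(x, scale_h=2, scale_w=4, layout="NCHW", method="nearest_neighbor")

mod = relay.transform.InferType()(tvm.IRModule.from_expr(relay.Function([x], y)))
print(mod)  # expected output spatial size: (32, 64)
```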
Animesh Jain committed
- 27 Oct, 2019 2 commits
* Add support for attaching params
* Fix types
* Fix test
Jared Roesch committed
* add checkpoint annotation for checkpointing memory optimization
* add alpha-equivalence checkpoint test and fix gradient type issue
* fix build issues
* ignore checkpoint annotation when checking missing gradients
* refactor, fix checkpoint compute for tuple and add tests
Altan Haan committed
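A hedged sketch of the checkpoint annotation above; relay.annotation.checkpoint is assumed to be the op added here, and the recompute-during-backward behavior described in the comment is the intended effect rather than something this snippet demonstrates end to end:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")
w = relay.var("w", shape=(4,), dtype="float32")

# Mark an intermediate value as a checkpoint: instead of keeping it live for the
# backward pass, the gradient pass recomputes it, trading compute for memory.
h = relay.annotation.checkpoint(relay.multiply(x, w))
out = relay.add(h, x)

print(tvm.IRModule.from_expr(relay.Function([x, w], out)))
```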
- 25 Oct, 2019 1 commit
* save * lint
雾雨魔理沙 committed