- 15 Nov, 2019 3 commits
A test for qnn_mul has to be added when the qnn elemwise tests (#4282) get merged.
Ina Dobreva committed -
* [Relay][Pass] Add pass to remove unused functions in relay module * Add tests * Fix lint * Fix visit order * Add pass argument * Fix
Wei Chen committed -
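The pass referenced in this commit is part of the Relay pass infrastructure. A minimal sketch of how it might be invoked, assuming it is exposed as `relay.transform.RemoveUnusedFunctions` and using the 2019-era `relay.Module` API (the toy functions are illustrative):

```python
from tvm import relay

x = relay.var("x", shape=(2, 2), dtype="float32")

mod = relay.Module()
mod["main"] = relay.Function([x], relay.add(x, x))         # reachable entry point
mod["orphan"] = relay.Function([x], relay.multiply(x, x))  # never called from main

# Drop every function that is not reachable from the entry functions
# (by default just "main"); "orphan" should no longer appear.
mod = relay.transform.RemoveUnusedFunctions()(mod)
print([gv.name_hint for gv in mod.get_global_vars()])
```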
Peter Yeh committed
- 14 Nov, 2019 9 commits
* Fix build * dummy change to retrigger CI * dummy change to retrigger ci * dummy change to retrigger ci
Jon Soifer committed -
Tianqi Chen committed
* add volatile override back * [codegen] remove fp16 function override for cuda
Yizhi Liu committed -
Tianqi Chen committed
Zhi committed
Animesh Jain committed
Animesh Jain committed
* [DOCKER] Add ONNX runtime dep * Improve ci script
Tianqi Chen committed -
jason-song-dev committed
- 13 Nov, 2019 2 commits
Animesh Jain committed
* Support Proposal operator on CPU. * PyLint space issue * PyLint space issue * Pylint singleton-comparison issue
Zhao Wu committed
- 12 Nov, 2019 7 commits
* WIP: run the TF tutorial on TF2.
* Remove debugger statement.
* Complete the support for TF2.0's `resize`. TF2.0 adds a `half_pixel_centers` attribute to the `resize` function in the image API. This commit completes the hooks in Relay's TF frontend. There is no new test yet as of this commit, and the commit addresses solely the `resize` change; other commits address other changes in TF2.0.
* Support TF2.0 in the tutorial by using the compat API. This looks cleaner than trying to detect the TF version.
* Use the TF compat API so as to support TF2.0. This is a direct change, relying on the compat API provided by the TF team. The code will last as long as the compat API exists, so "proper" support for TF 1.x and 2.x will require more work at some point in the future.
* Partial support for EXPLICIT padding introduced in TF2.0. Explicit padding is a special case in TF2.0 (see the reference linked below); some models are serialized with that mode and break TF support in TVM. Support is *partial* because EXPLICIT falls back to setting padding on the Relay op, which only supports 2 values. At some point padding may need to be extended to support 4 values, but that is out of scope of this commit. Reference on EXPLICIT padding: https://github.com/tensorflow/tensorflow/commit/ec81825aaf7e848d9f8ddffdf1e0d20aebe9172c#diff-1d1c0bb0a880f85b6164f71dbb2f446e
* Guard the check for the optional TF2.0 attribute.
* Do not expect Relay to implement TF-specific attributes. The `half_pixel_centers` attribute is a new feature in TF2.0; earlier commits mistakenly introduced it in the Relay API. This is probably not what Relay is expected to support, and the semantics of `half_pixel_centers` are unclear at this point.
* Remove unclear comment. CR: https://github.com/dmlc/tvm/pull/4104#discussion_r338705742. Addresses #4104.
* Changes after review, complying without understanding the rationale for now.
* Fix an argument that was mistakenly applied to the wrong operation.
Eric Platon committed -
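For context on the compat-API approach described in the commit above, a minimal sketch of the pattern (it assumes only that the installed TensorFlow provides `tf.compat.v1`; it is not code from the tutorial):

```python
# Run TF1-style session code under either TF 1.x or 2.x by always going
# through the compat API instead of detecting the TF version.
try:
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()
except ImportError:
    import tensorflow as tf  # an older TF without the compat module

with tf.Session() as sess:
    images = tf.zeros([1, 4, 4, 3])
    out = sess.run(tf.image.resize_bilinear(images, [8, 8]))
    print(out.shape)  # (1, 8, 8, 3)
```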
Wei Chen committed
* Add test for the qnn_add operator. The test uses the fake-quant approach, so the tensors remain in float32 up until the TF session. The test data has to be passed in uint8 because of how the tflite/tvm comparison works. An absolute tolerance of up to 1 is allowed for the qnn results. For now, input_stats are hardcoded, assuming the tests for the other qnn ops will pass input data in the same range.
* Separate the qnn uint8 test function from the fp32 elementwise tests: isolate the qnn uint8 elementwise tests and remove blank lines.
Ina Dobreva committed -
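A minimal sketch of the comparison style the commit above describes, with uint8 inputs and an absolute tolerance of 1 on the quantized results (the two runner helpers are hypothetical placeholders, not functions from the test):

```python
import numpy as np

# Quantized uint8 inputs; TFLite and TVM results may differ by one step.
lhs = np.random.randint(0, 256, size=(1, 8), dtype="uint8")
rhs = np.random.randint(0, 256, size=(1, 8), dtype="uint8")

tflite_out = run_tflite_qnn_add(lhs, rhs)  # hypothetical helper
tvm_out = run_tvm_qnn_add(lhs, rhs)        # hypothetical helper

# Compare in int32 so the uint8 subtraction cannot wrap around.
np.testing.assert_allclose(tvm_out.astype("int32"),
                           tflite_out.astype("int32"), atol=1, rtol=0)
```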
Haichen Shen committed
* Fix Relay Keras frontend batch_norm op params not being handled well * Add unit test for Relay frontend Keras batch_norm
Xingyu Zhou committed -
* Fix incorrect call to the Unicode Win32 API * Remove inet_pton call; Win32 already has it
jmorrill committed -
Neo Chien committed
- 11 Nov, 2019 7 commits
* Add shape functions * Fix get_const_tuple * Fix cpplint * Fix pylint * Fix pylint * rebase and fix * Check Any for infer type * Fix expand_dim shape func for zero rank input * Fix pooling infer type * Address comment * Register layout transform attr
Yao Wang committed -
* [TF][Relay][Op] Pass module when infer shape * Fix lint * Improve style * Add test
Wei Chen committed -
Previously, runtime::Module was supported using shared_ptr. This PR refactors the codebase to use the Object protocol. It will open the door to easier interoperation between Object containers and Module in the future.
Tianqi Chen committed -
The test case was removed in #4181 for some reason. @tqchen @soiferj @zhiics
Yong Wu committed -
* Fix tf reshape * Fix test * Fix pylint * Fix pylint
Yao Wang committed -
* Add pass manager tutorial * fix some examples * retrigger ci * Update tutorials/dev/relay_pass_infra.py Co-Authored-By: 雾雨魔理沙 <lolisa@marisa.moe> * Add ToANormalForm link
Zhi committed -
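As context for the pass infra the tutorial covers, a minimal sketch of composing passes with `relay.transform.Sequential` under the 2019-era API (the toy function is illustrative, not taken from the tutorial):

```python
from tvm import relay

x = relay.var("x", shape=(1, 16), dtype="float32")
func = relay.Function([x], relay.add(x, relay.const(1.0)))
mod = relay.Module.from_expr(func)

seq = relay.transform.Sequential([
    relay.transform.SimplifyInference(),
    relay.transform.FoldConstant(),
    relay.transform.ToANormalForm(),  # the pass linked from the tutorial
])
# Raise the opt level so that none of the passes are skipped.
with relay.build_config(opt_level=3):
    mod = seq(mod)
print(mod["main"])
```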
Animesh Jain committed
- 10 Nov, 2019 5 commits
- 09 Nov, 2019 1 commit
* Add Auto TensorCore unit test
* Rebase to tvm master branch & add auto tensor core
* Code refine
* Add tensor core switch by pragma
* Add pragma in tensor core example code
* Get real tile size to replace hard-coded 16
* Support more than 2 dimensions (e.g. batch matmul) for buffer bind scope
* Support batch matmul
* Move cuda env check to tensor_core.cc
* Code refine for tensor_core.cc
* Refine comments
* Some refinements of code and comments
* Update TensorCore UT to pass the CPU test
* Remove redundant code
* matmul's storage align for different layouts
* Add support for different positions of the type cast
* Add formal tutorial for auto tensorcore codegen
* Move tensorcore check up to tutorial code
* Code and doc refine
* Comment out tune_and_evaluate in tutorial
* Fix cpplint error
Minmin Sun (孙敏敏) committed
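A minimal sketch of the kind of device guard the "move tensorcore check up to tutorial code" item refers to, assuming the 2019-era `tvm.gpu()` context API (the exact check in the tutorial may differ):

```python
import tvm

ctx = tvm.gpu(0)
if not ctx.exist:
    print("Skip: no CUDA device available")
else:
    major = int(ctx.compute_version.split(".")[0])
    if major < 7:
        # TensorCore WMMA intrinsics need compute capability 7.0 (Volta) or newer.
        print("Skip: TensorCores require compute capability >= 7.0")
```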
- 08 Nov, 2019 2 commits
Fix the problem that android_rpc compilation failed
peike committed -
* fix_winograd_cuda_kernel_size * add unit test
Cody Hao Yu committed
- 07 Nov, 2019 2 commits
Jon Soifer committed
* Batch matmul tuning running but with errors.
* Default x86 schedule as good as before.
* Code cleanup.
* Remove unused argument.
* Improved template documentation.
* Silly lint fix.
* Removed leftover comment.
* Moved cfg declaration to schedule for batch_matmul.
* Moved x86 dense cfg declaration to schedule.
* Lint fix.
* Removed duplicate cfg declaration in dense.
* Reverted changes to dense.
Josh Fromm committed
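A minimal sketch of what "moved cfg declaration to schedule" means in AutoTVM terms: the tuning space is declared inside the schedule function rather than in the compute declaration (the function name and knobs here are illustrative, not the actual x86 template):

```python
from tvm import autotvm

def _schedule_batch_matmul(outs):
    # Declare the tuning knobs in the schedule itself, so the config space is
    # owned by the schedule template and not duplicated in the compute op.
    cfg = autotvm.get_config()
    cfg.define_knob("tile_y", [1, 2, 4, 8, 16])
    cfg.define_knob("tile_k", [1, 2, 4, 8, 16])
    tile_y = cfg["tile_y"].val  # chosen values are used when building the schedule
    tile_k = cfg["tile_k"].val
    ...
```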
- 06 Nov, 2019 2 commits
* fix winograd * move get padding after kernel transform
Cody Hao Yu committed -
* [Contrib] Fix error message at callback_get_section_size() * Trigger notification
Neo Chien committed