- 15 Oct, 2019 4 commits
* [Relay][AlterOpLayout] NHWC to NCHWc pad operator. * Fixing culprit. * Flaky test 1. * Flaky test 2.
Animesh Jain committed
Sergei Grechanik committed
* [RUNTIME] Introduce new object protocol. This PR introduces a new object protocol to unify the node and object. We also updated the existing runtime::vm code to make use of the new system. The update to the node will be done in a follow-up PR. Other changes: - Remove object-related code in the JSON serializer, as that code logic was not complete and we have a separate serializer for the VM; we can revisit later. * address review comment * Fix the child slot logic
Tianqi Chen committed
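The idea behind unifying node and object can be sketched in a few lines (a hypothetical Python illustration of the protocol, not TVM's actual C++ runtime code): every runtime value derives from a single Object base carrying a type index, and typed access goes through a checked downcast.

```python
# Illustrative sketch only: the names (TensorObj, ClosureObj, downcast) are
# hypothetical, not TVM's real API. The real protocol lives in the C++ runtime.

class Object:
    """Common base for all runtime objects; each subclass gets a type index."""
    _type_index = 0


class TensorObj(Object):
    _type_index = 1

    def __init__(self, shape):
        self.shape = shape


class ClosureObj(Object):
    _type_index = 2

    def __init__(self, func_index):
        self.func_index = func_index


def downcast(obj, cls):
    """Checked cast: verify the type index before treating obj as cls."""
    if obj._type_index != cls._type_index:
        raise TypeError("expected %s" % cls.__name__)
    return obj
```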
Animesh Jain committed
- 14 Oct, 2019 4 commits
Tianqi Chen committed
Tianqi Chen committed
Animesh Jain committed
Tianqi Chen committed
- 13 Oct, 2019 2 commits
Leo Chen committed
This implementation provides a cast to the limited set of dtypes that TFLite currently supports for the placeholder op. Add INT64 to the possible dtypes, as it appears to be supported according to the TFLite schema.
Ina Dobreva committed
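The check amounts to a whitelist of placeholder dtypes. A hedged Python sketch follows; the exact set of dtypes and the helper name are assumptions for illustration, not the frontend's actual code.

```python
import numpy as np

# Hypothetical whitelist: dtypes assumed to be accepted for TFLite
# placeholders, with int64 newly included per the TFLite schema.
SUPPORTED_PLACEHOLDER_DTYPES = {
    "float32": np.float32,
    "int32": np.int32,
    "int64": np.int64,  # newly allowed
    "uint8": np.uint8,
    "bool": np.bool_,
}


def check_placeholder_dtype(name):
    """Reject dtypes outside the supported set before building the cast."""
    if name not in SUPPORTED_PLACEHOLDER_DTYPES:
        raise ValueError("dtype %s not supported for placeholder" % name)
    return SUPPORTED_PLACEHOLDER_DTYPES[name]
```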
- 12 Oct, 2019 1 commit
Thierry Moreau committed
- 11 Oct, 2019 6 commits
* overload half operators for cuda codegen * add float16 te test_op_level1 * fix test_op_level1.py * fix lint * disable fp16 test if gpu does not support * disable fp16 test if gpu does not support * bypass float16 test if gpu does not support float16
Xingyu Zhou committed
* Fix layer norm for fp16 * [Fix] Fix arange for fp16 * [Fix] Fix mxnet frontend for fp16 * [Fix] Fix arange for fp16 * remove comments * x * fix nnvm
Haichen Shen committed
* [tvm][any] broadcast with values other than 1 * Add test for incompatible runtime values * Remove hybrid script compact buffer binding * retrigger ci
Zhi committed
Peter Yeh committed
* update build_gcn.py tutorial updates * support bias in GCN layer * download pretrained gcn model * verify model accuracy * use time_evaluator to measure runtime * fix adding bias in gcn layer * remove printing output * fix small bug * add DGL-PyTorch comparison into the build_gcn tutorial * add accuracy testing * adjust import order * handle different dgl versions * update number for dgl version checking
Chien-Yu Lin committed
Animesh Jain committed
- 10 Oct, 2019 9 commits
* Add FIFO buffer op to enable explicit computation re-use in convolution * Add a test * Add end-to-end test with 1D convolution * Add a stub in MXNet frontend * Address reviewer comments * Add back stub for MXNet frontend
Philip Hyunsu Cho committed
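The semantics of such a FIFO buffer can be sketched with NumPy (an illustrative model of the behavior, assuming the common concat-then-slice definition, not the operator's actual implementation):

```python
import numpy as np


def fifo_buffer(data, buffer, axis=0):
    """Append `data` to `buffer` along `axis` and drop the oldest entries,
    so the result has the same shape as `buffer`."""
    combined = np.concatenate([buffer, data], axis=axis)
    # Keep only the trailing buffer.shape[axis] elements.
    start = combined.shape[axis] - buffer.shape[axis]
    index = [slice(None)] * combined.ndim
    index[axis] = slice(start, None)
    return combined[tuple(index)]
```

Feeding one new sample at a time while convolving over the buffer is what makes the explicit re-use possible: the overlapping window is kept instead of being recomputed.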
* app init push * fix on readme * change name, add bit serial explanantion * rm serialLoadMM, change doc * syntax change for readme * add parallel test functionality * fix readme * add python doc * syntax
Benjamin Tu committed
LiangHao committed
Leyuan Wang committed
- Adding support for MXNet-flavored dequantization, for both the default path and the one using MKLDNN. The user can choose between the two at runtime. (#3945) - Added tests for the new methods.
shoubhik committed
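Dequantization itself follows the standard affine mapping real = scale * (q - zero_point). A minimal NumPy sketch of that formula (illustrative only, not the code from this change):

```python
import numpy as np


def dequantize(q, scale, zero_point):
    """Map quantized integer values back to floats:
    real = scale * (q - zero_point)."""
    return scale * (q.astype(np.float32) - np.float32(zero_point))
```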
* move the number of nodes constraint in op fusion up to the dom tree level * add test case of limiting the max number of ops to be fused * uncomment other test cases
Yida Wang committed
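The effect of such a cap can be sketched on a linear chain of ops (hypothetical helper, not the fusion pass itself): groups are grown greedily but closed once they reach the limit.

```python
def fuse_chain(ops, max_fused_ops):
    """Greedily partition a linear chain of ops into fused groups,
    capping each group at max_fused_ops members."""
    groups, current = [], []
    for op in ops:
        if len(current) == max_fused_ops:
            # Group is full: close it and start a new one.
            groups.append(current)
            current = []
        current.append(op)
    if current:
        groups.append(current)
    return groups
```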
Aniket Rangrej committed
The torch package depends on the future package, but the torch wheel does not expose that dependency, resulting in an inconsistent install. Ideally the wheel should declare all of its dependencies; I'm not sure why the packagers have chosen not to do this. For now, the simple workaround is to explicitly install the future package. Change-Id: Ic9f0f4bb4c78ab65706fc1b20c1b4fd287856a9e
Marcus Shawcroft committed
* [Relay][VM] Fix constant folding issue in VM compiler 1. allow pass params when compile a module 2. enhance profiler robustness * remove dead code * fix lint * add get_params * fix test * don't pass params back * remove get_params * docs * move compile function to api * compile clashes with builtin name * fix compilation error * remove dead code
Wei Chen committed
- 09 Oct, 2019 4 commits
* add valid autotune * fix pylint
Leyuan Wang committed
The current bounds checking infrastructure inserts checks like:

```
for (i, 0, bounds[n]) {
  if (likely(i < bounds[n])) {
    ...
  }
}
```

into the TVM IR, which are currently not removed by the simplification infrastructure. This is a little unclean, as these checks are trivially true: for a loop var `i` with a given min and extent, we are guaranteed that `i >= min` and `i < min + extent`. Thus, we can insert these constraints into the IR and use them to eliminate trivial bounds checks early on.
Andrew Tulloch committed
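The elimination argument can be stated as a one-line check (plain Python sketch, not TVM's simplifier): since `min <= i < min + extent` for a loop var, a predicate `i < bound` is provably true whenever `min + extent <= bound`.

```python
def can_eliminate_check(loop_min, loop_extent, bound):
    """True if `i < bound` holds for every i in [loop_min, loop_min + loop_extent)."""
    return loop_min + loop_extent <= bound
```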
Zhi committed
Animesh Jain committed
- 08 Oct, 2019 8 commits
Yizhi Liu committed
Use fdiv in the tests for the deduce_bound
Umang Yadav committed
if n_trial is larger than the config space.
Attila Dusnoki committed
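The fix amounts to clamping the trial count to the size of the config space (a sketch of the idea; AutoTVM's actual code differs):

```python
def effective_trials(n_trial, config_space_size):
    """Never run more tuning trials than there are configurations to try."""
    return min(n_trial, config_space_size)
```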
Issue: after cloning the latest TVM/VTA and running VTA on a Xilinx FPGA board, the application crashed due to a call stack overflow caused by an infinite recursive function call. This issue happened before and was addressed by PR 3843. Analysis: it seems the de10-nano driver PR used an older code base, so the logic change from PR 3843 was lost. Solution: add the logic back.
Hua Jiang committed
The -mfloat-abi hard option does not work for LLVM < 6.0 as it is ignored. This adds a fatal error when using unsupported LLVM versions so that the failure is not silent.
mbarrett97 committed
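The fail-fast behavior can be sketched like this (a hypothetical Python rendition; the real check sits in TVM's C++ LLVM codegen):

```python
def check_float_abi(llvm_version, float_abi):
    """Reject -mfloat-abi=hard when the LLVM version would silently ignore it."""
    if float_abi == "hard" and llvm_version < (6, 0):
        raise RuntimeError("-mfloat-abi=hard requires LLVM >= 6.0")
    return float_abi
```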
Animesh Jain committed
* Fix VM invoke with set_params * add test * tweak
Haichen Shen committed
Wuwei Lin committed
- 07 Oct, 2019 2 commits
Logan Weber committed
In the current implementation, HIDE_PRIVATE_SYMBOLS hides symbols from TVM itself but not from its dependent libraries. This is problematic when other third-party libraries with the same symbols are linked into the same executable. One example is using TVM with Mesa OpenCL drivers: they depend on LLVM and load its shared libraries with the RTLD_GLOBAL flag, which results in conflicts with the LLVM symbols that TVM uses. Arguably this particular issue belongs to Mesa (here's their tracking bug: https://gitlab.freedesktop.org/mesa/mesa/issues/236), but in general hiding these symbols is the right thing to do regardless of this particular bug. Note that I'm not enabling this functionality for Darwin, as in my earlier tests its linker didn't seem to understand "--exclude-libs" (but I don't have a test platform at the moment to double-check).
ndl committed
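A hedged sketch of the build change as a CMake fragment (the target name and option wiring are assumptions; the `--exclude-libs` spelling follows standard GNU ld usage): hidden visibility covers TVM's own symbols, while the linker flag hides symbols pulled in from statically linked dependencies such as LLVM.

```cmake
# Sketch only, not the exact cmake change from this commit.
if(HIDE_PRIVATE_SYMBOLS AND NOT APPLE)
  # Hide TVM's own non-exported symbols.
  target_compile_options(tvm PRIVATE -fvisibility=hidden)
  # Also hide symbols from statically linked dependencies (e.g. LLVM).
  # Darwin's linker does not understand --exclude-libs, hence NOT APPLE.
  target_link_options(tvm PRIVATE "LINKER:--exclude-libs,ALL")
endif()
```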