- 11 Oct, 2019 2 commits
-
-
* update build_gcn.py tutorial * support bias in GCN layer * download pretrained gcn model * verify model accuracy * use time_evaluator to measure runtime * fix adding bias in gcn layer * remove printing output * fix small bug * add DGL-PyTorch comparison into the build_gcn tutorial * add accuracy testing * adjust import order * handle different dgl versions * update number for dgl version checking
Chien-Yu Lin committed -
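As an illustration of the layer this tutorial change exercises, here is a minimal sketch (plain PyTorch, hypothetical names, not the tutorial's actual code) of a GCN layer with the newly supported bias term: H' = ReLU(A_hat @ H @ W + b) over a dense normalized adjacency.
```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution layer: aggregate neighbors, transform features, add bias."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_dim))  # the bias term added by this change

    def forward(self, a_hat, h):
        h = h @ self.weight   # feature transform: H * W
        h = a_hat @ h         # neighborhood aggregation: A_hat * (H * W)
        return torch.relu(h + self.bias)

# usage: a_hat is the normalized adjacency (N, N), h the node features (N, in_dim)
layer = GCNLayer(16, 7)
out = layer(torch.eye(4), torch.randn(4, 16))
```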
Animesh Jain committed
-
- 10 Oct, 2019 9 commits
-
-
* Add FIFO buffer op to enable explicit computation re-use in convolution * Add a test * Add end-to-end test with 1D convolution * Add a stub in MXNet frontend * Address reviewer comments * Add back stub for MXNet frontend
Philip Hyunsu Cho committed -
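A rough sketch of how the new op can be used for streaming 1D convolution, assuming it is exposed as relay.nn.fifo_buffer (shapes and names here are illustrative): each step pushes the newest chunk into a fixed-length window along the chosen axis and drops the oldest entries, so previously computed context is reused instead of recomputed.
```python
import tvm
from tvm import relay

# One new time step arrives while a window of the last 4 steps is kept in the buffer.
data = relay.var("data", shape=(1, 1, 1), dtype="float32")
buf = relay.var("buffer", shape=(1, 1, 4), dtype="float32")
out = relay.nn.fifo_buffer(data, buf, axis=2)  # shift the window left and append the new step
func = relay.Function([data, buf], out)        # feed `out` to a conv1d over the full window
```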
* app init push * fix on readme * change name, add bit serial explanation * rm serialLoadMM, change doc * syntax change for readme * add parallel test functionality * fix readme * add python doc * syntax
Benjamin Tu committed -
LiangHao committed
-
Leyuan Wang committed
-
- Adding support for MXNet-flavored dequantization, both the default version and the one using MKLDNN; the user can choose between the two at runtime. (#3945) - Added tests for the new methods.
shoubhik committed -
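For context, a rough sketch (plain NumPy, illustrative names, not the frontend's API) of the arithmetic a dequantize step performs: map quantized integers back to real values via the scale and zero point.
```python
import numpy as np

def dequantize(q, scale, zero_point):
    """real = scale * (q - zero_point)"""
    return scale * (q.astype("float32") - zero_point)

print(dequantize(np.array([0, 128, 255], dtype="uint8"), 0.05, 128))  # [-6.4, 0.0, 6.35]
```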
* move the number of nodes constraint in op fusion up to the dom tree level * add test case of limiting the max number of ops to be fused * uncomment other test cases
Yida Wang committed -
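A schematic sketch (plain Python, not TVM's actual fusion pass) of the constraint being moved: when grouping operators for fusion, track how many ops a group already holds and refuse to merge once a configured maximum is reached, so no single fused kernel grows without bound.
```python
MAX_FUSED_OPS = 256  # illustrative limit, not TVM's default

def try_fuse(group_a, group_b):
    """Merge two sets of ops unless the combined group would exceed the limit."""
    if len(group_a) + len(group_b) > MAX_FUSED_OPS:
        return None                  # keep the groups separate
    return group_a | group_b         # fuse: union of the two op sets

print(try_fuse({"conv2d", "bias_add"}, {"relu"}))
```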
Aniket Rangrej committed
-
The torch package depends on the future package, but the torch wheel does not expose that dependency, resulting in an inconsistent install. Ideally the wheel should declare all of its dependencies; I'm not sure why the packagers have chosen not to do this. For now the simple workaround is to explicitly install the future package. Change-Id: Ic9f0f4bb4c78ab65706fc1b20c1b4fd287856a9e
Marcus Shawcroft committed -
* [Relay][VM] Fix constant folding issue in VM compiler 1. allow passing params when compiling a module 2. enhance profiler robustness * remove dead code * fix lint * add get_params * fix test * don't pass params back * remove get_params * docs * move compile function to api * compile clashes with builtin name * fix compilation error * remove dead code
Wei Chen committed
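A minimal sketch of what this change enables, assuming the entry point is exposed as relay.vm.compile: bound parameters are handed to the VM compiler so they can be constant-folded at compile time rather than supplied on every invocation.
```python
import numpy as np
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 4), dtype="float32")
w = relay.var("w", shape=(4, 4), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x, w], relay.nn.dense(x, w)))

# Parameters passed at compile time can now be folded into the executable.
params = {"w": np.random.rand(4, 4).astype("float32")}
exe = relay.vm.compile(mod, target="llvm", params=params)
```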
-
- 09 Oct, 2019 4 commits
-
-
* add valid autotune * fix pylint
Leyuan Wang committed -
The current bounds checking infrastructure inserts checks like: ``` for (i, 0, bounds[n]) { if (likely(i < bounds[n])) { ... } } ``` into the TVM IR, and these checks are currently not removed by the simplification infrastructure. This is a little unclean, as they are trivially true: for a loop var `i` with a given min and extent, we are guaranteed that `i >= min` and `i < min + extent`. Thus, we can insert these checks into the IR and use them to eliminate trivial bounds checks early on.
Andrew Tulloch committed -
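A schematic illustration (plain Python, not TVM's simplifier) of why the inserted check is redundant: inside `for (i, min, extent)` we know `min <= i < min + extent`, so a guard `i < bound` can be dropped whenever `min + extent <= bound`.
```python
def can_drop_guard(loop_min, extent, bound):
    """True if the loop bounds already imply i < bound for every iteration."""
    return loop_min + extent <= bound

# The pattern from the commit message: the loop runs for exactly bounds[n] iterations,
# so the guard `i < bounds[n]` is always true and can be eliminated.
print(can_drop_guard(0, 128, 128))  # True
```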
Zhi committed
-
Animesh Jain committed
-
- 08 Oct, 2019 8 commits
-
-
Yizhi Liu committed
-
Use fdiv in the tests for deduce_bound
Umang Yadav committed -
if n_trial is larger than the config space.
Attila Dusnoki committed -
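A hedged sketch of the tuning loop this fix hardens (the helper name is illustrative): if the requested number of trials exceeds the size of the task's config space, clamp it so the tuner never tries to sample more configurations than exist.
```python
from tvm import autotvm

def tune_task(task, measure_option, n_trial=1000):
    # Clamp n_trial to the number of available configurations.
    n_trial = min(n_trial, len(task.config_space))
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(n_trial=n_trial, measure_option=measure_option)
```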
Issue: after cloning the latest TVM/VTA and running VTA on a Xilinx FPGA board, the application crashed due to a "call stack overflow" caused by an infinite recursive function call. This issue happened before and was addressed by PR 3843. Analysis: it seems the de10-nano driver PR was based on an older code base, so the logic change from 3843 was lost. Solution: add the logic back.
Hua Jiang committed -
The -mfloat-abi hard option does not work for LLVM < 6.0 as it is ignored. This adds a fatal error when using unsupported LLVM versions so that the failure is not silent.
mbarrett97 committed -
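For reference, a sketch (assuming the period API tvm.target.create and the -mfloat-abi target attribute) of the kind of target this check guards: a hard-float ARM build passes the ABI choice through to LLVM, which silently ignores it before LLVM 6.0, hence the new fatal error instead of a silent fallback.
```python
import tvm

# Hard-float ARM target; requires LLVM >= 6.0 for -mfloat-abi to take effect.
target = tvm.target.create(
    "llvm -device=arm_cpu -mtriple=armv7l-linux-gnueabihf -mfloat-abi=hard"
)
```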
Animesh Jain committed
-
* Fix VM invoke with set_params * add test * tweak
Haichen Shen committed -
Wuwei Lin committed
-
- 07 Oct, 2019 5 commits
-
-
Logan Weber committed
-
In the current implementation, HIDE_PRIVATE_SYMBOLS hides symbols from TVM itself but not from its dependent libraries. This is problematic when other third-party libraries with the same symbols are linked into the same executable. One example is using TVM with Mesa OpenCL drivers: they depend on LLVM and load its shared libraries with the RTLD_GLOBAL flag, which results in conflicts with the LLVM symbols that TVM uses. Arguably this particular issue belongs to Mesa (here's their tracking bug: https://gitlab.freedesktop.org/mesa/mesa/issues/236), but in general hiding these symbols is the right thing to do regardless of this particular bug. Note that I'm not enabling this functionality for Darwin, as in my earlier tests its linker didn't seem to understand "--exclude-libs" (but I don't have a test platform at the moment to double-check).
ndl committed -
雾雨魔理沙 committed
-
fix some typos
Bohan Hou committed -
Chengji Yao committed
-
- 06 Oct, 2019 4 commits
-
-
Animesh Jain committed
-
Animesh Jain committed
-
The TensorFlow zeros_like operation provided in array_ops.py directly produces a tensor of zeros without a graph, using only the shape and type of the input. This requires using gen_array_ops.py, which produces both a tensor and a graph, so that a comparison between TFLite and TVM can be made.
Ina Dobreva committed -
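A small sketch of the distinction described above (illustrative use of the TensorFlow Python API): tf.zeros_like from array_ops can fold straight to a constant tensor, while gen_array_ops.zeros_like emits an explicit ZerosLike node in the graph, which is what the TVM/TFLite comparison needs to exercise.
```python
import numpy as np
import tensorflow as tf
from tensorflow.python.ops import gen_array_ops

x = tf.constant(np.ones((2, 3), dtype=np.float32))
folded = tf.zeros_like(x)               # may become a plain constant, no op to compare
graph_op = gen_array_ops.zeros_like(x)  # produces an explicit ZerosLike graph node
```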
@zhiics @icemelon9
Yong Wu committed
-
- 05 Oct, 2019 4 commits
-
-
* [Relay][VM] Add more passes to VMCompiler * Check build config * Add todo
Wei Chen committed -
Wei Chen committed
-
* fix * tweak
Haichen Shen committed -
* save save redo max test save address comment fix * address comment * increase rtol * address review comment
雾雨魔理沙 committed
-
- 04 Oct, 2019 1 commit
-
-
Yizhi Liu committed
-
- 03 Oct, 2019 2 commits
-
-
Jon Soifer committed
-
* [Relay][Op] Add instance norm op * mend [Relay][Op] Add instance norm op
bindog committed
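A brief sketch of the new op, assuming it is exposed as relay.nn.instance_norm: each (sample, channel) slice is normalized over its spatial dimensions, then scaled and shifted, i.e. y = (x - mean) / sqrt(var + eps) * gamma + beta computed per instance.
```python
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
gamma = relay.var("gamma", shape=(3,), dtype="float32")
beta = relay.var("beta", shape=(3,), dtype="float32")

# Normalize over H and W for every (N, C) slice, with per-channel scale and shift.
out = relay.nn.instance_norm(data, gamma, beta, axis=1, epsilon=1e-5)
func = relay.Function([data, gamma, beta], out)
```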
-
- 02 Oct, 2019 1 commit
-
-
Animesh Jain committed
-