- 12 Sep, 2019 2 commits
This is an alternative implementation of a subset of the TVM runtime API (and graph runtime) that focuses entirely on reducing code size, at the expense of functionality (no tvm.extern(..) calls via PackedFunc, CPU only, etc.). It might be worth incrementally expanding the surface area if there's interest. The motivation for this work was to find the minimal useful subset of the TVM runtime, which is relevant for, e.g., severely code-size-constrained applications in embedded/mobile settings. The current runtime is on the order of 100 KiB, so this might be compelling for some users. The smaller surface area for auditing might also make this relevant for https://github.com/dmlc/tvm/issues/3159, or the use cases discussed in https://github.com/dmlc/tvm/issues/2523#issuecomment-459165815 regarding the Rust runtime. The symbols in the tvm::minimalruntime space (i.e. excluding std:: and picojson::) total about 5 KiB, so there's a bunch of room here: we could replace picojson:: with [`jsmn`](https://zserge.com/jsmn.html) or something similar, and we could replace more of the `std::unordered_map` usage with custom primitives as well (similar to the `DynArray`).
Andrew Tulloch committed
* Module refactor
* Add load module
* Add support for idempotent import
* Tweak load paths
* Move path around
* Expose C++ import functions in Python
* Fix import
* Add doc string
* Fix
* Fix lint
* Fix lint
* Fix test failure
* Add type solver
* Fix lint
Jared Roesch committed
- 11 Sep, 2019 4 commits
Lianmin Zheng committed
* support LLVM trunk
* guard with USE_LLVM in if condition for c++14
* GREATER_EQUAL -> GREATER
* [Arm] parallel batch axis
Yizhi Liu committed
Zhao Wu committed
雾雨魔理沙 committed
- 10 Sep, 2019 2 commits
* [Relay][Frontend][Keras] Fix a case missed by ReLU in the Keras converter
* [Relay][Frontend][Keras] Add a test case for the missed ReLU case
Neo Chien committed
Pratyush Patel committed
- 09 Sep, 2019 4 commits
* save * save
雾雨魔理沙 committed
Luis Vega committed
* numpy compatible type inference
* update
* try to fix
* fix
* try to fix
* fix lint
* Update nn.h
* cast to int32
* try to fix
* fix again
* retrigger ci
Xingjian Shi committed
* add more ops
* stop vectorization for erf
* x
* cleanup
* fix
* add whitelist for vectorizable intrin
* add tf converter
* fix dense
* fix
* add missing intrin
* fix mxnet frontend
* fix nvptx
Haichen Shen committed
- 08 Sep, 2019 2 commits
- 07 Sep, 2019 7 commits
* fix cmake for mac os * rename
Haichen Shen committed
* support LLVM trunk
* guard with USE_LLVM in if condition for c++14
* GREATER_EQUAL -> GREATER
Yizhi Liu committed
noituIover committed
fix lld
Peter Yeh committed
Haichen Shen committed
* [VTA] Support TLPP in the function simulator.
Issue: the VTA function simulator currently executes instructions serially, so the runtime ISA's dependency logic, which is used for task-level pipeline parallelism (TLPP), cannot be verified by the function simulator.
Solution: make the simulator driver multi-threaded and support TLPP.
Benefit: TLPP support in the VTA function simulator makes testing, debugging, and changing VTA logic easier.
Follow-up changes: replace the boost lockfree queue; add a configuration flag to enable or disable simulator TLPP; change code style to Google style; wrap the queue read/write and sync logic to simplify call sites; add comments; remove the multi-threading logic and change to single-thread mode to address review comments; add a cmake macro to enable/disable the simulator TLPP logic; update the submodule; correct a file name mentioned in comments.
* remove USE_VTA_FSIM_TLPP.
Hua Jiang committed
* update lint
* lint fixed
* lint updated
* lint fixed
* lint fixed
* lint fixed
* updates
* add intel graphics as a package
* remove print info
* depthwise conv2d schedule added for intel graphics
* asdf
* fix lint
* fix lint
* fix ci
* add channels
Leyuan Wang committed
- 06 Sep, 2019 6 commits
雾雨魔理沙 committed
* save * init * move type_relations
雾雨魔理沙 committed
* Getting closer to having ADT defs
* ADT defs working probly
* Match parsing basipally done
* came to earth in a silver chrome UFO
* match finished?
* All tests but newest are passing
* ADT constructors work now cleanup?
* Cleanup round 1
* Cleanup round 2
* Cleanup round 3
* Cleanup round 4
* Cleanup round 6
* Cleanup round 7
* Lil grammar fix
* Remove ANTLR Java files
* Lint roller
* Lint roller
* Address feedback
* Test completeness in match test
* Remove unused imports
* Lint roller
* Switch to Rust-style ADT syntax
* Lil fix
* Add dummy `extern type` handler
* Add type arg to test
* Update prelude semantic version
* Repair test
* Fix graph var handling in match
* Revert 's/graph_equal/is_unifiable' change
Logan Weber committed
Yong Wu committed
Installed through PyPI
Jason Knight committed
Yizhi Liu committed
- 05 Sep, 2019 12 commits
* Search PATH for DLLs * Fix lint issue
Jon Soifer committed
* implement tuple pattern
* add tuple pattern
* lint
* lint
* lint
* fix error
* fix
* add test
雾雨魔理沙 committed
kice committed
* save * add test * refactor * fix indent * save * refactor
雾雨魔理沙 committed
* Update from_source.rst * Update deploy_ssd_gluoncv.py
Haibin Lin committed
* [Relay] enable NHWC in `relay.testing.mobilenet`. This way we can play around with NHWC inside TVM regardless of the frontend.
* [Test] test for NHWC in relay.testing.mobilenet
黎明灰烬 committed
* initial conv2d_transpose
* correct select operator
* cleanup
* fix
* fix correctness check
* conv2d transpose declaration fix
* autotvm conv2d_transpose tuning script
* ir pass fix
* fix tuning script
* deriving params from env, adding bias
* removing bias comp from deconvolution
* lint
* fix
* lint
* lint
* turning off cpu
* lint, ops
* lint
* import fix
* removing hard coded values
* lint
Thierry Moreau committed
* adding support for graphpack over multiply op
* increasing resnet model coverage
* fix indentation
* lint
* moving recursion limit fix into graphpack pass
* moving recursionlimit to relay init
* pooling on NCHWnc format
* adding more models
* deploy_resnet_on_vta.py
* trailing line
* generalizing to vision models
* merge conflicts
* fix, apply quantization to VTA only
* improving comments
* trimming models that have runtime issues for the moment
* lint
* lint
* lint
Thierry Moreau committed
* init * fix
雾雨魔理沙 committed
Animesh Jain committed
* rework
* `de10-nano` -> `de10nano`
* fix compilation error
* bug fix
* Update install.md
* Update install.md
* Update install.md
* update with current runtime
* add debug messages
* bug fix in cma kernel module
Liangfu Chen committed
miheer vaidya committed
- 04 Sep, 2019 1 commit
Luis Vega committed