- 28 Jul, 2019 3 commits

Luis Vega committed

Balint Cristian committed

Luis Vega committed

- 27 Jul, 2019 3 commits

* fix tensor issue in gemm
* remove trailing spaces
Luis Vega committed

Yong Wu committed

peterjc123 committed

- 26 Jul, 2019 6 commits

YPBlib committed

* Add USE_GTEST as a CMake variable
* Add GTest section in installation docs
* Incorporate feedback
Logan Weber committed

Enhance test to cover this case
lixiaoquan committed

* [TOPI][CUDA] Schedule for pool_grad
* Relay test
* Fix fused op
* doc
* Remove set scope local
Wuwei Lin committed

* add check_grad
* finish
* what does the fox say?
* lint
雾雨魔理沙 committed

* [VTA] [Chisel] Support for different inp/wgt bits; rewrote DotProduct for clarity
* change back to sim
* fix index
* fix indentation and trailing spaces
* change to a more descriptive name (matric -> matrix)
* fix spacing; add a generic name for dot
* better parameter flow
* update requirement (tested) for dot
* function call convention
* small edit
Benjamin Tu committed

- 25 Jul, 2019 7 commits

Lianmin Zheng committed

Balint Cristian committed

* uTVM interfaces (#14): implemented HostLowLevelDevice and MicroDeviceAPI; added micro_common and its Python interfaces
* Semi-implemented MicroSession; host test working; updated interfaces for MicroSession argument allocation
* Cleaned up `binutil.py`, made it Python-3 compatible, and improved the binutil tests
* Changed the argument allocation design; simplified the allocator (per @tqchen's suggestions); added a rounding macro
* rodata section works (and so does `test_runtime_micro_workspace.py`); simple graph runtime works; ResNet runs end-to-end
* Several rounds of cleanup, doc/style fixes, and `make lint` fixes
* Added `obj_path` in `create_micro_lib`; switched to enum class for `SectionKind`; added missing ASF headers
* Changed the Python interface to MicroTVM: all interaction with the device is now through `Session` objects, which are used through Python's `with` blocks
* Reordered the LowLevelDevice interface; stored a shared ptr to the session in all allocated objects; moved helper functions out of `tvm.micro`; switched a static char array to a vector
* Exposed `MicroSession` as a node to Python; reverted to using the `Session` constructor; switched to stack-based session contexts
* Made uTVM less intrusive to host codegen, using SSA for operands of generated ternary operators
* Inlined UTVMArgs into the UTVMTask struct; removed the `HostLowLevelDevice` header and the `BaseAddr` class; added a "utvm" prefix to global vars in the runtime
* Made `test_binutil.py` work with nose; removed the ResNet tests; fixed submodules
* Added an fcompile-compatible cross-compile function, unified the cross-compile APIs, and renamed `cross_compile` back to `cc`; added docs for the uTVM runtime files
* Moved pointer patching into `MicroSession`
* Changed the "micro_dev" target to "micro" (keeping `tvm.micro` as a namespace); added checks in tests for whether uTVM is enabled; added a TODO for 32-bit support
* Fixed failing CI (thanks to @tqchen for finding this bug): emitting ternary operators for `min` and `max` causes concurrency bugs in CUDA, so the ternary op emissions moved from `CodeGenC` to `CodeGenCHost`
Logan Weber committed
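
The `Session`-with-block design called out above can be pictured with a short sketch. This is a hypothetical illustration of the described workflow; `micro.Session`'s constructor arguments and `create_micro_lib`'s arguments are assumptions inferred from the commit message, not a verified API:

```python
# Hypothetical sketch of the Session-based flow this commit describes; the
# signatures below are assumptions inferred from the commit message, not a
# verified uTVM API.
from tvm import micro  # namespace mentioned in the commit

with micro.Session(device_type="host") as sess:  # assumed constructor arguments
    # `create_micro_lib` is named in the commit; its argument here is
    # illustrative only.
    lib = micro.create_micro_lib("test.c")
    # the device is only reachable inside the `with` block, via `sess`
```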

Philip Hyunsu Cho committed

Yong Wu committed

Jian Weng committed

* [TOPI] Average Pool2D bug. Issue: https://github.com/dmlc/tvm/issues/3581
* Add uint16 test
Animesh Jain committed

- 24 Jul, 2019 6 commits

Logan Weber committed

Tianqi Chen committed

Tianqi Chen committed

quickfix
雾雨魔理沙 committed

Wuwei Lin committed

* small bug fix for DataTypeObject
* retrigger ci
Zhi committed

- 23 Jul, 2019 7 commits

Various groups, internally and externally, are interested in replacing standard dense layers with block-sparse matrix multiplication layers. The motivations are generally higher performance (due to the reduction in FLOPs and in memory bandwidth/cache footprint) and enabling larger models (e.g. fitting more layers in a given memory budget). Some public work along these lines:

* https://openai.com/blog/block-sparse-gpu-kernels/
* https://openai.com/blog/sparse-transformer/
* https://arxiv.org/abs/1802.08435
* https://arxiv.org/abs/1711.02782

Various groups have been able to successfully train models with reasonable levels of sparsity (90%+) with marginal accuracy changes, which suggests substantial speedups are possible (as this implies a >10x reduction in FLOPs). It is fairly straightforward to realize these theoretical speedups; see e.g. the TVM benchmarks for Intel CPUs in https://gist.github.com/ajtulloch/e65f90487bceb8848128e8db582fe902, and:

* https://github.com/openai/blocksparse (CUDA)
* https://software.intel.com/en-us/mkl-developer-reference-c-mkl-bsrmm (MKL bsrmm)
* https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.bsr_matrix.html (SciPy BSR representation)

This is extracted from a patch we've been using internally. There are various extensions possible (int8/fp16/bf16, CUDA/other GPU architectures), but this is a reasonable starting point. It needs more thorough unit test coverage, however. We follow the conventions established by scipy.sparse.bsr_matrix and other libraries; see the unit tests for details. For folks interested in experimenting with scheduling/AutoTVM etc., https://gist.github.com/ajtulloch/e65f90487bceb8848128e8db582fe902 is a useful starting point.
Andrew Tulloch committed
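
As a concrete reference for the convention mentioned above, here is a small self-contained sketch (not part of the patch; shapes, block size, and density are illustrative) of how scipy.sparse.bsr_matrix represents a block-sparse weight matrix:

```python
# Illustrative sketch (not from the patch): the scipy.sparse.bsr_matrix
# convention the block-sparse operator follows. The (data, indices, indptr)
# triple below is the representation such kernels consume.
import numpy as np
import scipy.sparse

dense = np.random.randn(64, 64).astype("float32")
dense[np.random.rand(64, 64) > 0.1] = 0.0            # roughly 90% sparsity
bsr = scipy.sparse.bsr_matrix(dense, blocksize=(16, 16))

print(bsr.data.shape)     # (num_nonzero_blocks, 16, 16): the dense blocks
print(bsr.indices.shape)  # block-column index of each stored block
print(bsr.indptr.shape)   # per-block-row offsets into `indices`

# Block-sparse times dense agrees with the dense product.
x = np.random.randn(64, 8).astype("float32")
np.testing.assert_allclose(bsr @ x, dense @ x, rtol=1e-4, atol=1e-4)
```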

= Motivation

It's useful to expose the tvm::reinterpret functionality to Relay/TOPI users, as this allows them to build (fused) operators leveraging the bitwise reinterpretation of an operator. An example is approximate transcendental functions, which can be implemented along these lines:

```python
from tvm import relay

def C(x):
    return relay.expr.const(x, "float32")

def approx_exp(x):
    x = relay.minimum(relay.maximum(x, C(-88.0)), C(88.0))
    x = C(127.0) + x * C(1.44269504)
    xf = relay.floor(x)
    i = relay.cast(xf, "int32")
    x = x - xf
    Y = C(0.99992522) + x * (C(0.69583354) + x * (C(0.22606716) + x * C(0.078024523)))
    exponent = relay.left_shift(i, relay.expr.const(23, "int32"))
    exponent = relay.reinterpret(exponent, "float32")
    return exponent * Y

def approx_sigmoid(x):
    # <2.0e-5 absolute error over [-5, 5]
    y = approx_exp(x)
    return y / (y + C(1.0))

def approx_tanh(x):
    # <4.0e-5 absolute error over [-5, 5]
    x = x * C(2.0)
    y = approx_exp(x)
    return (y - C(1.0)) / (y + C(1.0))
```

See unit tests for implementations of these approximate transcendentals.
Andrew Tulloch committed
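
To make the bit trick above concrete, here is a self-contained NumPy sketch (an addition for illustration, not part of the patch) that mirrors approx_exp, with NumPy's view() playing the role of relay.reinterpret, and checks the stated sigmoid error bound:

```python
# NumPy mirror of approx_exp: build a float32 power of two by shifting an
# integer into the exponent field, then multiply by a polynomial for the
# fractional part. view("float32") reinterprets the int32 bit pattern.
import numpy as np

def approx_exp_np(x):
    x = np.clip(x.astype("float32"), -88.0, 88.0)
    x = np.float32(127.0) + x * np.float32(1.44269504)  # base-2 exponent, biased by 127
    xf = np.floor(x)
    i = xf.astype("int32")
    x = x - xf
    Y = (np.float32(0.99992522)
         + x * (np.float32(0.69583354)
                + x * (np.float32(0.22606716) + x * np.float32(0.078024523))))
    exponent = np.left_shift(i, 23).view("float32")     # the reinterpret step
    return exponent * Y

def approx_sigmoid_np(x):
    y = approx_exp_np(x)
    return y / (y + np.float32(1.0))

x = np.linspace(-5, 5, 1001, dtype="float32")
# Tolerance slightly looser than the commit's stated 2.0e-5 bound.
assert np.max(np.abs(approx_sigmoid_np(x) - 1 / (1 + np.exp(-x)))) < 5e-5
```
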
Luis Vega committed

* Update the Relay pass-adding doc to reference the new pass infrastructure
* Correct pass name (Co-Authored-By: Zhi <5145158+zhiics@users.noreply.github.com>)
* Align header equals signs
Steven S. Lyubomirsky committed

Animesh Jain committed

雾雨魔理沙 committed

In cases where we have multiple models or threadpools active, spinning around `sched_yield()` may not be desirable, as it prevents the OS from effectively scheduling other threads. Thus, allow users to conditionally disable this behaviour (via an environment variable `TVM_THREAD_POOL_SPIN_COUNT`, similar to existing environment flags for the thread pool such as `TVM_BIND_THREADS`, etc.). This substantially improves tail latencies in some of our multi-tenant workloads in practice.

Unit tests have been added. On my laptop, running:

```
TVM_THREAD_POOL_SPIN_COUNT=0 ./build/threading_backend_test;
TVM_THREAD_POOL_SPIN_COUNT=1 ./build/threading_backend_test;
./build/threading_backend_test;
```

gives https://gist.github.com/ajtulloch/1805ca6cbaa27f5d442d23f9d0021ce6 (i.e. 97ms -> <1ms after this change).
Andrew Tulloch committed

- 22 Jul, 2019 3 commits

* [RFC] Initial support for the TFLite operator SPLIT

This patch adds initial support for the TFLite operator split. However, I am not yet sure how to handle the axis parameter for the split operator and support it in the test infrastructure. Putting this up for an initial review and comment.

The split operator in TFLite, according to https://www.tensorflow.org/lite/guide/ops_compatibility, appears to take num_or_size_split as a 0D tensor. I also note that tflite.split is one of the few operators that returns multiple outputs, and thus the helper routines in the tests needed some massaging to make this work. @apivarov, could you please review this? Thanks, Ramana

* Fix the axis parameter; add more tests
* Address review comments
* Try out frozen_gene's suggestion
* Handle split of 1 element
* int32 is only supported in tflite 1.14, so check that version here
* Keep this at python3.5
* Add packaging as a python package to be installed
Ramana Radhakrishnan committed
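
Since SPLIT is one of the few multi-output operators, a minimal Relay-side sketch (illustrative, not taken from the patch) shows the multiple-output shape the frontend lowers it to:

```python
# Illustrative sketch (not from this patch) of Relay's split, which is what
# the TFLite frontend lowers SPLIT to; note that it produces multiple outputs.
from tvm import relay

x = relay.var("x", shape=(1, 6, 4), dtype="float32")
parts = relay.split(x, indices_or_sections=3, axis=1)  # three (1, 2, 4) tensors
func = relay.Function([x], relay.Tuple([parts[0], parts[1], parts[2]]))
print(func)  # the tuple return type reflects the multiple outputs
```
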
Tianqi Chen committed

* Updated runtime to support non-shared-memory FPGAs for instruction and micro-op kernels
* Added a driver-defined memcpy function to handle F1 cases
* Refactored to include flush/invalidate in the memcpy driver function, then moved flush/invalidate out of memmove
* Updated the tsim driver
* Pre-allocated FPGA-readable buffers to improve performance
* Removed the instruction stream address rewrite pass for micro-op kernels
* Avoided signed/unsigned compilation warnings
* Various bug fixes, lint/whitespace cleanup, an API rename, and fixes addressing review comments
Thierry Moreau committed

- 21 Jul, 2019 2 commits

Tianqi Chen committed

Luis Vega committed

- 20 Jul, 2019 1 commit

Luis Vega committed

- 19 Jul, 2019 2 commits