- 08 Apr, 2019 1 commit
* [HEADER] ASF header dir=include
* [HEADER] ASF Header dir=src
* [HEADER] ASF Header dir=python
* [HEADER] ASF header dir=topi
* [HEADER] ASF Header dir=nnvm
* [HEADER] ASF Header dir=tutorials
* [HEADER] ASF Header dir=tests
* [HEADER] ASF Header dir=docker
* fix whitespace
* [HEADER] ASF Header dir=jvm
* [HEADER] ASF Header dir=web
* [HEADER] ASF Header dir=apps
* [HEADER] ASF Header dir=vta
* [HEADER] ASF Header dir=go
* temp
* [HEADER] ASF Header dir=rust
* [HEADER] Add ASF Header dir=cmake
* [HEADER] ASF Header dir=docs
* [HEADER] Header for Jenkinsfile
* [HEADER] ASF Header to toml and md
* [HEADER] ASF Header to gradle
* Finalize rat cleanup
* Fix permission
* Fix java test
* Temporarily remove nnvm onnx test
Tianqi Chen committed
- 31 Oct, 2018 1 commit
Haichen Shen committed
- 02 Oct, 2018 1 commit
Sergei Grechanik committed
- 23 Aug, 2018 1 commit
* [NNVM][TEST] Numerical gradient testing
* [NNVM][TEST] Make some tests a little faster
* Fix the failing test_top_level3
* Target exclusion for the check_function
* Try to ignore singularities
* grad_input_vars now can't contain shapes
* Don't pass unnecessary grad_input_vars to check_function
* Multiple outputs; fixes; testing of check_function
* Use numerical_grads_params to pass parameters to numgrad checker
* Fail when no action is requested explicitly
* Pass additional params to functions
* Silence the linter issue
* Simplified numgrad checking
* Improved docs for check_function
* Fixed the error message when no dtype is provided
* Several fixes
* Tests with shape/dtype inference for inputs
* Don't check dense's grads on cuda
* Raise an error if output dtypes haven't been inferred
* Moved shape/dtype inference into a separate function; use float32 as fallback
* Remove redundant dtype=float32
* Fix multiple outputs
* Use check_function in the rest of the test_top_level1
Sergei Grechanik committed
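The commit above adds numerical-gradient testing, i.e. comparing an operator's analytic gradient against a central finite-difference estimate. This is a minimal standalone sketch of that idea in plain NumPy; it is not NNVM's actual `check_function`/`numerical_grads_params` API, just an illustration of the technique:

```python
import numpy as np

def numerical_grad(f, x, eps=1e-5):
    """Central-difference estimate of df/dx for a scalar-valued f."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    for _ in it:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + eps
        fp = f(x)                      # f at x + eps along this coordinate
        x[idx] = orig - eps
        fm = f(x)                      # f at x - eps along this coordinate
        x[idx] = orig                  # restore the input
        grad[idx] = (fp - fm) / (2 * eps)
    return grad

def check_gradient(f, grad_f, x, atol=1e-4, rtol=1e-4):
    """Fail if the analytic gradient disagrees with the numerical estimate."""
    numeric = numerical_grad(f, x.copy())
    analytic = grad_f(x)
    np.testing.assert_allclose(analytic, numeric, atol=atol, rtol=rtol)

# Example: f(x) = sum(x**2) has analytic gradient 2*x
x = np.random.randn(3, 4)
check_gradient(lambda v: np.sum(v ** 2), lambda v: 2 * v, x)
```

The "try to ignore singularities" entry hints at the usual caveat: finite differences are unreliable near non-smooth points (e.g. the kink in ReLU at zero), so such inputs are typically excluded or perturbed before checking.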
- 04 Jul, 2018 1 commit
Dayananda V committed
- 28 Jun, 2018 1 commit
Pariksheet Pinjari committed
- 24 Jun, 2018 1 commit
Tianqi Chen committed
- 22 Jun, 2018 1 commit
Pariksheet Pinjari committed
- 05 Jun, 2018 1 commit
[NNVM] Fix softmax gradient
larrywyang committed
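The commit above fixes the softmax gradient in NNVM. For reference, the correct backward rule reduces the full softmax Jacobian to dx = y * (dy - sum(dy * y)) along the softmax axis, where y is the forward output and dy the upstream gradient. A hedged NumPy sketch of that rule (not the actual NNVM code), verified against a finite difference:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def softmax_grad(dy, y, axis=-1):
    """Backward pass: dx = y * (dy - sum(dy * y)) along the softmax axis."""
    return y * (dy - np.sum(dy * y, axis=axis, keepdims=True))

# Check against a central difference on the scalar loss L = sum(w * softmax(x))
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 5))
w = rng.standard_normal((2, 5))        # plays the role of the upstream gradient dy
y = softmax(x)
analytic = softmax_grad(w, y)

eps = 1e-6
numeric = np.zeros_like(x)
for i in np.ndindex(x.shape):
    xp = x.copy(); xp[i] += eps
    xm = x.copy(); xm[i] -= eps
    numeric[i] = (np.sum(w * softmax(xp)) - np.sum(w * softmax(xm))) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-5)
```

A useful sanity property: because softmax outputs sum to one along the axis, the gradient components along that axis always sum to zero.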
- 29 May, 2018 17 commits
Pariksheet Pinjari committed
Yao Wang committed
nhynes committed
* Update docs
* Add backward operator to enable backward graph
* Fix testing
* Refactor top level1 test code
* Fix format
* Test
* Added zeros ones op
* Register fill_like operator
* Fix unit test
yuruofeifei committed
* vgg16 success
* remove six.PY2, use sys.version_info; convert_activation() accepts activation type name (str, e.g. 'relu') as input
* add convert_merge
* fix convert_batchnorm; improve tests
* fix lint
* add numpy-style pad operator
* deal with asymmetric padding
* resnet50 success
* fix pool_convert; xception passes test
* update tvm
* fix bias error; all tests pass
* use > >, not >>
Yuwei Hu committed
* add transform
* fix
* update doc
* Update tvm
Xingjian Shi committed
Tianqi Chen committed
Tianqi Chen committed
Tianqi Chen committed
* register pool, global_pool; add mobilenet example
* tests of pool and global_pool
* use new API of runtime module
* small fix
Yuwei HU committed
Tianqi Chen committed
Tianqi Chen committed
Tianqi Chen committed
* [TOP] Add dense, batchnorm
* update tvm
Tianqi Chen committed
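The dense and batchnorm operators added in the commit above compute a fully connected layer and an inference-time batch normalization. A minimal NumPy sketch of those two computations (an illustration of the math, not NNVM's implementation; names like `batch_norm_inference` are hypothetical):

```python
import numpy as np

def dense(x, weight, bias):
    """Fully connected layer: y = x @ W^T + b, with weight of shape (units, in_dim)."""
    return x @ weight.T + bias

def batch_norm_inference(x, gamma, beta, moving_mean, moving_var, eps=1e-5):
    """Inference-time batch norm: normalize with stored statistics, then scale and shift."""
    return gamma * (x - moving_mean) / np.sqrt(moving_var + eps) + beta

x = np.random.randn(4, 8)            # batch of 4, input dimension 8
w = np.random.randn(16, 8)           # 16 output units
b = np.zeros(16)
y = dense(x, w, b)                   # shape (4, 16)

mean, var = y.mean(axis=0), y.var(axis=0)   # per-channel statistics
out = batch_norm_inference(y, np.ones(16), np.zeros(16), mean, var)
assert out.shape == (4, 16)
```

With gamma=1 and beta=0 the normalized output has roughly zero mean and unit variance per channel, which is the property the operator exists to provide.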
* [RUNTIME][COMPILER] Formal compiler pipeline, runtime wrapper module
* more detailed comments
Tianqi Chen committed
* register depthconv, elemwise
* use global elemwise schedule for relu
Yuwei HU committed
Yuwei HU committed