- 19 Mar, 2020 1 commit

* [ConvertLayout] Support QNN ops.
* Changing layouts to C.
* Fixing dilation.
* Empty commit.

Co-authored-by: Ubuntu <ubuntu@ip-172-31-53-55.us-west-2.compute.internal>
Animesh Jain committed

- 12 Mar, 2020 1 commit

* [refactor][relay pass] Separate analysis and transform passes into different subfolders
* remove pass folder
Zhi committed

- 24 Feb, 2020 1 commit

* relay op strategy
  fix lint
  bitpack strategy
  bitserial_dense (#6)
* update strategy
* address comments
  fix a few topi test

  Dense strategy (#5)
* dense
* add biforst; remove comments
* address comment

  Refactor x86 conv2d_NCHWc (#4)
* Refactor x86 conv2d
* Add x86 depthwise_conv2d_NCHWc
* Add back topi x86 conv2d_nchw
* Merge x86 conv2d_nchw and conv2d_NCHWc
* Minor fix for x86 conv2d
  fix more strategy

  Add x86 conv2d_NCHWc_int8 strategy (#8)
* Add x86 conv2d_NCHWc_int8 strategy
* Remove contrib_conv2d_nchwc_int8
* Fix generic conv2d_NCHWc for int8
* Fix topi arm_cpu conv2d_NCHWc_int8
  update x86 conv2d
  enable specify relay ops to be tuned for autotvm
  add cuda conv2d strategy
  add conv2d strategy for rocm
  add conv2d strategy for hls
  add conv2d strategy for arm cpu
  add conv2d strategy for mali
  add conv2d strategy for bifrost
  add conv2d strategy for intel graphics
  clean up and fix lint
  remove template keys from autotvm
  remove 2 in the func name
  address comments
  fix
* fix bugs
* lint
* address comments
* add name to op implement
* Modify topi tests (#9)
* Add pooling, reorg, softmax and vision
* Add lrn
* fix topi test
* fix more topi test
* lint
* address comments
* x
* fix more tests & bugs
* Modify more tests (#10)
* Modify tests for bitserial_conv2d, bitserial_dense, bitserial_conv2d_rasp and bnn
* Minor fix
* More minor fix
* fix more test
* try to update vta using strategy
* fix cpptest
* x
* fix rebase err
* Fix two tests (#11)
* change autotvm log format
* lint
* minor fix
* try fix vta test
* fix rebase err
* tweak
* tmp hack for vta pass
* fix tutorial
* fix
* fix more tutorials
* fix vta tutorial
* minor
* address comments
* fix
* address comments
* fix cpptest
* fix docs
* change data structure name and api
* address comments
* lint
* fix rebase err
* updates
* fix winograd test
* fix doc
* rebase
* upgrade tophub version number
* fix bug
* re-enable vta tsim test after tophub is upgraded
* fix vta test to use the correct args so the config can be found in tophub

Co-authored-by: Yao Wang <kevinthesunwy@gmail.com>
Haichen Shen committed

- 21 Jan, 2020 1 commit

Bring up namespace te -- Tensor expression language DSL.
Tianqi Chen committed

- 20 Jan, 2020 1 commit

* [REFACTOR][TYPE] Finish move all types to IR.
  - Move definition of Ref and TensorType to ir
  - Move type_functor.h to public header.
  - Rename RefType -> RelayRefType for clarity.
* Add atol
Tianqi Chen committed

- 19 Jan, 2020 1 commit

TIR is the new namespace for the low-level IR used for tensor-level optimizations and loop transformations. This PR establishes the namespace and files.
- lowered_func.h, buffer.h, data_layout.h -> tir/buffer.h, tir/data_layout.h, tir/lowered_func.h
- ir.h -> tir/expr.h, tir/stmt.h
- ir_functor_ext.h -> tir/expr_functor.h, tir/stmt_functor.h
Tianqi Chen committed

- 16 Jan, 2020 1 commit

* [REFACTOR] Introduce top -- Tensor Operation DSL.

  Historically we put Tensor, Schedule and compute under the root tvm namespace. This is no longer a good idea as the project's scope grows larger than the tensor operation DSL. This PR introduces top -- a namespace for tensor operation DSL concepts such as schedule, tensor, and compute. We moved the related files into the new top subfolder.
* Move relevant files into include/tvm/top and src/top
Tianqi Chen committed

- 14 Jan, 2020 1 commit

- Use a consistent constructor style to construct objects.
- Move env_func to ir as it is mainly used to construct IRs.
- Make docs consistent.
Tianqi Chen committed

- 09 Jan, 2020 1 commit

* [REFACTOR][IR] tvm::Expr -> PrimExpr (Primitive Expr)

  As part of the unified IR, we will need to unify relay::Expr and the current tvm::Expr under the same base type. From the technical point of view, tvm::Expr is a "primitive" expression that only contains POD types and handles and does not do life-cycle management. This PR renames Expr -> PrimExpr to clarify that. We will send a subsequent PR to introduce the base expr class.
* Remove legacy VarExpr and ExprHash/Equal
Tianqi Chen committed
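
As a rough illustration of the rename above, a minimal sketch of building a primitive expression, assuming the present-day headers (the tir namespace and file layout come from later commits in this log; the names are arbitrary):

```cpp
// Sketch only: PrimExpr is the renamed "primitive" expression type.
#include <tvm/tir/expr.h>
#include <tvm/tir/op.h>

using namespace tvm;

PrimExpr MakeExpr() {
  tir::Var x("x", DataType::Int(32));  // a primitive variable
  PrimExpr e = x * 2 + 1;              // POD-only expression tree, no life-cycle management
  return e;
}
```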

- 08 Jan, 2020 1 commit

* [REFACTOR][IR] Variable -> VarNode
* [REFACTOR][IR] Add/Sub/Mul/Div -> AddNode/SubNode etc.
* [REFACTOR][IR] Min/Max/FloorDiv/FloorMod -> MinNode/MaxNode etc.
* [REFACTOR][IR] EQ/NE/LT/LE/GT/GE/Select -> EQNode/NENode etc.
* [REFACTOR][IR] Add Node suffix to Select/Call/Load/Ramp/Shuffle/Let
* [REFACTOR][IR] Add Node suffix to IntImm/UIntImm/FloatImm/StringImm
* [REFACTOR][IR] Add Node suffix to Any, AttrStmt, AssertStmt
* [REFACTOR][IR] Add Node suffix to Store/Provide/Allocate/Free
* [REFACTOR][IR] Add Node suffix to ProducerConsumer
* Fix lint
* Style updates, test fixes
Tianqi Chen committed
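
A hedged sketch of what the renamed node classes look like at a use site, assuming the present-day headers (the tir paths reflect the namespace move recorded above in this log):

```cpp
// Sketch only: matching on the renamed *Node classes via as<T>().
#include <tvm/tir/expr.h>
#include <tvm/tir/op.h>

using namespace tvm;

bool IsAdd(const PrimExpr& e) {
  // Add -> AddNode after this refactor; as<T>() returns nullptr on a type mismatch.
  if (const tir::AddNode* add = e.as<tir::AddNode>()) {
    return add->a.defined() && add->b.defined();
  }
  return false;
}
```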

- 06 Jan, 2020 1 commit

Previously we supported only a limited case of function type deduction, and in many places we had to supply the type twice during set_body_typed (once in the template parameter, again in the lambda signature). This PR improves the deduction by enabling automatic function signature deduction.

```
TVM_REGISTER_GLOBAL("sub")
.set_body_typed([](int x, int y) -> int { return x - y; });
```

Unfortunately, because of a template conflict, we can not support the original case where both the type signature and the lambda are supplied through set_body_typed. This PR refactors the existing registrations to the new style.
Tianqi Chen committed

- 04 Jan, 2020 1 commit

TVM_REGISTER_API is an alias of TVM_REGISTER_GLOBAL. In the spirit of simplifying redirections, this PR removes the original TVM_REGISTER_API macro and directly uses TVM_REGISTER_GLOBAL instead. This kind of refactor also helps IDE navigation tools such as FFI navigator provide a better code reading experience.

Also move EnvFunc's definition to node.
Tianqi Chen committed
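
A minimal sketch of the surviving macro in use; the global name "example.add" is made up for illustration:

```cpp
// Sketch only: register a global packed function under a made-up name.
#include <tvm/runtime/registry.h>

TVM_REGISTER_GLOBAL("example.add")
.set_body_typed([](int a, int b) { return a + b; });
```

The registered function can then be looked up by name, e.g. through tvm::runtime::Registry::Get("example.add") in C++ or tvm.get_global_func("example.add") in Python.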

- 31 Dec, 2019 1 commit

* [REFACTOR][OBJECT] Consolidate NodePtr/Ref/Hash/Equal and macros to Object.

  Historically, we have had classes like NodePtr/Ref/HashEqual. After the unified object protocol, these names are just aliases of their Object counterparts. Moreover, there are helper macros defined all over the place for defining these objects. This PR consolidates the terminology into the corresponding ones in the Object system so we have a clean and consistent API moving forward.
* Update include/tvm/attrs.h
  Co-Authored-By: Wei Chen <ipondering.weic@gmail.com>
* fix compilation

Co-authored-by: Wei Chen <ipondering.weic@gmail.com>
Tianqi Chen committed
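
A hedged sketch of the consolidated Object-style declaration this commit converges on, assuming the present-day tvm::runtime macros; the ExampleNode/Example names are made up:

```cpp
// Sketch only: a node class and its reference wrapper declared with the
// consolidated Object macros; names are made up for illustration.
// (A matching TVM_REGISTER_OBJECT_TYPE(ExampleNode) would normally go in a .cc file.)
#include <tvm/runtime/object.h>

namespace demo {

using tvm::runtime::Object;
using tvm::runtime::ObjectRef;

class ExampleNode : public Object {
 public:
  int value{0};

  static constexpr const char* _type_key = "demo.Example";
  TVM_DECLARE_FINAL_OBJECT_INFO(ExampleNode, Object);
};

class Example : public ObjectRef {
 public:
  TVM_DEFINE_OBJECT_REF_METHODS(Example, ObjectRef, ExampleNode);
};

}  // namespace demo
```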

- 30 Dec, 2019 1 commit

Animesh Jain committed

- 26 Dec, 2019 1 commit

Animesh Jain committed

- 24 Dec, 2019 1 commit

* Added tvm function stencil for subpixel operations to topi.
* Topi subpixel operators added and tested.
* Added subpixel attrs.
* Added depth_to_space relay attributes.
* depth_to_space fully working.
* Fixed NHWC shape bug.
* SpaceToDepth in and all tests passing.
* lint fixes.
* Added string include
* Fixed topi formatting.
* Added DCR/CDR mode to depthtospace operator.
Josh Fromm committed

- 22 Dec, 2019 1 commit

dtype.h -> runtime/data_type.h

Changes:
- Rename all old references of tvm::Type to DataType
- ExprNode.type -> ExprNode.dtype
- Expr.type() -> Expr.dtype()
- Change Expr-related functions to expr_operator.
- DataType::min() -> min_value(DataType)
- DataType::max() -> max_value(DataType)
- Move type constructors Int, UInt, Float, Handle, Bool into DataType.
  - Int(bits) -> DataType::Int(bits)
  - UInt(bits) -> DataType::UInt(bits)
Tianqi Chen committed
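
A small sketch of the relocated constructors, assuming the runtime/data_type.h path listed above:

```cpp
// Sketch only: DataType constructors after the dtype.h -> runtime/data_type.h move.
#include <tvm/runtime/data_type.h>

using tvm::runtime::DataType;

DataType i32 = DataType::Int(32);    // was Int(32)
DataType u8  = DataType::UInt(8);    // was UInt(8)
DataType f32 = DataType::Float(32);  // was Float(32)
DataType b1  = DataType::Bool();     // was Bool()
```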

- 24 Nov, 2019 1 commit

* [LINT] Improve the check tool to handle ASF copyright message.
* [LINT] Remove unnecessary copyright message as per ASF requirement.
* Fix codegen hybrid
* [LINT] Broaden license checks to include html, xml
* [LINT] Fix rest of the files
* Fix notice
* [LINT] Improve check file type error message
Tianqi Chen committed

- 11 Nov, 2019 1 commit

* Add shape functions
* Fix get_const_tuple
* Fix cpplint
* Fix pylint
* Fix pylint
* rebase and fix
* Check Any for infer type
* Fix expand_dim shape func for zero rank input
* Fix pooling infer type
* Address comment
* Register layout transform attr
Yao Wang committed

- 25 Oct, 2019 1 commit

* save
* lint
雾雨魔理沙 committed

- 21 Oct, 2019 1 commit

* [REFACTOR][NODE][RUNTIME] Move Node to the new Object protocol.

  This PR removes the original node system and makes Node a subclass of Object. This is a major refactor towards a better unified runtime object system.

  List of changes in the refactor:
  - We now hide the data_ field; use Downcast explicitly to get a sub-class object.
  - Removed the node system FFI in python.
  - Removed the node C API; instead use PackedFunc for list and get attrs.
  - Changed relay::Op::set_attr_type_key(attr_key_name) to relay::Op::set_attr_type<AttrType>().
    - This change was necessary because of the new Object registration mechanism.
    - Subsequent changes to the op registrations.
    - The change revealed a few previous problems that are now fixed.
    - Patched up a few missing node type registrations.
    - Now we will raise an error if we register object that is not registered.
  - The original node.h and container.h are kept in the same location.
  - Calling convention: kObjectHandle now equals the old kNodeHandle; kNodeHandle is removed.
  - IRFunctor now dispatches on ObjectRef.
  - Update to the new type checking API: is_type and derived_from are replaced by IsInstance.
  - Removed the .hash member function; instead use C++-convention hasher functors.
* Address review comments
Tianqi Chen committed
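
A hedged sketch of the new-style checks mentioned above (is_type/derived_from replaced by IsInstance, explicit Downcast instead of reading data_), written against present-day headers:

```cpp
// Sketch only: IsInstance + Downcast replace the old node type checks.
#include <tvm/ir/expr.h>
#include <tvm/runtime/object.h>

using namespace tvm;

int64_t ReadIntImm(const runtime::ObjectRef& ref) {
  if (ref->IsInstance<IntImmNode>()) {
    // data_ is hidden, so obtain a typed reference via Downcast.
    return runtime::Downcast<IntImm>(ref)->value;
  }
  return -1;
}
```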

- 10 Oct, 2019 1 commit

* Add FIFO buffer op to enable explicit computation re-use in convolution
* Add a test
* Add end-to-end test with 1D convolution
* Add a stub in MXNet frontend
* Address reviewer comments
* Add back stub for MXNet frontend
Philip Hyunsu Cho committed

- 05 Oct, 2019 1 commit

* save
  save
  redo max test
  save
  address comment
  fix
* address comment
* increase rtol
* address review comment
雾雨魔理沙 committed

- 03 Oct, 2019 1 commit

* [Relay][Op] Add instance norm op
* mend
bindog committed

- 30 Aug, 2019 1 commit

Animesh Jain committed

- 07 Aug, 2019 1 commit

* Add LayerNorm op
* update
* fix
* Add mean_std and mean_variance
* add std and update doc
* add license
* x
* lint
* x
* fix
* fix doc
Haichen Shen committed

- 01 May, 2019 1 commit

* Fix PRelu layout in Relay
* Fix cpplint
* Add PRelu test case
Zhao Wu committed

- 26 Apr, 2019 1 commit

* Quantize dense layers
* Add out_dtype argument to dense; add dense_int8 on CUDA
* Add topi unittest of dense int8
* Fix relay
* Fix topi integration
* Fix quantization
* Update dense_rewrite
* Trigger CI
* Change qconfig quantize_dense to quantize_op
* Fix
* Remove quantize_op from qconfig
Wuwei Lin committed

- 17 Apr, 2019 1 commit

* Implement nn.bias_add compute in C++
* Address comments
* Remove unnecessary check
Yinghai Lu committed

- 16 Apr, 2019 1 commit

Returning false means "retry in the future"; in the case of an error, it should be reported ASAP, not retried.
雾雨魔理沙 committed

- 10 Apr, 2019 1 commit

* Add `set_body_simple` to Registry, refactor a lot of code to use it
* Add more types to Relay PackedFuncs
* Add Registry::set_body_method to easily make Node methods into PackedFuncs
* Add set_body_method, set_body_node_method; start typing api_lang
* Add some docs, remove unused script
* Fix mysterious linter problem
* Touch up api_ir.cc
* Fix some issues with TOPI argument counts
* Revert changes to topi.cc to avoid problems with optional arguments
* A little more cleanup
* Type more of the api _ functions
* Whitespace
* Finalize names and docs for new registry helpers
* Update docs
James Gilles committed

- 08 Apr, 2019 1 commit

* [HEADER] ASF header dir=include
* [HEADER] ASF Header dir=src
* [HEADER] ASF Header -dir=python
* [HEADER] ASF header dir=topi
* [HEADER] ASF Header dir=nnvm
* [HEADER] ASF Header -dir=tutorials
* [HEADER] ASF Header dir=tests
* [HEADER] ASF Header -dir=docker
* fix whitespace
* [HEADER] ASF Header -dir=jvm
* [HEADER] ASF Header -dir=web
* [HEADER] ASF Header --dir=apps
* [HEADER] ASF Header --dir=vta
* [HEADER] ASF Header -dir=go
* temp
* [HEADER] ASF Header --dir=rust
* [HEADER] Add ASF Header --dir=cmake
* [HEADER] ASF Header --dir=docs
* [HEADER] Header for Jenkinsfile
* [HEADER] ASF Header to toml and md
* [HEADER] ASF Header to gradle
* Finalize rat cleanup
* Fix permission
* Fix java test
* temporary remove nnvm onnx test
Tianqi Chen committed

- 01 Mar, 2019 1 commit

* Add batch_dot and cpu schedule
* Add relay support for batch_dot
* Rename batch_dot to batch_matmul
* nits
* Add missing file
* Put batch_matmul and dense x86 schedule in separate files
* Fix pylint
* Remove unused import
* Add cuda schedule for batch_matmul
* Add test case with larger batch size
* Add batch_matmul in api doc
* Fix quantize pass rounding error
* Fix pylint and minor change
* bug fix
Haichen Shen committed

- 28 Feb, 2019 1 commit

* move layout.h & layout.cc from relay to tvm
* change ConvertLayout in relay to bijectiveLayout->Forward/backward
* add first test case
* add LayoutAxis
* add LayoutAxis struct and compiles
* simplify BijectiveLayout rule construct
* polish func name for Layout, move impl to .cc, remove Layout::defined(), add defined() checker
* partially add layout py support
* add layout test cases
* add doc for tvm.layout & tvm.bijective_layout
* fix lint
* fix lint
* fix layout name generation bug
* fix layout typo
* address comments and add topi.layout_transform
* layout.h -> data_layout.h, test_lang_layout.py -> test_lang_data_layout.py
Yizhi Liu committed
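
A hedged sketch of the BijectiveLayout Forward/Backward usage mentioned above, assuming the present-day location of the moved header (it later landed under tir/):

```cpp
// Sketch only: map a shape from NCHW to NHWC with BijectiveLayout.
#include <tvm/tir/data_layout.h>

using namespace tvm;
using namespace tvm::tir;

Array<PrimExpr> ToNHWC() {
  BijectiveLayout cvt(Layout("NCHW"), Layout("NHWC"));
  // ForwardShape maps a shape in the source layout to the destination layout.
  return cvt.ForwardShape({1, 3, 224, 224});
}
```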

- 30 Nov, 2018 1 commit

* [RELAY] Finish alter op pass
* [RELAY] AlterOpLayout Pass
* fix broadcast operators
* fix broadcast operators
* fix broadcast operators
* Support concatenate
* address comments
* address comments
* add comments
* rebase
Lianmin Zheng committed

- 21 Nov, 2018 1 commit

Lianmin Zheng committed

- 19 Nov, 2018 2 commits

Siju committed

Animesh Jain committed

- 18 Nov, 2018 1 commit

Yizhi Liu committed

- 09 Nov, 2018 1 commit

Tianqi Chen committed