- 02 Apr, 2020 1 commit
* [REFACTOR][TIR] Introduce ExprDeepEqual, remove IRDeepCompare. This PR introduces ExprDeepEqual, which reuses the StructuralEqual infrastructure. We migrated the use cases of ir_pass::Equal to ExprDeepEqual and StructuralEqual.
* Address comments
Tianqi Chen committed
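As a quick illustration (not part of the commit), a hedged usage sketch of the new functor; the header path is assumed from the TIR refactor:

```
#include <tvm/tir/analysis.h>

// ExprDeepEqual is a lightweight functor: construct it and call it on two
// expressions. It compares structure rather than pointer identity, which is
// what the migrated ir_pass::Equal call sites relied on.
bool SameStructure(const tvm::PrimExpr& a, const tvm::PrimExpr& b) {
  return tvm::tir::ExprDeepEqual()(a, b);
}
```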
- 28 Mar, 2020 1 commit
* Adding support for QNN subtract op
* Fixing typos and lint (several rounds)
* Addressing review comments
* Renaming variables per convention; renamed QnnBinaryOpTypes -> QnnBinaryOpType
* Renaming QnnBinaryOpType to QnnBinaryOpTensorType, which now takes the index you want to extract, to make the code more readable
* Moving common code to a macro
* Fixing alignment
* Renaming a method to pass CI
shoubhik committed
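For context (my illustration, not necessarily TVM's exact lowering), a hedged sketch of the elementwise arithmetic a quantized subtract performs; all names and the int8 range are illustrative assumptions:

```
#include <algorithm>
#include <cmath>
#include <cstdint>

// Dequantize both operands with their (scale, zero_point) pairs, subtract in
// real space, then requantize to the output quantization parameters.
int8_t QnnSubtract(int8_t qa, float sa, int32_t za,
                   int8_t qb, float sb, int32_t zb,
                   float so, int32_t zo) {
  float real = sa * (qa - za) - sb * (qb - zb);
  int32_t q = zo + static_cast<int32_t>(std::lround(real / so));
  return static_cast<int8_t>(std::clamp<int32_t>(q, -128, 127));  // saturate
}
```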
- 23 Mar, 2020 1 commit
* relay Node::make to constructor
* PatternWildcard
* Address comments
Zhi committed
- 19 Mar, 2020 1 commit
* [ConvertLayout] Support QNN ops.
* Changing layouts to C.
* Fixing dilation.
* Empty commit.
Co-authored-by: Ubuntu <ubuntu@ip-172-31-53-55.us-west-2.compute.internal>
Animesh Jain committed
- 12 Mar, 2020 1 commit
* [refactor][relay pass] Separate analysis and transform passes into different subfolders
* Remove pass folder
Zhi committed
- 11 Mar, 2020 1 commit
* [QNN] Support 4D padding.
* Empty commit.
Co-authored-by: Ubuntu <ubuntu@ip-172-31-38-96.us-west-2.compute.internal>
Animesh Jain committed
- 15 Feb, 2020 1 commit
* Add test case for per-channel dense
* Add unit arg in tflite frontend
* Update qnn legalize test
* Fix output dim index
masahi committed
- 05 Feb, 2020 1 commit
* [QNN] Optimize lowering for requantize and FixedPointMultiply.
* Add check for requantize scale > 1.
* Added test case.
Animesh Jain committed
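As background on FixedPointMultiply, a hedged sketch of the standard gemmlowp-style decomposition (not TVM's internals; names are illustrative):

```
#include <cmath>
#include <cstdint>

// Express a positive float requantize scale as significand * 2^exponent with
// significand in [0.5, 1), storing the significand as a Q31 multiplier.
// A scale > 1 simply yields a positive exponent (a left shift), which is why
// the lowering needs an explicit check for that case.
void DecomposeScale(double scale, int32_t* multiplier, int* exponent) {
  double significand = std::frexp(scale, exponent);
  int64_t q = std::llround(significand * (1LL << 31));
  if (q == (1LL << 31)) {  // rounding can push the significand up to 1.0
    q /= 2;
    ++*exponent;
  }
  *multiplier = static_cast<int32_t>(q);
}
```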
- 04 Feb, 2020 1 commit
* [LINT] Fix -Wextra
* Fix virtual-dtor
Tianqi Chen committed
- 03 Feb, 2020 1 commit
Animesh Jain committed
- 24 Jan, 2020 1 commit
masahi committed
- 20 Jan, 2020 1 commit
* [REFACTOR][TYPE] Finish moving all types to IR.
  - Move definitions of Ref and TensorType to ir
  - Move type_functor.h to a public header
  - Rename RefType -> RelayRefType for clarity
* Add atol
Tianqi Chen committed
- 19 Jan, 2020 1 commit
TIR is the new namespace for the low-level IR used for tensor-level optimizations and loop transformations. This PR establishes the namespace and files:
- lowered_func.h, buffer.h, data_layout.h -> tir/buffer.h, tir/data_layout.h, tir/lowered_func.h
- ir.h -> tir/expr.h, tir/stmt.h
- ir_functor_ext.h -> tir/expr_functor.h, tir/stmt_functor.h
Tianqi Chen committed
- 17 Jan, 2020 2 commits
Move the conversion extensions to the specific class definitions so that we no longer need to include packed_func_ext.
Tianqi Chen committed
* [QNN] Conv2D type checking for kernel per-channel scales.
* Address comments.
* Add safety checks for downcasts.
Co-authored-by: shoubhik <shoubhikbhatti@gmail.com>
Animesh Jain committed
- 15 Jan, 2020 1 commit
* [REFACTOR][IR] Unify IntImm and UIntImm. This PR unifies UIntImm and IntImm to simplify the codebase. Unsigned integer constants will also be stored as IntImm. For a uint constant that does not fit into int64 (a rare case), we introduced an intrinsic, tvm_big_uint_imm, to construct such integers from their lower and upper 32 bits.
* [REFACTOR][IR] Remove UIntImm in favor of IntImm
* Rename big -> large
Tianqi Chen committed
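To make the big-uint construction concrete, a hedged sketch (my illustration, not the commit's code) of splitting a uint64 constant that exceeds the int64 range into the two 32-bit halves that tvm_big_uint_imm reassembles:

```
#include <cstdint>

// value == (hi << 32) | lo; each half fits comfortably in an IntImm.
void SplitBigUInt(uint64_t value, int64_t* hi, int64_t* lo) {
  *hi = static_cast<int64_t>(value >> 32);
  *lo = static_cast<int64_t>(value & 0xFFFFFFFFULL);
}
```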
- 14 Jan, 2020 1 commit
We will use a single ErrorReporter to report errors during program transformations.
Tianqi Chen committed
- 09 Jan, 2020 1 commit
* [REFACTOR][IR] tvm::Expr -> PrimExpr (primitive expr). As part of the unified IR, we will need to unify relay::Expr and the current tvm::Expr under the same base type. From a technical point of view, tvm::Expr is a "primitive" expression that only contains POD types and handles, and does not do life-cycle management. This PR renames Expr -> PrimExpr to clarify that. We will send a subsequent PR to introduce the base expr class.
* Remove legacy VarExpr and ExprHash/Equal
Tianqi Chen committed
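A minimal hedged sketch of the renamed type in use; header paths are assumed as of this refactor:

```
#include <tvm/expr.h>
#include <tvm/expr_operator.h>

// PrimExpr holds primitive expressions only: POD immediates, handles, and
// arithmetic over them, with no life-cycle management of its own.
tvm::PrimExpr MakeSum() {
  tvm::PrimExpr a = 1;  // implicitly an IntImm
  tvm::PrimExpr b = 2;
  return a + b;
}
```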
- 07 Jan, 2020 1 commit
Animesh Jain committed
- 06 Jan, 2020 1 commit
Previously we supported only a limited form of function type deduction, and in many places we had to supply the type twice during set_body_typed (once in the template parameter, again in the lambda signature). This PR improves the deduction by enabling automatic function signature deduction:

```
TVM_REGISTER_GLOBAL("sub")
.set_body_typed([](int x, int y) -> int { return x - y; });
```

Unfortunately, because of a template conflict, we can no longer support the original case where both the type signature and the lambda are supplied through set_body_typed. This PR refactors the existing registrations to the new style.
Tianqi Chen committed
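A hedged sketch of invoking such a registered global from C++ (Registry::Get is the standard runtime lookup; null checking omitted for brevity):

```
#include <tvm/runtime/registry.h>

int CallSub() {
  // Look up the global registered above; Get returns nullptr if absent.
  const tvm::runtime::PackedFunc* f = tvm::runtime::Registry::Get("sub");
  int result = (*f)(7, 3);  // arguments and return value convert automatically
  return result;            // 4
}
```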
- 04 Jan, 2020 1 commit
TVM_REGISTER_API is an alias of TVM_REGISTER_GLOBAL. In the spirit of simplifying redirections, this PR removes the original TVM_REGISTER_API macro and uses TVM_REGISTER_GLOBAL directly. This kind of refactor also helps IDE navigation tools such as FFI Navigator provide a better code reading experience. Also, move EnvFunc's definition to node.
Tianqi Chen committed
- 03 Jan, 2020 1 commit
Animesh Jain committed
- 31 Dec, 2019 1 commit
* [REFACTOR][OBJECT] Consolidate NodePtr/Ref/Hash/Equal and macros into Object. Historically, we have had classes like NodePtr/Ref/HashEqual. After the unified object protocol, these names are just aliases of their Object counterparts. Moreover, helper macros for defining these objects are scattered all over the place. This PR consolidates the terminology into the corresponding Object-system names so that we have a clean and consistent API moving forward.
* Update include/tvm/attrs.h
* Fix compilation
Co-authored-by: Wei Chen <ipondering.weic@gmail.com>
Tianqi Chen committed
- 22 Dec, 2019 1 commit
dtype.h -> runtime/data_type.h. Changes:
- Rename all old references of tvm::Type to DataType
- ExprNode.type -> ExprNode.dtype
- Expr.type() -> Expr.dtype()
- Move Expr-related functions to expr_operator
- DataType::min() -> min_value(DataType)
- DataType::max() -> max_value(DataType)
- Move the type constructors Int, UInt, Float, Handle, Bool into DataType:
  Int(bits) -> DataType::Int(bits), UInt(bits) -> DataType::UInt(bits)
Tianqi Chen committed
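A hedged before/after sketch of the renamed API (the header path follows the commit; min_value/max_value live in the expression-operator headers, so they are shown only in comments):

```
#include <tvm/runtime/data_type.h>

using tvm::runtime::DataType;

void Example() {
  DataType i32 = DataType::Int(32);  // was: Int(32)
  DataType u8  = DataType::UInt(8);  // was: UInt(8)
  // was: i32.min() / i32.max(); now free functions taking a DataType:
  //   min_value(i32), max_value(i32)
}
```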
- 24 Nov, 2019 1 commit
* [LINT] Improve the check tool to handle the ASF copyright message.
* [LINT] Remove unnecessary copyright messages per ASF requirements.
* Fix codegen hybrid
* [LINT] Broaden license checks to include html, xml
* [LINT] Fix the rest of the files
* Fix notice
* [LINT] Improve the check-file-type error message
Tianqi Chen committed
- 21 Nov, 2019 1 commit
Animesh Jain committed
- 16 Nov, 2019 1 commit
* Add qnn conv2d attributes for input_tensor_scale and kernel_tensor_scale. The lowering in the tflite frontend loses the input_tensor_scale and the kernel_tensor_scale by multiplying them together and folding the product into the Requantize operation. This means that any graph-partitioning passes, or other passes that need this information, no longer have it available in the qnn dialect. regards, Ramana
* Store input tensor scale and weight tensor scale for Dense as well. As with conv2d, the tflite frontend drops the input tensor scale and the weight tensor scale from the relay op; store them as separate fields there.
* Fix unintentional tab
* Rename input_tensor_scale to input_scale and kernel_tensor_scale to kernel_scale for conv2d
* input_tensor_scale -> input_scale, weight_tensor_scale -> weight_scale
* Rework dense testcase to use input_scale and kernel_scale
* Be consistent in the use of input_scale and kernel_scale values
* Fix up qnn conv2d tests for input_scale and kernel_scale
* Make pydoc identical between conv2d and dense for weight_tensor
* Fix up conv2d parameters to be in the same order between C++ and Python
* Fix ordering of parameters for dense
* Add input_scale and output_scale to try to satisfy the CI gods
* Delete input_scale and kernel_scale: nn.conv2d does not contain them, so we need to delete them when lowering to nn.conv2d
* Add input_scale and kernel_scale for qnn.conv2d
Ramana Radhakrishnan committed
- 14 Nov, 2019 2 commits
Animesh Jain committed
Animesh Jain committed
- 30 Oct, 2019 1 commit
* [QNN] Improve Dense lowering.
* Move the get_shape method to util; finalize the test cases and the code structure for the optimized dense computation.
* Fix cpplint.
* Address review comments.
* Rename the variables correctly.
shoubhik committed
- 29 Oct, 2019 1 commit
Wuwei Lin committed
- 21 Oct, 2019 2 commits
This patch adds a multiply operator for quantized tensors. The details of the quantized multiplication are outlined in the code. This builds on pull request 3927 and includes the changes Animesh mentioned in the comments on that request.

Change-Id: I555715b53d0266a91d5c03dc3dfe8fc31e7ce4e1
ekalda committed
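For reference (my illustration; the patch itself contains the authoritative lowering), a hedged sketch of the elementwise quantized-multiply arithmetic, analogous to the subtract sketch earlier in this log:

```
#include <algorithm>
#include <cmath>
#include <cstdint>

// real_out = sa*(qa - za) * sb*(qb - zb), requantized with (so, zo). The
// combined scale sa*sb/so is what a fixed-point multiply would implement.
int8_t QnnMultiply(int8_t qa, float sa, int32_t za,
                   int8_t qb, float sb, int32_t zb,
                   float so, int32_t zo) {
  int32_t prod = (qa - za) * (qb - zb);
  int32_t q = zo + static_cast<int32_t>(std::lround(prod * (sa * sb / so)));
  return static_cast<int8_t>(std::clamp<int32_t>(q, -128, 127));  // saturate
}
```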
* [REFACTOR][NODE][RUNTIME] Move Node to the new Object protocol. This PR removes the original node system and makes Node a subclass of Object. This is a major refactor towards a better unified runtime object system. List of changes in the refactor:
  - We now hide the data_ field; use Downcast explicitly to get a sub-class object.
  - Removed the node system FFI in Python.
  - Removed the node C API; instead, use PackedFunc for list and get attrs.
  - Changed relay::Op::set_attr_type_key(attr_key_name) to relay::Op::set_attr_type<AttrType>(). This change was necessary because of the new Object registration mechanism and required subsequent changes to the op registrations. It also revealed a few previous problems that are now fixed.
  - Patched up a few missing node type registrations. We now raise an error if an object type is not registered.
  - The original node.h and container.h are kept in the same location.
  - Calling convention: kObjectHandle now equals the old kNodeHandle; kNodeHandle is removed.
  - IRFunctor now dispatches on ObjectRef.
  - Updated to the new type-checking API: is_type and derived_from are replaced by IsInstance.
  - Removed the .hash member function; instead, use C++-convention hasher functors.
* Address review comments
Tianqi Chen committed
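A hedged sketch of the new idioms this refactor introduces (IsInstance replacing is_type/derived_from, explicit Downcast over the now-hidden data_ field); MyNode/MyRef are illustrative stand-ins, and a real build would also need TVM_REGISTER_OBJECT_TYPE(MyNode) in a .cc file:

```
#include <tvm/runtime/object.h>

using namespace tvm::runtime;

class MyNode : public Object {
 public:
  static constexpr const char* _type_key = "example.MyNode";
  TVM_DECLARE_FINAL_OBJECT_INFO(MyNode, Object);
};

class MyRef : public ObjectRef {
 public:
  TVM_DEFINE_OBJECT_REF_METHODS(MyRef, ObjectRef, MyNode);
};

bool Check(const ObjectRef& ref) {
  if (ref->IsInstance<MyNode>()) {       // was: ref->is_type<MyNode>()
    MyRef typed = Downcast<MyRef>(ref);  // data_ is hidden; downcast explicitly
    return typed.defined();
  }
  return false;
}
```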
- 16 Oct, 2019 2 commits
shoubhik committed
Animesh Jain committed
- 08 Oct, 2019 1 commit
Wuwei Lin committed
- 06 Oct, 2019 1 commit
Animesh Jain committed
- 02 Oct, 2019 1 commit
Animesh Jain committed
- 26 Sep, 2019 1 commit
Animesh Jain committed
- 23 Sep, 2019 1 commit
Animesh Jain committed