- 12 Apr, 2020 1 commit
-
-
This PR enables the copy-on-write optimization in passes: - Enable COW for IRModule in both TIR and relay passes. - Enable COW for PrimFunc in TIR passes. More thought is needed on whether/how to enable COW for relay::Function, since some function passes depend on the presence of the IRModule for context information, and std::move-ing the related function to nullptr might affect that behavior.
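A minimal sketch (assumed pattern, not taken from this PR) of how a TIR pass benefits from COW: when the pass owns the last reference to a PrimFunc, CopyOnWrite() mutates it in place instead of copying the whole function.
```cpp
#include <utility>
#include <tvm/tir/function.h>
#include <tvm/tir/stmt.h>

using namespace tvm;

// Hypothetical helper: replace the body of a PrimFunc.
// Taking `f` by value lets the caller std::move it in, so the pass may hold
// the only reference and CopyOnWrite() can skip the deep copy.
tir::PrimFunc ReplaceBody(tir::PrimFunc f, tir::Stmt new_body) {
  tir::PrimFuncNode* n = f.CopyOnWrite();  // copies only if `f` is shared
  n->body = std::move(new_body);
  return f;
}
```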
Tianqi Chen committed
-
- 11 Apr, 2020 1 commit
-
-
* [RUNTIME] Introduce RValue reference(move) support to TypedPackedFunc This PR introduces RValue reference support in the PackedFunc calling convention to address the above issue. Specifically, when an argument is an r-value reference, we assign a different type code (`kObjectRValueRefArg`) and pass `Object**` (the address of the Object pointer) through the values array instead. The callee can choose to move out this Object pointer and set the caller's original Object pointer to nullptr. We also add experimental move support on the python side (marked as _move to indicate its dev nature). This enhancement will enable copy-on-write optimizations throughout the TVM stack. * Address review comments * fix compilation
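A minimal sketch (assumed usage) of the caller side: passing an ObjectRef as an r-value lets the calling convention tag the argument as `kObjectRValueRefArg`, so the callee may steal the reference instead of copying it.
```cpp
#include <utility>
#include <tvm/runtime/packed_func.h>

using namespace tvm::runtime;

// Hypothetical caller: `consume` is any PackedFunc that accepts an object.
void CallWithMove(PackedFunc consume, ObjectRef obj) {
  // After this call, `obj` may have been moved out (its pointer set to null)
  // if the callee chose to take ownership.
  consume(std::move(obj));
}
```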
Tianqi Chen committed
-
- 10 Apr, 2020 2 commits
-
-
* Use runtime::String * move string to tvm namespace * add const char* constructor * implicit cast from std::string
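A minimal sketch (assumed constructors) of the interface described above: construction from a const char* and implicit conversion from and to std::string.
```cpp
#include <string>
#include <tvm/runtime/container.h>  // assumed header location for String

using namespace tvm::runtime;

void StringExample() {
  String a = "hello";               // const char* constructor
  String b = std::string("world");  // implicit cast from std::string
  std::string back = a;             // converts back to std::string
}
```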
Zhi committed -
This PR refactors the serialization mechanism to support general serialization of leaf objects into bytes. The new feature supersedes the original GetGlobalKey feature for singletons. Added serialization support for runtime::String.
Tianqi Chen committed
-
- 01 Apr, 2020 1 commit
-
-
* [TIR][TRANSFORM] Migrate LowerIntrin * LowerDeviceStorageAccessInfo * Migrate LowerWarpMemory
Tianqi Chen committed
-
- 30 Mar, 2020 1 commit
-
-
Zhi committed
-
- 29 Mar, 2020 1 commit
-
-
Zhi committed
-
- 28 Mar, 2020 2 commits
-
-
* [NODE][IR] Introduce StructuralHash for the Unified IR. This PR introduces a new way to handle structural hashing for the unified IR. - Each object can now register an optional SEqualHash function, which describes how to reduce its structural hashing to a sequence of hash values. - Optionally, the object can choose to allow labeling of vars (e.g. function parameters) by calling DefHash. - We implemented a non-recursive structural hasher that maintains its own stack to traverse the IR. This PR also improves the hash value properties over the previous relay hash utility. In particular, the graph-node mode hashes a DAG differently from a tree by attaching a unique occurrence index to each graph node. In all of the test cases so far, structural_hash is consistent with structural_equal: - if structural_equal(x, y) then structural_hash(x) == structural_hash(y) - if structural_hash(x) == structural_hash(y) then it is highly likely that structural_equal(x, y) - no hash collision was found in our test cases. Ideally we should work on automatically generating these functions in the future. * Fix cases for EnvFunc and Array dims * fix testcase * Update src/node/structural_hash.cc Co-Authored-By: 雾雨魔理沙 <lolisa@marisa.moe> Co-authored-by: 雾雨魔理沙 <lolisa@marisa.moe>
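A minimal sketch (assumed usage) of the consistency property stated above: structurally equal expressions should produce the same structural hash.
```cpp
#include <tvm/node/structural_equal.h>
#include <tvm/node/structural_hash.h>
#include <tvm/tir/op.h>
#include <tvm/tir/var.h>

using namespace tvm;

void HashConsistencyExample() {
  tir::Var x("x");
  PrimExpr a = x + 1;
  PrimExpr b = x + 1;  // a distinct node with the same structure
  bool eq = StructuralEqual()(a, b);                          // expected: true
  bool hash_eq = StructuralHash()(a) == StructuralHash()(b);  // expected: true
  (void)eq;
  (void)hash_eq;
}
```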
Tianqi Chen committed -
* [NODE][IR] Introduce StructuralEqual Infra for the Unified IR. This PR introduces a new way to handle structural equality for both TIR and relay nodes in an extensible way. - Each object can now register an optional SEqualReduce function, which describes how to reduce its structural equality to another instance into equality of the children. - Optionally, the object can choose to allow remapping of vars (e.g. function parameters) by calling DefEqual. - We implemented a non-recursive structural equality checker that traverses the objects and performs the structural equality checking. This PR also fixes a few potential problems in relay's previous AlphaEqual. - In particular, the new structural equality relation will be commutative. - It can be dangerous to use the same_as relation to quickly check equality, as demonstrated by the following case, where (%x, %y) are shared vars between two functions. - function0: fn (%x, %y) { %x + %y } - function1: fn (%y, %x) { %x + %y } The new structural equal is intended to supersede AlphaEqual and AttrsEqual. Follow-up PRs should be performed to redirect the existing usages and remove the corresponding implementations. * Update the rule to distinguish between graph nodes and non-graph nodes. * Refactor the test cases to use structural equal. * address comments * Mark more relay::Expr as graph node, fix a testcase issue (a bug that was not caught by the previous alpha equal) * Remove unrelated comment * Fix file comment * Address review comment * Relax condition to fit flaky case
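A minimal sketch (assumed usage) contrasting reference equality (same_as) with the new structural equality.
```cpp
#include <tvm/node/structural_equal.h>
#include <tvm/tir/op.h>
#include <tvm/tir/var.h>

using namespace tvm;

void StructuralEqualExample() {
  tir::Var x("x"), y("y");
  PrimExpr a = x + y;
  PrimExpr b = x + y;
  bool ref_eq = a.same_as(b);                // false: distinct node instances
  bool struct_eq = StructuralEqual()(a, b);  // true: same structure, same vars
  (void)ref_eq;
  (void)struct_eq;
}
```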
Tianqi Chen committed
-
- 24 Mar, 2020 2 commits
-
-
* [REFACTOR][TIR] Introduce PrimFuncPass. - Introduce PrimFuncPass - Convert one pass to the unified Pass API. * Address comments * Fix comments
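A minimal sketch (assumed API and signature) of defining a function-level TIR pass with the new PrimFuncPass mechanism; the pass name and its no-op body are illustrative only.
```cpp
#include <tvm/tir/function.h>
#include <tvm/tir/transform.h>

namespace tvm {
namespace tir {
namespace transform {

// Hypothetical pass that returns each PrimFunc unchanged.
Pass MyIdentityPass() {
  auto pass_func = [](PrimFunc f, IRModule m, PassContext ctx) {
    return f;  // a real pass would rewrite `f` here
  };
  return CreatePrimFuncPass(pass_func, /*opt_level=*/0, "MyIdentityPass", {});
}

}  // namespace transform
}  // namespace tir
}  // namespace tvm
```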
Tianqi Chen committed -
* save * gradient.rly * fix * NOT WORKING: gradient cell pass * test gradient pass * fixed basic call ops * more tests * fix bug * transform calls to one ones_like zero zero_like * maintenance stuff * fix linting * linting * linting * throw default * remove unrelated changes * import gradient.rly in pass * comment * linting * remove changes to test files * move gradient_cell.cc to transforms * revert change * update files with new commits * type * wrapper function to main outermost function type * fix linting * fix unsigned and signed int comparison * review * GetConstructor definition in module and change op comparison * update node instantiations * increase code readability Co-authored-by: Marisa Kirisame <lolisa@marisa.moe>
Andrew Liu committed
-
- 20 Mar, 2020 1 commit
-
-
As part of the unified IR refactor, this PR refactors the target codegen to use an IRModule containing tir::PrimFuncs. In order to break the refactor into several steps without breaking the codebase, we built a conversion pass to convert Array<LoweredFunc> into IRModule. The follow-up refactors will gradually move more passes onto IRModule until all the passes are covered. Then we can remove the additional redundant concepts such as LoweredFunc.
Tianqi Chen committed
-
- 15 Mar, 2020 1 commit
-
-
Tianqi Chen committed
-
- 14 Mar, 2020 1 commit
-
-
This PR introduces tir::PrimFunc which will be used as the TIR function container in the unified IR. Also streamlined the function attributes a bit further. - All common attributes are under tvm::attr - TIR specific attributes are under tvm::tir::attr and comes with a tir prefix - Use stl_style for attributes for now
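A minimal sketch (assumed constructor defaults) of building a trivial tir::PrimFunc, the new TIR function container described above.
```cpp
#include <tvm/tir/function.h>
#include <tvm/tir/stmt.h>
#include <tvm/tir/var.h>

using namespace tvm;

// Hypothetical example: a function of one scalar parameter with a no-op body.
tir::PrimFunc MakeTrivialPrimFunc() {
  tir::Var n("n", DataType::Int(32));
  tir::Stmt body = tir::Evaluate(0);
  return tir::PrimFunc(/*params=*/{n}, /*body=*/body);
}
```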
Tianqi Chen committed
-
- 12 Mar, 2020 2 commits
-
-
- Use fuzzy comparison for doubles. - Removed the hack for BatchNormAttrs and DictAttrs. Also removed a warning from the text printer.
Tianqi Chen committed -
* [REFACTOR] Streamline Function Attr interface. There have been quite a few recent changes that depend heavily on the function attr interface. This PR streamlines that interface by introducing APIs that cover most of the usages. - GetAttr, which gets a typed object for a given key - HasNonzeroAttr, a quick helper that calls GetAttr to check an attribute - WithAttr, which creates a new function object with the given attr - The API comes with a copy-on-write optimization to avoid multiple copies - We deliberately pick the prefix With (instead of Set) to indicate that this function does not mutate the original input. On the python side: - We allow read access via func.attrs (which is a DictAttrs) - func.with_attrs creates a new instance with updated attrs. We also get rid of the small wrapper functions and make sure the API is centered around the GetAttr and HasNonzeroAttr interface. This PR also changes the function construction to follow the new convention. * Address review comments * Address review comments * Fix doxygen path
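A minimal sketch (assumed API) of this interface on the C++ side; the attribute key and value here are illustrative only.
```cpp
#include <utility>
#include <tvm/ir/function.h>
#include <tvm/tir/function.h>

using namespace tvm;

void AttrExample(tir::PrimFunc f) {
  // WithAttr returns a new function; `f` itself is not mutated, and the COW
  // optimization avoids a deep copy when `f` held the only reference.
  tir::PrimFunc g = WithAttr(std::move(f), "my_key", Integer(1));
  Optional<Integer> v = g->GetAttr<Integer>("my_key");  // typed read access
  bool flag = g->HasNonzeroAttr("my_key");              // quick boolean check
  (void)v;
  (void)flag;
}
```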
Tianqi Chen committed
-
- 19 Feb, 2020 1 commit
-
-
* [REFACTOR] Polish ffi convention. - Remove src/api and keep registration local to the C++ function. - Remove api_internal as it is no longer needed. * Update the codebase walk through
Tianqi Chen committed
-
- 14 Feb, 2020 1 commit
-
-
- Move related files into the corresponding locations as in C++ - Keep the top-level TVM API backward compatible to minimize changes in topi
tqchen committed
-
- 12 Feb, 2020 1 commit
-
-
* [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding relay files. This PR establishes tvm.ir and migrates the corresponding relay files into the new folder. API Change: - relay.Module -> tvm.IRModule * Update with ADT * Migrate transform * address comments * Migrate module * Migrate json_compact * Migrate attrs * Move LoweredFunc to stmt temporarily * temp migrate container * Finish migrate container
Tianqi Chen committed
-
- 04 Feb, 2020 1 commit
-
-
* [LINT] Fix -Wextra * Fix virtual-dtor
Tianqi Chen committed
-
- 28 Jan, 2020 1 commit
-
-
* Implement pass tracing API * Set is_before correctly * Add docs for trace function * Fix lint * Remove PDB * Ensure trace_func is set before calling * Fix conditional
Jared Roesch committed
-
- 21 Jan, 2020 2 commits
-
-
Bring up namespace te -- Tensor expression language DSL.
Tianqi Chen committed -
* [REFACTOR] Establish printer in the source folder. As we move towards the unified IR, we will eventually want to build a unified printer for both relay and TIR. This PR isolates the printer component into a separate folder in src as a first step. - Refactored the Doc DSL using Object, cleaned up APIs. - Isolated the metadata into a header. - Moved the printer into relay_text_printer and added comments about further TODOs. * Rename NodePrinter -> ReprPrinter to distinguish it from other printers
Tianqi Chen committed
-
- 20 Jan, 2020 1 commit
-
-
* [REFACTOR][TYPE] Finish move all types to IR. - Move definition of Ref and TensorType to ir - Move type_functor.h to public header. - Rename RefType -> RelayRefType for clarity. * Add atol
Tianqi Chen committed
-
- 19 Jan, 2020 1 commit
-
-
TIR is the new namespace for low-level IR for tensor-level optimizations and loop transformations. This PR establishes the namespace and files. - lowered_func.h,buffer.h,data_layout.h -> tir/buffer.h,tir/data_layout.h,tir/lowered_func.h - ir.h -> tir/expr.h, tir/stmt.h - ir_functor_ext.h -> tir/expr_functor.h, tir/stmt_functor.h
Tianqi Chen committed
-
- 17 Jan, 2020 1 commit
-
-
Move the conversion extensions to the specific class definitions so that we no longer need to include packed_func_ext.
Tianqi Chen committed
-
- 16 Jan, 2020 2 commits
-
-
* [arith] add ShapeVar representing non-neg valued variable in a tensor shape * bounder remover; deal with div in int_set differently * fix bounder_remover * migrate unittest to use shape_var * use tvm.shape_var in integration & relay tests * add test case; fix Var register * fix lint * fix lint again * add default ShapeVar visitor in Relay * fix override * fix ShapeVar visit bug * revert IntervalSet for shape_var * remove bound_remover * remove is_var; use constructor for shapevar/var instead * ShapeVar -> SizeVar; add constructor comments * shape_var -> size_var in doc * tindex -> size
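A minimal sketch (assumed usage) of the renamed SizeVar: it declares a variable that arithmetic analysis may treat as non-negative, such as a tensor shape dimension.
```cpp
#include <tvm/tir/op.h>
#include <tvm/tir/var.h>

using namespace tvm;

void SizeVarExample() {
  tir::SizeVar n("n");             // assumed non-negative (e.g. a shape extent)
  tir::Var i("i");                 // an ordinary Var carries no such bound
  PrimExpr half = indexdiv(n, 2);  // simplification may rely on n >= 0
  (void)i;
  (void)half;
}
```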
Yizhi Liu committed -
This PR introduces clearer naming prefixes for the C API type codes to avoid conflicts with other packages. We also removed TVMArray and TVMType in favor of using DLTensor and DLDataType directly.
Tianqi Chen committed
-
- 15 Jan, 2020 2 commits
-
-
This PR moves attrs.h into the ir folder as it can serve as a common infra for building IR data structures. We also moved the common container (FloatImm) into ir/expr.h.
Tianqi Chen committed -
* [REFACTOR][IR] Unify IntImm and UIntImm This PR unifies UIntImm and IntImm to simplify the codebase. Unsigned integer constants will also be stored as IntImm. For uint constants that do not fit into int64 (a rare case), we introduced an intrinsic tvm_big_uint_imm to construct such integers from their lower and higher 32 bits. * [REFACTOR][IR] Remove UIntImm to use IntImm * rename big->large
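A minimal sketch (assumed usage): after the unification, signed and unsigned constants use the same IntImm node, distinguished only by dtype.
```cpp
#include <tvm/ir/expr.h>

using namespace tvm;

void IntImmExample() {
  IntImm signed_c(DataType::Int(32), -7);    // signed constant
  IntImm unsigned_c(DataType::UInt(32), 7);  // no separate UIntImm node anymore
  (void)signed_c;
  (void)unsigned_c;
}
```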
Tianqi Chen committed
-
- 14 Jan, 2020 4 commits
-
-
- Use consistent constructor style to construct objects. - Move env_func to ir as it is mainly used to construct IRs. - Make docs consistent.
Tianqi Chen committed -
Move relay's pass infra to ir. Keep FunctionPass in relay as it is local to the dialect.
Tianqi Chen committed -
We will use a single ErrorReporter to report errors during program transformations.
Tianqi Chen committed -
This PR establishes relay::Module as the unified IRModule structure. IRModule will be used as the basic unit for transformations throughout the stack. - Rename relay::Module -> IRModule - Move relay/module.h -> ir/module.h - ModuleNode::FromExpr -> IRModule::FromExpr - FromText -> IRModule::FromText
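A minimal sketch (assumed API) of the renamed entry points: constructing an IRModule from a single function and looking it up by name.
```cpp
#include <tvm/ir/module.h>

using namespace tvm;

IRModule ModuleExample(BaseFunc f) {
  IRModule mod = IRModule::FromExpr(f);    // was ModuleNode::FromExpr
  BaseFunc main_fn = mod->Lookup("main");  // assumes FromExpr registers "main"
  (void)main_fn;
  return mod;
}
```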
Tianqi Chen committed
-
- 11 Jan, 2020 2 commits
-
-
This PR migrates relay's Op into the ir folder. Op and its registry provide a useful mechanism to store any attribute metadata of an operator, including function signatures, lowering rules, side effects, etc. These features are not only useful for Relay, but also needed in the low-level IR. At the current moment, intrinsic functions in the low-level IR are simply represented by a string. This means we cannot type-check the low-level IR when the type does not meet the constraint, nor can we obtain further information such as the side effects and read/write relations of these intrinsics with respect to their arguments. Op will be used as the way to handle primitive ops (in DL terminology) or builtin intrinsics (in compiler terminology). We will perform follow-up refactors to make the low-level CallNode take Op as the function argument.
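A minimal sketch (assumed API) of the Op registry described above: an op is registered once with its metadata and later retrieved by name; the op name here is illustrative only.
```cpp
#include <tvm/ir/op.h>

using namespace tvm;

// Hypothetical op registration with a couple of metadata fields.
TVM_REGISTER_OP("mydialect.identity")
    .set_num_inputs(1)
    .add_argument("data", "Tensor", "The input value.");

void OpExample() {
  const Op& op = Op::Get("mydialect.identity");  // lookup by name
  (void)op;
}
```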
Tianqi Chen committed -
This PR moves a few base types from relay and the low-level Expr into the ir sub-folder. These classes will serve as a common type system across the stack. Rationale: - PrimExpr for low-level expressions - RelayExpr for advanced features, including Function definition. - Introduce BaseFunc to host all functions, including the future PrimFunc (low-level expr functions, subject to discussion). This is the minimum change we can make to unify the classes into a common hierarchy. The main data structures that are variant-specific will still be kept in the sub-namespaces. We only include the classes that are needed to allow a common Module class: - BaseFunc - GlobalVar - Type definition part of ADT We will only need BaseFunc and its checked_type to decide the calling convention across the function variants.
Tianqi Chen committed
-
- 06 Jan, 2020 1 commit
-
-
Previously we supported only a limited case of function type deduction, and in many places we had to supply the type twice during set_body_typed (once in the template parameter, again in the lambda signature). This PR improves the deduction by enabling automatic function signature deduction.
```
TVM_REGISTER_GLOBAL("sub")
.set_body_typed([](int x, int y) -> int { return x - y; });
```
Unfortunately, because of a template conflict, we cannot support the original case where both the type signature and the lambda are supplied through set_body_typed. This PR refactors the existing registrations to the new style.
Tianqi Chen committed
-
- 05 Jan, 2020 1 commit
-
-
Rationale: printer is a common infra that is shared across all nodes.
Tianqi Chen committed
-
- 04 Jan, 2020 1 commit
-
-
This PR moves a few base types from relay to the ir sub-folder. These types will serve as a common type system across the stack. Notably, we want to be able to use the same FuncType for all function signatures. I tried to make a minimal move to bring in the necessary dependencies for FuncType. We can discuss what additional things we want to move as a follow-up. Notably, because TensorType will have a dependency on the low-level Expr, we will need to break type.h into two files and introduce a tensor_type.h (or leave them in relay for now).
Tianqi Chen committed
-
- 29 Oct, 2016 1 commit
-
-
tqchen committed
-