- 17 Jan, 2020 1 commit
-
-
- Remove operator bool from base object ref macro - Rationale: operator bool can be dangerous for sub-classes that also overload other operators (e.g. ==). - If bool is still needed, use explicit operator bool. - Use absolute include when necessary - Move type related util to data_type - Isolate stackvm code from compiler
Tianqi Chen committed
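The rationale above is easiest to see in code. Below is a minimal, self-contained C++ sketch using hypothetical Ref/TensorRef/ScheduleRef classes (not TVM's actual object ref macro) of how an implicit operator bool lets unrelated reference types be "compared" through bool conversion.
```cpp
#include <cassert>

// Hypothetical stand-ins for an object reference hierarchy; not TVM's real classes.
struct Ref {
  const void* ptr = nullptr;
  // Implicit conversion: any Ref silently becomes a bool in comparison contexts.
  operator bool() const { return ptr != nullptr; }
};

struct TensorRef : Ref {
  // Intended to compare only TensorRef with TensorRef.
  bool operator==(const TensorRef& other) const { return ptr == other.ptr; }
};

struct ScheduleRef : Ref {};

int main() {
  TensorRef t;
  ScheduleRef s;
  // Compiles, but does NOT call TensorRef::operator==: both operands implicitly
  // convert to bool and the built-in comparison runs instead.
  bool surprising = (t == s);  // silently compares "is null" with "is null"
  assert(surprising);          // both are null, so true -- probably not what was meant
  return 0;
}
```
With `explicit operator bool() const`, the `t == s` line above becomes a compile error while contextual uses such as `if (t)` keep working, which matches the suggestion in the commit message.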
-
- 16 Jan, 2020 13 commits
-
-
* [Docs] Convert Layout pass. * Address comments. Section 3 massaging. * Address comments.
Animesh Jain committed -
* [REFACTOR] introduce top - Tensor Operation DSL. Historically we put Tensor, Schedule and compute under the root tvm namespace. This is no longer a good idea as the project's scope grows larger than the tensor operation DSL. This PR introduces top -- a namespace for tensor operation DSL concepts such as schedule, tensor, and compute. We moved the related files to the new top subfolder. * Move relevant files into include/tvm/top and src/top
Tianqi Chen committed -
* BYOC tutorial: codegen C * Address comments * Address comments * Add build option * Address comments * Use TVM_DLL_EXPORT_TYPED_FUNC
Cody Yu committed -
Thierry Moreau committed
-
Split arithmetic.h into several components and move them into the arith subfolder. The arith namespace will be used for arithmetic expression pattern detection and simplification.
Tianqi Chen committed -
Wei Chen committed
-
Zhao Wu committed
-
* [Relay][Frontend][TFLite] Add parser support for squared difference * fix some error * fix exp_type * add comment
Wang Yucheng committed -
Tianqi Chen committed
-
* [arith] add ShapeVar representing non-negative valued variables in a tensor shape * bound remover; deal with div in int_set differently * fix bound_remover * migrate unittest to use shape_var * use tvm.shape_var in integration & relay tests * add test case; fix Var register * fix lint * fix lint again * add default ShapeVar visitor in Relay * fix override * fix ShapeVar visit bug * revert IntervalSet for shape_var * remove bound_remover * remove is_var; use constructor for shapevar/var instead * ShapeVar -> SizeVar; add constructor comments * shape_var -> size_var in doc * tindex -> size
Yizhi Liu committed -
As part of the unified IR infra, introduce the target folder to store all compilation target related information.
Tianqi Chen committed -
Tianqi Chen committed
-
This PR introduces clearer naming prefixes for the C API type codes to avoid conflicts with other packages. We also removed TVMArray and TVMType in favor of using DLTensor and DLDataType directly.
Tianqi Chen committed
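For readers unfamiliar with the DLPack types mentioned above, here is a small C++ sketch of building a DLDataType by hand instead of the removed TVMType alias. The field names (code/bits/lanes) follow dlpack.h of that era; exact layout may differ between DLPack versions, so treat this as illustrative.
```cpp
#include <dlpack/dlpack.h>
#include <cstdio>

int main() {
  // A float32 scalar type expressed directly with DLPack's DLDataType.
  DLDataType f32;
  f32.code = kDLFloat;  // type class: float / int / uint ...
  f32.bits = 32;        // bit width
  f32.lanes = 1;        // 1 for scalars, >1 for vector types
  std::printf("code=%d bits=%d lanes=%d\n",
              (int)f32.code, (int)f32.bits, (int)f32.lanes);
  return 0;
}
```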
-
- 15 Jan, 2020 8 commits
-
-
* [REFACTOR] Move support related code to include/tvm/support - tvm/logging.h -> tvm/support/logging.h - remove tvm/base.h, move With into tvm/support/with.h * src/common -> src/support
Tianqi Chen committed -
This PR moves attrs.h into the ir folder as it can serve as a common infra for building ir data structures. We also moved the common container (FloatImm) into ir/expr.h.
Tianqi Chen committed -
* [Relay][Frontend][TFLite] Add constant input support for elemwise ops * modify in tflite.py
Wang Yucheng committed -
LiangHao committed
-
Zhigao committed
-
This reverts commit dcf7fbf1.
Haichen Shen committed -
Zhi committed
-
* [REFACTOR][IR] Unify IntImm and UIntImm This PR unifies UIntImm and IntImm to simplify the codebase. Unsigned integer constants will also be stored as IntImm. For uint constants that do not fit into int64 (a rare case), we introduce an intrinsic tvm_big_uint_imm to construct such integers from their lower and higher 32 bits. * [REFACTOR][IR] Remove UIntImm to use IntImm * rename big->large
Tianqi Chen committed
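A small sketch of the arithmetic behind the tvm_big_uint_imm idea: a uint64 constant that does not fit into int64 can be carried as two 32-bit halves (each of which fits into a signed 64-bit IntImm) and reassembled. This is plain C++ illustrating the split/rebuild arithmetic, not the intrinsic's actual lowering.
```cpp
#include <cstdint>
#include <iostream>

int main() {
  uint64_t big = 0xFFFFFFFFFFFFFFF0ULL;                       // does not fit into int64_t
  int64_t low  = static_cast<int64_t>(big & 0xFFFFFFFFULL);   // lower 32 bits
  int64_t high = static_cast<int64_t>(big >> 32);             // higher 32 bits
  // Reconstructing the original constant from the two halves:
  uint64_t rebuilt = (static_cast<uint64_t>(high) << 32) |
                     static_cast<uint64_t>(low);
  std::cout << std::boolalpha << (rebuilt == big) << "\n";    // prints: true
  return 0;
}
```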
-
- 14 Jan, 2020 7 commits
-
-
- Use consistent constructor style to construct objects. - Move env_func to ir as it is mainly used to construct IRs. - Make docs consistent.
Tianqi Chen committed -
* [relay] Relay annotation and partitioning for codegen * Add fusion unit test * fix comments * Update include/tvm/relay/attrs/annotation.h Co-Authored-By: 雾雨魔理沙 <lolisa@marisa.moe> * rebase * remove annotation helper * rebase again Co-authored-by: Cody Yu <comaniac0422@gmail.com> Co-authored-by: 雾雨魔理沙 <lolisa@marisa.moe>
Zhi committed -
Move Relay's pass infra to ir. Keep FunctionPass in Relay as it is local to the dialect.
Tianqi Chen committed -
We will use a single ErrorReporter to report errors during program transformations.
Tianqi Chen committed -
Zhi Chen committed
-
Liangfu Chen committed
-
This PR brings relay::Module as the unified IRModule structure. IRModule will be used as the basic unit for transformations throughout the stack. - Rename relay::Module -> IRModule - Move relay/module.h -> ir/module.h - ModuleNode::FromExpr -> IRModule::FromExpr - FromText -> IRModule::FromText
Tianqi Chen committed
-
- 12 Jan, 2020 1 commit
-
-
* GitHub Action lint Python code for syntax errors https://flake8.pycqa.org/en/latest/user/error-codes.html On the flake8 test selection, this PR does _not_ focus on "_style violations_" (the majority of flake8 error codes that [__psf/black__](https://github.com/psf/black) can autocorrect). Instead these tests focus on runtime safety and correctness: * E9 tests are about Python syntax errors, usually raised because flake8 cannot build an Abstract Syntax Tree (AST). Often these issues are a sign of unused code or code that has not been ported to Python 3. These would be compile-time errors in a compiled language, but in a dynamic language like Python they result in the script halting/crashing on the user. * F63 tests are usually about the confusion between identity and equality in Python. Using ==/!= to compare str, bytes, and int literals is the classic case. These are areas where __a == b__ is True but __a is b__ is False (or vice versa). Python >= 3.8 will raise SyntaxWarnings on these instances. * F7 tests catch logic errors and syntax errors in type hints. * F82 tests are almost always about _undefined names_, which are usually a sign of a typo, missing imports, or code that has not been ported to Python 3. These also would be compile-time errors in a compiled language, but in Python a __NameError__ is raised, which will halt/crash the script on the user. * Force a retest * Rename start_rpc_server_to_tracker.py to start_rpc_server_to_tracker.sh This is a bash file, not a Python file.
Christian Clauss committed
-
- 11 Jan, 2020 10 commits
-
-
* added conv1d operators to topi. * Started to add python testing. * Added python conv1d implementation for testing. * Wrote test but need to add cuda schedule :( * Cuda schedules working for both conv1d layouts. * All topi tests passing. * Formatting topi. * Removed pad_method option as it's probably overkill. * Added relay op definition of conv1d. * End2end conv1d working with onnx. * Lint fixes. * Formatting fixes. * Rebase fix. * Switched to array based attributes for consistency across convs. * Improved onnx parsing and testing for convolutions. * lint fix * Tiny tweak. * Bug fix * Rebase fix. * Add group ignore to onnx conv1d frontend. * Unified MakeConv and fixed documentation. * improved autopadding * Addressed feedback and simplified onnx frontend. * Format fix. * Basic X86 NCW schedule working. * Added nwc schedule. * fixed name * Added more tests and basic x86 schedules. * Format fix. * Added non power of two shape tests.
Josh Fromm committed -
This PR migrates relay's Op into the ir folder. Op and its registry provide a useful mechanism to store any attribute meta-data of an operator, including function signatures, lowering rules, side effects, etc. These features are not only useful for Relay, but also needed in the low-level IR. At the current moment, intrinsic functions in the low-level IR are simply represented by a string. This means we cannot type-check the low-level IR when the type does not meet the constraint, nor can we obtain further information such as side effects and read/write relations of these intrinsics with respect to their arguments. Op will be used as the way to handle primitive ops (in DL terminology) or builtin intrinsics (in compiler terminology). We will perform follow-up refactors to make low-level CallNode take Op as the function argument.
Tianqi Chen committed -
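To make the motivation concrete, here is a generic C++ sketch of an op registry. It is not TVM's actual Op/OpRegistry API, only an illustration of how registered metadata (arity, side effects) enables checks that a bare string intrinsic name cannot support.
```cpp
#include <map>
#include <string>

// Per-operator metadata; a plain string name carries none of this.
struct OpInfo {
  std::string name;
  int num_inputs = 0;
  bool has_side_effect = false;
};

class OpRegistry {
 public:
  OpInfo& Register(const std::string& name) { return table_[name]; }
  const OpInfo* Find(const std::string& name) const {
    auto it = table_.find(name);
    return it == table_.end() ? nullptr : &it->second;
  }
 private:
  std::map<std::string, OpInfo> table_;
};

int main() {
  OpRegistry reg;
  OpInfo& exp = reg.Register("exp");
  exp.name = "exp";
  exp.num_inputs = 1;
  exp.has_side_effect = false;
  // A checker can now reject a call to "exp" with the wrong number of
  // arguments -- impossible when the intrinsic is only a string.
  const OpInfo* info = reg.Find("exp");
  return (info && info->num_inputs == 1) ? 0 : 1;
}
```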
* [Tutorial] Deploy Quantized Model on CUDA * update * update * address comments
Wuwei Lin committed -
This is a shell file, not a Python file.
Christian Clauss committed -
Under the unified IR, we will allow a single IRModule to store different function variants, such as relay::Function, ExternFunc, and low-level functions. This PR changes relay::Function -> BaseFunc in the module file to support multiple function variants.
Tianqi Chen committed -
Identity is not the same thing as equality in Python, so use ==/!= to compare str, bytes, and int literals. In Python >= 3.8, these instances will raise __SyntaxWarnings__ so it is best to fix them now. https://docs.python.org/3.8/whatsnew/3.8.html#porting-to-python-3-8 % __python__
```
>>> dtype = "float"
>>> dtype += "16"
>>> dtype == "float16"
True
>>> dtype is "float16"
False
>>> 0 == 0.0
True
>>> 0 is 0.0
False
```
Christian Clauss committed -
#4682 tried to fix a Python syntax error but did not go far enough because there are _three sets_ of embedded quotes. This PR solves the syntax error by using Python's triple-quoted strings on the outside, double quotes in the middle, and single quotes on the inside.
Christian Clauss committed -
* [TOPI][RELAY][OP] add op crop_and_resize * fix pylint * incorporate comments * fix ci
Yong Wu committed -
This PR moves a few base types from relay and the low-level Expr into the ir sub-folder. These classes will serve as a common type system across the stack. Rationale: - PrimExpr for low-level expressions - RelayExpr for advanced features, including Function definition. - Introduce BaseFunc to host all functions, including the future PrimFunc (low-level expr functions, subject to discussion). This is a minimum change we can make to unify the classes into a common hierarchy. The main data structures that are variant-specific will still be kept in the sub-namespaces. We only include the classes that are needed to allow a common Module class: - BaseFunc - GlobalVar - Type definition part of ADT We only need BaseFunc and its checked_type to decide the calling convention across the function variants.
Tianqi Chen committed -
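A simplified C++ sketch of the hierarchy described above; the class names and empty bodies are illustrative rather than the actual TVM headers. The point is that PrimExpr and RelayExpr share a common expression base, and BaseFunc gives every function variant a shared type so a single Module can hold them all.
```cpp
// Illustrative only: a common base for all expressions, with the two
// expression families and the function/variable types layered on top.
class BaseExprNode {};                           // common expression base
class PrimExprNode : public BaseExprNode {};     // low-level expressions
class RelayExprNode : public BaseExprNode {};    // high-level expressions with checked_type
class BaseFuncNode : public RelayExprNode {};    // common base of all function variants
class GlobalVarNode : public RelayExprNode {};   // names a function inside a module

int main() { return 0; }
```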
* replace TensorObj and TensorValue with NDArray * NodeBase to Object in Python * rebase
Zhi committed
-