- 11 Jan, 2020 8 commits
-
-
Under the unified IR, we will allow a single IRModule to store different function variants, such as relay::Function, ExternFunc, and low-level functions. This PR changes relay::Function -> BaseFunc in the module file to support multiple function variants.
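A toy Python sketch of the idea (stand-in classes for illustration only, not the actual tvm::IRModule / relay::Function / ExternFunc definitions): the module maps global names to a common BaseFunc base, and each entry can be a different function variant.
```python
# Toy stand-ins; not the real TVM classes.
class BaseFunc:
    """Common base for every function variant stored in a module."""

class RelayFunction(BaseFunc):
    def __init__(self, params, body):
        self.params, self.body = params, body

class ExternFunc(BaseFunc):
    def __init__(self, symbol):
        self.symbol = symbol

class Module:
    def __init__(self):
        self.functions = {}                    # global name -> BaseFunc

    def add(self, name, func):
        assert isinstance(func, BaseFunc)      # any variant is accepted
        self.functions[name] = func

mod = Module()
mod.add("main", RelayFunction(params=["x"], body="x + 1"))
mod.add("fused_gemm", ExternFunc(symbol="cblas_sgemm"))
```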
Tianqi Chen committed -
Identity is not the same thing as equality in Python, so use ==/!= to compare str, bytes, and int literals. In Python >= 3.8, such comparisons raise a SyntaxWarning, so it is best to fix them now. https://docs.python.org/3.8/whatsnew/3.8.html#porting-to-python-3-8
```python
>>> dtype = "float"
>>> dtype += "16"
>>> dtype == "float16"
True
>>> dtype is "float16"
False
>>> 0 == 0.0
True
>>> 0 is 0.0
False
```
Christian Clauss committed -
#4682 tried to fix a Python syntax error but did not go far enough because there are three sets of embedded quotes. This PR solves the syntax error by using Python's triple-quoted strings on the outside, double quotes in the middle, and single quotes on the inside.
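For reference, a minimal example of the quoting pattern (illustrative string, not the actual code touched by the PR):
```python
# Triple quotes on the outside let both inner quote styles appear unescaped.
snippet = """print("nested 'single quotes' inside double quotes work fine")"""
print(snippet)
```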
Christian Clauss committed -
* [TOPI][RELAY][OP] add op crop_and_resize
* fix pylint
* incorporate comments
* fix ci
Yong Wu committed -
This PR moves a few base types from relay and the low-level Expr into the ir sub-folder. These classes will serve as a common type system across the stack. Rationale:
- PrimExpr for low-level expressions
- RelayExpr for advanced features, including Function definitions
- Introduce BaseFunc to host all functions, including a future PrimFunc (low-level expr functions, subject to discussion)

This is a minimum change we can do to unify the classes into a common hierarchy. The main data structures that are variant-specific will still be kept in the sub-namespaces. We only include the classes that are needed to allow a common Module class:
- BaseFunc
- GlobalVar
- The type definition part of ADT

We will only need BaseFunc and its checked_type to decide the calling convention across the function variants.
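A rough sketch of the intended hierarchy, using toy Python stand-ins for the C++ classes (the class names follow the commit; the exact structure here is an illustrative assumption):
```python
class BaseExpr:
    """Common ancestor for all expressions in the unified IR."""

class PrimExpr(BaseExpr):
    """Low-level expression: POD types and handles, no life-cycle management."""

class RelayExpr(BaseExpr):
    """High-level expression with advanced features such as function definitions."""

class BaseFunc(RelayExpr):
    """Hosts all function variants; callers only need its checked_type."""
    def __init__(self, checked_type=None):
        self.checked_type = checked_type
```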
Tianqi Chen committed -
* replace TensorObj and TensorValue with NDArray * NodeBase to Object in Python * rebase
Zhi committed -
* Add output_padding to generic
* Add output_padding to the reference impl
* Add output_padding to arm_cpu
* Add output_padding to the test
* Add output_padding for cuda
* Add output_padding for x86
* Make use of the new output_padding argument in Relay
* Adjust conv2d_transpose Relay test
* Fix lint errors
* Fix the VTA declaration of conv2d_transpose
* support for output padding in conv2d transpose
* some output padding will break IR pass
* Fix new conv2d_transpose test
* Update tophub
* Fix conv1d output_padding too.
* Fix the conv1d_transpose reference function.
* Fix the cuda impl
* fix the topi test for conv1d
* Update the versions in tophub.py
Co-authored-by: Thierry Moreau <tmoreau@octoml.ai>
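To see why output_padding matters, here is the standard transposed-convolution shape rule as a small sketch (generic formula for illustration, not code from the PR):
```python
def conv2d_transpose_out_dim(in_dim, kernel, stride, pad, output_padding=0):
    # Standard transposed-convolution output size along one spatial axis.
    return (in_dim - 1) * stride - 2 * pad + kernel + output_padding

# With kernel=3, stride=2, pad=1, the forward conv maps both 9 and 10 down to 5,
# so output_padding selects which of the two sizes the transpose recovers.
print(conv2d_transpose_out_dim(5, kernel=3, stride=2, pad=1))                    # 9
print(conv2d_transpose_out_dim(5, kernel=3, stride=2, pad=1, output_padding=1))  # 10
```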
abergeron committed -
* Make 'fskip' not always return false: fskip returned false at the end of the PackedFunc, which discarded the 'return true' in the 'cast' case
* Update build_module.cc
yuliujq committed
-
- 10 Jan, 2020 12 commits
-
-
[flake8](http://flake8.pycqa.org) testing of https://github.com/apache/incubator-tvm on Python 3.8.0
```
$ flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
./apps/vta_rpc/start_rpc_server_to_tracker.py:18:18: E999 SyntaxError: invalid syntax
PROJROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )/../../" && pwd )"
                 ^
```
Christian Clauss committed -
* [Relay][Frontend][TFLite] Add parser support for SLICE
* TFLite 1.13: the converter gives nonsense output when size[i] == -1
* TF parser: SLICE needs fixing for size[i] == -1, which gives wrong output because of the indices
* Set end[i] = input_tensor_shape[i] as suggested in PR review
* Add another test to cover the size = -1 case
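A minimal sketch of the index fix (an illustrative helper, not the actual TFLite frontend code):
```python
def slice_end_indices(begin, size, input_shape):
    # size[i] == -1 in TFLite SLICE means "take everything from begin[i]
    # to the end of axis i", so clamp end to the full extent of that axis.
    return [dim if s == -1 else b + s
            for b, s, dim in zip(begin, size, input_shape)]

print(slice_end_indices(begin=[0, 1, 0], size=[2, -1, 3], input_shape=[2, 5, 3]))
# -> [2, 5, 3]
```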
Ina Dobreva committed -
Christian Clauss committed
-
abergeron committed
-
Tianqi Chen committed
-
* Update topi.cc to fix topi.nn.global_pool with layout="NHWC"
* Add a topi.nn.global_pool layout=NHWC test
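A hedged sketch of the fixed call path, assuming the TOPI Python API of that time (tvm.placeholder plus topi.nn.global_pool with a layout argument):
```python
import tvm
import topi

# Global average pooling over H and W with NHWC-laid-out data.
data = tvm.placeholder((1, 7, 7, 64), name="data")
out = topi.nn.global_pool(data, pool_type="avg", layout="NHWC")
# Expected result shape: (1, 1, 1, 64), i.e. spatial dims reduced to 1.
```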
戚海涛 committed -
Tianqi Chen committed
-
Zhao Wu committed
-
Liangfu Chen committed
-
Josh Fromm committed
-
Xingyu Zhou committed
-
- 09 Jan, 2020 7 commits
-
-
* [REFACTOR][IR] tvm::Expr -> PrimExpr (Primitive Expr). As part of the unified IR, we will need to unify relay::Expr and the current tvm::Expr under the same base type. From the technical point of view, tvm::Expr is a "primitive" expression that only contains POD types and handles and does not do life-cycle management. This PR renames Expr -> PrimExpr to clarify that. We will send a subsequent PR to introduce the base expr class.
* Remove legacy VarExpr and ExprHash/Equal
Tianqi Chen committed -
Trevor Morris committed
-
* [Relay][Frontend][TFLite] Add parser support for unary elementwise ops
* Add a generic method to convert the unary functions: abs, exp, ceil, floor, log, sin, cos, sqrt, rsqrt, neg
* Add relevant tests
* Delete excessive underscores as requested in PR review
* Change parameter name as suggested in PR review
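The "generic method" boils down to one factory shared by every unary elementwise op; a toy sketch in plain Python (not the actual frontend helpers):
```python
import math

def make_unary_converter(fn):
    # One shared converter body; only the elementwise function differs per op.
    def convert(values):
        return [fn(v) for v in values]
    return convert

convert_map = {
    "ABS":  make_unary_converter(abs),
    "EXP":  make_unary_converter(math.exp),
    "SQRT": make_unary_converter(math.sqrt),
    "NEG":  make_unary_converter(lambda v: -v),
}

print(convert_map["SQRT"]([1.0, 4.0, 9.0]))  # [1.0, 2.0, 3.0]
```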
Ina Dobreva committed -
* [REFACTOR] relay::Module Def -> TypeDef. The term Def was not very clear about what the object of interest is (it could be a function def or a type def). Change the term to TypeDef to be more explicit.
* Update include/tvm/relay/module.h
Co-authored-by: Wei Chen <ipondering.weic@gmail.com>
Tianqi Chen committed -
* Added 1D pooling to Topi
* Added 1D pooling relay op and tests.
* Added onnx parsing and tests for maxpool1d and averagepool1d
* formatting
* moved partial import.
* Fixed typo.
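For intuition, a small NumPy reference of 1D max pooling over the last axis (illustrative only, not the TOPI implementation):
```python
import numpy as np

def maxpool1d(x, pool_size, stride):
    # "valid" pooling: only windows that fit entirely inside the input.
    out_len = (x.shape[-1] - pool_size) // stride + 1
    return np.stack([x[..., i * stride : i * stride + pool_size].max(axis=-1)
                     for i in range(out_len)], axis=-1)

x = np.array([[1.0, 3.0, 2.0, 5.0, 4.0, 0.0]])
print(maxpool1d(x, pool_size=2, stride=2))  # [[3. 5. 4.]]
```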
Josh Fromm committed -
* [AutoTVM] Use vm compile in extracting task from relay
* update
* restructure vm compiler to reduce task extraction time
* x
* fix
* update doc
* update doc
* lint
Haichen Shen committed -
* [RUNTIME] Fix windows build after the latest dso module change. Switch to shared_ptr to get around a problem in latest MSVC. * [CI] Add github action for win mac build.
Tianqi Chen committed
-
- 08 Jan, 2020 4 commits
-
-
Co-authored-by: Tianqi Chen <tqchen@users.noreply.github.com>
optima2005 committed -
* [REFACTOR][IR] Variable -> VarNode
* [REFACTOR][IR] Add/Sub/Mul/Div -> AddNode/SubNode etc.
* [REFACTOR][IR] Min/Max/FloorDiv/FloorMod -> MinNode/MaxNode etc.
* [REFACTOR][IR] EQ/NE/LT/LE/GT/GE/Select -> EQNode/NENode etc.
* [REFACTOR][IR] Add Node suffix to Select/Call/Load/Ramp/Shuffle/Let
* [REFACTOR][IR] Add Node suffix to IntImm/UIntImm/FloatImm/StringImm
* [REFACTOR][IR] Add Node suffix to Any, AttrStmt, AssertStmt
* [REFACTOR][IR] Add Node suffix to Store/Provide/Allocate/Free
* [REFACTOR][IR] Add Node suffix to ProducerConsumer
* Fix lint
* style updates, test fixes
Tianqi Chen committed -
Zhi committed
-
Haichen Shen committed
-
- 07 Jan, 2020 4 commits
-
-
* [RUNTIME][DSO] Improve TVMBackendPackedCFunc to allow return values. Previously the signature of LibraryModule's PackedFunc did not support return values. This wasn't a limitation for our current use case, but could become one as we start to generate more interesting functions. This feature also starts to get interesting as we move towards a unified object protocol and start to pass objects around. This PR enhances the function signature to allow return values. We also created two macros, TVM_DLL_EXPORT_PACKED_FUNC and TVM_DLL_EXPORT_TYPED_FUNC, to allow manual creation of functions that can be loaded by a LibraryModule. Examples are added in apps/dso_plugin_module. The change to TVMBackendPackedCFunc is backward compatible, as previous functions will simply ignore the return value field.
* Address review comments
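On the Python side, the new return-value support looks roughly like this; the library path, exported symbol name, and tvm.module.load call are assumptions based on the apps/dso_plugin_module example and the API of that time:
```python
import tvm

# Load a shared library whose functions were exported with
# TVM_DLL_EXPORT_TYPED_FUNC / TVM_DLL_EXPORT_PACKED_FUNC (assumed path/name).
mod = tvm.module.load("plugin_module.so")
add_one = mod["AddOne"]          # a PackedFunc exported by the plugin
print(add_one(10))               # with the new signature it can return a value, e.g. 11
```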
Tianqi Chen committed -
Animesh Jain committed
-
Tianqi Chen committed
-
Tianqi Chen committed
-
- 06 Jan, 2020 5 commits
-
-
Tianqi Chen committed
-
* Make the Relay Keras frontend support networks created using the TensorFlow (1.13) Keras implementation (tf.keras)
* Modify Keras frontend tests to run from a class rather than a function-based script
* Adjust Keras frontend tests to run with both 'Keras' and 'tf.Keras'
* Change "TestKeras.test_forward_merge" to validate instances by class name rather than instance type
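A hedged sketch of the newly supported path; the model, shapes, and the "dense_input" key are assumptions, while relay.frontend.from_keras is the existing entry point:
```python
import tensorflow as tf
from tvm import relay

# A model built with tf.keras (rather than standalone Keras) can now be imported.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2),
])

# The dict key must match the model's input tensor name; "dense_input" is what
# tf.keras typically assigns here, but treat it as an assumption.
mod, params = relay.frontend.from_keras(model, shape={"dense_input": (1, 4)})
print(mod)
```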
Leandro Nunes committed -
* Fix typos in the Docker image versions that we are currently running as part of CI
* Add a version comment in the same pattern for the ci_lint image
Leandro Nunes committed -
* As a result of backwards-incompatible changes released in Pillow 7.0, torchvision crashes if you just "pip install pillow", as we do in a few places.
* This patch pins pillow<7 in the Dockerfiles and in supporting material such as tutorials and documentation.
Leandro Nunes committed -
* Improve commentary for operator fusion.
* Attempt to clarify what the well-formed checker is doing
Ramana Radhakrishnan committed
-