- 05 Feb, 2020 3 commits
* enforce 4-way padding
* add util with get_pad_tuple
* delete unnecessary arguments
* fix lint
* add container.Array case
* fix cudnn conv2d asymmetric padding logic
* rename get_pad_tuple to get_pad_tuple2d
* revert change for topi/python/topi/nn/conv2d.py
* add get_pad_tuple2d for several contrib conv2d ops
* add get_pad_tuple2d for all conv2d ops
Xingyu Zhou committed
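The change above standardizes conv2d padding on an explicit 4-way (top, left, bottom, right) tuple. As a rough illustration of the kind of normalization a helper like get_pad_tuple2d performs, here is a minimal, hypothetical sketch in plain Python; the name and exact semantics are assumptions, not TVM's implementation:

```python
def normalize_padding_2d(padding):
    """Expand an int / 2-tuple / 4-tuple padding spec into
    (pad_top, pad_left, pad_bottom, pad_right).

    Hypothetical helper for illustration; the real get_pad_tuple2d
    in TVM may differ in signature and behavior.
    """
    if isinstance(padding, int):
        return (padding, padding, padding, padding)
    if len(padding) == 2:  # (pad_h, pad_w), symmetric in each dimension
        pad_h, pad_w = padding
        return (pad_h, pad_w, pad_h, pad_w)
    if len(padding) == 4:  # already in 4-way form
        return tuple(padding)
    raise ValueError("padding must be an int, a 2-tuple, or a 4-tuple")

# A symmetric (1, 2) spec becomes the explicit 4-way form (1, 2, 1, 2).
assert normalize_padding_2d((1, 2)) == (1, 2, 1, 2)
```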
* [REFACTOR][PY] Establish tvm.runtime. This PR establishes the tvm.runtime namespace that contains the core runtime data structures. The top-level APIs are kept intact for now via re-exporting; we will follow up later to clean up some of the top-level APIs.
* Fix ndarray name
Tianqi Chen committed
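A small usage sketch of the relocated runtime data structures, assuming a TVM build from this era where NDArray lives under tvm.runtime and tvm.nd.array remains available as a re-export:

```python
import numpy as np
import tvm

# Construct an NDArray; the class now lives in the tvm.runtime namespace.
arr = tvm.nd.array(np.arange(4, dtype="float32"))
assert isinstance(arr, tvm.runtime.NDArray)
print(arr.asnumpy())  # [0. 1. 2. 3.]
```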
* [REFACTOR][PY] tvm._ffi
  - Remove "from __future__ import absolute_import" in the related files, as it is no longer needed when the code only runs on Python 3.
  - Remove the reverse dependency of _ctypes/_cython on object_generic.
  - function.py -> packed_func.py, Function -> PackedFunc.
  - All registry-related logic goes to tvm._ffi.registry.
  - Use absolute references for FFI-related calls: tvm._ffi.register_object, tvm._ffi.register_func, tvm._ffi.get_global_func.
* Move get_global_func to the FFI side
Tianqi Chen committed
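A hedged sketch of the registry entry points named above, assuming the usual packed-function workflow:

```python
import tvm

# Register a Python function as a global PackedFunc under a string name.
@tvm._ffi.register_func("demo.add_one")
def add_one(x):
    return x + 1

# Look the function up again through the FFI registry and call it.
f = tvm._ffi.get_global_func("demo.add_one")
assert f(41) == 42
```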
- 04 Feb, 2020 1 commit
* [TOPI][x86] Injective Schedule Improvement. * Add tiling. * Vectorize when there is an axis.
Animesh Jain committed
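For context, a minimal tensor-expression sketch of what an injective schedule with tiling and vectorization looks like, written against the tvm.te API; the operator and tiling factors are illustrative assumptions, not the actual x86 schedule:

```python
import tvm
from tvm import te

n, m = 64, 64
A = te.placeholder((n, m), name="A")
B = te.compute((n, m), lambda i, j: A[i, j] + 1.0, name="B")  # an injective op

s = te.create_schedule(B.op)
# Tile the 2D iteration space, then vectorize the innermost axis
# and parallelize the outermost one.
xo, yo, xi, yi = s[B].tile(B.op.axis[0], B.op.axis[1], x_factor=8, y_factor=8)
s[B].vectorize(yi)
s[B].parallel(xo)

func = tvm.build(s, [A, B], target="llvm")
```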
- 03 Feb, 2020 1 commit
* [TOPI] upsample operator 'NCHWinic' format support. Some hardware accelerators require packed data formats such as NCHWinic to fit their hardware resources; this adds NCHWinic format support to the upsample operator to meet that requirement.
* address review comments, add assert for the 'else must be NCHWxc' logic
Hua Jiang committed
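To illustrate the idea behind packed channel layouts, here is a plain NumPy sketch (not TVM code) of rearranging an NCHW tensor into an NCHW[x]c form, assuming the channel count divides evenly by the packing factor:

```python
import numpy as np

def pack_nchw_to_nchwc(data, inner_c=8):
    """Rearrange NCHW -> NCHW[inner_c]c, e.g. (1, 16, 4, 4) -> (1, 2, 4, 4, 8)."""
    n, c, h, w = data.shape
    assert c % inner_c == 0, "channels must be divisible by the packing factor"
    return data.reshape(n, c // inner_c, inner_c, h, w).transpose(0, 1, 3, 4, 2)

x = np.random.rand(1, 16, 4, 4).astype("float32")
print(pack_nchw_to_nchwc(x).shape)  # (1, 2, 4, 4, 8)
```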
- 01 Feb, 2020 1 commit
Alex Gladkov committed
- 31 Jan, 2020 1 commit
Animesh Jain committed
- 24 Jan, 2020 2 commits
* remove cpp upsampling * remove cpp resize
masahi committed
Alex Gladkov committed
- 22 Jan, 2020 2 commits
- combine pad and dilate
- fix for the issue https://discuss.tvm.ai/t/compile-error-for-cuda-target/4164
- fix for the issue https://github.com/apache/incubator-tvm/pull/4472
Alex Gladkov committed
Alexander Pivovarov committed
- 21 Jan, 2020 1 commit
Bring up namespace te -- Tensor expression language DSL.
Tianqi Chen committed
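A minimal example of the tensor expression DSL under the new namespace, assuming the now-standard "from tvm import te" entry point:

```python
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")
s = te.create_schedule(B.op)
# Inspect the generated loop nest.
print(tvm.lower(s, [A, B], simple_mode=True))
```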
- 20 Jan, 2020 1 commit
Alex Gladkov committed
- 19 Jan, 2020 2 commits
This PR moves the codegen-related code into the target folder, as it is target-specific functionality. We also adopt the term "compiler driver", as used in common compiler infrastructure such as Rust, GHC and Clang; as a result, build_module is moved into the driver folder.
Tianqi Chen committed
TIR is the new namespace for the low-level IR used for tensor-level optimizations and loop transformations. This PR establishes the namespace and files:
- lowered_func.h, buffer.h, data_layout.h -> tir/buffer.h, tir/data_layout.h, tir/lowered_func.h
- ir.h -> tir/expr.h, tir/stmt.h
- ir_functor_ext.h -> tir/expr_functor.h, tir/stmt_functor.h
Tianqi Chen committed
- 18 Jan, 2020 1 commit
Haichen Shen committed
- 17 Jan, 2020 2 commits
Move the conversion extensions to the specific class definitions so that we no longer need to include packed_func_ext.
Tianqi Chen committed
Animesh Jain committed
- 16 Jan, 2020 3 commits
* [REFACTOR] introduce top - Tensor Operation DSL. Historically we put Tensor, Schedule and compute under the root tvm namespace. This is no longer a good idea as the project's scope grows larger than the tensor operation DSL. This PR introduces top -- a namespace for tensor operation DSL concepts such as schedule, tensor and compute. We moved the related files to the new top subfolder.
* Move relevant files into include/tvm/top and src/top
Tianqi Chen committed
* [arith] add ShapeVar representing non-negative valued variables in a tensor shape
* bound remover; deal with div in int_set differently
* fix bound_remover
* migrate unittest to use shape_var
* use tvm.shape_var in integration & relay tests
* add test case; fix Var register
* fix lint
* fix lint again
* add default ShapeVar visitor in Relay
* fix override
* fix ShapeVar visit bug
* revert IntervalSet for shape_var
* remove bound_remover
* remove is_var; use constructor for shapevar/var instead
* ShapeVar -> SizeVar; add constructor comments
* shape_var -> size_var in doc
* tindex -> size
Yizhi Liu committed
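A brief sketch of the distinction, assuming the current Python spellings te.var and te.size_var: a SizeVar carries the extra guarantee of being non-negative, which helps the arithmetic simplifier with divisions and modulo in shape expressions.

```python
import tvm
from tvm import te

n = te.size_var("n")  # known to be >= 0, the typical case for tensor shapes
m = te.var("m")       # ordinary symbolic variable, may be negative

A = te.placeholder((n, n), name="A")
print(type(n).__name__, type(m).__name__)  # SizeVar Var
```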
This PR introduces a clearer naming prefix for the C API type codes to avoid conflicts with other packages. We also removed TVMArray and TVMType to directly use DLTensor and DLDataType.
Tianqi Chen committed
- 15 Jan, 2020 2 commits
This reverts commit dcf7fbf1.
Haichen Shen committed
* [REFACTOR][IR] Unify IntImm and UIntImm. This PR unifies UIntImm and IntImm to simplify the codebase. Unsigned integer constants will also be stored as IntImm. For uint constants that do not fit into int64 (a rare case), we introduced an intrinsic tvm_big_uint_imm to construct such integers from their lower and upper 32 bits.
* [REFACTOR][IR] Remove UIntImm to use IntImm
* rename big->large
Tianqi Chen committed
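A small check of the unified representation, assuming tvm.tir.const as the constant constructor: signed and unsigned integer constants both come back as IntImm nodes, distinguished only by dtype.

```python
import tvm

a = tvm.tir.const(1, "int32")
b = tvm.tir.const(1, "uint32")
# After the unification, both constants are represented by IntImm.
print(type(a).__name__, type(b).__name__)  # IntImm IntImm
print(a.dtype, b.dtype)                    # int32 uint32
```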
- 11 Jan, 2020 3 commits
* added conv1d operators to topi.
* Started to add python testing.
* Added python conv1d implementation for testing.
* Wrote test but need to add cuda schedule :(
* Cuda schedules working for both conv1d layouts.
* All topi tests passing.
* Formatting topi.
* Removed pad_method option as it's probably overkill.
* Added relay op definition of conv1d.
* End2end conv1d working with onnx.
* Lint fixes.
* Formatting fixes.
* Rebase fix.
* Switched to array based attributes for consistency across convs.
* Improved onnx parsing and testing for convolutions.
* lint fix
* Tiny tweak.
* Bug fix
* Rebase fix.
* Add group ignore to onnx conv1d frontend.
* Unified MakeConv and fixed documentation.
* improved autopadding
* Addressed feedback and simplified onnx frontend.
* Format fix.
* Basic X86 NCW schedule working.
* Added nwc schedule.
* fixed name
* Added more tests and basic x86 schedules.
* Format fix.
* Added non power of two shape tests.
Josh Fromm committed
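A usage sketch of the resulting Relay op, with illustrative shapes; the attribute names are assumed to mirror conv2d:

```python
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 16, 32), dtype="float32")     # NCW
weight = relay.var("weight", shape=(8, 16, 3), dtype="float32")  # OIW
out = relay.nn.conv1d(data, weight, strides=1, padding=(1, 1),
                      channels=8, kernel_size=3)
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
print(relay.transform.InferType()(mod))  # output type: (1, 8, 32)
```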
* [TOPI][RELAY][OP] add op crop_and_resize
* fix pylint
* incorporate comments
* fix ci
Yong Wu committed
* Add output_padding to generic
* Add output_padding to the reference impl
* Add output_padding to arm_cpu
* Add output_padding to the test
* Add output_padding for cuda
* Add output_padding for x86
* Make use of the new output_padding argument in Relay
* Adjust conv2d_transpose Relay test
* Fix lint errors
* Fix the VTA declaration of conv2d_transpose
* support for output padding in conv2d transpose
* some output padding will break IR pass
* Fix new conv2d_transpose test
* Update tophub
* Fix conv1d output_padding too.
* Fix the conv1d_transpose reference function.
* Fix the cuda impl
* fix the topi test for conv1d
* Update the versions in tophub.py
Co-authored-by: Thierry Moreau <tmoreau@octoml.ai>
abergeron committed
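For reference, output_padding enters the transposed-convolution output shape as out = (in - 1) * stride - pad_begin - pad_end + kernel + output_padding per spatial dimension. A hedged Relay sketch with illustrative shapes:

```python
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 8, 16, 16), dtype="float32")
# conv2d_transpose weights here are (in_channels, out_channels, kh, kw).
weight = relay.var("weight", shape=(8, 4, 3, 3), dtype="float32")
out = relay.nn.conv2d_transpose(data, weight, strides=(2, 2), padding=(1, 1),
                                output_padding=(1, 1), channels=4,
                                kernel_size=(3, 3))
# Expected spatial size: (16 - 1) * 2 - 2 * 1 + 3 + 1 = 32
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
print(relay.transform.InferType()(mod))
```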
- 10 Jan, 2020 1 commit
* Update topi.cc: fix topi.nn.global_pool for layout="NHWC"
* add a topi.nn.global_pool layout="NHWC" test
戚海涛 committed
- 09 Jan, 2020 2 commits
* [REFACTOR][IR] tvm::Expr -> PrimExpr (Primitive Expr). As part of the unified IR, we will need to unify relay::Expr and the current tvm::Expr under the same base type. From the technical point of view, tvm::Expr is a "primitive" expression that only contains POD types and handles and does not do life-cycle management. This PR renames Expr -> PrimExpr to clarify that. We will send a subsequent PR to introduce the base expr class.
* Remove legacy VarExpr and ExprHash/Equal
Tianqi Chen committed
* Added 1D pooling to Topi
* Added 1D pooling relay op and tests.
* Added onnx parsing and tests for maxpool1d and averagepool1d
* formatting
* moved partial import.
* Fixed typo.
Josh Fromm committed
- 08 Jan, 2020 3 commits
Co-authored-by: Tianqi Chen <tqchen@users.noreply.github.com>
optima2005 committed
* [REFACTOR][IR] Variable -> VarNode
* [REFACTOR][IR] Add/Sub/Mul/Div -> AddNode/SubNode etc.
* [REFACTOR][IR] Min/Max/FloorDiv/FloorMod -> MinNode/MaxNode etc.
* [REFACTOR][IR] EQ/NE/LT/LE/GT/GE/Select -> EQNode/NENode etc.
* [REFACTOR][IR] Add Node suffix to Select/Call/Load/Ramp/Shuffle/Let
* [REFACTOR][IR] Add node suffix to IntImm/UIntImm/FloatImm/StringImm
* [REFACTOR][IR] Add Node suffix to Any, AttrStmt, AssertStmt
* [REFACTOR][IR] Add Node suffix to Store/Provide/Allocate/Free
* [REFACTOR][IR] Add Node suffix to ProducerConsumer
* Fix lint
* style updates, test fixes
Tianqi Chen committed
Zhi committed
- 06 Jan, 2020 2 commits
* [CONV] Asymmetric padding
* fix lint error
* update for legalize, rocm and cudnn
* add more test cases
* change more symmetric padding
* change conv2d winograd tests according to the original cases
* remove 'alter_op_layout.h' header in bitserial.cc
optima2005 committed
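With asymmetric padding, the conv2d padding attribute can be a full (top, left, bottom, right) tuple. A hedged example with illustrative shapes:

```python
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 3, 32, 32), dtype="float32")
weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="float32")
# Pad 1 on top, 0 on the left, 2 on the bottom, 1 on the right.
out = relay.nn.conv2d(data, weight, strides=(1, 1), padding=(1, 0, 2, 1),
                      channels=16, kernel_size=(3, 3))
# H_out = 32 + (1 + 2) - 3 + 1 = 33, W_out = 32 + (0 + 1) - 3 + 1 = 31
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
print(relay.transform.InferType()(mod))
```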
* Support empty tensor
* Fix schedule
* Refactor
* Minor fix
* Fix pylint
* Merge cpp and python is_empty_shape
Yao Wang committed
- 03 Jan, 2020 1 commit
* add onnx resize converter
* update frontends
* updating topi
* adding onnx resize tests
* fixed NHWC test by casting size dtype to int32
* fix tests
* fix lint
* update existing test cases
* fix tensorflow frontend
* fix lint
* remove NHWC stuff
* update topi resize test for half_pixel
* update doc
* fix doc
* remove onnx resize bits
masahi committed
- 01 Jan, 2020 1 commit
* [FRONTEND][TF] Add conv3d * fix high rtol
optima2005 committed
- 31 Dec, 2019 1 commit
* [REFACTOR][OBJECT] Consolidate NodePtr/Ref/Hash/Equal and macros into Object. Historically, we have classes like NodePtr/Ref/HashEqual. After the unified object protocol, these names are just aliases of their Object counterparts. Moreover, there are helper macros for defining these objects scattered across the codebase. This PR consolidates the terminology into the corresponding names in the Object system so we have a clean and consistent API moving forward.
* Update include/tvm/attrs.h (Co-Authored-By: Wei Chen <ipondering.weic@gmail.com>)
* fix compilation
Co-authored-by: Wei Chen <ipondering.weic@gmail.com>
Tianqi Chen committed
- 28 Dec, 2019 1 commit
* [FRONTEND][TF] conv3d_transpose 'SAME' padding: support kernels larger than 1x1
* revised as per review comments
* add more fallback workarounds to make all tests pass
optima2005 committed
- 27 Dec, 2019 1 commit
* [TOPI] add 3D upsampling Op.
* fix lint issues
* change align_corners to coordinate_transformation_mode
* fix resize3d half_pixel
* make a simple function and clean up trilinear_resize3d_python
* fix doc
optima2005 committed
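To make the operation concrete, a plain NumPy sketch (not TVM code) of nearest-neighbor 3D upsampling on an NCDHW tensor with integer scales:

```python
import numpy as np

def upsample3d_nearest(x, scale_d=2, scale_h=2, scale_w=2):
    """Nearest-neighbor upsampling of an NCDHW tensor by repeating voxels."""
    x = np.repeat(x, scale_d, axis=2)
    x = np.repeat(x, scale_h, axis=3)
    x = np.repeat(x, scale_w, axis=4)
    return x

data = np.random.rand(1, 3, 4, 4, 4).astype("float32")
print(upsample3d_nearest(data).shape)  # (1, 3, 8, 8, 8)
```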
- 26 Dec, 2019 1 commit
deepIgnorance committed