- 13 Apr, 2020 2 commits
* [PYTORCH] Reduce_ops support added
* Review comments updated
* Fix typo bug in QNN test
Samuel committed
* Use funcs from prelude, pass around convert_map
* Get Relay input type from user ishape
* Handle tuple unpack
* Experimenting with static tensor array
* Use prelude concat instead of cons + rev
* Minor clean up
* Fix layer norm conversion bug, unwrap tensor array
* Add infer shape on tensor array
* Pass around prelude for now
* Compile worked but hit a runtime error
* Fix tensor array wrapping
* Begin dynamic list test
* is_list_dynamic first version
* Finish dynamic list test
* A few fixes
* Use shape_of function if Any is found
* Improve size conversion
* Working on adding free vars to loop block
* Fix inlined inner loop issue
* Clean up free var handling
* Add support for tensor array concat
* Add ta concat on last axis
* Fix concat, but got runtime error
* Disable concat on axis -1 for now
* Add LSTM tests (see the sketch below)
* Revert unrelated change
* Fix stacked bidir test
* Minor fix to test
* Relax tol a bit, revert dnnl change to avoid conflict
* Simplify infer type, use input tensor shape rather than concat shape
* More shape fixes
masahi committed
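The dynamic tensor array work above is what makes PyTorch LSTMs (including stacked bidirectional ones) convertible. A minimal, hedged sketch of exercising the converter — the input name and shapes are illustrative, not taken from the commit:

```python
import torch
from tvm import relay

# A bidirectional stacked LSTM, similar to the tests mentioned above.
class BiLSTM(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = torch.nn.LSTM(16, 32, num_layers=2, bidirectional=True)

    def forward(self, x):
        out, _ = self.lstm(x)
        return out

inp = torch.randn(8, 1, 16)  # (seq_len, batch, features)
script = torch.jit.trace(BiLSTM().eval(), inp)

# The frontend takes (name, shape) pairs; internally it now decides whether
# a static or dynamic tensor array is needed for list-carrying loops.
mod, params = relay.frontend.from_pytorch(script, [("input", (8, 1, 16))])
print(mod["main"])
```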
- 12 Apr, 2020 5 commits
* [Intrinsic] Add log1p, ldexp, atan2, hypot, nextafter, copysign (see the sketch below)
* Lint
Junru Shao committed
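A quick sketch of using two of the new intrinsics from Python; this assumes they are exposed under tvm.tir alongside the existing math intrinsics:

```python
import tvm
from tvm import te, tir

x = te.placeholder((16,), name="x")
y = te.placeholder((16,), name="y")

# Elementwise uses of two of the newly added intrinsics.
z = te.compute((16,), lambda i: tir.atan2(x[i], y[i]), name="z")
w = te.compute((16,), lambda i: tir.log1p(x[i]), name="w")

s = te.create_schedule([z.op, w.op])
print(tvm.lower(s, [x, y, z, w], simple_mode=True))
```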
Jared Roesch committed
Zhi committed
* Adding Cast back to Int32 in FixedPointMultiply (see the requantize sketch below)
* Removing extra clip
* Fix space
* Retrigger
* Retrigger
Animesh Jain committed
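For context, FixedPointMultiply is a helper used when lowering qnn.requantize, so the fix above surfaces through that op. A hedged sketch of building such an op (shapes, scales, and zero points are illustrative):

```python
from tvm import relay

# Requantize int32 values to int8; lowering this op exercises the
# fixed-point multiply path touched by the commit above.
x = relay.var("x", shape=(1, 4), dtype="int32")
out = relay.qnn.op.requantize(
    x,
    input_scale=relay.const(0.5),
    input_zero_point=relay.const(0, "int32"),
    output_scale=relay.const(0.25),
    output_zero_point=relay.const(0, "int32"),
    out_dtype="int8",
)
func = relay.Function([x], out)
```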
This PR enables the copy-on-write (COW) optimization passes:
- Enable COW for IRModule in both TIR and Relay passes.
- Enable COW for PrimFunc in TIR passes.

More thought is needed on whether/how to enable COW for relay::Function, because some function passes depend on the presence of an IRModule for context information, and std::move-ing the related function to nullptr might affect that behavior.
Tianqi Chen committed
- 11 Apr, 2020 5 commits
* [PYTHON] Abs, Arange, Softplus ops
* Review comments updated
Samuel committed
* [LLVM] Fix generation of LLVM intrinsics

  The type list in the call to llvm::Intrinsic::getDeclaration is not the intrinsic's signature; it is the list of overloaded types. Without this fix, the updated unit test would cause the following error:

    TVMError: LLVM module verification failed with the following errors:
    Intrinsic name not mangled correctly for type arguments!
    Should be: llvm.ctlz.i32
    i32 (i32, i1)* @llvm.ctlz.i32.i1

  Special handling for llvm.prefetch, signature matching for overloaded intrinsics only: the prefetch intrinsic returns void in LLVM, while it returns i32 in TVM. This case needs special handling, because rule-based intrinsic translation would create an invalid LLVM type. Signature matching is done only for overloaded intrinsics; it is not needed for non-overloaded ones, which saves a bit of compile time.
* Include intrinsic name in the error message
* Fix number of arguments for llvm.fmuladd and llvm.pow
Krzysztof Parzyszek committed
* Merge change from dev branch
* Fix string issue
* Bring comanic's change back
masahi committed
* Support TF frontend static TensorArray (see the sketch below)
* Fix pylint
* Fix lint
* Move get_tensor_array_shape into prelude
* Fix lint
* Fix common
Yao Wang committed
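A hedged, TF1-style sketch of a graph with a static TensorArray that this change targets; per the commit, get_tensor_array_shape (moved into the prelude) is what the frontend uses to recover the array's static shape:

```python
import tensorflow as tf  # TF1-style API (use tf.compat.v1 on TF2)
from tvm import relay

with tf.Graph().as_default() as graph:
    x = tf.placeholder(tf.float32, shape=(4, 8), name="input")
    # Static TensorArray: size and element shape are known up front.
    ta = tf.TensorArray(dtype=tf.float32, size=4, infer_shape=True)
    ta = ta.unstack(x)
    out = ta.stack(name="output")
    graph_def = graph.as_graph_def()

mod, params = relay.frontend.from_tensorflow(
    graph_def, shape={"input": (4, 8)}, outputs=["output"]
)
```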
* [RUNTIME] Introduce RValue reference (move) support to TypedPackedFunc

  This PR introduces RValue reference support in the PackedFunc calling convention. Specifically, when an argument is an r-value reference, we assign a different type code (`kObjectRValueRefArg`) and pass `Object**` (the address of the Object pointer) through the values array instead. The callee can choose to move out this Object pointer and set the caller's original Object pointer to nullptr. We also add experimental move support on the Python side, marked as _move to indicate its dev nature (see the sketch below). This enhancement will enable copy-on-write optimizations throughout the TVM stack.
* Address review comments
* Fix compilation
Tianqi Chen committed
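A hedged sketch of the experimental Python-side move mentioned above; testing.echo is assumed to be one of TVM's registered FFI test helpers, and _move is the experimental API this commit adds:

```python
import tvm
from tvm.runtime import convert

arr = convert([1, 2, 3])  # a TVM container Object
echo = tvm.get_global_func("testing.echo", allow_missing=True)
if echo is not None:
    # _move() passes the Object by r-value reference (kObjectRValueRefArg);
    # the callee may steal the reference, so `arr` must not be used afterwards.
    out = echo(arr._move())
```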
- 10 Apr, 2020 18 commits
Huacong Yang committed
* Add target to region
* Refactor annotate_target
* Make all unit tests work
* Quick fix
* Enable BN; unit test failed
* Fix VM test and unit test; refactor annotate_target a bit
* Quick fix for fusion
* Revert fusion change
* Style fix
* Refactor merge region pass
* Format
* Minor fix
* Skip e2e test
* Lint
* Support multiple runs of AnnotateTarget (see the sketch below)
* Add HasAttr and revert DNNL codegen
* Address comments

Co-authored-by: Zhi Chen <chzhi@amazon.com>
Cody Yu committed
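A hedged sketch of the BYOC flow around this pass; "dnnl" assumes the DNNL codegen annotations are registered, and the repeated AnnotateTarget call exercises the multiple-run support added here:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 32))
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

# Annotate supported ops for an external codegen, then partition.
mod = relay.transform.AnnotateTarget("dnnl")(mod)
mod = relay.transform.AnnotateTarget("dnnl")(mod)  # running the pass again is now supported
mod = relay.transform.MergeCompilerRegions()(mod)
mod = relay.transform.PartitionGraph()(mod)
print(mod)
```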
Tianqi Chen committed
* [arith] Linear system and equation solver (see the sketch below)
* Avoid constructing an analyzer every time
* Generate random test cases and address comments
* Rename linear_system to int_constraints
* Add comments and use a random seed
* Add a message reporting failure with the seed
* Add SEqualReduce to IntConstraints; allow variables & ranges to be None

Co-authored-by: Sergei Grechanik <sergei.grechanik+h@gmail.com>
Co-authored-by: Sergei Grechanik <sergei.grechanik@gmail.com>
Yizhi Liu committed
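A hedged sketch of the solver's Python entry point, assuming it is exposed as tvm.arith.solve_linear_equations and returns an IntConstraints-style transform (per the rename above):

```python
import tvm
from tvm import te, tir, arith

x, y = te.var("x"), te.var("y")
ranges = {x: tvm.ir.Range(0, 100), y: tvm.ir.Range(0, 100)}

# Solve x + y == 10, x - y == 2 over the given ranges.
solution = arith.solve_linear_equations(
    [tir.EQ(x + y, 10), tir.EQ(x - y, 2)],
    [x, y],
    ranges,
)
# Maps the original variables to their solved forms (x = 6, y = 4).
print(solution.src_to_dst)
```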
Samuel committed
* [FRONTEND][TENSORFLOW] Fix gather_nd indices
* Retrigger CI
MORITA Kazutaka committed
weiliangweiliang committed
* Use runtime::String
* Move String to the tvm namespace
* Add const char* constructor
* Implicit cast from std::string
Zhi committed
hlu1 committed
* [RUNTIME] Initial implementation of Hexagon runtime support

  This is only the TVM runtime. The FastRPC libraries, simulator driver, etc. will be provided in subsequent commits.
* Fix pylint complaints
* Fix some more pylint complaints
* Add link to the Hexagon SDK website
* Extract VTCM marker into a common variable
* Implement device->device memory copy
* Disable unsigned PDs by default
* Ensure that --hvx_length is present in sim_args if HVX is enabled
* Remove the line about clang from README.md (apparently things work with libstdc++)
* Mention to set USE_RPC=OFF when building libtvm_runtime.so for Hexagon
* Remember to use codegen_hvx in validate_hvx_length
* Add a line about the minimum version of LLVM
Krzysztof Parzyszek committed
* Improve DNNL
* Add bind params
* Trigger CI
Cody Yu committed
shoubhik committed
This PR refactors the serialization mechanism to support general serialization of leaf objects into bytes. The new feature supersedes the original GetGlobalKey feature for singletons. Serialization support for runtime::String is also added (see the sketch below).
Tianqi Chen committed
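A hedged round-trip sketch: save_json/load_json are TVM's structural serialization entry points, and after this change a bare runtime::String should be serializable as a leaf object:

```python
import tvm

s = tvm.runtime.convert("hello")   # a runtime::String object
json_str = tvm.ir.save_json(s)     # structural serialization of a leaf object
print(tvm.ir.load_json(json_str))
```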
* Legalize: use the non-recursive rewriter
* Cleanup
Animesh Jain committed
Yizhi Liu committed
Samuel committed
* Set IterVar index to 0 for local thread-bound IterVars (see the sketch below)
* Lint fix
* Use rank instead of scope name for the predicate; add tests
* Handle cases other than local/threadIdx
* Revert warp to the old behavior
* Modify test to cover global/blockIdx
* Fix a typo
* Update test_te_schedule_ops.py with more testing coverage in test_local_stage_predicate; remove test_schedule_schedule_ops.py, which was added by mistake
yongfeng-nv committed
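A hedged sketch of the pattern the predicate change is about: a stage cached to "local" and computed at a thread-bound IterVar of its consumer:

```python
import tvm
from tvm import te

n = 64
A = te.placeholder((n, n), name="A")
k = te.reduce_axis((0, n), name="k")
B = te.compute((n,), lambda i: te.sum(A[i, k], axis=k), name="B")

s = te.create_schedule(B.op)
BL = s.cache_write(B, "local")         # local stage
tx = te.thread_axis("threadIdx.x")
s[B].bind(B.op.axis[0], tx)            # thread-bound IterVar
s[BL].compute_at(s[B], B.op.axis[0])
print(tvm.lower(s, [A, B], simple_mode=True))
```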
Tianqi Chen committed
- 09 Apr, 2020 1 commit
* [CI][DOCS] Make sure to refresh the cython part
* [BUGFIX] Fix CRT static test bug
* Fix demo_static
* Resolve review comment
Tianqi Chen committed
- 08 Apr, 2020 6 commits
* Fix String SEqual
* Retrigger CI
Zhi committed
Luis Vega committed
Haichen Shen committed
In newer versions of LLVM, this header is no longer included by one of the already included headers in llvm_common.h, so include it explicitly.
Krzysztof Parzyszek committed
Samuel committed
* [RELAY] Add 'check' functions to MergeComposite

  Currently, MergeComposite can only perform structural matches. This patch introduces the ability to specify a 'check' function alongside the pattern, which can include custom logic to determine whether an extracted pattern should be merged. For example, if you only want to merge 'NHWC' convolutions, you can specify a 'check' function that queries the data_layout value of the extracted pattern (see the test, and the sketch below).

  Change-Id: I9337ce39f10997051a286d888be38ed0d410d340
* [RELAY] Reformat merge_composite.cc

  Run clang-format on merge_composite.cc

  Change-Id: I1736bff798cc6d93e57519b08ab3362869098779
* [RELAY][BYOC] Support composite functions in AnnotateTarget

  This patch introduces support for annotating composite functions in the AnnotateTarget pass. In order for a composite function to be annotated, you should name it according to the style {codegen}.{name}, e.g. dnnl.add_relu.

  Change-Id: I74d6c0b506153d866f6d1feb203b32dad59f2871
mbaret committed
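A hedged sketch of a pattern-table entry with a 'check' function; the (name, pattern, check) tuple format follows the test referenced above, and the pattern and names are illustrative:

```python
import tvm
from tvm import relay

def make_conv_pattern():
    # The pattern: a lone conv2d with free inputs.
    return relay.nn.conv2d(relay.var("x"), relay.var("w"))

def check_nhwc(extract):
    # Custom logic: only merge NHWC convolutions.
    return extract.attrs.data_layout == "NHWC"

pattern_table = [("dnnl.conv2d_nhwc", make_conv_pattern(), check_nhwc)]

x = relay.var("x", shape=(1, 28, 28, 8))
w = relay.var("w", shape=(3, 3, 8, 16))
conv = relay.nn.conv2d(x, w, data_layout="NHWC", kernel_layout="HWIO")
mod = tvm.IRModule.from_expr(relay.Function([x, w], conv))
mod = relay.transform.MergeComposite(pattern_table)(mod)
print(mod)
```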
- 07 Apr, 2020 3 commits
* Add implementation of TVMDSOOp (see the sketch below)
* feat: Update cmake script to work with C++11 and in-repo build
* feat: Use libtvm as oplib dependency
* fix: Add missing link dependency to libtvm
* feat: Update tf tvmdso op by review comments
* fix: Update with PR comments
* fix: Fix lint
* feat: Add test script and fix GPU shape
* fix: Conditionally build tftvm op for GPU
* fix: Fix pylint of tf_op module.py
* feat: Conditionally enable GPU test for tftvm op
* feat: Add tf_tvmdsoop test script as an app test
* fix: Fix gpu/cpu enabled check on tvm in test script
* fix: Make tf tvmdso op test script runnable with pytest
* Remove unused test script test_tfop_module.py
* fix: Remove pushd & popd in tfdsoop test script
* fix: Upgrade tftvmop to use python3 to find TensorFlow
* fix: Change target_link_options to target_link_libraries
* fix: Add tftvmop build script's C++ option
* fix: Add tvm library path to tf op test library path
* fix: Debug CI build for tftvm dso op
* fix: Fix cmake error and skip tfop test
* fix: Fix typo and indentation issues
* feat: Use TF list input op def
* fix: Fix style and unexpected changes

Co-authored-by: baoxinqi <baoxinqi@4paradigm.com>
Co-authored-by: Chen Dihao <chendihao@4paradigm.com>
Co-authored-by: wrongtest <wrongtest@4paradigm.com>
tobe committed
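A hedged sketch of calling a TVM-compiled function from TensorFlow through the new op. The library path and function name are illustrative, and the tf_op API spelling follows the test script added here (TF1-style session API):

```python
import tensorflow as tf
from tvm.contrib import tf_op

# "tvm_addone_dll.so" is assumed to be a library exported by tvm.build
# that contains a function named "addone".
mod = tf_op.OpModule("tvm_addone_dll.so")
addone = mod.func("addone", output_shape=[4], output_dtype="float")

with tf.Session() as sess:
    x = tf.constant([1.0, 2.0, 3.0, 4.0])
    print(sess.run(addone(x)))
```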
This intrinsic was removed in LLVM 11.
Krzysztof Parzyszek committed
Tianqi Chen committed