- 21 Oct, 2019 4 commits
-
-
FIX "After connecting he usb" with "After connecting the usb"
Monkeyking committed
-
This patch adds a multiply operator for quantized tensors. The details of the quantized multiplication are outlined in the code. This builds on pull request 3927 and includes the changes Animesh mentions in the comments on that request. Change-Id: I555715b53d0266a91d5c03dc3dfe8fc31e7ce4e1
ekalda committed
-
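The commit above says the details of quantized multiplication are outlined in the code. As a hedged sketch of the usual scheme (not the actual TVM kernel; the function name and signature are invented for illustration): each quantized value represents `real = scale * (q - zero_point)`, so multiplying two quantized tensors elementwise amounts to multiplying the de-offset integers and requantizing into the output's scale and zero point.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Illustrative elementwise quantized multiply (hypothetical helper, not TVM's):
// real_i = s_i * (q_i - zp_i); the real product is re-expressed in the
// output quantization parameters (zp_out, s_out).
int32_t QuantizedMul(int32_t q1, int32_t zp1, float s1,
                     int32_t q2, int32_t zp2, float s2,
                     int32_t zp_out, float s_out) {
  const int32_t prod = (q1 - zp1) * (q2 - zp2);  // integer product of de-offset values
  const float multiplier = (s1 * s2) / s_out;    // combined rescale factor
  return zp_out + static_cast<int32_t>(std::lround(multiplier * prod));
}
```

In production kernels the float multiplier is typically replaced by a fixed-point multiplier plus shift so the whole computation stays in integer arithmetic.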
Tianqi Chen committed
-
* [REFACTOR][NODE][RUNTIME] Move Node to the new Object protocol. This PR removes the original node system and makes Node a subclass of Object. This is a major refactor towards a better unified runtime object system. List of changes in the refactor:
- We now hide the data_ field; use Downcast explicitly to get a sub-class object.
- Removed the node system FFI in python.
- Removed the node C API; instead use PackedFunc for list and get attrs.
- Changed relay::Op::set_attr_type_key(attr_key_name) to relay::Op::set_attr_type<AttrType>(). This change was necessary because of the new Object registration mechanism, and required subsequent changes to the op registrations.
- The change revealed a few previous problems that are now fixed: patched up a few missing node type registrations, and we now raise an error when an unregistered object is encountered.
- The original node.h and container.h are kept in the same location.
- Calling convention: kObjectHandle now equals the old kNodeHandle; kNodeHandle is removed.
- IRFunctor now dispatches on ObjectRef.
- Update to the new type checking API: is_type and derived_from are replaced by IsInstance.
- Removed the .hash member function; instead use C++-convention hasher functors.
* Address review comments
Tianqi Chen committed
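The refactor above replaces direct access to a hidden data_ field with explicit, checked conversions (Downcast) and type tests (IsInstance). A minimal self-contained stand-in for that pattern, assuming a toy object hierarchy (these are not TVM's actual classes; TVM uses type indices rather than dynamic_cast):

```cpp
#include <cassert>
#include <stdexcept>

// Toy base class standing in for the runtime Object described above.
struct Object {
  virtual ~Object() = default;  // polymorphic so the type can be tested
};

struct IntImm : Object {
  long value;
  explicit IntImm(long v) : value(v) {}
};

struct FloatImm : Object {
  double value;
  explicit FloatImm(double v) : value(v) {}
};

// IsInstance: non-throwing type test, replacing is_type/derived_from.
template <typename T>
bool IsInstance(const Object* obj) {
  return dynamic_cast<const T*>(obj) != nullptr;
}

// Downcast: checked conversion instead of exposing a raw data_ field.
template <typename T>
const T* Downcast(const Object* obj) {
  if (!IsInstance<T>(obj)) throw std::runtime_error("type mismatch in Downcast");
  return static_cast<const T*>(obj);
}
```

The design point is that every conversion site states the expected type explicitly, so a mismatch fails loudly at the cast rather than silently corrupting later reads.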
-
- 20 Oct, 2019 2 commits
-
-
Haichen Shen committed
-
We think this will reduce confusion about its meaning. https://discuss.tvm.ai/t/discuss-consider-rename-vm-datatype/4339
Wei Chen committed
-
- 18 Oct, 2019 6 commits
-
-
Tianqi Chen committed
-
* Add LiftIfThenElse pass
* Add more comments
* Rename and refactor
* Add description for internal data structure
* Rename a test
* Minor change
* Address comments
* Improve update_for
Yao Wang committed
-
Animesh Jain committed
-
Tianqi Chen committed
-
Gus Smith committed
-
* [Relay][Frontend][TF] Add tensor array ops
* rename
* delete test
* Move utility function
* Refactor
* fix tensor array ops
* fix test
* fix rebase
* Fix serializer bug
* Improve tf convert name lookup to use prelude api
* Fix lint
* Fix test
Wei Chen committed
-
- 17 Oct, 2019 5 commits
-
-
* [relay][vm] Separate VM runtime from executable
* Address comments
* move ctx back to vm
* make only vm related fields and methods protected
* integrate serialization/deserialization to executable
* create stream
Zhi committed
-
Arm architecture provides optional FP16 floating point support in two alternative formats: IEEE and an alternative Arm format. The ACLE (Arm C Language Extension) defined preprocessor symbol __ARM_FP16_FORMAT_IEEE can be used to distinguish between implementations providing IEEE and the Arm alternative format, but cannot, on its own, be used to determine whether FP16 HW support is actually present. Testing this preprocessor symbol can lead to an undefined __floatdihf at runtime on an aarch64 target where no FP16 HW is present. The relevant preprocessor symbol to determine whether FP16 HW support is present in the target is __ARM_FEATURE_FP16_SCALAR_ARITHMETIC; this symbol implies __ARM_FP16_FORMAT_IEEE. The relevant preprocessor symbols are defined by the ACLE standard, section 5.5.21 (16-bit floating-point data processing operations), https://static.docs.arm.com/101028/0008/Q2-ACLE_2019Q2_release-0008.pdf
lhutton1 committed
-
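A hedged sketch of the guard pattern the commit describes (the function name is invented for illustration): test the feature macro, which implies the IEEE format macro, rather than the format macro alone, which can be defined even when no FP16 hardware is present.

```cpp
#include <cassert>

// Sketch: gate FP16 scalar code on __ARM_FEATURE_FP16_SCALAR_ARITHMETIC,
// not on __ARM_FP16_FORMAT_IEEE, so targets without FP16 HW take the
// portable float path instead of hitting an undefined __floatdihf.
float HalfAwareScale(float x) {
#if defined(__ARM_FEATURE_FP16_SCALAR_ARITHMETIC)
  __fp16 h = static_cast<__fp16>(x);   // safe: scalar FP16 HW is available
  return static_cast<float>(h) * 2.0f;
#else
  return x * 2.0f;                     // portable fallback on all other targets
#endif
}
```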
The existing sequence of pip install commands fetches and installs torch==1.0.1.post2, then fetches an unpinned version of torchvision. Recent torchvision packages hardwire the specific torch version they depend on; the overall effect is that we install a pinned torch version and then replace it with whatever version the torchvision package depends on. The most recent torchvision==0.4.1 package results in some test case failures. This patch pins torchvision back to 0.4.0, the most recent version with which the test suite worked. The explicit torch install is removed because it is implied and pinned as a dependency of torchvision. Change-Id: Ib30bf6aed79ff130ea15ef5134fefb0508790574
Marcus Shawcroft committed
-
* [TOPI][x86] Cascade lake support.
* Jenkins test debug 1.
* Testing cascade lake alone.
Animesh Jain committed
-
* Improve build error when no lowered funcs
* Switch from fatal to warning
Logan Weber committed
-
- 16 Oct, 2019 5 commits
-
-
* [RUNTIME] Refactor object python FFI to the new protocol. This is a pre-req to bring the Node system under the object protocol. Most of the code reflects the current code in the Node system.
- Use new instead of init so subclasses can define their own constructors
- Allow registering via name, besides type index
- Introduce the necessary runtime C API functions
- Refactored Tensor and Datatype to directly use constructors
* address review comments
Tianqi Chen committed
-
Tianqi Chen committed
-
shoubhik committed
-
* add and fix gradients
* fix linter issues
Altan Haan committed
-
Animesh Jain committed
-
- 15 Oct, 2019 5 commits
-
-
* Fix infer type of kernel in dense.
* - Moving the check of weight being nullptr up, as it is needed in both branches now.
- Adding a test case validating that the data dtype and kernel dtype can be different.
* - Fix the dtype check for the weight. If the weight is not present, then we use the data dtype.
shoubhik committed
-
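The dtype fallback described above can be sketched as follows (the function name and string-based dtype representation are hypothetical, chosen only to illustrate the rule): an explicitly given weight dtype wins, otherwise the data dtype is used.

```cpp
#include <cassert>
#include <string>

// Illustrative dtype inference for a dense op (not TVM's actual code):
// when the weight dtype is unspecified (empty here), fall back to the
// data dtype; an explicit weight dtype may differ from the data dtype.
std::string InferWeightDType(const std::string& data_dtype,
                             const std::string& weight_dtype /* may be empty */) {
  return weight_dtype.empty() ? data_dtype : weight_dtype;
}
```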
* [Relay][AlterOpLayout] NHWC to NCHWc pad operator.
* Fixing culprit.
* Flaky test 1.
* Flaky test 2.
Animesh Jain committed
-
Sergei Grechanik committed
-
* [RUNTIME] Introduce new object protocol. This PR introduces a new object protocol to unify the node and object. We also updated the existing runtime::vm code to make use of the new system. Update to the node will be done in a follow-up PR. Other changes:
- Remove object-related code in the json serializer, as that code logic was not complete and we have a separate serializer for the VM; can revisit later.
* address review comment
* Fix the child slot logic
Tianqi Chen committed
-
Animesh Jain committed
-
- 14 Oct, 2019 4 commits
-
-
Tianqi Chen committed
-
Tianqi Chen committed
-
Animesh Jain committed
-
Tianqi Chen committed
-
- 13 Oct, 2019 2 commits
-
-
Leo Chen committed
-
This implementation provides casting to the limited number of dtypes that tflite currently supports for the placeholder op. INT64 is added to the possible dtypes as it appears to be supported according to the tflite schema.
Ina Dobreva committed
-
- 12 Oct, 2019 1 commit
-
-
Thierry Moreau committed
-
- 11 Oct, 2019 6 commits
-
-
* overload half operators for cuda codegen
* add float16 te test_op_level1
* fix test_op_level1.py
* fix lint
* disable fp16 test if gpu does not support
* disable fp16 test if gpu does not support
* bypass float16 test if gpu does not support float16
Xingyu Zhou committed
-
* Fix layer norm for fp16
* [Fix] Fix arange for fp16
* [Fix] Fix mxnet frontend for fp16
* [Fix] Fix arange for fp16
* remove comments
* x
* fix nnvm
Haichen Shen committed
-
* [tvm][any] broadcast with values other than 1
* Add test for incompatible runtime values
* Remove hybrid script compact buffer binding
* retrigger ci
Zhi committed
-
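The commit above generalizes broadcasting beyond the dimension-1 case and adds a test for incompatible runtime values. A hedged sketch of the numpy-style pairwise rule involved (the function is an illustration, not TVM's shape machinery; with Any dims the check is deferred to runtime):

```cpp
#include <cassert>

// Pairwise broadcast rule: two extents are compatible when they are
// equal or one of them is 1. Returns the broadcast extent, or -1 when
// the pair is incompatible (where a runtime check would raise an error).
long BroadcastDim(long a, long b) {
  if (a == b) return a;
  if (a == 1) return b;
  if (b == 1) return a;
  return -1;  // incompatible runtime values
}
```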
Peter Yeh committed
-
* update build_gcn.py tutorial updates
* support bias in GCN layer
* download pretrained gcn model
* verify model accuracy
* use time_evaluator to measure runtime
* fix adding bias in gcn layer
* remove printing output
* fix small bug
* add DGL-PyTorch comparison into the build_gcn tutorial
* add accuracy testing
* adjust import order
* handle different dgl versions
* update number for dgl version checking
Chien-Yu Lin committed
-
Animesh Jain committed
-