1. 25 Oct, 2019 3 commits
  2. 24 Oct, 2019 12 commits
  3. 23 Oct, 2019 3 commits
  4. 22 Oct, 2019 5 commits
  5. 21 Oct, 2019 6 commits
    • [Relay][QNN] Add unit test for int8 (#4159) · 6f9d028b
      * [bugfix][codegen] fix casting bug in llvm codegen
      
      * update example
      
      * retrigger ci
      
      * check llvm version
      Zhi committed
    • [Relay][Pass] Count MAC for BatchMatMul (#4157) · e0d286a1
      * count MAC for BatchMatMul
      
      * update doc
      Haichen Shen committed
    • Fix missspelling (#4166) · d660e514
      Replace "After connecting he usb" with "After connecting the usb".
      Monkeyking committed
    • Add support for quantized multiply to Relay (#4141) · e5835425
      This patch adds multiply operator for quantized tensors.
      The details of the quantized multiplication are outlined
      in the code.
      
      This builds on pull request 3927 and includes the changes
      Animesh mentions in the comments on that request.
      
      Change-Id: I555715b53d0266a91d5c03dc3dfe8fc31e7ce4e1
      ekalda committed
    • [REFACTOR][NODE][RUNTIME] Move Node to the new Object protocol. (#4161) · 7895adb2
      * [REFACTOR][NODE][RUNTIME] Move Node to the new Object protocol.
      
      This PR removes the original node system and makes Node a subclass of Object.
      This is a major refactor towards a better unified runtime object system.
      
      List of changes in the refactor:
      
      - We now hide data_ field, use Downcast explicitly to get a sub-class object.
      - Removed the node system FFI in python.
      - Removed the node C API, instead use PackedFunc for list and get attrs.
      - Change relay::Op::set_attr_type_key(attr_key_name) to relay::Op::set_attr_type<AttrType>().
        - This change was necessary because of the new Object registration mechanism.
        - Subsequent changes were made to the op registrations.
        - The change revealed a few previous problems that are now fixed.
      - Patched up a few missing node type registrations.
        - Now an error is raised when an object whose type is not registered is encountered.
      - The original node.h and container.h are kept in the same location.
      - Calling convention: kObjectHandle now equals the old kNodeHandle, kNodeHandle is removed.
      - IRFunctor now dispatches on ObjectRef.
      - Update to the new type checking API: is_type, derived_from are replaced by IsInstance.
      - Removed .hash member function, instead use C++ convention hasher functors.
      
      * Address review comments
      Tianqi Chen committed
  6. 20 Oct, 2019 2 commits
  7. 18 Oct, 2019 6 commits
  8. 17 Oct, 2019 3 commits
    • [relay][vm] Separate VM runtime with executable (#4100) · 4052de6d
      * [relay][vm] Separate VM runtime with executable
      
      * Address comments
      
      * move ctx back to vm
      
      * make only vm related fields and methods protected
      
      * integrate serialization/deserialization into executable
      
      * create stream
      Zhi committed
    • [PATCH] Fix undefined __floatdihf in libtvmruntime.so on aarch64. (#4119) · cf046972
      The Arm architecture provides optional FP16 floating point support in two alternative formats: IEEE and an alternative Arm format.
      
      The ACLE (Arm C Language Extension) defined preprocessor symbol __ARM_FP16_FORMAT_IEEE can be used to distinguish between implementations providing IEEE and the Arm alternative format, but cannot, on its own, be used to determine whether FP16 HW support is actually present.
      
      Testing this preprocessor symbol can lead to undefined __floatdihf at runtime on an aarch64 target where no FP16 HW is present.
      
      The relevant preprocessor symbol to determine whether FP16 HW support is present in the target is __ARM_FEATURE_FP16_SCALAR_ARITHMETIC; this symbol implies __ARM_FP16_FORMAT_IEEE.
      
      The relevant preprocessor symbols are defined by the ACLE standard, section 5.5.21 16-bit floating-point data processing operations, https://static.docs.arm.com/101028/0008/Q2-ACLE_2019Q2_release-0008.pdf
      lhutton1 committed
    • [DOCKER] Pin torchvision==0.4.1 (#4140) · a8a98317
      The existing sequence of pip install commands fetches and installs
      torch==1.0.1.post2, then fetches an unpinned version of torchvision.
      Recent torchvision packages hardwire the specific torch version they
      depend on, so the overall effect is that we install a pinned torch
      version and then replace it with whatever version the torchvision
      package depends on.
      
      The most recent torchvision==0.4.1 package results in some test case
      failures.
      
      This patch pins torchvision back to 0.4.0, the most recent version
      with which the test suite worked.  The explicit torch install is
      removed because torch is implied and pinned as a dependency of
      torchvision.
      
      Change-Id: Ib30bf6aed79ff130ea15ef5134fefb0508790574
      Marcus Shawcroft committed