1. 23 Apr, 2020 2 commits
    • [RFC] Pytest environment improvements (#5421) · e149db28
      * [RFC] Pass pytest options globally.
      
      In many places it is useful to have a global pytest flag. For the
      build and test of tvm, I would like to be able to pass pytest
      options globally, as part of development or CI flows where one would
      like to regularly measure other things, such as the pytest coverage
      data I would like to experiment with across the stack.
      
      This has been achieved with an additional setup-pytest-env.sh file in
      tests/scripts, rather than adding something to every single task test
      script, which is something I would like to avoid.
      
      This now means the -v option to pytest is superfluous. I did consider
      having a pytest.ini file, but that doesn't allow passing in arbitrary
      environment variables, so this seems to be the compromise.
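      As a sketch of the idea (the helper name and defaults below are illustrative, not the actual contents of setup-pytest-env.sh), pytest's standard PYTEST_ADDOPTS environment variable is the hook that makes globally injected options work:

```python
import os

def setup_pytest_env(extra_opts=""):
    # Illustrative stand-in for a shared setup-pytest-env.sh: compose
    # global options (e.g. a coverage flag) into PYTEST_ADDOPTS, which
    # pytest reads automatically, so individual task test scripts no
    # longer need to pass -v themselves.
    parts = ["-v", extra_opts, os.environ.get("PYTEST_ADDOPTS", "")]
    os.environ["PYTEST_ADDOPTS"] = " ".join(p for p in parts if p)
    return os.environ["PYTEST_ADDOPTS"]
```

      For example, setup_pytest_env("--cov=tvm") would make every later pytest invocation in the same process environment run with -v --cov=tvm.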
      
      * Improve other use case documentation
      
      * Rationalize pytest environment.
      
      * Remove the setting from docker/with_same_user.
      * Take the opportunity to migrate common PYTHONPATH and
      TVM_PATH into the common environment setting.
      
      * Fixup vta fsim
      
      * Be more explicit with common PYTHONPATH
      
      * Fix python path for task_python_vta_fsim.sh properly
      
      * Fix nit in documentation.
      Ramana Radhakrishnan committed
    • [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings (#5416) · 1f6c498b
      * [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings
      
      * Add note block
      Tianqi Chen committed
  2. 16 Apr, 2020 1 commit
  3. 12 Apr, 2020 1 commit
  4. 10 Apr, 2020 1 commit
  5. 09 Apr, 2020 1 commit
  6. 07 Apr, 2020 1 commit
    • [RUNTIME] Implement TVMDSOOp(TensorFlow custom op) for TVM runtime (#4459) · 53a4ad35
      * Add implementation of TVMDSOOp
      
      * feat: Update cmake script to work with c++11 and in-repo build
      
      * feat: Use libtvm as oplib dependency
      
      * fix: Add missing link dependency to libtvm
      
      * feat: Update tf tvmdso op by review comments
      
      * fix: Update with pr comments
      
      * fix: Fix lint
      
      * feat: Add test script and fix gpu shape
      
      * feat: Add test script and fix gpu shape
      
      * fix: Conditional build tftvm op for gpu
      
      * fix: Conditional build tftvm op for gpu
      
      * fix: Fix pylint of tf_op module.py
      
      * fix: Fix pylint of tf_op module.py
      
      * feat: Conditional enable gpu test for tftvm op
      
      * feat: Conditional enable gpu test for tftvm op
      
      * feat: Add tf_tvmdsoop test script as an app test
      
      * fix: Fix gpu/cpu enabled check on tvm in test script
      
      * fix: Make tf tvmdso op test script runnable with pytest
      
      * remove unused test script test_tfop_module.py
      
      * fix: Remove pushd & popd in tfdsoop test script
      
      * fix: Upgrade tftvmop use python3 to find TensorFlow
      
      * fix: Upgrade tftvmop use python3 to find TensorFlow
      
      * fix: Change target_link_options to target_link_libraries
      
      * fix: Add tftvmop build script's c++ option
      
      * fix: Add tvm library path to tf op test library path
      
      * fix: Debug ci build for tftvm dso op
      
      * fix: Fix cmake error and skip tfop test
      
      * fix: Fix typo and indentation issues
      
      * feat: Use TF list input op def
      
      * fix: Fix style and unexpected changes
      
      Co-authored-by: baoxinqi <baoxinqi@4paradigm.com>
      Co-authored-by: Chen Dihao <chendihao@4paradigm.com>
      Co-authored-by: wrongtest <wrongtest@4paradigm.com>
      tobe committed
  7. 02 Apr, 2020 2 commits
  8. 31 Mar, 2020 1 commit
  9. 30 Mar, 2020 4 commits
  10. 29 Mar, 2020 2 commits
  11. 23 Mar, 2020 2 commits
  12. 21 Mar, 2020 1 commit
  13. 12 Mar, 2020 1 commit
  14. 10 Mar, 2020 1 commit
  15. 09 Mar, 2020 1 commit
    • [Runtime] MISRA-C compliant TVM runtime (#3934) · 450f7163
      * implement a MISRA-C compliant TVM runtime
      
      * working on bundle_deploy_c demo
      
      * move header files into include dir
      
      * fix compatibility issues
      
      * fix compatibility issues
      
      * resolve most of the warnings and errors
      
      * implement c_backend_api
      
      * introduce bridge
      
      * working well
      
      * move to header files and bundle.c into src/runtime/crt
      
      * clean up
      
      * satisfy linter
      
      * clean up
      
      * test with the cat image
      
      * remove synset
      
      * refactoring
      
      * refactoring
      
      * refactoring
      
      * initial crt_runtime_api.c
      
      * improved compatibility with g++
      
      * using exposed API in c_runtime_api.h
      
      * call from c_runtime_api.h
      
      * clean up
      
      * lint
      
      * merge into apps/bundle_deploy directory
      
      Change-Id: I51904db81b8589e65d107d8ca77b47452e3812b5
      
      * make the demo run in ci
      
      Change-Id: I2c24f8b592508833d3555311c2b24d1931f19385
      
      * address review comments
      
      Change-Id: I027ddff15c31fb4da0bd0e461427dce619de1f93
      
      * release
      
      Change-Id: I5ad5bb8426468aac9fc8d074e56ddea358a7fd91
      
      * fix ci testing
      
      Change-Id: Ic2e82fb3051b6c254ef32a964f976b61e3e5fe4d
      
      * add test case for misra c runtime
      
      Change-Id: Ie0dfd0ade6be4665b4384db7d260a6c69b35010f
      
      * fread files in testing to avoid calling xxd
      
      Change-Id: Ie7fbc16b4b0b9509918d986a841f443900813bef
      Liangfu Chen committed
  16. 28 Feb, 2020 2 commits
  17. 25 Feb, 2020 1 commit
    • [Relay] Add a PyTorch to Relay Parser (#4497) · 87c20bb2
      * Add a PyTorch to Relay parser
      
      * Add alexnet, googlenet, mnasnet, shufflenet wip
      
      * Fix lint
      
      * Remove fix for shufflenet
      
      * Lower check
      
      * Pull changes from neo-ai/tvm changes
      
      * Remove commented out section
      
      * Use infer_shape everywhere
      
      * Change back to using trace instead of path in from_pytorch
      
      * Parse state_dict to add param names
      
      * Umbrella single_op under test_forwards
      
      * Remove print and cleanup call
      
      * Check if update to test broke CI
      
      * Retrigger CI
      
      * Add back in updated tests
      
      * Try splitting up tests
      
      * First pass at flexible typing, implemented for ones
      
      * Add int32 for all ops
      
      * Remove print statements
      
      * Fix lint
      
      * Broad except
      
      * Add other tensor types
      
      * Temporarily use old tests
      
      * Retrigger CI
      
      * Lower type names
      
      * Use numpy to convert in dense op
      
      * Fix lint
      
      * Remove print
      
      * Need to cleanup but verify int32 works for add
      
      * Rough tests for different types, a lot of types are not supported on CPU
      
      * Probably doesn't build, need to save work as I have to switch branches (constantly)
      
      * Parse param type
      
      * Remove print stmt in parser
      
      * Clean up some code
      
      * Working on float32 for bn
      
      * Add resnet18 double type
      
      * Fix lint
      
      * Temporarily move PT tests first
      
      * Temporarily add back refactored tests to fix mem issue
      
      * Add more type test and temp remove some tests
      
      * Comment out tests, hopefully CI prints a trace
      
      * Get stack trace
      
      * Remove operator dict key, rename op_name to node_id, remove dead code
      
      * Make relay map a list
      
      * Remove some hacky string stuff
      
      * Move to PyTorch 1.4
      
      * Remove input_type as param
      
      * Remove _get_fill_value, fix full ops
      
      * Remove unused code and combine ops for identity and none
      
      * Remove fn_param
      
      * Clean up main loop
      
      * Remove useless if/else for outputs
      
      * Remove ir_names, only used once
      
      * Remove some string hacking
      
      * Remove string parsing to get output name
      
      * Fix bug with output sizes of nodes
      
      * Use attributeNames in parse ops
      
      * Remove continue and add_op in parse_op
      
      * Do this everywhere, use assert instead of explicitly type casting
      
      * Remove unnecessary swap
      
      * Slight refactor for elemwise input parse
      
      * Use a copy of graph everywhere
      
      * Rename nid_to_node_name
      
      * Refactor parse import prereqs
      
      * Clean up input node kind check
      
      * Clean up conditionals
      
      * Clean up add_op
      
      * Cleanup type for ones and zeros op
      
      * Fix lint
      
      * Add torch install to CI
      
      * Actually use torch
      
      * Try moving import torch to only where it's needed
      
      * Import torch for CI
      
      * Use take op for select
      
      * Temporarily add ignore for jit inline pass for CI
      
      * Use CompleteTensorType, might be a PT 1.2 only thing
      
      * Use different types in elemwise op
      
      * Use float16 ones
      
      * Fix float16 test
      
      * Remove the temp docker changes
      
      * Remove temp test
      
      * Temporarily comment out original tests
      
      * Remove file
      
      * Empty cache after each test
      
      * Add some prints and lower input sizes
      
      * Try using no grad
      
      * Trying to globally set grad off
      
      * Use no grad for torchvision
      
      * Remove xfail tests
      
      * Remove VGG and AlexNet due to some issues
      
      * Combine pooling tests
      
      * Remove extra test file
      
      * Remove single op, remove larger pooling tests
      
      * Remove maxpool3
      
      * Remove debug prints
      
      * Remove inference call and add no_grad in measure latency
      
      * Use standard string start char
      
      * Remove redundant infer_shape in slice
      
      * Convert most to checks to just expr
      
      * Remove extra paren
      
      * More refactor of isinstance
      
      * Add helper for creating typed constants
      
      * Assert instead of return when no matching type
      
      * Remove network variants
      
      * Add no_grad when forward, remove detach, fix lint
      
      * Change isinstance to expr in transpose
      
      * Use opnotimplemented, refactor
      
      * Fix full ops, remove duplicate tests
      
      * Never use shape field unless we know the type
      
      * Remove comma, retrigger CI
      
      * Add paren, retrigger CI
      
      * Use inline if-else for flags
      
      * Throw exception instead of assert
      
      * Remove version check for CI
      
      * Check version when doing inline pass
      
      * Fix lint
      
      * Lower more input sizes
      
      * Add new line, conv2d only accepts weight as expr
      
      * Use tvm.runtime.ndarray
      
      * Remove change to torch version install
      
      * Try no grad for mobilenet
      
      * Fix lint
      
      * Fix lint again
      
      * Revert to last passing
      
      * Delete test files
      
      * Ignore lint
      
      * Revert back
      
      * Comment out mobilenet
      
      * Clean up compare compiled and baseline outputs
      
      * Use IRModule
      
      * Add todos
      
      * Refactor use_bias
      
      * Add todo for fix conv op channels
      
      * Change input to data type
      
      * Remove todo
      
      * Handle channel multiplier > 1
      Alex Wong committed
  18. 16 Feb, 2020 1 commit
  19. 04 Feb, 2020 1 commit
  20. 17 Jan, 2020 2 commits
    • [VTA][TSIM] Enable TSIM CI Testing (#4407) · 2738eddf
      * Update task_python_vta.sh
      
      * install sbt=1.1.1 with apt-get
      
      * update verilator_opt
      
      * install verilator with major version 4.0
      
      * disable multi-threading for now
      
      * bug fix for correcting uop fetch address in LoadUop module
      
      * bug fix for correcting uop fetch address in LoadUop module
      
      * adjustment to read from dram_offset
      
      * enable USE_THREADS with verilator 4.x
      
      * DEBUG: try avoid core dump with verilator 4.x
      
      * bug fix in LoadUop module
      
      * log mega cycles in tsim
      
      * download cat.png to avoid fetching in each run
      
      * bug fix in LoadUop module
      
      * solve dram_even/sram_even issue
      
      * bug fix
      
      * introduce scalalint in ci
      
      * speedup tsim in ci
      
      * bug fix
      
      * lint scala code before building
      
      * disable multi-threading
      
      * split fsim/tsim script
      
      * update Jenkins settings
      
      * duplicate task_python_vta_fsim.sh as task_python_vta.sh for now
      
      Co-authored-by: Thierry Moreau <tmoreau@octoml.ai>
      Liangfu Chen committed
    • [VTA] Update Jenkinsfile for VTA test with TSIM (#4734) · 03ffb01c
      * [VTA] Update Jenkinsfile for VTA test with TSIM
      
      * duplicate task_python_vta.sh multiple copies for now
      Liangfu Chen committed
  21. 07 Jan, 2020 2 commits
    • [RUNTIME][DSO] Improve TVMBackendPackedCFunc to allow return val (#4637) · 77c47748
      * [RUNTIME][DSO] Improve TVMBackendPackedCFunc to allow return value.
      
      Previously the signature of LibraryModule's PackedFunc did not support
      a return value. This wasn't a limitation for our current use case, but
      could become one as we start to generate more interesting functions.
      
      This feature also starts to get interesting as we move towards the
      unified object protocol and start to pass objects around.
      This PR enhances the function signature to allow return values.
      
      We also created two macros TVM_DLL_EXPORT_PACKED_FUNC and TVM_DLL_EXPORT_TYPED_FUNC
      to allow manual creation of functions that can be loaded by a LibraryModule.
      
      Examples are added in apps/dso_plugin_module.
      The change to TVMBackendPackedCFunc is backward compatible, as
      previous functions will simply ignore the return value field.
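      A loose Python analogy (not TVM's actual C ABI) of the extended calling convention: the caller hands the callee an out-slot for the return value, and older functions that never touch the slot remain valid:

```python
def call_packed(func, args):
    # Analogy for the extended TVMBackendPackedCFunc convention: the
    # callee gets an out-slot for a return value plus a type code; a
    # legacy function that ignores the slot still works, and the caller
    # simply observes "no return value".
    ret = {"value": None, "type_code": None}  # out return value / type code
    func(args, ret)
    return ret

def legacy_func(args, ret):
    # Old-style function: ignores the return slot entirely.
    args.append("side effect")

def returning_func(args, ret):
    # New-style function: fills the return slot.
    ret["value"] = sum(args)
    ret["type_code"] = "int"
```

      Both styles go through the same caller, which is the backward-compatibility property the commit message describes.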
      
      * address review comments
      Tianqi Chen committed
  22. 30 Dec, 2019 1 commit
    • [REFACTOR][RUNTIME] Update NDArray use the Unified Object System (#4581) · 55bd786f
      * [REFACTOR][RUNTIME] Move NDArray to Object System.
      
      Previously NDArray has its own object reference counting mechanism.
      This PR migrates NDArray to the unified object protocol.
      
      The calling convention of NDArray remained intact.
      That means NDArray still has its own type_code and
      its handle is still DLTensor compatible.
      
      In order to do so, this PR added a minimal runtime type check in
      TVMArgValue and RetValue, performed only when the corresponding
      type is a base type (ObjectRef) that could also refer to an NDArray.
      
      This means that even if we return a base reference object ObjectRef
      which refers to an NDArray, the type_code will still be translated
      correctly as kNDArrayContainer. If we assign a non-base type (say Expr)
      that we know is not compatible with NDArray at compile time, no runtime
      type detection is performed.
      
      This PR also adopts the object protocol for NDArray sub-classing and
      removes the legacy NDArray subclass protocol.
      Examples in apps/extension are now updated to reflect that.
      
      Making NDArray an Object brings all the benefits of the object system.
      For example, we can now use the Array container to store NDArrays.
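      The conditional runtime type detection can be mimicked in plain Python (a rough analogy, not the actual C++ implementation): only a slot statically typed as the base ObjectRef needs a runtime check to see whether it actually holds an NDArray:

```python
class ObjectRef:            # base reference type in the analogy
    pass

class NDArray(ObjectRef):   # may hide behind an ObjectRef-typed slot
    pass

class Expr(ObjectRef):      # known at compile time not to be an NDArray
    pass

def type_code_for(value, static_type):
    # Analogy for TVMArgValue/RetValue: a runtime isinstance check is
    # performed only when the static type is the base ObjectRef, which
    # could also refer to an NDArray; other static types skip the check.
    if static_type is NDArray or (
        static_type is ObjectRef and isinstance(value, NDArray)
    ):
        return "kNDArrayContainer"
    return "kObjectHandle"
```

      An NDArray behind an ObjectRef-typed slot is still reported as kNDArrayContainer, while a known non-NDArray type such as Expr never pays for the runtime check.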
      
      * Address review comments
      Tianqi Chen committed
  23. 24 Dec, 2019 1 commit
  24. 22 Dec, 2019 1 commit
  25. 18 Dec, 2019 1 commit
  26. 24 Nov, 2019 1 commit
  27. 14 Nov, 2019 1 commit
  28. 14 Oct, 2019 1 commit
  29. 08 Sep, 2019 1 commit
  30. 15 Aug, 2019 1 commit