C++ additionally supports deployment as a system module.
This process needs a few additional options for the NNVM build, as given below.
- For an llvm target, append ```--system-lib```, i.e. ```target=llvm --system-lib```
- For a GPU (or other non-llvm) build, the additional option should be given to target_host, i.e. ```target_host=llvm --system-lib```
Module export requires an additional option so that the output is saved without compiling it into a shared library: ```lib.export_library(path, fcompile=False)```
The output of the above API is a tar archive containing an object file ```(lib.o)``` and a C++ source file ```(devc.cc)``` which embeds the device blob. These two files should be compiled along with the other files or objects while building the C++ application.
Please refer to this [Makefile](https://github.com/dmlc/tvm/tree/master/apps/howto_deploy/Makefile#L32) as a reference.
The C++ code that loads this system module requires the change shown below.
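A minimal sketch of that change, modeled on ```apps/howto_deploy/cpp_deploy.cc```: instead of loading a shared library from disk with ```Module::LoadFromFile```, the module is obtained from the statically linked system library through the global registry. The registry name ```module._GetSystemLib``` is the one used by the TVM runtime of this era; verify it against your runtime version.
```cpp
#include <tvm/runtime/module.h>
#include <tvm/runtime/registry.h>

// Fetch the module that was linked into this executable as a system library,
// instead of calling tvm::runtime::Module::LoadFromFile("deploy.so").
tvm::runtime::Module mod_syslib =
    (*tvm::runtime::Registry::Get("module._GetSystemLib"))();
```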
Depending on the build environment, the system object and the device blob source should be included in the final executable. An example with a Bazel build is given below.
```python
cc_library(
    name = "host_module",
    srcs = ["lib.o"],
    alwayslink = 1,
)

cc_library(
    name = "device_module",
    srcs = ["devc.cc"],
    alwayslink = 1,
)

cc_library(
    name = "tvm_runtime",
    srcs = ["libtvm_runtime_pack.cc"],
)

cc_binary(
    name = "bazel_deploy",
    srcs = ["cpp_deploy.cc"],
    deps = [
        ":tvm_runtime",
        ":host_module",
        ":device_module",
    ],
    linkopts = ["-lpthread", "-ldl"],
)
```
These build rules create
- a library ```host_module``` from ```lib.o```
- a library ```device_module``` from ```devc.cc```

These intermediate libraries can be used as dependencies of the final deployment application.
In Bazel, ```alwayslink=1``` forces the entire library to be linked into the application even though the application does not call any API from it directly (the system library registers its symbols with the TVM runtime through static initializers).
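Tying the pieces together, below is a hedged sketch of what ```cpp_deploy.cc``` could look like when it runs against the system module, again modeled on the ```apps/howto_deploy``` example. The exported function name ```addonesys```, the 1-D shape of 10 float32 elements, and the y = x + 1 semantics are placeholders from that example; substitute whatever your module actually exports.
```cpp
#include <dlpack/dlpack.h>
#include <tvm/runtime/c_runtime_api.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/packed_func.h>
#include <tvm/runtime/registry.h>

#include <cassert>
#include <cstdint>

int main() {
  // Fetch the module that was statically linked into this binary
  // (lib.o and devc.cc built with --system-lib).
  tvm::runtime::Module mod_syslib =
      (*tvm::runtime::Registry::Get("module._GetSystemLib"))();

  // Look up an exported function by name. "addonesys" is a placeholder;
  // use the name your module actually exports.
  tvm::runtime::PackedFunc f = mod_syslib.GetFunction("addonesys");
  assert(f != nullptr);

  // Allocate input/output tensors through the TVM C runtime API.
  int ndim = 1;
  int dtype_code = kDLFloat;
  int dtype_bits = 32;
  int dtype_lanes = 1;
  int device_type = kDLCPU;
  int device_id = 0;
  int64_t shape[1] = {10};
  DLTensor* x;
  DLTensor* y;
  TVMArrayAlloc(shape, ndim, dtype_code, dtype_bits, dtype_lanes,
                device_type, device_id, &x);
  TVMArrayAlloc(shape, ndim, dtype_code, dtype_bits, dtype_lanes,
                device_type, device_id, &y);

  for (int i = 0; i < 10; ++i) {
    static_cast<float*>(x->data)[i] = static_cast<float>(i);
  }

  // Invoke the compiled function; in the howto_deploy example y = x + 1.
  f(x, y);

  TVMArrayFree(x);
  TVMArrayFree(y);
  return 0;
}
```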