Unverified Commit 88d2a281 by Tianqi Chen Committed by GitHub

[RELAY][DOCS] Core Operator docs (#1821)

parent 0247b971
@@ -6,4 +6,5 @@ embedded languages in TVM stack.
.. toctree::
   :maxdepth: 2

   relay_op
   hybrid_script
Relay Core Tensor Operators
===========================
This page contains the list of core tensor operator primitives pre-defined in tvm.relay.
The core tensor operator primitives cover typical workloads in deep learning.
They can represent workloads in front-end frameworks and provide basic building blocks for optimization.
Since deep learning is a fast-evolving field, it is possible that there are operators not covered here.
.. note::

   This document directly lists the function signatures of
   these operators in the python frontend.
Overview of Operators
---------------------
**Level 1: Basic Operators**
This level enables fully connected multi-layer perceptron models; a short usage sketch follows the list below.
.. autosummary::
   :nosignatures:

   tvm.relay.log
   tvm.relay.sqrt
   tvm.relay.exp
   tvm.relay.add
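As a quick orientation, here is a minimal sketch that composes these Level 1
primitives into a Relay expression. The variable names are illustrative
assumptions, not part of the documented API:

.. code:: python

   from tvm import relay

   x = relay.Var("x")                         # a placeholder tensor variable
   y = relay.add(relay.log(x), relay.exp(x))  # compose log, exp, and add
   z = relay.sqrt(y)                          # elementwise sqrt of the sum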
**Level 2: Convolutions**
This level enables typical convnet models.
.. autosummary::
   :nosignatures:

   tvm.relay.nn.conv2d
**Level 3: Additional Math And Transform Operators**
**Level 4: Broadcast and Reductions**
**Level 5: Vision/Image Operators**
Level 1 Definitions
-------------------
.. autofunction:: tvm.relay.log
.. autofunction:: tvm.relay.sqrt
.. autofunction:: tvm.relay.exp
.. autofunction:: tvm.relay.add
Level 2 Definitions
-------------------
.. autofunction:: tvm.relay.nn.conv2d
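For concreteness, a hedged usage sketch of ``tvm.relay.nn.conv2d`` with the
default ``NCHW``/``OIHW`` layouts; the shapes and names are illustrative
assumptions, not prescribed by the API:

.. code:: python

   from tvm import relay

   data = relay.Var("data")      # e.g. a (1, 3, 224, 224) NCHW tensor
   weight = relay.Var("weight")  # e.g. a (64, 3, 3, 3) OIHW kernel
   out = relay.nn.conv2d(data, weight,
                         strides=(1, 1), padding=(1, 1),
                         channels=64, kernel_size=(3, 3))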
@@ -329,6 +329,10 @@ inline void SetValue<std::string>(std::string* ptr, const TVMArgValue& val) {
  }
}
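// Specialization for Type arguments: convert via TVMArgValue's conversion operator.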
template<>
inline void SetValue(Type* ptr, const TVMArgValue& val) {
*ptr = val.operator Type();
}
template<>
inline void SetValue<double>(double* ptr, const TVMArgValue& val) {
  if (val.type_code() == kDLFloat || val.type_code() == kDLInt) {
    *ptr = val.operator double();
...
@@ -6,10 +6,11 @@ from . import expr
from . import env
from . import ir_pass
from . import ir_builder
# Root operators
from .op import Op
from .op.tensor import *
from . import nn
# Span
Span = base.Span
...
# pylint: disable=wildcard-import, unused-import, unused-wildcard-import
"""Neural network related operators."""
# Re-export in a specific file name so that autodoc can pick it up
from .op.nn import *
...
# pylint: disable=wildcard-import
"""Neural network related operators."""
from __future__ import absolute_import as _abs
from .nn import *
...
"""Constructor APIs"""
from ...._ffi.function import _init_api
_init_api("relay.op.nn._make", __name__)
@@ -15,7 +15,30 @@ def conv2d(data,
           weight_layout="OIHW",
           out_layout="",
           out_dtype=""):
"""Two dimensional convolution operator. r"""2D convolution.
This operator takes the weight as the convolution kernel
and convolves it with data to produce an output.
In the default case, where the data_layout is `NCHW`
and weight_layout is `OIHW`, conv2d takes in
a data Tensor with shape `(batch_size, in_channels, height, width)`,
and a weight Tensor with shape `(channels, in_channels, kernel_size[0], kernel_size[1])`
to produce an output Tensor with the following rule:
.. math::
\mbox{out}[b, c, y, x] = \sum_{dy, dx, k}
\mbox{data}[b, k, \mbox{strides}[0] * y + dy, \mbox{strides}[1] * x + dx] *
\mbox{weight}[c, k, dy, dx]
Padding and dilation are applied to data and weight respectively before the computation.
This operator accepts data layout specification.
Semantically, the operator will convert the layout to the canonical layout
(`NCHW` for data and `OIHW` for weight), perform the computation,
then convert to the out_layout.
Parameters Parameters
---------- ----------
@@ -29,7 +52,7 @@ def conv2d(data,
        The strides of convolution.

    padding : tuple of int, optional
        The padding applied to both sides of the inputs before convolution.

    dilation : tuple of int, optional
        Specifies the dilation rate to be used for dilated convolution.
@@ -37,6 +60,12 @@ def conv2d(data,
    groups : int, optional
        Number of groups for grouped convolution.

    channels : int, optional
        Number of output channels of this convolution.

    kernel_size : tuple of int, optional
        The spatial dimensions of the convolution kernel.

    data_layout : str, optional
        Layout of the input.
@@ -44,10 +73,15 @@
        Layout of the weight.

    out_layout : str, optional
        Layout of the output. By default, out_layout is the same as data_layout.

    out_dtype : str, optional
        Specifies the output data type for mixed precision conv2d.
    Returns
    -------
    result : relay.Expr
        The computed result.
""" """
return _make.conv2d(data, weight, strides, padding, dilation, return _make.conv2d(data, weight, strides, padding, dilation,
groups, channels, kernel_size, data_layout, groups, channels, kernel_size, data_layout,
...
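The math block above implies the standard convolution output-size arithmetic.
A small helper sketch, not part of this diff (the name and defaults are
illustrative assumptions):

.. code:: python

   def conv2d_out_size(size, kernel, stride, pad, dilation=1):
       # effective kernel extent once dilation is applied
       effective_kernel = dilation * (kernel - 1) + 1
       return (size + 2 * pad - effective_kernel) // stride + 1

   conv2d_out_size(224, kernel=3, stride=1, pad=1)  # -> 224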
@@ -12,9 +12,8 @@ from ..expr import Tuple
# - Not put too much burden on FFI to support complicated features
#   like default value and keyword arguments
def log(data):
"""Take log of data. """Compute elementwise log of data.
Parameters Parameters
---------- ----------
@@ -30,7 +29,7 @@ def log(data):
def exp(data):
    """Compute elementwise exp of data.

    Parameters
    ----------
@@ -46,7 +45,7 @@ def exp(data):
def sqrt(data):
    """Compute elementwise sqrt of data.

    Parameters
    ----------
@@ -62,7 +61,7 @@ def sqrt(data):
def add(lhs, rhs):
    """Addition with numpy-style broadcasting.

    Parameters
    ----------
@@ -75,12 +74,20 @@ def add(lhs, rhs):
    -------
    result : relay.Expr
        The computed result.
    Examples
    --------
    .. code:: python

      x = relay.Var("a")   # shape is [2, 3]
      y = relay.Var("b")   # shape is [2, 1]
      z = relay.add(x, y)  # result shape is [2, 3]
    """
    return _make.add(lhs, rhs)


def subtract(lhs, rhs):
    """Elementwise subtraction with broadcasting.

    Parameters
    ----------
@@ -94,11 +101,13 @@ def subtract(lhs, rhs):
    result : relay.Expr
        The computed result.
    """
    return _make.subtract(lhs, rhs)

def equal(lhs, rhs):
    return _make.equal(lhs, rhs)


def concat(*args):
    """Concatenate the input tensors along the zero axis.
...
@@ -124,18 +124,18 @@ Expr MakeConv2D(Expr data,
  attrs->weight_layout = std::move(weight_layout);
  attrs->out_layout = std::move(out_layout);
  attrs->out_dtype = std::move(out_dtype);
  static const Op& op = Op::Get("nn.conv2d");
  return CallNode::make(op, {data, weight}, Attrs(attrs), {});
}
TVM_REGISTER_API("relay.op.nn._make.conv2d")
.set_body([](const TVMArgs& args, TVMRetValue* rv) {
    runtime::detail::unpack_call<Expr, 12>(MakeConv2D, args, rv);
  });

RELAY_REGISTER_OP("nn.conv2d")
.describe(R"code(2D convolution layer (e.g. spatial convolution over images).

This layer creates a convolution kernel that is convolved
...
@@ -8,8 +8,8 @@ Sometimes users may want customize some analysis and IR transformations
to adapt TVM to their own specialized hardware. This tutorial helps users write
a customized pass in TVM.

Prerequisites
-------------
Before reading this tutorial, we assume readers are already familiar with these topics:
@@ -31,7 +31,7 @@ import numpy as np
######################################################################
# We first write a very simple vector add and build it with the default schedule. Then, we use
# our customized lowering pass to manipulate the IR directly instead of using schedule primitives.
#
n = tvm.const(128)
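For readers following along, a minimal sketch of the vector add the comment
above refers to, using the TVM API of this era; the names ``A``, ``B``, ``C``,
and ``s`` are assumptions, not taken from the tutorial source:

.. code:: python

   A = tvm.placeholder((n,), name="A")
   B = tvm.placeholder((n,), name="B")
   C = tvm.compute((n,), lambda i: A[i] + B[i], name="C")
   s = tvm.create_schedule(C.op)  # default schedule, lowered and transformed below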
@@ -83,7 +83,7 @@ def find_width8(op):
# post-order callback. If you want to keep the original IR node, just return None. If you want to
# change the current node to some node, use the TVM IR maker interface to build it and return
# this value.
#
# .. note::
#
#     If the pre-order function is called and returns a value which is not None, the post-order
@@ -113,7 +113,7 @@ def vectorize(stmt):
    # The last list argument indicates what kinds of nodes will be transformed.
    # Thus, in this case only `For` nodes will call `vectorize8`.
    stmt = tvm.ir_pass.IRTransform(stmt, None, vectorize8, ['For'])

    return stmt
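As a hedged illustration of the callback contract described above
(``my_post_order`` and ``build_new_for`` are hypothetical names, not part of
the tutorial):

.. code:: python

   def my_post_order(op):
       # Return None to keep the original node; otherwise return the replacement.
       if isinstance(op, tvm.stmt.For):
           return build_new_for(op)  # hypothetical helper constructing the new node
       return None

   new_stmt = tvm.ir_pass.IRTransform(stmt, None, my_post_order, ['For'])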
#####################################################################
@@ -154,4 +154,3 @@ with tvm.build_config(add_lower_pass=[(1, vectorize)]) as cfg:
# - Wrap up the two above to write an IR-transformation function.
# - Use ``tvm.build_config`` to put this function into the TVM lowering pass.
#
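Putting it together, a sketch of hooking the pass into lowering, mirroring the
``tvm.build_config`` call shown in the hunk header above (``s``, ``A``, ``B``,
``C`` reuse the names assumed in the earlier vector-add sketch):

.. code:: python

   with tvm.build_config(add_lower_pass=[(1, vectorize)]):
       module = tvm.build(s, [A, B, C], target="llvm")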