Commit f33b9eae
authored Jun 07, 2019 by Alexander Pivovarov
committed by Yao Wang, Jun 07, 2019
Fix some typos in api docs (#3309)

parent df16182b
Showing 11 changed files with 26 additions and 26 deletions (+26 -26)
include/tvm/relay/error.h                  +1 -1
include/tvm/runtime/c_runtime_api.h        +1 -1
nnvm/include/nnvm/c_api.h                  +1 -1
nnvm/python/nnvm/frontend/common.py        +3 -3
nnvm/python/nnvm/frontend/tensorflow.py    +1 -1
python/tvm/relay/frontend/common.py        +3 -3
python/tvm/relay/frontend/tensorflow.py    +6 -6
python/tvm/relay/op/nn/nn.py               +7 -7
src/common/socket.h                        +1 -1
src/pass/arg_binder.h                      +1 -1
topi/python/topi/cuda/reduction.py         +1 -1
include/tvm/relay/error.h
@@ -83,7 +83,7 @@ struct Error : public dmlc::Error {
 *
 * The final mode represents the old mode, if we report an error that has no span or
 * expression, we will default to throwing an exception with a textual representation
- * of the error and no indication of where it occured in the original program.
+ * of the error and no indication of where it occurred in the original program.
 *
 * The latter mode is not ideal, and the goal of the new error reporting machinery is
 * to avoid ever reporting errors in this style.
include/tvm/runtime/c_runtime_api.h
@@ -187,7 +187,7 @@ TVM_DLL void TVMAPISetLastError(const char* msg);
/*!
 * \brief return str message of the last error
 * all function in this file will return 0 when success
- * and -1 when an error occured,
+ * and -1 when an error occurred,
 * TVMGetLastError can be called to retrieve the error
 *
 * this function is threadsafe and can be called by different thread
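As an aside, the convention documented in this hunk (0 on success, -1 on failure, with the message retrievable via TVMGetLastError) is roughly what the Python bindings wrap with a check-call helper. A minimal ctypes sketch of that pattern, not part of this commit, assuming a built libtvm shared library is on the loader path:

import ctypes

# Illustration only: load the TVM runtime and declare the error getter.
lib = ctypes.CDLL("libtvm.so")                 # assumption: library name/location
lib.TVMGetLastError.restype = ctypes.c_char_p  # TVMGetLastError returns const char*

def check_call(ret):
    # Per the docstring above, TVM C API functions return 0 on success, -1 on error.
    if ret != 0:
        raise RuntimeError(lib.TVMGetLastError().decode("utf-8"))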
nnvm/include/nnvm/c_api.h
@@ -60,7 +60,7 @@ NNVM_DLL void NNAPISetLastError(const char* msg);
/*!
 * \brief return str message of the last error
 * all function in this file will return 0 when success
- * and -1 when an error occured,
+ * and -1 when an error occurred,
 * NNGetLastError can be called to retrieve the error
 *
 * this function is threadsafe and can be called by different thread
nnvm/python/nnvm/frontend/common.py
@@ -58,7 +58,7 @@ class Renamer(object):
class AttrConverter(object):
-    """Common attribute conveter. An AttrConverter instance is a callable:
+    """Common attribute converter. An AttrConverter instance is a callable:
    ```
    attr_converter = AttrConverter(op_name, transforms={'a':'b', 'c':('d', 1)})
    new_op_name, new_attr = attr_converter(attrs)
@@ -72,12 +72,12 @@ class AttrConverter(object):
        `op_name = func(attr)`
    transforms : dict of `new_name, or (new_name, default_value, transform function)`
        If only a new_name is provided, it's like renaming the attribute name.
-        If default_value if provded, then the attribute is considered as optional.
+        If default_value if provided, then the attribute is considered as optional.
        If transform function is provided, the original attribute value is handled
        by transform function.
    excludes : list
        A list of excluded attributes that should `NOT` appear.
-        Raise NotImplementedError if occured.
+        Raise NotImplementedError if occurred.
    disables : list
        A list of attributes that is disabled in nnvm. Log warnings.
    ignores : list
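For context, the `transforms` dict documented in this hunk admits three shapes per attribute. An illustrative sketch, not from the commit, with made-up op and attribute names:

attr_converter = AttrConverter(
    'conv2d',
    transforms={
        'kernel_shape': 'kernel_size',                    # plain rename
        'group': ('groups', 1),                           # rename plus a default value
        'pads': ('padding', (0, 0), lambda v: tuple(v)),  # rename, default, transform fn
    },
    excludes=['auto_pad'],  # NotImplementedError if this attribute appears
    ignores=['T'],          # silently dropped
)
# new_op_name, new_attr = attr_converter(attrs)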
nnvm/python/nnvm/frontend/tensorflow.py
@@ -1177,7 +1177,7 @@ class GraphProto(object):
    -> All Const nodes are params.
    -> Last node is assumed as graph output.
    -> _output_shapes : Graph should be frozen with add_shapes=True.
-        Or user can pass input shape dictionaly optionally.
+        Or user can pass input shape dictionary optionally.
    -> DecodeJpeg, ResizeBilinear: These are dummy operators.
        Hence user should handle preprocessing outside.
    -> CheckNumerics: No implementation as of now for this.
python/tvm/relay/frontend/common.py
@@ -286,7 +286,7 @@ class ExprTable(object):
class AttrCvt(object):
-    """Common attribute conveter. An AttrConverter instance is a callable:
+    """Common attribute converter. An AttrConverter instance is a callable:
    ```
    attr_converter = AttrConverter(op_name, transforms={'a':'b', 'c':('d', 1)})
    new_op_name, new_attr = attr_converter(attrs)
@@ -300,12 +300,12 @@ class AttrCvt(object):
        `op_name = func(attr)`
    transforms : dict of `new_name, or (new_name, default_value, transform function)`
        If only a new_name is provided, it's like renaming the attribute name.
-        If default_value if provded, then the attribute is considered as optional.
+        If default_value if provided, then the attribute is considered as optional.
        If transform function is provided, the original attribute value is handled
        by transform function.
    excludes : list
        A list of excluded attributes that should `NOT` appear.
-        Raise NotImplementedError if occured.
+        Raise NotImplementedError if occurred.
    disables : list
        A list of attributes that is disabled in relay. Log warnings.
    ignores : list
python/tvm/relay/frontend/tensorflow.py
@@ -77,12 +77,12 @@ class AttrCvt(object):
        `op_name = func(attr)`
    transforms : dict of `new_name, or (new_name, default_value, transform function)`
        If only a new_name is provided, it's like renaming the attribute name.
-        If default_value if provded, then the attribute is considered as optional.
+        If default_value if provided, then the attribute is considered as optional.
        If transform function is provided, the original attribute value is handled
        by transform function.
    excludes : list
        A list of excluded attributes that should `NOT` appear.
-        Raise NotImplementedError if occured.
+        Raise NotImplementedError if occurred.
    disables : list
        A list of attributes that is disabled in relay. Log warnings.
    ignores : list
@@ -1567,7 +1567,7 @@ def _in_while_loop(control_flow_node_map, op_name):
    Parameters
    ----------
    control_flow_node_map : Dict[str, Set[str]]
-        A dictionay contains the unqiue control flow execution frame name to
+        A dictionay contains the unique control flow execution frame name to
        a set of primitive operators mapping.
    op_name : str
@@ -1619,7 +1619,7 @@ class Branch:
            return tf.add(4, 23)
        r = tf.cond(tf.less(i, j), f1, f2)
-    This condition statement should be coverted into Relay in the following
+    This condition statement should be converted into Relay in the following
    form:
    .. code-block:: python
@@ -1727,7 +1727,7 @@ class Loop:
        self._loop = None

    def _while_loop(self):
-        """An internal API to create a Relay recurisve call for a matched TF
+        """An internal API to create a Relay recursive call for a matched TF
        `while_loop` construct.
        """
        wl = tvm.relay.var('while_loop')
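The `_while_loop` docstring fixed above describes building a recursive Relay call for a matched TF `while_loop`. As a plain-Python aside (not Relay or TVM code), the underlying idea is that the loop's back edge becomes a recursive call:

def while_loop(i, acc):
    # Loop condition no longer holds: return the loop state.
    if not i < 10:
        return acc
    # Loop body plus recursive call in place of the back edge.
    return while_loop(i + 1, acc + i)

result = while_loop(0, 0)  # sum of 0..9, i.e. 45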
@@ -1796,7 +1796,7 @@ class GraphProto(object):
    -> All Const nodes are params.
    -> Last node is assumed as graph output.
    -> _output_shapes : Graph should be frozen with add_shapes=True.
-        Or user can pass input shape dictionaly optionally.
+        Or user can pass input shape dictionary optionally.
    -> DecodeJpeg, ResizeBilinear: These are dummy operators.
        Hence user should handle preprocessing outside.
    -> CheckNumerics: No implementation as of now for this.
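The GraphProto note above mentions passing an input shape dictionary when the frozen graph was not exported with add_shapes=True. A hedged usage sketch, not from this commit; the input name, shape, and exact return values are placeholders and may differ by model and TVM version:

from tvm import relay

# Assumes graph_def is an already-loaded, frozen tf.GraphDef.
shape_dict = {"input": (1, 224, 224, 3)}   # placeholder input name and shape
mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict)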
python/tvm/relay/op/nn/nn.py
@@ -67,7 +67,7 @@ def conv2d(data,
        The weight expressions.
    strides : tuple of int, optional
-        The strides of convoltution.
+        The strides of convolution.
    padding : tuple of int, optional
        The padding of convolution on both sides of inputs before convolution.
@@ -129,7 +129,7 @@ def conv2d_transpose(data,
        The weight expressions.
    strides : Tuple[int], optional
-        The strides of convoltution.
+        The strides of convolution.
    padding : Tuple[int], optional
        The padding of convolution on both sides of inputs.
@@ -842,7 +842,7 @@ def contrib_conv2d_winograd_without_weight_transform(data,
        The Tile size of winograd. E.g. 2 for F(2x2, 3x3) and 4 for F(4x4, 3x3)
    strides : tuple of int, optional
-        The strides of convoltution.
+        The strides of convolution.
    padding : tuple of int, optional
        The padding of convolution on both sides of inputs before convolution.
@@ -908,7 +908,7 @@ def contrib_conv2d_winograd_nnpack_without_weight_transform(data,
        The weight expressions.
    strides : tuple of int, optional
-        The strides of convoltution.
+        The strides of convolution.
    padding : tuple of int, optional
        The padding of convolution on both sides of inputs before convolution.
@@ -975,7 +975,7 @@ def contrib_conv2d_nchwc(data,
        The kernel expressions.
    strides : tuple of int, optional
-        The strides of convoltution.
+        The strides of convolution.
    padding : tuple of int, optional
        The padding of convolution on both sides of inputs before convolution.
@@ -1040,7 +1040,7 @@ def contrib_depthwise_conv2d_nchwc(data,
        The kernel expressions.
    strides : tuple of int, optional
-        The strides of convoltution.
+        The strides of convolution.
    padding : tuple of int, optional
        The padding of convolution on both sides of inputs before convolution.
@@ -1156,7 +1156,7 @@ def deformable_conv2d(data,
        The weight expressions.
    strides : tuple of int, optional
-        The strides of convoltution.
+        The strides of convolution.
    padding : tuple of int, optional
        The padding of convolution on both sides of inputs before convolution.
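All seven hunks in this file fix the same strides docstring across the conv2d family. A short hedged sketch, not from the commit, of how those strides/padding parameters are passed, using made-up tensor shapes:

from tvm import relay

data = relay.var("data", shape=(1, 3, 224, 224))    # NCHW input, made-up shape
weight = relay.var("weight", shape=(64, 3, 3, 3))   # OIHW kernel, made-up shape
out = relay.nn.conv2d(data, weight,
                      strides=(1, 1),        # the strides of convolution
                      padding=(1, 1),        # padding on both sides of the inputs
                      channels=64,
                      kernel_size=(3, 3))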
src/common/socket.h
@@ -373,7 +373,7 @@ class TCPSocket : public Socket {
  }
  /*!
   * \brief decide whether the socket is at OOB mark
-   * \return 1 if at mark, 0 if not, -1 if an error occured
+   * \return 1 if at mark, 0 if not, -1 if an error occurred
   */
  int AtMark() const {
#ifdef _WIN32
src/pass/arg_binder.h
@@ -50,7 +50,7 @@ namespace ir {
 * - assert bufferB.shape[1] == n + 3
 *
 * In general, this is a constraint solving problem. We have simplified assumption
- * over the binding declaration, such that we require the variable occured in
+ * over the binding declaration, such that we require the variable occurred in
 * constraint must be declared in argument list. So it is illegal to have signature
 * f(tA(shape=(n+3))) without any argument variable corresponds to n, even though
 * it is already enough to derive n from the input argument.
topi/python/topi/cuda/reduction.py
@@ -37,7 +37,7 @@ def _schedule_reduce(op, sch, is_idx_reduce=False):
    num_thread = 32
    target = tvm.target.current_target()
    if target and target.target_name == "opencl":
-        # without it, CL_INVALID_WORK_GROUP_SIZE occured when running test_topi_reduce.py
+        # without it, CL_INVALID_WORK_GROUP_SIZE occurred when running test_topi_reduce.py
        # don't know why
        num_thread = 16
    block_x = tvm.thread_axis("blockIdx.x")