Commit 184fa484: change docker install script (#3524)
Authored Sep 07, 2019 by 雾雨魔理沙; committed Sep 08, 2019 by Tianqi Chen.
Parent: 7a15aedf

Showing 38 changed files with 135 additions and 185 deletions.
Changed files:

conda/tvm/meta.yaml (+2, -2)
docker/Dockerfile.conda_cpu (+3, -3)
docker/Dockerfile.demo_cpu (+1, -1)
docker/Dockerfile.demo_gpu (+1, -1)
docker/Dockerfile.demo_opencl (+1, -1)
docker/install/ubuntu_install_python_package.sh (+3, -4)
docs/contribute/pull_request.rst (+2, -2)
nnvm/tests/python/frontend/mxnet/test_forward.py (+1, -1)
tests/python/contrib/test_mxnet_bridge.py (+1, -1)
tests/python/contrib/test_nnpack.py (+11, -12)
tests/python/contrib/test_rpc_proxy.py (+1, -1)
tests/python/frontend/mxnet/test_forward.py (+1, -1)
tests/python/integration/test_winograd_nnpack.py (+6, -6)
tests/python/relay/test_ir_parser.py (+4, -8)
tests/python/relay/test_op_level3.py (+2, -3)
tests/python/relay/test_pass_check_kind.py (+13, -14)
tests/python/relay/test_pass_dead_code_elimination.py (+0, -2)
tests/python/relay/test_pass_manager.py (+8, -45)
tests/python/relay/test_type_solver.py (+7, -6)
tests/python/relay/test_vm.py (+2, -2)
tests/python/unittest/test_custom_datatypes_mybfloat16.py (+2, -2)
tests/python/unittest/test_hybrid_script.py (+2, -2)
tests/python/unittest/test_lang_schedule.py (+2, -2)
tests/python/unittest/test_pass_bound_checkers.py (+10, -10)
tests/python/unittest/test_pass_split_host_device.py (+2, -2)
tests/python/unittest/test_runtime_micro.py (+0, -1)
tests/python/unittest/test_runtime_vm_profiler.py (+2, -2)
tests/scripts/task_python_frontend.sh (+9, -9)
tests/scripts/task_python_integration.sh (+6, -6)
tests/scripts/task_python_legacy.sh (+9, -9)
tests/scripts/task_python_nightly.sh (+2, -2)
tests/scripts/task_python_topi.sh (+2, -2)
tests/scripts/task_python_unittest.sh (+2, -2)
tests/scripts/task_python_vta.sh (+5, -6)
tests/scripts/task_verilog_test.sh (+2, -2)
tests/travis/run_test.sh (+3, -4)
tests/travis/setup.sh (+3, -4)
topi/tests/python/test_topi_group_conv2d_NCHWc_int8.py (+2, -2)
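Every change below follows the same migration from nose to pytest. As a rough reference for reading the diffs, this is the mapping the commit applies throughout; the snippet is a minimal illustrative sketch, not code from the commit, and the toy test names are made up:

    import pytest

    # nose: raise SkipTest("...")   -> pytest.skip("...") inside the test body
    # nose: @nottest                -> @pytest.mark.skip on the helper/test
    # nose: @raises(SomeError)      -> @pytest.mark.xfail(raises=SomeError)
    # nose: nose.runmodule()        -> pytest.main() in the __main__ block

    def test_requires_feature():
        feature_available = False  # stand-in for a real capability check
        if not feature_available:
            pytest.skip("feature is not available")

    @pytest.mark.skip
    def test_not_collected():
        pass

    @pytest.mark.xfail(raises=ZeroDivisionError)
    def test_expected_to_raise():
        1 / 0

    if __name__ == "__main__":
        pytest.main()
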
conda/tvm/meta.yaml

@@ -50,12 +50,12 @@ test:
     - topi
     - nnvm
   requires:
-    - nose
+    - pytest
     - scipy
   source_files:
     - tests/python
   commands:
-    - python -m nose -v tests/python/integration
+    - python -m pytest -v tests/python/integration
 about:
   home: https://github.com/dmlc/tvm

docker/Dockerfile.conda_cpu

@@ -30,9 +30,9 @@ RUN curl -o ~/miniconda.sh -O https://repo.continuum.io/miniconda/Miniconda3-lat
 # Cache some of the packages for the builds
 RUN /opt/conda/bin/conda install --download-only cmake make zlib && \
     /opt/conda/bin/conda install --download-only -c numba llvmdev=8.0.0 && \
-    /opt/conda/bin/conda create -n py35 --download-only nose scipy numpy=1.11 cython decorator python=3.5 && \
-    /opt/conda/bin/conda create -n py36 --download-only nose scipy numpy=1.11 cython decorator python=3.6 && \
-    /opt/conda/bin/conda create -n py37 --download-only nose scipy numpy=1.11 cython decorator python=3.7
+    /opt/conda/bin/conda create -n py35 --download-only pytest scipy numpy=1.11 cython decorator python=3.5 && \
+    /opt/conda/bin/conda create -n py36 --download-only pytest scipy numpy=1.11 cython decorator python=3.6 && \
+    /opt/conda/bin/conda create -n py37 --download-only pytest scipy numpy=1.11 cython decorator python=3.7

 ENV PATH /opt/conda/bin:$PATH
 ENV CONDA_BLD_PATH /tmp

docker/Dockerfile.demo_cpu

@@ -26,7 +26,7 @@ RUN bash /install/ubuntu_install_core.sh
 # Python: basic dependencies
 RUN apt-get update && apt-get install -y python3-dev python3-pip
-RUN pip3 install numpy nose-timer cython decorator scipy
+RUN pip3 install numpy pytest cython decorator scipy

 # LLVM
 RUN echo deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-6.0 main \

docker/Dockerfile.demo_gpu

@@ -26,7 +26,7 @@ RUN bash /install/ubuntu_install_core.sh
 # Python: basic dependencies
 RUN apt-get update && apt-get install -y python3-dev python3-pip
-RUN pip3 install numpy nose-timer cython decorator scipy
+RUN pip3 install numpy pytest cython decorator scipy

 # LLVM
 RUN echo deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-6.0 main \

docker/Dockerfile.demo_opencl

@@ -42,7 +42,7 @@ RUN apt-get install -y libopenblas-dev
 RUN echo "Installing Python"
 RUN apt-get install -y python3-dev python3-pip
-RUN pip3 install setuptools numpy nose-timer cython decorator scipy tornado psutil xgboost
+RUN pip3 install setuptools numpy pytest cython decorator scipy tornado psutil xgboost

 RUN echo "Installing Jupyter notebook"
 RUN pip3 install matplotlib Image Pillow jupyter[notebook]

docker/install/ubuntu_install_python_package.sh

@@ -6,9 +6,9 @@
 # to you under the Apache License, Version 2.0 (the
 # "License"); you may not use this file except in compliance
 # with the License. You may obtain a copy of the License at
-#
+#
 #   http://www.apache.org/licenses/LICENSE-2.0
-#
+#
 # Unless required by applicable law or agreed to in writing,
 # software distributed under the License is distributed on an
 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY

@@ -21,5 +21,4 @@ set -u
 set -o pipefail

 # install libraries for python package on ubuntu
-pip2 install nose pylint==1.9.4 six numpy nose-timer cython decorator scipy tornado typing antlr4-python2-runtime attrs packaging
-pip3 install nose pylint==1.9.4 six numpy nose-timer cython decorator scipy tornado typed_ast pytest mypy orderedset antlr4-python3-runtime attrs requests Pillow packaging
+pip3 install pylint==1.9.4 six numpy pytest cython decorator scipy tornado typed_ast pytest mypy orderedset antlr4-python3-runtime attrs requests Pillow packaging

docs/contribute/pull_request.rst

@@ -96,7 +96,7 @@ Necessary dependencies:
 .. code:: bash

-    pip install --user nose Cython
+    pip install --user pytest Cython

 If you want to run all tests:

@@ -118,4 +118,4 @@ If you want to run a single test:
   export PYTHONPATH=python:topi/python
   rm -rf python/tvm/*.pyc python/tvm/*/*.pyc python/tvm/*/*/*.pyc
-  TVM_FFI=ctypes python -m nose -v tests/python/unittest/test_pass_storage_rewrite.py
+  TVM_FFI=ctypes python -m pytest -v tests/python/unittest/test_pass_storage_rewrite.py

nnvm/tests/python/frontend/mxnet/test_forward.py

@@ -31,7 +31,7 @@ import model_zoo
 def verify_mxnet_frontend_impl(mx_symbol, data_shape=(1, 3, 224, 224), out_shape=(1, 1000),
                                gluon_impl=False, name=None, dtype='float32'):
-    """Use name different from test to avoid let nose pick it up"""
+    """Use name different from test to avoid pytest picking it up"""
     if gluon_impl:
         def get_gluon_output(name, x):
             net = vision.get_model(name)

tests/python/contrib/test_mxnet_bridge.py

@@ -17,7 +17,7 @@
 def mxnet_check():
     """This is a simple test function for MXNet bridge
-    It is not included as nose tests, because of its dependency on mxnet
+    It is not included as pytests, because of its dependency on mxnet
     User can directly run this script to verify correctness.
     """

tests/python/contrib/test_nnpack.py

@@ -18,7 +18,7 @@ import tvm
 import numpy as np
 import scipy.signal
 from tvm.contrib import nnpack
-from nose import SkipTest
+import pytest

 def test_fully_connected_inference():

@@ -34,11 +34,11 @@ def test_fully_connected_inference():
     def verify(target="llvm"):
         if not tvm.module.enabled(target):
-            raise SkipTest("skip because %s is not enabled..." % target)
+            pytest.skip("%s is not enabled..." % target)
         if not tvm.get_global_func("tvm.contrib.nnpack.fully_connected_inference", True):
-            raise SkipTest("skip because extern function is not available")
+            pytest.skip("extern function is not available")
         if not nnpack.is_available():
-            raise SkipTest("skip because nnpack is not available")
+            pytest.skip("nnpack is not available")
         ctx = tvm.cpu(0)
         f = tvm.build(s, [A, B, D, bias], target)

@@ -112,11 +112,11 @@ def test_convolution_inference():
                algorithm=nnpack.ConvolutionAlgorithm.AUTO, with_bias=True):
         if not tvm.module.enabled(target):
-            raise SkipTest("skip because %s is not enabled..." % target)
+            pytest.skip("%s is not enabled..." % target)
         if not tvm.get_global_func("tvm.contrib.nnpack.fully_connected_inference", True):
-            raise SkipTest("skip because extern function is not available")
+            pytest.skip("extern function is not available")
         if not nnpack.is_available():
-            raise SkipTest("skip because nnpack is not available")
+            pytest.skip("nnpack is not available")
         ctx = tvm.cpu(0)
         output = nnpack.convolution_inference(

@@ -174,11 +174,11 @@ def test_convolution_inference_without_weight_transform():
                algorithm=nnpack.ConvolutionAlgorithm.AUTO, with_bias=True):
         if not tvm.module.enabled(target):
-            raise SkipTest("skip because %s is not enabled..." % target)
+            pytest.skip("%s is not enabled..." % target)
         if not tvm.get_global_func("tvm.contrib.nnpack.fully_connected_inference", True):
-            raise SkipTest("skip because extern function is not available")
+            pytest.skip("extern function is not available")
         if not nnpack.is_available():
-            raise SkipTest("skip because nnpack is not available")
+            pytest.skip("nnpack is not available")
         ctx = tvm.cpu(0)
         transformed_kernel = nnpack.convolution_inference_weight_transform(

@@ -209,5 +209,4 @@ def test_convolution_inference_without_weight_transform():
 if __name__ == "__main__":
-    import nose
-    nose.runmodule()
+    pytest.main()

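One behavioral detail worth noting for the hunks above (my reading, not stated in the commit): pytest.skip() raises pytest's internal Skipped exception, so calling it inside the nested verify() helpers unwinds out of the helper and marks the enclosing test as skipped, just as raise SkipTest(...) did under nose. A minimal sketch, with a made-up capability check in place of the real tvm/nnpack queries:

    import pytest

    def test_outer():
        def verify(target="llvm"):
            enabled = False  # stand-in for tvm.module.enabled(target)
            if not enabled:
                # raises pytest's Skipped exception, unwinding out of verify()
                pytest.skip("%s is not enabled..." % target)
            # ... build and run the kernel here ...
        verify()
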
tests/python/contrib/test_rpc_proxy.py

@@ -24,7 +24,7 @@ from tvm import rpc
 def rpc_proxy_check():
     """This is a simple test function for RPC Proxy
-    It is not included as nose tests, because:
+    It is not included as pytests, because:
    - It depends on tornado
    - It relies on the fact that Proxy starts before client and server connects,
      which is often the case but not always

tests/python/frontend/mxnet/test_forward.py

@@ -34,7 +34,7 @@ def verify_mxnet_frontend_impl(mx_symbol,
                                gluon_impl=False,
                                name=None,
                                dtype='float32'):
-    """Use name different from test to avoid let nose pick it up"""
+    """Use name different from test to avoid pytest picking it up"""
     if gluon_impl:
         def get_gluon_output(name, x):
             net = vision.get_model(name)

tests/python/integration/test_winograd_nnpack.py

@@ -23,7 +23,7 @@ from tvm.contrib.pickle_memoize import memoize
 import topi
 import topi.testing
 from topi.util import get_const_tuple
-from nose import SkipTest
+from pytest import skip

 def verify_conv2d_nchw(batch, in_channel, in_size, num_filter, kernel, stride, padding, dilation=1, add_bias=False, add_relu=False,

@@ -60,7 +60,7 @@ def verify_conv2d_nchw(batch, in_channel, in_size, num_filter, kernel, stride, p
     def check_device(device):
         ctx = tvm.context(device, 0)
         if not ctx.exist:
-            raise SkipTest("Skip because %s is not enabled" % device)
+            skip("%s is not enabled" % device)
         print("Running on target: %s" % device)
         with tvm.target.create(device):
             C = topi.nn.conv2d(A, W, stride, padding, dilation, layout='NCHW', out_dtype=dtype)

@@ -99,10 +99,10 @@ class WinogradFallback(autotvm.FallbackContext):
 def test_conv2d_nchw():
     if not tvm.get_global_func("tvm.contrib.nnpack.convolution_inference_without_weight_transform", True):
-        raise SkipTest("skip because extern function is not available")
+        skip("extern function is not available")
     if not nnpack.is_available():
-        raise SkipTest("skip because nnpack is not available")
+        skip("nnpack is not available")
     devices = ['llvm -device=arm_cpu']
     autotvm.DispatchContext.current.silent = True

@@ -139,5 +139,5 @@ def test_conv2d_nchw():
 if __name__ == "__main__":
-    import nose
-    nose.runmodule()
+    import pytest
+    pytest.main()

tests/python/relay/test_ir_parser.py

@@ -17,11 +17,12 @@
 import tvm
 from tvm import relay
 from tvm.relay.analysis import graph_equal, assert_graph_equal
-from nose.tools import nottest, raises
 from tvm.relay.analysis import alpha_equal, assert_alpha_equal
+import pytest
 from numpy import isclose
 from typing import Union
 from functools import wraps

-raises_parse_error = raises(tvm._ffi.base.TVMError)
+raises_parse_error = pytest.mark.xfail(raises=tvm._ffi.base.TVMError)

 SEMVER = "v0.0.4"

@@ -186,7 +187,7 @@ def test_op_assoc():
     assert graph_equal(parse_text("1 == 1 < 1 + 1 * 1"), parse_text("1 == (1 < (1 + (1 * 1)))"))

-@nottest
+@pytest.mark.skip
 def test_vars():
     # temp vars won't work b/c they start with a digit
     # # temp var

@@ -538,11 +539,6 @@ def test_builtin_types():
         parse_text("let %_ : {} = (); ()".format(builtin_type))

-@nottest
-def test_call_type():
-    assert False
-
 def test_tensor_type():
     assert parses_as("let %_ : Tensor[(), float32] = (); ()",

tests/python/relay/test_op_level3.py

@@ -17,7 +17,7 @@
 """ Support level3 operator test cases.
 """
 import numpy as np
-from nose.tools import raises
+import pytest
 import tvm
 from tvm import relay
 from tvm.relay import create_executor, transform

@@ -220,8 +220,7 @@ def test_squeeze_infer_type():
     assert yy.checked_type == relay.TensorType((4,), "float32")

-
-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_squeeze_bad_axes_infer_type():
     n, t, d = 1, 4, 1
     x = relay.var("x", relay.TensorType((n, t, d), "float32"))

tests/python/relay/test_pass_check_kind.py

@@ -17,8 +17,7 @@
 import tvm
 from tvm import relay
 from tvm.relay.analysis import check_kind
-from nose.tools import raises
-
+import pytest

 def test_typevar_kind():
     # returns the same kind

@@ -111,7 +110,7 @@ def test_typecall_kind():
     assert check_kind(call, new_mod) == relay.Kind.Type

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_invalid_tuple_kind():
     tp1 = relay.TypeVar('tp1', relay.Kind.Shape)
     tp2 = relay.TypeVar('tp2', relay.Kind.BaseType)

@@ -122,7 +121,7 @@ def test_invalid_tuple_kind():
     check_kind(tup_ty)

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_invalid_func_kind():
     tp1 = relay.TypeVar('tp1', relay.Kind.Shape)
     tp2 = relay.TypeVar('tp2', relay.Kind.BaseType)

@@ -137,14 +136,14 @@ def test_invalid_func_kind():
     check_kind(tf)

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_invalid_ref_kind():
     tp = relay.TypeVar('tp', relay.Kind.Shape)
     rt = relay.RefType(tp)
     check_kind(rt)

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_invalid_relation_kind():
     tp1 = relay.TypeVar('tp1', relay.Kind.Shape)
     tp2 = relay.TypeVar('tp2', relay.Kind.BaseType)

@@ -156,14 +155,14 @@ def test_invalid_relation_kind():
     check_kind(tr)

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_typecall_invalid_callee():
     # global type var must be an ADT handle
     gtv = relay.GlobalTypeVar('v1', relay.Kind.Type)
     check_kind(relay.TypeCall(gtv, []))

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_typecall_invalid_args():
     # args must all be type kind
     mod = relay.Module()

@@ -174,7 +173,7 @@ def test_typecall_invalid_args():
     check_kind(relay.TypeCall(gtv, [data]))

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_typecall_invalid_num_args():
     mod = relay.Module()
     gtv = relay.GlobalTypeVar('v1')

@@ -184,7 +183,7 @@ def test_typecall_invalid_num_args():
     check_kind(relay.TypeCall(gtv, []))

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_func_with_invalid_ret_type():
     tp1 = relay.TypeVar('tp1', relay.Kind.Type)
     tp2 = relay.TypeVar('tp2', relay.Kind.Shape)

@@ -193,7 +192,7 @@ def test_func_with_invalid_ret_type():
     check_kind(tf)

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_func_with_invalid_arg_types():
     tp1 = relay.TypeVar('tp1', relay.Kind.Shape)
     tp2 = relay.TypeVar('tp2', relay.Kind.Type)

@@ -202,7 +201,7 @@ def test_func_with_invalid_arg_types():
     check_kind(tf)

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_func_with_invalid_tuple():
     tp1 = relay.TypeVar('tp1', relay.Kind.Shape)

@@ -212,7 +211,7 @@ def test_func_with_invalid_tuple():
     check_kind(tf)

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_func_with_invalid_relation():
     tp1 = relay.TypeVar('tp1', relay.Kind.Type)
     tp2 = relay.TypeVar('tp2', relay.Kind.Shape)

@@ -225,7 +224,7 @@ def test_func_with_invalid_relation():
     check_kind(tf)

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_tuple_with_invalid_func():
     tensor_type = relay.TensorType(tvm.convert([1, 2, 3]), 'float32')

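One semantic difference between the old and new decorators (general nose/pytest behavior, not something this diff spells out): nose's @raises fails the test when the expected exception is not raised, whereas @pytest.mark.xfail(raises=...) reports an unexpected pass rather than a hard failure unless strict mode is requested. A hypothetical stricter variant, closer to the old @raises semantics, would look like this:

    import pytest
    import tvm

    # Hypothetical: strict=True makes an unexpected pass count as a failure,
    # which matches nose's @raises behavior more closely.
    @pytest.mark.xfail(raises=tvm._ffi.base.TVMError, strict=True)
    def test_invalid_kind_strict():
        raise tvm._ffi.base.TVMError("kind check failed")
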
tests/python/relay/test_pass_dead_code_elimination.py

@@ -14,8 +14,6 @@
 # KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations
 # under the License.
-from nose.tools import nottest
-
 import tvm
 from tvm import relay
 from tvm.relay import Function, transform

tests/python/relay/test_pass_manager.py

@@ -16,6 +16,7 @@
 # under the License.
 """Unit tests for relay pass manager."""
 import numpy as np
+import pytest
 import tvm
 from tvm import relay

@@ -504,7 +505,7 @@ def test_sequential_with_scoping():
     assert analysis.alpha_equal(zz, zexpected)

-def test_print_ir():
+def test_print_ir(capfd):
     shape = (1, 2, 3)
     tp = relay.TensorType(shape, "float32")
     x = relay.var("x", tp)

@@ -519,53 +520,15 @@ def test_print_ir():
         relay.transform.DeadCodeElimination()
     ])

-    def redirect_output(call):
-        """Redirect the C++ logging info."""
-        import sys
-        import os
-        import threading
-        stderr_fileno = sys.stderr.fileno()
-        stderr_save = os.dup(stderr_fileno)
-        stderr_pipe = os.pipe()
-        os.dup2(stderr_pipe[1], stderr_fileno)
-        os.close(stderr_pipe[1])
-        output = ''
-        def record():
-            nonlocal output
-            while True:
-                data = os.read(stderr_pipe[0], 1024)
-                if not data:
-                    break
-                output += data.decode("utf-8")
-        t = threading.Thread(target=record)
-        t.start()
-        call()
-        os.close(stderr_fileno)
-        t.join()
-        os.close(stderr_pipe[0])
-        os.dup2(stderr_save, stderr_fileno)
-        os.close(stderr_save)
-        return output
-
-    def run_pass():
-        mod = relay.Module({"main": func})
-        with relay.build_config(opt_level=3):
-            mod = seq(mod)
+    mod = relay.Module({"main": func})
+    with relay.build_config(opt_level=3):
+        mod = seq(mod)

-    out = redirect_output(run_pass)
+    out = capfd.readouterr().err
     assert "Dumping the module IR" in out
     assert "multiply" in out

 if __name__ == "__main__":
     test_function_class_pass()
     test_module_class_pass()
     test_module_pass()
     test_function_pass()
     test_sequential_pass()
     test_sequential_with_scoping()
     test_pass_info()
-    test_print_ir()
+    pytest.main()

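The largest simplification in this file replaces the hand-rolled stderr redirection helper with pytest's built-in capfd fixture, which captures output at the file-descriptor level and therefore also sees logging emitted by C++ code. A minimal sketch of the fixture, using a stand-in function instead of the real pass pipeline:

    import os
    import sys

    def emit_native_log():
        # Stand-in for native code writing straight to the stderr file descriptor.
        os.write(sys.stderr.fileno(), b"Dumping the module IR\n")

    def test_capfd_captures_fd_level_output(capfd):
        emit_native_log()
        captured = capfd.readouterr()  # (out, err) captured at the fd level
        assert "Dumping the module IR" in captured.err
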
tests/python/relay/test_type_solver.py

@@ -16,7 +16,7 @@
 # under the License.
 import tvm
 from tvm import relay
-from nose.tools import raises
+import pytest

 def make_rel(name, args, num_inputs=None, attrs=None):

@@ -223,7 +223,8 @@ def test_backward_solving_after_child_update():
     assert solver.Resolve(t4) == tup_concrete
     assert solver.Resolve(t5) == tup_concrete

-@raises(tvm._ffi.base.TVMError)
+
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_incompatible_tuple_unification():
     solver = make_solver()
     t1 = relay.ty.IncompleteType()

@@ -238,14 +239,14 @@ def test_incompatible_tuple_unification():
     solver.Unify(tup1, tup2)

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_bad_recursive_unification():
     solver = make_solver()
     t1 = relay.ty.IncompleteType()
     solver.Unify(t1, relay.ty.TupleType([t1, t1]))

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_unify_invalid_global_typevars():
     solver = make_solver()
     gtv1 = relay.GlobalTypeVar('gtv1')

@@ -253,7 +254,7 @@ def test_unify_invalid_global_typevars():
     solver.Unify(gtv1, gtv2)

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_incompatible_typecall_var_unification():
     solver = make_solver()
     gtv1 = relay.GlobalTypeVar('gtv1')

@@ -267,7 +268,7 @@ def test_incompatible_typecall_var_unification():
     solver.Unify(tc1, tc2)

-@raises(tvm._ffi.base.TVMError)
+@pytest.mark.xfail(raises=tvm._ffi.base.TVMError)
 def test_incompatible_typecall_args_unification():
     solver = make_solver()
     gtv = relay.GlobalTypeVar('gtv1')

tests/python/relay/test_vm.py

@@ -15,7 +15,6 @@
 # specific language governing permissions and limitations
 # under the License.
 import os
-from nose.tools import nottest, raises
 import tvm
 import numpy as np

@@ -23,6 +22,7 @@ from tvm import relay
 from tvm.relay.scope_builder import ScopeBuilder
 from tvm.relay.testing.config import ctx_list
 from tvm.relay.prelude import Prelude
+import pytest

 def check_result(args, expected_result, mod=None):
     """

@@ -328,7 +328,7 @@ def test_list_hd():
     result = veval(mod)
     tvm.testing.assert_allclose(result.asnumpy(), 3)

-@raises(Exception)
+@pytest.mark.xfail
 def test_list_tl_empty_list():
     mod = relay.Module()
     p = Prelude(mod)

tests/python/unittest/test_custom_datatypes_mybfloat16.py

@@ -24,7 +24,7 @@ import numpy as np
 tgt = "llvm"

-def setup():
+def setup_module():
     # You must first load the library containing the datatype implementation.
     # In this case, we have built the test functions used below right into TVM.
     # CDLL("libmybfloat16.so", RTLD_GLOBAL)

@@ -144,7 +144,7 @@ def test_bfloat_add_and_cast_FloatImm():
 if __name__ == "__main__":
-    setup()
+    setup_module()
     test_bfloat_add_and_cast_1()
     test_bfloat_add_and_cast_2()
     test_bfloat_add_and_cast_FloatImm()

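The setup() to setup_module() rename appears to be about pytest's xunit-style hooks: pytest calls a module-level setup_module() exactly once before any test in the module runs, so the datatype registration still happens automatically under the new runner. A minimal sketch of the convention, with made-up names:

    # Hypothetical module relying on pytest's xunit-style module hook.
    _registered = False

    def setup_module():
        # pytest calls this once before any test in this module runs.
        global _registered
        _registered = True  # e.g. register a custom datatype with TVM

    def test_uses_registration():
        assert _registered
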
tests/python/unittest/test_hybrid_script.py

@@ -14,12 +14,12 @@
 # KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations
 # under the License.
-import tvm, inspect, sys, traceback, numpy, nose, types, os
+import tvm, inspect, sys, traceback, numpy, pytest, types, os
 from tvm.contrib import util
 from tvm.hybrid import script
 from tvm.hybrid.runtime import HYBRID_GLOBALS

-@nose.tools.nottest
+@pytest.mark.skip
 def run_and_check(func, args, var_dict={}, target='llvm', sch=None, outs=None):
     def tvm_val_2_py_val(val):
         val = tvm.ir_pass.Substitute(val, var_dict)

tests/python/unittest/test_lang_schedule.py

@@ -14,7 +14,7 @@
 # KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations
 # under the License.
-from nose.tools import raises
+import pytest
 import tvm
 import pickle as pkl

@@ -129,7 +129,7 @@ def test_vectorize():
     assert s[T].iter_var_attrs[xi].iter_type == UNROLL
     assert s[T].iter_var_attrs[yi].iter_type == VECTORIZE

-@raises(Exception)
+@pytest.mark.xfail
 def test_vectorize_commreduce():
     V = tvm.placeholder((128,), name='V')
     ax = tvm.reduce_axis((0, 128), name='ax')

tests/python/unittest/test_pass_bound_checkers.py

@@ -14,7 +14,7 @@
 # KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations
 # under the License.
-from nose.tools import raises
+import pytest
 import tvm
 import numpy as np

 def collect_visit(stmt, f):

@@ -43,7 +43,7 @@ def lower(sch, args):
     stmt = tvm.ir_pass.Simplify(stmt)
     return stmt

-@raises(Exception)
+@pytest.mark.xfail
 def test_out_of_bounds_llvm(index_a, index_b):
     n = tvm.var("n")
     A = tvm.placeholder((n,), name='A')

@@ -78,7 +78,7 @@ def test_in_bounds_llvm():
     c = tvm.nd.array(np.zeros(1024, dtype=C.dtype), ctx)
     fadd(a, b, c)

-@raises(Exception)
+@pytest.mark.xfail
 def test_out_of_bounds_vectorize_llvm(nn, index_a, index_b):
     n = tvm.convert(nn)
     a = tvm.placeholder((n), name='a')

@@ -144,7 +144,7 @@ def test_in_bounds_loop_partition_basic_llvm():
     t = tvm.nd.empty((32,), T.dtype, ctx)
     f(a, b, t)

-@raises(Exception)
+@pytest.mark.xfail
 def test_out_of_bounds_loop_partition_basic_llvm(index_a, index_b):
     n = tvm.var('n')
     A = tvm.placeholder((n, ), name='A')

@@ -229,7 +229,7 @@ def test_in_bounds_const_loop_partition_llvm():
     t = tvm.nd.empty((n,), T.dtype, ctx)
     f(a, b, t)

-@raises(Exception)
+@pytest.mark.xfail
 def test_out_of_bounds_const_loop_partition_llvm(index_a, index_b):
     with tvm.build_config(instrument_bound_checkers=True, partition_const_loop=True):
         n = 21

@@ -285,7 +285,7 @@ def test_in_bounds_conv_llvm(loop_tiling=False):
     conv_out = tvm.nd.empty((batch_size, out_channel, out_height, out_width), tvm.float32, ctx)
     f(data_input, kernel_input, conv_out)

-@raises(Exception)
+@pytest.mark.xfail
 def test_out_of_bounds_conv_llvm(data_offsets, kernel_offsets, loop_tiling=False):
     HSTR = WSTR = 1
     in_channel = 128

@@ -348,7 +348,7 @@ def test_in_bounds_tensors_with_same_shapes1D_llvm():
     t = tvm.nd.empty((32,), T.dtype, ctx)
     f(a, b, t)

-@raises(Exception)
+@pytest.mark.xfail
 def test_out_of_bounds_tensors_with_diff_shapes1D_llvm(a_shape, b_shape, c_shape):
     n = tvm.var('n')
     k = tvm.var('k')

@@ -387,7 +387,7 @@ def test_in_bounds_tensors_with_same_shapes2D_llvm():
     t = tvm.nd.empty((32, 32), T.dtype, ctx)
     f(a, b, t)

-@raises(Exception)
+@pytest.mark.xfail
 def test_out_of_bounds_tensors_with_diff_shapes2D_llvm(a_shape, b_shape, c_shape):
     n = tvm.var('n')
     k = tvm.var('k')

@@ -426,7 +426,7 @@ def test_in_bounds_tensors_with_same_shapes3D_llvm():
     t = tvm.nd.empty((32, 32, 32), T.dtype, ctx)
     f(a, b, t)

-@raises(Exception)
+@pytest.mark.xfail
 def test_out_of_bounds_tensors_with_diff_shapes3D_llvm(a_shape, b_shape, c_shape):
     n = tvm.var('n')
     k = tvm.var('k')

@@ -446,7 +446,7 @@ def test_out_of_bounds_tensors_with_diff_shapes3D_llvm(a_shape, b_shape, c_shape
     t = tvm.nd.empty((c_shape[0], c_shape[1], c_shape[2]), T.dtype, ctx)
     f(a, b, t)

-@raises(Exception)
+@pytest.mark.xfail
 def test_out_of_bounds_tensors_with_zero_shape_op_with_not_zero_shape_llvm():
     if not tvm.module.enabled("llvm"):
         return

tests/python/unittest/test_pass_split_host_device.py

@@ -14,10 +14,10 @@
 # KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations
 # under the License.
-from nose.tools import raises
+import pytest
 import tvm

-@raises(Exception)
+@pytest.mark.xfail
 def test_loop_dependent_allocate():
     N = tvm.var("N")
     A = tvm.placeholder((2*N,), "float32", "A")

tests/python/unittest/test_runtime_micro.py

@@ -17,7 +17,6 @@
 import os
-from nose.tools import nottest
 import numpy as np
 import tvm
 from tvm.contrib import graph_runtime, util

tests/python/unittest/test_runtime_vm_profiler.py

@@ -18,11 +18,11 @@ import os
 import tvm
 import numpy as np
-from nose.tools import nottest
+import pytest
 from tvm import relay
 from tvm.relay.testing import resnet

-@nottest
+@pytest.mark.skip
 def test_basic():
     mod, params = resnet.get_workload()
     compiler = relay.profiler_vm.VMCompilerProfiler()

tests/scripts/task_python_frontend.sh

@@ -27,28 +27,28 @@ export OMP_NUM_THREADS=1
 make cython3

 echo "Running relay TFLite frontend test..."
-python3 -m nose -v tests/python/frontend/tflite
+python3 -m pytest -v tests/python/frontend/tflite

 echo "Running relay MXNet frontend test..."
-python3 -m nose -v tests/python/frontend/mxnet
+python3 -m pytest -v tests/python/frontend/mxnet

 echo "Running relay Keras frontend test..."
-python3 -m nose -v tests/python/frontend/keras
+python3 -m pytest -v tests/python/frontend/keras

 echo "Running relay ONNX frontend test..."
-python3 -m nose -v tests/python/frontend/onnx
+python3 -m pytest -v tests/python/frontend/onnx

 echo "Running relay CoreML frontend test..."
-python3 -m nose -v tests/python/frontend/coreml
+python3 -m pytest -v tests/python/frontend/coreml

 echo "Running nnvm to relay frontend test..."
-python3 -m nose -v tests/python/frontend/nnvm_to_relay
+python3 -m pytest -v tests/python/frontend/nnvm_to_relay

 echo "Running relay Tensorflow frontend test..."
-python3 -m nose -v tests/python/frontend/tensorflow
+python3 -m pytest -v tests/python/frontend/tensorflow

 echo "Running relay caffe2 frontend test..."
-python3 -m nose -v tests/python/frontend/caffe2
+python3 -m pytest -v tests/python/frontend/caffe2

 echo "Running relay DarkNet frontend test..."
-python3 -m nose -v tests/python/frontend/darknet
+python3 -m pytest -v tests/python/frontend/darknet

tests/scripts/task_python_integration.sh

@@ -33,13 +33,13 @@ rm -rf lib
 make
 cd ../..

-python3 -m nose -v apps/extension/tests
+python3 -m pytest -v apps/extension/tests

-TVM_FFI=ctypes python3 -m nose -v tests/python/integration
-TVM_FFI=ctypes python3 -m nose -v tests/python/contrib
+TVM_FFI=ctypes python3 -m pytest -v tests/python/integration
+TVM_FFI=ctypes python3 -m pytest -v tests/python/contrib

-TVM_FFI=ctypes python3 -m nose -v tests/python/relay
+TVM_FFI=ctypes python3 -m pytest -v tests/python/relay

 # Do not enable OpenGL
-# TVM_FFI=cython python -m nose -v tests/webgl
-# TVM_FFI=ctypes python3 -m nose -v tests/webgl
+# TVM_FFI=cython python -m pytest -v tests/webgl
+# TVM_FFI=ctypes python3 -m pytest -v tests/webgl

tests/scripts/task_python_legacy.sh

@@ -27,29 +27,29 @@ export OMP_NUM_THREADS=1
 make cython3

 echo "Running nnvm unittest..."
-python3 -m nose -v nnvm/tests/python/unittest
+python3 -m pytest -v nnvm/tests/python/unittest

 echo "Running nnvm compiler test..."
-python3 -m nose -v nnvm/tests/python/compiler
+python3 -m pytest -v nnvm/tests/python/compiler

 echo "Running nnvm ONNX frontend test..."
-python3 -m nose -v nnvm/tests/python/frontend/onnx
+python3 -m pytest -v nnvm/tests/python/frontend/onnx

 echo "Running nnvm MXNet frontend test..."
-python3 -m nose -v nnvm/tests/python/frontend/mxnet
+python3 -m pytest -v nnvm/tests/python/frontend/mxnet

 echo "Running nnvm DarkNet frontend test..."
-python3 -m nose -v nnvm/tests/python/frontend/darknet
+python3 -m pytest -v nnvm/tests/python/frontend/darknet

 echo "Running nnvm Keras frontend test..."
-python3 -m nose -v nnvm/tests/python/frontend/keras
+python3 -m pytest -v nnvm/tests/python/frontend/keras

 echo "Running nnvm Tensorflow frontend test..."
-python3 -m nose -v nnvm/tests/python/frontend/tensorflow
+python3 -m pytest -v nnvm/tests/python/frontend/tensorflow

 echo "Running nnvm CoreML frontend test..."
-python3 -m nose -v nnvm/tests/python/frontend/coreml
+python3 -m pytest -v nnvm/tests/python/frontend/coreml

 echo "Running nnvm Caffe2 frontend test..."
-python3 -m nose -v nnvm/tests/python/frontend/caffe2
+python3 -m pytest -v nnvm/tests/python/frontend/caffe2

tests/scripts/task_python_nightly.sh

@@ -25,6 +25,6 @@ export PYTHONPATH=python:topi/python
 make cython3

 rm -rf python/tvm/*.pyc python/tvm/*/*.pyc python/tvm/*/*/*.pyc
-rm -rf topi/python/topi/*.pyc topi/python/topi/*/*.pyc topi/python/topi/*/*/*.pyc topi/python/topi/*/*/*/*.pyc
+rm -rf topi/python/topi/*.pyc topi/python/topi/*/*.pyc topi/python/topi/*/*/*.pyc topi/python/topi/*/*/*/*.pyc

-python3 -m nose -v topi/tests/python/nightly
+python3 -m pytest -v topi/tests/python/nightly

tests/scripts/task_python_topi.sh

@@ -25,6 +25,6 @@ export PYTHONPATH=python:topi/python
 make cython3

 rm -rf python/tvm/*.pyc python/tvm/*/*.pyc python/tvm/*/*/*.pyc
-rm -rf topi/python/topi/*.pyc topi/python/topi/*/*.pyc topi/python/topi/*/*/*.pyc topi/python/topi/*/*/*/*.pyc
+rm -rf topi/python/topi/*.pyc topi/python/topi/*/*.pyc topi/python/topi/*/*/*.pyc topi/python/topi/*/*/*/*.pyc

-python3 -m nose -v topi/tests/python
+python3 -m pytest -v topi/tests/python

tests/scripts/task_python_unittest.sh

@@ -23,6 +23,6 @@ export PYTHONPATH=python:topi/python
 rm -rf python/tvm/*.pyc python/tvm/*/*.pyc python/tvm/*/*/*.pyc

-TVM_FFI=ctypes python3 -m nose -v tests/python/unittest
+TVM_FFI=ctypes python3 -m pytest -v tests/python/unittest
 make cython3
-TVM_FFI=cython python3 -m nose -v tests/python/unittest
+TVM_FFI=cython python3 -m pytest -v tests/python/unittest

tests/scripts/task_python_vta.sh

@@ -29,11 +29,11 @@ make cython3
 # Run unit tests in functional/fast simulator
 echo "Running unittest in fsim..."
-python3 -m nose -v vta/tests/python/unittest
+python3 -m pytest -v vta/tests/python/unittest

 # Run unit tests in functional/fast simulator
 echo "Running integration test in fsim..."
-python3 -m nose -v vta/tests/python/integration
+python3 -m pytest -v vta/tests/python/integration

 # # Build VTA chisel design and verilator simulator
 # (make -C vta/hardware/chisel/)

@@ -43,11 +43,11 @@ python3 -m nose -v vta/tests/python/integration
 # # Run unit tests in cycle accurate simulator
 # echo "Running unittest in tsim..."
-# python3 -m nose -v vta/tests/python/unittest
+# python3 -m pytest -v vta/tests/python/unittest

 # # Run unit tests in cycle accurate simulator
 # echo "Running integration test in tsim..."
-# python3 -m nose -v vta/tests/python/integration
+# python3 -m pytest -v vta/tests/python/integration

 # # Reset default fsim simulation
-# cp vta/config/fsim_sample.json vta/config/vta_config.json
\ No newline at end of file
+# cp vta/config/fsim_sample.json vta/config/vta_config.json

tests/scripts/task_verilog_test.sh

@@ -21,5 +21,5 @@ set -u
 export PYTHONPATH=python

 make verilog
-nosetests -v tests/verilog/unittest
-nosetests -v tests/verilog/integration
+python3 -m pytest -v tests/verilog/unittest
+python3 -m pytest -v tests/verilog/integration

tests/travis/run_test.sh

@@ -6,9 +6,9 @@
 # to you under the Apache License, Version 2.0 (the
 # "License"); you may not use this file except in compliance
 # with the License. You may obtain a copy of the License at
-#
+#
 #   http://www.apache.org/licenses/LICENSE-2.0
-#
+#
 # Unless required by applicable law or agreed to in writing,
 # software distributed under the License is distributed on an
 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY

@@ -55,7 +55,6 @@ if [ ${TASK} == "python_test" ] || [ ${TASK} == "all_test" ]; then
     if [ ${TRAVIS_OS_NAME} == "osx" ]; then
         ./tests/scripts/task_python_unittest.sh || exit -1
     else
-        nosetests -v tests/python/unittest || exit -1
-        nosetests3 -v tests/python/unittest || exit -1
+        pytests3 -v tests/python/unittest || exit -1
     fi
 fi

tests/travis/setup.sh

@@ -6,9 +6,9 @@
 # to you under the Apache License, Version 2.0 (the
 # "License"); you may not use this file except in compliance
 # with the License. You may obtain a copy of the License at
-#
+#
 #   http://www.apache.org/licenses/LICENSE-2.0
-#
+#
 # Unless required by applicable law or agreed to in writing,
 # software distributed under the License is distributed on an
 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY

@@ -20,8 +20,7 @@ if [ ${TASK} == "python_test" ] || [ ${TASK} == "all_test" ]; then
     if [ ${TRAVIS_OS_NAME} == "osx" ]; then
         brew update
         brew install python3
-        python -m pip install --user nose numpy cython
-        python3 -m pip install --user nose numpy cython
+        python3 -m pip install --user pytest numpy cython
     fi
 fi

topi/tests/python/test_topi_group_conv2d_NCHWc_int8.py

@@ -24,7 +24,7 @@ import topi
 import topi.testing
 from tvm.contrib.pickle_memoize import memoize
 from topi.util import get_const_tuple
-from nose.tools import nottest
+import pytest
 from common import get_all_backend

@@ -103,7 +103,7 @@ def verify_group_conv2d_NCHWc_int8(batch, in_channel, groups, in_size, num_filte
     with autotvm.tophub.context(device):
         # load tophub pre-tuned parameters
         check_device(device)

-@nottest
+@pytest.mark.skip
 def test_conv2d_NCHWc():
     # ResNet50 workloads
     verify_group_conv2d_NCHWc_int8(1, 256, 32, 224, 64, 7, 2, 3)