[Torch, QNN] Add support for quantized models via QNN (#4977)
* qnn support initial import
* fix upsampling num input
* imagenet tests added
* add quantized module tests
* quantized module tests working
* imagenet test working
* fix lint
* remove top level torch import to fix ci error
* disable lint warning on outside toplevel import
* revert parse -> convert change
* add comments to qnn translation
* address comments, add sample outputs
* add more comments
* refactor bias add and requantize step
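The last commit mentions refactoring the requantize step. As background, requantize is the operation that rescales a quantized convolution/linear layer's int32 accumulator (which lives in the `input_scale * weight_scale` domain) into the output's quantized domain. A minimal numpy sketch of the arithmetic, assuming a signed int8 output (the function name and signature are illustrative, not the PR's actual API):

```python
import numpy as np

def requantize(acc_int32, input_scale, weight_scale, output_scale, output_zero_point):
    # Hypothetical helper illustrating the requantize step.
    # The int32 accumulator represents values in the
    # (input_scale * weight_scale) domain; rescale it into the
    # output domain, shift by the output zero point, and clamp
    # to the int8 range.
    multiplier = (input_scale * weight_scale) / output_scale
    scaled = np.round(acc_int32 * multiplier) + output_zero_point
    return np.clip(scaled, -128, 127).astype(np.int8)

out = requantize(np.array([1000, -2000], dtype=np.int32),
                 input_scale=0.1, weight_scale=0.05,
                 output_scale=0.2, output_zero_point=0)
```

In real quantized backends the floating-point multiplier is typically replaced by a fixed-point multiply plus shift so the whole pipeline stays in integer arithmetic; the float version above is only meant to show the rescaling logic.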
Showing 2 changed files:

python/tvm/relay/frontend/qnn_torch.py (new file, mode 100644)
tests/python/frontend/pytorch/qnn_test.py (new file, mode 100644)