[Torch, QNN] Add support for quantized models via QNN (#4977) · fc7f0783
* qnn support initial import
* fix upsampling num input
* imagenet tests added
* add quantized module tests
* quantized module tests working
* imagenet test working
* fix lint
* remove top level torch import to fix ci error
* disable lint warning on outside toplevel import
* revert parse -> convert change
* add comments to qnn translation
* address comments, add sample outputs
* add more comments
* refactor bias add and requantize step
Animesh Jain committed
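
A minimal sketch of the workflow this commit enables: quantize (or load a pre-quantized) PyTorch model, trace it with TorchScript, and import it into TVM, where the quantized operators are lowered to the QNN dialect (e.g. `qnn.conv2d` followed by a requantize step). This is not code from the commit itself; the input name `"input"`, the `(1, 3, 224, 224)` shape, the choice of a quantized torchvision ResNet-18, and the use of the newer `graph_executor` runtime API are illustrative assumptions.

```python
import torch
import torchvision
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Pre-quantized torchvision model (eager-mode quantization, fbgemm backend).
model = torchvision.models.quantization.resnet18(pretrained=True, quantize=True)
model.eval()

# The PyTorch frontend consumes a traced/scripted module.
inp = torch.rand(1, 3, 224, 224)
script_module = torch.jit.trace(model, inp).eval()

# Convert to Relay; quantized ops such as quantized::conv2d and
# quantized::linear are translated to QNN ops.
input_infos = [("input", (1, 3, 224, 224))]  # hypothetical input name/shape
mod, params = relay.frontend.from_pytorch(script_module, input_infos)

# Compile for CPU and run.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

dev = tvm.cpu(0)
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input("input", inp.numpy())
runtime.run()
tvm_out = runtime.get_output(0).numpy()
```

The converted module can then be compared against the PyTorch output on sample inputs, which is what the ImageNet and quantized-module tests listed in the commit message exercise.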