(#3566) · d6dcd6c5

We observe multiple groups across a range of domains (ASR, NMT, LM, etc.), internally and externally, interested in replacing standard dense layers with block-sparse matrix multiplication layers. The motivations are generally:

* higher performance, due to the reduction in FLOPs and in memory bandwidth/cache footprint, and
* enabling larger models, e.g. fitting more layers within a given memory budget.

Some public work along these lines:

* https://openai.com/blog/block-sparse-gpu-kernels/
* https://openai.com/blog/sparse-transformer/
* https://arxiv.org/abs/1802.08435
* https://arxiv.org/abs/1711.02782

Various groups have been able to successfully train models with reasonable levels of sparsity (90%+) with only marginal accuracy changes, which suggests substantial speedups are possible, since 90% sparsity implies a >10x reduction in FLOPs. It is fairly straightforward to realize these theoretical speedups; see e.g. the TVM benchmarks for Intel CPUs in https://gist.github.com/ajtulloch/e65f90487bceb8848128e8db582fe902 and the CUDA results in https://github.com/openai/blocksparse. Related kernels and representations:

* https://github.com/openai/blocksparse (CUDA)
* https://software.intel.com/en-us/mkl-developer-reference-c-mkl-bsrmm (MKL BSRMM)
* https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.bsr_matrix.html (SciPy BSR representation)

This is extracted from a patch we have been using internally. Various extensions are possible (int8/fp16/bf16, CUDA/other GPU architectures), but this is a reasonable starting point. It still needs more thorough unit-test coverage, however.

We follow the conventions established by scipy.sparse.bsr_matrix and other libraries; see the unit tests for details. For folks interested in experimenting with scheduling/AutoTVM etc., https://gist.github.com/ajtulloch/e65f90487bceb8848128e8db582fe902 is a useful starting point.
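As a rough illustration of that convention (not part of this patch): the NumPy/SciPy sketch below builds a block-sparse weight in the scipy.sparse.bsr_matrix layout (data/indices/indptr) and computes the corresponding dense-layer output `Y = X @ W.T` directly from those three arrays, checking it against the dense result. The shapes, block size, and sparsity level are made up for illustration, and the TVM operator itself is not shown.

```python
import numpy as np
import scipy.sparse as sp

# Hypothetical shapes: an (M, K) weight with (BS_R, BS_C) blocks, ~90% of which are zero.
M, K, N = 128, 256, 4
BS_R, BS_C = 16, 16
rng = np.random.RandomState(0)
W = rng.randn(M, K).astype("float32")
block_mask = rng.rand(M // BS_R, K // BS_C) < 0.9
W[np.kron(block_mask, np.ones((BS_R, BS_C))).astype(bool)] = 0.0

# BSR representation, following scipy.sparse.bsr_matrix conventions:
#   data    -- (num_blocks, BS_R, BS_C) nonzero blocks
#   indices -- (num_blocks,) block-column index of each stored block
#   indptr  -- (M // BS_R + 1,) offsets of each block-row into data/indices
W_bsr = sp.bsr_matrix(W, blocksize=(BS_R, BS_C))
data, indices, indptr = W_bsr.data, W_bsr.indices, W_bsr.indptr

# Reference block-sparse dense layer: Y = X @ W.T computed from the BSR arrays.
X = rng.randn(N, K).astype("float32")
Y = np.zeros((N, M), dtype="float32")
for block_row in range(M // BS_R):
    for p in range(indptr[block_row], indptr[block_row + 1]):
        block_col = indices[p]
        # Accumulate the contribution of one (BS_R, BS_C) block of W.
        Y[:, block_row * BS_R:(block_row + 1) * BS_R] += (
            X[:, block_col * BS_C:(block_col + 1) * BS_C] @ data[p].T
        )

np.testing.assert_allclose(Y, X @ W.T, rtol=1e-4, atol=1e-4)
```

The block-row loop over indptr/indices is roughly the traversal a BSR sparse-dense kernel performs; scheduling work (TVM, AutoTVM, etc.) mainly changes how the inner (BS_R, BS_C) block product is vectorized and parallelized.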
Andrew Tulloch committed