Transformers for Audio Recognition: Introducing DATAR
Transformers have proven highly effective across a wide range of tasks, but the quadratic complexity of self-attention limits their applicability, particularly in low-resource settings and on mobile or edge devices. Previous attempts to reduce this cost rely on hand-crafted attention patterns, but such patterns are data-agnostic and often suboptimal: relevant keys or values may be discarded while less important ones are kept. Motivated by this insight, we present DATAR, a deformable audio Transformer for audio recognition.
DATAR pairs a deformable attention mechanism with a pyramid transformer backbone, so the attention pattern is learned from the data rather than fixed by hand. This architecture has already demonstrated its effectiveness in prediction tasks such as event classification. We further observe that computing the deformable attention map can over-simplify the input feature, potentially limiting performance. To address this, we introduce a learnable input adaptor that enhances the input feature, with which DATAR achieves state-of-the-art performance on audio recognition tasks.
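To make the idea concrete, here is a minimal, hedged sketch of deformable attention in PyTorch. It is not the paper's implementation: the module names, the single-head formulation, the tanh-bounded offsets, and the 4x4 reference grid are all illustrative assumptions. The key mechanism it shows is the one described above: offsets are predicted from the input feature, and keys/values are bilinearly sampled at the shifted reference points, so the attention pattern is learned rather than hand-crafted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttention(nn.Module):
    """Illustrative single-head deformable attention (not the official DATAR code).

    Offsets are predicted from the input feature map; keys and values are
    bilinearly sampled at (reference grid + offsets) via grid_sample, so
    each query attends to a small, data-dependent set of locations instead
    of all H*W positions.
    """
    def __init__(self, dim, n_ref=4):  # n_ref x n_ref sampling points (assumption)
        super().__init__()
        self.n_ref = n_ref
        self.offset_net = nn.Conv2d(dim, 2, 3, padding=1)  # predicts (x, y) offsets
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):  # x: (B, C, H, W) spectrogram feature map
        B, C, H, W = x.shape
        # Uniform reference grid in normalized [-1, 1] coordinates.
        ys = torch.linspace(-1, 1, self.n_ref)
        xs = torch.linspace(-1, 1, self.n_ref)
        grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)
        grid = grid.flip(-1)  # grid_sample expects (x, y) order
        grid = grid.unsqueeze(0).expand(B, -1, -1, -1)          # (B, n, n, 2)
        # Predict bounded offsets from the (pooled) input feature.
        ref_feat = F.adaptive_avg_pool2d(x, self.n_ref)          # (B, C, n, n)
        offsets = torch.tanh(self.offset_net(ref_feat))          # (B, 2, n, n)
        offsets = offsets.permute(0, 2, 3, 1)                    # (B, n, n, 2)
        # Sample deformed keys/values at the shifted reference points.
        sampled = F.grid_sample(x, grid + offsets, align_corners=True)
        q = self.q(x.flatten(2).transpose(1, 2))                 # (B, H*W, C)
        kv = sampled.flatten(2).transpose(1, 2)                  # (B, n*n, C)
        k, v = self.k(kv), self.v(kv)
        attn = (q @ k.transpose(1, 2)) * self.scale              # (B, H*W, n*n)
        out = attn.softmax(-1) @ v                               # (B, H*W, C)
        return out.transpose(1, 2).reshape(B, C, H, W)
```

Note the complexity win: attention is computed against n_ref * n_ref sampled keys (16 here) rather than all H*W positions, which is where the quadratic cost is avoided.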
Abstract: Transformers have achieved promising results on a variety of tasks. However, the quadratic complexity of self-attention computation has limited their applications, especially in low-resource settings and on mobile or edge devices. Existing works have proposed to exploit hand-crafted attention patterns to reduce computation complexity. However, such hand-crafted patterns are data-agnostic and may not be optimal. Hence, it is likely that relevant keys or values are being reduced, while less important ones are still preserved. Based on this key insight, we propose a novel deformable audio Transformer for audio recognition, named DATAR, where a deformable attention, equipped with a pyramid transformer backbone, is constructed and learnable. Such an architecture has been proven effective in prediction tasks, e.g., event classification. Moreover, we identify that the deformable attention map computation may over-simplify the input feature, which can be further enhanced. Hence, we introduce a learnable input adaptor to alleviate this issue, and DATAR achieves state-of-the-art performance.
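The learnable input adaptor mentioned above can be sketched as a small residual enhancement module. This is an assumption about its general shape, not the paper's actual design: the two-layer convolutional form and the residual connection are illustrative choices, chosen so the enhanced feature keeps the original signal while adding a learned correction before the attention-map computation.

```python
import torch
import torch.nn as nn

class InputAdaptor(nn.Module):
    """Hypothetical learnable input adaptor (illustrative, not DATAR's exact module).

    Adds a learned residual enhancement to the input feature so that the
    downstream deformable-attention computation, which may over-simplify
    the input, still receives an enriched signal.
    """
    def __init__(self, dim):
        super().__init__()
        self.enhance = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1),
        )

    def forward(self, x):  # x: (B, C, H, W)
        return x + self.enhance(x)  # residual keeps the original feature intact
```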