Neuromorphic object recognition with spiking neural networks (SNNs) is a cornerstone of low-power neuromorphic computing. A major obstacle for existing SNNs, however, is latency: they typically need 10 to 40 timesteps, or even more, to recognize a neuromorphic object, and their accuracy degrades sharply when the timestep budget is cut. This article introduces the Shrinking SNN (SSNN), an approach designed to reduce that latency without sacrificing performance.

The key idea behind SSNN is to reduce the temporal redundancy in SNNs by splitting the network into multiple stages with progressively shrinking timesteps, which cuts inference latency substantially. To keep information from being lost when the timestep count drops between stages, the authors propose a temporal transformer that smoothly rescales the temporal dimension while preserving as much information as possible.
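
The sketch below illustrates, in PyTorch, one way such a staged pipeline could be wired together. The module names (TemporalTransform, ShrinkingSNN), the 8→4→2 timestep schedule, and the softmax-weighted merging of timesteps are illustrative assumptions, not the paper's exact design; it is assumed that each stage internally loops over its own timesteps.

```python
import torch
import torch.nn as nn

class TemporalTransform(nn.Module):
    """Merge T_in timesteps into T_out timesteps with learned mixing weights.

    Hypothetical sketch of a temporal transformer: each output step is a
    softmax-weighted combination of the input steps, so no step is simply
    dropped. The paper's actual transformer may differ.
    """
    def __init__(self, t_in: int, t_out: int):
        super().__init__()
        # One mixing vector over the T_in input steps per output step.
        self.mix = nn.Parameter(torch.randn(t_out, t_in))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (T_in, B, C, H, W) -> (T_out, B, C, H, W)
        w = torch.softmax(self.mix, dim=1)            # (T_out, T_in)
        return torch.einsum('ot,tbchw->obchw', w, x)

class ShrinkingSNN(nn.Module):
    """Run spiking stages with progressively fewer timesteps, e.g. 8 -> 4 -> 2."""
    def __init__(self, stages: nn.ModuleList, timesteps=(8, 4, 2)):
        super().__init__()
        self.stages = stages  # each stage consumes a (T, B, C, H, W) tensor
        self.shrink = nn.ModuleList(
            TemporalTransform(t_in, t_out)
            for t_in, t_out in zip(timesteps[:-1], timesteps[1:])
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (T0, B, C, H, W) sequence of event frames.
        for i, stage in enumerate(self.stages):
            x = stage(x)               # spiking stage at the current timestep count
            if i < len(self.shrink):
                x = self.shrink[i](x)  # shrink T before the next stage
        return x.mean(0)               # rate-decode the final stage's output
```

Because each later stage processes fewer timesteps, the average timestep count per forward pass, and hence the latency, falls well below that of a fixed-timestep SNN.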

In addition to tackling latency, the authors address the accuracy drop that plagues SNNs at low latency by attaching multiple early classifiers to the network during training. These auxiliary classifiers inject the loss signal closer to the early layers, which mitigates both the mismatch between the surrogate gradient and the true gradient and the vanishing/exploding gradient problem. As a result, SSNN avoids the usual performance degradation and maintains high accuracy even at very few timesteps.
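
One way to realize this deep-supervision scheme is sketched below; the head design, the EarlyClassifier name, and the 0.3 auxiliary weight are placeholders rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyClassifier(nn.Module):
    """Auxiliary head: rate-decode a stage's spike output, pool, and classify.

    Illustrative only; the actual head architecture in the paper may differ.
    """
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (T, B, C, H, W) -> average over time, pool space, classify.
        feat = self.pool(x.mean(0)).flatten(1)  # (B, C)
        return self.fc(feat)                    # (B, num_classes)

def deep_supervision_loss(stage_logits, final_logits, target, aux_weight=0.3):
    """Combine the final loss with down-weighted early-classifier losses.

    Injecting a loss at every stage shortens each surrogate-gradient path;
    aux_weight = 0.3 is a placeholder hyperparameter, not the paper's value.
    """
    loss = F.cross_entropy(final_logits, target)
    for logits in stage_logits:
        loss = loss + aux_weight * F.cross_entropy(logits, target)
    return loss
```

Since the source states the early classifiers are added during training, the auxiliary heads can presumably be discarded at inference, adding no runtime cost.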

The effectiveness of SSNN is demonstrated through extensive experiments on the neuromorphic datasets CIFAR10-DVS, N-Caltech101, and DVS-Gesture, where it improves accuracy over the baseline by 6.55% to 21.41%. Notably, SSNN reaches 73.63% accuracy on CIFAR10-DVS with only 5 timesteps on average, without relying on any data augmentation.
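
(Because the timestep budget differs per stage, latency is naturally reported as an average over stages: a hypothetical schedule of 8, 5, and 2 timesteps across three stages, for instance, averages to (8 + 5 + 2) / 3 = 5.)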

This work presents a novel way to deal with latency in SNNs: introducing a heterogeneous temporal scale through timestep shrinkage. Combined with multiple early classifiers and information-preserving temporal transformation, SSNN delivers substantial accuracy gains at markedly reduced latency. These findings offer valuable insights for building high-performance, low-latency SNNs and pave the way for further advances in neuromorphic computing.
