Architectures that incorporate Computing-in-Memory (CiM) using emerging non-volatile memory (NVM) devices have become strong contenders for deep neural network (DNN) acceleration due to their impressive energy efficiency.

Computing-in-Memory architectures are attractive for DNN workloads because they perform matrix-vector operations where the weights are stored, avoiding the costly data movement between memory and compute units that dominates conventional accelerators. Pairing this idea with emerging non-volatile memory devices promises both high throughput and low energy consumption, which is why such architectures are opening up new possibilities for deep learning hardware.

Yet, a significant challenge arises when using these emerging devices: they can show substantial variations during the weight-mapping process. This can severely impact DNN accuracy if not mitigated.

The article points to a crucial challenge with these devices: when DNN weights are mapped to device conductances, the programmed values can deviate from their targets because of device-to-device and cycle-to-cycle variations. Since inference reads these conductances directly, the deviations act as weight errors and can noticeably degrade accuracy, so effective mitigation techniques are needed for reliable DNN inference.
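To make the effect concrete, here is a minimal Python sketch (not the paper's variation model) of how mapping variation can be emulated in software: each weight is perturbed with random noise standing in for conductance deviation, and the accuracy of the perturbed model can be compared against the ideal one. The Gaussian noise model, the sigma value, and the `model`/`test_loader` names are illustrative assumptions.

```python
# Illustrative sketch: emulate weight-mapping variation as Gaussian noise on the weights.
import copy
import torch

def inject_mapping_variation(model, sigma=0.02):
    """Return a copy of `model` whose weights carry random mapping noise."""
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))  # conductance deviation -> weight error
    return noisy

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    model.eval()
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1)
        correct += (pred == y.to(device)).sum().item()
        total += y.numel()
    return correct / total

# Usage (assuming `model` and `test_loader` exist):
# print(accuracy(model, test_loader), accuracy(inject_mapping_variation(model), test_loader))
```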

A widely accepted remedy for imperfect weight mapping is the iterative write-verify approach, which involves verifying conductance values and adjusting devices if needed.

Write-verify addresses imperfect mapping by programming a device, reading back its conductance, and reprogramming it until the value falls within a tolerance of the target. Repeating this read-and-adjust loop brings the mapped weights close to their intended values and largely restores DNN accuracy.
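As a rough illustration, the write-verify loop can be sketched as follows, assuming a hypothetical device interface with `program()` and `read()` methods; real pulse schemes, tolerances, and iteration limits differ by technology.

```python
# Illustrative sketch of iterative write-verify for one device, plus the
# exhaustive per-device programming assumed in prior work.
def write_verify(device, target, tol=0.01, max_iters=20):
    device.program(target)
    for _ in range(max_iters):
        readback = device.read()            # verify: read back the conductance
        error = target - readback
        if abs(error) <= tol:
            break                           # close enough to the target value
        device.program(readback + error)    # adjust and try again
    return device.read()

def program_all(devices, targets):
    # Exhaustive scheme: every device is verified, so programming time
    # grows with the total number of weights in the network.
    return [write_verify(d, t) for d, t in zip(devices, targets)]
```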

In all existing publications, this procedure is applied to every individual device, resulting in a significant programming time overhead.

The key limitation is that prior work applies this procedure exhaustively: every device in the crossbar goes through the read-and-adjust loop, and for modern DNNs with millions of weights the resulting programming time is substantial. This motivates a scheme that cuts programming time without compromising DNN accuracy.

In our research, we show that only a small fraction of weights need this write-verify treatment for the corresponding devices, and the DNN accuracy can still be preserved, yielding a notable programming acceleration.

The central observation is that DNN accuracy is far more sensitive to errors in some weights than in others. If write-verify is applied only to the small fraction of sensitivity-critical weights while the remaining devices are programmed with a single write, accuracy is essentially preserved and programming time drops accordingly, as sketched below.
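A minimal sketch of such selective programming might look like the following; the `scores` array, the 5% fraction, and the device interface are illustrative assumptions, and it reuses the `write_verify` routine sketched earlier.

```python
# Illustrative sketch: write-verify only the most sensitivity-critical weights.
import numpy as np

def selective_program(devices, targets, scores, fraction=0.05):
    k = max(1, int(fraction * len(devices)))
    critical = set(np.argsort(scores)[-k:].tolist())  # indices of top-k most important weights
    for i, (dev, tgt) in enumerate(zip(devices, targets)):
        if i in critical:
            write_verify(dev, tgt)   # iterative, accurate, slow
        else:
            dev.program(tgt)         # one-shot write, fast
```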

Building on this, we introduce USWIM, a novel method based on the second derivative. It leverages a single iteration of forward and backpropagation to pinpoint the weights demanding write-verify.

USWIM ranks weights by a sensitivity measure based on the second derivative of the loss, computed from a single forward and backward pass, and selects only the highest-ranked weights for write-verify. The selection step itself is therefore cheap: one pass over the network decides which devices deserve the extra programming effort.
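The paper defines the exact second-derivative criterion; as a hedged illustration, the sketch below scores weights with a squared-gradient proxy for the Hessian diagonal, which likewise needs only one forward and one backward pass. The `model`, `criterion`, and calibration batch are assumed to exist.

```python
# Illustrative sketch: per-weight sensitivity scores from one forward/backward pass.
# The squared gradient is used here as a stand-in curvature proxy, not USWIM's exact formula.
import torch

def sensitivity_scores(model, x, y, criterion):
    model.zero_grad()
    loss = criterion(model(x), y)   # single forward pass
    loss.backward()                 # single backward pass
    scores = []
    for p in model.parameters():
        if p.grad is not None:
            # Larger curvature proxy -> accuracy is more sensitive to errors in this weight.
            scores.append((p.grad.detach() ** 2).flatten())
    return torch.cat(scores)        # one score per weight, ready for top-k selection
```

The resulting scores can feed the selective programming sketch above, so only the highest-scoring weights receive write-verify.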

Through extensive tests on diverse DNN designs and datasets, USWIM manifests up to a 10x programming acceleration against the traditional exhaustive write-verify method, all while maintaining a similar accuracy level.

Across multiple DNN architectures and datasets, USWIM achieves up to a 10x reduction in programming time compared with exhaustively write-verifying every device, while keeping inference accuracy at a comparable level. The acceleration therefore comes without a meaningful accuracy penalty.

Furthermore, compared to our earlier SWIM technique, USWIM excels, showing a 7x speedup when dealing with devices exhibiting non-uniform variations.

The authors also compare USWIM with their earlier SWIM technique and report a 7x speedup when device variations are non-uniform, a setting where USWIM's selection proves especially advantageous.

Overall, the article underscores the programming challenges that emerging non-volatile memory devices pose for DNN acceleration and the need for more efficient programming approaches. USWIM addresses this by cutting programming time substantially while preserving DNN accuracy, contributing to the practicality of Computing-in-Memory architectures built on these devices.