Adversarial attacks can readily disrupt image classification systems, revealing the vulnerability of DNN-based recognition tasks. While existing adversarial perturbations are primarily applied to uncompressed images or to images compressed with traditional codecs such as JPEG, few studies have investigated the robustness of image classification models in the context of DNN-based image compression. With the rapid evolution of advanced image compression, DNN-based learned image compression has emerged as a promising approach for transmitting images in many security-critical applications, such as cloud-based face recognition and autonomous driving, due to its superior performance over traditional compression. There is therefore a pressing need to fully investigate the robustness of classification systems whose inputs are pre-processed by learned image compression. To bridge this research gap, we explore adversarial attacks on a new pipeline that targets image classification models using learned image compressors as pre-processing modules. Furthermore, to enhance the transferability of perturbations across various quality levels and architectures of learned image compression models, we introduce a saliency score-based sampling method that enables the fast generation of transferable perturbations. Extensive experiments with popular attack methods demonstrate the enhanced transferability of our proposed method when attacking images that have been compressed with different learned image compression models.
Adversarial attacks have long been a significant concern for image classification systems, exposing the vulnerability of deep neural network (DNN) based recognition tasks. While previous studies have focused on attacking uncompressed or traditionally compressed images, little is understood about the robustness of image classification models in the context of DNN-based image compression.
In recent years, DNN-based learned image compression has gained traction as a powerful method for transmitting images in security-critical applications such as cloud-based face recognition and autonomous driving, as its performance surpasses that of traditional compression techniques. It is therefore crucial to investigate the robustness of classification systems whose inputs pass through learned image compression as a pre-processing step.
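To make this pipeline concrete, the sketch below shows compression-then-classification in PyTorch, assuming CompressAI's bmshj2018_factorized model as the learned codec and a torchvision ResNet-50 as the classifier; the paper's exact architectures may differ.

```python
# Sketch of a compression-then-classification pipeline: the classifier never
# sees the original image, only the learned codec's reconstruction.
# Assumed stand-ins: CompressAI's bmshj2018_factorized codec, torchvision ResNet-50.
import torch
from compressai.zoo import bmshj2018_factorized
from torchvision.models import resnet50, ResNet50_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# Learned image codec (quality level 3 of 8), frozen for inference.
codec = bmshj2018_factorized(quality=3, pretrained=True).eval().to(device)

# Downstream classifier and its matching input transforms.
weights = ResNet50_Weights.DEFAULT
classifier = resnet50(weights=weights).eval().to(device)
preprocess = weights.transforms()

def classify_compressed(x: torch.Tensor) -> torch.Tensor:
    """Compress-decompress a [0,1] image batch, then classify the reconstruction."""
    x_hat = codec(x)["x_hat"].clamp(0, 1)  # reconstruction after the learned codec
    return classifier(preprocess(x_hat))   # logits computed on the decompressed image
```

Because the codec is itself a neural network, gradient-based attacks can back-propagate through this entire chain (or through a smooth proxy for its quantization step), which is part of what makes the pipeline an interesting attack surface.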
To fill this research gap, this study explores adversarial attacks on image classification models that use learned image compressors as pre-processing modules. This new pipeline offers insight into how such models behave under adversarial perturbations. Furthermore, to make the attacks effective across different quality levels and architectures of learned image compression models, a saliency score-based sampling method is introduced, enabling the rapid generation of transferable perturbations.
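The exact scoring and sampling rules of the method are not spelled out here, so the following is a hypothetical sketch of one plausible reading: score pixels by input-gradient magnitude and restrict PGD updates to the most salient locations, cutting per-step work while concentrating the perturbation on features likely shared across codecs. The helpers saliency_mask and masked_pgd and the keep_ratio parameter are illustrative assumptions, and pipeline can be, e.g., the classify_compressed function sketched above. Note that CompressAI's eval-mode rounding has a zero gradient, so when computing gradients one would typically switch the codec to train mode, which substitutes a differentiable uniform-noise proxy for rounding (a common trick, not necessarily the paper's).

```python
# Hypothetical saliency score-based sampling: mask the attack to the top
# fraction of pixels ranked by input-gradient magnitude. Illustrative only;
# the paper's actual scoring/sampling procedure may differ.
import torch
import torch.nn.functional as F

def saliency_mask(x, y, pipeline, keep_ratio=0.25):
    """Score each pixel by gradient magnitude; keep the top keep_ratio fraction."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(pipeline(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    score = grad.abs().sum(dim=1, keepdim=True)              # (B,1,H,W) saliency
    k = max(1, int(keep_ratio * score[0].numel()))
    cutoff = score.flatten(1).topk(k, dim=1).values[:, -1]   # per-image threshold
    return (score >= cutoff.view(-1, 1, 1, 1)).float()

def masked_pgd(x, y, pipeline, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD whose updates touch only the salient pixels."""
    mask = saliency_mask(x, y, pipeline)
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(pipeline((x + delta).clamp(0, 1)), y)
        (g,) = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * g.sign() * mask).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)
```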
Extensive experiments with popular attack methods demonstrate the enhanced transferability of the proposed method when targeting images compressed by a variety of learned image compression models.
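As a rough illustration of how such a transferability evaluation might be set up, reusing the names above, with cheng2020_anchor assumed as a second CompressAI architecture and x, y as a given image batch and its labels:

```python
# Craft perturbations against one surrogate codec, then test whether they
# survive re-compression by unseen quality levels and a different architecture.
from compressai.zoo import bmshj2018_factorized, cheng2020_anchor

def pipeline_with(codec_m):
    return lambda z: classifier(preprocess(codec_m(z)["x_hat"].clamp(0, 1)))

surrogate = bmshj2018_factorized(quality=3, pretrained=True).to(device)
surrogate.train()  # noise-based quantization proxy keeps gradients alive
unseen = [bmshj2018_factorized(quality=q, pretrained=True).eval().to(device)
          for q in (1, 5, 8)]
unseen.append(cheng2020_anchor(quality=3, pretrained=True).eval().to(device))

x_adv = masked_pgd(x, y, pipeline_with(surrogate))  # x, y: images in [0,1], labels

with torch.no_grad():
    for codec_t in unseen:
        acc = (pipeline_with(codec_t)(x_adv).argmax(1) == y).float().mean()
        print(f"post-attack accuracy under unseen codec: {acc.item():.3f}")
```

Lower accuracy on the unseen codecs indicates that the perturbation transfers rather than exploiting one specific compressor.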
The Multidisciplinary Nature of the Concepts
This research article touches on several multidisciplinary fields within multimedia information systems and related technologies, such as animation, artificial reality, augmented reality, and virtual reality.
Firstly, it addresses the vulnerability of image classification systems, a critical concern across multiple disciplines. Image classification models are used in applications ranging from video games and virtual reality simulations to general computer vision systems, and understanding their vulnerabilities and developing countermeasures is crucial wherever accurate and reliable image recognition is essential.
Secondly, the article delves into learned image compression and its impact on security-critical applications. This concept bridges multimedia information systems and artificial reality, as cloud-based face recognition and autonomous driving rely heavily on accurate and efficient image processing. By probing the robustness of classification systems whose inputs are pre-processed by learned image compression, this research contributes to the advancement of these fields.
Lastly, the proposed saliency score-based sampling method for generating transferable perturbations adds value to the field of augmented reality. Augmented reality experiences often overlay digital content onto real-world images or video streams and therefore require reliable, efficient image classification. Understanding how adversarial attacks can affect such systems is crucial for maintaining the integrity and security of these experiences.
Conclusion
This research article highlights the need to investigate the robustness of image classification models that use learned image compression as a pre-processing step. By exploring adversarial attacks on such models, the study provides valuable insight into their vulnerabilities and proposes a saliency score-based sampling method to enhance transferability across compression models. The multidisciplinary nature of this work connects fields within multimedia information systems, animation, artificial reality, augmented reality, and virtual reality, and it marks an important step toward more secure and reliable image classification systems in a rapidly evolving technological landscape.