arXiv:2409.18147v1 Announce Type: new Abstract: Fundus image classification is crucial in computer-aided diagnosis, but label noise significantly impairs the performance of deep neural networks. To address this challenge, we propose a robust framework, Self-Supervised Pre-training with Robust Adaptive Credal Loss (SSP-RACL), for handling label noise in fundus image datasets. First, we use Masked Autoencoders (MAE) for pre-training to extract features that are unaffected by label noise. Subsequently, RACL employs a superset learning framework, setting confidence thresholds and an adaptive label relaxation parameter to construct possibility distributions and provide more reliable ground-truth estimates, thus effectively suppressing the memorization effect. Additionally, we introduce clinical knowledge-based asymmetric noise generation to simulate real-world noisy fundus image datasets. Experimental results demonstrate that our proposed method outperforms existing approaches in handling label noise.

New Insights into Handling Label Noise in Fundus Image Classification


Fundus image classification is a crucial task in computer-aided diagnosis, but the presence of label noise significantly impairs the performance of deep neural networks. In this article, we propose a robust framework called Self-Supervised Pre-training with Robust Adaptive Credal Loss (SSP-RACL) to effectively handle label noise in fundus image datasets.

Traditionally, deep neural networks are trained on labeled datasets under the assumption that the labels are accurate and reliable. In real-world scenarios, however, labeling errors and inconsistencies are inevitable, and models that memorize these faulty labels produce incorrect predictions. The problem is particularly acute in medical imaging, where mislabeled or noisy data can have serious consequences.

The Role of Self-Supervised Pre-training

Our proposed framework begins with a self-supervised pre-training step using Masked Autoencoders (MAE). Because MAE is trained purely to reconstruct masked portions of the input image, it never consumes labels at all, so the extracted features are inherently unaffected by label noise. This allows the neural network to learn robust representations that capture the underlying structure of the fundus images.
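The idea can be sketched with a toy masked autoencoder. This is a minimal illustration under stated assumptions, not the authors' architecture; the `TinyMAE` class, patch dimensions, and masking ratio are all hypothetical choices for demonstration.

```python
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    """Toy masked autoencoder sketch: corrupt random patches, reconstruct them.
    Note that the training signal is reconstruction only; labels never appear."""
    def __init__(self, patch_dim=48, embed_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(patch_dim, embed_dim), nn.GELU())
        self.decoder = nn.Linear(embed_dim, patch_dim)

    def forward(self, patches, mask_ratio=0.75):
        # patches: (batch, num_patches, patch_dim)
        mask = torch.rand(patches.shape[:2]) < mask_ratio      # True = masked out
        corrupted = patches.masked_fill(mask.unsqueeze(-1), 0.0)
        recon = self.decoder(self.encoder(corrupted))
        # reconstruction loss is computed only on the masked patches
        return ((recon - patches) ** 2)[mask].mean()
```

For downstream classification, the decoder would be discarded and a classification head attached to the pre-trained encoder.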

The Power of Robust Adaptive Credal Loss

After the features are extracted through pre-training, we employ a superset learning framework known as Robust Adaptive Credal Loss (RACL). This approach sets confidence thresholds and adaptive label relaxation parameters to construct possibility distributions, providing more reliable ground-truth estimates. By adopting a credal perspective, RACL acknowledges the uncertainty in label noise and effectively suppresses the memorization effect often encountered in conventional learning methods.

RACL enables our framework to make more informed decisions by considering a range of possibilities rather than relying solely on deterministic labels. This approach not only enhances the model’s ability to handle label noise but also improves its generalization capabilities.
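One plausible way to turn a noisy label and a model's softmax output into such a possibility distribution is sketched below. The confidence threshold, relaxation parameter, and the rule itself are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def possibility_distribution(probs, noisy_label, threshold=0.8, relax=0.3):
    """Sketch: build a possibility distribution over classes.

    The observed (possibly noisy) label keeps full possibility. If the model
    is confident (max prob >= threshold) and disagrees with that label, the
    predicted class is also treated as fully possible; every other class
    receives a baseline possibility set by the relaxation parameter `relax`.
    """
    pi = np.full(len(probs), relax, dtype=float)   # relaxed baseline possibility
    pi[noisy_label] = 1.0                          # observed label stays fully possible
    pred = int(np.argmax(probs))
    if probs[pred] >= threshold and pred != noisy_label:
        pi[pred] = 1.0                             # confident prediction is also plausible
    return pi
```

The relaxation parameter controls how much weight the remaining classes retain, which is what keeps the model from committing fully to a single, potentially wrong, hard label.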

Simulating Real-World Noisy Datasets

To evaluate the efficacy of our proposed method, we introduce clinical knowledge-based asymmetric noise generation to simulate real-world noisy fundus image datasets. This technique accounts for how labels are actually produced in clinical settings, where annotators confuse clinically similar grades far more often than arbitrary ones, and thus provides a more faithful representation of the label noise encountered in practice.
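A simple version of such a generator flips each label to a clinically confusable class with some probability, rather than to a uniformly random class. The flip map below is a hypothetical example; the paper's actual confusion pairs are not reproduced here.

```python
import numpy as np

# Hypothetical confusable-class pairs for a 4-class grading task
# (indices and flip targets are illustrative, not taken from the paper).
FLIP_MAP = {1: 2, 2: 1, 3: 0}

def add_asymmetric_noise(labels, noise_rate=0.2, flip_map=FLIP_MAP, seed=0):
    """Flip each label to its confusable partner with probability noise_rate.
    Classes absent from flip_map are never corrupted, making the noise asymmetric."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    for i, y in enumerate(labels):
        if y in flip_map and rng.random() < noise_rate:
            noisy[i] = flip_map[y]
    return noisy
```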

Superior Performance and Future Directions

Experimental results demonstrate that our SSP-RACL framework outperforms existing approaches in handling label noise for fundus image classification. By leveraging self-supervised pre-training and robust adaptive credal loss, we achieve superior performance in accurately classifying fundus images despite the presence of label noise.

As for future directions, we aim to explore the integration of additional clinical knowledge and contextual information to further improve the performance of our framework. Additionally, investigating the applicability of our proposed method to other medical imaging tasks beyond fundus image classification holds promise for advancing the field of computer-aided diagnosis.

The paper titled “Self-Supervised Pre-training with Robust Adaptive Credal Loss (SSP-RACL) for Handling Label Noise in Fundus Image Datasets” addresses an important issue in the field of computer-aided diagnosis: the impact of label noise on the performance of deep neural networks in fundus image classification.

Label noise refers to incorrect labels assigned to images in a dataset, arising from causes such as human error or inconsistencies in the annotation process. Such noise can significantly impair the accuracy and reliability of the trained models.

To tackle this challenge, the authors propose a robust framework called SSP-RACL. The framework consists of two main components: pre-training using Masked Autoencoders (MAE) and the use of a superset learning framework with Robust Adaptive Credal Loss (RACL).

The pre-training phase with MAE aims to extract features from the fundus images that are unaffected by label noise. This step helps in learning more robust representations of the images, which can then be used for subsequent classification tasks.

The RACL component of the framework is responsible for handling the label noise. It employs a superset learning approach, where confidence thresholds and adaptive label relaxation parameters are set to construct possibility distributions. These distributions provide more reliable ground-truth estimates, effectively suppressing the memorization effect caused by noisy labels.
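To make the mechanism concrete, one common way a superset (credal) loss can be realized is to train optimistically toward whichever fully plausible candidate the model currently supports most. This is an illustrative sketch of that general idea, not the paper's exact RACL objective.

```python
import torch
import torch.nn.functional as F

def credal_superset_loss(logits, possibility):
    """Optimistic superset-loss sketch: restrict attention to classes whose
    possibility is 1 (the candidate set), then apply cross-entropy toward
    the candidate the model currently rates most probable."""
    probs = F.softmax(logits, dim=-1)
    candidates = (possibility >= 1.0).float()      # 1 for fully plausible classes
    target = (probs * candidates).argmax(dim=-1)   # most-supported candidate
    return F.cross_entropy(logits, target)
```

Because the target is re-selected from the candidate set at every step, the model is never forced to memorize a single noisy label that its own predictions contradict.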

In addition to the framework itself, the authors also introduce a clinical knowledge-based asymmetric noise generation method to simulate real-world noisy fundus image datasets. This allows for a more realistic evaluation of the proposed method.

The experimental results presented in the paper demonstrate that the SSP-RACL framework outperforms existing approaches in handling label noise. The proposed method shows superior performance in fundus image classification tasks, even in the presence of noisy labels.

Overall, this paper presents a promising approach to address the challenge of label noise in fundus image datasets. The combination of pre-training with MAE and the RACL framework shows potential for improving the accuracy and reliability of computer-aided diagnosis systems. Future research could focus on evaluating the proposed method on larger and more diverse datasets, as well as exploring its applicability to other medical imaging tasks.