arXiv:2407.09029v1
Abstract: Multimodal emotion recognition systems rely heavily on the full availability of modalities, suffering significant performance declines when modal data is incomplete. To tackle this issue, we present the Cross-Modal Alignment, Reconstruction, and Refinement (CM-ARR) framework, an innovative approach that sequentially engages in cross-modal alignment, reconstruction, and refinement phases to handle missing modalities and enhance emotion recognition. This framework utilizes unsupervised distribution-based contrastive learning to align heterogeneous modal distributions, reducing discrepancies and modeling semantic uncertainty effectively. The reconstruction phase applies normalizing flow models to transform these aligned distributions and recover missing modalities. The refinement phase employs supervised point-based contrastive learning to disrupt semantic correlations and accentuate emotional traits, thereby enriching the affective content of the reconstructed representations. Extensive experiments on the IEMOCAP and MSP-IMPROV datasets confirm the superior performance of CM-ARR under conditions of both missing and complete modalities. Notably, averaged across six scenarios of missing modalities, CM-ARR achieves absolute improvements of 2.11% in WAR and 2.12% in UAR on the IEMOCAP dataset, and 1.71% and 1.96% in WAR and UAR, respectively, on the MSP-IMPROV dataset.

The Cross-Modal Alignment, Reconstruction, and Refinement (CM-ARR) Framework: Enhancing Emotion Recognition in Multimodal Systems

A major challenge in multimodal emotion recognition is handling incomplete modal data: when one or more modalities are missing, recognition performance degrades sharply. To address this issue, the authors propose the Cross-Modal Alignment, Reconstruction, and Refinement (CM-ARR) framework.

The CM-ARR framework proceeds through three sequential phases: cross-modal alignment, reconstruction, and refinement. In the alignment phase, unsupervised distribution-based contrastive learning aligns the heterogeneous distributions of the different modalities, reducing cross-modal discrepancies while explicitly modeling semantic uncertainty.
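As a concrete illustration, here is a minimal sketch of one way distribution-based contrastive alignment can be implemented, assuming each modality encoder outputs a diagonal Gaussian (mean and log-variance) and that matched pairs are pulled together under an InfoNCE objective over the 2-Wasserstein distance between those Gaussians. The distance choice, the temperature, and the audio/text pairing are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def wasserstein2_sq(mu_a, std_a, mu_b, std_b):
    # Squared 2-Wasserstein distance between diagonal Gaussians,
    # computed for every cross-modal pair in the batch: (B, B) matrix.
    d_mu = (mu_a.unsqueeze(1) - mu_b.unsqueeze(0)).pow(2).sum(-1)
    d_std = (std_a.unsqueeze(1) - std_b.unsqueeze(0)).pow(2).sum(-1)
    return d_mu + d_std

def distribution_contrastive_loss(mu_audio, logvar_audio,
                                  mu_text, logvar_text, temperature=0.1):
    # Matched (audio, text) pairs are positives; every other pairing
    # in the batch serves as a negative (InfoNCE over -W2^2).
    std_a = torch.exp(0.5 * logvar_audio)
    std_t = torch.exp(0.5 * logvar_text)
    logits = -wasserstein2_sq(mu_audio, std_a, mu_text, std_t) / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric: align audio-to-text and text-to-audio.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Example: a batch of 8 paired utterances with 128-d Gaussian embeddings.
B, D = 8, 128
loss = distribution_contrastive_loss(torch.randn(B, D), torch.randn(B, D),
                                     torch.randn(B, D), torch.randn(B, D))
```

The 2-Wasserstein distance is a natural choice here because it has a closed form for diagonal Gaussians, so comparing whole distributions (rather than single points) stays cheap while still capturing the modeled uncertainty.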

In the reconstruction phase, CM-ARR applies normalizing flow models to transform the aligned distributions and recover missing modalities. Because normalizing flows are invertible, they can map between the aligned representation spaces in both directions, restoring multimodal information that was incomplete or unavailable and generating plausible representations of the missing data.
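To make the mechanism concrete, below is a minimal sketch of a RealNVP-style affine coupling layer, the basic invertible building block from which such flows are commonly stacked. The coupling design is an assumption for illustration; the paper's exact flow architecture is not specified here:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # One RealNVP-style coupling layer: half of the dimensions are
    # transformed by a scale/shift predicted from the other half,
    # keeping the mapping invertible with a tractable Jacobian.
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                # bound scales for stability
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(-1)              # log|det Jacobian| per sample
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (y2 - t) * torch.exp(-s)
        return torch.cat([y1, x2], dim=-1)

# A missing-modality embedding could be imputed by pushing an available
# modality's aligned representation through such a flow.
flow = AffineCoupling(dim=128)
z, log_det = flow(torch.randn(8, 128))
x_rec = flow.inverse(z)   # round-trips back to the input
```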

The final phase of CM-ARR is refinement. Here, supervised point-based contrastive learning disrupts purely semantic correlations among the representations and accentuates emotion-discriminative traits, enriching the affective content of the reconstructed representations and improving recognition.
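A minimal sketch of such a label-supervised, point-based contrastive objective follows, in the spirit of supervised contrastive learning; the temperature and tensor shapes are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.07):
    # Samples sharing an emotion label attract one another while all
    # others repel, pulling the space toward emotion categories
    # rather than purely semantic similarity.
    z = F.normalize(features, dim=-1)
    sim = z @ z.t() / temperature                     # (B, B) similarities
    B = z.size(0)
    self_mask = torch.eye(B, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))        # drop self-pairs
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-likelihood over each anchor's positives.
    pos_count = pos_mask.sum(1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_count
    return loss.mean()

# Example: 8 reconstructed embeddings drawn from 4 emotion classes.
feats = torch.randn(8, 128)
labels = torch.randint(0, 4, (8,))
loss = supervised_contrastive_loss(feats, labels)
```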

The CM-ARR framework has been extensively evaluated on the IEMOCAP and MSP-IMPROV datasets, where it outperforms prior methods under both missing- and complete-modality conditions. Averaged across six missing-modality scenarios, CM-ARR achieves absolute improvements of 2.11% in Weighted Average Recall (WAR) and 2.12% in Unweighted Average Recall (UAR) on IEMOCAP, and 1.71% in WAR and 1.96% in UAR on MSP-IMPROV.

Overall, the CM-ARR framework addresses the challenge of incomplete modal data in multimodal emotion recognition. By combining unsupervised and supervised contrastive learning with normalizing flows, it aligns modal distributions, reconstructs missing modalities, and refines the emotional content of the recovered representations. The approach has the potential to improve emotion recognition across multimedia information systems, including animation, augmented reality, and virtual reality applications.

Read the original article: https://arxiv.org/abs/2407.09029