Denoising and demosaicking are two essential steps in reconstructing a clean full-color video from raw sensor data. Traditionally, these steps are performed separately, but recent research suggests that performing them jointly, a task known as video joint denoising and demosaicking (VJDD), can yield better restoration performance. Achieving this, however, poses several challenges.

A key challenge in VJDD is ensuring the temporal consistency of consecutive frames, and this challenge becomes even more pronounced when perceptual regularization terms are introduced to enhance perceptual quality. The proposed VJDD framework addresses it through consistent and accurate latent space propagation.
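
To make the idea concrete, the sketch below shows one way such recurrent latent-space propagation could be wired up in PyTorch. The module structure, channel sizes, and packed-Bayer input format are illustrative assumptions, not the authors' architecture: the point is only that each frame's latent features are carried forward and fused into the next frame's restoration.

```python
# Minimal sketch (not the paper's implementation) of latent-space propagation:
# the latent features of the previous frame are fused with the current raw
# frame, so each prediction is conditioned on the previous estimate.
import torch
import torch.nn as nn

class RecurrentVJDD(nn.Module):
    def __init__(self, raw_ch=4, latent_ch=64):
        super().__init__()
        self.encode = nn.Conv2d(raw_ch, latent_ch, 3, padding=1)
        self.fuse = nn.Conv2d(2 * latent_ch, latent_ch, 3, padding=1)
        self.decode = nn.Conv2d(latent_ch, 3, 3, padding=1)  # RGB output

    def forward(self, raw_t, prev_latent=None):
        feat = torch.relu(self.encode(raw_t))
        if prev_latent is None:  # first frame: no history to propagate
            prev_latent = torch.zeros_like(feat)
        latent = torch.relu(self.fuse(torch.cat([feat, prev_latent], dim=1)))
        return self.decode(latent), latent  # latent is passed on to frame t+1

# Process a clip frame by frame, threading the latent state through time.
model = RecurrentVJDD()
latent = None
for raw_t in torch.randn(5, 1, 4, 64, 64):  # 5 packed Bayer frames
    rgb_t, latent = model(raw_t, latent)
```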

The framework leverages the estimates of previous frames as prior knowledge to ensure consistent recovery of the current frame. To this end, two losses are designed: the Data Temporal Consistency (DTC) loss and the Relational Perception Consistency (RPC) loss.
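
The article does not spell out the loss definitions, so the following sketches are hypothetical illustrations of what each term could look like: DTC as keeping the current prediction stable with respect to the propagated previous estimate, and RPC as matching the *relation* between consecutive outputs to the relation between the corresponding ground truths in a perceptual feature space `phi` (e.g., a pretrained VGG extractor). The paper's actual formulations may differ.

```python
# Hypothetical sketches of the two losses, written only to make the idea
# concrete; not the paper's exact definitions.
import torch
import torch.nn.functional as F

def dtc_loss(pred_from_est, pred_from_gt):
    """Data Temporal Consistency (illustrative): the restoration of frame t
    should not drift depending on errors in the propagated previous estimate,
    i.e. predicting from the estimated vs. ground-truth previous frame should
    give nearly the same result."""
    return F.l1_loss(pred_from_est, pred_from_gt)

def rpc_loss(phi, pred_t, pred_prev, gt_t, gt_prev):
    """Relational Perception Consistency (illustrative): the perceptual change
    between consecutive outputs should match that of the ground-truth frames,
    which tolerates global intensity changes better than direct warping."""
    rel_pred = phi(pred_t) - phi(pred_prev)
    rel_gt = phi(gt_t) - phi(gt_prev)
    return F.l1_loss(rel_pred, rel_gt)
```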

Compared to commonly used flow-based losses, the proposed losses offer two advantages: they circumvent the error accumulation caused by inaccurate flow estimation, and they handle intensity changes across frames gracefully, improving the temporal consistency of the output videos while preserving texture details.
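
For contrast, here is a standard flow-based temporal consistency loss of the kind the paper argues against (generic prior art, not the proposed method): the previous output is backward-warped to the current frame with estimated optical flow and compared pixel-wise. Any flow error pollutes the loss directly, and brightness changes between frames are penalized as if they were inconsistencies, which are exactly the two weaknesses the proposed losses avoid.

```python
# A conventional flow-based temporal consistency loss, for comparison only.
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Backward-warp img (N,C,H,W) with flow (N,2,H,W) via grid_sample."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(img.device)  # (2,H,W)
    coords = grid.unsqueeze(0) + flow
    # Normalize pixel coordinates to [-1, 1] as grid_sample expects.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack((coords_x, coords_y), dim=3),
                         align_corners=True)

def flow_consistency_loss(pred_t, pred_prev, flow, occlusion_mask):
    """Penalize deviation from the flow-warped previous output, ignoring
    occluded pixels. Accurate flow and constant brightness are assumed."""
    warped_prev = warp(pred_prev, flow)
    return (occlusion_mask * (pred_t - warped_prev).abs()).mean()
```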

The effectiveness of the proposed method is demonstrated through extensive experiments, in which it shows leading performance in restoration accuracy, perceptual quality, and temporal consistency. For researchers interested in exploring further, the code and dataset for the proposed method are made available at the provided URL.

Expert Analysis:

This article introduces an innovative approach to video restoration that performs denoising and demosaicking jointly. By addressing the temporal consistency challenge, the proposed VJDD framework achieves significant improvements in restoration accuracy, perceptual quality, and temporal consistency.

The use of consistent and accurate latent space propagation, leveraging prior knowledge from previous frames, is a valuable strategy in tackling the temporal consistency issue. The introduced Data Temporal Consistency (DTC) loss and Relational Perception Consistency (RPC) loss further enhance the framework’s ability to handle intensity changes and preserve texture details.

Importantly, by circumventing the error accumulation problem associated with inaccurate flow estimation in flow-based losses, the framework avoids the degradation of restoration performance caused by error propagation.

The availability of the code and dataset allows for easy replication and adoption of the proposed method. Researchers and practitioners can benefit from exploring this framework, contributing to advancements in video restoration techniques.

Read the original article