Screen content images typically contain a mix of natural and synthetic image
parts. Synthetic sections typically consist of uniformly colored areas and
repeating colors and patterns. In the VVC standard, these properties are
exploited using Intra Block Copy and Palette Mode. In this paper, we show that
pixel-wise lossless coding can outperform lossy VVC coding in such areas. We
propose an enhanced VVC coding approach for screen content images using the
principle of soft context formation. First, the image is separated into two
layers in a block-wise manner using a learning-based method with four block
features. Synthetic image parts are coded losslessly using soft context
formation, the rest with VVC. We modify the available soft context formation
coder to incorporate information gained by the decoded VVC layer for improved
coding efficiency. Using this approach, we achieve Bjontegaard-Delta-rate gains
of 4.98% on the evaluated data sets compared to VVC.

Analyzing Lossless Coding for Screen Content Images in the VVC Standard

In the field of multimedia information systems, there is a constant need to improve the efficiency of coding and compression techniques for various types of content. One specific area of interest is screen content images, which often contain a combination of natural and synthetic image parts. Synthetic sections in these images are characterized by uniformly colored areas and repeating colors and patterns.

In the latest VVC (Versatile Video Coding) standard, coding efficiency is improved through the use of Intra Block Copy and Palette Mode for synthetic sections. However, this paper presents a novel approach that demonstrates how pixel-wise lossless coding can outperform lossy VVC coding specifically in synthetic areas of screen content images.
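To make the palette idea concrete, here is a toy sketch of how a block with few distinct colors can be represented as a small color table plus an index map. It illustrates the general principle only, not VVC's actual Palette Mode syntax or signaling.

```python
# Toy illustration of the palette idea (not VVC's actual Palette Mode syntax):
# a block with few distinct colors is sent as a small color table plus an
# index map, which is much cheaper than transform-coding the raw samples.
import numpy as np

block = np.array([[10, 10, 200, 200],
                  [10, 10, 200, 200],
                  [10, 10,  10,  10],
                  [10, 10,  10,  10]])

palette, index_map = np.unique(block, return_inverse=True)
index_map = index_map.reshape(block.shape)

print("palette:", palette)        # [ 10 200]
print("index map:\n", index_map)  # one small index per sample
```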

The proposed approach involves separating the image into two layers in a block-wise manner using a learning-based method with four block features. The synthetic image parts are then coded losslessly using soft context formation, while the rest of the image is coded using VVC. This hybrid coding approach allows for more efficient compression and improved image quality in synthetic sections.
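As a rough illustration of such a pipeline, the sketch below classifies fixed-size blocks into a synthetic (lossless) layer or a natural (VVC) layer. The block size, the four features, and the decision-tree classifier are all assumptions made for this example; the paper's actual features and learning method are not detailed in this summary.

```python
# Hypothetical sketch of block-wise layer separation. Feature choices and the
# classifier are illustrative stand-ins, not the paper's exact method.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

BLOCK = 64  # assumed block size

def block_features(block: np.ndarray) -> np.ndarray:
    """Compute four simple per-block features (illustrative choices)."""
    b = block.astype(np.int32)
    n_colors = len(np.unique(b.reshape(-1, b.shape[-1]), axis=0))  # distinct colors
    grad_x = np.abs(np.diff(b, axis=1)).mean()                     # horizontal gradient
    grad_y = np.abs(np.diff(b, axis=0)).mean()                     # vertical gradient
    flat_ratio = np.mean(np.diff(b, axis=1) == 0)                  # share of flat runs
    return np.array([n_colors, grad_x, grad_y, flat_ratio])

def classify_blocks(image: np.ndarray, clf: DecisionTreeClassifier) -> np.ndarray:
    """Return a boolean map: True = synthetic block (lossless layer)."""
    h, w = image.shape[:2]
    mask = np.zeros((h // BLOCK, w // BLOCK), dtype=bool)
    for by in range(h // BLOCK):
        for bx in range(w // BLOCK):
            blk = image[by*BLOCK:(by+1)*BLOCK, bx*BLOCK:(bx+1)*BLOCK]
            mask[by, bx] = clf.predict([block_features(blk)])[0] == 1
    return mask
```

The classifier would be trained offline on blocks labeled as synthetic or natural; at coding time only the resulting block map needs to be signaled so the decoder knows which layer each block belongs to.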

What sets this approach apart is the incorporation of information gained from the decoded VVC layer into the soft context formation coder. This integration improves coding efficiency because the soft context formation coder can use already-decoded VVC pixels as additional context when coding the lossless layer, instead of treating those neighbors as unavailable.
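The following sketch shows one way such an integration could look conceptually: when collecting the causal neighborhood of a pixel in the lossless layer, neighbors that fall in the VVC layer are read from the decoded VVC reconstruction, which both encoder and decoder have available. The function and its neighbor pattern are illustrative assumptions, not the reference implementation.

```python
# Conceptual sketch (not the reference implementation): neighbors belonging
# to the VVC layer are filled from the decoded VVC reconstruction instead of
# being treated as unavailable when forming the pixel context.
import numpy as np

def causal_context(y, x, lossless, vvc_decoded, layer_mask):
    """Collect left/top/top-left/top-right neighbors of pixel (y, x).

    layer_mask[y, x] is True where the pixel belongs to the lossless layer.
    """
    context = []
    for dy, dx in [(0, -1), (-1, 0), (-1, -1), (-1, 1)]:
        ny, nx = y + dy, x + dx
        if ny < 0 or nx < 0 or nx >= lossless.shape[1]:
            context.append(0)                         # image border: default value
        elif layer_mask[ny, nx]:
            context.append(int(lossless[ny, nx]))     # already-coded lossless pixel
        else:
            context.append(int(vvc_decoded[ny, nx]))  # decoded VVC pixel
    return tuple(context)
```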

The reported results show Bjontegaard-Delta-rate gains of 4.98% on the evaluated data sets compared to VVC alone, a notable improvement in compression performance for screen content images.
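For context, the Bjontegaard-Delta-rate metric compares two rate-distortion curves by fitting log-rate as a polynomial of quality and averaging the rate difference over the overlapping quality range. The sketch below shows the classic cubic-fit variant with made-up numbers; it is not the exact evaluation script used in the paper.

```python
# Classic Bjontegaard-Delta-rate calculation (cubic-fit variant), shown only
# to illustrate how such a percentage is typically obtained. All rate/PSNR
# numbers in the example are fabricated for demonstration.
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average rate difference (%) of the test codec vs. the anchor."""
    p_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)  # log-rate vs. quality
    p_t = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))              # overlapping quality range
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_diff) - 1) * 100  # negative values mean bitrate savings

# Example with fabricated numbers (illustrative only):
print(bd_rate([1000, 1500, 2200, 3300], [34.0, 36.0, 38.0, 40.0],
              [950, 1420, 2080, 3120], [34.1, 36.1, 38.0, 40.1]))
```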

This research highlights the multi-disciplinary nature of concepts in multimedia information systems, combining techniques from image processing, coding standards (VVC), and machine learning. It also bears on animation and artificial, augmented, and virtual reality applications, as efficient compression of screen content images is crucial for seamless and immersive user experiences in these domains.

Read the original article