arXiv:2409.12980v1 Abstract: Recently, NVS in human-object interaction scenes has received increasing attention. Existing human-object interaction datasets mainly consist of static data with limited views, offering only RGB images or videos, mostly containing interactions between a single person and objects. Moreover, these datasets exhibit complexities in lighting environments, poor synchronization, and low resolution, hindering high-quality human-object interaction studies. In this paper, we introduce a new people-object interaction dataset that comprises 38 series of 30-view multi-person or single-person RGB-D video sequences, accompanied by camera parameters, foreground masks, SMPL models, some point clouds, and mesh files. Video sequences are captured by 30 Kinect Azures, uniformly surrounding the scene, each in 4K resolution at 25 FPS, and lasting for 1~19 seconds. Meanwhile, we evaluate some SOTA NVS models on our dataset to establish the NVS benchmarks. We hope our work can inspire further research in human-object interaction.
The article “NVS in Human-Object Interaction Scenes: A New Dataset and Benchmark” introduces a new dataset that addresses the limitations of existing human-object interaction datasets. These datasets typically offer only static data with limited views, such as RGB images or videos, and focus on interactions between a single person and objects. Moreover, they suffer from complex lighting environments, poor synchronization, and low resolution, which hinders high-quality studies in human-object interaction. To overcome these challenges, the authors present a new dataset comprising 38 series of 30-view multi-person or single-person RGB-D video sequences. The dataset includes camera parameters, foreground masks, SMPL models, point clouds (for some sequences), and mesh files. The video sequences are captured by 30 Kinect Azures, uniformly surrounding the scene, and each sequence has a resolution of 4K at 25 FPS, lasting for 1 to 19 seconds. Additionally, the authors evaluate some state-of-the-art NVS (novel view synthesis) models on their dataset to establish benchmarks for NVS performance. The authors hope that their work will inspire further research in the field of human-object interaction.

An Innovative Approach to Human-Object Interaction Studies

Human-object interaction is a complex and intriguing area of study that has garnered increasing attention in recent years. Researchers have been exploring different datasets and models to better understand the dynamics of human-object interactions. However, the existing datasets have several limitations, such as static data with limited views and poor synchronization, which hinder high-quality studies in this field.

In this paper, we introduce a new people-object interaction dataset that aims to address these limitations and provide researchers with a comprehensive resource for their studies. Our dataset comprises 38 series of 30-view multi-person or single-person RGB-D video sequences captured by 30 Kinect Azures. Each sequence is accompanied by camera parameters, foreground masks, SMPL models, point clouds (for some sequences), and mesh files.

One of the key innovations of our dataset is the use of 30 Kinect Azures, uniformly surrounding the scene. This setup allows for a more comprehensive view of the human-object interactions from different angles. Additionally, the videos are captured in 4K resolution at 25 frames per second, providing a high level of detail and smoothness in the captured interactions.
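The paper does not publish the exact rig geometry, but "30 cameras uniformly surrounding the scene" suggests an evenly spaced ring. A minimal sketch of such a layout, with the radius and camera height chosen purely for illustration:

```python
import math

def ring_camera_positions(n_cameras=30, radius=3.0, height=1.5):
    """Place n_cameras uniformly on a circle of the given radius around
    the scene origin, all at the same height. Radius and height are
    assumed values, not taken from the paper."""
    positions = []
    for i in range(n_cameras):
        theta = 2.0 * math.pi * i / n_cameras  # uniform angular spacing
        positions.append((radius * math.cos(theta),
                          radius * math.sin(theta),
                          height))
    return positions

cams = ring_camera_positions()
# With 30 cameras, adjacent views are 12 degrees apart (360 / 30).
```

With each camera oriented toward the scene center, this spacing gives roughly uniform angular coverage of the interaction volume, which is what makes dense multi-view NVS evaluation possible.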

By including camera parameters, foreground masks, SMPL models, point clouds, and mesh files, we aim to empower researchers to explore various aspects of human-object interaction. These additional data types can be used to study factors such as pose estimation, motion tracking, object recognition, and scene reconstruction. This comprehensive dataset opens up new possibilities for in-depth analysis and development of innovative solutions.
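To make the list of per-sequence assets concrete, here is a hypothetical way a loader might group them. The file names, extensions, and directory scheme below are illustrative assumptions; the paper does not specify the on-disk layout:

```python
from dataclasses import dataclass

@dataclass
class SequenceAssets:
    """Hypothetical grouping of the assets the dataset provides per
    sequence; the path scheme is an assumption, not the paper's."""
    rgbd_videos: list      # one RGB-D stream per camera view
    camera_params: str     # per-camera intrinsics and extrinsics
    foreground_masks: str  # masks separating people/objects from background
    smpl_models: str       # SMPL body-model parameters
    point_clouds: str      # point clouds (provided for some sequences)
    meshes: str            # reconstructed mesh files

def load_sequence(root, seq_id, n_views=30):
    # Illustrative path scheme only.
    return SequenceAssets(
        rgbd_videos=[f"{root}/{seq_id}/view_{v:02d}.mkv" for v in range(n_views)],
        camera_params=f"{root}/{seq_id}/cameras.json",
        foreground_masks=f"{root}/{seq_id}/masks/",
        smpl_models=f"{root}/{seq_id}/smpl/",
        point_clouds=f"{root}/{seq_id}/points/",
        meshes=f"{root}/{seq_id}/meshes/",
    )
```

Grouping the 30 views with their shared calibration and per-frame annotations in one record is what lets downstream tasks (pose estimation, segmentation, reconstruction) consume the same sequence without re-resolving paths.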

To establish NVS benchmarks on this data, we have evaluated some state-of-the-art novel view synthesis models on our dataset. By comparing the performance of these models, we hope to provide researchers with a reference point for their own experiments and help drive advancements in this field.
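The paper does not say here which metrics its benchmark reports, but PSNR against held-out ground-truth views is a standard NVS fidelity measure. A pure-Python sketch over flat pixel lists (real pipelines would use array libraries and add SSIM/LPIPS):

```python
import math

def psnr(rendered, reference, max_val=255.0):
    """Peak signal-to-noise ratio between a rendered novel view and the
    held-out ground-truth image, both given as flat pixel lists."""
    assert len(rendered) == len(reference)
    mse = sum((a - b) ** 2 for a, b in zip(rendered, reference)) / len(rendered)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

In a typical protocol, a subset of the 30 camera views is held out during training, and each model is scored by rendering those unseen viewpoints and comparing against the captured frames.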

The availability of this new dataset and the benchmarking results on NVS models mark a significant step forward in human-object interaction studies. We believe that this work will inspire further research and innovation in this field, leading to improved understanding and practical applications of human-object interactions.

We are excited to contribute to the research community with this novel dataset and look forward to witnessing the discoveries and advancements that our dataset will enable. We encourage researchers to explore the possibilities and challenges offered by this dataset and to collaborate in pushing the boundaries of human-object interaction studies.

The paper arXiv:2409.12980v1 introduces a new people-object interaction dataset that aims to address the limitations of existing human-object interaction datasets. These existing datasets typically consist of static data with limited views, such as RGB images or videos, and mostly focus on interactions between a single person and objects. However, this new dataset offers a more comprehensive and dynamic perspective by including 38 series of 30-view multi-person or single-person RGB-D video sequences.

One of the key contributions of this dataset is the inclusion of camera parameters, foreground masks, SMPL models, point clouds, and mesh files. These additional data types provide valuable information for researchers studying human-object interaction. The camera parameters can help in understanding the spatial relationship between the subjects and objects, while the foreground masks can aid in segmenting the people and objects from the background. The SMPL models, point clouds, and mesh files enable detailed analysis and reconstruction of the human body and objects involved in the interactions.

Furthermore, the dataset utilizes 30 Kinect Azures, which are uniformly positioned around the scene, to capture the video sequences. This setup ensures a comprehensive coverage of the interactions from different angles. The videos are recorded in 4K resolution at 25 FPS, providing high-quality visual data for analysis. The duration of the video sequences varies from 1 to 19 seconds, allowing for the study of both short and long interactions.
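The stated capture parameters imply a substantial data volume. A back-of-envelope estimate for the longest sequence, assuming 4K means 3840x2160 and 3 bytes per RGB pixel (depth streams and compression are ignored, so this is an upper-bound sketch, not a figure from the paper):

```python
def raw_rgb_bytes(width=3840, height=2160, fps=25, seconds=19, n_cameras=30,
                  bytes_per_pixel=3):
    """Uncompressed RGB size for one multi-view sequence, using the
    paper's resolution, frame rate, camera count, and max duration."""
    frames = fps * seconds
    return width * height * bytes_per_pixel * frames * n_cameras

total = raw_rgb_bytes()
# About 3.5e11 bytes (~355 GB) uncompressed for one 19-second, 30-view capture.
```

This is why such datasets ship compressed video plus derived assets (masks, SMPL fits, point clouds) rather than raw frames.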

In addition to introducing the dataset, the authors also evaluate some state-of-the-art NVS (novel view synthesis) models on this dataset to establish benchmarks. This evaluation provides a baseline for future research and allows for the comparison of different NVS models in the context of human-object interaction.

Overall, this new people-object interaction dataset addresses the limitations of existing datasets by providing a more comprehensive and dynamic perspective on human-object interactions. The inclusion of various data types and the use of multiple Kinect Azures for data capture contribute to the richness and quality of the dataset. The evaluation of NVS models on this dataset further enhances its usefulness by establishing benchmarks for future research. This work has the potential to inspire and advance further research in the field of human-object interaction.