In the realm of digital situational awareness during disaster situations,
accurate digital representations, like 3D models, play an indispensable role.
To ensure the safety of rescue teams, robotic platforms are often deployed to
generate these models. In this paper, we introduce an innovative approach that
combines the capabilities of compact Unmanned Aerial Vehicles (UAVs), smaller
than 30 cm and equipped with 360-degree cameras, with recent advances in Neural
Radiance Fields (NeRFs). A NeRF is a neural network that infers a 3D
representation of a scene from a set of 2D images and can then render it from
arbitrary viewpoints on demand. This method is especially suited to urban environments
which have experienced significant destruction, where the structural integrity
of buildings is compromised to the point of barring entry, as is common after
earthquakes and severe fires. We have tested our approach in a recent
post-fire scenario, underlining the efficacy of NeRFs even in
challenging outdoor environments characterized by water, snow, varying light
conditions, and reflective surfaces.

Accurate digital representations, such as 3D models, play an indispensable role in situational awareness during disasters. These models help rescue teams understand the environment and make informed decisions. A key challenge in generating them is obtaining up-to-date, detailed data, especially in urban environments that have suffered significant destruction.

Robotic platforms, such as Unmanned Aerial Vehicles (UAVs), have become increasingly popular for collecting data in disaster scenarios. These compact UAVs, smaller than 30 cm, are equipped with 360-degree cameras to capture high-resolution images from multiple angles. This imagery is then used to generate digital 3D models of the affected areas.
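Before reconstruction, each 360-degree frame, typically stored as an equirectangular panorama, is commonly resampled into conventional perspective views. The paper does not detail this preprocessing step; the sketch below shows one standard way to extract a perspective crop from a panorama, with illustrative function and parameter names.

```python
import numpy as np

def equirect_to_perspective(equirect, fov_deg, yaw_deg, pitch_deg, out_hw):
    """Resample a perspective crop out of an equirectangular panorama.

    equirect: (H, W, 3) panorama covering 360 x 180 degrees.
    fov_deg:  horizontal field of view of the virtual pinhole camera.
    yaw_deg, pitch_deg: viewing direction of the crop, in degrees.
    out_hw:   (height, width) of the output image.
    """
    H, W = equirect.shape[:2]
    out_h, out_w = out_hw
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels

    # Pixel grid -> camera-frame ray directions (x right, y down, z forward).
    xs = np.arange(out_w) - out_w / 2 + 0.5
    ys = np.arange(out_h) - out_h / 2 + 0.5
    x, y = np.meshgrid(xs, ys)
    dirs = np.stack([x, y, np.full_like(x, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate the rays by yaw (around the vertical axis) and pitch.
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    dirs = dirs @ (Ry @ Rx).T

    # Ray direction -> spherical angles -> panorama pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])   # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))  # latitude in [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return equirect[v, u]  # nearest-neighbour lookup into the panorama
```

Real pipelines would use bilinear interpolation and calibrated camera models, but the geometry is the same.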

The traditional process of generating 3D models from images involves complex algorithms and time-consuming computation. Recent advances in Neural Radiance Fields (NeRFs), however, offer a more efficient and accurate alternative. NeRFs are neural networks that infer a 3D representation of a scene from a set of 2D images and can then render it from arbitrary viewpoints on demand.
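At its core, a NeRF maps a 3D sample point (and viewing direction) to a density and a color, and renders a pixel by alpha-compositing samples along the camera ray. The toy NumPy sketch below illustrates that compositing step, using a hand-written sphere as a stand-in for the trained network; it is a minimal illustration of the rendering equation, not the paper's implementation.

```python
import numpy as np

def render_ray(field, origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume-render a single ray through a radiance field.

    field(points) -> (densities, colors): any callable mimicking the NeRF
    MLP, taking (N, 3) sample points and returning (N,) densities and
    (N, 3) RGB colors.
    """
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction          # samples along the ray
    sigma, rgb = field(points)

    delta = np.diff(t, append=t[-1] + (far - near) / n_samples)
    alpha = 1.0 - np.exp(-sigma * delta)              # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                           # transmittance * opacity
    return (weights[:, None] * rgb).sum(axis=0)       # composited pixel colour

def toy_field(points):
    """Stand-in for the trained MLP: a solid red unit sphere at the origin."""
    inside = np.linalg.norm(points, axis=-1) < 1.0
    sigma = np.where(inside, 10.0, 0.0)
    rgb = np.tile([1.0, 0.0, 0.0], (len(points), 1))
    return sigma, rgb
```

A ray fired from `(0, 0, -3)` toward the sphere composites to a nearly saturated red pixel, while a ray that misses the sphere stays black; a real NeRF simply replaces `toy_field` with the trained network and repeats this for every pixel.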

In this innovative approach, compact UAVs equipped with 360-degree cameras are deployed to capture images of disaster-stricken areas. These images are then fed into a NeRF network, which analyzes the scene and generates a detailed 3D representation. The resulting model can be explored and visualized from different viewpoints, providing a comprehensive understanding of the environment.
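Exploring the model from a different viewpoint begins with casting one ray per pixel from a virtual pinhole camera placed at the desired pose; each ray is then volume-rendered through the field. A minimal NumPy sketch of that ray generation, with illustrative names:

```python
import numpy as np

def camera_rays(height, width, focal, cam_to_world):
    """Generate one ray (origin, direction) per pixel for a pinhole camera.

    cam_to_world: 4x4 pose matrix placing the virtual camera in the scene;
    the resulting rays are what the radiance field would be queried along
    to synthesize this novel view.
    """
    i, j = np.meshgrid(np.arange(width), np.arange(height))
    # Camera-frame directions: x right, y down, z forward.
    dirs = np.stack([(i - width / 2 + 0.5) / focal,
                     (j - height / 2 + 0.5) / focal,
                     np.ones_like(i, dtype=float)], axis=-1)
    # Rotate into the world frame and normalize.
    rays_d = dirs @ cam_to_world[:3, :3].T
    rays_d /= np.linalg.norm(rays_d, axis=-1, keepdims=True)
    # All rays share the camera centre as their origin.
    rays_o = np.broadcast_to(cam_to_world[:3, 3], rays_d.shape)
    return rays_o, rays_d
```

Moving the virtual camera is just a matter of changing `cam_to_world`, which is what makes free viewpoint exploration of the reconstructed scene possible.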

This approach is particularly well-suited for urban environments that have suffered significant damage, such as buildings with compromised structural integrity. In post-earthquake or post-fire scenarios, where entry into buildings may be unsafe, the ability to generate accurate 3D models from external imagery becomes crucial.

The authors of the paper have tested their approach in a recent post-fire scenario, highlighting the efficacy of NeRFs even in challenging outdoor environments characterized by water, snow, varying light conditions, and reflective surfaces. These real-world tests demonstrate the potential of this technology to enhance digital situational awareness in disaster response and recovery operations.

This innovative approach to digital situational awareness in disaster situations brings together multiple disciplines, including robotics, computer vision, and neural networks. The use of compact UAVs with 360-degree cameras allows for efficient data collection, while NeRFs provide a powerful tool for generating accurate 3D models. The multi-disciplinary nature of this research highlights the importance of collaboration and integration of expertise from different fields to address complex challenges.
