This paper presents a novel method for reconstructing 3D garment models from
a single image of a posed user. Previous studies have primarily focused on
accurately reconstructing garment geometry to match the input image, which can
produce unnatural-looking results when the garment is deformed into new poses.
To overcome this limitation, our method instead infers the fundamental shape of
the garment as a sewing pattern, rather than reconstructing the 3D garment
directly. The method consists of two stages. First, given a single image of a
posed user, it predicts an image of the garment worn in a T-pose, representing
the garment's baseline form.
Then, it estimates the sewing pattern parameters from the T-pose garment
image. By stitching and draping the sewing pattern in a physics simulation, we
generate 3D garments that deform adaptively to arbitrary poses. The
effectiveness of our method is validated through ablation studies of its major
components and comparisons with other approaches.

This paper presents a novel approach to reconstructing 3D garment models from a single image, with a focus on achieving natural-looking garments even when they are deformed into new poses. Previous studies have primarily focused on accurately reconstructing garment geometries, but this often leads to unnatural-looking results when the garments are posed differently. To address this limitation, the proposed method instead infers the fundamental shape of the garment through sewing patterns.

The multi-disciplinary nature of this research is highlighted through its use of computer vision, computer graphics, and physics simulation. Using computer vision techniques, the method predicts an image of the garment worn in a T-pose, which represents the baseline form of the garment. This prediction is then used to estimate sewing pattern parameters. With this information, a physics simulation is employed to stitch and drape the sewing pattern.
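The two-stage pipeline described above can be sketched in code. This is a minimal, hypothetical illustration of the data flow only: the function names, placeholder logic, and parameter choices are assumptions for exposition, not the authors' implementation (which uses learned models at each stage).

```python
# Hypothetical sketch of the two-stage pipeline: posed image -> T-pose
# garment image -> sewing pattern parameters. Placeholder logic only.

def predict_tpose_garment(posed_image):
    """Stage 1 (placeholder): map a posed-user image to a T-pose garment image.

    In the paper this is a learned image-to-image prediction; here we pass
    the input through unchanged just to keep the sketch runnable.
    """
    return posed_image

def estimate_pattern_params(tpose_image):
    """Stage 2 (placeholder): regress sewing-pattern parameters
    (e.g. illustrative panel dimensions) from the T-pose garment image."""
    height = len(tpose_image)
    width = len(tpose_image[0])
    return {"panel_width": width / 2.0, "panel_length": height / 2.0}

def reconstruct_garment(posed_image):
    """Run both stages in sequence; the output parameters would then be
    stitched and draped by a physics simulator."""
    tpose_image = predict_tpose_garment(posed_image)
    return estimate_pattern_params(tpose_image)

# Toy 10x8 "image" standing in for a photo of a posed user.
params = reconstruct_garment([[0.0] * 8 for _ in range(10)])
```

The key design point is that the output of the pipeline is a low-dimensional pattern parameterization rather than a mesh, which is why the final garment can be re-draped on arbitrary poses.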

The use of physics simulation in garment modeling is an important aspect of this research. By simulating the behavior of the fabric and how it interacts with the sewing pattern, the resulting 3D garments can adaptively deform to arbitrary poses. This allows for more natural-looking garments that accurately mimic real-world clothing behavior.
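To make the draping step concrete, here is a toy explicit mass-spring integration step, the textbook scheme underlying many cloth simulators. It is a generic illustration of how stitched panels settle under gravity, not the specific simulator used in the paper; all constants are illustrative.

```python
# Toy 2D mass-spring cloth step: springs pull particles toward their rest
# length (Hooke's law) while gravity pulls everything down. Generic
# textbook scheme, not the paper's simulator.

def step(positions, velocities, springs, rest_len,
         k=50.0, mass=1.0, gravity=-9.8, damping=0.98, dt=0.01):
    # Start with gravity acting on every particle.
    forces = [[0.0, gravity * mass] for _ in positions]
    for i, j in springs:
        dx = positions[j][0] - positions[i][0]
        dy = positions[j][1] - positions[i][1]
        length = (dx * dx + dy * dy) ** 0.5 or 1e-9
        f = k * (length - rest_len)  # spring force magnitude (Hooke's law)
        fx, fy = f * dx / length, f * dy / length
        forces[i][0] += fx; forces[i][1] += fy
        forces[j][0] -= fx; forces[j][1] -= fy
    # Damped explicit Euler integration.
    for p, v, f in zip(positions, velocities, forces):
        v[0] = (v[0] + dt * f[0] / mass) * damping
        v[1] = (v[1] + dt * f[1] / mass) * damping
        p[0] += dt * v[0]; p[1] += dt * v[1]
    return positions, velocities

# Two particles joined by one spring at rest length, falling under gravity.
pos = [[0.0, 0.0], [1.0, 0.0]]
vel = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(10):
    pos, vel = step(pos, vel, springs=[(0, 1)], rest_len=1.0)
```

In a full pipeline, the particles would be the vertices of the stitched pattern panels, with additional seam constraints and body-collision handling, so the same garment parameters drape naturally on any pose.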

The effectiveness of the proposed method is validated through ablation studies of its major components and comparisons with other approaches. The ablations confirm that each component contributes to the overall performance, while the comparisons establish the method's advantage in generating natural-looking 3D garments.

In conclusion, this paper introduces a novel approach to reconstructing 3D garments from a single image by inferring garment shape through sewing patterns. By incorporating computer vision, computer graphics, and physics simulation, the method generates natural-looking garments that can adaptively deform to arbitrary poses. The validation through ablation studies and comparisons with other approaches further strengthens the credibility of this research.
