Articulated Furniture Recovery from Rest-State Multi-View Images
Daeun Lee*, Jaeah Lee, Woosung Kim, and 2 more authors
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2026 (under review)
Digital twins must be fully interactive replicas that capture both 3D geometry and articulated structure so that objects can be manipulated as in real life. Recent methods have advanced articulated object reconstruction by recovering part-wise geometry and joint parameters, but they often rely on restrictive input conditions, such as observations of multiple articulation states or prior knowledge of the number of parts, which limits their practical applicability. To overcome these constraints, we propose a rest-state formulation that reconstructs articulated objects such as furniture from multi-view images of a single rest state in which all parts remain closed. Our pipeline first reconstructs a surface mesh, then performs 3D functional segmentation of openable parts, even on objects with uniform or repetitive geometry, by leveraging 2D foundation models. It next converts the incomplete surface mesh into watertight part meshes through solidification and amodal shape blending. Finally, it estimates joint parameters by integrating geometric, semantic, and physical constraints to achieve realistic articulation. Our experiments demonstrate that the rest-state approach achieves high-quality reconstruction and accurate articulation without requiring multiple articulation states or part annotations.
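To make the constraint-based joint estimation concrete, the sketch below illustrates one way a physical-plausibility term could be scored: a candidate prismatic joint is swept through its motion range and penalized whenever the moving part interpenetrates the rest of the object. All function names, the axis-aligned bounding-box approximation, and the toy drawer/cabinet geometry are illustrative assumptions, not the paper's actual implementation.

```python
"""Minimal sketch of a physical-plausibility score for candidate joints,
assuming AABB geometry and a prismatic (sliding) motion. Hypothetical,
not the authors' implementation."""
import numpy as np

def aabb_overlap_volume(a_min, a_max, b_min, b_max):
    """Overlap volume of two axis-aligned boxes (0 if disjoint)."""
    extent = np.minimum(a_max, b_max) - np.maximum(a_min, b_min)
    return float(np.prod(np.clip(extent, 0.0, None)))

def physics_score(part_min, part_max, body_min, body_max,
                  axis, travel, steps=20):
    """Fraction of the prismatic sweep that is collision-free (higher is better)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    free = 0
    for t in np.linspace(0.0, travel, steps):
        offset = axis * t
        if aabb_overlap_volume(part_min + offset, part_max + offset,
                               body_min, body_max) < 1e-9:
            free += 1
    return free / steps

# Toy example: a drawer (part) seated flush inside a cabinet (body).
body_min, body_max = np.array([0., 0., 0.]), np.array([1., 1., 1.])
drawer_min, drawer_max = np.array([0.1, 0.1, 0.8]), np.array([0.9, 0.9, 1.0])

# Pulling the drawer out along +z clears the cabinet quickly and scores
# well; sliding it sideways along +x keeps it inside and scores poorly.
print(physics_score(drawer_min, drawer_max, body_min, body_max, [0, 0, 1], 0.9))
print(physics_score(drawer_min, drawer_max, body_min, body_max, [1, 0, 0], 0.9))
```

In a full pipeline, the part would first be carved out of the body geometry before the sweep, revolute candidates (hinged doors) would be swept over an angular range instead, and this collision term would be combined with the geometric and semantic scores to select the best joint per part.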