
Body Mesh (10)

Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild, ECCV'20
Paper link: https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123570035.pdf
Abstract: We present a method that infers spatial arrangements and shapes of humans and objects in a globally consistent 3D scene, all from a single image in-the-wild captured in an uncontrolled environment. Notably, our method runs on datasets without any scene- or object-level 3D supervision. Our key insight is that co..
2023. 7. 21.
PROX-D, PROX-E, PROX-S Summary — Resolving 3D Human Pose Ambiguities with 3D Scene Constraints
Abstract: We show that current 3D human pose estimation methods produce results that are not consistent with the 3D scene. Our key contribution is to exploit static 3D scene structure to better estimate human pose from monocular images. The method enforces Proximal Relationships with Object eXclusion and is called PROX. To test this, we..
2023. 7. 21.
Learning Motion Priors for 4D Human Body Capture in 3D Scenes, ICCV'21
Paper link: https://openaccess.thecvf.com/content/ICCV2021/papers/Zhang_Learning_Motion_Priors_for_4D_Human_Body_Capture_in_3D_ICCV_2021_paper.pdf
Abstract: Capturing realistic human-scene interactions, while dealing with occlusions and partial views, is challenging. We address this problem by proposing LEMO: LEarning human MOtion priors for 4D human body capture. By leveraging the large-scale motion c..
2023. 7. 21.
STMT: A Spatial-Temporal Mesh Transformer for MoCap-Based Action Recognition, CVPR'23
Paper link: https://openaccess.thecvf.com/content/CVPR2023/papers/Zhu_STMT_A_Spatial-Temporal_Mesh_Transformer_for_MoCap-Based_Action_Recognition_CVPR_2023_paper.pdf
Abstract: Existing methods for MoCap-based action recognition take skeletons as input, which requires an extra manual mapping step and loses body shape information. We propose a novel method that directly models raw mesh sequences, which ca..
2023. 7. 21.