
Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild, ECCV’20. Paper link: https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123570035.pdf. Abstract: We present a method that infers spatial arrangements and shapes of humans and objects in a globally consistent 3D scene, all from a single image in-the-wild captured in an uncontrolled environment. Notably, our method runs on datasets without any scene- or object-level 3D supervision. Our key insight is that co.. 2023. 7. 21.
PROX-D, PROX-E, PROX-S Summary: Resolving 3D Human Pose Ambiguities with 3D Scene Constraints. Abstract: We show that current 3D human pose estimation methods produce results that are not consistent with the 3D scene. Our key contribution is to exploit static 3D scene structure to better estimate human pose from monocular images. The method enforces Proximal Relationships with Object eXclusion and is called PROX. To test this, we.. 2023. 7. 21.
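The mechanism this excerpt describes is penalizing body poses that conflict with the static scene. Below is a minimal sketch of how such contact and penetration terms are commonly implemented against a precomputed scene signed distance field; the function, argument names, and grid layout here are assumptions for illustration, not the PROX authors' code.

```python
import torch

def scene_constraint_losses(body_verts, scene_sdf, contact_vert_ids,
                            grid_min, grid_max):
    """Sketch of PROX-style scene terms (illustrative, not the authors' code).

    body_verts:       (V, 3) posed body vertices in scene coordinates
    scene_sdf:        (1, 1, D, H, W) signed distance field of the static
                      scene (negative inside geometry) -- assumed precomputed
    contact_vert_ids: indices of body vertices expected to touch the scene
    grid_min/max:     (3,) bounds of the SDF grid, for normalization
    """
    # Normalize vertex positions to [-1, 1] for grid_sample lookup.
    norm = 2.0 * (body_verts - grid_min) / (grid_max - grid_min) - 1.0
    # grid_sample expects query points shaped (N, D_out, H_out, W_out, 3),
    # with coordinates in (x, y, z) order.
    query = norm.view(1, -1, 1, 1, 3)
    sdf_vals = torch.nn.functional.grid_sample(
        scene_sdf, query, align_corners=True).view(-1)  # (V,)

    # Penetration: penalize vertices that end up inside scene geometry (sdf < 0).
    loss_pen = torch.sum(torch.abs(sdf_vals[sdf_vals < 0]))
    # Contact: pull designated contact vertices toward the scene surface (sdf ~ 0).
    loss_contact = torch.sum(torch.abs(sdf_vals[contact_vert_ids]))
    return loss_pen, loss_contact
```

Both terms would be added, with weights, to the usual image-based fitting objective, so that the optimizer trades off 2D keypoint agreement against scene consistency.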
Learning Motion Priors for 4D Human Body Capture in 3D Scenes, ICCV’21. Paper link: https://openaccess.thecvf.com/content/ICCV2021/papers/Zhang_Learning_Motion_Priors_for_4D_Human_Body_Capture_in_3D_ICCV_2021_paper.pdf. Abstract: Capturing realistic human-scene interactions, while dealing with occlusions and partial views, is challenging. We address this problem by proposing LEMO: LEarning human MOtion priors for 4D human body capture. By leveraging the large-scale motion c.. 2023. 7. 21.
STMT: A Spatial-Temporal Mesh Transformer for MoCap-Based Action Recognition, CVPR’23. Paper link: https://openaccess.thecvf.com/content/CVPR2023/papers/Zhu_STMT_A_Spatial-Temporal_Mesh_Transformer_for_MoCap-Based_Action_Recognition_CVPR_2023_paper.pdf. Abstract: Existing methods for MoCap-based action recognition take skeletons as input, which requires an extra manual mapping step and loses body shape information. We propose a novel method that directly models raw mesh sequences which ca.. 2023. 7. 21.
HOPE-Net: A Graph-based Model for Hand-Object Pose Estimation, CVPR’20. Paper link: https://openaccess.thecvf.com/content_CVPR_2020/papers/Doosti_HOPE-Net_A_Graph-Based_Model_for_Hand-Object_Pose_Estimation_CVPR_2020_paper.pdf. Abstract: Hand-object pose estimation (HOPE) aims to jointly detect the poses of a hand and a hand-held object. This paper proposes HOPE-Net, a lightweight model that estimates hand and object poses in 2D and 3D in real time. It uses two adaptive graph convolutions: one takes the 2D coordinates of the hand joints and object corners, and the other takes the 2.. 2023. 4. 6.
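The defining idea in this excerpt is an adaptive graph convolution over hand joints and object corners, where the graph structure is learned rather than fixed to the hand skeleton. The sketch below shows one common way to realize that with a learnable adjacency matrix; the node count (21 hand joints + 8 bounding-box corners), layer sizes, and class name are illustrative assumptions, not HOPE-Net's exact architecture.

```python
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    """Graph convolution whose adjacency over the 29 nodes (21 hand joints
    + 8 object corners) is a learned parameter, shared across samples."""

    def __init__(self, in_features, out_features, num_nodes=29):
        super().__init__()
        # Learnable adjacency logits; softmax keeps each row normalized.
        self.adj_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        # x: (batch, num_nodes, in_features), e.g. 2D keypoint coordinates.
        adj = torch.softmax(self.adj_logits, dim=-1)
        x = adj @ x  # mix features across learned graph neighbors
        return torch.relu(self.linear(x))

# Usage: lift 2D hand-joint / object-corner coordinates to richer features.
layer = AdaptiveGraphConv(in_features=2, out_features=64)
coords_2d = torch.randn(4, 29, 2)  # hypothetical batch of detected keypoints
features = layer(coords_2d)        # (4, 29, 64)
```

Because the adjacency is trained end-to-end, the network can discover hand-object dependencies (e.g. fingertip-to-corner) that a fixed skeleton graph would not encode.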
Learning Joint Reconstruction of Hands and Manipulated Objects, CVPR’19. Paper link: https://openaccess.thecvf.com/content_CVPR_2019/papers/Hasson_Learning_Joint_Reconstruction_of_Hands_and_Manipulated_Objects_CVPR_2019_paper.pdf. Abstract: We present an end-to-end learnable model that exploits a novel contact loss that favors physically plausible hand-object constellations. Our approach improves grasp quality metrics over baselines, using RGB images as input. To train and e.. 2023. 4. 6.
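The contact loss this excerpt highlights typically combines an attraction term (near-touching surfaces are pulled together) with a repulsion term (interpenetrating vertices are pushed out). The sketch below is a simplified stand-in: Hasson et al. define their loss on mesh triangles with a hand-part prior, whereas this version uses nearest vertices and a normal-based inside test, and the threshold value is an assumption.

```python
import torch

def contact_loss(hand_verts, obj_verts, obj_normals, contact_thresh=0.005):
    """Simplified contact-style loss (illustrative, not the paper's exact loss).

    hand_verts:  (Vh, 3) hand mesh vertices
    obj_verts:   (Vo, 3) object mesh vertices
    obj_normals: (Vo, 3) outward unit normals at the object vertices
    """
    d = torch.cdist(hand_verts, obj_verts)  # (Vh, Vo) pairwise distances
    nearest_d, nearest_i = d.min(dim=1)     # closest object vertex per hand vertex

    # Side test: a negative dot product with the outward normal means the
    # hand vertex sits behind the object surface, i.e. it has penetrated.
    to_hand = hand_verts - obj_verts[nearest_i]
    inside = (to_hand * obj_normals[nearest_i]).sum(dim=1) < 0

    # Attraction: pull hand vertices that are close (but outside) onto the surface.
    close = (~inside) & (nearest_d < contact_thresh)
    loss_attraction = nearest_d[close].sum()
    # Repulsion: push penetrating vertices back out, scaled by depth.
    loss_repulsion = nearest_d[inside].sum()
    return loss_attraction + loss_repulsion
```

Added on top of the reconstruction losses, a term like this biases training toward grasps where the hand touches the object without passing through it.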