
All posts (28)

Learning Motion Priors for 4D Human Body Capture in 3D Scenes, ICCV’21
Paper link: https://openaccess.thecvf.com/content/ICCV2021/papers/Zhang_Learning_Motion_Priors_for_4D_Human_Body_Capture_in_3D_ICCV_2021_paper.pdf
Abstract: Capturing realistic human-scene interactions, while dealing with occlusions and partial views, is challenging. We address this problem by proposing LEMO: LEarning human MOtion priors for 4D human body capture. By leveraging the large-scale motion c.. 2023. 7. 21.
STMT: A Spatial-Temporal Mesh Transformer for MoCap-Based Action Recognition, CVPR’23
Paper link: https://openaccess.thecvf.com/content/CVPR2023/papers/Zhu_STMT_A_Spatial-Temporal_Mesh_Transformer_for_MoCap-Based_Action_Recognition_CVPR_2023_paper.pdf
Abstract: Existing methods for MoCap-based action recognition take skeletons as input, which requires an extra manual mapping step and loses body shape information. We propose a novel method that directly models raw mesh sequences, which ca.. 2023. 7. 21.
HOPE-Net: A Graph-Based Model for Hand-Object Pose Estimation, CVPR’20
Paper link: https://openaccess.thecvf.com/content_CVPR_2020/papers/Doosti_HOPE-Net_A_Graph-Based_Model_for_Hand-Object_Pose_Estimation_CVPR_2020_paper.pdf
Abstract: Hand-object pose estimation (HOPE) aims to jointly detect the poses of a hand and the object it is holding. This paper proposes HOPE-Net, a lightweight model that estimates hand and object poses in 2D and 3D in real time. It uses two adaptive graph convolutions: one takes the 2D coordinates of the hand joints and object corners, and the other the 2.. 2023. 4. 6.
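
As a quick aside on the mechanism this preview mentions: below is a minimal PyTorch sketch of an adaptive graph convolution, i.e. a graph convolution whose adjacency matrix is learned jointly with the layer weights instead of being fixed to the hand/object skeleton. The node count, shapes, and initialization are illustrative assumptions, not HOPE-Net's actual code.

import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    # Graph convolution with a learnable adjacency matrix (illustrative
    # sketch; shapes and initialization are assumptions, not the paper's code).
    def __init__(self, in_features, out_features, num_nodes):
        super().__init__()
        # Adjacency over all graph nodes, learned end to end; initialized
        # to the identity so each node starts by attending to itself.
        self.adj = nn.Parameter(torch.eye(num_nodes))
        self.fc = nn.Linear(in_features, out_features)

    def forward(self, x):
        # x: (batch, num_nodes, in_features)
        # Row-normalize the learned adjacency so aggregated messages stay well scaled.
        a = torch.softmax(self.adj, dim=-1)
        # Mix node features along the learned graph, then transform them.
        return self.fc(torch.matmul(a, x))

# Hypothetical usage: 21 hand joints + 8 object bounding-box corners = 29 nodes,
# each carrying its 2D image coordinates.
layer = AdaptiveGraphConv(in_features=2, out_features=64, num_nodes=29)
coords_2d = torch.randn(4, 29, 2)  # (batch, nodes, xy)
features = layer(coords_2d)        # (4, 29, 64)

Letting the network learn the adjacency means connections between the hand and the object can emerge from data, rather than being hand-designed.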
Learning Joint Reconstruction of Hands and Manipulated Objects, CVPR’19
Paper link: https://openaccess.thecvf.com/content_CVPR_2019/papers/Hasson_Learning_Joint_Reconstruction_of_Hands_and_Manipulated_Objects_CVPR_2019_paper.pdf
Abstract: We present an end-to-end learnable model that exploits a novel contact loss favoring physically plausible hand-object constellations. Our approach improves grasp quality metrics over baselines, using RGB images as input. To train and e.. 2023. 4. 6.
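
Since this preview names a contact loss but cuts off before defining it, here is a simplified sketch of the kind of term such a loss typically contains: an attraction penalty that pulls near-surface hand vertices onto the object. The threshold value and the vertex-to-vertex distance approximation are assumptions for illustration; the paper's actual loss also includes a repulsion term penalizing interpenetration, which needs signed distances to the object mesh.

import torch

def contact_attraction_loss(hand_verts, obj_verts, contact_thresh=0.005):
    # Simplified attraction term of a contact-style loss (illustrative
    # assumption, not the paper's exact formulation).
    #
    # hand_verts: (B, Nh, 3) hand mesh vertices, in meters.
    # obj_verts:  (B, No, 3) object mesh vertices, in meters.
    #
    # All pairwise hand-object vertex distances: (B, Nh, No).
    dists = torch.cdist(hand_verts, obj_verts)
    # Distance from each hand vertex to its nearest object vertex: (B, Nh).
    min_dists, _ = dists.min(dim=-1)
    # Hand vertices already within the contact threshold are pulled the
    # rest of the way onto the object surface; others are left alone.
    near = (min_dists < contact_thresh).float()
    return (min_dists * near).sum(dim=-1).mean()

# Hypothetical usage with random meshes (778 MANO hand vertices, 1000 object vertices).
hand = torch.randn(2, 778, 3) * 0.01
obj = torch.randn(2, 1000, 3) * 0.01
loss = contact_attraction_loss(hand, obj)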