
Paper Summary (27)

STMT: A Spatial-Temporal Mesh Transformer for MoCap-Based Action Recognition, CVPR’23
Paper link: https://openaccess.thecvf.com/content/CVPR2023/papers/Zhu_STMT_A_Spatial-Temporal_Mesh_Transformer_for_MoCap-Based_Action_Recognition_CVPR_2023_paper.pdf
Abstract: Existing methods for MoCap-based action recognition take skeletons as input, which requires an extra manual mapping step and loses body shape information. We propose a novel method that directly models raw mesh sequences, which ca.. (2023. 7. 21.)
HOPE-Net: A Graph-Based Model for Hand-Object Pose Estimation, CVPR’20
Paper link: https://openaccess.thecvf.com/content_CVPR_2020/papers/Doosti_HOPE-Net_A_Graph-Based_Model_for_Hand-Object_Pose_Estimation_CVPR_2020_paper.pdf
Abstract: Hand-object pose estimation (HOPE) aims to jointly detect the poses of a hand and a held object. This paper proposes a lightweight model called HOPE-Net that estimates hand and object poses in 2D and 3D in real time. It uses two adaptive graph convolutions: one takes the 2D coordinates of the hand joints and object corners, and the other takes the 2.. (2023. 4. 6.)
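The HOPE-Net preview above mentions graph convolutions over hand joints and object corners. Below is a minimal sketch of one such layer, assuming "adaptive" means the adjacency matrix is a learnable parameter rather than a fixed graph; the node count (21 hand joints plus 8 bounding-box corners) and all names are illustrative and not taken from the paper's code.

```python
# Hedged sketch: a graph convolution whose adjacency matrix is learned,
# applied to 2D keypoints of 21 hand joints + 8 object corners (29 nodes).
# Illustrative only; not the HOPE-Net reference implementation.
import torch
import torch.nn as nn


class AdaptiveGraphConv(nn.Module):
    def __init__(self, num_nodes: int, in_features: int, out_features: int):
        super().__init__()
        # Learnable adjacency, initialized to a uniform fully connected graph.
        self.adj = nn.Parameter(torch.ones(num_nodes, num_nodes) / num_nodes)
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, in_features)
        x = torch.matmul(self.adj, x)      # mix information across nodes
        return torch.relu(self.linear(x))  # per-node feature transform


# Example: 29 nodes (21 hand joints + 8 object corners), 2D input coordinates.
layer = AdaptiveGraphConv(num_nodes=29, in_features=2, out_features=64)
out = layer(torch.randn(4, 29, 2))  # -> (4, 29, 64)
```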
Learning Joint Reconstruction of Hands and Manipulated Objects, CVPR’19
Paper link: https://openaccess.thecvf.com/content_CVPR_2019/papers/Hasson_Learning_Joint_Reconstruction_of_Hands_and_Manipulated_Objects_CVPR_2019_paper.pdf
Abstract: We present an end-to-end learnable model that exploits a novel contact loss that favors physically plausible hand-object constellations. Our approach improves grasp quality metrics over baselines, using RGB images as input. To train and e.. (2023. 4. 6.)
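The excerpt above mentions a contact loss that favors physically plausible hand-object configurations. Here is a minimal sketch of one common way such a term can be built, assuming an attraction component that pulls near-surface hand vertices onto the object and an optional penetration penalty via a signed-distance callable; the threshold, the `obj_sdf` argument, and all names are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of a contact-style loss between hand-mesh vertices and an
# object point cloud: attraction pulls near-surface vertices onto the object,
# and an optional repulsion term penalizes penetration when a signed-distance
# callable is available. Not the paper's exact loss.
import torch


def contact_loss(hand_verts: torch.Tensor,
                 obj_points: torch.Tensor,
                 obj_sdf=None,
                 contact_thresh: float = 0.005) -> torch.Tensor:
    # hand_verts: (V, 3), obj_points: (P, 3); distances assumed in meters.
    dists = torch.cdist(hand_verts, obj_points)   # (V, P) pairwise distances
    min_dists, _ = dists.min(dim=1)               # nearest object point per vertex

    # Attraction: vertices already close to the surface are pulled onto it.
    attraction = torch.where(min_dists < contact_thresh,
                             min_dists,
                             torch.zeros_like(min_dists)).mean()

    # Repulsion: penalize vertices inside the object (negative signed distance).
    repulsion = torch.tensor(0.0, device=hand_verts.device)
    if obj_sdf is not None:
        repulsion = torch.relu(-obj_sdf(hand_verts)).mean()

    return attraction + repulsion


# Toy usage with random geometry (778 is the MANO hand vertex count).
loss = contact_loss(torch.rand(778, 3), torch.rand(2048, 3))
```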
Grasping Field: Learning Implicit Representations for Human Grasp, 3DV’20
Paper link: https://arxiv.org/pdf/2008.04451.pdf
Abstract: Yet, human grasps are still difficult to synthesize realistically. There are several key reasons: (1) the human hand has many degrees of freedom (more than robotic manipulators); (2) the synthesized hand should conform to the surface of the object; (3) it should interact with the object in a semantically and physically plausible manner. Yet, hum.. (2023. 4. 6.)