Body Mesh (10)

Learning Joint Reconstruction of Hands and Manipulated Objects, CVPR'19
Paper link: https://openaccess.thecvf.com/content_CVPR_2019/papers/Hasson_Learning_Joint_Reconstruction_of_Hands_and_Manipulated_Objects_CVPR_2019_paper.pdf
Abstract: We present an end-to-end learnable model that exploits a novel contact loss that favors physically plausible hand-object constellations. Our approach improves grasp quality metrics over baselines, using RGB images as input. To train and e.. 2023. 4. 6.
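The excerpt's key idea is a contact loss that rewards physically plausible hand-object configurations. As a rough illustration only (not the paper's actual formulation), such a loss can combine an attraction term that pulls near-surface hand vertices onto the object with a repulsion term that penalizes penetration. The function name, the point-cloud representation, and the threshold below are all assumptions for this sketch:

```python
import numpy as np

def contact_loss(hand_verts, obj_pts, obj_normals, attract_thresh=0.01):
    """Hypothetical contact-loss sketch (illustrative, not the paper's loss):
    attraction pulls hand vertices that are just outside the object surface
    toward it; repulsion penalizes vertices that fall inside the object."""
    # Distance from each hand vertex to its nearest object surface point.
    diff = hand_verts[:, None, :] - obj_pts[None, :, :]      # (H, O, 3)
    d2 = np.sum(diff ** 2, axis=-1)                          # (H, O)
    nn = np.argmin(d2, axis=1)                               # nearest object point index per vertex
    d = np.sqrt(d2[np.arange(len(hand_verts)), nn])          # unsigned distance
    # Side of the surface: positive along the outward normal (outside),
    # negative against it (inside / penetrating).
    side = np.sum((hand_verts - obj_pts[nn]) * obj_normals[nn], axis=-1)
    attraction = np.where((side > 0) & (d < attract_thresh), d, 0.0).sum()
    repulsion = np.where(side < 0, d, 0.0).sum()             # penetration depth
    return attraction + repulsion
```

Minimizing this term with respect to the hand pose pushes penetrating vertices back out of the object while snapping nearby vertices into contact.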
Hand-Object Contact Consistency Reasoning for Human Grasps Generation, ICCV'21
Paper link: https://openaccess.thecvf.com/content/ICCV2021/papers/Jiang_Hand-Object_Contact_Consistency_Reasoning_for_Human_Grasps_Generation_ICCV_2021_paper.pdf
1. Introduction: In this paper, we study the interactions via generation: As shown in Fig. 1, given only a 3D object in the world coordinate, we generate the 3D human hand for grasping it. We argue that it is critical for the hand contact poin.. 2023. 4. 6.
CPF: Learning a Contact Potential Field to Model the Hand-Object Interaction, ICCV'21
Paper link: https://openaccess.thecvf.com/content/ICCV2021/papers/Yang_CPF_Learning_a_Contact_Potential_Field_To_Model_the_Hand-Object_ICCV_2021_paper.pdf
1. Introduction: To model the contact, we propose an explicit representation named Contact Potential Field (CPF, §4). It is built upon the idea that the contact between a hand and an object mesh under grasp configuration is multi-point contact, which.. 2023. 4. 6.
GanHand: Predicting Human Grasp Affordances in Multi-Object Scenes, CVPR'20
Paper link: https://openaccess.thecvf.com/content_CVPR_2020/papers/Corona_GanHand_Predicting_Human_Grasp_Affordances_in_Multi-Object_Scenes_CVPR_2020_paper.pdf
1. Introduction: In order to predict feasible human grasps, we introduce GanHand, a multi-task GAN architecture that, given solely one input image: 1) estimates the 3D shape/pose of the objects; 2) predicts the best grasp type according to a taxo.. 2023. 4. 6.