
All posts (28)

Grasping Field: Learning Implicit Representations for Human Grasp, 3DV’20
Paper link: https://arxiv.org/pdf/2008.04451.pdf
Abstract: Yet, human grasps are still difficult to synthesize realistically. There are several key reasons: (1) the human hand has many degrees of freedom (more than robotic manipulators); (2) the synthesized hand should conform to the surface of the object; (3) it should interact with the object in a semantically and physically plausible manner. Still, hum.. (2023. 4. 6.)
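The post covers an implicit grasp representation, so a minimal sketch may help: a small MLP that maps a 3D query point plus an object latent code to two signed distances, one to the hand surface and one to the object surface, in the spirit of the Grasping Field. This is not the authors' code; the layer sizes, feature dimension, and class name are assumptions for illustration.

```python
# Hedged sketch (assumed architecture, not the paper's exact network): map a 3D
# query point and an object feature to (signed distance to hand, signed distance to object).
import torch
import torch.nn as nn

class GraspingFieldMLP(nn.Module):
    def __init__(self, feat_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # two signed distances: hand and object
        )

    def forward(self, xyz, obj_feat):
        # xyz: (B, N, 3) query points; obj_feat: (B, feat_dim) object latent code
        feat = obj_feat.unsqueeze(1).expand(-1, xyz.shape[1], -1)
        return self.net(torch.cat([xyz, feat], dim=-1))  # (B, N, 2)
```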
Hand-Object Contact Consistency Reasoning for Human Grasps Generation, ICCV’21
Paper link: https://openaccess.thecvf.com/content/ICCV2021/papers/Jiang_Hand-Object_Contact_Consistency_Reasoning_for_Human_Grasps_Generation_ICCV_2021_paper.pdf
1. Introduction: In this paper, we study the interactions via generation: as shown in Fig. 1, given only a 3D object in the world coordinate, we generate the 3D human hand for grasping it. We argue that it is critical for the hand contact poin.. (2023. 4. 6.)
CPF: Learning a Contact Potential Field to Model the Hand-Object Interaction, ICCV’21
Paper link: https://openaccess.thecvf.com/content/ICCV2021/papers/Yang_CPF_Learning_a_Contact_Potential_Field_To_Model_the_Hand-Object_ICCV_2021_paper.pdf
1. Introduction: To model the contact, we propose an explicit representation named Contact Potential Field (CPF, §4). It is built upon the idea that the contact between a hand and an object mesh under grasp configuration is multi-point contact, which.. (2023. 4. 6.)
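As a rough illustration of the multi-point contact idea behind CPF, one can think of each object contact point as a spring anchored to a corresponding hand vertex and sum the elastic energy over all pairs. The pairing, stiffness, and function below are assumptions for exposition, not the paper's exact formulation.

```python
# Hedged sketch: spring-like energy over paired hand vertices and object contact
# points. Pairing and stiffness are illustrative assumptions.
import torch

def spring_contact_energy(hand_verts, contact_pts, stiffness=1.0):
    """hand_verts, contact_pts: (N, 3) paired points; returns a scalar energy."""
    dist_sq = ((hand_verts - contact_pts) ** 2).sum(dim=-1)  # squared pairwise distances
    return 0.5 * stiffness * dist_sq.sum()
```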
GanHand: Predicting Human Grasp Affordances in Multi-Object Scenes, CVPR’20
Paper link: https://openaccess.thecvf.com/content_CVPR_2020/papers/Corona_GanHand_Predicting_Human_Grasp_Affordances_in_Multi-Object_Scenes_CVPR_2020_paper.pdf
1. Introduction: In order to predict feasible human grasps, we introduce GanHand, a multi-task GAN architecture that, given solely one input image: 1) estimates the 3D shape/pose of the objects; 2) predicts the best grasp type according to a taxo.. (2023. 4. 6.)
Convolutional Occupancy Networks, ECCV’20
Paper link: https://www.cvlibs.net/publications/Peng2020ECCV.pdf
1. Introduction: Towards this goal, we introduce Convolutional Occupancy Networks, a novel representation for accurate large-scale 3D reconstruction with continuous implicit representations (Fig. 1). We demonstrate that this representation not only preserves fine geometric details, but also enables the reconstruction of complex indoor sce.. (2023. 3. 26.)
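Since the post is about convolutional occupancy features, a minimal query sketch may clarify the idea: project a 3D query point onto a 2D feature plane produced by a convolutional encoder, bilinearly sample the feature there, and decode it to an occupancy probability. The plane layout, feature size, and `decoder` are assumptions, not the paper's exact implementation.

```python
# Hedged sketch: sample a convolutional feature plane at a query location and
# decode to occupancy. `decoder` is any callable mapping (B, N, C) -> (B, N, 1).
import torch
import torch.nn.functional as F

def query_occupancy(plane_feat, xy, decoder):
    """plane_feat: (B, C, H, W) plane features; xy: (B, N, 2) coords in [-1, 1]."""
    grid = xy.unsqueeze(2)                                        # (B, N, 1, 2)
    feat = F.grid_sample(plane_feat, grid, align_corners=True)    # (B, C, N, 1)
    feat = feat.squeeze(-1).permute(0, 2, 1)                      # (B, N, C)
    return torch.sigmoid(decoder(feat))                           # occupancy in (0, 1)
```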
Dynamic Plane Convolutional Occupancy Networks, WACV’21
Paper link: https://openaccess.thecvf.com/content/WACV2021/papers/Lionar_Dynamic_Plane_Convolutional_Occupancy_Networks_WACV_2021_paper.pdf
1. Introduction: In this work, we propose Dynamic Plane Convolutional Occupancy Networks, an implicit representation that enables accurate scene-level reconstruction from 3D point clouds. Instead of learning features on three pre-defined canonical planes as in [28].. (2023. 3. 26.)
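To make the "dynamic plane" idea concrete, here is a rough sketch: a small network predicts plane normals from the input point cloud, and points are projected onto each predicted plane before feature scattering, instead of using the three fixed canonical planes. The predictor design, pooling, and number of planes are assumptions for illustration.

```python
# Hedged sketch: predict plane normals from the point cloud and project points
# onto the predicted planes. Not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlanePredictor(nn.Module):
    def __init__(self, n_planes=3):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, n_planes * 3))
        self.n_planes = n_planes

    def forward(self, points):
        # points: (B, N, 3); pool over the cloud, then predict one unit normal per plane
        normals = self.mlp(points.mean(dim=1)).view(-1, self.n_planes, 3)
        return F.normalize(normals, dim=-1)  # (B, n_planes, 3)

def project_to_plane(points, normal):
    # Project (B, N, 3) points onto the plane through the origin with unit normal (B, 3).
    d = (points * normal.unsqueeze(1)).sum(-1, keepdim=True)  # signed distance to plane
    return points - d * normal.unsqueeze(1)
```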