
All Posts (28)

ContactOpt: Optimizing Contact to Improve Grasps, CVPR’21

Paper link: https://openaccess.thecvf.com/content/CVPR2021/papers/Grady_ContactOpt_Optimizing_Contact_To_Improve_Grasps_CVPR_2021_paper.pdf

Abstract: Given a hand mesh and an object mesh, a deep model trained on ground-truth contact data infers desirable contact across the surfaces of the meshes. Then, ContactOpt efficiently optimizes the pose of the hand to achieve desirable contact using a differenti..

2023. 7. 21.
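The core idea lends itself to a short sketch: infer a target contact map, then run gradient descent on the hand pose through a differentiable contact proxy. The snippet below is a minimal illustration, not the authors' implementation; `hand_model`, the soft-contact proxy, and all tensor shapes are assumptions.

```python
# Minimal sketch of ContactOpt-style pose refinement (not the authors' code).
# Assumes a differentiable hand model `hand_verts = hand_model(pose)` (e.g. a
# MANO-like layer) and a network-predicted target contact map on the object;
# all names here are hypothetical.
import torch

def soft_contact(hand_verts, obj_verts, temperature=0.01):
    # Differentiable proxy for contact: a per-object-vertex value in (0, 1)
    # that approaches 1 as the nearest hand vertex gets close.
    dists = torch.cdist(obj_verts, hand_verts)           # (No, Nh) pairwise distances
    nearest = dists.min(dim=1).values                    # distance to closest hand vertex
    return torch.exp(-nearest / temperature)             # smooth "contact" score

def refine_pose(hand_model, pose_init, obj_verts, target_contact, steps=100, lr=1e-2):
    pose = pose_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        hand_verts = hand_model(pose)                    # differentiable forward pass
        loss = ((soft_contact(hand_verts, obj_verts) - target_contact) ** 2).mean()
        loss.backward()                                  # gradients flow back to the pose
        opt.step()
    return pose.detach()
```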
Leveraging Photometric Consistency over Time for Sparsely Supervised Hand-Object Reconstruction, CVPR’20

Paper link: https://openaccess.thecvf.com/content_CVPR_2020/papers/Hasson_Leveraging_Photometric_Consistency_Over_Time_for_Sparsely_Supervised_Hand-Object_Reconstruction_CVPR_2020_paper.pdf

Abstract: Collecting 3D ground-truth data for hand-object interactions is costly, tedious, and error-prone. To overcome this challenge, we present a method to leverage photometric consistency across time when annotat..

2023. 7. 21.
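The supervision signal here is that the same reconstructed surface point should have the same appearance in neighboring frames. Below is a minimal sketch of such a photometric-consistency loss, assuming tracked 3D surface points and per-frame projection functions that return normalized pixel coordinates; it illustrates the idea only, not the paper's pipeline.

```python
# Minimal sketch of a photometric-consistency loss across two frames
# (an illustration of the idea, not the paper's implementation).
# `project_t` / `project_t1` map 3D points to pixel coords in [-1, 1];
# these callables and all tensor shapes are hypothetical.
import torch
import torch.nn.functional as F

def sample_colors(image, pix):
    # image: (1, 3, H, W); pix: (N, 2) normalized pixel coordinates.
    grid = pix.view(1, -1, 1, 2)                          # layout grid_sample expects
    return F.grid_sample(image, grid, align_corners=True).squeeze(-1).squeeze(0).t()

def photometric_loss(points_3d, project_t, project_t1, image_t, image_t1):
    # The same surface points should look the same in both frames.
    colors_t = sample_colors(image_t, project_t(points_3d))
    colors_t1 = sample_colors(image_t1, project_t1(points_3d))
    return (colors_t - colors_t1).abs().mean()            # L1 photometric error
```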
SampleNet: Differentiable Point Cloud Sampling, CVPR’20

Paper link: https://openaccess.thecvf.com/content_CVPR_2020/papers/Lang_SampleNet_Differentiable_Point_Cloud_Sampling_CVPR_2020_paper.pdf

Abstract: Classic sampling approaches, such as farthest point sampling (FPS), do not consider the downstream task. A recent work showed that learning a task-specific sampling can improve results significantly. However, the proposed technique did not deal with the n..

2023. 7. 21.
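SampleNet's key mechanism is a soft projection: each generated "sample" is replaced by a softmax-weighted average of its nearest neighbors in the input cloud, which keeps the sampling step differentiable. Here is a minimal sketch of that operation under assumed shapes; the default `k` and temperature are illustrative, not the paper's values.

```python
# Minimal sketch of SampleNet-style soft projection (illustrative only).
# generated: (M, 3) points proposed by a network; cloud: (N, 3) input cloud.
import torch

def soft_project(generated, cloud, k=7, temperature=0.1):
    dists = torch.cdist(generated, cloud)                 # (M, N) pairwise distances
    knn_d, knn_idx = dists.topk(k, dim=1, largest=False)  # k nearest input points
    weights = torch.softmax(-knn_d / temperature, dim=1)  # closer -> larger weight
    neighbors = cloud[knn_idx]                            # (M, k, 3)
    # Convex combination of real input points; as the temperature goes to 0
    # this approaches hard nearest-neighbor selection, i.e. true sampling.
    return (weights.unsqueeze(-1) * neighbors).sum(dim=1)
```

In the paper, if I recall correctly, the temperature is treated as a trainable parameter that is driven toward zero, so the soft operation converges to genuine point selection while staying differentiable during training.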
Offline RL Without Off-Policy Evaluation, NIPS’21

Paper link: https://proceedings.neurips.cc/paper_files/paper/2021/file/274a10ffa06e434f2a94df765cac6bf4-Paper.pdf

Abstract: Most prior approaches to offline reinforcement learning (RL) have taken an iterative actor-critic approach involving off-policy evaluation. In this paper, we show that simply doing one step of constrained/regularized policy improvement using an on-policy Q estimate of the behavior ..

2023. 7. 21.
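The one-step recipe can be sketched in two losses: fit the behavior policy's Q-function with SARSA-style regression on logged transitions, then take a single constrained improvement step. The sketch below uses exponentiated-advantage weighted behavior cloning as that step, which is one variant in this family; the networks, batch layout, and hyperparameters are all assumptions.

```python
# Minimal sketch of the one-step idea (not the paper's exact algorithm).
# q_net, v_net, and policy are hypothetical torch modules; a batch holds
# (state, action, reward, next_state, next_action, done) from the dataset.
import torch

def q_loss(q_net, batch, gamma=0.99):
    # SARSA-style target: uses the *logged* next action, so the Q estimate is
    # on-policy w.r.t. the behavior policy and no off-policy evaluation of the
    # learned policy is ever needed.
    s, a, r, s2, a2, done = batch
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * q_net(s2, a2)
    return ((q_net(s, a) - target) ** 2).mean()

def policy_loss(policy, q_net, v_net, batch, beta=1.0):
    # One step of improvement: weighted behavior cloning, where dataset actions
    # with higher estimated advantage receive exponentially more weight.
    s, a, *_ = batch
    with torch.no_grad():
        adv = q_net(s, a) - v_net(s)
        w = torch.exp(beta * adv).clamp(max=100.0)        # clip weights for stability
    return -(w * policy.log_prob(s, a)).mean()
```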