
All posts (28)

Occupancy Networks: Learning 3D Reconstruction in Function Space, CVPR’19
Paper link: https://openaccess.thecvf.com/content_CVPR_2019/papers/Mescheder_Occupancy_Networks_Learning_3D_Reconstruction_in_Function_Space_CVPR_2019_paper.pdf
1. Introduction: In this paper, we propose a novel approach to 3D reconstruction based on directly learning the continuous 3D occupancy function (Fig. 1d). Instead of predicting a voxelized representation at a fixed resolution, we predict the c… 2023. 3. 26.
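The preview describes representing a shape as a continuous occupancy function o : R³ × Z → [0, 1] that can be queried at arbitrary points. Below is a minimal PyTorch sketch of such an occupancy decoder; the plain-MLP structure and hidden sizes are illustrative assumptions, not the paper's exact architecture (which conditions on the latent code via batch normalization and residual blocks).

```python
import torch
import torch.nn as nn

class OccupancyDecoder(nn.Module):
    """Maps a 3D query point plus a shape latent code to an occupancy probability.

    A minimal sketch; depth and hidden sizes are illustrative, not the paper's.
    """
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # logit of the occupancy probability
        )

    def forward(self, points, z):
        # points: (B, N, 3) query locations; z: (B, latent_dim) shape code
        z = z.unsqueeze(1).expand(-1, points.shape[1], -1)
        return torch.sigmoid(self.net(torch.cat([points, z], dim=-1)))

# Query 1024 random points for 2 shapes; any resolution works at test time.
occ = OccupancyDecoder()(torch.rand(2, 1024, 3), torch.randn(2, 128))
```

Because the decoder is evaluated point-wise, test-time mesh extraction can query it on as fine a grid as desired, which is exactly the resolution-independence the preview contrasts with fixed voxel grids.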
Deep Marching Cubes: Learning Explicit Surface Representations, CVPR’18
Paper link: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8578406
Abstract: In this paper, we investigate the problem of end-to-end 3D surface prediction. We first demonstrate that the marching cubes algorithm is not differentiable and propose an alternative differentiable formulation which we insert as a final layer into a 3D convolutional neural network. We further propose a set of loss fu… 2023. 3. 26.
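The abstract's central claim, that marching cubes is not differentiable, comes from its hard iso-surface test: thresholding occupancy at 0.5 is a step function with zero gradient almost everywhere. The toy PyTorch sketch below only illustrates that gradient problem and a generic smooth surrogate; it is not the paper's actual differentiable formulation (which predicts occupancy probabilities and vertex displacements).

```python
import torch

# Toy "occupancy" values at a few grid corners.
occ = torch.tensor([0.2, 0.4, 0.6, 0.8, 0.5], requires_grad=True)

# Vanilla marching cubes thresholds occupancy at 0.5 to decide topology.
# round() is a step function, so its gradient is zero almost everywhere
# and no learning signal reaches the volumetric network.
hard_inside = torch.round(occ)
hard_inside.sum().backward()
print(occ.grad)  # tensor([0., 0., 0., 0., 0.])

occ.grad = None
# A smooth relaxation (a tempered sigmoid, purely illustrative) does
# pass useful gradients through the inside/outside decision.
soft_inside = torch.sigmoid((occ - 0.5) / 0.1)
soft_inside.sum().backward()
print(occ.grad)  # nonzero gradients
```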
Generative Adversarial Networks, NIPS’14
Paper link: https://papers.nips.cc/paper/5423-generative-adversarial-nets
Abstract: We simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. Introduction: In the proposed… 2023. 3. 26.
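The two-player objective quoted above is the minimax game min_G max_D E[log D(x)] + E[log(1 − D(G(z)))]. Here is a minimal PyTorch sketch of one alternating training step; the 1-D toy data, network sizes, and Adam hyperparameters are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Tiny generator and discriminator on 1-D toy data (illustrative sizes).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(64, 1) * 0.5 + 2.0  # samples from the data distribution
fake = G(torch.randn(64, 8))           # samples from the generator

# D step: maximize log D(x) + log(1 - D(G(z))), i.e. classify real vs. fake.
loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# G step: "maximize the probability of D making a mistake"; in practice
# the non-saturating form maximizes log D(G(z)).
loss_g = bce(D(fake), torch.ones(64, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```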
DCGAN, ICLR’16
Paper link: https://arxiv.org/abs/1511.06434
1. Introduction: In this paper, we make the following contributions. We propose and evaluate a set of constraints on the architectural topology of Convolutional GANs that make them stable to train in most settings. We name this class of architectures Deep Convolutional GANs (DCGAN). We use the trained discriminators for image classification tasks, showing compet… 2023. 3. 26.
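The architectural constraints the paper proposes are, roughly: replace pooling with strided (and fractionally-strided) convolutions, use batch normalization, remove fully connected hidden layers, and use ReLU in the generator with a tanh output. A generator sketch along those lines follows; the channel counts and 32×32 output size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """DCGAN-style generator: fractionally-strided convolutions for
    upsampling, batch norm on every layer except the output, ReLU
    activations, tanh output. Channel counts are illustrative."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0, bias=False),  # 1x1 -> 4x4
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),    # 4x4 -> 8x8
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),     # 8x8 -> 16x16
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),       # 16x16 -> 32x32
            nn.Tanh(),
        )

    def forward(self, z):
        # z: (B, z_dim) noise vector, reshaped to a 1x1 spatial map
        return self.net(z.view(z.size(0), -1, 1, 1))

img = DCGANGenerator()(torch.randn(4, 100))  # (4, 3, 32, 32)
```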
Graph Attention Networks, ICLR’18
Paper link: https://arxiv.org/abs/1710.10903
1. Introduction: On the other hand, we have non-spectral approaches (Duvenaud et al., 2015; Atwood & Towsley, 2016; Hamilton et al., 2017), which define convolutions directly on the graph, operating on groups of spatially close neighbors. One of the benefits of attention mechanisms is that they allow for dealing with variable-sized inputs, focusing on the mos… 2023. 3. 26.
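The variable-sized-input property mentioned in the preview comes from computing an attention coefficient per edge and normalizing with a softmax over each node's neighborhood, so neighborhoods of any size are handled uniformly. Below is a single-head sketch for a dense adjacency matrix; the feature sizes are illustrative, and the paper additionally uses multi-head attention and dropout.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionHead(nn.Module):
    """One attention head over a dense 0/1 adjacency (with self-loops)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # edge scorer

    def forward(self, h, adj):
        # h: (N, in_dim) node features; adj: (N, N) adjacency with self-loops
        z = self.W(h)                                      # (N, out_dim)
        n = z.size(0)
        pair = torch.cat([z.unsqueeze(1).expand(-1, n, -1),
                          z.unsqueeze(0).expand(n, -1, -1)], dim=-1)
        e = F.leaky_relu(self.a(pair).squeeze(-1), 0.2)    # (N, N) edge scores
        e = e.masked_fill(adj == 0, float('-inf'))         # attend to neighbors only
        alpha = torch.softmax(e, dim=-1)                   # normalize per node
        return alpha @ z                                   # aggregate neighbors

adj = torch.tensor([[1., 1., 0.], [1., 1., 1.], [0., 1., 1.]])
out = GraphAttentionHead(5, 8)(torch.randn(3, 5), adj)     # (3, 8)
```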
GaAN: Gated Attention Networks for Learning on Large and Spatiotemporal Graphs, UAI’18
Paper link: http://www.auai.org/uai2018/proceedings/papers/139.pdf
1. Introduction: Treating each attention head equally loses the opportunity to benefit from some attention heads which are inherently more important than others. To this end, we propose the Gated Attention Networks (GaAN) for learning on graphs. GaAN uses a small convolutional subnetwork to compute a soft gate at each attention head to c… 2023. 3. 26.
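The gating idea in the preview: compute a scalar gate per node and per attention head, then scale each head's output by its gate before merging, so unimportant heads are down-weighted. A minimal sketch follows; the gate here is a plain linear-plus-sigmoid on the node's own features, a simplification of the paper's convolutional subnetwork over pooled neighbor features.

```python
import torch
import torch.nn as nn

class HeadGate(nn.Module):
    """Gates K attention-head outputs per node with scalars in (0, 1).

    Simplified sketch: the gate sees only the node's own features, whereas
    GaAN's gate network also pools over the node's neighbors.
    """
    def __init__(self, in_dim, num_heads):
        super().__init__()
        self.gate = nn.Linear(in_dim, num_heads)

    def forward(self, x, head_outputs):
        # x: (N, in_dim) node features; head_outputs: (N, K, head_dim)
        g = torch.sigmoid(self.gate(x))          # (N, K) soft gate per head
        gated = g.unsqueeze(-1) * head_outputs   # scale each head's output
        return gated.flatten(1)                  # concatenate the gated heads

heads = torch.randn(10, 4, 16)                       # 10 nodes, 4 heads, dim 16
mixed = HeadGate(32, 4)(torch.randn(10, 32), heads)  # (10, 64)
```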