SAGNet: Structure-aware generative network for 3D-shape modeling

Zhijie Wu, Xiang Wang, Di Lin, Dani Lischinski, Daniel Cohen-Or, Hui Huang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

64 Scopus citations

Abstract

We present SAGNet, a structure-aware generative model for 3D shapes. Given a set of segmented objects of a certain class, the geometry of their parts and the pairwise relationships between them (the structure) are jointly learned and embedded in a latent space by an autoencoder. The encoder intertwines the geometry and structure features into a single latent code, while the decoder disentangles the features and reconstructs the geometry and structure of the 3D model. Our autoencoder consists of two branches, one for the structure and one for the geometry. The key idea is that during the analysis, the two branches exchange information between them, thereby learning the dependencies between structure and geometry and encoding two augmented features, which are then fused into a single latent code. This explicit intertwining of information enables separately controlling the geometry and the structure of the generated models. We evaluate the performance of our method and conduct an ablation study. We explicitly show that the encoding of shapes accounts for similarities in both structure and geometry. A variety of high-quality results generated by SAGNet are presented.
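To make the two-branch idea concrete, below is a minimal sketch (not the authors' code) of an encoder in which a geometry branch and a structure branch exchange features before being fused into a single VAE-style latent code. The layer sizes, tensor shapes, and the simple concatenation-based exchange step are illustrative assumptions, not details taken from the paper.

# Hypothetical two-branch encoder sketch; all dimensions and the exchange
# mechanism are assumptions for illustration, not SAGNet's actual architecture.
import torch
import torch.nn as nn

class TwoBranchEncoder(nn.Module):
    def __init__(self, geom_dim=1024, struct_dim=256, hidden=512, latent=128):
        super().__init__()
        self.geom_fc = nn.Linear(geom_dim, hidden)        # encodes part geometry features
        self.struct_fc = nn.Linear(struct_dim, hidden)    # encodes pairwise structure features
        self.exchange = nn.Linear(2 * hidden, 2 * hidden) # lets the branches share information
        self.to_mu = nn.Linear(2 * hidden, latent)        # VAE mean
        self.to_logvar = nn.Linear(2 * hidden, latent)    # VAE log-variance

    def forward(self, geom, struct):
        g = torch.relu(self.geom_fc(geom))
        s = torch.relu(self.struct_fc(struct))
        # Exchange step: each branch sees the other's features before fusion
        # into a single latent code.
        fused = torch.relu(self.exchange(torch.cat([g, s], dim=-1)))
        mu, logvar = self.to_mu(fused), self.to_logvar(fused)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar

# Usage with dummy inputs:
enc = TwoBranchEncoder()
z, mu, logvar = enc(torch.randn(4, 1024), torch.randn(4, 256))
print(z.shape)  # torch.Size([4, 128])

A corresponding decoder would take z and reconstruct the geometry and structure separately, mirroring the disentangling described in the abstract.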

Original language: English
Article number: 91
Journal: ACM Transactions on Graphics
Volume: 38
Issue number: 4
DOIs
State: Published - Jul 2019

Bibliographical note

Publisher Copyright:
© 2019 Association for Computing Machinery.

Keywords

  • Data-driven synthesis
  • Generative network
  • Geometric modeling
  • Shape analysis
  • Variational autoencoder
