We present SAGNet, a structure-aware generative model for 3D shapes. Given a set of segmented objects of a certain class, the geometry of their parts and the pairwise relationships between them (the structure) are jointly learned and embedded in a latent space by an autoencoder. The encoder intertwines the geometry and structure features into a single latent code, while the decoder disentangles the features and reconstructs the geometry and structure of the 3D model. Our autoencoder consists of two branches, one for the structure and one for the geometry. The key idea is that during the analysis, the two branches exchange information with each other, thereby learning the dependencies between structure and geometry and encoding two augmented features, which are then fused into a single latent code. This explicit intertwining of information enables separate control over the geometry and the structure of the generated models. We evaluate the performance of our method and conduct an ablation study. We show that the learned shape encoding accounts for similarities in both structure and geometry. A variety of high-quality results generated by SAGNet are presented.
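The two-branch encoding described above can be illustrated with a minimal sketch. This is not the paper's implementation: the layer sizes, weights, and the single concatenation-based exchange step are hypothetical stand-ins for the trained network, shown only to make the exchange-then-fuse idea concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    return x @ w + b

# Hypothetical dimensions; the actual network uses learned, trained layers.
d_geo, d_struct, d_hidden, d_latent = 32, 16, 24, 8

# Randomly initialized weights stand in for trained parameters.
W_g = rng.normal(size=(d_geo, d_hidden));    b_g = np.zeros(d_hidden)
W_s = rng.normal(size=(d_struct, d_hidden)); b_s = np.zeros(d_hidden)
# Exchange step: each branch is refined using both branches' features.
W_gx = rng.normal(size=(2 * d_hidden, d_hidden)); b_gx = np.zeros(d_hidden)
W_sx = rng.normal(size=(2 * d_hidden, d_hidden)); b_sx = np.zeros(d_hidden)
# Fusion of the two augmented features into one latent code.
W_f = rng.normal(size=(2 * d_hidden, d_latent)); b_f = np.zeros(d_latent)

def encode(geometry, structure):
    """Two-branch encoding with information exchange, then fusion."""
    h_g = np.tanh(linear(geometry, W_g, b_g))    # geometry branch
    h_s = np.tanh(linear(structure, W_s, b_s))   # structure branch
    # Exchange: each branch sees the concatenation of both feature sets.
    joint = np.concatenate([h_g, h_s], axis=-1)
    h_g2 = np.tanh(linear(joint, W_gx, b_gx))    # structure-aware geometry feature
    h_s2 = np.tanh(linear(joint, W_sx, b_sx))    # geometry-aware structure feature
    # Fuse the two augmented features into a single latent code.
    return linear(np.concatenate([h_g2, h_s2], axis=-1), W_f, b_f)

z = encode(rng.normal(size=(4, d_geo)), rng.normal(size=(4, d_struct)))
print(z.shape)  # (4, 8): one fused latent code per input shape
```

Because the exchange happens before fusion, each branch's feature already reflects the other modality, which is what lets a decoder later disentangle geometry and structure from the single code.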
Funding Information:
We thank the reviewers for their valuable comments. This work was supported in part by the National 973 Program (2015CB352501), NSFC (61761146002, 61861130365, 61702338), the Guangdong Science and Technology Program (2015A030312015), the Shenzhen Innovation Program (KQJSCX20170727101233642), LHTD (20170003), ISF (2366/16), the ISF-NSFC Joint Research Program (2472/17), and the National Engineering Laboratory for Big Data System Computing Technology.
© 2019 Association for Computing Machinery.
- Data-driven synthesis
- Generative network
- Geometric modeling
- Shape analysis
- Variational autoencoder