Abstract
This paper addresses the challenge of example-based non-stationary texture synthesis. We introduce a novel two-step approach wherein users first modify a reference texture using standard image editing tools, yielding an initial rough target for the synthesis. Subsequently, our proposed method, termed 'self-rectification', automatically refines this target into a coherent, seamless texture while faithfully preserving the distinct visual characteristics of the reference exemplar. Our method leverages a pretrained diffusion network and uses self-attention mechanisms to gradually align the synthesized texture with the reference, ensuring the retention of the structures in the provided target. Through experimental validation, our approach exhibits exceptional proficiency in handling non-stationary textures, demonstrating significant advancements in texture synthesis when compared to existing state-of-the-art techniques. Code is available at https://github.com/xiaorongjun000/Self-Rectification
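The alignment step described above relies on self-attention inside a pretrained diffusion network. A common way to realize this kind of reference-guided attention (a sketch of the general technique, not the authors' exact implementation) is to take the queries from the texture being synthesized while injecting the keys and values from the reference exemplar's features, so each synthesized token attends to reference content. A minimal NumPy illustration, with projection matrices and multi-head details omitted and all feature names hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def injected_self_attention(target_feats, ref_feats):
    """Attention step where queries come from the synthesized target,
    while keys and values are injected from the reference exemplar.

    target_feats: (n_target_tokens, dim) features of the texture being refined
    ref_feats:    (n_ref_tokens, dim) features of the reference exemplar
    Returns:      (n_target_tokens, dim) reference-aligned features
    """
    dim = target_feats.shape[-1]
    q = target_feats                      # queries: the edited rough target
    k, v = ref_feats, ref_feats           # keys/values: the reference texture
    attn = softmax(q @ k.T / np.sqrt(dim))  # (n_target, n_ref) attention map
    return attn @ v                       # blend reference content per token
```

Each output token is a convex combination of reference features, which is why the result stays on the reference texture's appearance manifold while the query side preserves the coarse structure of the user-edited target.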
Original language | English |
---|---|
Title of host publication | Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 |
Publisher | IEEE Computer Society |
Pages | 7767-7776 |
Number of pages | 10 |
ISBN (Electronic) | 9798350353006 |
DOIs | |
State | Published - 2024 |
Event | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 - Seattle, United States Duration: 16 Jun 2024 → 22 Jun 2024 |
Publication series
Name | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition |
---|---|
ISSN (Print) | 1063-6919 |
Conference
Conference | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 |
---|---|
Country/Territory | United States |
City | Seattle |
Period | 16/06/24 → 22/06/24 |
Bibliographical note
Publisher Copyright: © 2024 IEEE.
Keywords
- Diffusion network
- Non-stationary textures
- Self-attention mechanism
- Texture synthesis