We present an algorithm based on statistical learning for synthesizing static and time-varying textures that match the appearance of an input texture. Our algorithm is general and automatic, and it works well on various types of textures, including 1D sound textures, 2D texture images, and 3D texture movies. The same method is also used to generate 2D texture mixtures that simultaneously capture the appearance of several different input textures. In our approach, input textures are treated as sample signals generated by a stochastic process. We first construct a tree representing a hierarchical multiscale transform of the signal using wavelets. From this tree, new random trees are generated by learning and sampling the conditional probabilities of the paths in the original tree. Transforming these random trees back into signals yields new random textures. In the case of 2D texture synthesis, our algorithm produces results that are generally as good as or better than those produced by previously described methods in this field. For texture mixtures, our results are better and more general than those produced by earlier methods. For texture movies, we present the first algorithm that is able to automatically generate movie clips of dynamic phenomena such as waterfalls, fire flames, a school of jellyfish, and a crowd of people. Our results indicate that the proposed technique is effective and robust.
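The abstract's pipeline (multiscale transform tree, conditional sampling of paths, inverse transform) can be illustrated with a toy sketch. The paper itself works on 2D/3D data with steerable-filter pyramids and a more elaborate learning scheme; the code below is only a minimal 1D analogue under assumed simplifications: a Haar wavelet tree instead of steerable filters, and "learning" reduced to picking, at each node, a random input node among the k whose ancestor-path coefficients are nearest, then copying its child coefficient. All function names here are illustrative, not from the paper.

```python
import numpy as np

def haar_analyze(signal):
    """Full Haar decomposition of a length-2^L signal.
    Returns (root_average, details), details ordered coarse-to-fine."""
    s = np.asarray(signal, dtype=float)
    details = []
    while s.size > 1:
        details.append((s[0::2] - s[1::2]) / 2.0)
        s = (s[0::2] + s[1::2]) / 2.0
    return s[0], details[::-1]

def haar_synthesize(root, details):
    """Inverse of haar_analyze."""
    s = np.array([root])
    for det in details:
        up = np.empty(2 * s.size)
        up[0::2] = s + det
        up[1::2] = s - det
        s = up
    return s

def synthesize(signal, rng, k=2):
    """Toy tree-based synthesis: grow a new coefficient tree top-down by
    sampling children of input nodes with similar ancestor paths."""
    root, details = haar_analyze(signal)
    in_paths = [np.zeros((1, 0))]   # ancestor-path vectors of input nodes
    out_paths = [np.zeros((1, 0))]  # same, for the tree being synthesized
    new_details = []
    for det in details:
        new_det = np.empty_like(det)
        next_in = np.empty((det.size, in_paths[-1].shape[1] + 1))
        next_out = np.empty((det.size, out_paths[-1].shape[1] + 1))
        for i in range(det.size):
            par, side = divmod(i, 2)
            # distance from the synthesized parent's path to every input parent path
            d = np.linalg.norm(in_paths[-1] - out_paths[-1][par], axis=1)
            j = rng.choice(np.argsort(d)[:k])  # random among the k nearest
            new_det[i] = det[2 * j + side]     # copy the matching child coefficient
            next_in[i] = np.append(in_paths[-1][par], det[i])
            next_out[i] = np.append(out_paths[-1][par], new_det[i])
        in_paths.append(next_in)
        out_paths.append(next_out)
        new_details.append(new_det)
    return haar_synthesize(root, new_details)
```

Because the root average is copied and every Haar detail step is mean-preserving, the synthesized signal keeps the input's mean while reshuffling structure at every scale, which is the flavor of statistical consistency the tree-sampling idea aims for.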
- Original language: American English
- Number of pages: 15
- Journal: IEEE Transactions on Visualization and Computer Graphics
- State: Published - Apr 2001
Bibliographical note (Funding Information):
This research was supported in part by the Israel Science Foundation, founded by the Israel Academy of Sciences and Humanities, and by a strategic infrastructure grant from the Israeli Ministry of Science.
Keywords:
- Sound textures
- Statistical learning
- Steerable filters
- Texture mixing
- Texture movies
- Texture synthesis
- Time-varying textures