Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/19334
Title: A Texture Transfer Algorithm with Characteristics Preservation (細節保持之材質轉移演算法之研究)
Author: Wu, Kuo-Chen (吳國禎)
Keywords: Texture
Texture Transfer
Patch-Based
Adaptive Patch Sampling
Publisher: 資訊科學系所 (Department of Information Science)
References:
1. G. Andrew and K. B. Eom, “Color Texture Synthesis with 2-D Moving Average Model,” Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 3, pp. 1381-1384, May 1999.
2. M. Ashikhmin, “Synthesizing Natural Texture,” Proceedings of the ACM Symposium on Interactive 3D Graphics, pp. 217-226, March 2001.
3. M. Ashikhmin, “Fast Texture Transfer,” IEEE Computer Graphics and Applications, Vol. 23, No. 4, pp. 38-43, July 2003.
4. J. L. Bentley, “Multidimensional Binary Search Trees Used for Associative Searching,” Communications of the ACM, Vol. 18, No. 9, pp. 509-517, September 1975.
5. J. S. D. Bonet, “Multiresolution Sampling Procedure for Analysis and Synthesis of Texture Images,” Proceedings of SIGGRAPH 1997, pp. 361-368, August 1997.
6. D. Charalampidis, “Texture Synthesis: Textons Revisited,” IEEE Transactions on Image Processing, Vol. 15, No. 3, pp. 777-787, March 2006.
7. F. Dong, H. Lin, and G. Clapworthy, “Cutting and Pasting Irregularly Shaped Patches for Texture Synthesis,” Computer Graphics Forum, Vol. 24, No. 1, pp. 17-26, March 2005.
8. A. A. Efros and T. Leung, “Texture Synthesis by Non-Parametric Sampling,” Proceedings of the International Conference on Computer Vision, Vol. 2, pp. 1033-1038, September 1999.
9. A. A. Efros and W. T. Freeman, “Image Quilting for Texture Synthesis and Transfer,” Proceedings of SIGGRAPH 2001, pp. 341-346, August 2001.
10. J. A. Ferwerda, “Elements of Early Vision for Computer Graphics,” IEEE Computer Graphics and Applications, Vol. 21, No. 5, pp. 22-33, September 2001.
11. D. J. Heeger and J. R. Bergen, “Pyramid-Based Texture Analysis/Synthesis,” Proceedings of SIGGRAPH 1995, pp. 229-238, 1995.
12. A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, and D. H. Salesin, “Image Analogies,” Proceedings of SIGGRAPH 2001, pp. 327-340, 2001.
13. V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick, “Image and Video Synthesis Using Graph Cuts,” ACM Transactions on Graphics, Vol. 22, No. 3, pp. 277-286, 2003.
14. V. Kwatra, I. Essa, A. Bobick, and N. Kwatra, “Texture Optimization for Example-based Synthesis,” ACM Transactions on Graphics, Vol. 24, No. 3, pp. 795-802, 2005.
15. L. Liang, C. Liu, Y. Q. Xu, B. Guo, and H. Y. Shum, “Real-Time Texture Synthesis by Patch-Based Sampling,” Proceedings of SIGGRAPH 2001, pp. 127-150, 2001.
16. D. M. Mount, ANN Programming Manual, Technical Report, Department of Computer Science, University of Maryland, College Park, Maryland, USA, 1998.
17. A. Nealen and M. Alexa, “Hybrid Texture Synthesis,” Proceedings of the Eurographics Symposium on Rendering, pp. 97-105, 2003.
18. S. A. Nene and S. K. Nayar, “A Simple Algorithm for Nearest Neighbor Search in High Dimensions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 9, pp. 989-1003, 1997.
19. J. Portilla and E. P. Simoncelli, “Texture Modeling and Synthesis Using Joint Statistics of Complex Wavelet Coefficients,” Proceedings of the IEEE Workshop on Statistical and Computational Theories of Vision, pp. 73-80, 1999.
20. J. Portilla and E. P. Simoncelli, “A Parametric Texture Model Based on Joint Statistics of Complex Wavelet Coefficients,” International Journal of Computer Vision, Vol. 40, No. 1, pp. 49-70, October 2000.
21. B. Wang, W. Wang, H. Yang, and J. Sun, “Efficient Example-Based Painting and Synthesis of 2D Directional Texture,” IEEE Transactions on Visualization and Computer Graphics, Vol. 10, No. 3, pp. 266-277, 2004.
22. L. Y. Wei and M. Levoy, “Fast Texture Synthesis Using Tree-Structured Vector Quantization,” Proceedings of SIGGRAPH 2000, pp. 479-488, July 2000.
23. Q. Wu and Y. Yu, “Feature Matching and Deformation for Texture Synthesis,” ACM Transactions on Graphics, Vol. 23, No. 3, pp. 362-365, August 2004.
24. S. Zelinka and M. Garland, “Towards Real-Time Texture Synthesis with the Jump Map,” Proceedings of the Eurographics Workshop on Rendering, pp. 20-27, 2002.
25. S. Zelinka and M. Garland, “Jump Map-Based Interactive Texture Synthesis,” ACM Transactions on Graphics, Vol. 23, No. 4, pp. 930-962, October 2004.
Abstract: Texture transfer is an important research subject in the computer graphics community. Given a source texture and a target image, a texture transfer algorithm produces a result image, at the resolution of the target image, that exhibits the texture of the source while preserving the characteristics of the target. Existing texture transfer algorithms have two drawbacks: they fail to retain the structural patterns of the source texture while also preserving the detailed characteristics of the target image, and they are inefficient. This thesis presents an efficient, detail-preserving texture transfer algorithm that addresses both drawbacks. Our algorithm is patch-based: it processes the target image one block at a time, and each texture-transferred patch is pasted onto the target image with a region overlapping its previously pasted neighbor, so that continuity between adjacent patches is maintained. The algorithm consists of three steps: variance detection, adaptive patch sampling, and patch replacement. In the first step, we compute the variance of the pixels in the current processing block and compare it against a user-defined threshold to determine whether the block contains subtle characteristics of the target image. If it does, in the second step we subdivide the block into smaller sub-blocks and repeat the variance detection on each, until a termination criterion is met. In the third step, when no further subdivision is required, we search a kd-tree built from patches of the source texture for the patch that best matches the overlapped region of the processing block. Once found, the patch is temporarily subdivided into smaller sub-patches on which the texture transfer is performed, and the transferred patch is then pasted onto the corresponding position in the result image. These three steps repeat until every block of the target image has been texture-transferred.
We implemented the algorithm and evaluated it on textures from a public texture database, comparing our results with those of Efros, Hertzmann, and Ashikhmin. The experiments show that our algorithm preserves the structural patterns of the source texture better than Hertzmann's and Ashikhmin's algorithms. Because adaptive patch sampling applies smaller patches to the detailed regions of the target image, the result images also retain the target's detailed characteristics, with better visual appearance than the compared methods. In addition, our algorithm is more efficient than the pixel-based approaches of Hertzmann and Ashikhmin, and, since it adjusts the patch size on the fly and completes the transfer in a single pass, it is also faster than Efros' iterative method. In conclusion, the proposed detail-preserving texture transfer algorithm effectively remedies the drawbacks of existing methods: it preserves the structural patterns of the source texture while retaining the characteristics of the target image, and it requires no iteration, producing results in a short time. We consider it a concrete contribution to texture transfer research.
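The three-step procedure described in the abstract (variance detection, adaptive patch sampling, and patch replacement) can be sketched in code. The sketch below is illustrative only: the function names, the quadrant subdivision, the minimum block size, and the brute-force matcher (standing in for the kd-tree search over source patches that the thesis actually uses) are assumptions for exposition, not the author's implementation.

```python
import numpy as np


def variance_detection(block: np.ndarray, threshold: float) -> bool:
    """Return True when the block's pixel variation exceeds the threshold,
    i.e. the block likely contains subtle detail of the target image."""
    return float(block.std()) > threshold


def adaptive_block_sizes(target: np.ndarray, y: int, x: int, size: int,
                         threshold: float, min_size: int = 4):
    """Recursively subdivide a processing block into quadrants until its
    variation falls below the threshold or the minimum block size is
    reached. Yields (y, x, size) triples: the blocks to texture-transfer."""
    block = target[y:y + size, x:x + size]
    if size <= min_size or not variance_detection(block, threshold):
        yield (y, x, size)
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            yield from adaptive_block_sizes(target, y + dy, x + dx, half,
                                            threshold, min_size)


def best_match_patch(source: np.ndarray, block: np.ndarray) -> tuple:
    """Find the source-texture patch most similar to the target block.
    The thesis queries a kd-tree built from source patches; a brute-force
    L2 scan is substituted here for brevity."""
    size = block.shape[0]
    best, best_cost = (0, 0), np.inf
    for sy in range(source.shape[0] - size + 1):
        for sx in range(source.shape[1] - size + 1):
            patch = source[sy:sy + size, sx:sx + size]
            cost = float(((patch - block) ** 2).sum())
            if cost < best_cost:
                best, best_cost = (sy, sx), cost
    return best
```

In a full pipeline, each block yielded by `adaptive_block_sizes` would be replaced by its best-matching source patch, blended over the region overlapping the previously pasted patch, and written into the result image at the corresponding position.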
URI: http://hdl.handle.net/11455/19334
Other identifier: U0005-1108200610524200
Article link: http://www.airitilibrary.com/Publication/alDetailedMesh1?DocID=U0005-1108200610524200
Appears in Collections: 資訊科學與工程學系所 (Department of Computer Science and Engineering)

Files in this item:

To obtain the full text, please visit the Airiti Library (華藝線上圖書館).



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.