Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/9229
Title: 基於邊緣分析之模糊影像還原法及其設計與實現
Design and Implementation of Edge Analysis Based Blurred Image Deconvolution Algorithm
Author: Wen, Hang-Ruei (温航瑞)
Keywords: Blurred Image
Point Spread Function
Edge Analysis
Deblur
Deconvolution
Publisher: Department of Electrical Engineering
References:
[1] The Delights of Seeing, "Pinhole Photography and the Camera Obscura." [Online]. Available: http://thedelightsofseeing.blogspot.tw/2010/10/pinhole-photography-and-camera-obscura.html
[2] 趙仁孝, "Constructing all-in-focus images and restoring scenes at each depth of field" (in Chinese), M.S. thesis, Dept. of Computer Science, National Chiao Tung University, 2002.
[3] 關永輝, "Restoration of out-of-focus blurred images based on constrained least-squares filtering and edge detection" (in Chinese), M.S. thesis, Dept. of Electrical Engineering, National Dong Hwa University, 2012.
[4] 潘世耀, "A study of modulation transfer function measurement systems for imaging lens devices using different spread-function measurement methods" (in Chinese), M.S. thesis, Dept. of Mechatronic Technology, National Taiwan Normal University, 2006.
[5] 邱志祥, "Automatic detection of ringing artifacts in restored blurred face images" (in Chinese), M.S. thesis, Graduate Institute of Applied Electronics Technology, National Taiwan Normal University, 2009.
[6] S. Kuthirummal, H. Nagahara, C. Zhou, and S. K. Nayar, "Flexible depth of field photography," IEEE Transactions on Pattern Analysis and Machine Intelligence, Jan. 2011, pp. 58-71.
[7] S. Kim, E. Lee, M. H. Hayes, and J. Paik, "Multifocusing and depth estimation using a color shift model-based computational camera," IEEE Transactions on Image Processing, Sept. 2012, pp. 4152-4166.
[8] R. Ng, "Digital Light Field Photography," July 2006. [Online]. Available: https://www.lytro.com/renng-thesis.pdf
[9] S. K. Kim, S. R. Park, and J. K. Paik, "Simultaneous out-of-focus blur estimation and restoration for digital auto-focusing system," IEEE Transactions on Consumer Electronics, Aug. 1998, pp. 1071-1075.
[10] J. Jeon, I. Yoon, D. Kim, J. Lee, and J. Paik, "Fully digital auto-focusing system with automatic focusing region selection and point spread function estimation," IEEE Transactions on Consumer Electronics, Aug. 2010, pp. 1204-1210.
[11] D. Li, Y. Wang, and Y. Zhao, "A blind identification algorithm based on the filtering in frequency domain for degraded images," 9th International Conference on Electronic Measurement & Instruments (ICEMI '09), Aug. 2009, pp. 4-165 – 4-170.
[12] X. Wan, Y. Yang, and X. Lin, "Point spread function estimation for noisy out-of-focus blur image restoration," IEEE International Conference on Software Engineering and Service Sciences (ICSESS), July 2010, pp. 344-347.
[13] Y.-C. Lai, C.-L. Huo, Y.-H. Yu, and T.-Y. Sun, "PSO-based estimation for Gaussian blur in blind image deconvolution problem," International Conference on Fuzzy Systems (FUZZ), June 2011, pp. 1143-1148.
[14] T.-Y. Sun, S.-J. Ciou, C.-C. Liu, and C.-L. Huo, "Out-of-focus blur estimation for blind image deconvolution: using particle swarm optimization," IEEE International Conference on Systems, Man and Cybernetics (SMC 2009), Oct. 2009, pp. 1627-1632.
[15] C.-T. Shen, W.-L. Hwang, and S.-C. Pei, "Spatially-varying out-of-focus image deblurring with L1-2 optimization and a guided blur map," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2012, pp. 1069-1072. [Online]. Available: http://www.iis.sinica.edu.tw/~ctshen/svDeblur/ICASSP2012.rar (MATLAB code)
[16] SmartDeblur. [Online]. Available: http://smartdeblur.net/
[17] Piccure. [Online]. Available: http://intelligentimagingsolutions.com/index.php/en/
[18] A. McAndrew, Introduction to Digital Image Processing with MATLAB, 1st ed., 2005.
[19] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, Nov. 1986, pp. 679-698.
[20] J. Wu, H. Yue, and Y. Cao, "Video object tracking method based on normalized cross-correlation matching," International Symposium on Distributed Computing and Applications to Business, Engineering and Science (DCABES), Aug. 2010, pp. 523-527.
[21] Y. S. Heo and K. M. Lee, "Robust stereo matching using adaptive normalized cross-correlation," IEEE Transactions on Pattern Analysis and Machine Intelligence, April 2011, pp. 807-822.
[22] Y. Zhou, Y. Yuan, H. Hu, and X. Zhang, "Focusness evaluation for digital refocusing light field photography," Acta Photonica Sinica, vol. 39, no. 6, June 2010.
Abstract: Defocus is the main cause of blurred images in photography. The degree of blur depends on an unknown Point Spread Function (PSF), which is determined by the focal length of the camera lens and the distance between the object and the camera. Correctly estimating the PSF is an essential step in restoring a blurred image; once the PSF is known, deconvolution techniques can be applied to sharpen (deblur) the image.

This thesis proposes a new PSF estimation scheme that exploits the edge-diffusion characteristics of a blurred image. Since human perception of whether an image is properly focused depends largely on the sharpness of edges rather than on flat areas, the deblurring process can be confined to edge regions only. This significantly reduces computational complexity without sacrificing the quality of the deblurred image.

The proposed algorithm proceeds as follows. Because edge information is crucial to PSF estimation, correctly identifying the sharp edges in the image is an essential pre-processing step; the Canny edge detector is adopted for its superior noise suppression and its ability to detect all likely edges. After edge detection, an Edge Spread Function (ESF) is measured for each edge point by recording the degree of edge diffusion along the direction orthogonal to the edge. The Line Spread Function (LSF) is then obtained as the derivative of the ESF, and the collected LSFs are used to select the best-fitting Gaussian model, which represents the desired PSF. With the PSF available, restoration (deconvolution) is performed with a Wiener filter. Note that the deconvolution is applied locally to a 31×31 block centered on the corresponding edge point.
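The ESF-to-PSF estimation step described above can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions, not the thesis implementation: the helper names (`esf_along_normal`, `gaussian_sigma_from_esf`), the nearest-neighbour sampling, the moment-based sigma estimate, and all parameter values are illustrative choices.

```python
import numpy as np

def esf_along_normal(img, point, normal, half_len=15):
    # Sample the Edge Spread Function (ESF): the intensity profile across
    # an edge point, taken along the edge normal (nearest-neighbour
    # sampling for simplicity).
    y0, x0 = point
    ny, nx = normal
    ts = np.arange(-half_len, half_len + 1)
    ys = np.clip(np.round(y0 + ts * ny).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(x0 + ts * nx).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs].astype(float)

def gaussian_sigma_from_esf(esf):
    # The Line Spread Function (LSF) is the derivative of the ESF. For a
    # Gaussian PSF the LSF is itself Gaussian, so its second moment is a
    # direct estimate of the blur sigma.
    lsf = np.abs(np.diff(esf))
    lsf = lsf / lsf.sum()
    t = np.arange(lsf.size)
    mu = (t * lsf).sum()
    return np.sqrt((((t - mu) ** 2) * lsf).sum())

# Demo: a vertical step edge blurred with a Gaussian of sigma = 2.0.
x = np.arange(-8, 9)
kernel = np.exp(-x ** 2 / (2 * 2.0 ** 2))
kernel /= kernel.sum()
step = np.zeros(64)
step[32:] = 1.0
row = np.convolve(step, kernel, mode="same")
img = np.tile(row, (16, 1))  # edge normal points in the +x direction
esf = esf_along_normal(img, (8, 32), (0.0, 1.0), half_len=12)
sigma = gaussian_sigma_from_esf(esf)
print(f"estimated sigma: {sigma:.2f}")  # close to the true value 2.0
```

In the thesis, many such per-edge-point LSFs are pooled to pick the best Gaussian model; the sketch above shows only the measurement at a single edge point.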
The advantage of this approach is that the PSF is location-adaptive (i.e., it accommodates depth of field within a single image), whereas most existing work handles only a single PSF per image. Finally, a median filter is applied to remove the blocking artifacts caused by the block-wise processing. To evaluate the effectiveness of the proposed algorithm, we use a set of real captured out-of-focus images as the test bench. Experimental results show that the proposed algorithm effectively removes the blur and is more computationally efficient than previous schemes.

The proposed algorithm is also implemented on a rapid-prototyping platform built around a Xilinx Virtex-5 XC5VLX330 FPGA. The platform attaches to a host PC to support a hardware/software co-design style of prototyping. After the algorithm development is finalized, software profiling is conducted to identify the most time-consuming section of the algorithm; that section is then moved from software to a hardwired implementation to speed up its operation. The profiling results show that the Wiener filter module is the most computationally intensive. Since Wiener filtering is performed in the frequency domain, the design also contains FFT and IFFT functional units, and a pipelined architecture is adopted to enhance throughput. The FPGA implementation report indicates that a total of 4155 slices and 4794 LUTs are consumed, along with 95 KB of memory. The design operates at 78.38 MHz, which yields a throughput of 57.52 Mpixel/s. For 2464×1632 RGB images, the developed hardware kernel achieves a processing rate of 4.775 frames per second. Since the continuous-shooting speed of most commercially available cameras is up to four frames per second, the proposed design is sufficient in speed.
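The per-block Wiener deconvolution can be illustrated with a small frequency-domain sketch. The constant noise-to-signal ratio `k`, the synthetic test pattern, and the 32×32 block size (used instead of 31×31 purely for convenience) are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def gaussian_psf(size, sigma):
    # Normalized 2-D Gaussian PSF centered in a size x size window.
    r = np.arange(size) - size // 2
    g = np.exp(-(r[:, None] ** 2 + r[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def wiener_deblur(block, psf, k=0.01):
    # Frequency-domain Wiener deconvolution of one image block:
    # F_hat = conj(H) / (|H|^2 + K) * G, with a constant
    # noise-to-signal ratio K.
    H = np.fft.fft2(np.fft.ifftshift(psf), s=block.shape)
    G = np.fft.fft2(block)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))

# Demo: blur a smooth test block (circular convolution via FFT),
# then restore it with the Wiener filter.
yy, xx = np.mgrid[0:32, 0:32]
orig = np.sin(2 * np.pi * xx / 32) + np.cos(2 * np.pi * yy / 16)
psf = gaussian_psf(32, sigma=2.0)
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(orig) * H))
restored = wiener_deblur(blurred, psf, k=1e-3)
print(np.abs(restored - orig).mean() < np.abs(blurred - orig).mean())  # True
```

In the thesis design this filtering is the part mapped to hardware, with pipelined FFT/IFFT units performing the forward and inverse transforms on each edge-centered block.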
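The reported throughput and frame-rate figures can be cross-checked with a one-line calculation, under the assumption that the 57.52 Mpixel/s figure counts each of the three RGB samples per pixel:

```python
# Sanity check of the figures reported in the abstract.
width, height, channels = 2464, 1632, 3
throughput = 57.52e6  # samples per second (assumed: one per RGB channel)
fps = throughput / (width * height * channels)
print(f"{fps:.2f} fps")  # ~4.77, consistent with the reported 4.775 FPS
```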
URI: http://hdl.handle.net/11455/9229
Other identifier: U0005-2708201316062400
Article link: http://www.airitilibrary.com/Publication/alDetailedMesh1?DocID=U0005-2708201316062400
Appears in Collections: Department of Electrical Engineering

Files in this item:

For the full text, please visit the Airiti Library.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.