Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/19938
Title: A Novel Fusion Method for MR and PET Images
Author: Wang, Shu-Lin
Keywords: Image fusion
Log-Gabor wavelet
Magnetic Resonance Imaging (MRI)
Positron Emission Tomography (PET)
Publisher: Department of Computer Science and Engineering
Abstract: Advances in nuclear medicine and the widespread application of medical imaging technology have improved the accuracy of diagnosis. This thesis presents a new fusion method for brain medical images that uses a multiresolution approach to combine the structural information of Magnetic Resonance Imaging (MRI) with the color information of Positron Emission Tomography (PET) in a single image, together with a new detail-extraction scheme that reinforces the details of the fused image to help physicians make correct judgments. The method has two phases: fusion and reinforcement. In the first phase, the IHS transform of the PET image yields its gray-level intensity component. With appropriate decomposition scales and numbers of orientations, the Log-Gabor wavelet transform is applied to the PET intensity component and to the MRI image; their high-frequency components are fused, and the inverse Log-Gabor wavelet transform of the fused result together with the low-frequency component of the PET intensity yields a new intensity component. In the second phase, Otsu's method locates the target positions for MRI detail reinforcement; the gray-level intensities of the new intensity component and the MRI image are compared at those positions, and the intensity values at the target positions are replaced. Finally, the inverse IHS transform of the reinforced intensity component and the other two PET components produces the fused image. Experimental results show that the proposed method yields fused images with less color distortion and richer MRI structural information that is clearly visible in the fused image.
URI: http://hdl.handle.net/11455/19938
Other identifier: U0005-1308201311532000
Article link: http://www.airitilibrary.com/Publication/alDetailedMesh1?DocID=U0005-1308201311532000
Appears in Collections: Department of Computer Science and Engineering

Files in this item:

For the full text, please visit the Airiti Library.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.