Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/6532
DC Field | Value | Language
dc.contributor | 林正堅 | zh_TW
dc.contributor | 廖俊睿 | zh_TW
dc.contributor.advisor | 陶金旭 | zh_TW
dc.contributor.author | 李東霖 | zh_TW
dc.contributor.author | Li, Dong-Lin | en_US
dc.contributor.other | 中興大學 | zh_TW
dc.date | 2007 | zh_TW
dc.date.accessioned | 2014-06-06T06:38:27Z | -
dc.date.available | 2014-06-06T06:38:27Z | -
dc.identifier | U0005-2007200616333900 | zh_TW
dc.identifier.citation (zh_TW):
[1] S. C. Ahalt, A. K. Krishnamurthy, P. Chen, and D. E. Melton, "Competitive learning algorithms for vector quantization," Neural Networks, vol. 3, pp. 277–291, 1990.
[2] N. B. Karayiannis, "A methodology for constructing fuzzy algorithms for learning vector quantization," IEEE Trans. Neural Networks, vol. 8, pp. 505–518, May 1997.
[3] N. B. Karayiannis and P.-I. Pai, "Fuzzy algorithms for learning vector quantization," IEEE Trans. Neural Networks, vol. 7, pp. 1196–1211, Sept. 1996.
[4] I. Pitas, C. Kotropoulos, N. Nikolaidis, R. Yang, and M. Gabbouj, "Order statistics learning vector quantizer," IEEE Trans. Image Processing, vol. 5, pp. 1048–1053, June 1996.
[5] E. Yair, K. Zeger, and A. Gersho, "Competitive learning and soft competition for vector quantization design," IEEE Trans. Signal Processing, vol. 40, pp. 294–309, Feb. 1992.
[6] C. Zhu and L.-M. Po, "Minimax partial distortion competitive learning for optimal codebook design," IEEE Trans. Image Processing, vol. 7, pp. 1400–1409, Oct. 1998.
[7] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Norwell, MA: Kluwer, 1992.
[8] A. K. Jain and R. C. Dubes, Algorithms for Clustering Data. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[9] Y. Linde, A. Buzo, and R. M. Gray, "An algorithm for vector quantizer design," IEEE Trans. Communications, vol. COM-28, pp. 84–94, Jan. 1980.
[10] T. Kohonen, Self-Organization and Associative Memory (Springer Series in Information Sciences, vol. 8). New York: Springer-Verlag, 1984.
[11] D. DeSieno, "Adding a conscience to competitive learning," in Proc. IEEE Int. Conf. Neural Networks, vol. I, New York, July 1988, pp. 117–124.
[12] D. I. Choi and S. H. Park, "Self-creating and organizing neural network," IEEE Trans. Neural Networks, vol. 5, pp. 561–575, July 1994.
[13] B. Fritzke, "Growing cell structures: a self-organizing network for unsupervised and supervised learning," Neural Networks, vol. 7, no. 9, pp. 1441–1460, 1994.
[14] J.-H. Wang and W.-D. Sun, "Online learning vector quantization: A harmonic competition approach based on conservation network," IEEE Trans. Syst., Man, Cybern., Part B: Cybern., vol. 29, pp. 642–653, Oct. 1999.
[15] H. Xiong, M. N. S. Swamy, M. O. Ahmad, and I. King, "Branching competitive learning network: A novel self-creating model," IEEE Trans. Neural Networks, vol. 15, pp. 417–429, Mar. 2004.
[16] 蘇木春、張孝德, 機器學習:類神經網路、模糊系統以及基因演算法則 (Machine Learning: Neural Networks, Fuzzy Systems, and Genetic Algorithms). 全華科技圖書股份有限公司, July 2001.
[17] 葉怡成, 類神經網路模式應用與實作 (Neural Network Models: Applications and Implementation). 儒林圖書有限公司, March 2002.
[18] T. Kohonen, Self-Organizing Maps, 3rd ed. New York: Springer-Verlag, 2001.
[19] M. A. Kraaijveld, J. Mao, and A. K. Jain, "A nonlinear projection method based on Kohonen's topology preserving maps," IEEE Trans. Neural Networks, vol. 6, pp. 548–559, May 1995.
[20] 林昇甫、洪成安, 神經網路入門與圖樣辨識 (Introduction to Neural Networks and Pattern Recognition), 2nd ed. 全華科技圖書股份有限公司, May 1996.
[21] E. Forgy, "Cluster analysis of multivariate data: Efficiency versus interpretability of classifications," Biometrics, vol. 21, p. 768, 1965.
[22] T. Kohonen, Self-Organizing Maps. Berlin, Germany: Springer-Verlag, 1995.
[23] J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms. Norwell, MA: Kluwer Academic, 1981.
[24] W. Hu, D. Xie, T. Tan, and S. Maybank, "Learning activity patterns using fuzzy self-organizing neural network," IEEE Trans. Syst., Man, Cybern., Part B, vol. 34, pp. 1618–1626, June 2004.
dc.identifier.uri | http://hdl.handle.net/11455/6532 | -
dc.description.abstract (zh_TW): The self-organizing map neural network is one of the methods commonly used for data clustering. This thesis proposes a new, improved self-organizing map algorithm. Unlike the conventional self-organizing map, whose neuron neighborhood relations are defined on a two-dimensional topological plane, the proposed network builds them in a three-dimensional topological space arranged as a cube. To avoid dead neurons in the network, the links between neurons are adjusted dynamically over time during competitive learning. Finally, a self-constructing module is incorporated so that the network can automatically adjust its size during competitive learning and thus handle various special data distributions more adaptively.
dc.description.abstract (en_US): The self-organizing neural network is one of the methods frequently used in data clustering. In this thesis, we present a new method to improve the self-organizing map algorithm. Instead of the 2-D neighborhood topology of the conventional self-organizing map, a 3-D 6-neighbor topology is adopted in our approach. To avoid dead (non-functional) neurons and to represent the training data more effectively, the number of neurons and the links between them are adjusted automatically during competitive learning by a self-constructing model.
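The abstract describes the approach only at a high level. As a rough illustration of the basic idea (not the thesis's actual algorithm), the following minimal sketch performs competitive learning on a 3-D cubic lattice with a 6-neighbor topology; the grid size, learning rates, decay schedule, and function names are hypothetical, and the thesis's link-adjustment and self-constructing rules are omitted.

```python
# A minimal, hypothetical sketch of competitive learning on a 3-D cubic
# lattice with a 6-neighbor topology. Grid size, learning rates, and the
# decay schedule are illustrative assumptions, not the thesis's settings.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4x4 lattice of neurons, each holding a weight vector in R^3
# (e.g., Luv color values, the feature space used in the thesis experiments).
grid_shape = (4, 4, 4)
dim = 3
weights = rng.random(grid_shape + (dim,))

def six_neighbors(idx, shape):
    """Yield the (up to) six face-adjacent lattice positions of idx."""
    x, y, z = idx
    for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < shape[0] and 0 <= ny < shape[1] and 0 <= nz < shape[2]:
            yield (nx, ny, nz)

def train_step(x, weights, lr=0.1, neighbor_lr=0.05):
    """One SOM-style update: pull the winner and its 6-neighbors toward x."""
    # Winner = neuron whose weight vector is closest to the input sample x.
    dists = np.linalg.norm(weights - x, axis=-1)
    winner = np.unravel_index(np.argmin(dists), dists.shape)
    weights[winner] += lr * (x - weights[winner])
    for nb in six_neighbors(winner, dists.shape):
        weights[nb] += neighbor_lr * (x - weights[nb])
    return winner

# Toy training loop on random data (a stand-in for the thesis's data sets);
# the learning rate decays linearly over one pass.
data = rng.random((1000, dim))
for t, x in enumerate(data):
    train_step(x, weights, lr=0.1 * (1.0 - t / len(data)))
```

In the adaptive variant described in the abstract, the fixed lattice above would additionally add or remove neurons and rewire links between them during training; that logic is not reproduced here.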
dc.description.tableofcontents (zh_TW):
Chapter 1 Introduction
  1.1 Background
  1.3 Research objectives and procedure
  1.4 Thesis organization
Chapter 2 Neural Networks
  2.1 Introduction
  2.2 Neuron model and architecture
    2.2.1 Network architecture
    2.2.2 Learning process of neural networks
  2.3 Competitive learning algorithm
    2.3.1 Learning steps of competitive learning
  2.4 Self-Organizing Map (SOM)
    2.4.1 Lateral connections
    2.4.2 Architecture of the self-organizing map
    2.4.3 Self-organizing map algorithm
Chapter 3 Overview of Clustering Methods
  3.1 Introduction
  3.2 K-means clustering
  3.3 FCM algorithm
  3.4 Improved self-organizing feature map algorithms
  3.5 BCL algorithm
  3.6 Fuzzy SOM algorithm
Chapter 4 Adaptive Self-Organizing Map
  4.1 Preface
  4.2 Changes to the basic network architecture
  4.3 Method for changing the network structure during training
  4.4 Addition of a self-constructing module
Chapter 5 Experimental Results
  5.1 Introduction
  5.2 Experimental data
  5.3 Comparison results on artificial data sets
    5.3.1 Data set 1
    5.3.2 Data set 2
    5.3.3 Data set 3
    5.3.4 Data set 4
    5.3.5 Data set 5
    5.3.6 Data set 6
  5.4 Comparison results on Luv data distributions of real color images
    5.4.1 Lenna image
    5.4.2 Baboon image
  5.5 Summary of experimental results
  5.6 Application of the adaptive self-organizing map to special data distributions
  5.7 Application of the adaptive self-organizing map to color quantization
Chapter 6 Conclusions
References
dc.language.iso | en_US | zh_TW
dc.publisher | 電機工程學系所 | zh_TW
dc.relation.uri | http://www.airitilibrary.com/Publication/alDetailedMesh1?DocID=U0005-2007200616333900 | en_US
dc.subject | Self-organizing map | en_US
dc.subject | 自我組織映射 | zh_TW
dc.subject | Data clustering | en_US
dc.subject | 資料分群 | zh_TW
dc.title | 適應性自我組織映射與應用 | zh_TW
dc.title | Adaptive Self-Organizing Map and Its Applications | en_US
dc.type | Thesis and Dissertation | zh_TW
item.languageiso639-1 | en_US | -
item.openairetype | Thesis and Dissertation | -
item.cerifentitytype | Publications | -
item.grantfulltext | none | -
item.fulltext | no fulltext | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
Appears in Collections: 電機工程學系所