Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/19383
Title: An Efficient Self-Organizing Map Algorithm Based on Reference Point
Authors: Chen, Heng-Yu (陳恆裕)
Keywords: Self-Organizing Map
Neural Networks
Clustering Methods
Unsupervised Learning
Publisher: Department of Computer Science (資訊科學系所)
References:
[1] 蘇春木、張孝德,「機器學習類神經網路、模糊系統以及基因演算法則」,全華科技圖書股份有限公司,1992年。
[2] 張斐章、張麗秋、黃浩倫,「類神經網路理論與實務」,東華書局,2002年。
[3] 王進德、蕭大全,「類神經網路與模糊控制理論入門(修定版)」,全華科技圖書股份有限公司,2002年。
[4] 曾憲雄、蔡秀滿、蘇東興、曾秋蓉、王慶堯,「資料探勘Data Mining」,旗標出版股份有限公司,2005年。
[5] 葉怡成,「應用類神經網路(第三版)」,儒林圖書有限公司,2002年。
[6] 方世榮,「基礎統計學」,華泰文化事業公司,1999年。
[7] J. Han and M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann, 2000.
[8] T. Kohonen, Self-Organizing Maps, 3rd ed., Berlin: Springer-Verlag, 2001.
[9] T. Kohonen, "The self-organizing map," Proceedings of the IEEE, Vol. 78, No. 9, pp. 1464-1480, 1990.
[10] M. Kantardzic, Data Mining: Concepts, Models, Methods, and Algorithms, Wiley-Interscience, 2003.
[11] R. Hecht-Nielsen, Neurocomputing, Addison-Wesley, 1990.
[12] H. Ritter, T. Martinetz, and K. Schulten, Neural Computation and Self-Organizing Maps: An Introduction, Addison-Wesley, 1992.
[13] A. K. Jain, M. N. Murty, and P. J. Flynn, "Data clustering: a review," ACM Computing Surveys, Vol. 31, No. 3, pp. 264-323, 1999.
[14] A. K. Jain and R. C. Dubes, Algorithms for Clustering Data, Prentice-Hall, 1988.
[15] J. Lampinen and E. Oja, "Clustering properties of hierarchical self-organizing maps," Journal of Mathematical Imaging and Vision, Vol. 2, No. 2-3, pp. 261-272, 1992.
[16] F. Murtagh, "Interpreting the Kohonen self-organizing feature map using contiguity-constrained clustering," Pattern Recognition Letters, Vol. 16, pp. 399-408, 1995.
[17] M. Y. Kiang, "Extending the Kohonen self-organizing map networks for clustering analysis," Computational Statistics & Data Analysis, Vol. 38, pp. 161-180, 2001.
[18] S. Wu, "Clustering of the self-organizing map using a clustering validity index based on inter-cluster and intra-cluster density," Pattern Recognition, Vol. 37, pp. 175-188, 2004.
[19] M. C. Su, T. K. Liu, and H. T. Chang, "An efficient initialization scheme for the self-organizing feature maps," in Proc. IEEE Int. Joint Conf. Neural Networks, Washington, DC, pp. 1906-1910, 1999.
[20] J. Vesanto and E. Alhoniemi, "Clustering of the self-organizing map," IEEE Transactions on Neural Networks, Vol. 11, No. 3, pp. 586-600, 2000.
[21] UCI KDD Archive, http://kdd.ics.uci.edu/
[22] B. Fritzke, "Growing grid: a self-organizing network with constant neighborhood range and adaptation strength," Neural Processing Letters, Vol. 2, No. 5, pp. 9-13, 1995.
[23] B. Fritzke, "Let it grow: self-organizing feature maps with problem dependent cell structure," in Artificial Neural Networks, T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, Eds., Amsterdam, The Netherlands: Elsevier, pp. 403-408, 1991.
[24] B. Fritzke, "Growing cell structures: a self-organizing network for unsupervised and supervised learning," Neural Networks, Vol. 7, No. 9, pp. 1441-1460, 1994.
[25] D. Alahakoon, S. K. Halgamuge, and B. Srinivasan, "Dynamic self-organizing maps with controlled growth for knowledge discovery," IEEE Transactions on Neural Networks, Vol. 11, No. 3, pp. 601-614, 2000.
[26] K. Koike, S. Kato, and T. Horiuchi, "A two-stage self-organizing map algorithm with threshold operation for data classification," in Proc. SICE Annual Conference 2002, Osaka, pp. 3097-3099, 2002.
[27] M. C. Su and H. T. Chang, "Fast self-organizing feature map algorithm," IEEE Transactions on Neural Networks, Vol. 11, No. 3, pp. 721-733, 2000.
[28] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice-Hall, 1999.
[29] J. MacQueen, "Some methods for classification and analysis of multivariate observations," in Proc. 5th Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, pp. 281-297, 1967.
[30] W. Wang, J. Yang, and R. Muntz, "STING: a statistical information grid approach to spatial data mining," in Proc. Int. Conf. Very Large Data Bases (VLDB'97), pp. 186-195, 1997.
[31] D. DeSieno, "Adding a conscience to competitive learning," in Proc. IEEE ICNN, Vol. 1, pp. 117-124, 1988.
[32] M. C. Su and H. T. Chang, "A new model of self-organizing neural networks and its application in data projection," IEEE Transactions on Neural Networks, Vol. 12, No. 1, pp. 153-158, 2001.
[33] H. Ressom, D. Wang, and P. Natarajan, "Clustering gene expression data using adaptive double self-organizing map," Physiological Genomics, Vol. 14, pp. 35-46, 2003.
[34] http://cilab.csie.ncu.edu.tw/course/cluster/FCM’s Data.zip
[35] http://genome-www.stanford.edu/clustering/Figure2.txt
Abstract: The self-organizing map (SOM) is an unsupervised learning network whose main purpose is to project input data nonlinearly onto a feature map, so that the neurons of the map also carry the characteristics of the input samples. The topological relationships that exist among the input data are thus reflected in the output neurons of the map, and a topology similar to that of the input data is preserved, which makes the SOM well suited to clustering. Many scientific fields therefore use this algorithm for exploratory analysis, for example linguists analyzing language, business managers classifying market responses, and image-pattern classification. However, as the number of clusters in the network grows, finding the winning neuron for each input vector, and finding that winning neuron's neighborhood, takes considerably more computation time.

This study proposes an improved method, called the reference-point SOM, which first uses the distance from each point to a reference point as a filtering condition: the Euclidean distance between an input vector and a neuron is computed, and the neighborhood weights of the winning neuron are adjusted, only when the input point and the cluster point lie at similar distances from the same reference point; otherwise the computation is skipped. Two threshold values drive this filtering and speed up the SOM: the first threshold filters candidate neurons when an input vector searches for its winning neuron, and the second filters the condition under which the winning neuron searches for its neighborhood neurons. With these two thresholds, the method runs roughly 20% faster than the conventional SOM algorithm.

In addition, this study proposes a new neuron-initialization method for the SOM that reduces the computational cost of the Efficient Initialization Scheme, lowering its time complexity from O(n^2) to O(n) and thus speeding up neuron initialization. With these improvements in running time, the algorithm is well suited as the first-level algorithm of a two-level SOM.
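The two-threshold filtering described above can be sketched as follows. This is a minimal illustration, not the thesis's exact procedure: the map size, the choice of the origin as the single reference point, the threshold values, and all identifiers are assumptions made for the example.

```python
import math
import random

random.seed(0)

# Toy setup (illustrative sizes only): a 5x5 map of 2-D weight vectors.
ROWS, COLS, DIM = 5, 5, 2
weights = [[random.random() for _ in range(DIM)] for _ in range(ROWS * COLS)]
coords = [(r, c) for r in range(ROWS) for c in range(COLS)]
reference = [0.0] * DIM              # assumed reference point (the origin)

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Each neuron's distance to the reference point is cached once,
# so the filter below costs only a comparison per neuron.
neuron_ref = [dist(w, reference) for w in weights]

def update(x, t1=0.3, t2=1.5, lr=0.5, sigma=1.0):
    """One reference-point-filtered SOM update for input vector x."""
    x_ref = dist(x, reference)
    # Threshold 1: only neurons whose reference distance is close to the
    # input's reference distance are considered as winning-neuron candidates;
    # the full Euclidean distance is never computed for the rest.
    cand = [i for i, d in enumerate(neuron_ref) if abs(d - x_ref) <= t1]
    if not cand:                     # fall back to a full search
        cand = list(range(len(weights)))
    bmu = min(cand, key=lambda i: dist(weights[i], x))
    # Threshold 2: update only neurons within a bounded radius of the
    # winning neuron, instead of sweeping the whole map.
    for i, (r, c) in enumerate(coords):
        g = dist((r, c), coords[bmu])
        if g <= t2:
            h = lr * math.exp(-g * g / (2 * sigma * sigma))
            weights[i] = [w + h * (xv - w) for w, xv in zip(weights[i], x)]
            neuron_ref[i] = dist(weights[i], reference)
    return bmu

bmu = update([0.2, 0.8])
```

The cached reference distances make the first filter a cheap scalar comparison, which is where the abstract's reported speedup over an exhaustive winning-neuron search would come from.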
The self-organizing map (SOM) is an excellent mechanism for data mining. It has been used as a tool for mapping high-dimensional data onto a two- (or three-) dimensional feature map. Despite its success in practical applications, the SOM suffers from drawbacks such as the trial-and-error search for a neighborhood-preserving feature map. In this paper, we present an efficient self-organizing map algorithm, based on a reference point and two threshold values, to improve the performance of the SOM. One threshold value defines the search boundary used when an input vector searches for its best-matching unit (BMU); the other defines the search boundary within which the BMU finds its neighbors. Moreover, we propose a new method that lowers the number of computations required by the Efficient Initialization Scheme for the self-organizing feature map algorithm, reducing the time complexity of finding the initial neurons from O(n^2) to O(n). We ran our algorithm on data sets from the Yeast database and the UCI KDD Archive to illustrate the performance improvement of the proposed method. In the experiments, our scheme cuts the execution time of the original SOM algorithm in half, while the sum of squared-error distance of our scheme is also smaller than that of the SOM. With these improvements in running time, the method is well suited as the first-level algorithm of a two-level SOM.
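To see how an O(n^2) initialization cost can drop to O(n), consider corner-based seeding of the kind the Efficient Initialization Scheme uses: the expensive part is finding a pair of distant data points, which is O(n^2) if every pair is compared. The sketch below replaces that with two linear passes (farthest from the mean, then farthest from that point). This is only an assumed illustration of the complexity reduction; the function name, the corner-selection rule, and the interpolation are not taken from the thesis.

```python
import math

def seed_som_grid(data, rows, cols):
    """Hypothetical O(n) seeding of a rows x cols SOM grid.

    Two linear scans pick a pair of far-apart data points, replacing the
    O(n^2) all-pairs farthest-pair search; the remaining grid weights are
    linearly interpolated between them.
    """
    n, dim = len(data), len(data[0])
    mean = [sum(p[d] for p in data) / n for d in range(dim)]

    def d2(a, b):                    # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))

    a = max(data, key=lambda p: d2(p, mean))   # pass 1: far from the mean
    b = max(data, key=lambda p: d2(p, a))      # pass 2: far from a
    # Interpolate every grid position between the two extreme points.
    grid = []
    for r in range(rows):
        for c in range(cols):
            t = (r * cols + c) / max(rows * cols - 1, 1)
            grid.append([av + t * (bv - av) for av, bv in zip(a, b)])
    return grid

# Small usage example with made-up 2-D points.
data = [[0.0, 0.0], [1.0, 1.0], [0.2, 0.9], [0.8, 0.1]]
grid = seed_som_grid(data, 3, 3)
```

Each of the two scans touches every point exactly once, so the whole seeding step is O(n) in the number of data points plus O(rows x cols) for the interpolation.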
URI: http://hdl.handle.net/11455/19383
Other Identifiers: U0005-2408200620393500
Article Link: http://www.airitilibrary.com/Publication/alDetailedMesh1?DocID=U0005-2408200620393500
Appears in Collections: Department of Computer Science and Engineering (資訊科學與工程學系所)

Files in This Item:

For the full text, please visit the Airiti Library.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.