Title: 類神經網路監督式演算模式及其應用
The Supervised Learning Algorithm of Artificial Neural Networks and Its Applications
Authors: 謝志明
Hsieh, Chih-Ming
Keywords: Artificial Neural Network
Functional link cell
linearly separable
minimal perfect hashing function
Publisher: 應用數學系 (Department of Applied Mathematics)
Abstract: With the rapid development of computer hardware and the growing maturity of cognitive science, biology, and the social sciences, and as traditional artificial intelligence techniques have run into bottlenecks that are difficult to overcome, artificial neural networks have again attracted attention. Because neural network computation offers distributed parallel processing, learning, and a high degree of fault tolerance, it has become an important technology for building artificial intelligence systems. This thesis focuses on learning algorithms. By learning style, learning algorithms are generally divided into supervised and unsupervised methods; by the nature of the training data, they are divided into methods for binary data and for continuous data. This thesis presents an improved supervised binary learning model.

Among supervised binary learning models, the perceptron was the earliest to be widely applied, but it can solve only linearly separable problems; it cannot handle linearly nonseparable problems such as XOR. The backpropagation model later used hidden units to address this limitation; although its convergence has been proved, there is still no proof that its learning always succeeds. This thesis therefore proposes an improved learning architecture and algorithm based on functional link units, and proves algebraically that learning with this algorithm always succeeds. Besides guaranteed success, the algorithm accumulates learning experience: when new training examples are added, previously trained examples need not be relearned.

Although the functional link units of the improved algorithm have these desirable properties, when there are many training examples the number of functional link units grows rapidly and overall efficiency drops accordingly. If we can determine which training examples are linearly nonseparable and add functional link units only for those examples, the number of functional link units can be minimized. This thesis therefore proposes a method for deciding whether a set of training examples is linearly separable, and proves the correctness of this method.

The later chapters describe applications of the improved learning algorithm. For hashing, a traditional perfect hashing function requires a complicated analysis of the key values, and even then the characteristics of the keys may not be found. The improved learning algorithm can induce the characteristics of the keys automatically, yielding a minimal perfect hashing function for arbitrary keys without any complicated key analysis; this makes it particularly efficient for the compression of Chinese phonetic (Zhuyin) notation. Applications to a boolean function generator and to factorization in computer-assisted instruction are also described in detail.
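For context on the hashing application mentioned above: a minimal perfect hash function for n distinct keys maps them bijectively onto the slots 0..n-1. The sketch below illustrates only that definition, using a generic brute-force salt search over CRC32 (a hypothetical construction for illustration; it is not the learning-based method of the thesis):

```python
import zlib

def find_mphf_salt(keys, max_salt=200000):
    """Brute-force a salt so that salted CRC32 maps the n keys
    onto n distinct slots in 0..n-1, i.e. a minimal perfect hash."""
    n = len(keys)
    for salt in range(max_salt):
        slots = {zlib.crc32(f"{salt}:{k}".encode()) % n for k in keys}
        if len(slots) == n:   # n distinct values in range(n) => bijection
            return salt
    return None               # no salt found within the search budget

# Five Bopomofo symbols as sample keys, echoing the Zhuyin application.
keys = ["ㄅ", "ㄆ", "ㄇ", "ㄈ", "ㄉ"]
salt = find_mphf_salt(keys)
table = {k: zlib.crc32(f"{salt}:{k}".encode()) % len(keys) for k in keys}
# `table` now maps the five keys bijectively onto 0..4
```

A search like this succeeds quickly for small key sets but scales poorly; the point of the thesis's approach is to obtain such a function for arbitrary keys without complicated key analysis.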
Artificial neural networks (ANNs) are also referred to as neural networks, connectionism, adaptive networks, artificial neural systems, neurocomputers, and parallel distributed processors. ANNs are mathematical models of a theorized mind that exploit the massively parallel local processing and distributed representation believed to exist in the brain. The primary intent of ANNs is to explore and reproduce human information-processing tasks such as speech, vision, touch, and knowledge processing. In addition, ANNs are used for data compression, near-optimal solutions to combinatorial optimization problems, pattern matching, system modeling, and function approximation. ANN theory is derived from many disciplines, including psychology, mathematics, neuroscience, physics, engineering, computer science, philosophy, biology, and linguistics. It is evident from this diverse listing that ANN technology represents a “universalization” among the sciences working toward a common goal: building an intelligent system. It is equally evident that an accurate and complete description of the work in all the listed disciplines is an impossible task. In light of this, we focus on ANN paradigms that have applications or application potential. This thesis proposes an improved supervised binary learning model of ANNs together with several applications. Learning algorithms divide, by learning method, into supervised and unsupervised algorithms, and the learning data can be classified as binary or continuous. The perceptron was the earliest widely applied supervised binary learning model, but it can solve only linearly separable problems. Linearly nonseparable problems were later addressed by the backpropagation learning algorithm using hidden units; although the convergence of backpropagation has been proved, its learning may still fail.
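The XOR limitation can be made concrete with a small sketch (an assumed textbook-style implementation, not code from the thesis): a single perceptron fits AND, which is linearly separable, but never fits XOR, which is not.

```python
def train_perceptron(patterns, epochs=100):
    """patterns: list of (input tuple, target in {0, 1}).
    Returns (weights, converged); converged means an epoch
    passed with every pattern classified correctly."""
    n = len(patterns[0][0])
    w = [0.0] * (n + 1)               # last entry is the bias weight
    for _ in range(epochs):
        mistakes = 0
        for x, t in patterns:
            xb = list(x) + [1.0]      # append constant input for the bias
            y = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else 0
            if y != t:
                mistakes += 1
                w = [wi + (t - y) * xi for wi, xi in zip(w, xb)]
        if mistakes == 0:
            return w, True
    return w, False                   # never fit all patterns in the budget

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(train_perceptron(AND)[1])  # True: AND is linearly separable
print(train_perceptron(XOR)[1])  # False: no line separates XOR's classes
```

Because no weight vector separates XOR, every epoch contains at least one mistake, so the loop can never report convergence no matter how large the epoch budget is.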
Therefore, we propose an improved learning structure and algorithm, based on functional link units, that is guaranteed to learn successfully. However, the algorithm requires considerably more storage for the functional link units, resulting in much more computation time. It is well known that if a given set of binary training patterns is linearly separable, successful learning can be achieved using only a perceptron, without any functional link unit. Consequently, to reduce the number of functional-link cells required for successful learning, it is crucial to provide a simple method for checking the linear separability of each set of binary patterns in the input training examples. This thesis also proposes a method for checking the linear separability of a given set of binary training patterns in supervised binary learning of an artificial neural network. If the training patterns are linearly separable, the mapping can be implemented by a perceptron alone. Several applications of the improved learning algorithm are described in later chapters.
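The two ideas above can be sketched together (a hypothetical illustration; neither the thesis's algorithm nor its exact algebraic separability test): a bounded-epoch perceptron run acts as a one-sided separability check, since convergence proves separability while non-convergence within the budget only suggests nonseparability, and a single product-term functional-link input is enough to make XOR separable. The helper names `fits_with_perceptron` and `add_product_link` are illustrative.

```python
def fits_with_perceptron(patterns, epochs=200):
    """Return True iff a single threshold unit fits all patterns
    within the epoch budget (a sufficient, not exact, separability test)."""
    n = len(patterns[0][0])
    w = [0.0] * (n + 1)               # last entry is the bias weight
    for _ in range(epochs):
        mistakes = 0
        for x, t in patterns:
            xb = list(x) + [1.0]
            y = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else 0
            if y != t:
                mistakes += 1
                w = [wi + (t - y) * xi for wi, xi in zip(w, xb)]
        if mistakes == 0:
            return True
    return False

def add_product_link(patterns):
    """Append one functional-link input: the product of the two inputs."""
    return [((x[0], x[1], x[0] * x[1]), t) for x, t in patterns]

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(fits_with_perceptron(XOR))                    # False
print(fits_with_perceptron(add_product_link(XOR)))  # True
```

With the product term added, the weights (1, 1, -2) and bias -0.5 separate the augmented patterns, so by the perceptron convergence theorem the training loop reaches a mistake-free epoch well within the budget. Adding link units only to pattern sets that fail the check is what keeps the number of functional-link cells low.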
Appears in Collections: 應用數學系所



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.