Title: A Study of On-Line Fault-Tolerant Training Using Weight Decay Combined with Random Fault or Noise Injection (II)
On-Line Fault Tolerant Learning Algorithms: Simultaneous Weight Noise Injection and Weight Decay for MLP (II)
Author: 沈培輝
Keywords: Information Science and Software; Basic Research; Fault Injection; Fault Tolerant Neural Networks; Noise Injection; Online Learning; Weight Decay; Gradient Descent; Fault Tolerance; KL Divergence; Learning Theory; Neural Networks
Abstract:
An important issue in training neural networks is making them robust against sudden changes in network structure and against the effects of noise in the weights or the neurons. Over the past two decades, many training methods have been proposed. The first kind is objective-function driven: gradient descent is applied to an objective function to design an offline (iterative) training algorithm. The second kind is more direct but lacks theoretical support (for instance, convergence results): it takes on-line back-propagation as the base and, at each update step, injects random faults or noise, forcing the neural network to learn in the presence of these random faults. Unfortunately, most conclusions in this second line of research rest only on computer simulation results.

We therefore applied for funding last year to begin studying this kind of training method, and received one year of support. Over the past year, for radial basis function (RBF) networks, we used the Gladyshev theorem to prove the convergence of online training methods that combine weight noise/node fault injection with weight decay, and we derived their objective functions. We found, however, that the Gladyshev theorem does not apply to the MLP, for two reasons. First, for an MLP the weights are not necessarily bounded during training. Second, the objective functions of these online weight-noise-injection training methods have not yet been derived. Moreover, we found that many researchers hold a seriously mistaken view of the objective function of weight noise injection training.

We therefore decided to apply for one more year of funding, continuing last year's research direction with the MLP as the object of study, for a complete and in-depth investigation. Training methods with online weight noise injection will be studied in turn; their convergence and objective functions will be proved and derived using other results from stochastic approximation (for example, the Ljung theorem, the Kushner-Clark lemma, and the Bottou theorem). The training methods obtained by combining weight decay with online weight noise injection will then be studied in the same way, with their convergence proved and their objective functions derived. Finally, extensive computer simulations will examine whether these online weight-noise-injection training methods yield better generalization on function approximation, regression, time series prediction, and classification problems.
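For reference, the kind of update rule studied in this line of work can be sketched as follows. The notation is ours, not quoted from the report: μ_t is the step size, λ the weight decay constant, e(x_t, y_t | w) the per-sample error, and b(t) a mean-zero noise vector injected into the weights before the gradient is evaluated.

```latex
% Sketch of one on-line update with simultaneous weight noise
% injection and weight decay (notation assumed, not from the report).
\begin{align*}
  \tilde{w}(t) &= w(t) + b(t)                     && \text{additive weight noise}\\
  \tilde{w}(t) &= w(t)\odot\bigl(1 + b(t)\bigr)   && \text{multiplicative weight noise}\\
  w(t+1)       &= w(t) - \mu_t \nabla_{w}\,
                  e\bigl(x_t, y_t \mid \tilde{w}(t)\bigr)
                  - \mu_t \lambda\, w(t)          && \text{decayed gradient step}
\end{align*}
```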

While injecting noise (input noise or weight noise) or faults (weight fault or node fault) during training has been applied to improve the fault tolerance of a neural network, little analysis has been done to explain the success of such learning methods. This project continues the research funded in July 2009. While the previous project (August 2009 - July 2010) focused on radial basis function (RBF) networks, this project (August 2010 - July 2011) focuses on the multilayer perceptron (MLP). The specific goals to be accomplished are fourfold:

1. Derivation of the objective functions for the on-line learning algorithms based on simultaneous weight noise injection and weight decay for MLP.
2. Analysis of the conditions under which the boundedness of the weight vector is guaranteed for the algorithms based on simultaneous weight noise injection and weight decay.
3. Application of the theory of stochastic approximation (or other theories) to show the convergence of the algorithms based on simultaneous weight noise injection and weight decay.
4. Intensive simulation studies on the divergence behavior of the on-line learning algorithm based on injecting multiplicative weight noise (a minimal sketch of such a training step follows).

Besides, their prediction errors will be deduced and validated by computer simulations.
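The simulation studies would plausibly be built on an update step like the one below. This is a minimal sketch only, assuming a one-hidden-layer MLP with squared-error loss; the function names, the Gaussian noise model, and all hyperparameter values are our own illustrative choices, not taken from the project.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hid, n_out):
    """One-hidden-layer MLP: y = V tanh(Wx + c)."""
    return {"W": rng.normal(0.0, 0.5, (n_hid, n_in)),
            "c": np.zeros(n_hid),
            "V": rng.normal(0.0, 0.5, (n_out, n_hid))}

def perturb(params, sigma, multiplicative):
    """Weight noise injection: perturb every parameter with
    mean-zero Gaussian noise before the gradient is evaluated."""
    out = {}
    for k, w in params.items():
        noise = rng.normal(0.0, sigma, w.shape)
        out[k] = w * (1.0 + noise) if multiplicative else w + noise
    return out

def online_step(params, x, y, lr=0.01, lam=1e-4, sigma=0.05,
                multiplicative=False):
    """One on-line update: the gradient is taken at the *noisy*
    weights, and the decayed update is applied to the *clean*
    stored weights."""
    p = perturb(params, sigma, multiplicative)
    h = np.tanh(p["W"] @ x + p["c"])
    err = p["V"] @ h - y                    # squared-error residual
    dV = np.outer(err, h)                   # dE/dV
    dh = (p["V"].T @ err) * (1.0 - h**2)    # backprop through tanh
    params["V"] -= lr * (dV + lam * params["V"])
    params["W"] -= lr * (np.outer(dh, x) + lam * params["W"])
    params["c"] -= lr * dh                  # no decay on the bias (a choice)
    return float(err @ err)
```

Setting `multiplicative=True` switches from additive to multiplicative weight noise, the case whose divergence behavior the project singles out for simulation study.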
URI: http://hdl.handle.net/11455/55531
Other Identifiers: NSC99-2221-E005-090
Appears in Collections: Graduate Institute of Technology Management
