Title: Objective Functions of Online Weight Noise Injection Training Algorithms for MLPs
Author(s): Ho, K.
Keywords: Fault tolerance; prediction error; weight noise injection; feedforward neural networks; partial fault tolerance; multilayer perceptrons; learning algorithm; regularization; performance
Journal: IEEE Transactions on Neural Networks
Citation: IEEE Transactions on Neural Networks, Volume 22, Issue 2, Page(s) 317-323.
Abstract:
Injecting weight noise during training has been a simple strategy for improving the fault tolerance of multilayer perceptrons (MLPs) for almost two decades, and several online training algorithms have been proposed for this purpose. However, there are misconceptions about the objective functions these algorithms actually minimize. In particular, some existing results misinterpret the prediction error of a trained MLP affected by weight noise as the objective function of the corresponding weight noise injection algorithm. This brief clarifies these misconceptions. Two weight noise injection scenarios are considered: one based on additive weight noise and the other based on multiplicative weight noise. To avoid the misconceptions, the objective functions are analyzed through the mean updating equations of the algorithms. For additive weight noise injection, the true objective function is shown to be identical to the prediction error of a faulty MLP whose weights are affected by additive weight noise; it consists of the conventional mean squared error plus a smoothing regularizer. For multiplicative weight noise injection, the objective function is shown to differ from the prediction error of a faulty MLP whose weights are affected by multiplicative weight noise. With these results, several existing misconceptions regarding MLP training with weight noise injection can be resolved.
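The online injection scheme discussed in the abstract can be sketched as follows: at each online step, the weights are perturbed with zero-mean noise, the gradient is evaluated at the perturbed weights, and the update is applied to the clean weights. This is a minimal illustrative sketch, not the paper's exact algorithm; the network size, the noise level `sigma`, the learning rate, and the toy regression target are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer MLP: y = w2 . tanh(W1 x + b1) + b2
n_in, n_hid = 2, 8
params = [
    rng.normal(scale=0.5, size=(n_hid, n_in)),  # W1
    np.zeros(n_hid),                            # b1
    rng.normal(scale=0.5, size=n_hid),          # w2
    0.0,                                        # b2
]

def forward(p, x):
    W1, b1, w2, b2 = p
    h = np.tanh(W1 @ x + b1)
    return w2 @ h + b2, h

def grads(p, x, t):
    """Gradients of 0.5 * (y - t)^2 w.r.t. all parameters."""
    W1, b1, w2, b2 = p
    y, h = forward(p, x)
    e = y - t
    dh = e * w2 * (1.0 - h**2)          # back-prop through tanh
    return [np.outer(dh, x), dh, e * h, e]

# Toy regression target (an illustrative assumption, not from the paper)
f = lambda x: np.sin(x[0]) + 0.5 * x[1]
X = rng.uniform(-1, 1, size=(500, 2))
T = np.array([f(x) for x in X])

def mse(p):
    return np.mean([(forward(p, x)[0] - t) ** 2 for x, t in zip(X, T)])

eta, sigma = 0.05, 0.05                 # learning rate and noise level
mse_before = mse(params)

for epoch in range(30):
    for x, t in zip(X, T):
        # Additive weight noise injection: the gradient is computed at the
        # perturbed weights, but the update is applied to the clean weights.
        # (A multiplicative variant would use p * (1 + sigma * noise) instead.)
        noisy = [p + sigma * rng.normal(size=np.shape(p)) for p in params]
        g = grads(noisy, x, t)
        params = [p - eta * gp for p, gp in zip(params, g)]

mse_after = mse(params)
```

The key design point the abstract analyzes is visible in the loop: because the gradient is taken at `noisy` rather than at `params`, averaging the update over the noise distribution yields a mean updating equation whose objective, for the additive case, equals the ordinary MSE plus a smoothing regularizer.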
Appears in Collections: Journal Articles