|Title:||A robust parameters self-tuning learning algorithm for multilayer feedforward neural network||Authors:||Wang, G.J.
|Keywords:||multilayer feedforward neural network;parameters self-tuning learning||Project:||Neurocomputing||Journal/Report No.:||Neurocomputing, Volume 25, Issue 1-3, Page(s) 167-189.||Abstract:||
In this paper, a new and efficient adaptive-learning algorithm for multilayer feedforward neural networks is proposed. The main characteristic of this new algorithm is that learning parameters such as the learning rate (η) and momentum (α) are automatically adjusted according to the learning trajectory. The proposed algorithm was originally inspired by using the 1st-order Taylor series expansion to approximate ΔE_p, the variation of the error function. Two conditions, ΔE_p < 0 and E_p + ΔE_p > 0, are considered first to ensure effective learning. To increase the accuracy of the ΔE_p approximation, we further developed a more robust procedure, namely the robust parameters self-tuning learning (RSTL) algorithm. The key features of the RSTL are: (1) the weight changes Δw_i are included in the performance index, (2) the relationship between η and α is determined by a geometric approach, and (3) the optimal η and α are obtained by an optimization technique. Computer simulations show that the proposed RSTL outperforms other algorithms in both convergence speed and computing time. Additional advantages, such as insensitivity to the initial weights and ease of programming, are also illustrated by the simulations. (C) 1999 Elsevier Science B.V. All rights reserved.
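The core idea of the abstract can be illustrated with a minimal sketch: predict the change of the error via a 1st-order Taylor expansion, ΔE_p ≈ g·Δw, and accept a momentum-based update only when ΔE_p < 0 and E_p + ΔE_p > 0, tuning the learning rate accordingly. This is not the paper's RSTL algorithm itself (which also optimizes η and α jointly via a geometric approach); the growth/shrink factors and the toy quadratic loss below are illustrative assumptions.

```python
import numpy as np

def self_tuning_descent(grad_fn, loss_fn, w, eta=0.1, alpha=0.5, steps=100):
    """Sketch of Taylor-expansion-based parameter self-tuning (illustrative).

    dE ≈ g · dw is the 1st-order Taylor estimate of the loss change for a
    proposed update dw = -eta*g + alpha*dw_prev.  A step is applied (and eta
    grown) only when dE < 0 and E + dE > 0; otherwise eta is halved and the
    momentum is reset.  The factors 1.05 and 0.5 are assumptions, not the
    paper's values.
    """
    dw_prev = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        E = loss_fn(w)
        dw = -eta * g + alpha * dw_prev        # gradient step with momentum
        dE = g @ dw                            # 1st-order Taylor estimate of ΔE_p
        if dE < 0 and E + dE > 0:              # effective-learning conditions
            w = w + dw
            dw_prev = dw
            eta *= 1.05                        # cautiously grow the learning rate
        else:
            eta *= 0.5                         # reject the step, shrink the rate
            dw_prev = np.zeros_like(w)
    return w

# Toy quadratic E(w) = 0.5 * ||w||^2, whose gradient is simply w.
w_final = self_tuning_descent(lambda w: w,
                              lambda w: 0.5 * (w @ w),
                              np.array([3.0, -2.0]))
```

Note that on this quadratic the condition E_p + ΔE_p > 0 bounds the step so the loss decreases monotonically on every accepted update, which is the "effective learning" behavior the two conditions are meant to ensure.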
|Appears in Collections:||Institute of Biomedical Engineering|