Evolutionary Training Methods for the Generalized Feedforward Neural Networks
|Abstract:||Artificial neural networks (ANNs) have been successfully applied in many areas owing to their strong performance on both classification and regression problems. The key to success lies in how the architecture and connection weights of an ANN are tuned for a specific problem. The back-propagation (BP) algorithm is one of the best-known training algorithms for feed-forward neural networks and has been applied successfully in many areas, but traditional BP suffers from becoming trapped in local optima. Evolutionary algorithms provide an alternative approach to training ANNs. A well-trained ANN classifier should not only keep the error rate as low as possible during the training phase, but also classify unseen instances correctly; the latter property is known as generalization. How to improve the generalization of trained ANNs is therefore an important research topic. For complicated problems, training a single ANN is difficult and time-consuming, so an ANN ensemble classifier may be used in place of a single ANN. Constructing an ANN ensemble poses its own difficulties, one of the main issues being how to maintain high diversity among the ensemble members. In this dissertation, we proposed strategies and algorithms for training generalized feed-forward neural networks. First, an orthogonal-array (OA) crossover operator was proposed to balance local search and global search in the genetic algorithm (GA). Second, building on the characteristics of OAs, a systematic trajectory search algorithm (STSA) was presented. The STSA utilized an OA to generate a uniformly distributed initial population in order to explore the solution space globally, and then applied a novel trajectory local search to exploit the promising regions thoroughly. The experimental results revealed the good classification ability of the feed-forward neural networks trained by the STSA.
However, the STSA tended to over-train the ANNs. To overcome this problem, the author introduced the strong-winner concept and a mixed fitness-evaluation method to improve the generalization ability of the trained ANNs. The experimental results showed that the feed-forward neural networks trained by the improved STSA demonstrated very good classification and generalization ability. In population-based evolutionary algorithms, the best individual is usually taken as the output; however, the other individuals of the final population may also contain valuable information. Based on the STSA, an evolutionary algorithm for constructing a variable-sized ANN ensemble, called VSEC, was proposed. To preserve the diversity of the ensemble, a penalty term was added to the error function. In addition, a variable-sized ensemble construction method, based on three basic operations, was provided to update the ensemble members. The performance of the proposed algorithms was evaluated by training a class of feed-forward neural networks on large n-bit parity problems and on several UCI classification datasets. Comparison with previous studies showed that the neural network ensemble classifiers trained by STSA/VSEC have very good classification ability.|
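The OA-based population initialization described in the abstract can be sketched as follows. This is a minimal illustration, not the dissertation's actual procedure: the choice of Taguchi's L9(3^4) array, the three-level quantization, and the weight bounds are all assumptions made here for demonstration.

```python
# Sketch: spread an initial population uniformly over the search space
# using Taguchi's L9(3^4) orthogonal array. Each of the 4 factors (here
# standing in for 4 connection weights) takes one of 3 quantized levels,
# and every pair of columns contains all 9 level combinations once.
L9 = [
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
]

def oa_initial_population(lo, hi):
    """Map each OA level to a quantized weight value in [lo, hi]."""
    levels = [lo, (lo + hi) / 2.0, hi]  # 3 quantization levels per factor
    return [[levels[level] for level in row] for row in L9]

pop = oa_initial_population(-1.0, 1.0)  # 9 individuals, 4 weights each
```

Because of the OA's balance property, every column of the resulting population contains each quantized level equally often, so no region of the (quantized) search space is over- or under-sampled at initialization.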
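The diversity-preserving penalty term mentioned above can be sketched in the style of negative correlation learning. This is an assumption for illustration only (the dissertation's exact penalty is not reproduced here), and the penalty weight `lam` is a hypothetical parameter.

```python
def penalized_errors(outputs, target, lam):
    """Per-member error for one training sample: squared error plus a
    negative-correlation-style penalty. The penalty is negative (i.e.
    rewarding) when a member's deviation from the ensemble mean is
    opposed by the other members' deviations, which pushes members apart."""
    n = len(outputs)
    mean = sum(outputs) / n
    errors = []
    for i, f in enumerate(outputs):
        # p_i = (f_i - mean) * sum_{j != i} (f_j - mean)
        p = (f - mean) * sum(outputs[j] - mean for j in range(n) if j != i)
        errors.append(0.5 * (f - target) ** 2 + lam * p)
    return errors

# With lam = 0 this reduces to the plain squared error of each member.
errs = penalized_errors([0.0, 1.0], 1.0, 0.0)  # -> [0.5, 0.0]
```

Setting `lam > 0` trades some individual accuracy for ensemble diversity: a member that merely mimics the ensemble mean accrues no negative penalty and is not rewarded.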
|Appears in Collections:||資訊科學與工程學系所|