Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/6794
Title: 以螞蟻群最佳化演算法設計模糊控制器及其軟/硬體實現
Fuzzy Controller Design by Ant Colony Optimization Algorithm and Its Software/Hardware Implementation
Author: 盧俊明 (Lu, Chun-Ming)
Keywords: learning method; grid-type partition; algorithm; generator; controller
Publisher: 電機工程學系 (Department of Electrical Engineering)
Abstract:
This thesis applies the Ant Colony Optimization (ACO) algorithm to the design of the consequent parts of a fuzzy controller; the resulting method is called ACO-FC. Its main objectives are to improve the design efficiency and the control performance of fuzzy controllers. The antecedent part is first partitioned in a grid-type manner, and all candidate consequent values of each rule are then listed. The path traveled by an ant is regarded as one combination of consequent values, one chosen per rule, and the search for the best combination is guided by the pheromone concentrations of ACO. Simulations of cart-pole balancing and temperature control show that the proposed method outperforms a genetic-algorithm-based design.
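As a rough illustration of this selection scheme (not the thesis's exact formulas; the function names, roulette-wheel selection, and update rule below are assumptions made for this sketch), a path can be built by picking one candidate consequent per rule with probability proportional to its pheromone level, and the trail of the best-performing controller can then be reinforced:

import random

# Illustrative sketch: pheromone[r][c] is the pheromone level of candidate
# consequent c for fuzzy rule r.  An ant's path selects one candidate per
# rule, with probability proportional to the pheromone on that choice.
def build_path(pheromone):
    path = []
    for levels in pheromone:
        pick = random.uniform(0.0, sum(levels))
        acc = 0.0
        chosen = len(levels) - 1
        for c, level in enumerate(levels):
            acc += level
            if pick <= acc:
                chosen = c
                break
        path.append(chosen)
    return path

# Evaporate every trail, then deposit extra pheromone on the best ant's choices.
def reinforce(pheromone, best_path, best_cost, rho=0.1):
    for r, levels in enumerate(pheromone):
        for c in range(len(levels)):
            levels[c] *= (1.0 - rho)
        levels[best_path[r]] += 1.0 / (1.0 + best_cost)

In a full design, each path would be turned into a fuzzy controller and scored by simulation (for example on the cart-pole task), with the lowest-cost path kept as the current best before the pheromone update.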
The ACO algorithm is also implemented in hardware on an FPGA (Field Programmable Gate Array) chip. The chip contains a memory unit that stores the pheromone concentrations, a 16-bit random number generator, a 16-bit divider, and several logic operation units. To verify the chip, it is applied to a simulated water bath temperature control problem.
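A common way to realize a 16-bit random number generator in such hardware is a linear feedback shift register (LFSR); whether the thesis's chip actually uses an LFSR, and with which taps, is not stated in the abstract, so the following software model is only a plausible sketch:

# Software model of a 16-bit Fibonacci LFSR with taps at bits 16, 14, 13 and 11,
# a maximal-length configuration.  The choice of an LFSR and of these taps is an
# assumption made only for illustration.
def lfsr16(seed=0xACE1):
    state = seed & 0xFFFF
    while True:
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = ((state >> 1) | (bit << 15)) & 0xFFFF
        yield state

# Example: draw a few pseudo-random 16-bit values.
rng = lfsr16()
samples = [next(rng) for _ in range(4)]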
For the reinforcement fuzzy controller design problem, fuzzy Q-learning is incorporated into ACO; the combined method is called FQ-ACO and further improves the performance of ACO. Each candidate consequent value of a rule is assigned a Q-value, which is updated by fuzzy Q-learning. The best combination of consequent values for the whole controller is then searched according to both the pheromone concentrations and the Q-values. To verify FQ-ACO, reinforcement fuzzy control of a water bath temperature control system, a magnetic levitation system, and truck backing are simulated, and the results are compared with those obtained by ACO alone and by fuzzy Q-learning alone. The comparisons show that FQ-ACO is the more effective approach.
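The abstract does not give the exact rule for combining the two quantities, so the sketch below simply weights each candidate by its pheromone level and an exponential of its Q-value, and applies a fuzzy Q-learning style update scaled by the rule's firing strength; all names and constants here are illustrative assumptions:

import math
import random

# Illustrative FQ-ACO selection: every candidate consequent of a rule keeps both
# a pheromone level and a Q-value, and the rule's choice favours candidates that
# score well on both.  Score candidate c by pheromone^alpha * exp(beta * Q).
def choose(pheromone_r, q_r, alpha=1.0, beta=1.0):
    scores = [(p ** alpha) * math.exp(beta * q) for p, q in zip(pheromone_r, q_r)]
    pick = random.uniform(0.0, sum(scores))
    acc = 0.0
    for c, s in enumerate(scores):
        acc += s
        if pick <= acc:
            return c
    return len(scores) - 1

# One fuzzy Q-learning style temporal-difference update for the chosen candidate,
# weighted by the rule's firing strength.
def q_update(q_r, chosen, firing, reward, next_best_q, lr=0.1, gamma=0.9):
    td_error = reward + gamma * next_best_q - q_r[chosen]
    q_r[chosen] += lr * firing * td_error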
URI: http://hdl.handle.net/11455/6794
Appears in Collections: 電機工程學系所 (Department of Electrical Engineering)
