Title: An Architecture-Aware Thread Mapping Methodology for Fuzzy Neural Networks on GPGPUs
Author: Tseng, Hao-Yuan
Jou, Jing-Yang
Institute of Electronics
Keywords: fuzzy neural network; GPU
Issue Date: 2012
Abstract: The fuzzy neural network (FNN) is widely used in machine learning applications such as classification. It builds a network through structure and parameter learning phases, based on the correlation between input training samples and network outputs. Because these learning phases become increasingly time-consuming as the number of input attributes and training samples grows, several parallel FNN designs have been proposed to accelerate the learning procedure. When designing an efficient parallel FNN, however, thread mapping and architecture scalability must be taken into account to achieve efficient hardware utilization. In this thesis, we present a parallel FNN design flow that includes an architecture-aware thread mapping (ATM) methodology. The proposed ATM efficiently utilizes the hardware resources of GPGPUs by finding a good thread mapping for training samples and architectures with different characteristics. Experimental results show that our approach achieves significant performance improvement in common cases compared with the prior art.
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT079911652
http://hdl.handle.net/11536/49177
Appears in Collections: Thesis


Files in This Item:
  1. 165201.pdf