Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/54405
Title: VLSI Structure-aware Placement for Convolutional Neural Network Accelerator Units (考量卷積神經網路加速器單元架構之超大型積體電路擺置)
Author: Yun Chou (周昀)
Advisor: Yao-Wen Chang (張耀文)
Keywords: Physical design, Placement, Convolutional neural network
Publication Year: 2021
Degree: Master
Abstract: AI-dedicated hardware designs are growing rapidly for various AI applications. These designs often contain highly connected circuit structures, reflecting the complicated structures in neural networks, such as convolutional layers and fully-connected layers. As a result, such dense interconnections incur severe routing congestion problems in physical design that cannot be solved by conventional placement methods. This thesis proposes a novel placement framework for convolutional neural network (CNN) accelerator units, which extracts kernel structures from the circuit and inserts kernel-based placement regions to guide the placement process and minimize routing overflow and congestion. Experimental results show that our framework effectively reduces global routing congestion without wirelength degradation, significantly outperforming leading commercial tools.
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/54405 |
DOI: | 10.6342/NTU202100556 |
Full-Text Authorization: Paid access
Appears in Collections: Graduate Institute of Electronics Engineering
Files in This Item:
File | Size | Format
---|---|---
U0001-0502202100505300.pdf (currently not authorized for public access) | 5.71 MB | Adobe PDF
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.