Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83258
Title: | 無電池裝置之間歇性深度學習推論 (Intermittent Deep Inference on Battery-less Devices) |
Alternative Title: | Intermittent Deep Inference on Battery-less Devices |
Author: | 康智凱 Chih-Kai Kang |
Advisor: | 陳銘憲 Ming-Syan Chen |
Keywords: | Deep neural networks, Intermittent systems, Model adaptation, Energy harvesting, Edge computing |
Publication Year: | 2023 |
Degree: | Ph.D. |
Abstract: | Energy harvesting enables a new computing paradigm of battery-less intermittent systems, but it also raises challenges for complex applications such as intermittent deep neural network (DNN) inference. Existing approaches to intermittent execution require access to internal system state for checkpointing, or rely on developers to split an application into tasks according to its energy consumption; they also require substantial additional memory to ensure the correctness of intermittent execution. These requirements are difficult to fulfill on battery-less IoT devices that use hardware acceleration for DNN inference: the internal state of peripheral accelerators may be inaccessible, DNN models are not easily divided into tasks that fit within an energy budget, and the large size of DNNs, combined with the extra memory needed to protect correctness, can exceed the resources of a tiny device. This thesis addresses three issues related to DNN inference on intermittently powered tiny devices. The first is how to perform hardware-accelerated DNN inference in an intermittent manner; we introduce the concept of inference footprinting to preserve accelerator progress across power cycles. The second is the high overhead of preserving the footprint, which can reduce the throughput benefits of inference footprinting; we propose model augmentation to adapt deep models to intermittent systems. The final issue is the memory demand of DNN inference on devices with limited memory; we present Deep Reorganization, which reorganizes the residual connections in a DNN model to reduce the inference memory requirement and enable its use on resource-constrained devices. |
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83258 |
DOI: | 10.6342/NTU202210083 |
Full-text Authorization: | Authorized (restricted to on-campus access) |
Electronic Full-text Release Date: | 2027-09-29 |
Appears in Collections: | Department of Electrical Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-0455221126291015.pdf (not currently authorized for public access) | 2.92 MB | Adobe PDF | View/Open |
Unless otherwise indicated by their specific copyright terms, all items in this repository are protected by copyright, with all rights reserved.