Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/79482

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 劉邦鋒(Pangfeng Liu) | |
| dc.contributor.author | Chang-Han Chiang | en |
| dc.contributor.author | 江昶翰 | zh_TW |
| dc.date.accessioned | 2022-11-23T09:01:35Z | - |
| dc.date.available | 2022-10-12 | |
| dc.date.available | 2022-11-23T09:01:35Z | - |
| dc.date.copyright | 2021-10-21 | |
| dc.date.issued | 2021 | |
| dc.date.submitted | 2021-10-14 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/79482 | - |
| dc.description.abstract | Deep neural networks have been used extensively in machine learning in recent years. To achieve higher prediction accuracy, networks keep growing deeper, but the accuracy gain comes at the cost of longer inference time and higher energy consumption. Prior work proposed branch networks (BranchyNet) to address this problem: branches are added to the original network so that an input can exit early with a prediction, without running through the entire network. A branch network requires several manually tuned hyperparameters, and these hyperparameters strongly affect its effectiveness. This thesis studies where in the network these branches should be placed; to the best of our knowledge, this problem has not been studied before. We first define the branch location problem and prove that it is NP-complete. We then propose a dynamic programming algorithm that finds the optimal branch locations in a network. We evaluate the algorithm on VGG networks of four different depths; the results show that it efficiently computes the optimal branch locations and accurately estimates both the prediction accuracy and the actual inference time. (An illustrative dynamic-programming sketch follows the metadata table below.) | zh_TW |
| dc.description.provenance | Made available in DSpace on 2022-11-23T09:01:35Z (GMT). No. of bitstreams: 1 U0001-0810202115010300.pdf: 2565231 bytes, checksum: 6385a1f64f0a9881eb2237fc23284baa (MD5) Previous issue date: 2021 | en |
| dc.description.tableofcontents | Acknowledgements---------------------------------- v Abstract (Chinese)--------------------------------- vi Abstract------------------------------------------ vii Contents------------------------------------------ ix List of Figures----------------------------------- xi List of Tables------------------------------------ xii List of Algorithms-------------------------------- xiii Chapter 1 Introduction---------------------------- 1 Chapter 2 Background------------------------------ 3 2.1 Architecture-------------------------------- 3 2.2 Training Methodologies---------------------- 4 2.2.1 Joint Training-------------------------- 4 2.2.2 Separate Training----------------------- 5 2.3 Fast Inference by Early Exiting------------- 5 Chapter 3 Related Work---------------------------- 6 3.1 Model Compression--------------------------- 6 3.1.1 Unstructured Pruning-------------------- 6 3.1.2 Structured Pruning---------------------- 7 3.2 Dynamic Inference--------------------------- 7 3.2.1 Dynamic Layer Skipping------------------ 8 3.2.2 Adaptive Channel and Filter Pruning----- 8 3.3 Split Computing----------------------------- 8 Chapter 4 Methodology----------------------------- 10 4.1 Hyperparameters----------------------------- 10 4.2 Objective----------------------------------- 10 4.3 Assumption---------------------------------- 11 4.4 Notation------------------------------------ 11 4.5 NP-Completeness----------------------------- 13 4.6 Optimal Branch Location Search Algorithm---- 14 4.6.1 Recursion------------------------------- 14 4.6.2 Terminal Case--------------------------- 16 4.6.3 Time Complexity Analysis---------------- 16 Chapter 5 Experiment------------------------------ 18 5.1 VGG11--------------------------------------- 19 5.2 Other VGGs---------------------------------- 21 Chapter 6 Conclusion------------------------------ 25 References---------------------------------------- 27 | |
| dc.language.iso | en | |
| dc.title | 分支網路中性價比最高之最佳分支位置演算法 | zh_TW |
| dc.title | Optimal Branch Location for Cost-effective Inference on Branchynet | en |
| dc.date.schoolyear | 109-2 | |
| dc.description.degree | Master's | |
| dc.contributor.coadvisor | 王大為(Da-Wei Wang) | |
| dc.contributor.oralexamcommittee | 吳真貞(Jan-Jan Wu),洪鼎詠(Ding-Yong Hong) | |
| dc.subject.keyword | 深度神經網路,提前退出機制,分支網路,動態規劃,NP完備, | zh_TW |
| dc.subject.keyword | Deep Neural Network, early exits, BranchyNet, dynamic programming, NP-Complete | en |
| dc.relation.page | 30 | |
| dc.identifier.doi | 10.6342/NTU202103620 | |
| dc.rights.note | Consent granted (open access worldwide) | |
| dc.date.accepted | 2021-10-15 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
| dc.contributor.author-dept | Data Science Degree Program | zh_TW |
| Appears in Collections: | Data Science Degree Program | |
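The abstract above describes the approach only at a high level, and this record does not contain the thesis's actual formulation or algorithm. Purely as an illustration of what a dynamic-programming search over candidate branch locations can look like, the Python sketch below solves a simplified knapsack-style stand-in for the problem. Every name in it (`best_branch_locations`, `gain`, `cost`, `budget`) and the assumption that each candidate branch contributes an independent accuracy gain and inference cost are hypothetical choices made here, not the author's method.

```python
# Hedged sketch only: a knapsack-style dynamic program over candidate branch
# positions. The gain/cost model and all names are illustrative assumptions;
# the thesis's own algorithm and NP-completeness proof are not reproduced here.
from functools import lru_cache


def best_branch_locations(gain, cost, budget):
    """Pick a subset of candidate positions 0..n-1 maximizing total accuracy
    gain while the total extra inference cost stays within an integer budget."""
    n = len(gain)

    @lru_cache(maxsize=None)
    def dp(i, remaining):
        # Best (total gain, chosen positions) over candidates i..n-1
        # with `remaining` budget left.
        if i == n:
            return 0.0, ()
        best = dp(i + 1, remaining)                       # skip candidate i
        if cost[i] <= remaining:                          # or place a branch at i
            sub_gain, sub_set = dp(i + 1, remaining - cost[i])
            if sub_gain + gain[i] > best[0]:
                best = (sub_gain + gain[i], (i,) + sub_set)
        return best

    total_gain, chosen = dp(0, budget)
    return total_gain, list(chosen)


if __name__ == "__main__":
    # Toy, made-up numbers: per-candidate accuracy gain and extra cost.
    gain = [0.4, 0.9, 0.3, 0.7]
    cost = [2, 3, 1, 4]
    print(best_branch_locations(gain, cost, budget=5))    # roughly (1.3, [0, 1])
```

In the thesis itself the branches interact (an early exit changes which inputs reach later exits), which is part of why the branch location problem is NP-complete; the independence assumption above is only for readability.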
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| U0001-0810202115010300.pdf | 2.51 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
