
DSpace

The DSpace institutional repository is dedicated to preserving digital materials of all kinds (such as text, images, and PDFs) and making them easy to access.

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/8381
Full metadata record
DC Field: Value [Language]
dc.contributor.advisor: 郭斯彥 (Sy-Yen Kuo)
dc.contributor.author: Shih-Chang Lin [en]
dc.contributor.author: 林士彰 [zh_TW]
dc.date.accessioned: 2021-05-20T00:53:11Z
dc.date.available: 2020-08-06
dc.date.available: 2021-05-20T00:53:11Z
dc.date.copyright: 2020-08-06
dc.date.issued: 2020
dc.date.submitted: 2020-08-05
dc.identifier.citation[1]Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten,and Kilian Weinberger, “Multi-scale dense networks for resource efficient image classification,” in International Conference on Learning Representations, 2018.
[2]M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” CoRR, 2016.
[3]C. Chen, A. Seff, A. Kornhauser and J. Xiao, “DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving,” 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 2015, pp. 2722-2730, doi: 10.1109/ICCV.2015.312.
[4]Y. Deng, A. Luo and M. Dai, “Building an Automatic Defect Verification System Using Deep Neural Network for PCB Defect Classification,” 2018 4th International Conference on Frontiers of Signal Processing (ICFSP), Poitiers, 2018, pp. 145-149, doi: 10.1109/ICFSP.2018.8552045.
[5]A. Esteva et al., “Dermatologist-level classification of skin cancer with deep neural networks,” Nature, vol. 542, no. 7639, pp. 115–118, 2017.
[6]D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton et al., “Mastering the game of go without human knowledge,” Nature, vol. 550, no. 7676, pp. 354-359, Oct. 2017.
[7]D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” in Proc. Int. Conf. Learn. Representations (ICLR), 2014.
[8]C. Doersch, “Tutorial on variational autoencoders,” 2016, arXiv:1606.05908. [Online]. Available: http://arxiv.org/abs/1606.05908
[9]A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Dept. Comput. Sci., Univ. Toronto, Toronto, ON, Canada, Tech. Rep., 2009. [Online]. Available: https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf
[10]J. Deng, W. Dong, R. Socher, L. Li, Kai Li and Li Fei-Fei, “ImageNet: A large-scale hierarchical image database,” 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, 2009, pp. 248-255, doi: 10.1109/CVPR.2009.5206848.
[11]C. Szegedy et al., “Going deeper with convolutions,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 2015, pp. 1-9, doi: 10.1109/CVPR.2015.7298594.
[12]K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 770-778, doi: 10.1109/CVPR.2016.90.
[13]J. Hu, L. Shen and G. Sun, “Squeeze-and-Excitation Networks,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 7132-7141, doi: 10.1109/CVPR.2018.00745.
[14]S. Han, H. Mao and W. J. Dally, “Deep compression: Compressing deep neural networks with pruning trained quantization and Huffman coding,” Proc. 6th Int. Conf. Learn. Represent. (ICLR), pp. 1-14, 2016.
[15]S. Han, J. Pool, J. Tran and W. Dally, “Learning both weights and connections for efficient neural network,” Proc. Int. Conf. Neural Inf. Process. Syst., pp. 1135-1143, 2015.
[16]P. Molchanov, S. Tyree, T. Karras, T. Aila and J. Kautz, “Pruning convolutional neural networks for resource efficient inference,” Proc. Int. Conf. Learn. Represent. (ICLR), pp. 1-17, Apr. 2017.
[17]H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf, “Pruning filters for efficient convnets,” in Proc. International Conference on Learning Representations (ICLR), Toulon, Apr. 2017.
[18]Y. He, X. Zhang and J. Sun, “Channel pruning for accelerating very deep neural networks,” in Proc. Int. Conf. Comput. Vis. (ICCV), vol. 2, no. 6, pp. 1389-1397, Oct. 2017.
[19]I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv and Y. Bengio, “Binarized neural networks,” Proc. Adv. Neural Inf. Process. Syst., pp. 4107-4115, 2016.
[20]M. Rastegari, V. Ordonez, J. Redmon and A. Farhadi, “XNOR-Net: ImageNet classification using binary convolutional neural networks,” Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 525-542, Oct. 2016.
[21]Y. Xu, X. Dong, Y. Li and H. Su, 'A Main/Subsidiary Network Framework for Simplifying Binary Neural Networks,' 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 7147-7155, doi: 10.1109/CVPR.2019.00732.
[22]G. Huang, Z. Liu, L. Van Der Maaten and K. Q. Weinberger, “Densely Connected Convolutional Networks,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 2261-2269, doi: 10.1109/CVPR.2017.243.
[23]Geometric Image Transformations mdash; OpenCV 2.4.13.7 documentation. (2019) [Online]. Available: https://docs.opencv.org/2.4/modules/imgproc/doc/geo-metric_transformations.html
[24]K. He, X. Zhang, S. Ren and J. Sun, “Identity mappings in deep residual networks,” Proc. Eur. Conf. Comput. Vis, pp. 630-645, 2016.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/8381
dc.description.abstract: In recent years, with the development of applications such as the Internet of Things and self-driving cars, edge computing has become increasingly important. These technologies are closely tied to neural networks, which many of their applications rely on. Edge devices often have low computing capability, may operate on limited energy budgets or harvest energy from external sources, and are frequently used in real-time applications. We therefore propose several methods that process the input of a neural network to greatly reduce the computation the network requires while aiming to preserve its accuracy. We evaluate these methods on datasets, analyze and discuss the experimental results, and conclude with recommendations for deployment on edge devices. In experiments on three datasets, our best method reduces the FLOPs of the network architecture by at least 50%, showing that our methods can address the low computing capability, limited energy, and real-time requirements of edge devices according to each device's situation. [en]
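For a rough sense of why processing the input can cut computation this much, note that the cost of a convolutional layer scales with the spatial area of its input. The sketch below is a minimal illustration of that scaling argument only, not the thesis's actual pre-processing pipeline; the conv_flops helper and the 224x224 / 160x160 resolutions are hypothetical choices.

    # Illustrative only: multiply-accumulate count of a single stride-1,
    # 'same'-padded convolutional layer, which grows linearly with input area.
    def conv_flops(h, w, c_in, c_out, k):
        """MAC operations for one stride-1 convolution with a k x k kernel."""
        return h * w * c_in * c_out * k * k

    full = conv_flops(224, 224, c_in=3, c_out=64, k=3)  # full-resolution input
    down = conv_flops(160, 160, c_in=3, c_out=64, k=3)  # downscaled input
    print(f"FLOPs saved by downscaling: {1 - down / full:.0%}")  # prints 49%

Per the table of contents below, the thesis combines such input pre-processing (Section 3.1) with an MSDNet architecture modification (Section 3.2) for anytime prediction; the arithmetic above only motivates why input-side processing alone can approach the reported 50% FLOPs reduction.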
dc.description.provenance: Made available in DSpace on 2021-05-20T00:53:11Z (GMT). No. of bitstreams: 1; U0001-2707202017113500.pdf: 2647199 bytes, checksum: 48979243f391c54e2999c1abc889a96d (MD5); Previous issue date: 2020 [en]
dc.description.tableofcontents:
Thesis Oral Examination Committee Certification
Acknowledgements
Chinese Abstract
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
Chapter 2 Related Works
Chapter 3 Method
3.1 Pre-Process
3.2 MSDNet architecture modification
Chapter 4 Experimental
4.1 Dataset
4.2 Training Details
4.3 Training Results
Chapter 5 Conclusions and Future Works
References
dc.language.iso: en
dc.title: 邊緣裝置上的高效能隨時分類 [zh_TW]
dc.title: Efficient Anytime Classification on Edge Devices [en]
dc.type: Thesis
dc.date.schoolyear: 108-2
dc.description.degree: Master
dc.contributor.oralexamcommittee: 雷欽隆 (Chin-Laung Lei), 顏嗣鈞 (Hsu-Chun Yen), 袁世一 (Shih-Yi Yuan), 陳英一 (Ing-Yi Chen)
dc.subject.keyword: Edge Computing, Edge Device, Anytime Prediction, Efficient Classification, Efficient Network Architecture [en]
dc.relation.page: 42
dc.identifier.doi: 10.6342/NTU202001923
dc.rights.note: Authorized for release (worldwide public access)
dc.date.accepted: 2020-08-05
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電機工程學研究所 (Graduate Institute of Electrical Engineering) [zh_TW]
Appears in Collections: 電機工程學系 (Department of Electrical Engineering)

Files in This Item:
File | Size | Format
U0001-2707202017113500.pdf | 2.59 MB | Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their licensing terms.
