Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/87286

Full metadata record

DC Field | Value | Language
dc.contributor.advisor | 莊永裕 | zh_TW
dc.contributor.advisor | Yung-Yu Chuang | en
dc.contributor.author | 周思佳 | zh_TW
dc.contributor.author | Si-Jia Zhou | en
dc.date.accessioned | 2023-05-18T16:50:36Z | -
dc.date.available | 2023-11-09 | -
dc.date.copyright | 2023-05-11 | -
dc.date.issued | 2023 | -
dc.date.submitted | 2023-02-17 | -
dc.identifier.citation | [1] M. Alotaibi and B. Alotaibi. Distracted driver classification using deep learning. Signal, Image and Video Processing, 14(3):617–624, 2020.
[2] P. M. Chawan, S. Satardekar, D. Shah, R. Badugu, and A. Pawar. Distracted driver detection and classification. International Journal of Engineering Research and Applications, 4(7), 2018.
[3] F. Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1251–1258, 2017.
[4] J. Cronje and A. P. Engelbrecht. Training convolutional neural networks with class based data augmentation for detecting distracted drivers. In Proceedings of the 9th International Conference on Computer and Automation Engineering, pages 126–130, 2017.
[5] Z. Fang, J. Chen, J. Wang, Z. Wang, N. Liu, and G. Yin. Driver distraction behavior detection using a vision transformer model based on transfer learning strategy. In 2022 6th CAA International Conference on Vehicular Control and Intelligence (CVCI), pages 1–6. IEEE, 2022.
[6] K. Han, Y. Wang, H. Chen, X. Chen, J. Guo, Z. Liu, Y. Tang, A. Xiao, C. Xu, Y. Xu, et al. A survey on vision transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):87–110, 2022.
[7] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
[8] A. Koesdwiady, S. M. Bedawi, C. Ou, and F. Karray. End-to-end deep learning for driver distraction recognition. In Image Analysis and Recognition: 14th International Conference, ICIAR 2017, Montreal, QC, Canada, July 5–7, 2017, Proceedings 14, pages 11–18. Springer, 2017.
[9] E. Li, A. Samat, P. Du, W. Liu, and J. Hu. Improved bilinear CNN model for remote sensing scene classification. IEEE Geoscience and Remote Sensing Letters, 19:1–5, 2020.
[10] T.-Y. Lin, A. RoyChowdhury, and S. Maji. Bilinear CNN models for fine-grained visual recognition. In Proceedings of the IEEE international conference on computer vision, pages 1449–1457, 2015.
[11] D. Lu and Q. Weng. A survey of image classification methods and techniques for improving classification performance. International Journal of Remote Sensing, 28(5):823–870, 2007.
[12] M. Lu, Y. Hu, and X. Lu. Driver action recognition using deformable and dilated Faster R-CNN with optimized region proposals. Applied Intelligence, 50:1100–1111, 2020.
[13] S. Masood, A. Rai, A. Aggarwal, M. N. Doja, and M. Ahmad. Detecting distraction of drivers using convolutional neural network. Pattern Recognition Letters, 139:79–85, 2020.
[14] F. Omerustaoglu, C. O. Sakar, and G. Kar. Distracted driver detection by combining in-vehicle and image data using deep learning. Applied Soft Computing, 96:106657, 2020.
[15] B. Qin, J. Qian, Y. Xin, B. Liu, and Y. Dong. Distracted driver detection based on a CNN with decreasing filter size. IEEE Transactions on Intelligent Transportation Systems, 23(7):6922–6933, 2021.
[16] A. Saluja, J. Xie, and K. Fayazbakhsh. A closed-loop in-process warping detection system for fused filament fabrication using convolutional neural networks. Journal of Manufacturing Processes, 58:407–415, 2020.
[17] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826, 2016.
[18] M. Tan and Q. Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning, pages 6105–6114. PMLR, 2019.
[19] Z. A. Varaich and S. Khalid. Recognizing actions of distracted drivers using Inception v3 and Xception convolutional neural networks. In 2019 2nd International Conference on Advancements in Computational Sciences (ICACS), pages 1–8. IEEE, 2019.
[20] Z. Wu, C. Shen, and A. Van Den Hengel. Wider or deeper: Revisiting the ResNet model for visual recognition. Pattern Recognition, 90:119–133, 2019.
[21] C. Yan, F. Coenen, and B. Zhang. Driving posture recognition by convolutional neural networks. IET Computer Vision, 10(2):103–114, 2016.
[22] Z. Zhang and M. Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. Advances in Neural Information Processing Systems, 31, 2018.
[23] H. Zhao, J. Jia, and V. Koltun. Exploring self-attention for image recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10076–10085, 2020.
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/87286 | -
dc.description.abstract | 近年來,道路交通事故的風險正在迅速上升。異常駕駛仍然是交通事故的主要原因之一。駕駛員異常行為檢測是一個重要的計算機視覺問題,可以在提高交通安全和減少交通事故方面發揮至關重要的作用。基於卷積神經網絡(CNN)的廣泛方法已被應用於駕駛員異常駕駛的檢測。在卷積神經網絡(CNN)中,卷積運算擅長提取局部特徵,但難以捕獲全局表示。為解決這一問題,我們提出了一種新的駕駛員異常行為的檢測方法,該方法將不同的 CNN 模型以及 ViT 模型相結合來捕獲局部和全局特徵。此外,特徵融合的過程可以大大增強局部特徵的全局感知能力和全局表示的局部細節。在 StateFarm 數據集上進行的大量實驗表明,我們提出的方法表現出最佳性能。 | zh_TW
dc.description.abstract | The risk of road accidents has been rising rapidly in recent years. Abnormal driving remains one of the main causes of traffic accidents. Driver abnormal behavior detection is an important computer vision problem that can play a crucial role in improving traffic safety and reducing traffic accidents. A wide range of methods based on convolutional neural networks (CNNs) have been applied to detecting abnormal driving. In CNNs, convolution operations are good at extracting local features but struggle to capture global representations. To address this problem, we propose a novel driver abnormal behavior detection method that combines different CNN models with ViT models to capture both local and global features. In addition, the feature fusion process greatly enhances the global awareness of local features and the local details of the global representation. Extensive experiments on the StateFarm dataset show that our proposed method exhibits the best performance. | en
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-05-18T16:50:36Z. No. of bitstreams: 0 | en
dc.description.provenance | Made available in DSpace on 2023-05-18T16:50:36Z (GMT). No. of bitstreams: 0 | en
dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i
Acknowledgements ii
摘要 iii
Abstract iv
Contents v
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
Chapter 2 Related Work 3
2.1 Driver Distraction Detection 3
2.2 Bilinear CNN Model 4
Chapter 3 Method 5
3.1 Multi-Model Fusion Network 5
3.2 CNN Feature Module 6
3.3 Transformer Feature Module 7
3.4 Feature Fusion Module 8
Chapter 4 Experiment and Result 9
4.1 Dataset 9
4.2 Experiment Setup 10
4.3 Evaluation Metrics 10
4.4 Main Results 11
4.5 Ablation Study 12
Chapter 5 Conclusion 14
Reference 15
dc.language.iso | zh_TW | -
dc.subject | 多模型特徵 | zh_TW
dc.subject | ViT 模型 | zh_TW
dc.subject | 駕駛員異常行為檢測 | zh_TW
dc.subject | 特徵融合 | zh_TW
dc.subject | Feature Fusion | en
dc.subject | Distracted Driver Detection | en
dc.subject | Vision Transformer | en
dc.subject | Multi-model Features | en
dc.title | 使用多模型特徵進行駕駛員異常行為檢測 | zh_TW
dc.title | Distracted Driver Detection Using Multi-model Features | en
dc.type | Thesis | -
dc.date.schoolyear | 111-1 | -
dc.description.degree | 碩士 | -
dc.contributor.oralexamcommittee | 吳賦哲;葉正聖 | zh_TW
dc.contributor.oralexamcommittee | Bin-Zhe Wu;Zheng-Sheng Ye | en
dc.subject.keyword | 駕駛員異常行為檢測,特徵融合,ViT 模型,多模型特徵 | zh_TW
dc.subject.keyword | Distracted Driver Detection,Feature Fusion,Vision Transformer,Multi-model Features | en
dc.relation.page | 18 | -
dc.identifier.doi | 10.6342/NTU202300564 | -
dc.rights.note | 未授權 | -
dc.date.accepted | 2023-02-18 | -
dc.contributor.author-college | 電機資訊學院 | -
dc.contributor.author-dept | 資訊工程學系 | -
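The abstract describes fusing local features from CNN branches with global features from a ViT branch, and the related-work outline cites bilinear CNN models. The actual fusion module is not shown on this record page, but the general idea — concatenating the two feature vectors and adding a bilinear (outer-product) interaction term — can be sketched in pure Python with hypothetical shapes and helper names:

```python
# Minimal sketch of multi-model feature fusion, assuming one pooled CNN
# feature vector and one ViT feature vector per image. All names and
# dimensions here are illustrative, not the thesis's actual modules.

def outer_product(a, b):
    """Flattened outer product of a and b: every pairwise interaction a_i * b_j."""
    return [x * y for x in a for y in b]

def fuse_features(cnn_feat, vit_feat):
    """Concatenate local (CNN) and global (ViT) features, plus a bilinear term.

    The bilinear term is the mechanism bilinear-CNN models use to capture
    fine-grained interactions between two feature streams.
    """
    return cnn_feat + vit_feat + outer_product(cnn_feat, vit_feat)

# Toy stand-ins for pooled CNN features and a ViT class-token embedding.
cnn_feat = [0.5, -1.0]
vit_feat = [2.0, 0.0, 1.0]
fused = fuse_features(cnn_feat, vit_feat)
# Fused length: len(cnn) + len(vit) + len(cnn) * len(vit) = 2 + 3 + 6 = 11
```

A downstream classifier head would then map the fused vector to the distraction classes; the quadratic size of the bilinear term is why real bilinear models typically follow it with pooling or a compact projection.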
Appears in Collections: 資訊工程學系

Files in This Item:
File | Size | Format
ntu-111-1.pdf (restricted; not authorized for public access) | 6 MB | Adobe PDF