NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73543

Full metadata record (DC field: value, with language tag):
dc.contributor.advisor: 陳柏華(Albert Y. Chen)
dc.contributor.author: Jen-Chun Wang (en)
dc.contributor.author: 王仁駿 (zh_TW)
dc.date.accessioned: 2021-06-17T07:41:02Z
dc.date.available: 2021-02-20
dc.date.copyright: 2019-02-20
dc.date.issued: 2019
dc.date.submitted: 2019-02-14
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73543
dc.description.abstract: 本研究提出一個基於深度學習,利用影像以提取交通特性的架構,目的是要為交通領域提供新的視野,並期許能藉由資料分析,幫助解決交通問題。過去使用的影像方法大多著重於特徵提取並搭配分類器以辨識車輛,而深度學習方法近年興起,且在過去較少被應用於交通領域。又近年來基於深度學習的物體偵測方法進步快速,且學習能力強、網路架構能彈性調整、偵測效果佳、同一偵測器可適用於不同場景等優勢,因此利用深度學習,期望能夠有效地應用於交通問題上。藉由近年出現的基於深度學習的物體偵測方法–You Only Look Once (YOLO),針對其具有偵測速度快 (接近即時)、偵測準確度高、背景錯誤率低、能偵測多類別物體等特性,故選用此方法作為本研究之偵測模式。本文提出之研究架構,先使用 COCO及MIO-TCD資料集訓練客製化YOLO二代模型,之後搭配卡曼濾波器及匈牙利演算法,偵測及追蹤汽車、機車、公車、自行車等物體類別,並取得分別對應的軌跡,亦可估算各個類別之數量,於單一張顯示卡NVIDIA GTX 1080上可達大約 38 FPS,汽車分類之精確率可達91.46%以上,機車可達89.51%以上。本文提出之研究架構應用於各種場景皆能保持一定的偵測精確率,又偵測速度快,且不受相機鏡頭、天氣狀況影響,可應用於許多交通問題。 (zh_TW)
dc.description.abstract: We propose an image-based framework that uses deep learning to extract traffic characteristics, aiming to offer new perspectives for the traffic engineering field and to support data-driven analysis of traffic problems. Earlier image-based methods mostly combined hand-crafted feature extraction with classifiers to detect and recognize vehicles. Deep-learning-based object detection, which has so far seen few applications in traffic engineering, has advanced rapidly in recent years; it offers strong learning capacity, flexible network architectures, good detection performance, and robustness across scenes, so it can plausibly be applied to traffic engineering problems with good effect. We adopt You Only Look Once (YOLO), a deep-learning-based object detector, for its fast (near real-time) detection, high accuracy, low background error rate, and ability to detect multiple object categories. We trained a customized YOLO v2 model on the COCO and MIO-TCD datasets and combined it with a Kalman filter and the Hungarian algorithm to detect and track cars, motorbikes/scooters, buses, and bicycles, obtaining the trajectory and count for each category at roughly 38 FPS on a single NVIDIA GTX 1080 graphics card. Classification precision reaches at least 91.46% for cars and 89.51% for motorbikes. The proposed framework maintains detection precision across diverse scenes, runs fast, and is robust to camera lens and weather conditions, making it applicable to many traffic problems. (en)
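The detect-then-track pipeline the abstract describes (per-frame YOLO v2 detections, Kalman-filter prediction, Hungarian assignment, per-category counting) hinges on the association step. The sketch below is a minimal illustration under stated assumptions, not the thesis's implementation: it matches predicted track boxes to new detections by minimizing total (1 − IoU) cost with `scipy.optimize.linear_sum_assignment`. The function names (`iou`, `associate`) and the IoU threshold are illustrative choices; the Kalman predict/update and the counting logic are omitted.

```python
# Illustrative sketch of Hungarian-algorithm data association over an IoU
# cost matrix, as used in detect-then-track pipelines. Boxes: (x1, y1, x2, y2).
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


def associate(tracks, detections, iou_threshold=0.3):
    """Match predicted track boxes to new detections.

    Returns (matches, unmatched_detections), where matches is a list of
    (track_index, detection_index) pairs whose IoU meets the threshold.
    """
    if not tracks or not detections:
        return [], list(range(len(detections)))
    # Hungarian algorithm minimizes total cost; use cost = 1 - IoU.
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols)
               if cost[r, c] <= 1.0 - iou_threshold]
    matched_dets = {c for _, c in matches}
    unmatched = [j for j in range(len(detections)) if j not in matched_dets]
    return matches, unmatched
```

In such a pipeline, matched pairs feed the corresponding Kalman-filter updates, unmatched detections typically seed new tracks, and a track contributes to its category's count once its trajectory satisfies the counting rule (e.g. crossing a counting line).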
dc.description.provenance: Made available in DSpace on 2021-06-17T07:41:02Z (GMT). No. of bitstreams: 1; ntu-108-R05521508-1.pdf: 3192159 bytes, checksum: 5a86b60e4116b1fc69addb6798a4c169 (MD5). Previous issue date: 2019. (en)
dc.description.tableofcontents:
Oral Defense Committee Approval #
Acknowledgements I
Chinese Abstract II
ABSTRACT III
CONTENTS V
LIST OF FIGURES VII
LIST OF TABLES VIII
Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Research Objective 3
1.3 Research Flowchart 4
1.4 Structure of the Study 5
Chapter 2 Literature Review 7
2.1 Object/Vehicle Detection 7
2.1.1 First stage (~2000) 7
2.1.2 Second stage (2000~2012) 8
2.1.3 Third stage (2012~) 9
2.2 Object Tracking 12
2.3 Summary 12
Chapter 3 Methodology 14
3.1 Training Data Introduction and Pre-processing 15
3.2 Network Training 16
3.3 Object Tracking 18
3.3.1 Kalman Filter 19
3.3.2 Hungarian Algorithm 20
3.4 Object Counting 21
Chapter 4 Results 23
4.1 Validation Videos 24
4.2 Parameters Adjustment 26
4.3 Results of Multiple Object Counting 28
4.4 Summary 39
Chapter 5 Conclusion and Future Work 42
5.1 Conclusion 42
5.2 Future Work 44
REFERENCE 45
dc.language.iso: zh-TW
dc.subject: 電腦視覺 (zh_TW)
dc.subject: 深度學習 (zh_TW)
dc.subject: 物體偵測 (zh_TW)
dc.subject: 物體追蹤 (zh_TW)
dc.subject: 交通特性提取 (zh_TW)
dc.subject: Computer Vision (en)
dc.subject: Deep Learning (en)
dc.subject: Object Detection (en)
dc.subject: Object Tracking (en)
dc.subject: Traffic Characteristics Extraction (en)
dc.title: 透過深度學習基於影像之交通特性提取 (zh_TW)
dc.title: Image-based Traffic Characteristics Extraction through Deep Learning (en)
dc.type: Thesis
dc.date.schoolyear: 107-1
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 韓仁毓(Jen-Yu Han), 謝佑明(Yo-Ming Hsieh)
dc.subject.keyword: 電腦視覺, 深度學習, 物體偵測, 物體追蹤, 交通特性提取 (zh_TW)
dc.subject.keyword: Computer Vision, Deep Learning, Object Detection, Object Tracking, Traffic Characteristics Extraction (en)
dc.relation.page: 49
dc.identifier.doi: 10.6342/NTU201900418
dc.rights.note: 有償授權 (licensed access, fee required)
dc.date.accepted: 2019-02-14
dc.contributor.author-college: 工學院 (College of Engineering) (zh_TW)
dc.contributor.author-dept: 土木工程學研究所 (Graduate Institute of Civil Engineering) (zh_TW)
Appears in collections: Department of Civil Engineering (土木工程學系)

Files in this item:
File | Size | Format
ntu-108-1.pdf | 3.12 MB | Adobe PDF (restricted; not publicly available)