NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/1277
Full metadata record
dc.contributor.advisor: 施吉昇
dc.contributor.author: Chun-Wei Ku [en]
dc.contributor.author: 古君葳 [zh_TW]
dc.date.accessioned: 2021-05-12T09:35:24Z
dc.date.available: 2020-03-02
dc.date.available: 2021-05-12T09:35:24Z
dc.date.copyright: 2018-03-02
dc.date.issued: 2018
dc.date.submitted: 2018-02-12
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/handle/123456789/1277
dc.description.abstract: Nowadays, the bulk of road collisions is caused by human unawareness or distraction. Because the safety of the driver and of others is paramount, Advanced Driver Assistance Systems (ADAS) have been developed to make vehicles safer and driving easier. The Autonomous Emergency Braking System (AEBS), an important part of ADAS, has become a hot research topic. Computer vision, together with radar and lidar, is at the forefront of the technologies enabling the evolution of AEBS. Since long-range radar and lidar are very expensive, we construct an AEBS from cameras alone. Instead of a single monocular camera, we propose a heterogeneous camera-based system that uses sensor fusion to combine the strengths of cameras with different fields of view (FoV). We also apply a heuristic false-positive removal method to reduce the false positives introduced by the fusion step, and we optimize the fusion method for the limited computing resources of embedded systems. As a result, our heterogeneous camera-based system increases the recall of YOLO by up to 10%. [en]
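The fusion step named in the abstract is not spelled out on this page, so the following is only an illustrative aside: a minimal Python sketch of cross-camera detection fusion, assuming every box has already been transformed into one shared reference frame and that overlapping duplicates are merged greedily by confidence. The function names, the IoU threshold, and the greedy strategy are assumptions for illustration, not the thesis's actual method.

```python
# Illustrative sketch only: greedy cross-camera fusion of detections.
# Assumes boxes are (x1, y1, x2, y2, score) already mapped into a shared
# reference-camera frame; threshold and strategy are hypothetical.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse_detections(per_camera_boxes, iou_thresh=0.5):
    """Merge boxes from all cameras, keeping the highest-confidence box
    among any group overlapping above iou_thresh (cross-camera NMS)."""
    candidates = sorted(
        (box for boxes in per_camera_boxes for box in boxes),
        key=lambda box: box[4], reverse=True)
    fused = []
    for box in candidates:
        if all(iou(box, kept) < iou_thresh for kept in fused):
            fused.append(box)
    return fused

# Toy usage: a wide-FoV and a narrow-FoV camera see the same vehicle;
# the duplicate is merged and the unmatched box passes through.
wide = [(100, 120, 180, 200, 0.62)]
narrow = [(104, 118, 182, 204, 0.91), (300, 60, 340, 110, 0.45)]
print(fuse_detections([wide, narrow]))
```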
dc.description.provenance: Made available in DSpace on 2021-05-12T09:35:24Z (GMT). No. of bitstreams: 1. ntu-107-R04922133-1.pdf: 35022533 bytes, checksum: 05d17ae8a2edce861301e4897cf9dd92 (MD5). Previous issue date: 2018 [en]
dc.description.tableofcontents:
口試委員審定書 (Oral Defense Committee Certification) i
致謝 (Acknowledgements) ii
摘要 (Chinese Abstract) iii
Abstract iv
1 Introduction 1
1.1 Motivation 1
1.2 Contribution 5
1.3 Thesis Organization 5
2 Background and Related Work 6
2.1 Autonomous Emergency Braking System 6
2.2 Vision-Based Vehicle Detection: YOLO 7
2.3 R-tree 9
2.4 Related Work 12
3 System Architecture and Problem Definition 14
3.1 System Architecture 14
3.2 Problem Definition 16
4 Design and Implementation 18
4.1 The Impact of Input Image Sizes 20
4.2 Coordinate System Transformation 21
4.3 Existing Sensor Fusion Method 21
4.4 Proposed Sensor Fusion Method 23
4.5 False Positive Removal 26
4.6 Search Space Reduction 28
5 Performance Evaluation 34
5.1 Evaluation of Sensor Fusion Method 35
5.2 Evaluation of False Positive Removal 37
5.3 Performance Measurement on NVIDIA TX2 38
6 Conclusion 39
Bibliography 40
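The chapter outline above pairs an R-tree background section (2.3) with a search-space-reduction step (4.6), which suggests spatial indexing is used to avoid comparing every pair of boxes during fusion. Purely as a hedged sketch of that idea, and not the thesis's actual design, the snippet below indexes one camera's boxes with the third-party rtree package so that only spatially intersecting candidates are ever compared:

```python
# Hedged sketch: prune fusion candidates with an R-tree instead of
# comparing all O(n*m) box pairs. Uses the third-party `rtree` package
# (pip install rtree); names and structure are illustrative only.
from rtree import index

def overlap_candidates(reference_boxes, query_boxes):
    """Yield (query_idx, reference_idx) pairs whose bounding boxes
    intersect; only these pairs need a full IoU/fusion check."""
    idx = index.Index()
    for i, (x1, y1, x2, y2, _score) in enumerate(reference_boxes):
        idx.insert(i, (x1, y1, x2, y2))
    for qi, (x1, y1, x2, y2, _score) in enumerate(query_boxes):
        for ri in idx.intersection((x1, y1, x2, y2)):
            yield qi, ri
```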
dc.language.iso: en
dc.title: 針對安全攸關之嵌入式即時系統的異質性資訊融合 [zh_TW]
dc.title: Heterogeneous Sensing Fusion for Safety Critical Embedded Real-time Systems [en]
dc.type: Thesis
dc.date.schoolyear: 106-1
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 逄愛君, 叢培貴
dc.subject.keyword: 異質性資訊融合, 異質性影像感測器系統, 緊急煞車輔助系統, 物體偵測 [zh_TW]
dc.subject.keyword: Heterogeneous Sensing Fusion, Heterogeneous Camera-Based System, Tri-focal camera, AEBS, Object Detection [en]
dc.relation.page: 41
dc.identifier.doi: 10.6342/NTU201800542
dc.rights.note: Authorization granted (worldwide open access)
dc.date.accepted: 2018-02-12
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File: ntu-107-1.pdf (34.2 MB, Adobe PDF)


All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.

Contact
No. 1, Sec. 4, Roosevelt Rd., Da'an Dist., Taipei 10617, Taiwan (R.O.C.)
Tel: (02) 3366-2353
Email: ntuetds@ntu.edu.tw
© NTU Library All Rights Reserved