Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/8167

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 蔡欣穆(Hsin-Mu Tsai) | |
| dc.contributor.author | Hao-Jen Hsiao | en |
| dc.contributor.author | 蕭皓仁 | zh_TW |
| dc.date.accessioned | 2021-05-20T00:49:30Z | - |
| dc.date.available | 2020-08-24 | |
| dc.date.available | 2021-05-20T00:49:30Z | - |
| dc.date.copyright | 2020-08-24 | |
| dc.date.issued | 2020 | |
| dc.date.submitted | 2020-08-18 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/8167 | - |
| dc.description.abstract | Since the recent boom in intelligent vehicles, Adaptive Cruise Control (ACC) has been one of the most popular Advanced Driver Assistance System (ADAS) functions. Many recent studies seek to implement ACC with cameras and computer vision technology, mainly because the camera can be extended to other vision-based intelligent functions on the vehicle, and it also costs far less than LiDAR. However, conventional cameras have only low temporal resolution and lack the ability to compute relative vehicle speed, which greatly limits system performance. This thesis proposes a relative vehicle speed estimation model based on a dynamic vision sensor and a convolutional neural network. The dynamic vision sensor resolves many of the problems conventional cameras encounter at high driving speeds and enables longitudinal motion estimation. The system reaches 40 Hz, surpassing typical automotive Doppler radar. Experimental results show that the mean error of relative speed estimation is below 1.4 km/h. | zh_TW |
| dc.description.abstract | Since the recent flourishing development of intelligent vehicles, Adaptive Cruise Control (ACC) has been one of the most popular Advanced Driver Assistance System (ADAS) functions. Many recent studies seek to implement ACC with a single camera and computer vision technology, because a camera can be extended to implement other visual intelligent functions and its cost is much lower than that of LiDAR. However, a conventional camera lacks the ability to estimate relative speed and has only a low temporal resolution, which greatly limits the performance of the system. This thesis presents a vehicle relative speed estimation model for Adaptive Cruise Control based on a Dynamic Vision Sensor (DVS) and a convolutional neural network (CNN). Also a visual sensor, the DVS is an asynchronous camera with high temporal resolution that overcomes many problems of conventional cameras in high-speed driving conditions. The key innovation of this work is the use of visual sensors for longitudinal motion estimation. Moreover, we design two novel data augmentation methods specifically for DVS streaming data. The speed estimation rate of our system reaches 40 Hz, surpassing Doppler radar-based systems. Experimental results show that the error of speed estimation is less than 1.4 km/h. (An illustrative sketch of this event-frame encoding idea follows the record below.) | en |
| dc.description.provenance | Made available in DSpace on 2021-05-20T00:49:30Z (GMT). No. of bitstreams: 1 U0001-1708202012595000.pdf: 2884134 bytes, checksum: 2ee97dc06304e881c9999bef62ac6355 (MD5) Previous issue date: 2020 | en |
| dc.description.tableofcontents | Committee Certification ii Acknowledgements iii Chinese Abstract iv Abstract v CHAPTER 1 Introduction 1 CHAPTER 2 Related Work 5 2.1 Vision-based Adaptive Cruise Control 5 2.2 Event-based deep learning 5 CHAPTER 3 Preliminary 7 3.1 Dynamic vision sensor 7 3.2 Event Encoding methods 9 3.2.1 Frequency events Encoding 9 3.2.2 Surface of active events Encoding 10 3.3 Convolutional Neural Network 11 3.3.1 Convolution layer 11 3.3.2 Global average pooling 12 CHAPTER 4 System Design 14 4.1 Overview 14 4.2 Event-Frame Encoder 16 4.2.1 Object detection 17 4.2.2 Speed estimation 17 4.3 Object Detection Model 19 4.4 Event Frame Preprocessing 20 4.4.1 Coordinate Encoding 20 4.4.2 Centralization 21 4.5 Data Augmentation 23 4.5.1 Shift Sampling 24 4.5.2 Temporal Flip 25 4.5.3 Horizontal Flip 26 4.6 Speed Estimate Model 27 4.6.1 Quadrant Issue 27 CHAPTER 5 Implementation 30 5.1 DVS240 30 5.2 LiDAR 30 CHAPTER 6 Experiment 32 6.1 Data Set 32 6.1.1 Experimental Setup 32 6.1.2 Data Collection 32 6.2 Loss Function 33 6.3 Performance Metrics 33 6.4 Speed Estimate Model Evaluation 34 6.4.1 Effect of Adaptive Average Pooling 2×2 34 6.4.2 Effect of Preprocessing 36 6.4.3 Effect of Data Augmentation 38 6.4.4 Best performance model 40 6.4.5 Evaluate on other styles of vehicle 41 CHAPTER 7 Conclusion 43 Bibliography 44 | |
| dc.language.iso | en | |
| dc.title | Relative Vehicle Speed Estimation Using Dynamic Vision Images | zh_TW |
| dc.title | Vehicle Relative Speed Estimation with Dynamic Vision Sensor | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 108-2 | |
| dc.description.degree | Master's | |
| dc.contributor.oralexamcommittee | 林忠緯(Chung-Wei Lin),林靖茹(Ching-Ju Lin),陳冠文(Kuan-Wen Chen) | |
| dc.subject.keyword | relative vehicle speed estimation, dynamic vision sensor, convolutional neural network | zh_TW |
| dc.subject.keyword | vehicle speed estimation, dynamic vision sensor, convolutional neural network | en |
| dc.relation.page | 46 | |
| dc.identifier.doi | 10.6342/NTU202003725 | |
| dc.rights.note | Authorization granted (open access worldwide) | |
| dc.date.accepted | 2020-08-19 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
| dc.contributor.author-dept | Graduate Institute of Computer Science and Information Engineering | zh_TW |
| Appears in Collections: | Department of Computer Science and Information Engineering | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| U0001-1708202012595000.pdf | 2.82 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
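
The abstract above describes encoding the DVS's asynchronous event stream into frames before CNN-based speed regression (see "Event-Frame Encoder" and "Frequency events Encoding" in the table of contents). The sketch below is a minimal, hypothetical illustration of one such frequency-style encoding in Python; the function name `events_to_frame`, the `(t, x, y, polarity)` event layout, and the 25 ms (40 Hz) window are assumptions for illustration, not the thesis's actual implementation. The 240×180 default resolution matches the DAVIS240 sensor named in the record.

```python
# Minimal sketch (not the thesis's actual code): accumulating a DVS event
# stream into a fixed-size 2-channel count frame, a common "frequency"
# style event representation for feeding a CNN.
import numpy as np

def events_to_frame(events, height=180, width=240):
    """Accumulate events into a (2, H, W) count frame.

    events: array of shape (N, 4) with columns (t, x, y, polarity),
    where polarity is +1 (brightness increase) or -1 (decrease).
    DAVIS240 resolution is 240x180, hence the defaults.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    for t, x, y, p in events:
        channel = 0 if p > 0 else 1   # separate ON and OFF events
        frame[channel, int(y), int(x)] += 1.0
    return frame

# Example: 1000 synthetic events over a 25 ms window (40 Hz frame rate)
rng = np.random.default_rng(0)
events = np.stack([
    rng.uniform(0, 0.025, 1000),      # timestamps in seconds
    rng.integers(0, 240, 1000),       # x coordinates
    rng.integers(0, 180, 1000),       # y coordinates
    rng.choice([-1, 1], 1000),        # polarities
], axis=1)
print(events_to_frame(events).shape)  # (2, 180, 240)
```

Keeping ON and OFF polarities in separate channels preserves the sign of each brightness change, information a CNN can exploit when distinguishing approaching from receding motion.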
