Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85943
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 洪一平(Yi-Ping Hung) | |
dc.contributor.author | JunYong Jeon | en |
dc.contributor.author | 田濬榕 | zh_TW |
dc.date.accessioned | 2023-03-19T23:29:52Z | - |
dc.date.copyright | 2022-09-23 | |
dc.date.issued | 2022 | |
dc.date.submitted | 2022-09-21 | |
dc.identifier.citation | Steffen Gauglitz, Tobias Höllerer, and Matthew Turk. Evaluation of interest point detectors and feature descriptors for visual tracking. International Journal of Computer Vision, 94(3):335–360, 2011.
Martin Zukal, Petr Cika, and Radim Burget. Evaluation of interest point detectors for scenes with changing lightening conditions. In 2011 34th International Conference on Telecommunications and Signal Processing (TSP), pages 579–583. IEEE, 2011.
Vassileios Balntas, Karel Lenc, Andrea Vedaldi, and Krystian Mikolajczyk. HPatches: A benchmark and evaluation of handcrafted and learned local descriptors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5173–5182, 2017.
Torsten Sattler, Will Maddern, Carl Toft, Akihiko Torii, Lars Hammarstrand, Erik Stenborg, Daniel Safari, Masatoshi Okutomi, Marc Pollefeys, Josef Sivic, et al. Benchmarking 6DOF outdoor visual localization in changing conditions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8601–8610, 2018.
Zhengyou Zhang. Determining the epipolar geometry and its uncertainty: A review. International Journal of Computer Vision, 27(2):161–195, 1998.
Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 3354–3361. IEEE, 2012.
Jürgen Sturm, Nikolas Engelhard, Felix Endres, Wolfram Burgard, and Daniel Cremers. A benchmark for the evaluation of RGB-D SLAM systems. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 573–580. IEEE, 2012.
Kyle Wilson and Noah Snavely. Robust global translations with 1DSfM. In European Conference on Computer Vision, pages 61–75. Springer, 2014.
Wenzheng Song, Masanori Suganuma, Xing Liu, Noriyuki Shimobayashi, Daisuke Maruta, and Takayuki Okatani. Matching in the dark: A dataset for matching image pairs of low-light scenes. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6029–6038, 2021.
David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
Herbert Bay, Tinne Tuytelaars, and Luc Van Gool. SURF: Speeded up robust features. In European Conference on Computer Vision, pages 404–417. Springer, 2006.
Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. SuperPoint: Self-supervised interest point detection and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 224–236, 2018.
Mihai Dusmanu, Ignacio Rocco, Tomas Pajdla, Marc Pollefeys, Josef Sivic, Akihiko Torii, and Torsten Sattler. D2-Net: A trainable CNN for joint description and detection of local features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8092–8101, 2019.
Jerome Revaud, Philippe Weinzaepfel, César De Souza, Noe Pion, Gabriela Csurka, Yohann Cabon, and Martin Humenberger. R2D2: Repeatable and reliable detector and descriptor. arXiv preprint arXiv:1906.06195, 2019.
Cesar Cadena, Luca Carlone, Henry Carrillo, Yasir Latif, Davide Scaramuzza, José Neira, Ian Reid, and John J. Leonard. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Transactions on Robotics, 32(6):1309–1332, 2016.
Baihui Tang, Zhengyi Liu, and Sanxing Cao. AR application research based on ORB-SLAM. In International Conference on VR/AR and 3D Displays, pages 78–88. Springer, 2020.
Johannes L. Schonberger, Hans Hardmeier, Torsten Sattler, and Marc Pollefeys. Comparative evaluation of hand-crafted and learned local features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1482–1491, 2017.
Jared Heinly, Enrique Dunn, and Jan-Michael Frahm. Comparative evaluation of binary features. In European Conference on Computer Vision, pages 759–773. Springer, 2012.
Christoph Strecha, Wolfgang Von Hansen, Luc Van Gool, Pascal Fua, and Ulrich Thoennessen. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8. IEEE, 2008.
Michael E. Wall, Andreas Rechtsteiner, and Luis M. Rocha. Singular value decomposition and principal component analysis. In A Practical Approach to Microarray Data Analysis, pages 91–109. Springer, 2003.
Richard Hartley and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
Konstantinos G. Derpanis. Overview of the RANSAC algorithm. Image Rochester NY, 4(1):2–3, 2010.
Jia-Wang Bian, Yu-Huan Wu, Ji Zhao, Yun Liu, Le Zhang, Ming-Ming Cheng, and Ian Reid. An evaluation of feature matchers for fundamental matrix estimation. arXiv preprint arXiv:1908.09474, 2019.
Christopher Choy, Jaesik Park, and Vladlen Koltun. Fully convolutional geometric features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8958–8966, 2019. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85943 | - |
dc.description.abstract | 特徵提取在攝影機定位領域有廣泛的應用,特徵提取的精確度對於追蹤攝影機至關重要。為此,本研究透過兩視點跟蹤流估計基矩陣的比較,驗證了特徵之性能。在這個過程中,我們提出了一個方法來比較特徵運用在整個領域上的性能,並使用一個新的概念,規範對稱幾何距離曲線。除了常用的數據集,包括視點或亮度的變化,我們還使用專門用於低曝光條件下的數據集來評估相機傳感器的靈敏度。低曝光和噪聲的方法是用來定義強健特徵的另一種視角,對於未來諸如自動駕駛汽車或增強現實等應用中的特徵提取方法的發展有重大影響。 | zh_TW |
dc.description.abstract | Feature-based localization methods are widely used in the camera-based localization field, and accurate feature extraction is crucial for obtaining precise camera tracking results. To this end, in this study, the performance of each feature method was verified by comparing the fundamental matrices estimated in a two-view tracking flow. In this paper, we present a new evaluation metric, the NSGD error curve, to compare the overall performance of various feature methods. In addition to using comparative evaluation datasets covering viewpoint or luminance changes, we also applied a dataset specific to low exposure and noise to evaluate the performance of feature extraction methods in low-luminance situations. This new approach to low exposure and noise, another criterion that defines robust features, will be significant for research on feature methods in applications such as Autonomous Vehicles or Augmented Reality. | en |
dc.description.provenance | Made available in DSpace on 2023-03-19T23:29:52Z (GMT). No. of bitstreams: 1 U0001-2908202200415300.pdf: 14701081 bytes, checksum: c31c20a991353e51a6d5e9b92a75386f (MD5) Previous issue date: 2022 | en |
dc.description.tableofcontents | 摘要 i Abstract ii Contents iii List of Figures vi List of Tables viii Chapter 1 Introduction 1 Chapter 2 Related Works 4 Chapter 3 Experiment Design 10 3.1 Image Sequence 11 3.2 Feature Extraction 11 3.3 Initial Matching 12 3.4 Estimate Fundamental Matrix 12 3.5 Outlier Rejection 13 3.6 Evaluate Fundamental Matrix using NSGD curve 14 Chapter 4 Experiment Fundamentals 15 4.1 Evaluation Metrics 15 4.1.1 Symmetry Geometric Distance (SGD) 16 4.1.2 Normalized SGD (NSGD) 17 4.1.3 Area under NSGD curve 17 4.2 Datasets 19 4.2.1 KITTI Dataset 19 4.2.2 TUM SLAM Dataset 20 4.2.3 Community Photo Collection (CPC) Dataset 21 4.2.4 Matching in The Dark Dataset 21 4.3 Feature Methods 23 4.3.1 Handcrafted Methods 23 4.3.1.1 SIFT 23 4.3.1.2 SURF 25 4.3.2 Deep-Learning Methods 27 4.3.2.1 SuperPoint 27 4.3.2.2 D2-Net 30 4.3.2.3 R2D2 32 4.3.2.4 ASLFeat 35 Chapter 5 Experimental Results 37 5.1 Experimental Results in KITTI dataset 38 5.2 Experimental Results in TUM dataset 39 5.3 Experimental Results in CPC dataset 41 5.4 Experimental Results in MID Dataset 43 5.4.1 Experimental Results in Indoor sequence 45 5.4.1.1 Different Shutter Speed, Fixed ISO 45 5.4.1.2 Different ISO, Fixed Shutter Speed 46 5.4.2 Experimental Results in Outdoor sequence 47 5.4.2.1 Different Shutter Speed, Fixed ISO 47 5.4.2.2 Different ISO, Fixed Shutter Speed 47 Chapter 6 Discussions 49 Chapter 7 Conclusions 53 References 54 | |
dc.language.iso | en | |
dc.title | 人工特徵與深度學習特徵在基礎矩陣估算上之效能表現 | zh_TW |
dc.title | Performance of Fundamental Matrix Estimation Using Handcrafted and Deep Learning Features | en |
dc.type | Thesis | |
dc.date.schoolyear | 110-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 石勝文(Sheng-Wen Shih),陳冠文(Kuan-Wen Chen) | |
dc.subject.keyword | 雙視圖跟蹤,特徵提取,基於深度學習的特徵,人工特徵,基本矩陣,低曝光,歸一化對稱幾何距離誤差曲線, | zh_TW |
dc.subject.keyword | two view tracking,feature extraction,deep learning feature,handcrafted feature,fundamental matrix,low exposure,normalized symmetric geometry distance error curve | en |
dc.relation.page | 57 | |
dc.identifier.doi | 10.6342/NTU202202903 | |
dc.rights.note | Authorized for release (worldwide open access) | |
dc.date.accepted | 2022-09-22 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
dc.date.embargo-lift | 2022-09-23 | - |
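The abstract above scores an estimated fundamental matrix with a symmetric geometric distance (SGD) over point correspondences before normalizing it into the NSGD error curve (Section 4.1 of the table of contents). As a rough illustration only, here is a minimal NumPy sketch of the symmetric epipolar distance commonly used for this purpose; the fundamental matrix and correspondences are synthetic toy values, and the thesis's exact NSGD normalization is not reproduced.

```python
import numpy as np

def symmetric_geometric_distance(F, x1, x2):
    """Symmetric epipolar distance of correspondences x1 <-> x2
    (N x 2 pixel coordinates) under a fundamental matrix F.
    A sketch of an SGD-style residual, not the thesis's exact metric."""
    n = x1.shape[0]
    h1 = np.hstack([x1, np.ones((n, 1))])  # homogeneous coordinates
    h2 = np.hstack([x2, np.ones((n, 1))])
    l2 = h1 @ F.T          # epipolar lines in image 2: l2 = F x1
    l1 = h2 @ F            # epipolar lines in image 1: l1 = F^T x2
    num = np.sum(h2 * l2, axis=1) ** 2     # (x2^T F x1)^2
    return num * (1.0 / (l2[:, 0]**2 + l2[:, 1]**2)
                  + 1.0 / (l1[:, 0]**2 + l1[:, 1]**2))

# Toy check: for a pure x-translation, F has the skew-symmetric form
# below, and matched points sharing the same y lie on their epipolar
# lines, so the distance is zero.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
x1 = np.array([[10., 20.], [30., 40.]])
x2 = x1 + np.array([5., 0.])   # shift along x only
print(symmetric_geometric_distance(F, x1, x2))  # ~[0, 0]
```

A worse estimate of F (or noisier matches) yields larger distances, which is what the per-threshold inlier counts behind an NSGD-style curve aggregate.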
Appears in Collections: | Department of Computer Science and Information Engineering
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-2908202200415300.pdf | 14.36 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.