Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/23556
Full metadata record
DC Field | Value | Language
---|---|---
dc.contributor.advisor | 張瑞益 | |
dc.contributor.author | Ming-Hung Chen | en |
dc.contributor.author | 陳明宏 | zh_TW |
dc.date.accessioned | 2021-06-08T05:03:54Z | - |
dc.date.copyright | 2011-02-20 | |
dc.date.issued | 2011 | |
dc.date.submitted | 2011-02-15 | |
dc.identifier.citation | [1] 台灣網路資訊中心, http://stat.twnic.net.tw/.
[2] IBM Research - Almaden, http://www.qbic.almaden.ibm.com.
[3] 林大元, "基於使用者關聯性行為探勘之影像內容檢索," 國立成功大學資訊工程學系碩士論文, 2006.
[4] David G. Lowe, "Object Recognition from Local Scale-Invariant Features," Proceedings of the IEEE International Conference on Computer Vision, vol. 2, 1999, pp. 1150-1157.
[5] David G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, no. 2, 2004, pp. 91-110.
[6] C. Harris and M. Stephens, "A Combined Corner and Edge Detector," Fourth Alvey Vision Conference, Manchester, UK, 1988, pp. 147-151.
[7] Yan Ke and Rahul Sukthankar, "PCA-SIFT: A More Distinctive Representation for Local Image Descriptors," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, 2004, pp. 506-513.
[8] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "SURF: Speeded Up Robust Features," Computer Vision and Image Understanding, vol. 110, 2008.
[9] Luo Juan and Oubong Gwun, "A Comparison of SIFT, PCA-SIFT and SURF," International Journal of Image Processing, vol. 3, 2009, pp. 143-152.
[10] J. L. Bentley, "Multidimensional Binary Search Trees Used for Associative Searching," Communications of the ACM, vol. 18, no. 9, 1975, pp. 509-517.
[11] A. Moore, "An Introductory Tutorial on kd-trees," Technical Report No. 209, Computer Laboratory, University of Cambridge, 1991.
[12] 林裕貿, "用影像合成達成相機模組之影像穩定," 國立台灣大學資訊工程研究所碩士論文, 2006.
[13] 吳加山, "使用尺度不變性特徵轉換與立體視覺之即時三維物體識別," 國立台灣科技大學機械工程系碩士論文, 2008.
[14] 黃漢哲, "SIFT演算法應用於航測影像拼接之研究," 國立中山大學海洋環境及工程學系碩士論文, 2009.
[15] 李振偉, "水下無人載具之視覺懸停控制," 國立中山大學機械與機電工程學系碩士論文, 2008.
[16] 郭坤星, "圖像搜尋平台介面與服務設計之研究," 國立中央大學資訊管理研究所碩士論文, 2009.
[17] 施政瑋, "設計資料庫系統與CBIR於圖像搜尋及管理之整合應用," 國立雲林科技大學設計運算研究所碩士論文, 2007.
[18] 張哲維, "基於尺度不變性特徵轉換之掌靜脈辨識系統," 國立台灣科技大學資訊工程學系碩士論文, 2010.
[19] 邱駿展, "三維物件之辨識與姿態估測," 國立台北科技大學自動化科技研究所碩士論文, 2009.
[20] 賴泓瑞, "以模型樣版為基礎之建物三維點雲建模演算法," 國立成功大學測量及空間資訊學系碩士論文, 2009.
[21] 趙煇, "SIFT特徵匹配技術講義," http://contact.ys168.com/.
[22] 田知本、沈允中、徐士璿, "Feature Matching," http://www.csie.ntu.edu.tw/~cyy | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/23556 | - |
dc.description.abstract | 研究主要透過影像處理的方式,回報使用者目前的位置資訊,主要的影像識別技術由SIFT (Scale-invariant feature transform)作為核心,擷取影像的特徵點後再進行特徵匹配。雖然SIFT對於影像的亮度變化有良好的穩定性,但對於亮度反差過大的街景影像還是無法處理得很好,若直接以直條圖等化 (Histogram Equalization)對影像作調整,則影像中對比度較差的區域會產生過多的雜訊,造成比對時的負擔。因此本研究提出了「條件式」的直條圖等化 (Histogram Equalization) 來降低亮度變化對於影像比對的影響,以提升街景比對之成功率。此外,針對原著所提出的特徵點匹配演算法BBF (Best Bin First)也做了相當程度的修改,藉由不同的參數對K-D Tree進行搜尋,可以得到不同程度的特徵點匹配點數與執行時間。
藉由SIFT進行特徵點匹配後,可得到兩影像特徵點匹配數目,在資料庫中得到最多特徵點匹配數者並不一定是最鄰近來源影像的街景,因為有可能來源影像與資料庫中的街景皆不相鄰,但還是可以得到近似於0之特徵點匹配數目,因此在最後街景驗證過程中,將每張街景影像加入了「鄰近景點的資訊」,透過「特徵點匹配數」與「鄰近景點資訊」作為驗證的依據,除了可改善上述的情況發生,提升街景回報的準確率之外,還能將使用者所在位置描述得更明確,如使用者所在位置靠近某景點,或是處於某兩景點之間等等。 利用本研究之方法所實作的系統,在最後分別以不同的觀點及拍攝環境來呈現多個實驗結果。在綜合實驗結果中,將測試資料日、夜間街景1000張與資料庫中的街景影像做比對,在日間的正確率98.2%,夜間的正確率為95.8%。 | zh_TW |
dc.description.abstract | This research reports the user's current position by means of image processing. The core of the image recognition pipeline is SIFT (Scale-invariant feature transform), which extracts feature points from images and then performs feature matching. Although SIFT is robust to illumination changes, it still handles street images with excessive contrast poorly, and applying Histogram Equalization directly would introduce excessive noise in low-contrast regions, burdening the matching stage. This research therefore proposes a conditional Histogram Equalization that reduces the impact of illumination variation on image matching and raises the success rate of street-scene matching. In addition, the original BBF (Best Bin First) feature-matching algorithm is substantially modified: searching the K-D Tree with different parameters yields different trade-offs between the number of matched feature points and the execution time.
After SIFT feature matching, the number of matched feature points between two images is obtained; however, the database image with the most matches is not necessarily the street scene closest to the source image, because the source image may be adjacent to none of the street scenes in the database, in which case every match count is close to zero. The final verification step therefore attaches "nearby-scene information" to every street image and verifies results using both the "number of matched feature points" and the "nearby-scene information". This not only mitigates the situation above and improves the accuracy of the reported street scene, but also describes the user's position more precisely, for instance that the user is close to a specific scene or located between two scenes. The system implemented with this method is evaluated from several viewpoints and under several shooting environments. In the combined experiment, 1000 daytime and night-time test street images were matched against the database, achieving 98.2% accuracy for daytime and 95.8% for night-time. | en |
dc.description.provenance | Made available in DSpace on 2021-06-08T05:03:54Z (GMT). No. of bitstreams: 1 ntu-100-R97525083-1.pdf: 15309487 bytes, checksum: cfddeac0219a9a5f0c2214ea4c84a085 (MD5) Previous issue date: 2011 | en |
dc.description.tableofcontents | 口試委員審定書 i
致謝 ii
中文摘要 iii
ABSTRACT iv
論文目錄 v
圖目錄 vii
表目錄 xi
Chapter 1 緒論 1
1.1 研究動機與目的 1
1.2 相關研究 4
1.3 論文架構 7
Chapter 2 街景影像前處理 9
2.1 解析度調整 10
2.2 增強影像對比度 12
2.2.1 日間對比度調整 12
2.2.2 夜間對比度調整 20
2.2.3 日夜間環境之條件判定 24
Chapter 3 特徵點匹配與除錯 26
3.1 特徵點選取 26
3.1.1 多尺度空間取極值 26
3.1.2 特徵點篩選 28
3.2 建立特徵點比對資訊 32
3.3 特徵點匹配 34
3.3.1 特徵點相似度計算 34
3.3.2 加速特徵點匹配效率 37
3.4 特徵點除錯 47
Chapter 4 街景比對結果驗證 50
4.1 街道影像標籤化 50
4.2 街景比對驗證方法 51
Chapter 5 實驗結果與討論 58
5.1 實驗設備 58
5.2 拍攝規則 59
5.2.1 街景拍攝距離的選定 59
5.2.2 拍攝角度範圍限制 60
5.2.3 拍攝街景的地點限制 63
5.3 實驗結果 65
5.3.1 狹義的街景比對驗證結果 65
5.3.2 廣義的街景比對驗證結果 67
5.3.3 特殊環境之街景比對驗證結果 70
Chapter 6 結論與未來展望 71
6.1 結論 71
6.2 未來展望 72
REFERENCE 73 | |
dc.language.iso | zh-TW | |
dc.title | 街景辨識系統 | zh_TW |
dc.title | Streetscape Recognition System | en |
dc.type | Thesis | |
dc.date.schoolyear | 99-1 | |
dc.description.degree | 碩士 | |
dc.contributor.coadvisor | 丁肇隆 | |
dc.contributor.oralexamcommittee | 黃乾綱,吳文中,王家輝 | |
dc.subject.keyword | 尺度不變特徵轉換,直條圖等化,BBF,K維樹, | zh_TW |
dc.subject.keyword | SIFT,Histogram,BBF,K-D Tree, | en |
dc.relation.page | 74 | |
dc.rights.note | 未授權 | |
dc.date.accepted | 2011-02-15 | |
dc.contributor.author-college | 工學院 | zh_TW |
dc.contributor.author-dept | 工程科學及海洋工程學研究所 | zh_TW |
Appears in Collections: 工程科學及海洋工程學系
Files in This Item:
File | Size | Format
---|---|---
ntu-100-1.pdf (Restricted Access) | 14.95 MB | Adobe PDF
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
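The abstract above describes a "conditional" Histogram Equalization: applying equalization blindly amplifies noise in low-contrast regions, so it is applied only when a condition is met. The thesis's actual day/night condition is not given in this record; the sketch below stands in with a simple global-contrast threshold, and the function names (`equalize_hist`, `conditional_equalize`) and the `low_std` parameter are illustrative, not from the thesis.

```python
def equalize_hist(pixels):
    """Standard histogram equalization for a flat sequence of 8-bit gray values."""
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    # Cumulative distribution of intensities.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    if cdf[-1] == cdf_min:          # constant image: nothing to stretch
        return list(pixels)
    # Map each intensity through the normalized CDF onto [0, 255].
    scale = 255.0 / (cdf[-1] - cdf_min)
    lut = [round((c - cdf_min) * scale) for c in cdf]
    return [lut[v] for v in pixels]

def contrast(pixels):
    """Population standard deviation of the intensities, used as a contrast measure."""
    m = sum(pixels) / len(pixels)
    return (sum((v - m) ** 2 for v in pixels) / len(pixels)) ** 0.5

def conditional_equalize(pixels, low_std=30.0):
    """Equalize only when global contrast falls below a threshold
    (a stand-in condition, not the thesis's actual day/night rule);
    otherwise leave the image untouched to avoid amplifying noise."""
    return equalize_hist(pixels) if contrast(pixels) < low_std else list(pixels)
```

A low-contrast image (intensities clustered in 100-130) gets stretched across the full 0-255 range, while a high-contrast image passes through unchanged.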
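The abstract also mentions searching the K-D Tree with different parameters to trade matched feature points against execution time, in the spirit of BBF (Best Bin First). As a rough illustration only (the thesis's modified algorithm is not reproduced in this record), here is a minimal BBF-style search: branches are explored in order of their lower-bound distance to the query via a priority queue, and the search stops after a budget of node visits. The names `build_kdtree`, `bbf_nearest`, and `max_checks` are this sketch's own.

```python
import heapq

def build_kdtree(points, depth=0):
    """Build a k-d tree from (index, point) pairs, cycling the split axis by depth."""
    if not points:
        return None
    axis = depth % len(points[0][1])
    points = sorted(points, key=lambda p: p[1][axis])
    mid = len(points) // 2
    return {
        "index": points[mid][0],
        "point": points[mid][1],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def bbf_nearest(root, query, max_checks=32):
    """Best-Bin-First search: pop the branch with the smallest lower-bound
    distance first, stop after max_checks node visits.  A larger budget
    gives a more exact match at the cost of a longer runtime."""
    best = (float("inf"), None)      # (squared distance, point index)
    heap = [(0.0, 0, root)]          # (lower bound, tiebreak, node)
    tiebreak, checks = 1, 0
    while heap and checks < max_checks:
        bound, _, node = heapq.heappop(heap)
        # Prune empty branches and branches that cannot beat the current best.
        if node is None or bound >= best[0]:
            continue
        checks += 1
        d = sum((q - p) ** 2 for q, p in zip(query, node["point"]))
        if d < best[0]:
            best = (d, node["index"])
        # The far child's points all lie at least diff away along the split
        # axis, so diff**2 is a valid lower bound on their distances.
        diff = query[node["axis"]] - node["point"][node["axis"]]
        near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
        heapq.heappush(heap, (0.0, tiebreak, near)); tiebreak += 1
        heapq.heappush(heap, (diff * diff, tiebreak, far)); tiebreak += 1
    return best
```

With a generous `max_checks` the search is exact (the lower-bound pruning never discards the true nearest neighbor); with a small budget it degrades gracefully into an approximate search, which is the accuracy/runtime knob the abstract alludes to.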