NTU Theses and Dissertations Repository › 電機資訊學院 (College of Electrical Engineering and Computer Science) › 資訊工程學系 (Department of Computer Science and Information Engineering)
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/30595
Full metadata record
dc.contributor.advisor: 洪一平 (Yi-Ping Hung)
dc.contributor.author: Cheng-Yu Pei [en]
dc.contributor.author: 裴振宇 [zh_TW]
dc.date.accessioned: 2021-06-13T02:09:44Z
dc.date.available: 2012-07-03
dc.date.copyright: 2007-07-03
dc.date.issued: 2007
dc.date.submitted: 2007-06-26
dc.identifier.citation: [1] http://www.jurassicpark.com/
[2] http://www.shinobithemovie.com/
[3] L. Rosenblum and M. Macedonia, “Tangible Augmented Interfaces for Structural Molecular Biology,” IEEE Computer Graphics and Applications, vol. 25, no. 2, pp. 13-17, 2005.
[4] H. Tamura, H. Yamamoto, and A. Katayama, “Mixed Reality: Future Dreams Seen at the Border between Real and Virtual Worlds,” IEEE Computer Graphics and Applications, vol. 21, no. 6, pp. 64-70, 2001.
[5] A.D. Cheok, K.H. Goh, W. Liu, F. Farzbiz, S.W. Fong, S.Z. Teo, Y. Li, and X. Yang, “Human Pacman: A Mobile Wide-Area Entertainment System Based on Physical, Social, and Ubiquitous Computing,” Personal and Ubiquitous Computing, vol. 8, no. 2, pp. 71-81, 2004.
[6] http://www.jp.playstation.com/scej/title/eoj/
[7] T.H.D. Nguyen, T.C.T. Qui, K. Xu, A.D. Cheok, S.L. Teo, Z.Y. Zhou, A. Mallawaarachchi, S.P. Lee, W. Liu, H.S. Teo, L.N. Thang, Y. Li, and H. Kato, “Real-Time 3D Human Capture System for Mixed-Reality Art and Entertainment,” IEEE Trans. Visualization and Computer Graphics, vol. 11, no. 6, pp. 706-721, 2005.
[8] C.-R. Huang, C.-S. Chen, and P.-C. Chung, “Tangible Photorealistic Virtual Museum,” IEEE Computer Graphics and Applications, vol. 25, no. 1, pp.15-17, 2005.
[9] M.C. Juan, M Alcaniz, C. Monserrat, C. Botella, R.M. Banos, and B. Guerrero, “Using Augmented Reality to Treat Phobias,” IEEE Computer Graphics and Applications, vol. 25, no. 6, pp. 31-37, 2005.
[10] D. Balazs, and E. Attila, “Volumetric Medical Intervention Aiding Augmented Reality Device,” Information and Communication Technologies, vol. 1, pp. 1091-1096, 2006.
[11] Y.-P. Hung, C.-S. Chen, Y.-P. Tsai, and S.-W. Lin, “Augmenting Panoramas with Object Movies by Generating Novel Views with Disparity-Based View Morphing,” J. Visualization and Computer Animation, vol. 13, no. 4, pp. 237-247, 2002.
[12] W. Hoff and T. Vincent, “Analysis of head pose accuracy in augmented reality,” IEEE Trans. Visualization and Computer Graphics, vol. 6, no. 4, pp. 319-334, 2000.
[13] S. You, U. Neumann, R. Azuma, “Orientation tracking for outdoor augmented reality registration,” IEEE Computer Graphics and Applications, vol. 19, no. 6, pp. 36-42, 1999.
[14] Z. Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, 2000.
[15] H. Kato and M. Billinghurst, “Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System,” Proc. IEEE and ACM International Workshop on Augmented Reality, pp. 85-94, 1999.
[16] M. Billinghurst, A. Cheok, S. Prince, and H. Kato, “Real world teleconferencing,” IEEE Computer Graphics and Applications, vol. 22, no. 6, pp. 11-13, 2002.
[17] J. Fruend, M. Grafe, C. Matysczok, and A. Vienenkoetter, “AR-based training and support of assembly workers in automobile industry,” Proc. The First IEEE International Augmented Reality Toolkit Workshop, 2002.
[18] J. Gausenmeier, C. Matysczok, and R. Radkowski, “AR-based Modular Construction System for Automobile Advance Development,” Proc. IEEE International Augmented Reality Toolkit Workshop, pp.72-73, 2003.
[19] J.M.S Dias, N. Barata, P. Santos, A. Correia, P. Nande, and R. Bastos, “In your hand computing: tangible interfaces for mixed reality,” Proc. IEEE International Augmented Reality Toolkit Workshop, pp.29-31, 2003.
[20] H. Kato, K. Tachibana, M. Tanabe, T. Nakajima, and Y. Fukuda, “MagicCup: a tangible interface for virtual objects manipulation in table-top augmented reality,” Proc. IEEE International Augmented Reality Toolkit Workshop, pp. 75-76, 2003.
[21] A.J. Davison, “Real-Time Simultaneous Localisation and Mapping with a Single Camera,” Proc. IEEE International Conference on Computer Vision, vol. 2 pp. 1403-1410, 2003.
[22] A.J. Davison, I.D. Reid, N.D. Molton, and O. Stasse, “MonoSLAM: Real-Time Single Camera SLAM,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052-1067, 2007.
[23] M.L. Yuan, S.K. Ong, and A.Y.C. Nee, “Registration Using Natural Features for Augmented Reality Systems,” IEEE Trans. Visualization and Computer Graphics, vol. 12, no. 4, pp. 569-580, 2006.
[24] D.G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[25] I. Gordon and D.G. Lowe, “Scene Modelling, Recognition and Tracking with Invariant Image Features,” IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 110-119, 2004.
[26] C.-R. Huang, C.-S. Chen and P.-C. Chung, “Contrast Context Histogram – A Discriminating Local Descriptor for Image Matching,” International Conference on Pattern Recognition, vol. 4, pp. 53–56, 2006.
[27] Z. Zhang, R. Deriche, O. Faugeras, and Q. Luong, “A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry,” Artificial Intelligence, vol. 78, pp. 87–119, 1995.
[28] C. Harris and M. Stephens, “A Combined Corner and Edge Detector,” Proc. The Fourth Alvey Vision Conference, pp. 147–151, 1988.
[29] C. Schmid and R. Mohr, “Local Grayvalue Invariants for Image Retrieval,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 5, pp. 530–534, 1997.
[30] K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615–1630, 2005.
[31] Y. Ke and R. Sukthankar, “PCA-SIFT: A More Distinctive Representation for Local Image Descriptors,” Proc. Computer Vision and Pattern Recognition, vol. 2, pp. 506–513, 2004.
[32] D. Comaniciu, V. Ramesh, and P. Meer, “Real-time Tracking of Non-rigid Objects Using Mean Shift,” Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 142–151, 2000.
[33] S. Belongie, J. Malik, and J. Puzicha, “Shape matching and object recognition using shape contexts,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 509–522, 2002
[34] D.F. DeMenthon and L.S. Davis, “Model-Based Object Pose in 25 Lines of Code,” International Journal of Computer Vision, vol. 15, pp. 123-141, 1995.
[35] M.A. Fischler and R.C. Bolles, “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography,” Comm. ACM, vol. 24, pp. 381-395, 1981.
[36] R.M. Haralick et al., “Analysis and Solutions of the Three Point Perspective Pose Estimation Problem,” Proc. IEEE International Conference on Computer Vision and Pattern Recognition, pp. 592-598, 1991.
[37] R. Horaud, B. Conio, and O. Leboulleux, “An Analytic Solution for the Perspective 4-Point Problem,” Computer Vision, Graphics, and Image Understanding, no. 1, pp. 33-44, 1989.
[38] D.G. Lowe, “Robust Model-Based Motion Tracking through the Integration of Search and Estimation,” International Journal of Computer Vision, vol. 8, no. 2, pp. 113-122, 1992.
[39] J.S.C. Yuan, “A General Photogrammetric Method for Determining Object Position and Orientation,” IEEE Trans. Robotics and Automation, vol. 5, pp. 129-142, 1989.
[40] C.-S. Chen, and W.-Y. Chang, “On Pose Recovery for Generalized Visual Sensors,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 7, 2004.
[41] R. Gupta and R. Hartley, “Linear Pushbroom Cameras,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 9, pp. 963-975, Sept. 1997.
[42] F. Huang, S.K. Wei, and R. Klette, “Geometrical Fundamentals of Polycentric Panoramas,” Proc. IEEE International Conference on Computer Vision, vol. 1, pp 560-565, July 2001.
[43] K. Mikolajczyk and C. Schmid, “Indexing based on scale invariant interest points,” Proc. International Conference on Computer Vision, pp. 525–531, 2001
[44] B.K.P. Horn, “Closed-Form Solution of Absolute Orientation Using Unit Quaternions,” J. Optical Soc. Am., A, vol. 4, pp. 629-642, 1987.
[45] K.S. Arun, T.S. Huang, and S.D. Blostein, “Least-Squares Fitting of Two 3-D Point Sets,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 9, no. 5, pp. 698-700, 1987.
[46] B.K.P. Horn, H.M. Hilden, and S. Negahdaripour, “Closed-form Solution of Absolute Orientation Using Orthonormal Matrices,” J. Optical Soc. Am., A, vol. 5, no. 7, pp. 1127-1135, 1988
[47] W.M. Walker, L. Shao, and R.A. Volz, “Estimating 3-D Location Parameters Using Dual Number Quaternions,” CVGIP: Image Understanding, vol. 54, no. 4, pp. 358-367, 1991.
[48] A. Lorusso, D.W. Eggert, and R.B. Fisher, “A Comparison of Four Algorithms for Estimating 3-D Rigid Transformations,” Proc. Sixth British Machine Vision Conf., pp. 237-246, 1995.
[49] R.M. Haralick and L.G. Shapiro, Computer and Robot Vision, Volume II, Addison-Wesley, 1993, chapter 17, pp. 227-229.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/30595
dc.description.abstract: In this thesis, we propose a 3D background feature model (3DBFM) that records the 3D positions and appearances of salient feature points in a scene. By establishing correspondences between the feature points detected in each image and the 3D background model, we use an ICP-based camera pose estimation algorithm to compute the corresponding extrinsic camera parameters, and we apply these parameters in augmented reality so that virtual objects are rendered correctly in the image. In addition, as the video is captured, the 3D background model continuously updates its stored data, including the 3D positions and appearances of the feature points. Image features that have no correspondence in the 3D background model are observed over a period of time; once their 3D positions can be computed, they are added to the model, so the model's coverage gradually expands. An advantage of our method is that it keeps running, undisturbed, under sudden illumination changes and partial occlusion. Even when all feature points in the environment are occluded, the system returns to its normal state as soon as any feature point recorded in the 3DBFM is observed again. These properties make our method well suited to augmented reality systems. [zh_TW]
dc.description.abstract: In this thesis, we present a descriptor-based approach to augmented reality that uses a 3D background feature model (3DBFM). The 3DBFM stores the 3D positions of scene points together with distributions of their image appearances. To describe image appearances, we use a recent descriptor, the contrast context histogram (CCH), which has been shown to achieve high matching accuracy at low computational cost. By matching image features against the features in the 3DBFM, we obtain 3D-2D correspondences, from which an iterative closest point (ICP) based algorithm estimates the camera pose. Given the camera pose, new scene points that are not yet in the 3DBFM can be learned. Experiments show that our approach matches features under significant changes in illumination and scale. Even when long-term occlusion occurs, the system resumes working as soon as features are matched again, without any additional penalty. [en]
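The core step of the ICP-style pose estimation the abstract describes is a least-squares rigid alignment between matched point sets (cf. Arun et al. [45] in the citation list). As a minimal illustrative sketch only — not the thesis's actual implementation — the following plain-Python function solves the 2-D analogue of that alignment step in closed form; the name `fit_rigid_2d` and the example point lists are hypothetical.

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rigid fit (rotation + translation) mapping the 2-D
    point set `src` onto `dst`; 2-D analogue of the 3-D SVD method of
    Arun et al. Assumes src[i] corresponds to dst[i]."""
    n = len(src)
    # Centroids of both point sets.
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    # Accumulate cross-covariance terms of the centered sets.
    sxx = sxy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s
        xd -= cx_d; yd -= cy_d
        sxx += xs * xd + ys * yd   # cosine accumulator
        sxy += xs * yd - ys * xd   # sine accumulator
    theta = math.atan2(sxy, sxx)   # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)
```

A full ICP loop would alternate this closed-form fit with nearest-neighbour re-matching of the correspondences until the alignment converges.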
dc.description.provenance: Made available in DSpace on 2021-06-13T02:09:44Z (GMT). No. of bitstreams: 1
ntu-96-R94922106-1.pdf: 3377891 bytes, checksum: 9e37c743baf7c7373062b6b81aaa1309 (MD5)
Previous issue date: 2007 [en]
dc.description.tableofcontents:
Thesis Committee Certification i
Acknowledgements ii
Chinese Abstract iii
English Abstract iv
Chapter 1 Introduction 1
Chapter 2 Background Knowledge 6
2.1 Camera Model 6
2.2 Invariant Local Features 9
2.3 Perspective N Point Problem 12
Chapter 3 Our Approach 14
3.1 3D Background Feature Model 14
3.2 Feature Extraction and Matching in 3DBFM 15
3.3 Camera Pose Estimation 17
3.4 Outlier Removal 21
3.5 Expanding 3DBFM 22
3.6 Updating 3DBFM 24
3.7 System Overview 25
Chapter 4 Experiments 27
Chapter 5 Conclusions 40
References 41
dc.language.iso: en
dc.subject: invariant local feature [en]
dc.subject: augmented reality [en]
dc.subject: camera pose estimation [en]
dc.title: 以三維背景特徵模型為基礎之增添式實境 [zh_TW]
dc.title: Three Dimensional Background Feature Model for Augmented Reality [en]
dc.type: Thesis
dc.date.schoolyear: 95-2
dc.description.degree: 碩士 (Master)
dc.contributor.coadvisor: 陳祝嵩 (Chu-Song Chen)
dc.contributor.oralexamcommittee: 楊佳玲 (Chia-Lin Yang), 張鈞法 (Chun-Fa Chang)
dc.subject.keyword: 增添式實境, 相機位置估測, 不變性區域特徵點 [zh_TW]
dc.subject.keyword: augmented reality, camera pose estimation, invariant local feature [en]
dc.relation.page: 46
dc.rights.note: Paid licensing (有償授權)
dc.date.accepted: 2007-06-27
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in This Item:
File: ntu-96-1.pdf (restricted; not publicly accessible), 3.3 MB, Adobe PDF


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
