NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science (電機資訊學院) › Department of Computer Science and Information Engineering (資訊工程學系)
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/39150
Full metadata record
DC Field | Value | Language
dc.contributor.advisor: 傅楸善 (Chiou-Shann Fuh)
dc.contributor.author: Tzu-Fan Hsu [en]
dc.contributor.author: 徐子凡 [zh_TW]
dc.date.accessioned: 2021-06-13T17:04:50Z
dc.date.available: 2016-08-10
dc.date.copyright: 2011-08-10
dc.date.issued: 2011
dc.date.submitted: 2011-07-14
dc.identifier.citation:
[1] L. J. Angot, W. J. Huang, and K. C. Liu, “A 2D to 3D Video and Image Conversion Technique Based on a Bilateral Filter,” Proceedings of SPIE-IS&T Electronic Imaging, Vol. 7526, 75260D, San Jose, pp. 1-5, 2010.
[2] S. Battiato, S. Curti, M. L. Cascia, E. Scordato, and M. Tortora, “Depth Map Generation by Image Classification,” Proceedings of SPIE IS&T/SPIE Annual Symposium on Electronic Imaging, San Jose, pp. 95-104, 2004.
[3] V. Cantoni, L. Lombardi, M. Porta, and N. Sicard, “Vanishing Point Detection: Representation Analysis and New Approaches,” Proceedings of International Conference on Image Analysis and Processing, Palermo, Italy, pp. 1-5, 2001.
[4] C. C. Cheng, C. T. Li, and L. G. Chen, “A 2D-to-3D Conversion System Using Edge Information,” Proceedings of IEEE International Conference on Consumer Electronics, Las Vegas, pp. 377-378, 2010.
[5] D. Comaniciu and P. Meer, “Mean Shift: A Robust Approach toward Feature Space Analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 5, pp. 603-619, 2002.
[6] Sandy Knoll Software, “3D Maker,” http://www.tabberer.com/sandyknoll/more/3dmaker/3dmaker.html, 2011.
[7] P. Harman, J. Flack, S. Fox, and M. Dowley, “Rapid 2D to 3D Conversion,” Proceedings of SPIE, Vol. 4660, pp. 78-86, 2002.
[8] Y. J. Jung, A. Baik, J. Kim, and D. Park, “A Novel 2D-to-3D Conversion Technique Based on Relative Height Depth Cue,” Proceedings of SPIE-IS&T Electronic Imaging, Vol. 7237, 72371U, 2009.
[9] C. Kim, “Segmenting a Low-Depth-of-Field Image Using Morphological Filters and Region Merging,” IEEE Transactions on Image Processing, Vol. 14, No. 10, pp. 1503-1511, 2005.
[10] W. N. Lai and W. C. Chen, “Introduction to 2D to 3D Technology,” Images and Recognition, Vol. 16, No. 2, pp. 61-75, 2010.
[11] P. Li and R. K. Gunnewiek, “On Creating Depth Maps from Monoscopic Video Using Structure from Motion,” Proceedings of IEEE Workshop on Content Generation and Coding for 3D-Television, Eindhoven, The Netherlands, pp. 508-515, 2006.
[12] G. Popa, “Travel Guides: The Great Pyramid of Giza,” http://www.metrolic.com/travel-guides-the-great-pyramid-of-giza-147358, 2011.
[13] P. Scott, “Parallel Railway Lines,” http://web.me.com/paulscott.info/maths-gallery/2/22.railway-lines.html, 2011.
[14] Wikipedia, “Depth of Field,” http://en.wikipedia.org/wiki/Depth_of_field, 2011.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/39150
dc.description.abstract: In recent years, 3D effects have received growing attention, especially since the release of the movie Avatar in 2009. Beyond film, 3D is gradually becoming mainstream for home televisions, computer monitors, and photographs. 3D technologies divide into glasses-based and glasses-free (autostereoscopic) approaches, with different needs depending on the viewing situation: glasses-free 3D suits home television, while glasses-based 3D suits theaters and amusement parks. 3D images can be generated from a single image, a pair of images, or video; each input type has its own methods, suitable settings, and difficulties, and the corresponding 3D image is produced accordingly.
In this thesis we design an algorithm applicable to most scenes. We detect the scene to produce an initial depth map best suited to the image, refine it further into a final depth map, and finally generate the corresponding 3D stereo image from that depth information. [zh_TW]
dc.description.abstract: 3D effects have attracted increasing attention in recent years, especially since the movie Avatar was released in 2009. Beyond cinema, televisions, monitors, and digital photos are also expected to adopt 3D. 3D technology can be divided into glasses-based and glasses-free approaches, chosen according to the user's setting: glasses-free 3D suits television at home, while glasses-based 3D suits theaters and amusement parks. 3D images can be generated from a single image, a pair of images, or video, and the methods for each type of input are quite different.
In this thesis, we design an algorithm for most scenarios. First, we create an initial depth map. Next, we refine it into the final depth map. Finally, we create the corresponding 3D anaglyph image according to the depth information. [en]
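The abstract's final step, rendering an anaglyph from an estimated depth map, can be illustrated with a minimal sketch of depth-image-based rendering. This is not the thesis's actual algorithm; the function name, the linear depth-to-disparity mapping, and the `max_shift` parameter are all illustrative assumptions. Each pixel is shifted horizontally by a disparity proportional to its depth, and the red channel of the left view is combined with the green/blue channels of the right view:

```python
import numpy as np

def depth_to_anaglyph(image, depth, max_shift=8):
    """Build a red-cyan anaglyph from one RGB image plus its depth map.

    Illustrative sketch only: disparity is taken as linear in depth,
    with nearer pixels (larger depth values here) shifted further.
    """
    h, w, _ = image.shape
    disparity = (depth / depth.max() * max_shift).astype(int)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            # Forward-warp the pixel into the left and right views.
            left[y, min(w - 1, x + d)] = image[y, x]
            right[y, max(0, x - d)] = image[y, x]
    anaglyph = np.empty_like(image)
    anaglyph[..., 0] = left[..., 0]     # red channel from the left view
    anaglyph[..., 1:] = right[..., 1:]  # green/blue from the right view
    return anaglyph

# Tiny synthetic example: a horizontal gradient with a two-plane depth map.
img = np.tile(np.linspace(0, 255, 64, dtype=np.uint8)[None, :, None], (64, 1, 3))
dep = np.zeros((64, 64))
dep[:, 32:] = 1.0  # right half of the scene is "near"
out = depth_to_anaglyph(img, dep, max_shift=4)
print(out.shape)  # (64, 64, 3)
```

A real pipeline would also fill the holes that forward warping leaves at depth discontinuities, e.g. by inpainting or background extrapolation.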
dc.description.provenance: Made available in DSpace on 2021-06-13T17:04:50Z (GMT). No. of bitstreams: 1. ntu-100-R98922132-1.pdf: 7381138 bytes, checksum: d1a81482eb0479f73182ee49ca577784 (MD5). Previous issue date: 2011 [en]
dc.description.tableofcontents:
Acknowledgements i
摘要 (Chinese Abstract) ii
Abstract iii
Table of Contents iv
List of Figures iv
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Introduction to 3D Content 1
1.3 Introduction to 3D Technology 2
Chapter 2 Related Work 9
Chapter 3 Proposed Method 22
Chapter 4 Experiments and Results 34
Chapter 5 Conclusion and Future Work 52
5.1 Conclusion 52
5.2 Future Work 52
References 53
dc.language.iso: en
dc.subject: 單一影像 [zh_TW]
dc.subject: 深度圖 [zh_TW]
dc.subject: single image [en]
dc.subject: depth map estimation [en]
dc.title: 2D單一影像之深度圖生成 [zh_TW]
dc.title: 2D Still Image Depth Map Estimation [en]
dc.type: Thesis
dc.date.schoolyear: 99-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 邱立誠, 蔡宛整
dc.subject.keyword: 深度圖, 單一影像 [zh_TW]
dc.subject.keyword: depth map estimation, single image [en]
dc.relation.page: 54
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2011-07-14
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File: ntu-100-1.pdf (restricted; not authorized for public access)
Size: 7.21 MB | Format: Adobe PDF


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.

Contact:
No. 1, Sec. 4, Roosevelt Rd., Da'an Dist., Taipei 10617, Taiwan (R.O.C.)
Tel: (02) 33662353
Email: ntuetds@ntu.edu.tw
© NTU Library All Rights Reserved