Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/63711

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 李明穗(Ming-Sui Lee) | |
| dc.contributor.author | Cheng-En Wu | en |
| dc.contributor.author | 吳承恩 | zh_TW |
| dc.date.accessioned | 2021-06-16T17:17:00Z | - |
| dc.date.available | 2012-08-20 | |
| dc.date.copyright | 2012-08-20 | |
| dc.date.issued | 2012 | |
| dc.date.submitted | 2012-08-18 | |
| dc.identifier.citation | [1] W. J. Tam, C. Vazquez, and F. Speranza, “Three-dimensional TV: A Novel Method for Generating Surrogate Depth Maps Using Colour Information,” in SPIE, 2009.
[2] V. P. Namboodiri and S. Chaudhuri, “Recovery of Relative Depth from a Single Observation Using an Uncalibrated (Real-Aperture) Camera,” in CVPR, 2008.
[3] D. Kim, D. Min, and K. Sohn, “A Stereoscopic Video Generation Method Using Stereoscopic Display Characterization and Motion Analysis,” IEEE Transactions on Broadcasting, vol. 54, no. 2, pp. 188-197, 2008.
[4] C. C. Cheng, C. T. Li, and L. G. Chen, “Video 2D-to-3D Conversion Based on Hybrid Depth Cueing,” Journal of the Society for Information Display, vol. 18, no. 9, pp. 704-716, 2010.
[5] V. Nedovic, A. W. M. Smeulders, A. Redert, and J. M. Geusebroek, “Stages as Models of Scene Geometry,” PAMI, 2010.
[6] A. Saxena, S. H. Chung, and A. Y. Ng, “Learning Depth from Single Monocular Image,” in NIPS, 2005.
[7] D. Hoiem, A. A. Efros, and M. Hebert, “Automatic Photo Pop-up,” in SIGGRAPH, 2005.
[8] D. Hoiem, A. A. Efros, and M. Hebert, “Geometric Context from a Single Image,” in ICCV, 2005.
[9] M. Guttmann, L. Wolf, and D. Cohen-Or, “Semi-automatic Stereo Extraction from Video Footage,” in ICCV, 2009.
[10] X. Yan, Y. Yang, G. Er, and Q. Dai, “Depth Map Generation for 2D-to-3D Conversion by Limited User Inputs and Depth Propagation,” in 3DTV, 2011.
[11] R. Phan, R. Rzeszutek, and D. Androutsos, “Semi-automatic 2D to 3D Image Conversion Using a Hybrid Random Walks and Graph Cuts Based Approach,” in ICASSP, 2011.
[12] V. Cantoni, L. Lombardi, M. Porta, and N. Sicard, “Vanishing Point Detection: Representation Analysis and New Approaches,” Universita di Pavia, IEEE, 2001.
[13] R. G. V. Gioi, J. Jakubowicz, and J.-M. Morel, “LSD: A Line Segment Detector,” Image Processing On Line, 2012.
[14] M. Dimiccoli and P. Salembier, “Exploiting T-junctions for Depth Segregation in Single Images,” in ICASSP, 2009.
[15] Y. M. Tsai, Y. L. Chang, and L. G. Chen, “Block-based Vanishing Line and Vanishing Point Detection for 3D Scene Reconstruction,” in ISPACS, 2006.
[16] D. Burazerovic, P. Vandewalle, and R.-P. Berretty, “Automatic Depth Profiling of 2D Cinema- and Photographic Images,” in ICIP, 2009.
[17] P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient Graph-Based Image Segmentation,” IJCV, 2004.
[18] P. Sundberg, T. Brox, M. Maire, P. Arbelaez, and J. Malik, “Occlusion Boundary Detection and Figure/Ground Assignment from Optical Flow,” in CVPR, 2011. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/63711 | - |
| dc.description.abstract | 3-D display technology has drawn much attention from both the image processing research community and industry in recent years. Producing a 3-D visual effect usually relies on a depth map to reconstruct the stereoscopic effect, and the depth map generation approach commonly used in today's movie industry is manual drawing, which is very costly and time-consuming. To reduce the cost of producing depth maps, many studies on automatic and semi-automatic depth map generation have become a research focus in recent years.
This thesis proposes a three-phase system: the input is first analyzed to obtain scene information, and then absolute depth estimation and relative depth estimation are used to produce a complete depth map. The method can be applied to a single image or to video; for video input, a temporal coherence constraint is additionally applied to obtain more consistent results. The experimental results show that the method can detect depth effectively; moreover, as the related algorithms advance, the method can move closer to being fully automatic, and it also offers a new direction for the depth estimation problem. | zh_TW |
| dc.description.abstract | 3-D display technology has been a popular topic in recent years. In the process of generating a 3-D visual perspective, the depth map plays an important role in reconstructing the stereoscopic effect. So far, manually drawing the depth map has been the mainstream practice in the movie industry; however, it costs a great deal of money and time. Therefore, many automatic and semi-automatic depth estimation methods have been published in recent years.
In this thesis, a three-phase, semi-automatic system is proposed. First, the input image or frames are analyzed to extract scene information. Then absolute depth estimation and relative depth estimation are employed to generate the depth map. The proposed system is applicable to both single images and image sequences. For sequence inputs, temporal coherence is enforced so that the depth maps of consecutive frames remain smooth and continuous. The experimental results show that the method can estimate depth successfully. The effectiveness of the proposed system also points toward fully automatic depth estimation: as segmentation algorithms improve, depth maps could be generated automatically in the future. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-16T17:17:00Z (GMT). No. of bitstreams: 1 ntu-101-R99944029-1.pdf: 6438546 bytes, checksum: b27540fb86d855dc2894beaa822262b6 (MD5) Previous issue date: 2012 | en |
| dc.description.tableofcontents | Oral Examination Committee Certification ..... #
Acknowledgements ..... i
Chinese Abstract ..... ii
ABSTRACT ..... iii
CONTENTS ..... iv
LIST OF FIGURES ..... vi
LIST OF TABLES ..... ix
Chapter 1 Introduction ..... 1
1.1 Introduction of 3-D Display ..... 1
1.2 Introduction of Depth Estimation ..... 2
1.3 Thesis Organization ..... 4
Chapter 2 Related Work ..... 5
2.1 Depth Estimation Method Overview ..... 5
2.2 Depth Estimation from Cues ..... 5
2.3 Learning Based Depth Estimation ..... 7
2.4 Semi-automatic Depth Estimation ..... 9
Chapter 3 Depth Estimation from Multiple Cues ..... 12
3.1 System Overview ..... 12
3.2 Scene Analysis ..... 13
3.3 Absolute Depth Estimation ..... 15
3.3.1 Vanishing Point Detection ..... 15
3.4 Relative Depth Estimation ..... 19
3.4.1 T-Junction Detection ..... 19
3.4.2 Segment Ordering and Ambiguities Removal ..... 21
3.5 Depth Map Generation ..... 23
3.5.1 Absolute Depth Cues ..... 23
3.5.2 Relative Depth Cues ..... 24
3.5.3 Post-processing ..... 27
3.6 Temporal Coherence ..... 27
3.6.1 Constraint of Vanishing Region ..... 28
3.6.2 Consistency of Properties ..... 29
3.6.3 Constraint of Depth Change ..... 31
Chapter 4 Experimental Results ..... 33
4.1 Results with Manual Segmentation ..... 33
4.1.1 Results of Image ..... 33
4.1.2 Results of Sequence ..... 37
4.2 Results with Automatic Segmentation Algorithm ..... 41
4.3 Comparison ..... 44
4.4 More Results ..... 46
Chapter 5 Conclusion and Future Work ..... 49
5.1 Conclusion ..... 49
5.2 Discussion and Future Work ..... 49
REFERENCE ..... 51 | |
| dc.language.iso | en | |
| dc.subject | 深度線索 | zh_TW |
| dc.subject | 深度估測 | zh_TW |
| dc.subject | T字交界 | zh_TW |
| dc.subject | Depth estimation | en |
| dc.subject | Depth cues | en |
| dc.subject | T-junction | en |
| dc.title | 基於多種單眼線索之深度估測 | zh_TW |
| dc.title | Depth Estimation from Multiple Monocular Cues | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 100-2 | |
| dc.description.degree | Master's | |
| dc.contributor.oralexamcommittee | 廖偉凱(Wei-Kai Liao),莊永裕(Yung-Yu Chuang) | |
| dc.subject.keyword | 深度估測,深度線索,T字交界, | zh_TW |
| dc.subject.keyword | Depth estimation,Depth cues,T-junction, | en |
| dc.relation.page | 52 | |
| dc.rights.note | Paid authorization | |
| dc.date.accepted | 2012-08-18 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
| Appears in Collections: | 資訊網路與多媒體研究所 | |
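
The abstracts above describe a three-phase pipeline: scene analysis, absolute depth estimation from a vanishing point, relative depth estimation from an ordering of segments (e.g. inferred from T-junctions), and fusion of the two cues into one depth map. The following is a minimal Python sketch of that general idea only; the function names, the distance-based absolute-depth heuristic, and the fusion weight are illustrative assumptions and are not taken from the thesis.

```python
# Minimal sketch of a three-phase depth-estimation pipeline of the kind the
# abstract describes. All names and heuristics here are assumptions for
# illustration, not the thesis's actual implementation.
import numpy as np

def analyze_scene(image):
    """Phase 1 (sketch): derive coarse scene information; here only the frame size."""
    h, w = image.shape[:2]
    return {"height": h, "width": w}

def absolute_depth(scene, vanishing_point):
    """Phase 2a (sketch): assume depth increases toward the vanishing point,
    a common absolute-depth heuristic. The vanishing point is given here,
    whereas the thesis detects it from the image."""
    h, w = scene["height"], scene["width"]
    ys, xs = np.mgrid[0:h, 0:w]
    vy, vx = vanishing_point
    dist = np.hypot(ys - vy, xs - vx)
    return 1.0 - dist / dist.max()      # 1.0 = farthest, at the vanishing point

def relative_depth(scene, segments, front_to_back):
    """Phase 2b (sketch): map a front-to-back ordering of segment labels
    (e.g. one inferred from T-junctions) onto per-pixel values;
    larger = farther, matching the absolute cue."""
    depth = np.zeros((scene["height"], scene["width"]))
    denom = max(len(front_to_back) - 1, 1)
    for rank, label in enumerate(front_to_back):
        depth[segments == label] = rank / denom
    return depth

def fuse(d_abs, d_rel, alpha=0.5):
    """Phase 3 (sketch): blend the two cues; the weight alpha is arbitrary."""
    return alpha * d_abs + (1.0 - alpha) * d_rel

if __name__ == "__main__":
    frame = np.zeros((120, 160, 3), dtype=np.uint8)   # dummy input frame
    segments = np.zeros((120, 160), dtype=int)        # two toy segments:
    segments[60:, :] = 1                              # label 1 = lower half
    scene = analyze_scene(frame)
    d_abs = absolute_depth(scene, vanishing_point=(40, 80))
    d_rel = relative_depth(scene, segments, front_to_back=[1, 0])
    depth_map = fuse(d_abs, d_rel)
    print(depth_map.shape, depth_map.min(), depth_map.max())
```

For video input, the thesis additionally enforces temporal coherence across frames; a per-frame pipeline such as the one sketched above would need that extra smoothing step to avoid flicker between consecutive depth maps.
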
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-101-1.pdf (Restricted Access) | 6.29 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
