Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/48668

Full metadata record
DC Field / Value / Language
dc.contributor.advisor: 陳良基 (Liang-Gee Chen)
dc.contributor.author: Chao-Chung Cheng [en]
dc.contributor.author: 鄭朝鐘 [zh_TW]
dc.date.accessioned: 2021-06-15T07:07:28Z
dc.date.available: 2015-12-10
dc.date.copyright: 2010-12-10
dc.date.issued: 2010
dc.date.submitted: 2010-11-16
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/48668
dc.description.abstract: With the rapid development of digital video technology, the pursuit of image quality keeps rising: from large-screen, high-resolution digital television, we are now entering the era of high-quality digital television capable of rendering stereoscopic 3D. As display makers around the world keep improving display quality, 3D television will play an important role in our daily life. Under this trend, providing stereoscopic 3D content becomes the hardest problem for next-generation displays to overcome. Capturing stereoscopic 3D images has traditionally required special equipment, such as active depth sensors and multi-view cameras, an approach that only works for newly produced content. Since the beginning of 2D imaging, however, a vast amount of material has been stored in flat 2D form, and this material cannot directly show a stereoscopic effect on next-generation 3D displays. How to generate stereoscopic 3D images from 2D images has therefore become an important research topic.
To recover 3D images from 2D images, we start from the human visual system. Human depth perception relies on multiple depth cues: binocular disparity, motion parallax, shading, and pictorial cues all help humans perceive the depth of a scene. This dissertation studies two topics, generating depth from the disparity between stereo views and generating depth from the depth cues in single-view video, and proposes novel algorithms and architecture designs for each.
In the first part, we analyze the computational complexity, bandwidth, and memory usage of belief-propagation-based stereo disparity computation. Belief propagation is among the best-performing algorithms for stereo depth generation in recent years, but it demands large bandwidth, heavy computation, and large memory; although it is inherently highly parallelizable, there is still much room for improvement toward real-time depth computation. This dissertation proposes two algorithms, a tile-based belief propagation that uses local optimization and a fast message computation method that exploits the properties of the smoothness cost, which reduce the bandwidth, the memory usage, and the computational complexity, respectively. For high-definition applications, we also propose a novel three-stage pipelined architecture, achieving a high-performance chip design for real-time depth generation from HDTV720p stereo video.
In the second part, we study how to extract the various depth cue information used by human vision and convert it into depth maps that 3D televisions can play back. We propose novel algorithms, including depth generation based on multiple depth cues, object-based depth assignment based on prior hypotheses, and depth generation based on human depth perception of lighting and color, attacking the problem from several directions. We also present a demonstration platform that integrates a 3D playback kit and distributes the work across multiple threads on a multi-core CPU and a CUDA graphics accelerator, achieving a real-time HDTV1080p 2D-to-3D conversion demonstration system.
zh_TW
dc.description.abstract: Digital video technology plays an important role in our daily life. With the evolution of display technologies, display systems provide higher visual quality to enrich human life. Emerging 3D displays provide a better visual experience than conventional 2D displays, and 3D technology enriches the content of many applications, such as broadcasting, movies, gaming, photography, camcorders, and education. This dissertation discusses video signal conversion for 3D image and video in two parts: depth from stereo vision and 2D-to-3D conversion of single-view video. Depth from stereo vision estimates depth from the correspondences between stereo views. 2D-to-3D conversion generates a depth map for 2D video and then uses that depth map to render the 2D video into 3D video.
Stereo matching can be formulated as an energy minimization problem on a 2D Markov random field (MRF). Among the many global MRF optimization methods, belief propagation (BP) gives high quality and has high potential for real-time processing. However, because of its costly iterative operations and its high memory and bandwidth demands, belief propagation as conventionally used for stereo matching is too computationally expensive for real-time implementation. Part I first describes the background of stereo matching using belief propagation. Second, two algorithms, tile-based belief propagation and a fast message computation algorithm, are proposed to reduce the bandwidth, memory, and computational complexity of general BP and make real-time processing possible. Third, an efficient VLSI architecture for real-time, high-performance stereo matching is presented. The design combines the fast message computation method with tile-based BP to create a parallel and flexible architecture, and it benefits from hardware design techniques that reduce bandwidth consumption and improve the efficiency of stereo matching: a three-stage pipeline, fully parallel processing elements for message update, and a boundary message reuse scheme. Operating at 227 MHz, the architecture generates HDTV720p disparity maps at 30 fps.
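The BP formulation above can be made concrete with a minimal message-update sketch. This is a generic illustration assuming a truncated-linear smoothness cost (the `smooth_lambda` and `trunc` parameters are illustrative); it shows the straightforward O(L^2) update that BP repeats at every pixel, not the dissertation's fast message computation or tile-based scheme:

```python
import numpy as np

def message_update(data_cost, msgs_in, smooth_lambda, trunc):
    """One BP message update from pixel p to a neighbor q over L disparity labels.

    data_cost : (L,) array, data term D_p(d') for each candidate disparity d'
    msgs_in   : list of (L,) arrays, incoming messages to p from its other neighbors
    Smoothness: truncated linear, V(d', d) = min(smooth_lambda * |d' - d|, trunc)
    """
    # Aggregate the local evidence at p: data term plus all incoming messages.
    h = data_cost + np.sum(msgs_in, axis=0)
    labels = np.arange(len(data_cost))
    # Pairwise smoothness cost between every source label d' (rows)
    # and target label d (columns).
    V = np.minimum(smooth_lambda * np.abs(labels[:, None] - labels[None, :]), trunc)
    # New message: for each target label d, minimize over source labels d'.
    m = (h[:, None] + V).min(axis=0)
    # Normalize so message values stay bounded across iterations.
    return m - m.min()
```

Iterating this update over a pixel grid and then picking the label with the lowest belief at each pixel yields the disparity map; the dissertation's contributions reduce the memory, bandwidth, and per-message cost of exactly this loop.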
In Part II, we generate depth maps from single-view content with three proposed algorithms. The first uses three depth cues based on motion parallax, geometric perspective, and color, but this depth-cue-based approach is computationally expensive. The second therefore applies a prior hypothesis to assign depth to grouped objects without explicit depth cue extraction, making it suitable for single 2D images. The third exploits human depth perception of color and lighting; it has very low computational complexity and introduces few visual side effects. A corresponding real-time demonstration system is also presented.
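As a rough illustration of the multi-cue idea in the first algorithm, per-cue depth estimates can be fused into a single depth map. The normalized weighted average and the 8-bit depth range below are assumptions made for this sketch, not the dissertation's exact fusion rule:

```python
import numpy as np

def fuse_depth_cues(depth_maps, weights):
    """Fuse per-pixel depth estimates from several monocular cues
    (e.g. motion parallax, geometric perspective, relative position)
    into one depth map by normalized weighted averaging.

    depth_maps : list of equally shaped 2D arrays, one per cue
    weights    : per-cue reliability weights (any positive scale)
    """
    stack = np.stack(depth_maps, axis=0).astype(float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so weights sum to 1
    fused = np.tensordot(w, stack, axes=1)  # weighted sum over the cue axis
    # Clamp to the 8-bit range commonly used by 2D-plus-depth formats.
    return np.clip(fused, 0, 255)
```

A post-filtering stage (the dissertation uses depth fusion followed by post-filtering) would typically smooth this fused map before depth-image-based rendering.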
In summary, this dissertation presents an efficient stereo matching hardware architecture that combines tile-based BP with the fast message computation method to generate high-quality depth maps from stereo video. For 2D-to-3D video conversion, three algorithms are proposed, generating depth from depth cues, from prior hypotheses, and from human depth perception. A 2D-to-3D conversion demonstration system integrated with a 3D vision kit is also implemented. The proposed 2D-to-3D conversion not only produces high-quality depth maps for 2D video but also achieves real-time processing at HDTV resolution.
en
dc.description.provenance: Made available in DSpace on 2021-06-15T07:07:28Z (GMT). No. of bitstreams: 1
ntu-99-D94943010-1.pdf: 6219675 bytes, checksum: 51a0cb7197af3251558448ec2d17e09a (MD5)
Previous issue date: 2010
en
dc.description.tableofcontents:
List of Figures ix
List of Tables xvii
Chapter 1 Introduction 1
1.1. Human Depth Perception 2
1.1.1. Binocular Vision 2
1.1.2. Depth and Disparity 3
1.1.3. Depth Cues 4
1.2. 3D Display 5
1.2.1. Stereoscopic 5
1.2.2. Autostereoscopic 7
1.3. 3D Content Format 9
1.3.1. Multiple 2D 9
1.3.2. 2D-plus-Depth 10
1.4. Dissertation Organization 13
Part I Stereo Matching Using Belief Propagation 15
Chapter 2 Study of Stereo Matching Using Belief Propagation 17
2.1. Stereo Matching 18
2.2. Background of Stereo Matching and Belief Propagation 18
2.2.1. Similarity Measurement 19
2.2.2. Finding Optimal Labels using Belief Propagation 21
2.3. Summary 26
Chapter 3 Analysis and Algorithm Design of Belief Propagation for Stereo Matching Application 27
3.1. Introduction 28
3.2. Proposed Algorithm 29
3.3. Bandwidth and Memory Analysis 32
3.3.1. Proposed Tile-based Belief Propagation 32
3.3.2. Data Reuse for Data Term Calculation 34
3.4. Experimental Results 37
3.5. Conclusion 46
Chapter 4 Fast Belief Propagation for Stereo Matching Application 48
4.1. Message Computation of Belief Propagation 49
4.2. Proposed Fast Message Computation Algorithm 50
4.2.1. Proposed Algorithm 54
4.2.2. Processing Element Design 54
4.3. Comparison and Discussion 57
4.4. Conclusions 57
Chapter 5 Architecture Design of Stereo Matching Using Tile-based Belief Propagation 59
5.1. Introduction 60
5.2. Analysis of Stereo Matching 61
5.2.1. Similarity Measurement Window Size 61
5.2.2. Bandwidth and Memory Analysis 63
5.3. Proposed Architecture 70
5.3.1. Three-Stage MB Pipeline Architecture 71
5.3.2. Fast Message PE Design 73
5.3.3. Boundary Message Reuse 73
5.4. Implementation Results 74
5.5. Discussion 80
5.6. Conclusion 80
Part II 2D to 3D Video Conversion 83
Chapter 6 2D-to-3D Conversion Background 85
6.1. Inverse Projection Problem 86
6.2. Depth Perception of Single View 87
6.2.1. Monocular Depth Cues 87
6.3. Study on Previous Works 95
6.3.1. Single frame based methods 96
6.3.2. Multiple frame based methods 96
6.4. Summary 97
Chapter 7 2D-to-3D Video Conversion Using Multiple Depth Cuing 99
7.1. Introduction 100
7.2. Proposed Video 2D-to-3D Conversion System 100
7.2.1. Depth from Motion Parallax (DMP) 102
7.2.1.1. Camera Motion Compensation 102
7.2.1.2. Block-Based Motion Estimation with Smoothness Prior 104
7.2.2. Depth from Geometrical Perspective (DGP) 107
7.2.3. Depth from Relative Position (DRP) 112
7.2.4. Depth Fusion and Post-Filtering 113
7.2.5. 3D Visualization 115
7.3. Experimental Results 120
7.3.1. Analysis of Computational Complexity 124
7.3.2. Analysis of Subjective Visual Quality 124
7.3.3. Analysis of Objective Visual Quality 127
7.4. Conclusions 131
Chapter 8 Object-based 2D-to-3D Conversion 133
8.1. Introduction 134
8.2. Proposed Algorithm 138
8.2.1. Block-Based Region Grouping 138
8.2.2. Depth from Prior Hypothesis 140
8.2.3. 3D Image Visualization and Depth Image-Based Rendering 141
8.3. Experimental Results 144
8.3.1. Analysis of Computational Complexity 144
8.3.2. Analysis of Visual Quality 146
8.4. Conclusions 152
Chapter 9 Algorithm and System Design of Perception-Based 2D-to-3D Conversion System 153
9.1. Introduction 154
9.2. Proposed Algorithm 156
9.2.1. Global Depth Map Generation 158
9.2.2. Local Depth Refinement 158
9.3. Psychology Experiment of Depth Perception 161
9.4. Experimental Results 162
9.5. System Implementation 165
9.6. Implementation Results 170
9.7. Conclusions 172
Chapter 10 Conclusions 173
10.1. Principle Contributions 173
10.1.1. Stereo Matching Using Belief Propagation 174
10.1.2. 2D-to-3D Conversion 175
10.2. Future Directions 175
10.2.1. Quality Enhancement of 3D Content 176
10.2.2. Interactive 3D Interface 176
Bibliography 177
dc.language.iso: zh-TW
dc.subject: 積體電路設計 (VLSI Design) [zh_TW]
dc.subject: 二維轉三維 (2D-to-3D Conversion) [zh_TW]
dc.subject: 訊號處理 (Signal Processing) [zh_TW]
dc.subject: 三維影像 (3D Video) [zh_TW]
dc.subject: Signal Processing [en]
dc.subject: 2D-to-3D Conversion [en]
dc.subject: 3D Video [en]
dc.subject: VLSI Design [en]
dc.title: 三維影像訊號處理之演算法和架構設計 [zh_TW]
dc.title: Algorithm and Architecture Design for 3D Video Signal Processing [en]
dc.type: Thesis
dc.date.schoolyear: 99-1
dc.description.degree: 博士 (Doctor of Philosophy)
dc.contributor.oralexamcommittee: 簡韶逸 (Shao-Yi Chien), 蔡朝旭 (Chao-Hsu Tsai), 李鎮宜 (Chen-Yi Lee), 陳美娟 (Mei-Juan Chen), 李佩君 (Pei-Jun Lee), 賴永康 (Yeong-Kang Lai), 陳宏銘 (Homer H. Chen)
dc.subject.keyword: 三維影像, 二維轉三維, 積體電路設計, 訊號處理 [zh_TW]
dc.subject.keyword: 3D Video, 2D-to-3D Conversion, VLSI Design, Signal Processing [en]
dc.relation.page: 183
dc.rights.note: 有償授權 (paid authorization required)
dc.date.accepted: 2010-11-16
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電子工程學研究所 (Graduate Institute of Electronics Engineering) [zh_TW]
Appears in Collections: 電子工程學研究所 (Graduate Institute of Electronics Engineering)

Files in This Item:
ntu-99-1.pdf, 6.07 MB, Adobe PDF (restricted; not authorized for public access)


Except where their copyright terms are otherwise specified, all items in the system are protected by copyright, with all rights reserved.
