Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/34040
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳永耀(Yung-Yaw Chen) | |
dc.contributor.author | Cheng-Wei Chen | en |
dc.contributor.author | 陳政維 | zh_TW |
dc.date.accessioned | 2021-06-13T05:52:24Z | - |
dc.date.available | 2014-08-03 | |
dc.date.copyright | 2011-08-03 | |
dc.date.issued | 2011 | |
dc.date.submitted | 2011-07-26 | |
dc.identifier.citation | [1] J. D. Pfautz, "Depth Perception in Computer Graphics," Ph.D. dissertation, Trinity College, University of Cambridge, 2000.
[2] S. Nagata, "How to Reinforce Perception of Depth in Single Two-Dimensional Pictures," in Pictorial Communication in Virtual and Real Environments, 2nd ed. Florence: Taylor & Francis, 1993, pp. 527-545.
[3] A. Levin, R. Fergus, F. Durand, and W. T. Freeman, "Image and Depth from a Conventional Camera with a Coded Aperture," presented at ACM SIGGRAPH, San Diego, California, 2007.
[4] G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library. Sebastopol: O'Reilly Media, 2008.
[5] S. Chaudhuri and A. N. Rajagopalan, Depth from Defocus: A Real Aperture Imaging Approach. New York: Springer, 1999.
[6] M. Subbarao and C. Tao, "Accurate Recovery of Three-Dimensional Shape from Image Focus," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, pp. 266-274, 1995.
[7] S. J. Zhuo and T. Sim, "On the Recovery of Depth from a Single Defocused Image," in Proceedings of Computer Analysis of Images and Patterns, 2009, pp. 889-897.
[8] I. Blayvas, R. Kimmel, and E. Rivlin, "Role of Optics in the Accuracy of Depth-from-Defocus Systems," Journal of the Optical Society of America A, vol. 24, pp. 967-972, Apr. 2007.
[9] F. Zilly, J. Kluger, and P. Kauff, "Production Rules for Stereo Acquisition," Proceedings of the IEEE, 2011, pp. 1-17.
[10] D. Forsyth and J. Ponce, Computer Vision: A Modern Approach. Upper Saddle River, New Jersey: Prentice Hall, 2002.
[11] V. P. Namboodiri and S. Chaudhuri, "Recovery of Relative Depth from a Single Observation Using an Uncalibrated (Real-Aperture) Camera," in IEEE Conference on Computer Vision and Pattern Recognition, 2008, pp. 1-6.
[12] H. Y. Lin and K. D. Gu, "Depth Recovery Using Defocus Blur at Infinity," in 19th International Conference on Pattern Recognition, 2008, pp. 1068-1071.
[13] A. Shpunt, D. Rais, and N. Galezer, "Reference Image Techniques for Three-Dimensional Sensing," US Patent US 2010/0225746 A1, 2010.
[14] J. Ma and S. I. Olsen, "Depth from Zooming," Journal of the Optical Society of America A, vol. 7, pp. 1883-1890, Oct. 1990.
[15] P. Grossmann, "Depth from Focus," Pattern Recognition Letters, vol. 5, pp. 63-69, Jan. 1987.
[16] A. P. Pentland, "A New Sense for Depth of Field," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-9, pp. 523-531, 1987.
[17] R. M. Haralick and L. G. Shapiro, Computer and Robot Vision. Reading, Mass.: Addison-Wesley, 1992.
[18] U. R. Dhond and J. K. Aggarwal, "Structure from Stereo: A Review," IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, pp. 1489-1510, 1989.
[19] M. Subbarao and G. Surya, "Depth from Defocus: A Spatial Domain Approach," International Journal of Computer Vision, vol. 13, pp. 271-294, 1994.
[20] T. S. Huang and A. N. Netravali, "Motion and Structure from Feature Correspondences: A Review," Proceedings of the IEEE, vol. 82, pp. 252-268, 1994.
[21] J. M. Lavest, G. Rives, and M. Dhome, "Three-Dimensional Reconstruction by Zooming," IEEE Transactions on Robotics and Automation, vol. 9, pp. 196-207, 1993.
[22] J. M. Lavest, G. Rives, and M. Dhome, "Modeling an Object of Revolution by Zooming," IEEE Transactions on Robotics and Automation, vol. 11, pp. 267-271, Apr. 1995.
[23] J. M. Lavest, C. Delherm, B. Peuchot, and N. Daucher, "Implicit Reconstruction by Zooming," Computer Vision and Image Understanding, vol. 66, pp. 301-315, 1997.
[24] A. Claus Siggaard, "An Analysis of Five Depth Recovery Techniques," 1994.
[25] S. K. Nayar and Y. Nakagawa, "Shape from Focus," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, pp. 824-831, 1994.
[26] A. Pentland, T. Darrell, M. Turk, and W. Huang, "A Simple, Real-Time Range Camera," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1989, pp. 256-261.
[27] A. Pentland, S. Scherock, T. Darrell, and B. Girod, "Simple Range Cameras Based on Focal Error," Journal of the Optical Society of America A, vol. 11, pp. 2925-2934, Nov. 1994.
[28] M. Subbarao, "Determining Distance from Defocused Images of Simple Objects," Computer Vision Laboratory, Dept. of Electrical Engineering, State University of New York, Stony Brook, NY, Technical Report 89.07.20, 1989.
[29] A. Horii, "Depth from Defocusing," Computational Vision and Active Perception Laboratory, Royal Institute of Technology, Stockholm, Sweden, Technical Report ISRN KTH/NA/P--92/16--SE, 1992.
[30] J. Ens and P. Lawrence, "An Investigation of Methods for Determining Depth from Focus," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, pp. 97-108, Feb. 1993.
[31] Y. Xiong and S. A. Shafer, "Depth from Focusing and Defocusing," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1993, pp. 68-73.
[32] G. Surya and M. Subbarao, "Depth from Defocus by Changing Camera Aperture: A Spatial Domain Approach," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1993, pp. 61-67.
[33] J. G. Sivak, T. A. Curry, K. T. Wilson, M. G. Callender, and E. L. Irving, "Defocus Induced Ametropia in Chicks: Role of Magnification and Orbital Changes," Vision Research, vol. 35, pp. 1321-1321, Oct. 1995.
[34] M. Watanabe and S. K. Nayar, "Rational Filters for Passive Depth from Defocus," International Journal of Computer Vision, vol. 27, pp. 203-225, May 1998.
[35] O. Ghita, P. F. Whelan, and J. Mallon, "Computational Approach for Depth from Defocus," Journal of Electronic Imaging, vol. 14, 2005.
[36] H. Liu, Y. Jia, H. Cheng, and S. Wei, "Depth Estimation from Defocus Images Based on Oriented Heat-Flows," in Second International Conference on Machine Vision, 2009, pp. 212-215.
[37] L. Hong, J. Yu, C. Hong, and W. Sui, "Depth Recovery from Defocus Images Using Total Variation," in Second International Conference on Computer Modeling and Simulation, 2010, pp. 146-150.
[38] T. Rajabzadeh, A. Vahedian, and H. Pourreza, "Static Object Depth Estimation Using Defocus Blur Levels Features," in 6th International Conference on Wireless Communications, Networking and Mobile Computing, 2010, pp. 1-4.
[39] V. Aslantas and D. T. Pham, "Depth from Automatic Defocusing," Optics Express, vol. 15, pp. 1011-1023, 2007.
[40] G. Dane, Y. Yan, and L. Yen-Chi, "Regularized Depth from Defocus," in 15th IEEE International Conference on Image Processing, 2008, pp. 1520-1523.
[41] W. Yangjie, D. Zaili, and W. Chengdong, "Global Depth from Defocus with Fixed Camera Parameters," in International Conference on Mechatronics and Automation, 2009, pp. 1887-1892.
[42] H. Liu, "Depth Retrieval Based on Optical Defocus of Imaging System," in 2nd International Conference on Advanced Computer Control, 2010, pp. 319-322.
[43] S. H. Lai, C. W. Fu, and S. Y. Chang, "A Generalized Depth Estimation Algorithm with a Single Image," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, pp. 405-411, Apr. 1992.
[44] M. Subbarao, "Parallel Depth Recovery by Changing Camera Parameters," in Second International Conference on Computer Vision, 1988, pp. 149-155.
[45] M. Subbarao and N. Gurumoorthy, "Depth Recovery from Blurred Edges," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1988, pp. 498-503.
[46] V. P. Namboodiri and S. Chaudhuri, "On Defocus, Diffusion and Depth Estimation," Pattern Recognition Letters, vol. 28, pp. 311-319, 2007.
[47] A. N. Rajagopalan, S. Chaudhuri, and R. Chellappa, "Quantitative Analysis of Error Bounds in the Recovery of Depth from Defocused Images," Journal of the Optical Society of America A, vol. 17, pp. 1722-1731, Oct. 2000.
[48] A. Saxena, A. Ng, and S. Chung, "Learning Depth from Single Monocular Images," in Conference on Neural Information Processing Systems, 2005.
[49] A. Saxena, J. Schulte, and A. Y. Ng, "Depth Estimation Using Monocular and Stereo Cues," in Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, 2007, pp. 2197-2203.
[50] A. Saxena, S. H. Chung, and A. Y. Ng, "3-D Depth Reconstruction from a Single Still Image," International Journal of Computer Vision, vol. 76, pp. 53-69, 2008.
[51] A. Saxena, M. Sun, and A. Y. Ng, "Make3D: Learning 3D Scene Structure from a Single Still Image," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, pp. 824-840, 2009.
[52] D. Hoiem, A. A. Efros, and M. Hebert, "Automatic Photo Pop-Up," ACM Transactions on Graphics, vol. 24, pp. 577-584, 2005.
[53] D. Hoiem, A. A. Efros, and M. Hebert, "Recovering Surface Layout from an Image," International Journal of Computer Vision, vol. 75, pp. 151-172, 2007.
[54] A. Saxena, "Monocular Depth Perception and Robotic Grasping of Novel Objects," Ph.D. dissertation, Stanford University, 2009.
[55] M. Watanabe and S. K. Nayar, "Telecentric Optics for Focus Analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 1360-1365, Dec. 1997. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/34040 | - |
dc.description.abstract | 利用影像擷取裝置獲取被攝物的距離——即深度還原——可廣泛地利用在智慧型三維立體應用技術開發。「散焦測距法」為電腦視覺研究中深度還原的方法之一,其主要概念為在單一視角優勢下,利用散焦模糊與影像深度距離資訊的幾何關聯,在影像中量測模糊程度以還原深度。本論文基於傅立葉轉換的時頻域縮放關係提出一個新的方法,以散焦邊緣梯度取得的線擴散函數頻譜能量做為模糊程度的量化標準,從而由單張影像中邊緣區域的模糊程度與預先提供的相機內部參數進行影像深度還原。不同於過去提出的散焦測距法,此新方法簡單直覺並且相當有效地量化模糊程度,而不用對造成模糊的擴散函數進行建模及參數調和,進而避免因為諸如高斯函數建模所造成的誤差,提升深度還原的準確度。本論文並以未經校正的消費型數位相機進行實驗,結果顯示新提出的深度還原方法具有相當良好的精確度。 | zh_TW |
dc.description.abstract | Recovering depth information, i.e., the distances between objects and the camera, from images is a convenient and practical approach for intelligent 3D technologies. Depth from Defocus (DFD), one such depth recovery method, uses defocus blur to estimate depth and has the advantage of requiring only a single image rather than multiple images. This thesis proposes a novel idea: representing the amount of defocus blur by the spectral energy of the line spread function derived directly from a defocused step edge. Depth can then be recovered from a single image using this spectral energy together with known internal camera parameters. Unlike previous DFD methods, the proposed method does not model spread functions with Gaussian functions; using the spectral energy directly as the measure of blurriness eliminates the modeling error inherent in Gaussian-based approaches. Experiments with an uncalibrated consumer digital camera validate the proposed method and show considerably good accuracy in depth recovery. | en |
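The abstract above relies on the geometric link between defocus blur and object distance. As a hedged illustration of that link, the sketch below uses only the standard thin-lens blur-circle geometry common to DFD work; it is not the thesis's spectral-energy algorithm, and all function names and numeric values are hypothetical.

```python
# Generic thin-lens defocus sketch (illustrative only; not the thesis's
# spectral-energy method -- function names and numbers are hypothetical).

def blur_diameter(d, d_focus, f, aperture):
    """Blur-circle diameter for an object at distance d when a lens of
    focal length f and aperture diameter `aperture` is focused at d_focus.
    Standard thin-lens geometry; all lengths in the same units (metres)."""
    return aperture * f * abs(d - d_focus) / (d * (d_focus - f))

def depth_from_blur(b, d_focus, f, aperture):
    """Invert blur_diameter for objects beyond the focal plane (d > d_focus).
    DFD methods face this two-sided ambiguity: the same blur also occurs
    at a nearer distance, so a side constraint picks the branch."""
    return aperture * f * d_focus / (aperture * f - b * (d_focus - f))

# Example: 50 mm lens at f/2 focused at 2 m; an object at 4 m.
b = blur_diameter(4.0, 2.0, 0.05, 0.025)
print(depth_from_blur(b, 2.0, 0.05, 0.025))  # approximately 4.0 m
```

Measuring the blur amount from the image (here via the line spread function's spectral energy, per the thesis) is what turns this geometric inversion into a depth recovery method.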
dc.description.provenance | Made available in DSpace on 2021-06-13T05:52:24Z (GMT). No. of bitstreams: 1 ntu-100-R98921002-1.pdf: 7830647 bytes, checksum: 482785483abf511f04b7852cd8b068ca (MD5) Previous issue date: 2011 | en |
dc.description.tableofcontents | Acknowledgements i
Chinese Abstract ii
Abstract iii
Contents iv
List of Figures vii
List of Tables xv
Chapter 1 Introduction 1
1.1 Motivation and Problem Definition 1
1.2 Previous Approaches 6
1.3 Proposed Approach 9
1.4 Thesis Overview 10
Chapter 2 State-of-the-Art 12
2.1 Multiple Viewpoints Approaches 15
2.1.1 Stereo Vision 18
2.1.2 Structure from Motion 21
2.2 Depth from Zooming 24
2.3 Depth from Focus 27
2.4 Depth from Defocus 30
2.4.1 Multiple Images Based Methods 34
2.4.2 Single Image Based Methods 36
Chapter 3 Defocus Blur Model and Line Spread Function Extraction 43
3.1 Camera Model and Defocus Blur 43
3.2 Line Spread Function Extraction 48
3.2.1 Mathematical Derivation 48
3.2.2 Image Pre-processing and Feature Extraction 50
3.3 Summary 56
Chapter 4 Recovering Depth Using Spectral Energy 58
4.1 Spectral Energy 58
4.1.1 Blurring Phenomenon in Frequency Domain 60
4.1.2 Spectral Energy Computation 62
4.2 Depth Recovery Formulation 64
4.3 Constraints 67
4.4 Summary 69
Chapter 5 Experimental Results 71
5.1 Noise Effect Simulation 74
5.2 Energy-Depth Relationship Function Initialization Fitting 76
5.3 Performance Comparison with Gaussian Modeling Approach 81
5.4 Depth Recovery from Various Patterns 84
5.5 Exposure Time Effects on Spectral Energy 87
5.6 Edge Location Effects on Spectral Energy 90
5.7 Disturbance Identification 93
5.8 Properties of Robustness 96
5.9 Summary 100
Chapter 6 Conclusions and Discussions 102
6.1 Conclusions 102
6.2 Discussions and Future Work 104
References 105 | |
dc.language.iso | en | |
dc.title | 深度還原自單張失焦影像之線擴散函數頻譜能量 | zh_TW |
dc.title | Depth Recovery by Spectral Energy with Line Spread Functions from a Single Defocused Image | en |
dc.type | Thesis | |
dc.date.schoolyear | 99-2 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 傅立成(Li-Chen Fu),顏家鈺(Jia-Yush Yen),連豊力(Feng-Li Lian) | |
dc.subject.keyword | 深度還原, 散焦模糊, 散焦測距法, 頻譜能量, 點擴散函數, 線擴散函數 | zh_TW |
dc.subject.keyword | depth recovery, defocus blur, depth from defocus, spectral energy, point spread function, line spread function | en |
dc.relation.page | 112 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2011-07-26 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
Appears in Collections: | 電機工程學系 |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-100-1.pdf (currently not authorized for public access) | 7.65 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.