Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/56938
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 莊永裕(Yung-Yu Chuang) | |
dc.contributor.author | Pei-Hsuan Lin | en |
dc.contributor.author | 林佩璇 | zh_TW |
dc.date.accessioned | 2021-06-16T06:31:33Z | - |
dc.date.available | 2018-08-22 | |
dc.date.copyright | 2014-08-22 | |
dc.date.issued | 2014 | |
dc.date.submitted | 2014-08-06 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/56938 | - |
dc.description.abstract | The light field camera is a new type of camera that has entered the market in recent years. Its distinguishing feature is a micro-lens array placed between the main lens assembly and the image sensor, which allows it to collect more information than an ordinary camera. The light field camera is considered worth studying and can advance research and development of light field imaging applications; however, research on light field images is still immature, and various hardware and software limitations make progress difficult. This is the motivation for our work: a standardized and simple pipeline that provides researchers with scene depth would greatly benefit the development of applications such as image editing.
We therefore propose a depth estimation method for images from the Lytro camera. Our algorithm obtains scene depth by adaptive window matching on data converted into epipolar plane form, and refines the estimates with a Markov random field optimization algorithm. Because we account for the characteristics of the Lytro camera itself and for possible defects in image quality, our results compare favorably with several existing methods. Finally, we believe this work opens a door for other researchers in the light field imaging field, and we hope it contributes to breakthroughs in computational photography research. | zh_TW |
dc.description.abstract | A light field camera, also called a plenoptic camera, has recently become available on the market. With a micro-lens array located between the main lens and the sensor, it can collect more information than a conventional camera. This additional information is believed to have the power to open a new era in computational photography: for example, the depth of the scene can be estimated, and the depth values can in turn aid applications such as image editing. However, because the development environment is still immature, researchers who want to use the camera face many inconveniences. The Lytro camera, which we use in this work, is currently the cheapest light field camera on the market, but its manufacturer does not allow users to access the data except through the official viewer, let alone develop other applications. Although some third-party toolboxes can decode the light field pictures, obtaining a depth map from them remains an open problem.
We present a method for estimating the depth of scenes captured by a Lytro camera. Depth values are computed by adaptive window matching on epipolar plane images (EPIs), and the estimates are refined by a Markov random field (MRF) optimization algorithm, which improves robustness to noise and other weaknesses caused by hardware limitations. We compare our results with those of existing light field depth estimation methods and show that our method outperforms them in most cases. We believe this work gives researchers and developers aiming at applications such as light field inpainting a way to break through the limits of existing methods. | en |
dc.description.provenance | Made available in DSpace on 2021-06-16T06:31:33Z (GMT). No. of bitstreams: 1 ntu-103-R01944002-1.pdf: 16393628 bytes, checksum: 90e1f529d1a08d5bc15ad548396b2cc2 (MD5) Previous issue date: 2014 | en |
dc.description.tableofcontents | Acknowledgements i; Abstract (Chinese) ii; Abstract iii; Contents iv; List of Figures vi; 1 Introduction 1; 2 Background and Related Work 4; 2.1 The Plenoptic Camera and Light Field Photography 4; 2.2 Depth Estimation for Light Fields 5; 2.3 Decoding and Calibration for Lytro Images 6; 3 Depth Estimation and Data Refinement 8; 3.1 Overview 8; 3.2 4D Light Field Representation 8; 3.3 Source Confidence Measure 10; 3.4 Adaptive Window for Estimation 11; 3.5 Depth Computation by Window Matching 12; 3.6 Data Refinement 15; 4 Experiment Results and Discussion 18; 4.1 Datasets 18; 4.2 Results Comparison 19; 4.3 Limitation 27; 5 Conclusions and Future Work 28; Bibliography 29 | |
dc.language.iso | zh-TW | |
dc.title | Depth Estimation for Lytro Images Based on Adaptive Window Matching on Epipolar Plane Images | zh_TW |
dc.title | Depth Estimation for Lytro Images by Adaptive Window Matching on EPI | en |
dc.type | Thesis | |
dc.date.schoolyear | 102-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 吳賦哲,葉正聖 | |
dc.subject.keyword | light field camera, depth estimation, epipolar plane, Lytro light field camera | zh_TW |
dc.subject.keyword | light fields, depth estimation, EPI, Lytro | en |
dc.relation.page | 31 | |
dc.rights.note | Authorized with fee | |
dc.date.accepted | 2014-08-06 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Networking and Multimedia | zh_TW |
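The matching step described in the abstract above rests on one geometric fact: a scene point traces a line across an epipolar plane image, and the slope of that line encodes the point's depth. The sketch below illustrates that idea only. It is not the thesis implementation: it substitutes a fixed averaging window for the adaptive windows, uses a simple variance matching cost, and omits the source confidence measure and the MRF refinement. The function `epi_depth_estimate` and its parameters are hypothetical names chosen for this example.

```python
import numpy as np

def epi_depth_estimate(epi, disparities, half_win=2):
    """Estimate per-pixel disparity on a single EPI slice.

    epi: (U, X) grayscale epipolar plane image, where rows are the U
    angular views and columns the X spatial positions.  For each
    candidate disparity we shear the EPI so that a line of that slope
    becomes vertical, then score the candidate by how constant the
    sheared samples are across views (low variance = good match).
    """
    U, X = epi.shape
    u0 = U // 2                      # reference (central) view
    costs = np.empty((len(disparities), X))
    for di, d in enumerate(disparities):
        sheared = np.empty_like(epi)
        for u in range(U):
            shift = d * (u - u0)     # per-view shift for this slope
            xs = np.clip(np.arange(X) + shift, 0, X - 1)
            sheared[u] = np.interp(xs, np.arange(X), epi[u])
        # Matching cost: variance across views, averaged over a
        # fixed window (the thesis uses adaptive windows instead).
        var = sheared.var(axis=0)
        kernel = np.ones(2 * half_win + 1) / (2 * half_win + 1)
        costs[di] = np.convolve(var, kernel, mode="same")
    best = np.argmin(costs, axis=0)  # winner-take-all per pixel
    return np.asarray(disparities)[best]

# Toy EPI: a single point at x=8 moving one pixel per view, i.e. a
# line of slope (disparity) 1 across the views.
U, X = 5, 16
epi = np.zeros((U, X))
for u in range(U):
    epi[u, 8 + (u - U // 2)] = 1.0
d = epi_depth_estimate(epi, disparities=[-1.0, 0.0, 1.0])
print(d[8])  # recovers disparity 1.0 at the point's pixel
```

In the thesis, the winner-take-all step at the end is what the MRF optimization replaces: instead of picking each pixel's disparity independently, a global energy with a smoothness term is minimized over the whole image.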
Appears in Collections: | Graduate Institute of Networking and Multimedia
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-103-1.pdf (currently not authorized for public access) | 16.01 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.