NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/56938

Full metadata record (DC field: value [language])
dc.contributor.advisor: 莊永裕 (Yung-Yu Chuang)
dc.contributor.author: Pei-Hsuan Lin [en]
dc.contributor.author: 林佩璇 [zh_TW]
dc.date.accessioned: 2021-06-16T06:31:33Z
dc.date.available: 2018-08-22
dc.date.copyright: 2014-08-22
dc.date.issued: 2014
dc.date.submitted: 2014-08-06
dc.identifier.citation:
[1] E. Adelson and J. Bergen, “The plenoptic function and the elements of early vision,” Computational Models of Visual Processing, vol. 1, 1991.
[2] E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, pp. 99–106, 1992.
[3] Raytrix. 3D light field cameras. [Online]. Available: http://www.raytrix.de/
[4] Lytro. The Lytro camera. [Online]. Available: http://www.lytro.com
[5] A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proc. IEEE ICCP, 2009, pp. 1–8.
[6] R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR, vol. 2, 2005.
[7] R. C. Bolles, H. H. Baker, and D. H. Marimont, “Epipolar-plane image analysis: An approach to determining structure from motion,” International Journal of Computer Vision, 1987, pp. 1–7.
[8] T. Bishop and P. Favaro, “Plenoptic depth estimation from multiple aliased views,” in 2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), 2009, pp. 1622–1629.
[9] T. Bishop and P. Favaro, “Full-resolution depth map estimation from an aliased plenoptic light field,” Computer Vision–ACCV 2010, pp. 186–200, 2011.
[10] S. Wanner, J. Fehr, and B. Jaehne, “Generating EPI representations of 4D light fields with a single lens focused plenoptic camera,” 2011.
[11] S. Wanner and B. Goldluecke, “Globally consistent depth labeling of 4D light fields,” 2012.
[12] B. Goldluecke and S. Wanner, “The variational structure of disparity and regularization of 4D light fields,” 2013.
[13] S. Wanner, S. Meister, and B. Goldluecke, “Datasets and benchmarks for densely sampled 4D light fields,” 2013.
[14] M. Diebold and B. Goldluecke, “Epipolar plane image refocusing for improved depth estimation and occlusion handling,” 2013.
[15] M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from combining defocus and correspondence using light-field cameras,” Dec. 2013. [Online]. Available: http://graphics.berkeley.edu/papers/Tao-DFC-2013-12/
[16] C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, and M. Gross, “Scene reconstruction from high spatio-angular resolution light fields,” ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH), vol. 32, no. 4, pp. 73:1–73:12, 2013.
[17] D. Dansereau, O. Pizarro, and S. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” in 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2013, pp. 1027–1034.
[18] python-lfp-reader: Python library and command-line scripts to read Lytro LFP files. [Online]. Available: http://code.behnam.es/python-lfp-reader/
[19] D. Cho, M. Lee, S. Kim, and Y.-W. Tai, “Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction,” in 2013 IEEE International Conference on Computer Vision (ICCV), Dec. 2013, pp. 3280–3287.
[20] nrpatel/lfptools · GitHub. [Online]. Available: https://github.com/nrpatel/lfptools
[21] J. H. Kappes, B. Andres, F. A. Hamprecht, C. Schnörr, S. Nowozin, D. Batra, S. Kim, B. X. Kausler, T. Kröger, J. Lellmann, N. Komodakis, B. Savchynskyy, and C. Rother, “A comparative study of modern inference techniques for structured discrete energy minimization problems,” CoRR, vol. abs/1404.0533, 2014.
[22] Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 11, pp. 1222–1239, Nov. 2001. [Online]. Available: http://dx.doi.org/10.1109/34.969114
[23] V. Kolmogorov and R. Zabih, “What energy functions can be minimized via graph cuts?,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, pp. 65–81, 2004.
[24] Y. Boykov and V. Kolmogorov, “An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 9, pp. 1124–1137, Sep. 2004. [Online]. Available: http://dx.doi.org/10.1109/TPAMI.2004.60
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/56938
dc.description.abstract: The light field camera is a new type of camera that has only recently reached the market. Its distinguishing feature is a micro-lens array placed between the main lens assembly and the sensor, which allows it to capture more information than an ordinary camera. Light field cameras are considered valuable for research and helpful to the study and development of light field imaging applications. However, research on light field images is still immature, and various hardware and software limitations make progress difficult. This is the motivation for our work: a standardized, simple pipeline that provides researchers with scene depth would greatly benefit the development of applications such as image editing.
  We therefore propose a depth estimation method for images captured by the Lytro camera. Our algorithm computes scene depth by adaptive window matching on data converted into epipolar-plane form, and then refines the estimates with a Markov random field optimization algorithm. Because we take into account the characteristics of the Lytro camera and possible defects in image quality, our results compare favorably with several existing methods. Finally, we believe this work opens a door for other researchers in the light field imaging field, and we hope it contributes to breakthroughs in computational photography.
[zh_TW]
dc.description.abstract: A light field camera, also called a plenoptic camera, has recently become accessible in the market. With a micro-lens array located between the main lens and the sensor, it is able to collect more information than a conventional camera. It is believed that this additional information has the potential to open a new era in the field of computational photography. For example, the depth of the scene can be estimated, and the depth values can in turn aid applications such as image editing. However, because the development environment is still immature, there are many inconveniences for researchers who want to use the camera. Lytro, which we use in this work, is currently the cheapest light field camera on the market, but its manufacturer does not allow users to access the data except through the official viewer, let alone develop other applications. Although some third-party toolboxes can decode the light field pictures, obtaining a depth map from them remains an open problem.
We present a method for estimating the depth of scenes captured by a Lytro camera. Depth values are computed by adaptive window matching on epipolar plane images (EPI), and the data are refined with a Markov Random Field (MRF) optimization algorithm, with the aim of improving robustness to noise and other weaknesses caused by hardware limitations. We compare our results with those of existing methods for light field depth estimation and show that our method outperforms them in most cases. We believe our work offers researchers and developers pursuing applications such as light field inpainting a way to break through the limits of existing methods.
[en]
dc.description.provenance: Made available in DSpace on 2021-06-16T06:31:33Z (GMT). No. of bitstreams: 1. ntu-103-R01944002-1.pdf: 16393628 bytes, checksum: 90e1f529d1a08d5bc15ad548396b2cc2 (MD5). Previous issue date: 2014 [en]
dc.description.tableofcontents:
Acknowledgements i
Abstract (Chinese) ii
Abstract iii
Contents iv
List of Figures vi
1 Introduction 1
2 Background and Related Work 4
2.1 The Plenoptic Camera and Light Field Photography 4
2.2 Depth Estimation for Light Fields 5
2.3 Decoding and Calibration for Lytro Images 6
3 Depth Estimation and Data Refinement 8
3.1 Overview 8
3.2 4D Light Field Representation 8
3.3 Source Confidence Measure 10
3.4 Adaptive Window for Estimation 11
3.5 Depth Computation by Window Matching 12
3.6 Data Refinement 15
4 Experiment Results and Discussion 18
4.1 Datasets 18
4.2 Results Comparison 19
4.3 Limitations 27
5 Conclusions and Future Work 28
Bibliography 29
dc.language.iso: zh-TW
dc.subject: epipolar plane (核面) [zh_TW]
dc.subject: Lytro light field camera (Lytro光場相機) [zh_TW]
dc.subject: depth estimation (深度估計) [zh_TW]
dc.subject: light field camera (光場相機) [zh_TW]
dc.subject: light fields [en]
dc.subject: Lytro [en]
dc.subject: EPI [en]
dc.subject: depth estimation [en]
dc.title: Depth Estimation for Lytro Images by Adaptive Window Matching on Epipolar-Plane Images (基於核面影像上可適性視窗匹配之Lytro影像深度估計) [zh_TW]
dc.title: Depth Estimation for Lytro Images by Adaptive Window Matching on EPI [en]
dc.type: Thesis
dc.date.schoolyear: 102-2
dc.description.degree: Master's (碩士)
dc.contributor.oralexamcommittee: 吳賦哲, 葉正聖
dc.subject.keyword: light field camera, depth estimation, epipolar plane, Lytro light field camera [zh_TW]
dc.subject.keyword: light fields, depth estimation, EPI, Lytro [en]
dc.relation.page: 31
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2014-08-06
dc.contributor.author-college: College of Electrical Engineering and Computer Science (電機資訊學院) [zh_TW]
dc.contributor.author-dept: Graduate Institute of Networking and Multimedia (資訊網路與多媒體研究所) [zh_TW]
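The abstracts above describe the thesis's core idea: in an epipolar-plane image, a scene point traces a line across the views, and the slope of that line encodes its disparity (and hence depth). As a rough illustration of that principle only, here is a minimal sketch that scores candidate disparities by intensity variance along the sheared line through each pixel. It is not the thesis's actual algorithm: the function name, the fixed variance cost, and the synthetic EPI are all invented for this example, and the adaptive windowing and MRF refinement steps are omitted.

```python
import numpy as np

def epi_depth(epi, disparities):
    """For each pixel on the center row of an EPI (views x spatial position),
    pick the candidate disparity whose sheared line through all views has
    minimum intensity variance (i.e., the most photo-consistent slope)."""
    S, U = epi.shape
    c = S // 2  # center view index
    costs = np.empty((len(disparities), U))
    for di, d in enumerate(disparities):
        # spatial coordinate u + (s - c) * d of the line in every view s
        us = np.arange(U)[None, :] + (np.arange(S)[:, None] - c) * d
        valid = (us >= 0) & (us <= U - 1)
        idx = np.clip(np.round(us).astype(int), 0, U - 1)
        samples = np.where(valid, epi[np.arange(S)[:, None], idx], np.nan)
        costs[di] = np.nanvar(samples, axis=0)  # variance along the line
    return np.asarray(disparities)[np.argmin(costs, axis=0)]

# Synthetic EPI: two point features shifted by disparity 1 per view step
S, U, d_true = 9, 64, 1
base = np.zeros(U)
base[20] = 1.0
base[40] = 1.0
epi = np.stack([np.roll(base, (s - S // 2) * d_true) for s in range(S)])
est = epi_depth(epi, disparities=[-2, -1, 0, 1, 2])
print(est[20], est[40])  # both features recover disparity 1
```

Textureless regions make every slope equally photo-consistent, which is exactly why the thesis pairs a per-pixel matching cost with a confidence measure and MRF regularization rather than relying on the raw argmin alone.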
Appears in Collections: Graduate Institute of Networking and Multimedia

Files in This Item:
ntu-103-1.pdf, 16.01 MB, Adobe PDF (not authorized for public access)


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
