Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/61851

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 李世光 | |
| dc.contributor.author | Shan-Ching Chang | en |
| dc.contributor.author | 張善靖 | zh_TW |
| dc.date.accessioned | 2021-06-16T13:15:44Z | - |
| dc.date.available | 2015-08-08 | |
| dc.date.copyright | 2013-08-08 | |
| dc.date.issued | 2013 | |
| dc.date.submitted | 2013-07-29 | |
| dc.identifier.citation | [1] 王嚴璋, '利用光場相機所獲得之聚焦於不同距離的影像產生標籤圖及擴展景深,' 清華大學電機工程學系學位論文, 清華大學, 2012. [2] M. Levoy and P. Hanrahan, 'Light field rendering,' presented at the Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, 1996. [3] R. Ng, 'Fourier slice photography,' in ACM Transactions on Graphics (TOG), 2005, pp. 735-744. [4] A. Gershun, P. H. Moon, and G. Timoshenko, The Light Field: Massachusetts Institute of Technology, 1939. [5] E. H. Adelson and J. R. Bergen, 'The plenoptic function and the elements of early vision,' Computational Models of Visual Processing, vol. 1, 1991. [6] M. S. Landy and J. A. Movshon, Computational Models of Visual Processing: MIT Press, 1991. [7] C. Zhang and T. Chen, 'Light field sampling,' Synthesis Lectures on Image, Video, and Multimedia Processing, vol. 2, pp. 1-102, 2006. [8] S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, 'The lumigraph,' presented at the Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, 1996. [9] B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, et al., 'High performance imaging using large camera arrays,' in ACM Transactions on Graphics (TOG), 2005, pp. 765-776. [10] B. Wilburn, N. Joshi, V. Vaish, M. Levoy, and M. Horowitz, 'High-speed videography using a dense camera array,' in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), 2004, pp. II-294-II-301, Vol. 2. [11] B. S. Wilburn, M. Smulski, H.-H. K. Lee, and M. A. Horowitz, 'Light field video camera,' in Electronic Imaging 2002, 2001, pp. 29-36. [12] V. Vaish, B. Wilburn, N. Joshi, and M. Levoy, 'Using plane + parallax for calibrating dense camera arrays,' in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), pp. I-2-I-9, Vol. 1. [13] R. Ng, 'Digital light field photography,' Stanford University, 2006. [14] Lytro. Light field camera. Available: https://www.lytro.com/ [15] C.-C. Chen, Y.-C. Lu, and M.-S. Su, 'Light field based digital refocusing using a DSLR camera with a pinhole array mask,' in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2010, pp. 754-757. [16] A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, 'Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,' ACM Transactions on Graphics, vol. 26, p. 69, 2007. [17] R. Horstmeyer, G. Euliss, R. Athale, and M. Levoy, 'Flexible multimodal camera using a light field architecture,' in IEEE International Conference on Computational Photography (ICCP), 2009, pp. 1-8. [18] T. Georgiev and C. Intwala, 'Light field camera design for integral view photography,' Adobe Systems, Inc., Technical Report, 2006. [19] T. Georgiev, 'New results on the plenoptic 2.0 camera,' in Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers, 2009, pp. 1243-1247. [20] E. R. Dowski and W. T. Cathey, 'Extended depth of field through wave-front coding,' Applied Optics, vol. 34, pp. 1859-1866, 1995. [21] G. M. A. R. Harvey, B. Lucotte, and S. Mezouari. (2006). Wavefront coding: a new dimension in optical design. Available: http://powershow.com/view/91627-NDRjM/Optical_Designers_meeting_22nd_Sept_06a_r_harveyhw_ac_uk_powerpoint_ppt_presentation [22] C.-K. Liang, G. Liu, and H. H. Chen, 'Light field acquisition using programmable aperture camera,' in IEEE International Conference on Image Processing (ICIP 2007), 2007, pp. V-233-V-236. [23] C.-K. Liang, T.-H. Lin, B.-Y. Wong, C. Liu, and H. H. Chen, 'Programmable aperture photography: multiplexed light field acquisition,' in ACM Transactions on Graphics (TOG), 2008, p. 55. [24] D. L. DiLaura, 'The Photic Field - Moon, P., Spencer, D.,' Applied Optics, vol. 22, pp. 4166-4166, 1983. [25] 潘家弘, '利用傅立葉切片理論之數位變焦計算與硬體加速設計,' 臺灣大學電子工程學研究所學位論文, 臺灣大學, 2011. [26] 陳致傑, '利用光場資料之數位變焦演算法與硬體架構設計,' 國立台灣大學電子工程研究所碩士論文, 2010. [27] R. E. Jacobson, S. F. Ray, and G. G. Attridge, The Manual of Photography: Focal Press, 1988. [28] A. A. Blaker, Applied Depth of Field: Focal Press, 1985. [29] R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, 'Light field photography with a hand-held plenoptic camera,' Computer Science Technical Report CSTR, vol. 2, p. 11, 2005. [30] Optical Research Associates. CODE V & LightTools. Available: http://www.opticalres.com/ [31] M. Pharr and G. Humphreys, Physically Based Rendering: From Theory to Implementation, 2nd ed.: Morgan Kaufmann, 2010. [32] C. Kolb, D. Mitchell, and P. Hanrahan, 'A realistic camera model for computer graphics,' presented at the Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, 1995. [33] W. J. Smith, Modern Lens Design. New York: McGraw-Hill, 2005. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/61851 | - |
| dc.description.abstract | 近幾年手持式光場相機於影像攝影學上掀起了一場革命性的討論。雖然光場相機的概念早在多年前就已經被提出來討論,但卻受限於當時電腦技術的發展因此無法被廣泛地應用。而在一般光學相機系統中,相機之感測器為一個二維平面,於捕捉現實世界中的四維光場資訊時,所擷取到之二維資訊為整體四維空間場域投影至相機二維感測器平面之結果。換句話說,一般相機之感測器所能紀錄之資訊僅僅為空間光場中之一部分而已,其餘維度之資訊並無法被記錄下來。本論文目標為計算真實四維光場資訊中之像素射線分佈並擷取此像素射線成像與分析比對。我們採用以物理概念為基礎之渲染系統軟體PBRT(Physical Based Rendering Tracing)來進行光線追跡。初期將會實作軟體之簡易相機,接著再修改相機演算法加入透鏡組來實現真實相機效果。接下來將於此套軟體環境中引入四維光場概念,並對虛擬場景搭配修改後之真實相機來進行光場擷取與呈現。利用此方式所擷取到之光場資料會儲存於光場檔案中,之後會直接對光場檔案進行渲染以及變焦等處理,不用再對整個場景重新進行渲染。最後則將真實光場量測資料導入,置換於PBRT軟體環境中所建立的四維光場中,先對一張量測資料來做像素射線之光能量擷取與分析,再利用兩張量測資料來做處理。本論文會將過往實驗時的貝索光束量測資料與生物組織量測資訊導入。後期利用影像處理方法,對像素射線之能量分佈圖進行光能量分析,並與真實量測資料比較。由實驗結果中可看出,透過本論文研究方法,於渲染虛擬場景時可以大幅減少渲染時間達兩倍以上,另外驗證能夠於PBRT此套軟體環境中達到真實資料之光場光線追跡的目的,並能有效地擷取透過真實資訊後之像素射線光能量分佈呈現於一張相片上。 | zh_TW |
| dc.description.abstract | Recently, the invention of the hand-held light field camera has sparked a revolution in photography. Although the concept of the light field camera was proposed many years ago, the lack of adequate computing technology prevented it from being widely used. In a typical camera system, the sensor array lies on a two-dimensional plane; when such a two-dimensional sensor captures the four-dimensional light field of the real world, the recorded data are only a two-dimensional projection of that four-dimensional light field. As a result, the information we obtain is only part of the full four-dimensional data, and the remaining information is simply lost. The main goal of this work is to develop a computational approach that calculates the pixel-ray distribution of a four-dimensional light field built from real measurement data. We use PBRT, a physically based rendering and ray tracing system, to perform the ray tracing needed to create a light field camera. First, we implement a simple camera to verify system performance, and then modify the camera algorithm to include a lens system so that it reproduces the behavior of a real camera. Second, we introduce the concept of the four-dimensional light field into this modified camera and use it to capture the light field of a virtual scene. The captured light field data are stored in a light field file, and subsequent rendering and refocusing operate on this file alone, so the entire scene does not need to be re-rendered, which saves time and computation resources (see the illustrative refocusing sketch after this record). Third, we replace the four-dimensional light field with measured real data imported into PBRT: we first capture and analyze the pixel-ray light energy using a single set of measurement data, and then repeat the process using two sets of measurement data. The measurement data consist of Bessel beam data from our team's previous research and newly acquired biological tissue data. Finally, we analyze the pixel-ray light energy through image processing and compare the computed results with the measured data. The results show that the virtual scene rendering approach developed in this thesis reduces the rendering time by at least a factor of two. They also demonstrate that light field ray tracing with real data can be achieved in the PBRT environment and that the proposed method effectively captures the pixel-ray light energy distribution, which can be used to recreate a photograph focused at any specified focal point. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-16T13:15:44Z (GMT). No. of bitstreams: 1 ntu-102-R00543010-1.pdf: 6080472 bytes, checksum: 7e5de31c39207bdd53ee253a8b0e99c7 (MD5) Previous issue date: 2013 | en |
| dc.description.tableofcontents | 口試委員會審定書 # 致謝 I 中文摘要 II ABSTRACT III 目錄 V 圖目錄 VII 表目錄 XI 第 一 章 緒論 1 1.1 研究背景 1 1.2 文獻回顧 3 1.2.1 光場之發展 3 1.3 研究動機 13 1.4 論文架構 15 第 二 章 原理 16 2.1 光場 16 2.1.1 七維光場 16 2.1.2 五維光場 17 2.1.3 四維光場 18 2.2 感測器與焦距位置 19 2.3 圖像生成與景深 21 2.4 四維光場與二維感測器 23 2.5 色彩空間轉換 25 2.5.1 RGB色彩空間 25 2.5.2 HSV色彩空間 26 第 三 章 實驗系統與方法架構 27 3.1 基於物理概念為基礎之光線追蹤渲染系統 27 3.2 相機 30 3.3 光場擷取 35 第 四 章 實驗結果與討論 43 4.1 相機演算法之實驗模擬結果 43 4.2 真實相機演算法之實驗模擬結果 48 4.3 相機加入四維光場概念之實驗模擬結果 51 4.4 真實資料置換四維光場之渲染實驗結果 68 第 五 章 結論與未來展望 85 5.1 結論 85 5.2 未來展望 86 5.2.1 軟體系統改善 86 參考文獻 87 附錄 90 | |
| dc.language.iso | zh-TW | |
| dc.subject | 光線追跡 | zh_TW |
| dc.subject | 光場 | zh_TW |
| dc.subject | PBRT | zh_TW |
| dc.subject | 影像處理 | zh_TW |
| dc.subject | 像素射線 | zh_TW |
| dc.subject | PBRT | en |
| dc.subject | Light Field | en |
| dc.subject | Ray tracing | en |
| dc.subject | Image processing | en |
| dc.title | 以像素射線方法開發光場相機之研究 | zh_TW |
| dc.title | Preliminary Research on Pixel Ray Method Based Light Field Camera | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 101-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 高甫仁,黃君偉,周晟,林鼎晸 | |
| dc.subject.keyword | PBRT,光場,光線追跡,像素射線,影像處理, | zh_TW |
| dc.subject.keyword | PBRT,Light Field,Ray tracing,Image processing, | en |
| dc.relation.page | 91 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2013-07-29 | |
| dc.contributor.author-college | 工學院 | zh_TW |
| dc.contributor.author-dept | 應用力學研究所 | zh_TW |
| Appears in Collections: | 應用力學研究所 |
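The abstract above describes storing the captured four-dimensional light field in a light field file and then rendering or refocusing directly from that file, without re-rendering the scene. As a rough illustration of that refocusing step, the C++ sketch below performs shift-and-add synthetic refocusing on a toy 4D light field. It is a minimal sketch under assumed names and data layout (`LightField`, `refocus`, the `alpha` refocus parameter, a flat `[u][v][s][t]` float array); it is not code from the thesis or from PBRT.

```cpp
// Illustrative shift-and-add refocusing on a 4D light field L(u, v, s, t).
// Assumed, simplified data layout; not taken from the thesis or from PBRT.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct LightField {
    int nu, nv, ns, nt;        // angular (u, v) and spatial (s, t) resolution
    std::vector<float> data;   // flat [u][v][s][t] array, one float per sample

    float at(int u, int v, int s, int t) const {
        s = std::clamp(s, 0, ns - 1);   // clamp spatial coordinates at the border
        t = std::clamp(t, 0, nt - 1);
        return data[((static_cast<std::size_t>(u) * nv + v) * ns + s) * nt + t];
    }
};

// Shift each sub-aperture image (fixed u, v) in proportion to its offset from the
// aperture center and average the results (nearest-neighbor resampling).
// alpha is the relative refocus depth: alpha = 1 keeps the original focal plane.
std::vector<float> refocus(const LightField& lf, float alpha) {
    std::vector<float> image(static_cast<std::size_t>(lf.ns) * lf.nt, 0.0f);
    const float cu = 0.5f * (lf.nu - 1);
    const float cv = 0.5f * (lf.nv - 1);
    const float shift = 1.0f - 1.0f / alpha;
    for (int s = 0; s < lf.ns; ++s) {
        for (int t = 0; t < lf.nt; ++t) {
            float sum = 0.0f;
            for (int u = 0; u < lf.nu; ++u) {
                for (int v = 0; v < lf.nv; ++v) {
                    const int ss = static_cast<int>(std::lround(s + (u - cu) * shift));
                    const int tt = static_cast<int>(std::lround(t + (v - cv) * shift));
                    sum += lf.at(u, v, ss, tt);
                }
            }
            image[static_cast<std::size_t>(s) * lf.nt + t] = sum / (lf.nu * lf.nv);
        }
    }
    return image;
}

int main() {
    // Toy light field: a bright square whose position drifts with (u, v),
    // mimicking the parallax of an object away from the focal plane.
    LightField lf{5, 5, 32, 32, {}};
    lf.data.assign(static_cast<std::size_t>(lf.nu) * lf.nv * lf.ns * lf.nt, 0.0f);
    for (int u = 0; u < lf.nu; ++u)
        for (int v = 0; v < lf.nv; ++v)
            for (int s = 12 + (u - 2); s < 20 + (u - 2); ++s)
                for (int t = 12 + (v - 2); t < 20 + (v - 2); ++t)
                    lf.data[((static_cast<std::size_t>(u) * lf.nv + v) * lf.ns + s) * lf.nt + t] = 1.0f;

    const std::vector<float> img = refocus(lf, 2.0f);
    std::printf("center pixel after refocusing: %.3f\n", img[16 * lf.nt + 16]);
    return 0;
}
```

Nearest-neighbor resampling keeps the sketch short; bilinear interpolation in (s, t), and weighting the samples by an aperture function, would give smoother refocused images.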
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-102-1.pdf (Restricted Access) | 5.94 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
