  1. NTU Theses and Dissertations Repository
  2. College of Electrical Engineering and Computer Science
  3. Graduate Institute of Electronics Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/77282
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 簡韶逸 | zh_TW
dc.contributor.advisor | Shao-Yi Chien | en
dc.contributor.author | 李洺曦 | zh_TW
dc.contributor.author | Ming-Hsi Lee | en
dc.date.accessioned | 2021-07-10T21:54:04Z | -
dc.date.available | 2024-07-31 | -
dc.date.copyright | 2019-08-22 | -
dc.date.issued | 2019 | -
dc.date.submitted | 2002-01-01 | -
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/77282 | -
dc.description.abstract | In recent years, as Virtual Reality (VR) and Augmented Reality (AR) technologies have grown increasingly popular, the demand for high-resolution, high-refresh-rate rendering systems has drawn growing attention to foveated rendering. This technique exploits properties of the human visual system: it concentrates computational resources at the center of gaze (the fovea) to keep that region sharp while lowering image quality in the periphery, thereby reducing the overall rendering cost.
Many methods focus on subsampling geometry and performing fragment shading at a lower resolution. However, the subsequent screen-space upsampling often introduces blur in the peripheral region that users can perceive, which prevents these methods from lowering the shading rate further. Moreover, most state-of-the-art real-time rendering systems today compute global illumination, which foveated rendering has not fully taken into account.
This thesis draws an analogy between foveated rendering and global illumination and proposes the first foveated rendering system that approximates illumination with a geometry-aware reconstruction method. We focus on global illumination, which often dominates rendering cost. The core is a pyramid of buffers that stores geometry information sampled according to the user's gaze position. With this pyramid, diffuse global illumination can be reconstructed from a low-resolution shaded image while preserving sharpness at the center of gaze. Finally, direct and indirect illumination are blended into the final frame. Experimental results show that the proposed algorithm achieves a 6x to 8x speedup with minimal visual loss. | zh_TW
dc.description.abstract | As applications of Virtual Reality and Augmented Reality (VR/AR) grow popular, foveated rendering, a rendering scheme that reduces rendering cost by exploiting properties of the human visual system, is attracting increasing attention, especially for high-resolution, high-refresh-rate VR/AR systems. It reduces rendering time by allocating most computational resources to the fovea to preserve fidelity there while decreasing image quality in the periphery.
Many methods focus on subsampling geometry and running fragment shading at a relatively low rate. However, they cannot reduce the shading rate more aggressively, because the subsequent screen-space upsampling blurs the peripheral region, and this artifact makes the rendered result distinguishable from the original for users. Moreover, a large portion of state-of-the-art real-time rendering systems involve real-time global illumination, which existing foveated rendering algorithms do not fully consider.
In this thesis, we draw an analogy between foveated rendering and global illumination and propose the first foveated rendering algorithm based on a geometry-aware reconstruction strategy for approximating illumination. We put our emphasis on the computation of global illumination, which usually dominates rendering performance. Our key component is a pyramid of buffers storing geometry information sampled according to the user's gaze position. With this pyramid, we can reconstruct diffuse global illumination from a low-resolution shaded image while keeping fidelity at the fovea. In the end, direct illumination is blended with indirect illumination to form the entire frame. Our experiments show a 6x-8x speedup in rendering time with minimal perceptual loss of detail. | en
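The gaze-contingent idea in the abstract — full shading rate at the fovea, progressively coarser pyramid levels toward the periphery — can be illustrated with a small sketch. Nothing below comes from the thesis itself: the constants (`deg_per_px`, `fovea_deg`, one extra level per doubling of eccentricity) are hypothetical placeholders for whatever mapping the actual system uses to assign a pyramid level to each pixel.

```python
import math

def pyramid_level(px, py, gaze, deg_per_px=0.04, fovea_deg=5.0, max_level=3):
    """Map a pixel's angular distance from the gaze point to a coarseness
    level: 0 (full shading rate) inside the fovea, increasing toward the
    periphery. All constants are illustrative, not from the thesis."""
    # Angular eccentricity of the pixel, assuming a fixed degrees-per-pixel scale.
    ecc = math.hypot(px - gaze[0], py - gaze[1]) * deg_per_px
    if ecc <= fovea_deg:
        return 0
    # One extra level each time eccentricity doubles past the foveal radius.
    return min(max_level, int(math.log2(ecc / fovea_deg)) + 1)
```

In a real pipeline, the returned level would select which buffer of the geometry pyramid a pixel samples from: level-0 pixels are shaded at full resolution, while higher levels reuse coarser shading that is later upsampled.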
dc.description.provenance | Made available in DSpace on 2021-07-10T21:54:04Z (GMT). No. of bitstreams: 1; ntu-108-R06943006-1.pdf: 29818145 bytes, checksum: 6d7f932d000fb0b919682f838ff6ef86 (MD5); Previous issue date: 2019 | en
dc.description.tableofcontents | Abstract (P.i)
List of Figures (P.v)
List of Tables (P.ix)
1 Introduction (P.1)
2 Related Work (P.4)
2.1 Human Visual System (P.4)
2.2 Foveated Rendering (P.6)
2.3 Low-Resolution Approximation with Illumination Upsampling (P.11)
3 Point-Based Global Illumination (P.13)
3.1 Analogy to Global Illumination Estimation (P.13)
3.2 Hierarchical Structure (P.14)
4 Proposed Algorithm (P.17)
4.1 Separation of Indirect Illumination (P.17)
4.2 Geometry Subsampling (P.18)
4.2.1 Deferred Shading (P.18)
4.2.2 Gaze-Contingent Subsampling (P.19)
4.2.3 Pyramid Construction (P.20)
4.3 Geometry-Aware Illumination Reconstruction (P.22)
4.4 Upsampling (P.24)
4.4.1 Illuminance Gathering (P.24)
4.4.2 Convolution (P.25)
4.5 Final Blending (P.26)
5 Performance Evaluation (P.27)
5.1 User Study (P.28)
5.1.1 Setup (P.28)
5.1.2 Procedure (P.28)
5.1.3 Participants (P.30)
5.1.4 Result (P.30)
5.2 Performance (P.32)
5.2.1 Time Consumption (P.32)
5.2.2 Acceleration (P.32)
5.2.3 Scalability (P.35)
5.2.4 Disparity Threshold (P.36)
5.2.5 Stability (P.37)
6 Limitations (P.40)
6.1 Parameters Setting (P.40)
6.2 Temporal and Spatial Aliasing (P.40)
6.3 Memory Usage (P.41)
7 Conclusion (P.42)
Reference (P.43)
-
dc.language.iso | en | -
dc.subject | 全局光照 (Global Illumination) | zh_TW
dc.subject | 互動實時渲染 (Interactive Real-time Rendering) | zh_TW
dc.subject | 虛擬實境 (Virtual Reality) | zh_TW
dc.subject | 視線局部 (Gaze-Contingent) | zh_TW
dc.subject | 視點渲染 (Foveated Rendering) | zh_TW
dc.subject | 知覺 (Perception) | zh_TW
dc.subject | Foveated Rendering | en
dc.subject | Gaze-Contingent | en
dc.subject | Perception | en
dc.subject | Interactive Real-time Rendering | en
dc.subject | Global Illumination | en
dc.subject | Virtual Reality | en
dc.title | 基於幾何感知重建法之眼神局部光照近似系統 | zh_TW
dc.title | Gaze-Contingent Illumination Approximation with Geometry-Aware Reconstruction | en
dc.type | Thesis | -
dc.date.schoolyear | 107-2 | -
dc.description.degree | 碩士 (Master) | -
dc.contributor.oralexamcommittee | 李潤容;張鈞法;張家銘 | zh_TW
dc.contributor.oralexamcommittee | Ruen-Rone Lee;Chun-Fa Chang;Chia-Ming Chang | en
dc.subject.keyword | 視點渲染,視線局部,知覺,互動實時渲染,全局光照,虛擬實境 | zh_TW
dc.subject.keyword | Foveated Rendering,Gaze-Contingent,Perception,Interactive Real-time Rendering,Global Illumination,Virtual Reality | en
dc.relation.page | 46 | -
dc.identifier.doi | 10.6342/NTU201902710 | -
dc.rights.note | 未授權 (not authorized for public access) | -
dc.date.accepted | 2019-08-12 | -
dc.contributor.author-college | 電機資訊學院 (College of Electrical Engineering and Computer Science) | -
dc.contributor.author-dept | 電子工程學研究所 (Graduate Institute of Electronics Engineering) | -
Appears in Collections: Graduate Institute of Electronics Engineering (電子工程學研究所)

Files in This Item:
File | Size | Format
ntu-107-2.pdf (not authorized for public access) | 29.12 MB | Adobe PDF

