NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science › Department of Computer Science and Information Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69404

Full metadata record (DC field: value [language]):
dc.contributor.advisor: 徐慰中 (Wei-Chung Hsu)
dc.contributor.author: Liang-Chi Tseng [en]
dc.contributor.author: 曾亮齊 [zh_TW]
dc.date.accessioned: 2021-06-17T03:14:50Z
dc.date.available: 2018-07-19
dc.date.copyright: 2018-07-19
dc.date.issued: 2018
dc.date.submitted: 2018-07-09
dc.identifier.citation:
[1] Google, “Cardboard,” https://vr.google.com/cardboard/.
[2] A. Isaksen, L. McMillan, and S. J. Gortler, “Dynamically reparameterized light fields,” in Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 297–306, ACM Press/Addison-Wesley Publishing Co., 2000.
[3] P. Gautron, M. Droske, C. Wächter, L. Kettner, A. Keller, N. Binder, and K. Dahm, “Path space similarity determined by Fourier histogram descriptors,” in ACM SIGGRAPH 2014 Talks, p. 39, ACM, 2014.
[4] A. Keller, C. Wächter, M. Raab, D. Seibert, D. van Antwerpen, J. Korndörfer, and L. Kettner, “The Iray light transport simulation and rendering system,” in ACM SIGGRAPH 2017 Talks, p. 34, ACM, 2017.
[5] C. Fehn, “Depth-image-based rendering (DIBR), compression, and transmission for a new approach on 3D-TV,” 2004.
[6] X. Yu, R. Wang, and J. Yu, “Real-time depth of field rendering via dynamic light field generation and filtering,” Computer Graphics Forum, vol. 29, no. 7, pp. 2099–2107, 2010.
[7] C. Zhu, Y. Zhao, L. Yu, and M. Tanimoto, 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges. Springer, 2013.
[8] J. Li, M. Lu, and Z. N. Li, “Continuous depth map reconstruction from light fields,” IEEE Trans. Image Processing, vol. 24, pp. 3257–3265, Nov 2015.
[9] S. Li, C. Zhu, and M. T. Sun, “Hole filling with multiple reference views in DIBR view synthesis,” IEEE Trans. Multimedia, vol. PP, no. 99, pp. 1–1, 2018.
[10] K. Bala, B. Walter, and D. P. Greenberg, “Combining edges and points for interactive high-quality rendering,” ACM Trans. Graph., vol. 22, pp. 631–640, July 2003.
[11] E. H. Adelson, J. R. Bergen, et al., “The plenoptic function and the elements of early vision,” 1991.
[12] L. McMillan and G. Bishop, “Plenoptic modeling: An image-based rendering system,” in Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, pp. 39–46, ACM, 1995.
[13] M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pp. 31–42, ACM, 1996.
[14] M. Slater, J. Mortensen, P. Khanna, and I. Yu, “A virtual light field approach to global illumination,” in Computer Graphics International, 2004. Proceedings, pp. 102–109, IEEE, 2004.
[15] J. Lehtinen, T. Aila, S. Laine, and F. Durand, “Reconstructing the indirect light field for global illumination,” ACM Trans. Graph., vol. 31, no. 4, p. 51, 2012.
[16] M. McGuire, M. Mara, D. Nowrouzezahrai, and D. Luebke, “Real-time global illumination using precomputed light field probes,” in Proceedings of the 21st ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, p. 2, ACM, 2017.
[17] S. Boulos, I. Wald, and C. Benthin, “Adaptive ray packet reordering,” in Interactive Ray Tracing, 2008. RT 2008. IEEE Symposium on, pp. 131–138, IEEE, 2008.
[18] K. Garanzha and C. Loop, “Fast ray sorting and breadth-first packet traversal for GPU ray tracing,” in Computer Graphics Forum, vol. 29, pp. 289–298, Wiley Online Library, 2010.
[19] J. Günther, S. Popov, H.-P. Seidel, and P. Slusallek, “Realtime ray tracing on GPU with BVH-based packet traversal,” in 2007 IEEE Symposium on Interactive Ray Tracing, pp. 113–118, IEEE, 2007.
[20] J. Hermes, N. Henrich, T. Grosch, and S. Mueller, “Global illumination using parallel global ray-bundles.,” in VMV, pp. 65–72, 2010.
[21] Y. Tokuyoshi, T. Sekine, T. da Silva, and T. Kanai, “Adaptive ray-bundle tracing with memory usage prediction: Efficient global illumination in large scenes,” in Computer Graphics Forum, vol. 32, pp. 315–324, Wiley Online Library, 2013.
[22] J. Novák, V. Havran, and C. Dachsbacher, “Path regeneration for interactive path tracing,” Proc EUROGRAPHICS Short Papers, 2010.
[23] I. Wald, “Active thread compaction for GPU path tracing,” in Proceedings of the ACM SIGGRAPH Symposium on High Performance Graphics, pp. 51–58, ACM, 2011.
[24] J. S. Lee, “Digital image enhancement and noise filtering by use of local statistics,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. PAMI-2, pp. 165–168, March 1980.
[25] T. Ritschel, T. Engelhardt, T. Grosch, H.-P. Seidel, J. Kautz, and C. Dachsbacher, “Micro-rendering for scalable, parallel final gathering,” ACM Trans. Graph., vol. 28, pp. 132:1–132:8, Dec. 2009.
[26] H. Dammertz, D. Sewtz, J. Hanika, and H. P. A. Lensch, “Edge-avoiding À-Trous wavelet transform for fast global illumination filtering,” in Proceedings of the Conference on High Performance Graphics, HPG ’10, (Aire-la-Ville, Switzerland), pp. 67–75, Eurographics Association, 2010.
[27] P. Shirley, T. Aila, J. Cohen, E. Enderton, S. Laine, D. Luebke, and M. McGuire, “A local image reconstruction algorithm for stochastic rendering,” in Symposium on Interactive 3D Graphics and Games, I3D ’11, (New York, NY, USA), pp. 9–14, ACM, 2011.
[28] C. R. A. Chaitanya, A. S. Kaplanyan, C. Schied, M. Salvi, A. Lefohn, D. Nowrouzezahrai, and T. Aila, “Interactive reconstruction of Monte Carlo image sequences using a recurrent denoising autoencoder,” ACM Trans. Graph., vol. 36, pp. 98:1–98:12, July 2017.
[29] S. Bako, T. Vogels, B. Mcwilliams, M. Meyer, J. Novák, A. Harvill, P. Sen, T. Derose, and F. Rousselle, “Kernel-predicting convolutional networks for denoising Monte Carlo renderings,” ACM Trans. Graph., vol. 36, pp. 97:1–97:14, July 2017.
[30] F. Rousselle, C. Knaus, and M. Zwicker, “Adaptive sampling and reconstruction using greedy error minimization,” ACM Trans. Graph., vol. 30, pp. 159:1–159:12, Dec. 2011.
[31] X. Liu, H. Sun, and E. Wu, “A hybrid method of image synthesis in IBR for novel viewpoints,” in Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST ’00, (New York, NY, USA), pp. 55–60, ACM, 2000.
[32] W.-Y. Chen, Y.-L. Chang, S.-F. Lin, L.-F. Ding, and L.-G. Chen, “Efficient depth image based rendering with edge dependent depth filter and interpolation,” in 2005 IEEE International Conference on Multimedia and Expo, pp. 1314–1317, July 2005.
[33] P. Ramanathan, M. Kalman, and B. Girod, “Rate-distortion optimized interactive light field streaming,” IEEE Trans. Multimedia, vol. 9, pp. 813–825, June 2007.
[34] C. Birklbauer, S. Opelt, and O. Bimber, “Rendering Gigaray Light Fields,” Computer Graphics Forum, 2013.
[35] T. Wiegand, G. J. Sullivan, G. Bjontegaard, and A. Luthra, “Overview of the H.264/AVC video coding standard,” IEEE Trans. Circuits and Systems for Video Technology, vol. 13, pp. 560–576, July 2003.
[36] Google, “Protocol buffers,” https://developers.google.com/protocol-buffers/.
[37] AMD and GPUOpen, “Radeon-rays,” https://gpuopen.com/gaming-product/radeonrays/.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69404
dc.description.abstract: With the rapid growth of Virtual Reality (VR) and Augmented Reality (AR), real-time rendering with global illumination effects has become a major research focus. However, today's client devices, such as smartphones, still cannot afford the massive computation of global illumination algorithms; ray tracing, for example, must compute millions of rays per scene and therefore cannot run in real time on client devices. To address this, some researchers have proposed Light Field rendering to support display on client devices: the light field images can be precomputed, transmitted to the client, and color-sampled in real time to produce the displayed frame. Besides giving the user free-viewpoint display, this also provides camera effects such as depth of field and refocusing. To generate these light field images quickly and efficiently, we propose a light field rendering algorithm that combines DIBR (Depth-Image-Based Rendering) with ray tracing. Through a dynamic error-detection and feedback mechanism, we strike the best balance between traditional ray tracing and DIBR. Furthermore, to exploit the pixels shared among light field images for additional speedup, we introduce a multi-level rendering scheme. To demonstrate the feasibility of the idea, we implemented a prototype cloud-based light field rendering system comprising a server and a client. Experiments confirm that with our new method the rendering system achieves up to a 224% speedup on simple scenes such as the Cornell Box, and over 100% even on complex scenes such as the Conference Room and Sponza Palace. [zh_TW]
dc.description.abstract: Real-time global illumination rendering is highly desirable for emerging applications such as Virtual Reality (VR) and Augmented Reality (AR). However, client devices have difficulty supporting photo-realistic rendering techniques such as ray tracing due to insufficient computing resources. Many modern frameworks therefore adopt Light Field rendering to drive the client display: a Light Field can be precomputed and stored in the cloud, and at runtime the display extracts colors from it to generate arbitrary viewpoints, or to refocus, in real time within a predefined area. To compute the Light Field efficiently, we combine DIBR (Depth-Image-Based Rendering) and traditional ray tracing in an adaptive fashion to synthesize images: by measuring color errors at runtime, we adaptively determine the right balance between DIBR and ray tracing. To further improve computational efficiency, we add a multi-level design that exploits the degree of shareable pixels among images to control the computation spent on error removal. To demonstrate the idea, we implemented a cloud-based Light Field rendering system with a viewer application. With our approach we reach similar quality with far fewer ray samples: experiments show up to a 224% speedup in Light Field generation for relatively simple scenes such as the Cornell Box, and about a 100% speedup for complex scenes such as the Conference Room and the Sponza Palace. [en]
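The adaptive loop described in the abstract (warp cheaply with DIBR, measure the color error at runtime, and spend ray-tracing effort only where the error is too high) can be illustrated with a minimal, self-contained sketch. Everything below is an assumption made for exposition, not the thesis's actual implementation: the 1-D "scene", the probe-based error estimate, and all names (ray_trace, dibr_warp, ERROR_THRESHOLD, PROBE_STRIDE) are hypothetical.

    # Minimal sketch (Python/NumPy) of adaptive DIBR + ray tracing on a 1-D view.
    import numpy as np

    WIDTH = 256              # pixels per (1-D) light field view
    ERROR_THRESHOLD = 0.02   # assumed color-error tolerance
    PROBE_STRIDE = 16        # trace one probe ray per 16 pixels

    def ray_trace(view_x, pixels):
        """Stand-in for the expensive renderer: a 1-D 'scene' that is
        view-independent except for a specular band around u = 0.45."""
        u = pixels / WIDTH
        diffuse = 0.5 + 0.5 * np.sin(8.0 * np.pi * u)
        specular = np.where((u > 0.4) & (u < 0.5), view_x, 0.0)
        return diffuse + specular

    def dibr_warp(ref_image):
        """Stand-in for depth-image-based warping: naively reuse the
        reference view's colors, which is wrong wherever shading is
        view-dependent (or, in a real scene, disoccluded)."""
        return ref_image.copy()

    def render_adaptive(view_x, ref_image):
        pixels = np.arange(WIDTH)
        color = dibr_warp(ref_image)                   # cheap pass
        # A few sparse probe rays give a measurable per-region error estimate.
        probes = pixels[::PROBE_STRIDE]
        probe_err = np.abs(color[probes] - ray_trace(view_x, probes))
        error = np.repeat(probe_err, PROBE_STRIDE)[:WIDTH]
        # Feedback: ray-trace only the regions the estimate flags as wrong.
        redo = error > ERROR_THRESHOLD
        color[redo] = ray_trace(view_x, pixels[redo])  # expensive pass
        return color, redo.mean()

    ref = ray_trace(0.0, np.arange(WIDTH))   # one fully ray-traced reference view
    img, frac = render_adaptive(0.2, ref)    # a nearby view, mostly warped
    print(f"re-traced {frac:.0%} of the pixels instead of 100%")

Running the sketch re-traces only the probe block covering the view-dependent band (about 6% of the pixels here); in the thesis, the variance detection and feedback system plays this error-estimation role at full scale, across the levels of the multi-level renderer.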
dc.description.provenance: Made available in DSpace on 2021-06-17T03:14:50Z (GMT). No. of bitstreams: 1; ntu-107-R05922035-1.pdf: 30330424 bytes, checksum: 991e2c75c7d5fa0cf28ec34eb66268b9 (MD5); Previous issue date: 2018 [en]
dc.description.tableofcontents:
口試委員會審定書 (Oral Examination Committee Certification) iii
誌謝 (Acknowledgements, in Chinese) v
Acknowledgements vii
摘要 (Abstract, in Chinese) ix
Abstract xi
1 Introduction 1
2 Related Works 7
2.1 Light Field 7
2.2 Ray-Tracing and Acceleration 7
2.3 Image Denoising 8
2.4 DIBR and 3DTV 9
2.5 Client-server Light Field Rendering 10
3 Design and Algorithm 13
3.1 System Overview 13
3.1.1 Server 13
3.1.2 Client 15
3.2 Adaptive Light Field Rendering System 15
3.2.1 Overview 15
3.2.2 Standard Rendering 18
3.2.3 Sample Sharing 18
3.2.4 Multi-level Rendering 25
3.2.5 Variance Detection and Feedback System 42
3.2.6 Task Scheduling 47
3.3 Image Compression and Streaming 59
3.4 Light Field Display and Cardboard Integration 61
3.5 Interaction with the Scene 64
4 Performance Evaluation 77
5 Conclusion 85
Bibliography 87
dc.language.iso: en
dc.subject: 自由視角顯示 (free viewpoint display) [zh_TW]
dc.subject: 光線追蹤 (ray tracing) [zh_TW]
dc.subject: 雲端渲染 (cloud rendering) [zh_TW]
dc.subject: 深度圖影像渲染 (depth-image-based rendering) [zh_TW]
dc.subject: 全域照明 (global illumination) [zh_TW]
dc.subject: 光場渲染 (light field rendering) [zh_TW]
dc.subject: Global Illumination [en]
dc.subject: Ray-Tracing [en]
dc.subject: Light Field [en]
dc.subject: Depth-Image-Based Rendering [en]
dc.subject: Cloud-Based Computation [en]
dc.subject: Free Viewpoint Display [en]
dc.title: 使用可適性多層次渲染與去雜訊之高效雲端虛擬光場渲染系統 (Efficient cloud-based virtual light field rendering system using adaptive multi-level rendering and denoising) [zh_TW]
dc.title: Efficient Cloud-based Synthetic Light Field Rendering System with Adaptive Multi-level Sampling and Filtering [en]
dc.type: Thesis
dc.date.schoolyear: 106-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 張鈞法 (Chun-Fa Chang), 吳真貞 (Jan-Jan Wu), 洪鼎詠 (Ding-Yong Hong)
dc.subject.keyword: 全域照明, 光線追蹤, 光場渲染, 深度圖影像渲染, 雲端渲染, 自由視角顯示 [zh_TW]
dc.subject.keyword: Global Illumination, Ray-Tracing, Light Field, Depth-Image-Based Rendering, Cloud-Based Computation, Free Viewpoint Display [en]
dc.relation.page: 90
dc.identifier.doi: 10.6342/NTU201801384
dc.rights.note: 有償授權 (paid license required)
dc.date.accepted: 2018-07-10
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File | Size | Format
ntu-107-1.pdf (restricted access; not publicly available) | 29.62 MB | Adobe PDF


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.
