DSpace

The DSpace institutional repository preserves digital materials of all kinds (e.g., text, images, PDFs) and makes them easy to access.

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85063
Full metadata record (DC field: value [language])
dc.contributor.advisor: 陳宏銘 (Homer H. Chen)
dc.contributor.author: Ting-Yu Huang [en]
dc.contributor.author: 黃庭宇 [zh_TW]
dc.date.accessioned: 2023-03-19T22:41:17Z
dc.date.copyright: 2022-09-30
dc.date.issued: 2022
dc.date.submitted: 2022-09-27
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85063
dc.description.abstract: Image-guided medical procedures are minimally invasive, highly accurate, and cause few complications, and in recent years they have been widely adopted in clinical practice; combining augmented reality (AR) technology with image-guided procedures can further improve surgical performance. When viewing a conventional AR display, however, the operating surgeon suffers from the vergence-accommodation conflict and experiences dizziness and discomfort. Light field AR displays do not induce the vergence-accommodation conflict and can therefore provide the most comfortable viewing experience. In this thesis, we convert conventional computed tomography (CT) data into light fields so that surgical guidance content can be correctly presented on a light field AR display. Following established conventions for displaying medical images, we design a transfer function for surgical guidance content, so that 3-D renderings generated from CT images can be presented on a light field AR display while allowing physicians to freely adjust the appearance of the guidance content. In addition, because processing speed is critical, we propose an adaptive sampling method for 3-D rendering of unstructured volume data that reduces the number of samples collected; compared with uniform sampling, it produces higher-quality renderings in less processing time. Finally, we validate the proposed light field generation method on the near-eye light field AR glasses previously developed in our laboratory. The results show that our method can efficiently convert CT data into light fields and provide physicians with high-quality surgical guidance content. [zh_TW]
dc.description.abstract: Augmented reality (AR) technology has received considerable attention for image-guided medical procedures because it can make surgeries and therapies less invasive, more precise, and safer. However, due to the vergence-accommodation conflict (VAC), conventional AR displays can easily cause visual discomfort or eyestrain to surgeons during medical procedures. Light field AR displays are free of the VAC and are regarded as the ultimate display because they provide a natural and comfortable visual experience by reproducing the light rays of the virtual object. In this thesis, we propose a systematic solution that converts traditional 3-D medical data to a format suitable for a light field display. Our solution consists of a preprocessing module that converts tomographic data from the DICOM format to a 3-D volumetric data format and a volume rendering technique that efficiently generates high-quality light field content. To generate virtual objects suitable for medical procedures, we propose a window transfer function that complies with the traditional display of medical data and allows users to freely adjust the contrast, brightness, and transparency to enhance the appearance of the organs under examination. To improve computational efficiency, we propose a novel adaptive sampling scheme that reduces the number of samples required for medical 3-D rendering. Our experiments quantitatively and qualitatively verify the effectiveness of the proposed adaptive sampling scheme, which performs favorably against the common linear sampling scheme. Finally, we verify the effectiveness of the proposed solution by displaying the resulting light fields on a near-eye light field display prototype. The results show that our approach to medical data conversion is suitable for real-world light field displays. [en]
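The abstracts above describe two techniques concretely enough to sketch: a windowing transfer function in the style of conventional radiological display, and adaptive sampling during ray marching of a CT volume. Below is a minimal NumPy sketch of how these two ideas typically fit together; it is not the thesis implementation (which targets GPU rendering), and the function names, the soft-tissue window values, and the opacity-threshold step-size heuristic are illustrative assumptions only.

```python
# Hypothetical sketch: DICOM-style windowing plus adaptive-step ray marching.
# Names and parameters are illustrative, not taken from the thesis.
import numpy as np

def window_transfer(hu, center=40.0, width=400.0):
    """Map Hounsfield units to [0, 1] with the usual window center/width rule
    (40/400 is a typical soft-tissue window)."""
    lo = center - width / 2.0
    return np.clip((hu - lo) / width, 0.0, 1.0)

def sample_volume(volume, p):
    """Nearest-neighbor lookup of point p given in voxel coordinates."""
    i = np.clip(np.round(p).astype(int), 0, np.array(volume.shape) - 1)
    return volume[tuple(i)]

def march_ray(volume, origin, direction, t_max,
              coarse_step=4.0, fine_step=0.5, opacity_scale=0.02):
    """Front-to-back compositing with a simple adaptive step size:
    stride through nearly transparent regions and refine the step
    where the windowed opacity becomes significant."""
    color, alpha, t = 0.0, 0.0, 0.0
    while t < t_max and alpha < 0.99:        # early ray termination
        hu = sample_volume(volume, origin + t * direction)
        s = window_transfer(hu)              # windowed intensity in [0, 1]
        a = opacity_scale * s                # toy opacity mapping
        step = fine_step if s > 0.05 else coarse_step
        a = 1.0 - (1.0 - a) ** step          # opacity correction for step size
        color += (1.0 - alpha) * a * s       # grayscale emission = intensity
        alpha += (1.0 - alpha) * a
        t += step
    return color, alpha

# Usage: render one ray through a synthetic 64^3 CT-like volume.
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 50.0, (64, 64, 64))    # fake HU values around water
c, a = march_ray(vol, origin=np.array([0.0, 32.0, 32.0]),
                 direction=np.array([1.0, 0.0, 0.0]), t_max=64.0)
print(f"composited intensity {c:.3f}, opacity {a:.3f}")
```

The opacity correction (raising transparency to the power of the step length) is what lets an adaptive marcher vary its step size without biasing the composited result, and early ray termination is the usual companion optimization; both are standard volume-rendering devices rather than specifics of the thesis.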
dc.description.provenance: Made available in DSpace on 2023-03-19T22:41:17Z (GMT). No. of bitstreams: 1; U0001-2308202223421300.pdf: 2166348 bytes, checksum: e7b4f51c21f77494fd1f2c35e83d1dd1 (MD5); Previous issue date: 2022 [en]
dc.description.tableofcontents: Acknowledgments; Chinese Abstract; Abstract; Contents; List of Figures; List of Tables; Chapter 1 Introduction; Chapter 2 Related Work (2.1 3-D Displays, 2.2 Medical 3-D Rendering, 2.3 Adaptive Sampling); Chapter 3 Light Field Rendering of Medical Data (3.1 Datasets, 3.2 Data Preprocessing, 3.3 Ray Marching, 3.4 Windowing Transfer Function, 3.5 Adaptive Sampling); Chapter 4 Results (4.1 Evaluation of Windowing Transfer Function, 4.2 Evaluation of Adaptive Sampling, 4.3 Evaluation on a Near-Eye Light Field AR Display); Chapter 5 Conclusion; References
dc.language.iso: en
dc.subject: 光場顯示器 (light field display) [zh_TW]
dc.subject: 影像導引醫療處置 (image-guided medical procedure) [zh_TW]
dc.subject: 自適應性採樣 (adaptive sampling) [zh_TW]
dc.subject: 醫學三維渲染 (medical 3-D rendering) [zh_TW]
dc.subject: 擴增實境 (augmented reality) [zh_TW]
dc.subject: adaptive sampling [en]
dc.subject: Image-guided medical procedure [en]
dc.subject: augmented reality [en]
dc.subject: light field display [en]
dc.subject: medical 3-D rendering [en]
dc.title: 三維醫學資料之光場顯示 (Light Field Display of 3-D Medical Data) [zh_TW]
dc.title: Light Field Display of 3-D Medical Data [en]
dc.type: Thesis
dc.date.schoolyear: 110-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 鍾孝文 (Hsiao-Wen Chung), 黃升龍 (Sheng-Lung Huang), 林晃巖 (Hoang Yan Lin), 陳世杰 (Shyh-Jye Chen)
dc.subject.keyword: 影像導引醫療處置 (image-guided medical procedure), 擴增實境 (augmented reality), 光場顯示器 (light field display), 醫學三維渲染 (medical 3-D rendering), 自適應性採樣 (adaptive sampling) [zh_TW]
dc.subject.keyword: Image-guided medical procedure, augmented reality, light field display, medical 3-D rendering, adaptive sampling [en]
dc.relation.page: 38
dc.identifier.doi: 10.6342/NTU202202733
dc.rights.note: License granted (access restricted to campus)
dc.date.accepted: 2022-09-28
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電信工程學研究所 (Graduate Institute of Communication Engineering) [zh_TW]
dc.date.embargo-lift: 2022-09-30
Appears in collections: 電信工程學研究所 (Graduate Institute of Communication Engineering)

Files in this item:
File: U0001-2308202223421300.pdf | Size: 2.12 MB | Format: Adobe PDF
Access is restricted to NTU campus IP addresses (use the VPN service from off campus).

Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
