Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86677

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 施吉昇(Chi-Sheng Shih) | |
| dc.contributor.author | Yu-Shian Lin | en |
| dc.contributor.author | 林昱賢 | zh_TW |
| dc.date.accessioned | 2023-03-20T00:10:41Z | - |
| dc.date.copyright | 2022-08-08 | |
| dc.date.issued | 2022 | |
| dc.date.submitted | 2022-08-03 | |
| dc.identifier.citation | S. Sakata, P. M. Grove, and A. R. L. Stevenson, "Effect of 3-Dimensional Vision on Surgeons Using the da Vinci Robot for Laparoscopy: More Than Meets the Eye," JAMA Surgery, vol. 151, no. 9, pp. 793–794, Sep. 2016. [Online]. Available: https://doi.org/10.1001/jamasurg.2016.0412 K. Dawson-Howe and D. Vernon, "Simple pinhole camera calibration," International Journal of Imaging Systems and Technology, vol. 5, pp. 1–6, 2005. Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000. F. Remondino and C. Fraser, "Digital camera calibration methods: considerations and comparisons," H.-G. Maas and D. Schneider, Eds., vol. XXXVI, no. 5. Rio de Janeiro: ISPRS, 2006, pp. 266–272, ISPRS Commission V Symposium 'Image Engineering and Vision Metrology'; Conference Location: Dresden, Germany; Conference Date: September 25–27, 2006. C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. AVC, 1988, pp. 23.1–23.6, doi:10.5244/C.2.23. I. Martynov, J.-K. Kamarainen, and L. Lensu, "Projector calibration by 'inverse camera calibration'," in Image Analysis, A. Heyden and F. Kahl, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 536–544. D. Moreno and G. Taubin, "Simple, accurate, and robust projector-camera calibration," in 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, 2012, pp. 464–471. G. Falcao, N. Hurtos, and J. Massich, "Plane-based calibration of a projector-camera system," VIBOT master, vol. 9, no. 1, pp. 1–12, 2008. F. Sadlo, T. Weyrich, R. Peikert, and M. Gross, "A practical structured light acquisition system for point-based geometry and texture," in Proceedings Eurographics/IEEE VGTC Symposium Point-Based Graphics, 2005, pp. 89–145. M. Kimura, M. Mochimaru, and T. Kanade, "Projector calibration using arbitrary planes and calibrated camera," in 2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007, pp. 1–2. Z. Song and R. C. K. Chung, "Grid point extraction and coding for structured light system," Optical Engineering, vol. 50, no. 9, pp. 1–12, 2011. [Online]. Available: https://doi.org/10.1117/1.3615649 P. Vuylsteke and A. Oosterlinck, "Range image acquisition with a single binary-encoded light pattern," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 2, pp. 148–164, 1990. F. MacWilliams and N. Sloane, "Pseudo-random sequences and arrays," Proceedings of the IEEE, vol. 64, pp. 1715–1729, 1977. T. Tao, J. C. Koo, and H. R. Choi, "A fast block matching algorthim for stereo correspondence," in 2008 IEEE Conference on Cybernetics and Intelligent Systems, 2008, pp. 38–41. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vision, vol. 60, no. 2, pp. 91–110, Nov. 2004. [Online]. Available: https://doi.org/10.1023/B:VISI.0000029664.99615.94 S. Sakata, M. O. Watson, P. M. Grove, and A. R. Stevenson, "The Conflicting Evidence of Three-dimensional Displays in Laparoscopy: A Review of Systems Old and New," Ann Surg, vol. 263, no. 2, pp. 234–239, Feb. 2016. G. Kramida, "Resolving the vergence-accommodation conflict in head-mounted displays," IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 7, pp. 1912–1931, 2016. K. Khabarlak and L. Koriashkina, "Fast facial landmark detection and applications: A survey," 2021. [Online]. Available: https://arxiv.org/abs/2101.10808 J. Salvi, J. Pagès, and J. Batlle, "Pattern codification strategies in structured light systems," Pattern Recognit., vol. 37, pp. 827–849, 2004. J. Geng, "Structured-light 3d surface imaging: a tutorial," Adv. Opt. Photon., vol. 3, no. 2, pp. 128–160, Jun. 2011. [Online]. Available: http://opg.optica.org/aop/abstract.cfm?URI=aop-3-2-128 Z. Song, S. Tang, F. Gu, C. Shi, and J. Feng, "DOE-based structured-light method for accurate 3d sensing," Optics and Lasers in Engineering, vol. 120, pp. 21–30, 2019. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0143816618317408 H. Nguyen, Y. Wang, and Z. Wang, "Single-shot 3d shape reconstruction using structured light and deep convolutional neural networks," Sensors, vol. 20, no. 13, 2020. [Online]. Available: https://www.mdpi.com/1424-8220/20/13/3718 M. Halioua and H.-C. Liu, "Optical three-dimensional sensing by phase measuring profilometry," Optics and Lasers in Engineering, vol. 11, no. 3, pp. 185–215, 1989. [Online]. Available: https://www.sciencedirect.com/science/article/pii/0143816689900316 T. Jia, Y. Liu, X. Yuan, W. Li, D. Chen, and Y. Zhang, "Depth measurement based on a convolutional neural network and structured light," Measurement Science and Technology, vol. 33, no. 2, p. 025202, Dec. 2021. [Online]. Available: https://doi.org/10.1088/1361-6501/ac329d C. Schmalz, F. Forster, A. Schick, and E. Angelopoulou, "An endoscopic 3d scanner based on structured light," Medical Image Analysis, vol. 16, no. 5, pp. 1063–1072, 2012. R. Furukawa, G. Nagamatsu, S. Oka, T. Kotachi, Y. Okamoto, S. Tanaka, and H. Kawasaki, "Simultaneous shape and camera-projector parameter estimation for 3d endoscopic system using cnn-based grid-oneshot scan," Healthcare Technology Letters, vol. 6, 2019. Z. Song and C. Chung, "Determining both surface position and orientation in structured-light-based sensing," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1770–1780, Oct. 2010. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86677 | - |
| dc.description.abstract | 三維內視鏡比傳統內視鏡提供更強的深度感知資訊,因此受到廣泛討論。雖然在深度感知上已經有明顯改善,目前的三維內視鏡還未能提供精確的深度值,且長時間使用下容易因視覺輻輳調節衝突讓醫生感到頭暈及噁心等不適。為了改善上述情況,我們結合結構光及深度學習技術,提出一個設計給內視鏡的三維量測架構。本研究的三維量測架構包含自行開發的神經網路SLResnet,以及座標校準演算法。為了減少不可控因素,我們基於投影機-相機系統上開發及測試演算法,並為內視鏡的可能使用情境設計實驗。我們的方法透過階高塊的驗證,最大相對深度誤差可在0.396毫米以內。此方法亦能在深度變化70度以內進行穩定的預測。在反光存在,且假設待測物體的表面是連續的情境下,我們的方法能還原至多連續6格結構光圖案被反光遮蔽的區域。我們亦模擬重建咽喉內的上顎,結果顯示99.7百分比的三維點符合相對深度差1毫米的規範。基於上述實驗證明了將我們的結構光技術應用於內視鏡的可行性。 | zh_TW |
| dc.description.abstract | The 3D endoscope has been widely discussed due to its ability to provide more concrete depth information than the traditional 2D endoscope. While the 3D endoscope has significantly improved users' depth perception, it still does not give accurate depth values, and focusing on a 3D endoscopic system for a long time may cause symptoms such as dizziness and nausea resulting from the vergence-accommodation conflict. To address these problems, this work integrates the structured light technique with deep learning and proposes a 3D reconstruction framework for the endoscopic system. The framework includes a self-developed network, SLResnet, and a coordinate refinement algorithm. To minimize uncontrollable factors, this work developed the algorithms on a projector-camera system and designed several experiments to simulate potential endoscopy scenarios. Verified with a 2 mm gauge block, the maximum relative depth error of the method is 0.396 mm. The method reconstructs surfaces stably at steepness up to 70 degrees. Under reflection, and assuming that the target object's surface is continuous, the approach can recover an occluded area of at most 6 connected structured light grids. This work also simulated human upper jaw reconstruction, where over 99.7 percent of the 3D points met the required 1 mm relative depth error. These experiments show that it is feasible to incorporate the structured light technique into the endoscope system for improved depth perception. | en |
| dc.description.provenance | Made available in DSpace on 2023-03-20T00:10:41Z (GMT). No. of bitstreams: 1 U0001-1807202215345100.pdf: 31277161 bytes, checksum: e722dd5e04744b7ce1534cdc24c13bce (MD5) Previous issue date: 2022 | en |
| dc.description.tableofcontents | 口試委員會審定書 i 致謝 ix 摘要 x Abstract xi 1 Introduction 1 1.1 Motivation 1 1.2 Contribution 2 1.3 Thesis Organization 3 2 Background and Related Works 4 2.1 Background 4 2.1.1 Camera Model and Camera Calibration 4 2.1.2 Projector Calibration 6 2.1.3 Pseudo-Random Array 7 2.1.4 3D Sensing Technology 7 2.1.5 Stereo Endoscope 11 2.1.6 Keypoint Detection 11 2.2 Related Works 12 3 System Architecture and Problem Definition 14 3.1 System Architecture 14 3.1.1 3D Endoscope 14 3.1.2 Emulated 3D Endoscope Using Projector as Light Source 16 3.1.3 Hardware Specification 17 3.1.4 Assumptions 18 3.2 Problem Definition 20 3.3 Challenges of The Problem 21 4 Design and Implementation 22 4.1 Single-Shot Structured Light Pattern 22 4.2 3D Surface Reconstruction 25 4.3 Pattern Decoding 26 4.3.1 SLResnet 26 4.3.2 Dataset Generation 28 4.4 Coordinate Refinement 32 5 Experiment Evaluation 35 5.1 Experiment Setting and Performance Metrics 35 5.2 Relative Depth Accuracy Evaluation 45 5.3 Depth Map Resolution Analysis 45 5.4 Impact of The Surface Steepness 47 5.5 Robustness Against Reflection 49 5.6 3D Reconstruction of Object With Liquid 55 5.7 3D Reconstruction of Curved Surface 56 5.8 3D Reconstruction of Masks 56 6 Conclusion 58 Bibliography 59 Figures: 2.1 Planar checkerboard pattern. 6 2.2 Illustration of triangulation. 8 3.1 The system architecture of the 3D endoscope. 15 3.2 The specifications of the 3D endoscope. 15 3.3 Physical system setup. 16 3.4 Specifications of the projector-camera system. 17 3.5 FLIR GS3-U3-41C6NIR camera model. 17 3.6 Tokina TC1220-12MP camera lens. 18 3.7 NEC VT700 projector. 18 3.8 Image of structured light pattern under optical simulation. 19 4.1 Elements of structured light pattern. Note that the elements at bottom half are dots. 23 4.2 32×32 structured light pattern. 23 4.3 3D surface reconstruction workflow. 25 4.4 Architecture of SLResnet. 26 4.5 Residual blocks. 27 4.6 A record composed of an input image and a sequence of image coordinates of grid points. 29 4.7 Data collection overview. 29 4.8 (left) image of a row of grid points, (right) image of a column of grid points. 30 4.9 Cross-shaped kernel. 32 5.1 2 mm gauge block. 36 5.2 The grids deform under sharp depth difference. 38 5.3 Deformations under different steepness. 38 5.4 Illustration of the angle used in depth steepness experiment. 38 5.5 Base data used to simulate various levels of reflection. 39 5.6 Illustration of targeted object with/without liquid. 40 5.7 The Anatomical Nasal Cavity Throat Anatomy Model. The framed area is the upper jaw simulated in curved surface reconstruction experiment. 41 5.8 The forehead of the mask used to simulate human upper jaw. The framed area was reconstructed and evaluated. 41 5.9 The ground truth point cloud of the mask. 42 5.10 The error chart of the ICP process. 42 5.11 The human mask and santa mask to be reconstructed. The framed area was reconstructed and evaluated. 43 5.12 The ground truth of the human mask and santa mask. 44 5.13 The error chart of the ICP process of human mask. 44 5.14 The error chart of the ICP process of santa mask. 45 5.15 Result of 2 mm gauge block reconstruction. 46 5.16 Reconstruction results of various depth steepness scenes. The unit of the point cloud graphs is cm. 48 5.17 Partial reconstruction results of 4.5 cm diameter reflection with various opacity. The unit of the point cloud graphs is cm. 50 5.18 Reconstruction results of various sizes of reflection. The unit of the point cloud graphs is cm. 51 5.19 Reconstruction results of 3.5 cm diameter reflection with various opacity. The unit of the point cloud graphs is cm. 52 5.20 Reconstruction results of 4 cm diameter reflection with various opacity. The unit of the point cloud graphs is cm. 53 5.21 Reconstruction results of 4.5 cm diameter reflection with various opacity. The unit of the point cloud graphs is cm. 54 5.22 Input image of planar board with liquid. 55 5.23 Reconstructed planar board with liquid (mm). 55 5.24 The reconstructed center of the forehead (mm). 56 5.25 The reconstructed human mask (mm). 57 5.26 The reconstructed santa mask (mm). 57 Tables: 5.1 Estimated step difference of 2 mm gauge block (mm). 45 5.2 Estimated angle between the wall and the planar board (degree). 47 5.3 Standard deviation of the fitted planar board with 3D point cloud (mm). 47 5.4 Standard deviation of fitted planar board under various diameter of reflection (mm). 49 5.5 Standard deviation of fitted planar board under various opacity of reflection (mm). 49 | |
| dc.language.iso | en | |
| dc.subject | 結構光 | zh_TW |
| dc.subject | 三維感測 | zh_TW |
| dc.subject | 內視鏡 | zh_TW |
| dc.subject | endoscope | en |
| dc.subject | 3D sensing | en |
| dc.subject | structured light | en |
| dc.title | 基於格點結構光圖案實現卷積神經網路端到端三維重建 | zh_TW |
| dc.title | End to End 3D Reconstruction with CNN Method Using Grid Point Based Structured Light Pattern | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 110-2 | |
| dc.description.degree | 碩士 (Master's) | |
| dc.contributor.oralexamcommittee | 譚慶鼎(Ching-Ting Tan),傅楸善(Chiou-Shann Fuh),廖弘源(Hong-Yuan Liao),叢培貴(Pei-Kuei Tsung) | |
| dc.subject.keyword | 三維感測,結構光,內視鏡, | zh_TW |
| dc.subject.keyword | 3D sensing,structured light,endoscope, | en |
| dc.relation.page | 61 | |
| dc.identifier.doi | 10.6342/NTU202201527 | |
| dc.rights.note | Authorized (worldwide public access) | |
| dc.date.accepted | 2022-08-03 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
| dc.date.embargo-lift | 2025-01-01 | - |
| Appears in Collections: | 資訊網路與多媒體研究所 | |
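The table of contents above lists a Pseudo-Random Array background section (2.1.3), the coding idea behind the grid-point structured light pattern: every small window of the projected array is unique, so decoding a single window identifies its position in the pattern. The sketch below illustrates only that uniqueness property with a naive rejection search over small binary arrays; the function names and the search strategy are hypothetical, not the construction used in the thesis.

```python
import numpy as np

def unique_windows(arr, k):
    """Return True if every k x k window of the 2-D binary array is unique,
    the defining property of a pseudo-random (perfect-map-style) array."""
    h, w = arr.shape
    seen = set()
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            key = arr[i:i + k, j:j + k].tobytes()  # window as a hashable key
            if key in seen:
                return False
            seen.add(key)
    return True

def random_search(h, w, k, tries=20000, seed=0):
    """Naive rejection search: draw random binary arrays until one has
    all-unique k x k windows. Feasible only for small sizes."""
    rng = np.random.default_rng(seed)
    for _ in range(tries):
        cand = rng.integers(0, 2, size=(h, w), dtype=np.uint8)
        if unique_windows(cand, k):
            return cand
    return None
```

For a 6×6 array with 3×3 windows there are only 16 windows drawn from 512 possible codewords, so the rejection search succeeds quickly; real patterns such as the 32×32 grid in the thesis require the algebraic constructions cited in the bibliography (MacWilliams and Sloane) rather than random search.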
Files in this item:
| File | Size | Format |
|---|---|---|
| U0001-1807202215345100.pdf | 30.54 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.
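The abstract describes recovering depth with a calibrated projector-camera pair: once a camera pixel is matched to its projector grid point, depth follows from standard two-view triangulation, treating the projector as a second pinhole camera. A minimal sketch of that linear (DLT) triangulation step, assuming ideal calibrated projection matrices; `P_cam`, `P_proj`, and `triangulate` are illustrative names, not identifiers from the thesis:

```python
import numpy as np

def triangulate(P_cam, P_proj, x_cam, x_proj):
    """Linear (DLT) triangulation of one camera-projector correspondence.

    P_cam, P_proj: 3x4 projection matrices (projector modeled as a camera).
    x_cam, x_proj: matched (u, v) image coordinates in each device.
    Returns the 3-D point as a length-3 array.
    """
    # Each correspondence contributes two rows of the homogeneous system A X = 0.
    A = np.array([
        x_cam[0] * P_cam[2] - P_cam[0],
        x_cam[1] * P_cam[2] - P_cam[1],
        x_proj[0] * P_proj[2] - P_proj[0],
        x_proj[1] * P_proj[2] - P_proj[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With noisy matches the same system is solved in a least-squares sense, which is one reason sub-pixel grid-point localization (the coordinate refinement step mentioned in the abstract) directly affects depth accuracy.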
