請用此 Handle URI 來引用此文件: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92026
完整後設資料紀錄
DC 欄位 / 值 / 語言
dc.contributor.advisor陳亮嘉zh_TW
dc.contributor.advisorLIANG-CHIA CHENen
dc.contributor.author黃聖皓zh_TW
dc.contributor.authorSheng-Hao Huangen
dc.date.accessioned2024-02-27T16:37:38Z-
dc.date.available2024-02-28-
dc.date.copyright2022-03-15-
dc.date.issued2021-
dc.date.submitted2002-01-01-
dc.identifier.citation[1]"Automated optical inspection system." https://elivepr.com/2021/07/global-trends-for-automated-optical-metrology-industry-2021

[2]"CMM." https://ecatalog.mitutoyo.com/CRYSTA-Apex-EX-500T700T900T-PH20-Equipped-5-Axis-CNC-CMM-C1840.aspx

[3]"LMI Gocator 3000." https://www.stemmer-imaging.com/en-nl/products/series/lmi-gocator-3100

[4]D. Marr and T. Poggio, "A computational theory of human stereo vision," Proceedings of the Royal Society of London. Series B. Biological Sciences, vol. 204, no. 1156, pp. 301-328, 1979.

[5]S. Foix, G. Alenya, and C. Torras, "Lock-in time-of-flight (ToF) cameras: A survey," IEEE Sensors Journal, vol. 11, no. 9, pp. 1917-1926, 2011.

[6]M. Hansard, S. Lee, O. Choi, and R. P. Horaud, Time-of-flight cameras: principles, methods and applications. Springer Science & Business Media, 2012.

[7]J. G. D. França, M. A. Gazziro, A. N. Ide, and J. H. Saito, "A 3D scanning system based on laser triangulation and variable field of view," in IEEE International Conference on Image Processing 2005, 2005, vol. 1: IEEE, pp. I-425.

[8]J. L. Posdamer and M. Altschuler, "Surface measurement by space-encoded projected beam systems," Computer graphics and image processing, vol. 18, no. 1, pp. 1-17, 1982.

[9]D. Caspi, N. Kiryati, and J. Shamir, "Range imaging with adaptive color structured light," IEEE Transactions on Pattern analysis and machine intelligence, vol. 20, no. 5, pp. 470-480, 1998.

[10]R. J. Valkenburg and A. M. McIvor, "Accurate 3D measurement using a structured light system," Image and Vision Computing, vol. 16, no. 2, pp. 99-110, 1998.

[11]I. Ishii, K. Yamamoto, K. Doi, and T. Tsuji, "High-speed 3D image acquisition using coded structured light projection," in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007: IEEE, pp. 925-930.

[12]P. Surynková, "Surface reconstruction," in Proceedings of the 17th Annual Conference of Doctoral Students-WDS, 2009.

[13]W. Krattenthaler, K. J. Mayer, and H. P. Duwe, "3D-surface measurement with coded light approach," presented at the Proceedings of the 17th meeting of the Austrian Association for pattern recognition on Image analysis and synthesis, 1994.

[14]"Solomon AccuPick 3D." https://www.chinatimes.com/newspapers/20200724000467-260210?chdtv

[15]"Welding robot." https://kknews.cc/news/pklkglp.html

[16]S. Kumar, P. K. Tiwari, and S. Chaudhury, "An optical triangulation method for non-contact profile measurement," in 2006 IEEE International Conference on Industrial Technology, 2006: IEEE, pp. 2878-2883.

[17]C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, "Phase shifting algorithms for fringe projection profilometry: A review," Optics and Lasers in Engineering, vol. 109, pp. 23-59, 2018.

[18]S. Zhang, "Absolute phase retrieval methods for digital fringe projection profilometry: A review," Optics and Lasers in Engineering, vol. 107, pp. 28-37, 2018.

[19]Y. Hu, Q. Chen, S. Feng, and C. Zuo, "Microscopic fringe projection profilometry: A review," Optics and Lasers in Engineering, p. 106192, 2020.

[20]C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, "Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review," Optics and lasers in engineering, vol. 85, pp. 84-103, 2016.

[21]陳亮嘉、范光照、邱奕契及陳金聖, 自動化光學檢測. 臺灣,高立圖書有限公司, 2015.

[22]P. Jia, J. Kofman, and C. E. English, "Comparison of linear and nonlinear calibration methods for phase-measuring profilometry," Optical Engineering, vol. 46, no. 4, p. 043601, 2007.

[23]S. Wang, C. J. Tay, C. Quan, and H. M. Shang, "Investigation of membrane deformation by a fringe projection method," Applied optics, vol. 41, no. 1, pp. 101-107, 2002.

[24]H. Guo, H. He, Y. Yu, and M. Chen, "Least-squares calibration method for fringe projection profilometry," Optical Engineering, vol. 44, no. 3, p. 033603, 2005.

[25]"Camera calibration." https://www.mathworks.com/help/vision/ug/camera-calibration.html

[26]"Pinhole camera model." https://www.mathworks.com/help/vision/ug/camera-calibration.html

[27]Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on pattern analysis and machine intelligence, vol. 22, no. 11, 2000.

[28]J. J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Numerical analysis: Springer, 1978, pp. 105-116.

[29]R. B. Rusu, N. Blodow, Z. Marton, A. Soos, and M. Beetz, "Towards 3D object maps for autonomous household robots," in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007: IEEE, pp. 3191-3198.

[30]"Statistical outlier removal." https://www.programmersought.com/article/74566814125

[31]H. Moravec, "Robot spatial perception by stereoscopic vision and 3D evidence grids," Perception, 1996.

[32]R. L. Graham and F. F. Yao, "Finding the convex hull of a simple polygon," Journal of Algorithms, vol. 4, no. 4, pp. 324-331, 1983.

[33]D.-T. Lee and B. J. Schachter, "Two algorithms for constructing a Delaunay triangulation," International Journal of Computer & Information Sciences, vol. 9, no. 3, pp. 219-242, 1980.

[34]R. Mencl and H. Muller, "Interpolation and approximation of surfaces from three-dimensional scattered data points," in Scientific Visualization Conference (dagstuhl'97), 1997: IEEE, pp. 223-223.

[35]F. Merat, "Introduction to robotics: Mechanics and control," IEEE Journal on Robotics and Automation, vol. 3, no. 2, pp. 166-166, 1987.

[36]C. S. Chua and R. Jarvis, "Point signatures: A new representation for 3d object recognition," International Journal of Computer Vision, vol. 25, no. 1, pp. 63-85, 1997.

[37]A. E. Johnson and M. Hebert, "Using spin images for efficient object recognition in cluttered 3D scenes," IEEE Transactions on pattern analysis and machine intelligence, vol. 21, no. 5, pp. 433-449, 1999.

[38]D.-C. Hoang, L.-C. Chen, and T.-H. Nguyen, "Sub-OBB based object recognition and localization algorithm using range images," Measurement Science and Technology, vol. 28, no. 2, p. 025401, 2016.

[39]O. Carmichael, D. Huber, and M. Hebert, "Large data sets and confusing scenes in 3-d surface matching and recognition," in Second International Conference on 3-D Digital Imaging and Modeling (Cat. No. PR00062), 1999: IEEE, pp. 358-367.

[40]D. F. Huber and M. Hebert, "A new approach to 3-d terrain mapping," in Proceedings 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human and Environment Friendly Robots with High Intelligence and Emotional Quotients (Cat. No. 99CH36289), 1999, vol. 2: IEEE, pp. 1121-1127.

[41]S. Gottschalk, M. C. Lin, and D. Manocha, "OBBTree: A hierarchical structure for rapid interference detection," in Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, 1996, pp. 171-180.

[42]P. J. Besl and N. D. McKay, "Method for registration of 3-D shapes," in Sensor fusion IV: control paradigms and data structures, 1992, vol. 1611: International Society for Optics and Photonics, pp. 586-606.

[43]A. Censi, "An ICP variant using a point-to-line metric," in 2008 IEEE International Conference on Robotics and Automation, 2008: IEEE, pp. 19-25.

[44]D. Chetverikov, D. Svirko, D. Stepanov, and P. Krsek, "The trimmed iterative closest point algorithm," in Object recognition supported by user interaction for service robots, 2002, vol. 3: IEEE, pp. 545-548.

[45]A. Segal, D. Haehnel, and S. Thrun, "Generalized-ICP," in Robotics: Science and Systems, 2009, vol. 2, no. 4: Seattle, WA, p. 435.

[46]Y. Chen and G. Medioni, "Object modelling by registration of multiple range images," Image and vision computing, vol. 10, no. 3, pp. 145-155, 1992.

[47]S. Rusinkiewicz and M. Levoy, "Efficient variants of the ICP algorithm," in Proceedings third international conference on 3-D digital imaging and modeling, 2001: IEEE, pp. 145-152.

[48]F. Zhao, Q. Huang, and W. Gao, "Image matching by normalized cross-correlation," in 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, 2006, vol. 2: IEEE, pp. II-II.

[49]J.-C. Yoo and T. H. Han, "Fast normalized cross-correlation," Circuits, systems and signal processing, vol. 28, no. 6, pp. 819-843, 2009.

[50]D. W. Eggert, A. Lorusso, and R. B. Fisher, "Estimating 3-D rigid body transformations: a comparison of four major algorithms," Machine vision and applications, vol. 9, no. 5, pp. 272-290, 1997.

[51]"TI DLP3010EVM-LC." https://www.ti.com/tool/DLP3010EVM-LC

[52]"TI DLP3010 DMD." https://www.ti.com/product/DLP3010

[53]"DLP projector imaging principle." https://programmersought.com/article/20054103136

[54]"Basler acA2040-120um." https://www.baslerweb.com/en/products/cameras/area-scan-cameras/ace/aca2040-120um

[55]"Edmund 12mm f/8 HPr Series." https://www.edmundoptics.com.tw/p/12mm-f-11-150-500mm-primary-wd-hpr-series-fixed-focal-length-lens/38834

[56]"Basler acA2040 120um documentation." https://docs.baslerweb.com/aca2040-120um

[57]"Basler camera circuit diagrams." https://docs.baslerweb.com/circuit-diagrams-(ace)

[58]"Basler camera I/O timing characteristics." https://docs.baslerweb.com/io-timing-characteristics-(ace-ace-2-boost)

[59]"Brooks Automation Atmospheric Single-Arm Robot User’s Manual." https://www.artisantg.com/info/Brooks_PRI_Automation_ABM_407B_1_S_CE_S293_Manual_20163184612.pdf

[60]"Newport M-ILS200CCL." https://www.newport.com/p/M-ILS200CCL

[61]"Mitutoyo STRATO-Apex CMM." https://shop.mitutoyo.eu/web/mitutoyo/en/mitutoyo/STRATO-APEX%20%20500%252F900%20Series/STRATO-Apex%20574%20CNC%20CMM/$catalogue/mitutoyoData/PR/355-701/index.xhtml;jsessionid=64FE6A4822DE9EBDFB84BFFB43CF342C

[62]"CyberOptics - WaferSense." http://www.cyberoptics.com/products/wafersense-auto-teaching-system

[63]T. P. Koninckx and L. Van Gool, "Real-time range acquisition by adaptive structured light," IEEE transactions on pattern analysis and machine intelligence, vol. 28, no. 3, pp. 432-445, 2006.
-
dc.identifier.urihttp://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92026-
dc.description.abstract本論文旨在開發創新式機械手臂終端位置與姿態檢測系統,藉由三維數位結構光投射量測法建立物件之三維形貌資訊,結合晶圓三軸機械手臂終端特徵作為標的,並以點雲配準技術求得物體相對位置與姿態變化之精確數據。本方法能同時克服傳統二維影像處理法無法獲取物件高度資訊之弊病,以及需要額外標定物作為偵測基準之問題,成功實現無特定標定物之三維位置與姿態變化偵測演算法。
利用數位結構光投射量測系統可達成三維表面特徵快速重建,並建立具高辨識度之物件特徵點雲,以做為相對位置與姿態變化偵測數據基礎。利用本研究方法可獲得機械手臂終端 X、Y、Z、Roll、Pitch、Yaw 位置與姿態變化,克服傳統二維影像檢測方式僅能獲取 X、Y 方向上偏位的限制。
本論文提出以物件特徵點雲資訊為基礎,使用區域表面積特徵表述符有效地實現三維空間中物件初始對位,更利用優化的最近點迭代法以快速且精確地擬合模型及物件點雲,達成物件相對位置及姿態變化偵測。此方法不需要額外特徵標的來建立其量測基準,故可實現無標定物之位置與姿態變化偵測方法。
為了檢驗所提出系統之可行性與量測精度,使用所發展之三維光學探頭量測標準球校正單元,根據實驗量測結果,已證實該系統於全量測範圍全深(100 mm)內可達60 μm之精確度與 100 μm之深度解析。另一方面,應用於機械手臂線上自動化位置與姿態變化檢測,根據實驗數據顯示在量測範圍全深內之三維方向上達 100 μm之定位精確度。
zh_TW
dc.description.abstractThis thesis aims to develop an innovative detection system for the position and orientation of a robot arm end-effector. Digital structured-light projection profilometry is used to reconstruct the 3D profile of an object, the end-effector features of a tri-axis wafer-handling robot arm serve as the measurement target, and point cloud registration techniques are applied to obtain the relative variation of the object's position and orientation. This method overcomes two drawbacks of traditional 2D image processing, namely its inability to obtain the object's depth information and its need for an additional target as a reference, and thereby achieves target-free detection of the position and orientation of the robot arm end-effector.
Digital structured-light projection profilometry enables rapid reconstruction of the three-dimensional surface profile and establishes a highly recognizable object feature point cloud as the basis for detecting variations in position and orientation. With the proposed method, the variations of the robot arm end-effector in X, Y, Z, roll, pitch, and yaw can all be obtained, overcoming the limitation of traditional two-dimensional image inspection, which can only measure translations in the X and Y directions.
Based on the object's feature point cloud, the proposed method applies a regional surface area descriptor to coarsely align the objects in 3D space, and then uses an optimized iterative closest point algorithm to register the model and object point clouds quickly and accurately, thereby detecting the variations in the object's position and orientation. Because no additional target is required to establish the measurement reference, the method achieves target-free measurement.
To verify the feasibility and measurement accuracy of the proposed system, a standard-sphere calibration unit was measured with the developed 3D optical probe. The results show that the probe achieves 60 μm measurement accuracy and 100 μm depth resolution over the full 100 mm measurement range. Applied to on-line automatic detection of the position and orientation of the robot arm end-effector, the experimental results show a positioning accuracy of 100 μm in the X, Y, and Z directions over the entire measurement range.
en
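The structured-light profilometry described in the abstract recovers depth from fringe phase. As an illustrative sketch only (the thesis's particular phase-shifting variant and its phase-to-height calibration are not reproduced here), the classic four-step phase-shifting formula recovers the wrapped phase from four fringe images shifted by π/2:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images with phase shifts 0, pi/2, pi, 3*pi/2.

    With I_k = A + B*cos(phi + k*pi/2): i3 - i1 = 2B*sin(phi) and
    i0 - i2 = 2B*cos(phi), so atan2 cancels both A and B.
    """
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check: build fringe intensities from a known phase map, recover it.
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 200)  # stay inside the wrap range
a, b = 0.5, 0.4                                         # background and modulation
imgs = [a + b * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_rec = four_step_phase(*imgs)
print(np.allclose(phi_rec, phi_true, atol=1e-9))
```

In a real system the wrapped phase must still be unwrapped (e.g. by the temporal methods surveyed in [20]) and converted to height via the system's phase-to-height calibration.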
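The point cloud registration described above ultimately yields a rigid transformation, from which the X, Y, Z, roll, pitch, and yaw variations are read off. A minimal sketch, assuming known point correspondences (which ICP must itself estimate) and a ZYX Euler convention, of the closed-form SVD pose step compared in [50] and the angle extraction:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (SVD/Kabsch).

    This is the closed-form step inside each ICP iteration; correspondences
    between the rows of src and dst are assumed known here.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    h = (src - cs).T @ (dst - cd)               # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cd - r @ cs
    return r, t

def euler_zyx(r):
    """Roll, pitch, yaw (rotations about x, y, z) from a rotation matrix R = Rz*Ry*Rx."""
    pitch = -np.arcsin(r[2, 0])
    roll = np.arctan2(r[2, 1], r[2, 2])
    yaw = np.arctan2(r[1, 0], r[0, 0])
    return roll, pitch, yaw

# Synthetic check: rotate/translate a point cloud and recover the pose change.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
roll0, pitch0, yaw0 = 0.10, -0.05, 0.20
cx, sx = np.cos(roll0), np.sin(roll0)
cy, sy = np.cos(pitch0), np.sin(pitch0)
cz, sz = np.cos(yaw0), np.sin(yaw0)
rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
r_true = rz @ ry @ rx                           # ZYX: yaw, then pitch, then roll
t_true = np.array([0.5, -0.2, 1.0])
r_est, t_est = best_fit_transform(pts, pts @ r_true.T + t_true)
print(np.allclose(r_est, r_true), np.allclose(t_est, t_true))
print(np.allclose(euler_zyx(r_est), (roll0, pitch0, yaw0)))
```

The reflection guard matters in practice: for degenerate (e.g. planar) clouds, the unguarded SVD solution can return an improper rotation.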
dc.description.provenanceSubmitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-02-27T16:37:38Z
No. of bitstreams: 0
en
dc.description.provenanceMade available in DSpace on 2024-02-27T16:37:38Z (GMT). No. of bitstreams: 0en
dc.description.tableofcontents目錄
誌謝 III
中文摘要 IV
Abstract V
目錄 VII
圖目錄 XII
表目錄 XIX
符號目錄 XXI
Chapter 1 緒論 1
1.1 研究背景 1
1.2 研究動機 3
1.3 研究目的 5
1.4 論文架構 6
Chapter 2 文獻回顧 7
2.1 引言 7
2.2 深度資訊重建 7
2.2.1 飛時測距量測法(Time of Flight) 8
2.2.2 雷射三角法 9
2.2.3 數位結構光投射量測法 11
2.3 數位條紋投射量測法 12
2.3.1 相位移法(Phase shifting profilometry) 12
2.3.2 線性相位與高度轉換 16
2.3.3 非線性相位與高度轉換 18
2.3.4 非線性相位與高度校正 19
2.4 相機校正(Camera calibration) 22
2.4.1 針孔相機模型(Pinhole camera model)與鏡頭畸變(Lens distortion) 22
2.4.2 張正友相機校正法(Zhang's camera calibration) 27
2.5 三維點雲前處理 30
2.5.1 離群值剔除 30
2.5.2 體素濾波(Voxel Grid Filter) 31
2.5.3 德勞內三角化(Delaunay triangulation) 31
2.6 剛體轉換(Rigid Transformation) 34
2.7 點雲粗配準(Coarse Alignment)方法 38
2.7.1 點簽名法(Point Signature) 38
2.7.2 自旋圖像(Spin Image) 40
2.7.3 表面積特徵描述法 42
2.8 點雲精配準(Fine Alignment)方法 45
2.8.1 最近點迭代法(Iterative Closest Point, ICP) 45
2.8.2 變化的最近點迭代法 47
2.9 文獻回顧總結 51
Chapter 3 研究方法 53
3.1 研究架構 53
3.2 點雲資料前處理 54
3.2.1 雜訊、離群值去除 54
3.2.2 降低點雲資料量之演算法則 56
3.3 點雲粗配準 59
3.3.1 區域表面積特徵描述符(Regional Area Based Descriptor) 60
3.3.2 特徵比對 63
3.4 點雲精配準 67
3.5 姿態誤差評估 71
3.6 研究方法結論 74
Chapter 4 光學量測探頭之發展與系統整合 75
4.1 系統架構 75
4.2 系統設計與軟硬體規格 76
4.2.1 數位投光裝置 76
4.2.2 影像感測模組 79
4.2.3 數位投光裝置與影像感測模組之同步控制 83
4.2.4 三維結構光量測模組系統設計 87
4.2.5 三軸晶圓機械手臂 90
4.2.6 線性位移平台 91
4.2.7 運算處理器與GUI控制軟體 92
Chapter 5 發展系統之量測結果與分析 95
5.1 實驗系統之參數設置 95
5.1.1 實驗環境參數 95
5.1.2 光學量測探頭之系統參數 95
5.1.3 點雲前處理演算法參數 97
5.1.4 點雲粗配準(Coarse alignment)演算法參數設置 97
5.1.5 點雲精配準(Fine alignment)演算法參數設置 98
5.2 結構光量測模組之系統校正 98
5.2.1 結構光量測模組之相機校正 98
5.2.2 結構光量測模組之相位與高度校正 101
5.3 結構光量測模組精度驗證 104
5.3.1 球心距離準確度驗證 105
5.3.2 重複度驗證 107
5.4 物件三維形貌量測 110
5.4.1 點雲資料前處理 111
5.5 機械手臂之姿態偵測 112
5.5.1 資料庫建立 113
5.5.2 機械手臂之姿態變化 118
5.6 機械手臂重複作業之姿態變化量測 121
5.7 機械手臂姿態變化誤差分析 124
5.8 系統之技術核心與指標能力分析 128
Chapter 6 結論與未來展望 130
6.1 結論 130
6.2 未來展望 131
附錄 A 三次元量測儀量測標準球校正單元數據 139
附錄 B 儀科中心結構光量測模組精度驗證報告書 140
附錄 C 機械手臂作業時之100次姿態變化紀錄 150
附錄 D 研究成果 153
圖目錄
圖 1 1線上自動化光學檢測系統[1] 1
圖 1 2接觸式量測系統[2] 2
圖 1 3非接觸式量測系統[3] 2
圖 1 4機械手臂自動化物料取放[14] 3
圖 1 5機械手臂自動化焊接[15] 3
圖 1 6 (a)晶圓破片情形;(b)晶圓掉落情形 4
圖 2 1飛時測距(ToF)量測架構[5] 8
圖 2 2雷射掃描儀系統架構[16] 9
圖 2 3雷射三角法幾何關係圖[16] 10
圖 2 4弦波條紋結構光影像 11
圖 2 5數位結構光投射量測法[10] 12
圖 2 6五組不同相移角度之正弦條紋影像灰階強度分佈(條紋週期為50 pixels) 16
圖 2 7數位條紋投射量測法架構[22] 17
圖 2 8非線性相位與高度轉換校正示意圖[22] 22
圖 2 9針孔相機成像模型(Pinhole camera model)[25] 23
圖 2 10物體與世界座標系、影像座標系、相機坐標系之成像關係[26] 24
圖 2 11鏡頭之徑向畸變(Radial lens distortion)[25] 25
圖 2 12切向畸變(Tangential distortion)[25] 26
圖 2 13切向畸變造成影像產生像差 26
圖 2 14棋盤格校正圖片[27] 28
圖 2 15張正友相機校正法流程[27] 29
圖 2 16帶有雜訊的重建點雲(左),去除雜訊與離群值的重建點雲(右)[30] 30
圖 2 17體素濾波(Voxel Grid filter)示意圖[31] 31
圖 2 18凸多邊形(Convex hull) [33] 32
圖 2 19最大化最小內角示意圖[33] 32
圖 2 20不符合空圓性質之三角化[33] 33
圖 2 21滿足空圓性質之三角化[33] 33
圖 2 22 α-shape[34] 34
圖 2 23右手定則下之座標系[35] 35
圖 2 24右手坐標系下沿x、y、z軸之旋轉運動[35] 35
圖 2 25點簽名(Point Signature); (a)球與物體表面之交線輪廓, (b)參考方向, (c) 與交線輪廓C之投影距離簽名[36] 39
圖 2 26點簽名實例[36] 39
圖 2 27自旋圖像之二維特徵計算過程[37] 40
圖 2 28鴨子模型上三個點所生成之自旋圖像[37] 41
圖 2 29物體點雲之具方向包圍盒[38] 43
圖 2 30次具方向之包圍盒[38] 43
圖 2 31物體之表面積分佈直方圖[38] 43
圖 2 32三維虛擬相機[38] 44
圖 2 33不同視角下虛擬相機紀錄之點雲(a-c)與對應之表面積分佈(d-f) [38, 42] 44
圖 2 34最近點迭代法演算流程[42] 46
圖 2 35最近點迭代法配準示意圖[42] 47
圖 2 36最近點迭代法之對應點匹配方法[47];(a)最近點搜尋法;(b)法向量射擊法(Normal shooting);(c)投影法 48
圖 2 37各種對應點匹配方法之性能分析[47] 49
圖 2 38最近點迭代法之誤差計算方法[46];(a) 點至點之距離;(b) 點至面之距離 49
圖 2 39誤差計算方式對於真實場景之模型收斂速度比較[47] 50
圖 2 40優化的最近點迭代法框架[47] 50
圖 3 1研究方法流程圖 53
圖 3 2離群值剔除示意圖[29] 54
圖 3 3 (左)原始點雲資料;(右)離群值剔除後的點雲資料 55
圖 3 4精密陶瓷塊規 57
圖 3 5結構光探頭擷取影像 57
圖 3 6精密陶瓷之原始重建點雲資訊 57
圖 3 7剔除離群點後之點雲資訊 57
圖 3 8經Voxel Grid Filter降低資料量後之點雲資訊 58
圖 3 9 Voxel Grid Filter誤差分析結果 58
圖 3 10經Delaunay Triangulation降低資料量後之點雲資訊 58
圖 3 11 Delaunay Triangulation誤差分析結果 58
圖 3 12點雲粗配準流程圖 59
圖 3 13虛擬相機與模型點雲示意圖 60
圖 3 14 (a) ~ (f)為在不同視角下由虛擬相機所拍攝之模型點雲 61
圖 3 15物件點雲之具方向包圍盒 62
圖 3 16物件點雲之次具方向包圍盒 63
圖 3 17物件點雲之表面積分佈 63
圖 3 18特徵比對流程圖 64
圖 3 19 (a)模型點雲之具方向包圍盒;(b)物件點雲之具方向包圍盒 65
圖 3 20模板點雲與模型點雲在空間中之關係示意圖 66
圖 3 21最近點迭代法之參考點雲與目標點雲 68
圖 3 22經典最近點迭代法之配準結果 68
圖 3 23廣義最近點迭代法之配準結果 69
圖 3 24優化的最近點迭代法之配準結果 69
圖 3 25誤差與迭代次數之效能分析 69
圖 3 26誤差與時間之效能分析 70
圖 3 27模型點雲與轉換點雲 72
圖 3 28粗配準後之擬合結果 72
圖 3 29精配準前之擬合結果(左);精配準後之擬合結果(右) 73
圖 4 1自動化機械手臂終端姿態檢測系統架構 75
圖 4 2 TI DLP3010EVM-LC 數位微鏡投影模組[51] 76
圖 4 3 DLP3010晶片[52] 76
圖 4 4數位微鏡投影模組投影原理示意圖[53] 77
圖 4 5 Basler acA2040-120um[54] 79
圖 4 6 CCD sensor與FoV示意圖 80
圖 4 7 12mm f/8 HPr Series 定焦鏡頭[55] 81
圖 4 8 12mm f/8 HPr Series Fixed focal length lens 之MTF圖[55] 82
圖 4 9系統同步訊號架構圖 83
圖 4 10相機輸出入腳位圖,取自CMOS相機規格書[56] 84
圖 4 11相機GPIO腳位電路圖[57] 84
圖 4 12相機光偶合腳位電路圖[57] 84
圖 4 13相機輸入腳位時序特徵圖[58] 85
圖 4 14三維量測模組架構示意圖 87
圖 4 15結構光量測模組量測範圍示意圖 88
圖 4 16結構光量測模組模擬設計圖 88
圖 4 17實際量測模組之內部結構 89
圖 4 18實際量測模組之外觀 89
圖 4 19 Brooks Automation Atmospheric Single-Arm robot ATM-500[59] 90
圖 4 20機械手臂坐標系[59] 90
圖 4 21 M-ILS200CCL [60] 91
圖 4 22量測軟體之使用者介面 93
圖 4 23 量測軟體之使用者介面(位置與姿態變化圖表) 93
圖 5 1光學量測探頭之系統參數示意圖 95
圖 5 2棋盤格校正板 99
圖 5 3相機校正任意擺放之棋盤格拍攝影像 100
圖 5 4大理石參考平面 101
圖 5 5非線性相位與高度校正流程圖 102
圖 5 6結構光正弦條紋在不同深度下投影至參考面的影像 102
圖 5 7 (a)、(c)系統校正前之參考面量測結果;(b)、(d)系統校正後之參考面量測結果 103
圖 5 8 STRATO-Apex 574[61] 104
圖 5 9標準球校正單元 105
圖 5 10三次元量床對兩個標準球進行量測 105
圖 5 11校正單元擺放示意圖 106
圖 5 12校正單元之量測三維點雲數據 106
圖 5 13 30次量測結果之標準差分佈 109
圖 5 14三軸晶圓機械手臂之末端承載器 110
圖 5 15手臂末端承載器之原始重建點雲 110
圖 5 16離群值剔除後之點雲 111
圖 5 17德勞內三角化後之重建點雲 111
圖 5 18機械手臂姿態偵測概念示意圖 112
圖 5 19機械手臂至工作位置 113
圖 5 20量測模組之量測視野 114
圖 5 21相機所擷取之機械手臂影像 114
圖 5 22末端承載器之量測點雲 114
圖 5 23雜訊、離群值過濾後之量測點雲 115
圖 5 24降低資料量後之量測點雲 115
圖 5 25多視角模板點雲(a) ~ (h) 116
圖 5 26對應模板點雲之表面積特徵描述符 117
圖 5 27將手臂移至量測範圍內任一位置 118
圖 5 28原始重建點雲資料 118
圖 5 29點雲前處理後之重建點雲 119
圖 5 30粗配準後之物件點雲與模板點雲 119
圖 5 31精配準後之物件點雲與模板點雲 120
圖 5 32機械手臂移動路徑示意圖 122
圖 5 33機械手臂之實際移動位置 122
圖 5 34機械手臂100次之X, Y, Z平移量變化 123
圖 5 35機械手臂100次之Roll, Pitch, Yaw旋轉角變化 123
圖 5 36原始點雲(紅色)與轉換點雲(黃色) 125
圖 5 37姿態偵測演算法平移誤差 125
圖 5 38三維結構光量測模組誤差 126
圖 5 39誤差來源圓餅圖 127
圖 5 40 Cyber optics - WaferSense[62] 129
圖 6 1 2021科技部智慧機械永續創新成果展 153
圖 6 2 2020國際半導體展 153
圖 6 3 GPM第十屆全國大專院校AI智動化設備創作獎 154
表目錄
表 2 1主要相移法之特性比較[21] 14
表 2 2深度資訊重建方法比較 51
表 2 3粗配準方法之優缺點比較 52
表 4 1 DLP與LCD投影技術比較 78
表 4 2本實驗之檢測需求 79
表 4 3 Basler acA2040-120um規格[54] 80
表 4 4 12mm f/8 HPr Series規格[55] 82
表 4 5不同腳位之觸發時間延遲[58] 86
表 4 6 Brooks Automation Atmospheric Single-Arm robot ATM-500 技術規格[59] 91
表 4 7 M-ILS200CCL 技術規格[60] 91
表 4 8運算處理器之硬體規格 92
表 4 9本研究所使用之開源函式庫 92
表 5 1實驗環境參數 95
表 5 2光學量測探頭之系統參數 96
表 5 3點雲前處理演算法參數設置 97
表 5 4點雲粗配準演算法參數設置 97
表 5 5點雲精配準演算法參數設置 98
表 5 6棋盤格校正板規格 99
表 5 7相機校正所得之相機參數與畸變係數 100
表 5 8三次元量測儀量測標準球校正單元結果 107
表 5 9球心距離準確度量測結果 107
表 5 10三十次球心距離量測數據 109
表 5 11重複度實驗數據統計 109
表 5 12機械手臂姿態變化之誤差來源 126
表 5 13技術核心與指標能力比較 128
-
dc.language.isozh_TW-
dc.subject最近點迭代法zh_TW
dc.subject位置與姿態變化zh_TW
dc.subject點雲配準zh_TW
dc.subject數位結構光投射量測法zh_TW
dc.subject區域表面積特徵描述符zh_TW
dc.subjectDigital structured light projection profilometryen
dc.subjectRegional surface area descriptoren
dc.subjectIterative closest pointen
dc.subjectPoint cloud alignmenten
dc.subjectVariations in the position and orientationen
dc.title運用光學量測於機械手臂末端效應器位置和姿態之高精度檢測zh_TW
dc.titleHigh-accuracy detection and monitoring on the position and orientation of the robotic arm end-effector using optical measurementen
dc.typeThesis-
dc.date.schoolyear110-2-
dc.description.degree碩士-
dc.contributor.oralexamcommittee林世聰;何昭慶;章明;葉勝利zh_TW
dc.contributor.oralexamcommitteeSHI-CONG LIN;ZHA-QING HE;MING CHANG;SHENG-LI YEen
dc.subject.keyword數位結構光投射量測法,點雲配準,位置與姿態變化,區域表面積特徵描述符,最近點迭代法,zh_TW
dc.subject.keywordDigital structured light projection profilometry,Point cloud alignment,Variations in the position and orientation,Regional surface area descriptor,Iterative closest point,en
dc.relation.page154-
dc.identifier.doi10.6342/NTU202102889-
dc.rights.note同意授權(限校園內公開)-
dc.date.accepted2022-01-27-
dc.contributor.author-college工學院-
dc.contributor.author-dept機械工程學系-
dc.date.embargo-lift2026-08-17-