Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/21199

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 林沛群(Pei-Chun Lin) | |
| dc.contributor.author | Yu-Hsun Wang | en |
| dc.contributor.author | 王右勛 | zh_TW |
| dc.date.accessioned | 2021-06-08T03:28:33Z | - |
| dc.date.copyright | 2021-03-08 | |
| dc.date.issued | 2021 | |
| dc.date.submitted | 2021-02-04 | |
| dc.identifier.citation | [1] S. Yang and W. Li, Surface Finishing Theory and New Technology. Springer-Verlag Berlin Heidelberg, 2018, pp. 37-42. [2] W. B. Rowe, Principles of Modern Grinding Technology. Boston: William Andrew Publishing, 2009, pp. 1-14. [3] W. König, H. K. Tönshoff, J. Fromlowitz, and P. Dennis, 'Belt Grinding,' CIRP Annals, vol. 35, no. 2, pp. 487-494, 1986, doi: 10.1016/S0007-8506(07)60197-8. [4] G. Hammann, Modellierung des Abtragsverhaltens elastischer, robotergeführter Schleifwerkzeuge. Springer-Verlag Berlin Heidelberg, 1998. [5] L. Ri-xian, 'Defects Detection Based on Deep Learning and Transfer Learning,' Metallurgical and Mining Industry, no. 7, pp. 312-321, 2015. [6] T. Czimmermann et al., 'Visual-Based Defect Detection and Classification Approaches for Industrial Applications - A Survey,' Sensors (Basel), vol. 20, no. 5, 2020, doi: 10.3390/s20051459. [7] X. Xie, 'A Review of Recent Advances in Surface Defect Detection Using Texture Analysis Techniques,' Electronic Letters on Computer Vision and Image Analysis (ELCVIA), vol. 7, no. 3, pp. 11-22, 2008. [8] R. Manish, A. Venkatesh, and S. Denis Ashok, 'Machine Vision Based Image Processing Techniques for Surface Finish and Defect Inspection in a Grinding Process,' Materials Today: Proceedings, vol. 5, no. 5, Part 2, pp. 12792-12802, 2018, doi: 10.1016/j.matpr.2018.02.263. [9] R. M. Haralick, K. Shanmugam, and I. Dinstein, 'Textural Features for Image Classification,' IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, pp. 610-621, 1973, doi: 10.1109/TSMC.1973.4309314. [10] I. S. Tsai, C.-H. Lin, and J.-J. Lin, 'Applying an Artificial Neural Network to Pattern Recognition in Fabric Defects,' Textile Research Journal, vol. 65, no. 3, pp. 123-130, 1995, doi: 10.1177/004051759506500301. [11] S. Akcay, A. Atapour-Abarghouei, and T. Breckon, 'GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training,' in Asian Conference on Computer Vision (ACCV), 2018. [12] T.-W. Tang, W.-H. Kuo, J.-H. Lan, C.-F. Ding, H. Hsu, and H.-T. Young, 'Anomaly Detection Neural Network with Dual Auto-Encoders GAN and Its Industrial Inspection Applications,' Sensors, vol. 20, p. 3336, 2020, doi: 10.3390/s20123336. [13] V. Suen et al., 'Noncontact Surface Roughness Estimation Using 2D Complex Wavelet Enhanced ResNet for Intelligent Evaluation of Milled Metal Surface Quality,' Applied Sciences, vol. 8, p. 381, 2018, doi: 10.3390/app8030381. [14] K. He, X. Zhang, S. Ren, and J. Sun, 'Deep Residual Learning for Image Recognition,' in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778, doi: 10.1109/CVPR.2016.90. [15] S. Mei, H. Yang, and Z. Yin, 'An Unsupervised-Learning-Based Approach for Automated Defect Inspection on Textured Surfaces,' IEEE Transactions on Instrumentation and Measurement, vol. 67, no. 6, pp. 1266-1277, 2018, doi: 10.1109/TIM.2018.2795178. [16] A. Krizhevsky, I. Sutskever, and G. Hinton, 'ImageNet Classification with Deep Convolutional Neural Networks,' Neural Information Processing Systems, vol. 25, 2012, doi: 10.1145/3065386. [17] K. Simonyan and A. Zisserman, 'Very Deep Convolutional Networks for Large-Scale Image Recognition,' arXiv:1409.1556, 2014. [18] C. Szegedy et al., 'Going Deeper with Convolutions,' in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1-9, doi: 10.1109/CVPR.2015.7298594. [19] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, 'Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning,' presented at the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, California, USA, 2017. [20] Keras. 'Keras Applications.' https://keras.io/api/applications/ (accessed 01/17, 2021). [21] K. Wang, X. Gao, Y. Zhao, X. Li, D. Dou, and C. Xu, 'Pay Attention to Features, Transfer Learn Faster CNNs,' in ICLR, 2020. [22] S. J. Pan and Q. Yang, 'A Survey on Transfer Learning,' IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345-1359, 2010, doi: 10.1109/TKDE.2009.191. [23] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu, 'A Survey on Deep Transfer Learning,' in Artificial Neural Networks and Machine Learning – ICANN 2018, Cham, V. Kůrková, Y. Manolopoulos, B. Hammer, L. Iliadis, and I. Maglogiannis, Eds., 2018: Springer International Publishing, pp. 270-279. [24] M. Oquab, L. Bottou, I. Laptev, and J. Sivic, 'Learning and Transferring Mid-level Image Representations Using Convolutional Neural Networks,' in 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1717-1724, doi: 10.1109/CVPR.2014.222. [25] M. Iftene, L. Qingjie, and W. Yunhong, 'Very High Resolution Images Classification by Fusing Deep Convolutional Neural Networks,' presented at the 5th International Conference on Advanced Computer Science Applications and Technologies (ACSAT 2017), 2017. [26] S. Dabeer, M. M. Khan, and S. Islam, 'Cancer diagnosis in histopathological image: CNN based approach,' Informatics in Medicine Unlocked, vol. 16, p. 100231, 2019, doi: 10.1016/j.imu.2019.100231. [27] T. Lin, A. RoyChowdhury, and S. Maji, 'Bilinear CNN Models for Fine-Grained Visual Recognition,' in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1449-1457, doi: 10.1109/ICCV.2015.170. [28] M. Badza and M. Barjaktarovic, 'Classification of Brain Tumors from MRI Images Using a Convolutional Neural Network,' Applied Sciences, vol. 10, p. 1999, 2020, doi: 10.3390/app10061999. [29] K. Geras, S. Wolfson, S. Kim, L. Moy, and K. Cho, 'High-Resolution Breast Cancer Screening with Multi-View Deep Convolutional Neural Networks,' Computing Research Repository (CoRR), 2017. [Online]. Available: http://arxiv.org/abs/1703.07047. [30] E. Ahmed and M. Moustafa, 'House Price Estimation from Visual and Textual Features,' 2016. [Online]. Available: https://arxiv.org/abs/1609.08399. [31] Y. L. Yong, L. K. Tan, R. A. McLaughlin, K. H. Chee, and Y. M. Liew, 'Linear-regression convolutional neural network for fully automated coronary lumen segmentation in intravascular optical coherence tomography,' Journal of Biomedical Optics, vol. 22, no. 12, 2017. [32] S. Lathuilière, P. Mesejo, X. Alameda-Pineda, and R. Horaud, 'A Comprehensive Analysis of Deep Regression,' IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 9, pp. 2065-2081, 2020, doi: 10.1109/TPAMI.2019.2910523. [33] Z. Zhao, P. Zheng, S. Xu, and X. Wu, 'Object Detection With Deep Learning: A Review,' IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 11, pp. 3212-3232, 2019, doi: 10.1109/TNNLS.2018.2876865. [34] R. Girshick, J. Donahue, T. Darrell, and J. Malik, 'Region-Based Convolutional Networks for Accurate Object Detection and Segmentation,' IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 1, pp. 142-158, 2016, doi: 10.1109/TPAMI.2015.2437384. [35] R. Girshick, 'Fast R-CNN,' in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440-1448, doi: 10.1109/ICCV.2015.169. [36] S. Ren, K. He, R. Girshick, and J. Sun, 'Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,' IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, 2017, doi: 10.1109/TPAMI.2016.2577031. [37] X. Zhang, M. Cabaravdic, K. Kneupner, and B. Kuhlenkötter, 'Real-Time Simulation of Robot Controlled Belt Grinding Processes of Sculptured Surfaces,' International Journal of Advanced Robotic Systems, vol. 1, 2004, doi: 10.5772/5627. [38] X. Zhang, B. Kuhlenkötter, and K. Kneupner, 'An efficient method for solving the Signorini problem in the simulation of free-form surfaces produced by belt grinding,' International Journal of Machine Tools and Manufacture, vol. 45, no. 6, pp. 641-648, 2005, doi: 10.1016/j.ijmachtools.2004.10.006. [39] X. Zhang, K. Kneupner, and B. Kuhlenkötter, 'A new force distribution calculation model for high-quality production processes,' The International Journal of Advanced Manufacturing Technology, vol. 27, no. 7, pp. 726-732, 2006, doi: 10.1007/s00170-004-2229-x. [40] X. Ren, M. Cabaravdic, X. Zhang, and B. Kuhlenkötter, 'A local process model for simulation of robotic belt grinding,' International Journal of Machine Tools and Manufacture, vol. 47, pp. 962-970, 2007, doi: 10.1016/j.ijmachtools.2006.07.002. [41] V. Pandiyan, W. Caesarendra, T. Tjahjowidodo, and P. Gunasekaran, 'Predictive Modelling and Analysis of Process Parameters on Material Removal Characteristics in Abrasive Belt Grinding Process,' Applied Sciences, vol. 7, p. 363, 2017, doi: 10.3390/app7040363. [42] J. Qi and B. Chen, 'Surface Roughness Prediction Based on the Average Cutting Depth of Abrasive Grains in Belt Grinding,' in 2018 3rd International Conference on Mechanical, Control and Computer Engineering (ICMCCE), 2018, pp. 169-174, doi: 10.1109/ICMCCE.2018.00042. [43] Y. J. Wang, Y. Huang, Y. X. Chen, and Z. S. Yang, 'Model of an abrasive belt grinding surface removal contour and its application,' The International Journal of Advanced Manufacturing Technology, vol. 82, no. 9, pp. 2113-2122, 2016, doi: 10.1007/s00170-015-7484-5. [44] S. Bratan, A. Kolesov, S. Roshchupkin, and T. Stadnik, 'Theoretical-probabilistic model of the rotary belt grinding process,' MATEC Web Conf., vol. 129, 2017, doi: 10.1051/matecconf/201712901078. [45] HIWIN. 'RA605-GC User Manual.' https://www.hiwin.tw/download/tech_doc/mar/RA605-GC_User_Manual-(C).pdf (accessed 2021/01/17). [46] 陳柏勳, 'Cooperative Object-Holding Manipulation of Dual Robot Arms Based on Hybrid Position/Force Error Control and Applications of Learning Algorithms,' Master's thesis, Department of Mechanical Engineering, National Taiwan University, Taipei, 2018. [47] B. Komati, M. Pac, I. Ranatunga, C. Clévy, D. Popa, and P. Lutz, 'Explicit Force Control vs Impedance Control for Micromanipulation,' in ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, 2013, vol. 1, doi: 10.1115/DETC2013-13067. [48] A. Winkler and J. Suchý, 'Explicit and implicit force control of an industrial manipulator — An experimental summary,' in 2016 21st International Conference on Methods and Models in Automation and Robotics (MMAR), 2016, pp. 19-24, doi: 10.1109/MMAR.2016.7575081. [49] Geometrical Product Specifications (GPS) — Surface texture: Profile method — Terms, definitions and surface texture parameters, International Organization for Standardization, Geneva, Switzerland, 1997. [50] A. Novini, 'Fundamentals of machine vision lighting,' in Proceedings of WESCON '93, 1993, pp. 44-52, doi: 10.1109/WESCON.1993.488407. [51] OPTO ENGINEERING. 'Lighting.' https://www.opto-e.com/basics/light-in-machine-vision (accessed 01/17, 2021). [52] G. Bergström, 'Method for calibration of off-line generated robot program,' Master's thesis, Automatic Control, Chalmers University of Technology, Göteborg, Sweden, 2011. [Online]. Available: https://odr.chalmers.se/bitstream/20.500.12380/153281/1/153281.pdf [53] S. Vougioukas, 'Bias Estimation and Gravity Compensation for Force-Torque Sensors,' presented at the 3rd WSEAS Symposium on Mathematical Methods and Computational Techniques in Electrical Engineering, Athens, Greece, 2001. [54] MathWorks. 'Single Camera Calibrator App.' https://www.mathworks.com/help/vision/ug/single-camera-calibrator-app.html (accessed 01/17, 2021). [55] Keras. 'Image data preprocessing.' https://keras.io/api/preprocessing/image/ (accessed 01/17, 2021). [56] Keras. 'Transfer learning & fine-tuning.' https://keras.io/guides/transfer_learning/ (accessed 01/17, 2021). [57] S. J. Reddi, S. Kale, and S. Kumar, 'On the Convergence of Adam and Beyond,' Computing Research Repository (CoRR), 2018. [Online]. Available: http://arxiv.org/abs/1904.09237. [58] J. Ede and R. Beanland, 'Adaptive Learning Rate Clipping Stabilizes Learning,' Machine Learning: Science and Technology, vol. 1, no. 1, 2020, doi: 10.1088/2632-2153/ab81e2. [59] 吳品叡, 'A Study on Grinding Burn Monitoring Using Multiple Sensors,' Master's thesis, Department of Mechanical Engineering, National Chung Hsing University, Taichung, 2015. [Online]. Available: https://hdl.handle.net/11296/nh4d7w [60] RoboDK. 'RoboDK: Simulator for industrial robots and offline programming.' https://robodk.com/ (accessed 01/17, 2021). [61] ABB. 'RobotStudio® The world's most used offline programming tool for robotics.' https://new.abb.com/products/robotics/robotstudio (accessed 01/17, 2021). [62] H. Hertz, 'On the contact of elastic solids,' Miscellaneous Papers, pp. 147-162, 1881/1896. [Online]. Available: https://archive.org/details/cu31924012500306/page/n183/mode/2up. [63] Y. H. Wang, Y. C. Lo, and P. C. Lin, 'A Normal Force Estimation Model for a Robotic Belt-grinding System,' in 2020 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 2020, pp. 1922-1928, doi: 10.1109/AIM43001.2020.9158825. [64] X. Xu, Y. Yang, G. Pan, D. Zhu, and S. Yan, 'A Robotic Belt Grinding Force Model to Characterize the Grinding Depth with Force Control Technology,' in Intelligent Robotics and Applications, Cham, Z. Chen, A. Mendes, Y. Yan, and S. Chen, Eds., 2018: Springer International Publishing, pp. 287-298. [65] C. Shih and F. Lian, 'Grinding Complex Workpiece Surface Based on Cyber-Physical Robotic Systems,' in 2019 IEEE International Conference on Industrial Cyber Physical Systems (ICPS), 2019, pp. 461-466, doi: 10.1109/ICPHYS.2019.8780361. [66] WACOH. 'DynPick Capacitive 6-axis force sensor (500N).' https://wacoh-tech.com/en/img/500RCD_w165_2.jpg (accessed 01/17, 2021). [67] K. He, G. Gkioxari, P. Dollár, and R. Girshick, 'Mask R-CNN,' in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2980-2988, doi: 10.1109/ICCV.2017.322. [68] J. Long, E. Shelhamer, and T. Darrell, 'Fully convolutional networks for semantic segmentation,' in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3431-3440, doi: 10.1109/CVPR.2015.7298965. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/21199 | - |
| dc.description.abstract | 砂帶研磨為經常使用於消除前道加工程序造成之瑕疵與毛邊的精加工製程。砂帶研磨加工機具接觸表面的柔軟、貼合特性,使得砂帶研磨更適合用於多變的曲面工件,如:渦輪葉片與水龍頭。既有的砂帶研磨研究聚焦在使用接觸輪的砂帶研磨上,而不使用接觸輪的砂帶研磨類型(文中簡稱為「自由狀態砂帶研磨」)卻可能因涉及砂帶變形而很少被提及。另一方面,由於研磨加工環境多粉塵、具高分貝噪音,加上整體製程勞力密集、產線加工程序繁複的性質,為減低對人體的傷害與降低人力成本,機器人研磨加工已是取代人工研磨的主要趨勢,如何精細化機械手臂加工至為關鍵。因此本研究主旨為增強機械手臂自動化研磨的能力、減少目前業界中機械手臂研磨加工對人工教點、微調與檢測的依賴。從兩個方面著手,其一是增加研磨系統之視覺檢測功能;其二則是加強研磨系統對於自由狀態砂帶研磨接觸力的預測功能。 本研究中的視覺檢測部分著重於金屬經研磨加工後的局部表面紋理檢測。詳述如何建立「不同砂帶目數」、「不同表面粗糙度」與「不同的砂帶磨耗程度」三個研磨後的紋理影像資料集。使用遷移式學習,利用已預先訓練過的卷積類神經網路模型,訓練三個不同的卷積類神經網路模型,分別進行「判別局部表面影像對應研磨砂帶號數」、「由局部表面影像估測表面粗糙度」以及「判別局部表面影像對應砂帶磨耗程度」三個實驗。結果證實運用遷移式學習可使卷積類神經網路模型快速學習研究中自建資料集的分類與迴歸任務,並亦能分辨出不同類別研磨表面的細微紋路。 本研究亦針對自由狀態砂帶研磨提出一個新的三維模型,以估測自由狀態的砂帶與工件之間的接觸力。此三維模型是以二維幾何模型為基礎疊加得出的結果,而後者的估測力是由砂帶的張力以及工件與砂帶的接觸狀況計算。此估測模型最後整合成一個工研院研發之機械手臂產線模擬器Ezsim的外掛功能。研究中利用不同外型與尺寸的試棒對此功能進行實機測試,結果顯示此模型能夠成功估測研磨正向力。因此本模型可在一些較為簡單的自由狀態砂帶研磨加工中替代昂貴的力規設備,提升產線上調整加工軌跡的效率。 | zh_TW |
| dc.description.abstract | Belt grinding is a commonly used finishing process that removes defects and burrs created by previous machining operations. Because the contact surface of the grinding tool is flexible and conforms to the workpiece, belt grinding is well suited to machining workpieces with complicated surfaces, such as turbine blades and faucets. Although several studies have discussed belt grinding, most focus on belt grinding with a contact wheel, while processes that grind against a free strand of the abrasive belt (referred to in this study as "free-form belt grinding") are rarely explored, possibly because they involve deformation of the belt. Meanwhile, owing to the dusty, high-noise environment of the grinding process, the diverse procedures on the production line, and the labor-intensive nature of the whole manufacturing process, robotic grinding has become the main approach for replacing manual grinding, reducing harm to the human body while lowering labor costs; the key is to refine the precision of robotic-arm machining. The goal of this research is therefore to enhance the manipulator's automatic grinding capability, approached from two aspects: first, adding visual inspection features to the grinding system; second, enabling the grinding system to predict the contact force in free-form belt grinding. The visual inspection part of this research focuses on the local metal surface textures produced by the grinding process. It details how three texture image datasets were established: "different abrasive belt mesh numbers," "different surface roughness," and "different degrees of belt wear." Using transfer learning from pre-trained convolutional neural networks (CNNs), three CNN models with different structures were trained and used to conduct three experiments: "classifying the mesh number of the abrasive belt from local surface images," "estimating surface roughness from local surface images," and "classifying the degree of belt wear from local surface images." The results show that transfer learning enables the CNN models to quickly learn the classification and regression tasks on the self-constructed datasets, and confirm that the CNNs can distinguish the fine textures of different classes of ground surfaces. This research also proposes a new three-dimensional (3D) model that estimates the normal force generated between the workpiece and the abrasive belt in free-form belt grinding. The 3D model is constructed by integrating 2D interaction forces between the workpiece and the belt, where each 2D force is derived from the tension of the abrasive belt and the geometric contact configuration. The estimation model is integrated as a plug-in of EzSim, a robotic-arm production-line simulator developed by ITRI. The model was experimentally evaluated using spherical, cylindrical, and frustum-shaped grinding specimens, and the results confirm that it successfully predicts the normal force. The model can therefore replace expensive force sensors in some simple free-form belt grinding processes and improve the efficiency of adjusting machining trajectories on the production line. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-08T03:28:33Z (GMT). No. of bitstreams: 1 U0001-3001202113430300.pdf: 8995284 bytes, checksum: 6212a429bf6da5b99970be93291b8d3a (MD5) Previous issue date: 2021 | en |
| dc.description.tableofcontents | 口試委員審定書 I 致謝 II 摘要 III Abstract V 目錄 VIII 圖目錄 XI 表目錄 XV 第一章 緒論 1 1.1 前言 1 1.2 研究動機 1 1.3 文獻回顧 2 1.3.1 視覺檢測 2 1.3.2 卷積類神經網路之結構變革 6 1.3.3 高解析度影像辨識 10 1.3.4 深度學習用於迴歸任務(Deep learning for regression) 13 1.3.5 物件偵測(Object detection) 14 1.3.6 砂帶研磨模擬與估測 15 1.4 貢獻 17 1.5 論文架構 18 第二章 平台架構 20 2.1 硬體架構 20 2.1.1 機器手臂 20 2.1.2 六軸力規 22 2.1.3 工業電腦控制器與個人電腦 23 2.1.4 相機 24 2.1.5 表面粗糙度量測儀 24 2.1.6 光源 25 2.1.7 定位平台 25 2.1.8 砂帶機與砂帶 26 2.2 軟體架構 26 2.2.1 TPD程式 26 2.2.2 MACRO程式 28 2.2.3 LabVIEW程式 28 2.2.4 機器學習程式 31 2.3 功能實作 31 2.3.1 逆向運動學 31 2.3.2 點位校正 38 2.3.3 TCP校正 39 2.3.4 力規偏差補償與重力補償 41 2.3.5 相機參數校正 43 第三章 局部表面紋理檢測 45 3.1 紋理檢測背景 45 3.2 判別局部表面影像對應研磨砂帶號數實驗 46 3.2.2 資料強化 51 3.2.3 遷移式學習(Transfer learning) 52 3.2.4 小結 57 3.3 表面粗糙度與研磨砂帶號數之關係 57 3.4 由局部表面影像估測表面粗糙度實驗 61 3.4.1 資料集 61 3.4.2 資料標籤 61 3.4.3 遷移式學習 62 3.5 判別局部表面影像對應砂帶磨耗程度 65 3.5.1 工具壽命估測器背景 65 3.5.2 不同磨耗程度砂帶的製作 65 3.5.3 試片樣品產生、拍攝與影像前處理 68 3.5.4 試片的粗糙度量測值 69 3.5.5 遷移式學習、訓練過程與結果 70 3.5.6 小結 73 3.6 章節總結 74 第四章 研磨接觸力估測模型 75 4.1 砂帶機研磨特性 75 4.2 環境架構 76 4.2.1 硬體架構 77 4.2.2 軟體架構 79 4.3 正向力估測模型 80 4.3.1 二維模型 81 4.3.2 三維模型 85 4.3.3 座標轉換 86 4.4 實驗 88 4.4.1 不同半徑之圓柱形試棒的比較 89 4.4.2 不同形狀之試棒的比較 90 4.4.3 不同接觸點的比較 93 4.4.4 實機加工軌跡力資訊 94 4.5 小結 95 第五章 結論與未來展望 97 5.1 結論 97 5.2 未來展望 98 5.2.1 局部表面紋理檢測 98 5.2.2 研磨接觸力估測模型 99 參考文獻 100 | |
| dc.language.iso | zh-TW | |
| dc.subject | 紋理檢測 | zh_TW |
| dc.subject | 遷移式學習 | zh_TW |
| dc.subject | 卷積類神經網路 | zh_TW |
| dc.subject | 模型 | zh_TW |
| dc.subject | 機器手臂 | zh_TW |
| dc.subject | 正向力 | zh_TW |
| dc.subject | 砂帶研磨 | zh_TW |
| dc.subject | texture inspection | en |
| dc.subject | model | en |
| dc.subject | normal force | en |
| dc.subject | robot | en |
| dc.subject | convolutional neural networks | en |
| dc.subject | transfer learning | en |
| dc.subject | belt grinding | en |
| dc.title | 自動化研磨系統之視覺檢測方法與研磨接觸力估測模型 | zh_TW |
| dc.title | A Visual Inspection Method and a Grinding Contact Force Estimation Model of Automatic Grinding Systems | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 109-1 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 連豊力 (Feng-Li Lian), 顏炳郎 (Ping-Lang Yen) | |
| dc.subject.keyword | 卷積類神經網路,遷移式學習,紋理檢測,砂帶研磨,正向力,機器手臂,模型 | zh_TW |
| dc.subject.keyword | convolutional neural networks, transfer learning, texture inspection, belt grinding, robot, normal force, model | en |
| dc.relation.page | 105 | |
| dc.identifier.doi | 10.6342/NTU202100271 | |
| dc.rights.note | 未授權 | |
| dc.date.accepted | 2021-02-04 | |
| dc.contributor.author-college | 工學院 | zh_TW |
| dc.contributor.author-dept | 機械工程學研究所 | zh_TW |
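
As a supplement to the English abstract above: the transfer-learning workflow it describes (fine-tuning pre-trained CNNs on self-built surface-texture datasets, following the Keras guides cited as [20], [55], and [56]) can be sketched as below. This is a minimal illustration assuming a TensorFlow/Keras setup; the ResNet50 backbone, image size, class count, and the `train_ds`/`val_ds` dataset objects are hypothetical stand-ins, not the thesis's actual configuration.

```python
import tensorflow as tf
from tensorflow import keras

# Load a CNN pre-trained on ImageNet, dropping its classification head.
base = keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # stage 1: freeze the pre-trained features

inputs = keras.Input(shape=(224, 224, 3))
x = keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)  # keep BatchNorm layers in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
# Hypothetical head: 3 classes of belt mesh number; for the roughness
# regression experiment this would instead be Dense(1) with an MSE loss.
outputs = keras.layers.Dense(3, activation="softmax")(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: unfreeze the backbone and fine-tune at a much lower learning rate.
base.trainable = True
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```

The two-stage freeze-then-fine-tune schedule mirrors the standard Keras transfer-learning recipe; note that recompiling after toggling `trainable` is required for the change to take effect.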
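Likewise, the abstract's normal-force model (2D tension-geometry forces integrated into a 3D estimate) can be illustrated with a textbook simplification; this is a hedged sketch of the idea, not necessarily the thesis's exact formulation. For a free belt strand under tension T that wraps a convex contact through angle θ, force balance on the wrapped segment gives the resultant normal force of one 2D slice, and summing the slice-wise contributions across the contact width yields the 3D estimate:

```latex
% Resultant normal force of one 2D slice (simplified: frictionless belt
% under tension T, wrap angle \theta over a convex contact):
F_N^{2D} = 2\,T \sin\frac{\theta}{2}

% 3D estimate: integrate (or sum) the 2D slice forces over the contact width b:
F_N^{3D} = \int_0^{b} f_N^{2D}(y)\,\mathrm{d}y \;\approx\; \sum_i F_{N,i}^{2D}
```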
| Appears in Collections: | 機械工程學系 | |
Files in This Item:
| File | Size | Format |
|---|---|---|
| U0001-3001202113430300.pdf (restricted access) | 8.78 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated in the item's license.