Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86013
Title: | Applications of Domain Adaptation in Automatic Optical Inspection |
Author: | Hao-Ping Lee |
Advisor: | Kuan-Ming Li |
Keywords: | Smart Manufacturing, Artificial Intelligence, Deep Learning, Domain Adaptation, Defect Inspection |
Publication Year: | 2022 |
Degree: | Master's |
Abstract: | With the rapid development of convolutional neural networks (CNNs) for image recognition, academia and industry have been working to introduce CNN technology into automatic optical inspection (AOI) systems for industrial defect detection, moving production lines toward smart manufacturing. However, a model trained on similar previous samples often cannot be applied directly to a new workpiece: differences in the samples themselves or in the optical imaging setup degrade recognition severely. Engineers therefore usually have to relabel and retrain on the new data, and training a detection model typically requires a large number of labeled defect samples, whose manual annotation is extremely time- and labor-intensive.

In view of this, this study applies domain adaptation to build a defect classification model without labeling the new target data. The model is trained jointly on a previously labeled source dataset similar to the target and on the unlabeled target dataset to be inspected, so that the neural network ultimately recognizes the target data successfully. Wood veneer defect images of different wood species and textile defect images of different color schemes, both collected from actual production lines, together with the public NEU-CLS metal surface defect dataset, are used as verification data. The detection performance of domain adaptation networks is compared across these industrial applications, the features extracted by the networks are analyzed to optimize the models, and a general workflow for applying domain adaptation to industrial defect inspection is organized from the results.

Experiments show that a DANN (Domain-Adversarial Neural Network) trained with ResNet50 as the feature extractor, aided by entropy conditioning, effectively classifies unlabeled defect images. Compared with directly applying a model trained on old, similar data, classification accuracy improves from 52.96% to 84.93% on the wood veneer defect images, from 22.58% to 73.75% on the textile defect images, and from 31.13% to 95.58% on the metal surface defect images. Furthermore, optimizing the choice of feature extraction layers raises accuracy again: to 90.86% for wood veneer, 75.68% for textile, and 96.22% for metal surface defects. Following this workflow, an effective recognition model for unlabeled new data can be trained quickly, significantly reducing the time and labor costs of labeling. |
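The two mechanisms the abstract names, the gradient reversal layer at the heart of DANN and the entropy-conditioning sample weight, can be illustrated compactly. The following is a minimal NumPy sketch under simplifying assumptions (the function names are hypothetical; the thesis itself trains a full DANN with a ResNet50 backbone, which is not reproduced here):

```python
import numpy as np

def grl_forward(features):
    """Gradient Reversal Layer, forward pass: the identity map."""
    return features

def grl_backward(grad_output, lam=1.0):
    """Backward pass: flip the gradient sign, scaled by lambda.
    The domain classifier is trained to tell source from target,
    while the reversed gradient pushes the feature extractor to
    produce domain-invariant features that confuse it."""
    return -lam * grad_output

def entropy_weight(probs, eps=1e-12):
    """Entropy conditioning: weight each sample by 1 + exp(-H(p)),
    so confident (low-entropy) predictions dominate the domain
    alignment and ambiguous samples contribute less."""
    h = -np.sum(probs * np.log(probs + eps), axis=-1)
    return 1.0 + np.exp(-h)

# A confident prediction receives a larger weight than an uncertain one.
w_confident = entropy_weight(np.array([[0.99, 0.01]]))
w_uniform = entropy_weight(np.array([[0.5, 0.5]]))
```

In a deep learning framework the reversal would be implemented as a custom autograd operation inserted between the feature extractor and the domain classifier; the sketch above only shows the arithmetic of the two passes.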
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86013 |
DOI: | 10.6342/NTU202203562 |
Full-Text Permission: | Authorized (open access worldwide) |
Date of Electronic Full-Text Release: | 2022-09-23 |
Appears in Collections: | Department of Mechanical Engineering
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-110-2.pdf | 4.66 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.