Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86013

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 李貫銘 | zh_TW |
| dc.contributor.advisor | Kuan-Ming Li | en |
| dc.contributor.author | 李浩平 | zh_TW |
| dc.contributor.author | Hao-Ping Lee | en |
| dc.date.accessioned | 2023-03-19T23:32:52Z | - |
| dc.date.available | 2023-11-10 | - |
| dc.date.copyright | 2022-09-23 | - |
| dc.date.issued | 2022 | - |
| dc.date.submitted | 2002-01-01 | - |
| dc.identifier.citation | References
[1] C. J. Lu and D. M. Tsai, "Automatic defect inspection for LCDs using singular value decomposition," The International Journal of Advanced Manufacturing Technology, vol. 25, pp. 53-61, 2005.
[2] J. Zhao, Q. J. Kong, X. Zhao, J. P. Liu, and Y. C. Liu, "A method for detection and classification of glass defects in low resolution images," 2011 Sixth International Conference on Image and Graphics (ICIG), pp. 642-647, 2011.
[3] MarketsandMarkets Blog, "What are their major strategies to strengthen Automated Optical Inspection (AOI) System market presence?" June 2020. [Online] https://mnmblog.org/what-are-their-major-strategies-to-strengthen-automated-optical-inspection-aoi-system-market-presence.html [Accessed 07 03 2022]
[4] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, November 1998.
[5] Y. H. Won, H. Joo, and J. S. Kim, "Classification of defects in the polarizer of display panels using the convolution neural network (CNN)," International Journal of Computing, Communications and Instrumentation Engineering (IJCCIE), vol. 4, no. 1, 2017.
[6] Wikipedia, "Artificial neural network," February 2013. [Online] https://en.wikipedia.org/wiki/Artificial_neural_network [Accessed 07 03 2022]
[7] 程式人生, "更好的理解分析深度卷積神經網路" (Toward a better understanding of deep convolutional neural networks), January 2019. [Online] https://www.796t.com/content/1547645949.html [Accessed 07 03 2022]
[8] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, 313(5786), pp. 504-507, 2006.
[9] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, 2012.
[10] Sagar Sharma, "Activation Functions in Neural Networks," September 2017. [Online] https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6 [Accessed 07 03 2022]
[11] Shivani Kolungade, "Object Detection, Image Classification and Semantic Segmentation using AWS Sagemaker," August 2020. [Online] https://medium.com/@kolungade.s/object-detection-image-classification-and-semantic-segmentation-using-aws-sagemaker-e1f768c8f57d [Accessed 07 03 2022]
[12] Karen Simonyan and Andrew Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[13] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich, "Going deeper with convolutions," arXiv preprint arXiv:1409.4842, 2014.
[14] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," International Conference on Machine Learning, pp. 448-456, June 2015.
[15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, "Deep residual learning for image recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
[16] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi, "You only look once: Unified, real-time object detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779-788, 2016.
[17] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580-587, 2014.
[18] R. Girshick, "Fast R-CNN," Proceedings of the IEEE International Conference on Computer Vision, pp. 1440-1448, 2015.
[19] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6), pp. 1137-1149, 2016.
[20] Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Dollár, "Microsoft COCO: Common objects in context," European Conference on Computer Vision, pp. 740-755, September 2014.
[21] Jonathan Long, Evan Shelhamer, and Trevor Darrell, "Fully convolutional networks for semantic segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440, 2015.
[22] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, "U-Net: Convolutional networks for biomedical image segmentation," International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, October 2015.
[23] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick, "Mask R-CNN," Proceedings of the IEEE International Conference on Computer Vision, pp. 2961-2969, 2017.
[24] J. Quiñonero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence, Dataset Shift in Machine Learning, The MIT Press, 2009.
[25] Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He, "A comprehensive survey on transfer learning," arXiv preprint arXiv:1911.02685, 2019.
[26] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345-1359, October 2010, doi: 10.1109/TKDE.2009.191.
[27] Hao-Ping Lee, Ta-Wei Tang, Wan-Ju Lin, Hakiem Hsu, and Kuan-Ming Li, "A CNN-based defect inspection for smartphone cover glass," International Journal of Mechanical and Production Engineering (IJMPE), vol. 9, no. 1, pp. 42-45, 2021.
[28] Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell, "Deep domain confusion: Maximizing for domain invariance," arXiv preprint arXiv:1412.3474, 2014.
[29] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky, "Domain-adversarial training of neural networks," arXiv preprint arXiv:1505.07818, 2015.
[30] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan, "Learning transferable features with deep adaptation networks," arXiv preprint arXiv:1502.02791, 2015.
[31] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola, "A kernel two-sample test," The Journal of Machine Learning Research, 13(1), pp. 723-773, 2012.
[32] Baochen Sun, Jiashi Feng, and Kate Saenko, "Return of frustratingly easy domain adaptation," arXiv preprint arXiv:1511.05547, 2015.
[33] Baochen Sun and Kate Saenko, "Deep CORAL: Correlation alignment for deep domain adaptation," arXiv preprint arXiv:1607.01719, 2016.
[34] Yongchun Zhu, Fuzhen Zhuang, Jindong Wang, Guolin Ke, Jingwu Chen, Jiang Bian, Hui Xiong, and Qing He, "Deep subdomain adaptation network for image classification," arXiv preprint arXiv:2106.09388, 2021.
[35] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, "Generative adversarial networks," arXiv preprint arXiv:1406.2661, 2014.
[36] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan, "Conditional adversarial domain adaptation," arXiv preprint arXiv:1705.10667, 2017.
[37] Chaohui Yu, Jindong Wang, Yiqiang Chen, and Meiyu Huang, "Transfer learning with dynamic adversarial adaptation network," arXiv preprint arXiv:1909.08184, 2019.
[38] Shuhao Cui, Shuhui Wang, Junbao Zhuo, Chi Su, Qingming Huang, and Qi Tian, "Gradually vanishing bridge for adversarial domain adaptation," arXiv preprint arXiv:2003.13183, 2020.
[39] Yaroslav Ganin and Victor Lempitsky, "Unsupervised domain adaptation by backpropagation," arXiv preprint arXiv:1409.7495, 2014.
[40] Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A. Efros, and Trevor Darrell, "CyCADA: Cycle-consistent adversarial domain adaptation," arXiv preprint arXiv:1711.03213, 2017.
[41] Yu He, Kechen Song, Qinggang Meng, and Yunhui Yan, "An end-to-end steel surface defect detection approach via fusing multiple hierarchical features," IEEE Transactions on Instrumentation and Measurement, 69(4), pp. 1493-1504, 2020.
[42] Teledyne FLIR, "Blackfly GigE." [Online] https://www.flir.com/products/blackfly-gige/?model=BFLY-PGE-50S5C-C&vertical=machine+vision&segment=iis [Accessed 07 03 2022]
[43] Google Developers, "Imbalanced Data." [Online] https://developers.google.com/machine-learning/data-prep/construct/sampling-splitting/imbalanced-data [Accessed 07 03 2022]
[44] NVIDIA, "GeForce GTX 1080 Ti." [Online] https://www.nvidia.com/zh-tw/geforce/products/10series/geforce-gtx-1080-ti/ [Accessed 07 03 2022]
[45] PyTorch, "ResNet." [Online] https://pytorch.org/hub/pytorch_vision_resnet/ [Accessed 07 03 2022]
[46] Jindong Wang, "Transferlearning Project," Git repository. [Online] https://github.com/jindongwang/transferlearning [Accessed 07 03 2022]
[47] M. Sirsat (Data Science and Machine Learning blog), "What is Confusion Matrix and Advanced Classification Metrics?," April 2019. [Online] https://manishasirsat.blogspot.com/2019/04/confusion-matrix.html [Accessed 07 03 2022]
[48] 愷開, "淺談降維方法中的 PCA 與 t-SNE" (A brief look at the dimensionality-reduction methods PCA and t-SNE), July 2017. [Online] https://medium.com/d-d-mag/%E6%B7%BA%E8%AB%87%E5%85%A9%E7%A8%AE%E9%99%8D%E7%B6%AD%E6%96%B9%E6%B3%95-pca-%E8%88%87-t-sne-d4254916925b [Accessed 07 03 2022]
[49] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba, "Learning deep features for discriminative localization," arXiv preprint arXiv:1512.04150, 2015.
[50] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra, "Grad-CAM: Visual explanations from deep networks via gradient-based localization," arXiv preprint arXiv:1610.02391, 2016.
[51] Samitha Herath, Mehrtash Harandi, and Fatih Porikli, "Learning an invariant Hilbert space for domain adaptation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[52] Poojan Oza, Vishwanath A. Sindagi, Vibashan VS, and Vishal M. Patel, "Unsupervised domain adaptation of object detectors: A survey," arXiv preprint arXiv:2105.13502, 2021.
[53] P. P. Busto and J. Gall, "Open set domain adaptation," 2017 IEEE International Conference on Computer Vision (ICCV), pp. 754-763, 2017, doi: 10.1109/ICCV.2017.88.
[54] K. You, M. Long, Z. Cao, J. Wang, and M. I. Jordan, "Universal domain adaptation," 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2715-2724, 2019, doi: 10.1109/CVPR.2019.00283.
[55] Dong-Hyun Lee, "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks," ICML 2013 Workshop: Challenges in Representation Learning (WREPL), 2013. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86013 | - |
| dc.description.abstract | With the rapid development of convolutional neural networks in image inspection, academia and industry have in recent years worked to bring convolutional neural network technology into automated optical inspection (AOI) systems for industrial defect inspection, moving production lines toward smart manufacturing. For a new workpiece, however, an inspection model trained on similar past samples usually cannot be reused directly: differences in the samples themselves or in the optical imaging setup degrade its performance severely. The usual remedy is to relabel the new data and retrain, but training an inspection model requires a large amount of labeled defect samples, and the labels are mostly produced manually, which costs considerable time and labor.
In view of this, this study uses domain adaptation to build a defect classification model without labeling the new data to be inspected. The model is trained jointly on previously labeled source data that resembles the target data and on the unlabeled target data itself, so that the network can ultimately classify the target data successfully. Using wood veneer defect images of different wood species and fabric defect images of different color families from actual production lines, together with a public metal surface defect dataset, this study compares the inspection performance of domain adaptation networks across industrial applications, analyzes the features the networks extract in order to optimize the models, and distills from the results a suitable workflow for applying domain adaptation to industrial defect inspection. In the experiments, training the domain adaptation model DANN (Domain-Adversarial Neural Network) with ResNet50 as the feature extractor, assisted by entropy conditioning, classifies unlabeled defect images effectively. Compared with directly reusing a model trained on similar legacy data, classification accuracy improves from 52.96% to 84.93% on wood veneer defect images, from 22.58% to 73.75% on fabric defect images, and from 31.13% to 95.58% on metal surface defect images. Moreover, optimizing the choice of feature layers used for adaptation raises accuracy further: to 90.86% for wood veneer, 75.68% for fabric, and 96.22% for metal surface defects. With this workflow, a recognition model for unlabeled new data can be trained quickly, effectively saving labor and time. | zh_TW |
| dc.description.abstract | With the development of deep learning, convolutional neural networks have achieved outstanding performance in image detection. In recent years, academia and industry have worked to introduce convolutional neural network technology into automatic optical inspection production lines. However, because of differences in data characteristics and image acquisition methods, a model trained on similar former data recognizes defects on new target data poorly. Engineers therefore usually label the new target data manually and train a new defect recognition model, which incurs substantial labor costs.
In view of the above, this research builds the defect recognition model with domain adaptation, which trains a neural network on a labeled, similar source dataset and achieves good accuracy on the unlabeled new target dataset. Wood and textile defect images gathered from actual production lines and an open dataset of metal surface defect images, NEU-CLS, are used as the verification data. Different kinds of domain adaptation models are compared, and feature extraction layers are analyzed to optimize the model. Finally, a general process for training a defect recognition model with domain adaptation is organized. According to the results, a classification model trained by the DANN (Domain-Adversarial Neural Network) domain adaptation method with a ResNet50 backbone and entropy conditioning is effective at recognizing unlabeled defect images. Compared with directly reusing a model trained on similar legacy data, accuracy improves from 52.96% to 84.93% for the wood defect dataset, from 22.58% to 73.75% for the textile defect dataset, and from 31.13% to 95.58% for the metal surface defect dataset. In addition, accuracy is further increased by choosing proper feature extraction layers: to 90.86% for wood, 75.68% for textile, and 96.22% for metal surface defects. By following this process, an effective defect recognition model can be built without labeling new data; in other words, the time and labor costs of labeling can be significantly reduced. (Illustrative code sketches of this training scheme and of the t-SNE evaluation follow the metadata table below.) | en |
| dc.description.provenance | Made available in DSpace on 2023-03-19T23:32:52Z (GMT). No. of bitstreams: 1 U0001-1909202213335500.pdf: 4768648 bytes, checksum: 230c957de55e9ff6590500cbd904c06e (MD5) Previous issue date: 2022 | en |
| dc.description.tableofcontents | Table of Contents
Thesis Committee Certification I
Acknowledgements II
Abstract (Chinese) III
Abstract (English) IV
Table of Contents VI
List of Figures VIII
List of Tables XI
Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Motivation and Objectives 3
1.3 Research Methods 4
1.4 Thesis Organization 5
Chapter 2 Literature Review 6
2.1 Deep Learning for Image Recognition 6
2.2 Domain Adaptation 15
2.2.1 Transfer Learning 15
2.2.2 Discrepancy-Based Networks 18
2.2.3 Adversarial Networks 22
2.3 Feasibility Analysis of the Research Direction 27
Summary 27
Chapter 3 Experimental Procedure and Methods 28
3.1 Dataset Construction 28
3.1.1 Optical Image Acquisition 28
3.1.2 Defect Image Cropping 31
3.1.3 Building the Training Datasets 35
3.2 Classification Model Construction 38
3.2.1 Setting Up the Training Environment 38
3.2.2 Training the ResNet50 Model 40
3.2.3 Training the Domain Adaptation Models 41
3.2.4 Training the Multi-Discriminator Domain Adaptation Model 42
3.2.5 Hyperparameter Settings 44
3.3 Evaluation of Experimental Results 45
3.3.1 Confusion Matrix 45
3.3.2 t-SNE (t-Distributed Stochastic Neighbor Embedding) 46
3.3.3 Grad-CAM (Gradient-weighted Class Activation Mapping) 47
Summary 48
Chapter 4 Experimental Results and Discussion 49
4.1 ResNet50 Classification Results 49
4.2 Domain Adaptation Model Classification Results 51
4.2.1 Classification Accuracy 52
4.2.2 Visualization Analysis 56
4.3 Experiments on Changing the Discriminator's Adapted Feature Layers 65
Summary 68
Chapter 5 Conclusions and Future Work 69
5.1 Conclusions 69
5.2 Future Work 70
References 72
List of Figures
Figure 1-1 Growth of the AOI market [3] 1
Figure 1-2 Research flowchart 4
Figure 2-1 Artificial neural network [6] 6
Figure 2-2 Convolution operation [7] 6
Figure 2-3 LeNet architecture [4] 7
Figure 2-4 Comparison of Sigmoid and ReLU [10] 8
Figure 2-5 Illustration of image classification, object detection, and image segmentation [11] 8
Figure 2-6 VGG16 architecture 9
Figure 2-7 Inception module 10
Figure 2-8 Batch normalization 10
Figure 2-9 Residual block [15] 11
Figure 2-10 ResNet50 architecture 11
Figure 2-11 YOLO pipeline [16] 12
Figure 2-12 Difference between semantic segmentation and instance segmentation [20] 13
Figure 2-13 U-Net architecture [22] 14
Figure 2-14 Transfer learning example [25] 15
Figure 2-15 Transfer learning solutions [26] 16
Figure 2-16 Alignment of source- and target-domain data [28] 17
Figure 2-17 (a) Non-uniform alignment (b) Uniform alignment [29] 17
Figure 2-18 DDC architecture [28] 19
Figure 2-19 DAN architecture [30] 19
Figure 2-20 Deep CORAL architecture [33] 20
Figure 2-21 Subdomain concept [34] 21
Figure 2-22 DSAN architecture [34] 21
Figure 2-23 DANN architecture [29] 22
Figure 2-24 CDAN architecture [36] 23
Figure 2-25 Marginal alignment and local alignment [37] 24
Figure 2-26 DAAN architecture [37] 24
Figure 2-27 Intermediate domain concept [38] 25
Figure 2-28 GVB architecture [38] 26
Figure 3-1 BFLY-PGE-50S5C-C 29
Figure 3-2 Optical imaging setup 30
Figure 3-3 Image cropping tool interface 32
Figure 3-4 NEU-CLS defect examples [41] 37
Figure 3-5 GeForce GTX 1080 Ti [44] 38
Figure 3-6 PyTorch implementations of ResNet at different depths [45] 40
Figure 3-7 Basic discriminator from the GVB paper's open-source code 42
Figure 3-8 Multi-discriminator domain adaptation model architecture 43
Figure 3-9 Confusion matrix [47] 45
Figure 3-10 Visualization of the MNIST data distribution [48] 46
Figure 3-11 Attention heatmaps generated with Grad-CAM [50] 47
Figure 4-1 Office-31 example images [51] 55
Figure 4-2 Confusion matrix for white oak → walnut 56
Figure 4-3 Grad-CAM maps of correctly classified white oak → walnut cases 56
Figure 4-4 Grad-CAM maps and original images of misclassified white oak → walnut cases 57
Figure 4-5 Confusion matrix for walnut → white oak 57
Figure 4-6 Grad-CAM maps of correctly classified walnut → white oak cases 58
Figure 4-7 Grad-CAM maps and original images of misclassified walnut → white oak cases 58
Figure 4-8 t-SNE plots of the adapted feature layers for walnut → white oak 59
Figure 4-9 Confusion matrix for black fabric → white fabric 59
Figure 4-10 Grad-CAM maps of correctly classified black fabric → white fabric cases 60
Figure 4-11 Grad-CAM maps and original images of misclassified black fabric → white fabric cases 60
Figure 4-12 Confusion matrix for white fabric → black fabric 60
Figure 4-13 Grad-CAM maps of correctly classified white fabric → black fabric cases 61
Figure 4-14 Grad-CAM maps, original images, and other examples of misclassified white fabric → black fabric cases 61
Figure 4-15 t-SNE plots of the adapted feature layers for white fabric → black fabric 62
Figure 4-16 Confusion matrix for s200 → s64 62
Figure 4-17 Grad-CAM maps of correctly classified s200 → s64 cases 63
Figure 4-18 Grad-CAM maps and original images of misclassified s200 → s64 cases 63
Figure 4-19 Confusion matrix for s64 → s200 63
Figure 4-20 Grad-CAM maps of correctly classified s64 → s200 cases 64
Figure 4-21 t-SNE plots of the adapted feature layers for s200 → s64 64
List of Tables
Table 2-1 Speed comparison of R-CNN, Fast R-CNN, and Faster R-CNN [17, 18, 19] 13
Table 3-1 Comparison of area-scan and line-scan imaging 28
Table 3-2 Area-scan camera specifications [42] 29
Table 3-3 Number of defect sample images 31
Table 3-4 Number and examples of cropped wood veneer defect images 33
Table 3-5 Number and examples of cropped fabric defect images 34
Table 3-6 Number of images in the wood veneer dataset 36
Table 3-7 Number of images in the fabric dataset 36
Table 3-8 Number of images in the metal surface defect dataset 37
Table 3-9 GPU specifications [44] 39
Table 3-10 Computer hardware specifications 39
Table 3-11 Software environment configuration 40
Table 3-12 Information required for model training 41
Table 4-1 ResNet50 classification results on wood veneer defects 49
Table 4-2 ResNet50 classification results on fabric defects 49
Table 4-3 ResNet50 classification results on metal defects 50
Table 4-4 Data augmentation results for wood veneer 51
Table 4-5 Data augmentation results for fabric 51
Table 4-6 Domain adaptation model results on wood veneer defects 52
Table 4-7 Domain adaptation model results on fabric defects 53
Table 4-8 Domain adaptation model results on metal surface defects 54
Table 4-9 Multi-discriminator domain adaptation model results on wood veneer defects 65
Table 4-10 Multi-discriminator domain adaptation model results on fabric defects 66
Table 4-11 Multi-discriminator domain adaptation model results on metal surface defects 67 | - |
| dc.language.iso | zh_TW | - |
| dc.subject | Domain Adaptation | zh_TW |
| dc.subject | Artificial Intelligence | zh_TW |
| dc.subject | Smart Manufacturing | zh_TW |
| dc.subject | Defect Inspection | zh_TW |
| dc.subject | Deep Learning | zh_TW |
| dc.subject | Defect Inspection | en |
| dc.subject | Deep Learning | en |
| dc.subject | Artificial Intelligence | en |
| dc.subject | Smart Manufacturing | en |
| dc.subject | Domain Adaptation | en |
| dc.title | Applications of Domain Adaptation in Automatic Optical Inspection | zh_TW |
| dc.title | Applications of Domain Adaptation in Automatic Optical Inspection | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 110-2 | - |
| dc.description.degree | Master | - |
| dc.contributor.oralexamcommittee | 楊宏智;許志青;邱弘興 | zh_TW |
| dc.contributor.oralexamcommittee | Hong-Tsu Young;Hakiem Hsu;Horng-Shing Chiou | en |
| dc.subject.keyword | Smart Manufacturing, Artificial Intelligence, Deep Learning, Domain Adaptation, Defect Inspection | zh_TW |
| dc.subject.keyword | Smart Manufacturing, Artificial Intelligence, Deep Learning, Domain Adaptation, Defect Inspection | en |
| dc.relation.page | 78 | - |
| dc.identifier.doi | 10.6342/NTU202203562 | - |
| dc.rights.note | Authorization granted (open access worldwide) | - |
| dc.date.accepted | 2022-09-20 | - |
| dc.contributor.author-college | College of Engineering | - |
| dc.contributor.author-dept | Department of Mechanical Engineering | - |
| dc.date.embargo-lift | 2022-09-23 | - |
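The abstracts above describe training the DANN domain adaptation model with a ResNet50 feature extractor on labeled source images and unlabeled target images. As context for readers of this record, the following is a minimal PyTorch sketch of DANN's central mechanism, the gradient reversal layer [29, 39]. It is illustrative only and is not the thesis's actual code; the class and function names are assumptions made for this sketch.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer (GRL): identity on the forward pass,
    multiplies gradients by -lambda on the backward pass, so the
    feature extractor learns to confuse the domain discriminator."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the
        # backbone; the lambd argument itself needs no gradient.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```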
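Building on the `grad_reverse` helper above, one DANN training step with entropy conditioning might look roughly as follows. This is a hedged sketch, not the thesis's implementation: `num_classes`, the discriminator width, and the exact sample weighting (a simplified form of the entropy conditioning in [36]) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

num_classes = 5  # illustrative; set to the actual number of defect classes

backbone = models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()  # expose the 2048-d pooled features
classifier = torch.nn.Linear(2048, num_classes)
discriminator = torch.nn.Sequential(  # source-vs-target domain critic
    torch.nn.Linear(2048, 1024), torch.nn.ReLU(), torch.nn.Linear(1024, 1))

def dann_step(x_src, y_src, x_tgt, lambd=1.0):
    f_src, f_tgt = backbone(x_src), backbone(x_tgt)
    logits_src, logits_tgt = classifier(f_src), classifier(f_tgt)
    cls_loss = F.cross_entropy(logits_src, y_src)  # labels exist for source only

    # Adversarial branch through the gradient reversal layer
    # (grad_reverse is defined in the previous sketch).
    feats = grad_reverse(torch.cat([f_src, f_tgt]), lambd)
    d_logits = discriminator(feats).squeeze(1)
    d_labels = torch.cat([torch.ones(len(f_src)), torch.zeros(len(f_tgt))])

    # Entropy conditioning: down-weight uncertain (high-entropy) samples
    # in the domain-adversarial loss.
    probs = F.softmax(torch.cat([logits_src, logits_tgt]), dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    w = (1.0 + torch.exp(-entropy)).detach()
    dom_loss = (w * F.binary_cross_entropy_with_logits(
        d_logits, d_labels, reduction="none")).mean()

    return cls_loss + dom_loss
```

The design point worth noting: only the source batch contributes a supervised loss; the unlabeled target batch shapes training solely through the domain-adversarial term, which is what lets the model transfer without new labels.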
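Section 3.3 of the outline evaluates results with confusion matrices, t-SNE, and Grad-CAM. As a rough, hypothetical illustration of the t-SNE check in section 3.3.2 (again not the thesis's code), backbone features from both domains can be embedded in two dimensions to see whether adaptation pulls the source and target distributions together:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_domain_tsne(src_feats: np.ndarray, tgt_feats: np.ndarray,
                     out_path: str = "tsne.png") -> None:
    """Embed (N, D) feature arrays from both domains into 2-D with t-SNE
    and plot them; overlapping point clouds suggest well-aligned domains."""
    feats = np.vstack([src_feats, tgt_feats])
    emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(feats)
    n_src = len(src_feats)
    plt.scatter(emb[:n_src, 0], emb[:n_src, 1], s=5, label="source")
    plt.scatter(emb[n_src:, 0], emb[n_src:, 1], s=5, label="target")
    plt.legend()
    plt.savefig(out_path)
```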
| Appears in Collections: | Department of Mechanical Engineering |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-110-2.pdf | 4.66 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.