Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91506

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 李家岩 | zh_TW |
| dc.contributor.advisor | Chia-Yen Lee | en |
| dc.contributor.author | 蕭瑞昕 | zh_TW |
| dc.contributor.author | Jui-Hsin Hsiao | en |
| dc.date.accessioned | 2024-01-28T16:18:23Z | - |
| dc.date.available | 2024-01-29 | - |
| dc.date.copyright | 2024-01-27 | - |
| dc.date.issued | 2023 | - |
| dc.date.submitted | 2023-08-09 | - |
| dc.identifier.citation | Armanious, K., Jiang, C., Fischer, M., Küstner, T., Hepp, T., Nikolaou, K., Gatidis, S., and Yang, B. (2020). Medgan: Medical image translation using gans. Computerized Medical Imaging and Graphics, 79:101684. Chen, J., Chen, J., Chao, H., and Yang, M. (2018). Image blind denoising with generative adversarial network based noise modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Chen, Y. and Pock, T. (2017). Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6):1256–1272. Dabov, K., Foi, A., Katkovnik, V., and Egiazarian, K. (2007). Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080–2095. Gao, T., Guo, Y., Zheng, X., Wang, Q., and Luo, X. (2019). Moiré pattern removal with multi-scale feature enhancing network. In 2019 IEEE International Conference on Multimedia Expo Workshops (ICMEW), pages 240–245. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2020). Generative adversarial networks. Commun. ACM, 63(11):139–144. He, B., Wang, C., Shi, B., and Duan, L. (2019). Mop moiré patterns using mopnet. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 2424–2432. Heinrich, M. P., Stille, M., and Buzug, T. M. (2018). Residual u-net convolutional neural network architecture for low-dose ct denoising. Current Directions in Biomedical Engineering, 4(1):297–300. Kim, D.-W., Ryun Chung, J., and Jung, S.-W. (2019). GRDN: Grouped residual dense network for real image denoising and gan-based real world noise modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. Kim, J.-H., Kong, K., and Kang, S.-J. (2022). Image demoireing via u net for detection of display defects. IEEE Access, 10:68645–68654. Lebrun, M., Colom, M., and Morel, J.-M. (2015). Multiscale image blind denoising. IEEE Transactions on Image Processing, 24(10):3149–3161. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., and Tian, Q. (2020). Wavelet-based dual-branch network for image demoireing. Liu, X., Tanaka, M., and Okutomi, M. (2013). Single-image noise level estimation for blind denoising. IEEE Transactions on Image Processing, 22(12):5226–5237. Lu, H.-P., Su, C.-T., Yang, S.-Y., and Lin, Y.-P. (2020). Combination of convolutional and generative adversarial networks for defect image demoireing of thin-film transistor liquid-crystal display image. IEEE Transactions on Semiconductor Manufacturing, 33(3):413–423. Pang, Y., Lin, J., Qin, T., and Chen, Z. (2022). Image-to-image translation: Methods and applications. IEEE Transactions on Multimedia, 24:3859–3881. Park, H., Vien, A. G., Koh, Y. J., and Lee, C. (2021). Unpaired image demoiréing based on cyclic moiré learning. In 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pages 146–150. Qian, R., Tan, R. T., Yang, W., Su, J., and Liu, J. (2018). Attentive generative adversarial network for raindrop removal from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Ronneberger, O., Fischer, P., and Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. Simonyan, K. and Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. Sun, Y., Yu, Y., and Wang, W. (2018). Moiré photo restoration using multiresolution convolutional neural networks. IEEE Transactions on Image Processing, 27(8):4160–4172. Wang, Z., She, Q., and Ward, T. E. (2021). Generative adversarial networks in computer vision: A survey and taxonomy. ACM Comput. Surv., 54(2). Wei, Z., Wang, J., Nichol, H., Wiebe, S., and Chapman, D. (2012). A median-gaussian filtering framework for moiré pattern noise removal from x-ray microscopy image. Micron, 43(2):170–176. Yan, H., Chen, X., Tan, V. Y. F., Yang, W., Wu, J., and Feng, J. (2020). Unsupervised image noise modeling with self-consistent gan. Yang, J., Zhang, X., Cai, C., and Li, K. (2017). Demoiréing for screen-shot images with multi-channel layer decomposition. In 2017 IEEE Visual Communications and Image Processing (VCIP), pages 1–4. Yuan, S., Timofte, R., Leonardis, A., and Slabaugh, G. (2020). Ntire 2020 challenge on image demoireing: Methods and results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. Yuan, S., Timofte, R., Slabaugh, G., Leonardis, A., Zheng, B., Ye, X., Tian, X., Chen, Y., Cheng, X., Fu, Z., Yang, J., Hong, M., Lin, W., Yang, W., Qu, Y., Shin, H.-K., Kim, J.-Y., Ko, S.-J., Dong, H., Guo, Y., Wang, J., Ding, X., Han, Z., Das, S. D., Purohit, K., Kandula, P., Suin, M., and Rajagopalan, A. N. (2019). Aim 2019 challenge on image demoireing: Methods and results. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 3534–3545. Zhang, K., Zuo, W., Chen, Y., Meng, D., and Zhang, L. (2017). Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155. Zheng, B., Yuan, S., Slabaugh, G., and Leonardis, A. (2020). Image demoireing with learnable bandpass filters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91506 | - |
| dc.description.abstract | 在TFT-LCD面板的製造過程中,灰塵、顆粒或設備參數的變化可能使面板產生瑕疵,導致面板亮度差、對比度低。為了找出有缺陷的面板,自動光學檢測(AOI)技術常被用來拍攝圖像並應用基於機器學習的圖像識別方法來檢測和分類缺陷。然而,當相機用於捕捉面板的圖像時,摩爾波紋會扭曲缺陷的外觀,從而難以準確分類瑕疵面板,特別是雲紋缺陷(Mura defect)。因此,在捕獲面板圖像後去除摩爾波紋成為一個關鍵問題。 TFT-LCD 領域中有關摩爾圖案去除的現有文獻通常仰賴成對的數據集(一張帶有摩爾波紋的圖片,有其不帶有摩爾波紋的乾淨版本)來進行訓練,而這在實際製造環境中很難獲得。在本研究中,我們提出了一種無需成對數據即可解決 TFT-LCD 面板摩爾圖案去除問題的新方法。我們的方法利用自洽生成對抗網絡(SC-GAN)和 U-Net 作為生成器,無需成對數據即可實現摩爾波紋的去除。具體來說,我們的 SC-GAN 模型採用對抗性訓練框架,並結合各種損失函數來指導訓練過程。實驗結果透過峰值信噪比(PSNR)和結構相似指數度量(SSIM)兩個指標來將我們提出的模型和其他經典監督去噪模型進行比較。實驗表明,我們提出的模型可以有效地去除缺陷圖像中的摩爾波紋。 | zh_TW |
| dc.description.abstract | During the manufacturing of TFT-LCD panels, dust, particles, or variability in equipment parameters may result in defects, which cause the panels to have poor brightness and low contrast. To identify defective panels, Automated Optical Inspection (AOI) techniques are commonly employed to capture images, and machine learning-based image recognition methods are then applied to detect and classify defects. However, when a camera is used to capture images of the panels, Moiré patterns can distort the appearance of defects, making it difficult to accurately determine defect types, particularly mura defects. Therefore, removing the Moiré pattern after capturing the panel images becomes a critical problem. Previous studies on Moiré pattern removal in the TFT-LCD field often rely on paired datasets (an image with a Moiré pattern paired with its Moiré-free counterpart) to train the model, which are difficult to obtain in practical manufacturing settings. This study proposes a novel method to address Moiré pattern removal for TFT-LCD panels without the need for paired data. Our approach leverages a Self-Consistent Generative Adversarial Network (SC-GAN) with U-Net as the generator to achieve Moiré pattern removal without paired data. Specifically, our SC-GAN model employs an adversarial training framework and incorporates various loss functions to guide the training process. The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were used to compare our proposed model with other classical supervised denoising models. The experiments show that our proposed model can effectively remove Moiré patterns from defect images. (See the illustrative PSNR/SSIM sketch after this metadata record.) | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-01-28T16:18:23Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-01-28T16:18:23Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 誌謝 i 摘要 iii Abstract iv Contents vi List of Figures viii List of Tables x 1 Introduction 1 1.1 Background and Motivation 1 1.2 Research Objective 4 1.3 Thesis Architecture 5 2 Literature Review 8 2.1 Image Denoising and Demoiréing 8 2.2 Image Blind Denoising 12 2.3 GAN 13 2.4 U-Net 14 3 Methodology 17 3.1 Problem Description 17 3.2 Research Framework 18 3.3 Model Architecture 22 3.3.1 DnCNN 22 3.3.2 U-Net 23 3.3.3 Discriminator 24 4 Experiments 26 4.1 Experimental Settings 26 4.1.1 Dataset Collection 26 4.1.2 Training Process 29 4.2 Evaluation 32 4.2.1 Evaluation Metrics 32 4.2.2 Experimental Results 33 5 Conclusion and Future Works 41 5.1 Conclusion 41 5.2 Future Works 42 Bibliography 44 | - |
| dc.language.iso | en | - |
| dc.subject | 生成式對抗網路 | zh_TW |
| dc.subject | 薄膜電晶體液晶顯示器 | zh_TW |
| dc.subject | U網路 | zh_TW |
| dc.subject | 盲去噪 | zh_TW |
| dc.subject | 摩爾波紋去除 | zh_TW |
| dc.subject | U-Net | en |
| dc.subject | Demoiréing | en |
| dc.subject | Image blind denoising | en |
| dc.subject | Generative adversarial network | en |
| dc.subject | TFT-LCD | en |
| dc.title | 通過自洽生成對抗網路進行無監督圖像去噪以檢測面板缺陷 | zh_TW |
| dc.title | Unsupervised Image Demoiréing via Self-Consistent GAN for Detection of TFT-LCD Defects | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 111-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 陳建錦;陳以錚;藍俊宏 | zh_TW |
| dc.contributor.oralexamcommittee | Chien-Chin Chen;Yi-Cheng Chen;Jakey Blue | en |
| dc.subject.keyword | 摩爾波紋去除,盲去噪,生成式對抗網路,薄膜電晶體液晶顯示器,U網路 | zh_TW |
| dc.subject.keyword | Demoiréing,Image blind denoising,Generative adversarial network,TFT-LCD,U-Net | en |
| dc.relation.page | 47 | - |
| dc.identifier.doi | 10.6342/NTU202303852 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2023-08-12 | - |
| dc.contributor.author-college | 管理學院 | - |
| dc.contributor.author-dept | 資訊管理學系 | - |
| dc.date.embargo-lift | 2026-08-31 | - |
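
The abstract above reports demoiréing quality with PSNR and SSIM. As a rough, self-contained illustration only (not code from the thesis), the following Python sketch computes PSNR and a simplified global, non-windowed SSIM between a restored image and its clean reference; the function names, the 8-bit `data_range` default, and the omission of the usual sliding-window averaging are assumptions made for brevity.

```python
import numpy as np


def psnr(clean: np.ndarray, restored: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE), in dB."""
    mse = np.mean((clean.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)


def ssim_global(clean: np.ndarray, restored: np.ndarray, data_range: float = 255.0) -> float:
    """Simplified SSIM evaluated over the whole image (the standard metric
    averages the same expression over local windows), using the usual
    constants C1 = (0.01 * L)^2 and C2 = (0.03 * L)^2."""
    x = clean.astype(np.float64)
    y = restored.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2.0 * mu_x * mu_y + c1) * (2.0 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )


if __name__ == "__main__":
    # Toy example: a synthetic "clean" grayscale image and a noisy version
    # standing in for a demoiréd model output.
    rng = np.random.default_rng(0)
    clean = rng.integers(0, 256, size=(128, 128)).astype(np.float64)
    restored = np.clip(clean + rng.normal(0.0, 10.0, clean.shape), 0, 255)
    print(f"PSNR: {psnr(clean, restored):.2f} dB")
    print(f"SSIM (global approximation): {ssim_global(clean, restored):.4f}")
```

In practice, the windowed implementations in scikit-image (`skimage.metrics.peak_signal_noise_ratio` and `skimage.metrics.structural_similarity`) would typically be preferred over this global approximation.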
| Appears in Collections: | Department of Information Management |
Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-111-2.pdf (available online after 2026-08-31) | 6.41 MB | Adobe PDF |
