NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93417
Full metadata record (DC field: value, language)
dc.contributor.advisor: 李佳翰 (zh_TW)
dc.contributor.advisor: Jia-Han Li (en)
dc.contributor.author: 黃羿誌 (zh_TW)
dc.contributor.author: I-Chih Huang (en)
dc.date.accessioned: 2024-07-31T16:13:32Z
dc.date.available: 2024-08-01
dc.date.copyright: 2024-07-31
dc.date.issued: 2024
dc.date.submitted: 2024-07-22
dc.identifier.citation:
[1] Liang et al., “Growth and printability of multilayer phase defects on extreme ultraviolet mask blanks,” Journal of Vacuum Science & Technology B Microelectronics Processing and Phenomena, vol. 25, no. 6, Jun. 2007, doi: 10.1116/1.2779044.
[2] N. G. Orji et al., “Metrology for the next generation of semiconductor devices,” Nat. Electron, vol. 1, 2018, doi: 10.1038/s41928-018-0150-9.
[3] H. Kinoshita, T. Harada, Y. Nagata, T. Watanabe, and K. Midorikawa, “Development of EUV mask inspection system using high-order harmonic generation with a femtosecond laser,” Jpn. J. Appl. Phys., vol. 53, no. 8, p. 086701, Jul. 2014, doi: 10.7567/JJAP.53.086701.
[4] Y. Nagata, T. Harada, T. Watanabe, H. Kinoshita, and K. Midorikawa, “At wavelength coherent scatterometry microscope using high-order harmonics for EUV mask inspection,” Int. J. Extrem. Manuf., vol. 1, no. 3, p. 032001, Sep. 2019, doi: 10.1088/2631-7990/ab3b4e.
[5] P. Evanschitzky, N. Auth, T. Heil, C. Felix Hermanns, and A. Erdmann, “Mask defect detection with hybrid deep learning network,” Journal of Micro/Nanopatterning, Materials, and Metrology, vol. 20, no. 4, p. 041205, Oct. 2021, doi: 10.1117/1.JMM.20.4.041205.
[6] D. S. Woldeamanual, A. Erdmann, and A. Maier, “Application of deep learning algorithms for lithographic mask characterization,” in Computational Optics II, SPIE, May 2018, pp. 46–57. doi: 10.1117/12.2312478.
[7] M. Sugawara, A. Chiba, H. Yamanashi, H. Oizumi, and I. Nishiyama, “Attenuated phase-shift mask for line patterns in EUV lithography,” Microelectronic Engineering, vol. 67–68, pp. 10–16, Jun. 2003, doi: 10.1016/S0167-9317(03)00055-8.
[8] A. Rastegar and V. Jindal, “EUV mask defects and their removal,” vol. 8352, pp. 306–317, Feb. 2012, doi: 10.1117/12.923882.
[9] A. H. Guenther, L. S. Pedrotti, and C. Roychoudhuri, Fundamentals of Photonics. University of Connecticut, 2000.
[10] Z. H. Mohammed, “The Fresnel coefficient of thin film multilayer using transfer matrix method TMM,” IOP Conf. Ser.: Mater. Sci. Eng., vol. 518, no. 3, p. 032026, May 2019, doi: 10.1088/1757-899X/518/3/032026.
[11] E. Hecht, Optics, 5th ed. England: Pearson Education Limited, 2017.
[12] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015, doi: 10.1038/nature14539.
[13] V. Sze, Y.-H. Chen, T.-J. Yang, and J. Emer, “Efficient processing of deep neural networks: A tutorial and survey,” Proceedings of the IEEE, vol. 105, no. 12, pp. 2295–2329, Aug. 2017.
[14] D. Brezak, T. Bacek, D. Majetic, J. Kasac, and B. Novakovic, “A comparison of feed-forward and recurrent neural networks in time series forecasting,” presented at the 2012 IEEE Conference on Computational Intelligence for Financial Engineering and Economics, CIFEr 2012 - Proceedings, Mar. 2012, pp. 1–6. doi: 10.1109/CIFEr.2012.6327793.
[15] R. Chauhan, K. K. Ghanshala, and R. C. Joshi, “Convolutional neural network (CNN) for image detection and recognition,” in 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC), Dec. 2018, pp. 278–282. doi: 10.1109/ICSCCC.2018.8703316.
[16] R. M. Cichy and D. Kaiser, “Deep neural networks as scientific models,” Trends Cogn Sci, vol. 23, no. 4, pp. 305–317, Apr. 2019, doi: 10.1016/j.tics.2019.01.009.
[17] D. Bhatt et al., “CNN variants for computer vision: history, architecture, application, challenges and future scope,” Electronics, vol. 10, no. 20, Art. no. 20, Jan. 2021, doi: 10.3390/electronics10202470.
[18] K. Hara, D. Saito, and H. Shouno, “Analysis of function of rectified linear unit used in deep learning,” in IEEE Xplore, Killarney, Ireland: IEEE, Jul. 2015, pp. 1–8. doi: 10.1109/IJCNN.2015.7280578.
[19] T. P. Lillicrap, A. Santoro, L. Marris, C. J. Akerman, and G. Hinton, “Backpropagation and the brain,” Nat Rev Neurosci, vol. 21, no. 6, pp. 335–346, Jun. 2020, doi: 10.1038/s41583-020-0277-3.
[20] K.-Y. Cho et al., “The analysis of EUV mask defects using a wafer defect inspection system,” in Extreme Ultraviolet (EUV) Lithography, SPIE, Mar. 2010, pp. 470–484. doi: 10.1117/12.846482.
[21] P. He, J. Liu, H. Gu, J. Zhu, H. Jiang, and S. Liu, “EUV mask model based on modified Born series,” Opt. Express, OE, vol. 31, no. 17, pp. 27797–27809, Aug. 2023, doi: 10.1364/OE.498260.
[22] N. Davydova et al., “Imaging performance improvements by EUV mask stack optimization,” presented at the 27th European Mask and Lithography Conference, Dresden, Germany: Society of Photo-Optical Instrumentation Engineers (SPIE), Apr. 2011, pp. 319–331. doi: 10.1117/12.884504.
[23] 陳學禮, 鄭旭君, 洪鶯玲, 朱鐵吉, 「用於次25奈米極紫外光曝光機之新穎反射型衰減式相位移光罩」, 科儀新知, vol. 27, no. 2, pp. 24–29, Oct. 2005, doi: 10.29662/IT.200510.0003.
[24] C. Saravanan, “Color image to grayscale image conversion,” in 2010 Second International Conference on Computer Engineering and Applications, Mar. 2010, pp. 196–199. doi: 10.1109/ICCEA.2010.192.
[25] Z. Xu, X. Baojie, and W. Guoxin, “Canny edge detection based on Open CV,” in 2017 13th IEEE International Conference on Electronic Measurement & Instruments (ICEMI), Oct. 2017, pp. 53–56. doi: 10.1109/ICEMI.2017.8265710.
[26] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA: IEEE, Jun. 2016, pp. 770–778. doi: 10.1109/CVPR.2016.90.
[27] S. Wang, W. Li, Y. Wang, Y. Jiang, S. Jiang, and R. Zhao, “An improved difference of gaussian filter in face recognition,” Journal of Multimedia, vol. 7, Dec. 2012, doi: 10.4304/jmm.7.6.429-433.
[28] P. Mohanaiah, P. Sathyanarayana, and L. GuruKumar, “Image texture feature extraction using GLCM approach,” vol. 3, no. 5, 2013.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93417
dc.description.abstract: In semiconductor manufacturing, the primary function of a photomask is to transfer circuit patterns onto the wafer, so the mask should ideally be free of defects. As processes become finer and device dimensions shrink, existing mask inspection methods may run into difficulties. Artificial intelligence is therefore seen as one way to address these problems: its strength lies in learning and extracting patterns from large amounts of data to accomplish the task of detecting mask defects.
Mask defects are currently divided into absorber-layer, multilayer, and substrate defects, and this thesis focuses on absorber-layer defects, of which there are four types: extrusion, intrusion, oversize, and undersize. The central idea of this work is to use deep learning to infer the type and relative position of an EUV mask defect from the far-field intensity. Because each defect type corresponds to different intensity information, the intensity matrix is converted into line graphs and deep learning is applied with computer-vision methods, rather than reconstructing the mask defect through an inverse Fourier transform or other algorithms, since in the Coherent Scatterometry Microscope (CSM) architecture the detection camera records only intensity and cannot capture the phase signal. At present, only single absorber-layer defects have been simulated with the finite-difference time-domain method. We hope this approach can reveal the type and position of a defect, facilitating subsequent mask-defect removal. The current model can distinguish three defect types and classify a defect's relative position; it achieves high accuracy for the relative positions of intrusion, extrusion, and undersize defects, but only about 50% accuracy for oversize defects. (zh_TW)
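The abstract above describes converting the simulated far-field intensity matrix into line graphs that a computer-vision model can classify. The snippet below is a minimal illustrative sketch, not the thesis's actual preprocessing code: it renders each row of a hypothetical intensity matrix as a curve in a small image. NumPy/Matplotlib, the 64x64 matrix size, the output resolution, and the function name are all assumptions.

```python
# Minimal sketch (illustrative assumption, not the thesis's code): render a
# far-field intensity matrix as a line-graph image for a vision model.
import numpy as np
import matplotlib
matplotlib.use("Agg")          # render off-screen, no display needed
import matplotlib.pyplot as plt

def intensity_to_line_graph(intensity: np.ndarray, out_path: str) -> None:
    """Plot each row of the far-field intensity matrix as one curve."""
    fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)  # small square image
    for row in intensity:                                  # one curve per detector row
        ax.plot(row, linewidth=0.5, color="black")
    ax.axis("off")                                         # keep only the curves
    fig.savefig(out_path, bbox_inches="tight", pad_inches=0)
    plt.close(fig)

if __name__ == "__main__":
    fake_intensity = np.random.rand(64, 64)                # placeholder for FDTD output
    intensity_to_line_graph(fake_intensity, "defect_line_graph.png")
```

The idea, per the abstract, is that each defect type and position produces a distinct family of curves, which the network then learns to separate.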
dc.description.abstract: In semiconductor manufacturing, the primary function of masks is to transfer circuit patterns onto wafers, so a defect-free mask is essential. As processes become more intricate and component sizes decrease, mask inspection becomes increasingly challenging. Artificial intelligence is therefore a promising way to address these problems, owing to its ability to learn and extract patterns from vast amounts of data for the task of detecting mask defects. Mask defects are categorized into absorber-layer, multilayer, and substrate defects; the present focus is on defects in the absorber layer.
Four types of defects are identified in the absorber layer: extrusions, intrusions, oversize, and undersize. This research centers on using deep learning to infer the types and relative positions of EUV mask defects from the far-field intensity. By exploiting the distinct intensity information corresponding to each defect type, the intensity matrix is transformed into line graphs, and deep learning is then applied with computer-vision techniques, without relying on inverse Fourier transforms or other algorithms to deduce the mask defects, since only intensity can be captured by the CCD in the CSM architecture. Currently, single absorber-layer defects are simulated using the finite-difference time-domain method. This method is expected to enable direct identification of defect types and locations, facilitating subsequent defect removal. The current model can distinguish three types of defects and classify their relative positions, with higher accuracy for intrusion, extrusion, and undersize defects; for oversize defects, however, the accuracy is currently only 50%. (en)
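For the classification stage, the thesis keywords (below) list ResNet. The following is a minimal sketch, assuming PyTorch/torchvision, of how a residual network with two output heads (defect type and relative position) could be wired up; the ResNet-18 backbone, the two-head layout, and the class counts (four defect types, nine coarse position bins) are illustrative assumptions, not the thesis's exact architecture.

```python
# Minimal sketch (illustrative assumption): ResNet-18 backbone with separate
# heads for defect type and coarse relative position.
import torch
import torch.nn as nn
from torchvision import models

class DefectClassifier(nn.Module):
    def __init__(self, n_types: int = 4, n_positions: int = 9):
        super().__init__()
        backbone = models.resnet18(weights=None)        # train from scratch on line-graph images
        feat_dim = backbone.fc.in_features              # 512 for ResNet-18
        backbone.fc = nn.Identity()                     # strip the ImageNet classification head
        self.backbone = backbone
        self.type_head = nn.Linear(feat_dim, n_types)       # extrusion/intrusion/oversize/undersize
        self.pos_head = nn.Linear(feat_dim, n_positions)    # coarse relative-position bins

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)
        return self.type_head(feats), self.pos_head(feats)

if __name__ == "__main__":
    model = DefectClassifier()
    dummy = torch.randn(1, 3, 224, 224)                 # one line-graph image
    type_logits, pos_logits = model(dummy)
    print(type_logits.shape, pos_logits.shape)          # torch.Size([1, 4]) torch.Size([1, 9])
```

A cross-entropy loss on each head, summed, would be one straightforward way to train such a model; the thesis may use a different head design or loss.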
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-07-31T16:13:32Z. No. of bitstreams: 0 (en)
dc.description.provenance: Made available in DSpace on 2024-07-31T16:13:32Z (GMT). No. of bitstreams: 0 (en)
dc.description.tableofcontents:
Acknowledgements i
Chinese Abstract ii
English Abstract iii
Research Contributions v
Table of Contents vi
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Literature Review 1
1.3 Motivation 5
1.4 Thesis Organization 6
Chapter 2 Research Methods and Theory 7
2.1 Types of Mask Defects 8
2.2 Principles of EUV Masks 9
2.3 Deep Learning 15
Chapter 3 Simulation Framework 20
3.1 Finite-Difference Time-Domain Method 20
3.2 Deep Learning Model 25
Chapter 4 Results and Discussion 33
4.1 FDTD Results 33
4.2 Deep Learning Results 39
Chapter 5 Conclusions and Future Work 44
5.1 Conclusions 44
5.2 Future Work 44
References 46
dc.language.iso: zh_TW
dc.title: 基於深度學習之遠場強度辨識應用在極紫外光光罩缺陷檢測 (zh_TW)
dc.title: Far-Field Intensity Recognition Based on Deep Learning for EUV Mask Defect Inspection (en)
dc.type: Thesis
dc.date.schoolyear: 112-2
dc.description.degree: Master's (碩士)
dc.contributor.oralexamcommittee: 蔡坤諭; 蕭惠心; 李昭德; 陳詩雯 (zh_TW)
dc.contributor.oralexamcommittee: Kuen-Yu Tsai; Hui-Hsin Hsiao; Chao-Te Lee; Shih-Wen Chen (en)
dc.subject.keyword: 殘差網路, 深度學習, 吸收層缺陷, 線型圖, 單一缺陷 (zh_TW)
dc.subject.keyword: ResNet, deep learning, absorber defects, line diagram, single defect (en)
dc.relation.page: 49
dc.identifier.doi: 10.6342/NTU202401828
dc.rights.note: Not authorized for public access (未授權)
dc.date.accepted: 2024-07-23
dc.contributor.author-college: College of Engineering (工學院)
dc.contributor.author-dept: Department of Engineering Science and Ocean Engineering (工程科學及海洋工程學系)
Appears in Collections: Department of Engineering Science and Ocean Engineering

Files in This Item:
File: ntu-112-2.pdf (currently not authorized for public access), Size: 2.59 MB, Format: Adobe PDF