Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91713
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 李百祺 | zh_TW |
dc.contributor.advisor | Pai-Chi Li | en |
dc.contributor.author | 陳欣頤 | zh_TW |
dc.contributor.author | Xin-Yi Chen | en |
dc.date.accessioned | 2024-02-22T16:22:15Z | - |
dc.date.available | 2024-02-23 | - |
dc.date.copyright | 2024-02-22 | - |
dc.date.issued | 2024 | - |
dc.date.submitted | 2024-02-02 | - |
dc.identifier.citation | [1] P. Laugier and G. Haïat, "Introduction to the physics of ultrasound," Bone Quantitative Ultrasound, pp. 29-45, 2011.
[2] Y. F. M. Nosir et al., "The apical long-axis rather than the two-chamber view should be used in combination with the four-chamber view for accurate assessment of left ventricular volumes and function," European Heart Journal, vol. 18, no. 7, pp. 1175-1185, 1997, doi: 10.1093/oxfordjournals.eurheartj.a015414.
[3] D. Pasdeloup et al., "Real-Time Echocardiography Guidance for Optimized Apical Standard Views," Ultrasound in Medicine & Biology, vol. 49, no. 1, pp. 333-346, 2023, doi: 10.1016/j.ultrasmedbio.2022.09.006.
[4] C. Mitchell et al., "Guidelines for Performing a Comprehensive Transthoracic Echocardiographic Examination in Adults: Recommendations from the American Society of Echocardiography," J Am Soc Echocardiogr, vol. 32, no. 1, pp. 1-64, Jan. 2019, doi: 10.1016/j.echo.2018.06.004.
[5] C. B. Burckhardt, "Speckle in ultrasound B-mode scans," IEEE Transactions on Sonics and Ultrasonics, vol. 25, no. 1, pp. 1-6, 1978, doi: 10.1109/T-SU.1978.30978.
[6] S. Susan, P. Agrawal, M. Mittal, and S. Bansal, "New shape descriptor in the context of edge continuity," CAAI Transactions on Intelligence Technology, vol. 4, no. 2, pp. 101-109, 2019, doi: 10.1049/trit.2019.0002.
[7] Pai-Chi Li, Principles of Medical Ultrasound (醫用超音波原理), course notes. Available: https://sites.google.com/view/pai-chilislab/courses.
[8] Q. Li and C. He, "A New Thresholding Method in Wavelet Packet Analysis for Image De-noising," in 2006 International Conference on Mechatronics and Automation, June 2006, pp. 2074-2078, doi: 10.1109/ICMA.2006.257593.
[9] Y. Zhang, W. Ding, Z. Pan, and J. Qin, "Improved Wavelet Threshold for Image De-noising," Frontiers in Neuroscience, vol. 13, 2019, doi: 10.3389/fnins.2019.00039.
[10] S. Balocco, C. Gatta, O. Pujol, J. Mauri, and P. Radeva, "SRBF: Speckle reducing bilateral filtering," Ultrasound in Medicine & Biology, vol. 36, no. 8, pp. 1353-1363, Aug. 2010, doi: 10.1016/j.ultrasmedbio.2010.05.007.
[11] A. Buades, B. Coll, and J. M. Morel, "A non-local algorithm for image denoising," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), June 2005, vol. 2, pp. 60-65, doi: 10.1109/CVPR.2005.38.
[12] J. Wright, A. Ganesh, S. Rao, Y. Peng, and Y. Ma, "Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization," Advances in Neural Information Processing Systems, vol. 22, 2009.
[13] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," arXiv:1512.03385, doi: 10.48550/arXiv.1512.03385.
[14] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, "CBAM: Convolutional Block Attention Module," arXiv:1807.06521, doi: 10.48550/arXiv.1807.06521.
[15] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, "Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising," IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142-3155, 2017, doi: 10.1109/TIP.2017.2662206.
[16] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, "Extracting and composing robust features with denoising autoencoders," in Proceedings of the 25th International Conference on Machine Learning, 2008, pp. 1096-1103.
[17] O. Huang, W. Long, N. Bottenus, G. E. Trahey, S. Farsiu, and M. L. Palmeri, "MimickNet, Matching Clinical Post-Processing Under Realistic Black-Box Constraints," in 2019 IEEE International Ultrasonics Symposium (IUS), Oct. 2019, pp. 1145-1151, doi: 10.1109/ULTSYM.2019.8925597.
[18] L. Zhu, C. W. Fu, M. S. Brown, and P. A. Heng, "A Non-local Low-Rank Framework for Ultrasound Speckle Reduction," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017, pp. 493-501, doi: 10.1109/CVPR.2017.60.
[19] M. H. K. Ustuner, T.-L. Ji, P.-C. Li, and C. Cinbis, "Imaging system display processor," U.S. patent, 1996.
[20] P. Zhou, J. Zhang, M. Xue, and L. Bao, "Directional relative total variation for structure–texture decomposition," IET Image Processing, vol. 13, no. 11, pp. 1835-1845, 2019.
[21] R. Harrabi and E. B. Braiek, "Isotropic and anisotropic filtering techniques for image denoising: A comparative study with classification," in 2012 16th IEEE Mediterranean Electrotechnical Conference, Mar. 2012, pp. 370-374, doi: 10.1109/MELCON.2012.6196451.
[22] K. Z. Abd-Elmoniem, A. B. M. Youssef, and Y. M. Kadah, "Real-time speckle reduction and coherence enhancement in ultrasound imaging via nonlinear anisotropic diffusion," IEEE Transactions on Biomedical Engineering, vol. 49, no. 9, pp. 997-1014, 2002, doi: 10.1109/TBME.2002.1028423.
[23] Y. Yongjian and S. T. Acton, "Speckle reducing anisotropic diffusion," IEEE Transactions on Image Processing, vol. 11, no. 11, pp. 1260-1270, 2002, doi: 10.1109/TIP.2002.804276.
[24] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, 1990, doi: 10.1109/34.56205.
[25] K. Mei, B. Hu, B. Fei, and B. Qin, "Phase Asymmetry Ultrasound Despeckling With Fractional Anisotropic Diffusion and Total Variation," IEEE Transactions on Image Processing, vol. 29, pp. 2845-2859, 2020, doi: 10.1109/TIP.2019.2953361.
[26] L. Zhu, W. Wang, J. Qin, K.-H. Wong, K.-S. Choi, and P.-A. Heng, "Fast feature-preserving speckle reduction for ultrasound images via phase congruency," Signal Processing, vol. 134, pp. 275-284, 2017, doi: 10.1016/j.sigpro.2016.12.011.
[27] D. E. Ilea, C. Duffy, L. Kavanagh, A. Stanton, and P. F. Whelan, "Fully automated segmentation and tracking of the intima media thickness in ultrasound video sequences of the common carotid artery," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 60, no. 1, pp. 158-177, 2013, doi: 10.1109/TUFFC.2013.2547.
[28] S. Shah, P. Ghosh, L. S. Davis, and T. Goldstein, "Stacked U-Nets: A No-Frills Approach to Natural Image Segmentation," arXiv:1804.10343, doi: 10.48550/arXiv.1804.10343.
[29] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the Inception Architecture for Computer Vision," arXiv:1512.00567, doi: 10.48550/arXiv.1512.00567.
[30] V. Dumoulin and F. Visin, "A guide to convolution arithmetic for deep learning," arXiv:1603.07285, doi: 10.48550/arXiv.1603.07285.
[31] Y. Lan and X. Zhang, "Real-Time Ultrasound Image Despeckling Using Mixed-Attention Mechanism Based Residual UNet," IEEE Access, vol. 8, pp. 195327-195340, 2020, doi: 10.1109/ACCESS.2020.3034230.
[32] I. J. Goodfellow et al., "Generative Adversarial Networks," arXiv:1406.2661, doi: 10.48550/arXiv.1406.2661.
[33] J. Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks," in 2017 IEEE International Conference on Computer Vision (ICCV), Oct. 2017, pp. 2242-2251, doi: 10.1109/ICCV.2017.244. | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91713 | - |
dc.description.abstract | 超音波在醫學領域中被廣為使用,透過其成像技術能獲得人體組織內部的結構,然而超音波影像的固有特性,包括散斑雜訊(speckle noise)、低對比度(low contrast)、影像假影(artifacts)以及成像過程中的訊號損失(signal dropouts),使得超音波影像的分析變得更加複雜。其中散斑雜訊的存在使得影像的解析度和對比度受到了限制,降低臨床診斷價值,因此去除影像中的雜訊對於臨床研究是一個重要的先決條件;另外在許多臨床指標(射血分數等等)的計算上涉及左心室邊界的界定,而超音波的固有特徵使得心臟影像產生不清晰輪廓,左心室的分割任務難度提升。本論文將研究架構分為臨床非心臟超音波影像與心臟超音波影像,針對非心臟超音波影像改善影像的清晰度、雜訊的平滑度與邊緣連續性;針對心臟超音波影像增強對比度,同時增強心內膜邊界的可見度,以提高臨床診斷的準確性。本研究介紹了兩個用於影像後處理的深度學習的模型架構,分別為深度殘差神經網路(Deep Residual Net)與混合注意力殘差UNet (Mixed-Attention Based Residual UNet),兩個模型皆帶有殘差模塊,改善深層網路退化的問題。在各項評量指標中,深度殘差神經網路擁有較好的雜訊平滑能力和邊緣連續性,混合注意力殘差UNet則獲得較佳的影像清晰度、峰值訊噪比與結構相似度。在心臟超音波影像中,深度殘差神經網路能更好的去除雜訊,並且心內膜的部分也有較明顯的增強。此外,深度學習方法也解決了臨床上人工後處理調整參數的繁瑣過程與基於濾波設計的降噪方法在影像處理上面臨的耗時問題,兩個模型在GPU上後處理的時間效能上皆可以達到100Hz以上的效率,成功克服上述提及耗時且不便的劣勢,並且模型套用於資料集外的影像也達到了同樣的去噪水平,不論是深度殘差神經網路與混合注意力殘差UNet在影像評量指標的表現上都有達到與資料集影像相同的效果。考量模型的後處理性能與影像細節資訊的保留度,深度殘差神經網路為本篇論文最終選擇的後處理模型。 | zh_TW |
dc.description.abstract | Because it is non-invasive, economical, and portable, ultrasound is a widely used imaging modality in the medical field that can reveal internal anatomical structures. However, the inherent characteristics of ultrasound images, including speckle noise, low contrast, artifacts, and signal dropouts during the imaging process, complicate their analysis. Speckle noise in particular limits image resolution and contrast and degrades clinical diagnostic value, so removing it is an important prerequisite for clinical research. In addition, many clinical indexes (such as ejection fraction) depend on delineating the endocardial border; these same inherent characteristics blur contours in cardiac images and make left-ventricle segmentation more difficult. This thesis divides the research into general (non-cardiac) clinical ultrasound images and cardiac ultrasound images. For general images, the focus is on improving image sharpness, noise smoothness, and edge continuity. For cardiac images, the emphasis is on contrast enhancement while also improving the visibility of the endocardial border, which aids the calculation of clinical indicators such as left ventricular ejection fraction (LVEF), strain curves, and A4C GLS. Two deep learning models are introduced for image post-processing: a Deep Residual Net and a Mixed-Attention Based Residual UNet. Both incorporate residual blocks to address the degradation problem in deep networks. The assessments indicate that the Deep Residual Net achieves superior noise smoothness and edge continuity, while the Mixed-Attention Based Residual UNet achieves better image sharpness, PSNR, and SSIM. On cardiac ultrasound images, the Deep Residual Net removes noise well and clearly enhances the endocardial border. Both models also generalize to images outside the dataset, reaching the same denoising levels on out-of-dataset test images as on the training distribution.
The deep learning approach eliminates the tedious manual parameter tuning of typical clinical post-processing systems and overcomes the time-consuming nature of filter-based denoising methods: both models achieve post-processing frame rates above 100 Hz on a GPU. Considering despeckling performance and the preservation of image detail, the Deep Residual Net is the preferred post-processing model. | en |
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-02-22T16:22:15Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2024-02-22T16:22:15Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | 誌謝 i
中文摘要 ii
Abstract iii
目次 iv
圖次 vii
表次 x
第一章、緒論 1
1.1 超音波成像 1
1.1.1 簡介 1
1.1.2 散射 1
1.1.3 非心臟超音波影像 2
1.1.4 心臟超音波影像 2
1.2 影像品質 3
1.2.1 斑點雜訊 3
1.2.2 影像對比度 4
1.3 影像評量指標 5
1.3.1 影像清晰度 5
1.3.2 雜訊平滑度 5
1.3.3 邊緣連續性 6
1.3.4 峰值訊噪比 9
1.3.5 結構相似度 9
1.3.6 對比度 10
1.3.7 專業影像評估 10
1.4 文獻回顧 11
1.4.1 頻率濾波 11
1.4.2 局部影像濾波 11
1.4.3 非局部影像濾波 12
1.4.4 低秩優化 13
1.4.5 殘差學習 13
1.4.6 混合注意力機制 14
1.5 研究動機 16
1.6 研究目標 17
1.7 研究架構 17
第二章、基於濾波設計研究方法 18
2.1 非局部低秩方法 18
2.1.1 非局部低秩方法架構 18
2.1.2 非局部補丁選擇 18
2.1.3 低秩優化 22
2.1.4 迭代 24
2.1.5 參數設定 24
2.2 擴散方法 25
2.2.1 各向同性擴散 25
2.2.2 各向異性擴散 26
2.2.3 分數階各向異性擴散 27
2.3 對比度增強方法 30
2.3.1 手動ROI圈選方法 31
2.3.2 自動ROI圈選方法 32
第三章、深度學習研究方法 34
3.1 深度學習 34
3.1.1 深度學習影像處理 34
3.1.2 深度學習模型 34
3.2 模型架構 35
3.2.1 深度殘差神經網路 35
3.2.2 混合注意力殘差UNet 37
3.3 影像資料集 39
3.3.1 非心臟超音波影像資料集 39
3.3.2 心臟超音波影像資料集 40
3.4 資料擴增 40
3.5 模型訓練參數 42
第四章、研究結果 44
4.1 非心臟超音波影像後處理結果 44
4.1.1 濾波設計方法 44
4.1.2 深度學習方法 46
4.2 心臟超音波影像後處理結果 51
4.2.1 對比度增強方法 51
4.2.2 深度學習方法 54
第五章、討論 57
5.1 濾波設計方法與深度學習方法比較 57
5.2 深度學習模型參數 58
5.3 資料集外分佈效能 59
5.4 時間效能 64
5.4.1 模型架構 65
5.4.2 模型大小 65
5.4.3 硬體設備 66
5.4.4 影像尺寸 66
5.5 深度學習模型跨平台整合 67
5.5.1 跨平台整合流程 67
5.5.2 跨平台框架 67
5.5.3 模型跨平台轉換 67
5.5.4 開發環境 68
5.5.5 運行結果 69
第六章、結論與未來工作 72
6.1 結論 72
6.2 未來工作 73
6.2.1 未配對影像之深度學習模型 73
參考文獻 75 | - |
dc.language.iso | zh_TW | - |
dc.title | 深度學習方法實現超音波影像後處理 | zh_TW |
dc.title | Post-processing of Ultrasound Images Using Deep Learning Methods | en |
dc.type | Thesis | - |
dc.date.schoolyear | 112-1 | - |
dc.description.degree | 碩士 | - |
dc.contributor.oralexamcommittee | 劉建宏;鄭耿璽;郭柏齡;謝寶育 | zh_TW |
dc.contributor.oralexamcommittee | Jian-Hung Liu;Geng-Shi Jeng;Po-Ling Kuo;Bao-Yu Hsieh | en |
dc.subject.keyword | 斑點雜訊, 去斑, 影像後處理, 深度學習, 深度神經網絡, 殘差學習 | zh_TW |
dc.subject.keyword | speckle noise, despeckle, image post-processing, deep learning, deep neural network, residual learning | en |
dc.relation.page | 78 | - |
dc.identifier.doi | 10.6342/NTU202400228 | - |
dc.rights.note | 未授權 | - |
dc.date.accepted | 2024-02-06 | - |
dc.contributor.author-college | 電機資訊學院 | - |
dc.contributor.author-dept | 生醫電子與資訊學研究所 | - |
Appears in Collections: | 生醫電子與資訊學研究所
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-112-1.pdf (currently not authorized for public access) | 9.84 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
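The abstract above compares the two denoising networks using, among other metrics, the peak signal-to-noise ratio (PSNR, 峰值訊噪比 in the table of contents). As a minimal, hedged sketch of how that metric is conventionally computed for 8-bit images — not the thesis's actual evaluation code — PSNR can be derived from the mean squared error between a reference image and a processed image:

```python
import math

def psnr(reference, processed, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given here as flat lists of pixel intensities (illustrative only)."""
    if len(reference) != len(processed):
        raise ValueError("images must have the same number of pixels")
    # mean squared error between corresponding pixels
    mse = sum((r - p) ** 2 for r, p in zip(reference, processed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    # PSNR in decibels, relative to the maximum possible intensity
    return 10.0 * math.log10(max_val ** 2 / mse)

print(psnr([0, 128, 255], [0, 128, 255]))            # → inf (identical)
print(round(psnr([0, 128, 255], [1, 128, 255]), 2))  # → 52.9
```

A higher PSNR indicates the processed image is numerically closer to the reference; in the thesis's setting the reference would be the clean (despeckled) target and the processed image the network output.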