Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90289
Full metadata record (listed as DC field: value (language))
dc.contributor.advisor: 吳文超 (zh_TW)
dc.contributor.advisor: Wen-Chau Wu (en)
dc.contributor.author: 吳蒨樺 (zh_TW)
dc.contributor.author: Qian-Hua Wu (en)
dc.date.accessioned: 2023-09-26T16:06:38Z
dc.date.available: 2023-11-10
dc.date.copyright: 2023-09-26
dc.date.issued: 2023
dc.date.submitted: 2023-08-10
dc.identifier.citation:
Dong, C., C.C. Loy, K. He, and X. Tang, Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans Pattern Anal Mach Intell, 2016. 38(2): p. 295-307.
Goodfellow, I.J., et al., Generative adversarial nets, in Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2. 2014, MIT Press: Montreal, Canada. p. 2672–2680.
Ledig, C., et al., Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. arXiv e-prints, 2016: p. arXiv:1609.04802.
He, K., X. Zhang, S. Ren, and J. Sun, Deep Residual Learning for Image Recognition. arXiv e-prints, 2015: p. arXiv:1512.03385.
Pham, C.H., et al., Multiscale brain MRI super-resolution using deep 3D convolutional networks. Comput Med Imaging Graph, 2019. 77: p. 101647.
Lyu, Q., et al., Multi-Contrast Super-Resolution MRI Through a Progressive Network. IEEE Trans Med Imaging, 2020. 39(9): p. 2738-2749.
Gulrajani, I., et al., Improved Training of Wasserstein GANs. arXiv e-prints, 2017: p. arXiv:1704.00028.
Shan, H., et al., 3-D Convolutional Encoder-Decoder Network for Low-Dose CT via Transfer Learning From a 2-D Trained Network. IEEE Trans Med Imaging, 2018. 37(6): p. 1522-1534.
Gatys, L.A., A.S. Ecker, and M. Bethge. Image Style Transfer Using Convolutional Neural Networks. in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016.
Sajjadi, M.S.M., B. Schölkopf, and M. Hirsch, EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis. arXiv e-prints, 2016: p. arXiv:1612.07919.
Li, Z., et al., DeepVolume: Brain Structure and Spatial Connection-Aware Network for Brain MRI Super-Resolution. IEEE Trans Cybern, 2021. 51(7): p. 3441-3454.
Çiçek, Ö., et al., 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. arXiv e-prints, 2016: p. arXiv:1606.06650.
Tao, X., et al., Detail-revealing Deep Video Super-resolution. arXiv e-prints, 2017: p. arXiv:1704.02738.
Zhang, K., et al., SOUP-GAN: Super-Resolution MRI Using Generative Adversarial Networks. Tomography, 2022. 8(2): p. 905-919.
Chen, Y., et al., Efficient and Accurate MRI Super-Resolution using a Generative Adversarial Network and 3D Multi-Level Densely Connected Network. arXiv e-prints, 2018: p. arXiv:1803.01417.
Chen, Y., et al., Brain MRI Super Resolution Using 3D Deep Densely Connected Neural Networks. arXiv e-prints, 2018: p. arXiv:1801.02728.
Blasche, M. and C. Forman. Compressed Sensing – the Flowchart. 2016; Available from: https://www.mriquestions.com/uploads/3/4/5/7/34572113/siemens_mri_magnetom-world_compressed-sensing_compressed-sensing-flowchart_blasche-03520147.pdf.
Canny, J., A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell, 1986. 8(6): p. 679-98.
Krishna Pandey, R., N. Saha, S. Karmakar, and A.G. Ramakrishnan, MSCE: An edge preserving robust loss function for improving super-resolution algorithms. arXiv e-prints, 2018: p. arXiv:1809.00961.
Rolls, E.T., et al., Automated anatomical labelling atlas 3. Neuroimage, 2020. 206: p. 116189.
Zhao, J., M. Mathieu, and Y. LeCun, Energy-based Generative Adversarial Network. arXiv e-prints, 2016: p. arXiv:1609.03126.
Simonyan, K. and A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv e-prints, 2014: p. arXiv:1409.1556.
La Rosa, F., et al., Multiple sclerosis cortical and WM lesion segmentation at 3T MRI: a deep learning method based on FLAIR and MP2RAGE. NeuroImage: Clinical, 2020. 27: p. 102335.
Sanchez, I. and V. Vilaplana, Brain MRI super-resolution using 3D generative adversarial networks. arXiv e-prints, 2018: p. arXiv:1812.11440.
Lin, J.-Y., Y.-C. Chang, and W. Hsu, Efficient and Phase-Aware Video Super-Resolution for Cardiac MRI. 2020. p. 66-76.
Nasser, S.A., et al., Perceptual cGAN for MRI Super-resolution. 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 2022: p. 3035-3038.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90289
dc.description.abstract (zh_TW):
磁振造影具有無輻射傷害以及軟組織高解析度等優點,但其成像時間長,病患可能會因運動雜訊造成影像品質降低,因此,若是能夠縮短掃描時間,便能減輕病患的負擔並降低成本。超解析度成像是一種提高影像解析度的技術,其優點在於不需額外硬體設備的支援,且近年來基於圖形運算單元發展,應用於磁振造影上的相關研究持續增加。而本研究結合超解析度成像與深度學習技術,將低解析度影像重建為高解析度影像,即使在短時間內進行掃描,獲得較低解析度的影像,也能透過超解析深度學習技術重建,從而實現造影流程的加速,並提升影像品質。
研究重點著重於模型架構的調整、殘差學習、位置感知、邊緣損失、感知損失以及低解析度影像的上採樣方法。為了驗證超解析度影像是否能夠在臨床上使用,我們將多發性硬化症的影像重建,並對高解析度影像與超解析度影像進行病灶分割,比較兩者的分割結果以證實病灶不會因超解析而消失。除了使用常見的結構相似性、峰值訊噪比及均方誤差評估影像,我們也透過數據的交叉訓練,將所有模型重建的超解析度影像映射至同一標準空間以生成不確定性圖,確認大腦各個區域的正確率與穩定性。
所有實驗中,使用位置感知的模型產生了最佳結果,結構相似性、峰值訊噪比與均方誤差分別為0.97、36.19以及0.1179,病灶分割的戴斯係數和真陽性率也都達到0.96,代表確實可以較短時間進行掃描,取得較低解析度的影像,再使用超解析度技術提高影像解析度,在可維持病灶資訊的前提下,提供臨床診斷所需的影像品質。
本研究專注於對大腦T1權重影像進行重建,若未來可針對其他大腦成像序列進行分析,將大幅提升研究價值,有望為醫療領域帶來更重要的實用價值。
dc.description.abstract (en):
Magnetic resonance imaging (MRI) offers the advantages of no ionizing radiation and high soft-tissue resolution, but its long acquisition times leave images vulnerable to motion artifacts. Shortening the scan therefore reduces both patient burden and cost. Super-resolution imaging improves image resolution without requiring additional hardware, and with the development of graphics processing units, super-resolution research applied to MRI has grown in recent years. In this study, low-resolution images were reconstructed into high-resolution images by combining super-resolution imaging with deep learning: even when a scan is shortened and yields only a low-resolution image, that image can be reconstructed with a super-resolution network, thereby accelerating the imaging workflow while improving image quality.
This research focuses on adjustments to the model architecture, residual learning, position awareness, edge loss, perceptual loss, and upsampling methods for low-resolution images. To verify that super-resolution images are clinically usable, we reconstructed multiple sclerosis images, segmented lesions on both the high-resolution and super-resolution images, and compared the two segmentation results to confirm that lesions do not disappear under super-resolution. Besides the common criteria of structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and mean squared error (MSE), we also cross-trained on the data and mapped the super-resolution images reconstructed by all models into the same standard space to generate uncertainty maps, verifying the accuracy and stability of each brain region.
The position-aware model produced the best results across all experiments, with SSIM, PSNR, and MSE of 0.97, 36.19, and 0.1179, respectively; both the Dice coefficient and the true positive rate for lesion segmentation reached 0.96. It is therefore feasible to scan in a shorter time, obtain a lower-resolution image, and use super-resolution to restore the resolution, providing the image quality required for clinical diagnosis while preserving lesion information.
This study focused on reconstructing brain T1-weighted images. Extending the analysis to other brain imaging sequences would substantially increase the value of this work and its practical utility in the medical field. (en)
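The abstract reports image quality as MSE and PSNR. As a minimal sketch of these standard definitions in pure Python (an illustration only, not the thesis's implementation; the `peak` intensity is an assumption that depends on how the images are scaled):

```python
import math

def mse(x, y):
    """Mean squared error between two equally sized images given as flat lists."""
    assert len(x) == len(y)
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB; `peak` is the assumed maximum intensity."""
    m = mse(x, y)
    if m == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / m)
```

Note that PSNR depends on the chosen `peak`, so reported values are only comparable when the intensity range is fixed; SSIM is more involved (local means, variances, and covariances) and is usually taken from a library.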
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-09-26T16:06:38Z. No. of bitstreams: 0 (en)
dc.description.provenance: Made available in DSpace on 2023-09-26T16:06:38Z (GMT). No. of bitstreams: 0 (en)
dc.description.tableofcontents:
Acknowledgements (誌謝) ii
Abstract in Chinese (摘要) iii
Abstract iv
Chapter 1 Introduction 1
1.1 Background 1
1.2 Purpose 2
1.3 Super-resolution 2
1.4 Approaches for super-resolution 3
1.4.1 Interpolation-based methods 3
1.4.2 Reconstruction-based methods 4
1.4.3 Example-based methods 5
Chapter 2 Methods of Super-Resolution 7
2.1 Super-resolution Neural Network 7
2.2 Super-resolution Generative Adversarial Network 8
2.3 3D Deep Neural Network for Multimodal Brain MRI Super-resolution 10
2.4 Multi-Contrast Super-Resolution MRI Through a Progressive Network 12
2.5 Cascade Neural-network Model for Brain MRI Super-resolution 14
2.6 Super-resolution Optimized Using a Perceptual-tuned Generative Adversarial Network 16
2.7 3D Multi-level Densely Connected Super-resolution Network with Generative Adversarial Network 17
Chapter 3 Experiments and Evaluation methods 20
3.1 Dataset 20
3.1.1 WU-Minn HCP 1200 Subjects Data 20
3.1.2 2008 MICCAI MS Lesion Segmentation Challenge 21
3.2 Preprocess 22
3.3 Down-sampling 24
3.4 Up-sampling 24
3.4.1 Upsampling based on the spatial domain 25
3.4.2 Upsampling based on the frequency domain 25
3.5 Patching, merging, and data augmentation 28
3.6 Training 29
3.6.1 Experiment 1 (edge loss) 29
3.6.2 Experiment 2 (residual learning with two-step training) 32
3.6.3 Experiment 3 (residual learning) 33
3.6.4 Experiment 4 (residual learning with different upsampling method) 34
3.6.5 Experiment 5 (position-aware residual learning) 34
3.6.6 Experiment 6 (residual learning using energy-based GAN) 38
3.6.7 Experiment 7 (position-aware residual learning using energy-based GAN) 40
3.6.8 Experiment 8 (position-aware residual learning with two-step training) 41
3.6.9 Experiment 9 (residual learning using perceptual loss) 41
3.6.10 Experiment 10 (position-aware residual learning using energy-based GAN and perceptual loss) 42
3.6.11 Experiment 11 (position-aware residual learning with different atlas) 42
3.7 Evaluation methods 45
3.7.1 Image evaluation metrics 45
3.7.2 MS lesion segmentation tool 47
3.7.3 Bootstrap uncertainty 49
Chapter 4 Results 51
Chapter 5 Discussion 63
5.1 Preprocessing of patching and merging data 63
5.2 Edge-based loss 63
5.3 Residual learning 64
5.4 Different methods of upsampling 64
5.5 Position-aware 65
5.6 Different types of GAN 65
5.7 Perceptual loss 66
5.8 Clinical feasibility testing using lesion segmentation 66
5.9 Uncertainty assessment 67
5.10 Training and inference time 67
Chapter 6 Conclusion 68
References 69
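Sections 3.7.1 and 3.7.2 above evaluate lesion segmentation, for which the abstract reports a Dice coefficient and true positive rate of 0.96. These metrics have standard definitions, sketched here on flat binary masks (an illustration only, not the thesis's segmentation tool):

```python
def dice(a, b):
    """Dice coefficient between two binary masks (flat iterables of 0/1 ints)."""
    a, b = list(a), list(b)
    inter = sum(x & y for x, y in zip(a, b))  # voxels labeled 1 in both masks
    total = sum(a) + sum(b)
    return 1.0 if total == 0 else 2.0 * inter / total

def tpr(pred, truth):
    """True positive rate (sensitivity): fraction of true lesion voxels detected."""
    pred, truth = list(pred), list(truth)
    tp = sum(p & t for p, t in zip(pred, truth))
    pos = sum(truth)
    return 1.0 if pos == 0 else tp / pos
```

Dice penalizes both missed and spurious lesion voxels, while TPR only measures missed ones; reporting both, as the thesis does, guards against a segmenter that over- or under-predicts lesions.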
dc.language.iso: en
dc.subject: 重複抽樣不確定性 (zh_TW)
dc.subject: 磁振造影 (zh_TW)
dc.subject: 超解析度成像 (zh_TW)
dc.subject: 生成對抗網路 (zh_TW)
dc.subject: 殘差學習 (zh_TW)
dc.subject: 位置感知 (zh_TW)
dc.subject: residual learning (en)
dc.subject: super-resolution imaging (en)
dc.subject: generative adversarial networks (en)
dc.subject: magnetic resonance imaging (en)
dc.subject: bootstrap uncertainty (en)
dc.subject: position-aware (en)
dc.title: 超解析度大腦磁振造影技術 (zh_TW)
dc.title: The Development of Super-resolution for Brain MRI Images (en)
dc.type: Thesis
dc.date.schoolyear: 111-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 趙一平;郭立威 (zh_TW)
dc.contributor.oralexamcommittee: Yi-Ping Chao;Li-Wei Kuo (en)
dc.subject.keyword: 磁振造影, 超解析度成像, 生成對抗網路, 殘差學習, 位置感知, 重複抽樣不確定性 (zh_TW)
dc.subject.keyword: magnetic resonance imaging, super-resolution imaging, generative adversarial networks, residual learning, position-aware, bootstrap uncertainty (en)
dc.relation.page: 70
dc.identifier.doi: 10.6342/NTU202303956
dc.rights.note: 同意授權 (authorized for worldwide open access)
dc.date.accepted: 2023-08-10
dc.contributor.author-college: 醫學院 (College of Medicine)
dc.contributor.author-dept: 醫療器材與醫學影像研究所 (Institute of Medical Device and Imaging)
dc.date.embargo-lift: 2025-08-14
Appears in collections: 醫療器材與醫學影像研究所 (Institute of Medical Device and Imaging)

Files in this item:
ntu-111-2.pdf (3.48 MB, Adobe PDF)
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
