Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90289
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 吳文超 | zh_TW |
dc.contributor.advisor | Wen-Chau Wu | en |
dc.contributor.author | 吳蒨樺 | zh_TW |
dc.contributor.author | Qian-Hua Wu | en |
dc.date.accessioned | 2023-09-26T16:06:38Z | - |
dc.date.available | 2023-11-10 | - |
dc.date.copyright | 2023-09-26 | - |
dc.date.issued | 2023 | - |
dc.date.submitted | 2023-08-10 | - |
dc.identifier.citation | Dong, C., C.C. Loy, K. He, and X. Tang, Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans Pattern Anal Mach Intell, 2016. 38(2): p. 295-307.
Goodfellow, I.J., et al., Generative adversarial nets, in Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2. 2014, MIT Press: Montreal, Canada. p. 2672-2680.
Ledig, C., et al., Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. arXiv e-prints, 2016: p. arXiv:1609.04802.
He, K., X. Zhang, S. Ren, and J. Sun, Deep Residual Learning for Image Recognition. arXiv e-prints, 2015: p. arXiv:1512.03385.
Pham, C.H., et al., Multiscale brain MRI super-resolution using deep 3D convolutional networks. Comput Med Imaging Graph, 2019. 77: p. 101647.
Lyu, Q., et al., Multi-Contrast Super-Resolution MRI Through a Progressive Network. IEEE Trans Med Imaging, 2020. 39(9): p. 2738-2749.
Gulrajani, I., et al., Improved Training of Wasserstein GANs. arXiv e-prints, 2017: p. arXiv:1704.00028.
Shan, H., et al., 3-D Convolutional Encoder-Decoder Network for Low-Dose CT via Transfer Learning From a 2-D Trained Network. IEEE Trans Med Imaging, 2018. 37(6): p. 1522-1534.
Gatys, L.A., A.S. Ecker, and M. Bethge, Image Style Transfer Using Convolutional Neural Networks, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016.
Sajjadi, M.S.M., B. Schölkopf, and M. Hirsch, EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis. arXiv e-prints, 2016: p. arXiv:1612.07919.
Li, Z., et al., DeepVolume: Brain Structure and Spatial Connection-Aware Network for Brain MRI Super-Resolution. IEEE Trans Cybern, 2021. 51(7): p. 3441-3454.
Çiçek, Ö., et al., 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. arXiv e-prints, 2016: p. arXiv:1606.06650.
Tao, X., et al., Detail-revealing Deep Video Super-resolution. arXiv e-prints, 2017: p. arXiv:1704.02738.
Zhang, K., et al., SOUP-GAN: Super-Resolution MRI Using Generative Adversarial Networks. Tomography, 2022. 8(2): p. 905-919.
Chen, Y., et al., Efficient and Accurate MRI Super-Resolution using a Generative Adversarial Network and 3D Multi-Level Densely Connected Network. arXiv e-prints, 2018: p. arXiv:1803.01417.
Chen, Y., et al., Brain MRI Super Resolution Using 3D Deep Densely Connected Neural Networks. arXiv e-prints, 2018: p. arXiv:1801.02728.
Blasche, M. and C. Forman, Compressed Sensing - the Flowchart. 2016; Available from: https://www.mriquestions.com/uploads/3/4/5/7/34572113/siemens_mri_magnetom-world_compressed-sensing_compressed-sensing-flowchart_blasche-03520147.pdf.
Canny, J., A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell, 1986. 8(6): p. 679-98.
Krishna Pandey, R., N. Saha, S. Karmakar, and A.G. Ramakrishnan, MSCE: An edge preserving robust loss function for improving super-resolution algorithms. arXiv e-prints, 2018: p. arXiv:1809.00961.
Rolls, E.T., et al., Automated anatomical labelling atlas 3. Neuroimage, 2020. 206: p. 116189.
Zhao, J., M. Mathieu, and Y. LeCun, Energy-based Generative Adversarial Network. arXiv e-prints, 2016: p. arXiv:1609.03126.
Simonyan, K. and A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv e-prints, 2014: p. arXiv:1409.1556.
La Rosa, F., et al., Multiple sclerosis cortical and WM lesion segmentation at 3T MRI: a deep learning method based on FLAIR and MP2RAGE. NeuroImage: Clinical, 2020. 27: p. 102335.
Sanchez, I. and V. Vilaplana, Brain MRI super-resolution using 3D generative adversarial networks. arXiv e-prints, 2018: p. arXiv:1812.11440.
Lin, J.-Y., Y.-C. Chang, and W. Hsu, Efficient and Phase-Aware Video Super-Resolution for Cardiac MRI. 2020. p. 66-76.
Nasser, S.A., et al., Perceptual cGAN for MRI Super-resolution. 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 2022: p. 3035-3038. | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90289 | - |
dc.description.abstract | 磁振造影具有無輻射傷害以及軟組織高解析度等優點,但其成像時間長,病患可能會因運動雜訊造成影像品質降低,因此,若是能夠縮短掃描時間,便能減輕病患的負擔並降低成本。超解析度成像是一種提高影像解析度的技術,其優點在於不需額外硬體設備的支援,且近年來基於圖形運算單元發展,應用於磁振造影上的相關研究持續增加。而本研究結合超解析度成像與深度學習技術,將低解析度影像重建為高解析度影像,即使在短時間內進行掃描,獲得較低解析度的影像,也能透過超解析深度學習技術重建,從而實現造影流程的加速,並提升影像品質。
研究重點著重於模型架構的調整、殘差學習、位置感知、邊緣損失、感知損失以及低解析度影像的上採樣方法。為了驗證超解析度影像是否能夠在臨床上使用,我們將多發性硬化症的影像重建,並對高解析度影像與超解析度影像進行病灶分割,比較兩者的分割結果以證實病灶不會因超解析而消失。除了使用常見的結構相似性、峰值訊噪比及均方誤差評估影像,我們也透過數據的交叉訓練,將所有模型重建的超解析度影像映射至同一標準空間以生成不確定性圖,確認大腦各個區域的正確率與穩定性。 所有實驗中,使用位置感知的模型產生了最佳結果,結構相似性、峰值訊噪比與均方誤差分別為0.97、36.19以及0.1179,病灶分割的戴斯係數和真陽性率也都達到0.96,代表確實可以較短時間進行掃描,取得較低解析度的影像,再使用超解析度技術提高影像解析度,在可維持病灶資訊的前提下,提供臨床診斷所需的影像品質。 本研究專注於對大腦T1權重影像進行重建,若未來可針對其他大腦成像序列進行分析,將大幅提升研究價值,有望為醫療領域帶來更重要的實用價值。 | zh_TW |
dc.description.abstract | The advantages of magnetic resonance imaging (MRI) include the absence of ionizing radiation and high soft-tissue resolution. However, scan times are long, and patient motion can degrade image quality; shortening the scan therefore reduces both the burden on the patient and the cost. Super-resolution imaging improves image resolution without requiring additional hardware, and with the development of graphics processing units, research applying it to MRI has grown in recent years. In this study, low-resolution images were reconstructed into high-resolution images by combining super-resolution imaging with deep learning. Even when a scan is performed quickly and yields only a low-resolution image, that image can be reconstructed with a super-resolution deep learning model, thereby improving image quality and accelerating the imaging workflow.
This research focuses on adjustments to the model architecture, residual learning, position awareness, edge loss, perceptual loss, and upsampling methods for low-resolution images. To verify that super-resolution images are clinically usable, we reconstructed multiple sclerosis images, segmented lesions on both the high-resolution and the super-resolution images, and compared the segmentation results to confirm that lesions do not disappear after super-resolution. Besides the common structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and mean squared error (MSE) metrics, we also mapped the super-resolution images reconstructed by all models into the same standard space through cross-training of the data to generate uncertainty maps, verifying the accuracy and stability of each brain region. Across all experiments, the position-aware model produced the best results: SSIM, PSNR, and MSE were 0.97, 36.19, and 0.1179, respectively, and both the Dice coefficient and the true positive rate for lesion segmentation reached 0.96. It is therefore feasible to scan in a shorter time, obtain a lower-resolution image, and then use super-resolution to restore the image quality required for clinical diagnosis while preserving lesion information. This study focused on reconstructing brain T1-weighted images; analyzing other brain imaging sequences in future work would greatly extend the research value and its practical impact in the medical field. | en |
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-09-26T16:06:38Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2023-09-26T16:06:38Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | Acknowledgements ii
Chinese Abstract iii
Abstract iv
Chapter 1 Introduction 1
1.1 Background 1
1.2 Purpose 2
1.3 Super-resolution 2
1.4 Approaches for super-resolution 3
1.4.1 Interpolation-based methods 3
1.4.2 Reconstruction-based methods 4
1.4.3 Example-based methods 5
Chapter 2 Methods of Super-Resolution 7
2.1 Super-resolution Neural Network 7
2.2 Super-resolution Generative Adversarial Network 8
2.3 3D Deep Neural Network for Multimodal Brain MRI Super-resolution 10
2.4 Multi-Contrast Super-Resolution MRI Through a Progressive Network 12
2.5 Cascade Neural-network Model for Brain MRI Super-resolution 14
2.6 Super-resolution Optimized Using a Perceptual-tuned Generative Adversarial Network 16
2.7 3D Multi-level Densely Connected Super-resolution Network with Generative Adversarial Network 17
Chapter 3 Experiments and Evaluation Methods 20
3.1 Dataset 20
3.1.1 WU-Minn HCP 1200 Subjects Data 20
3.1.2 2008 MICCAI MS Lesion Segmentation Challenge 21
3.2 Preprocess 22
3.3 Down-sampling 24
3.4 Up-sampling 24
3.4.1 Upsampling based on the spatial domain 25
3.4.2 Upsampling based on the frequency domain 25
3.5 Patching, merging, and data augmentation 28
3.6 Training 29
3.6.1 Experiment 1 (edge loss) 29
3.6.2 Experiment 2 (residual learning with two-step training) 32
3.6.3 Experiment 3 (residual learning) 33
3.6.4 Experiment 4 (residual learning with different upsampling method) 34
3.6.5 Experiment 5 (position-aware residual learning) 34
3.6.6 Experiment 6 (residual learning using energy-based GAN) 38
3.6.7 Experiment 7 (position-aware residual learning using energy-based GAN) 40
3.6.8 Experiment 8 (position-aware residual learning with two-step training) 41
3.6.9 Experiment 9 (residual learning using perceptual loss) 41
3.6.10 Experiment 10 (position-aware residual learning using energy-based GAN and perceptual loss) 42
3.6.11 Experiment 11 (position-aware residual learning with different atlas) 42
3.7 Evaluation methods 45
3.7.1 Image evaluation metrics 45
3.7.2 MS lesion segmentation tool 47
3.7.3 Bootstrap uncertainty 49
Chapter 4 Results 51
Chapter 5 Discussion 63
5.1 Preprocessing of patching and merging data 63
5.2 Edge-based loss 63
5.3 Residual learning 64
5.4 Different methods of upsampling 64
5.5 Position-aware 65
5.6 Different types of GAN 65
5.7 Perceptual loss 66
5.8 Clinical feasibility testing using lesion segmentation 66
5.9 Uncertainty assessment 67
5.10 Training and inference time 67
Chapter 6 Conclusion 68
References 69 | - |
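Among the upsampling methods compared in the thesis, Section 3.4.2 lists upsampling based on the frequency domain. A common way to realize this is zero-padding k-space, which amounts to sinc interpolation in image space. The 2D sketch below is an assumed illustration of that general technique (the thesis works with 3D volumes, and its exact procedure may differ), using a centered FFT convention:

```python
import numpy as np

def kspace_upsample(img, factor=2):
    # Transform to k-space with the DC component centered.
    k = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    H, W = h * factor, w * factor
    # Embed the original spectrum in the center of a larger, zero-filled grid;
    # the added high frequencies are zero, so no new detail is invented.
    K = np.zeros((H, W), dtype=complex)
    top, left = (H - h) // 2, (W - w) // 2
    K[top:top + h, left:left + w] = k
    # Inverse transform; scale by factor**2 to preserve mean intensity,
    # since ifft2 normalizes by the (now larger) grid size.
    up = np.fft.ifft2(np.fft.ifftshift(K)) * factor ** 2
    return np.real(up)
```

Unlike spatial-domain interpolation (e.g. trilinear or bicubic), this approach is exact for band-limited signals but can introduce Gibbs ringing near sharp edges.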
dc.language.iso | en | - |
dc.title | 超解析度大腦磁振造影技術 | zh_TW |
dc.title | The Development of Super-resolution for Brain MRI Images | en |
dc.type | Thesis | - |
dc.date.schoolyear | 111-2 | - |
dc.description.degree | Master | - |
dc.contributor.oralexamcommittee | 趙一平;郭立威 | zh_TW |
dc.contributor.oralexamcommittee | Yi-Ping Chao;Li-Wei Kuo | en |
dc.subject.keyword | 磁振造影, 超解析度成像, 生成對抗網路, 殘差學習, 位置感知, 重複抽樣不確定性 | zh_TW |
dc.subject.keyword | magnetic resonance imaging, super-resolution imaging, generative adversarial networks, residual learning, position-aware, bootstrap uncertainty | en |
dc.relation.page | 70 | - |
dc.identifier.doi | 10.6342/NTU202303956 | - |
dc.rights.note | Authorization granted (open access worldwide) | - |
dc.date.accepted | 2023-08-10 | - |
dc.contributor.author-college | College of Medicine | - |
dc.contributor.author-dept | Institute of Medical Device and Imaging | - |
Appears in Collections: | Institute of Medical Device and Imaging
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-111-2.pdf (publicly available online after 2025-08-14) | 3.48 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.