Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91504

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 傅楸善 | zh_TW |
| dc.contributor.advisor | Chiou-Shann Fuh | en |
| dc.contributor.author | 林正偉 | zh_TW |
| dc.contributor.author | Cheng-Wei Lin | en |
| dc.date.accessioned | 2024-01-28T16:17:52Z | - |
| dc.date.available | 2024-01-29 | - |
| dc.date.copyright | 2024-01-27 | - |
| dc.date.issued | 2023 | - |
| dc.date.submitted | 2023-06-29 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91504 | - |
| dc.description.abstract | 本論文提出林對正: 一個電腦視覺的演算法,用於將手術前後的X光影像進行對齊,提供一套系統方便醫師用更有效率的方式比對手術前後的影像差異,取代傳統的手動對齊X光片的方法。這方法能夠在對正影像時,只將特定區域進行對正,解決因對象非剛體而無法線性轉換的問題。因此適合應用在進行全髖關節置換手術過程中比對骨骼位置,讓醫師可以檢查植入物是否安裝正確。
在我們的實驗中,我們以骨盆的X光影像作為實驗對象,每組影像為同個病患在不同時間所拍攝的X光照片。我們實驗了多個不同方法,藉由比對影像之間的相似特徵,計算這些特徵點的位移,便可以將影像進行對齊。 我們透過預先定義的對應關鍵點在影像對齊後的誤差來衡量演算法的表現好壞,這些關鍵點是骨骼系統在解剖學上的重要特徵。實驗的目標是最小化影像對齊後的關鍵點距離,計算這些對應點距離的均方差,作為對我們演算法的評分依據。 | zh_TW |
| dc.description.abstract | In this thesis, we propose LinAlign, a computer vision algorithm for aligning X-ray images taken before and after surgery. It provides a system that lets surgeons compare pre- and post-operative images more efficiently, replacing the traditional practice of aligning radiographs manually. LinAlign can restrict the alignment to a specific region of interest, which addresses the problem that a single linear transformation cannot align non-rigid objects. It is therefore well suited to comparing bone positions during total hip replacement surgery, allowing orthopedic surgeons to verify that implants have been installed correctly.
In our experiments, we use pelvic X-ray images as the experimental data; each image pair consists of radiographs of the same patient taken at different times. We experiment with several methods: by matching similar features between the images and computing the displacement of these feature points, the images can be aligned. We evaluate the algorithm by the residual error of pre-defined landmarks after alignment; these landmarks are anatomically important features of the skeletal system. The goal is to minimize the distance between corresponding landmarks in each image pair, and we use the mean squared error of these landmark distances as the performance metric for our algorithm. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-01-28T16:17:52Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-01-28T16:17:52Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 誌謝 i
中文摘要 ii
ABSTRACT iii
CONTENTS v
LIST OF FIGURES viii
LIST OF TABLES xiv
Chapter 1 Introduction 1
1.1 Overview 1
1.2 Mobile C-Arm 2
1.3 Total Hip Arthroplasty 4
1.4 Thesis Organization 9
Chapter 2 Related Works 10
2.1 Feature Detection 10
2.2 Feature Matching 11
2.3 Homography Estimation 13
2.4 Semantic Segmentation 14
2.5 Image Alignment 15
Chapter 3 Background 18
3.1 SIFT 18
3.2 RANSAC 22
3.3 LoFTR 23
3.4 U-Nets 25
3.4.1 U-Net 25
3.4.2 U-Net++ 27
3.5 DeepLab 29
3.5.1 DeepLabV1 29
3.5.2 DeepLabV2 30
3.5.3 DeepLabV3 31
3.5.4 DeepLabV3+ 32
Chapter 4 Methodology 34
4.1 Overview 34
4.2 Feature Matching 35
4.3 Pelvis Segmentation 38
4.4 Weighted Normalized DLT 40
4.4.1 DLT 40
4.4.2 Normalized DLT 42
4.4.3 Weighted Normalized DLT 43
4.5 Image Alignment 44
4.6 Results Visualization 45
4.7 Graphic User Interface (GUI) 46
4.7.1 Desktop Application 47
4.7.2 Web Application 49
Chapter 5 Experimental Results 53
5.1 Datasets 53
5.1.1 Semantic Segmentation Dataset 53
5.1.2 Landmarks Dataset 54
5.2 Feature Matching 57
5.3 Semantic Segmentation 60
5.4 Image Alignment 62
Chapter 6 Conclusion and Future Works 76
References 77 | - |
| dc.language.iso | en | - |
| dc.subject | 影像對準 | zh_TW |
| dc.subject | 特徵匹配 | zh_TW |
| dc.subject | 醫學影像處理 | zh_TW |
| dc.subject | 林對正 | zh_TW |
| dc.subject | 語意分割 | zh_TW |
| dc.subject | Image Alignment | en |
| dc.subject | LinAlign | en |
| dc.subject | Medical Image Processing | en |
| dc.subject | Feature Matching | en |
| dc.subject | Semantic Segmentation | en |
| dc.title | 林對正:全髖關節置換手術前後之X光影像對正 | zh_TW |
| dc.title | LinAlign: X-Ray Image Alignment before and after Total Hip Arthroplasty | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 111-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 方瓊瑤;邱立誠 | zh_TW |
| dc.contributor.oralexamcommittee | Chiung-Yao Fang;Li-Cheng Chiu | en |
| dc.subject.keyword | 林對正,醫學影像處理,特徵匹配,語意分割,影像對準 | zh_TW |
| dc.subject.keyword | LinAlign, Medical Image Processing, Feature Matching, Semantic Segmentation, Image Alignment | en |
| dc.relation.page | 84 | - |
| dc.identifier.doi | 10.6342/NTU202301205 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2023-06-30 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 資訊工程學系 | - |
| dc.date.embargo-lift | 2028-06-27 | - |
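
The abstract above describes the general pipeline: match features between the pre- and post-operative radiographs, estimate a transformation from the matched points, warp one image onto the other, and score the result by the mean squared distance between pre-defined anatomical landmarks. The sketch below is a minimal illustration of that pipeline only; it is not the thesis's LinAlign implementation, which per the table of contents uses LoFTR matching, pelvis segmentation, and a weighted normalized DLT. The function name `align_and_score`, the OpenCV SIFT detector with Lowe's ratio test, and the RANSAC-estimated homography are illustrative stand-ins.

```python
# Minimal sketch of feature-based radiograph alignment and landmark-MSE scoring.
# Assumptions: grayscale images as NumPy arrays, landmarks as (N, 2) arrays of
# corresponding points in the pre- and post-operative images.
import cv2
import numpy as np

def align_and_score(pre_img, post_img, pre_landmarks, post_landmarks):
    """Warp post_img onto pre_img and report the mean squared landmark error."""
    # Detect and describe local features in both radiographs.
    sift = cv2.SIFT_create()
    kp_pre, des_pre = sift.detectAndCompute(pre_img, None)
    kp_post, des_post = sift.detectAndCompute(post_img, None)

    # Match post-op descriptors against pre-op descriptors with Lowe's ratio test.
    matcher = cv2.BFMatcher()
    knn = matcher.knnMatch(des_post, des_pre, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]

    src = np.float32([kp_post[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_pre[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robustly estimate a homography mapping post-op coordinates to pre-op coordinates.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Resample the post-op image into the pre-op frame for side-by-side comparison.
    h, w = pre_img.shape[:2]
    aligned = cv2.warpPerspective(post_img, H, (w, h))

    # Evaluation metric from the abstract: mean squared distance between
    # corresponding landmarks after alignment.
    mapped = cv2.perspectiveTransform(
        post_landmarks.reshape(-1, 1, 2).astype(np.float32), H).reshape(-1, 2)
    mse = float(np.mean(np.sum((mapped - pre_landmarks.astype(np.float32)) ** 2, axis=1)))
    return aligned, mse
```
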
| Appears in Collections: | 資訊工程學系 |
Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-111-2.pdf (publicly available online after 2028-06-27) | 6.3 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
