Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99578

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 陳世杰 | zh_TW |
| dc.contributor.advisor | Shyh-Jye Chen | en |
| dc.contributor.author | 徐郁 | zh_TW |
| dc.contributor.author | Yu Hsu | en |
| dc.date.accessioned | 2025-09-16T16:10:31Z | - |
| dc.date.available | 2025-09-17 | - |
| dc.date.copyright | 2025-09-16 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-08-04 | - |
| dc.identifier.citation | Altman, D.G. and Bland, J.M. (1983) ‘Measurement in Medicine: The Analysis of Method Comparison Studies’, Journal of the Royal Statistical Society. Series D (The Statistician), 32(3), pp. 307–317. Available at: https://doi.org/10.2307/2987937.
Archer, H. et al. (2023) ‘Deep learning generated lower extremity radiographic measurements are adequate for quick assessment of knee angular alignment and leg length determination’, Skeletal Radiology, 53(5), pp. 923–933. Available at: https://doi.org/10.1007/s00256-023-04502-5.
Archer, H. et al. (2025) ‘Are artificial intelligence generated lower extremity radiographic measurements accurate in a cohort with implants?’, Skeletal Radiology [Preprint]. Available at: https://doi.org/10.1007/s00256-025-04936-z.
COCO - Common Objects in Context (2025). Available at: https://cocodataset.org/#home (Accessed: 4 August 2025).
Cullen, D. et al. (2025) ‘An AI-based system for fully automated knee alignment assessment in standard AP knee radiographs’, The Knee, 54, pp. 99–110. Available at: https://doi.org/10.1016/j.knee.2025.02.013.
CVAT.ai Corporation (2023) ‘Computer Vision Annotation Tool (CVAT)’. Available at: https://github.com/cvat-ai/cvat (Accessed: 4 August 2025).
Datta, D. (2025) ‘deepankardatta/blandr’. Available at: https://github.com/deepankardatta/blandr (Accessed: 4 August 2025).
Davis, C. (ed.) (2023) Statistical Testing with jamovi Health: Second Edition. S.l.: VOR Press.
‘gamlj/gamlj’ (2025). GAMLj module for jamovi. Available at: https://github.com/gamlj/gamlj (Accessed: 4 August 2025).
GitHub - jamovi/jamovi: jamovi - open software to bridge the gap between researcher and statistician (2025). Available at: https://github.com/jamovi/jamovi (Accessed: 4 August 2025).
GitHub - ultralytics/ultralytics: Ultralytics YOLO 🚀 (2025). Available at: https://github.com/ultralytics/ultralytics (Accessed: 4 August 2025).
Gurney, B. (2002) ‘Leg length discrepancy’, Gait & Posture, 15(2), pp. 195–206. Available at: https://doi.org/10.1016/S0966-6362(01)00148-5.
Heller, M.T. et al. (2025) ‘Comparison of an AI-driven planning tool and manual radiographic measurements in total knee arthroplasty’, Computational and Structural Biotechnology Journal, 28, pp. 148–155. Available at: https://doi.org/10.1016/j.csbj.2025.04.009.
Jiang, T. et al. (2023) ‘RTMPose: Real-Time Multi-Person Pose Estimation based on MMPose’. arXiv. Available at: http://arxiv.org/abs/2303.07399 (Accessed: 1 June 2023).
Khanam, R. and Hussain, M. (2024) ‘YOLOv11: An Overview of the Key Architectural Enhancements’. arXiv. Available at: https://doi.org/10.48550/arXiv.2410.17725.
Kim, Y.-T. et al. (2024) ‘HKA-Net: clinically-adapted deep learning for automated measurement of hip-knee-ankle angle on lower limb radiography for knee osteoarthritis assessment’, Journal of Orthopaedic Surgery and Research, 19(1), p. 777. Available at: https://doi.org/10.1186/s13018-024-05265-y.
Kreiss, S., Bertoni, L. and Alahi, A. (2019) ‘PifPaf: Composite Fields for Human Pose Estimation’, in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA: IEEE, pp. 11969–11978. Available at: https://doi.org/10.1109/CVPR.2019.01225.
Langensiepen, S. et al. (2013) ‘Measuring procedures to determine the Cobb angle in idiopathic scoliosis: a systematic review’, European Spine Journal, 22(11), pp. 2360–2371. Available at: https://doi.org/10.1007/s00586-013-2693-9.
Lassalle, L. et al. (2024) ‘Evaluation of a deep learning software for automated measurements on full-leg standing radiographs’, Knee Surgery & Related Research, 36(1), p. 40. Available at: https://doi.org/10.1186/s43019-024-00246-1.
LeCun, Y., Bengio, Y. and Hinton, G. (2015) ‘Deep learning’, Nature, 521(7553), pp. 436–444. Available at: https://doi.org/10.1038/nature14539.
Li, L. et al. (2022) ‘Towards High Performance One-Stage Human Pose Estimation’, in Proceedings of the 4th ACM International Conference on Multimedia in Asia, pp. 1–5. Available at: https://doi.org/10.1145/3551626.3564968.
Lindeberg, T. (1993) ‘Discrete Derivative Approximations with Scale-Space Properties: A Basis for Low-Level Feature Extraction’, Journal of Mathematical Imaging and Vision, 3(4), pp. 349–376.
MacDessi, S.J. et al. (2021) ‘Coronal Plane Alignment of the Knee (CPAK) classification: a new system for describing knee phenotypes’, The Bone & Joint Journal, 103-B(2), pp. 329–337. Available at: https://doi.org/10.1302/0301-620X.103B2.BJJ-2020-1050.R1.
Mardia, K.V., Kent, J.T. and Bibby, J.M. (2006) Multivariate Analysis. Amsterdam: Academic Press.
Munea, T.L. et al. (2020) ‘The Progress of Human Pose Estimation: A Survey and Taxonomy of Models Applied in 2D Human Pose Estimation’, IEEE Access, 8, pp. 133330–133348. Available at: https://doi.org/10.1109/ACCESS.2020.3010248.
Navarro, D. and Foxcroft, D. (2025) Learning Statistics with Jamovi: A Tutorial for Beginners in Statistical Analysis. Cambridge, UK: Open Book Publishers.
Nelder, J.A. and Wedderburn, R.W.M. (1972) ‘Generalized Linear Models’, Journal of the Royal Statistical Society. Series A (General), 135(3), pp. 370–384. Available at: https://doi.org/10.2307/2344614.
Richardson, P. and Machan, L. (2021) Jamovi for Psychologists. London: Red Globe Press.
Sabharwal, S. and Zhao, C. (2009) ‘The Hip-Knee-Ankle Angle in Children: Reference Values Based on a Full-Length Standing Radiograph’, JBJS, 91(10), p. 2461. Available at: https://doi.org/10.2106/JBJS.I.00015.
Salzmann, M. et al. (2024) ‘Artificial intelligence-based assessment of leg axis parameters shows excellent agreement with human raters: A systematic review and meta-analysis’, Knee Surgery, Sports Traumatology, Arthroscopy, 33(1), pp. 177–190. Available at: https://doi.org/10.1002/ksa.12362.
Sheehy, L. et al. (2011) ‘Does measurement of the anatomic axis consistently predict hip-knee-ankle angle (HKA) for knee alignment studies in osteoarthritis? Analysis of long limb radiographs from the multicenter osteoarthritis (MOST) study’, Osteoarthritis and Cartilage, 19(1), pp. 58–64. Available at: https://doi.org/10.1016/j.joca.2010.09.011.
Waldt, S. and Wörtler, K. (2013) Measurements and Classifications in Musculoskeletal Radiology. Illustrated edition. Stuttgart; New York: Thieme. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99578 | - |
| dc.description.abstract | 背景: 醫學影像的量化分析在骨骼肌肉放射科醫學領域至關重要,但傳統手動測量方法耗時、費力且易受主觀性影響,導致結果變異性大。深度學習在一般影像關鍵點偵測上表現卓越,然而直接應用於醫學X射線影像仍面臨獨特挑戰,例如金屬植入物造成的遮擋和影像品質不佳等問題。
目的: 本研究旨在開發一個高效、客觀且高精度的自動化工具,透過深度學習模型實現下肢骨骼關鍵點的自動偵測與姿態的精確量測,以解決臨床上人工測量的痛點,提升診斷的標準化與精準度。 方法: 研究採用深度學習中基於直接關鍵點標註的策略,從全下肢站立位X射線影像中預測髖關節、膝關節和踝關節的精確像素坐標。資料集包含來自2897位病患的4396張長腿X射線影像,並採用平衡劃分策略,考量了金屬植入物等特徵的分佈,以確保模型的代表性與穩健性。模型的效能透過訓練損失、交叉驗證、預測與真實值比較視覺化、關鍵點坐標預測誤差分析及Bland-Altman分析進行評估。 結果: 本研究對深度學習模型進行了全面評估。訓練損失與交叉驗證結果顯示模型穩健且準確。模型在關鍵點座標預測誤差方面表現優異,廣義線性模型分析揭示患者年齡及性別與植入物有無的交互作用對誤差有顯著影響。Bland-Altman分析顯示髖膝踝角(HKA)的模型預測值與真實值之間偏差為 0.0197 度;髖踝距(HA)的偏差為 0.136 毫米。 結論: 本研究成功開發基於YOLOv11 Pose的下肢X射線骨骼關鍵點自動化偵測與姿態測量系統,其中YOLO11l-pose模型表現最優。為提升泛用性,本研究納入多樣化影像並分析座標誤差的特徵貢獻度,發現在平衡資料集劃分下,單獨的性別、身高、體重、BMI及植入物有無對誤差無顯著影響。儘管面臨真實值標註與資料來源限制,此工具仍具備顯著提升骨科診斷標準化與精準度的潛力。 | zh_TW |
| dc.description.abstract | Background: Quantitative analysis of medical images is crucial in musculoskeletal radiology, but traditional manual measurement methods are time-consuming, labor-intensive, subjective, and prone to variability. While deep learning excels at keypoint detection in general images, its direct application to medical X-ray images faces unique challenges, such as occlusion caused by metal implants and suboptimal image quality.
Purpose: This study aims to develop an efficient, objective, and highly accurate automated tool that uses deep learning models to detect lower limb skeletal keypoints and precisely measure posture from X-ray images, thereby addressing the pain points of manual measurement in clinical practice and enhancing diagnostic standardization and accuracy. Methods: This study employs a deep learning strategy based on direct keypoint annotation to predict the precise pixel coordinates of the hip, knee, and ankle joints from full-leg standing radiographs. The dataset comprises 4396 long-leg X-ray images from 2897 distinct patients, split with a balanced strategy that accounts for the distribution of features such as metal implants to ensure model representativeness and robustness. Model performance was evaluated using training loss, cross-validation, visual comparison of predictions against ground truth, keypoint coordinate prediction error analysis, and Bland-Altman analysis. Results: This study comprehensively evaluated the deep learning model. Training loss and cross-validation results indicated that the model is robust and accurate. The model demonstrated excellent keypoint coordinate prediction accuracy, with Generalized Linear Model analysis revealing that patient age and the interaction between sex and implant presence significantly influenced the error. Bland-Altman analysis showed a bias of 0.0197 degrees for the Hip-Knee-Ankle (HKA) angle and a bias of 0.136 millimeters for the Hip-Ankle (HA) length. Conclusion: This study successfully developed an automated system for lower limb X-ray skeletal keypoint detection and posture measurement based on YOLOv11 Pose, with the YOLO11l-pose model performing best.
To enhance generalizability, diverse images were incorporated and feature contributions to coordinate errors were analyzed, finding that under the balanced dataset split, sex, height, weight, BMI, and implant presence individually had no significant impact. Despite limitations in ground-truth annotation and data sources, this tool holds significant potential to enhance the standardization and accuracy of orthopedic diagnosis. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-09-16T16:10:31Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-09-16T16:10:31Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Thesis Oral Examination Committee Approval i
Acknowledgements ii
Abstract (Chinese) iii
Abstract (English) v
Table of Contents vii
List of Figures x
List of Tables xii
Chapter 1 Introduction 1
Chapter 2 Literature Review 4
2.1 Strategies and Applications of AI in Skeletal X-ray Image Analysis 4
2.2 Commercial Software Solutions 6
2.3 Key Measurement Parameters in Skeletal X-ray Images and Their Clinical Significance 7
2.3.1 Definition of Axes 7
2.3.2 The Hip-Knee-Ankle Angle and Its Clinical Interpretation 7
2.3.3 Length Measurements 8
2.3.4 Angle Measurements 8
2.3.5 Coronal Plane Alignment Classification of the Knee 9
2.4 Challenges and Systematic Reviews of AI in Skeletal X-ray Measurement 9
Chapter 3 Materials and Methods 11
3.1 Materials 11
3.1.1 Units 11
3.1.2 Feature Extraction and Definitions 11
3.1.3 Dataset Splitting 14
3.2 Methods 21
3.2.1 Keypoint Definitions 21
3.2.2 Deep Learning / Neural Network Algorithms 29
3.2.3 Evaluation Metrics 30
3.2.4 Statistical Methods 31
Chapter 4 Results 33
4.1 Training Loss 33
4.2 Cross-Validation Results 34
4.3 Visualization of Example Predictions versus Ground Truth 35
4.4 Keypoint Coordinate Prediction Error Analysis 38
4.5 Feature Contribution Analysis of Coordinate Errors for the Best Model 39
4.6 Bland-Altman Analysis 44
4.6.1 Hip-Knee-Ankle Angle 45
4.6.2 Hip-Ankle Length 46
Chapter 5 Discussion 48
5.1 Differences from Other Studies 48
5.1.1 Broad Image Inclusion Criteria Emphasizing Practicality 48
5.1.2 Unique Considerations for the Pediatric Population 48
5.1.3 Comparison with Direct Regression Models: Interpretability Advantages 48
5.1.4 Gaps in Generalizability Evaluation and This Study's Contribution 49
5.1.5 Differing Hip Joint Definitions: Femoral Head Apex versus Femoral Head Center 49
5.2 Limitations 53
5.2.1 Potential Improvements in Ground-Truth Annotation 53
5.2.2 Limited Number of Keypoints and Anatomical Landmarks 54
5.2.3 Single-Source Samples and Generalizability Challenges 54
5.2.4 No Quantification of Joint Destruction 54
5.3 Future Directions 54
5.3.1 Expanding the Set of Keypoints and Anatomical Landmarks 55
5.3.2 Training Larger Models and Hyperparameter Optimization 55
5.3.3 Collecting Age-Balanced Image Data 55
5.4 Conclusion 55
References 57 | - |
| dc.language.iso | zh_TW | - |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | X射線影像 | zh_TW |
| dc.subject | 骨骼關鍵點 | zh_TW |
| dc.subject | 姿態估計 | zh_TW |
| dc.subject | 髖膝踝角 | zh_TW |
| dc.subject | 醫療影像 | zh_TW |
| dc.subject | 人工智慧 | zh_TW |
| dc.subject | X-ray imaging | en |
| dc.subject | Artificial intelligence | en |
| dc.subject | Medical imaging | en |
| dc.subject | Hip-knee-ankle angle | en |
| dc.subject | Pose estimation | en |
| dc.subject | Skeletal keypoints | en |
| dc.subject | Deep learning | en |
| dc.title | 基於深度學習的下肢X射線骨骼關鍵點和姿態測量 | zh_TW |
| dc.title | Deep Learning-Based Lower Limb X-ray Skeletal Keypoint and Posture Measurement | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-2 | - |
| dc.description.degree | Master | - |
| dc.contributor.oralexamcommittee | 王廷明;周呈霙 | zh_TW |
| dc.contributor.oralexamcommittee | Ting-Ming Wang;Cheng-Ying Chou | en |
| dc.subject.keyword | 深度學習,X射線影像,骨骼關鍵點,姿態估計,髖膝踝角,醫療影像,人工智慧 | zh_TW |
| dc.subject.keyword | Deep learning,X-ray imaging,Skeletal keypoints,Pose estimation,Hip-knee-ankle angle,Medical imaging,Artificial intelligence | en |
| dc.relation.page | 60 | - |
| dc.identifier.doi | 10.6342/NTU202501605 | - |
| dc.rights.note | Authorization consented (open access worldwide) | - |
| dc.date.accepted | 2025-08-04 | - |
| dc.contributor.author-college | College of Medicine | - |
| dc.contributor.author-dept | Graduate Institute of Clinical Medicine | - |
| dc.date.embargo-lift | 2025-09-17 | - |
| Appears in Collections: | Graduate Institute of Clinical Medicine | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-113-2.pdf | 2.47 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
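The abstracts above describe deriving the hip-knee-ankle (HKA) angle from predicted hip, knee, and ankle keypoints and quantifying agreement with ground truth via Bland-Altman bias. As a minimal illustrative sketch of these two computations (not the thesis code; `hka_angle` and `bland_altman_bias` are hypothetical helper names, assuming 2-D pixel coordinates and NumPy):

```python
import numpy as np

def hka_angle(hip, knee, ankle):
    """HKA deviation in degrees: 180 minus the angle between the
    femoral axis (knee -> hip) and the tibial axis (knee -> ankle).
    0 means the three keypoints are collinear (neutral alignment)."""
    femoral = np.asarray(hip, float) - np.asarray(knee, float)
    tibial = np.asarray(ankle, float) - np.asarray(knee, float)
    cos = np.dot(femoral, tibial) / (np.linalg.norm(femoral) * np.linalg.norm(tibial))
    # Clip guards against tiny floating-point excursions outside [-1, 1].
    return 180.0 - np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def bland_altman_bias(pred, truth):
    """Bland-Altman bias (mean difference) and 95% limits of agreement."""
    diff = np.asarray(pred, float) - np.asarray(truth, float)
    bias = float(diff.mean())
    half_width = 1.96 * float(diff.std(ddof=1))
    return bias, (bias - half_width, bias + half_width)

# Collinear hip-knee-ankle keypoints give an HKA deviation of 0 degrees.
print(hka_angle((0, 0), (0, -1), (0, -2)))  # 0.0
```

Note that sign conventions (varus vs. valgus) and pixel-to-millimeter calibration vary between studies, so a full pipeline would attach a signed convention and the radiograph's scale factor before reporting millimeter biases such as the HA length.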
