Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94494
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 林永松 | zh_TW |
dc.contributor.advisor | Yeong-Sung Lin | en |
dc.contributor.author | 林品歷 | zh_TW |
dc.contributor.author | Pin-Li Lin | en |
dc.date.accessioned | 2024-08-16T16:21:38Z | - |
dc.date.available | 2024-08-17 | - |
dc.date.copyright | 2024-08-16 | - |
dc.date.issued | 2024 | - |
dc.date.submitted | 2024-08-08 | - |
dc.identifier.citation | A. Leardini, M. Benedetti, F. Catani, L. Simoncini, and S. Giannini, “An anatomically based protocol for the description of foot segment kinematics during gait,” Clinical Biomechanics, vol. 14, no. 8, pp. 528–536, 1999.
M. Carson, M. Harrington, N. Thompson, J. O'connor, and T. Theologis, “Kinematic analysis of a multi-segment foot model for research and clinical applications: a repeatability analysis,” Journal of Biomechanics, vol. 34, no. 10, pp. 1299–1307, 2001. S. Rao, C. Saltzman, and H. J. Yack, “Ankle rom and stiffness measured at rest and during gait in individuals with and without diabetic sensory neuropathy,” Gait & Posture, vol. 24, no. 3, pp. 295–301, 2006. C. Xu, D. Chai, J. He, X. Zhang, and S. Duan, “InnoHAR: A deep neural network for complex human activity recognition,” IEEE Access, vol. 7, pp. 9893–9902, 2019. Y. Guo, H. Wang, Q. Hu, H. Liu, L. Liu, and M. Bennamoun, “Deep learning for 3D point clouds: A survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 12, pp. 4338–4364, 2020. F. J. Lawin, M. Danelljan, P. Tosteberg, G. Bhat, F. S. Khan, and M. Felsberg, “Deep projective 3D semantic segmentation,” in Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part I 17, pp. 95–107, Springer, 2017. B. Wu, A. Wan, X. Yue, and K. Keutzer, “SqueezeSeg: Convolutional neural nets with recurrent CRF for real-time road-object segmentation from 3D lidar point cloud,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1887–1893, May 2018. J. Huang and S. You, “Point cloud labeling using 3D convolutional neural network,” in 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 2670–2675, April 2016. B. Graham, M. Engelcke, and L. Van Der Maaten, “3D semantic segmentation with submanifold sparse convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9224–9232, December 2018. A. Dai and M. Nießner, “3DMV: Joint 3D-multi-view prediction for 3D semantic scene segmentation,” in Proceedings of the European Conference on Computer Vision (ECCV), pp. 452–468, March 2018. C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “PointNet: Deep learning on point sets for 3D classification and segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660, November 2017. B.-S. Hua, M.-K. Tran, and S.-K. Yeung, “Pointwise convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 984–993, December 2018. L. Landrieu and M. Simonovsky, “Large-scale point cloud semantic segmentation with superpoint graphs,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4558–4567, December 2018. J. Hou, A. Dai, and M. Nießner, “3D-SIS: 3D semantic instance segmentation of RGB-D scans,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4421–4430, June 2019. W. Wang, R. Yu, Q. Huang, and U. Neumann, “SGPN: Similarity group proposal network for 3D point cloud instance segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2569–2578, December 2018. Z. Wang and F. Lu, “VoxSegNet: Volumetric CNNs for semantic part segmentation of 3D shapes,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 9, pp. 2919–2930, 2019. E. Kalogerakis, M. Averkiou, S. Maji, and S. Chaudhuri, “3D shape segmentation with projective convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3779–3788, November 2017. L. Yi, H. Su, X. Guo, and L. J. 
Guibas, “SyncSpecCNN: Synchronized spectral CNN for 3D shape segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2282–2290, July 2017. P. Wang, Y. Gan, P. Shui, F. Yu, Y. Zhang, S. Chen, and Z. Sun, “3D shape segmentation via shape fully convolutional networks,” Computers & Graphics, vol. 76, pp. 182–192, 2018. Z. Chen, K. Yin, M. Fisher, S. Chaudhuri, and H. Zhang, “BAE-NET: Branched autoencoder for shape co-segmentation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8490–8499, March 2019. F. Yu, K. Liu, Y. Zhang, C. Zhu, and K. Xu, “PartNet: A recursive part decomposition network for fine-grained and hierarchical shape segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9491–9500, January 2020. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in Neural Information Processing Systems, vol. 30, pp. 6000–6010, 2017. H. Zhao, L. Jiang, J. Jia, P. H. Torr, and V. Koltun, “Point transformer,” in Proceedings of the IEEE/CVF international conference on computer vision, pp. 16259–16268, February 2022. T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in Proceedings of the IEEE international conference on computer vision, pp. 2980–2988, December 2017. MCM-Fischer, “MCM-Fischer/VSDFullBodyBoneModels: 3D surface models of the bones of the lower body created from CT datasets of the open source VSDFullBody collection,” Scientific Data, vol. 10, no. 763, 2023. M. Kistler, “VSDFullBody: The virtual skeleton database full body CT collection,” Journal of Medical Internet Research, vol. 15, no. 11, 2013. | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94494 | - |
dc.description.abstract | 足踝複合體作為身體的重要支點,在承載體重的同時需要保持足夠的靈活性以適應各種地形。足部結構的錯位以及腳和踝部的受傷或疾病可能導致功能障礙。瞭解足部表面形態變化與內部力學之間的關係,可以提升臨床診斷的準確性和治療的效率。然而,由於現有測量技術的限制,我們尚缺乏非侵入性和無放射性的骨骼測量手段,並且無法取得動態的骨骼資訊,骨骼排列對足底表面形態的影響也仍不明確。此外,現有的足部模型理論大多基於研究人員的主觀判斷,缺乏足夠的實驗論證,且使用外部儀器測量會導致結果不精準,在臨床使用上有所限制。因此,本研究提出了一種創新的兩階段 AI 足部模型框架,結合 3D 點雲建模技術和基於深度學習及注意力機制的 Transformer 模型,可以有效利用足部皮膚生成足部骨骼結構。這種框架為 3D 足部建模提供了一種更快速、更病患友好的方法,填補了非侵入性、無放射性的骨骼測量技術缺口,並解決了現有足部模型在臨床應用上的困難。我們期望此 AI 足部模型能成為治療複雜足踝疾病的重要技術貢獻,增強對足踝複雜機制的理解和治療能力。不僅讓醫生更容易進行診斷,也能減少患者的負擔,並為足部醫療研究提供助力。 | zh_TW |
dc.description.abstract | The foot and ankle complex is a crucial support point for the body: it must bear weight while remaining flexible enough to adapt to varied terrain. Misalignment of the foot structure, as well as injuries or diseases of the foot and ankle, can lead to functional impairment. Understanding the relationship between foot surface morphology and internal mechanics can improve the accuracy of clinical diagnosis and the efficiency of treatment. However, owing to the limits of current measurement technology, there is no non-invasive, radiation-free way to measure the bones, dynamic bone information cannot be obtained, and the effect of bone alignment on plantar surface morphology remains unclear. In addition, existing foot model theories are largely based on researchers' subjective judgment and lack sufficient experimental validation, and measurements taken with external instruments are often imprecise, which limits their clinical applicability. To address these challenges, this study proposes an innovative two-stage AI foot model framework that combines 3D point cloud modeling with Transformer models built on deep learning and attention mechanisms, using foot skin data to generate the foot bone structure. The framework offers a faster, more patient-friendly approach to 3D foot modeling, fills the gap in non-invasive, radiation-free bone measurement, and addresses the difficulties existing foot models face in clinical use. We expect this AI foot model to become a significant technical contribution to the treatment of complex foot and ankle conditions, strengthening our understanding of, and ability to treat, the intricate mechanisms of the foot and ankle. It will not only make diagnosis easier for clinicians but also reduce the burden on patients and support foot and ankle research. | en
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-16T16:21:38Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2024-08-16T16:21:38Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | 誌謝 (Acknowledgements) i
摘要 (Chinese Abstract) iv
Abstract v
Contents vii
List of Figures x
List of Tables xi
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation 2
1.3 Objective 3
1.4 Clinical Implication 4
1.5 Contribution 5
Chapter 2 Literature Review 7
2.1 Foot Kinematics Analysis 7
2.2 Deep Learning and 3D Point Clouds Processing 9
2.2.1 Deep Learning 9
2.2.2 Deep Learning for 3D Point Clouds 10
2.2.2.1 Semantic Segmentation for Point Cloud 10
2.2.2.2 Instance Segmentation for Point Cloud 11
2.2.2.3 Part Segmentation for Point Cloud 11
2.2.3 Transformer for Point Clouds 12
2.2.3.1 Transformer 12
2.2.3.2 Point Transformer 13
2.2.4 Focal Loss 13
2.3 Summary 14
Chapter 3 Proposed Methods 15
3.1 Framework Overview 15
3.2 Foot Skin Segmentation 16
3.2.1 Skin Regions Labeling 17
3.2.2 Re-downsampling 18
3.2.3 Segmentation Model: Point Transformer 19
3.2.4 Training Segmentation Model 20
3.2.4.1 Cross Entropy Loss 20
3.2.4.2 Focal Loss 21
3.2.5 Evaluation Metrics 22
3.2.5.1 Accuracy 22
3.2.5.2 Intersection over Union and Mean Intersection over Union 23
3.3 Foot Bones Generation 25
3.3.1 Foot Bones Grouping 25
3.3.2 Matching Algorithm and Displacement Calculation 28
3.3.3 Generation Model: Modified Point Transformer 30
3.3.4 Training Generation Model 31
3.3.5 Evaluation Metrics 33
3.4 Model Implementation 33
Chapter 4 Experiments and Results 34
4.1 Open Access Dataset 34
4.2 Foot Skin Segmentation Experiments 35
4.2.1 Segmentation Based on the Oxford Foot Model 36
4.2.2 Detailed Region Segmentation of the Foot Skin 37
4.2.3 Enhancing Segmentation Performance of Small Regions 39
4.2.4 Concluding Remarks of Foot Skin Segmentation Experiments 43
4.3 Foot Bone Generation Experiments 44
4.3.1 Foot Bone Generation from Whole Foot Skin 45
4.3.2 Foot Bone Generation from Segmented Foot Skin 48
4.3.3 Foot Bone Generation with More Detailed Bone Group 51
4.3.4 Concluding Remarks of Foot Bone Generation Experiments 54
4.4 Limitation 56
Chapter 5 Conclusions 58
5.1 Conclusions 58
5.2 Future Work 60
References 62 | -
dc.language.iso | en | - |
dc.title | AI 足部模型:基於 Transformer 和點雲技術的三維足骨建模 | zh_TW |
dc.title | AI Foot Model: 3D Foot Bone Modeling with Transformer and Point Cloud Techniques | en |
dc.type | Thesis | - |
dc.date.schoolyear | 112-2 | - |
dc.description.degree | 碩士 (Master's) | -
dc.contributor.oralexamcommittee | 蕭邱漢;呂東武;楊瀅臻 | zh_TW |
dc.contributor.oralexamcommittee | Qiu-Han Xiao;Tung-Wu Lu;Ying-Zhen Yang | en |
dc.subject.keyword | 足踝複合體, 足部模型, 立體點雲, 注意力機制, 深度學習 | zh_TW
dc.subject.keyword | Foot and Ankle Complex, Foot Model, Three-Dimensional Point Clouds, Attention Mechanism, Deep Learning | en
dc.relation.page | 65 | - |
dc.identifier.doi | 10.6342/NTU202403967 | - |
dc.rights.note | 未授權 (Not authorized) | -
dc.date.accepted | 2024-08-12 | - |
dc.contributor.author-college | 管理學院 (College of Management) | -
dc.contributor.author-dept | 資訊管理學系 (Department of Information Management) | -
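As background for the table of contents above (Sections 3.2.4–3.2.5 name cross-entropy loss, focal loss, and IoU/mIoU), the definitions below are the standard ones from the literature and are given for reference only; the thesis's exact notation, class weights, and focusing parameter may differ.

```latex
% Standard definitions, for reference only (notation in the thesis may differ).
% Cross-entropy for one point with one-hot label y over C skin-region classes,
% where p_c is the softmax probability of class c:
\mathcal{L}_{\mathrm{CE}} = -\sum_{c=1}^{C} y_c \log p_c
% Focal loss, which down-weights well-classified points so that small regions
% are not swamped by large ones; p_t is the probability of the true class,
% \alpha_t a class weight, and \gamma \ge 0 the focusing parameter:
\mathcal{L}_{\mathrm{FL}} = -\alpha_t \,(1 - p_t)^{\gamma}\, \log p_t
% Per-class intersection over union and its mean over the C classes:
\mathrm{IoU}_c = \frac{TP_c}{TP_c + FP_c + FN_c}, \qquad
\mathrm{mIoU} = \frac{1}{C}\sum_{c=1}^{C} \mathrm{IoU}_c
```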
Appears in Collections: | 資訊管理學系 (Department of Information Management)
Files in This Item:
File | Size | Format | Access
---|---|---|---
ntu-112-2.pdf | 2.2 MB | Adobe PDF | Restricted Access
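The abstract above describes a two-stage pipeline: segment the foot-skin point cloud into regions, then generate the corresponding bone geometry. The sketch below is only a minimal illustration of that data flow under stated assumptions; both stages are trivial stand-ins (the thesis itself uses Point Transformer networks for each), and every function name, axis convention, and parameter here is hypothetical.

```python
# Minimal illustration of the two-stage "skin -> bones" data flow described in
# the abstract. Both stages are placeholders so the script runs end to end;
# the thesis uses Point Transformer networks for each stage, and every name,
# axis convention, and parameter below is hypothetical.
import numpy as np


def segment_skin(points: np.ndarray, n_regions: int = 4) -> np.ndarray:
    """Stage 1 placeholder: assign each skin point a region label.

    Stand-in for the segmentation model: points are simply binned along one
    axis (assumed to run heel to toe) into n_regions bins.
    """
    axis = points[:, 0]
    edges = np.quantile(axis, np.linspace(0.0, 1.0, n_regions + 1)[1:-1])
    return np.digitize(axis, edges)


def generate_bones(region_points: np.ndarray) -> np.ndarray:
    """Stage 2 placeholder: map a skin region to a 'bone' point set.

    Stand-in for the generation model: the skin region is shrunk halfway
    toward its centroid.
    """
    centroid = region_points.mean(axis=0, keepdims=True)
    return centroid + 0.5 * (region_points - centroid)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    skin = rng.normal(size=(2048, 3))  # fake skin point cloud, N x 3
    labels = segment_skin(skin)
    bones = [generate_bones(skin[labels == r]) for r in np.unique(labels)]
    print([b.shape for b in bones])
```

In the actual framework each placeholder would be a trained network, and the per-region outputs would correspond to the bone groups described in Chapter 3 of the thesis.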
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.