Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96742

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 王凡 | zh_TW |
| dc.contributor.advisor | Farn Wang | en |
| dc.contributor.author | 王瀚緯 | zh_TW |
| dc.contributor.author | Han-Wei Wang | en |
| dc.date.accessioned | 2025-02-21T16:20:52Z | - |
| dc.date.available | 2025-12-10 | - |
| dc.date.copyright | 2025-02-21 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-12-17 | - |
| dc.identifier.citation | Hanwei Wang, Feipei Lai, and Farn Wang, "Real-Time Multiple Human Height Measurements With Occlusion Handling Using LiDAR and Camera of a Mobile Device," IEEE Access, vol. 12, pp. 122588–122596, 2024, doi: 10.1109/ACCESS.2024.3447153.
Tom J. Liu, Hanwei Wang, Mesakh Christian, Che-Wei Chang, Feipei Lai, and Hao-Chih Tai, "Automatic segmentation and measurement of pressure injuries using deep learning models and a LiDAR camera," Scientific Reports, vol. 13, 680, 2023, doi: 10.1038/s41598-022-26812-9.
Che Wei Chang, Hanwei Wang, Feipei Lai, Mesakh Christian, Shih-Chen Huang, and Yo Shen Chen, "Comparison of 3D and 2D Area Measurement of Acute Burn Wounds with LiDAR Technique and Deep Learning Model," in preparation.
Hanwei Wang, Farn Wang, Che Wei Chang, and Feipei Lai, "Real-Time Fall Detection with Ground Height Awareness Using LiDAR and a Camera of a Mobile Device," in preparation.
B. Masanovic, J. Gardasevic, and F. Arifi, "Relationship Between Foot Length Measurements and Body Height: A Prospective Regional Study Among Adolescents in Northern Region of Kosovo," Anthropologie, vol. 57, no. 2, pp. 227–233, 2019, doi: 10.26720/anthro.18.01.23.1.
J. C. Y. Cheng et al., "Can We Predict Body Height from Segmental Bone Length Measurements? A Study of 3,647 Children," Journal of Pediatric Orthopaedics, vol. 18, no. 3, pp. 387–393, 1998, doi: 10.1097/01241398-199805000-00022.
S. Popovic, D. Bjelica, G. Georgiev, D. Krivokapic, and R. Milasinovic, "Body Height and its Estimation Utilizing Arm Span Measurements in Macedonian Adults," The Anthropologist, vol. 24, no. 3, pp. 737–745, 2016, doi: 10.1080/09720073.2016.11892070.
C. Pelin, R. Zağyapan, C. Yazıcı, and A. Kürkçüoğlu, "Body Height Estimation from Head and Face Dimensions: A Different Method," Journal of Forensic Sciences, vol. 55, no. 5, pp. 1326–1330, 2010, doi: 10.1111/j.1556-4029.2010.01429.x.
S. Gunel, H. Rhodin, and P. Fua, "What Face and Body Shapes Can Tell Us About Height," in 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019, doi: 10.1109/iccvw.2019.00226.
C. BenAbdelkader and Y. Yacoob, "Statistical body height estimation from a single image," in 2008 8th IEEE International Conference on Automatic Face & Gesture Recognition, 2008, doi: 10.1109/afgr.2008.4813453.
A. C. Gallagher, A. C. Blose, and T. Chen, "Jointly estimating demographics and height with a calibrated camera," in 2009 IEEE 12th International Conference on Computer Vision, 2009, doi: 10.1109/iccv.2009.5459340.
Y.-P. Guan, "Unsupervised human height estimation from a single image," Journal of Biomedical Science and Engineering, vol. 2, no. 6, pp. 425–430, 2009, doi: 10.4236/jbise.2009.26061.
M. Sakina, I. Muhammad, and S. S. Abdullahi, "A multi-factor approach for height estimation of an individual using 2D image," Procedia Computer Science, vol. 231, pp. 765–770, 2024.
M. Momeni-K, S. C. Diamantas, F. Ruggiero, and B. Siciliano, "Height Estimation from a Single Camera View," in Proceedings of the International Conference on Computer Vision Theory and Applications, 2012, doi: 10.5220/0003866203580364.
F. A. Andaló, G. Taubin, and S. Goldenstein, "Efficient height measurements in single images based on the detection of vanishing points," Computer Vision and Image Understanding, vol. 138, pp. 51–60, 2015, doi: 10.1016/j.cviu.2015.03.017.
L. Pradhan et al., "Feature Extraction from 2D Images for Body Composition Analysis," in 2015 IEEE International Symposium on Multimedia (ISM), 2015, doi: 10.1109/ism.2015.117.
A. Deak, O. Kainz, M. Michalko, and F. Jakab, "Estimation of human body height from uncalibrated image," in 2017 15th International Conference on Emerging eLearning Technologies and Applications (ICETA), 2017, doi: 10.1109/iceta.2017.8102474.
Y. Chai and X. Cao, "A Real-Time Human Height Measurement Algorithm Based on Monocular Vision," in 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), 2018, doi: 10.1109/imcec.2018.8469428.
D. Bieler, S. Gunel, P. Fua, and H. Rhodin, "Gravity as a reference for estimating a person's height from video," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 8569–8577.
Y. Liu, A. Sowmya, and H. Khamis, "Single camera multi-view anthropometric measurement of human height and mid-upper arm circumference using linear regression," PLoS One, vol. 13, no. 4, e0195600, 2018, doi: 10.1371/journal.pone.0195600.
F. Tosti et al., "Human height estimation from highly distorted surveillance image," Journal of Forensic Sciences, vol. 67, no. 1, pp. 332–344, 2022.
I. Samejima, K. Maki, S. Kagami, M. Kouchi, and H. Mizoguchi, "A body dimensions estimation method of subject from a few measurement items using KINECT," in 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2012, doi: 10.1109/icsmc.2012.6378315.
M. Robinson and M. B. Parkinson, "Estimating anthropometry with Microsoft Kinect," in Proceedings of the 2nd International Digital Human Modeling Symposium, 2013.
C. Ciampini, A. Petrillo, F. Zomparelli, and S. Groutas, "An innovative method for human height estimation combining video images and 3D laser scanning," Journal of Forensic Sciences, vol. 69, no. 1, pp. 301–315, 2024.
A. Espitia-Contreras, P. Sanchez-Caiman, and A. Uribe-Quevedo, "Development of a Kinect-based anthropometric measurement application," in 2014 IEEE Virtual Reality (VR), 2014, doi: 10.1109/vr.2014.6802056.
H.-W. Lee et al., "Kinect Who's Coming—Applying Kinect to Human Body Height Measurement to Improve Character Recognition Performance," Smart Science, vol. 3, no. 2, pp. 117–121, 2015, doi: 10.1080/23080477.2015.11665645.
A. Naufal, C. Anam, C. E. Widodo, and G. Dougherty, "Automated Calculation of Height and Area of Human Body for Estimating Body Weight Using a Matlab-based Kinect Camera," Smart Science, vol. 10, no. 1, pp. 68–75, 2021, doi: 10.1080/23080477.2021.1983940.
A. M. S. B. Adikari, N. G. C. Ganegoda, and W. K. I. L. Wanniarachchi, "Non-Contact Human Body Parameter Measurement Based on Kinect Sensor," IOSR Journal of Computer Engineering, vol. 19, no. 3, pp. 80–85, 2017, doi: 10.9790/0661-1903028085.
F. Yin and S. Zhou, "Accurate Estimation of Body Height From a Single Depth Image via a Four-Stage Developing Network," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, doi: 10.1109/cvpr42600.2020.00829.
D.-s. Lee, J.-s. Kim, S. C. Jeong, and S.-k. Kwon, "Human Height Estimation by Color Deep Learning and Depth 3D Conversion," Applied Sciences, vol. 10, no. 16, 5531, 2020, doi: 10.3390/app10165531.
M. Hasan, R. Goto, J. Hanawa, H. Fukuda, Y. Kuno, and Y. Kobayashi, "Person Property Estimation Based on 2D LiDAR Data Using Deep Neural Network," in Intelligent Computing Theories and Application (ICIC 2021), Proceedings, Part I, Springer, 2021, pp. 763–773.
T. Krzeszowski, B. Dziadek, C. França, F. Martins, É. R. Gouveia, and K. Przednowek, "System for Estimation of Human Anthropometric Parameters Based on Data from Kinect v2 Depth Camera," Sensors, vol. 23, no. 7, 3459, 2023, doi: 10.3390/s23073459.
K. Bartol, D. Bojanić, T. Petković, and T. Pribanić, "A review of body measurement using 3D scanning," IEEE Access, vol. 9, pp. 67281–67301, 2021.
Z. Mikalai, D. Andrey, H. S. Hawas, H. Tetiana, and S. Oleksandr, "Human body measurement with the iPhone 12 Pro LiDAR scanner," in AIP Conference Proceedings, vol. 2430, no. 1, AIP Publishing, 2022.
E. Hjelmås and B. K. Low, "Face Detection: A Survey," Computer Vision and Image Understanding, vol. 83, no. 3, pp. 236–274, 2001, doi: 10.1006/cviu.2001.0921.
A. Kumar, A. Kaur, and M. Kumar, "Face detection techniques: a review," Artificial Intelligence Review, vol. 52, no. 2, pp. 927–948, 2018, doi: 10.1007/s10462-018-9650-2.
F. Martini, W. C. Ober, K. Welch, C. E. Ober, and R. T. Hutchings, Martini's Atlas of the Human Body. Boston: Pearson, 2015.
Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000, doi: 10.1109/34.888718.
C. K. Sen, "Human wounds and its burden: An updated compendium of estimates," Advances in Wound Care (New Rochelle), vol. 8, pp. 39–48, 2019, doi: 10.1089/wound.2019.0946.
B. Song and A. Sacan, in 2012 IEEE International Conference on Bioinformatics and Biomedicine, 2012, pp. 1–4.
M. F. Ahmad Fauzi et al., "Computerized segmentation and measurement of chronic wound images," Computers in Biology and Medicine, vol. 60, pp. 74–85, 2015, doi: 10.1016/j.compbiomed.2015.02.015.
N. D. J. Hettiarachchi, R. B. H. Mahindaratne, G. D. C. Mendis, H. T. Nanayakkara, and N. D. Nanayakkara, in 2013 IEEE Point-of-Care Healthcare Technologies (PHT), 2013, pp. 298–301.
A. F. M. Hani, L. Arshad, A. S. Malik, A. Jamil, and F. Y. B. Bin, in 2012 4th International Conference on Intelligent and Advanced Systems (ICIAS 2012), 2012, pp. 362–367.
K. Wantanajittikul, S. Auephanwiriyakul, N. Theera-Umpon, and T. Koanantakool, in The 4th 2011 Biomedical Engineering International Conference, 2011, pp. 169–173.
Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436–444, 2015, doi: 10.1038/nature14539.
A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 25, 2012, doi: 10.1145/3065386.
J. Long, E. Shelhamer, and T. Darrell, in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3431–3440.
C. Wang et al., in 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2015, pp. 2415–2418.
M. Goyal, M. H. Yap, N. D. Reeves, S. Rajbhandari, and J. Spragg, in 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2017, pp. 618–623.
X. Liu et al., in 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), 2017, pp. 1–7.
C. Wang et al., "Fully automatic wound segmentation with deep convolutional neural networks," Scientific Reports, vol. 10, 21897, 2020, doi: 10.1038/s41598-020-78799-w.
C. W. Chang et al., "Deep learning approach based on superpixel segmentation assisted labeling for automatic pressure ulcer diagnosis," PLoS One, vol. 17, e0264139, 2022, doi: 10.1371/journal.pone.0264139.
K. Wada, "Labelme: Image polygonal annotation with Python," 2018. [Online]. Available: https://github.com/wkentaro/labelme
K. He, X. Zhang, S. Ren, and J. Sun, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
J. Deng et al., in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.
K. He, G. Gkioxari, P. Dollár, and R. Girshick, in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2980–2988.
T.-Y. Lin et al., "Microsoft COCO: Common objects in context," arXiv:1405.0312, 2014. [Online]. Available: https://ui.adsabs.harvard.edu/abs/2014arXiv1405.0312L
D. Peterson, "Polygon coordinates and areas," 2019. [Online]. Available: https://www.themathdoctors.org/polygon-coordinates-and-areas
C. A. Schneider, W. S. Rasband, and K. W. Eliceiri, "NIH Image to ImageJ: 25 years of image analysis," Nature Methods, vol. 9, pp. 671–675, 2012, doi: 10.1038/nmeth.2089.
O. Ronneberger, P. Fischer, and T. Brox, in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Springer International Publishing, 2015, pp. 234–241.
X. Dong et al., "Automatic multiorgan segmentation in thorax CT images using U-net-GAN," Medical Physics, vol. 46, pp. 2157–2168, 2019, doi: 10.1002/mp.13458.
Y. Zhang et al., "Automatic breast and fibroglandular tissue segmentation in breast MRI using deep learning by a fully-convolutional residual neural network U-Net," Academic Radiology, vol. 26, pp. 1526–1535, 2019, doi: 10.1016/j.acra.2019.01.012.
P. Blanc-Durand, A. Van Der Gucht, N. Schaefer, E. Itti, and J. O. Prior, "Automatic lesion detection and segmentation of 18F-FET PET in gliomas: A full 3D U-Net convolutional neural network study," PLoS One, vol. 13, e0195798, 2018, doi: 10.1371/journal.pone.0195798.
A. Fabijanska, "Segmentation of corneal endothelium images using a U-Net-based convolutional neural network," Artificial Intelligence in Medicine, vol. 88, pp. 1–13, 2018, doi: 10.1016/j.artmed.2018.04.004.
A. O. Vuola, S. U. Akram, and J. Kannala, in 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), 2019, pp. 208–212.
V. Couteaux et al., "Automatic knee meniscus tear detection and orientation classification with Mask-RCNN," Diagnostic and Interventional Imaging, vol. 100, pp. 235–242, 2019, doi: 10.1016/j.diii.2019.03.002.
R. Zhang, C. Cheng, X. Zhao, and X. Li, "Multiscale Mask R-CNN-based lung tumor detection using PET imaging," Molecular Imaging, vol. 18, 1536012119863531, 2019, doi: 10.1177/1536012119863531.
J. Y. Chiao et al., "Detection and classification of breast tumors using Mask R-CNN on sonograms," Medicine (Baltimore), vol. 98, e15200, 2019, doi: 10.1097/MD.0000000000015200.
B. Garcia-Zapirain, M. Elmogy, A. El-Baz, and A. S. Elmaghraby, "Classification of pressure ulcer tissues with 3D convolutional neural network," Medical & Biological Engineering & Computing, vol. 56, pp. 2245–2258, 2018, doi: 10.1007/s11517-018-1835-y.
N. Ohura et al., "Convolutional neural networks for wound detection: The role of artificial intelligence in wound care," Journal of Wound Care, vol. 28, pp. S13–S24, 2019, doi: 10.12968/jowc.2019.28.Sup10.S13.
S. Zahia, D. Sierra-Sosa, B. Garcia-Zapirain, and A. Elmaghraby, "Tissue classification and segmentation of pressure injuries using convolutional neural networks," Computer Methods and Programs in Biomedicine, vol. 159, pp. 51–58, 2018, doi: 10.1016/j.cmpb.2018.02.018.
S. C. Wang et al., "Point-of-care wound visioning technology: Reproducibility and accuracy of a wound measurement app," PLoS One, vol. 12, e0183139, 2017, doi: 10.1371/journal.pone.0183139.
S. Kompalliy, V. Bakarajuy, and S. B. Gogia, "Cloud-driven application for measurement of wound size," Studies in Health Technology and Informatics, vol. 264, pp. 1639–1640, 2019, doi: 10.3233/SHTI190573.
Y. Lucas et al., "Wound size imaging: Ready for smart assessment and monitoring," Advances in Wound Care (New Rochelle), vol. 10, pp. 641–661, 2021, doi: 10.1089/wound.2018.0937.
F. Taylor, S. M. Levenson, C. S. Davidson, N. C. Browder, and C. C. Lund, "Problems of protein nutrition in burned patients," Annals of Surgery, vol. 118, pp. 215–223, 1943.
G. A. Knaysi, G. F. Crikelair, and B. Cosman, "The rule of nines: Its history and accuracy," Plastic and Reconstructive Surgery, vol. 41, pp. 560–563, 1968.
V. Harish et al., "Accuracy of burn size estimation in patients transferred to adult burn units in Sydney, Australia: An audit of 698 patients," Burns, vol. 41, pp. 91–99, 2015.
M. G. Baartmans et al., "Accuracy of burn size assessment prior to arrival in Dutch burn centers and its consequences in children: A nationwide evaluation," Injury, vol. 43, pp. 1451–1456, 2012.
D. Parvizi et al., "The potential impact of wrong TBSA estimations on fluid resuscitation in patients suffering from burns: Things to keep in mind," Burns, vol. 40, pp. 241–245, 2014.
G. Litjens et al., "A survey on deep learning in medical image analysis," Medical Image Analysis, vol. 42, pp. 60–88, 2017.
J. Ker, L. Wang, J. Rao, and T. Lim, "Deep learning applications in medical image analysis," IEEE Access, vol. 6, pp. 9375–9389, 2017.
C. W. Chang et al., "Deep learning–assisted burn wound diagnosis: Diagnostic model development study," JMIR Medical Informatics, vol. 9, e22798, 2021.
C. Jiao, K. Su, W. Xie, and Z. Ye, "Burn image segmentation based on Mask Regions with Convolutional Neural Network deep learning framework: More accurate and more convenient," Burns & Trauma, vol. 7, 2019.
C. W. Chang et al., "Application of multiple deep learning models for automatic burn wound assessment," Burns, 2022.
L. B. Jørgensen, J. A. Sørensen, G. B. Jemec, and K. B. Yderstræde, "Methods to assess area and volume of wounds – A systematic review," International Wound Journal, vol. 13, pp. 540–553, 2016.
K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
Z. Dai et al., "Requirements for automotive LiDAR systems," Sensors, vol. 22, 7532, 2022.
S. Debnath, M. Paul, and T. Debnath, "Applications of LiDAR in agriculture and future research directions," Journal of Imaging, vol. 9, 57, 2023.
D. K. Langemo et al., "Two-dimensional wound measurement: Comparison of 4 techniques," Advances in Wound Care, vol. 11, pp. 337–343, 1998.
L. C. Rogers et al., "Digital planimetry results in more accurate wound measurements: A comparison to standard ruler measurements," International Journal of Lower Extremity Wounds, vol. 9, pp. 52–57, 2010.
D. Langemo et al., "Measuring wound length, width, and area: Which technique?," Advances in Skin & Wound Care, vol. 21, pp. 42–45, 2008.
A. O'Riordan, T. Newe, G. Dooly, and D. Toal, "Stereo vision sensing: Review of existing systems," in Proceedings of the 12th International Conference on Sensing Technology (ICST), 2018, pp. 1–8.
J. Geng, "Structured-light 3D surface imaging: A tutorial," Advances in Optics and Photonics, vol. 3, pp. 128–160, 2011.
B. Behroozpour, P. A. Sandborn, M. C. Wu, and B. E. Boser, "LiDAR system architectures and circuits," IEEE Communications Magazine, vol. 55, pp. 135–142, 2017.
A. Shah, C. Wollak, and J. Shah, "Wound measurement techniques: Comparing the use of ruler method, 2D imaging, and 3D scanner," Journal of the American College of Clinical Wound Specialists, vol. 5, pp. 52–57, 2013.
F. L. Bowling et al., "Remote assessment of diabetic foot ulcers using a novel wound imaging system," Wound Repair and Regeneration, vol. 19, pp. 25–30, 2011.
P. Plassmann and T. Jones, "MAVIS: A non-invasive instrument to measure area and volume of wounds," Medical Engineering & Physics, vol. 20, pp. 332–338, 1998.
T. J. Liu et al., "Automatic segmentation and measurement of pressure injuries using deep learning models and a LiDAR camera," Scientific Reports, vol. 13, 680, 2023.
E. G. Kee, R. Kimble, and K. Stockton, "3D photography is a reliable burn wound area assessment tool compared to digital planimetry in very young children," Burns, vol. 41, pp. 1286–1290, 2015.
K. Stockton et al., "3D photography is as accurate as digital planimetry tracing in determining burn wound area," Burns, vol. 41, pp. 80–84, 2015.
Z. M. Rashaan et al., "Three-dimensional imaging is a novel and reliable technique to measure total body surface area," Burns, vol. 44, pp. 816–822, 2018.
A. Bairagi et al., "A pilot study comparing two burn wound stereophotogrammetry systems in a pediatric population," Burns, vol. 48, pp. 85–90, 2022.
E. Farrar, O. Pujji, and S. Jeffery, "Three-dimensional wound mapping software compared to expert opinion in determining wound area," Burns, vol. 43, pp. 1736–1741, 2017.
N. Noury et al., "Fall detection – principles and methods," in 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007, pp. 1663–1666.
M. Mubashir, L. Shao, and L. Seed, "A survey on fall detection: Principles and approaches," Neurocomputing, vol. 100, pp. 144–152, 2013.
R. Igual, C. Medrano, and I. Plaza, "Challenges, issues and trends in fall detection systems," BioMedical Engineering OnLine, vol. 12, no. 1, 66, 2013.
A. K. Bourke, J. V. O'Brien, and G. M. Lyons, "Evaluation of a threshold-based tri-axial accelerometer fall detection algorithm," Gait & Posture, vol. 26, no. 2, pp. 194–199, 2007.
F. Bagala et al., "Evaluation of accelerometer-based fall detection algorithms on real-world falls," PLoS One, vol. 7, no. 5, e37062, 2012.
A. K. Bourke and G. M. Lyons, "A threshold-based fall-detection algorithm using a bi-axial gyroscope sensor," Medical Engineering & Physics, vol. 30, no. 1, pp. 84–90, 2008.
S.-G. Miaou, P.-H. Sung, and C.-Y. Huang, "A customized human fall detection system using omni-camera images and personal information," in 1st Transdisciplinary Conference on Distributed Diagnosis and Home Healthcare (D2H2), 2006, pp. 39–42.
K. De Miguel et al., "Home camera-based fall detection system for the elderly," Sensors, vol. 17, no. 12, 2864, 2017.
Z.-P. Bian et al., "Fall detection based on body part tracking using a depth camera," IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 2, pp. 430–439, 2014.
W. Chen et al., "Fall detection based on key points of human-skeleton using OpenPose," Symmetry, vol. 12, no. 5, 744, 2020.
Z.-P. Bian, L.-P. Chau, and N. Magnenat-Thalmann, "Fall detection based on skeleton extraction," in Proceedings of the 11th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, 2012, pp. 91–98.
T.-L. Le and J. Morel, "An analysis on human fall detection using skeleton from Microsoft Kinect," in 2014 IEEE Fifth International Conference on Communications and Electronics (ICCE), 2014, pp. 282–287.
G. Feng et al., "Floor pressure imaging for fall detection with fiber-optic sensors," IEEE Pervasive Computing, vol. 15, no. 2, pp. 40–47, 2016.
Y. Tang et al., "iPrevent: A novel wearable radio frequency range detector for fall prevention," in 2016 IEEE International Symposium on Radio-Frequency Integration Technology (RFIT), 2016, pp. 1–3.
H. Wang et al., "RT-Fall: A real-time and contactless fall detection system with commodity WiFi devices," IEEE Transactions on Mobile Computing, vol. 16, no. 2, pp. 511–526, 2016.
Y. Wang, K. Wu, and L. M. Ni, "WiFall: Device-free fall detection by wireless networks," IEEE Transactions on Mobile Computing, vol. 16, no. 2, pp. 581–594, 2016.
A. Rezaei et al., "Unobtrusive human fall detection system using mmWave radar and data driven methods," IEEE Sensors Journal, vol. 23, no. 7, pp. 7968–7976, 2023.
D. Zhang et al., "LT-Fall: The design and implementation of a life-threatening fall detection and alarming system," Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 7, no. 1, pp. 1–24, 2023.
L. Yang, Y. Ren, and W. Zhang, "3D depth image analysis for indoor fall detection of elderly people," Digital Communications and Networks, vol. 2, no. 1, pp. 24–34, 2016.
M. Kepski and B. Kwolek, "Fall detection using ceiling-mounted 3D depth camera," in 2014 International Conference on Computer Vision Theory and Applications (VISAPP), vol. 2, 2014, pp. 640–647.
S. Gasparrini et al., "A depth-based fall detection system using a Kinect sensor," Sensors, vol. 14, no. 2, pp. 2756–2775, 2014.
T.-H. Tsai and C.-W. Hsu, "Implementation of fall detection system based on 3D skeleton for deep learning technique," IEEE Access, vol. 7, pp. 153049–153059, 2019.
X. Xiong et al., "S3D-CNN: Skeleton-based 3D consecutive-low-pooling neural network for fall detection," Applied Intelligence, vol. 50, no. 10, pp. 3521–3534, 2020.
M. Bouazizi, C. Ye, and T. Ohtsuki, "2D LiDAR-Based Approach for Activity Identification and Fall Detection," IEEE Internet of Things Journal, 2021.
H. Miawarni et al., "Fall detection system for elderly based on 2D LiDAR: A preliminary study of fall incident and activities of daily living (ADL) detection," in 2020 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM), 2020, pp. 100–105.
M. Bouazizi et al., "A Novel Approach for Activity, Fall and Gait Detection Using Multiple 2D LiDARs," in GLOBECOM 2023 – 2023 IEEE Global Communications Conference, 2023.
Z. Cao et al., "OpenPose: Real-time multi-person 2D pose estimation using Part Affinity Fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 1, pp. 172–186, 2019.
G. Varol et al., "BodyNet: Volumetric inference of 3D human body shapes," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 20–37.
A. Kendall, M. Grimes, and R. Cipolla, "PoseNet: A convolutional network for real-time 6-DOF camera relocalization," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 2938–2946.
R. Bajpai and D. Joshi, "MoveNet: A Deep Neural Network for Joint Profile Prediction Across Variable Walking Speeds and Slopes," IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1–11, 2021.
K. M. Bushby et al., "Centiles for adult head circumference," Archives of Disease in Childhood, vol. 67, no. 10, pp. 1286–1287, 1992. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96742 | - |
| dc.description.abstract | 醫學測量在診斷、監測和治療各種疾病和健康問題方面扮演著重要角色,醫療環境需要醫護人員在短時間內做出迅速而準確的決策和行動,因此提供即時精準的醫學測量是十分重要的,然而現今所使用的一般傳統醫學測量方式,還需要許多人力介入判斷,不僅耗時費工,也缺乏統一的測量標準,我們必須尋找更快速精準且統一的辦法,以確保達到更好的醫療護理並改善醫療時間。
這篇論文提出一套架構,利用手持裝置的光學雷達與相機取得醫療待測物在真實世界的三維座標,加入機器學習方法的判斷輔助,達到非接觸式且即時的精準醫學測量,改善了一般傳統醫學測量利用人力判斷耗時費工的問題,且讓醫學測量有了統一測量標準,因為是由機器自動判斷,同樣的測量圖片不會有不同人判斷得到不同結果的問題。 本論文基於此架構提出了多項即時醫學測量的應用,例如:測量身高 [1]、測量褥瘡面積 [2]、測量燙傷面積 [3]以及跌倒偵測 [4],並在手持裝置上開發了多套醫學測量軟體來驗證此架構。醫療人員及一般民眾皆可以利用具備光學雷達與相機的手持裝置(例如iPhone Pro與iPad Pro),在下載安裝我們開發的軟體後即可做即時的醫學測量。手持裝置是很適合做為醫學測量的工具,因為手持裝置隨手可及的便利性以及無遠弗屆的傳輸能力,讓遠距醫療測量能真正普及在一般大眾的生活之中,不再是遙不可及的夢想。 | zh_TW |
| dc.description.abstract | Medical measurements play a crucial role in diagnosing, monitoring, and treating various diseases and health problems. Clinical settings demand quick and accurate decisions, which makes immediate and precise medical measurements essential. However, traditional measurement methods rely heavily on human intervention and judgment, making them time-consuming and labor-intensive and leaving them without a unified measurement standard. Enhancing medical care therefore requires faster, more accurate, and standardized methods that reduce the time spent on medical procedures.
This dissertation introduces an architecture that uses the LiDAR and camera of a mobile device to capture the real-world three-dimensional coordinates of the objects to be measured, and incorporates machine learning methods to deliver precise, non-contact medical measurements in real time. This approach reduces reliance on human judgment, streamlines medical measurement, and establishes a unified standard that eliminates discrepancies between different assessors evaluating the same image. Building on this architecture, the dissertation presents several real-time medical measurement applications, including height measurement [1], bedsore area measurement [2], burn area measurement [3], and fall detection [4]. We also developed several medical measurement applications for mobile devices to validate the architecture. After downloading and installing our software, medical staff and the general public can perform real-time medical measurements with mobile devices equipped with LiDAR and a camera (for example, the iPhone Pro and iPad Pro). Mobile devices are well suited to medical measurement: their everyday convenience and widespread connectivity make telemedicine measurement genuinely accessible to the general public. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-02-21T16:20:52Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-02-21T16:20:52Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 誌謝 I
中文摘要及關鍵詞 II
ABSTRACT AND KEYWORDS III
CONTENTS IV
LIST OF FIGURES VI
LIST OF TABLES VII
CHAPTER 1 INTRODUCTION 1
1.1 MEDICAL MEASUREMENT SYSTEM UTILIZING LIDAR, CAMERA, AND MACHINE LEARNING ON MOBILE DEVICES 1
1.1.1 Advantage of Medical Measurement Using LiDAR and Camera 2
1.1.2 Advantage of Finding Medical Objects Using Machine Learning Methods 3
1.1.3 Advantage of Medical Measurement Using Mobile Device 4
1.2 PROCESS OF OBTAINING THE 3D COORDINATES OF THE MEDICAL OBJECTS 5
1.2.1 Conversion of Coordinates in the Real-World 6
1.2.2 Medical Object Detection Methods 8
1.3 STRUCTURE OF THE DOCTORAL DISSERTATION 8
CHAPTER 2 MEASURE BODY HEIGHT 11
2.1 INTRODUCTION 11
2.2 METHODS 14
2.2.1 Acquire the 3D Coordinates of the Human Faces 16
2.2.2 Determine the Heights of the Ground and the Eye 17
2.2.3 Body Height Calculation 17
2.3 EXPERIMENTAL RESULTS 18
2.3.1 "Measuring Body Height" Mobile Application 18
2.3.2 Precision of Measuring Body Height 19
2.3.3 Accuracy of Multiple Body Heights Measured in One Shot 20
2.3.4 Accuracy of Multiple Body Heights Measured Wearing Glasses and Shoes in One Shot 21
2.3.5 Accuracy of Body Height Measurement When Part of the Body Is Obscured 23
2.3.6 Accuracy of Body Height Measurement at Different Camera Heights and Angles 24
2.3.7 Variable Brightness of the Scene 25
2.4 DISCUSSION 26
2.4.1 Comparison with Prior Work 27
2.4.2 Limitations 29
2.5 CONCLUSION 30
CHAPTER 3 MEASURE BEDSORE AREA 32
3.1 INTRODUCTION 32
3.2 METHODS 34
3.2.1 Annotating Data for Machine Learning Models 34
3.2.2 Segmentation Using U-Net 35
3.2.3 Segmentation Using Mask R-CNN 35
3.2.4 "Pressure Ulcer Measure" Mobile Application 36
3.2.5 Bedsore Area Calculation 37
3.2.6 Validation of Bedsore Area Measurement 39
3.3 EXPERIMENTAL RESULTS 40
3.3.1 Accuracy of Bedsore Area Measurement 40
3.4 DISCUSSION 42
3.4.1 Comparison with Prior Work 42
3.4.2 Limitations 46
3.5 CONCLUSION 46
CHAPTER 4 MEASURE BURN AREA 48
4.1 INTRODUCTION 48
4.2 METHODS 50
4.2.1 Annotating Data for Machine Learning Models 50
4.2.2 "Burn Evaluation Network" Mobile Application 51
4.2.3 Burn Area Measurement 52
4.3 EXPERIMENTAL RESULTS 56
4.3.1 Accuracy of Burn Wound Area Measurement 56
4.3.2 Experimental Result of Real Patients 58
4.4 DISCUSSION 64
4.4.1 3D Photography 65
4.4.2 3D Measurements 66
4.4.3 3D Burn Area Measurement 67
4.4.4 Comparison with Prior Work 68
4.4.5 Limitations 68
4.5 CONCLUSION 69
CHAPTER 5 FALL DETECTION 70
5.1 INTRODUCTION 71
5.2 METHODS 73
5.2.1 Acquire the 3D Coordinates of the Human Skeleton 74
5.2.2 Determine the Heights of the Ground and the Head 75
5.2.3 Fall Judgment 76
5.2.4 "Fallert" Mobile Application 78
5.3 EXPERIMENTAL RESULTS 79
5.3.1 Precision in Determining Whether the Head is Close to the Ground 79
5.3.2 Accuracy of Six Types of Fall Activities Detection 81
5.3.3 Distinguish Falls on The Bed/Sofa, Sit-ups, and Falls on The Ground 83
5.3.4 Dynamic Evaluation of Yoga Posture 84
5.3.5 Variable Brightness of the Scene 86
5.4 DISCUSSION 87
5.4.1 Limitations 89
5.4.2 Comparison with Prior Work 90
5.4.3 Future Work 91
5.5 CONCLUSION 91
CHAPTER 6 CONCLUSION 93
6.1 FUTURE WORK 93
6.2 PRELIMINARY APPLICATIONS FOR AUTOMATIC MEASUREMENT SYSTEM 94
6.3 CONCLUSION 95
REFERENCE 97 | - |
| dc.language.iso | en | - |
| dc.subject | 應用軟體 | zh_TW |
| dc.subject | 機器學習 | zh_TW |
| dc.subject | 相機 | zh_TW |
| dc.subject | 光學雷達 | zh_TW |
| dc.subject | 手持裝置 | zh_TW |
| dc.subject | 醫學測量 | zh_TW |
| dc.subject | Mobile Device | en |
| dc.subject | Medical Measurement | en |
| dc.subject | Applications | en |
| dc.subject | LiDAR | en |
| dc.subject | Camera | en |
| dc.subject | Machine Learning | en |
| dc.title | 利用手持裝置上的光學雷達與相機透過機器學習方法在醫學測量的各種應用 | zh_TW |
| dc.title | Medical Measurement Applications Utilizing LiDAR and Camera with Machine Learning Methods on a Mobile Device | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-1 | - |
| dc.description.degree | 博士 | - |
| dc.contributor.oralexamcommittee | 賴飛羆;林澤;李鴻璋;陳沛甫 | zh_TW |
| dc.contributor.oralexamcommittee | Feipei Lai;Che Lin;Hung-Chang Lee;Pei-Fu Chen | en |
| dc.subject.keyword | 醫學測量,應用軟體,光學雷達,相機,機器學習,手持裝置, | zh_TW |
| dc.subject.keyword | Medical Measurement,Applications,LiDAR,Camera,Machine Learning,Mobile Device, | en |
| dc.relation.page | 113 | - |
| dc.identifier.doi | 10.6342/NTU202404700 | - |
| dc.rights.note | 同意授權(限校園內公開) | - |
| dc.date.accepted | 2024-12-17 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 電機工程學系 | - |
| dc.date.embargo-lift | 2025-12-10 | - |
| Appears in Collections: | 電機工程學系 | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-113-1.pdf (access restricted to NTU campus IP addresses; off-campus users please connect via the NTU VPN service) | 34.97 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their license terms.