Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/52653
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 蕭浩明(Hao-Ming Hsiao) | |
dc.contributor.author | Ke-Cheng Lin | en |
dc.contributor.author | 林科呈 | zh_TW |
dc.date.accessioned | 2021-06-15T16:21:58Z | - |
dc.date.available | 2020-08-24 | |
dc.date.copyright | 2020-08-24 | |
dc.date.issued | 2020 | |
dc.date.submitted | 2020-08-11 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/52653 | - |
dc.description.abstract | 近年來由於硬體技術的進步,以及影像資料的大量累積,電腦視覺領域蓬勃發展,如今已廣泛應用於各領域中,包含工廠自動化、無人駕駛與辨識系統等。隨著人們對醫療品質的要求逐漸上升,電腦視覺也拓展到醫學領域上,透過特殊的醫療攝影設備,結合智慧演算法輔助醫療人員進行診斷,例如藉由斷層掃描進行主動脈剝離診斷,或利用乳房攝影診斷乳房纖維腫瘤等。相較於醫療人員,電腦視覺具有更快、更穩定與更準確的判斷。然而如今應用於醫學領域之電腦視覺技術,主要著重於靜態影像分析,鮮少動態影像分析的例子。因此本研究以開發醫療型眼動儀與穩定跌倒偵測系統為目標,希望藉此將電腦視覺動態影像分析技術應用於醫學領域上。 本研究開發之醫療型眼動儀具備穩定追蹤眼動、準確估測視線與即時分析眼動情形等功能,有別於傳統醫療型眼動儀,無需任何穿戴裝置,且在極短時間內即可量測出受試者視線追蹤能力,以及眼球震顫的嚴重度。系統內部的眼動追蹤演算法,結合基於像素值與像素值之梯度二種特徵進行判定,達成更穩定的追蹤。視線估測法也結合了眼球模型與迴歸分析,降低計算複雜度的同時維持估測精準度。 另一方面本研究設計之跌倒偵測演算法,串聯各種特徵判定方式,包含跌倒時之整體動量、軸心方向與位置、輪廓形狀變化與昏迷可能性等四種方法,計算出六個特徵值,並藉由大量影像資料集測試,調整演算法參數,完成高偵測率且低錯誤警告率的偵測系統,且對運算成本要求極低。 | zh_TW |
dc.description.abstract | Over the last few years, owing to advances in hardware technology and the accumulation of large image data sets, the field of computer vision has developed rapidly and is now widely used in areas such as factory automation, self-driving systems, and recognition systems. With rising expectations for the quality of medical care, computer vision has also expanded into medicine, assisting medical personnel in diagnosis by combining specialized medical imaging equipment with intelligent algorithms, for example automatic diagnosis of aortic dissection from tomography or of fibroadenoma from mammography. Compared with medical personnel, computer vision offers faster, more stable, and more accurate judgments. However, computer vision technology in the medical field has focused mainly on static image analysis; there are only a few cases of video analysis. This research therefore aims to develop a medical eye tracker and a stable fall detection system, bringing video analysis technology into the medical field. The eye tracker we developed for medical use provides stable eye-movement tracking, accurate gaze estimation, and real-time analysis of eye movements. Unlike traditional medical eye trackers, it requires no wearable devices and can measure a subject's gaze-tracking ability and the severity of nystagmus within a very short time. Its eye-tracking algorithm combines an intensity-based method with a gradient-based method to achieve more stable tracking, and its gaze estimation method combines a three-dimensional eyeball model with regression analysis, reducing computational complexity while maintaining estimation accuracy. The fall detection system designed in this research chains four feature determination methods, computing six feature values that capture the total momentum during a fall, the main-axis direction and position, the change in silhouette shape, and the possibility of coma after the fall. By testing the algorithm on a large set of video data and tuning its parameters accordingly, we achieve a high detection rate, a low false-alarm rate, and an extremely low computing cost. | en |
dc.description.provenance | Made available in DSpace on 2021-06-15T16:21:58Z (GMT). No. of bitstreams: 1 U0001-0608202014264400.pdf: 4679404 bytes, checksum: 75f3d1968198ea3784bfa40287e5e9b4 (MD5) Previous issue date: 2020 | en |
dc.description.tableofcontents | Oral Examination Committee Certification i Acknowledgements ii Abstract (Chinese) iii Abstract iv Table of Contents vi List of Figures ix List of Tables xii Chapter 1 Introduction 1 1.1 Preface 1 1.2 Research Motivation and Objectives 4 Chapter 2 Literature Review 5 2.1 Eye Tracking Systems 5 2.1.1 Image-Based Pupil Tracking Techniques 5 2.1.2 Gaze Estimation 8 2.1.3 Medical Applications of Eye Tracking 11 2.2 Fall Detection Systems 13 2.2.1 Human Silhouette Detection Techniques 15 2.2.2 Human Silhouette Features and Fall Determination 17 Chapter 3 Video Analysis Applied to the Eye Tracking System 20 3.1 Mechanical Design of the Eye Tracking System 20 3.1.1 Positioning Light Sources 22 3.1.2 Camera Module 22 3.2 Pupil Tracking Algorithm 24 3.2.1 Image Preprocessing 24 3.2.2 Pupil Contour Estimation 27 3.2.3 Finding Corneal Reflections 32 3.3 Eyeball Model and Gaze Estimation Implementation 33 3.3.1 Camera Calibration 34 3.3.2 Camera-Screen Relative Pose Calibration 37 3.3.3 Pupil Center-Corneal Reflection Method 43 3.3.4 Gaze Coordinate Mapping 46 3.4 Medical Applications of the Eye Tracking System 48 Chapter 4 Video Analysis Applied to the Fall Detection System 54 4.1 Video Data 54 4.2 Human Silhouette Detection Algorithm 54 4.2.1 Background Detection 54 4.2.2 Background Removal 56 4.2.3 Silhouette Image Binarization 58 4.2.4 Silhouette Gap Filling and Noise Suppression 59 4.3 Dynamic Features of the Human Silhouette 61 4.3.1 Based on Total Momentum 61 4.3.2 Based on Main-Axis Direction and Position 62 4.3.3 Based on Silhouette Shape Change 62 4.3.4 Based on the Possibility of Coma after a Fall 64 4.4 Fall Detection Implementation 65 Chapter 5 Conclusions and Future Work 71 5.1 Eye Tracking System 71 5.1.1 Conclusions 71 5.1.2 Future Work 71 5.2 Fall Detection System 72 5.2.1 Conclusions 72 5.2.2 Future Work 73 Chapter 6 References 74 | |
dc.language.iso | zh-TW | |
dc.title | 動態影像分析於醫學領域之應用 | zh_TW |
dc.title | Application of Motion Analysis in Medical Field | en |
dc.type | Thesis | |
dc.date.schoolyear | 108-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 陳湘鳳(Shiang-Feng Chen),林峻永(Chun-Yeon Lin) | |
dc.subject.keyword | 電腦視覺,動態影像分析,眼動追蹤,視線估測,跌倒偵測 | zh_TW |
dc.subject.keyword | Computer vision, Video analysis, Eye tracking, Gaze estimation, Fall detection | en |
dc.relation.page | 83 | |
dc.identifier.doi | 10.6342/NTU202002534 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2020-08-11 | |
dc.contributor.author-college | 工學院 | zh_TW |
dc.contributor.author-dept | 機械工程學研究所 | zh_TW |
Appears in Collections: | 機械工程學系
Files in This Item:
File | Size | Format |
---|---|---|
U0001-0608202014264400.pdf (currently not authorized for public access) | 4.57 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.