Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80915

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 鄭士康(Shyh-Kang Jeng) | |
| dc.contributor.author | Chih-Wei Ku | en |
| dc.contributor.author | 古智崴 | zh_TW |
| dc.date.accessioned | 2022-11-24T03:21:46Z | - |
| dc.date.available | 2021-11-08 | |
| dc.date.available | 2022-11-24T03:21:46Z | - |
| dc.date.copyright | 2021-11-08 | |
| dc.date.issued | 2021 | |
| dc.date.submitted | 2021-09-17 | |
| dc.identifier.citation | 1. Luz, S., de la Fuente, S., Albert, P. (2018). A method for analysis of patient speech in dialogue for dementia detection. arXiv preprint arXiv:1811.09919. 2. Ieracitano, C., Mammone, N., Hussain, A., Morabito, F. C. (2020). A novel multi-modal machine learning based approach for automatic classification of EEG recordings in dementia. Neural Networks, 123, 176-190. 3. Tanaka, H., Adachi, H., Kazui, H., Ikeda, M., Kudo, T., Nakamura, S. (2019, October). Detecting dementia from face in human-agent interaction. In Adjunct of the 2019 International Conference on Multimodal Interaction (pp. 1-6). 4. 陳奕翔. (2020). 失智長者之語音資料庫建立與應用. 臺灣大學電信工程學研究所學位論文, 1-48. 5. Campbell, E. L., Docío-Fernández, L., Raboso, J. J., García-Mateo, C. (2020). Alzheimer's dementia detection from audio and text modalities. arXiv preprint arXiv:2008.04617. 6. Liu, Y. Y. (2018). 基於獨白文字紀錄之失智症評估分類器. 臺灣大學電機工程學研究所學位論文, 1-52. 7. Mikolov, T., Chen, K., Corrado, G., Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. 8. Brownlee, J. (2017, October 6). How to develop word embeddings in Python with Gensim. Deep Learning for Natural Language Processing. https://machinelearningmastery.com/develop-word-embeddings-python-gensim/ 9. Zhu, Y., Liang, X., Batsis, J. A., Roth, R. M. (2021). Exploring deep transfer learning techniques for Alzheimer's dementia detection. Frontiers in Computer Science, 3. 10. Foraita, R., Spallek, J., Zeeb, H. (2014). Directed acyclic graphs. 11. Rabiner, L., Juang, B. (1986). An introduction to hidden Markov models. IEEE ASSP Magazine, 3(1), 4-16. 12. Zhang, H. P., Yu, H. K., Xiong, D., Liu, Q. (2003, July). HHMM-based Chinese lexical analyzer ICTCLAS. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing (pp. 184-187). 13. Ali, M. (2020, July). PyCaret: An open-source, low-code machine learning library in Python. https://www.pycaret.org 14. Harris, C. R., Millman, K. J., van der Walt, S. J., Gommers, R., Virtanen, P., Cournapeau, D., ... Oliphant, T. E. (2020). Array programming with NumPy. Nature, 585(7825), 357-362. 15. McKinney, W. (2010). Data structures for statistical computing in Python. In Proceedings of the 9th Python in Science Conference (pp. 56-61). 16. Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., Gulin, A. (2017). CatBoost: Unbiased boosting with categorical features. arXiv preprint arXiv:1706.09516. 17. Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., ... Liu, T. Y. (2017). LightGBM: A highly efficient gradient boosting decision tree. Advances in Neural Information Processing Systems, 30, 3146-3154. 18. Abdi, H., Williams, L. J. (2010). Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics, 2(4), 433-459. 19. Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1(1), 81-106. 20. Rish, I. (2001, August). An empirical study of the naive Bayes classifier. In IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence (Vol. 3, No. 22, pp. 41-46). 21. Chen, T., Guestrin, C. (2016, August). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 785-794). 22. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324. 23. Rumelhart, D. E., Hinton, G. E., Williams, R. J. (1985). Learning internal representations by error propagation. California Univ San Diego La Jolla Inst for Cognitive Science. 24. Devlin, J., Chang, M. W., Lee, K., Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. 25. Liou, C. Y., Cheng, W. C., Liou, J. W., Liou, D. R. (2014). Autoencoder for words. Neurocomputing, 139, 84-96. 26. Ramos, J. (2003, December). Using TF-IDF to determine word relevance in document queries. In Proceedings of the First Instructional Conference on Machine Learning (Vol. 242, No. 1, pp. 29-48). 27. Ouyang, X., Zhou, P., Li, C. H., Liu, L. (2015, October). Sentiment analysis using convolutional neural network. In 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (pp. 2359-2364). IEEE. 28. Hu, W., Gu, Z., Xie, Y., Wang, L., Tang, K. (2019, June). Chinese text classification based on neural networks and Word2vec. In 2019 IEEE Fourth International Conference on Data Science in Cyberspace (DSC) (pp. 284-291). IEEE. 29. Borson, S., Scanlan, J., Brush, M., Vitaliano, P., Dokmak, A. (2000). The Mini-Cog: A cognitive 'vital signs' measure for dementia screening in multi-lingual elderly. International Journal of Geriatric Psychiatry, 15(11), 1021-1027. 30. Hughes, C. P., Berg, L., Danziger, W., Coben, L. A., Martin, R. L. (1982). A new clinical scale for the staging of dementia. The British Journal of Psychiatry, 140(6), 566-572. 31. Teng, E. L., Hasegawa, K., Homma, A., Imai, Y., Larson, E., Graves, A., ... White, L. R. (1994). The Cognitive Abilities Screening Instrument (CASI): A practical test for cross-cultural epidemiological studies of dementia. International Psychogeriatrics, 6(1), 45-58. 32. Folstein, M. F. (1975). A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189-198. 33. Pfeiffer, E. (1975). A short portable mental status questionnaire for the assessment of organic brain deficit in elderly patients. Journal of the American Geriatrics Society. 34. Goodglass, H., Kaplan, E., Weintraub, S. (1983). Boston Naming Test. Philadelphia, PA: Lea & Febiger. 35. Jessen, F., Amariglio, R. E., Van Boxtel, M., Breteler, M., Ceccaldi, M., Chételat, G., ... Subjective Cognitive Decline Initiative (SCD-I) Working Group. (2014). A conceptual framework for research on subjective cognitive decline in preclinical Alzheimer's disease. Alzheimer's & Dementia, 10(6), 844-852. 36. Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5-32. 37. Schapire, R. E. (2013). Explaining AdaBoost. In Empirical Inference (pp. 37-52). Springer, Berlin, Heidelberg. 38. Cox, D. R. (1958). The regression analysis of binary sequences. Journal of the Royal Statistical Society: Series B (Methodological), 20(2), 215-232. 39. Tharwat, A., Gaber, T., Ibrahim, A., Hassanien, A. E. (2017). Linear discriminant analysis: A detailed tutorial. AI Communications, 30(2), 169-190. 40. Natekin, A., Knoll, A. (2013). Gradient boosting machines, a tutorial. Frontiers in Neurorobotics, 7, 21. 41. Geurts, P., Ernst, D., Wehenkel, L. (2006). Extremely randomized trees. Machine Learning, 63(1), 3-42. 42. Tharwat, A., Gaber, T., Ibrahim, A., Hassanien, A. E. (2017). Linear discriminant analysis: A detailed tutorial. AI Communications, 30(2), 169-190. 43. Chollet, F. (2015). Keras: Deep learning library for Theano and TensorFlow. https://keras.io 44. Nair, V., Hinton, G. E. (2010, January). Rectified linear units improve restricted Boltzmann machines. In ICML. 45. Hinton, G. E., Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504-507. 46. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 1929-1958. 47. Cui, Y., Che, W., Liu, T., Qin, B., Yang, Z., Wang, S., Hu, G. (2019). Pre-training with whole word masking for Chinese BERT. arXiv preprint arXiv:1906.08101. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80915 | - |
| dc.description.abstract | With the arrival of an aging society, dementia is an issue that increasingly demands attention. At present, screening for dementia relies mainly on physician interviews, cognitive function assessment, brain imaging, and blood tests. Given that the number of dementia patients is expected to grow, we hope to use machine learning and deep learning techniques to assist with the task of cognitive function assessment. The data used in this thesis consist of monologue transcripts describing the 'Cookie Theft' picture, collected from participants diagnosed with dementia and from non-demented participants (i.e., healthy controls) at 仁鶴軒 of the Taipei City Hospital Renai Branch and the memory clinic of National Taiwan University Hospital. We expect that, owing to deficits in certain brain functions, dementia patients show impaired language expression and reduced fluency and depth in their monologues. This study analyzes the data with classifiers ranging from basic machine learning models to deep learning models. The analysis proceeds in two stages: the first extracts features linguistically, classifying on the basis of part-of-speech distributions and related features (a minimal illustrative sketch of this stage follows the metadata table below); the second uses word embeddings to take context into account and introduces BERT-based techniques, in order to identify algorithms that can effectively distinguish dementia patients. | zh_TW |
| dc.description.provenance | Made available in DSpace on 2022-11-24T03:21:46Z (GMT). No. of bitstreams: 1 U0001-1609202123562900.pdf: 2036949 bytes, checksum: 286f8754d83c8c5620bf0629a68fce5b (MD5) Previous issue date: 2021 | en |
| dc.description.tableofcontents | Chinese Abstract i  Abstract ii  Table of Contents iii  List of Figures v  List of Tables vi  Chapter 1: Introduction 1  1.1 Research Background 1  1.2 Literature Review 1  1.3 Contributions 2  1.4 Chapter Overview 3  Chapter 2: Background 4  2.1 Dementia 4  2.2 Text Preprocessing 4  2.2.1 Word Segmentation 4  2.2.2 Part-of-Speech Tagging 6  2.2.3 Stop Words 6  Chapter 3: Methodology 8  3.1 PyCaret 8  3.2 Machine Learning Classification Models in PyCaret 9  3.3 Deep Learning Models for Text Classification 10  3.4 Natural Language Processing 10  3.4.1 Word2Vec 10  3.4.2 BERT 12  Chapter 4: Experimental Design 14  4.1 Data Collection 14  4.1.1 Dementia Assessment Tools 14  4.1.2 Data Sources 16  4.1.3 Data Processing 17  4.2 Experimental Framework 19  4.3 Feature Extraction 20  4.3.1 Feature Extraction for Part I Experiments 20  4.3.2 Feature Extraction for Part II Experiments 21  Chapter 5: Experiments and Analysis 22  5.1 Machine Learning Classification Models Based on POS Tagging and Text Features 22  5.1.1 Data Balancing and Group Classification Experiments 22  5.1.2 Feature Engineering 23  5.1.3 Model Selection 26  5.1.4 Feature Importance 27  5.2 Deep Learning Classification Models Based on POS Tagging and Text Features 31  5.2.1 Shallow Neural Networks 31  5.2.2 Autoencoder 34  5.3 Natural Language Processing Classification Models 38  5.3.1 Word2Vec 39  5.3.2 BERT 43  5.4 Discussion of Results 46  5.4.1 Comparison of Jieba and CKIP 46  5.4.2 Variability of Experimental Results 46  5.4.3 Current Status of Text-Based Dementia Classifiers (Illustrated with a BERT Implementation) 47  Chapter 6: Conclusion 49  References 50 | |
| dc.language.iso | zh-TW | |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 失智症 | zh_TW |
| dc.subject | 自然語言處理 | zh_TW |
| dc.subject | 機器學習 | zh_TW |
| dc.subject | 特徵工程 | zh_TW |
| dc.subject | Natural Language Processing | en |
| dc.subject | Machine Learning | en |
| dc.subject | Deep Learning | en |
| dc.subject | Dementia | en |
| dc.subject | Feature engineering | en |
| dc.title | 基於語料的失智患者辨識系統 | zh_TW |
| dc.title | A Corpus-Based System for Dementia Detection | en |
| dc.date.schoolyear | 109-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 高照明(Hsin-Tsai Liu),林怡君(Chih-Yang Tseng) | |
| dc.subject.keyword | 失智症, 特徵工程, 自然語言處理, 機器學習, 深度學習 | zh_TW |
| dc.subject.keyword | Dementia, Feature engineering, Natural Language Processing, Machine Learning, Deep Learning | en |
| dc.relation.page | 54 | |
| dc.identifier.doi | 10.6342/NTU202103224 | |
| dc.rights.note | Authorization granted (access restricted to campus only) | |
| dc.date.accepted | 2021-09-17 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電信工程學研究所 | zh_TW |
| Appears in Collections: | 電信工程學研究所 | |
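
The first-stage pipeline described in the abstract (part-of-speech features extracted from Chinese monologue transcripts, then compared across classical classifiers with PyCaret) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the thesis's exact implementation: the file name `transcripts.csv`, the column names `text` and `label`, the label encoding, and the use of jieba for POS tagging are all assumptions introduced for the example.

```python
# Minimal sketch of a POS-distribution feature pipeline, assuming a hypothetical
# transcripts.csv with columns "text" (monologue transcript) and "label"
# (1 = dementia, 0 = control). Not the thesis's exact configuration.
from collections import Counter

import jieba.posseg as pseg
import pandas as pd


def pos_distribution(text: str) -> dict:
    """Segment a Chinese transcript with jieba and return the relative
    frequency of each part-of-speech tag as a feature dictionary."""
    tags = [pair.flag for pair in pseg.cut(text)]
    counts = Counter(tags)
    total = sum(counts.values()) or 1
    return {f"pos_{tag}": n / total for tag, n in counts.items()}


transcripts = pd.read_csv("transcripts.csv")  # hypothetical data file
features = pd.DataFrame(
    [pos_distribution(t) for t in transcripts["text"]]
).fillna(0.0)
features["label"] = transcripts["label"].values

# PyCaret is named in the thesis for model comparison; this basic
# setup / compare_models call is one plausible way to invoke it.
from pycaret.classification import compare_models, setup

setup(data=features, target="label", session_id=42)
best_model = compare_models()
print(best_model)
```

In the second stage described in the abstract, these POS-based features would instead be replaced by contextual representations such as Word2Vec document vectors or BERT embeddings feeding a neural classifier.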
Files in This Item:
| File | Size | Format |
|---|---|---|
| U0001-1609202123562900.pdf (access restricted to NTU campus IPs; off-campus users should connect via the NTU VPN service) | 1.99 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
