Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83091

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 賴飛羆 | zh_TW |
| dc.contributor.advisor | Feipei Lai | en |
| dc.contributor.author | 陳彥斌 | zh_TW |
| dc.contributor.author | Yen-Pin Chen | en |
| dc.date.accessioned | 2023-01-08T17:01:12Z | - |
| dc.date.available | 2023-11-09 | - |
| dc.date.copyright | 2023-01-06 | - |
| dc.date.issued | 2022 | - |
| dc.date.submitted | 2022-12-19 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83091 | - |
| dc.description.abstract | 醫療需求已隨著世界人口的上升而全球性的增加,加重了現有的醫療資源與醫療人力的負擔。隨著人類文明的發展,各種發明逐步的降低人類的勞力負擔並增加人類的生活品質,科技的進步已讓我們過著眾多科技圍繞與輔助的生活,然而醫療照護目前仍屬於高勞力密集的服務,在醫療需求增加的情況下,現有的醫療人力正處於不敷所需的邊界,若我們能如同工業革命般利用機器來輔助人類的醫療服務,或許能舒緩甚至解決醫療系統如此緊繃的情況。本研究專注於使用醫療資料庫內保藏的大量醫療經驗來輔助臨床醫師並期望能增加臨床人員的工作效率,透過建立文字摘要系統加速臨床人員掌握醫療紀錄中的關鍵資訊,並發展出一套模式將病人資訊轉變為電腦可分析的數據向量,其可用於醫療的事件預測、疾病推估、相似病情模式的檢索。隨著深度學習與醫療資訊科技的發展,醫療服務系統必將有嶄新的一面。 | zh_TW |
| dc.description.abstract | Technology now pervades daily life and helps us dramatically; healthcare, however, remains one of the areas it has barely reached, and it may be time for that to change. The increasing demand for healthcare is a worldwide issue and a driver of emergency department crowding. Labor-intensive healthcare services face high-intensity work pressure, and occupational burnout has become increasingly prominent. Following the concept of the Industrial Revolution, we can use machines to augment physicians' abilities by leveraging the medical experience stored in electronic health records. This study implemented a medical text summarization method that helps clinicians notice the critical words in lengthy records, and established a medical concept embedding method that supports downstream event prediction and concept retrieval. Our proposed methods achieved the highest performance among the compared approaches and have practical clinical applications. Deep learning methods will improve clinicians' efficiency and help healthcare services turn a new page. (Portions of this dissertation have been published in two journals, and I have obtained permission from JMIR for the copyright of my articles in those journals.) | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-01-08T17:01:12Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2023-01-08T17:01:12Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Oral Defense Committee Certification I
Acknowledgements II
Chinese Abstract III
ABSTRACT IV
CONTENTS V
LIST OF FIGURES IX
LIST OF TABLES XI
LIST OF FORMULAS XII
CHAPTER 1 INTRODUCTION 1
1.1 BACKGROUND 1
Unmet Need in the Emergency Department of Medical Center 2
1.2 RELATED WORKS 3
1.2.1 Summarization 4
1.2.2 Patient to Vector 5
1.3 OBJECTIVE 6
CHAPTER 2 METHODOLOGY 7
2.1 MATERIALS 7
2.2 METHOD OF EXTRACTIVE SUMMARIZATION ON MEDICAL FREE TEXT NOTE 9
2.2.1 Data Preprocessing 9
2.2.2 Data Labelling for Extractive Summarization 10
2.2.3 Data Augmentation 12
2.2.4 Tokenization 13
2.2.4.1 The Level of Token Size 13
2.2.4.2 Token Embedding 14
2.2.5 Model Architecture 15
2.2.5.1 Language Model 15
2.2.5.2 Proposed Model for Extractive Summarization 16
2.2.5.2.1 Transformer-based Extractive Summarization Model 17
2.2.5.2.2 LSTM-based Extractive Summarization Model 23
2.2.6 Optimization Algorithm 24
2.2.7 Cleanup Method 25
2.2.8 Evaluation 26
2.2.8.1 Recall-Oriented Understudy for Gisting Evaluation (ROUGE) 26
2.2.8.2 Questionnaire Survey 27
2.2.8.3 Statistical Analysis 28
2.3 METHOD OF CONCEPT EMBEDDING ON ELECTRONIC HEALTH RECORD 29
2.3.1 Data Preprocessing 29
2.3.2 Model Architecture 32
2.3.2.1 Contrastive Learning 32
2.3.2.2 Proposed Model for Patient Embedding 33
2.3.2.2.1 Language Model for Patient Embedding 33
2.3.2.2.2 Self-supervised Methods for Patient Embedding 34
2.3.3 Optimization Algorithm 42
2.3.4 Evaluation 42
CHAPTER 3 RESULTS 44
3.1 EXTRACTIVE SUMMARIZATION ON MEDICAL FREE TEXT NOTE 44
3.1.1 Corpus Details 44
3.1.2 The Performance on the Extractive Summarization 44
3.1.2.1 Area Under the Receiver Operating Characteristic (AUROC) 44
3.1.2.2 ROUGE Scores 45
3.1.2.3 Questionnaire Survey 47
3.1.3 Language Model Training Process 48
3.1.4 Effect of the Cleanup and Data Augmentation for Summarization 50
3.1.5 Implementation of Extractive Summarization Web Service 51
3.2 THE PERFORMANCE ON THE PATIENT EMBEDDING 53
3.2.1 Visualization 53
3.2.2 Questionnaire Survey on the Patient Retrieval 55
3.2.3 Events Prediction Application 58
CHAPTER 4 DISCUSSION 60
4.1 THE PRINCIPAL FINDINGS 60
4.1.1 Language Model and Extractive Summarization 60
Corpus Differences 66
4.1.2 Concept Embedding on Electronic Health Record 66
4.2 COMPARISON WITH PRIOR WORK 69
4.3 LIMITATIONS 71
CHAPTER 5 CONCLUSIONS 72
REFERENCES 74 | - |
| dc.language.iso | en | - |
| dc.title | 電子化醫療紀錄之信息萃取方法、應用及效能驗證 | zh_TW |
| dc.title | Information extraction methods, application, and performance evaluation of electronic health records | en |
| dc.title.alternative | Information extraction methods, application, and performance evaluation of electronic health records | - |
| dc.type | Thesis | - |
| dc.date.schoolyear | 111-1 | - |
| dc.description.degree | 博士 (Doctoral) | - |
| dc.contributor.oralexamcommittee | 李源德;何弘能;陳文鍾;趙坤茂;林永松;李金鳳;陳縕儂;阮聖彰;王裕仁 | zh_TW |
| dc.contributor.oralexamcommittee | Yuan-Teh Lee;Hong-nerng Ho;Wen-jone Chen;Kun-Mao Chao;Yeong-Sung Lin;Chin-Feng Lee;Yun-Nung Chen;Shanq-Jang Ruan;Yu-Jen Wang | en |
| dc.subject.keyword | 醫療資料庫,深度學習,醫療服務,摘要,數據化病人 | zh_TW |
| dc.subject.keyword | Deep learning,EHRs,Healthcare service,Summary,Medical information retrieval,Medical concept embedding | en |
| dc.relation.page | 82 | - |
| dc.identifier.doi | 10.6342/NTU202210141 | - |
| dc.rights.note | 同意授權 (authorized for worldwide open access) | - |
| dc.date.accepted | 2022-12-21 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 生醫電子與資訊學研究所 | - |
| Appears in Collections: | 生醫電子與資訊學研究所 | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| U0001-1059221216105029.pdf | 5.5 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
