NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80988
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 陳信希 (Hsin-Hsi Chen) | -
dc.contributor.author | Yu-Ting Lin | en
dc.contributor.author | 林禹廷 | zh_TW
dc.date.accessioned | 2022-11-24T03:25:02Z | -
dc.date.available | 2021-09-11 | -
dc.date.available | 2022-11-24T03:25:02Z | -
dc.date.copyright | 2021-09-11 | -
dc.date.issued | 2021 | -
dc.date.submitted | 2021-09-03 | -
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80988 | -
dc.description.abstract | In real clinical settings, disease classification happens at two points in time: when a patient is first admitted and when the patient is discharged, yet previous studies have largely ignored the former. On admission, physicians assess the patient's current condition to decide which examinations to order and which treatment plan to follow. On discharge, physicians or professional medical coders assign International Classification of Diseases (ICD) codes based on the entire medical record. This study takes both scenarios into account. In addition, previous work aggregated the text feature vectors extracted by the model with per-label attention, but such a single mechanism struggles with a multi-label task as large as MIMIC-III, which has 8,921 classes. We experimented with recently popular pre-trained models as encoders and found that self-attention models and convolutional neural networks complement each other. On the decoder side, after trying both graph-based and non-graph decoders, our experiments confirm that our non-graph label-attention decoder combined with the convolutional encoder achieves the best results both on admission and on discharge. Finally, we also find that the model developed on the original MIMIC data surpasses human performance on our annotated National Taiwan University Hospital emergency room dataset. | zh_TW
dc.description.provenance | Made available in DSpace on 2022-11-24T03:25:02Z (GMT). No. of bitstreams: 1. U0001-0209202117194400.pdf: 2799353 bytes, checksum: fe3709eebb34411bf0afed27442051f0 (MD5). Previous issue date: 2021 | en
dc.description.tableofcontents | 口試委員會審定書; 誌謝; 摘要; Abstract; 1 Introduction; 2 Related Work (2.1 Disease Classification; 2.2 Deep Learning: 2.2.1 Convolution Neural Network, 2.2.2 Attention Mechanism; 2.3 Medical Dataset); 3 Datasets (3.1 MIMIC I; 3.2 MIMIC II; 3.3 MIMIC III; 3.4 MIMIC IV; 3.5 MIMIC Impression; 3.6 NTUHER: National Taiwan University Emergency Room Dataset: 3.6.1 Motivation, 3.6.2 Data Collection, 3.6.3 Annotation System, 3.6.4 Statistics); 4 Methodology (4.1 Preprocessing and Word Embedding; 4.2 Models Able to Read Long Text: 4.2.1 BERT, 4.2.2 Sliding Window BERT, 4.2.3 Sliding Window BERT with Overlaps, 4.2.4 Longformer, 4.2.5 Globalized Multi-Residual CNN; 4.3 Customized Decoding Method: 4.3.1 Labelwise Attention, 4.3.2 Node2Vec, 4.3.3 Graph Aggregation, 4.3.4 Multi-Head Label Decoding; 4.4 Multi-Head Label Decoding Module; 4.5 Word Correction Models; 4.6 Loss Function: 4.6.1 BCE Loss); 5 Experiments (5.1 Important Sections of the Dataset; 5.2 On-Discharge Disease Classification: 5.2.1 Encoders, 5.2.2 Decoders, 5.2.3 Final Model; 5.3 On-Admission Disease Classification; 5.4 NTUHER Classification: 5.4.1 Form Embed, 5.4.2 Effect of Positive Words); 6 Discussion (6.1 Important Words; 6.2 Masked Attention: 6.2.1 Model Description, 6.2.2 Results of MAT; 6.3 Additional Feature); 7 Conclusion; Bibliography | -
dc.language.iso | en | -
dc.subject | 殘差網路 | zh_TW
dc.subject | 國際疾病分類 | zh_TW
dc.subject | 自注意力模型 | zh_TW
dc.subject | 卷積神經網路 | zh_TW
dc.subject | 電子病歷 | zh_TW
dc.subject | Convolutional Neural Network | en
dc.subject | Electronic Health Record | en
dc.subject | International Classification of Diseases | en
dc.subject | Residual Block | en
dc.subject | Self Attention Model | en
dc.title | 使用多頭注意力標籤解碼加強模型研究入院與出院之疾病分類 | zh_TW
dc.title | Disease Classification on Admission and on Discharge with a Model Enhanced by Multi-Head Label Decoding | en
dc.date.schoolyear | 109-2 | -
dc.description.degree | 碩士 (Master) | -
dc.contributor.oralexamcommittee | 鄭卜壬 (Pu-Jen Cheng), 蔡宗翰 (Tzong-Han Tsai), 古倫維, 蔡銘峰 | -
dc.subject.keyword | 國際疾病分類, 自注意力模型, 卷積神經網路, 電子病歷, 殘差網路 | zh_TW
dc.subject.keyword | International Classification of Diseases, Self Attention Model, Convolutional Neural Network, Electronic Health Record, Residual Block | en
dc.relation.page | 46 | -
dc.identifier.doi | 10.6342/NTU202102964 | -
dc.rights.note | Authorization granted (access restricted to campus network) | -
dc.date.accepted | 2021-09-06 | -
dc.contributor.author-college | 電機資訊學院 (College of Electrical Engineering and Computer Science) | zh_TW
dc.contributor.author-dept | 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) | zh_TW
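The abstract above describes pooling encoder outputs with per-label (label-wise) attention and training against thousands of ICD codes with a binary cross-entropy objective. The following PyTorch sketch is a rough illustration only of how such a label-wise attention head and multi-label BCE loss typically fit together; the class name, dimensions, and the 8,921-label count are illustrative assumptions, not the thesis implementation, which additionally uses a convolutional encoder and multi-head label decoding.

```python
import torch
import torch.nn as nn

class LabelwiseAttention(nn.Module):
    """Per-label attention: every label owns a query vector that pools the
    encoder's token representations into a label-specific document vector.
    All names and sizes here are illustrative assumptions."""

    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        # one attention query per label: (num_labels, hidden_dim)
        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden_dim))
        # one shared binary scorer applied to each label-specific vector
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, token_repr: torch.Tensor) -> torch.Tensor:
        # token_repr: (batch, seq_len, hidden_dim), e.g. CNN or self-attention output
        scores = torch.einsum("bsh,lh->bls", token_repr, self.label_queries)
        weights = torch.softmax(scores, dim=-1)              # per-label attention over tokens
        label_vecs = torch.einsum("bls,bsh->blh", weights, token_repr)
        return self.scorer(label_vecs).squeeze(-1)           # logits: (batch, num_labels)

# Multi-label training with binary cross-entropy, as in ICD coding setups.
decoder = LabelwiseAttention(hidden_dim=256, num_labels=8921)
dummy_tokens = torch.randn(2, 512, 256)                      # stand-in for encoder output
dummy_labels = torch.randint(0, 2, (2, 8921)).float()
loss = nn.BCEWithLogitsLoss()(decoder(dummy_tokens), dummy_labels)
```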
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File | Size | Format
U0001-0209202117194400.pdf | 2.73 MB | Adobe PDF
(Access restricted to NTU campus IP addresses; off-campus users should connect through the VPN service.)