Please use this Handle URI to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96331
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 曹昱 | zh_TW
dc.contributor.advisor | Yu Tsao | en
dc.contributor.author | 陳在民 | zh_TW
dc.contributor.author | Tsai-Min Chen | en
dc.date.accessioned | 2024-12-24T16:23:26Z | -
dc.date.available | 2024-12-25 | -
dc.date.copyright | 2024-12-24 | -
dc.date.issued | 2024 | -
dc.date.submitted | 2024-11-13 | -
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96331 | -
dc.description.abstract | 人工智慧在提升心電圖心律不整分類效能上已變得至關重要。本研究利用2018年中國生理信號挑戰賽提供的包含6,877筆記錄的大型12導聯心電圖資料集,結合卷積神經網路和循環神經網路(CNN+RNN)開發出一個人工智慧模型。該模型在2,954筆心電圖隱藏測試集上的九類心律不整分類中,達到了0.84的整體F1-score中位數,展現了其在心律不整檢測中的卓越效能。進一步分析顯示,該模型能有效預測具有多種心律不整診斷患者之併存心律不整。值得注意的是,使用單導聯資料時,模型性能僅輕微下降,其中aVR和V1導聯最具資訊價值。
為解決便攜式/穿戴式裝置在長期心電圖監測中,因電池壽命與傳輸頻寬限制而採用低解析度信號所造成的準確性下降,我提出了一個基於深度學習的心電圖信號超解析框架(SRECG)。SRECG能增強低解析度心電圖信號,以提升其輸入高解析度訊號多分類器(HMC)進行心律不整分類時的準確性。實驗結果顯示,與傳統的插值方法相比,SRECG顯著提高了HMC的心律不整分類準確性,證實了SRECG在增強來自便攜式/穿戴式裝置的低解析度心電圖信號以改善基於雲端的HMC性能方面的可行性。
此外,我研究了最初為心電圖心律不整分類設計的模型在其他生理信號中的多模態應用潛力,尤其是結合語音信號、人口統計資料和結構化醫療記錄以檢測聲門腫瘤。在從良性聲音障礙檢測聲門腫瘤的應用中,我的模型取得了顯著的準確性,展現了人工智慧方法學在跨不同生理信號診斷上的潛力。
總結而言,人工智慧技術,尤其是深度學習模型,在心電圖分析和心律不整分類方面取得了重大進展。強大的CNN+RNN模型、SRECG等信號增強技術,以及人工智慧模型在其他生理信號中的多模態適應性,展示了人工智慧在廣泛的醫學診斷領域中的應用潛力。
zh_TW
dc.description.abstract | Artificial intelligence (AI) has become crucial in enhancing electrocardiography (ECG) classification for cardiac arrhythmia (CA). Leveraging a large 12-lead ECG dataset of 6,877 records from the 2018 China Physiological Signal Challenge (CPSC2018), I developed an AI model combining convolutional and recurrent neural networks (CNN+RNN). This model achieved a median overall F1-score of 0.84 across nine CA categories on a hidden test set of 2,954 ECG records, demonstrating strong performance in CA detection. Further analysis showed that the model effectively predicted coexisting CAs in patients with multiple CA diagnoses. Notably, performance decreased only slightly when single-lead data were used instead of the full 12 leads, with the aVR and V1 leads providing the most informative signals.
To address the accuracy loss caused by the low-resolution signals that portable/wearable (P/W) devices must use for long-term ECG monitoring under battery and transmission-bandwidth constraints, I proposed a deep learning (DL)-based ECG super-resolution framework (SRECG). SRECG enhances low-resolution ECG signals so that CA classification accuracy is maintained when the enhanced signals are fed to a high-resolution multiclass classifier (HMC). Experimental results showed that SRECG significantly improved HMC accuracy over traditional interpolation methods, confirming its feasibility for enhancing low-resolution ECG signals from P/W devices and thereby improving cloud-based HMC performance.
Additionally, I investigated the adaptability of the model, initially designed for ECG CA classification, to other physiological signals. Specifically, I explored its multimodal potential for detecting glottic tumors among benign vocal disorders by integrating voice signals, demographic data, and structured medical records. My model achieved high accuracy, highlighting the potential of AI methodologies to transfer across different physiological signals for diagnostic purposes.
In summary, AI, and deep learning models in particular, has achieved significant advances in ECG analysis and CA classification. The combination of powerful CNN+RNN models, signal enhancement techniques such as SRECG, and adaptability to other physiological signals underscores AI's transformative potential for broader medical diagnostics.
en
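To make the first approach summarized in the abstract above concrete, below is a minimal, illustrative Python sketch (TensorFlow/Keras and scikit-learn) of a CNN+RNN multi-label classifier for 12-lead ECG segments, together with a median-of-per-class-F1 evaluation. The segment length, layer sizes, training setup, and the median-F1 reading of the reported metric are assumptions for illustration only, not the thesis model.

# Minimal illustrative sketch (not the thesis model): a CNN+RNN multi-label
# classifier for 12-lead ECG segments with a median-F1 style evaluation.
# Segment length, layer sizes, and training details are assumptions.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.metrics import f1_score

N_LEADS, SEG_LEN, N_CLASSES = 12, 3000, 9   # e.g. 6 s at 500 Hz (assumed)

def build_cnn_rnn():
    inp = layers.Input(shape=(SEG_LEN, N_LEADS))
    x = inp
    for filters in (32, 64, 128):            # CNN front end captures local morphology
        x = layers.Conv1D(filters, 16, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling1D(4)(x)
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)  # RNN captures rhythm context
    x = layers.GlobalAveragePooling1D()(x)
    out = layers.Dense(N_CLASSES, activation="sigmoid")(x)  # sigmoid: arrhythmias may coexist
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

def median_f1(y_true, y_prob, threshold=0.5):
    """Median of the per-class F1 scores over the nine arrhythmia categories."""
    y_pred = (y_prob >= threshold).astype(int)
    per_class = f1_score(y_true, y_pred, average=None, zero_division=0)
    return float(np.median(per_class))

if __name__ == "__main__":
    model = build_cnn_rnn()
    x = np.random.randn(8, SEG_LEN, N_LEADS).astype("float32")   # toy data only
    y = np.random.randint(0, 2, size=(8, N_CLASSES))
    model.fit(x, y, epochs=1, verbose=0)
    print("median F1 on toy data:", median_f1(y, model.predict(x, verbose=0)))

The sigmoid output with per-class thresholding reflects the multi-label setting described in the abstract, in which a single recording can carry several coexisting arrhythmia diagnoses.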
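Similarly, a hedged sketch of the idea behind an ECG super-resolution front end such as SRECG: a small learned up-sampler reconstructs a higher-rate signal from low-rate P/W input, with per-lead cubic-spline interpolation as the traditional baseline it is compared against. The network shape, the 4x scale factor, and the reconstruction-only loss are assumptions; the thesis additionally studies joint training losses that couple the up-sampler to the downstream classifier.

# Illustrative sketch of a learned ECG up-sampler versus a spline baseline.
# Architecture, scale factor, and loss are assumptions, not the SRECG design.
import numpy as np
from scipy.interpolate import CubicSpline
from tensorflow.keras import layers, models

SCALE = 4                       # e.g. 125 Hz -> 500 Hz (assumed)
LOW_LEN, N_LEADS = 750, 12

def spline_upsample(x_low, scale=SCALE):
    """Baseline: per-lead cubic-spline interpolation onto the high-rate grid."""
    t_low = np.arange(x_low.shape[0])
    t_high = np.linspace(0, x_low.shape[0] - 1, x_low.shape[0] * scale)
    return np.stack([CubicSpline(t_low, x_low[:, ch])(t_high)
                     for ch in range(x_low.shape[1])], axis=-1)

def build_sr_net(scale=SCALE):
    """Learned up-sampler: Conv1D feature extraction + UpSampling1D reconstruction."""
    inp = layers.Input(shape=(LOW_LEN, N_LEADS))
    x = layers.Conv1D(64, 9, padding="same", activation="relu")(inp)
    x = layers.Conv1D(64, 9, padding="same", activation="relu")(x)
    x = layers.UpSampling1D(scale)(x)                      # nearest-neighbour expansion
    x = layers.Conv1D(64, 9, padding="same", activation="relu")(x)
    out = layers.Conv1D(N_LEADS, 9, padding="same")(x)     # reconstructed high-rate ECG
    model = models.Model(inp, out)
    # Reconstruction loss only; a joint loss would add the downstream
    # classifier's error on the reconstructed signal (assumption).
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    low = np.random.randn(LOW_LEN, N_LEADS).astype("float32")    # toy signal only
    high_spline = spline_upsample(low)
    high_learned = build_sr_net().predict(low[None, ...], verbose=0)[0]
    print(high_spline.shape, high_learned.shape)                 # both (3000, 12)

In the setting the abstract describes, the reconstructed signal would then be passed to the cloud-based high-resolution multiclass classifier (HMC) and accuracy compared against the interpolation baseline.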
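Finally, a minimal sketch of a multimodal classifier in the spirit of the third study: a CNN branch over a voice spectrogram fused by late concatenation with a dense branch over demographics and structured medical records, producing a glottic-tumor-versus-benign prediction. Feature shapes, layer sizes, and the late-fusion design are assumptions, not the thesis architecture.

# Illustrative late-fusion multimodal classifier (voice + tabular data).
# All dimensions and the fusion strategy are assumptions.
import numpy as np
from tensorflow.keras import layers, models

N_FRAMES, N_MELS, N_TABULAR = 200, 80, 16    # assumed feature dimensions

def build_multimodal():
    spec_in = layers.Input(shape=(N_FRAMES, N_MELS, 1), name="voice_spectrogram")
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(spec_in)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)

    tab_in = layers.Input(shape=(N_TABULAR,), name="demographics_and_records")
    t = layers.Dense(32, activation="relu")(tab_in)

    fused = layers.Concatenate()([x, t])                 # late fusion of the two modalities
    fused = layers.Dense(32, activation="relu")(fused)
    out = layers.Dense(1, activation="sigmoid")(fused)   # glottic tumor vs benign disorder
    model = models.Model([spec_in, tab_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_multimodal()
    spec = np.random.randn(4, N_FRAMES, N_MELS, 1).astype("float32")   # toy inputs only
    tab = np.random.randn(4, N_TABULAR).astype("float32")
    y = np.random.randint(0, 2, size=(4, 1))
    model.fit([spec, tab], y, epochs=1, verbose=0)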
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-12-24T16:23:26Z; No. of bitstreams: 0 | en
dc.description.provenance | Made available in DSpace on 2024-12-24T16:23:26Z (GMT). No. of bitstreams: 0 | en
dc.description.tableofcontents | Oral Examination Committee Certification i
Acknowledgements ii
Chinese Abstract iii
ABSTRACT iv
CONTENTS vi
LIST OF FIGURES viii
LIST OF TABLES ix
Chapter 1 ECG Signal Recognition 1
1.1 Introduction 1
1.2 Methods 4
Data acquisition 4
Proposed ECG model architecture 5
Performance evaluation 7
Ensemble and optimization 10
1.3 Results 11
Best validation models on 10-fold tests and ensemble model on hidden test 11
Concurrent CA types 13
Model performances with single-lead ECG 16
Comparison between human assessments and AI 20
1.4 Discussion 22
1.5 Conclusion 23
Chapter 2 ECG Signal Reconstruction 25
2.1 Introduction 25
2.2 Methods 31
Data acquisition 31
Benchmark ECG model architecture 32
Performance evaluation 34
Traditional spline interpolation for up-sampling 34
Proposed SRECG framework 35
2.3 Results 40
Effects of sampling frequency on CA classification 40
Effects of traditional spline interpolation methods 41
Effects of joint training losses in SRECG 44
2.4 Discussion 47
2.5 Conclusion 50
Chapter 3 Multimodality Adaptation 51
3.1 Introduction 51
3.2 Methods 53
Data acquisition 53
Proposed multimodal model architecture 57
Performance evaluation 59
3.3 Results 61
Internal validation on FVD 61
External validation on SVD 65
Intrinsic validation on CPSC2018 ECG training set 67
Comparison between human assessments and AI 68
3.4 Discussion 70
3.5 Conclusion 74
REFERENCE 75
-
dc.language.iso | en | -
dc.subject | 人工智慧 | zh_TW
dc.subject | 多模態 | zh_TW
dc.subject | 心電圖 | zh_TW
dc.subject | 心律不整 | zh_TW
dc.subject | 深度學習 | zh_TW
dc.subject | 信號增強 | zh_TW
dc.subject | 穿戴式裝置 | zh_TW
dc.subject | Multimodality | en
dc.subject | Artificial intelligence | en
dc.subject | Electrocardiography | en
dc.subject | Cardiac arrhythmia | en
dc.subject | Deep learning | en
dc.subject | Signal enhancement | en
dc.subject | Wearable device | en
dc.title | 人工智慧在心電圖的心律不整分類 | zh_TW
dc.title | Artificial Intelligence in Electrocardiography for Cardiac Arrhythmia Classification | en
dc.type | Thesis | -
dc.date.schoolyear | 113-1 | -
dc.description.degree | 博士 | -
dc.contributor.coadvisor | 沈俊嚴 | zh_TW
dc.contributor.coadvisor | Chun-Yen Shen | en
dc.contributor.oralexamcommittee | 王棨德;方士豪;王曉嵐;謝伯讓 | zh_TW
dc.contributor.oralexamcommittee | Chi-Te Wang;Shih-Hau Fang;Hsiao-Lan Wang;Po-Jang Hsieh | en
dc.subject.keyword | 人工智慧,心電圖,心律不整,深度學習,信號增強,穿戴式裝置,多模態 | zh_TW
dc.subject.keyword | Artificial intelligence,Electrocardiography,Cardiac arrhythmia,Deep learning,Signal enhancement,Wearable device,Multimodality | en
dc.relation.page | 88 | -
dc.identifier.doi | 10.6342/NTU202404565 | -
dc.rights.note | 同意授權(全球公開) | -
dc.date.accepted | 2024-11-14 | -
dc.contributor.author-college | 電機資訊學院 | -
dc.contributor.author-dept | 資料科學學位學程 | -
Appears in Collections: 資料科學學位學程

Files in This Item:
File | Size | Format
ntu-113-1.pdf | 4.76 MB | Adobe PDF