Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73697
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 林啟萬(Chii-Wann Lin) | |
dc.contributor.author | Nai-Yun Tung | en |
dc.contributor.author | 董乃昀 | zh_TW |
dc.date.accessioned | 2021-06-17T08:08:16Z | - |
dc.date.available | 2024-08-20 | |
dc.date.copyright | 2019-08-20 | |
dc.date.issued | 2019 | |
dc.date.submitted | 2019-08-18 | |
dc.identifier.citation | [1] V. N. Oliynik (2013). "Determining the amplitude-frequency response for electronic stethoscope 3M Littmann 3200." Acoust. Bul. (Akustychny Visnyk), vol. 16, no. 3, pp. 46–57, 2013–2014.
[2] V. N. Oliynik (2015, April). On potential effectiveness of integration of 3M Littmann 3200 electronic stethoscopes into the third-party diagnostic systems with auscultation signal processing. 2015 IEEE 35th International Conference on Electronics and Nanotechnology, Ukraine.
[3] R. L. Watrous et al. (2002). Methods and results in characterizing electronic stethoscopes. Computers in Cardiology. DOI: 10.1109/CIC.2002.1166857
[4] Inan Güler, Hüseyin Polat, and Uçman Ergün (2015). Combining neural network and genetic algorithm for prediction of lung sounds. Journal of Medical Systems, vol. 29, no. 3.
[5] Karol J. Piczak (2015). Environmental sound classification with convolutional neural networks. IEEE International Workshop on Machine Learning for Signal Processing, USA.
[6] Justin Salamon and Juan Pablo Bello (2017). Deep convolutional neural networks and data augmentation for environmental sound classification. IEEE Signal Processing Letters.
[7] Roger Jang (2005). Audio Signal Processing and Recognition. Retrieved from http://mirlab.org/jang/books/audioSignalProcessing/ (May 2019).
[8] Paul Y. Ertel et al. (1966). Stethoscope acoustics.
[9] José Semedo et al. (2015). Computerised lung auscultation – sound software. Procedia Computer Science, vol. 64.
[10] João Dinis et al. (2013). Respiratory sound annotation software.
[11] Pulmonary atelectasis manifested after induction of anesthesia: a contribution of sinobronchial syndrome? J Anesth 2007;21(1):66–68. DOI: 10.1007/s00540-006-0451-4
[12] E. S. Ford et al. (2013). Trends in the prevalence of obstructive and restrictive lung function among adults in the United States: findings from the National Health and Nutrition Examination Surveys from 1988–1994 to 2007–2010. Chest (2013, May). DOI: 10.1378/chest.12-1135
[13] Undiagnosed obstructive lung disease in the United States: associated factors and long-term mortality. Ann Am Thorac Soc (2015, Dec);12(12):1788–1795. DOI: 10.1513/AnnalsATS.201506-388OC
[14] L. J. Akinbami and X. Liu (2011). Chronic obstructive pulmonary disease among adults aged 18 and over in the United States, 1998–2009. NCHS Data Brief.
[15] Lemuel R. Waitman (2000). Representation and classification of breath sounds recorded in an intensive care setting using neural networks. J Clin Monit Comput 2000;16(2):95–105.
[16] Amjad Hashemi (2011). Classification of wheeze sounds using wavelets and neural networks. International Conference on Biomedical Engineering and Technology, Singapore.
[17] P. B. Gadge and S. V. Rode (2016). Automatic wheeze detection system as symptoms of asthma using spectral power analysis. Journal of Bioengineering and Biomedical Science.
[18] S. Rietveld, M. Oud, and E. H. Dooijes (1999). Classification of asthmatic breath sounds: preliminary results of the classifying capacity of human examiners versus artificial neural networks. Computers and Biomedical Research 32, 440–448.
[19] OpenStax (2013). Anatomy and Physiology. Texas: Rice University.
[20] Fredric J. Harris (1976). Windows, Harmonic Analysis and the Discrete Fourier Transform. San Diego: Naval Undersea Center.
[21] David William (2017). Understanding, calculating, and measuring total harmonic distortion (THD). Retrieved from https://www.allaboutcircuits.com/technical-articles/the-importance-of-total-harmonic-distortion/ (March 2019).
[22] Lindasalwa Muda, Mumtaj Begam, and I. Elamvazuthi (2010). Voice recognition algorithms using Mel frequency cepstral coefficient (MFCC) and dynamic time warping (DTW) techniques. Journal of Computing, vol. 2, issue 3, March 2010, ISSN 2151-9617.
[23] NVIDIA Developer (2019). Convolutional neural networks (CNN). Retrieved from https://developer.nvidia.com/discover/convolutional-neural-network (March 2019).
[24] Michael Nielsen (2019). Why are deep neural networks hard to train? Retrieved from http://neuralnetworksanddeeplearning.com/chap5.html (March 2019).
[25] 胡志明 (1998). Lung sound acquisition system and wheezing analysis of asthma [in Chinese]. Master's thesis, Graduate Institute of Electrical Engineering, National Taiwan University.
[26] 陳冠宏 (2006). A study on automatic recognition of multiple lung sounds [in Chinese]. Master's thesis, Graduate Institute of Bio-Industrial Mechatronics Engineering, National Taiwan University.
[27] 曾安慈 (2018). Recognition of pathological features in lung sounds based on support vector machines [in Chinese]. Master's thesis, Department of Engineering Science and Ocean Engineering, National Taiwan University.
[28] Mike Chen (2018). Concepts and practice of automatic speech recognition [in Chinese]. Retrieved from https://ithelp.ithome.com.tw/articles/10195970 (March 2019).
[29] Theneo (2018). Machine Learning TensorFlow Exercise 5 [in Chinese]. Retrieved from https://ithelp.ithome.com.tw/articles/10208235?sc=iThelpR (March 2019). | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73697 | - |
dc.description.abstract | 本研究主旨在建立可量化聽診器收音品質的聲學測試系統,以及臨床呼吸異常音識別演算法。聽診器是醫學診斷上不可或缺的工具,電子聽診器的問世解決了聽診聲音無法收集、判斷只能仰賴資深醫療人員的問題。使用電子聽診器可讓錄製後的聲音訊號儲存在電腦,供進一步的分析與醫療決策之用,如何量化收音品質,提高電子聽診器的可靠度便成為重要的議題。
目前需要一套完整且科學化的聲學測試系統,來測試電子聽診器收音訊號的品質,本研究利用3M電子聽診器及連續式聽診貼片於無響室錄音,藉由播放單頻訊號並分析其頻率響應與諧波失真率,建立聲學特性量測系統;再透過臨床試驗,實際收集加護病房及呼吸照護病房病患之呼吸聲音。最後,利用機器學習方法建構一套呼吸音識別演算法,其中包含梅爾倒頻譜的特徵擷取、並比較三種類神經網路分類模型,用以自動判斷正常及異常呼吸音,增加臨床實際應用價值。 本研究中,聲學測試系統能有效地評估電子聽診器之收音品質、區分3M電子聽診器和連續聽診貼片之間的收音質量。結果顯示,3M電子聽診器因其構造及錄音軟體的既有設定性質,導致錄製後的聲音因為其頻率響應非直線穩定、分類模型較難提取特定的頻率特徵而降低辨識正確率。最後,我們的臨床呼吸音分類模型具有93%的總體準確度,可以在醫院中自動識別不同的呼吸聲並幫助醫生進行診斷。 未來將使用此聲學測試系統應用在聽診器開發與評估上,並持續優化臨床呼吸音識別演算法,納入更多臨床呼吸異常音種類,並建構一個可以連續監控、即時警示的遠距智慧呼吸照護系統。 | zh_TW |
dc.description.abstract | The stethoscope is an indispensable tool in medical diagnosis. Electronic stethoscopes solve two problems of the traditional stethoscope: auscultated sounds cannot be recorded and stored, and diagnosis depends entirely on experienced medical professionals. With electronic stethoscopes, sound signals can be transmitted to computers for further analysis, so quantifying the recording quality of electronic stethoscopes has become an important issue.
The aim of this study is to establish an acoustic testing system that quantifies the recording quality of electronic stethoscopes, and to develop a clinical respiratory sound classification algorithm. At present there is no complete, scientific standard for evaluating stethoscopes. In this study, single-frequency tones were played using the frequency-sweep method and recorded by a 3M electronic stethoscope and by continuous auscultation patches in an anechoic chamber; the frequency response and total harmonic distortion were then calculated to quantify their acoustic properties, establishing the acoustic testing system. Through clinical trials in hospital intensive care units (ICU) and respiratory care wards (RCW), patients' respiratory sounds were collected. A machine learning pipeline was then built to recognize and classify respiratory sounds, comprising MFCC feature extraction and a comparison of three types of neural network models, so that normal and abnormal respiratory sounds can be recognized automatically. The acoustic testing system proved effective in evaluating electronic stethoscopes, distinguishing the recording quality of the 3M electronic stethoscope from that of the continuous auscultation patches. The results show that, owing to the 3M stethoscope's structure and recording-software settings, its recordings have an unstable frequency response, which prevents the classification model from extracting specific frequency features and lowers recognition accuracy; this difference in recording quality correlates with classification performance. Finally, our classification model achieves an overall accuracy of 93%, so it can be used in hospitals to automatically recognize different respiratory sounds and assist physicians in diagnosis. In the future, the acoustic testing system will be used in developing and evaluating electronic stethoscopes.
In addition, the respiratory sound recognition algorithm will be trained further to cover more types of clinically abnormal sounds, building toward a continuously monitoring, real-time alerting remote respiratory care system. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T08:08:16Z (GMT). No. of bitstreams: 1 ntu-108-R06945015-1.pdf: 3463187 bytes, checksum: 37df224356b16126fadd38abb20bca9a (MD5) Previous issue date: 2019 | en |
dc.description.tableofcontents | Thesis Committee Certification i
Acknowledgements ii
Chinese Abstract iii
English Abstract iv
Chapter 1 Introduction 1
1.1 Motivation and Objectives 1
1.2 Literature Review 3
1.3 Thesis Organization 9
Chapter 2 Acoustic System Testing 10
2.1 Basic Acoustic Characteristics 10
2.2 Frequency Response 10
2.3 Fast Fourier Transform 13
2.4 Total Harmonic Distortion 14
2.5 Acoustic Experiments on Stethoscopes 15
Chapter 3 Clinical Respiratory Sound Recognition Algorithm 27
3.1 Physiological Characteristics of Respiratory Sounds 27
3.2 Signal Collection and Preprocessing 31
3.3 Feature Extraction: Mel-Frequency Cepstrum 34
3.4 Neural Network Classification Algorithms 38
3.5 Statistical Methods 42
3.6 Recognition Algorithm Architecture 44
3.7 Building the Respiratory Sound Recognition Model 45
3.8 Results 48
3.9 Discussion 50
Chapter 4 Conclusions and Future Work 52
REFERENCE 54 | |
dc.language.iso | zh-TW | |
dc.title | 聽診器聲學特性分析與臨床呼吸音識別演算法 | zh_TW |
dc.title | Characterization of Stethoscope and Machine Learning Algorithm for Respiratory Sound Classification | en |
dc.type | Thesis | |
dc.date.schoolyear | 107-2 | |
dc.description.degree | 碩士 (Master) | |
dc.contributor.oralexamcommittee | 林致廷,王昭男 | |
dc.subject.keyword | 聽診器,聲學特性,頻率響應,諧波失真,呼吸音,機器學習 | zh_TW |
dc.subject.keyword | Stethoscope, acoustic characteristics, frequency response, harmonic distortion, respiratory sound, machine learning | en |
dc.relation.page | 56 | |
dc.identifier.doi | 10.6342/NTU201903892 | |
dc.rights.note | 有償授權 (paid authorization) | |
dc.date.accepted | 2019-08-18 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 生醫電子與資訊學研究所 | zh_TW |
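The abstract above describes quantifying a stethoscope's recording quality by playing single-frequency test tones and computing the frequency response and total harmonic distortion (THD). As an illustrative sketch only (the thesis's actual analysis code is not part of this record, and the function and parameter names below are hypothetical), the per-tone measurement could be done with NumPy:

```python
import numpy as np

def thd_and_response(recording, fs, f0, n_harmonics=5):
    """Estimate the recorded level at the played tone f0 and the total
    harmonic distortion (THD) of a single-frequency test recording."""
    # Hann window to reduce spectral leakage before the FFT.
    windowed = recording * np.hanning(len(recording))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(recording), d=1.0 / fs)

    def peak_amplitude(f):
        # Strongest bin within +/-2% of the target frequency.
        band = (freqs > 0.98 * f) & (freqs < 1.02 * f)
        return spectrum[band].max() if band.any() else 0.0

    fundamental = peak_amplitude(f0)
    harmonics = [peak_amplitude(k * f0) for k in range(2, n_harmonics + 2)]
    # THD: RMS of the harmonic amplitudes relative to the fundamental.
    thd = np.sqrt(sum(h * h for h in harmonics)) / fundamental
    return fundamental, thd
```

Sweeping `f0` over the test tones and plotting `fundamental` against frequency would give a frequency-response curve, while `thd` per tone quantifies the nonlinear distortion added by the recording chain.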
Appears in Collections: | 生醫電子與資訊學研究所 (Graduate Institute of Biomedical Electronics and Bioinformatics) |
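The classification pipeline described in the abstract extracts Mel-frequency cepstral coefficients (MFCC) before neural-network classification. The following is a minimal textbook MFCC sketch in pure NumPy, not the thesis's implementation; all function names and parameter defaults (frame length, hop size, filter counts) are illustrative assumptions:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def dct2_ortho(x):
    # Orthonormal DCT-II along the last axis (decorrelates log energies).
    n_pts = x.shape[-1]
    n = np.arange(n_pts)
    basis = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * n_pts))
    out = x @ basis.T
    out[..., 0] *= np.sqrt(1.0 / n_pts)
    out[..., 1:] *= np.sqrt(2.0 / n_pts)
    return out

def mfcc(signal, fs, n_filters=26, n_coeffs=13, frame_len=0.025, hop=0.010):
    # 1) Slice into overlapping frames and apply a Hamming window.
    n = int(frame_len * fs)
    step = int(hop * fs)
    frames = np.array([signal[i:i + n] * np.hamming(n)
                       for i in range(0, len(signal) - n + 1, step)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2 / n

    # 2) Triangular mel-spaced filterbank covering 0 Hz .. fs/2.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, power.shape[1]))
    for j in range(n_filters):
        lo, center, hi = bins[j], bins[j + 1], bins[j + 2]
        fbank[j, lo:center] = (np.arange(lo, center) - lo) / max(center - lo, 1)
        fbank[j, center:hi] = (hi - np.arange(center, hi)) / max(hi - center, 1)

    # 3) Log filterbank energies, then DCT; keep the first n_coeffs.
    energies = np.log(power @ fbank.T + 1e-10)
    return dct2_ortho(energies)[:, :n_coeffs]
```

Each row of the returned matrix is the MFCC vector for one 25 ms frame; frame-level vectors like these (or stacks of them) are the kind of features the compared neural-network models would consume.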
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-108-1.pdf (currently not authorized for public access) | 3.38 MB | Adobe PDF |
All items in this system are protected by copyright, with all rights reserved, unless otherwise indicated.