Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93762
| Title: | 應用光學和聲學儀器以深度學習演算法分類降雨強度之研究 Applying Deep Learning Algorithm Combined with Optical and Acoustic Devices to Determine Rainfall Intensity |
| Author: | 錢柏丞 Po-Cheng Chien |
| Advisor: | 何昊哲 Hao-Che Ho |
| Keywords: | Rainfall intensity measurement, Image recognition, Audio recognition, CNN, MFCC, Rainfall estimation, Image classification, Acoustic classification, Deep learning, Convolutional Neural Network |
| Publication Year: | 2024 |
| Degree: | Master's |
| Abstract: | According to the United Nations Office for Disaster Risk Reduction, climate-induced extreme weather events over the past two decades have caused approximately US$2.97 trillion in global economic losses. To mitigate such impacts, real-time hydrological data, particularly rainfall, are essential for water resource management. Traditional rainfall measurement methods suffer from high uncertainty due to limited station coverage, physical obstructions, and image resolution constraints. This study proposes a method that records optical imagery and acoustic signals during rainfall and classifies rainfall intensity with deep learning algorithms. Three experimental setups were used: (1) self-produced artificial-rainfall imagery; (2) an artificial rainfall simulator (1.2 m long, 1.2 m wide, 3.6 m high) reproducing four rainfall intensity ranges, in which all raindrops reached at least 85% of terminal velocity; and (3) five real rainfall events. A camera recorded rainfall of different intensities while a microphone simultaneously captured the sound of raindrops striking a rigid plastic surface; rainfall amounts were calibrated with a rain gauge. The recorded audio was converted into Mel-Frequency Cepstral Coefficients (MFCC) and, together with the synchronized imagery, fed into Convolutional Neural Network (CNN) models. The results show that the proposed image and audio classification models, integrated and deployed on an edge computing device, achieved training/test accuracies of 99.88% in daytime and 99.75% at night. Combined with Internet of Things (IoT) technology, this approach can enhance disaster response, flood warning, and smart-city applications, effectively reducing the impact of climate change on human society. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93762 |
| DOI: | 10.6342/NTU202402759 |
| Full-Text License: | Authorized (available on campus only) |
| Electronic Full-Text Release Date: | 2029-07-30 |
| Appears in Collections: | Department of Civil Engineering |
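The audio branch described in the abstract (raindrop recordings converted to MFCCs before being fed to a CNN) can be sketched in NumPy as follows. This is a minimal illustration of the standard MFCC pipeline (framing, windowing, power spectrum, mel filterbank, log, DCT-II), not the thesis's actual code; all parameter values (16 kHz sampling rate, 25 ms frames, 26 mel filters, 13 coefficients) are assumptions.

```python
import numpy as np

def hz_to_mel(f):
    """Map frequency in Hz onto the mel scale."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Map mel-scale values back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Build triangular mel filters spanning 0 Hz to the Nyquist frequency."""
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):           # rising edge of triangle
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):          # falling edge of triangle
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_coeffs=13):
    """Frame -> Hamming window -> power spectrum -> mel filterbank -> log -> DCT-II."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    window = np.hamming(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2 / n_fft
    mel_energy = power @ mel_filterbank(n_filters, n_fft, sr).T
    log_mel = np.log(mel_energy + 1e-10)
    # DCT-II over the filterbank axis, keeping the first n_coeffs coefficients
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), n + 0.5) / n_filters)
    return log_mel @ basis.T    # shape: (n_frames, n_coeffs)

# Example: one second of synthetic broadband noise standing in for rain audio
rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)
feats = mfcc(audio)
print(feats.shape)  # (98, 13)
```

Each row of the resulting matrix is one 25 ms frame described by 13 cepstral coefficients; stacking such frames yields the 2-D "image" that a CNN classifier can consume alongside the optical imagery.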
Files in this item:
| File | Size | Format | |
|---|---|---|---|
| ntu-112-2.pdf (Restricted Access) | 10.21 MB | Adobe PDF | View/Open |
Items in this system are protected by copyright, with all rights reserved, unless otherwise indicated.
