Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93331
Full metadata record
DC Field    Value    Language
dc.contributor.advisor    林偲妘    zh_TW
dc.contributor.advisor    Szu-Yun Lin    en
dc.contributor.author    陳蓮安    zh_TW
dc.contributor.author    Lien-An Chen    en
dc.date.accessioned    2024-07-29T16:17:53Z
dc.date.available    2024-07-30
dc.date.copyright    2024-07-29
dc.date.issued    2024
dc.date.submitted    2024-07-23
dc.identifier.citation    Akarsu, B., Karaköse, M., Parlak, K., Erhan, A., & Sarimaden, A. (2016). A fast and adaptive road defect detection approach using computer vision with real time implementation. International Journal of Applied Mathematics Electronics and Computers, (Special Issue-1), 290-295. https://doi.org/10.18100/ijamec.270546
Antoniou, A., Storkey, A., & Edwards, H. (2018). Augmenting image classifiers using data augmentation generative adversarial networks. Artificial Neural Networks and Machine Learning–ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part III 27, https://doi.org/10.1007/978-3-030-01424-7_58
Boccardo, P., & Giulio Tonolo, F. (2015). Remote sensing role in emergency mapping for disaster response. Engineering Geology for Society and Territory-Volume 5: Urban Geology, Sustainable Planning and Landscape Exploitation, https://doi.org/10.1007/978-3-319-09048-1_3
Caye Daudt, R., Le Saux, B., Boulch, A., & Gousseau, Y. (2019). Multitask learning for large-scale semantic change detection. Computer Vision and Image Understanding, 187. https://doi.org/10.1016/j.cviu.2019.07.003
Chaurasia, A., & Culurciello, E. (2017). Linknet: Exploiting encoder representations for efficient semantic segmentation. 2017 IEEE visual communications and image processing (VCIP), https://doi.org/10.1109/vcip.2017.8305148
Demir, I., Koperski, K., Lindenbaum, D., Pang, G., Huang, J., Basu, S., Hughes, F., Tuia, D., & Raskar, R. (2018). Deepglobe 2018: A challenge to parse the earth through satellite images. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, https://doi.org/10.1109/cvprw.2018.00031
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. 2009 IEEE conference on computer vision and pattern recognition, https://doi.org/10.1109/cvpr.2009.5206848
DigitalGlobe. (2015). Basemap vivid: The definitive imagery base for high-resolution mapping [PDF]. Retrieved from https://dg-cms-uploads-production.s3.amazonaws.com/uploads/document/file/2/DG_Basemap_Vivid_DS_1.pdf
Etten, A. V. (2020). City-scale Road extraction from satellite imagery v2: Road speeds and travel times. Proceedings of the IEEE/CVF winter conference on applications of computer vision, https://doi.org/10.1109/wacv45572.2020.9093593
Gao, Y., Kong, B., & Mosalam, K. M. (2019). Deep leaf‐bootstrapping generative adversarial network for structural image data augmentation. Computer‐Aided Civil and Infrastructure Engineering, 34(9), 755-773. https://doi.org/10.1111/mice.12458
Gatys, L. A., Ecker, A. S., & Bethge, M. (2016). Image style transfer using convolutional neural networks. Proceedings of the IEEE conference on computer vision and pattern recognition, https://doi.org/10.1109/cvpr.2016.265
Ghaffar, M., McKinstry, A., Maul, T., & Vu, T. (2019). Data augmentation approaches for satellite image super-resolution. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 4, 47-54. https://doi.org/10.5194/isprs-annals-iv-2-w7-47-2019
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2020). Generative adversarial networks. Communications of the ACM, 63(11), 139-144. https://doi.org/10.1145/3422622
Hasanlou, M., Shah-Hosseini, R., Seydi, S. T., Karimzadeh, S., & Matsuoka, M. (2021). Earthquake damage region detection by multitemporal coherence map analysis of radar and multispectral imagery. Remote Sensing, 13(6), 1195. https://doi.org/10.3390/rs13061195
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition, https://doi.org/10.1109/cvpr.2016.90
Huang, X., & Zhang, L. (2011). Morphological building/shadow index for building extraction from high-resolution imagery over urban areas. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 5(1), 161-172. https://doi.org/10.1109/jstars.2011.2168195
Huang, Y., Wei, H., Yang, J., & Wu, M. (2021). Damaged road extraction based on simulated post-disaster remote sensing images. 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, https://doi.org/10.1109/igarss47720.2021.9554812
Isola, P., Zhu, J.-Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE conference on computer vision and pattern recognition, https://doi.org/10.1109/cvpr.2017.632
Kerle, N. (2010). Satellite-based damage mapping following the 2006 Indonesia earthquake—How accurate was it? International Journal of Applied Earth Observation and Geoinformation, 12(6), 466-476. https://doi.org/10.1016/j.jag.2010.07.004
Khelifi, L., & Mignotte, M. (2020). Deep learning for change detection in remote sensing images: Comprehensive review and meta-analysis. IEEE Access, 8, 126385-126400. https://doi.org/10.1109/access.2020.3008036
Lin, S.-Y., Edocia, M., Tsai, F.-J., Chen, L.-A., & Kuo, W.-N. (2023). HaitiBRD: A labeled satellite imagery dataset for building and road damage assessment of the 2010 Haiti earthquake. DesignSafe-CI. https://doi.org/10.17603/ds2-fqat-4v02
Lin, T.-Y., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2017). Feature pyramid networks for object detection. Proceedings of the IEEE conference on computer vision and pattern recognition, https://doi.org/10.1109/cvpr.2017.106
Ma, Y., Chen, F., Liu, J., He, Y., Duan, J., & Li, X. (2016). An automatic procedure for early disaster change mapping based on optical remote sensing. Remote Sensing, 8(4), 272. https://doi.org/10.3390/rs8040272
Maeda, H., Kashiyama, T., Sekimoto, Y., Seto, T., & Omata, H. (2021). Generative adversarial network for road damage detection. Computer‐Aided Civil and Infrastructure Engineering, 36(1), 47-60. https://doi.org/10.1111/mice.12561
Maeda, H., Sekimoto, Y., Seto, T., Kashiyama, T., & Omata, H. (2018). Road damage detection and classification using deep neural networks with smartphone images. Computer‐Aided Civil and Infrastructure Engineering, 33(12), 1127-1141. https://doi.org/10.1111/mice.12387
Milletari, F., Navab, N., & Ahmadi, S. A. (2016, October). V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 fourth international conference on 3D vision (3DV) (pp. 565-571). IEEE. https://doi.org/10.1109/3dv.2016.79
Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. https://doi.org/10.48550/arxiv.1411.1784
Moreno-Barea, F. J., Strazzera, F., Jerez, J. M., Urda, D., & Franco, L. (2018). Forward noise adjustment scheme for data augmentation. 2018 IEEE symposium series on computational intelligence (SSCI), https://doi.org/10.1109/ssci.2018.8628917
Peng, H., Long, F., & Ding, C. (2005). Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on pattern analysis and machine intelligence, 27(8), 1226-1238. https://doi.org/10.1109/TPAMI.2005.159
Pérez, P., Gangnet, M., & Blake, A. (2003). Poisson image editing. In Seminal Graphics Papers: Pushing the Boundaries, Volume 2 (pp. 577-582). https://doi.org/10.1145/882262.882269
Roscher, R., Volpi, M., Mallet, C., Drees, L., & Wegner, J. D. (2020). SemCity Toulouse: A benchmark for building instance segmentation in satellite images. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 5, 109-116. https://doi.org/10.5194/isprs-annals-V-5-2020-109-2020
Xue, R., Yang, C., Xin, Y., Yu, K., & Song, W. (2021). DisasterGAN: Generative adversarial networks for remote sensing disaster image generation. Remote Sensing, 13(21), 4284. https://doi.org/10.3390/rs13214284
Schneider, A. (2012). Monitoring land cover change in urban and peri-urban areas using dense time stacks of Landsat satellite data and a data mining approach. Remote Sensing of Environment, 124, 689-704. https://doi.org/10.1016/j.rse.2012.06.006
scikit-image. (2018). Skeletonize. Retrieved from https://scikit-image.org/docs/dev/auto_examples/edges/plot_skeleton.html
Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., & Webb, R. (2017). Learning from simulated and unsupervised images through adversarial training. Proceedings of the IEEE conference on computer vision and pattern recognition, https://doi.org/10.1109/cvpr.2017.241
Treitz, P., & Rogan, J. (2004). Remote sensing for mapping and monitoring land-cover and land-use change-an introduction. Progress in planning, 61(4), 269-279. https://doi.org/10.1016/s0305-9006(03)00064-3
Voigt, S., Schneiderhan, T., Twele, A., Gähler, M., Stein, E., & Mehl, H. (2011). Rapid damage assessment and situation mapping: learning from the 2010 Haiti earthquake. Photogrammetric Engineering and Remote Sensing (PE&RS), 77(9), 923-931. https://doi.org/10.14358/pers.77.9.923
Wang, C., Xu, C., Wang, C., & Tao, D. (2018). Perceptual adversarial networks for image-to-image transformation. IEEE Transactions on Image Processing, 27(8), 4066-4079. https://doi.org/10.1109/TIP.2018.2836316
Wang, J., Qin, Q., Zhao, J., Ye, X., Feng, X., Qin, X., & Yang, X. (2015). Knowledge-based detection and assessment of damaged roads using post-disaster high-resolution remote sensing image. Remote Sensing, 7(4), 4948-4967. https://doi.org/10.3390/rs70404948
yxdragon. (2018). sknw: Skeleton network. Retrieved from https://github.com/yxdragon/sknw
Zhan, F., Yu, Y., Wu, R., Zhang, J., Lu, S., & Zhang, C. (2022). Marginal contrastive correspondence for guided image generation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, https://doi.org/10.1109/cvpr52688.2022.01040
Zhang, A., Wang, K. C., Li, B., Yang, E., Dai, X., Peng, Y., Fei, Y., Liu, Y., Li, J. Q., & Chen, C. (2017). Automated pixel‐level pavement crack detection on 3D asphalt surfaces using a deep‐learning network. Computer‐Aided Civil and Infrastructure Engineering, 32(10), 805-819. https://doi.org/10.1111/mice.12297
Zhang, P., Zhang, B., Chen, D., Yuan, L., & Wen, F. (2020). Cross-domain correspondence learning for exemplar-based image translation. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition.
Zhang, Z., Liu, Q., & Wang, Y. (2018). Road extraction by deep residual u-net. IEEE Geoscience and Remote Sensing Letters, 15(5), 749-753. https://doi.org/10.1109/lgrs.2018.2802944
Zheng, Z., Zhong, Y., Wang, J., Ma, A., & Zhang, L. (2021). Building damage assessment for rapid disaster response with a deep object-based semantic change detection framework: From natural disasters to man-made disasters. Remote Sensing of Environment, 265, 112636. https://doi.org/10.1016/j.rse.2021.112636
Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., & Torralba, A. (2017). Scene parsing through ade20k dataset. Proceedings of the IEEE conference on computer vision and pattern recognition, https://doi.org/10.1109/cvpr.2017.544
Zhou, L., Zhang, C., & Wu, M. (2018). D-LinkNet: LinkNet with pretrained encoder and dilated convolution for high resolution satellite imagery road extraction. Proceedings of the IEEE conference on computer vision and pattern recognition workshops, https://doi.org/10.1109/cvprw.2018.00034
Zhou, X., Zhang, B., Zhang, T., Zhang, P., Bao, J., Chen, D., Zhang, Z., & Wen, F. (2021). Cocosnet v2: Full-resolution correspondence learning for image translation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, https://doi.org/10.1109/cvpr46437.2021.01130
Zhu, J.-Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE international conference on computer vision, https://doi.org/10.1109/iccv.2017.244
Zhu, X., Liu, Y., Li, J., Wan, T., & Qin, Z. (2018). Emotion classification with data augmentation using generative adversarial networks. Advances in Knowledge Discovery and Data Mining: 22nd Pacific-Asia Conference, PAKDD 2018, Melbourne, VIC, Australia, June 3-6, 2018, Proceedings, Part III 22, https://doi.org/10.1007/978-3-319-93040-4_28
dc.identifier.uri    http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93331
dc.description.abstract    災害發生後,快速準確地評估道路損壞位置及影響程度對於及時災害應變至關重要。傳統的評估方法仰賴於民眾回報和勘災人員現場調查。然而,在大範圍災害中,這些方法往往耗時且費力,造成後續物資配給、救援工作及民眾疏散的延誤。因此,本研究將透過應用遙測影像以及深度學習技術,進行大範圍快速地道路災損辨識。針對目前應用遙測影像於道路損壞辨識任務的相關研究中,因災後影像中破壞道路數量稀少及缺乏災前後成對影像作為訓練資料集導致深度學習模型辨識能力不佳的狀況,本研究利用生成資料集進行資料擴增,來提升卷積神經網路(CNNs)的效能,使用高斯噪點和生成對抗網路(GANs)等方法,由災前影像產生共四種具有不同破壞特徵的生成災後影像。此方法增加了影像中破壞類別數量,並提供成對的災前後影像作為訓練資料集,有助於模型辨識破壞道路的能力,並提升模型在不同災害情境中的適應能力。此外,本研究提出了一個結合道路提取模型和孿生模型的道路破壞辨識架構,並將輸出的分類結果進一步投影至地圖,進行路段破壞程度分級。本研究結果顯示,有使用生成資料進行訓練的模型在道路破壞辨識能力上有顯著提升,特別是透過生成對抗網路生成之有當地破壞特徵的資料集提升最多。未來的研究將集中於在不同的光線條件、色調和地形下生成和評估合成損壞特徵,以進一步提升損壞識別模型的穩健性。    zh_TW
dc.description.abstract    Assessing road damage in the aftermath of disasters is crucial for timely and effective emergency response. Traditional assessment methods, which rely on civilian reports and on-site surveys, are often time-consuming and labor-intensive, especially during widespread disasters. This research addresses these limitations by applying remote sensing imagery and deep learning techniques for large-scale, rapid road damage identification. Existing studies on road damage identification using remote sensing imagery face challenges due to the scarcity of damaged road data and the lack of paired pre- and post-disaster images for training datasets. To overcome these issues, this study utilizes synthetic data for data augmentation to enhance the performance of Convolutional Neural Networks (CNNs). Using methods such as Gaussian noise and Generative Adversarial Networks (GANs), we generate four types of synthetic post-disaster images from pre-disaster images, each reflecting different damage characteristics. This approach increases the variety of damage categories in the images and provides paired pre- and post-disaster images for training datasets, improving the model's ability to identify damaged roads and its adaptability to different disaster scenarios. Additionally, we propose a road damage recognition framework that combines the Road Extraction Model and the Siamese damage identification model. The output prediction masks are projected onto a map, and road segments are classified to determine the extent of damage. Our results show significant improvements in the models' road damage identification capabilities when trained with synthetic data, particularly with datasets generated using GANs that include local damage features. Future research will focus on generating and evaluating synthetic damage features under various lighting conditions, color tones, and terrains to further enhance the robustness of damage identification models.    en
dc.description.provenance    Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-07-29T16:17:53Z. No. of bitstreams: 0    en
dc.description.provenance    Made available in DSpace on 2024-07-29T16:17:53Z (GMT). No. of bitstreams: 0    en
dc.description.tableofcontents    ACKNOWLEDGEMENT i
摘要 ii
ABSTRACT iii
CONTENTS v
LIST OF FIGURES viii
LIST OF TABLES xi
Chapter 1    Introduction 1
1.1  Research Background 1
1.2  Challenges 1
1.3  Objectives of Our Study 3
Chapter 2    Related Work 5
2.1  Applying Remote Sensing Imagery for Damaged Road Identification 5
2.2  Data Augmentation through Synthetic Data 6
2.3  Damaged Road Extraction Models 8
2.4  Concluding Remarks 10
Chapter 3    Methodology 12
3.1  Synthetic Post-disaster Data Generation 12
3.1.1   Employing Gaussian Noise for Road Damage Features 13
3.1.2   Employing Exemplar-based GAN model for Road Damage Features 14
3.1.3   Selection of Damaged Road Areas and Post-processing of Synthetic Post-Disaster Images 20
3.2  Post-disaster Road Damage Assessment Methods 22
3.2.1   Differentiating between Pre-/Post-disaster Road Extraction Results 23
3.2.2   Directly Identifying Road Damage 26
3.2.3   Integrated Prediction 32
3.3  Integration of Damage Detection Results into Road Network Maps 33
3.3.1   Road Network Pre-processing 34
3.3.2   Mapping Prediction Results to the Processed Road Network 35
3.4  Concluding Remarks 38
Chapter 4    Experiments 40
4.1  Training Dataset 40
4.1.1   Pre-disaster Dataset 40
4.1.2   Paired Pre- and Post-disaster Dataset 41
4.2  Test Dataset 45
4.2.1   Indonesia dataset (2018 Sulawesi earthquake and tsunami) 47
4.2.2   Haiti dataset (2010 Haiti earthquake) 49
4.2.3   Taiwan dataset (2008 Morakot typhoon) 50
4.3  Implementation Details 52
4.4  Performance Metrics 53
4.4.1   Recall, Precision, and F1 Score 55
4.4.2   Pixel-based Evaluation Method 56
4.4.3   Graph-based metric 58
4.5  Concluding Remarks 59
Chapter 5    Results and Discussions 61
5.1  Case Study (1) – Indonesia Dataset 61
5.1.1   Comparison of Synthetic Training Datasets 61
5.1.2   Comparison of Road Damage Assessment Methods 68
5.2  Case Study (2) – Haiti Dataset 73
5.3  Case Study (3) – Taiwan Dataset 79
5.4  Concluding Remarks 84
Chapter 6   Conclusion 87
REFERENCE 92
dc.language.iso    en
dc.subject    道路損壞辨識    zh_TW
dc.subject    災害應變    zh_TW
dc.subject    卷積神經網路    zh_TW
dc.subject    生成假資料    zh_TW
dc.subject    遙測影像    zh_TW
dc.subject    Remote Sensing imagery    en
dc.subject    Convolutional Neural Networks (CNNs)    en
dc.subject    Road Damage Assessment    en
dc.subject    Disaster Response    en
dc.subject    Synthetic data    en
dc.title    應用生成資料集增強遙測影像之道路破壞辨識    zh_TW
dc.title    Enhancing Road Damage Identification in Remote Sensing Imagery through Synthetic Data    en
dc.type    Thesis
dc.date.schoolyear    112-2
dc.description.degree    碩士
dc.contributor.oralexamcommittee    林之謙; 吳日騰    zh_TW
dc.contributor.oralexamcommittee    Jacob-J Lin; Rih-Teng Wu    en
dc.subject.keyword    道路損壞辨識, 遙測影像, 生成假資料, 卷積神經網路, 災害應變    zh_TW
dc.subject.keyword    Road Damage Assessment, Remote Sensing imagery, Synthetic data, Convolutional Neural Networks (CNNs), Disaster Response    en
dc.relation.page    97
dc.identifier.doi    10.6342/NTU202401960
dc.rights.note    同意授權(限校園內公開)
dc.date.accepted    2024-07-23
dc.contributor.author-college    工學院
dc.contributor.author-dept    土木工程學系
dc.date.embargo-lift    2029-07-19
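The abstract above (dc.description.abstract) states that synthetic post-disaster images are generated from pre-disaster imagery using methods such as Gaussian noise and GANs, so that paired pre-/post-disaster training samples containing damaged roads become available. As a rough illustration of the Gaussian-noise idea only, not the thesis's actual implementation (the function name synthesize_post_disaster and the parameters damage_fraction and noise_std are hypothetical), a minimal NumPy sketch could look like this:

import numpy as np

def synthesize_post_disaster(pre_img, road_mask, damage_fraction=0.3, noise_std=40.0, seed=None):
    """Return (post_img, damage_mask) for one synthetic pre-/post-disaster training pair.

    pre_img   : H x W x 3 uint8 pre-disaster image tile
    road_mask : H x W boolean (or 0/1) mask of road pixels
    """
    rng = np.random.default_rng(seed)
    road = road_mask.astype(bool)

    # Pick a random subset of road pixels to treat as "damaged"
    # (a crude stand-in for selecting damaged road areas).
    damage_mask = road & (rng.random(road.shape) < damage_fraction)

    # Corrupt the selected pixels with additive Gaussian noise to mimic
    # rubble/debris texture, then clip back to the valid 8-bit range.
    post = pre_img.astype(np.float32)
    noise = rng.normal(0.0, noise_std, size=pre_img.shape).astype(np.float32)
    post[damage_mask] += noise[damage_mask]
    post = np.clip(post, 0, 255).astype(np.uint8)

    return post, damage_mask

# Usage: each (pre_img, post, damage_mask) triplet forms one paired
# pre-/post-disaster sample with a known damage label.
if __name__ == "__main__":
    pre = np.full((256, 256, 3), 120, dtype=np.uint8)   # dummy gray tile
    mask = np.zeros((256, 256), dtype=bool)
    mask[120:136, :] = True                             # dummy horizontal road
    post, dmg = synthesize_post_disaster(pre, mask, seed=0)
    print(post.shape, int(dmg.sum()), "damaged road pixels")

The GAN-based variants described in the abstract would replace this simple noise model with learned local damage textures; the sketch only shows how a paired sample and its damage mask could be derived from a single pre-disaster tile.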
Appears in Collections: Department of Civil Engineering (土木工程學系)

Files in This Item:
File    Size    Format
ntu-112-2.pdf (restricted access; not publicly available)    24.97 MB    Adobe PDF

