NTU Theses and Dissertations Repository › College of Engineering › Department of Civil Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88915
Full metadata record

DC Field: Value (Language)
dc.contributor.advisor: 蔡亞倫 (zh_TW)
dc.contributor.advisor: Ya-Lun Tsai (en)
dc.contributor.author: 楊尚峰 (zh_TW)
dc.contributor.author: Shang-Fong Yang (en)
dc.date.accessioned: 2023-08-16T16:20:33Z
dc.date.available: 2023-11-09
dc.date.copyright: 2023-08-16
dc.date.issued: 2023
dc.date.submitted: 2023-08-09
dc.identifier.citation:
R. D. S. Moreira, N. F. F. Ebecken, A. S. Alves, and F. Livernet, "A survey on video detection and tracking of maritime vessels," IJRRAS, vol. 20, pp. 37–50, Jul. 2014.
D. D. Bloisi, F. Previtali, A. Pennisi, D. Nardi and M. Fiorini, “Enhancing Automatic Maritime Surveillance Systems With Visual Information,” in IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 4, pp. 824-833, April 2017, doi: 10.1109/TITS.2016.2591321.
Maritime and Port Bureau, MOTC, "Statistics of vessels on routes through Taiwan's international ports," Maritime and Port Bureau official website. [Online]. Available: https://data.motcmpb.gov.tw/Reports/REP_Tableau/viz1602138329870 (accessed Aug. 2022).
Fisheries Agency, Council of Agriculture, Executive Yuan, "2021 Fisheries Statistical Yearbook: Number of fishing vessels over the years," Fisheries Agency official website. [Online]. Available: https://www.fa.gov.tw/view.php?theme=FS_AR&subtheme=&id=21 (accessed Jun. 2023).
Fisheries Agency, Council of Agriculture, Executive Yuan, "2021 Fisheries Statistical Yearbook: Summary of fisheries statistics (Chinese–English)," Fisheries Agency official website. [Online]. Available: https://www.fa.gov.tw/view.php?theme=FS_AR&subtheme=&id=21 (accessed Jun. 2023).
Fisheries Agency, Council of Agriculture, Executive Yuan, "2021 Fisheries Statistical Yearbook: Statistical charts," Fisheries Agency official website. [Online]. Available: https://www.fa.gov.tw/view.php?theme=FS_AR&subtheme=&id=21 (accessed Jun. 2023).
T. Lv, J. Zhang, Y. Chen and C. He, “A Real-time AIS Data Computing Platform Based on Flink,” 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), Greenville, SC, USA, 2021, pp. 378-381, doi: 10.1109/ICFTIC54370.2021.9647063.
Maritime and Port Bureau, MOTC, "Maritime Affairs Center Automatic Identification System (AIS)," Maritime and Port Bureau official website. [Online]. Available: https://transport-curation.nat.gov.tw/portAuthority/ais.html (accessed Feb. 2022).
Global Fishing Watch, "What vessels are required to use AIS? What are global regulations and requirements for vessels to carry AIS?," Global Fishing Watch official website. [Online]. Available: https://globalfishingwatch.org/faqs/what-vessels-are-required-to-use-ais-what-are-global-regulations-and-requirements-for-vessels-to-carry-ais/ (accessed Aug. 2022).
邱永芳, 黃茂信, 楊奇達, and 翁健二, "行動中繼傳輸技術應用於AIS系統之研發 [R&D on applying mobile relay transmission technology to AIS systems]," 運輸計畫季刊 (Transportation Planning Journal), vol. 47, no. 3, pp. 221–244, 2018.
R. O. Lane, D. A. Nevell, S. D. Hayward and T. W. Beaney, “Maritime anomaly detection and threat assessment,” 2010 13th International Conference on Information Fusion, Edinburgh, UK, 2010, pp. 1-8, doi: 10.1109/ICIF.2010.5711998.
Global Fishing Watch, "Global analysis shows where fishing vessels' identification devices have been switched off," Global Fishing Watch official website. [Online]. Available: https://globalfishingwatch.org/press-release/analysis-shows-vessels-identification-switched-off/ (accessed Aug. 2022).
Coast Guard Administration, Ocean Affairs Council, "Report on seizures of fishing vessels crossing boundaries," Coast Guard Administration official website. [Online]. Available: https://www.cga.gov.tw/GipOpen/wSite/ct?xItem=101246&ctNode=10110&mp=999 (accessed Aug. 2022).
D. Amitrano et al., “Earth environmental monitoring using multi-temporal synthetic aperture radar: A critical review of selected applications,” Remote Sens. (Basel), vol. 13, no. 4, p. 604, 2021.
Z. Ren and D. Zhu, "A GPU-Based Two-Step Approach for High Resolution SAR Imaging," 2021 IEEE 6th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 2021, pp. 376-380, doi: 10.1109/ICSIP52628.2021.9688851.
X. Wang and C. Chen, "Adaptive ship detection in SAR images using variance WIE-based method," Signal, Image and Video Processing, vol. 10, pp. 1219–1224, 2016, doi: 10.1007/s11760-016-0879-4.
Y. Wang and H. Liu, “A Hierarchical Ship Detection Scheme for High-Resolution SAR Images,” in IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 10, pp. 4173-4184, Oct. 2012, doi: 10.1109/TGRS.2012.2189011.
J. Gómez-Enri, A. Scozzari, F. Soldovieri, J. Coca, and S. Vignudelli, “Detection and Characterization of Ship Targets Using CryoSat-2 Altimeter Waveforms,” Remote Sensing, vol. 8, no. 3, p. 193, Feb. 2016, doi: 10.3390/rs8030193.
F. Mazzarella, M. Vespe and C. Santamaria, “SAR Ship Detection and Self-Reporting Data Fusion Based on Traffic Knowledge,” in IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 8, pp. 1685-1689, Aug. 2015, doi: 10.1109/LGRS.2015.2419371.
X. Zhang, C. Huo, N. Xu, H. Jiang, Y. Cao, L. Ni, and C. Pan, “Multitask learning for ship detection from synthetic aperture radar images,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 8048–8062, 2021.
S. Wei, X. Zeng, Q. Qu, M. Wang, H. Su, and J. Shi, "HRSID: A high-resolution SAR images dataset for ship detection and instance segmentation," IEEE Access, vol. 8, pp. 120234–120254, 2020.
T. Zhang, X. Zhang, J. Li, X. Xu, B. Wang, X. Zhan, Y. Xu, X. Ke, T. Zeng, H. Su, I. Ahmad, D. Pan, C. Liu, Y. Zhou, J. Shi, and S. Wei, "SAR ship detection dataset (SSDD): Official release and comprehensive data analysis," Remote Sensing, vol. 13, no. 18, 2021. [Online]. Available: https://www.mdpi.com/2072-4292/13/18/3690
K. El-Darymli, E. W. Gill, P. McGuire, D. Power, and C. Moloney, "Automatic target recognition in synthetic aperture radar imagery: A state-of-the-art review," IEEE Access, vol. 4, pp. 6014–6058, 2016.
L. Zhai, Y. Li, and Y. Su, "Inshore ship detection via saliency and context information in high-resolution SAR images," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 12, pp. 1870–1874, 2016.
Brizi, M., Lombardo, P. and Pastina, D., “Exploiting the shadow information to increase the target detection performance in SAR images,” In Proceedings of the 5th international conference and exhibition on radar systems, Brest, France, 17–21 May 1999.
Sciotti, M., Pastina, D. and Lombardo, P., “Exploiting the polarimetric information for the detection of ship targets in non-homogeneous SAR images,” In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Toronto, ON, Canada, 24–28 June 2002, pp. 1911–1913.
P. Iervolino, R. Guida and P. Whittaker, “A Model for the Backscattering From a Canonical Ship in SAR Imagery,” in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 3, pp. 1163-1175, March 2016, doi: 10.1109/JSTARS.2015.2443557.
T. Xie, W. Zhang, L. Yang, Q. Wang, J. Huang, and N. Yuan, "Inshore ship detection based on level set method and visual saliency for SAR images," Sensors, vol. 18, p. 3877, 2018.
X. Wang, G. Li, X.-P. Zhang, and Y. He, "Ship detection in SAR images via local contrast of Fisher vectors," IEEE Transactions on Geoscience and Remote Sensing, vol. 58, pp. 6467–6479, 2020.
R. Cong, J. Lei, H. Fu, M. Cheng, W. Lin, and Q. Huang, "Review of visual saliency detection with comprehensive information," IEEE Transactions on Circuits and Systems for Video Technology, 2018, doi: 10.1109/TCSVT.2018.2870832.
N. Li, H. Bi, Z. Zhang, X. Kong, and D. Lu, “Performance Comparison of Saliency Detection,” Advances in Multimedia, vol. 2018, pp. 1–13, Jun. 2018, doi: https://doi.org/10.1155/2018/9497083.
C. Wang, H. Zhang, F. Wu, S. Jiang, B. Zhang and Y. Tang, “A Novel Hierarchical Ship Classifier for COSMO-SkyMed SAR Data,” in IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 2, pp. 484-488, Feb. 2014, doi: 10.1109/LGRS.2013.2268875.
X. Xing, K. Ji, H. Zou, and J. Sun, "Feature selection and weighted SVM classifier-based ship detection in PolSAR imagery," International Journal of Remote Sensing, vol. 34, no. 22, pp. 7925–7944, 2013, doi: 10.1080/01431161.2013.827812.
Z. Zhao, K. Ji, X. Xing, W. Chen, and H. Zou, "Ship classification with high resolution TerraSAR-X imagery based on analytic hierarchy process," International Journal of Antennas and Propagation, vol. 2013, Article ID 698370, 2013, doi: 10.1155/2013/698370.
S. R. H. Gamba, E. E. Sano, and M. P. Rocha, "Identificação de embarcações em imagens aerotransportadas de radar de abertura sintética (R-99 SAR) na área marítima do Brasil [Identification of vessels in airborne synthetic aperture radar (R-99 SAR) images in the Brazilian maritime area]," Boletim de Ciências Geodésicas, vol. 17, no. 3, Sep. 2011. [Online]. Available: https://www.scielo.br/j/bcg/a/vD7FZz3TwcSgWcppRZdJcWF/abstract/?lang=pt
X. X. Zhu, S. Montazeri, M. Ali, Y. Hua, Y. Wang, L. Mou, Y. Shi, F. Xu, and R. Bamler, “Deep learning meets sar: Concepts, models, pitfalls, and perspectives,” IEEE Geoscience and Remote Sensing Magazine, vol. 9, no. 4, pp. 143–172, 2021.
C. Zhang et al., “Evaluation and Improvement of Generalization Performance of SAR Ship Recognition Algorithms,” in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 15, pp. 9311-9326, 2022, doi: 10.1109/JSTARS.2022.3216623.
S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017.
J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788.
P. Xu, Q. Li, B. Zhang, F. Wu, K. Zhao, X. Du, C. Yang, and R. Zhong, "On-board real-time ship detection in HISEA-1 SAR images based on CFAR and lightweight deep learning," Remote Sensing, vol. 13, no. 10, 2021. [Online]. Available: https://www.mdpi.com/2072-4292/13/10/1995
Y. Totani, S. Kotani, K. Odai, E. Ito, and M. Sakakibara, "Real-time analysis of animal feeding behavior with a low-calculation-power CPU," IEEE Transactions on Biomedical Engineering, vol. 67, no. 4, pp. 1197–1205, Apr. 2020, doi: 10.1109/TBME.2019.2933243.
Y. Lai, "A comparison of traditional machine learning and deep learning in image recognition," Journal of Physics: Conference Series, vol. 1314, p. 012148, 2019, doi: 10.1088/1742-6596/1314/1/012148.
M. Gheisari, G. Wang and M. Z. A. Bhuiyan, “A Survey on Deep Learning in Big Data,” 2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC), Guangzhou, China, 2017, pp. 173-180, doi: 10.1109/CSE-EUC.2017.215.
X. -W. Chen and X. Lin, “Big Data Deep Learning: Challenges and Perspectives,” in IEEE Access, vol. 2, pp. 514-525, 2014, doi: 10.1109/ACCESS.2014.2325029.
A. Adadi and M. Berrada, “Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI),” in IEEE Access, vol. 6, pp. 52138-52160, 2018, doi: 10.1109/ACCESS.2018.2870052.
Common Objects in Context (COCO), “COCO Homepage,” COCO Official Website, [Online]. Available: https://cocodataset.org/#home, (accessed Feb. 2022).
R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh and D. Batra, “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization,” 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 618-626, doi: 10.1109/ICCV.2017.74.
M. T. Ribeiro, S. Singh, and C. Guestrin, ""Why should I trust you?": Explaining the predictions of any classifier," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), New York, NY, USA, 2016, pp. 1135–1144, doi: 10.1145/2939672.2939778.
C. H. Ng, H. S. Abuwala and C. H. Lim, “Towards More Stable LIME For Explainable AI,” 2022 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Penang, Malaysia, 2022, pp. 1-4, doi: 10.1109/ISPACS57703.2022.10082810.
Q. Su, X. Zhang, P. Xiao, Z. Li and W. Wang, “Which CAM is Better for Extracting Geographic Objects? A Perspective From Principles and Experiments,” in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 15, pp. 5623-5635, 2022, doi: 10.1109/JSTARS.2022.3188493.
A. Levering, D. Marcos, S. Lobry and D. Tuia, “Interpretable Scenicness from Sentinel-2 Imagery,” IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 2020, pp. 3983-3986, doi: 10.1109/IGARSS39084.2020.9323706.
Y. Yu, B. Wang, and L. Zhang, "Hebbian-based neural networks for bottom-up visual attention and its applications to ship detection in SAR images," Neurocomputing, vol. 74, no. 11, pp. 2008–2017, 2011, doi: 10.1016/j.neucom.2010.06.026.
O. Garcia-Pineda, I. MacDonald, C. Hu, J. Svejkovsky, M. Hess, D. Dukhovskoy, and S. L. Morey, "Detection of floating oil anomalies from the Deepwater Horizon oil spill with synthetic aperture radar," Oceanography, vol. 26, no. 2, pp. 124–137, 2013, doi: 10.5670/oceanog.2013.38.
Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition," in Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998, doi: 10.1109/5.726791.
J. Walsh, N. O'Mahony, S. Campbell, A. Carvalho, L. Krpalkova, G. Velasco-Hernandez, S. Harapanahalli, and D. Riordan, "Deep learning vs. traditional computer vision," 2019, doi: 10.1007/978-3-030-17795-9_10.
R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 2014, pp. 580–587, doi: 10.1109/CVPR.2014.81.
C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, pp. 273–297, 1995. [Online]. Available: https://link.springer.com/article/10.1007/BF00994018
R. Girshick, "Fast R-CNN," in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440–1448.
J. Dai, Y. Li, K. He, and J. Sun, "R-FCN: Object detection via region-based fully convolutional networks," in Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS), Dec. 2016. [Online]. Available: https://dl.acm.org/doi/10.5555/3157096.3157139
S. Hong, B. Roh, K.-H. Kim, Y. Cheon, and M. Park, "PVANet: Lightweight deep neural networks for real-time object detection," Dec. 2016. [Online]. Available: https://arxiv.org/abs/1611.08588
J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6517–6525.
J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," 2018. [Online]. Available: https://arxiv.org/abs/1804.02767
A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal speed and accuracy of object detection," 2020. [Online]. Available: https://arxiv.org/abs/2004.10934
G. Jocher et al., "ultralytics/yolov5: v6.2 - YOLOv5 Classification Models, Apple M1, Reproducibility, ClearML and Deci.ai integrations," Zenodo, Aug. 2022. [Online]. Available: https://doi.org/10.5281/zenodo.7002879
W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in Computer Vision – ECCV 2016. Springer International Publishing, 2016, pp. 21–37. [Online]. Available: https://doi.org/10.1007/978-3-319-46448-0_2
K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2980–2988.
D. Bolya, C. Zhou, F. Xiao, and Y. J. Lee, "YOLACT: Real-time instance segmentation," Oct. 2019. [Online]. Available: https://arxiv.org/abs/1904.02689
H. Chen, K. Sun, Z. Tian, C. Shen, Y. Huang, and Y. Yan, "BlendMask: Top-down meets bottom-up for instance segmentation," Apr. 2020. [Online]. Available: https://arxiv.org/abs/2001.00309
Y. Lee and J. Park, "CenterMask: Real-time anchor-free instance segmentation," Apr. 2020. [Online]. Available: https://arxiv.org/abs/1911.06667
N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with Transformers,” arXiv.org, 28-May-2020, [Online]. Available: https://arxiv.org/abs/2005.12872
Y.-L. Chang, A. Anagaw, L. Chang, Y. C. Wang, C.-Y. Hsiao, and W.-H. Lee, "Ship detection based on YOLOv2 for SAR imagery," Remote Sensing, vol. 11, no. 7, 2019. [Online]. Available: https://www.mdpi.com/2072-4292/11/7/786
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. Gomez, L. Kaiser and I. Polosukhin, “Attention Is All You Need,” arXiv.org, 6-Dec-2017, [Online]. Available: https://arxiv.org/abs/1706.03762
Z. Cui, Q. Li, Z. Cao, and N. Liu, "Dense attention pyramid networks for multi-scale ship detection in SAR images," IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 11, pp. 8983–8997, 2019.
J. Zhao, Z. Zhang, W. Yu, and T.-K. Truong, "A cascade coupled convolutional neural network guided visual attention method for ship detection from SAR images," IEEE Access, vol. 6, pp. 50693–50708, 2018.
Z. Lin, K. Ji, X. Leng, and G. Kuang, "Squeeze and excitation rank faster R-CNN for ship detection in SAR images," IEEE Geoscience and Remote Sensing Letters, vol. 16, no. 5, pp. 751–755, 2019.
T. Zhang and X. Zhang, "High-speed ship detection in SAR images based on a grid convolutional neural network," Remote Sensing, vol. 11, no. 10, 2019. [Online]. Available: https://www.mdpi.com/2072-4292/11/10/1206
A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “MobileNets: Efficient convolutional neural networks for Mobile Vision Applications,” arXiv.org, 17-Apr-2017, [Online]. Available: https://arxiv.org/abs/1704.04861.
J. Li, C. Qu, and J. Shao, "Ship detection in SAR images based on an improved faster R-CNN," 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), 2017, pp. 1–6.
Y. Wang, C. Wang, H. Zhang, Y. Dong, and S. Wei, "A SAR dataset of ship detection for deep learning under complex backgrounds," Remote Sensing, vol. 11, no. 7, 2019. [Online]. Available: https://www.mdpi.com/2072-4292/11/7/765
S. Xian, W. Zhirui, S. Yuanrui, D. Wenhui, Z. Yue, and F. Kun, "AIR-SARShip-1.0: High-resolution SAR ship detection dataset (in English)," Journal of Radars, vol. 8, no. R19097, p. 852, 2019. [Online]. Available: https://radars.ac.cn/en/article/doi/10.12000/JR19097
T. Zhang, X. Zhang, X. Ke, X. Zhan, J. Shi, S. Wei, D. Pan, J. Li, H. Su, Y. Zhou, and D. Kumar, "LS-SSDD-v1.0: A deep learning dataset dedicated to small ship detection from large-scale Sentinel-1 SAR images," Remote Sensing, vol. 12, no. 18, 2020. [Online]. Available: https://www.mdpi.com/2072-4292/12/18/2997
A. Ludwig, “The definition of cross polarization,” IEEE Transactions on Antennas and Propagation, vol. 21, no. 1, pp. 116–119, 1973
C. Oliver and S. Quegan, Understanding Synthetic Aperture Radar Images, ser. EngineeringPro collection. SciTech Publ., 2004, [Online]. Available: https://books.google.com.tw/books?id=IeGKe40S77AC
D. Mera, J. M. Cotos, J. Varela-Pet, P. G. Rodríguez, and A. Caro, "Automatic decision support system based on SAR data for oil spill detection," Computers and Geosciences, vol. 72, pp. 184–191, 2014. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0098300414001812
R. L. Paes, F. Nunziata, and M. Migliaccio, “On the capability of hybrid-polarity features to observe metallic targets at sea,” IEEE Journal of Oceanic Engineering, vol. 41, no. 2, pp. 346–361, 2016.
J. Li, C. Xu, H. Su, L. Gao, and T. Wang, "Deep learning for SAR ship detection: Past, present and future," Remote Sensing, vol. 14, no. 11, 2022. [Online]. Available: https://www.mdpi.com/2072-4292/14/11/2712
G. Zilman, A. Zapolski, and M. Marom, "On detectability of a ship's Kelvin wake in simulated SAR images of rough sea surface," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 2, pp. 609–619, 2015.
D. Chaudhuri, A. Samal, A. Agrawal, Sanjay, A. Mishra, V. Gohri, and R. C. Agarwal, "A statistical approach for automatic detection of ocean disturbance features from SAR images," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 4, pp. 1231–1242, 2012.
H. Li, W. Perrie, Y. He, S. Lehner, and S. Brusch, "Target detection on the ocean with the relative phase of compact polarimetry SAR," IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 6, pp. 3299–3305, 2013.
A. Frost, R. Ressel, and S. Lehner, "Automated iceberg detection using high-resolution X-band SAR images," Canadian Journal of Remote Sensing, vol. 42, no. 4, pp. 354–366, 2016. [Online]. Available: https://doi.org/10.1080/07038992.2016.1177451
P. Yang and L. Guo, "Polarimetric scattering from two-dimensional dielectric rough sea surface with a ship-induced Kelvin wake," International Journal of Antennas and Propagation, vol. 2016, pp. 1–14, 2016.
B. Tings, C. Bentes, D. Velotto, and S. Voinov, "Modelling ship detectability depending on TerraSAR-X-derived metocean parameters," CEAS Space Journal, vol. 11, no. 1, pp. 81–94, Mar. 2019. [Online]. Available: https://doi.org/10.1007/s12567-018-0222-8
W. Guo, J. Wang and S. Wang, “Deep Multimodal Representation Learning: A Survey,” IEEE Access, vol. 7, pp. 63373-63394, 2019, doi: 10.1109/ACCESS.2019.2916887.
A. Barua, M. U. Ahmed and S. Begum, “A Systematic Literature Review on Multimodal Machine Learning: Applications, Challenges, Gaps and Future Directions,” IEEE Access, vol. 11, pp. 14804-14831, 2023, doi: 10.1109/ACCESS.2023.3243854.
J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng, "Multimodal deep learning," Proc. 28th Int. Conf. Mach. Learn., 2011, pp. 689–696.
J. Huang and B. Kingsbury, "Audio-visual deep learning for noise robust speech recognition," Proc. IEEE Int. Conf. Acoust., Speech Signal Process., May 2013, pp. 7596–7599.
A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach, "Multimodal compact bilinear pooling for visual question answering and visual grounding," Proc. Conf. Empirical Methods Natural Lang. Process., 2016, pp. 457–468.
J. Lu, J. Yang, D. Batra, and D. Parikh, "Hierarchical question-image co-attention for visual question answering," Proc. Adv. Neural Inf. Process. Syst., 2016, pp. 289–297.
S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, "Generative adversarial text to image synthesis," Proc. 33rd Int. Conf. Mach. Learn., 2016, pp. 1060–1069.
H. Zhang et al., "StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks," Proc. IEEE Int. Conf. Comput. Vis., 2017, pp. 5907–5915.
X.-A. Bi, X. Hu, H. Wu, and Y. Wang, "Multimodal data analysis of Alzheimer's disease based on clustering evolutionary random forest," IEEE J. Biomed. Health Informat., vol. 24, no. 10, pp. 2973–2983, Oct. 2020.
W. Lin, Q. Gao, M. Du, W. Chen, and T. Tong, "Multiclass diagnosis of stages of Alzheimer's disease using linear discriminant analysis scoring for multimodal data," Comput. Biol. Med., vol. 134, Jul. 2021, Art. no. 104478.
M. Xu, L. Ouyang, L. Han, K. Sun, T. Yu, Q. Li, H. Tian, L. Safarnejad, H. Zhang, Y. Gao, F. S. Bao, Y. Chen, P. Robinson, Y. Ge, B. Zhu, J. Liu, and S. Chen, "Accurately differentiating between patients with COVID-19, patients with other viral infections, and healthy individuals: Multimodal late fusion learning approach," J. Med. Internet Res., vol. 23, no. 1, Jan. 2021, Art. no. e25535.
M. A. Khan, I. Ashraf, M. Alhaisoni, R. Damaševičius, R. Scherer, A. Rehman, and S. A. C. Bukhari, "Multimodal brain tumor classification using deep learning and robust feature selection: A machine learning application for radiologists," Diagnostics, vol. 10, no. 8, p. 565, Aug. 2020.
M. Bednarek, P. Kicki, and K. Walas, "On robustness of multi-modal fusion: Robotics perspective," Electronics, vol. 9, no. 7, p. 1152, 2020.
B. Arjun and H. Prakash, "Feature level fusion of seven neighbor bilinear interpolation data sets of finger vein," Int. J. Adv. Trends Comput. Sci. Eng., vol. 9, no. 2, pp. 1531–1536, Apr. 2020.
L. Pei, L. Vidyaratne, M. M. Rahman, Z. A. Shboul, and K. M. Iftekharuddin, "Multimodal brain tumor segmentation and survival prediction using hybrid machine learning," Proc. Int. MICCAI Brainlesion Workshop. Cham, Switzerland: Springer, 2019, pp. 73–81.
M. Abdelaziz, T. Wang, and A. Elazab, "Alzheimer's disease diagnosis framework from incomplete multimodal data using convolutional neural networks," J. Biomed. Informat., vol. 121, Sep. 2021, Art. no. 103863.
Y. Yin, A. Tran, Y. Zhang, W. Hu, G. Wang, J. Varadarajan, R. Zimmermann, and S.-K. Ng, "Multimodal fusion of satellite images and crowdsourced GPS traces for robust road attribute detection," Proc. 29th Int. Conf. Adv. Geograph. Inf. Syst., Nov. 2021, pp. 107–116.
M. Ajith and M. Martínez-Ramón, "Deep learning based solar radiation micro forecast by fusion of infrared cloud images and radiation data," Appl. Energy, vol. 294, Jan. 2021, Art. no. 117014.
Taiwan Ocean Research Institute, National Applied Research Laboratories, "Shore-based radar current measurement data," Taiwan Ocean Research Institute official website. [Online]. Available: https://www.tori.narl.org.tw/TORI_WEB/CTORI/Data_Application/Codar/ (accessed Feb. 2022).
Central Weather Bureau, "Data description for coastal ocean radar observation products," Central Weather Bureau official website. [Online]. Available: https://www.cwb.gov.tw/Data/data_catalog/3-3-1.pdf (accessed Feb. 2022).
National Center for Atmospheric Research (NCAR), "Weather Research & Forecasting Model (WRF)," NCAR Mesoscale & Microscale Meteorology Laboratory official website. [Online]. Available: https://www.mmm.ucar.edu/models/wrf (accessed Feb. 2022).
Japan Meteorological Agency (JMA), "Japan Meteorological Agency homepage," JMA official website. [Online]. Available: https://www.jma.go.jp/jma/indexe.html (accessed Feb. 2022).
European Centre for Medium-Range Weather Forecasts (ECMWF), "ECMWF homepage," ECMWF official website. [Online]. Available: https://www.ecmwf.int/ (accessed Feb. 2022).
Ministry of Transportation and Communications, "Sea-area simulation information, Harbor Environment Information Network," Harbor Environment Information Network. [Online]. Available: https://isohe.ihmt.gov.tw/Frontend/index.aspx (accessed Feb. 2022).
Google, "Google Earth Engine homepage," Google Earth Engine official website. [Online]. Available: https://earthengine.google.com/ (accessed Feb. 2022).
Google, "Google Earth Engine Sentinel-1 GRD," Google Earth Engine official website. [Online]. Available: https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S1_GRD (accessed Feb. 2022).
European Space Agency (ESA), "Sentinel-1 Toolbox," The European Space Agency Sentinel mission official website. [Online]. Available: https://sentinel.esa.int/web/sentinel/toolboxes/sentinel-1 (accessed Feb. 2022).
European Space Agency (ESA), "Sentinel-1 mission observation scenario," The European Space Agency Sentinel mission official website. [Online]. Available: https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1/observation-scenario (accessed Feb. 2022).
System for Automated Geoscientific Analyses (SAGA), "SAGA homepage," SAGA official website. [Online]. Available: https://saga-gis.sourceforge.io/en/index.html (accessed Feb. 2022).
T. Selige, J. Böhner, and A. Ringeler, "Processing of SRTM X-SAR data to correct interferometric elevation models for land surface applications," Göttinger Geographische Abhandlungen, vol. 115, 2006.
CVAT.ai, "Computer Vision Annotation Tool (CVAT)," CVAT GitHub repository. [Online]. Available: https://github.com/opencv/cvat (accessed Feb. 2022).
W. Li, C. Chen, M. Zhang, H. Li, and Q. Du, "Data augmentation for hyperspectral image classification with deep CNN," IEEE Geoscience and Remote Sensing Letters, vol. 16, no. 4, pp. 593–597, 2019.
B. Lewis, T. Scarnati, M. Levy, J. Nehrbass, E. Zelnio, and E. Sudkamp, "Machine learning techniques for SAR data augmentation," pp. 163–206, 2020. [Online]. Available: https://digital-library.theiet.org/content/books/10.1049/sbra529e_ch6
C. Belloni, N. Aouf, J.-M. L. Caillec, and T. Merlet, "SAR specific noise based data augmentation for deep learning," 2019 International Radar Conference (RADAR), 2019, pp. 1–5.
M. Y. and C. Yi, "A novel method of ship detection in high-resolution SAR images based on GAN and HMRF models," Progress In Electromagnetics Research Letters, vol. 83, pp. 77–82, 2019. [Online]. Available: http://www.jpier.org/PIERL/pier.php?paper=19012502
Z. Cui, M. Zhang, Z. Cao, and C. Cao, "Image data augmentation for SAR sensor via generative adversarial nets," IEEE Access, vol. 7, pp. 42255–42268, 2019.
Y. Hu, Y. Li, and Z. Pan, "A dual-polarimetric SAR ship detection dataset and a memory-augmented autoencoder-based detection method," Sensors, vol. 21, no. 24, 2021. [Online]. Available: https://www.mdpi.com/1424-8220/21/24/8478
C. D. Prakash and L. J. Karam, “It GAN Do Better: GAN-Based Detection of Objects on Images With Varying Quality,” IEEE Transactions on Image Processing, vol. 30, pp. 9220-9230, 2021, doi: 10.1109/TIP.2021.3124155.
Y. Ma et al., “Structure and Illumination Constrained GAN for Medical Image Enhancement,” IEEE Transactions on Medical Imaging, vol. 40, no. 12, pp. 3955-3967, Dec. 2021, doi: 10.1109/TMI.2021.3101937.
S. Li, B. Qin, J. Xiao, Q. Liu, Y. Wang and D. Liang, “Multi-Channel and Multi-Model-Based Autoencoding Prior for Grayscale Image Restoration,” in IEEE Transactions on Image Processing, vol. 29, pp. 142-156, 2020, doi: 10.1109/TIP.2019.2931240.
Z. Di, X. Chen, Q. Wu, J. Shi, Q. Feng and Y. Fan, “Learned Compression Framework With Pyramidal Features and Quality Enhancement for SAR Images,” IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1-5, 2022, Art no. 4505605, doi: 10.1109/LGRS.2022.3155651.
The Pascal Visual Object Classes (VOC), “VOC Homepage,” VOC Official Website, [Online]. Available: http://host.robots.ox.ac.uk/pascal/VOC/, (accessed Feb. 2022).
Roboflow Inc., “Roboflow Homepage,” Roboflow Official Website, [Online]. Available: https://roboflow.com/, (accessed Feb. 2022).
World Meteorological Organization (WMO), “World Meteorological Organization,” WMO Official Website, [Online]. Available: https://community.wmo.int/en, (accessed Feb. 2022).
K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” 2015, [Online]. Available: https://arxiv.org/abs/1512.03385
S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” 2016, [Online]. Available: https://arxiv.org/abs/1611.05431
C. Kharif and E. Pelinovsky, "Physical mechanisms of the rogue wave phenomenon," European Journal of Mechanics - B/Fluids, vol. 22, no. 6, pp. 603–634, 2003, doi: 10.1016/j.euromechflu.2003.09.002.
Y. Peña-Sanchez, A. Mérigaud and J. V. Ringwood, “Short-Term Forecasting of Sea Surface Elevation for Wave Energy Applications: The Autoregressive Model Revisited,” IEEE Journal of Oceanic Engineering, vol. 45, no. 2, pp. 462-471, April 2020, doi: 10.1109/JOE.2018.2875575.
A. Mérigaud and J. V. Ringwood, “Free-Surface Time-Series Generation for Wave Energy Applications,” IEEE Journal of Oceanic Engineering, vol. 43, no. 1, pp. 19-35, Jan. 2018, doi: 10.1109/JOE.2017.2691199.
C. Ray et al., “SAR Altimeter Backscattered Waveform Model,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 2, pp. 911-919, Feb. 2015, doi: 10.1109/TGRS.2014.2330423.
Y. Liu, S. Li, P. W. Chan and D. Chen, “Empirical Correction Ratio and Scale Factor to Project the Extreme Wind Speed Profile for Offshore Wind Energy Exploitation,” IEEE Transactions on Sustainable Energy, vol. 9, no. 3, pp. 1030-1040, July 2018, doi: 10.1109/TSTE.2017.2759666.
J. Xie, G. Yao, M. Sun and Z. Ji, “Measuring Ocean Surface Wind Field Using Shipborne High-Frequency Surface Wave Radar,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 6, pp. 3383-3397, June 2018, doi: 10.1109/TGRS.2018.2799002.
J. Zau et al., “The Simulation of Ocean Surface Wind Measured by Polarimetric Scatterometer,” IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018, pp. 1497-1500, doi: 10.1109/IGARSS.2018.8518894.
T. Zhang and X. Zhang, “HTC+ for SAR Ship Instance Segmentation,” Remote Sensing, vol. 14, no. 10, p. 2395, 2022, [Online]. Available: https://www.mdpi.com/2072-4292/14/10/2395
D. Zhao, C. Zhu, J. Qi, X. Qi, Z. Su and Z. Shi, “Synergistic attention for ship instance segmentation in SAR images,” Remote Sensing, vol. 13, no. 21, p. 4384, 2021, [Online]. Available: https://www.mdpi.com/2072-4292/13/21/4384
Z. Sun, M. Zhao, and B. Jia, “A GaoFen-3 SAR Image dataset of Road Segmentation,” Information Technology and Control, vol. 50, pp. 89–101, Mar. 2021.
C. Niu, Q. Yang, S. Ren, H. Hu, D. Han, Z. Hu and J. Liang, “Instance Segmentation of Auroral Images for Automatic Computation of Arc Width,” IEEE Geoscience and Remote Sensing Letters, vol. PP, pp. 1–5, Mar. 2019.
H. Su, S. Wei, S. Liu and J. Liang, “HQ-ISNet: High-Quality Instance Segmentation for Remote Sensing Imagery,” Remote Sensing, vol. 12, p. 989, Mar. 2020.
O. L. F. de Carvalho et al., “Instance segmentation for large, multi-channel remote sensing imagery using mask-RCNN and a mosaicking approach,” Remote Sens. (Basel), vol. 13, no. 1, p. 39, 2020.
X. Zhao, J. Guo, Y. Zhang, and Y. Wu, “Memory-augmented transformer for remote sensing image semantic segmentation,” Remote Sens. (Basel), vol. 13, no. 22, p. 4518, 2021
J. Peng, X. Sun, H. Yu, Y. Tian, C. Deng and F. Yao, “An Instance-Based Multitask Graph Network for Complex Facility Recognition in Remote Sensing Imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-15, 2022, Art no. 5615015, doi: 10.1109/TGRS.2021.3131381.
Z. Yan, X. Yang and K. -T. Cheng, “Enabling a Single Deep Learning Model for Accurate Gland Instance Segmentation: A Shape-Aware Adversarial Learning Framework,” IEEE Transactions on Medical Imaging, vol. 39, no. 6, pp. 2176-2189, June 2020, doi: 10.1109/TMI.2020.2966594.
M. Ahishali, S. Kiranyaz, T. Ince, and M. Gabbouj, “Dual and single polarized SAR image classification using compact convolutional neural networks,” Remote Sens. (Basel), vol. 11, no. 11, p. 1340, 2019.
D. Wang, F. Zhang, F. Ma, W. Hu, Y. Tang and Y. Zhou, “A Benchmark Sentinel-1 SAR dataset for Airport Detection,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 15, pp. 6671-6686, 2022, doi: 10.1109/JSTARS.2022.3192063.
S. Yang, Y. Lv, Y. Ren, L. Yang, and L. Jiao, “Unsupervised images segmentation via incremental dictionary learning based sparse representation,” Information Sciences, vol. 269, pp. 48–59, 2014.
F. Sijia, K. Ji, M. Xiaojie, L. Zhang, and G. Kuang, “Target Region Segmentation in SAR Vehicle Chip Image With ACM Net,” IEEE Geoscience and Remote Sensing Letters, vol. PP, pp. 1–5, Jun. 2021.
A. Potlapally, N. Mishra, P. Sai and R. Chowdary, “Instance Segmentation in Remote Sensing Imagery using Deep Convolutional Neural Networks,” Fourth International Conference on Contemporary Computing and Informatics (iC3I 2019), Dec. 2019.
F. Fan, X. Zeng, S. Wei, H. Zhang, D. Tang, J. Shi and X. Zhang, “Efficient instance segmentation paradigm for interpreting SAR and optical images,” Remote Sens. (Basel), vol. 14, no. 3, p. 531, 2022.
Q. Yuan and H. Shafri, “Multi-Modal Feature Fusion Network with Adaptive Center Point Detector for Building Instance Extraction,” Remote Sensing, vol. 14, p. 4920, Oct. 2022.
Y. Zhao, L. Zhao, B. Xiong, and G. Kuang, “Attention receptive pyramid network for ship detection in SAR images,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 2738–2756, 2020.
Z. Cui, X. Wang, N. Liu, Z. Cao, and J. Yang, “Ship detection in large-scale SAR images via spatial shuffle-group enhance attention,” IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 1, pp. 379–391, 2021.
R. Yang, Z. Pan, X. Jia, L. Zhang, and Y. Deng, “A novel CNN-based detector for ship detection based on rotatable bounding box in SAR images,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 1938–1958, 2021.
T. Zhang and X. Zhang, “ShipDeNet-20: An Only 20 Convolution Layers and <1-MB Lightweight SAR Ship Detector,” IEEE Geoscience and Remote Sensing Letters, vol. 18, no. 7, pp. 1234-1238, July 2021, doi: 10.1109/LGRS.2020.2993899.
Q. Hu, S. Hu and S. Liu, “BANet: A Balance Attention Network for Anchor-Free Ship Detection in SAR Images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-12, 2022, Art no. 5222212, doi: 10.1109/TGRS.2022.3146027.
H. Rezatofighi, N. Tsoi, J. Gwak, A. Sadeghian, I. Reid and S. Savarese, “Generalized Intersection Over Union: A Metric and a Loss for Bounding Box Regression,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 658-666, doi: 10.1109/CVPR.2019.00075.
海洋委員會海巡署, “民國111年海巡統計年報-取締非法捕魚統計-按風力和浪高分級,” 海洋委員會海巡署官方網站, [Online]. Available: https://www.cga.gov.tw/GipOpen/wSite/public/Attachment/f1681385648856.pdf, (accessed Feb. 2022).
International Maritime Organization (IMO), “International Convention for the Safety of Life at Sea (SOLAS), 1974,” IMO Official Website, [Online]. Available: https://www.imo.org/en/About/Conventions/Pages/International-Convention-for-the-Safety-of-Life-at-Sea-(SOLAS),-1974.aspx, (accessed Feb. 2022).
N. Fouladgar, M. Alirezaie, and K. Främling, “CN-waterfall: A deep convolutional neural network for multimodal physiological affect detection,” Neural Comput. Appl., vol. 34, no. 3, pp. 2157–2176, Feb. 2022.
P. Shah, P. P. Raj, P. Suresh, and B. Das, “Contextually aware multimodal emotion recognition,” Proc. Int. Conf. Recent Trends Mach. Learn., IoT, Smart Cities Appl. Singapore: Springer, 2021, pp. 745–753.
H. Choi, J. P. Yun, B. J. Kim, H. Jang and S. W. Kim, “Attention-Based Multimodal Image Feature Fusion Module for Transmission Line Detection,” IEEE Transactions on Industrial Informatics, vol. 18, no. 11, pp. 7686-7695, Nov. 2022, doi: 10.1109/TII.2022.3147833.
S.-F. Yang and Y.-L. Tsai, “TSSWD Research Github Page,” 2023, [Online]. Available: https://github.com/GMfatcat/TSSWD
S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo, “CutMix: Regularization strategy to train strong classifiers with localizable features,” arXiv [cs.CV], 2019, [Online]. Available: https://arxiv.org/abs/1905.04899
L. T. Luppino et al., “Code-Aligned Autoencoders for Unsupervised Change Detection in Multimodal Remote Sensing Images,” IEEE Transactions on Neural Networks and Learning Systems, doi: 10.1109/TNNLS.2022.3172183.
G. N. Newsam and M. Wegener, “Generating non-Gaussian random fields for sea surface simulations,” Proceedings of ICASSP '94. IEEE International Conference on Acoustics, Speech and Signal Processing, Adelaide, SA, Australia, 1994, pp. VI/195-VI/198 vol.6, doi: 10.1109/ICASSP.1994.389910.
W. Xu et al., “Experimental Investigation on Bragg Resonant Reflection of Waves by Porous Submerged Breakwaters on a Horizontal Seabed,” Water, vol. 14, no. 17, p. 2682, Aug. 2022, doi: 10.3390/w14172682.
S. Fan, V. Kudryavtsev, B. Zhang, W. Perrie, B. Chapron, and A. Mouche, “On C-Band Quad-Polarized Synthetic Aperture Radar Properties of Ocean Surface Currents,” Remote Sensing, vol. 11, no. 19, p. 2321, Oct. 2019, doi: 10.3390/rs11192321.
-
dc.identifier.urihttp://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88915-
dc.description.abstract臺灣海上貿易發達,近海漁業也十分興盛,因此船隻偵測系統對於臺灣的重要性不言而喻。然而,作為船隻偵測主要手段的自動識別系統(Automatic Identification System,AIS)存在訊號範圍受限、小型船隻並無安裝及可以被刻意關閉等問題,造成船隻偵測效率不佳。合成孔徑雷達(Synthetic Aperture Radar,SAR)由於不易受到天氣影響、不受日夜限制等優勢,已被廣泛地應用於船隻偵測任務。隨著深度學習的興起,許多研究都嘗試將此技術應用於SAR影像的船隻偵測。訓練深度學習模型需要資料集,儘管自2017年以來已有五個公開的船隻資料集,這些資料集仍舊有許多不足之處。以遙測角度檢視這些資料集,它們混合不同衛星和不同極化模式的影像的行為並不合理,因為不同衛星和極化模式影像內的船隻特徵會有所差異,進而造成模型訓練困難。本研究也發現前人進行船隻偵測時並未考慮風浪等海面天氣資訊,若能將風浪資訊加入深度學習模型,不僅可以提高模型的可解釋程度,進而發現模型的真實應用場景,亦可以將風浪資訊當作船隻偵測的輔助資訊,有望進一步提升船隻偵測精度。基於上述理由,本研究將嘗試於船隻偵測加入風浪資訊,而為達成此目的,本研究提出一個全新的SAR船隻資料集,名為Taiwan SAR-based Ship and Weather Dataset (TSSWD)。TSSWD以符合遙測原則的方式,從Google Earth Engine蒐集臺灣周邊海域的Sentinel-1影像,結合浮標資訊以測試模型對實際海況的適應能力,並使用光學衛星底圖和AIS來確保船隻標註的正確性。所有深度學習模型的mAP50均超過90,其中Mask R-CNN-x152的表現最為亮眼,在預測框和預測遮罩的mAP50都超過93。本研究也對此模型於不同風浪條件下進行測試,並根據分析結果認定TSSWD的實際應用場景為浪高介於0.1至2.5公尺。此外,本研究嘗試將風浪資訊和SAR影像進行資料融合,以聯合表示的形式建立多模態模型,並進行不同風浪條件下的精度測試。儘管多模態模型的整體測試精度並未超越未加入風浪資訊的NoFusion模型,但是在穩定性上優於NoFusion,而加入風速的多模態模型於低風速的場景下,其精度表現略優於NoFusion。本研究也針對模型的誤判率進行分析,發現誤判數量隨著浪級提升而有所上升,然而中低等級風浪的誤判數量佔比為67%,是需要優先改進的部分。總結來說,TSSWD是第一個符合遙測原則並結合浮標資料的資料集,不僅提供後人將船隻與風浪資訊進行結合的機會,也有助於建立結合風浪資訊的多模態模型。本研究將風浪數值當作船隻偵測模型的輔助資訊,與SAR影像一同建立多模態深度學習模型,並證明使用風浪資訊的多模態模型具有較高的穩定性,而此模型的出現也提供後續研究者一個全新的方向。zh_TW
dc.description.abstractTaiwan's thriving maritime trade and coastal fishing industry underscore the importance of ship detection systems. However, the Automatic Identification System (AIS), the primary means of ship detection, suffers from limitations such as constrained signal range, lack of installation on small vessels, and vulnerability to intentional shutdown, leading to suboptimal detection efficiency. Synthetic Aperture Radar (SAR), with its weather independence and day-night operational capability, has been widely applied to ship detection tasks. With the rise of deep learning, researchers have explored applying this technique to ship detection in SAR imagery. Despite the availability of five publicly accessible ship datasets since 2017, these datasets still have several shortcomings. Examined from a remote sensing perspective, their mixing of imagery from different satellites and polarimetric modes is problematic, because ship features vary among sources and modes, making model training difficult. Additionally, previous ship detection studies have not considered vital oceanic information such as wind and wave data. Integrating wind and wave information into deep learning models not only enhances interpretability, helping reveal realistic application scenarios, but can also serve as auxiliary data for ship detection, potentially improving detection accuracy. In light of the above, this research incorporates wind and wave data into ship detection. To achieve this, a novel SAR-based ship dataset named the Taiwan SAR-based Ship and Weather Dataset (TSSWD) is proposed. TSSWD adheres to remote sensing principles by collecting Sentinel-1 imagery of the ocean surrounding Taiwan through Google Earth Engine. It incorporates buoy information to evaluate the model's adaptability to real sea conditions and uses optical satellite imagery and AIS to ensure accurate ship annotation.
All deep learning models achieve mAP50 values exceeding 90, with the Mask R-CNN-x152 model performing best, exceeding an mAP50 of 93 in both the bounding box and mask prediction tasks. The research also tests the proposed model under different wind and wave conditions and identifies the practical application scenario of TSSWD as SAR imagery with wave heights ranging from 0.1 to 2.5 meters. Moreover, the study fuses wind and wave data with SAR imagery to construct a multimodal model and evaluates its accuracy under various wind and wave conditions. Although the overall testing accuracy of the multimodal model does not significantly surpass that of the NoFusion model trained without wind and wave data, the multimodal model proves more stable than NoFusion, and the variant incorporating wind speed achieves slightly better accuracy in low wind speed scenarios. The research also analyzes the models' misclassification rates and finds that misclassification increases with rising wave levels; nevertheless, low to moderate wave conditions account for 67% of misclassifications, indicating the area in need of priority improvement. In summary, TSSWD is the first dataset that adheres to remote sensing principles while incorporating buoy data, giving researchers the opportunity to integrate ship detection with wind and wave information and facilitating the construction of multimodal models that combine the two. The proposed wind-wave multimodal model is the first to use wind and wave data as auxiliary information for ship detection alongside SAR imagery, offering subsequent researchers a new direction in ship detection research.en
dc.description.provenanceSubmitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-16T16:20:33Z
No. of bitstreams: 0
en
dc.description.provenanceMade available in DSpace on 2023-08-16T16:20:33Z (GMT). No. of bitstreams: 0en
dc.description.tableofcontents誌謝 i
中文摘要 ii
ABSTRACT iv
大綱 vi
圖目錄 ix
表目錄 xi
Chapter 1 研究背景 1
1.1 背景介紹 1
1.2 文獻回顧 2
1.2.1 SAR衛星的優勢與限制 3
1.2.2 傳統船隻辨識方法的限制 6
1.2.3 深度學習應用於船隻偵測的優勢和限制 8
1.2.4 卷積神經網路 10
1.2.5 物件辨識 12
1.2.6 實例分割 13
1.2.7 深度學習應用於SAR船隻偵測 14
1.2.8 SAR船隻資料集 15
1.2.9 浮標資料 19
1.2.10 多模態深度學習模型 20
1.2.11 研究目標 21
Chapter 2 研究方法 23
2.1 研究區域與資料取得 23
2.1.1 研究區域 23
2.1.2 SAR影像資料 25
2.1.3 浮標資料 26
2.1.4 AIS資料 27
2.2 建置TSSWD資料集 28
2.2.1 處理流程 28
2.2.2 海象資料分析和資料增強 30
2.2.3 影像標註 32
2.3 精度指標 35
2.4 AIS輔助效益量化 36
2.5 不同程度風浪之模型精度測試 38
2.6 挑選深度學習模型 38
2.6.1 通用型邊界框 38
2.6.2 通用型實例分割 39
2.7 建立多模態深度學習模型 39
2.7.1 多模態模型基本假設 39
2.7.2 多模態資料融合過程 40
2.7.3 多模態模型架構和相關實驗 41
2.8 深度學習模型海陸誤判分析 42
Chapter 3 研究成果與討論 43
3.1 TSSWD資料集 43
3.1.1 資料集分析 43
3.1.2 TSSWD與前人資料集之比較 44
3.1.3 TSSWD測試集可信度分析 48
3.2 模型精度 50
3.2.1 TSSWD測試集精度 51
3.2.2 資料增強效益分析 56
3.2.3 不同程度風浪之模型精度測試結果分析 57
3.2.4 不同尺寸船隻測試集精度 60
3.2.5 不同風浪程度下大型船隻分佈情形 62
3.3 AIS輔助標記分析 64
3.4 多模態深度學習模型 65
3.4.1 多模態模型訓練 66
3.4.2 多模態模型的測試精度 66
3.4.3 多模態模型於不同程度風浪下的精度分析 68
3.5 深度學習模型海陸誤判分析 73
Chapter 4 結論與未來工作 76
4.1 本研究貢獻 76
4.2 未來展望 77
參考文獻 80
-
dc.language.isozh_TW-
dc.subjectSentinel-1zh_TW
dc.subjectAISzh_TW
dc.subject浮標zh_TW
dc.subject合成孔徑雷達zh_TW
dc.subject船隻偵測zh_TW
dc.subject深度學習zh_TW
dc.subjectShip Detectionen
dc.subjectDeep Learningen
dc.subjectAISen
dc.subjectBuoyen
dc.subjectSynthetic Aperture Radaren
dc.subjectSentinel-1en
dc.title深度學習模型融合風浪資訊於SAR船隻偵測-以臺灣周邊海域為例zh_TW
dc.titleInformation Fusion of Wind and Wave for SAR Ship Detection in Ocean Surrounding Taiwan Using Deep Learningen
dc.typeThesis-
dc.date.schoolyear111-2-
dc.description.degree碩士-
dc.contributor.oralexamcommittee韓仁毓;林偲妘;吳日騰zh_TW
dc.contributor.oralexamcommitteeJen-Yu Han;Szu-Yun Lin;Rih-Teng Wuen
dc.subject.keyword合成孔徑雷達,船隻偵測,浮標,深度學習,AIS,Sentinel-1,zh_TW
dc.subject.keywordSynthetic Aperture Radar,Ship Detection,Buoy,Deep Learning,AIS,Sentinel-1,en
dc.relation.page104-
dc.identifier.doi10.6342/NTU202303285-
dc.rights.note同意授權(限校園內公開)-
dc.date.accepted2023-08-10-
dc.contributor.author-college工學院-
dc.contributor.author-dept土木工程學系-
顯示於系所單位:土木工程學系

文件中的檔案:
檔案 | 大小 | 格式
ntu-111-2.pdf(授權僅限NTU校內IP使用,校園外請利用VPN校外連線服務)| 4.03 MB | Adobe PDF