Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/15347
Title: | UNet-AIR2: A Single Image Dehazing Network (Single Image Dehazing Based on a Multi-Module Convolutional Neural Network) |
Authors: | Ju-Chin Chao (趙汝晉) |
Advisor: | Pei-Yuan Wu (吳沛遠) |
Keyword: | Image Dehazing, Deep Learning, Saliency Map, Machine Learning, Computer Vision |
Publication Year: | 2020 |
Degree: | Master (碩士) |
Abstract: | In this paper, we propose UNet-AIR2 as an effective image dehazing model, based on UNet and a combination of state-of-the-art designs, including the aggregated transformation, the inception module, and the recurrent residual convolutional neural network. Unlike previous methods that depend on physical scattering models, UNet-AIR2 directly generates the dehazed image without estimating the transmission map and atmospheric light. To demonstrate the effectiveness of each module, we conduct an ablation study evaluated with the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and subjective visual quality. Furthermore, we identify the reasons why each module is effective in UNet-AIR2, and we obtain saliency maps to observe how each output pixel is related to the input image. Extensive experiments on synthetic and real-world datasets show that the proposed method achieves significant improvements over existing state-of-the-art image haze removal methods. |
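For context, the "physical scattering model" that earlier dehazing methods rely on is the standard atmospheric scattering model; those methods must estimate the transmission map and atmospheric light that UNet-AIR2 bypasses. A short LaTeX statement of that model, together with the PSNR metric used in the ablation study (both are standard formulations, not taken from the thesis text):

```latex
% Atmospheric scattering model: hazy observation I, clear scene radiance J,
% global atmospheric light A, and transmission t(x) = e^{-\beta d(x)},
% where d(x) is scene depth and \beta the scattering coefficient.
I(x) = J(x)\, t(x) + A \bigl( 1 - t(x) \bigr)

% PSNR between a dehazed estimate \hat{J} and the ground truth J
% (MAX is the peak pixel value, e.g. 255 for 8-bit images):
\mathrm{PSNR} = 10 \log_{10} \frac{\mathrm{MAX}^2}{\mathrm{MSE}(\hat{J},\, J)}
```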
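The abstract names three building blocks: the aggregated transformation (grouped convolutions, ResNeXt-style), the inception module (parallel multi-kernel branches), and the recurrent residual convolutional unit (R2U-Net-style). The following is a minimal PyTorch sketch of how such blocks are commonly formulated; all class names, channel counts, recurrence steps, and the cardinality value are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of the module types named in the abstract (assumed
# standard formulations; hyperparameters are illustrative).
import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    """Recurrent convolutional unit: the same conv is applied `steps`
    times, re-injecting the block input at each step (R2U-Net-style)."""
    def __init__(self, channels, steps=2):
        super().__init__()
        self.steps = steps
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.steps - 1):
            out = self.conv(x + out)   # recurrence with input feedback
        return out

class InceptionBranches(nn.Module):
    """Inception-style module: parallel convs with different kernel
    sizes, concatenated along the channel dimension."""
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, 1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, 5, padding=2)

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)

class R2AggregatedBlock(nn.Module):
    """Recurrent residual block whose inner conv is a grouped
    (aggregated, ResNeXt-style) transformation. `out_ch` must be
    divisible by `cardinality`."""
    def __init__(self, in_ch, out_ch, cardinality=8):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, 1)      # match channels for the residual
        self.grouped = nn.Conv2d(out_ch, out_ch, 3, padding=1,
                                 groups=cardinality)  # aggregated transformation
        self.recurrent = RecurrentConv(out_ch)

    def forward(self, x):
        x = self.proj(x)
        return x + self.recurrent(self.grouped(x))    # residual connection

# Example:
#   block = R2AggregatedBlock(3, 64)
#   y = block(torch.randn(1, 3, 128, 128))  # -> (1, 64, 128, 128)
```

In a UNet-style dehazer, blocks like these would replace the plain double-conv stages on the encoder/decoder paths, with the network output taken directly as the dehazed image.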
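The saliency maps mentioned above relate each output pixel to the input image. One common way to obtain such a map is to backpropagate a single output pixel back to the input and take the gradient magnitude; a minimal sketch follows (the function name and tensor shapes are illustrative assumptions, and `model` stands for any image-to-image dehazing network):

```python
import torch

def saliency_for_pixel(model, hazy, row, col):
    """Gradient-based saliency: how strongly each input pixel influences
    the dehazed output at (row, col). `hazy` is a (1, 3, H, W) tensor."""
    hazy = hazy.clone().requires_grad_(True)
    dehazed = model(hazy)
    dehazed[0, :, row, col].sum().backward()  # one output pixel, summed over RGB
    return hazy.grad.abs().amax(dim=1)[0]     # (H, W) gradient-magnitude map
```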
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/15347 |
DOI: | 10.6342/NTU202001203 |
Fulltext Rights: | Not authorized (未授權) |
Appears in Collections: | Graduate Institute of Communication Engineering (電信工程學研究所) |
Files in This Item:
File | Size | Format
---|---|---
U0001-3006202010444700.pdf (Restricted Access) | 1.01 MB | Adobe PDF