Please use this Handle URI to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74184
Title: RA-GAN: Multi-domain Image-to-Image Translation via Relative Attributes
Author: Po-Wui Wu (吳柏威)
Advisor: Shih-wei Liao (廖世偉)
Keywords: deep learning, generative adversarial network, relative attributes, multi-domain image-to-image translation
Publication Year: 2019
Degree: Master's
Abstract: Multi-domain image-to-image translation has gained increasing attention recently. Previous methods take an image and a set of target attributes as inputs and generate an output image that has the desired attributes. However, this formulation has a limitation: it requires specifying the entire attribute set even when most of the attributes are not meant to change. To address this limitation, we propose RA-GAN, a novel and practical formulation for multi-domain image-to-image translation. The key idea is the use of relative attributes, which describe the desired change in selected attributes. To this end, we propose an adversarial framework that learns a single generator to translate images that not only match the relative attributes but also exhibit better quality. Moreover, our generator can modify images by changing particular attributes of interest in a continuous manner while preserving the others. Experimental results demonstrate the effectiveness of our approach, both qualitatively and quantitatively, on the tasks of facial attribute transfer and interpolation.
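The sketch below illustrates the relative-attribute conditioning described in the abstract under assumed details: the attribute list, the `relative_attributes` helper, and the placeholder `translate` function are illustrative inventions, not code from the thesis. The point shown is that only the attributes the user wants to change receive a non-zero entry, and scaling that vector gives continuous interpolation while untouched attributes stay at zero.

```python
# Illustrative sketch (not the thesis implementation) of conditioning a
# generator on relative attributes instead of a full target-attribute vector.
import numpy as np

# Hypothetical attribute set; a real model would use dataset-specific labels.
ATTRIBUTES = ["smiling", "young", "blond_hair"]

def relative_attributes(desired_changes):
    """Build a relative-attribute vector: non-zero only for selected attributes.

    desired_changes maps an attribute name to its desired change,
    e.g. +1 to add the attribute, -1 to remove it.
    """
    rel = np.zeros(len(ATTRIBUTES), dtype=np.float32)
    for name, delta in desired_changes.items():
        rel[ATTRIBUTES.index(name)] = delta
    return rel

def translate(image, rel, strength=1.0):
    """Placeholder for a generator call G(image, strength * rel).

    Scaling the relative-attribute vector by `strength` is what allows
    continuous interpolation of an edit; attributes with a zero entry
    are left unspecified and therefore preserved by construction.
    """
    conditioning = strength * rel
    return image, conditioning  # a real generator would return an edited image

# Change only "smiling"; the other attributes need not be specified at all.
rel = relative_attributes({"smiling": +1.0})
for s in (0.0, 0.5, 1.0):
    _, cond = translate(image=None, rel=rel, strength=s)
    print(f"strength={s}: conditioning={cond}")
```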
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74184
DOI: 10.6342/NTU201902059
Full-Text License: Paid authorization
Appears in Collections: Department of Computer Science and Information Engineering
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-108-1.pdf (currently not authorized for public access) | 11.75 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.