Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80437
Title: 非目標式之由文本控制的影片操縱技術 (Target-free Text-guided Image Manipulation)
Authors: 范萬泉 (Wan-Cyuan Fan)
Advisor: 王鈺強 (Yu-Chiang Frank Wang)
Co-Advisor: 陳祝嵩 (Chu-Song Chen); 邱維辰 (Wei-Chen Walon Chiu)
Keyword: computer vision, image manipulation, text-guided image manipulation, text-to-image editing
Publication Year: 2021
Degree: Master's (碩士)
Abstract: In this thesis, we study the problem of text-guided image manipulation without ground-truth image supervision. Observing only the input image, the user-given instruction, and the object labels of the image, we propose a Cyclic-Manipulation GAN (cManiGAN) to tackle this challenging task. First, by introducing an image-text cross-modal interpreter that verifies the output image against the corresponding instruction, we are able to provide word-level feedback for training the image generator. Moreover, an operational cycle-consistency is further utilized for image manipulation: it synthesizes an "undo" instruction that recovers the input image from the manipulated output, offering additional supervision at the pixel level. We conduct extensive experiments on the CLEVR and COCO datasets. While the latter is particularly challenging due to its diverse visual and semantic information, our experimental results on both datasets confirm the effectiveness and generalizability of our proposed method.
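The operational cycle-consistency described in the abstract can be sketched as follows. This is a minimal toy illustration, not the thesis implementation: the generator is replaced by a simple additive edit, and the "undo" instruction is modeled as the inverse edit. The names `manipulate` and `cycle_consistency_loss` are hypothetical.

```python
import numpy as np

def manipulate(image, edit):
    # Toy stand-in for the image generator: apply an additive edit.
    # In cManiGAN the edit would instead be driven by a text instruction.
    return image + edit

def cycle_consistency_loss(image, edit):
    # Apply the instruction, then its "undo" (here: the inverse edit),
    # and measure the pixel-level L1 distance to the original input.
    # A low loss means the manipulated output can be cycled back to
    # the input, which is the extra pixel-level supervision signal.
    manipulated = manipulate(image, edit)
    recovered = manipulate(manipulated, -edit)  # "undo" instruction
    return np.abs(recovered - image).mean()

if __name__ == "__main__":
    image = np.random.rand(3, 64, 64)          # a random RGB-like tensor
    edit = np.random.randn(3, 64, 64) * 0.1    # a random additive edit
    # Near zero for a perfect undo; a trained generator minimizes this.
    print(cycle_consistency_loss(image, edit))
```

In the actual method this loss would be backpropagated through the generator so that manipulation and its inverse instruction jointly constrain the output at the pixel level, complementing the word-level feedback from the cross-modal interpreter.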
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80437
DOI: 10.6342/NTU202104547
Fulltext Rights: Authorized for release (access limited to campus network)
Appears in Collections: Graduate Institute of Communication Engineering (電信工程學研究所)

Files in This Item:
File: ntu-110-2.pdf — 2.69 MB, Adobe PDF (access limited to NTU IP range)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
