NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science › Graduate Institute of Networking and Multimedia
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88349
Title: 基於語義引導之非監督式深度學習圖像美化
Semantic-guided 3D Lookup Tables for Image Enhancement with Unpaired Learning
Authors: 謝梓豪
Tzu-Hao Hsieh
Advisor: 莊永裕
Yung-Yu Chuang
Keyword: Image Enhancement, 3D LUTs, Deep Learning, Computer Vision, Unpaired Learning
Publication Year: 2023
Degree: Master's
Abstract: 近年來,隨著深度學習的快速發展,利用深度學習進行圖像美化工作獲得了很大的進展,特別是深度學習搭配 3D LUTs (3-dimensional lookup tables) 的方法相當受到歡迎,因為其無論在性能還是時間方面都取得了很好的成果,但是此方法受限於 3D LUTs 採用的是全域美化方式,只能對圖像進行整體美化,無法依據不同類別執行不同美化方式,一定程度上限制了圖像美化的彈性。因此,本篇論文提出了一種兼具全域美化以及區域美化的方法,目的是希望能達成依據圖像語義資訊執行對應美化,本方法由兩個部分組成。第一,將語義分割模型產出的結果插入原模型中學習,用以輔助模型生成類別權重圖,並結合 3D LUTs 進行圖像美化。第二,加入了語義片段相似度損失函數,使得我們能更有效的學習不同類別的圖像所需具備的美化要素。綜合以上兩種方法,我們能夠做到針對物體類別進行圖像美化,實驗結果表明了我們的方法無論在視覺效果還是評量數據上都勝過了以往的方法。
In recent years, with the rapid advancement of deep learning, significant progress has been made in using deep learning for image enhancement. In particular, the combination of deep learning and 3D LUTs (3-dimensional lookup tables) has become quite popular due to its strong results in both quality and runtime. However, this approach is constrained by the global enhancement scheme of 3D LUTs, which can only enhance the image as a whole and cannot apply different enhancement styles to different semantic categories, limiting the flexibility of image enhancement. Therefore, this paper proposes a method that combines global and local enhancement, aiming to apply enhancement according to the semantic information of the image. The method consists of two parts. First, the segmentation map produced by a semantic segmentation model is integrated into the original model for training, helping the model generate category weight maps that are combined with 3D LUTs to enhance the image. Second, a semantic patch distance loss function is introduced, enabling us to learn more effectively the aesthetic elements required by different image categories. By integrating these two components, we achieve image enhancement tailored to object categories. Experimental results demonstrate that our approach outperforms previous methods in both visual quality and quantitative metrics.
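The core idea in the abstract (blending per-category 3D LUT outputs with per-pixel weight maps) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the thesis's implementation: the weight maps are assumed to be softmax-normalized over categories, and nearest-neighbor LUT lookup is used for brevity (trilinear interpolation is standard in practice).

```python
import numpy as np

def apply_lut(image, lut):
    """Apply one 3D LUT (D x D x D x 3, values in [0, 1]) to an RGB image
    in [0, 1] via nearest-neighbor lookup."""
    d = lut.shape[0]
    # Map each channel value to the nearest LUT grid index.
    idx = np.clip((image * (d - 1)).round().astype(int), 0, d - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

def semantic_guided_enhance(image, luts, weight_maps):
    """Blend per-category LUT outputs with per-pixel category weight maps.

    image:       H x W x 3 RGB array in [0, 1]
    luts:        list of N LUTs, each D x D x D x 3
    weight_maps: H x W x N array, assumed normalized over the N categories
    """
    out = np.zeros_like(image)
    for k, lut in enumerate(luts):
        # Each pixel receives a convex combination of the N LUT outputs.
        out += weight_maps[..., k:k + 1] * apply_lut(image, lut)
    return out
```

With an identity LUT and all weight mass on it, the output reduces to a plain (quantized) LUT application, which is a quick sanity check on the blending logic.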
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88349
DOI: 10.6342/NTU202301095
Fulltext Rights: Not authorized
Appears in Collections: Graduate Institute of Networking and Multimedia

Files in This Item:
File: ntu-111-2.pdf (Restricted Access)
Size: 11.32 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
