Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88347
Full metadata record (DC field: value, language)

dc.contributor.advisor: 廖世偉 (zh_TW)
dc.contributor.advisor: Shih-wei Liao (en)
dc.contributor.author: 羅費南 (zh_TW)
dc.contributor.author: Fernando Sebastian Lopez Ochoa (en)
dc.date.accessioned: 2023-08-09T16:39:27Z
dc.date.available: 2023-11-09
dc.date.copyright: 2023-08-09
dc.date.issued: 2023
dc.date.submitted: 2023-07-25
dc.identifier.citation:
[1] A. Creswell, T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and A. A. Bharath. Generative adversarial networks: An overview. CoRR, abs/1710.07035, 2017.
[2] D. T. Dang Nguyen, C. Pasquini, V. Conotter, and G. Boato. RAISE: a raw images dataset for digital image forensics. March 2015.
[3] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021.
[4] S. Gao. Transferring Multiscale Map Styles Using Generative Adversarial Networks - Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Data-flow-of-CycleGAN-in-this-research_fig7_332932603 [Accessed: 2023-06-01].
[5] S. Gao. Transferring Multiscale Map Styles Using Generative Adversarial Networks - Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Data-flow-of-CycleGAN-in-this-research_fig7_332932603 [Accessed: 2023-06-01].
[6] D. Ghadiyaram and A. C. Bovik. Massive online crowdsourced study of subjective and objective picture quality. IEEE Transactions on Image Processing, 25(1):372–387, 2016.
[7] H. Gholamalinezhad and H. Khosravi. Pooling methods in deep neural networks, a review, 2020.
[8] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks, 2014.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition, 2015.
[10] V. Hosu, H. Lin, T. Sziranyi, and D. Saupe. Koniq-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing, 29:4041–4056, 2020.
[11] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015.
[12] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, 2017.
[13] Y. Jiang, X. Gong, D. Liu, Y. Cheng, C. Fang, X. Shen, J. Yang, P. Zhou, and Z. Wang. EnlightenGAN, 2019. Available from: https://github.com/VITA-Group/EnlightenGAN [Accessed: 2023-04-28].
[14] Y. Jiang, X. Gong, D. Liu, Y. Cheng, C. Fang, X. Shen, J. Yang, P. Zhou, and Z. Wang. Enlightengan: Deep light enhancement without paired supervision. IEEE Transactions on Image Processing, 30:2340–2349, 2021.
[15] R. Khandelwal. Various image enhancement techniques- a critical review. 2013.
[16] A. Kravchenko. EnlightenGAN-inference, 2021. Available from: https://github.com/arsenyinfo/EnlightenGAN-inference [Accessed: 2023-05-01].
[17] J. Liu, X. Dejia, W. Yang, M. Fan, and H. Huang. Benchmarking low-light image enhancement and beyond. International Journal of Computer Vision, 129:1153–1184, 2021.
[18] M. Mirza and S. Osindero. Conditional generative adversarial nets. CoRR, abs/1411.1784, 2014.
[19] K. O’Shea and R. Nash. An introduction to convolutional neural networks, 2015.
[20] M. Raman and A. Himanshu. A comprehensive review of image enhancement techniques. Journal of Computing, 2(3):8–13, 2010.
[21] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation, 2015.
[22] A. Rougete. Landscape pictures, 2022. Available from: https://www.kaggle.com/datasets/arnaud58/landscape-pictures.
[23] H. Talebi and P. Milanfar. NIMA: Neural image assessment. IEEE Transactions on Image Processing, 27(8):3998–4011, aug 2018.
[24] K.-H. Thung and P. Raveendran. A survey of image quality measures. In 2009 International Conference for Technical Postgraduates (TECHPOS), pages 1–4, 2009.
[25] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need, 2017.
[26] Z. Wang and A. Bovik. A universal image quality index. IEEE Signal Processing Letters, 9(3):81–84, 2002.
[27] Z. Wang, A. C. Bovik, and L. Lu. Why is image quality assessment so difficult? In 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 4, pages IV–3313–IV–3316, 2002.
[28] C. Wei, W. Wang, W. Yang, and J. Liu. Deep retinex decomposition for low-light enhancement. In British Machine Vision Conference, 2018.
[29] J. You. triq, 2020. Available from: https://github.com/junyongyou/triq [Accessed: 2022-10-15].
[30] J. You and J. Korhonen. Transformer for image quality assessment. CoRR, abs/2101.01097, 2021.
[31] J.-Y. Zhu. pytorch-CycleGAN-and-pix2pix, 2017. Available from: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix [Accessed: 2022-10-15].
[32] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, 2017.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88347
dc.description.abstract: none (zh_TW)
dc.description.abstract: Image quality enhancement is the process by which images are improved for better human interpretation of their contents. Image enhancement is usually performed according to parameters specified when formulating the problem. Generative Adversarial Networks (GANs), in contrast, can create new images based only on characteristics they find in the training set, without those characteristics being specified. We use three variations of the GAN architecture, CycleGAN, Conditional GAN, and EnlightenGAN, to implement different solutions that generate higher-quality versions of images in existing datasets. Our goal is to demonstrate that Transformer for Image Quality Assessment (TRIQ), an image quality evaluation framework, can provide a frame of reference for the performance of these GANs, and that the GANs can increase image quality after being trained on an original and an enhanced group. (en)
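The parameter-driven enhancement that the abstract contrasts with GAN-based methods can be illustrated by gamma correction, which the thesis covers in Section 3.4. The sketch below is illustrative only and not taken from the thesis code; the gamma value 0.5 is an arbitrary brightening choice, and the function name is ours.

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Apply gamma correction to an 8-bit image: out = in ** gamma in [0, 1].

    gamma < 1 brightens (useful for low-light images); gamma > 1 darkens.
    """
    scaled = np.clip(img.astype(np.float64) / 255.0, 0.0, 1.0)
    out = np.power(scaled, gamma)
    return (out * 255.0).round().astype(np.uint8)

# A tiny 2x2 "low-light" image; brightening lifts the dark pixels most.
dark = np.array([[16, 64], [128, 255]], dtype=np.uint8)
bright = gamma_correct(dark, gamma=0.5)
print(bright.tolist())  # [[64, 128], [181, 255]]
```

Unlike a GAN, this transform needs the parameter (gamma) to be chosen by hand for each scene, which is exactly the limitation the learned approaches in the thesis aim to remove.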
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-09T16:39:27Z. No. of bitstreams: 0 (en)
dc.description.provenance: Made available in DSpace on 2023-08-09T16:39:27Z (GMT). No. of bitstreams: 0 (en)
dc.description.tableofcontents:
Verification Letter from the Oral Examination Committee i
Acknowledgements ii
Abstract iii
Contents iv
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
Chapter 2 Background 3
2.1 Image Quality Enhancement 3
2.2 Image Quality Measurement 4
2.3 Transformers 5
2.3.1 What are Transformers? 5
2.3.2 How do Transformers work? 6
2.4 Generative Adversarial Networks (GANs) 7
2.4.1 What are GANs? 7
2.4.2 How do GANs work? 8
2.5 Discussion 9
Chapter 3 Related work 11
3.1 Transformer for Image Quality Assessment (TRIQ) 11
3.2 CycleGAN 16
3.3 Conditional GAN (cGAN) 18
3.4 Gamma Correction 21
3.5 EnlightenGAN 22
Chapter 4 Implementation 24
4.1 Datasets 24
4.1.1 Low-Light (LOL) dataset 24
4.1.2 VE-LOL dataset 24
4.1.3 Landscapes dataset 25
4.2 Experiments description 26
Chapter 5 Results 28
5.1 CycleGAN experiments 28
5.1.1 LOL 28
5.1.2 VE-LOL 31
5.2 Conditional GAN experiments 33
5.2.1 LOL 33
5.2.2 VE-LOL 36
5.3 Comparison between Cycle and Conditional GAN 38
5.4 Experiments on the Landscapes datasets 39
5.4.1 First Landscapes dataset 40
5.4.2 Second Landscapes dataset 42
5.4.3 Third Landscapes dataset 44
5.4.4 Analysis of cGAN-Landscapes datasets 46
5.5 EnlightenGAN and Landscapes datasets 47
5.5.1 First Landscapes dataset 48
5.5.2 Second Landscapes dataset 50
5.5.3 Third Landscapes dataset 52
5.5.4 Analysis of EnlightenGAN - Landscapes datasets 54
Chapter 6 Conclusion 55
References 56
dc.language.iso: en
dc.subject: 變換器 (zh_TW)
dc.subject: 圖像品質評估 (zh_TW)
dc.subject: 低光照圖像增強 (zh_TW)
dc.subject: 圖像增強 (zh_TW)
dc.subject: 生成對抗網路 (zh_TW)
dc.subject: Low-Light Image Enhancement (en)
dc.subject: Generative Adversarial Networks (en)
dc.subject: Image Enhancement (en)
dc.subject: Image Quality Assessment (en)
dc.subject: Transformers (en)
dc.title: 應用生成對抗網絡與用圖像品質評估的變換器 (zh_TW)
dc.title: Applying Generative Adversarial Networks with Transformer for Image Quality Assessment (en)
dc.type: Thesis
dc.date.schoolyear: 111-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 傅楸善;盧瑞山;黃維中 (zh_TW)
dc.contributor.oralexamcommittee: Chiou-Shann Fuh; Ruei-shan Lu; Wei-chung Hwang (en)
dc.subject.keyword: 圖像增強, 生成對抗網路, 低光照圖像增強, 圖像品質評估, 變換器 (zh_TW)
dc.subject.keyword: Image Enhancement, Generative Adversarial Networks, Low-Light Image Enhancement, Image Quality Assessment, Transformers (en)
dc.relation.page: 59
dc.identifier.doi: 10.6342/NTU202210041
dc.rights.note: 同意授權(全球公開) (authorized; open access worldwide)
dc.date.accepted: 2023-07-26
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 資訊工程學系 (Department of Computer Science and Information Engineering)
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File: ntu-111-2.pdf (60.82 MB, Adobe PDF)