Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88347
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 廖世偉 | zh_TW |
dc.contributor.advisor | Shih-wei Liao | en |
dc.contributor.author | 羅費南 | zh_TW |
dc.contributor.author | Fernando Sebastian Lopez Ochoa | en |
dc.date.accessioned | 2023-08-09T16:39:27Z | - |
dc.date.available | 2023-11-09 | - |
dc.date.copyright | 2023-08-09 | - |
dc.date.issued | 2023 | - |
dc.date.submitted | 2023-07-25 | - |
dc.identifier.citation | [1] A. Creswell, T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and A. A. Bharath. Generative adversarial networks: An overview. CoRR, abs/1710.07035, 2017.
[2] D. T. Dang Nguyen, C. Pasquini, V. Conotter, and G. Boato. RAISE: A raw images dataset for digital image forensics. 03 2015.
[3] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021.
[4] S. Gao. Transferring Multiscale Map Styles Using Generative Adversarial Networks - Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Data-flow-of-CycleGAN-in-this-research_fig7_332932603 [Accessed: 2023-06-01].
[5] S. Gao. Transferring Multiscale Map Styles Using Generative Adversarial Networks - Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Data-flow-of-CycleGAN-in-this-research_fig7_332932603 [Accessed: 2023-06-01].
[6] D. Ghadiyaram and A. C. Bovik. Massive online crowdsourced study of subjective and objective picture quality. IEEE Transactions on Image Processing, 25(1):372–387, 2016.
[7] H. Gholamalinezhad and H. Khosravi. Pooling methods in deep neural networks, a review, 2020.
[8] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks, 2014.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition, 2015.
[10] V. Hosu, H. Lin, T. Sziranyi, and D. Saupe. KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing, 29:4041–4056, 2020.
[11] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015.
[12] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, 2017.
[13] Y. Jiang, X. Gong, D. Liu, Y. Cheng, C. Fang, X. Shen, J. Yang, P. Zhou, and Z. Wang. EnlightenGAN, 2019. Available from: https://github.com/VITA-Group/EnlightenGAN [Accessed: 2023-04-28].
[14] Y. Jiang, X. Gong, D. Liu, Y. Cheng, C. Fang, X. Shen, J. Yang, P. Zhou, and Z. Wang. EnlightenGAN: Deep light enhancement without paired supervision. IEEE Transactions on Image Processing, 30:2340–2349, 2021.
[15] R. Khandelwal. Various image enhancement techniques - a critical review. 2013.
[16] A. Kravchenko. EnlightenGAN-inference, 2021. Available from: https://github.com/arsenyinfo/EnlightenGAN-inference [Accessed: 2023-05-01].
[17] J. Liu, D. Xu, W. Yang, M. Fan, and H. Huang. Benchmarking low-light image enhancement and beyond. International Journal of Computer Vision, 129:1153–1184, 2021.
[18] M. Mirza and S. Osindero. Conditional generative adversarial nets. CoRR, abs/1411.1784, 2014.
[19] K. O'Shea and R. Nash. An introduction to convolutional neural networks, 2015.
[20] M. Raman and A. Himanshu. A comprehensive review of image enhancement techniques. Journal of Computing, 2(3):8–13, 2010.
[21] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation, 2015.
[22] A. Rougete. Landscape pictures, 2022. Available from: https://www.kaggle.com/datasets/arnaud58/landscape-pictures.
[23] H. Talebi and P. Milanfar. NIMA: Neural image assessment. IEEE Transactions on Image Processing, 27(8):3998–4011, Aug 2018.
[24] K.-H. Thung and P. Raveendran. A survey of image quality measures. In 2009 International Conference for Technical Postgraduates (TECHPOS), pages 1–4, 2009.
[25] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need, 2017.
[26] Z. Wang and A. Bovik. A universal image quality index. IEEE Signal Processing Letters, 9(3):81–84, 2002.
[27] Z. Wang, A. C. Bovik, and L. Lu. Why is image quality assessment so difficult? In 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 4, pages IV-3313–IV-3316, 2002.
[28] C. Wei, W. Wang, W. Yang, and J. Liu. Deep retinex decomposition for low-light enhancement. In British Machine Vision Conference, 2018.
[29] J. You. triq, 2020. Available from: https://github.com/junyongyou/triq [Accessed: 2022-10-15].
[30] J. You and J. Korhonen. Transformer for image quality assessment. CoRR, abs/2101.01097, 2021.
[31] J.-Y. Zhu. pytorch-CycleGAN-and-pix2pix, 2017. Available from: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix [Accessed: 2022-10-15].
[32] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, 2017. | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88347 | - |
dc.description.abstract | none | zh_TW |
dc.description.abstract | Image Quality Enhancement is the process by which images are improved for better human interpretation of their contents. Image enhancement is usually performed according to parameters specified when the problem is formulated. Generative Adversarial Networks (GANs), on the other hand, can create new images based only on characteristics found in the training set, without those characteristics being explicitly specified. We use three variations of the GAN architecture (CycleGAN, Conditional GAN, and EnlightenGAN) to implement different solutions that generate images of increased quality from existing datasets. Our goal is to demonstrate that Transformer for Image Quality Assessment, an image quality evaluation framework, provides a frame of reference for the performance of these GANs, and that these GANs can increase image quality after being trained on groups of original and enhanced images. | en |
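The evaluation idea in the abstract can be sketched as follows: score images with a no-reference quality model before and after GAN enhancement, then compare the mean scores. This is a minimal illustration only; `score_image` below is a hypothetical stand-in (here a toy brightness/contrast heuristic), whereas the thesis uses the trained TRIQ model for this step.

```python
from statistics import mean

def score_image(pixels):
    """Placeholder no-reference quality score in [0, 1]: brighter and
    higher-contrast images score higher. A real pipeline would run a
    trained IQA model (e.g. TRIQ) here instead."""
    brightness = mean(pixels)            # average 8-bit intensity
    contrast = max(pixels) - min(pixels) # dynamic range
    return 0.5 * brightness / 255 + 0.5 * contrast / 255

def mean_quality(images):
    """Average quality score over a set of images."""
    return mean(score_image(img) for img in images)

# Toy "low-light" images and their "enhanced" counterparts,
# each image reduced to a flat list of 8-bit pixel values.
low_light = [[10, 20, 30, 40], [5, 15, 25, 35]]
enhanced  = [[60, 120, 180, 240], [50, 110, 170, 230]]

before = mean_quality(low_light)
after = mean_quality(enhanced)
print(f"mean score before: {before:.3f}, after: {after:.3f}")
```

The comparison `after > before` is the frame of reference the abstract describes: a successful enhancement GAN should raise the mean IQA score of its outputs relative to the inputs.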
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-09T16:39:27Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2023-08-09T16:39:27Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i
Acknowledgements ii
Abstract iii
Contents iv
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
Chapter 2 Background 3
2.1 Image Quality Enhancement 3
2.2 Image Quality Measurement 4
2.3 Transformers 5
2.3.1 What are Transformers? 5
2.3.2 How do Transformers work? 6
2.4 Generative Adversarial Networks (GANs) 7
2.4.1 What are GANs? 7
2.4.2 How do GANs work? 8
2.5 Discussion 9
Chapter 3 Related Work 11
3.1 Transformer for Image Quality Assessment (TRIQ) 11
3.2 CycleGAN 16
3.3 Conditional GAN (cGAN) 18
3.4 Gamma Correction 21
3.5 EnlightenGAN 22
Chapter 4 Implementation 24
4.1 Datasets 24
4.1.1 Low-Light (LOL) dataset 24
4.1.2 VE-LOL dataset 24
4.1.3 Landscapes dataset 25
4.2 Experiments description 26
Chapter 5 Results 28
5.1 CycleGAN experiments 28
5.1.1 LOL 28
5.1.2 VE-LOL 31
5.2 Conditional GAN experiments 33
5.2.1 LOL 33
5.2.2 VE-LOL 36
5.3 Comparison between Cycle and Conditional GAN 38
5.4 Experiments on the Landscapes datasets 39
5.4.1 First Landscapes dataset 40
5.4.2 Second Landscapes dataset 42
5.4.3 Third Landscapes dataset 44
5.4.4 Analysis of cGAN - Landscapes datasets 46
5.5 EnlightenGAN and Landscapes datasets 47
5.5.1 First Landscapes dataset 48
5.5.2 Second Landscapes dataset 50
5.5.3 Third Landscapes dataset 52
5.5.4 Analysis of EnlightenGAN - Landscapes datasets 54
Chapter 6 Conclusion 55
References 56 | - |
dc.language.iso | en | - |
dc.title | 應用生成對抗網絡與用圖像品質評估的變換器 | zh_TW |
dc.title | Applying Generative Adversarial Networks with Transformer for Image Quality Assessment | en |
dc.type | Thesis | - |
dc.date.schoolyear | 111-2 | - |
dc.description.degree | Master | - |
dc.contributor.oralexamcommittee | 傅楸善;盧瑞山;黃維中 | zh_TW |
dc.contributor.oralexamcommittee | Chiou-Shann Fuh;Ruei-shan Lu;Wei-chung Hwang | en |
dc.subject.keyword | 圖像增強,生成對抗網路,低光照圖像增強,圖像品質評估,變換器, | zh_TW |
dc.subject.keyword | Image Enhancement,Generative Adversarial Networks,Low-Light Image Enhancement,Image Quality Assessment,Transformers, | en |
dc.relation.page | 59 | - |
dc.identifier.doi | 10.6342/NTU202210041 | - |
dc.rights.note | Authorization granted (open access worldwide) | - |
dc.date.accepted | 2023-07-26 | - |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
dc.contributor.author-dept | Department of Computer Science and Information Engineering | - |
Appears in Collections: | Department of Computer Science and Information Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-111-2.pdf | 60.82 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.