Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92548

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 郭斯彥 | zh_TW |
| dc.contributor.advisor | Sy-Yen Kuo | en |
| dc.contributor.author | 陳韋廷 | zh_TW |
| dc.contributor.author | Wei-Ting Chen | en |
| dc.date.accessioned | 2024-04-10T16:13:06Z | - |
| dc.date.available | 2024-04-11 | - |
| dc.date.copyright | 2024-04-10 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-04-08 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92548 | - |
| dc.description.abstract | 在數位時代中,對於自動駕駛、醫療診斷等多種應用領域,高品質視覺數據的需求顯得尤為重要。然而,由於惡劣天氣、低光照、模糊及噪聲等因素,真實環境中捕獲的影像常常受損,這不僅妨礙了人類的判斷能力,也減弱了機器學習模型在進行物體檢測、語義分割等下游任務的效能。面對這些挑戰,從精準的影像質量評估開始,到運用先進的影像恢復技術變得至關重要。隨著深度學習技術的進步,我們現在有能力利用深度學習模型從數據中學習複雜的模式和特徵,自動檢測和修正影像質量問題,達到顯著的進步。
本研究通過深度學習技術,針對受損影像系統性地開發了從影像質量評估到二維影像及三維體積重建的相關應用。此過程涵蓋了從初步的影像質量評估,到二維影像的質量提升,進而到三維體積的重建技術,並最終探討了如何從機器的視角出發,對受損影像進行處理,以提升下游機器學習任務的性能。透過一系列廣泛的實驗,本研究不僅在多種受損情況下顯著提升了影像質量,也展示了我們的系統如何直接增強下游任務的性能,為複雜環境中高品質影像處理和分析提供了新的見解和工具。 | zh_TW |
| dc.description.abstract | In the digital era, the demand for high-quality visual data has become increasingly critical for a variety of applications, including autonomous driving and medical diagnostics. However, images captured in real-world conditions often suffer from degradation due to adverse weather, low lighting, blur, and noise. These factors not only hinder human judgment but also impair the performance of machine learning models in tasks such as object detection and semantic segmentation. Addressing these challenges, from precise image quality assessment to advanced image restoration, has become essential. With advances in deep learning, models can now learn complex patterns and features directly from data, automatically detecting and correcting image quality issues with significant improvements.
This study uses deep learning to systematically develop applications for degraded images, ranging from image quality assessment to the restoration of two-dimensional images and the reconstruction of three-dimensional volumes. It further explores how to process degraded images from a machine-centric perspective to improve the performance of downstream machine learning tasks. Through a series of extensive experiments, this research not only significantly improves image quality under various degradation conditions but also demonstrates how our system directly enhances downstream task performance, providing new insights and tools for high-quality image processing and analysis in complex environments. (A minimal conceptual sketch of this pipeline appears after the metadata table below.) | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-04-10T16:13:06Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-04-10T16:13:06Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 摘要 iii
Abstract v
Contents vii
List of Figures xiii
List of Tables xxi
Chapter 1 Introduction 1
1.1 Image Quality Assessment 3
1.2 2D Image Restoration 6
1.2.1 Single Image Haze Removal 8
1.2.2 Single Image Snow Removal 13
1.3 3D Volume Reconstruction 16
1.4 Improve Machine Vision under Degraded Scenes 20
Chapter 2 Related Work 25
2.1 Image Quality Assessment 25
2.1.1 Quality Assessment of Face Images 25
2.1.2 General Image Quality Assessment 26
2.1.3 Face Landmark Detection 27
2.2 2D Image Restoration 28
2.2.1 Single Image Haze Removal 28
2.2.2 Single Image Snow Removal 31
2.3 Generative Adversarial Networks (GANs) 33
2.4 3D Volume Reconstruction 33
2.4.1 3D Scene Reconstruction with NeRF 33
2.4.2 Estimating Scattering Coefficients 34
2.4.3 Seeing Through Scattering Media 34
2.5 Enhancing Machine Vision in Degraded Conditions 35
2.5.1 High-level Vision Tasks in Degraded Scenarios 35
2.5.2 Segment Anything Model (SAM) 36
Chapter 3 Generic Face Image Quality Assessment 37
3.1 Proposed Method 38
3.1.1 Model Overview 38
3.1.2 Self-Supervised Dual-Set Degradation Representation Learning 40
3.1.2.1 Patch-based Degradation Learning 40
3.1.2.2 Our Solution 41
3.1.3 Landmark-guided GFIQA 44
3.1.4 Loss Functions 46
3.2 Experimental Results 47
3.2.1 Experiment Settings 47
3.2.2 Training Details 49
3.2.3 Performance Evaluation 52
3.2.4 Ablation Study 52
3.3 Discussion 56
Chapter 4 2D Image Restoration: Haze Removal 59
4.1 Dark Channel Prior Limitations 59
4.2 Patch Map 61
4.3 Patch Map Selection Net 63
4.4 Patch Map-Based Hybrid Learning DehazeNet 66
4.5 Hybrid Learning Dehaze Network 74
4.6 Experimental Result 79
4.6.1 Dataset and Training Detail 79
4.6.2 Model Complexity Analysis 81
4.6.3 Quantitative Analysis on the Synthetic Dataset 82
4.6.4 Dehazed Results on Real-World Images 85
4.6.5 Ablation Study 90
4.6.6 Analysis on Learned Latent Statistical Regularities 94
4.6.7 Analysis on Maximal Patch Size Selection 96
4.6.8 Limitation 98
Chapter 5 2D Image Restoration: Snow Removal 99
5.1 Modified Snow Model Formulation 100
5.2 Joint Size and Transparency-Aware Snow Removal 101
5.2.1 Architecture of Snow Removal 101
5.2.2 Veiling Effect Removal 108
5.3 Experimental Result 111
5.3.1 Dataset Generation 111
5.3.2 Training Details 112
5.3.3 Comparison with State-of-the-art Methods 113
5.3.4 Ablation Study 117
Chapter 6 3D Volume Reconstruction 119
6.1 Method 120
6.1.1 Preliminary on Neural Radiance Fields 120
6.1.2 3D Haze Formation 120
6.1.3 Haze-aware Neural Radiance Field 122
6.1.4 Implementation Details 127
6.2 Implementation Details 128
6.3 Experiments with Synthetic Scenes 131
6.3.1 Synthetic Data Generation 131
6.3.2 Comparative Analysis 132
6.3.3 Ablation Study 135
6.4 Experimentally Captured Results 136
6.4.1 Data Collection 136
6.4.2 Implementation Details 137
6.4.3 Comparison 138
6.5 Discussion 139
Chapter 7 Improve Machine Vision under Degraded Scenes 143
7.1 Proposed Method 144
7.1.1 Preliminary: Segment Anything Model (SAM) 144
7.1.2 Robust Segment Anything Model (RobustSAM) 144
7.1.2.1 Model Overview 145
7.1.2.2 Anti-Degradation Mask Feature Generation 147
7.1.2.3 Anti-Degradation Output Token Generation 149
7.1.2.4 Overall Loss 150
7.2 Implementation Details 150
7.2.1 Dataset for Training and Evaluation of RobustSAM 150
7.2.2 Training Detail 151
7.2.3 Evaluation Protocol 152
7.3 Experimental Results 153
7.3.1 Performance Evaluation 155
7.3.2 Ablation Study 157
7.3.3 Different Backbones in RobustSAM 160
7.3.4 Token Visualization 160
7.3.5 Improving SAM-prior Tasks 161
Chapter 8 Conclusion 163
Chapter 9 Limitation and Future Work 165
References 167 | - |
| dc.language.iso | zh_TW | - |
| dc.subject | 二維影像還原 | zh_TW |
| dc.subject | 影像質量評估 | zh_TW |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 強健的電腦視覺系統 | zh_TW |
| dc.subject | 三維體積重建 | zh_TW |
| dc.subject | 3D Volume Reconstruction | en |
| dc.subject | Robust Computer Vision System | en |
| dc.subject | Deep Learning | en |
| dc.subject | 2D Image Restoration | en |
| dc.subject | Image Quality Assessment | en |
| dc.title | 劣化場景下的視覺研究 | zh_TW |
| dc.title | Vision in Degraded Scenes | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | 博士 | - |
| dc.contributor.oralexamcommittee | 雷欽隆;顏嗣鈞;丁建均;黃彥男;陳俊良;游家牧;林宗男 | zh_TW |
| dc.contributor.oralexamcommittee | Chin-Laung Lei;Hsu-Chun Yen;Jian-Jiun Ding;Yen-Nun Huang;Jiann-Liang Chen;Chia-Mu Yu;Tsung-Nan Lin | en |
| dc.subject.keyword | 影像質量評估,二維影像還原,三維體積重建,強健的電腦視覺系統,深度學習 | zh_TW |
| dc.subject.keyword | Image Quality Assessment, 2D Image Restoration, 3D Volume Reconstruction, Robust Computer Vision System, Deep Learning | en |
| dc.relation.page | 206 | - |
| dc.identifier.doi | 10.6342/NTU202400832 | - |
| dc.rights.note | 未授權 | - |
| dc.date.accepted | 2024-04-08 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 電子工程學研究所 | - |
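The abstract above describes a quality-gated pipeline: assess the quality of an input image, restore it when it appears degraded, then hand it to a downstream vision task. Below is a minimal conceptual sketch of that flow; all names (`assess_quality`, `restore`, `segment`, `threshold`) are hypothetical placeholders standing in for the thesis's IQA, restoration, and RobustSAM-style components, not code from the thesis itself.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class PipelineResult:
    quality_score: float  # predicted no-reference quality, e.g. in [0, 1]
    restored: bool        # whether restoration was applied before inference
    prediction: Any       # downstream output (e.g. segmentation masks)

def process(image: Any,
            assess_quality: Callable[[Any], float],  # IQA model (cf. Chapter 3)
            restore: Callable[[Any], Any],           # 2D restoration (cf. Chapters 4-5)
            segment: Callable[[Any], Any],           # downstream task, e.g. SAM (cf. Chapter 7)
            threshold: float = 0.5) -> PipelineResult:
    """Assess quality first; restore only when the image looks degraded."""
    score = assess_quality(image)
    degraded = score < threshold
    if degraded:
        image = restore(image)  # e.g. dehazing or desnowing
    return PipelineResult(quality_score=score,
                          restored=degraded,
                          prediction=segment(image))

# Trivial stand-ins show the control flow; real models would replace them:
# result = process(img,
#                  assess_quality=lambda x: 0.3,   # low score -> "degraded"
#                  restore=lambda x: x,
#                  segment=lambda x: {"masks": []})
```

The gate reflects the abstract's machine-centric framing: restoration is applied in service of the downstream task rather than run unconditionally on every input.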
| Appears in Collections: | 電子工程學研究所 | |
Files in this item:
| File | Size | Format | |
|---|---|---|---|
| ntu-112-2.pdf (not authorized for public access) | 55.67 MB | Adobe PDF | |