Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91275

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 郭斯彥 | zh_TW |
| dc.contributor.advisor | Sy-Yen Kuo | en |
| dc.contributor.author | 呂襄 | zh_TW |
| dc.contributor.author | Hsiang Lu | en |
| dc.date.accessioned | 2023-12-20T16:16:02Z | - |
| dc.date.available | 2023-12-21 | - |
| dc.date.copyright | 2023-12-20 | - |
| dc.date.issued | 2023 | - |
| dc.date.submitted | 2023-09-05 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91275 | - |
| dc.description.abstract | 在車輛重識別任務中,白天和夜晚的光照分布不同導致域差異,是使模型辨別車輛身分性能下降的一個重要因素。由於夜間圖像的能見度不足,增強低光照圖像是解決此問題的常見方法,然而這些方法存在著一些問題:低光照的汽車圖片缺乏正常光照圖像,無法適應於監督式學習的光照提升方法;而非監督式學習的光照增強方法容易使影像產生色偏,但車輛的顏色是重要特徵,色偏問題會使得模型辨認能力下降;此外,多數方法無法適應不同程度的光照,會大幅降低車輛重識別任務在實際上的應用價值。為解決這些問題,本文提出一種全新的自校正模塊,用於提高圖像的光照程度。該模塊可以直接安裝在重識別模型前面,與重識別模型聯合訓練,不需要正常光照圖像或額外的損失函數,即可實現圖像光照提升且不影響車輛原始顏色。該模塊可以同時適應白天和夜間的不同程度光照,不須考慮原始訓練資料的光照分布問題。此外,本文提出了一個全新的車輛重識別資料集,包含了白天和夜晚的訓練和測試資料集。通過實驗證明,我們所提出的方法能夠有效提高影像的光照度,並提高重識別網路在低光照情況下的表現能力。 | zh_TW |
| dc.description.abstract | In vehicle re-identification (ReID), the difference in illumination distribution between daytime and nighttime images creates a domain discrepancy that significantly degrades a model's ability to identify vehicles. Because nighttime images have poor visibility, enhancing low-light images is a common remedy, but existing methods have several issues: low-light vehicle images lack normal-light counterparts, so supervised illumination-enhancement methods cannot be applied; unsupervised enhancement methods tend to introduce color shifts, and since vehicle color is an important feature, such shifts reduce recognition capability; moreover, most methods cannot adapt to varying degrees of illumination, which greatly limits the practical value of vehicle ReID. To address these issues, we propose a novel self-calibrated module that improves image illumination. The module can be installed directly in front of the re-identification model and trained jointly with it; it enhances illumination without normal-light reference images or additional loss functions, and without altering the vehicle's original colors. It adapts to different degrees of illumination in both daytime and nighttime scenes, regardless of the illumination distribution of the training data. In addition, this thesis introduces a new vehicle re-identification dataset containing daytime and nighttime training and testing sets. Experiments verify that the proposed method effectively improves image illumination and boosts the performance of re-identification networks under low-light conditions. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-12-20T16:16:02Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2023-12-20T16:16:02Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Committee Certification i; Abstract (Chinese) ii; Abstract iii; Contents v; List of Figures vii; List of Tables viii; Chapter 1 Introduction 1; Chapter 2 Background 7; 2.1 Backbone and Training Stage 7; 2.2 Triplet Loss 8; 2.3 ID Loss 9; 2.4 BNNeck 10; 2.5 Inference Stage 10; Chapter 3 Related Works 12; 3.1 Vehicle Re-identification 12; 3.2 Low-light Image Enhancement 14; 3.3 Differentiable Image Processing 16; Chapter 4 Proposed Method 22; 4.1 Re-Identification Network 23; 4.2 Illumination-Enhancement Network 24; 4.2.1 Architecture Formulation 25; 4.2.2 Semantic Selection Module 25; 4.2.3 Textures Extraction Module 27; 4.2.4 Illumination Calibration Module 28; 4.3 Total Loss 29; Chapter 5 Experiments and Results 31; 5.1 Dataset 31; 5.2 Evaluation Protocols 32; 5.2.1 Mean Average Precision (mAP) 32; 5.2.2 Cumulative Matching Characteristic (CMC) Curve 33; 5.3 Implementation Details 33; 5.4 Comparison with State-of-the-art Methods 34; 5.5 Ablation Study 35; 5.6 Comparison of Using Different Low-light Image Enhancement Methods in a Two-stage Strategy 38; 5.7 Evaluation on Real-World Illumination Conditions 39; Chapter 6 Conclusion 41; References 42 | - |
| dc.language.iso | en | - |
| dc.subject | 車輛重識別 | zh_TW |
| dc.subject | 自監督式學習 | zh_TW |
| dc.subject | 夜晚視覺 | zh_TW |
| dc.subject | 低光照增強 | zh_TW |
| dc.subject | Nighttime vision | en |
| dc.subject | Self-supervised learning | en |
| dc.subject | Low-light enhancement | en |
| dc.subject | Vehicle re-identification | en |
| dc.title | 低光照車輛再識別的自我增強學習 | zh_TW |
| dc.title | Self-Enhanced Learning for Low-light Vehicle Re-identification | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-1 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 黃士嘉;游家牧;雷欽隆;顏嗣鈞 | zh_TW |
| dc.contributor.oralexamcommittee | Shih-Chia Huang;Chia-Mu Yu;Chin-Laung Lei;Hsu-Chun Yen | en |
| dc.subject.keyword | 車輛重識別,低光照增強,夜晚視覺,自監督式學習 | zh_TW |
| dc.subject.keyword | Vehicle re-identification, Low-light enhancement, Nighttime vision, Self-supervised learning | en |
| dc.relation.page | 48 | - |
| dc.identifier.doi | 10.6342/NTU202301088 | - |
| dc.rights.note | 未授權 | - |
| dc.date.accepted | 2023-09-05 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 電機工程學系 | - |
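The abstract above claims that illumination can be raised without shifting vehicle color. As a toy illustration of why that is plausible (this is not the thesis's self-calibrated module; the `gain_from_luma` heuristic below is our own assumption), scaling all three channels of a pixel by one shared gain increases brightness while preserving the R:G:B ratios that encode hue:

```python
# Toy illustration (not the thesis's actual module): lifting a low-light
# pixel by a gain shared across the R, G, B channels. Because all three
# channels are scaled equally, the R:G:B ratios -- and hence the perceived
# vehicle color -- are preserved while brightness rises.

def luma(pixel):
    """Approximate perceived brightness of an (r, g, b) pixel in [0, 1]."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def gain_from_luma(y, target=0.5):
    """Hypothetical gain rule: brighten dark pixels toward `target` luma
    (capped at 4x), and leave already-bright pixels untouched."""
    if y <= 0.0:
        return 1.0
    return min(target / y, 4.0) if y < target else 1.0

def enhance(pixel):
    """Scale every channel by the same gain, clamping to [0, 1]."""
    g = gain_from_luma(luma(pixel))
    return tuple(min(c * g, 1.0) for c in pixel)

dark_red_car = (0.10, 0.02, 0.02)  # a dim, reddish pixel
bright = enhance(dark_red_car)     # brighter, same R:G:B ratios
```

A channel-shared gain is the simplest way to brighten without hue distortion; independent per-channel gains would alter the color balance, which is exactly the color-shift problem the abstract attributes to unsupervised enhancement methods.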
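Chapter 5 of the table of contents lists the two standard ReID evaluation protocols, mean Average Precision (mAP) and the Cumulative Matching Characteristic (CMC) curve. A minimal sketch of their standard definitions (function names are ours, not from the thesis):

```python
# Minimal sketch of the two standard ReID metrics named in Chapter 5.
# Each query is represented by its ranked gallery: a list of booleans,
# True where the gallery item shares the query's identity, ordered by
# descending similarity. Assumes the full gallery is ranked, so the
# number of True entries equals the number of relevant items.

def average_precision(matches):
    """Average precision for a single query's ranked match list."""
    hits, precisions = 0, []
    for rank, is_match in enumerate(matches, start=1):
        if is_match:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(all_matches):
    """mAP: average precision averaged over all queries."""
    return sum(average_precision(m) for m in all_matches) / len(all_matches)

def cmc(all_matches, k):
    """CMC rank-k: fraction of queries with a correct match in the top k."""
    return sum(any(m[:k]) for m in all_matches) / len(all_matches)

queries = [
    [True, False, True, False],   # AP = (1/1 + 2/3) / 2 = 5/6
    [False, True, False, False],  # AP = 1/2
]
# mAP = (5/6 + 1/2) / 2 = 2/3; CMC rank-1 = 0.5, rank-2 = 1.0
```

mAP rewards ranking every correct gallery image highly, while CMC rank-k only asks whether at least one correct match appears in the top k, which is why both are conventionally reported together for ReID.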
| Appears in Collections: | 電機工程學系 | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-112-1.pdf (Restricted Access) | 10.8 MB | Adobe PDF | |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.