Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93109

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 貝蘇章 | zh_TW |
| dc.contributor.advisor | Soo-Chang Pei | en |
| dc.contributor.author | 盧怡臻 | zh_TW |
| dc.contributor.author | I-Chen Lu | en |
| dc.date.accessioned | 2024-07-17T16:27:49Z | - |
| dc.date.available | 2024-07-18 | - |
| dc.date.copyright | 2024-07-17 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-06-26 | - |
| dc.identifier.citation | [1] A. G. A., Nataraja, S. Kumar, S. K. K., and S. S. Palle. A novel gabor filtering and adaptive histogram equalization method for improving images. International Journal on Recent and Innovation Trends in Computing and Communication, 2023.
[2] V. Badrinarayanan, A. Kendall, and R. Cipolla. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
[3] D. Bolya, C. Zhou, F. Xiao, and Y. J. Lee. Yolact: Real-time instance segmentation. Computer Vision and Pattern Recognition, 2019.
[4] J. Cen, Y. Wu, K. Wang, X. Li, J. Yang, Y. Pei, L. Kong, Z. Liu, and Q. Chen. Sad: Segment any rgbd. Computer Vision and Pattern Recognition, 2023.
[5] X. Chang, H. Pan, W. Sun, and H. Gao. Yoltrack: Multitask learning based real-time multiobject tracking and segmentation for autonomous vehicles. IEEE Transactions on Neural Networks and Learning Systems, 2021.
[6] C. Chen, Q. Chen, J. Xu, and V. Koltun. Learning to see in the dark. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018.
[7] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
[8] Y.-S. Chen, Y.-C. Wang, M.-H. Kao, and Y.-Y. Chuang. Deep photo enhancer: Unpaired learning for image enhancement from photographs with gans. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018.
[9] B. Cheng, M. D. Collins, Y. Zhu, T. Liu, T. S. Huang, H. Adam, and L.-C. Chen. Panoptic-deeplab: A simple, strong, and fast baseline for bottom-up panoptic segmentation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[10] B. Cheng, I. Misra, A. G. Schwing, A. Kirillov, and R. Girdhar. Masked-attention mask transformer for universal image segmentation. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
[11] B. Cheng, A. G. Schwing, and A. Kirillov. Per-pixel classification is not all you need for semantic segmentation. Computer Vision and Pattern Recognition, 2021.
[12] F. Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv:1610.02357, 2016.
[13] A. Chrungoo. Improving panoptic segmentation for nighttime or low-illumination urban driving scenes. Computer Vision and Pattern Recognition, 2023.
[14] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[15] D. de Geus, P. Meletis, and G. Dubbelman. Fast panoptic segmentation network. Computer Vision and Pattern Recognition, 2019.
[16] Q. Dong and Y. Fu. Memflow: Optical flow estimation and prediction with memory. Computer Vision and Pattern Recognition, 2024.
[17] S. F. dos Santos, R. Berriel, T. Oliveira-Santos, N. Sebe, and J. Almeida. Budget aware pruning for multi-domain learning. Computer Vision and Pattern Recognition, 2022.
[18] K. Duan, S. Bai, L. Xie, H. Qi, Q. Huang, and Q. Tian. Centernet: Keypoint triplets for object detection. Computer Vision and Pattern Recognition, 2019.
[19] M. Fan, S. Lai, J. Huang, X. Wei, Z. Chai, J. Luo, and X. Wei. Rethinking bisenet for real-time semantic segmentation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[20] M. Fan, S. Lai, J. Huang, X. Wei, Z. Chai, J. Luo, and X. Wei. Rethinking bisenet for real-time semantic segmentation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[21] G. Gao, G. Xu, Y. Yu, J. Xie, J. Yang, and D. Yue. Mscfnet: A lightweight network with multi-scale context fusion for real-time semantic segmentation. IEEE Transactions on Intelligent Transportation Systems, 2021.
[22] N. Gao, F. He, J. Jia, Y. Shan, H. Zhang, X. Zhao, and K. Huang. Panopticdepth: A unified framework for depth-aware panoptic segmentation. Computer Vision and Pattern Recognition, 2022.
[23] C. Godard, O. M. Aodha, M. Firman, and G. J. Brostow. Digging into self-supervised monocular depth prediction. Computer Vision and Pattern Recognition, 2018.
[24] C. Guo, C. Li, J. Guo, C. C. Loy, J. Hou, S. Kwong, and R. Cong. Zero-reference deep curve estimation for low-light image enhancement. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[25] X. Guo, Y. Li, and H. Ling. Lime: Low-light image enhancement via illumination map estimation. IEEE Transactions on Image Processing, 2016.
[26] X. Hao, X. Hao, Y. Zhang, Y. Li, and C. Wu. Real-time semantic segmentation with weighted factorized-depthwise convolution. Image and Vision Computing, 2021.
[27] J. He, L. W. Yifan Wang, H. Lu, J.-Y. He, J.-P. Lan, B. Luo, Y. Geng, and X. Xie. Towards deeply unified depth-aware panoptic segmentation with bidirectional guidance learning. Computer Vision and Pattern Recognition, 2023.
[28] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. 2017 IEEE International Conference on Computer Vision (ICCV), 2017.
[29] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv:1512.03385, 2015.
[30] Q. He, X. Sun, Z. Yan, and K. Fu. Dabnet: Deformable contextual and boundary-weighted network for cloud detection in remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 2021.
[31] W. Hong, Q. Guo, W. Zhang, and J. C. W. Chu. Lpsnet: A lightweight solution for fast panoptic segmentation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[32] R. Hou, J. Li, A. Bhargava, A. Raventos, V. Guizilini, C. Fang, J. Lynch, and A. Gaidon. Real-time panoptic segmentation from dense detections. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[33] L. Hoyer, D. Dai, H. Wang, and L. V. Gool. Mic: Masked image consistency for context-enhanced domain adaptation. Computer Vision and Pattern Recognition, 2023.
[34] J. Hu, L. Huang, T. Ren, S. Zhang, R. Ji, and L. Cao. You only segment once: Towards real-time panoptic segmentation. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
[35] W. Hua and Y. Xia. Low-light image enhancement based on joint generative adversarial network and image quality assessment. 2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), 2018.
[36] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. Computer Vision and Pattern Recognition, 2016.
[37] J. J. Jeon, J. Y. Park, and I. K. Eom. Low-light image enhancement using gamma correction prior in mixed color spaces. Pattern Recognition, 2023.
[38] W. Jiang, Z. Xie, Y. Li, C. Liu, and H. Lu. Lrnnet: A light-weighted network with efficient reduced non-local operation for real-time semantic segmentation. 2020 IEEE International Conference on Multimedia Expo Workshops (ICMEW), 2020.
[39] Y. Jiang, X. Gong, D. Liu, Y. Cheng, C. Fang, X. Shen, J. Yang, P. Zhou, and Z. Wang. Enlightengan: Deep light enhancement without paired supervision. IEEE Transactions on Image Processing, 2021.
[40] G. Kim, D. Kwon, and J. Kwon. Low-lightgan: Low-light enhancement via advanced generative adversarial network with task-driven training. 2019 IEEE International Conference on Image Processing (ICIP), 2019.
[41] A. Kirillov, R. Girshick, K. He, and P. Dollár. Panoptic feature pyramid networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[42] A. Kirillov, Y. Wu, K. He, and R. Girshick. Pointrend: Image segmentation as rendering. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[43] J. P. Lagos and E. Rahtu. Semsegdepth: A combined model for semantic segmentation and depth completion. Computer Vision and Pattern Recognition, 2022.
[44] J. Lee, D. Shiotsuka, G. Bang, Y. Endo, T. Nishimori, K. Nakao, and S. Kamijo. Day-to-night image translation via transfer learning to keep semantic information for driving simulator. IATSS Research, 2023.
[45] H. Li, P. Xiong, H. Fan, and J. Sun. Dfanet: Deep feature aggregation for real-time semantic segmentation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[46] Y. Li, H. Qi, J. Dai, X. Ji, and Y. Wei. Fully convolutional instance-aware semantic segmentation. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[47] Y. Li, H. Zhao, X. Qi, Y. Chen, L. Qi, L. Wang, Z. Li, J. Sun, and J. Jia. Fully convolutional networks for panoptic segmentation with point-based supervision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
[48] J. Liu, Y. Wang, Y. Li, J. Fu, J. Li, and H. Lu. Collaborative deconvolutional neural networks for joint depth estimation and semantic segmentation. IEEE Transactions on Neural Networks and Learning Systems, 2018.
[49] R. Liu, L. Ma, T. Ma, X. Fan, and Z. Luo. Learning with nested scene modeling and cooperative architecture search for low-light vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
[50] R. Liu, K. Yang, A. Roitberg, J. Zhang, H. L. Kunyu Peng, Y. Wang, and R. Stiefelhagen. Transkd: Transformer knowledge distillation for efficient semantic segmentation. Computer Vision and Pattern Recognition, 2022.
[51] L. Ma, T. Ma, R. Liu, X. Fan, and Z. Luo. Toward fast, flexible, and robust low-light image enhancement. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
[52] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. European Conference on Computer Vision (ECCV), 2018.
[53] S. Mehta, M. Rastegari, A. Caspi, L. Shapiro, and H. Hajishirzi. Espnet: Efficient spatial pyramid of dilated convolutions for semantic segmentation. European Conference on Computer Vision, 2018.
[54] M. Oršic, I. Krešo, P. Bevandic, and S. Šegvic. In defense of pre-trained imagenet architectures for real-time semantic segmentation of road-driving images. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[55] A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv:1606.02147, 2016.
[56] S. K. Prathapaneni, S. Shashank, and S. R. K. Lowdino – a low parameter self supervised learning model. Computer Vision and Pattern Recognition, 2023.
[57] S. Qiao, Y. Zhu, H. Adam, A. Yuille, and L.-C. Chen. Vip-deeplab: Learning visual perception with depth-aware video panoptic segmentation. Computer Vision and Pattern Recognition, 2020.
[58] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. Computer Vision and Pattern Recognition, 2016.
[59] E. Romera, J. M. Álvarez, L. M. Bergasa, and R. Arroyo. Erfnet: Efficient residual factorized convnet for real-time semantic segmentation. IEEE Transactions on Intelligent Transportation Systems, 2017.
[60] H. Sakaino. Panopticvis: Integrated panoptic segmentation for visibility estimation at twilight and night. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2023.
[61] C. Sakaridis, D. Dai, and L. V. Gool. Map-guided curriculum domain adaptation and uncertainty-aware evaluation for semantic nighttime image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
[62] C. Sakaridis, D. Dai, and L. V. Gool. Acdc: The adverse conditions dataset with correspondences for semantic driving scene understanding. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021.
[63] E. Schwartz, R. Giryes, and A. M. Bronstein. Deepisp: Toward learning an end-to-end image processing pipeline. IEEE Transactions on Image Processing, 2018.
[64] M. Shi, J. Shen, Q. Yi, J. Weng, Z. Huang, A. Luo, and Y. Zhou. Lmffnet: A well-balanced lightweight network for fast and accurate semantic segmentation. IEEE Transactions on Neural Networks and Learning Systems, 2022.
[65] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
[66] S. Singhal, S. Nanduri, Y. Raghav, and A. S. Parihar. Lrd-net: A lightweight deep network for low-light image enhancement. 2021 3rd International Conference on Signal Processing and Communication (ICPSC), 2021.
[67] F. Siruo. Logistics distribution route optimization algorithm based on deep learning. 2022 Second International Conference on Advanced Technologies in Intelligent Control, Environment, Computing Communication Engineering (ICATIECE), 2022.
[68] C.-C. Sun, S.-J. Ruan, M.-C. Shie, and T.-W. Pai. Dynamic contrast enhancement based on histogram specification. IEEE Transactions on Consumer Electronics, 2005.
[69] X. Tan, K. Xu, Y. Cao, Y. Zhang, L. Ma, and R. W. H. Lau. Night-time scene parsing with a large real dataset. IEEE Transactions on Image Processing, 2021.
[70] Z. Teed and J. Deng. Raft: Recurrent all pairs field transforms for optical flow. European Conference on Computer Vision, 2020.
[71] Y. Wang, Q. Zhou, J. Liu, J. Xiong, G. Gao, X. Wu, and L. J. Latecki. Lednet: A lightweight encoder-decoder network for real-time semantic segmentation. 2019 IEEE International Conference on Image Processing (ICIP), 2019.
[72] Y. Wang, Q. Zhou, and X. Wu. Esnet: An efficient symmetric network for real-time semantic segmentation. arXiv:1906.09826, 2019.
[73] C. Wei, W. Wang, W. Yang, and J. Liu. Deep retinex decomposition for low light enhancement. arXiv:1808.04560, 2018.
[74] T. Wu, S. Tang, R. Zhang, J. Cao, and Y. Zhang. Cgnet: A light-weight context guided network for semantic segmentation. IEEE Transactions on Image Processing, 2020.
[75] X. Wu, Z. Wu, H. Guo, L. Ju, and S. Wang. Dannet: A one-stage domain adaptation network for unsupervised nighttime semantic segmentation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[76] R. Xia, C. Zhao, M. Zheng, Z. Wu, Q. Sun, and Y. Tang. Cmda: Cross-modality domain adaptation for nighttime semantic segmentation. Computer Vision and Pattern Recognition, 2023.
[77] Y. Xiong, R. Liao, H. Zhao, R. Hu, M. B. E. Yumer, and R. Urtasun. A unified panoptic segmentation network. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[78] H. Xu, J. Zhang, J. Cai, H. Rezatofighi, and D. Tao. Gmflow: Learning optical flow via global matching. Computer Vision and Pattern Recognition, 2022.
[79] C. Yin, J. Tang, T. Yuan, Z. Xu, and Y. Wang. Bridging the gap between semantic segmentation and instance segmentation. IEEE Transactions on Multimedia, 2021.
[80] J.-Q. Yu and S.-C. Pei. Panoptic-depth color map for combination of depth and image segmentation. Computer Vision and Pattern Recognition, 2023.
[81] X. Zhang, B. Du, Z. Wu, and T. Wan. Laanet: Lightweight attention-guided asymmetric network for real-time semantic segmentation. Neural Computing and Applications, 2022.
[82] X. Zhang, P. Shen, L. Luo, L. Zhang, and J. Song. Kindling the darkness: A practical low-light image enhancer. Proceedings of the 27th ACM International Conference on Multimedia, 2019.
[83] H. Zhao, X. Qi, X. Shen, J. Shi, and J. Jia. Icnet for real-time semantic segmentation on high-resolution images. European Conference on Computer Vision (ECCV), 2018.
[84] Z. Zhao, B. Xiong, L. Wang, Q. Ou, L. Yu, and F. Kuang. Retinexdip: A unified deep framework for low-light image enhancement. IEEE Transactions on Circuits and Systems for Video Technology, 2021.
[85] Q. Zhong and J.-L. Wang. Neural networks for partially linear quantile regression. Statistics Theory, 2021.
[86] J. Šarić, M. Oršić, and S. Šegvić. Panoptic swiftnet: Pyramidal fusion for real time panoptic segmentation. Computer Vision and Pattern Recognition, 2022. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93109 | - |
| dc.description.abstract | 隨著自動駕駛技術的進步,夜間駕駛場景的挑戰變得越來越明顯。在低光源條件下,影像清晰度和可見度受到嚴重影響,對自動駕駛系統的性能和安全性構成挑戰。本研究探索低光增強、深度估計和影像分割等技術,以提高自動駕駛汽車在夜間駕駛場景中的表現。首先,採用低光源增強技術增強夜間駕駛場景影像的亮度和對比度,進而改善駕駛者和系統對環境的感知。其次,深度估計技術可以準確估計夜間場景中物體的距離和位置信息,為自動駕駛系統提供關鍵的環境感知數據。最後,影像分割技術可以精確識別和分割夜間駕駛場景中的各種物體,為自動駕駛汽車提供更準確、更可靠的環境理解。除此之外,我們提出了名為 Panoptic-LMFFNet、NLP-Net、Panoptic-DLMFFNet 和 NLPD 的模型,它們分別處理白天和夜間的即時全景分割和全景深度分割。這項研究的結果表明,結合微光增強、深度估計和影像分割技術可以顯著提高自動駕駛汽車在夜間駕駛場景中的性能和安全性。 | zh_TW |
| dc.description.abstract | As autonomous vehicle technology advances, the challenges of nighttime driving scenes become increasingly apparent. In low-light conditions, image clarity and visibility are severely compromised, posing challenges to the performance and safety of autonomous driving systems. This study explores techniques such as low-light enhancement, depth estimation, and image segmentation to improve the performance of autonomous vehicles in nighttime driving scenes. First, low-light enhancement techniques are employed to improve the brightness and contrast of images in nighttime driving scenes, thereby improving the perception of the environment by both drivers and systems. Second, depth estimation techniques accurately estimate the distance and location of objects in nighttime scenes, providing crucial environmental perception data for autonomous driving systems. Finally, image segmentation techniques precisely identify and segment the objects in nighttime driving scenes, giving autonomous vehicles a more accurate and reliable understanding of their environment. In addition, we propose models named Panoptic-LMFFNet, NLP-Net, Panoptic-DLMFFNet, and NLPD, which handle real-time panoptic segmentation and panoptic depth segmentation in daytime and nighttime scenes, respectively. The results of this study demonstrate that combining low-light enhancement, depth estimation, and image segmentation techniques significantly improves the performance and safety of autonomous vehicles in nighttime driving scenes. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-07-17T16:27:49Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-07-17T16:27:49Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Verification Letter from the Oral Examination Committee
Acknowledgements
摘要
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Night-Time Autonomous Driving Scene
1.2 Real-time Processing
1.3 Domain Adaption
1.4 Optical Flow
1.5 Summary
Chapter 2 Real-time and Nighttime Low Light Enhancement
2.1 Introduction
2.2 Related Works
2.2.1 Traditional Image Processing Methods
2.2.2 Paired Datasets Methods
2.2.3 Retinex-Based Methods
2.2.4 Adversarial Learning Methods
2.2.5 Unsupervised Methods
2.3 Proposed Method
2.3.1 Overview Of The Framework
2.3.2 Loss Function
2.3.3 Evaluation Metrics
2.3.4 Experiment Results
2.4 Summary
Chapter 3 Real-time and Nighttime Semantic Segmentation
3.1 Introduction
3.2 Related Works
3.2.1 Encoder Methods
3.2.2 Decoder Methods
3.2.3 Night-Time Semantic Segmentation
3.2.4 Real-Time Semantic Segmentation
3.3 Proposed Method
3.3.1 Overview Of The Framework
3.3.2 Loss Function
3.3.3 Evaluation Metrics
3.4 Experiment Results
3.5 Summary
Chapter 4 Real-time and Nighttime Panoptic Segmentation
4.1 Introduction
4.2 Related Works
4.2.1 Semantic Segmentation
4.2.2 Instance Segmentation
4.2.3 Panoptic Segmentation
4.2.4 Night-Time Panoptic Segmentation
4.2.5 Real-Time Panoptic Segmentation
4.3 Proposed Method
4.3.1 Overview Of The Framework
4.3.2 Loss Function
4.3.3 Evaluation Metrics
4.4 Experiment Results
4.5 Summary
Chapter 5 Real-time and Nighttime Panoptic Depth Segmentation
5.1 Introduction
5.2 Related Works
5.2.1 Depth Estimation
5.2.2 Panoptic Depth Segmentation
5.3 Proposed Method
5.3.1 Overview Of The Framework
5.3.2 Loss Function
5.3.3 Evaluation Metrics
5.4 Experiment Results
5.5 Summary
Chapter 6 Conclusion
References | - |
| dc.language.iso | en | - |
| dc.subject | 影片 | zh_TW |
| dc.subject | 夜間 | zh_TW |
| dc.subject | 即時 | zh_TW |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 機器學習 | zh_TW |
| dc.subject | 輕量化 | zh_TW |
| dc.subject | 低光源亮度增強 | zh_TW |
| dc.subject | 領域適應 | zh_TW |
| dc.subject | 深度估計 | zh_TW |
| dc.subject | 光流 | zh_TW |
| dc.subject | 語意分割 | zh_TW |
| dc.subject | 實例分割 | zh_TW |
| dc.subject | 全景分割 | zh_TW |
| dc.subject | 全景深度分割 | zh_TW |
| dc.subject | 影像 | zh_TW |
| dc.subject | Image | en |
| dc.subject | Video | en |
| dc.subject | Nighttime | en |
| dc.subject | Real-time | en |
| dc.subject | Deep Learning | en |
| dc.subject | Machine Learning | en |
| dc.subject | Lightweight | en |
| dc.subject | Low-Light Enhancement | en |
| dc.subject | Domain Adaption | en |
| dc.subject | Depth Estimation | en |
| dc.subject | Optical Flow | en |
| dc.subject | Semantic Segmentation | en |
| dc.subject | Instance Segmentation | en |
| dc.subject | Panoptic Segmentation | en |
| dc.subject | Panoptic Depth Segmentation | en |
| dc.title | 夜間暨低光源下自駕即時影像分割、深度及優化模組 | zh_TW |
| dc.title | Low Light Enhancement, Depth and Image Segmentation in Real-time and Nighttime Driving Scene Video for Autonomous Vehicles | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.coadvisor | 丁建均 | zh_TW |
| dc.contributor.coadvisor | Jian-Jiun Ding | en |
| dc.contributor.oralexamcommittee | 吳家麟;鍾國亮;杭學鳴 | zh_TW |
| dc.contributor.oralexamcommittee | Ja-Ling Wu;Kuo-Liang Chung;Hsueh-Ming Hang | en |
| dc.subject.keyword | 影像,影片,夜間,即時,深度學習,機器學習,輕量化,低光源亮度增強,領域適應,深度估計,光流,語意分割,實例分割,全景分割,全景深度分割, | zh_TW |
| dc.subject.keyword | Image,Video,Nighttime,Real-time,Deep Learning,Machine Learning,Lightweight,Low-Light Enhancement,Domain Adaption,Depth Estimation,Optical Flow,Semantic Segmentation,Instance Segmentation,Panoptic Segmentation,Panoptic Depth Segmentation, | en |
| dc.relation.page | 109 | - |
| dc.identifier.doi | 10.6342/NTU202401061 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2024-06-27 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 電信工程學研究所 | - |
| dc.date.embargo-lift | 2024-09-01 | - |
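
The abstract above describes a per-frame pipeline that chains low-light enhancement, depth estimation, and panoptic segmentation for nighttime driving video. The Python sketch below is a minimal illustration of how such a pipeline could be wired together; it is not the thesis's implementation. The gamma-correction step is a standard classical enhancement technique, while `segment_panoptic` and `estimate_depth` are hypothetical placeholder stubs standing in for learned networks such as the proposed Panoptic-LMFFNet, NLP-Net, Panoptic-DLMFFNet, and NLPD models, whose code is not included in this record.

```python
# Illustrative sketch only: a nighttime driving pipeline that chains
# low-light enhancement -> panoptic segmentation -> depth estimation per frame.
# The gamma-correction enhancement is a real classical technique; the two
# "model" functions are placeholder stubs, not the thesis's trained networks.

import numpy as np


def enhance_low_light(frame: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Brighten a dark RGB uint8 frame with simple gamma correction."""
    normalized = frame.astype(np.float32) / 255.0
    brightened = np.power(normalized, gamma)  # gamma < 1 lifts dark regions
    return (brightened * 255.0).astype(np.uint8)


def segment_panoptic(frame: np.ndarray) -> np.ndarray:
    """Placeholder panoptic head: returns a per-pixel label map (all zeros here)."""
    h, w, _ = frame.shape
    return np.zeros((h, w), dtype=np.int32)


def estimate_depth(frame: np.ndarray) -> np.ndarray:
    """Placeholder monocular depth head: returns a constant depth map in meters."""
    h, w, _ = frame.shape
    return np.full((h, w), 10.0, dtype=np.float32)


def process_frame(frame: np.ndarray) -> dict:
    """Run enhancement, then segmentation and depth on the enhanced frame."""
    enhanced = enhance_low_light(frame)
    labels = segment_panoptic(enhanced)
    depth = estimate_depth(enhanced)
    # "Panoptic depth" style output: every pixel has a class label and a depth value.
    return {"enhanced": enhanced, "panoptic": labels, "depth": depth}


if __name__ == "__main__":
    # Simulate a short, dark nighttime clip (5 random low-intensity RGB frames).
    video = (np.random.rand(5, 360, 640, 3) * 40).astype(np.uint8)
    for i, frame in enumerate(video):
        out = process_frame(frame)
        print(f"frame {i}: mean brightness {out['enhanced'].mean():.1f}, "
              f"depth range {out['depth'].min():.1f}-{out['depth'].max():.1f} m")
```

In a real system each stub would be replaced by a trained lightweight network and the loop would run on a live camera stream under a real-time latency budget.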
Appears in Collections: 電信工程學研究所
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-112-2.pdf | 78.2 MB | Adobe PDF | View/Open |
