Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/67452
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 貝蘇章 | |
dc.contributor.author | Tzu-Ting Tseng | en |
dc.contributor.author | 曾子庭 | zh_TW |
dc.date.accessioned | 2021-06-17T01:32:49Z | - |
dc.date.available | 2017-08-04 | |
dc.date.copyright | 2017-08-04 | |
dc.date.issued | 2017 | |
dc.date.submitted | 2017-08-02 | |
dc.identifier.citation | REFERENCE
[1] Y. Liu, S. Liu, and Z. Wang, "A general framework for image fusion based on multi-scale transform and sparse representation," 2014.
[2] J. Shen, Y. Zhao, S. Yan, and X. Li, "Exposure fusion using boosting Laplacian pyramid," 2014.
[3] T. Mertens, J. Kautz, and F. Van Reeth, "Exposure fusion: A simple and practical alternative to high dynamic range photography," 2009.
[4] D. Sale, R. Bhokare, and M. A. Joshi, "Multiresolution image fusion approach for image enhancement," 2015.
[5] J. Malik and P. Perona, "Preattentive texture discrimination with early vision mechanisms," Journal of the Optical Society of America, vol. 7, no. 5, pp. 923–932, May 1990.
[6] J. M. Ogden, E. H. Adelson, J. R. Bergen, and P. J. Burt, "Pyramid-based computer graphics," RCA Engineer, vol. 30, no. 5, 1985.
[7] P. Romaniak, L. Janowski, M. Leszczuk, and Z. Papir, "A no reference metric for the quality assessment of videos affected by exposure distortion," in Proc. IEEE ICME, 2011, pp. 1–6.
[8] R. Raskar, A. Ilie, and J. Yu, "Image fusion for context enhancement and video surrealism," in Proc. NPAR, 2004, pp. 85–95.
[9] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, pp. 1397–1409, 2013.
[10] X. Yang, W. Lin, Z. Lu, E. P. Ong, and S. Yao, "Motion-compensated residue pre-processing in video coding based on just-noticeable distortion profile," IEEE Trans. Circuits Syst. Video Technol., vol. 15, no. 6, pp. 742–752, Jun. 2005.
[11] A. Liu, W. Lin, M. Paul, C. Deng, and F. Zhang, "Just noticeable difference for images with decomposition model for separating edge and textured regions," IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 11, pp. 1648–1652, Nov. 2010.
[12] A. Goshtasby, "Fusion of multiexposure images," Image Vision Comput., vol. 23, no. 6, pp. 611–618, 2005.
[13] H. Kolb, "How the retina works," American Scientist, vol. 91, no. 1, pp. 28–35, 2003.
[14] E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging: Acquisition, Display and Image-Based Lighting, 2nd ed. San Mateo, CA, USA: Morgan Kaufmann, 2010.
[15] K.-L. Hua and C.-H. Huang, "Random walks on graphs for background extraction."
[16] X. Qin, J. Shen, X. Mao, X. Li, and Y. Jia, "Robust match fusion using optimization," 2015.
[17] R. Hersh and R. Griego, "Brownian motion and potential theory," Scientific American, vol. 220, pp. 67–74, 1969.
[18] Y. S. Heo, K. M. Lee, and S. U. Lee, "Robust stereo matching using adaptive normalized cross-correlation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 4, pp. 807–822, Apr. 2011.
[19] S. Mann and R. W. Picard, "On being 'undigital' with digital cameras: Extending dynamic range by combining differently exposed pictures," in Proc. 48th Annu. Conf. IS&T, 1995, pp. 422–428.
[20] Z. Zhou, M. Dong, X. Xie, and Z. Gao, "Fusion of infrared and visible images for night-vision context enhancement," 2016.
[21] "Fusion of infrared and visible images using optimal weights."
[22] X. Zhou, C. Yang, and W. Yu, "Moving object detection by detecting contiguous outliers in the low-rank representation," 2012.
[23] A. Yilmaz, O. Javed, and M. Shah, "Object tracking: A survey," ACM Computing Surveys, vol. 38, no. 4, pp. 1–45, 2006.
[24] T. Moeslund, A. Hilton, and V. Kruger, "A survey of advances in vision-based human motion capture and analysis," Comput. Vis. Image Und., vol. 104, no. 2–3, pp. 90–126, 2006.
[25] A. Akerman III, "Pyramidal techniques for multisensor fusion," in Applications in Optical Science and Engineering, International Society for Optics and Photonics, 1992, pp. 124–131.
[26] A. Toet, L. J. Van Ruyven, and J. M. Valeton, "Merging thermal and visual images by a contrast pyramid," Opt. Eng., vol. 28, no. 7, pp. 789–792, 1989.
[27] P. R. Hill, C. N. Canagarajah, and D. R. Bull, "Image fusion using complex wavelets," in British Machine Vision Conference, 2002, pp. 1–10.
[28] J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and N. Canagarajah, "Pixel- and region-based image fusion with complex wavelets," Inf. Fusion, vol. 8, pp. 119–130, 2007.
[29] S. Li and B. Yang, "Hybrid multiresolution method for multisensor multimodal image fusion," IEEE Sens. J., vol. 10, pp. 1519–1526, 2010.
[30] S. Zhenfeng, L. Jun, and C. Qimin, "Fusion of infrared and visible images based on focus measure operators in the curvelet domain," Appl. Opt., vol. 51, pp. 1910–1921, 2012.
[31] T. A. Wilson, S. K. Rogers, and M. Kabrisky, "Perceptual-based image fusion for hyperspectral data," IEEE Trans. Geosci. Remote Sens., vol. 35, pp. 1007–1017, 1997.
[32] H. Chen and P. K. Varshney, "A human perception inspired quality metric for image fusion based on regional information," Inf. Fusion, vol. 8, pp. 193–207, 2007.
[33] J. L. Mannos and D. J. Sakrison, "The effects of a visual fidelity criterion on the encoding of images," IEEE Trans. Inf. Theory, vol. 20, pp. 525–536, 1974.
[34] R. Hassen, Z. Wang, and M. Salama, "Multifocus image fusion using local phase coherence measurement," 2009.
[35] P. J. Burt and R. J. Kolczynski, "Enhanced image capture through fusion," in Proc. Fourth Int. Conf. Computer Vision, Berlin, Germany, 1993, pp. 173–182.
[36] P. J. Burt, "The pyramid as a structure for efficient computation," in Multiresolution Image Processing and Analysis, A. Rosenfeld, Ed. Springer-Verlag, 1984, pp. 6–35.
[37] H. Li, B. S. Manjunath, and S. K. Mitra, "Multisensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, no. 3, pp. 235–245, 1995.
[38] A. Toet, L. J. van Ruyven, and J. M. Valeton, "Merging thermal and visual images by a contrast pyramid," Opt. Eng., vol. 28, no. 7, pp. 789–792, 1989.
[39] T.-H. H. Lee, "Edge detection analysis."
[40] W. Frei and C. Chen, "Fast boundary detection: A generalization and new algorithm," IEEE Trans. Computers, vol. C-26, no. 10, pp. 988–998, Oct. 1977.
[41] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 8, no. 6, pp. 679–698, Nov. 1986.
[42] W. K. Pratt, Digital Image Processing. New York, NY: Wiley-Interscience, 1991, pp. 491–556.
[43] W. Wang, J. Shen, and F. Porikli, "Saliency-aware geodesic video object segmentation," 2015.
[44] J. Zhang and S. Sclaroff, "Saliency detection: A Boolean map approach."
[45] J. Han, K. Ngan, M. Li, and H. Zhang, "Unsupervised extraction of visual attention objects in color images," IEEE Trans. Circuits Syst. Video Technol., vol. 16, no. 1, 2006.
[46] U. Rutishauser, D. Walther, C. Koch, and P. Perona, "Is bottom-up attention useful for object recognition?" in CVPR, 2004.
[47] V. Mahadevan and N. Vasconcelos, "Saliency-based discriminant tracking," in CVPR, 2009.
[48] C. Rother, V. Kolmogorov, and A. Blake, "GrabCut: Interactive foreground extraction using iterated graph cuts," ACM TOG, vol. 23, no. 3, 2004.
[49] D. Tsai, M. Flagg, A. Nakazawa, and J. M. Rehg, "Motion coherent tracking using multi-label MRF optimization," IJCV, 2012.
[50] B.-H. Chen and S.-C. Huang, "An advanced moving object detection algorithm for automatic traffic monitoring in real-world limited bandwidth networks." | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/67452 | - |
dc.description.abstract | 近幾年,影像融合在影像處理的領域裡成為重要的議題,影像融合的目標為透過整合所有的同個場景的輸入影像,並依照其互補性資訊產生一個重組性的影像[1]。影像融合在場景觀察的增強性扮演重要的角色,由不同影像裝置所捕捉到的影像,藉由結合其各個不同的細節處資訊融合成包含最多且豐富的影像資訊,達到提升影像增強的目的。以系統層面來說,在輸入的來源影像可分成不同形態的輸入影像感應器或是可調式的不同參數設定之感應器。例如:不同的曝光度、不同的焦距、不同的光線來源。而輸出的結果融合影像比起輸入的影像擁有更適合人類或是機器感知的視覺效果,因此影像融合是個有效率的工具且被廣用於許多重要的應用。如: 醫學圖像學、顯微成像,遙感,計算機視覺和機器人。
除此之外,我修改了傳統的顯著處偵測演算法的缺點,透過結合影像相減概念,修正顯著處偵測到冗餘的部分,將此命名為:『顯著處偵測基於影像相減』。而顯著處偵測通常應用在辨識出靜態『影像』中視覺會注意的區塊,因此我試圖將此應用在偵測動態『影片』中顯著的部分。透過此方式可以抓取出每一個影像幀中的顯著區域,則可表示出影片中正在移動的物體。最後,我整合先前提到的不同曝光度的影像融合法,以及影像相減顯著處偵測演算法,將兩系統整合達到最終的輸出結果圖,完成探測移動物體軌跡之目標,並且富有高品質的影像增強效果。在展示部分,我們提供一些應用結果,包含偵測球類領域上的球體運動軌跡,以及監視器下的人物移動之過程。 | zh_TW |
dc.description.abstract | In recent years, image fusion has become an important topic in the image processing community. The goal of image fusion is to generate a composite image by integrating the complementary information from multiple source images of the same scene [1].
Image fusion plays a key role in enhancing the perception of a scene by combining the detail information captured by different imaging sensors. At the system level, the input source images can be acquired either from different types of imaging sensors or from a single sensor whose optical parameters can be changed, e.g., different exposure levels, different focus levels, or different light sources. The output, called the fused image, is more suitable for human or machine perception than any individual source image. Image fusion has therefore been used as an effective tool in many important applications, including medical imaging, microscopic imaging, remote sensing, computer vision, and robotics. In addition, I addressed a drawback of the traditional saliency detection algorithm by combining it with the concept of image subtraction to remove redundant detections; the resulting method is named "Boosting Saliency Detection with Image Subtraction (BSD)". Saliency detection is normally applied to recognize the visually noticeable parts of a static image, so I attempted to apply the system to video. Using BSD, we can extract the salient region in each frame, which represents the moving object in the video. Finally, I integrated the exposure image fusion and Boosting Saliency Detection algorithms to reach the final goal of moving object tracking with image quality enhancement, and present some applications to the trajectory of a ball on a sports field and to motion tracking under a surveillance camera. | en |
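The core idea behind BSD described above, keeping only regions that are both salient and changed between consecutive frames, can be illustrated with a minimal NumPy sketch. The naive mean-deviation saliency measure and both threshold values here are illustrative assumptions, not the thesis's actual detector:

```python
import numpy as np

def saliency_map(frame):
    """Toy saliency: absolute deviation from the frame's mean intensity,
    normalised to [0, 1]. A stand-in for a real saliency detector."""
    s = np.abs(frame - frame.mean())
    return s / (s.max() + 1e-8)

def bsd_mask(prev_frame, frame, diff_thresh=0.1, sal_thresh=0.3):
    """Saliency boosted by image subtraction: keep only pixels that are
    salient AND changed between frames, suppressing static (redundant)
    detections."""
    moving = np.abs(frame - prev_frame) > diff_thresh   # image subtraction
    salient = saliency_map(frame) > sal_thresh          # static saliency
    return moving & salient

# Toy frames: a bright 2x2 "object" shifts right by two pixels.
f0 = np.zeros((8, 8)); f0[3:5, 1:3] = 1.0
f1 = np.zeros((8, 8)); f1[3:5, 3:5] = 1.0
mask = bsd_mask(f0, f1)   # True only at the object's new location
```

The intersection is what removes the redundancy: the vacated background pixels changed but are not salient, and any salient static region did not change, so only the moving object survives.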
dc.description.provenance | Made available in DSpace on 2021-06-17T01:32:49Z (GMT). No. of bitstreams: 1 ntu-106-R04942078-1.pdf: 8566284 bytes, checksum: 76677a48300c3be00bf80b3252cc41a1 (MD5) Previous issue date: 2017 | en |
dc.description.tableofcontents | Chapter 1 Introduction 1
Chapter 2 Image Fusion Enhancement 2
2.1 Fusion of Exposure Enhancement 2
2.1.1 Multi-scale Decomposition Weight Map 2
2.1.1.1 Laplacian Pyramid 3
2.1.1.2 Quality Measures Contrast 5
2.1.1.3 System Work Flow 8
2.1.2 Global and Local Exposure Weight 9
2.1.2.1 Exposure Weight 9
2.1.2.2 Detail and Base Layers 13
2.1.2.3 System Work Flow 16
2.1.3 Robust Match Fusion 17
2.1.3.1 Random Walker Algorithm 20
2.1.3.2 System Work Flow 22
2.1.4 Experimental Results 24
2.2 Fusion of Infrared and Visible Images Enhancement 28
2.2.1 MSD-based Fusion Method with the GF (Guided Filter) 28
2.2.2 CSF: Contrast Sensitivity Function 31
2.2.3 Application: Night Photography, Indoor/Outdoor Photography 33
2.3 Fusion of Multi-focus Image Enhancement 36
2.3.1 Wavelet Transform Domain 36
2.3.2 Three Modules of the Fusion Process 39
2.3.3 Experimental Results 41
Chapter 3 Motion Object in Dynamic Video 45
3.1 Moving Object Detection 45
3.1.1 Remove Transparent Ghosting 46
3.1.1.1 Image Edge Detection 46
3.1.1.2 Image Segmentation 48
3.1.2 Saliency Detection Algorithm 52
3.1.2.1 Spatiotemporal Saliency 53
3.1.2.2 Select Few Attention Term 56
3.1.2.3 Experimental Results 58
3.2 Boosting Saliency Detection (BSD) 59
3.2.1 Image Subtraction 62
3.2.2 Iterations 64
3.2.3 Rectified after BSD System 65
3.2.4 Quantitative Comparison 69
Chapter 4 Integrate Exposure Fusion and Saliency Detection 71
4.1 System Work Chart 71
4.2 Application in Movement Tracking 77
4.2.1 Dynamic Motion Trajectory Tracking under the Monitor 77
4.2.2 Sport Moment Tracking 78
Chapter 5 Application in Moving Object Tracking 80
5.1 Motion Tracking under the Monitor 80
5.2 Trajectory of Ball in Sport Field 88
5.3 Other Moving Object Tracking 104
Chapter 6 Conclusion and Future Work 110
REFERENCE 112 | |
dc.language.iso | en | |
dc.title | 多種影像融合技術與物件軌跡探測之應用 | zh_TW |
dc.title | Multiple Exposure, Infrared/Visible and Multiple Focus Saliency-based Image Fusion in Object Tracking. | en |
dc.type | Thesis | |
dc.date.schoolyear | 105-2 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 丁建均,徐忠枝,林康平,曾建誠 | |
dc.subject.keyword | 影像融合,顯著處偵測, | zh_TW |
dc.subject.keyword | image fusion, saliency detection | en |
dc.relation.page | 116 | |
dc.identifier.doi | 10.6342/NTU201702479 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2017-08-03 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電信工程學研究所 | zh_TW |
Appears in Collections: | 電信工程學研究所
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-106-1.pdf (access currently restricted) | 8.37 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.