Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51622
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳炳宇 (Bing-Yu Chen) | |
dc.contributor.author | Hsin-I Chen | en |
dc.contributor.author | 陳心怡 | zh_TW |
dc.date.accessioned | 2021-06-15T13:41:37Z | - |
dc.date.available | 2019-03-08 | |
dc.date.copyright | 2016-03-08 | |
dc.date.issued | 2015 | |
dc.date.submitted | 2016-01-06 | |
dc.identifier.citation | Bibliography
[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. on Pattern Anal. and Mach. Intell., 34(11):2274–2282, 2012. [2] A. Agarwala, C. Zheng, C. Pal, M. Agrawala, M. Cohen, B. Curless, D. Salesin, and R. Szeliski. Panoramic video textures. ACM Trans. on Graphics, 24(3):821–827, 2005. [3] A. Albarelli, E. Rodolà, and A. Torsello. Imposing semi-local geometric constraints for accurate correspondences selection in structure from motion: A game-theoretic perspective. Int. J. Computer Vision, 97(1):36–53, 2012. [4] Y. Avrithis and G. Tolias. Hough pyramid matching: Speeded-up geometry re-ranking for large scale image retrieval. Int. J. Computer Vision, 107(1):1–19, 2014. [5] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman. PatchMatch: A randomized correspondence algorithm for structural image editing. ACM Trans. on Graphics, 28(3), 2009. [6] H. Bay, T. Tuytelaars, and L. Van Gool. SURF: Speeded up robust features. In Proc. Eur. Conf. Comput. Vis., pages 404–417, 2006. [7] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Trans. on Pattern Anal. and Mach. Intell., 24(4):509–522, 2002. [8] A. C. Berg, T. L. Berg, and J. Malik. Shape matching and object recognition using low distortion correspondence. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 26–33, 2005. [9] A. Bosch, A. Zisserman, and X. Muñoz. Representing shape with a spatial pyramid kernel. In Proc. ACM Conf. Image and Video Retrieval, 2007. [10] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Trans. on Pattern Anal. and Mach. Intell., 23(11):1222–1239, 2001. [11] M. Brown and D. G. Lowe. Automatic panoramic image stitching using invariant features. Int. J. Computer Vision, 74(1):59–73, 2007. [12] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert. 
High accuracy optical flow estimation based on a theory for warping. In Proc. Eur. Conf. Comput. Vis., pages 25–36, 2004. [13] T. Brox and J. Malik. Large displacement optical flow: Descriptor matching in variational motion estimation. IEEE Trans. on Pattern Anal. and Mach. Intell., 33(3):500–513, 2011. [14] J. Čech, J. Matas, and M. Perďoch. Efficient sequential correspondence selection by cosegmentation. IEEE Trans. on Pattern Anal. and Mach. Intell., 32(9):1568–1581, 2010. [15] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Trans. on Intelligent Systems and Technology, 2(3):27:1–27:27, 2011. [16] K.-Y. Chang, T.-L. Liu, and S.-H. Lai. From co-saliency to co-segmentation: An efficient and fully unsupervised energy minimization model. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 2129–2136, 2011. [17] H.-Y. Chen, Y.-Y. Lin, and B.-Y. Chen. Robust feature matching with alternate Hough and inverted Hough transforms. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 2762–2769, 2013. [18] S.-C. Chen, H.-Y. Chen, Y.-L. Chen, H.-M. Tsai, and B.-Y. Chen. Making in-front-of cars transparent: Sharing first-person-views via dashcam. Computer Graphics Forum, 33(7):289–297, 2014. [19] T.-J. Chin, J. Yu, and D. Suter. Accelerated hypothesis generation for multistructure data via preference analysis. IEEE Trans. on Pattern Anal. and Mach. Intell., 34(2):625–638, 2012. [20] M. Cho, J. Lee, and K. M. Lee. Feature correspondence and deformable object matching via agglomerative correspondence clustering. In Proc. Int’l Conf. Comput. Vis., pages 144–157, 2009. [21] M. Cho, J. Lee, and K. M. Lee. Reweighted random walks for graph matching. In Proc. Eur. Conf. Comput. Vis., pages 144–157, 2010. [22] M. Cho and K. M. Lee. Progressive graph matching: Making a move of graphs via probabilistic voting. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 492–505, 2012. [23] M. Cho, Y. M. Shin, and K. M. Lee. 
Co-recognition of image pairs by data-driven Monte Carlo image exploration. In Proc. Eur. Conf. Comput. Vis., pages 2129–2136, 2008. [24] T. F. Cox and M. Cox. Multidimensional Scaling, Second Edition. Chapman and Hall/CRC, 2000. [25] F. Diego, D. Ponsa, J. Serrat, and A. M. López. Video alignment for change detection. IEEE Trans. on Image Processing, 20(7):1858–1869, 2011. [26] F. Diego, J. Serrat, and A. M. López. Joint spatio-temporal alignment of sequences. IEEE Transactions on Multimedia, 15(6):1377–1387, 2013. [27] G. Evangelidis and C. Bauckhage. Efficient subframe video alignment using short descriptors. IEEE Trans. on Pattern Anal. and Mach. Intell., 35(10):2371–2386, 2013. [28] A. Faktor and M. Irani. Co-segmentation by composition. In Proc. Int’l Conf. Comput. Vis., pages 1297–1304, 2013. [29] V. Ferrari, T. Tuytelaars, and L. Van Gool. Simultaneous object recognition and segmentation by image exploration. In Proc. Eur. Conf. Comput. Vis., pages 40–54, 2004. [30] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with application to image analysis and automated cartography. Commun. ACM, 24(6):381–395, 1981. [31] M. Garrigues, A. Manzanera, and T. M. Bernard. Video extruder: a semi-dense point tracker for extracting beams of trajectories in real time. Journal of Real-Time Image Processing, pages 1–14, 2014. [32] P. E. R. Gomes, F. Vieira, and M. Ferreira. The see-through system: From implementation to test-drive. In Proceedings of IEEE Vehicular Networking Conference 2012, pages 40–47, 2012. [33] K. Grauman and T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. In Proc. Int’l Conf. Comput. Vis., pages 1458–1465, 2005. [34] R. Grompone, J. Jakubowicz, J.-M. Morel, and G. Randall. LSD: A fast line segment detector with a false detection control. IEEE Trans. on Pattern Anal. and Mach. Intell., 32(4):722–732, 2010. [35] Y. HaCohen, E. Shechtman, D. B. Goldman, and D. 
Lischinski. Non-rigid dense correspondence with application for image enhancement. ACM Trans. on Graphics, 30(4):70:1– 70:10, 2011. [36] T. Harada, H. Nakayama, and Y. Kuniyoshi. Improving local descriptors by embedding global and local spatial information. In Proc. Eur. Conf. Comput. Vis., 2010. [37] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, second edition, 2004. [38] D. C. Hauagge and N. Snavely. Image matching using local symmetry features. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 206–213, 2012. [39] Y.-H. Huang, T.-K. Huang, Y.-H. Huang, W.-C. Chen, and Y.-Y. Chuang. Warping-based novel view synthesis from a binocular image for autostereoscopic displays. In Proceedings of IEEE International Conference on Multimedia and Expo 2012, pages 302–307, 2012. [40] Y. Jiang, C. Ngo, and J. Yang. Towards optimal bag-of-features for object categorization and semantic video retrieval. In Proceedings of the 6th ACM International Conference on Image and Video Retrieval, CIVR 2007, Amsterdam, The Netherlands, July 9-11, 2007, pages 494–501, 2007. [41] A. Joulin, F. Bach, and J. Ponce. Discriminative clustering for image co-segmentation. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 1943–1950, 2010. [42] A. Joulin, F. Bach, and J. Ponce. Multi-class cosegmentation. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 542–549, 2012. [43] T. Kadir, A. Zisserman, and M. Brady. An affine invariant salient region detector. In Proc. Eur. Conf. Comput. Vis., 2004. [44] E. Kim, H. Li, and X. Huang. A hierarchical image clustering cosegmentation framework. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 686–693, 2012. [45] J. Kim and K. Grauman. Boundary preserving dense local regions. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 1553–1560, 2011. [46] K.-Y. Lee, Y.-Y. Chuang, B.-Y. Chen, and M. Ouhyoung. Video stabilization using robust feature trajectories. In Proc. 
Int’l Conf. Comput. Vis., pages 1307–1404, 2009. [47] M. Leordeanu and M. Hebert. A spectral technique for correspondence problems using pairwise constraints. In Proc. Int’l Conf. Comput. Vis., pages 1482–1489, 2005. [48] M. Leordeanu, M. Hebert, and R. Sukthankar. An integer projected fixed point method for graph matching and map inference. In Advances in Neural Information Processing Systems, pages 1114–1122, 2009. [49] F. Liu, M. Gleicher, H. Jin, and A. Agarwala. Content-preserving warps for 3D video stabilization. ACM Trans. on Graphics, 28(3):44:1–44:9, 2009. [50] H. Liu and S. Yan. Common visual pattern discovery via spatially coherent correspondences. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 1609–1616, 2010. [51] S. Liu, L. Yuan, P. Tan, and J. Sun. Bundled camera paths for video stabilization. ACM Trans. on Graphics, 32(4), 2013. [52] D. Lowe. Object recognition from local scale-invariant features. In Proc. Int’l Conf. Comput. Vis., pages 1150–1157, 1999. [53] D. G. Lowe. Distinctive image features from scale-invariant keypoints. Int. J. Computer Vision, 60(2):91–110, 2004. [54] J. Ma, J. Zhao, J. Tian, A. L. Yuille, and Z. Tu. Robust point matching via vector field consensus. TIP, 23(4):1706–1721, 2014. [55] L. Manevitz and M. Yousef. One-class SVMs for document classification. J. Machine Learning Research, pages 139–154, 2002. [56] K. Mikolajczyk and C. Schmid. Scale and affine invariant interest point detectors. Int. J. Computer Vision, 60(1):63–86, 2004. [57] K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. IEEE Trans. on Pattern Anal. and Mach. Intell., 27(10):1615–1630, 2005. [58] E. Mortensen, H. Deng, and L. Shapiro. A SIFT descriptor with global context. In Proc. Conf. Comput. Vis. and Pattern Recognit., 2005. [59] C. Rother, T. P. Minka, A. Blake, and V. Kolmogorov. Cosegmentation of image pairs by histogram matching - Incorporating a global constraint into MRFs. In Proc. Conf. Comput. Vis. 
and Pattern Recognit., pages 993–1000, 2006. [60] M. Rubinstein, A. Joulin, J. Kopf, and C. Liu. Unsupervised joint object discovery and segmentation in internet images. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 1939–1946, 2013. [61] J. C. Rubio, J. Serrat, A. M. López, and N. Paragios. Unsupervised co-segmentation through region matching. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 749–756, 2012. [62] P. Sand and S. J. Teller. Video matching. ACM Trans. on Graphics, 23(3):592–599, 2004. [63] P. Sand and S. J. Teller. Particle video: Long-range motion estimation using point trajectories. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 2195–2202, 2006. [64] B. Schölkopf, J. Platt, J. Shawe-Taylor, A. Smola, and R. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001. [65] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. on Pattern Anal. and Mach. Intell., 22(8):888–905, 2000. [66] J. Shi and C. Tomasi. Good features to track. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 593–600, 1994. [67] N. Snavely, S. M. Seitz, and R. Szeliski. Photo tourism: Exploring photo collections in 3D. ACM Trans. on Graphics, 25(3):835–846, 2006. [68] J. Sun and J. Ponce. Learning discriminative part detectors for image classification and cosegmentation. In Proc. Int’l Conf. Comput. Vis., pages 3400–3407, 2013. [69] E. Tola, V. Lepetit, and P. Fua. DAISY: An efficient dense descriptor applied to wide baseline stereo. IEEE Trans. on Pattern Anal. and Mach. Intell., 32(5):815–830, 2010. [70] G. Tolias and Y. Avrithis. Speeded-up, relaxed spatial matching. In Proc. Int’l Conf. Comput. Vis., pages 1653–1660, 2011. [71] L. Torresani, V. Kolmogorov, and C. Rother. A dual decomposition approach to feature correspondence. IEEE Trans. on Pattern Anal. and Mach. Intell., 35(2):259–271, 2013. [72] K. van de Sande, T. Gevers, and C. Snoek. 
Evaluating color descriptors for object and scene recognition. IEEE Trans. on Pattern Anal. and Mach. Intell., 32(9):1582–1596, 2010. [73] F. Wang, Q. Huang, and L. J. Guibas. Image co-segmentation via consistent functional maps. In Proc. Int’l Conf. Comput. Vis., pages 849–856, 2013. [74] O. Wang, C. Schroers, H. Zimmer, M. H. Gross, and A. Sorkine-Hornung. VideoSnapping: Interactive synchronization of multiple videos. ACM Trans. on Graphics, 33(4):77:1–77:10, 2014. [75] Y.-S. Wang, H. Fu, O. Sorkine, T.-Y. Lee, and H.-P. Seidel. Motion-aware temporal coherence for video resizing. ACM Trans. on Graphics, 28(5):127:1–127:10, 2009. [76] Y.-S. Wang, C.-L. Tai, O. Sorkine, and T.-Y. Lee. Optimized scale-and-stretch for image resizing. ACM Trans. on Graphics, 27(5):118:1–118:8, 2008. [77] Z. Wang, B. Fan, and F. Wu. Local intensity order pattern for feature description. In Proc. Int’l Conf. Comput. Vis., pages 603–610, 2011. [78] P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid. DeepFlow: Large displacement optical flow with deep matching. In Proc. Int’l Conf. Comput. Vis., 2013. [79] Z. Wu, Q. Ke, M. Isard, and J. Sun. Bundling features for large scale partial-duplicate web image search. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 25–32, 2009. [80] P. Yarlagadda, A. Monroy, and B. Ommer. Voting by grouping dependent parts. In Proc. Eur. Conf. Comput. Vis., pages 197–210, 2010. [81] Y. Yuan, Y. Pang, K. Wang, and M. Shang. Efficient image matching using weighted voting. Pattern Recognition Letters, 4(33):471–475, 2012. [82] J. Zaragoza, T.-J. Chin, M. S. Brown, and D. Suter. As-projective-as-possible image stitching with moving DLT. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 2339–2346, 2013. [83] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: A comprehensive study. Int. J. Computer Vision, 73(2):213–238, 2007. [84] W. Zhong, H. Lu, and M.-H. Yang. 
Robust object tracking via sparsity-based collaborative model. In Proc. Int’l Conf. Comput. Vis., pages 1838–1845, 2012. [85] F. Zhou and F. De la Torre. Factorized graph matching. In Proc. Conf. Comput. Vis. and Pattern Recognit., pages 127–134, 2012. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51622 | - |
dc.description.abstract | Establishing feature correspondences between images is a fundamental and core data-analysis technique in many image analysis tasks. Despite being an essential component of numerous image-related applications, two problems remain prevalent in complex scenes: low precision and low recall. Meanwhile, identifying pixel-level correspondences between videos remains a highly challenging yet rarely addressed problem. In this dissertation, we develop voting-based feature matching algorithms for image matching and study techniques for dense pixel-level correspondence between videos. First, we propose a feature correspondence algorithm based on the Hough transform that greatly accelerates geometric verification, and we further develop an inverted Hough transform; by alternately iterating the Hough and inverted Hough transforms, the proposed method improves both the precision and recall of feature matching. Next, we combine image co-segmentation with the invariance properties of descriptors to explore how to discover additional candidate feature points and achieve denser correspondences. Finally, we propose a novel inter-video mapping technique that establishes dense pixel correspondences between two videos with minimal overlap. We apply this technique to dashcam footage: the view captured by the preceding vehicle is locally warped so that its perspective and silhouette align with the occluded region seen by the following vehicle and blend seamlessly into it, creating the effect that the preceding vehicle appears semi-transparent and thereby improving the driver's visibility. | zh_TW |
dc.description.abstract | Establishing feature correspondences is a fundamental problem in many
image analysis tasks, and is required for a wide range of applications. Despite this broad applicability, two main difficulties hinder progress in establishing high-quality correspondences: (1) low precision and (2) low recall. In addition, establishing dense mappings between videos is an even more challenging problem that has received little attention in the community. In this dissertation, we introduce a voting-based algorithm for image matching, and describe an inter-video mapping framework that establishes dense mappings between partially overlapping videos. First, we propose an algorithm based on the Hough transform for establishing feature correspondences, which speeds up geometric checking. We also develop an inverted Hough transform and, through an iterative optimization process, enhance the quality of matching in both precision and recall. Second, we integrate image co-segmentation into feature matching and combine different descriptors, which yields more accurate and denser correspondences. Finally, we present a novel inter-video mapping approach that aligns videos with small overlapping regions, and apply it to footage from dashcams installed on two different vehicles. We show that our technique can locally adjust the shape of the unobstructed view captured by the preceding vehicle so that its perspective and boundary match those of the occluded region seen by the following vehicle, creating the impression that the preceding vehicle is transparent and thereby improving the driver's visibility. | en
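The voting idea at the heart of the abstract can be illustrated with a minimal sketch: each putative correspondence implies a similarity transform (translation, relative scale, relative orientation), votes are cast into a coarsely quantized accumulator, and only matches in the dominant bin are kept, which is how voting raises precision. This is an illustrative toy under assumed keypoint tuples and bin sizes (the function name `hough_verify` and all parameters are invented here), not the dissertation's actual homography-space algorithm:

```python
import math
from collections import defaultdict

def hough_verify(matches, t_bin=20.0, scale_bin=1.0, angle_bin=30.0, min_votes=3):
    """Keep only correspondences that agree on a common similarity transform.

    Each match is a pair of keypoints (x, y, scale, orientation_deg). Every
    match casts one vote into a quantized (translation, log-scale, rotation)
    accumulator; the dominant bin is returned as the geometrically
    consistent subset.
    """
    votes = defaultdict(list)
    for i, ((x1, y1, s1, o1), (x2, y2, s2, o2)) in enumerate(matches):
        d_scale = math.log2(s2 / s1)        # relative scale, in octaves
        d_angle = (o2 - o1) % 360.0         # relative orientation, degrees
        dx, dy = x2 - x1, y2 - y1           # translation offset
        key = (round(dx / t_bin), round(dy / t_bin),
               round(d_scale / scale_bin), round(d_angle / angle_bin))
        votes[key].append(i)
    best = max(votes.values(), key=len) if votes else []
    return best if len(best) >= min_votes else []

# Four matches share the same offset (inliers); two do not (outliers).
inliers = [((x, y, 1.0, 0.0), (x + 10.0, y + 5.0, 1.0, 0.0))
           for x, y in [(0, 0), (40, 10), (80, 30), (120, 60)]]
outliers = [((0.0, 0.0, 1.0, 0.0), (150.0, 200.0, 2.0, 90.0)),
            ((50.0, 50.0, 1.0, 0.0), (-80.0, 40.0, 0.5, 180.0))]
kept = hough_verify(inliers + outliers)
print(sorted(kept))  # only the four mutually consistent matches survive
```

The inverted Hough step described in the abstract would run in the opposite direction: instead of filtering matches by transform bins, a dominant bin recommends new candidate correspondences consistent with it, which is what recovers recall.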
dc.description.provenance | Made available in DSpace on 2021-06-15T13:41:37Z (GMT). No. of bitstreams: 1 ntu-104-D99922026-1.pdf: 25741906 bytes, checksum: d0157de4565436976e81324387b40b09 (MD5) Previous issue date: 2015 | en |
dc.description.tableofcontents | Table of Contents
Abstract viii
List of Figures xiii
Chapter 1 Introduction 1
  1.1 Overview of Dissertation 4
Chapter 2 Related Work 5
Chapter 3 Robust Feature Matching with Alternate Hough and Inverted Hough Transforms 11
  3.1 Problem Definition 12
    3.1.1 Transformation space 12
    3.1.2 Distance metric in the transformation space 13
  3.2 The Proposed Approach 13
    3.2.1 Initial correspondence candidates 14
    3.2.2 Hough transform for homography verification 15
    3.2.3 Inverted Hough transform for correspondence recommendation 17
  3.3 Experimental Results 19
    3.3.1 Matching with multiple common objects 20
    3.3.2 Incremental correspondence enrichment 22
    3.3.3 Plug-in with other feature descriptors 23
Chapter 4 Co-segmentation Guided Hough Transform and Multiple Descriptor Fusion for Robust Feature Matching 25
  4.1 Enhanced Image Co-segmentation with Feature Matching 27
    4.1.1 Corrupt correspondence filtering 27
    4.1.2 Information transfer from feature matching to image co-segmentation 29
    4.1.3 Graph-partition co-segmentation model 30
    4.1.4 MRF-based co-segmentation model 32
  4.2 Multiple Descriptor Fusion 33
    4.2.1 Problem Statement 33
    4.2.2 The Proposed Approach 35
  4.3 Experimental Results 36
    4.3.1 Homography space visualization 36
    4.3.2 Evaluation metrics 38
    4.3.3 Matching with multiple common objects 38
    4.3.4 Collaborating with other feature descriptors 44
    4.3.5 Comprehensive studies 48
    4.3.6 Locally Adaptive Descriptor Selection 51
    4.3.7 Quantitative Results 53
    4.3.8 Visualization of Matching Results 55
Chapter 5 Integrating DashCam Views through Inter-Video Mappings 60
  5.1 Inter-Video Mapping 63
    5.1.1 Overview 63
    5.1.2 Preprocessing 63
  5.2 Intra-Video Mapping 64
    5.2.1 Warping-based Motion Model 65
    5.2.2 Long-range Motion Estimation 67
  5.3 Cross-Video Mapping 68
    5.3.1 Bridge Image Selection 68
    5.3.2 Trajectory Transfer 69
  5.4 Experimental Results 71
    5.4.1 Comparisons 71
    5.4.2 Computational time 75
    5.4.3 Limitation 75
Chapter 6 Conclusion and Future Work 77
Bibliography 79 | |
dc.language.iso | en | |
dc.title | 穩健的影像及影片比對演算法及其應用於視野結合 | zh_TW |
dc.title | Robust Image and Video Matching and Its Application to View Integration | en |
dc.type | Thesis | |
dc.date.schoolyear | 104-1 | |
dc.description.degree | Ph.D. (博士) | |
dc.contributor.oralexamcommittee | 莊永裕(Yung-Yu Chuang),徐宏民,林彥宇,王昱舜,王鈺強 | |
dc.subject.keyword | 電腦視覺 (computer vision), 特徵點對應 (feature correspondence), 影像共分割 (image co-segmentation), 影片比對 (video matching), 影像形變 (image warping) | zh_TW |
dc.subject.keyword | computer vision, feature correspondence, co-segmentation, video matching, image warping | en |
dc.relation.page | 85 | |
dc.rights.note | Paid authorization (有償授權) | |
dc.date.accepted | 2016-01-06 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science (電機資訊學院) | zh_TW |
dc.contributor.author-dept | Graduate Institute of Computer Science and Information Engineering (資訊工程學研究所) | zh_TW |
Appears in Collections: | Department of Computer Science and Information Engineering (資訊工程學系)
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-104-1.pdf (currently not authorized for public access) | 25.14 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.