Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/19234
Full metadata record (DC field: value [language])
dc.contributor.advisor: 陳宜良
dc.contributor.author: Chien-Tsung Huang [en]
dc.contributor.author: 黃乾宗 [zh_TW]
dc.date.accessioned: 2021-06-08T01:49:54Z
dc.date.copyright: 2016-08-02
dc.date.issued: 2016
dc.date.submitted: 2016-07-28
dc.identifier.citation:
[1] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert. High accuracy optical flow estimation based on a theory for warping. In Proc. 8th European Conference on Computer Vision, volume 3024 of LNCS, pages 25-36. Springer, May 2004.
[2] M. J. Black and P. Anandan. Robust dynamic motion estimation over time. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 292-302, Maui, HI, June 1991.
[3] M. J. Black and P. Anandan. The robust estimation of multiple motions: parametric and piecewise-smooth flow fields. Computer Vision and Image Understanding, 63(1):75-104, Jan. 1996.
[4] A. Bruhn and J. Weickert. Towards ultimate motion estimation: combining highest accuracy with real-time performance. In Proc. 10th International Conference on Computer Vision, pages 749-755, Beijing, China, Oct. 2005.
[5] T. Brox, C. Bregler, and J. Malik. Large displacement optical flow. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[6] D. Sun, S. Roth, J. P. Lewis, and M. J. Black. Learning optical flow. In Proc. European Conference on Computer Vision, volume 5304 of LNCS, pages 83-87. Springer, 2008.
[7] C. L. Zitnick and S. B. Kang. Stereo for image-based rendering using image over-segmentation. International Journal of Computer Vision, 75:49-65, Oct. 2007.
[8] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11), 2012.
[9] A. Lucchi, K. Smith, R. Achanta, V. Lepetit, and P. Fua. A fully automated approach to segmentation of irregularly shaped cellular structures in EM images. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention, 2010.
[10] J. Foley, A. van Dam, S. Feiner, and J. Hughes. Computer Graphics: Principles and Practice, 2nd edition. Addison-Wesley, 1990.
[11] P. Viola and M. Jones. Robust real-time object detection. International Journal of Computer Vision, 2001.
[12] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721-741, 1984.
[13] T. Poggio, V. Torre, and C. Koch. Computational vision and regularization theory. Nature, 317:314-319, 1985.
[14] D. Terzopoulos. Regularization of inverse visual problems involving discontinuities. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(4):413-424, 1986.
[15] Hammersley-Clifford theorem lecture notes: http://www.vis.uky.edu/~cheung/courses/ee639/Hammersley-Clifford_Theorem.pdf
[16] D. Greig, B. Porteous, and A. Seheult. Exact maximum a posteriori estimation for binary images. Journal of the Royal Statistical Society, Series B, 51(2):271-279, 1989.
[17] O. Veksler. Efficient Graph-Based Energy Minimization Methods in Computer Vision. PhD thesis, Cornell University, July 1999.
[18] A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi. TurboPixels: fast superpixels using geometric flows. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009.
[19] Y. J. Lee, J. Kim, and K. Grauman. Key-segments for video object segmentation. In Proc. ICCV, 2011.
[20] C. Rother, V. Kolmogorov, and A. Blake. GrabCut: interactive foreground extraction using iterated graph cuts. In Proc. SIGGRAPH, 2004.
[21] T. Wang and J. Collomosse. Probabilistic motion diffusion of labeling priors for coherent video segmentation. IEEE Transactions on Multimedia, 2012.
[22] T. Ma and L. Latecki. Maximum weight cliques with mutex constraints for video object segmentation. In Proc. CVPR, pages 670-677, 2012.
[23] D. Tsai, M. Flagg, and J. Rehg. Motion coherent tracking with multi-label MRF optimization. In Proc. BMVC, 2010.
[24] P. Chockalingam, N. Pradeep, and S. Birchfield. Adaptive fragments-based tracking of non-rigid objects using level sets. In Proc. ICCV, pages 1530-1537, 2009.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/19234
dc.description.abstract [zh_TW]: Robustly segmenting the foreground from a background model in a cluttered environment is a difficult problem. We propose a method that can robustly estimate the background and detect regions of interest in such environments. Most earlier methods compute the foreground and background energy models through a large number of iterations, which makes them dependent on good initial conditions and costly in computation time when analyzing video. To address these limitations, we propose an efficient energy-model computation built on a Markov random field framework: we first construct an effective initial foreground estimate, then refine the foreground-background labelling, and finally obtain a foreground-background separation faster than other methods.
dc.description.abstract [en]: Robust foreground object segmentation via background modelling is a difficult problem in cluttered environments, where obtaining a clear view of the background to model is almost impossible. We propose a method capable of robustly estimating the background and detecting regions of interest in such environments. Most existing techniques therefore adopt an iterative approach to foreground and background appearance modelling. However, these approaches may rely on good initialization and can easily be trapped in local optima; in addition, they are usually time-consuming when analyzing videos. To address these limitations, we propose an efficient appearance-modelling technique for automatic primary video object segmentation in the MRF framework. We first construct an efficient initial foreground estimate, then refine the foreground-background labelling, and finally obtain the video foreground faster than other approaches.
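The "MRF framework" mentioned in the abstract, together with the Energy function chapter listed in the table of contents below, points to a labelling energy of the standard form

    E(L) = \sum_{p \in \mathcal{P}} D_p(l_p) + \lambda \sum_{(p,q) \in \mathcal{N}} V_{pq}(l_p, l_q), \qquad l_p \in \{0, 1\},

where \mathcal{P} is the set of pixels or superpixels, \mathcal{N} a neighbourhood system, l_p = 1 marks foreground, D_p measures disagreement with the foreground/background appearance model, and V_{pq} (for example a Potts term) penalizes differing labels on neighbouring sites. This is only the generic template for MRF-based foreground-background segmentation, not the thesis's exact formulation; the data and smoothness terms actually used are defined in Section 3.3 of the thesis and are not reproduced here.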
dc.description.provenance [en]: Made available in DSpace on 2021-06-08T01:49:54Z (GMT). No. of bitstreams: 1. ntu-105-R02221028-1.pdf: 11264505 bytes, checksum: 16e90d2fff6eebc6e0e40f93002ef630 (MD5). Previous issue date: 2016.
dc.description.tableofcontents:
Oral examination committee certification
Acknowledgement (i)
Abstract in Chinese (ii)
Abstract (iii)
Table of Contents (iv)
List of Figures (v)
List of Tables (vi)
1. Introduction (1)
2. Efficient initial foreground estimation (3)
   2.1 Optical flow (3)
   2.2 Superpixel (7)
   2.3 Inside-outside maps (13)
3. Foreground-background labelling refinement (15)
   3.1 Energy minimization in early vision (17)
   3.2 Relationship to Gibbs fields (19)
   3.3 Energy function (21)
4. Results (27)
5. Conclusion (45)
Reference (47)
(A schematic code sketch of the two-stage pipeline in Chapters 2 and 3 follows this field.)
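To make the two-stage structure of Chapters 2 and 3 concrete, the following is a minimal, hypothetical Python sketch of that control flow: a cheap initial foreground prior followed by an MRF labelling refinement. Everything here is an illustrative assumption rather than the thesis's method: simple frame differencing stands in for the optical-flow / superpixel / inside-outside-map prior of Chapter 2, iterated conditional modes (ICM) stands in for whichever energy minimizer the thesis uses in Chapter 3, and the function names and parameters (initial_foreground_prior, icm_refine, lam, thresh) are invented for this sketch.

import numpy as np

def initial_foreground_prior(frame_prev, frame_curr, thresh=15.0):
    """Crude foreground prior from per-pixel temporal intensity change.
    Stand-in for the optical-flow / inside-outside-map estimate."""
    diff = np.abs(frame_curr.astype(np.float64) - frame_prev.astype(np.float64))
    return (diff > thresh).astype(np.float64)   # 1.0 = likely foreground

def icm_refine(prior, lam=2.0, n_iters=5):
    """Refine labels by iterated conditional modes on a simple MRF energy:
    unary = disagreement with the prior, pairwise = Potts penalty on
    4-connected neighbours weighted by lam."""
    labels = (prior > 0.5).astype(np.int64)
    H, W = labels.shape
    for _ in range(n_iters):
        for y in range(H):
            for x in range(W):
                best_label, best_energy = labels[y, x], np.inf
                for l in (0, 1):
                    unary = abs(l - prior[y, x])
                    pairwise = 0.0
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] != l:
                            pairwise += 1.0
                    energy = unary + lam * pairwise
                    if energy < best_energy:
                        best_label, best_energy = l, energy
                labels[y, x] = best_label
    return labels

if __name__ == "__main__":
    # Synthetic frame pair: static noisy background plus a moved block.
    rng = np.random.default_rng(0)
    background = rng.normal(128.0, 5.0, size=(60, 80))
    current = background.copy()
    current[20:40, 30:50] += 60.0
    prior = initial_foreground_prior(background, current)
    mask = icm_refine(prior)
    print("foreground pixels:", int(mask.sum()))

The thesis operates on superpixels and uses a more efficient minimizer, so this sketch only mirrors the prior-then-refine control flow, not its speed or accuracy.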
dc.language.iso: en
dc.title [zh_TW]: 基於馬可夫隨機場之影像動態前景偵測與追蹤
dc.title [en]: Dynamic Foreground Detection and Tracking from Video using Markov Random Field
dc.type: Thesis
dc.date.schoolyear: 104-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 曾正男, 黃文良
dc.subject.keyword [zh_TW]: 馬可夫隨機場, 光流法, 超像素, 內外分割圖, 能量函數
dc.subject.keyword [en]: Markov random fields, Optical flow, Superpixel, Inside-outside maps, Energy function
dc.relation.page: 49
dc.identifier.doi: 10.6342/NTU201601244
dc.rights.note: 未授權 (not authorized for public access)
dc.date.accepted: 2016-07-28
dc.contributor.author-college: 理學院 (College of Science) [zh_TW]
dc.contributor.author-dept: 數學研究所 (Graduate Institute of Mathematics) [zh_TW]
Appears in collections: 數學系 (Department of Mathematics)

Files in this item:
File: ntu-105-1.pdf (restricted; not authorized for public access)
Size: 11 MB
Format: Adobe PDF