NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/42944
Full metadata record
(DC field: value [language])
dc.contributor.advisor: 吳家麟 (Ja-Ling Wu)
dc.contributor.author: Ming-Che Chiang [en]
dc.contributor.author: 江明哲 [zh_TW]
dc.date.accessioned: 2021-06-15T01:29:51Z
dc.date.available: 2009-07-23
dc.date.copyright: 2009-07-23
dc.date.issued: 2009
dc.date.submitted: 2009-07-21
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/42944
dc.description.abstract: 隨著多媒體數位內容的分析趨於成熟,有效的置入虛擬內容已經被廣泛地研究,並使用在影片上來傳達廣告意象,以及加強影片內容所能呈現的資訊;然而,如何在降低干擾程度的前提下,將虛擬內容以具有吸引力的方式置入於影片中仍然是一個重要並有挑戰性的問題。在這篇論文中,我們提出了一個創新的虛擬內容置入系統,能夠讓虛擬內容根據我們從演化生物學概念上所定義出來的行為來進行生動的演化。對於影片而言,不僅是虛擬內容傳達資訊的載體,更提供了一個環境讓虛擬內容能夠以生命般的形式存活其中,也使得虛擬內容可以透過演化機制和影片內容產生互動。此外,我們將演化的過程分為三個相依的階段,虛擬內容會隨著互動次數的增加,演化出不同的外觀和行為。藉由這種方式,我們所提出來的系統建構了虛擬內容和影片內容在視覺上的關聯性,並達到降低干擾程度的目的,同時也提高了影片觀賞者對虛擬內容置入的吸引力和接受度。由實驗結果得知,以我們系統的方式置入虛擬內容所產生的影片有效的降低了虛擬內容置入所帶來的干擾,並提升了觀看者對被置入的虛擬內容的印象和認知,而虛擬內容的演化過程也增進了觀眾對於原本影片的觀感,並吸引他們享受這個娛樂性的故事情節。 [zh_TW]
dc.description.abstract: With the maturing of multimedia content analysis, virtual content insertion has been widely studied and used for video enrichment and advertising. However, how to insert virtual content into general videos less intrusively and with an attractive presentation remains a significant and challenging problem. In this thesis, we present a novel virtual content insertion system that inserts virtual content into videos with evolving animations, driven by behaviors predefined according to concepts from evolutionary biology. The video is treated not only as the carrier of the message conveyed by the virtual content but also as the environment in which the lifelike virtual content lives; the inserted virtual content therefore interacts with the video content, and these interactions trigger its artificial evolution. The evolution process is divided into distinct yet dependent phases, in which the virtual content evolves its appearance and behavior as interactions accumulate. In this way, the proposed system constructs a visually relevant connection between the inserted virtual content and the source video, reducing intrusiveness while increasing acceptability and attractiveness. User studies show that the augmented videos produced by the proposed system effectively reduce intrusiveness and strengthen the impression left by the inserted virtual content. Moreover, the evolution process improves the audience's viewing experience of the original video content and engages viewers with an entertaining storyline. [en]
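A minimal sketch of the interaction-driven, phased evolution described in this abstract follows. It is illustrative only and not the thesis's actual implementation: the phase names (Cell, Microbe, Creature) come from the chapter outline further below, while the class name, interaction thresholds, and appearance attributes are hypothetical assumptions.

# Hypothetical sketch: an inserted virtual content object that evolves through
# phases as its interactions with the video content accumulate.
# Phase names follow the thesis outline (Cell, Microbe, Creature); the
# thresholds and appearance attributes below are illustrative assumptions.
from dataclasses import dataclass

PHASES = ["cell", "microbe", "creature"]            # assumed evolution order
PHASE_THRESHOLDS = {"microbe": 5, "creature": 15}   # assumed interaction counts

@dataclass
class VirtualContent:
    interactions: int = 0
    phase: str = PHASES[0]

    def interact(self) -> None:
        """Record one interaction with the video content (e.g. overlapping a
        salient region) and advance the phase once a threshold is reached."""
        self.interactions += 1
        for phase in reversed(PHASES[1:]):           # check creature first, then microbe
            if self.interactions >= PHASE_THRESHOLDS[phase]:
                self.phase = phase
                break

    def appearance(self) -> dict:
        """Return a phase-dependent appearance used when rendering the overlay."""
        return {
            "cell":     {"size": "small",  "motion": "drift"},
            "microbe":  {"size": "medium", "motion": "crawl"},
            "creature": {"size": "large",  "motion": "walk"},
        }[self.phase]

# Usage: feed detected interaction events to the content and let it evolve.
content = VirtualContent()
for _ in range(20):
    content.interact()
print(content.phase, content.appearance())          # -> creature {'size': 'large', ...}

In the system described by the thesis, such interaction events would presumably come from the video content analysis stage (the motion, region, ROI, and aural saliency cues listed in Chapter 3), and the phase-dependent appearance would feed the animation generation and layer composition steps of Chapter 4.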
dc.description.provenance: Made available in DSpace on 2021-06-15T01:29:51Z (GMT). No. of bitstreams: 1
ntu-98-R96922070-1.pdf: 2969822 bytes, checksum: 7eab6ae44e0266afb329806010e8ea61 (MD5)
Previous issue date: 2009 [en]
dc.description.tableofcontents:
CHAPTER 1 INTRODUCTION 1
1.1 MOTIVATION 1
1.2 VIRTUAL CONTENT INSERTION 3
1.3 RELATED WORK 4
1.4 THE PROPOSED SYSTEM 6
1.5 THESIS ORGANIZATION 7
CHAPTER 2 SYSTEM OVERVIEW 9
2.1 ESSENTIAL IDEAS 9
2.2 SYSTEM OVERVIEW 12
CHAPTER 3 VIDEO CONTENT ANALYSIS 15
3.1 FRAME PROFILING 15
3.1.1 Motion Estimation 15
3.1.2 Region Segmentation 16
3.2 ROI ESTIMATION 17
3.3 AURAL SALIENCY ANALYSIS 18
CHAPTER 4 VIRTUAL CONTENT ANIMATION 21
4.1 VIRTUAL CONTENT CHARACTERIZATION 21
4.2 BEHAVIOR MODELING 24
4.2.1 The Cell Phase 25
4.2.2 The Microbe Phase 27
4.2.3 The Creature Phase 30
4.3 ANIMATION GENERATION 33
4.4 LAYER COMPOSITION 35
CHAPTER 5 EXPERIMENTS 37
5.1 RESULTS OF EVOLUTION PATH 37
5.2 EVALUATION 42
CHAPTER 6 CONCLUSIONS AND FUTURE WORK 47
6.1 CONCLUSIONS 47
6.2 APPLICATION SCENARIOS 48
6.3 FUTURE WORK 48
REFERENCE 49
dc.language.iso: en
dc.subject: 降低干擾 [zh_TW]
dc.subject: 虛擬內容置入 [zh_TW]
dc.subject: 生動 [zh_TW]
dc.subject: 演化模擬 [zh_TW]
dc.subject: 互動 [zh_TW]
dc.subject: Virtual content insertion [en]
dc.subject: less intrusiveness [en]
dc.subject: interaction [en]
dc.subject: simulated evolution [en]
dc.subject: animation [en]
dc.title: 以演化概念置入與影片互動之虛擬內容 [zh_TW]
dc.title: Interactive Virtual Content Insertion with Evolutions in Videos [en]
dc.type: Thesis
dc.date.schoolyear: 97-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 陳恆佑 (Herng-Yow Chen), 朱威達 (Wei-Ta Chu), 李明穗 (Ming-Sui Lee)
dc.subject.keyword: 虛擬內容置入, 生動, 演化模擬, 互動, 降低干擾 [zh_TW]
dc.subject.keyword: Virtual content insertion, animation, simulated evolution, interaction, less intrusiveness [en]
dc.relation.page: 52
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2009-07-21
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File: ntu-98-1.pdf (restricted access, not publicly available)
Size: 2.9 MB
Format: Adobe PDF