NTU Theses and Dissertations Repository
電機資訊學院 資訊工程學系 (College of Electrical Engineering and Computer Science, Department of Computer Science and Information Engineering)
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/24764
Full metadata record (DC field: value, language):
dc.contributor.advisor: 徐宏民
dc.contributor.author: Yu-Chia Dai (en)
dc.contributor.author: 戴佑家 (zh_TW)
dc.date.accessioned: 2021-06-08T05:56:11Z
dc.date.copyright: 2011-08-11
dc.date.issued: 2011
dc.date.submitted: 2011-08-08
dc.identifier.citation:
[1] L. Kennedy et al., "Less talk, more rock: Automated organization of community-contributed collections of concert videos," in Proc. 18th Int. Conf. on World Wide Web (WWW), 2009, pp. 311–320.
[2] P. Shrestha et al., "Synchronization of multiple camera videos using audio-visual features," IEEE Trans. on Multimedia, vol. 12, no. 1, pp. 79–92, 2010.
[3] M.-C. Yeh and K.-T. Cheng, "Video copy detection by fast sequence matching," in Proc. ACM Int. Conf. on Image and Video Retrieval (CIVR), 2009.
[4] J. Harel, C. Koch, and P. Perona, "Graph-based visual saliency," in Proc. Neural Information Processing Systems (NIPS), 2006.
[5] I. Laptev, "On space-time interest points," International Journal of Computer Vision, vol. 64, no. 2–3, pp. 107–123, 2005.
[6] J. R. Smith and S.-F. Chang, "VisualSEEk: A fully automated content-based image query system," in Proc. ACM Multimedia, Boston, MA, Nov. 1996.
[7] A. Bosch, A. Zisserman, and X. Munoz, "Representing shape with a spatial pyramid kernel," in Proc. ACM Int. Conf. on Image and Video Retrieval (CIVR), 2007.
[8] J. Sivic and A. Zisserman, "Video Google: A text retrieval approach to object matching in videos," in Proc. IEEE Int. Conf. on Computer Vision (ICCV), 2003.
[9] M. Charikar, "Similarity estimation techniques from rounding algorithms," in Proc. ACM Symp. on Theory of Computing (STOC), 2002.
[10] K. Grauman, "Efficiently searching for similar images," Commun. ACM, vol. 53, no. 6, pp. 84–94, 2010.
[11] J. Yang et al., in Proc. ACM Int. Workshop on Multimedia Information Retrieval (MIR), 2007.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/24764
dc.description.abstract: Video-capable consumer devices keep getting cheaper, and video-sharing channels keep multiplying: besides the well-known YouTube, social sites such as Facebook and Picasa also offer video-sharing services. Sharing has therefore become simple and convenient, and more and more people willingly and habitually record the things and events around them to share with friends and the public, so a large event is usually captured by many users and uploaded to the web.
If the time offsets between multiple videos of the same event can be estimated and the videos shifted to play from the same instant, then watching them together is more efficient than watching a single user's footage, and the aggregated information from multiple viewpoints makes the presentation richer. However, differences in shooting position and camera settings make the views look completely different. Audio features have previously been used to solve this synchronization problem, but because of the environment, shooting distance, or user post-processing, not every video carries reliable audio.
This thesis uses visual similarity to solve the video alignment problem, focusing on visual features extracted from regions of interest and applying hashing to make the computation more efficient. We further propose a sequence matching algorithm, time-sensitive dynamic time warping, which considers not only the similarity between frames of different videos but also the temporal continuity within each video. Extensive experiments show that the proposed method achieves over 80% accuracy while remaining time-efficient. (zh_TW)
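The abstract above credits part of the method's efficiency to hashing the region-of-interest features instead of comparing raw descriptors. As a rough, non-authoritative illustration, the Python sketch below shows one standard way to realize such hashing, random-hyperplane LSH in the spirit of Charikar [9]; the class name, the 512-dimensional descriptor size, and the bit counts are illustrative assumptions rather than details taken from the thesis.

```python
import numpy as np

class RandomHyperplaneLSH:
    """Random-hyperplane (SimHash) indexing for frame feature vectors.

    Each frame descriptor is reduced to an n_bits binary signature; the
    Hamming distance between two signatures approximates the angle (and
    hence the cosine similarity) between the original descriptors.
    """

    def __init__(self, dim, n_bits=128, seed=0):
        rng = np.random.default_rng(seed)
        # One random hyperplane (normal vector) per signature bit.
        self.planes = rng.standard_normal((n_bits, dim))

    def signature(self, x):
        # Bit b is 1 when the descriptor lies on the positive side of plane b.
        return (self.planes @ x >= 0).astype(np.uint8)

    @staticmethod
    def estimated_cosine(sig_a, sig_b):
        # Pr[bits disagree] = theta / pi, so theta is roughly
        # pi * (fraction of disagreeing bits).
        theta = np.pi * np.mean(sig_a != sig_b)
        return np.cos(theta)


# Usage with two hypothetical 512-dimensional ROI frame descriptors.
lsh = RandomHyperplaneLSH(dim=512, n_bits=128)
f1, f2 = np.random.rand(512), np.random.rand(512)
print(lsh.estimated_cosine(lsh.signature(f1), lsh.signature(f2)))
```

Signatures can then be bucketed or compared in Hamming space, which is far cheaper than comparing the raw descriptors for every frame pair.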
dc.description.abstract: The wide availability of digital video capture devices and the growing diversity of social video-sharing sites make sharing and searching easy. Multi-view event videos provide diverse visual content and different audio information for the same event, and compared with a single-view video, users prefer a more diverse and comprehensive set of views (video segments) of that event. Aligning multi-view event videos therefore becomes increasingly important. The task is challenging because the scene's visual appearance from different views can look markedly dissimilar. The problem has previously been solved with audio, but an audio track is not always available. In this work, we investigate the effect of different visual features and focus on regions of interest. Moreover, we propose a time-sensitive dynamic time warping algorithm that takes the temporal factor into consideration, and we reduce the computational cost with LSH indexing to improve time efficiency. Experimental results show that the proposed method aligns videos efficiently and yields robust matching results. (en)
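The English abstract names the core matching step, a time-sensitive dynamic time warping over frame similarities, without giving its formulation here. The Python sketch below is therefore only a minimal illustration under stated assumptions: plain dynamic time warping over a precomputed frame-to-frame similarity matrix, with a simple step penalty standing in for the temporal factor. The function name dtw_align, the step_penalty parameter, and the 1 - similarity cost are illustrative placeholders, not the thesis's actual algorithm.

```python
import numpy as np

def dtw_align(sim, step_penalty=0.1):
    """Align two frame sequences given a precomputed similarity matrix.

    sim[i, j] is the similarity between frame i of video A and frame j of
    video B (e.g. cosine similarity of ROI descriptors). Standard DTW is run
    on cost = 1 - similarity; an extra penalty on insertion/deletion steps
    stands in for a temporal-continuity term.
    """
    n, m = sim.shape
    cost = 1.0 - sim
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j - 1],             # match: both videos advance one frame
                acc[i - 1, j] + step_penalty,  # a frame of video A is skipped
                acc[i, j - 1] + step_penalty,  # a frame of video B is skipped
            )
    # Backtrack to recover the warping path as pairs of aligned frame indices.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        moves = [acc[i - 1, j - 1],
                 acc[i - 1, j] + step_penalty,
                 acc[i, j - 1] + step_penalty]
        k = int(np.argmin(moves))
        if k == 0:
            i, j = i - 1, j - 1
        elif k == 1:
            i -= 1
        else:
            j -= 1
    return acc[n, m], path[::-1]


# Usage with a random similarity matrix between two hypothetical frame sequences.
sim = np.random.rand(40, 55)
total_cost, path = dtw_align(sim)
print(total_cost, path[:5])
```

Section 3.4.3 of the thesis treats path selection in more detail than this single scalar penalty can capture.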
dc.description.provenance: Made available in DSpace on 2021-06-08T05:56:11Z (GMT). No. of bitstreams: 1. ntu-100-R98922100-1.pdf: 1936813 bytes, checksum: 47d2830c4f7fb185244ce77bf6211ed1 (MD5). Previous issue date: 2011. (en)
dc.description.tableofcontents:
Oral Examination Committee Certification #
Abstract (Chinese) i
ABSTRACT ii
CONTENTS iii
LIST OF FIGURES v
LIST OF TABLES vi
Chapter 1 Introduction 1
Chapter 2 Related Work 5
Chapter 3 System Overview 7
3.1 ROI Detection 8
3.2 Feature Extraction 9
Color 9
Shape 9
Space-Time Interest Point 10
Gray Intensity 11
3.3 Acceleration 11
3.4 Time Sensitive Dynamic Time Warping 12
3.4.1 Dynamic Time Warping 12
3.4.2 Similarity Function 12
3.4.3 Path Selection 15
Chapter 4 Experiment and Discussion 18
4.1 Evaluation Metrics 18
4.2 Dataset and Experiment Setting 19
4.3 Feature Comparison 20
4.4 Evaluation of Hash 22
4.5 Evaluation of Temporal Factor 23
Chapter 5 Conclusion 26
REFERENCE 27
dc.language.iso: en
dc.subject: social network (zh_TW)
dc.subject: sequence matching (zh_TW)
dc.subject: video alignment (zh_TW)
dc.subject: dynamic time warping (zh_TW)
dc.subject: hashing (zh_TW)
dc.subject: video alignment (en)
dc.subject: sequence matching (en)
dc.subject: social network (en)
dc.subject: locality sensitive hashing (en)
dc.subject: dynamic time warping (en)
dc.title: 以事件為基礎之多重視角影片同步系統 (An Event-Based Multi-View Video Alignment System) (zh_TW)
dc.title: Automatic Alignment of Multi-View Event Videos by Fast Sequence Matching (en)
dc.type: Thesis
dc.date.schoolyear: 99-2
dc.description.degree: Master (碩士)
dc.contributor.oralexamcommittee: 李明穗, 楊奕軒
dc.subject.keyword: social network, sequence matching, video alignment, dynamic time warping, hashing (zh_TW)
dc.subject.keyword: social network, sequence matching, video alignment, dynamic time warping, locality sensitive hashing (en)
dc.relation.page: 27
dc.rights.note: Not authorized for public release (未授權)
dc.date.accepted: 2011-08-08
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) (zh_TW)
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) (zh_TW)
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
ntu-100-1.pdf, 1.89 MB, Adobe PDF, restricted access (not authorized for public release)