Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74455
Title: Weakly Supervised Video Re-localization of Action Segments with Multiscale Attention Model
Authors: Yung-Han Huang (黃永翰)
Advisor: Shyh-Kang Jeng (鄭士康)
Co-Advisor: Yen-Yu Lin (林彥宇)
Keyword: Video Re-localization, Deep Learning, Attention Mechanism, Co-attention Loss
Publication Year: 2019
Degree: Master
Abstract: Video re-localization aims to localize, in a reference video, the segment of interest, an action segment in this work, according to a given query video. In this thesis, we propose a multiscale attention model to address this problem under a weakly supervised setting; that is, the model is trained without labels for the location of the action segment in the reference video. Our model consists of three modules. The first employs a pre-trained C3D network as a feature extractor. The second is a multiscale attention module that computes the similarity between the query video and the reference video from features at multiple temporal scales, which better preserves local temporal information. Finally, a localization module predicts, for each time step of the reference video, whether its feature belongs to the query action segment. The model is trained with a co-attention loss, which separates an action instance from the reference video by exploiting the similarity between features. By adding a cross-entropy loss that exploits the given location labels, our model can also be easily adapted to the fully supervised setting. We evaluate our model on a public dataset and achieve state-of-the-art performance under both the weakly supervised and the fully supervised settings.
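The pipeline described in the abstract can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation: the function names (`temporal_pool`, `multiscale_relevance`, `co_attention_loss`), the margin value, and the exact form of the loss are assumptions for illustration. In the actual model, the inputs would be pre-trained C3D clip features, and the attention and localization modules would contain learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_pool(feats, window):
    """Average-pool features over a temporal window (stride 1, same length),
    giving a coarser temporal scale of the same sequence."""
    if window == 1:
        return feats
    out = np.empty_like(feats)
    half = window // 2
    for t in range(len(feats)):
        out[t] = feats[max(0, t - half):t + half + 1].mean(axis=0)
    return out

def relevance(query_feats, ref_feats):
    """Attention-weighted cosine similarity of each reference step to the query.
    query_feats: (Tq, D), ref_feats: (Tr, D). Returns (Tr,) scores in [-1, 1]."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    sim = r @ q.T                   # (Tr, Tq) pairwise cosine similarities
    attn = softmax(sim, axis=1)     # attend over query time steps
    return (attn * sim).sum(axis=1)

def multiscale_relevance(query_feats, ref_feats, scales=(1, 2, 4)):
    """Average the relevance computed at several temporal scales,
    preserving both fine and coarse temporal cues."""
    return np.mean(
        [relevance(query_feats, temporal_pool(ref_feats, s)) for s in scales],
        axis=0)

def co_attention_loss(fg_probs, rel, margin=0.2):
    """Hinge-style separation loss: reference steps predicted as the action
    (high fg_probs) should be more similar to the query than the rest."""
    fg = (fg_probs * rel).sum() / (fg_probs.sum() + 1e-8)
    bg = ((1.0 - fg_probs) * rel).sum() / ((1.0 - fg_probs).sum() + 1e-8)
    return max(0.0, margin - (fg - bg))
```

Because the loss compares only the relative similarity of predicted-foreground and predicted-background steps to the query, it needs no ground-truth segment boundaries, which is what makes the weakly supervised training possible.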
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74455
DOI: 10.6342/NTU201902914
Fulltext Rights: Paid authorization (有償授權)
Appears in Collections: Data Science Degree Program (資料科學學位學程)

Files in This Item:
File: ntu-108-1.pdf (Restricted Access), Size: 2.95 MB, Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
