NTU Theses and Dissertations Repository

Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/53690
Title: 利用感測器資料分析運動狀態進行第一人稱影片即時事件偵測
Real-Time Instant Event Detection in Egocentric Videos by Leveraging Sensor-Based Motion Context
Authors: Pei-Yun Hsu
許培芸
Advisor: 徐宏民
Keyword: 第一人稱視角影片, 感測器, 穿戴式裝置, 事件偵測
Egocentric Video, Sensor, Wearable Device, Event Detection
Publication Year: 2015
Degree: Master's (碩士)
Abstract: The growing popularity of wearable devices has produced a large volume of egocentric (first-person) videos, and with it a rising need to detect events in these videos. Unlike conventional approaches, wearable devices (e.g., Google Glass) are constrained in computing power and battery life, so we focus on real-time event detection and start recording only when an event occurs. Conventional methods generally record all video first and then analyze the visual content, which is time-consuming and therefore unsuitable for wearable devices. Since wearable devices are typically equipped with sensors, we propose a method that uses sensor data to infer the user's motion context and, based on that context, detect when events occur. We first compute features from the various sensors on the device, then use a hierarchical model to predict the user's current motion context from those features, and finally select the event-importance prediction model corresponding to that context to assign an importance score to each timestamp; a higher score indicates a more likely moment of an important event. With this score, the device can start recording short micro-videos only at high-scoring timestamps, saving both power and storage. In addition, we collected a dataset of egocentric daily-life videos, containing first-person videos recorded with Google Glass by multiple subjects together with the sensor data from the glasses, and used it to evaluate the method; the experimental results show that our method outperforms other approaches.
With the rapid growth of egocentric videos from wearable devices, the need for instant video event detection is emerging. Unlike conventional video event detection, it demands real-time detection and immediate video recording because of the limited computational resources of wearable devices (e.g., Google Glass). Conventional work analyzes video content in an offline process, and such visual analysis is time-consuming. Observing that wearable devices are usually equipped with sensors, we propose a novel approach for instant event detection in egocentric videos that leverages sensor-based motion context. We compute statistics of the sensor data as features, predict the user's current motion context with a hierarchical model, and then choose the corresponding ranking model to assign an importance score to each timestamp. With importance scores provided in real time, the camera on the wearable device can dynamically record micro-videos without wasting power and storage. In addition, we collected a challenging daily-life dataset called EDS (Egocentric Daily-life Videos with Sensor Data), which contains egocentric videos and sensor data recorded with Google Glass by different subjects. We evaluate our system on the EDS dataset, and the results show that our method outperforms other baselines.
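
The abstract describes the detection pipeline at a high level: compute statistics of the sensor streams as features, predict the user's motion context with a hierarchical model, score each timestamp with a context-specific ranking model, and trigger micro-video recording when the score is high. The Python sketch below illustrates that flow only; the window size, feature statistics, model types (scikit-learn-style predict() estimators are assumed), the recording threshold, and the camera interface are all illustrative assumptions, not the thesis implementation.

import numpy as np

WINDOW_SIZE = 100        # sensor samples per sliding window (assumed)
RECORD_THRESHOLD = 0.8   # importance score that triggers recording (assumed)

def sensor_features(window):
    # Per-axis statistics over a window of readings (shape: [WINDOW_SIZE, n_axes]),
    # e.g. accelerometer and gyroscope streams from the glasses.
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

def predict_motion_context(features, coarse_clf, fine_clfs):
    # Hierarchical prediction: a coarse classifier picks a broad state,
    # then a finer classifier for that state refines it.
    coarse = coarse_clf.predict([features])[0]
    return fine_clfs[coarse].predict([features])[0]

def importance_score(features, context, rankers):
    # Each motion context has its own ranking/regression model that maps
    # the features to an importance score for the current timestamp.
    return float(rankers[context].predict([features])[0])

def process_window(window, coarse_clf, fine_clfs, rankers, camera):
    feats = sensor_features(window)
    context = predict_motion_context(feats, coarse_clf, fine_clfs)
    score = importance_score(feats, context, rankers)
    if score >= RECORD_THRESHOLD:
        camera.record_micro_video()   # hypothetical camera API on the device
    return context, score

In this sketch, only windows whose score clears the threshold cause the camera to record, which is how the approach saves power and storage compared with recording everything and analyzing it offline.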
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/53690
Fulltext Rights: Paid authorization (有償授權)
Appears in Collections: Department of Computer Science and Information Engineering (資訊工程學系)

Files in This Item:
File: ntu-104-1.pdf (Restricted Access)
Size: 2.53 MB
Format: Adobe PDF


