Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/53690

Full metadata record (DC field: value [language])
dc.contributor.advisor: 徐宏民
dc.contributor.author: Pei-Yun Hsu [en]
dc.contributor.author: 許培芸 [zh_TW]
dc.date.accessioned: 2021-06-16T02:27:41Z
dc.date.available: 2015-12-19
dc.date.copyright: 2015-08-11
dc.date.issued: 2015
dc.date.submitted: 2015-08-03
dc.identifier.citation: Bibliography
[1] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST).
[2] M. Gygli, H. Grabner, H. Riemenschneider, and L. Van Gool. Creating summaries from user videos. In ECCV, 2014.
[3] Y. J. Lee, J. Ghosh, and K. Grauman. Discovering important people and objects for egocentric video summarization. In CVPR, 2012.
[4] Z. Lu and K. Grauman. Story-driven summarization for egocentric video. In CVPR, 2013.
[5] Y. Poleg, C. Arora, and S. Peleg. Temporal segmentation of egocentric videos. In CVPR, 2014.
[6] X. Wang, Y.-G. Jiang, Z. Chai, Z. Gu, X. Du, and D. Wang. Real-time summarization of user-generated videos based on semantic recognition. In Proceedings of the ACM International Conference on Multimedia, 2014.
[7] J. Windau and L. Itti. Situation awareness via sensor-equipped eyeglasses. In Intelligent Robots and Systems (IROS), 2013.
[8] B. Xiong and K. Grauman. Detecting snap points in egocentric video with a web photo prior. In ECCV, 2014.
[9] J. Xu and H. Li. AdaRank: A boosting algorithm for information retrieval. In SIGIR, 2007.
[10] B. Zhao and E. P. Xing. Quasi real-time summarization for consumer videos. In CVPR, 2014.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/53690
dc.description.abstract: The growing popularity of wearable devices has produced large volumes of egocentric (first-person) video, and with it a growing need to detect events in these videos. Unlike the settings addressed by conventional methods, wearable devices such as Google Glass are constrained in computing power and battery, so we focus on real-time event detection and start recording only while an event is happening. Most conventional methods must first record all the video and then analyze its visual content, which is very time-consuming and therefore unsuitable for wearable devices. Since wearable devices are usually equipped with sensors, we propose a method that uses sensor data to infer the user's motion context and, based on that context, detect when events occur. We first compute features from the data of the various sensors on the device, then use a hierarchical model over these features to predict the user's current motion context, and select the importance-prediction model corresponding to that context to assign an importance score to every timestamp; the higher the score, the more likely an important event is happening at that moment. Given this score, recording of a short micro-video is triggered only at high-scoring timestamps, which saves power and storage. In addition, we collected an egocentric daily-life video dataset containing first-person videos and on-glass sensor data recorded with Google Glass by several different subjects. We use this dataset to evaluate the method described above, and experimental results show that our method outperforms other approaches. [zh_TW]
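The abstract above (and its English version below) describes a two-stage pipeline: statistics of raw sensor data serve as features, and a hierarchical model maps each feature window to a motion context. The sketch below is a minimal illustration of that first stage, not the authors' code; the window contents, the specific statistics, the class names, and the use of scikit-learn SVMs (the bibliography cites LIBSVM) are all assumptions.

```python
# Hypothetical sketch (not the authors' code): per-window statistics of sensor data
# as features, followed by a two-level "hierarchical" motion-context classifier.
# Window contents, statistics, class names, and the SVC choice are assumptions.
import numpy as np
from sklearn.svm import SVC

def window_features(samples):
    """samples: (n, d) array of raw sensor readings (e.g. accelerometer + gyroscope)."""
    feats = [samples.mean(axis=0), samples.std(axis=0),
             np.abs(samples).max(axis=0), (samples ** 2).sum(axis=0)]
    return np.concatenate(feats)  # mean / std / peak / energy per channel

class HierarchicalMotionClassifier:
    """Level 1 separates coarse states (e.g. static vs. moving); level 2 refines each branch."""
    def __init__(self):
        self.level1 = SVC()
        self.level2 = {"static": SVC(), "moving": SVC()}

    def fit(self, X, coarse_labels, fine_labels):
        self.level1.fit(X, coarse_labels)
        for branch, clf in self.level2.items():
            idx = [i for i, c in enumerate(coarse_labels) if c == branch]
            clf.fit(X[idx], [fine_labels[i] for i in idx])

    def predict(self, x):
        branch = self.level1.predict([x])[0]
        return self.level2[branch].predict([x])[0]
```

In this sketch each incoming sensor window is reduced to a fixed-length statistics vector before classification, which keeps the per-timestamp cost low, in line with the abstract's emphasis on the limited computing budget of wearable devices.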
dc.description.abstract: With the rapid growth of egocentric videos from wearable devices, the need for instant video event detection is emerging. Unlike conventional video event detection, this setting demands real-time detection and immediate video recording because of the limited computational budget of wearable devices (e.g., Google Glass). Conventional approaches analyze video content in an offline process, and such visual analysis is too time-consuming for this setting. Observing that wearable devices usually come with sensors, we propose a novel approach for instant event detection in egocentric videos by leveraging sensor-based motion context. We compute statistics of the sensor data as features, predict the user's current motion context with a hierarchical model, and then choose the corresponding ranking model to rate the importance score of each timestamp. With the importance score provided in real time, the camera on the wearable device can dynamically record micro-videos without wasting power and storage. In addition, we collected a challenging daily-life dataset called EDS (Egocentric Daily-life Videos with Sensor Data), which contains both egocentric videos and sensor data recorded by Google Glass from different subjects. We evaluate the performance of our system on the EDS dataset, and the results show that our method outperforms the baselines. [en]
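Continuing the abstract's description, once the motion context is known, a context-specific ranking model scores each timestamp and the camera records a micro-video only when the score is high. The loop below is a hedged sketch of that second stage; the `rankers` scoring interface, the threshold value, and the `camera.record_micro_video` call are hypothetical placeholders rather than an actual Google Glass API (the bibliography lists AdaRank and LIBSVM as candidate ranking/classification tools, but the exact model is not specified here).

```python
# Hypothetical sketch of the online detection loop, not the authors' implementation.
# `rankers` maps a motion-context label to an object with a score() method, `camera`
# exposes a record_micro_video() call, and the 0.7 threshold is an arbitrary example.

def run_instant_detection(sensor_stream, motion_clf, rankers, camera, threshold=0.7):
    """sensor_stream yields fixed-length windows of raw sensor samples."""
    for window in sensor_stream:
        x = window_features(window)        # sensor statistics (see previous sketch)
        context = motion_clf.predict(x)    # e.g. "walking", "sitting", ...
        score = rankers[context].score(x)  # context-specific importance of this timestamp
        if score >= threshold:
            # Record only around likely-important moments to save power and storage.
            camera.record_micro_video(seconds=10)
```

In this sketch, raising the threshold records fewer micro-videos, trading coverage of important moments for additional power and storage savings.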
dc.description.provenance: Made available in DSpace on 2021-06-16T02:27:41Z (GMT). No. of bitstreams: 1; ntu-104-R02922023-1.pdf: 2592568 bytes, checksum: 4180f0dd47331c7c0ffdefbcce631573 (MD5). Previous issue date: 2015 [en]
dc.description.tableofcontents: Contents
誌謝 (Acknowledgements, Chinese) ii
Acknowledgements iii
摘要 (Abstract, Chinese) iv
Abstract v
1 Introduction 1
2 Technical Details 4
2.1 System Overview 4
2.2 Sensor Features 5
2.3 Motion Context Classification 5
2.4 Importance Rating 6
3 Experiment Results 8
3.1 Dataset 8
3.2 Evaluation 9
3.2.1 Motion Context Classification 9
3.2.2 Importance Rating 10
4 Conclusion 12
Bibliography 12
dc.language.iso: en
dc.subject: 事件偵測 (event detection) [zh_TW]
dc.subject: 穿戴式裝置 (wearable device) [zh_TW]
dc.subject: 感測器 (sensor) [zh_TW]
dc.subject: 第一人稱視角影片 (egocentric video) [zh_TW]
dc.subject: Event Detection [en]
dc.subject: Egocentric Video [en]
dc.subject: Sensor [en]
dc.subject: Wearable Device [en]
dc.title: 利用感測器資料分析運動狀態進行第一人稱影片即時事件偵測 [zh_TW]
dc.title: Real-Time Instant Event Detection in Egocentric Videos by Leveraging Sensor-Based Motion Context [en]
dc.type: Thesis
dc.date.schoolyear: 103-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 陳祝嵩, 葉梅珍, 余能豪
dc.subject.keyword: 第一人稱視角影片, 感測器, 穿戴式裝置, 事件偵測 (egocentric video, sensor, wearable device, event detection) [zh_TW]
dc.subject.keyword: Egocentric Video, Sensor, Wearable Device, Event Detection [en]
dc.relation.page: 14
dc.rights.note: 有償授權 (authorized with compensation)
dc.date.accepted: 2015-08-04
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File: ntu-104-1.pdf (restricted; not authorized for public access)
Size: 2.53 MB
Format: Adobe PDF


Except where otherwise noted, all items in this repository are protected by copyright, with all rights reserved.
