NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74442
Full metadata record
DC field: value [language]
dc.contributor.advisor: 陳建錦 (Chien Chin Chen)
dc.contributor.author: Yu-Chen Huang [en]
dc.contributor.author: 黃于真 [zh_TW]
dc.date.accessioned: 2021-06-17T08:36:01Z
dc.date.available: 2020-08-20
dc.date.copyright: 2019-08-20
dc.date.issued: 2019
dc.date.submitted: 2019-08-08
dc.identifier.citation:
[1] BARBIERI, F., ANKE, L.E., BALLESTEROS, M., SOLER, J., and SAGGION, H., 2017. Towards the understanding of gaming audiences by modeling twitch emotes. In Proceedings of the 3rd Workshop on Noisy User-generated Text, 11-20.
[2] BODLA, N., SINGH, B., CHELLAPPA, R., and DAVIS, L.S., 2017. Soft-NMS: Improving Object Detection With One Line of Code. In Proceedings of the IEEE International Conference on Computer Vision, 5561-5569.
[3] CABA HEILBRON, F., CARLOS NIEBLES, J., and GHANEM, B., 2016. Fast temporal activity proposals for efficient detection of human actions in untrimmed videos. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1914-1923.
[4] CABA HEILBRON, F., ESCORCIA, V., GHANEM, B., and CARLOS NIEBLES, J., 2015. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 961-970.
[5] CHAO, Y.-W., VIJAYANARASIMHAN, S., SEYBOLD, B., ROSS, D.A., DENG, J., and SUKTHANKAR, R., 2018. Rethinking the faster r-cnn architecture for temporal action localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1130-1139.
[6] DEVLIN, J., CHANG, M.-W., LEE, K., and TOUTANOVA, K., 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
[7] DIBA, A., SHARMA, V., and VAN GOOL, L., 2017. Deep temporal linear encoding networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2329-2338.
[8] ESCORCIA, V., HEILBRON, F.C., NIEBLES, J.C., and GHANEM, B., 2016. DAPs: Deep action proposals for action understanding. In European Conference on Computer Vision, Springer, 768-784.
[9] ESULI, A. and SEBASTIANI, F., 2006. SentiWordNet: A publicly available lexical resource for opinion mining. In LREC, 417-422.
[10] FU, C.-Y., LEE, J., BANSAL, M., and BERG, A.C., 2017. Video highlight prediction using audience chat reactions. arXiv preprint arXiv:1707.08559.
[11] GAO, J., CHEN, K., and NEVATIA, R., 2018. Ctap: Complementary temporal action proposal generation. In Proceedings of the European Conference on Computer Vision (ECCV), 68-83.
[12] GAO, J., YANG, Z., CHEN, K., SUN, C., and NEVATIA, R., 2017. Turn tap: Temporal unit regression network for temporal action proposals. In Proceedings of the IEEE International Conference on Computer Vision, 3628-3636.
[13] HU, M. and LIU, B., 2004. Mining opinion features in customer reviews. In AAAI, 755-760.
[14] JIAO, Y., LI, Z., HUANG, S., YANG, X., LIU, B., and ZHANG, T., 2018. Three-Dimensional Attention-Based Deep Ranking Model for Video Highlight Detection. IEEE Transactions on Multimedia 20, 10, 2693-2705.
[15] JUHLIN, O., ENGSTRÖM, A., and REPONEN, E., 2010. Mobile broadcasting: the whats and hows of live video as a social medium. In Proceedings of the 12th international conference on Human computer interaction with mobile devices and services, ACM, 35-44.
[16] KHAN, A., SOHAIL, A., ZAHOORA, U., and QURESHI, A.S., 2019. A survey of the recent architectures of deep convolutional neural networks. arXiv preprint arXiv:1901.06032.
[17] LIN, T., ZHAO, X., SU, H., WANG, C., and YANG, M., 2018. Bsn: Boundary sensitive network for temporal action proposal generation. In Proceedings of the European Conference on Computer Vision (ECCV), 3-19.
[18] LIPTON, Z.C., BERKOWITZ, J., and ELKAN, C., 2015. A critical review of recurrent neural networks for sequence learning. arXiv preprint arXiv:1506.00019.
[19] MAINIERI, B.O., BRAGA, P.H.C., DA SILVA, L.A., and OMAR, N. Text Mining of Audience Opinion in eSports Events.
[20] METTES, P., VAN GEMERT, J.C., CAPPALLO, S., MENSINK, T., and SNOEK, C.G., 2015. Bag-of-fragments: Selecting and encoding video fragments for event detection and recounting. In Proceedings of the 5th ACM on International Conference on Multimedia Retrieval, ACM, 427-434.
[21] MIKOLOV, T., SUTSKEVER, I., CHEN, K., CORRADO, G.S., and DEAN, J., 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, 3111-3119.
[22] OTSUKA, I., NAKANE, K., DIVAKARAN, A., HATANAKA, K., and OGAWA, M., 2005. A highlight scene detection and video summarization system using audio feature for a personal video recorder. IEEE Transactions on Consumer Electronics 51, 1, 112-116.
[23] PAK, A. and PAROUBEK, P., 2010. Twitter as a corpus for sentiment analysis and opinion mining. In LREC, 1320-1326.
[24] PANG, B. and LEE, L., 2008. Opinion mining and sentiment analysis. Foundations and Trends® in Information Retrieval 2, 1–2, 1-135.
[25] PENG, X., ZOU, C., QIAO, Y., and PENG, Q., 2014. Action recognition with stacked fisher vectors. In European Conference on Computer Vision, Springer, 581-595.
[26] PRENSKY, M., 2001. Digital natives, digital immigrants part 1. On the horizon 9, 5, 1-6.
[27] REN, S., HE, K., GIRSHICK, R., and SUN, J., 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, 91-99.
[28] ROCHAN, M., YE, L., and WANG, Y., 2018. Video summarization using fully convolutional sequence networks. In Proceedings of the European Conference on Computer Vision (ECCV), 347-363.
[29] RUI, Y., GUPTA, A., and ACERO, A., 2000. Automatically extracting highlights for TV baseball programs. In Proceedings of the eighth ACM international conference on Multimedia, ACM, 105-115.
[30] SHOU, Z., WANG, D., and CHANG, S.-F., 2016. Temporal action localization in untrimmed videos via multi-stage cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1049-1058.
[31] SIMONYAN, K. and ZISSERMAN, A., 2014. Two-stream convolutional networks for action recognition in videos. In Advances in neural information processing systems, 568-576.
[32] SIMONYAN, K. and ZISSERMAN, A., 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
[33] TRAN, D., BOURDEV, L., FERGUS, R., TORRESANI, L., and PALURI, M., 2015. Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE international conference on computer vision, 4489-4497.
[34] VASWANI, A., SHAZEER, N., PARMAR, N., USZKOREIT, J., JONES, L., GOMEZ, A.N., KAISER, Ł., and POLOSUKHIN, I., 2017. Attention is all you need. In Advances in neural information processing systems, 5998-6008.
[35] WANG, H. and SCHMID, C., 2013. Action recognition with improved trajectories. In Proceedings of the IEEE international conference on computer vision, 3551-3558.
[36] WANG, L., XIONG, Y., WANG, Z., QIAO, Y., LIN, D., TANG, X., and VAN GOOL, L., 2016. Temporal segment networks: Towards good practices for deep action recognition. In European Conference on Computer Vision, Springer, 20-36.
[37] XIONG, Y., ZHAO, Y., WANG, L., LIN, D., and TANG, X., 2017. A pursuit of temporal accuracy in general activity detection. arXiv preprint arXiv:1703.02716.
[38] XU, C., WANG, J., WAN, K., LI, Y., and DUAN, L., 2006. Live sports event detection based on broadcast video and web-casting text. In Proceedings of the 14th ACM international conference on Multimedia, ACM, 221-230.
[39] XU, H., DAS, A., and SAENKO, K., 2017. R-c3d: Region convolutional 3d network for temporal activity detection. In Proceedings of the IEEE international conference on computer vision, 5783-5792.
[40] YAO, T., MEI, T., and RUI, Y., 2016. Highlight detection with pairwise deep ranking for first-person video summarization. In Proceedings of the IEEE conference on computer vision and pattern recognition, 982-990.
[41] YU, Y., LEE, S., NA, J., KANG, J., and KIM, G., 2018. A Deep Ranking Model for Spatio-Temporal Highlight Detection from a 360° Video. In Thirty-Second AAAI Conference on Artificial Intelligence.
[42] ZHANG, L., WANG, S., and LIU, B., 2018. Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 8, 4, e1253.
[43] ZHAO, J., LIU, K., and XU, L., 2016. Sentiment analysis: Mining opinions, sentiments, and emotions. MIT Press.
[44] ZHU, W., HU, J., SUN, G., CAO, X., and QIAO, Y., 2016. A key volume mining deep framework for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1991-1999.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74442
dc.description.abstract: Live streaming has become a wave on the Internet, bringing viewers a more immersive experience and more immediate interaction and attracting many digital natives. Most live-streaming services provide a real-time chat room to increase interactivity. Besides conveying viewers' in-the-moment emotions and thoughts, these messages offer another rich source of information related to the video content, one that can be mined for the audience's opinions about it. We therefore apply this idea to highlight extraction, using the opinions hidden behind viewers' messages to identify where highlights begin and end in a video. We also adopt the currently mainstream two-staged network concept: through two-stage learning, the model first filters out possible segments and then estimates the probability that each segment is a highlight, which reduces training time and improves prediction performance. Ultimately, this study aims to design a system that can efficiently locate the highlights viewers love through their messages. [zh_TW]
dc.description.abstract: Live streaming has surged in popularity on the Internet because it brings people a more on-the-spot viewing experience and more immediate interaction. Most live-streaming services provide instant chat rooms to increase interaction. These messages convey the emotions of the audience and provide another rich source of information related to the video content, which can be used to mine the audience's opinions on it. We therefore use this concept to extract highlights, relying on the information implicit in audience messages to locate highlights in the video. We also introduce the currently mainstream two-staged network concept: through two-stage learning, we first filter out possible segments and then measure the probability that these segments are highlights, which reduces training time and improves prediction performance. In the end, this study aims to design a system that can efficiently locate popular highlights through chat messages. [en]
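The two-stage design the abstract describes can be pictured concretely. Below is a minimal PyTorch sketch, assuming per-time-step chat-message feature vectors as input; the module names, dimensions, and threshold are illustrative assumptions, not the thesis's actual implementation:

```python
# Minimal sketch of a two-staged highlight extractor over chat features.
# Stage 1 scores every time step; stage 2 re-scores candidate segments
# grown from consecutive high-scoring steps. All sizes are assumptions.
import torch
import torch.nn as nn

class SequenceProbabilityGenerator(nn.Module):
    """Stage 1: per-time-step highlight probability from chat features."""
    def __init__(self, feat_dim=300, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                     # x: (batch, time, feat_dim)
        h, _ = self.rnn(x)                    # (batch, time, 2*hidden)
        return torch.sigmoid(self.head(h)).squeeze(-1)   # (batch, time)

class CandidateEvaluator(nn.Module):
    """Stage 2: probability that a pooled candidate segment is a highlight."""
    def __init__(self, feat_dim=300, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, seg):                   # seg: (n_candidates, feat_dim)
        return torch.sigmoid(self.mlp(seg)).squeeze(-1)

def propose_candidates(probs, thresh=0.5):
    """Group consecutive time steps whose stage-1 score exceeds thresh."""
    spans, start = [], None
    for t, p in enumerate(probs.tolist()):
        if p >= thresh and start is None:
            start = t
        elif p < thresh and start is not None:
            spans.append((start, t)); start = None
    if start is not None:
        spans.append((start, len(probs)))
    return spans

x = torch.randn(1, 600, 300)                  # 600 time steps of chat features
stage1, stage2 = SequenceProbabilityGenerator(), CandidateEvaluator()
probs = stage1(x)[0]
spans = propose_candidates(probs)
if spans:                                     # mean-pool each span, then rank
    pooled = torch.stack([x[0, s:e].mean(0) for s, e in spans])
    scores = stage2(pooled)
```

A final pruning step, the "Redundant Candidate Suppression" listed in the table of contents below, would then remove overlapping spans; it is omitted from this sketch.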
dc.description.provenance: Made available in DSpace on 2021-06-17T08:36:01Z (GMT). No. of bitstreams: 1. ntu-108-R06725028-1.pdf: 1104399 bytes, checksum: 11562839039f99caaeabb7ecdcdb3a6e (MD5). Previous issue date: 2019 [en]
dc.description.tableofcontents:
摘要 (Abstract) i
ABSTRACT ii
TABLE OF CONTENTS iii
LIST OF FIGURES iv
LIST OF TABLES v
1 INTRODUCTION 1
2 RELATED WORKS 4
2.1 Datasets and Evaluation Metrics 4
2.2 Temporal Action Detection and Two-staged Network 5
3 TWO-STAGED HIGHLIGHT EXTRACTION NETWORK 11
3.1 Sequence Probability Generator 12
3.2 Candidate Generator and Evaluator 14
3.3 Redundant Candidate Suppression 15
4 EXPERIMENT 16
4.1 Datasets and Evaluation Metrics 16
4.2 Effects of System Parameters on Training 18
4.3 Comparisons with other Methods 19
5 CONCLUSION 20
6 ACKNOWLEDGMENTS 21
7 REFERENCES 21
dc.language.iso: en
dc.subject: 兩階段網絡 (two-staged network) [zh_TW]
dc.subject: 循環神經網路 (recurrent neural network) [zh_TW]
dc.subject: 深度學習 (deep learning) [zh_TW]
dc.subject: 群眾外包 (crowdsourcing) [zh_TW]
dc.subject: 精彩剪輯 (highlight extraction) [zh_TW]
dc.subject: 直播 (live streaming) [zh_TW]
dc.subject: Crowd Sourcing [en]
dc.subject: Deep Learning [en]
dc.subject: Highlight Extraction [en]
dc.subject: Live Streaming [en]
dc.subject: Two Staged Network [en]
dc.subject: Recurrent Neural Network [en]
dc.title: 結合兩階段網絡於影片精彩片段之擷取 [zh_TW]
dc.title: Video Highlight Extraction with a Two-staged Network [en]
dc.type: Thesis
dc.date.schoolyear: 107-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 張詠淳 (Yung-Chun Chang), 陳孟彰 (Meng Chang Chen)
dc.subject.keyword: 深度學習, 循環神經網路, 兩階段網絡, 直播, 精彩剪輯, 群眾外包 [zh_TW]
dc.subject.keyword: Deep Learning, Recurrent Neural Network, Two Staged Network, Live Streaming, Highlight Extraction, Crowd Sourcing [en]
dc.relation.page: 26
dc.identifier.doi: 10.6342/NTU201902938
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2019-08-11
dc.contributor.author-college: 管理學院 (College of Management) [zh_TW]
dc.contributor.author-dept: 資訊管理學研究所 (Graduate Institute of Information Management) [zh_TW]
Appears in collections: 資訊管理學系 (Department of Information Management)

Files in this item:
ntu-108-1.pdf: 1.08 MB, Adobe PDF (not authorized for public access)


All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
