Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/54930

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 張智星(Jyh-Shing Jang) | |
| dc.contributor.author | Yu-Chung Yang | en |
| dc.contributor.author | 楊育宗 | zh_TW |
| dc.date.accessioned | 2021-06-16T03:41:52Z | - |
| dc.date.available | 2017-03-13 | |
| dc.date.copyright | 2015-03-13 | |
| dc.date.issued | 2015 | |
| dc.date.submitted | 2015-02-11 | |
| dc.identifier.citation | [1] Lorenzo Torresani, Martin Szummer, and Andrew Fitzgibbon. Efficient object category recognition using classemes. In Computer Vision–ECCV 2010, pages 776–789. Springer, 2010.
[2] Damian Borth, Tao Chen, Rongrong Ji, and Shih-Fu Chang. SentiBank: large-scale ontology and classifiers for detecting sentiment and emotions in visual content. In Proceedings of the 21st ACM International Conference on Multimedia, pages 459–460. ACM, 2013.
[3] Li-Jia Li, Hao Su, Li Fei-Fei, and Eric P. Xing. Object bank: A high-level image representation for scene classification & semantic feature sparsification. In Advances in Neural Information Processing Systems, pages 1378–1386, 2010.
[4] Bryan C. Russell, Antonio Torralba, Kevin P. Murphy, and William T. Freeman. LabelMe: a database and web-based tool for image annotation. International Journal of Computer Vision, 77(1-3):157–173, 2008.
[5] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[6] Yu-Gang Jiang, Baohan Xu, and Xiangyang Xue. Predicting emotions in user-generated videos. The 28th AAAI Conference on Artificial Intelligence, 2014.
[7] Ming-Ju Wu, Zhi-Sheng Chen, JR Jang, Jia-Min Ren, Yi-Hsung Li, and Chun-Hung Lu. Combining visual and acoustic features for music genre classification. In Machine Learning and Applications and Workshops (ICMLA), 2011 10th International Conference on, volume 2, pages 124–129. IEEE, 2011.
[8] Milind Naphade, John R. Smith, Jelena Tesic, Shih-Fu Chang, Winston Hsu, Lyndon Kennedy, Alexander Hauptmann, and Jon Curtis. Large-scale concept ontology for multimedia. IEEE MultiMedia, 13(3):86–91, 2006.
[9] Robert Plutchik. The nature of emotions. American Scientist, 89(4):344–350, 2001.
[10] Pedro F. Felzenszwalb, Ross B. Girshick, David McAllester, and Deva Ramanan. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1627–1645, 2010.
[11] Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 2, pages 2169–2178. IEEE, 2006.
[12] Aude Oliva and Antonio Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42(3):145–175, 2001.
[13] Christiane Fellbaum. WordNet. Wiley Online Library, 1998.
[14] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27, 2011.
[15] Bernhard Schölkopf, Robert C. Williamson, Alex J. Smola, John Shawe-Taylor, and John C. Platt. Support vector method for novelty detection. In NIPS, volume 12, pages 582–588, 1999.
[16] Mike Thelwall. Heart and soul: Sentiment strength detection in the social web with SentiStrength. Cyberemotions, pages 1–14, 2013.
[17] National Taiwan University Natural Language Processing Laboratory. 中文情感極性辭典 (Chinese Sentiment Polarity Dictionary).
[18] Wikipedia. 維基百科,自由的百科全書 (Chinese Wikipedia, the free encyclopedia).
[19] Beth Logan et al. Mel frequency cepstral coefficients for music modeling. In ISMIR, 2000.
[20] Rene Marcelino Abritta Teixeira, Toshihiko Yamasaki, and Kiyoharu Aizawa. Determination of emotional content of video clips by low-level audiovisual features. Multimedia Tools and Applications, 61(1):21–49, 2012. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/54930 | - |
| dc.description.abstract | 有別於過往使用電子郵件或是部落格的方式,現在網路使用者更常選擇在社群平台上分享文字、圖片、甚至影片,進而衍生出許多關於社群平台的研究,而能否用電腦來判斷使用者所想分享的情緒,就是其中一個有趣的議題。近年來,文字、圖片及影片在各自的領域都有許多深入的研究,但針對的大多為單一領域。在這篇論文中我們提出的是一個可以同時處理三種不同形態的資料的系統,更為貼近實際上所會面對的資料。在此我們提出一套程序化的方法,分別用建立視覺情緒知網的方式,作為圖片情緒分析;One-hot 文字特徵表示方法及 Word2vec 文字向量化工具,作為文字情緒分析;結合影片中影像及聲音的特徵,作為影片情緒分析的依據。最後將訓練出來的模型加以結合,對使用者所發佈的動態做預測,試圖讓電腦能自動判斷其中所含有的情緒為正向或是負向。我們收集了 1113 則 Facebook 上的動態作為測試資料,以驗證我們系統中文字及圖片部分的情緒辨認效果,亦收集了 1101 個帶有情緒的影片資料,來測試系統中影片部分的效果。雖然因為預測情緒需耗費大量時間,及在影片情緒的辨識上尚未得到令人滿意的結果,我們仍然提出了一個對社群平台上動態情緒辨識的系統原型,在圖片情緒辨識上的效能追上、及文字情緒辨識大幅超越 Sentibank 現所提出的方法。 | zh_TW |
| dc.description.abstract | Nowadays, Internet users prefer to share texts, images, and videos on social networks rather than via e-mail or blogs, giving rise to many research topics related to social networks. One such problem is to automatically identify the sentiment connoted by a user's post. Image, text, and video sentiment recognition have each been studied extensively in recent years, but mostly in isolation. In this thesis we propose a comprehensive system that handles all three types of content in a user's posts. Specifically, we propose a procedural system that uses the SentiBank visual sentiment ontology for image sentiment recognition, one-hot and Word2vec representations for text sentiment analysis, and a combination of visual and audio features for video sentiment recognition. Finally, we combine these three models to predict whether the sentiment of a user's post is positive or negative. We collected 1113 Facebook posts with text and images as our test set for evaluating image and text input, and a dataset of 1101 emotional videos for evaluating video input. Although the proposed system does not yet perform satisfactorily in terms of computation time and recognition accuracy for video input, it achieves recognition accuracy comparable to the method proposed by SentiBank for image input and substantially outperforms it for text input. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-16T03:41:52Z (GMT). No. of bitstreams: 1 ntu-104-R01922056-1.pdf: 25203134 bytes, checksum: e82249631f1f3aa0496aafe1327f55ac (MD5) Previous issue date: 2015 | en |
| dc.description.tableofcontents | 1 Introduction 1
2 Related Work 4
2.1 Classemes 4
2.2 Visual Sentiment Ontology 5
2.3 Object bank 6
2.4 LabelMe 7
2.5 Google Word2vec 8
2.6 Predicting Emotions in User-Generated Videos 9
2.7 Music Genre Classification 12
3 System Description 14
3.1 Problem Definition 14
3.2 Image Sentiment Recognition 14
3.2.1 Visual Sentiment Ontology 14
3.2.2 ANP Image Collection 16
3.2.3 Image Feature Extraction by Object Bank 17
3.2.4 Model Generation of ANP by One-class Support Vector Machine 18
3.2.5 Processing Flow of Images 18
3.2.6 SentiStrength 19
3.3 Text Sentiment Recognition 21
3.3.1 One-hot Representation 21
3.3.2 Word2vec Representation 21
3.3.3 Sentence Representation by Term Frequency and Inverse Document Frequency Weighting Method 23
3.4 Video Sentiment Analysis 23
3.4.1 Extract Visual Features of Video 25
3.4.2 Extract Audio Features of Video 26
3.5 Combining Text, Image, and Video Features 28
3.5.1 Principal Component Analysis 28
3.5.2 Combining Text and Image Features 28
4 Performance Evaluation 29
4.1 Methodology 29
4.2 Dataset 30
4.2.1 ANPs Models Dataset 30
4.2.2 Chinese Facebook Dataset 30
4.2.3 Emotion Video Dataset 30
4.3 Results and Discussion 32
4.3.1 Experiment: Image Sentiment Recognition on Twitter Dataset 32
4.3.2 Experiment: Image Sentiment Recognition on Chinese Facebook Dataset 33
4.3.3 Experiment: Text Sentiment Analysis on Chinese Facebook Dataset 35
4.3.4 Experiment: Sentiment Recognition on Facebook Dataset 36
4.3.5 Experiment: Visual Part of Video Sentiment Analysis on Emotion Video Dataset 37
4.3.6 Experiment: Audio Part of Video Sentiment Analysis on Emotion Video Dataset 39
4.3.7 Experiment: Video Sentiment Analysis on Emotion Video Dataset 40
5 Conclusions and Future Work 42
5.1 Conclusions 42
5.2 Future Work 43
A Frame by Frame Example of Predicting Results for One Video 45
B Object List of Objectbank 52
Bibliography 55 | |
| dc.language.iso | en | |
| dc.subject | 圖片情緒辨識 | zh_TW |
| dc.subject | 社群網路 | zh_TW |
| dc.subject | 影片情緒辨識 | zh_TW |
| dc.subject | 文字情緒辨識 | zh_TW |
| dc.subject | text sentiment recognition | en |
| dc.subject | social networks | en |
| dc.subject | video sentiment recognition | en |
| dc.subject | image sentiment recognition | en |
| dc.title | 基於文字、圖片及影像對社群平台訊息進行情緒辨識的初步研究 | zh_TW |
| dc.title | An Initial Study on Sentiment Recognition of Social Network Messages Based on Texts, Images, and Videos | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 103-1 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 林嘉文(Chia-Wen Lin),葉梅珍(Mei-Chen Yeh) | |
| dc.subject.keyword | 社群網路,圖片情緒辨識,文字情緒辨識,影片情緒辨識 | zh_TW |
| dc.subject.keyword | social networks, image sentiment recognition, text sentiment recognition, video sentiment recognition | en |
| dc.relation.page | 57 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2015-02-12 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
| Appears in Collections: | Department of Computer Science and Information Engineering | |
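The text pipeline named in the abstract and table of contents (Section 3.3.3: sentence representation by TF-IDF weighting over Word2vec word vectors) can be sketched as follows. This is a minimal illustration, not the thesis's actual code: the function names, the toy two-document corpus, and the hand-made 2-dimensional "word vectors" are all hypothetical stand-ins for a trained Word2vec model.

```python
import math
from collections import Counter

def tfidf_weights(docs):
    """Per-document TF-IDF weights.

    docs: list of token lists. Returns one {token: weight} dict per
    document, using raw term frequency and the log-scaled inverse
    document frequency idf(t) = log(N / df(t)).
    """
    n_docs = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency per token
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: tf[t] * math.log(n_docs / df[t]) for t in tf})
    return weights

def sentence_vector(doc, word_vecs, weights, dim):
    """TF-IDF-weighted average of word vectors for one sentence.

    Tokens missing from word_vecs, or with zero weight (words that
    appear in every document), are skipped, mirroring the usual
    handling of out-of-vocabulary tokens with Word2vec.
    """
    total = [0.0] * dim
    norm = 0.0
    for token in doc:
        w = weights.get(token, 0.0)
        if token in word_vecs and w > 0.0:
            for i, v in enumerate(word_vecs[token]):
                total[i] += w * v
            norm += w
    return [x / norm for x in total] if norm else total
```

With a toy corpus `[["good", "movie"], ["bad", "movie"]]`, "movie" appears in both documents, so its IDF (and hence its weight) is zero and only the sentiment-bearing words contribute to each sentence vector.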
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-104-1.pdf (Restricted Access) | 24.61 MB | Adobe PDF |
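Section 3.5 of the table of contents combines the text, image, and video models into a single post-level prediction. The record does not spell out the fusion rule, so the weighted average below is only an assumed illustration: each per-modality classifier is taken to emit a score in [-1, 1] (negative to positive), missing modalities are simply absent from the input, and the fusion weights are hypothetical.

```python
def fuse_scores(scores, weights=None):
    """Combine per-modality sentiment scores into one post-level score.

    scores:  dict such as {"text": 0.8, "image": -0.2}, values in [-1, 1].
    weights: optional per-modality weights; defaults to equal weighting.
    Returns (fused_score, label) where label is "positive" or "negative".
    This fusion rule is assumed for illustration, not taken from the thesis.
    """
    if not scores:
        raise ValueError("need at least one modality score")
    if weights is None:
        weights = {m: 1.0 for m in scores}
    total = sum(weights[m] * s for m, s in scores.items())
    norm = sum(weights[m] for m in scores)
    fused = total / norm
    return fused, "positive" if fused >= 0 else "negative"
```

For example, a post whose text scores 0.8 but whose image scores -0.2 fuses to a mildly positive score (≈ 0.3), while a video-only post with a negative score stays negative.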
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
