  1. NTU Theses and Dissertations Repository
  2. College of Liberal Arts (文學院)
  3. Graduate Institute of Linguistics (語言學研究所)
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70749
Title: An Analysis of Multimodal Document Intent in Instagram Posts (original title: 文本意圖的多模態分析:以Instagram為例)
Authors: Ying-Yu Chen (陳盈瑜)
Advisor: Shu-Kai Hsieh (謝舒凱)
Keywords: multimodal document analysis, multimodal document understanding, contextual relationship, semiotic relationship, authors' intent, natural language processing
Publication Year: 2020
Degree: Master's (碩士)
Abstract: Social media platforms such as Instagram now routinely combine images and text in a single post, creating a modern, multimodal mode of communication. Computational analysis of these multimodal relationships has become an active research topic, yet no prior study has examined document intent and image-text relationships in the image-caption pairs (ICPs) posted by Taiwan's top 100 social media influencers. Following the taxonomy of Kruk et al. (2019), this study classifies three kinds of image-text relationships: 1) contextual relationship, 2) semiotic relationship, and 3) authors' intent. Captions are encoded with Sentence-BERT, images with an image embedding, and machine-learning classifiers (Random Forest and Decision Tree) are trained on these representations. The study reports two results. First, although prior work suggests that combining the two synergistic modalities in one model should improve relationship classification, a simple fusion strategy that linearly projects the encoded vectors of both modalities into the same embedding space did not clearly outperform single-modality models; text and image features evidently need more careful integration to complement each other. Second, the three relationship types can be classified with high accuracy (86.23%) using the text modality alone. Overall, this study demonstrates a computational approach to multimodal documents and provides a better understanding of classifying the relationships between modalities.
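The abstract outlines a three-step pipeline: encode each modality, fuse the vectors in a shared space, and train a conventional classifier. A minimal sketch of that shape follows, with random vectors standing in for real Sentence-BERT and image-encoder outputs; the dimensions, projection, and three-way labels here are illustrative assumptions, not the thesis code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholders for encoder outputs: in the thesis, captions are encoded
# with Sentence-BERT and images with an image embedding model.
n_pairs, text_dim, img_dim = 200, 384, 512
text_emb = rng.normal(size=(n_pairs, text_dim))  # stand-in caption vectors
img_emb = rng.normal(size=(n_pairs, img_dim))    # stand-in image vectors
labels = rng.integers(0, 3, size=n_pairs)        # e.g. three intent classes

# "Simple fusion": linearly project both modalities into a shared
# 256-dim space and concatenate -- the strategy the abstract reports
# as only weakly helpful compared with a single modality.
proj_t = rng.normal(size=(text_dim, 256)) / np.sqrt(text_dim)
proj_i = rng.normal(size=(img_dim, 256)) / np.sqrt(img_dim)
fused = np.hstack([text_emb @ proj_t, img_emb @ proj_i])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

# One of the classifiers named in the abstract (Random Forest).
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

With random placeholder vectors the accuracy is near chance; the sketch only illustrates the encode-fuse-classify structure, not the reported 86.23% result, which the thesis obtains from real caption embeddings alone.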
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70749
DOI: 10.6342/NTU202004151
Fulltext Rights: paid authorization required (有償授權)
Appears in Collections: Graduate Institute of Linguistics (語言學研究所)

Files in This Item:
U0001-2008202019480500.pdf (Restricted Access), 5.52 MB, Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
