  1. NTU Theses and Dissertations Repository
  2. College of Electrical Engineering and Computer Science
  3. Data Science Degree Program
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84736
Title: 文本嵌入向量逆向攻擊與主題式語意分解模型
Embedding Inversion Attack of Documents with Topic-aware Semantic Decoder Model
Authors: Li-Jen Liu
劉力仁
Advisor: 林守德(Shou-De Lin)
Co-Advisor: 葉彌妍(Mi-Yen Yeh)
Keywords: Embedding Inversion Attack, Document Embedding, Topic Model, Learning to Rank
Publication Year: 2022
Degree: Master's
Abstract: Document representation learning has become an important technique for embedding rich document content into low-dimensional vectors. These embeddings preserve the complete semantics of documents and have led to great success in various NLP applications. Nonetheless, this success has also attracted the attention of malicious adversaries. In one line of attack, an adversary tries to reverse-engineer an embedding back to its content words or sensitive keywords in order to pry into the information behind it. In this work, we extend the previous setting to a more general one and provide two types of information that both increase the interpretability of the embeddings. First, we assume the adversary has preferences over the information in a document, and our goal is to retrieve a sequence of words ordered according to those preferences. Second, even if we could precisely retrieve a sequence of words representing a document, it would still be hard for a human to grasp the ideas behind them; we therefore borrow the strengths of topic models to acquire coherent semantics from the targets. To achieve these goals, we combine a neural topic model with ranking optimization. Through comprehensive experiments, our design shows promising results both in capturing the sequence of the adversary's preferred words and in providing coherent and diverse topics, meaning that an adversary can easily understand the characteristics of an unknown document embedding across various datasets and off-the-shelf embedding models.
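The inversion-as-ranking setting described in the abstract can be illustrated with a minimal toy sketch. Everything here is an illustrative assumption, not the thesis's actual architecture: the "encoder" is a simple sum of random word vectors standing in for a black-box embedding model, and the attacker scores each vocabulary word against the observed document embedding to produce a ranked word list.

```python
import random

random.seed(0)

VOCAB = ["privacy", "attack", "embedding", "topic", "model",
         "ranking", "neural", "document", "vector", "semantics"]
DIM = 128

# Toy embedding model (an assumption, not the thesis's encoder):
# each word gets a random Gaussian vector, and a document embedding
# is the sum of its words' vectors.
word_vecs = {w: [random.gauss(0, 1) for _ in range(DIM)] for w in VOCAB}

def embed(words):
    doc = [0.0] * DIM
    for w in words:
        for i, v in enumerate(word_vecs[w]):
            doc[i] += v
    return doc

def invert_by_ranking(doc_emb):
    """Score every vocabulary word against the embedding and return
    the vocabulary ranked by score -- the inversion-as-ranking view."""
    def score(w):
        return sum(a * b for a, b in zip(word_vecs[w], doc_emb))
    return sorted(VOCAB, key=score, reverse=True)

# A "secret" document the attacker never sees directly.
secret = ["privacy", "attack", "embedding"]
ranked = invert_by_ranking(embed(secret))
# With near-orthogonal random word vectors, the document's own words
# tend to rank near the top of the returned list.
print(ranked)
```

The thesis goes further than this sketch in two ways: the recovered ranking is optimized to match the adversary's own word preferences (learning to rank), and a neural topic model groups the recovered words into coherent topics rather than a flat list.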
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84736
DOI: 10.6342/NTU202203066
Fulltext Rights: Authorized (access restricted to campus)
Embargo lifted: 2022-09-29
Appears in Collections: 資料科學學位學程 (Data Science Degree Program)

Files in This Item:
File: U0001-0109202215464400.pdf (1.27 MB, Adobe PDF)
Access limited to NTU IP range


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
