NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72133
Full metadata record
dc.contributor.advisor: 林守德 (Shou-De Lin)
dc.contributor.author: Chih-Te Lai [en]
dc.contributor.author: 賴至得 [zh_TW]
dc.date.accessioned: 2021-06-17T06:25:02Z
dc.date.available: 2021-08-18
dc.date.copyright: 2018-08-18
dc.date.issued: 2018
dc.date.submitted: 2018-08-17
dc.identifier.citation: References
[1] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2414–2423, 2016.
[2] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint, 2017.
[3] Ming-Yu Liu and Oncel Tuzel. Coupled generative adversarial networks. In Advances in Neural Information Processing Systems, pages 469–477, 2016.
[4] Yanghao Li, Naiyan Wang, Jiaying Liu, and Xiaodi Hou. Demystifying neural style transfer. arXiv preprint arXiv:1701.01036, 2017.
[5] Boris Kovalenko. Controllable text generation. CS229 Machine Learning course project report, Stanford University.
[6] Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Information Processing Systems, pages 6830–6841, 2017.
[7] Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems, pages 700–708, 2017.
[8] Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200, 2016.
[9] Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, and Jiwon Kim. Learning to discover cross-domain relations with generative adversarial networks. arXiv preprint arXiv:1703.05192, 2017.
[10] Zili Yi, Hao (Richard) Zhang, Ping Tan, and Minglun Gong. DualGAN: Unsupervised dual learning for image-to-image translation. In ICCV, pages 2868–2876, 2017.
[11] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. arXiv preprint arXiv:1711.09020, 2017.
[12] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[13] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv preprint arXiv:1610.09585, 2016.
[14] Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. In AAAI, pages 2852–2858, 2017.
[15] Alex M. Lamb, Anirudh Goyal, Ying Zhang, Saizheng Zhang, Aaron C. Courville, and Yoshua Bengio. Professor forcing: A new algorithm for training recurrent networks. In Advances in Neural Information Processing Systems, pages 4601–4609, 2016.
[16] R. Devon Hjelm, Athul Paul Jacob, Tong Che, Adam Trischler, Kyunghyun Cho, and Yoshua Bengio. Boundary-seeking generative adversarial networks. arXiv preprint arXiv:1702.08431, 2017.
[17] Tong Che, Yanran Li, Ruixiang Zhang, R. Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. Maximum-likelihood augmented discrete generative adversarial networks. arXiv preprint arXiv:1702.07983, 2017.
[18] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144, 2016.
[19] Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pages 1057–1063, 2000.
[20] Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
[21] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495, 2014.
[22] Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. Domain separation networks. In Advances in Neural Information Processing Systems, pages 343–351, 2016.
[23] Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. Adversarial multi-criteria learning for Chinese word segmentation. arXiv preprint arXiv:1704.07556, 2017.
[24] Yoon Kim. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72133
dc.description.abstract [zh_TW]: Style transfer is a popular topic in artificial-intelligence research. However, non-parallel text style transfer in natural language processing remains very challenging for several existing reasons. One is that separating style from content in text is very difficult, and existing models have no proper mechanism to preserve the style-independent content of a text. Another drawback is that mainstream approaches focus only on transfer between two style classes, so handling multi-class style transfer requires building a pairwise model for every two classes. This thesis proposes an auto-encoder model with a unified generative adversarial network to handle multi-class style transfer, together with regularization losses on the latent space designed to preserve content features. Experimental results show that our model achieves more diverse and general style transfer.
dc.description.abstract [en]: Style transfer is a popular topic in artificial intelligence research. However, several issues make non-parallel style transfer in natural language processing challenging. One is that separating style from content in text is difficult, and current models have no proper mechanism to retain the style-independent content of a text. Another drawback is that mainstream approaches focus on transferring style between two classes, so pairwise models must be built for every two classes when dealing with multi-class transfer. In this work, we propose an auto-encoder model with a unified generative adversarial network for multi-class style transfer, together with regularization losses on the latent space designed to preserve the content representation. Empirical results show that our model achieves more diverse and general style transfer.
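The record contains only the abstract, but the content-preservation idea it describes (a regularization loss computed in the encoder's latent space) can be sketched as a toy in plain Python. The linear encoder, the mean-squared-error form of the loss, and all names below are illustrative assumptions for this sketch, not the thesis's actual implementation.

```python
import random

random.seed(0)

# Toy "encoder": a fixed random linear map from an 8-dim input vector
# (standing in for a sentence representation) to a 4-dim latent space.
W = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]

def encode(x):
    """Linear encoder: z_j = sum_i x_i * W[i][j]."""
    return [sum(x[i] * W[i][j] for i in range(len(x))) for j in range(4)]

def latent_consistency_loss(x, x_transferred):
    """Mean squared distance between the latents of an input and its
    style-transferred version. Minimizing this term encourages the model
    to keep style-independent content intact; the exact form is an
    assumption made for this sketch."""
    z = encode(x)
    z_t = encode(x_transferred)
    return sum((a - b) ** 2 for a, b in zip(z, z_t)) / len(z)

x = [random.gauss(0, 1) for _ in range(8)]
perturbed = [v + 0.1 * random.gauss(0, 1) for v in x]

print(latent_consistency_loss(x, x))          # 0.0: identical latents
print(latent_consistency_loss(x, perturbed))  # positive: content drifted
```

In a real model the encoder would be trained and this term combined with adversarial losses; the sketch only shows why the loss is exactly zero when content is preserved and grows as content drifts.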
dc.description.provenance [en]: Made available in DSpace on 2021-06-17T06:25:02Z (GMT). No. of bitstreams: 1. ntu-107-R05944018-1.pdf: 1121105 bytes, checksum: edfedfefb70ef7378680a2a7cb6b51bd (MD5). Previous issue date: 2018.
dc.description.tableofcontents:
誌謝 (Acknowledgements in Chinese) i
Acknowledgements ii
摘要 (Abstract in Chinese) iii
Abstract iv
Contents v
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
Chapter 2 Related Works 3
2.1 Style Transfer in Computer Vision 3
2.2 Style Transfer in Natural Language Processing 3
2.3 Adversarial Learning over Discrete Samples 4
2.4 Adversarial Networks for Domain Separation 5
Chapter 3 Preliminaries 6
3.1 Problem Definition 6
3.2 Encoder-Decoder Framework 7
3.3 Cross-aligned Auto-encoder 8
Chapter 4 Methodology 10
4.1 Latent Regularization Loss 10
4.1.1 Latent Consistency Loss 11
4.1.2 Latent Adversarial Loss 11
4.2 Unified Discriminator 12
4.3 Model Architecture 13
4.4 Training Algorithm 15
Chapter 5 Experiments 17
5.1 Dataset 17
5.2 Evaluation Metrics 17
5.3 Model Settings 18
5.4 Results of Latent Regularization Loss 18
5.5 Results of Unified Discriminator 19
5.6 Results of Latent-aligned Auto-encoder 19
5.7 Latent Visualization 20
Chapter 6 Conclusion 25
6.1 Discussion 25
6.2 Future Work 25
References 26
dc.language.iso: en
dc.subject: 風格轉移 (style transfer) [zh_TW]
dc.subject: 非平行文本 (non-parallel text) [zh_TW]
dc.subject: 生成對抗網路 (generative adversarial network) [zh_TW]
dc.subject: 自編碼 (auto-encoder) [zh_TW]
dc.subject: 隱空間 (latent space) [zh_TW]
dc.subject: generative adversarial network [en]
dc.subject: style transfer [en]
dc.subject: auto-encoder [en]
dc.subject: latent space [en]
dc.subject: non-parallel text [en]
dc.title: 使用隱空間校準實現非平行文本風格轉移 [zh_TW]
dc.title: Non-parallel Text Style Transfer by Latent Space Alignment [en]
dc.type: Thesis
dc.date.schoolyear: 106-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 林軒田 (Hsuan-Tien Lin), 陳信希 (Hsin-Hsi Chen), 鄭卜壬 (Pu-Jen Cheng), 陳縕儂 (Yun-Nung Chen)
dc.subject.keyword: 風格轉移, 非平行文本, 生成對抗網路, 自編碼, 隱空間 [zh_TW]
dc.subject.keyword: style transfer, non-parallel text, generative adversarial network, auto-encoder, latent space [en]
dc.relation.page: 28
dc.identifier.doi: 10.6342/NTU201803914
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2018-08-17
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia) [zh_TW]
Appears in Collections: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)

Files in This Item:
ntu-107-1.pdf (restricted; not publicly available), 1.09 MB, Adobe PDF


All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
