NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86403

Full metadata record (each entry shown as DC field: value [language])
dc.contributor.advisor: 陳信希 (Hsin-Hsi Chen)
dc.contributor.author: You-En Lin [en]
dc.contributor.author: 林佑恩 [zh_TW]
dc.date.accessioned: 2023-03-19T23:53:47Z
dc.date.copyright: 2022-08-29
dc.date.issued: 2022
dc.date.submitted: 2022-08-22
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86403
dc.description.abstract: 在回憶生活經歷時,人們經常忘記或混淆生活事件,所以提供資訊召回的服務是需要的。而以前關於資訊召回的研究主要是被動式提供,也就是使用者透過給定生活事件來評估是否需要資訊召回服務。然而,很少有研究涉及由系統主動偵測人們是否需要資訊召回服務。在本文中,我們透過比較同一作者在兩個不同時間點、針對同一事件所寫的敘述,來確定用戶在描述他們過往的生活經歷時是否遇到困難。因此,我們使用標記者根據個人真實生活經歷組成的資料集來偵測觸發資訊召回服務的正確時間。此外,我們也提出一個模型「結構化事件增強網路」(SEEN),它可以檢測到標記者撰寫的生活經歷是否包含不一致、額外新增或是被遺忘的生活事件。而此模型中還包含我們提出的一種特殊機制,我們透過這種機制來融合以生活事件為基礎所構建的無向圖和語言模型所產生的文字嵌入向量。同時,為了進一步提供具解釋性的服務,我們的模型會從生活事件的無向圖中選擇相關的節點用以當作參考事件。而實驗結果也表明,我們的模型在偵測資訊召回需求的任務取得了很好的成果,提取出的參考事件也可以有效作為補充資訊,提醒用戶他們可能想要召回的生活事件。 [zh_TW]
dc.description.abstract: When recalling life experiences, people often forget or confuse life events, which necessitates information recall services. Previous work on information recall focuses on providing such assistance reactively, i.e., by retrieving the life event of a given query. What is rarely discussed, however, is a proactive system capable of detecting the need for information recall services. In this paper, we propose determining whether users are experiencing difficulty in recalling their life experiences by comparing the events described in two retold stories written at different times. We use a human-annotated life experience retelling dataset to detect the right time to trigger the information recall service. We also propose a pilot model, the Structured Event Enhancement Network (SEEN), which detects inconsistent, additional, and forgotten life events. A fusing mechanism is also proposed to incorporate the event graphs of stories and enhance the textual representations. To explain the need detection results, SEEN simultaneously provides support evidence by selecting related nodes from the event graph. Experimental results show that SEEN achieves promising performance in detecting information needs. In addition, the extracted evidence can serve as complementary information to remind users of events they may want to recall. [en]
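The abstract describes an architecture that fuses an undirected event graph with text embeddings produced by a language model, then selects related graph nodes as support evidence. For orientation only, the lines below are a minimal PyTorch sketch of that kind of graph-text fusion; the class name, dimensions, event-type labels, and fusion rule are assumptions made for illustration and are not SEEN's actual implementation, which the thesis details in Chapter 4 (Event Graph Construction, Textual Encoder Layer, Integration Layer, Event Type Classifier, Related Node Classifier).

# Illustrative sketch only; all names and dimensions are assumptions, not the thesis's code.
import torch
import torch.nn as nn

class EventGraphFusionSketch(nn.Module):
    def __init__(self, hidden=768, num_event_types=3):
        super().__init__()
        self.node_proj = nn.Linear(hidden, hidden)
        # cross-attention: event-graph nodes attend over the retold story's tokens
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        # per-node heads: event type (e.g. inconsistent / additional / forgotten, assumed labels)
        # and a related-node score that could serve as support evidence
        self.event_type_head = nn.Linear(hidden, num_event_types)
        self.related_node_head = nn.Linear(hidden, 1)

    def forward(self, token_states, node_states, adjacency):
        # token_states: (B, T, H) contextual embeddings from a pretrained text encoder
        # node_states:  (B, N, H) initial features of event-graph nodes
        # adjacency:    (B, N, N) undirected event-graph adjacency (1.0 = edge)
        nodes = self.node_proj(node_states)
        fused, _ = self.cross_attn(query=nodes, key=token_states, value=token_states)
        # mix in neighbourhood information from the event graph (mean over neighbours)
        degree = adjacency.sum(dim=-1, keepdim=True).clamp(min=1.0)
        fused = fused + (adjacency @ fused) / degree
        return self.event_type_head(fused), self.related_node_head(fused).squeeze(-1)

# toy usage with random tensors
model = EventGraphFusionSketch()
tokens = torch.randn(2, 64, 768)   # two stories, 64 tokens each
nodes = torch.randn(2, 5, 768)     # five event nodes per story
adj = torch.ones(2, 5, 5)          # fully connected toy graph
type_logits, evidence_scores = model(tokens, nodes, adj)
print(type_logits.shape, evidence_scores.shape)   # torch.Size([2, 5, 3]) torch.Size([2, 5])

In the thesis, the event graph is built from annotated life events and a pretrained language model supplies the textual encoder; the sketch above only illustrates where graph structure and token embeddings could meet in such a design.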
dc.description.provenance: Made available in DSpace on 2023-03-19T23:53:47Z (GMT). No. of bitstreams: 1
  U0001-2008202215161900.pdf: 1202596 bytes, checksum: 647c1ad45486e2488970fb8a1527b358 (MD5)
  Previous issue date: 2022 [en]
dc.description.tableofcontents:
  Acknowledgements i
  摘要 ii
  Abstract iii
  Contents v
  List of Figures viii
  List of Tables ix
  Chapter 1 Introduction 1
    1.1 Motivation 1
    1.2 Thesis Organization 5
  Chapter 2 Related Work 6
    2.1 Lifelogging 6
    2.2 Structured Information 7
    2.3 Natural Language Inference 9
  Chapter 3 Dataset Construction 11
    3.1 Hippocorpus 11
    3.2 From Hippocorpus to NIR 12
    3.3 Life Event Annotation 14
    3.4 Event Type Annotation 16
  Chapter 4 Methodology 18
    4.1 Task Definition 18
      4.1.1 Need Detection of Information Recall 18
      4.1.2 Support Evidence Extraction 18
    4.2 Structured Event Enhancement Network 19
      4.2.1 Event Graph Construction 20
      4.2.2 Textual Encoder Layer 21
      4.2.3 Integration Layer 22
      4.2.4 Event Type Classifier 24
      4.2.5 Related Node Classifier 25
      4.2.6 Integration with Natural Language Inference 25
  Chapter 5 Experiments 27
    5.1 Baseline Models 27
    5.2 Experiment Setup 28
    5.3 Experiment Results 29
  Chapter 6 Analysis and Discussion 33
    6.1 Ablation Study 33
    6.2 Impact of Pre-training Task 34
    6.3 Number of Integration Layers 35
    6.4 Contribution of Different Fusion Layers 36
    6.5 Case Study of Support Evidence Extraction 37
    6.6 Error Analysis 38
  Chapter 7 Data Analysis 40
    7.1 Event Type Analysis on Age 40
    7.2 Event Type Analysis on Time Interval 41
    7.3 Event Type Analysis on Importance 42
    7.4 Event Type Analysis on Ownership 43
  Chapter 8 Conclusion 45
  References 47
dc.language.iso: en
dc.title: SEEN: 以結構化事件增強網路偵測與解釋資訊召回需求 [zh_TW]
dc.title: SEEN: Structured Event Enhancement Network for Explainable Need Detection of Information Recall Assistance [en]
dc.type: Thesis
dc.date.schoolyear: 110-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 鄭卜壬 (Pu-Jen Cheng), 陳冠宇 (Kuan-Yu Chen), 蔡銘峰 (Ming-Feng Tsai)
dc.subject.keyword: 生活日誌, 資訊召回, 個人知識庫 [zh_TW]
dc.subject.keyword: lifelogging, information recall, personal knowledge base [en]
dc.relation.page: 55
dc.identifier.doi: 10.6342/NTU202202609
dc.rights.note: 同意授權 (全球公開) / consent to release, open access worldwide
dc.date.accepted: 2022-08-22
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
dc.date.embargo-lift: 2023-12-31
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in This Item:
File | Size | Format
U0001-2008202215161900.pdf | 1.17 MB | Adobe PDF


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
