NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86274
Full metadata record
dc.contributor.advisor: Ming-Syan Chen (陳銘憲)
dc.contributor.author: Jia-Hong Quen (屈佳宏)
dc.date.accessioned: 2023-03-19T23:46:16Z
dc.date.copyright: 2022-08-30
dc.date.issued: 2022
dc.date.submitted: 2022-08-29
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86274
dc.description.abstract: In the continual learning literature, most research is devoted to overcoming catastrophic forgetting, a severe problem for artificial neural networks that arises from non-stationary shifts in the data distribution. However, most of this work focuses on the algorithmic side of continual learning, while the order in which tasks are presented to the model has received little attention. In this work, we investigate the task order effect in continual learning and show that the choice of task order can affect final model performance. We therefore design a framework named Heuristic Task Selection, which can be plugged into existing algorithms and selects the next task efficiently, in asymptotically polynomial time. Furthermore, we employ several heuristics as selection criteria and show that such feature-based metrics are good indicators for organizing the task order of classification tasks. To evaluate benchmarks with a large number of tasks, we use the sign test, a non-parametric (distribution-free) test of the location of a statistic, to assess the task orders our method selects. The results demonstrate the efficacy of our method on several commonly used benchmarks.
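The abstract describes two mechanisms: greedy selection of the next task by a heuristic score, and a paired sign test for judging the chosen order. The following is a minimal Python sketch of both ideas, not the thesis's actual code; the function names and the scoring interface are invented for illustration, and SciPy's binomtest is one standard way to compute a sign test.

```python
# Illustrative sketch only: the names, the scoring interface, and the use of
# scipy.stats.binomtest are assumptions for exposition, not the thesis's
# actual implementation.
from typing import Callable, List

from scipy.stats import binomtest


def heuristic_task_selection(
    tasks: List[int],
    score: Callable[[int, List[int]], float],
) -> List[int]:
    """Greedily order tasks: at each step, pick the remaining task whose
    heuristic score (e.g., a feature-based transferability estimate) is
    highest given the tasks learned so far."""
    remaining = set(tasks)
    order: List[int] = []
    while remaining:
        best = max(remaining, key=lambda t: score(t, order))
        order.append(best)
        remaining.remove(best)
    return order


def sign_test_better(selected_acc: List[float], random_acc: List[float]) -> float:
    """Paired sign test: p-value for 'the selected order beats the random
    order more often than chance' across paired runs. Ties are discarded,
    as is standard for the sign test."""
    wins = sum(s > r for s, r in zip(selected_acc, random_acc))
    losses = sum(s < r for s, r in zip(selected_acc, random_acc))
    return binomtest(wins, wins + losses, p=0.5, alternative="greater").pvalue
```

The sign test uses only the direction of each paired difference, which is why it suits comparing a selected order against alternative orders without distributional assumptions.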
dc.description.provenance: Made available in DSpace on 2023-03-19T23:46:16Z (GMT). No. of bitstreams: 1. U0001-2107202213043600.pdf: 5136340 bytes, checksum: 811168ccde29cedaa4838266fba46df0 (MD5). Previous issue date: 2022.
dc.description.tableofcontents:
Oral Examination Committee Certification i
Acknowledgements ii
Abstract (in Chinese) iii
Abstract iv
Contents v
List of Figures vii
List of Tables viii
Denotation ix
1 Introduction 1
2 Preliminary 4
  2.1 Continual Learning Scenarios 4
  2.2 Related Work about Task Order Effect 5
  2.3 Strategies in Continual Learning Literature 7
3 Measuring Task Order Effect 9
  3.1 Task Order Effect 9
  3.2 Correlation Analysis 11
4 Methodology 15
  4.1 Heuristic Task Selection 16
  4.2 Heuristic Measures 17
    4.2.1 Forward Transfer 17
    4.2.2 Maximum Individual Feature Efficiency 18
    4.2.3 Fraction of HyperSpheres Covering Data 19
5 Experiments 21
  5.1 Setup 21
    5.1.1 Benchmarks 21
    5.1.2 Experimental Details 23
    5.1.3 Evaluation 24
  5.2 Results 25
6 Discussion and Future Directions 27
7 Conclusion 30
Bibliography 31
Appendix A — Implementation Details 36
dc.language.iso: en
dc.subject: Data Distribution Shift
dc.subject: Continual Learning
dc.subject: Lifelong Learning
dc.subject: Task Order Effect
dc.subject: Domain Incremental Learning
dc.title: Task Order Effect in Continual Learning
dc.type: Thesis
dc.date.schoolyear: 110-2
dc.description.degree: Master
dc.contributor.oralexamcommittee: Yu-Chiang Frank Wang (王鈺強), Wen-Chih Peng (彭文志), Yen-Yu Lin (林彥宇), Chih-Ya Shen (沈之涯)
dc.subject.keyword: Continual Learning, Lifelong Learning, Task Order Effect, Domain Incremental Learning, Data Distribution Shift
dc.relation.page: 37
dc.identifier.doi: 10.6342/NTU202201598
dc.rights.note: Access authorized (open to the public worldwide)
dc.date.accepted: 2022-08-29
dc.contributor.author-college: College of Electrical Engineering and Computer Science
dc.contributor.author-dept: Graduate Institute of Electrical Engineering
dc.date.embargo-lift: 2022-08-30
Appears in collections: Department of Electrical Engineering

Files in this item:
File: U0001-2107202213043600.pdf (5.02 MB, Adobe PDF)


All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
