Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80186

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 鄭卜壬(Pu-Jen Cheng) | |
| dc.contributor.author | Cheng-De Lin | en |
| dc.contributor.author | 林承德 | zh_TW |
| dc.date.accessioned | 2022-11-23T09:30:46Z | - |
| dc.date.available | 2021-08-23 | |
| dc.date.available | 2022-11-23T09:30:46Z | - |
| dc.date.copyright | 2021-08-23 | |
| dc.date.issued | 2021 | |
| dc.date.submitted | 2021-08-16 | |
| dc.identifier.citation | [1] Yoav Freund, Robert Schapire, and Naoki Abe. A short introduction to boosting. Journal-Japanese Society For Artificial Intelligence, 14(771-780):1612, 1999. [2] Wonji Lee, Chi-Hyuck Jun, and Jong-Seok Lee. Instance categorization by support vector machines to adjust weights in AdaBoost for imbalanced data classification. Information Sciences, 381:92–103, 2017. [3] Trevor Hastie, Saharon Rosset, Ji Zhu, and Hui Zou. Multi-class AdaBoost. Statistics and its Interface, 2(3):349–360, 2009. [4] Mikel Galar, Alberto Fernández, Edurne Barrenechea, Humberto Bustince, and Francisco Herrera. Ordering-based pruning for improving the performance of ensembles of classifiers in the framework of imbalanced datasets. Information Sciences, 354:178–196, 2016. [5] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997. [6] Yoav Freund, Robert E. Schapire, et al. Experiments with a new boosting algorithm. In ICML, volume 96, pages 148–156. Citeseer, 1996. [7] Robert E. Schapire. Using output codes to boost multiclass learning problems. In ICML, volume 97, pages 313–321. Citeseer, 1997. [8] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297–336, 1999. [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016. [10] David H. Wolpert. Stacked generalization. Neural Networks, 5(2):241–259, 1992. [11] Li Deng and Dong Yu. Deep convex net: A scalable architecture for speech pattern classification. In Twelfth Annual Conference of the International Speech Communication Association, 2011. [12] Li Deng, Dong Yu, and John Platt. Scalable stacking and learning for building deep architectures. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2133–2136. IEEE, 2012. [13] Chuang Sun, Meng Ma, Zhibin Zhao, and Xuefeng Chen. Sparse deep stacking network for fault diagnosis of motor. IEEE Transactions on Industrial Informatics, 14(7):3261–3270, 2018. [14] Bin Wang, Bing Xue, and Mengjie Zhang. Particle swarm optimisation for evolving deep neural networks for image classification by evolving and stacking transferable blocks. In 2020 IEEE Congress on Evolutionary Computation (CEC), pages 1–8. IEEE, 2020. [15] Hongguang Zhang, Yuchao Dai, Hongdong Li, and Piotr Koniusz. Deep stacked hierarchical multi-patch network for image deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5978–5986, 2019. [16] Nima Tajbakhsh, Suryakanth R. Gurudu, and Jianming Liang. Automatic polyp detection in colonoscopy videos using an ensemble of convolutional neural networks. In 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), pages 79–83. IEEE, 2015. [17] Hamid Palangi, Li Deng, and Rabab K. Ward. Recurrent deep-stacking networks for sequence classification. In 2014 IEEE China Summit International Conference on Signal and Information Processing (ChinaSIP), pages 510–514. IEEE, 2014. [18] Connor Shorten and Taghi M. Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):1–48, 2019. [19] Luke Taylor and Geoff Nitschke. Improving deep learning using generic data augmentation. arXiv preprint arXiv:1708.06020, 2017. [20] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. arXiv preprint arXiv:1406.2661, 2014. [21] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2223–2232, 2017. [22] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017. [23] Jason Wei and Kai Zou. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv preprint arXiv:1901.11196, 2019. [24] Yangkai Du, Tengfei Ma, Lingfei Wu, Fangli Xu, Xuhong Zhang, and Shouling Ji. Constructing contrastive samples via summarization for text classification with limited annotations. arXiv preprint arXiv:2104.05094, 2021. [25] Yizhe Zhang, Zhe Gan, and Lawrence Carin. Generating text via adversarial training. In NIPS Workshop on Adversarial Training, volume 21, pages 21–32. academia.edu, 2016. [26] Liqun Chen, Shuyang Dai, Chenyang Tao, Dinghan Shen, Zhe Gan, Haichao Zhang, Yizhe Zhang, and Lawrence Carin. Adversarial text generation via feature-mover's distance. arXiv preprint arXiv:1809.06297, 2018. [27] Md Akmal Haidar and Mehdi Rezagholizadeh. TextKD-GAN: Text generation using knowledge distillation and generative adversarial networks. In Canadian Conference on Artificial Intelligence, pages 107–118. Springer, 2019. [28] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. [29] Gregory Cohen, Saeed Afshar, Jonathan Tapson, and André Van Schaik. EMNIST: Extending MNIST to handwritten letters. In 2017 International Joint Conference on Neural Networks (IJCNN), pages 2921–2926. IEEE, 2017. [30] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017. [31] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80186 | - |
| dc.description.abstract | AdaBoost (adaptive boosting) is a classic boosting algorithm whose remarkable effectiveness has been demonstrated for many years. It trains multiple weak classifiers sequentially, using the errors made in earlier rounds to steer each new classifier away from repeating the same mistakes, so that the trained weak classifiers complement one another and together form a stronger classifier. In recent years, however, a wide variety of new ideas in machine learning have been proposed. Although at first glance these ideas may seem unrelated to AdaBoost, we believe some of them can be integrated into the original algorithm to improve it. For example, to the best of our knowledge, training with multiple feature spaces and the recently popular technique of data augmentation have not yet been studied in combination with AdaBoost. In this thesis, we therefore propose an AdaBoost framework that uses multiple feature spaces and incorporates data augmentation as an auxiliary component. We conduct experiments on multiple datasets covering data of different modalities, such as images and text, and find that using multiple feature spaces indeed benefits classification, while data augmentation further helps AdaBoost achieve better results. | zh_TW |
| dc.description.provenance | Made available in DSpace on 2022-11-23T09:30:46Z (GMT). No. of bitstreams: 1 U0001-2206202115271900.pdf: 7909218 bytes, checksum: e13971edbb35808209537fde7559f488 (MD5) Previous issue date: 2021 | en |
| dc.description.tableofcontents | Verification Letter from the Oral Examination Committee I Acknowledgements ii 摘要 iii Abstract iv Contents v List of Figures vii List of Tables ix Denotation x Chapter 1. Introduction 1 Chapter 2. Related Work 7 2.1 Adaptive Boosting 7 2.2 Multiple Feature Spaces Learning 8 2.3 Data Augmentation 9 Chapter 3. Proposed Method 11 3.1 Main Challenges 11 3.2 Model Overview 12 3.3 Single Estimator 14 3.4 Evaluation and Re-Weighting 15 3.5 Data Augmentation Block 17 3.6 Training and Testing 19 Chapter 4. Experiments 21 4.1 Datasets 21 4.2 Experiment Settings 24 4.3 Image Classification 25 4.4 Sentiment Analysis 28 4.5 Case Studies 29 4.6 Feature Space Comparison 31 4.7 Data Augmentation Comparison 32 4.8 Inheritance Rates 37 Chapter 5. Conclusions 40 5.1 Conclusions 40 5.2 Future Works 40 References 42 | |
| dc.language.iso | en | |
| dc.subject | 資料增強 | zh_TW |
| dc.subject | 圖像分類 | zh_TW |
| dc.subject | 卷積神經網路 | zh_TW |
| dc.subject | 自適應提升 | zh_TW |
| dc.subject | 多特徵空間 | zh_TW |
| dc.subject | multi-feature spaces | en |
| dc.subject | Image Classification | en |
| dc.subject | CNN | en |
| dc.subject | AdaBoost | en |
| dc.subject | data augmentation | en |
| dc.title | 結合資料增強與多特徵空間的自適應提升模型 | zh_TW |
| dc.title | AdaBoost with Data Augmentation in Multiple Feature Spaces | en |
| dc.date.schoolyear | 109-2 | |
| dc.description.degree | Master | |
| dc.contributor.oralexamcommittee | 陳信希(Hsin-Hsi Chen),曾新穆(Shin-Mu Tseng),陳建錦 | |
| dc.subject.keyword | 資料增強,多特徵空間,自適應提升,卷積神經網路,圖像分類, | zh_TW |
| dc.subject.keyword | data augmentation,multi-feature spaces,AdaBoost,CNN,Image Classification, | en |
| dc.relation.page | 46 | |
| dc.identifier.doi | 10.6342/NTU202101094 | |
| dc.rights.note | Authorization consented (open access worldwide) | |
| dc.date.accepted | 2021-08-17 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
| Appears in Collections: | 資訊工程學系 | |
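The abstract above describes AdaBoost's core mechanism: weak classifiers are trained in sequence, and examples misclassified in one round receive more weight in the next. As a reading aid, the sketch below shows that re-weighting loop for plain binary AdaBoost with threshold "stumps" as weak learners. It is illustrative only, not the thesis's multi-feature-space, augmentation-assisted variant, and all function names here are our own.

```python
# Minimal sketch of the AdaBoost re-weighting loop (binary labels in {-1, +1}).
import math

def stump(X, y, w):
    """Pick the (threshold, polarity) stump with the lowest weighted error."""
    best = None
    for t in sorted(set(X)):
        for pol in (1, -1):
            pred = [pol if x >= t else -pol for x in X]
            err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n                      # uniform initial example weights
    ensemble = []
    for _ in range(rounds):
        err, t, pol = stump(X, y, w)
        err = max(err, 1e-10)              # avoid division by zero / log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Up-weight mistakes, down-weight correct predictions, renormalize,
        # so the next weak learner focuses on the hard examples.
        pred = [pol if x >= t else -pol for x in X]
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, pred)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all weak learners."""
    score = sum(a * (pol if x >= t else -pol) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1
```

In the thesis's framework, each round would additionally draw its weak learner from a different feature space and could train on augmented copies of the highly weighted examples; this sketch keeps only the shared boosting skeleton.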
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| U0001-2206202115271900.pdf | 7.72 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their licensing terms.
