Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88375

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 王勝德 | zh_TW |
| dc.contributor.advisor | Sheng-De Wang | en |
| dc.contributor.author | 楊仁傑 | zh_TW |
| dc.contributor.author | JEN-CHIEH YANG | en |
| dc.date.accessioned | 2023-08-09T16:47:05Z | - |
| dc.date.available | 2023-11-09 | - |
| dc.date.copyright | 2023-08-09 | - |
| dc.date.issued | 2023 | - |
| dc.date.submitted | 2023-07-25 | - |
| dc.identifier.citation | [1] K. Deb and H. Jain. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. IEEE Transactions on Evolutionary Computation, 18(4), 2014.
[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2), 2002.
[3] X. Dong and Y. Yang. Network pruning via transformable architecture search. Advances in Neural Information Processing Systems, 32, 2019.
[4] R. Duggal, C. Xiao, R. Vuduc, D. H. Chau, and J. Sun. CUP: Cluster pruning for compressing deep neural networks. In 2021 IEEE International Conference on Big Data (Big Data). IEEE, 2021.
[5] A. E. Eiben and J. E. Smith. Introduction to Evolutionary Computing. Springer, 2015.
[6] T. Elsken, J. H. Metzen, and F. Hutter. Neural architecture search: A survey. The Journal of Machine Learning Research, 20(1), 2019.
[7] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. International Conference on Learning Representations (ICLR), 2016.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[9] Y. He, X. Dong, G. Kang, Y. Fu, C. Yan, and Y. Yang. Asymptotic soft filter pruning for deep convolutional neural networks. IEEE Transactions on Cybernetics, 50(8), 2019.
[10] Y. He, G. Kang, X. Dong, Y. Fu, and Y. Yang. Soft filter pruning for accelerating deep convolutional neural networks. arXiv preprint arXiv:1808.06866, 2018.
[11] Y. He, J. Lin, Z. Liu, H. Wang, L.-J. Li, and S. Han. AMC: AutoML for model compression and acceleration on mobile devices. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
[12] Y. He, P. Liu, Z. Wang, Z. Hu, and Y. Yang. Filter pruning via geometric median for deep convolutional neural networks acceleration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.
[13] Y. He, X. Zhang, and J. Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
[14] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop, 2015.
[15] J. H. Holland. Genetic algorithms. Scientific American, 267(1), 1992.
[16] Y. Idelbayev. Proper ResNet implementation for CIFAR10/CIFAR100 in PyTorch. https://github.com/akamaster/pytorch_resnet_cifar10.
[17] IntelLabs. ResNet for CIFAR10. https://github.com/IntelLabs/distiller/blob/master/distiller/models/cifar10/resnet_cifar.py.
[18] A. Krizhevsky, G. Hinton, et al. Learning multiple layers of features from tiny images. 2009.
[19] kuangliu. Train CIFAR10 with PyTorch. https://github.com/kuangliu/pytorch-cifar.
[20] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf. Pruning filters for efficient ConvNets. In International Conference on Learning Representations, 2017.
[21] M. Lin, R. Ji, Y. Wang, Y. Zhang, B. Zhang, Y. Tian, and L. Shao. HRank: Filter pruning using high-rank feature map. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
[22] M. Lin, R. Ji, Y. Zhang, B. Zhang, Y. Wu, and Y. Tian. Channel pruning via automatic structure search. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2020.
[23] J.-H. Luo, J. Wu, and W. Lin. ThiNet: A filter level pruning method for deep neural network compression. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
[24] T. Niu, Y. Teng, and P. Zou. Asymptotic soft cluster pruning for deep neural networks. arXiv preprint arXiv:2206.08186, 2022.
[25] S. Petchrompo, D. W. Coit, A. Brintrup, A. Wannakrairot, and A. K. Parlikad. A review of Pareto pruning methods for multi-objective optimization. Computers & Industrial Engineering, 2022.
[26] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115, 2015.
[27] N. Srinivas and K. Deb. Multiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary Computation, 2(3), 1994.
[28] H. Tanaka, D. Kunin, D. L. Yamins, and S. Ganguli. Pruning neural networks without any data by iteratively conserving synaptic flow. Advances in Neural Information Processing Systems, 33, 2020.
[29] S. Verma, M. Pant, and V. Snasel. A comprehensive review on NSGA-II for multi-objective combinatorial optimization problems. IEEE Access, 9, 2021.
[30] K. Xu, D. Zhang, J. An, L. Liu, L. Liu, and D. Wang. GenExp: Multi-objective pruning for deep neural network based on genetic algorithm. Neurocomputing, 451, 2021.
[31] H. Zhuo, X. Qian, Y. Fu, H. Yang, and X. Xue. SCSP: Spectral clustering filter pruning with soft self-adaption manners. arXiv preprint arXiv:1806.05320, 2018. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88375 | - |
| dc.description.abstract | 本研究提出了一種名為 NSGAP 的新方法,它是一種基於「非支配排序遺傳算法 II」(NSGA-II) 的自動化剪枝方法。與現有方法相比,NSGAP 在壓縮率、準確性和搜索時間方面都具有競爭力。NSGAP 的一個顯著特點是它產生柏拉圖前緣作為搜索結果,從而避免為獲得不同壓縮率的架構而進行多次搜索的需要。
為了加強搜索過程,本研究還引入了「非對稱高斯分佈」方法,將其應用於遺傳算法的初始化群體,從而有效地改善了搜索結果。總體而言,NSGAP 方法在模型剪枝和壓縮領域具有競爭力和高效性。它的貢獻在於多目標優化能力、柏拉圖前緣的生成和「非對稱高斯分佈」的引入,在壓縮率、準確性和搜索時間方面都取得了優異的表現。 | zh_TW |
| dc.description.abstract | This study proposes NSGAP, an automatic pruning and compression method built on the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). Compared with existing methods, NSGAP is competitive in compression rate, accuracy, and search time. A notable feature of NSGAP is that it produces a Pareto front as its search result, eliminating the need to run multiple searches to obtain architectures at different compression rates.
To strengthen the search procedure, this study also introduces an Asymmetric Gaussian Distribution (AGD) strategy: applying AGD to the genetic algorithm's initial population effectively improves the search results. In conclusion, NSGAP is competitive and efficient in the field of model pruning and compression. Its principal contributions are its multi-objective optimization capability, the generation of a Pareto front, and the integration of AGD, which together yield strong performance in compression rate, accuracy, and search time. (A minimal illustrative sketch of the Pareto-front and AGD ideas follows this record.) | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-09T16:47:05Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2023-08-09T16:47:05Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Verification Letter i
Acknowledgements ii
摘要 iii
Abstract iv
Contents v
List of Figures viii
List of Tables ix
Chapter 1 Introduction 1
1.1 Problems 1
1.2 Contribution 2
Chapter 2 Related Works 3
2.1 Approach of Pruning 3
2.1.1 Soft Filter Pruning (SFP) 3
2.1.2 Filter Pruning via Geometric Median (FPGM) 3
2.1.3 Cluster Pruning (CUP) 4
2.1.4 Channel Pruning via Automatic Structure Search (ABCPruner) 4
2.2 Filter Pruning 5
2.3 Non-Dominated Sorting Genetic Algorithm II 6
2.3.1 Non-Dominated Sorting 7
2.3.2 Crowding Distance 10
Chapter 3 Approach 12
3.1 Problem Definition 12
3.1.1 Traditional Definition 13
3.1.2 Multi-Objective Problem Definition 14
3.2 Initialization 15
3.2.1 Random Initialization 15
3.2.2 AGD: Asymmetric Gaussian Distribution Initialization 16
3.3 Genetic Algorithm 19
3.3.1 Continuous Crossover 19
3.3.2 Random Mutation 20
Chapter 4 Experiments 23
4.1 Experiment Overview 23
4.2 Environment 24
4.3 Baseline Models 25
4.3.1 ResNet56/ResNet110 25
4.3.2 ResNet18 25
4.4 Dataset 27
4.4.1 CIFAR-10 28
4.4.2 Tiny-ImageNet 28
4.5 Ablation Experiment 30
4.5.1 Consider the Initialization 30
4.5.2 Consider the Balance of Parameters and FLOPs 31
4.6 Results and Discussion 33
4.6.1 Results for the Different Baseline Models 33
4.6.2 Results for the Same Baseline Models 34
4.6.3 Time Consumption of the Same Baseline Models 38
Chapter 5 Conclusion 40
References 41
Appendix A — Experiment Results 44 | - |
| dc.language.iso | en | - |
| dc.subject | 濾波器剪枝 | zh_TW |
| dc.subject | 非對稱高斯分佈 | zh_TW |
| dc.subject | 非支配排序 | zh_TW |
| dc.subject | 非支配排序遺傳算法II | zh_TW |
| dc.subject | 柏拉圖前緣 | zh_TW |
| dc.subject | 距離擁擠度 | zh_TW |
| dc.subject | Pareto Fronts | en |
| dc.subject | Crowding Distance | en |
| dc.subject | Asymmetric Gaussian Distribution | en |
| dc.subject | Non-Dominated Sorting | en |
| dc.subject | Non-Dominated Sorting Genetic Algorithm II | en |
| dc.subject | Filter Pruning | en |
| dc.title | 基於非支配排序遺傳算法的深度學習模型剪枝 | zh_TW |
| dc.title | NSGAP: Filter Pruning for Deep Learning Models using NSGA-II | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 111-2 | - |
| dc.description.degree | Master's | - |
| dc.contributor.oralexamcommittee | 于天立;鄧惟中 | zh_TW |
| dc.contributor.oralexamcommittee | Tian-Li Yu;Wei-Chung Teng | en |
| dc.subject.keyword | 非支配排序遺傳算法II,柏拉圖前緣,非支配排序,距離擁擠度,非對稱高斯分佈,濾波器剪枝 | zh_TW |
| dc.subject.keyword | Non-Dominated Sorting Genetic Algorithm II, Pareto Fronts, Non-Dominated Sorting, Crowding Distance, Asymmetric Gaussian Distribution, Filter Pruning | en |
| dc.relation.page | 47 | - |
| dc.identifier.doi | 10.6342/NTU202301925 | - |
| dc.rights.note | Authorized (public access restricted to campus) | - |
| dc.date.accepted | 2023-07-27 | - |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
| dc.contributor.author-dept | Department of Electrical Engineering | - |
Appears in Collections: Department of Electrical Engineering
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-111-2.pdf (access restricted to NTU campus IP addresses; off-campus users should connect through the NTU VPN service) | 3.66 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
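
The abstract above hinges on two mechanisms: seeding the genetic algorithm's initial population with per-layer pruning ratios drawn from an Asymmetric Gaussian Distribution (AGD), and keeping only non-dominated candidates so that a single search yields a Pareto front over accuracy and compression. The sketch below is a minimal illustration of those two ideas under stated assumptions, not the thesis's implementation: the two-piece Gaussian sampler, the function names (`agd_sample`, `init_population`, `pareto_front`), the mode/sigma parameters, and the toy surrogate evaluation are all hypothetical, and the full NSGA-II machinery (ranked non-dominated sorting, crowding distance, crossover, mutation) is reduced here to a single non-dominated filter.

```python
# Minimal sketch (hypothetical names and parameters) of AGD initialization
# and a non-dominated (Pareto) filter, the two ideas the abstract describes.
import random


def agd_sample(mode: float, sigma_left: float, sigma_right: float) -> float:
    """Draw one pruning ratio in [0, 1] from a two-piece (asymmetric)
    Gaussian: a different spread on each side of the mode."""
    # Pick a side with probability proportional to that side's spread,
    # then sample a half-normal deviation on that side.
    if random.random() < sigma_left / (sigma_left + sigma_right):
        x = mode - abs(random.gauss(0.0, sigma_left))
    else:
        x = mode + abs(random.gauss(0.0, sigma_right))
    return min(max(x, 0.0), 1.0)  # clip into the valid ratio range


def init_population(n_individuals: int, n_layers: int) -> list[list[float]]:
    """An individual = one candidate architecture, encoded as a vector of
    per-layer pruning ratios sampled from the AGD instead of uniformly."""
    return [[agd_sample(mode=0.5, sigma_left=0.25, sigma_right=0.1)
             for _ in range(n_layers)]
            for _ in range(n_individuals)]


def dominates(a: tuple, b: tuple) -> bool:
    """With every objective encoded as larger-is-better, a dominates b if it
    is at least as good on all objectives and strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))


def pareto_front(scored: list) -> list:
    """Keep the non-dominated (objectives, individual) pairs."""
    return [(obj, ind) for obj, ind in scored
            if not any(dominates(other, obj) for other, _ in scored)]


if __name__ == "__main__":
    random.seed(0)
    population = init_population(n_individuals=30, n_layers=3)
    # Toy surrogate evaluation: accuracy falls and FLOPs savings rise as the
    # average pruning ratio grows. A real run would prune, fine-tune, and
    # test each candidate network instead.
    scored = []
    for individual in population:
        mean_ratio = sum(individual) / len(individual)
        accuracy = 0.93 - 0.10 * mean_ratio
        flops_saved = mean_ratio
        scored.append(((accuracy, flops_saved), individual))
    for (acc, saved), _ in sorted(pareto_front(scored), reverse=True):
        print(f"accuracy={acc:.4f}  flops_saved={saved:.3f}")
```

The point of returning the whole front, as the abstract notes, is that one search exposes the full accuracy/compression trade-off curve, so a user can pick an architecture at any desired compression rate without re-running the search per target.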
