Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88729

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 王勝德 | zh_TW |
| dc.contributor.advisor | Sheng-De Wang | en |
| dc.contributor.author | 陳鵬宇 | zh_TW |
| dc.contributor.author | Peng-Yu Chen | en |
| dc.date.accessioned | 2023-08-15T17:32:54Z | - |
| dc.date.available | 2023-11-09 | - |
| dc.date.copyright | 2023-08-15 | - |
| dc.date.issued | 2023 | - |
| dc.date.submitted | 2023-08-07 | - |
| dc.identifier.citation | [1] L. Cai, Z. An, C. Yang, and Y. Xu. Softer pruning, incremental regularization. In 25th International Conference on Pattern Recognition, ICPR 2020, Virtual Event / Milan, Italy, January 10-15, 2021, pages 224–230. IEEE, 2020.
[2] L. Cai, Z. An, C. Yang, and Y. Xu. Soft and hard filter pruning via dimension reduction. In International Joint Conference on Neural Networks, IJCNN 2021, Shenzhen, China, July 18-22, 2021, pages 1–8. IEEE, 2021.
[3] Y. Guo, A. Yao, and Y. Chen. Dynamic network surgery for efficient DNNs. In D. D. Lee, M. Sugiyama, U. von Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 1379–1387, 2016.
[4] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In Y. Bengio and Y. LeCun, editors, 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.
[5] S. Han, J. Pool, J. Tran, and W. J. Dally. Learning both weights and connections for efficient neural networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1135–1143, 2015.
[6] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE Computer Society, 2016.
[7] Y. He, X. Dong, G. Kang, Y. Fu, C. Yan, and Y. Yang. Asymptotic soft filter pruning for deep convolutional neural networks. IEEE Trans. Cybern., 50(8):3594–3604, 2020.
[8] Y. He, G. Kang, X. Dong, Y. Fu, and Y. Yang. Soft filter pruning for accelerating deep convolutional neural networks. In J. Lang, editor, Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 2234–2240. ijcai.org, 2018.
[9] Y. He, X. Zhang, and J. Sun. Channel pruning for accelerating very deep neural networks. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 1398–1406. IEEE Computer Society, 2017.
[10] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017.
[11] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 2261–2269. IEEE Computer Society, 2017.
[12] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical Report 0, University of Toronto, Toronto, Ontario, 2009.
[13] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf. Pruning filters for efficient convnets. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017.
[14] Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang. Learning efficient convolutional networks through network slimming. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 2755–2763. IEEE Computer Society, 2017.
[15] J. Luo, J. Wu, and W. Lin. ThiNet: A filter level pruning method for deep neural network compression. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 5068–5076. IEEE Computer Society, 2017.
[16] J. Oh, H. Kim, S. Baik, C. Hong, and K. M. Lee. Batch normalization tells you which filter is important. In IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, Waikoloa, HI, USA, January 3-8, 2022, pages 3351–3360. IEEE, 2022.
[17] X. Xu, Q. Chen, L. Xie, and H. Su. Batch-normalization-based soft filter pruning for deep convolutional neural networks. In 16th International Conference on Control, Automation, Robotics and Vision, ICARCV 2020, Shenzhen, China, December 13-15, 2020, pages 951–956. IEEE, 2020.
[18] J. Yang, X. Shen, J. Xing, X. Tian, H. Li, B. Deng, J. Huang, and X. Hua. Quantization networks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 7308–7316. Computer Vision Foundation / IEEE, 2019.
[19] X. Zhang, J. Zou, X. Ming, K. He, and J. Sun. Efficient and accurate approximations of nonlinear convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 1984–1992. IEEE Computer Society, 2015. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88729 | - |
| dc.description.abstract | 過往的剪枝技術大多使用網路中的單一結構來進行評估,如卷積層(convolutional layer)或批量標準化層(batch-normalization layer),來決定剪枝目標。然而,這種方法沒辦法有效利用每個層中的所有結構,因此效果有限。為了能更加全面地考慮每個層中的各種結構,我們提出雙重排名軟混合濾波器剪枝(Soft Hybrid Filter Pruning using a Dual Ranking Approach, DR-SHFP),該方法建立在軟剪枝(Soft Filter Pruning, SFP)的基礎上,並引入了雙重排名的方法。DR-SHFP使用了一個排名機制,並給每個濾波器一個排名,這個排名是由卷積層和批量標準化層共同決定的。通過同時評估這兩個層,我們的方法能夠更全面地捕捉來自層結構的信息,打破了單一結構評估的限制。因此,DR-SHFP能夠更有效地識別和選擇要進行剪枝的濾波器,從而提高性能。實驗結果表明,DR-SHFP在CIFAR-10、CIFAR-100和Tiny-ImageNet等資料集上具有優異的性能。 | zh_TW |
| dc.description.abstract | Conventional pruning techniques typically focus on evaluating a single structure in the network, such as the convolutional layer or batch normalization layer, to identify pruning targets. However, this approach fails to effectively leverage the potential of all structures within each layer of the network. In order to comprehensively consider the various structures in each layer, we propose a novel method called Soft Hybrid Filter Pruning using a Dual Ranking Approach (DR-SHFP), which builds upon Soft Filter Pruning (SFP) by introducing a dual-ranking approach. DR-SHFP incorporates a ranking system that assigns a rank to each filter in a collaborative manner, taking into account both convolutional layers and batch normalization layers. By simultaneously evaluating both types of layers, our method captures more information from the layer structures, surpassing the limitations of single-structure evaluation. Consequently, DR-SHFP can identify and select filters more effectively for pruning, leading to improved performance. Experimental results demonstrate the effectiveness of DR-SHFP on benchmark datasets such as CIFAR-10, CIFAR-100 and Tiny-ImageNet. The proposed method outperforms other soft pruning methods, showcasing its capability to achieve excellent performance in various settings. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-15T17:32:54Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2023-08-15T17:32:54Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Acknowledgements i
摘要 iii
Abstract v
Contents vii
List of Figures ix
List of Tables xi
Chapter 1 Introduction 1
Chapter 2 Related Work 3
2.1 Model Compression 3
2.2 Pruning 4
Chapter 3 Method 9
3.1 Preliminaries 9
3.2 Soft Filter Pruning 10
3.3 Batch-Normalization-based Soft Filter Pruning 11
3.4 Soft Hybrid Filter Pruning using a Dual Ranking Approach 11
3.5 Other DR-SHFP Variations 15
3.5.1 Asymptotic Soft Filter Pruning 16
3.5.2 SofteR Pruning 16
Chapter 4 Experiment 17
4.1 Datasets and Training Details 17
4.2 ResNet on CIFAR-10/CIFAR-100 18
4.3 ResNet on Tiny-ImageNet 22
4.4 Training Time 23
Chapter 5 Ablation Study 25
5.1 Varying pruning rates 25
5.2 Varying ratios between FR and BR 26
Chapter 6 Conclusion 27
References 29 | - |
| dc.language.iso | en | - |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 模型剪枝 | zh_TW |
| dc.subject | 模型壓縮 | zh_TW |
| dc.subject | 濾波器剪枝 | zh_TW |
| dc.subject | 軟剪枝 | zh_TW |
| dc.subject | Model Compression | en |
| dc.subject | Network pruning | en |
| dc.subject | Filter pruning | en |
| dc.subject | Deep learning | en |
| dc.subject | Soft pruning | en |
| dc.title | 基於雙重排名機制的深度學習模型軟混合濾波器剪枝 | zh_TW |
| dc.title | Soft Hybrid Filter Pruning using a Dual Ranking Approach | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 111-2 | - |
| dc.description.degree | Master's | - |
| dc.contributor.oralexamcommittee | 雷欽隆;余承叡 | zh_TW |
| dc.contributor.oralexamcommittee | Chin-Laung Lei;Cheng-Juei Yu | en |
| dc.subject.keyword | 深度學習,模型壓縮,模型剪枝,濾波器剪枝,軟剪枝 | zh_TW |
| dc.subject.keyword | Deep learning,Model Compression,Network pruning,Filter pruning,Soft pruning | en |
| dc.relation.page | 31 | - |
| dc.identifier.doi | 10.6342/NTU202303335 | - |
| dc.rights.note | Authorized (campus-only access) | - |
| dc.date.accepted | 2023-08-09 | - |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
| dc.contributor.author-dept | Department of Electrical Engineering | - |
| dc.date.embargo-lift | 2024-08-31 | - |
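The abstract describes ranking each filter jointly from the convolutional layer and the batch-normalization layer before soft pruning. The sketch below is a minimal, hypothetical illustration of that dual-ranking idea, not the thesis's exact formulation: it assumes the filter rank (FR) comes from per-filter L2 norms of the conv weights, the BN rank (BR) from the magnitude of the scale factors γ, and that the two ranks are blended with an assumed weight `alpha`; "soft" pruning zeroes the selected filters while keeping them trainable, as in SFP.

```python
import numpy as np

def dual_rank_soft_prune(conv_weights, bn_gammas, prune_rate=0.3, alpha=0.5):
    """Soft-prune filters by a combined rank from conv-weight norms and
    BN scale magnitudes (illustrative sketch; `alpha` is an assumed detail)."""
    n_filters = conv_weights.shape[0]
    # FR: importance from the conv layer, the L2 norm of each filter's weights.
    fr_scores = np.linalg.norm(conv_weights.reshape(n_filters, -1), axis=1)
    # BR: importance from batch norm, the magnitude of the scale factor gamma.
    br_scores = np.abs(bn_gammas)
    # Convert scores to ranks (0 = least important) and blend them.
    fr_rank = np.argsort(np.argsort(fr_scores))
    br_rank = np.argsort(np.argsort(br_scores))
    combined = alpha * fr_rank + (1.0 - alpha) * br_rank
    # Soft pruning: zero the lowest-ranked filters but keep them in the model,
    # so they remain trainable and can recover in later epochs.
    n_prune = int(n_filters * prune_rate)
    pruned_idx = np.argsort(combined)[:n_prune]
    pruned = conv_weights.copy()
    pruned[pruned_idx] = 0.0
    return pruned, pruned_idx
```

In a training loop this step would run once per epoch after the weight update, which is what distinguishes soft pruning from hard pruning that removes filters permanently.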
| Appears in Collections: | Department of Electrical Engineering |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-111-2.pdf (access restricted to NTU campus IPs; use the VPN service for off-campus access) | 689.24 kB | Adobe PDF | |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
