Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69838
Title: 卷積濾波器修剪以用於小數據集上的遷移學習
Convolution Filter Pruning for Transfer Learning on Small Dataset
Authors: Ching Ya Liao (廖經亞)
Advisor: Pangfeng Liu (劉邦鋒)
Keyword: Model Compression, Filter Pruning, Transfer Learning
Publication Year: 2020
Degree: Master's
Abstract: In this paper, we propose a scheme to obtain a reduced model for a domain-specific dataset from a pretrained large model. This is achieved by combining techniques of model compression and transfer learning. The compression step selects the parts of the large model that are sensitive to our target dataset, and we then apply transfer learning to the selected parts to construct a reduced, customized model. We consider the application of image classification, using convolutional neural network architectures. We observe that different image categories activate different filters, so we can use this property to select the sensitive parts of the model.
We take VGG-16 (pretrained on the ImageNet dataset) as the large model and perform transfer learning on two different datasets (Flowers-102 and Cats vs. Dogs). We apply three pruning criteria to our scheme. With the best criterion, accuracy drops by less than 2% at a pruning ratio of 0.4 on the Flowers-102 dataset. We also vary some settings of our scheme and examine their feasibility. Our scheme finds a pruned structure and parameters suited to the target dataset, which makes transfer learning more efficient.
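
As an illustration of the selection step described in the abstract, the following is a minimal Python sketch of activation-based filter selection on a pretrained VGG-16, assuming PyTorch/torchvision (the abstract does not name a framework). The criterion shown here, mean absolute activation measured on the target dataset, is only one plausible choice; the three pruning criteria actually used in the thesis are not named in this abstract, and all function names below are hypothetical.

import torch
import torch.nn as nn
from torchvision import models

def filter_activation_scores(feature_extractor, loader, device="cpu"):
    # Accumulate per-filter |activation| over the target dataset for every
    # Conv2d layer; only the relative ranking matters for selection.
    scores = {}
    hooks = []

    def make_hook(name):
        def hook(module, inputs, output):
            act = output.detach().abs().mean(dim=(0, 2, 3))  # one value per filter
            scores[name] = scores.get(name, 0) + act
        return hook

    for name, module in feature_extractor.named_modules():
        if isinstance(module, nn.Conv2d):
            hooks.append(module.register_forward_hook(make_hook(name)))

    feature_extractor.eval().to(device)
    with torch.no_grad():
        for images, _ in loader:
            feature_extractor(images.to(device))

    for h in hooks:
        h.remove()
    return scores

def filters_to_keep(scores, keep_ratio=0.6):
    # Keep the most strongly activated filters in each conv layer
    # (keep_ratio = 1 - pruning ratio).
    keep = {}
    for name, score in scores.items():
        k = max(1, int(round(keep_ratio * score.numel())))
        keep[name] = torch.topk(score, k).indices.sort().values
    return keep

# Usage sketch (names and loaders are illustrative, not from the thesis):
# vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)   # ImageNet-pretrained
# scores = filter_activation_scores(vgg.features, target_loader)   # e.g. a Flowers-102 loader
# keep = filters_to_keep(scores, keep_ratio=0.6)                   # pruning ratio 0.4
# A smaller VGG would then be rebuilt with only the kept filters (copying their
# weights and the matching input channels of the following layer), the classifier
# head replaced for the target classes, and the reduced model fine-tuned on the
# target dataset.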
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69838
DOI: 10.6342/NTU202003895
Fulltext Rights: Paid-access authorization
Appears in Collections: Department of Computer Science and Information Engineering

Files in This Item:
File: U0001-1808202000483300.pdf (Restricted Access)
Size: 835.52 kB
Format: Adobe PDF