NTU Theses and Dissertations Repository
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90694
Title: 以高維通用向量量化器提升聯邦學習通訊效率
High-Dimensional Universal Vector Quantization for Efficient Federated Learning
Authors: 呂冠輝
Guan-Huei Lyu
Advisor: 林士駿
Shih-Chun Lin
Keyword: federated learning, vector quantization, universal quantization, low-rate quantization
Publication Year: 2023
Degree: Master's
Abstract: Federated learning (FL) is a distributed training paradigm in which a central parameter server (PS) coordinates the training of a machine learning model while the data remains distributed across multiple edge devices. In practice, the performance bottleneck is the link capacity from each edge device to the PS. To satisfy stringent link-capacity constraints, model updates must be compressed aggressively at the edge devices. This thesis proposes a low-rate universal vector quantizer that attains low and even fractional compression rates. The scheme consists of two steps: (i) pre-processing of the model updates and (ii) vector quantization using universal trellis coded quantization (TCQ). In the pre-processing step, model updates are sparsified and scaled to match the TCQ design. The TCQ-based quantization step allows fractional compression rates and supports flexible input sizes, so it can be adapted to different neural network layers. Simulations show that the proposed vector quantization saves 75% of the link capacity while maintaining satisfactory accuracy compared with other compressors proposed in the literature.
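As a rough illustration of step (i), the pre-processing described in the abstract (sparsification followed by scaling) might be sketched as below. This is a hypothetical sketch only: the function name, the choice of top-k magnitude sparsification, and the max-magnitude scaling rule are assumptions for illustration, not the thesis's exact design.

```python
import numpy as np

def preprocess_update(update, sparsity=0.1):
    """Sparsify a model update by keeping the largest-magnitude
    entries, then scale it to a unit dynamic range so it matches
    a quantizer's input range.

    NOTE: illustrative sketch only; the thesis sparsifies and
    scales updates to match its TCQ design, but the exact rules
    are assumptions here.
    """
    flat = update.ravel().astype(np.float64)
    k = max(1, int(sparsity * flat.size))
    # Top-k sparsification: zero out all but the k largest |values|.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    # Scale so the surviving entries lie in [-1, 1]; the scale
    # factor would be sent alongside the quantized update.
    scale = np.max(np.abs(sparse))
    scaled = sparse / scale if scale > 0 else sparse
    return scaled.reshape(update.shape), scale

# Example: a toy "gradient" vector, keeping half of the entries.
g = np.array([0.02, -1.5, 0.3, 0.01, 0.9, -0.04])
q_in, s = preprocess_update(g, sparsity=0.5)
```

The scaled, sparse output `q_in` would then be fed to the vector quantizer, with the per-update scale `s` transmitted separately at negligible cost.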
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90694
DOI: 10.6342/NTU202302329
Fulltext Rights: Authorized (access restricted to campus)
Embargo lift date: 2028-07-28
Appears in Collections: Graduate Institute of Communication Engineering (電信工程學研究所)

Files in This Item:
File: ntu-111-2.pdf (Restricted Access), 1.75 MB, Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
