Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/20011
Title: | Computation and Communication Scheduling Optimization for Distributed Deep Learning Systems |
Authors: | Ching-Yuan Tsai 蔡慶源 |
Advisor: | 劉邦鋒 |
Keywords: | Deep learning, Machine learning, Parameter server, Network bottleneck |
Publication Year : | 2018 |
Degree: | Master's |
Abstract: | Deep learning is a technique that can solve complex problems. Due to the growth of data volume and model complexity, large-scale deep learning has become an important issue. Distributed deep learning is an efficient way to train a large model, but in a distributed environment, network bandwidth is the performance bottleneck. This thesis focuses on how to schedule network events to reduce training time. We propose several schedulers and achieve up to a 25% speedup. |
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/20011 |
DOI: | 10.6342/NTU201801485 |
Fulltext Rights: | Not authorized |
Appears in Collections: | Department of Computer Science and Information Engineering |
Files in This Item:

File | Size | Format
---|---|---
ntu-107-1.pdf (Restricted Access) | 633.75 kB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.