Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73765
Title: Multi-parameter-server modeling for distributed asynchronous SGD (建立使用非同步隨機梯度下降法的分散式訓練之多參數伺服器模型)
Authors: Yu-Nuo Juan (阮昱諾)
Advisor: Cheng-Fu Chou (周承復)
Keywords: Deep Learning, Deep Neural Networks, Distributed Machine Learning, Queueing Networks, TensorFlow
Publication Year: 2019
Degree: Master's
Abstract: Deep Neural Networks (DNNs) have recently achieved great success in many fields and have drawn increasing attention from researchers around the world. The huge demand for training jobs challenges the development of both software tools and hardware systems. Distributed training is a common approach to speeding up these jobs. In this thesis, we propose a new method that addresses one of the problems in scaling up the training environment, and we also explain the underlying model and the tools behind it.
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73765
DOI: 10.6342/NTU201903787
Fulltext Rights: Paid authorization
Appears in Collections: Graduate Institute of Networking and Multimedia
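The abstract refers to distributed asynchronous SGD over multiple parameter servers in TensorFlow. As a rough illustration only, and not the thesis's own code, the sketch below shows one common way such a setup is expressed with TensorFlow 1.x's parameter-server APIs; the cluster addresses, the toy model, and the random training data are assumptions made purely for illustration.

```python
import numpy as np
import tensorflow as tf  # written against TensorFlow 1.x APIs

# Hypothetical cluster layout: two parameter servers, two workers.
cluster = tf.train.ClusterSpec({
    "ps": ["localhost:2222", "localhost:2223"],
    "worker": ["localhost:2224", "localhost:2225"],
})

def run(job_name, task_index):
    server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)
    if job_name == "ps":
        server.join()  # parameter servers only host variables and apply updates
        return

    # replica_device_setter spreads variables round-robin across the ps tasks,
    # while compute ops stay on this worker.
    with tf.device(tf.train.replica_device_setter(
            cluster=cluster,
            worker_device="/job:worker/task:%d" % task_index)):
        x = tf.placeholder(tf.float32, [None, 784])
        y = tf.placeholder(tf.int64, [None])
        logits = tf.layers.dense(x, 10)  # toy model, for illustration only
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
        global_step = tf.train.get_or_create_global_step()
        # Asynchronous SGD: each worker pushes its gradients to the parameter
        # servers as soon as they are computed, without waiting for other workers.
        train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
            loss, global_step=global_step)

    with tf.train.MonitoredTrainingSession(master=server.target,
                                           is_chief=(task_index == 0)) as sess:
        for _ in range(200):
            batch_x = np.random.rand(32, 784).astype(np.float32)  # placeholder data
            batch_y = np.random.randint(0, 10, size=32)
            sess.run(train_op, feed_dict={x: batch_x, y: batch_y})
```

In this between-graph replication style, each worker builds its own copy of the graph, and adding more parameter servers spreads the variables (and their update traffic) across more machines, which is the scaling dimension the thesis models.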
Files in This Item:

File | Size | Format
---|---|---
ntu-108-1.pdf (Restricted Access) | 2.34 MB | Adobe PDF