NTU Theses and Dissertations Repository
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88653
Title: GPU Memory Usage Optimization for Backward Propagation in Deep Network Training
Authors: Ning Wang (王甯)
Advisor: Pangfeng Liu (劉邦鋒)
Keywords: Deep Learning, Dynamic Programming, Memory Usage Optimization, Memory Pressure, Checkpointing
Publication Year: 2023
Degree: Master's
Abstract: In modern deep learning, it has become a trend to design larger deep neural networks (DNNs) to execute more complex tasks with higher accuracy. Meanwhile, convolutional neural networks (CNNs) have become the standard method for most computer vision tasks. However, the memory allocated for the intermediate data of these convolutional layers can cause severe memory pressure during model training. Many solutions have been proposed to address the problem. Besides hardware-dependent solutions, a general methodology known as trading computation for memory, or rematerialization, reduces GPU memory usage at the cost of extra computation: it delays the computation of the activations of a subset of layers during the forward phase to save GPU memory, and recomputes them in batches during the backward phase. In this thesis, we focus on efficiently finding the optimal set of checkpoints that achieves the minimum peak memory usage during model training. We first describe the theoretical background of neural network training and the mathematical equations involved, and use these equations to identify all the data required during the forward and backward phases to compute the gradients of the model weights. We then formulate the checkpoint selection problem and propose a dynamic programming algorithm with O(n^3) time complexity that finds the optimal checkpoint subset. Through extensive experiments, we derive a more accurate description of the problem from our theoretical analysis, revise the objective function based on our traces, and propose an O(n^2) dynamic programming algorithm that finds the optimal checkpoint subset.
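The memory-for-computation trade described in the abstract is what mainstream frameworks expose as activation checkpointing. The snippet below is a minimal sketch of that idea in PyTorch, not the thesis's implementation: torch.utils.checkpoint.checkpoint_sequential keeps only the activations at segment boundaries during the forward pass and recomputes the interior ones when the backward pass needs them. The model, tensor sizes, and segment count are illustrative assumptions.

    import torch
    import torch.nn as nn
    from torch.utils.checkpoint import checkpoint_sequential

    # An illustrative stack of convolutional blocks; any nn.Sequential works.
    blocks = [nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
              for _ in range(8)]
    model = nn.Sequential(*blocks)

    x = torch.randn(4, 16, 32, 32, requires_grad=True)

    # Split the forward pass into 2 segments: only the activations at the
    # segment boundaries (the "checkpoints") are stored; activations inside
    # a segment are recomputed when the backward pass reaches it.
    out = checkpoint_sequential(model, 2, x, use_reentrant=False)
    out.sum().backward()

Fewer checkpoints mean fewer stored activations but larger recomputed segments; choosing where to place them is exactly the checkpoint selection problem the thesis studies. That selection can be phrased as a dynamic program. The sketch below is a simplified stand-in for the thesis's O(n^2) algorithm, under an assumed cost model: the activations between consecutive checkpoints are recomputed in one batch, so a segment's recompute footprint is the sum of its activation sizes, and we minimize the total memory held by checkpoints while keeping every segment under a budget. The function name, the cost model, and the budget parameter are hypothetical; the thesis's actual objective (minimum peak memory) is more detailed.

    from typing import List

    def min_checkpoint_memory(act_sizes: List[float], budget: float) -> float:
        """O(n^2) DP: minimize the total memory held by checkpointed
        activations such that every run of non-checkpointed layers sums
        to at most `budget` (its batched recompute footprint).
        Returns float('inf') if no selection meets the budget."""
        n = len(act_sizes)
        prefix = [0.0] * (n + 1)              # prefix sums for O(1) segment sums
        for i in range(n):
            prefix[i + 1] = prefix[i] + act_sizes[i]

        def seg(lo: int, hi: int) -> float:   # total size of layers lo..hi-1
            return prefix[hi] - prefix[lo]

        INF = float("inf")
        # dp[i] = cheapest checkpoint memory for layers 0..i, i checkpointed
        dp = [INF] * n
        for i in range(n):
            if seg(0, i) <= budget:           # i is the first checkpoint
                dp[i] = act_sizes[i]
            for j in range(i):                # previous checkpoint at j
                if dp[j] < INF and seg(j + 1, i) <= budget:
                    dp[i] = min(dp[i], dp[j] + act_sizes[i])

        best = 0.0 if seg(0, n) <= budget else INF   # no checkpoints at all
        for i in range(n):                    # i is the last checkpoint
            if seg(i + 1, n) <= budget:
                best = min(best, dp[i])
        return best

    # Example: five layers with these activation sizes; under a budget of 8
    # the cheapest selection checkpoints layers 1 and 3 (2 + 3 = 5).
    print(min_checkpoint_memory([4.0, 2.0, 6.0, 3.0, 5.0], budget=8.0))

Running the same DP over a range of candidate budgets and taking the best overall trade-off would turn this budgeted formulation into a peak-memory search in the same spirit as the thesis's algorithm.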
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88653
DOI: 10.6342/NTU202302572
Fulltext Rights: Not authorized
Appears in Collections: Department of Computer Science and Information Engineering

Files in This Item:
File: ntu-111-2.pdf (Restricted Access)
Size: 997.3 kB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
