  1. NTU Theses and Dissertations Repository
  2. College of Electrical Engineering and Computer Science
  3. Department of Electrical Engineering
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/2360
Title: Cache-Aware Batch Update for In-Memory Databases
Authors: Ting-Kang Chang (張庭綱)
Advisor: Ming-Syan Chen (陳銘憲)
Keyword: in-memory data processing, cache memory, low-locality
Publication Year: 2017
Degree: Master's
Abstract: Low-latency, high-throughput in-memory DBMSs have attracted increasing attention in both research and applications in recent years, thanks to advances in hardware. Furthermore, many works have been proposed to address issues of durable in-memory storage in the upcoming NVRAM era. However, few of them focus on processor cache utilization under low-locality, update-intensive workloads, which typically cause poor cache utilization and thus suboptimal overall performance. We design a cache-aware batch update model to improve cache efficiency for such workloads. By buffering update requests in the cache, the system can aggregate spatially close requests that arrive at different times into one batched update, avoiding unnecessary re-fetches of data from memory, thereby reducing cache misses and improving overall throughput. The experiments show that, compared with a cache-oblivious reference model, the proposed cache-aware model saves up to 75% of last-level cache misses and achieves up to a 1.65x speedup.
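The core idea of the abstract — coalescing spatially close update requests that arrive at different times into a single batched pass — can be illustrated with a minimal sketch. This is not the thesis implementation; the region size constant and function names here are illustrative assumptions.

```python
# Minimal sketch (not the thesis code): buffer update requests, group
# them by cache-line-sized key regions, and apply each region in one
# pass, so each region is fetched into cache once per batch rather
# than once per request. CACHE_LINE_KEYS is an assumed parameter.
from collections import defaultdict

CACHE_LINE_KEYS = 8  # assumed number of keys per cache-line-sized region

def coalesce(requests):
    """Group (key, value) updates by key region; later writes win."""
    regions = defaultdict(dict)
    for key, value in requests:  # requests may arrive at distant times
        regions[key // CACHE_LINE_KEYS][key] = value
    return regions

def apply_batch(store, requests):
    """Apply all buffered updates region by region, one pass each."""
    regions = coalesce(requests)
    for region_id in sorted(regions):  # visit each region exactly once
        store.update(regions[region_id])
    return store
```

Because duplicate and neighboring updates collapse into one visit per region, the store-side memory traffic scales with the number of touched regions rather than the number of raw requests — the effect the abstract quantifies as fewer last-level cache misses.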
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/2360
DOI: 10.6342/NTU201701572
Fulltext Rights: Authorized (open access worldwide)
Appears in Collections: Department of Electrical Engineering

Files in This Item:
File: ntu-106-1.pdf (2.19 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
