Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83541
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 洪士灝 (Shih-Hao Hung) | -
dc.contributor.author | Hsiang-Jen Wang | en
dc.contributor.author | 王祥任 | zh_TW
dc.date.accessioned | 2023-03-19T21:10:01Z | -
dc.date.copyright | 2022-09-02 | -
dc.date.issued | 2022 | -
dc.date.submitted | 2022-08-31 | -
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83541 | -
dc.description.abstract | 記憶體分解解決了資料中心長期存在的不平衡的記憶體使用和較低的平均記憶體利用率等問題,其通過將部分記憶體從低利用率節點中解離,使具有較高記憶體需求的節點可以通過高速網絡遠端使用。另一方面,虛擬化技術也提倡資源共享,且其作為靈活的資源管理和快速部署的手段,在雲端中得到廣泛應用。然而,雖然結合記憶體分解和虛擬化來實現更高效和靈活的資源使用似乎是可行的,但現有的解決方案通常需要修改作業系統核心,並且不允許應用程式參與遠端和本地記憶體的管理,這可能導致兼容性問題和/或低效率的記憶體共享。因此,我們提出了一個針對虛擬機設計的高彈性遠端記憶體 (VOFiRM) 的框架,該框架通過使用者空間檔案系統和遠端直接記憶體存取為虛擬機提供遠端記憶體,並支援按需求進行熱插拔的功能。由於該框架使用使用者空間檔案系統,它不需要修改虛擬機和虛擬機監視器,從而簡化了部署並減少了兼容性上的問題。遠端直接記憶體存取的使用至關重要,因為它可以加速遠端記憶體訪問並大大降低本地和遠端節點上的 CPU 的負載。作為概念驗證,我們在 QEMU 上實現了 VOFiRM 的原型,並展示了系統管理員如何通過 QEMU 的監視器提供的接口附加/分離 pc-dimm 設備的檔案後端記憶體對象,使其能在不中斷虛擬機上的應用程式的情況下動態擴展/收縮虛擬機的遠端記憶體。我們從原型中收集實驗結果,以評估 VOFiRM 在各種工作負載下的性能影響並分析效能瓶頸。雖然遠端記憶體訪問的延遲和頻寬取決於網絡介面,但遠端記憶體分頁可以由作業系統核心緩存在本地,有效縮短訪問時間並提高讀取密集型應用程式的頻寬。此外,我們使用記憶體鍵值存儲系統,Redis,來進行應用案例研究,以了解 VOFiRM 在各種讀/寫比率下的表現,並演示 VOFiRM 的用戶如何調整記憶體配置以實現最佳性價比。 | zh_TW
dc.description.abstract | Memory disaggregation solves the long-standing memory imbalance and low average utilization problems in datacenters by decoupling parts of memory from low-utilization nodes so that memory-hungry nodes may use them remotely through a high-speed network. At the same time, virtualization technologies also advocate resource sharing and are widely used in the cloud as a means of flexible resource management and quick service deployment. However, while combining memory disaggregation and virtualization to achieve more efficient and flexible resource usage seems viable, existing solutions often require modifications to operating system kernels and do not allow applications to participate in the management of remote and local memory, which can lead to compatibility issues and/or inefficient memory sharing. Thus, we propose a framework called Virtual-Machine-Oriented Flexible Remote Memory (VOFiRM), which provides virtual machines with remote memory through Filesystem in Userspace (FUSE) and remote direct memory access (RDMA), with on-demand hotplug/hot-unplug capabilities. As the proposed framework uses FUSE, it requires no kernel modifications in the virtual machine or hypervisor, which eases deployment and reduces compatibility issues. The use of RDMA is critical, as it accelerates remote memory accesses and greatly reduces CPU overhead on both local and remote nodes. As a proof of concept, we implement a prototype of VOFiRM on QEMU and show how a system administrator can dynamically extend/shrink the remote memory of a virtual machine, without interrupting the applications running on it, by attaching/detaching file-backed memory objects associated with pc-dimm devices via the interface provided by the QEMU monitor. We collect experimental results from the prototype to evaluate the performance impact of VOFiRM under various benchmark workloads and analyze the performance bottlenecks. While the latency and bandwidth of remote memory access depend on the network interface, remote memory pages can be cached locally by the kernel, effectively shortening access times and improving bandwidth for read-intensive applications. Furthermore, we conduct an application case study with Redis, an in-memory key-value store, to understand how VOFiRM performs under various read/write ratios and to demonstrate how a user of VOFiRM can adjust the memory configuration to achieve the best cost-performance ratio. | en
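As a concrete illustration of the attach/detach workflow described in the abstract, the following QEMU monitor (HMP) session sketches how a file-backed memory object can be hotplugged as a pc-dimm device and later removed. The sizes and the FUSE mount path `/mnt/vofirm/mem1` are illustrative assumptions, not taken from the thesis; the commands themselves (`object_add`, `device_add`, `device_del`, `object_del`) are standard QEMU monitor commands.

```
# Launch the VM with hotplug headroom (memory slots and a maxmem ceiling);
# the values here are illustrative.
qemu-system-x86_64 -m 4G,slots=4,maxmem=16G ...

# In the QEMU monitor: back a new memory object with a file on the
# FUSE mount exported by the VOFiRM daemon, then plug it in as a pc-dimm.
(qemu) object_add memory-backend-file,id=mem1,size=1G,mem-path=/mnt/vofirm/mem1,share=on
(qemu) device_add pc-dimm,id=dimm1,memdev=mem1

# To shrink remote memory: detach the dimm first, then delete its backend.
(qemu) device_del dimm1
(qemu) object_del mem1
```

Note that hotplug only succeeds when the VM was started with spare slots under `maxmem`, and detaching in the reverse order (device before backend object) avoids leaving an orphaned memory backend.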
dc.description.provenance | Made available in DSpace on 2023-03-19T21:10:01Z (GMT). No. of bitstreams: 1. U0001-0808202217432900.pdf: 1310906 bytes, checksum: c3ab9d214691473708c0800588a780db (MD5). Previous issue date: 2022 | en
dc.description.tableofcontents |
口試委員會審定書 i
致謝 iii
摘要 v
Abstract vii
Contents ix
List of Figures xi
List of Tables xiii
Chapter 1 Introduction 1
Chapter 2 Background and Related Works 5
  2.1 Related Works 5
  2.2 Memory Objects of QEMU 6
  2.3 FUSE 8
  2.4 Remote Direct Memory Access 10
Chapter 3 Methodology 13
  3.1 VOFiRM Architecture 14
    3.1.1 VOFiRM FUSE Daemon 15
    3.1.2 Consumer RDMA Service 16
    3.1.3 Provider RDMA Service 18
  3.2 Process Flow of Memory Access from VM 20
  3.3 Sparse Files 24
Chapter 4 Evaluation 27
  4.1 Experimental Setup 27
  4.2 Block Size of VOFiRM RDMA Service 29
  4.3 Performance with Different Read/Write Ratios 31
  4.4 Redis with YCSB Workloads 38
  4.5 Issues with Private-Mapping Backend 44
Chapter 5 Conclusion and Future Works 45
References 47
dc.language.iso | en | -
dc.title | VOFiRM: 以 RDMA 和 FUSE 針對虛擬機設計的高彈性遠端記憶體 | zh_TW
dc.title | VOFiRM: Virtual-Machine-Oriented Flexible Remote Memory through RDMA and FUSE | en
dc.type | Thesis | -
dc.date.schoolyear | 110-2 | -
dc.description.degree | 碩士 (Master) | -
dc.contributor.oralexamcommittee | 施吉昇 (Chi-Sheng Shih), 郭大維 (Tei-Wei Kuo), 劉邦鋒 (Pangfeng Liu), 周志遠 (Jerry Chi-Yuan Chou) | -
dc.subject.keyword | RDMA, FUSE, 虛擬機, 遠端記憶體, 資料中心, QEMU | zh_TW
dc.subject.keyword | RDMA, FUSE, Virtual Machine, Remote Memory, Datacenter, QEMU | en
dc.relation.page | 50 | -
dc.identifier.doi | 10.6342/NTU202202159 | -
dc.rights.note | 未授權 (not authorized for public access) | -
dc.date.accepted | 2022-08-31 | -
dc.contributor.author-college | 電機資訊學院 | zh_TW
dc.contributor.author-dept | 資訊工程學研究所 | zh_TW
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File | Size | Format
U0001-0808202217432900.pdf (currently not authorized for public access) | 1.28 MB | Adobe PDF