Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90795
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 郭大維 | zh_TW |
dc.contributor.advisor | Tei-Wei Kuo | en |
dc.contributor.author | 黃資傑 | zh_TW |
dc.contributor.author | Tzu-Chieh Huang | en |
dc.date.accessioned | 2023-10-03T17:39:23Z | - |
dc.date.available | 2023-11-10 | - |
dc.date.copyright | 2023-10-03 | - |
dc.date.issued | 2023 | - |
dc.date.submitted | 2023-08-09 | - |
dc.identifier.citation | [1] W. Cheong, C. Yoon, S. Woo, K. Han, D. Kim, C. Lee, Y. Choi, S. Kim, D. Kang, G. Yu, et al. A flash memory controller for 15µs ultra-low-latency SSD using high-speed 3D NAND flash with 3µs read time. In 2018 IEEE International Solid-State Circuits Conference (ISSCC), pages 338–340. IEEE, 2018.
[2] Filebench Team. (2020). Filebench - a file system and storage benchmark. https://github.com/filebench/filebench. Accessed: June 18, 2023.
[3] K. Kim, S. Hong, and T. Kim. Supporting the priorities in the multi-queue block I/O layer for NVMe SSDs. Journal of Semiconductor Technology and Science, 20(1):55–62, 2020.
[4] G. Lee, S. Shin, W. Song, T. J. Ham, J. W. Lee, and J. Jeong. Asynchronous I/O stack: A low-latency kernel I/O stack for ultra-low latency SSDs. In 2019 USENIX Annual Technical Conference (USENIX ATC 19), pages 603–616, 2019.
[5] H. Li, M. Hao, M. H. Tong, S. Sundararaman, M. Bjørling, and H. S. Gunawi. The CASE of FEMU: Cheap, accurate, scalable and extensible flash emulator. In 16th USENIX Conference on File and Storage Technologies (FAST 18), pages 83–90, 2018.
[6] Linux Community. (2023). Linux kernel. GitHub. https://github.com/torvalds/linux. Accessed: Jan 25, 2023.
[7] LWN.net. (2017). Two new block I/O schedulers for 4.12. https://lwn.net/Articles/720675/. Accessed: Jan 27, 2023.
[8] M. Liu, H. Liu, C. Ye, X. Liao, H. Jin, Y. Zhang, R. Zheng, and L. Hu. Towards low-latency I/O services for mixed workloads using ultra-low latency SSDs. In Proceedings of the 36th ACM International Conference on Supercomputing, pages 1–12, 2022.
[9] A. Margaritov, S. Gupta, R. Gonzalez-Alberquilla, and B. Grot. Stretch: Balancing QoS and throughput for colocated server workloads on SMT cores. In 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA), pages 15–27, 2019.
[10] T. Miemietz, H. Weisbach, M. Roitzsch, and H. Härtig. K2: Work-constraining scheduling of NVMe-attached storage. In 2019 IEEE Real-Time Systems Symposium (RTSS), pages 56–68. IEEE, 2019.
[11] S. Park and J. Lee. Analysis of the K2 scheduler for a real-time system with an SSD. Electronics, 10(7):865, 2021.
[12] Phoronix Test Suite Team. (2022). Phoronix Test Suite. GitHub. https://github.com/phoronix-test-suite/phoronix-test-suite. Accessed: February 5, 2023.
[13] A. H. Sodhro, Z. Luo, A. K. Sangaiah, and S. W. Baik. Mobile edge computing based QoS optimization in medical healthcare applications. International Journal of Information Management, 45:308–318, 2019.
[14] B. Varghese, N. Wang, D. Bermbach, C.-H. Hong, E. D. Lara, W. Shi, and C. Stewart. A survey on edge performance benchmarking. ACM Computing Surveys, 54(3), April 2021.
[15] J. Woo, M. Ahn, G. Lee, and J. Jeong. D2FQ: Device-direct fair queueing for NVMe SSDs. In 19th USENIX Conference on File and Storage Technologies (FAST 21), pages 403–415, 2021.
[16] J. Zhang, M. Kwon, D. Gouk, S. Koh, C. Lee, M. Alian, M. Chun, M. T. Kandemir, N. S. Kim, J. Kim, et al. FlashShare: Punching through server storage stack from kernel to firmware for ultra-low latency SSDs. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), pages 477–492, 2018. | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90795 | - |
dc.description.abstract | 近年來許多服務會執行在邊緣伺服器上,因為其提供了低延遲時間的好處。為了提升系統的效能,我們會使用 Linux 的輸出入排程方法來規劃輸出入順序,並且搭載超低延遲儲存裝置提升資料存取的速度。然而,我們觀察到在混合程式執行時,那些輸出入資料使用量大的程式執行,會影響到對於資料搬移延遲具有需求的程式執行,而導致其無法在規定的服務品質響應時間內完成。因此,我們提出延遲感知的輸出入排程,確保延遲感知程式執行時的服務品質。結果顯示延遲感知輸出入排程策略可以有效地減少錯失服務品質的響應時間,並且不會犧牲過多系統效能。 | zh_TW |
dc.description.abstract | In recent years, many services have been deployed on edge servers to take advantage of their low-latency performance. To improve system performance and reduce data access time, the Linux I/O scheduler is used to arrange the order of I/O operations issued to ultra-low latency storage devices. However, we observed that, under hybrid task executions, processes with high I/O data transfer rates interfere with the execution of latency-sensitive processes, causing them to miss their Quality of Service (QoS) response-time requirements. Hence, we propose a latency-aware I/O scheduling strategy to ensure service quality for latency-sensitive processes. The results show that the latency-aware I/O scheduling strategy effectively reduces QoS response-time misses without significantly sacrificing system performance. | en |
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-10-03T17:39:23Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2023-10-03T17:39:23Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | Oral Examination Committee Certification i
Acknowledgements ii
Abstract (Chinese) iii
Abstract iv
Contents v
List of Figures vii
List of Tables viii
Denotation ix
Chapter 1 Introduction 1
Chapter 2 Background, Observation and Motivation 4
2.1 Background 4
2.1.1 I/O Schedulers in Linux Kernel 4
2.1.2 Sporadic Jobs in Deadline-Driven Systems 5
2.2 Observation and Motivation 7
Chapter 3 Adaptive Request Dispatching Strategy 10
3.1 Overview 10
3.2 Deadline-Driven Scheduling 11
3.2.1 Optimizing I/O Scheduling with Real-Time Scheduling 11
3.2.2 Expanded Software Staging Queue for Hybrid Task Executions 11
3.3 Feedback Control Algorithm 13
3.4 Reordering Mechanism 14
Chapter 4 Performance Evaluation 18
4.1 Experimental Setup 18
4.2 Experimental Results 19
4.2.1 QoS Requirement 19
4.2.2 Average and Tail Latency 21
4.2.3 Throughput Performance 21
4.2.4 I/O Breakdown 23
Chapter 5 Conclusion 26
References 27 | - |
dc.language.iso | en | - |
dc.title | 支援超低延遲儲存裝置之混合程式執行的延遲感知輸出入排程 | zh_TW |
dc.title | Latency-Aware I/O Scheduling for Hybrid Task Executions over Ultra-Low Latency Storage | en |
dc.type | Thesis | - |
dc.date.schoolyear | 111-2 | - |
dc.description.degree | Master's | - |
dc.contributor.coadvisor | 張原豪 | zh_TW |
dc.contributor.coadvisor | Yuan-Hao Chang | en |
dc.contributor.oralexamcommittee | 施吉昇;徐慰中;洪士灝;王克中 | zh_TW |
dc.contributor.oralexamcommittee | Chi-Sheng Shih;Wei-Chung Hsu;Shih-Hao Hung;Keh-Chung Wang | en |
dc.subject.keyword | 邊緣運算,輸出入排程,超低延遲儲存裝置,即時系統,嵌入式系統 | zh_TW |
dc.subject.keyword | Edge Computing, I/O Scheduling, Ultra-Low Latency Device, Real-Time Systems, Embedded Systems | en |
dc.relation.page | 29 | - |
dc.identifier.doi | 10.6342/NTU202302284 | - |
dc.rights.note | Authorization granted (access restricted to campus) | - |
dc.date.accepted | 2023-08-10 | - |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
dc.contributor.author-dept | Department of Computer Science and Information Engineering | - |
Appears in Collections: | Department of Computer Science and Information Engineering
Files in this item:
File | Size | Format | |
---|---|---|---|
ntu-111-2.pdf (currently not authorized for public access) | 3.35 MB | Adobe PDF | View/Open |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.