Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89007
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 劉邦鋒 | zh_TW |
dc.contributor.advisor | Pangfeng Liu | en |
dc.contributor.author | 李昭妤 | zh_TW |
dc.contributor.author | Chao-Yu Lee | en |
dc.date.accessioned | 2023-08-16T16:44:33Z | - |
dc.date.available | 2023-11-09 | - |
dc.date.copyright | 2023-08-16 | - |
dc.date.issued | 2023 | - |
dc.date.submitted | 2023-08-09 | - |
dc.identifier.citation | Amazon. AWS Lambda. https://aws.amazon.com/lambda/.
Apache. OpenWhisk. https://openwhisk.apache.org/.
E. Caron, F. Desprez, and A. Muresan. Forecasting for grid and cloud computing on-demand resources based on pattern matching. 2010 IEEE Second International Conference on Cloud Computing Technology and Science, 2010.
N. Daw, U. Bellur, and P. Kulkarni. Xanadu: Mitigating cascading cold starts in serverless function chain deployments. Middleware '20: Proceedings of the 21st International Middleware Conference, pages 356–370, 2020.
A. Fuerst and P. Sharma. FaasCache: Keeping serverless computing alive with greedy-dual caching. ASPLOS '21: Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, pages 386–400, 2021.
A. U. Gias and G. Casale. COCOA: Cold start aware capacity planning for function-as-a-service platforms. 2020 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), 2020.
Google. Google Cloud Functions. https://cloud.google.com/functions.
X. Liu, J. Wen, Z. Chen, D. Li, J. Chen, Y. Liu, H. Wang, and X. Jin. FaaSLight: General application-level cold-start latency optimization for function-as-a-service in serverless computing. ACM Transactions on Software Engineering and Methodology, 2023.
W. Matoussi and T. Hamrouni. A new temporal locality-based workload prediction approach for SaaS services in a cloud environment. Journal of King Saud University – Computer and Information Sciences, 2022.
Microsoft. Azure Functions. https://learn.microsoft.com/en-us/azure/azure-functions/.
A. Mohan, H. Sane, K. Doshi, S. Edupuganti, N. Nayak, and V. Sukhomlinov. Agile cold starts for scalable serverless. HotCloud '19: Proceedings of the 11th USENIX Conference on Hot Topics in Cloud Computing, page 21, 2019.
E. Oakes, L. Yang, D. Zhou, K. Houck, T. Harter, A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau. SOCK: Rapid task provisioning with serverless-optimized containers. USENIX ATC '18: Proceedings of the 2018 USENIX Annual Technical Conference, pages 57–69, 2018.
M. Shahrad, R. Fonseca, I. Goiri, G. Chaudhry, P. Batum, J. Cooke, E. Laureano, C. Tresness, M. Russinovich, and R. Bianchini. Serverless in the wild: Characterizing and optimizing the serverless workload at a large cloud provider. USENIX ATC '20: Proceedings of the 2020 USENIX Annual Technical Conference, pages 205–218, 2020.
J. Shen, T. Yang, Y. Su, Y. Zhou, and M. R. Lyu. Defuse: A dependency-guided function scheduler to mitigate cold starts on FaaS platforms. 2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS), 2021.
P. Silva, D. Fireman, and T. E. Pereira. Prebaking functions to warm the serverless cold start. Middleware '20: Proceedings of the 21st International Middleware Conference, pages 1–13, 2020.
Wikipedia. 3-dimensional matching — Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/wiki/3-dimensional_matching, 2023. Accessed 25 July 2023.
Wikipedia. Ford–Fulkerson algorithm — Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/wiki/Ford-Fulkerson_algorithm, 2023. Accessed 15 July 2023.
Wikipedia. Knuth–Morris–Pratt algorithm — Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm, 2023. Accessed 27 July 2023. | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89007 | - |
dc.description.abstract | 近年來,容器技術在軟體行業中獲得了相當多的關注,許多企業選擇使用其彈性、成本效益和易於實作的特點。然而,使用容器並非沒有挑戰。"冷啟動"問題是部署容器時非常重要的問題。冷啟動是從容器被部署到實體伺服器上到準備好運行應用程式的延遲。終端使用者如果在容器剛啟動時,想要使用應用程式就必須忍受延遲。這些延遲導致用戶體驗的下降,可能進一步的對企業盈利產生負面的影響。
確保流暢的用戶體驗最常見的方法是在持續開啟大量的容器在伺服器上,但這會導致資源過度配置。相反的,如果在處理請求後立即關閉容器,可以減少記憶體消耗,但每次有新的請求時都會產生冷啟動。冷啟動的發生和資源使用是一種權衡,並在容器平台上提出了重大挑戰。 為了應對這一挑戰,我們觀察到使用同一個容器處理連續的請求可以顯著減少冷啟動的次數。我們提出了一種名為 TAC (Temporal Adjacency Function Clustering Algorithm) 的「相鄰時間函數分群」算法來應對這一挑戰。TAC 從歷史數據中選擇時間相鄰的請求,將函數進行分群,以減少冷啟動發生的次數並實現最佳化資源利用。實驗結果顯示,與最先進的方法(例如,Defuse 和 Hybrid histogram policy)相比,TAC可以減少8%的冷啟動次數和53%的記憶體使用量。 | zh_TW |
dc.description.abstract | In recent years, container technology has gained significant attention in the software industry, with many businesses adopting it for its elasticity, cost-effectiveness, and ease of implementation. However, the use of containers is not without challenges. The "cold-start" problem is one of the most critical issues in deploying containers. The cold-start time is the delay between a container being provisioned on a physical server and its being ready to run the application. End users must endure this delay when they invoke an application just after its container has started. These delays degrade the user experience and may hurt the business's profitability.
The most common way to ensure a seamless user experience is to keep a substantial number of containers active throughout the day, which causes resource over-provisioning. Conversely, closing a container right after it handles a request reduces memory consumption but incurs a cold start whenever a new request arrives. Cold-start occurrence and resource usage are thus a trade-off, which presents a significant challenge on container platforms. To address this challenge, we observe that serving consecutive requests with the same container can notably decrease the number of cold starts. We propose TAC, a Temporal Adjacency Function Clustering algorithm, to meet this challenge. Based on historical data, TAC packs functions that serve temporally adjacent requests into a cluster, reducing cold starts and enabling efficient resource utilization. The experimental results show that, on real-world traces, TAC reduces cold-start occurrences by 8% and memory usage by 53% compared to state-of-the-art methods, e.g., Defuse and the Hybrid histogram policy. | en |
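The clustering idea described in the abstract can be sketched in a few lines. The following is an illustrative sketch only, not the thesis's actual TAC algorithm: it greedily groups functions whose historical request timestamps fall within a time window of each other, so that one warm container could serve the consecutive requests of a cluster. The function names, `window`, and `threshold` parameters are all assumptions for illustration.

```python
# Illustrative sketch (NOT the thesis's TAC algorithm): greedily group
# functions whose historical requests are temporally adjacent, so that a
# single warm container can serve consecutive requests and avoid cold starts.
from typing import Dict, List


def adjacency_score(a: List[float], b: List[float], window: float) -> int:
    """Count pairs of requests from traces a and b within `window` seconds."""
    score, i = 0, 0
    b = sorted(b)
    for t in sorted(a):
        # Skip b's requests that are too far in the past relative to t.
        while i < len(b) and b[i] < t - window:
            i += 1
        j = i
        while j < len(b) and b[j] <= t + window:
            score += 1
            j += 1
    return score


def cluster_functions(traces: Dict[str, List[float]],
                      window: float = 60.0,
                      threshold: int = 3) -> List[List[str]]:
    """Greedily cluster functions by temporal adjacency of their request traces.

    A function joins the cluster of the already-placed function with which it
    shares the most adjacent requests, provided the score reaches `threshold`;
    otherwise it starts a new cluster.
    """
    cluster_of: Dict[str, int] = {}     # function name -> cluster index
    clusters: List[List[str]] = []
    names = list(traces)
    for idx, f in enumerate(names):
        best, best_c = threshold - 1, None
        for g in names[:idx]:
            s = adjacency_score(traces[f], traces[g], window)
            if s > best:
                best, best_c = s, cluster_of[g]
        if best_c is None:
            cluster_of[f] = len(clusters)
            clusters.append([f])
        else:
            cluster_of[f] = best_c
            clusters[best_c].append(f)
    return clusters
```

For example, two functions whose requests consistently arrive within a few seconds of each other would land in the same cluster, while a function invoked at unrelated times would form its own cluster. A real scheduler would combine such an offline clustering with an online matching step, as the table of contents of the thesis indicates.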
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-16T16:44:33Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2023-08-16T16:44:33Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | Acknowledgements i
摘要 ii
Abstract iii
Contents v
List of Figures vii
List of Tables viii
Denotation ix
Chapter 1 Introduction 1
Chapter 2 Related Works 4
2.1 Keeping container warm 4
2.2 Predicting future function invocations 5
2.3 Enhancing the efficiency of the initialization process 5
Chapter 3 Problem Formulation 7
Chapter 4 Algorithms 10
4.1 Offline Clustering 10
4.2 Online Scheduling 13
4.2.1 Matching Algorithm 14
4.2.2 Opening New Containers 15
4.2.3 Updating Container Status 16
Chapter 5 Evaluation 18
5.1 Benchmark 18
5.2 Baseline Method 18
5.3 Performance of TAC 19
5.4 Performance of Different Matching Policy 20
5.5 Performance of Changing Container Creation Probability 22
Chapter 6 References 24
References 26 | - |
dc.language.iso | en | - |
dc.title | 利用函數分群降低容器平台上的資源使用率 | zh_TW |
dc.title | Function Clustering to Optimize Resource Utilization on Container Platform | en |
dc.type | Thesis | - |
dc.date.schoolyear | 111-2 | - |
dc.description.degree | Master | - |
dc.contributor.oralexamcommittee | 洪鼎詠;吳真貞 | zh_TW |
dc.contributor.oralexamcommittee | Ding-Yong Hong;Jan-Jan Wu | en |
dc.subject.keyword | 容器平台,冷啟動,函式即服務,無服務器運算架構, | zh_TW |
dc.subject.keyword | Container Platform,Cold-start,Function-as-a-Service,Serverless, | en |
dc.relation.page | 28 | - |
dc.identifier.doi | 10.6342/NTU202303355 | - |
dc.rights.note | Authorization granted (campus-only access) | - |
dc.date.accepted | 2023-08-10 | - |
dc.contributor.author-college | 電機資訊學院 | - |
dc.contributor.author-dept | 資訊工程學系 | - |
Appears in Collections: | 資訊工程學系 |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-111-2.pdf (currently not authorized for public access) | 3.2 MB | Adobe PDF | View/Open |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.