NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72139
Full metadata record

DC Field: Value [Language]
dc.contributor.advisor: 廖婉君
dc.contributor.author: An-Dee Lin [en]
dc.contributor.author: 林安笛 [zh_TW]
dc.date.accessioned: 2021-06-17T06:25:20Z
dc.date.available: 2023-08-19
dc.date.copyright: 2018-08-19
dc.date.issued: 2018
dc.date.submitted: 2018-08-17
dc.identifier.citation[1] L.A. Barroso, J. Clidaras, and U. Hölzle, 2nd ed., The Data Center as a Computer: An Introduction to the Design of Warehouse-Scale Machines, Morgan & Claypool Publishers, 2013.
[2] C.-S. Li, B. L. Brech, S. Crowder, D.M. Dias, H. Franke, M. Hogstrom, D. Lindquist, G. Pacifici, S. Pappe, B. Rajaraman, J. Rao, R.P. Ratnaparkhi, R.A. Smith, and M.D. Williams, “Software Defined Environments: An Introduction,” IBM J. Res. Dev., vol. 58, no. 2/3, Mar./May 2014, pp. 1-11.
[3] C.-S. Li, H. Franke, C. Parris, B. Abali, M. Kesavan, and V. Chang, “Composable Architecture for Rack Scale Big Data Computing,” Futur. Gener. Comp. Syst., vol. 67, Elsevier, Feb. 2017, pp. 180-193.
[4] J. Hamilton, “Internet-Scale Datacenter Economics: Where the Costs & Opportunities Lie,” Proc. 14th Int’l Workshop on High Performance Transaction Systems (HPTS), 2011.
[5] H. Herodotou, F. Dong, and S. Babu, “No One (Cluster) Size Fits All: Automatic Cluster Sizing for Data-intensive Analytics,” Proc. 2nd ACM Symp. Cloud Comput. (SoCC '11), 2011, article no. 18.
[6] P. Krug, “How Many Nodes? Part 1: An Introduction to Sizing a Couchbase Server 2.0 cluster,” blog, 16 Dec. 2014; https://blog.couchbase.com/how-many-nodes-part-1-introduction-sizing-couchbase-server-20-cluster.
[7] K. Lim, J. Chang, T. Mudge, P. Ranganathan, S.K. Reinhardt, and T.F. Wenisch , “Disaggregated Memory for Expansion and Sharing in Blade Servers,” Proc. 36th Annual Int’l Symp. Computer Architecture (ISCA '09), ACM, 2009, pp 267-278.
[8] K. Lim, Y. Turner, J.R. Santos, A. AuYoung, J. Chang, P. Ranganathan, and T.F. Wenisch, “System-level Implications of Disaggregated Memory,” Proc. IEEE 18th Int'l Symp. High Performance Computer Architecture (HPCA), 2012; doi: 10.1109/HPCA.2012.6168955.
[9] “GraphLab”; http://graphlab.com/.
[10] “Memcached - a Distributed Memory Object Caching System”; http://memcached.org/.
[11] Daniel, “PigMix,” Apr. 2013; https://cwiki.apache.org//confluence/display/PIG/PigMix.
[12] S.M. Rumble, D. Ongaro, R. Stutsman, M. Rosenblum, and J.K. Ousterhout, “It’s Time for Low Latency,” Proc. 13th USENIX Conf. Hot Topics in Operating Systems (HotOS'13), 2011.
[13] S. Han, N. Egi, A. Panda, S. Ratnasamy, G. Shi, and S. Shenker, “Network Support for Resource Disaggregation in Next-generation Data Centers,” Proc. 12th ACM Workshop on Hot Topics in Networks (HotNets-XII), 2013, article no. 10.
[14] Cisco, “Cisco UCS M-Series Modular Servers”; http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-m-series-modular-servers/index.html.
[15] P. Teich, “AMD Disaggregates the Server, Defines New Hyperscale Building Block,” Apr. 2013; www.seamicro.com/sites/default/files/MoorInsights.pdf.
[16] A. Rao, “SeaMicro Technology Overview,” Jan. 2012; http://seamicro.com/sites/default/files/SM_TO01_64_v2.5.pdf.
[17] Intel, “Intel, Facebook Collaborate on Future Data Center Rack Technologies,” Jan. 2013; https://newsroom.intel.com/news-releases/intel-facebook-collaborate-on-future-data-center-rack-technologies/.
[18] “Open Compute Project”; www.opencompute.org.
[19] D. Gross, J.F. Shortle, J.M. Thompson, and C.M. Harris, 4th ed., Fundamentals of Queueing Theory, John Wiley & Sons, 2008.
[20] R. Nelson, Probability, Stochastic Processes, and Queueing Theory: The Mathematics of Computer Performance Modeling, Springer Science & Business Media, 2013.
[21] N. Chowdhury and R. Boutaba, “Network Virtualization: State of the Art and Research Challenges,” IEEE Communications Magazine, vol. 47, issue 7, July 2009, pp. 20-26.
[22] M. Casado, T. Koponen, R. Ramanathan and S. Shenker, “Virtualizing the Network Forwarding Plane,” Proc. of ACM PRESTO, 2010.
[23] N. Chowdhury and R. Boutaba, “A survey of network virtualization,” Elsevier Computer Networks, vol. 54, issue 5, pp. 862-876, April 2010.
[24] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado and S. Shenker, “Extending Networking into the Virtualization Layer,” Proc. of workshop on Hot Topics in Networks HotNets-VIII, 2009.
[25] T. Narten, E. Gray, D. Black, L. Fang, L. Kreeger and M. Napierala, “Problem Statement: Overlays for Network Virtualization,” Internet Engineering Task Force, Internet-Draft, July 2013.
[26] L. Xia, Z. Cui, J. Lange, Y. Tang, P. Dinda and P. Bridges, 'Fast VMM-Based Overlay Networking for Bridging the Cloud and High Performance Computing,' Springer Cluster Computing, Volume 17, Issue 1, March 2014, pp 39-59.
[27] Z. Shen, S. Subbiah, X. Gu, and J. Wilkes, “Cloudscale: Elastic Resource Scaling for Multi-tenant Cloud Systems,” Proc. 2nd ACM Symp. Cloud Comput. (SoCC '11), 2011, article no. 5.
[28] Y. Hong, J. Xue, and M. Thottethodi, “Dynamic server provisioning to minimize cost in an IaaS cloud,” Proc. ACM SIGMETRICS Joint Int'l Conf. Measurement and Modeling of Computer Systems (SIGMETRICS '11), 2011, pp. 147-148.
[29] L. Chen and H. Shen, “Towards Resource-efficient Cloud Systems: Avoiding Over-provisioning in Demand-prediction Based Resource Provisioning,” Proc. 2016 IEEE Int'l Conf. Big Data (Big Data), 2016; doi: 10.1109/BigData.2016.7840604.
[30] L. Chen and H. Shen, “Considering Resource Demand Misalignments To Reduce Resource Over-Provisioning in Cloud Datacenters,” Proc. 36th Annual IEEE Int’l Conf. Computer Commun. (INFOCOM 2017), 2017.
[31] L. Chen, S. Patel, H. Shen, and Z. Zhou, “Profiling and Understanding Virtualization Overhead in Cloud,” Proc. 2015 44th Int’l Conf. Parallel Processing (ICPP '15), IEEE, 2015, pp. 31-40.
[32] S. Srikantaiah, A. Kansal, and F. Zhao, “Energy Aware Consolidation for Cloud Computing,” Proc. 2008 Conf. Power Aware Comput. and Systems (HotPower '08), USENIX, 2008.
[33] S. Yeo and H.-H.S. Lee, “Using Mathematical Modeling in Provisioning a Heterogeneous Cloud Computing Environment,” IEEE Computer, vol. 44, no. 8, Aug. 2011 pp. 55-62.
[34] S.T. Maguluri, R. Srikant, and L. Ying, “Stochastic Models of Load Balancing and Scheduling in Cloud Computing Clusters,” Proc. 31st Annual IEEE Int'l Conf. Computer Commun. (INFOCOM 2012), 2012; doi: 10.1109/INFCOM.2012.6195815.
[35] J. Ahn, C. Kim, J. Han, Y. Choi, and J. Huh, “Dynamic Virtual Machine Scheduling in Clouds for Architectural Shared Resources,” Proc. 4th USENIX Conf. Hot Topics in Cloud Comput. (HotCloud '12), 2012.
[36] A. Rai, R. Bhagwan, and S. Guha, “Generalized Resource Allocation for the Cloud,” Proc. 3rd ACM Symp. Cloud Comput. (SoCC '12), 2012, article no. 15.
[37] S. Kim, H. Eom, and H.Y. Yeom, “Virtual Machine Consolidation Based on Interference Modeling,” J. Supercomput., vol. 66, no. 3, Springer Science&Business Media, Dec. 2013, pp. 1489-1506.
[38] L. Chen, H. Shen, and K. Sapra, “RIAL: Resource Intensity Aware Load Balancing in Clouds,” Proc. 33rd Annual IEEE Int’l Conf. Computer Commun. (INFOCOM 2014), 2014; doi: 10.1109/INFOCOM.2014.6848062.
[39] L. Chen, H. Shen and K. Sapra, “Distributed Autonomous Virtual Resource Management in Datacenters Using Finite-Markov Decision Process,” Proc. ACM Symp. Cloud Comput. (SoCC '14), 2014; doi: 10.1145/2670979.2671003.
[40] L. Chen and H. Shen, “Consolidating Complementary VMs with Spatial/Temporal-awareness in Cloud Datacenters,” Proc. 33rd Annual IEEE Int'l Conf. Computer Commun. (INFOCOM 2014), 2014; doi: 10.1109/INFOCOM.2014.6848033.
[41] V. Jalaparti, P. Bodik, I. Menache, S. Rao, K. Makarychev, and M. Caesar, “Network-Aware Scheduling for Data-Parallel Jobs: Plan When You Can,” Proc. 2015 ACM Conf. Special Interest Group on Data Commun. (SIGCOMM '15), 2015, pp 407-420.
[42] H. Wang, C. Isci, L. Subramanian, J. Choi, D. Qian, and O. Mutlu, “A-DRM: Architecture-aware Distributed Resource Management of Virtualized Clusters,” Proc. 11th ACM SIGPLAN/SIGOPS Int'l Conf. Virtual Execution Environments (VEE '15), 2015, pp 93-106.
[43] C. Qiu, H. Shen, and L. Chen, “Probabilistic Demand Allocation for Cloud Service Brokerage,” Proc. 35th Annual IEEE Int'l Conf. Computer Commun. (INFOCOM 2016), 2016; doi: 10.1109/INFOCOM.2016.7524611.
[44] L. Chen, H. Shen, and S. Platt, “Cache Contention Aware Virtual Machine Placement and Migration in Cloud Datacenters,” Proc. IEEE 24th Int'l Conf. Network Protocols (ICNP), 2016; doi: 10.1109/ICNP.2016.7784447.
[45] Z. Wang, M.M. Hayat, N. Ghani, and K.B. Shaban, “Optimizing Cloud-Service Performance: Efficient Resource Provisioning via Optimal Workload Allocation,” IEEE Trans. Parallel Distrib. Syst., vol. 28, no. 6, June 2017, pp. 1689-1702.
[46] K. Xiong and H. Perros, “Service Performance and Analysis in Cloud Computing,” IEEE Congress on Services, 2009; doi:10.1109/SERVICES-I.2009.121.
[47] H. Khazaei, J. Misic, and V.B. Misic, “Performance Analysis of Cloud Computing Centers Using M/G/m/m+r Queuing Systems,” IEEE Trans. Parallel Distrib. Syst., vol. 23, no. 5, May 2012, pp. 936-943.
[48] X. Chang, B. Wang, J.K. Muppala, and J. Liu, “Modeling Active Virtual Machines on IaaS Clouds Using an M/G/m/m+K Queue,” IEEE Trans. Serv. Comput., vol. 9, no. 3, May/June 2016, pp. 408-420.
[49] T. Atmaca, T. Begin, A. Brandwajn, and H. Castel-Taleb, “Performance Evaluation of Cloud Computing Centers with General Arrivals and Service,” IEEE Trans. Parallel Distrib. Syst., vol. 27, no. 8, Aug. 2016, pp. 2341-2348.
[50] B. Liu, Y. Lin, and Y. Chen, “Quantitative Workload Analysis and Prediction Using Google Cluster Traces,” Proc. IEEE Conf. Computer Commun. Workshops (INFOCOM WKSHPS), 2016; doi: 10.1109/INFCOMW.2016.7562213.
[51] P. Beck, P. Clemens, S. Freitas, J. Gatz, M. Girola, J. Gmitter, H. Mueller, R. O'Hanlon, V. Para, J. Robinson, A. Sholomon, J. Walker, and J. Tate, “IBM and Cisco Together for a World Class Data Center,” July 2013; www.redbooks.ibm.com/redbooks/pdfs/sg248105.pdf.
[52] S.-P. Chung and K. W. Ross, “Reduced Load Approximations for Multirate Loss Networks,” IEEE Trans. Commun., vol. 41, no. 8, Aug. 1993, pp. 1222-1231.
[53] F.P. Kelly, “Blocking Probabilities in Large Circuit-Switched Networks,” Advances in Applied Probability, vol. 18, no. 2, June 1986, pp. 473-505.
[54] J.S. Kaufman, “Blocking in a Shared Resource Environment,” IEEE Trans. Commun., vol. 29, no. 10, Oct. 1981, pp. 1474-1481.
[55] J.W. Roberts, “A Service System with Heterogeneous User Requirements,” Performance of Data Communications Systems and Their Applications, 1981, pp. 423-431.
[56] A.K. Mishra, J.L. Hellerstein, W. Cirne, and C.R. Das, “Towards Characterizing Cloud Backend Workloads: Insights from Google Compute Clusters,” ACM SIGMETRICS Performance Evaluation Review, vol. 37, no. 4, Mar. 2010, pp. 34-41.
[57] Y. You, Audio Coding: Theory and Applications, Springer, 2010.
[58] Q. Zhang, M.F. Zhani, R. Boutaba, and J.L. Hellerstein, “Dynamic Heterogeneity-Aware Resource Provisioning in the Cloud,” IEEE Trans. Cloud Comput., vol. 2, no. 1, Jan-Mar. 2014, pp. 14-28.
[59] A. Vahdat, H. Liu, X. Zhao, and C. Johnson, “The Emerging Optical Data Center,” Proc. Optical Fiber Commun. Conf., Optical Society of America, 2011.
[60] A. Vahdat, “Delivering Scale Out Data Center Networking with Optics -- Why and How,” Proc. Optical Fiber Commun. Conf., Optical Society of America, 2012.
[61] N. Farrington, G. Porter, P.-C. Sun, A. Forencich, J. Ford, Y. Fainman, G. Papen, and A. Vahdat, “A Demonstration of Ultra-low-latency Data Center Optical Circuit Switching,” ACM SIGCOMM Computer Commun. Review - Special Oct. issue (SIGCOMM '12), vol. 42, no. 4, Oct. 2012, pp. 95-96.
[62] V. Suhendra, C. Raghavan, and T. Mitra, “Integrated Scratchpad Memory Optimization and Task Scheduling for MPSoC architectures,” Proc. 2006 Int'l Conf. Compilers, Architecture and Synthesis for Embedded Systems (CASES '06), ACM, 2006, pp. 401-410.
[63] D. Farinacci, T. Li, S. Hanks, D. Meyer and P. Traina, “Generic Routing Encapsulation (GRE),” Internet Engineering Task Force, RFC 2784, March 2000.
[64] M. Sridharan, A. Greenberg, Y. Wang, P. Garg, N. Venkataramiah, K. Duda, I. Ganga, G. Lin, M. Pearson, P. Thaler and C. Tumuluri, “NVGRE: Network Virtualization using Generic Routing Encapsulation,” Internet Engineering Task Force, Internet Draft, February 2014.
[65] M. Mahalingam, D. Dutt, K. Duda, P. Agarwal, and L. Kreeger, T. Sridhar, M. Bursell and C. Wright, “Virtual eXtensible Local Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks,” Internet Engineering Task Force, RFC 7348, August 2014.
[66] B. Davie and B. Davie, “A Stateless Transport Tunneling Protocol for Network Virtualization (STT),” Internet Engineering Task Force, Internet-Draft, April 2014.
[67] Open vSwitch, http://openvswitch.org/
[68] Iperf, https://iperf.fr/
[69] Wireshark, https://www.wireshark.org/
[70] 'Scaling in the Linux Networking Stack,' https://www.kernel.org/doc/Documentation/networking/scaling.txt
[71] Open Daylight, http://www.opendaylight.org/
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72139
dc.description.abstract [en]: Recent research trends exhibit a growing imbalance between the demands of tenants' software applications and the provisioning of hardware resources. This misalignment of demand and supply gradually hinders workloads from being efficiently mapped to fixed-size server nodes in traditional data centers. The resulting resource holes not only lower infrastructure utilization but also cripple a data center's capability to host large workloads. This deficiency motivates a new rack-wide architecture referred to as the composable system, which transforms traditional server racks of static capacity into a dynamic compute platform. Specifically, this architecture links up the compute components that are conventionally fixed to individual server boards, such as central processing units (CPUs), random access memory (RAM), storage devices, and other application-specific processors. Doing so creates a logically giant compute platform that, by breaking the resource boundaries among server boards, is more resilient to the variety of workload demands.
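To make the pooling idea concrete, here is a minimal illustrative sketch (mine, not the thesis's; the class name and capacities are hypothetical) of admitting a workload against rack-wide resource pools rather than per-board limits:

    from dataclasses import dataclass

    @dataclass
    class ComposableRack:
        # Hypothetical pooled capacities for one composable rack
        cpu_cores: int = 512
        ram_gb: int = 4096
        storage_tb: int = 200

        def admit(self, cores: int, ram_gb: int, storage_tb: int) -> bool:
            # A workload fits iff the rack-wide pools cover its demand,
            # regardless of how the resources are spread over physical boards.
            if (cores <= self.cpu_cores and ram_gb <= self.ram_gb
                    and storage_tb <= self.storage_tb):
                self.cpu_cores -= cores
                self.ram_gb -= ram_gb
                self.storage_tb -= storage_tb
                return True
            return False

    rack = ComposableRack()
    print(rack.admit(64, 1024, 20))  # True: the shared pools absorb the demand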
This research is divided into three parts. In the first part, we introduce the concepts of this reconfigurable architecture and design a framework of the composable system for cloud data centers. We then develop mathematical models to describe the resource usage patterns on this platform and enumerate several types of workloads that commonly appear in data centers. Simulations show that the composable system sustains up to 1.6 times the workload intensity of traditional systems and is insensitive to the distribution of workload demands, demonstrating that the composable system is indeed an effective platform for cloud data center services.
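As a rough illustration of why pooling sustains higher workload intensity, the following Monte Carlo sketch (mine, with hypothetical server sizes and demand distributions; it does not reproduce the thesis's queueing models) compares first-fit placement on fixed-size servers against a single shared pool of the same total capacity:

    import random

    random.seed(1)
    N_SERVERS, CORES, RAM = 32, 16, 64               # hypothetical fixed servers
    POOL_CORES, POOL_RAM = N_SERVERS * CORES, N_SERVERS * RAM

    def demand():
        # Hypothetical demand distribution; the thesis reports that the
        # pooled system is insensitive to the choice of distribution.
        return random.randint(1, 12), random.choice([4, 8, 16, 32, 48])

    def fits_fixed(servers, c, r):
        for s in servers:                            # first-fit placement
            if s[0] >= c and s[1] >= r:
                s[0] -= c; s[1] -= r
                return True
        return False

    servers = [[CORES, RAM] for _ in range(N_SERVERS)]
    pool_c, pool_r = POOL_CORES, POOL_RAM
    fixed = pooled = 0
    for _ in range(10000):
        c, r = demand()
        fixed += fits_fixed(servers, c, r)
        if c <= pool_c and r <= pool_r:
            pool_c -= c; pool_r -= r; pooled += 1
    print(fixed, pooled)  # the pool admits more: no per-board resource holes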
In the next part, we extend the framework from a single data center to a network of data centers, each geographically distributed across the serving area. A workload may need to migrate to the data center closest to its tenants for lower transmission delay, and such migration may happen multiple times during its runtime, depending on the mobility of its tenants. We develop a two-tier model to characterize the overall resource usage patterns: the mobility patterns of tenants are transformed into effective arrival rates at each data center. Under the assumptions of Poisson arrivals of incoming workloads and probabilistic mobility patterns, the resource usage of each data center can then be calculated in parallel.
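As a first-order sketch of the two-tier reduction described above (the notation is mine, not necessarily the thesis's): if workloads originate at data center $i$ as a Poisson process of rate $\lambda_i$ and tenant mobility moves a workload from $i$ to $j$ with probability $p_{ij}$, the effective arrival rate seen by data center $j$ is

    \lambda_j^{\mathrm{eff}} = \lambda_j + \sum_{i \neq j} \lambda_i \, p_{ij}

Since independently splitting and merging Poisson streams yields Poisson streams again, each data center can then be analyzed in isolation with the single-data-center model of the previous part, which is what lets the per-data-center computations run in parallel. (Handling repeated migrations would turn this into a traffic-balance equation over the whole routing matrix.)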
In the last part, we turn to the communication links between workloads. An integrated application may consist of multiple workloads running inside dedicated VMs. These VMs form a logical network that is physically distributed across the network of data centers. However, traditional protocols used in local area networks may not be directly applicable to data center networks because of differences in network topology. Recent research suggests that layer-2-in-layer-3 tunneling protocols can address these challenges, but we find via testbed experiments that directly applying such tunneling protocols to network virtualization yields poor performance due to scalability problems; the bottlenecks actually reside inside the servers. We therefore propose a CPU offloading mechanism that exploits a packet steering function to balance packet processing among available CPU threads, greatly improving network performance. Compared to a virtualized network built on VXLAN, our scheme improves bandwidth by up to almost 300 percent on a 10 Gb/s link between a pair of tunnel endpoints.
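One widely available packet steering mechanism matching this idea is Linux Receive Packet Steering (RPS), configured through sysfs; the sketch below (mine; the device name and CPU mask are hypothetical, and this is not necessarily the exact mechanism the thesis implements) spreads receive-side packet processing for a tunnel endpoint's NIC across several CPU threads:

    import glob

    def enable_rps(dev: str, cpu_mask: str) -> None:
        # Write a hex CPU bitmap into each RX queue's rps_cpus file so the
        # kernel steers received packets to any CPU set in the mask instead
        # of processing everything on the interrupted CPU.
        for path in glob.glob(f"/sys/class/net/{dev}/queues/rx-*/rps_cpus"):
            with open(path, "w") as f:
                f.write(cpu_mask)

    # Hypothetical usage (requires root): let CPUs 0-7 share the cost of
    # processing VXLAN traffic arriving on eth0.
    # enable_rps("eth0", "ff")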
dc.description.provenance [en]: Made available in DSpace on 2021-06-17T06:25:20Z (GMT). No. of bitstreams: 1; ntu-107-D00942017-1.pdf: 1597727 bytes, checksum: eb7e6a293edfacc931258a0bece86e6b (MD5). Previous issue date: 2018
dc.description.tableofcontents: Chinese Abstract i
English Abstract iii
Contents vii
List of Figures ix
List of Tables xi
Chapter 1 Introduction 1
1.1 The Data Centers with Composable System 1
1.2 The Federated Data Centers and the Connection 5
1.3 Related Work 7
Chapter 2 The Models for Capacity Planning to a Single Data Center with Composable System 9
2.1 The Composable Data Center Architecture 9
2.2 The Performance Model of an Idealized Composable System 10
2.2.1 The Queueing Theory Technique 10
2.3 Extending the Performance Models for a Practical Composable System 19
2.3.1 Random Demands of Workloads 19
2.3.1.1 The Quantization Technique 19
2.3.1.2 Discussion 23
2.3.2 Internal Network Delay of a Composable System 24
2.4 Simulation Methodology and Results 25
2.4.1 The System Setup 25
2.4.2 Theoretical Formulation vs. Simulation 29
2.4.3 Performance Comparison Between the Composable System and the Traditional System 31
2.4.4 Performance Penalty Incurred by Communication Delays 34
2.5 Brief Summary of Capacity Planning to a Composable-based Data Center 36
Chapter 3 Capacity Planning to the Network of Data Centers Serving Mobile Tenants 39
3.1 Background 39
3.2 Resource Usage Models to the Network of Data Centers 41
3.2.1 The System Setup and Assumptions 41
3.2.2 The Derivation of the Usage Model 44
3.3 Numerical Simulation 46
Chapter 4 Toward Performance Optimization with CPU Offloading for Virtualized Multi-Tenant Data Centers 49
4.1 Overview of Encapsulation Protocols 49
4.1.1 Generic Routing Encapsulation 49
4.1.2 Virtual Extensible Local Area Network 49
4.1.3 Stateless Transport Tunneling 50
4.2 Evaluation Methodology and Setup 52
4.3 Preliminary Experiment Results 53
4.4 Performance Optimization 54
4.5 Conclusion of Tunnel Acceleration for Network Slice and Future Opportunities 58
Chapter 5 Conclusions 59
Reference 61
dc.language.iso: en
dc.subject: performance analysis [en]
dc.subject: composable data center [en]
dc.subject: network of data centers [en]
dc.subject: tunneling protocol [en]
dc.subject: CPU offloading [en]
dc.title: 以可組合系統建置的虛擬化資料中心之容量規劃與網路傳輸最佳化 [zh_TW]
dc.title: Capacity Planning and Goodput Optimization for Virtualized Data Centers with Composable Systems [en]
dc.type: Thesis
dc.date.schoolyear: 106-2
dc.description.degree: Ph.D. (博士)
dc.contributor.oralexamcommittee: 吳曉光, 李中生, 林俊宏, 陳俊良, 郭文興
dc.subject.keyword [en]: performance analysis, composable data center, network of data centers, tunneling protocol, CPU offloading
dc.relation.page: 68
dc.identifier.doi: 10.6342/NTU201803814
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2018-08-17
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 電信工程學研究所 (Graduate Institute of Communication Engineering)
Appears in Collections: Graduate Institute of Communication Engineering (電信工程學研究所)

Files in This Item:
File: ntu-107-1.pdf, 1.56 MB, Adobe PDF (restricted; not authorized for public access)

