Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92441
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 楊佳玲 | zh_TW
dc.contributor.advisor | Chia-Lin Yang | en
dc.contributor.author | 高聖傑 | zh_TW
dc.contributor.author | Sheng-Chieh Kao | en
dc.date.accessioned | 2024-03-22T16:31:20Z | -
dc.date.available | 2024-03-23 | -
dc.date.copyright | 2024-03-22 | -
dc.date.issued | 2023 | -
dc.date.submitted | 2023-11-24 | -
dc.identifier.citation | [1] X. Han and T. Yu, “Automated Performance Tuning for Highly-Configurable Software Systems”, arXiv:2010.01397, 2020.
[2] B. Mackenzie-Low, “Understanding the Windows Disk Storage Architecture”, 2011. [Online]. Available: https://petri.com/windows-storage-disk-architecture. [Accessed: 01-Sep-2023].
[3] M. Bojarski et al., “End to End Learning for Self-Driving Cars”, arXiv:1604.07316, 2016.
[4] K. He, X. Zhang, S. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification”, in Proceedings of the 2015 IEEE International Conference on Computer Vision, 2015.
[5] P. Baldi and S. Brunak, Bioinformatics: The Machine Learning Approach, 2nd ed. Bradford Books, 2001.
[6] J. H. Noordik, Cheminformatics Developments: History, Reviews and Current Research. IOS Press, 2004.
[7] DataCadamia, “I/O - Workload (Access Pattern)”. [Online]. Available: https://datacadamia.com/io/access_pattern. [Accessed: 01-Sep-2023].
[8] S. Kavalanekar, B. Worthington, Q. Zhang, and V. Sharda, “Characterization of storage workload traces from production Windows Servers”, in Proceedings of the 2008 IEEE International Symposium on Workload Characterization, 2008, pp. 119–128.
[9] J. Axboe, “Flexible I/O Tester”, 2022. [Online]. Available: https://github.com/axboe/fio. [Accessed: 01-Sep-2023].
[10] Microsoft, “logman”, Windows Server, Feb-2023. [Online]. Available: https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/logman. [Accessed: 01-Sep-2023].
[11] C. Geng, S.-J. Huang, and S. Chen, “Recent Advances in Open Set Recognition: A Survey”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, pp. 3614–3631, 2021.
[12] Hewlett Packard Enterprise, HPE Smart Array SR Gen10 Controller User Guide, 10th ed. 2022.
[13] Hewlett Packard Enterprise, HPE Smart Storage Administrator User Guide, 8th ed. 2016.
[14] J. Miao and L. Niu, “A Survey on Feature Selection”, Procedia Computer Science, vol. 91, pp. 919–926, 2016.
[15] H. Liu, M. Cocea, and W. Ding, “Decision tree learning based feature evaluation and selection for image classification”, in Proceedings of the 2017 International Conference on Machine Learning and Cybernetics, 2017, vol. 2, pp. 569–574.
[16] K. Cho et al., “Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation”, in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, 2014, pp. 1724–1734.
[17] T. Zia and U. Zahid, “Long short-term memory recurrent neural network architectures for Urdu acoustic modeling”, International Journal of Speech Technology, vol. 22, pp. 21–30, Mar. 2019.
[18] N. Wu and Y. Xie, “A Survey of Machine Learning for Computer Architecture and Systems”, ACM Computing Surveys, vol. 55, no. 3, pp. 1–39, Feb. 2022.
[19] S. J. Mielke et al., “Between words and characters: A Brief History of Open-Vocabulary Modeling and Tokenization in NLP”, arXiv:2112.10508, 2021.
[20] R. Dey and F. Salem, “Gate-variants of Gated Recurrent Unit (GRU) neural networks”, in Proceedings of the 2017 IEEE 60th International Midwest Symposium on Circuits and Systems, 2017, pp. 1597–1600.
[21] S. Hochreiter and J. Schmidhuber, “Long Short-term Memory”, Neural Computation, vol. 9, pp. 1735–1780, Dec. 1997.
[22] C. Feltus, “Learning Algorithm Recommendation Framework for IS and CPS Security - Analysis of the RNN, LSTM, and GRU Contributions”, International Journal of Systems and Software Security and Protection, vol. 13, Mar. 2022.
[23] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed. Bradford Books, 2018.
[24] E. Ipek, O. Mutlu, J. F. Martínez, and R. Caruana, “Self-Optimizing Memory Controllers: A Reinforcement Learning Approach”, in Proceedings of the 2008 International Symposium on Computer Architecture, 2008, pp. 39–50.
[25] PyTorch, [Online]. Available: https://pytorch.org/. [Accessed: 01-Sep-2023].
[26] Y. Shu, Y. Shi, Y. Wang, Y. Zou, Q. Yuan, and Y. Tian, “ODN: Opening the Deep Network for Open-Set Action Recognition”, in Proceedings of the 2018 IEEE International Conference on Multimedia and Expo, 2018, pp. 1–6.
[27] D. Vengerov, “A reinforcement learning framework for utility-based scheduling in resource-constrained systems”, Future Generation Computer Systems, vol. 25, no. 7, pp. 728–736, 2009.
[28] S. Krishnan et al., “ArchGym: An Open-Source Gymnasium for Machine Learning Assisted Architecture Design”, in Proceedings of the 50th Annual International Symposium on Computer Architecture, 2023.
[29] L. Steiner, M. Jung, F. S. Prado, K. Bykov, and N. Wehn, “DRAMSys4.0: A Fast and Cycle-Accurate SystemC/TLM-Based DRAM Simulator”, in Proceedings of the Embedded Computer Systems: Architectures, Modeling, and Simulation, 2020, pp. 110–126.
[30] A. Bendale and T. E. Boult, “Towards Open Set Deep Networks”, in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1563–1572.
[31] W. J. Scheirer, A. Rocha, R. J. Micheals, and T. E. Boult, “Meta-Recognition: The Theory and Practice of Recognition Score Analysis”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1689–1695, 2011.
[32] S. Kotz and S. Nadarajah, Extreme Value Distributions: Theory and Applications. Imperial College Press, 2000.
[33] H. Zhang and V. M. Patel, “Sparse Representation-Based Open Set Recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 8, pp. 1690–1696, 2017.
[34] P. Oza and V. M. Patel, “C2AE: Class Conditioned Auto-Encoder for Open-Set Recognition”, in Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 2302–2311.
[35] P. R. Mendes Júnior et al., “Nearest neighbors distance ratio open-set classifier”, Machine Learning, vol. 106, pp. 1–28, Mar. 2017.
[36] B. Bada, “Performance Optimization of Web-Based Application”, International Journal of Computer Science Engineering, vol. 39–45, no. 2, pp. 1–28, Apr. 2021.
[37] Q. Zhang, D. Feng, and F. Wang, “Metadata Performance Optimization in Distributed File System”, in Proceedings of the 2012 IEEE/ACIS 11th International Conference on Computer and Information Science, 2012, pp. 476–481.
[38] B. Gottschall, L. Eeckhout, and M. Jahre, “TIP: Time-Proportional Instruction Profiling”, in Proceedings of the 54th Annual IEEE/ACM International Symposium on Microarchitecture, 2021, pp. 15–27.
[39] A. Gonzalez et al., “Profiling Hyperscale Big Data Processing”, in Proceedings of the 50th Annual International Symposium on Computer Architecture, 2023.
[40] B. Gottschall, L. Eeckhout, and M. Jahre, “TEA: Time-Proportional Event Analysis”, in Proceedings of the 50th Annual International Symposium on Computer Architecture, 2023.
[41] J. L. Bez, S. Byna, and S. Ibrahim, “I/O Access Patterns in HPC Applications: A 360-Degree Survey”, ACM Computing Surveys, Jul. 2023.
[42] F. Z. Boito, E. C. Inacio, J. L. Bez, P. O. A. Navaux, M. A. R. Dantas, and Y. Denneulin, “A Checkpoint of Research on Parallel I/O for High-Performance Computing”, ACM Computing Surveys, vol. 51, no. 2, Mar. 2018.
[43] D. A. Patterson and J. L. Hennessy, Computer Organization and Design: The Hardware/Software Interface, 5th ed. Morgan Kaufmann Publishers Inc., 2013.
[44] J. F. Kolen and S. C. Kremer, “Gradient Flow in Recurrent Nets: The Difficulty of Learning Long-Term Dependencies”, in A Field Guide to Dynamical Recurrent Networks, 2001, pp. 237–243.
[45] L. Lu, “Dying ReLU and Initialization: Theory and Numerical Examples”, Communications in Computational Physics, vol. 28, no. 5, pp. 1671–1706, Jun. 2020.
[46] Iometer, [Online]. Available: http://www.iometer.org/. [Accessed: 01-Sep-2023].
[47] J. He et al., “I/O Acceleration with Pattern Detection”, in Proceedings of the 22nd International Symposium on High-Performance Parallel and Distributed Computing, 2013, pp. 25–36.
[48] J. Marques-Silva, “Logic-Based Explainability in Machine Learning”, in Reasoning Web. Causality, Explanations and Declarative Knowledge, L. Bertossi and G. Xiao, Eds. Springer Nature Switzerland, 2023, pp. 24–104.
[49] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen, Classification and Regression Trees. Taylor & Francis, 1984.
[50] R. Polikar, L. Upda, S. S. Upda, and V. Honavar, “Learn++: an incremental learning algorithm for supervised neural networks”, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 31, no. 4, pp. 497–508, 2001.
[51] M. A. F. Pimentel, D. A. Clifton, L. Clifton, and L. Tarassenko, “A Review of Novelty Detection”, Signal Processing, vol. 99, pp. 215–249, Jun. 2014.
[52] T. Wu et al., “A Brief Overview of ChatGPT: The History, Status Quo and Potential Future Development”, IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 5, pp. 1122–1136, 2023.
[53] Red Hat, “Monitoring and managing system status and performance”, Red Hat Enterprise Linux 9, 2022. [Online]. Available: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/index. [Accessed: 01-Sep-2023].
[54] Microsoft, “Performance Tuning Guidelines for Windows Server 2022”, Windows Server, Jul-2022. [Online]. Available: https://learn.microsoft.com/en-us/windows-server/administration/performance-tuning. [Accessed: 01-Sep-2023].
[55] J. L. Hennessy and D. A. Patterson, Computer Architecture: A Quantitative Approach, 5th ed. Elsevier Science, 2012.
[56] S. Kanev et al., “Profiling a Warehouse-Scale Computer”, SIGARCH Computer Architecture News, vol. 43, no. 3S, pp. 158–169, Jun. 2015.
[57] S. Snyder et al., “Techniques for Modeling Large-Scale HPC I/O Workloads”, in Proceedings of the 6th International Workshop on Performance Modeling, Benchmarking, and Simulation of High Performance Computing Systems, 2015.
[58] Y. Yin, J. Li, J. He, X.-H. Sun, and R. Thakur, “Pattern-Direct and Layout-Aware Replication Scheme for Parallel I/O Systems”, in Proceedings of the 2013 IEEE 27th International Symposium on Parallel and Distributed Processing, 2013, pp. 345–356.
[59] J. P. White, A. D. Kofke, R. L. DeLeon, M. Innus, M. D. Jones, and T. R. Furlani, “Automatic Characterization of HPC Job Parallel Filesystem I/O Patterns”, in Proceedings of the Practice and Experience on Advanced Research Computing, 2018.
[60] Y. Fu, T. Xiang, Y.-G. Jiang, X. Xue, L. Sigal, and S. Gong, “Recent Advances in Zero-Shot Recognition: Toward Data-Efficient Understanding of Visual Content”, IEEE Signal Processing Magazine, vol. 35, no. 1, pp. 112–125, 2018.
[61] Synopsys, “DSO.ai”, 2020. [Online]. Available: https://www.synopsys.com/ai/ai-powered-eda/dso-ai.html. [Accessed: 01-Sep-2023].
[62] A. Mirhoseini et al., “A graph placement methodology for fast chip design”, Nature, vol. 594, no. 7862, pp. 207–212, Jun. 2021.
[63] S.-C. Kao and T. Krishna, “GAMMA: Automating the HW Mapping of DNN Models on Accelerators via Genetic Algorithm”, in Proceedings of the 2020 IEEE/ACM International Conference on Computer Aided Design, 2020, pp. 1–9.
[64] K. Hegde, P.-A. Tsai, S. Huang, V. Chandra, A. Parashar, and C. W. Fletcher, “Mind Mappings: Enabling Efficient Algorithm-Accelerator Mapping Space Search”, in Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 2021, pp. 943–958.
[65] J. Buckman, D. Hafner, G. Tucker, E. Brevdo, and H. Lee, “Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion”, in Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018, pp. 8234–8244.
[66] R. Roy et al., “PrefixRL: Optimization of Parallel Prefix Circuits using Deep Reinforcement Learning”, in Proceedings of the 2021 58th ACM/IEEE Design Automation Conference, 2021, pp. 853–858.
[67] gem5, [Online]. Available: https://www.gem5.org/. [Accessed: 01-Sep-2023].
[68] Z. Lyu, N. B. Gutierrez, and W. J. Beksi, “MetaMax: Improved Open-Set Deep Neural Networks via Weibull Calibration”, in Proceedings of the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 2023, pp. 439–443.
[69] Oracle, “Vdbench”. May-2018.
[70] Amazon Web Service, “User Guide for Linux Instances”, Amazon Elastic Compute Cloud, 2023. [Online]. Available: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html. [Accessed: 01-Sep-2023].
[71] L. Yuan, J. Ren, L. Gao, Z. Tang, and Z. Wang, “Using Machine Learning to Optimize Web Interactions on Heterogeneous Mobile Systems”, IEEE Access, vol. 7, pp. 139394–139408, 2019.
[72] F. Z. Boito, R. V. Kassick, P. O. A. Navaux, and Y. Denneulin, “AGIOS: Application-Guided I/O Scheduling for Parallel File Systems”, in Proceedings of the 2013 International Conference on Parallel and Distributed Systems, 2013, pp. 43–50.
[73] T. Plötz and Y. Guan, “Deep Learning for Human Activity Recognition in Mobile Computing”, Computer, vol. 51, no. 5, pp. 50–59, 2018.
[74] J. Zhang, Z. Tang, M. Li, D. Fang, P. Nurmi, and Z. Wang, “CrossSense: Towards Cross-Site and Large-Scale WiFi Sensing”, in Proceedings of the 24th Annual International Conference on Mobile Computing and Networking, 2018, pp. 305–320.
[75] Oracle, “Oracle Cloud Infrastructure (OCI) Compute bare metal instances”, Jun-2023. [Online]. Available: https://www.oracle.com/cloud/compute/bare-metal/. [Accessed: 01-Sep-2023].
[76] C. Cummins, P. Petoumenos, Z. Wang, and H. Leather, “End-to-End Deep Learning of Optimization Heuristics”, in Proceedings of the 2017 26th International Conference on Parallel Architectures and Compilation Techniques, 2017, pp. 219–232.
[77] D. Chen, J. Fang, S. Chen, C. Xu, and Z. Wang, “Optimizing Sparse Matrix-Vector Multiplications on an ARMv8-Based Many-Core Architecture”, International Journal of Parallel Programming, vol. 47, no. 3, pp. 418–432, Jun. 2019.
[78] M. K. Emani, Z. Wang, and M. F. P. O’Boyle, “Smart, adaptive mapping of parallelism in the presence of external workload”, in Proceedings of the 2013 IEEE/ACM International Symposium on Code Generation and Optimization, 2013, pp. 1–10.
[79] D. Grewe, Z. Wang, and M. F. P. O’Boyle, “OpenCL Task Partitioning in the Presence of GPU Contention”, in Proceedings of the Languages and Compilers for Parallel Computing, 2014, pp. 87–101.
[80] Y. Wen, Z. Wang, and M. F. P. O’Boyle, “Smart multi-task scheduling for OpenCL programs on CPU/GPU heterogeneous platforms”, in Proceedings of the 2014 21st International Conference on High Performance Computing, 2014, pp. 1–10.
[81] Z. Bei, Z. Yu, Q. Liu, C. Xu, S. Feng, and S. Song, “MEST: A Model-Driven Efficient Searching Approach for MapReduce Self-Tuning”, IEEE Access, vol. 5, pp. 3580–3593, 2017.
[82] Z. Yu, Z. Bei, and X. Qian, “Datasize-Aware High Dimensional Configurations Auto-Tuning of In-Memory Cluster Computing”, in Proceedings of the 23rd International Conference on Architectural Support for Programming Languages and Operating Systems, 2018, pp. 564–577.
[83] Y. Liu, M. Li, N. Alham, and S. Hammoud, “HSim: A MapReduce simulator in enabling Cloud Computing”, Future Generation Computer Systems, vol. 29, Jan. 2013.
[84] Y. Zhu et al., “BestConfig: Tapping the Performance Potential of Systems via Automatic Configuration Tuning”, in Proceedings of the 2017 Symposium on Cloud Computing, 2017, pp. 338–350.
[85] M. A. Rahman, J. Hossen, and C. Venkataseshaiah, “SMBSP: A Self-Tuning Approach using Machine Learning to Improve Performance of Spark in Big Data Processing”, in Proceedings of the 2018 7th International Conference on Computer and Communication Engineering, 2018, pp. 274–279.
[86] M. A. Rahman et al., “A Survey of Machine Learning Techniques for Self-tuning Hadoop Performance”, International Journal of Electrical and Computer Engineering, vol. 8, pp. 1854–1862, Jun. 2018.
[87] A. Boyuka et al., “Transparent in Situ Data Transformations in ADIOS”, in Proceedings of the 2014 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, 2014, pp. 256–266.
[88] J. Liu, S. Tang, G. Xu, C. Ma, and M. Lin, “A Novel Configuration Tuning Method Based on Feature Selection for Hadoop MapReduce”, IEEE Access, vol. 8, pp. 63862–63871, 2020.
[89] A. Ganapathi, K. Datta, A. Fox, and D. Patterson, “A Case for Machine Learning to Optimize Multicore Performance”, in Proceedings of the First USENIX Conference on Hot Topics in Parallelism, 2009, p. 1.
[90] C. Dubach, T. M. Jones, E. V. Bonilla, and M. F. P. O’Boyle, “A Predictive Model for Dynamic Microarchitectural Adaptivity Control”, in Proceedings of the 2010 43rd Annual IEEE/ACM International Symposium on Microarchitecture, 2010, pp. 485–496.
[91] R. D. Blanton et al., “Statistical learning in chip (SLIC)”, in Proceedings of the 2015 IEEE/ACM International Conference on Computer-Aided Design, 2015, pp. 664–669.
[92] H. Hu, Y. Wen, T.-S. Chua, and X. Li, “Toward Scalable Systems for Big Data Analytics: A Technology Tutorial”, IEEE Access, vol. 2, pp. 652–687, 2014.
[93] L. Zhou, S. Pan, J. Wang, and A. Vasilakos, “Machine Learning on Big Data: Opportunities and Challenges”, Neurocomputing, vol. 237, Jan. 2017.
[94] A. Khaleel and H. Al-Raweshidy, “Optimization of Computing and Networking Resources of a Hadoop Cluster Based on Software Defined Network”, IEEE Access, vol. 6, pp. 61351–61365, 2018.
[95] F. Z. Boito et al., “On server-side file access pattern matching”, in Proceedings of the 2019 International Conference on High Performance Computing & Simulation, 2019, pp. 217–224.
[96] M. Trotter, T. Wood, and J. Hwang, “Forecasting a Storm: Divining Optimal Configurations using Genetic Algorithms and Supervised Learning”, in Proceedings of the 2019 IEEE International Conference on Autonomic Computing, 2019, pp. 136–146.
[97] J. Tan et al., “IBTune: Individualized Buffer Tuning for Large-Scale Cloud Databases”, Proceedings of the VLDB Endowment, vol. 12, no. 10, pp. 1221–1234, Jun. 2019.
[98] W. Xiong, Z. Bei, C. Xu, and Z. Yu, “ATH: Auto-Tuning HBase’s Configuration via Ensemble Learning”, IEEE Access, vol. 5, pp. 13157–13170, 2017.
[99] N. Yigitbasi, T. L. Willke, G. Liao, and D. Epema, “Towards Machine Learning-Based Auto-tuning of MapReduce”, in Proceedings of the 2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems, 2013, pp. 11–20.
[100] N. Siegmund, A. Grebhahn, S. Apel, and C. Kästner, “Performance-Influence Models for Highly Configurable Systems”, in Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, 2015, pp. 284–294.
[101] P. Jamshidi, N. Siegmund, M. Velez, C. Kästner, A. Patel, and Y. Agarwal, “Transfer Learning for Performance Modeling of Configurable Systems: An Exploratory Analysis”, in Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering, 2017, pp. 497–508.
[102] A. Mahgoub et al., “SOPHIA: Online Reconfiguration of Clustered NoSQL Databases for Time-Varying Workload”, in Proceedings of the 2019 USENIX Annual Technical Conference, 2019, pp. 223–239.
[103] H. Dou, P. Chen, and Z. Zheng, “Hdconfigor: Automatically Tuning High Dimensional Configuration Parameters for Log Search Engines”, IEEE Access, vol. 8, pp. 80638–80653, 2020.
[104] G. Congiu, S. Narasimhamurthy, T. Süß, and A. Brinkmann, “Improving Collective I/O Performance Using Non-volatile Memory Devices”, in Proceedings of the 2016 IEEE International Conference on Cluster Computing, 2016, pp. 120–129.
[105] M. Dorier et al., “Damaris: Addressing Performance Variability in Data Management for Post-Petascale Simulations”, ACM Transactions on Parallel Computing, vol. 3, no. 3, Oct. 2016.
[106] S. Kumar et al., “Characterization and modeling of PIDX parallel I/O for performance optimization”, in Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, 2013, pp. 1–12.
[107] F. Tessier, V. Vishwanath, and E. Jeannot, “TAPIOCA: An I/O Library for Optimized Topology-Aware Data Aggregation on Large-Scale Supercomputers”, in Proceedings of the 2017 IEEE International Conference on Cluster Computing, 2017, pp. 70–80.
[108] Z. Wang, X. Shi, H. Jin, S. Wu, and Y. Chen, “Iteration Based Collective I/O Strategy for Parallel I/O Systems”, in Proceedings of the 2014 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, 2014, pp. 287–294.
[109] IBM, “IBM Cloud Bare Metal Servers”. [Online]. Available: https://www.ibm.com/products/bare-metal-servers. [Accessed: 01-Sep-2023].
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92441 | -
dc.description.abstract | 效能優化對於當代計算機系統至關重要,尤其是伺服器,然而,由於伺服器系統的複雜性和伺服器應用的多樣性,這項工作極具挑戰性。本研究提出一套開發利用機器學習自動優化伺服器應用的工具的方法,該工具能辨識伺服器應用的輸入/輸出模式(I/O pattern),並透過調整伺服器的系統和存儲裝置的設定來優化效能,而這些設定使用兩階段設定空間探索決定。該工具還備有拒絕機制,以避免對未知的模式進行不合宜的優化──因為現實世界飛速變化,不太可能涵蓋全部的應用。評估顯示,使用了該工具後,伺服器應用的效能平均提高了1.17—1.43倍。因此,本研究可以讓系統管理員毋須具備過多關於底層的系統和存儲裝置、甚至是工作負載本身的知識,便能優化伺服器應用的效能,此外,研究人員和系統開發人員更可將這套方法應用於其他平台。本研究亦針對伺服器效能優化探索了數個有潛力的機器學習技術,為未來的工作提供了寶貴的見解。 | zh_TW
dc.description.abstract | Performance tuning is critical to contemporary computer systems, especially servers, yet it is challenging because of the complexity of server systems and the diversity of server applications. This study proposes a methodology for developing a tool that leverages machine learning (ML) to automatically optimize server applications. The tool recognizes a server application's I/O pattern and optimizes performance by tuning the server's system and storage configuration, which is determined through a two-stage configuration space exploration. In addition, the tool is equipped with a rejection mechanism that avoids performing undesired optimization on unknown patterns, since it is impossible to encompass all applications in a fast-changing world. The evaluation shows that with the proposed tool, server applications achieve an average speedup of 1.17-1.43x. As a result, this study helps system administrators tune workload performance without requiring deep knowledge of the underlying system and storage, or even of the workload itself; researchers and system developers may also apply the methodology to other platforms. Furthermore, this study explores several promising ML techniques for server performance tuning, providing valuable insights for future work. | en
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-03-22T16:31:20Z. No. of bitstreams: 0 | en
dc.description.provenance | Made available in DSpace on 2024-03-22T16:31:20Z (GMT). No. of bitstreams: 0 | en
dc.description.tableofcontents | 口試委員會審定書 (Oral Examination Committee Certification) i
誌謝 (Acknowledgements) ii
摘要 (Chinese Abstract) iii
Abstract iv
Table of Contents v
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
Chapter 2 Related Works 6
Chapter 3 Design and Implementation 11
3.1 Server Workload 11
3.2 ML Model 13
3.2.1 Classification 13
3.2.2 Rejection 15
3.2.3 Generality 18
3.3 Performance Optimizer 20
Chapter 4 Setup 22
Chapter 5 Evaluation 25
5.1 2-Stage CSE 25
5.2 Classification with Rejection 29
5.3 Performance Improvement 33
Chapter 6 Discussion 36
6.1 Decision Tree 36
6.2 Recurrent Neural Network 37
6.3 Reinforcement Learning 39
6.4 Comparison of ML Techniques 42
Chapter 7 Conclusion 43
References I
Appendix A – Performance Counter List XIV
Appendix B – CART’s Decision Tree XVII
dc.language.iso | en | -
dc.title | 針對伺服器應用的系統和儲存設備的一鍵效能優化 | zh_TW
dc.title | Push-Button System and Storage Performance Tuning for Server Applications | en
dc.type | Thesis | -
dc.date.schoolyear | 112-1 | -
dc.description.degree | Master's (碩士) | -
dc.contributor.oralexamcommittee | 張原豪;鄭湘筠 | zh_TW
dc.contributor.oralexamcommittee | Yuan-Hao Chang;Hsiang-Yun Cheng | en
dc.subject.keyword | 效能調整, 效能工具, 伺服器工作負載, 伺服器優化, 輸入/輸出模式(I/O pattern), 設定空間探索, 開放集辨識(Open-set recognition) | zh_TW
dc.subject.keyword | Performance tuning, Performance tool, Server workload, Server optimization, I/O pattern, Configuration space exploration, Open-set recognition | en
dc.relation.page | 62 | -
dc.identifier.doi | 10.6342/NTU202304444 | -
dc.rights.note | Consent to authorize (open access worldwide) | -
dc.date.accepted | 2023-11-27 | -
dc.contributor.author-college | College of Electrical Engineering and Computer Science (電機資訊學院) | -
dc.contributor.author-dept | Department of Computer Science and Information Engineering (資訊工程學系) | -
dc.date.embargo-lift | 2028-11-20 | -
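The abstract above describes a classify-reject-tune workflow: an ML model recognizes the workload's I/O pattern, low-confidence (unknown) patterns are rejected rather than mis-optimized, and recognized patterns receive a configuration selected earlier by configuration space exploration. The sketch below illustrates only that control flow; the pattern labels, confidence threshold, toy classifier rule, and configuration values are hypothetical placeholders, not the thesis's actual implementation.

```python
# Hypothetical sketch of a classify-reject-tune loop. The pattern labels,
# threshold, and configuration values below are illustrative placeholders.

# Per-pattern configurations, as if produced by an earlier configuration
# space exploration (CSE) phase.
BEST_CONFIG = {
    "seq_read": {"read_ahead_kb": 4096, "queue_depth": 64},
    "rand_write": {"read_ahead_kb": 0, "queue_depth": 256},
}

REJECT_THRESHOLD = 0.8  # below this confidence, treat the pattern as unknown


def classify(features):
    """Stand-in for the trained classifier: returns (label, confidence)."""
    # A real system would run an ML model on trace features; this toy rule
    # keys only on the fraction of sequential accesses in the trace.
    seq_ratio = features["seq_ratio"]
    if seq_ratio > 0.9:
        return "seq_read", seq_ratio
    if seq_ratio < 0.1:
        return "rand_write", 1.0 - seq_ratio
    return "seq_read", 0.5  # ambiguous workload -> low confidence


def tune(features):
    """Return the configuration to apply, or None if the pattern is rejected."""
    label, confidence = classify(features)
    if confidence < REJECT_THRESHOLD:
        return None  # unknown pattern: leave the system configuration untouched
    return BEST_CONFIG[label]


print(tune({"seq_ratio": 0.95}))  # recognized sequential pattern -> its config
print(tune({"seq_ratio": 0.5}))   # ambiguous pattern -> rejected (None)
```

The rejection branch mirrors the open-set recognition idea cited in the record's references: a deployed tuner should prefer doing nothing over applying a configuration chosen for the wrong workload class.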
Appears in collections: Department of Computer Science and Information Engineering (資訊工程學系)

Files in this item:
File | Size | Format
ntu-112-1.pdf (publicly available after 2028-11-20) | 2.56 MB | Adobe PDF


All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
